
Gloo Mesh Enterprise is required for this feature.

This guide walks you through the basics of registering clusters for management by Gloo Mesh Enterprise, using either the meshctl tool or Helm.

Register A Cluster

To identify a cluster as managed by Gloo Mesh Enterprise, we must register it with our installation. Registration makes Gloo Mesh aware of the cluster and configures a remote relay agent to talk to the local relay server. In this example, we will register our remote cluster with the Gloo Mesh Enterprise instance running on the management cluster.

Register with meshctl

We can use the meshctl CLI tool to register our remote cluster with the command meshctl cluster register enterprise. This command is specific to Gloo Mesh Enterprise and differs from the meshctl cluster register community command.

To register our remote cluster, there are a few key pieces of information we need:

  1. cluster name - The name we would like to register the cluster with.
  2. remote-context - The Kubernetes context with access to the remote cluster being registered.
  3. relay-server-address - The address of the relay server running on the management cluster.

First, let's get the relay-server-address: the cluster-external address at which the grpc port on the enterprise-networking service is exposed on the cluster where the Gloo Mesh Enterprise management plane is installed.

By default, the enterprise-networking service is of type LoadBalancer, and the cloud provider managing your Kubernetes cluster will automatically provision a public IP for the service. Get the complete relay-server-address with:


# If your cloud provider assigns the load balancer an IP address:
MGMT_INGRESS_ADDRESS=$(kubectl get svc -n gloo-mesh enterprise-networking -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
MGMT_INGRESS_PORT=$(kubectl -n gloo-mesh get service enterprise-networking -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
RELAY_ADDRESS=${MGMT_INGRESS_ADDRESS}:${MGMT_INGRESS_PORT}

# If the load balancer is assigned a hostname instead (for example, on AWS):
MGMT_INGRESS_ADDRESS=$(kubectl get svc -n gloo-mesh enterprise-networking -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
MGMT_INGRESS_PORT=$(kubectl -n gloo-mesh get service enterprise-networking -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
RELAY_ADDRESS=${MGMT_INGRESS_ADDRESS}:${MGMT_INGRESS_PORT}

If the above commands left you with a $RELAY_ADDRESS value that is empty or incomplete, make sure the enterprise-networking service is available to clients outside the cluster, perhaps through a NodePort or ingress solution, and find the address before continuing.
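For example, if you exposed the enterprise-networking service through a NodePort instead, a minimal sketch for assembling the address might look like the following (it assumes the first node reports an ExternalIP that is reachable from the remote cluster; adjust for your environment):

# A sketch, assuming a NodePort service and an externally reachable node.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
NODE_PORT=$(kubectl -n gloo-mesh get service enterprise-networking -o jsonpath='{.spec.ports[?(@.name=="grpc")].nodePort}')
RELAY_ADDRESS=${NODE_IP}:${NODE_PORT}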

Let's set variables for the remaining values:

CLUSTER_NAME=cluster-2 # Update value as needed
REMOTE_CONTEXT=kind-cluster-2 # Update value as needed

Now we are ready to register the remote cluster:

meshctl cluster register enterprise \
  --remote-context=$REMOTE_CONTEXT \
  --relay-server-address $RELAY_ADDRESS \
  $CLUSTER_NAME

You should see the following output:

Registering cluster
📃 Copying root CA relay-root-tls-secret.gloo-mesh to remote cluster from management cluster
📃 Copying bootstrap token relay-identity-token-secret.gloo-mesh to remote cluster from management cluster
💻 Installing relay agent in the remote cluster
Finished installing chart 'enterprise-agent' as release gloo-mesh:enterprise-agent
📃 Creating remote-cluster KubernetesCluster CRD in management cluster
⌚ Waiting for relay agent to have a client certificate
         Checking...
         Checking...
🗑 Removing bootstrap token
✅ Done registering cluster!

The meshctl command accomplished the following activities:

  1. Copied the root CA certificate relay-root-tls-secret to the remote cluster.
  2. Copied the bootstrap token relay-identity-token-secret to the remote cluster.
  3. Installed the relay agent (the enterprise-agent Helm chart) in the remote cluster.
  4. Created a KubernetesCluster object for the remote cluster in the management cluster.
  5. Waited for the relay agent to receive a client certificate from the relay server.
  6. Removed the bootstrap token from the remote cluster.

Now Gloo Mesh Enterprise and the relay agent on the remote cluster are configured to communicate with one another over mTLS to continuously discover and configure your service meshes and workloads.

When registering a remote cluster using Helm, you will need to run through these tasks yourself. The next section details how to accomplish those tasks and install the relay agent with Helm.

Register with Helm

You can also register a remote cluster using the Enterprise Agent Helm repository. The same information used for meshctl registration is needed here as well, and you must complete the following prerequisites before running the Helm installation.

Without these prerequisites, the relay agent deployment will fail.

Prerequisites

First create the namespace in the remote cluster:

CLUSTER_NAME=cluster-2 # Update value as needed
REMOTE_CONTEXT=kind-cluster-2 # Update value as needed

kubectl create ns gloo-mesh --context $REMOTE_CONTEXT

Now we will get the value of the root CA certificate and create a secret in the remote cluster:

MGMT_CONTEXT=kind-cluster-1 # Update value as needed

kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt

kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file ca.crt=ca.crt

rm ca.crt

By adding the root CA certificate to the remote cluster, the relay agent installation will trust the TLS certificate presented by the relay server. We also need to copy over the bootstrap token used for initial communication. This token is only used to validate the initial communication between the agent and server; once the gRPC connection is established, the relay server issues a client certificate to the relay agent to establish a mutually authenticated TLS session.

kubectl get secret relay-identity-token-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.token}' | base64 -d > token

kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file token=token

rm token
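As a quick sanity check, you can confirm that both secrets now exist in the remote cluster before installing the agent:

# Both secrets should be listed; if either is missing, repeat the copy steps above.
kubectl get secret relay-root-tls-secret relay-identity-token-secret \
  -n gloo-mesh --context $REMOTE_CONTEXT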

With these tasks accomplished, we are now ready to deploy the relay agent using Helm.

Install the Enterprise Agent

We are going to install the Enterprise Agent from the Helm repository located at https://storage.googleapis.com/gloo-mesh-enterprise/enterprise-agent. Make sure to review the Helm values options before installing. Some notable values include:

  1. relay.serverAddress - The address of the relay server running on the management cluster.
  2. relay.cluster - The name with which to register the cluster, matching the KubernetesCluster object created later in this guide.

Also note that the Enterprise Agent's version should match that of the enterprise-networking component running on the management cluster. Run meshctl version on the management cluster to review the enterprise-networking version.
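For example (the --kubecontext flag is assumed here; point it at the management cluster):

# Review the enterprise-networking server version reported for the management cluster.
meshctl version --kubecontext $MGMT_CONTEXT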

If you haven't already, you can add the repository by running the following:

helm repo add enterprise-agent https://storage.googleapis.com/gloo-mesh-enterprise/enterprise-agent
helm repo update
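You can then list the chart versions available in the repository to find the one matching your enterprise-networking version:

# List available Enterprise Agent chart versions.
helm search repo enterprise-agent --versions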

By default, the enterprise-networking service is of type LoadBalancer, and the cloud provider managing your Kubernetes cluster will automatically provision a public IP for the service. Get the complete relay-server-address with:


# If your cloud provider assigns the load balancer an IP address:
MGMT_INGRESS_ADDRESS=$(kubectl get svc -n gloo-mesh enterprise-networking -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
MGMT_INGRESS_PORT=$(kubectl -n gloo-mesh get service enterprise-networking -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
RELAY_ADDRESS=${MGMT_INGRESS_ADDRESS}:${MGMT_INGRESS_PORT}

# If the load balancer is assigned a hostname instead (for example, on AWS):
MGMT_INGRESS_ADDRESS=$(kubectl get svc -n gloo-mesh enterprise-networking -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
MGMT_INGRESS_PORT=$(kubectl -n gloo-mesh get service enterprise-networking -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
RELAY_ADDRESS=${MGMT_INGRESS_ADDRESS}:${MGMT_INGRESS_PORT}

If the above commands left you with a $RELAY_ADDRESS value that is empty or incomplete, make sure the enterprise-networking service is available to clients outside the cluster, perhaps through a NodePort or ingress solution, and find the address before continuing.

Then we will set our variables:

CLUSTER_NAME=cluster-2 # Update value as needed
REMOTE_CONTEXT=kind-cluster-2 # Update value as needed
ENTERPRISE_NETWORKING_VERSION=<current version> # Update based on meshctl version output

And now we will deploy the relay agent in the remote cluster.

helm install enterprise-agent enterprise-agent/enterprise-agent \
  --namespace gloo-mesh \
  --set relay.serverAddress=${RELAY_ADDRESS} \
  --set relay.cluster=${CLUSTER_NAME} \
  --kube-context=${REMOTE_CONTEXT} \
  --version ${ENTERPRISE_NETWORKING_VERSION}
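Once the release is installed, you can wait for the relay agent deployment to become ready before moving on (the deployment name enterprise-agent comes from the release installed above):

# Block until the relay agent deployment reports ready.
kubectl rollout status deployment/enterprise-agent -n gloo-mesh --context $REMOTE_CONTEXT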

Add a Kubernetes Cluster Object

We've successfully deployed the relay agent in the remote cluster. Now we need to add a KubernetesCluster object to the management cluster to make the relay server aware of the remote cluster. The metadata.name of the object must match the value passed for relay.cluster in the Helm chart above. The spec.clusterDomain must match the local cluster domain of the Kubernetes cluster.

kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: multicluster.solo.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: ${CLUSTER_NAME}
  namespace: gloo-mesh
spec:
  clusterDomain: cluster.local
EOF
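You can confirm the object was created in the management cluster:

# The registered cluster should appear in the list.
kubectl get kubernetesclusters -n gloo-mesh --context $MGMT_CONTEXT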

Validating the Registration

To validate that the cluster was registered successfully, you can run meshctl check agent. A successful check returns output like the following:
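One possible invocation (the --kubecontext flag is an assumption; point it at the registered remote cluster):

meshctl check agent --kubecontext $REMOTE_CONTEXT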

POD LOGS: enterprise-agent-test
Gloo Mesh Registered Cluster Installation
--------------------------------------------

🟢 Gloo Mesh Pods Status

Agent Configuration
----------------------

🟢 Gloo Mesh CRD Versions

Relay Connectivity
---------------------

🟢 Gloo Mesh Agent Connectivity

The same checks can be performed via Helm test:

helm test <release-name> --namespace <release-namespace> --kube-context=${REMOTE_CONTEXT} --logs

These checks include the following:

  1. That the agent pod is up and running.

  2. That the CRD versions expected by the agent match the versions of the CRDs installed on the cluster.

  3. That the agent can talk to enterprise-networking (i.e., the relay server).

In-Depth Validation of the Registration

We can validate the registration process by first checking to make sure the relay agent pod and secrets have been created on the remote cluster:

kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
NAME                                READY   STATUS    RESTARTS   AGE
enterprise-agent-64fc8cc9c5-v7b97   1/1     Running   7          25m

kubectl get secrets -n gloo-mesh --context $REMOTE_CONTEXT

NAME                                     TYPE                                  DATA   AGE
default-token-fcx9w                      kubernetes.io/service-account-token   3      18h
enterprise-agent-token-55mvq             kubernetes.io/service-account-token   3      25m
relay-client-tls-secret                  Opaque                                3      6m24s
relay-identity-token-secret              Opaque                                1      29m
relay-root-tls-secret                    Opaque                                1      18h
sh.helm.release.v1.enterprise-agent.v1   helm.sh/release.v1                    1      25m

The relay-client-tls-secret secret is the client certificate issued by the relay server. Seeing that entry, we know at the very least communication between the relay agent and server was successful.
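If you want to inspect the issued client certificate, a sketch like the following decodes it (the tls.crt key name inside the secret is an assumption; check the secret's data keys if yours differs):

# Print the subject and issuer of the relay client certificate.
kubectl get secret relay-client-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -issuer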

We can also check the logs on the enterprise-networking pod on the management cluster for communication from the remote cluster.

kubectl -n gloo-mesh --context $MGMT_CONTEXT logs deployment/enterprise-networking | grep $CLUSTER_NAME

You should see messages similar to:

{"level":"debug","ts":1616160185.5505846,"logger":"pull-resource-deltas","msg":"recieved request for delta: response_nonce:\"1\"","metadata":{":authority":["enterprise-networking.gloo-mesh.svc.cluster.local:11100"],"content-type":["application/grpc"],"user-agent":["grpc-go/1.34.0"],"x-cluster-id":["remote-cluster"]},"peer":"10.244.0.17:40074"}

Next Steps

And we're done! Any meshes in the registered cluster will be discovered and made available for configuration by Gloo Mesh Enterprise. See the guide on installing Istio to learn how to get Istio up and running on that cluster.