Registering clusters with Gloo Mesh

After you install the Gloo Mesh management components, register clusters so that Gloo Mesh can identify and manage their service meshes.

When you installed Gloo Mesh Enterprise in the management cluster, a deployment named enterprise-networking was created to run the relay server. The relay server is exposed by the enterprise-networking LoadBalancer service. When you register remote clusters to be managed by Gloo Mesh Enterprise, a deployment named enterprise-agent is created on each remote cluster to run a relay agent. Each relay agent is exposed by an enterprise-agent ClusterIP service, and all communication is outbound from the relay agent to the relay server on the management cluster. For more information about relay server-agent communication, see the Architecture page. A KubernetesCluster custom resource is also created on the management cluster during registration to represent the remote cluster and store relevant data, such as the remote cluster's local domain ("cluster.local").
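For example, after you set the context variables in the following section, you can list these components in the management cluster:

kubectl get deploy,svc -n gloo-mesh --context $MGMT_CONTEXT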

Before you begin

  1. Create or choose one or more workload clusters to register with Gloo Mesh.

  2. Install Istio into each workload cluster.

  3. Set the names of your clusters from your infrastructure provider.
    export MGMT_CLUSTER=<management_cluster_name>
    export REMOTE_CLUSTER=<remote_cluster_name>
    
  4. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. For sample output, see the example after this list.
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT=<remote-cluster-context>
    
  5. Production installations: Review Best practices for production to prepare your optional security measures. For example, if you provided your own certificates during Gloo Mesh installation, you can use these certificates during cluster registration too.

  6. To customize registration in detail, such as for production environments, register clusters with Helm. For quick registration, such as for testing environments, you can register clusters with meshctl.
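For reference, the kubectl config get-contexts output in step 4 looks similar to the following example, with illustrative cluster and context names. In this example, you would set MGMT_CONTEXT to mgmt-cluster and REMOTE_CONTEXT to cluster-1.

CURRENT   NAME           CLUSTER        AUTHINFO       NAMESPACE
*         mgmt-cluster   mgmt-cluster   mgmt-cluster
          cluster-1      cluster-1      cluster-1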

Registering with Helm

Customize your cluster registration by using the enterprise-agent Helm chart.

  1. In the management cluster, create a KubernetesCluster resource to represent the remote cluster and store relevant data, such as the remote cluster's local domain. The metadata.name must match the name of the remote cluster that you will specify in the enterprise-agent Helm chart in subsequent steps. The spec.clusterDomain must match the local cluster domain of the Kubernetes cluster.
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: multicluster.solo.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: ${REMOTE_CLUSTER}
  namespace: gloo-mesh
spec:
  clusterDomain: cluster.local
EOF
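To confirm that the resource was created, you can list the KubernetesCluster resources in the management cluster. This check assumes that the CRD's plural name is kubernetesclusters.
kubectl get kubernetesclusters -n gloo-mesh --context $MGMT_CONTEXT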
  2. Get the Gloo Mesh Enterprise version that the enterprise-networking relay server runs in the management cluster. The enterprise-agent relay agent must run the same version.
meshctl version --kubecontext $MGMT_CONTEXT

Example output:

"server": [
{
  "Namespace": "gloo-mesh",
  "components": [
    {
      "componentName": "enterprise-networking",
      "images": [
        {
          "name": "enterprise-networking",
          "domain": "gcr.io",
          "path": "gloo-mesh/enterprise-networking",
          "version": "1.3.0-beta7"
        }
      ]
    },

  3. Save the version as an environment variable.
export ENTERPRISE_NETWORKING_VERSION=<version>
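If you prefer to script this step, the following sketch extracts the version with jq. It assumes that jq is installed and that the full meshctl version output is valid JSON with the structure shown in the example above; verify the filter against your actual output.
# Assumes the JSON structure shown in the example output above.
export ENTERPRISE_NETWORKING_VERSION=$(meshctl version --kubecontext $MGMT_CONTEXT \
  | jq -r '.server[].components[] | select(.componentName == "enterprise-networking") | .images[].version' \
  | head -n 1)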
  4. In the management cluster, find the external address and port that were assigned by your cloud provider to the enterprise-networking load balancer service. The enterprise-agent relay agent in each cluster accesses this address via a secure connection.
kubectl get svc -n gloo-mesh enterprise-networking --context $MGMT_CONTEXT

In this example output, the address and port are 34.85.240.14:9900.

NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)          AGE
enterprise-networking   LoadBalancer   10.151.241.242   34.85.240.14   9900:32071/TCP   9m50s

Note that if the external address is unassigned, make sure that the enterprise-networking service in the management cluster is available outside the cluster, such as through a Kubernetes LoadBalancer or NodePort service.
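For example, if the service was created with a different type, you can patch it to request an external load balancer from your cloud provider. This is a sketch; confirm that your provider supports external load balancers before you change the service type.
kubectl patch svc enterprise-networking -n gloo-mesh --context $MGMT_CONTEXT \
  --type merge -p '{"spec":{"type":"LoadBalancer"}}'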

  5. Save the address and port as an environment variable.
export ENTERPRISE_NETWORKING_ADDRESS=<IP:9900>
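You can also derive the address with jsonpath, as shown in the meshctl registration section later on this page. This variant assumes that the load balancer exposes an IP address rather than a hostname and that the relay server listens on the default port 9900 from the example output.
export ENTERPRISE_NETWORKING_ADDRESS=$(kubectl get svc -n gloo-mesh enterprise-networking --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}'):9900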
  6. Create the gloo-mesh namespace in your remote cluster.
kubectl create ns gloo-mesh --context $REMOTE_CONTEXT
  7. Default certificates only: If you installed Gloo Mesh by using the default self-signed certificates, you must copy the root CA certificate to a secret in the remote cluster so that the relay agent will trust the TLS certificate from the relay server. You must also copy the bootstrap token used for initial communication to the remote cluster. This token is used only to validate initial communication between the relay agent and server. After the gRPC connection is established, the relay server issues a client certificate to the relay agent to establish a mutually authenticated TLS session.

    1. Get the value of the root CA certificate from the management cluster and create a secret in the remote cluster.
    kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
    kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file ca.crt=ca.crt
    rm ca.crt
    
    2. Get the bootstrap token from the management cluster and create a secret in the remote cluster.
    kubectl get secret relay-identity-token-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.token}' | base64 -d > token
    kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file token=token
    rm token
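
    To confirm that both secrets now exist in the remote cluster before you install the relay agent, run the following check.
    kubectl get secret relay-root-tls-secret relay-identity-token-secret -n gloo-mesh --context $REMOTE_CONTEXT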
    
  8. Add and update the Helm repository for the Gloo Mesh Enterprise agent.

helm repo add enterprise-agent https://storage.googleapis.com/gloo-mesh-enterprise/enterprise-agent
helm repo update
  9. Optional: View the Helm values.
helm show values enterprise-agent/enterprise-agent
  10. Make any necessary customizations to the Helm chart for your registration by preparing a Helm values file. The following sample command downloads the values-data-plane.yaml values file from GitHub to your local workstation. For example, you can edit this file to provide your own details for settings that are recommended for production deployments, such as FIPS-compliant images, custom certificates, and disabling rate limiting and external authentication in the gloo-mesh namespace. For more information about these settings, see Best practices for production and the Helm values documentation.
curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/helm-install/1.3/values-data-plane.yaml > values-data-plane.yaml
  11. Update the Helm values file with the environment variables that you previously set for $REMOTE_CLUSTER and $ENTERPRISE_NETWORKING_ADDRESS.
envsubst < values-data-plane.yaml > values-data-plane-env.yaml
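For example, if the values file references the variables as follows, envsubst writes a copy that contains your actual cluster name and relay server address. This is a hypothetical excerpt; the actual keys depend on the chart version, so compare against the sample values file that you downloaded.
# Hypothetical excerpt of values-data-plane.yaml
relay:
  cluster: ${REMOTE_CLUSTER}
  serverAddress: ${ENTERPRISE_NETWORKING_ADDRESS}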
  12. Deploy the relay agent to the remote cluster.
helm install enterprise-agent enterprise-agent/enterprise-agent \
  --namespace gloo-mesh \
  --kube-context=${REMOTE_CONTEXT} \
  --version ${ENTERPRISE_NETWORKING_VERSION} \
  --values values-data-plane-env.yaml
If you installed the Gloo Mesh management plane in insecure mode by including the --set enterprise-networking.global.insecure=true flag in the install command, include the --set global.insecure=true flag in each helm install enterprise-agent command.
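For a quick check before the full verification, you can confirm that the Helm release is deployed and that the enterprise-agent deployment rolls out.
helm list -n gloo-mesh --kube-context $REMOTE_CONTEXT
kubectl rollout status deployment/enterprise-agent -n gloo-mesh --context $REMOTE_CONTEXT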
  13. Verify the registration, as described in Verifying the registration later on this page.

  14. Repeat these steps to register each workload cluster with Gloo Mesh. Remember to change the variables for each cluster name and context; the relay server version and address that you retrieved from the management cluster stay the same.

export REMOTE_CLUSTER=<remote_cluster_name>
export REMOTE_CONTEXT=<remote-cluster-context>

Registering with meshctl

You can use the CLI tool meshctl to register your remote clusters.

  1. In the management cluster, find the external address and port that were assigned by your cloud provider to the enterprise-networking load balancer service. When you register the remote clusters in subsequent steps, the enterprise-agent relay agent in each cluster accesses this address via a secure connection.

    
    If the load balancer exposes an external IP address, such as on Google Kubernetes Engine:

    ENTERPRISE_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh enterprise-networking --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    ENTERPRISE_NETWORKING_PORT=$(kubectl -n gloo-mesh get service enterprise-networking --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
    ENTERPRISE_NETWORKING_ADDRESS=${ENTERPRISE_NETWORKING_DOMAIN}:${ENTERPRISE_NETWORKING_PORT}
    echo $ENTERPRISE_NETWORKING_ADDRESS

    If the load balancer exposes a hostname, such as an ELB address on AWS:

    ENTERPRISE_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh enterprise-networking --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    ENTERPRISE_NETWORKING_PORT=$(kubectl -n gloo-mesh get service enterprise-networking --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
    ENTERPRISE_NETWORKING_ADDRESS=${ENTERPRISE_NETWORKING_DOMAIN}:${ENTERPRISE_NETWORKING_PORT}
    echo $ENTERPRISE_NETWORKING_ADDRESS
    Note that if $ENTERPRISE_NETWORKING_ADDRESS is empty or incomplete, make sure that the enterprise-networking service in the management cluster is available outside the cluster, such as through a Kubernetes LoadBalancer or NodePort service.

  2. Register the remote cluster. The meshctl command completes the following:

    • Creates the gloo-mesh namespace
    • Copies the root CA certificate to the remote cluster
    • Copies the bootstrap token to the remote cluster
    • Installs the relay agent in the remote cluster
    • Creates a KubernetesCluster custom resource in the management cluster
meshctl cluster register \
  --remote-context=$REMOTE_CONTEXT \
  --relay-server-address $ENTERPRISE_NETWORKING_ADDRESS \
  $REMOTE_CLUSTER
If you installed the Gloo Mesh management plane in insecure mode by running meshctl install --set global.insecure=true, include the --relay-server-insecure=true flag in each meshctl cluster register command.

Example output:

Registering cluster
πŸ“ƒ Copying root CA relay-root-tls-secret.gloo-mesh to remote cluster from management cluster
πŸ“ƒ Copying bootstrap token relay-identity-token-secret.gloo-mesh to remote cluster from management cluster
πŸ’» Installing relay agent in the remote cluster
Finished installing chart 'enterprise-agent' as release gloo-mesh:enterprise-agent
πŸ“ƒ Creating remote.cluster KubernetesCluster CRD in management cluster
⌚ Waiting for relay agent to have a client certificate
         Checking...
         Checking...
πŸ—‘ Removing bootstrap token
βœ… Done registering cluster!
  3. Verify the registration, as described in Verifying the registration later on this page.

  4. Repeat these steps to register each workload cluster with Gloo Mesh. Remember to change the variables for each cluster name and context.

export REMOTE_CLUSTER=<remote_cluster_name>
export REMOTE_CONTEXT=<remote-cluster-context>

Verifying the registration

After you register a remote cluster, verify that the relay agent is successfully deployed and that the management cluster identified the remote cluster.

  1. Verify that the relay agent pod has a status of Running.
kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT

Example output:

NAME                                READY   STATUS    RESTARTS   AGE
enterprise-agent-64fc8cc9c5-v7b97   1/1     Running   0          25m
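To block until the pod is ready instead of polling manually, you can use kubectl wait. This sketch assumes that the agent pods carry an app=enterprise-agent label; if the selector does not match any pods, check the labels with kubectl get pods --show-labels.
kubectl wait pod -l app=enterprise-agent -n gloo-mesh --context $REMOTE_CONTEXT \
  --for=condition=Ready --timeout=120s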
  2. Verify that the cluster is successfully registered. This check might take a few seconds to ensure that the relay agent pod is running, that the CRD versions expected by the enterprise-agent match the versions of the CRDs that are installed in the remote cluster, and that the relay agent is connected to the relay server in the management cluster.
meshctl check agent --kubecontext $REMOTE_CONTEXT

Example output:

POD LOGS: enterprise-agent-test
Gloo Mesh Registered Cluster Installation

🟒 Gloo Mesh Pods Status

Agent Configuration

🟒 Gloo Mesh CRD Versions

Relay Connectivity

🟒 Gloo Mesh Agent Connectivity
  3. Verify that the cluster is successfully identified by the management plane. This check might take a few seconds to ensure that the expected remote relay agent is now running and is connected to the relay server in the management cluster.
meshctl check server --kubecontext $MGMT_CONTEXT

Example output:

Gloo Mesh Management Cluster Installation

🟒 Gloo Mesh Pods Status
+-----------+------------+-------------------------------+----------------+
|  CLUSTER  | REGISTERED | DASHBOARDS AND AGENTS PULLING | AGENTS PUSHING |
+-----------+------------+-------------------------------+----------------+
| cluster-1 | true       |                             2 |              1 |
+-----------+------------+-------------------------------+----------------+

🟒 Gloo Mesh Agents Connectivity

Management Configuration

🟒 Gloo Mesh CRD Versions

🟒 Gloo Mesh Networking Configuration Resources
  4. Optional: Check the logs on the enterprise-networking pod on the management cluster for communication from the remote cluster.
kubectl -n gloo-mesh --context $MGMT_CONTEXT logs deployment/enterprise-networking | grep $REMOTE_CLUSTER

Example output:

{"level":"debug","ts":1616160185.5505846,"logger":"pull-resource-deltas","msg":"recieved request for delta: response_nonce:\"1\"","metadata":{":authority":["enterprise-networking.gloo-mesh.svc.cluster.local:11100"],"content-type":["application/grpc"],"user-agent":["grpc-go/1.34.0"],"x-cluster-id":["remote.cluster"]},"peer":"10.244.0.17:40074"}

Optional: Setting up rate limiting and external authentication

To enable mTLS with rate limiting and external authentication, you must add an injection directive for those components. Although you can enable an injection directive on the gloo-mesh namespace, this directive makes the Gloo Mesh components in that namespace dependent on the functionality of Istio's mutating webhook, which can be a fragile coupling and is not recommended as a best practice. In production setups, install the agent chart with only the rate limiting and external authentication services enabled in the gloo-mesh-addons namespace, and label the gloo-mesh-addons namespace for Istio injection.

  1. Create the gloo-mesh-addons namespace.
kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT
  2. In a values.yaml file, enable rate limiting and external authentication, and disable the enterprise agent.
rate-limiter: 
  enabled: true
ext-auth-service: 
  enabled: true
enterpriseAgent:
  enabled: false
  3. Create an enterprise-agent-addons release from the Gloo Mesh agent Helm chart to install only rate limiting and external authentication in the gloo-mesh-addons namespace.
helm install enterprise-agent-addons enterprise-agent/enterprise-agent \
   --namespace gloo-mesh-addons \
   --set licenseKey=${GLOO_MESH_LICENSE_KEY} \
   --kube-context=${REMOTE_CONTEXT} \
   --values values.yaml
  4. Label the gloo-mesh-addons namespace for Istio injection.
kubectl --context $REMOTE_CONTEXT label ns gloo-mesh-addons istio-injection=enabled --overwrite
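To confirm that the label is applied, review the namespace labels.
kubectl get ns gloo-mesh-addons --context $REMOTE_CONTEXT --show-labels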
  5. Verify that the rate limiting and external authentication components are successfully installed.
kubectl get pods -n gloo-mesh-addons --context $REMOTE_CONTEXT

Example output:

NAME                                     READY   STATUS    RESTARTS   AGE
rate-limit-3d62244cdb-fcrvd              2/2     Running   0          4m2s
ext-auth-service-3d62244cdb-fcrvd        2/2     Running   0          4m2s

Next, you can check out the guides for rate limiting and external authentication to use these features.

Next Steps

The Gloo Mesh Enterprise management plane and the remote clusters in the data plane can now communicate over mTLS to continuously discover and configure your service meshes and workloads.

Now that you have Gloo Mesh Enterprise up and running, check out some of the following resources to learn more about Gloo Mesh or try other Gloo Mesh features.