Register workload clusters

After you install the Gloo Mesh management components, register clusters so that Gloo Mesh can identify and manage their service meshes.

When you installed Gloo Mesh Enterprise in the management cluster, a deployment named gloo-mesh-mgmt-server was created to run the relay server. The relay server is exposed by the gloo-mesh-mgmt-server LoadBalancer service. When you register workload clusters to be managed by Gloo Mesh Enterprise, a deployment named gloo-mesh-agent is created on each workload cluster to run a relay agent. Each relay agent is exposed by a gloo-mesh-agent ClusterIP service, and all communication from the agent to the relay server on the management cluster is outbound. For more information about relay server-agent communication, see the relay architecture page. Cluster registration also creates a KubernetesCluster custom resource on the management cluster to represent the workload cluster and store relevant data, such as the workload cluster's local domain (“cluster.local”).
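
After you register a cluster, you can inspect these components yourself. The following commands are a sketch that assumes the default gloo-mesh installation namespace and the contexts that you set in Before you begin:

kubectl get deployment,service gloo-mesh-mgmt-server -n gloo-mesh --context $MGMT_CONTEXT
kubectl get deployment,service gloo-mesh-agent -n gloo-mesh --context $REMOTE_CONTEXT
kubectl get kubernetescluster -n gloo-mesh --context $MGMT_CONTEXT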

Before you begin

  1. Create or choose one or more workload clusters to register with Gloo Mesh. Note: The cluster name cannot include underscores (_).

  2. Set the names of your clusters from your infrastructure provider.
    export MGMT_CLUSTER=<management_cluster_name>
    export REMOTE_CLUSTER=<remote_cluster_name>
    
  3. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column.
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT=<remote-cluster-context>
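
    For example, with hypothetical context names, the output and exports might look like the following:
    kubectl config get-contexts
    # CURRENT   NAME        CLUSTER     AUTHINFO   NAMESPACE
    # *         mgmt        mgmt        admin
    #           cluster-1   cluster-1   admin
    export MGMT_CONTEXT=mgmt
    export REMOTE_CONTEXT=cluster-1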
    
  4. Production installations: Review Best practices for production to prepare your optional security measures. For example, if you provided your own certificates during Gloo Mesh installation, you can use these certificates during cluster registration too.

  5. To customize registration in detail, such as for production environments, register clusters with Helm. For quick registration, such as for testing environments, you can register clusters with meshctl.

Registering with Helm

Customize your cluster registration by using the gloo-mesh-agent Helm chart.

  1. In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster's local domain.

    • The metadata.name must match the name of the workload cluster that you will specify in the gloo-mesh-agent Helm chart in subsequent steps.
    • The spec.clusterDomain must match the local cluster domain of the Kubernetes cluster.
    • You can optionally give your cluster a label, such as env: prod, region: us-east, or another selector. Your workspaces can use the label to automatically add the cluster to the workspace.
    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
      name: ${REMOTE_CLUSTER}
      namespace: gloo-mesh
      labels:
        env: prod
    spec:
      clusterDomain: cluster.local
    EOF
    
  2. Get the version of Gloo Mesh Enterprise that the gloo-mesh-mgmt-server runs in the management cluster. The gloo-mesh-agent in each workload cluster must run the same version.

    meshctl version --kubecontext $MGMT_CONTEXT
    

    Example output:

     "server": [
       {
         "Namespace": "gloo-mesh",
         "components": [
           {
             "componentName": "gloo-mesh-mgmt-server",
             "images": [
               {
                 "name": "gloo-mesh-mgmt-server",
                 "domain": "gcr.io",
                 "path": "gloo-mesh/gloo-mesh-mgmt-server",
                 "version": "2.0.7"
               }
             ]
           },

  3. Save the version as an environment variable.

    export GLOO_MESH_VERSION=<version>
    
  4. In the management cluster, find the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server load balancer service. The gloo-mesh-agent relay agent in each workload cluster accesses this address via a secure connection. Depending on your cloud provider, the load balancer is assigned either an external IP address or a hostname.

     For load balancers that expose an external IP address:
       MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
       MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
       echo $MGMT_SERVER_NETWORKING_ADDRESS

     For load balancers that expose a hostname:
       MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
       MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
       echo $MGMT_SERVER_NETWORKING_ADDRESS

  5. Create the gloo-mesh namespace in your workload cluster.

    kubectl create ns gloo-mesh --context $REMOTE_CONTEXT
    
  6. Default certificates only: If you installed Gloo Mesh by using the default self-signed certificates, you must copy the root CA certificate to a secret in the workload cluster so that the relay agent trusts the TLS certificate from the relay server. You must also copy the bootstrap token to the workload cluster; this token is used only to validate the initial communication between the relay agent and server. After the gRPC connection is established, the relay server issues a client certificate to the relay agent to establish a mutually authenticated TLS session.

    1. Get the value of the root CA certificate from the management cluster and create a secret in the workload cluster.

      kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
      kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file ca.crt=ca.crt
      rm ca.crt
      
    2. Get the bootstrap token from the management cluster and create a secret in the workload cluster.

      kubectl get secret relay-identity-token-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.token}' | base64 -d > token
      kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file token=token
      rm token
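
     Optionally, confirm that both secrets now exist in the workload cluster before you continue:
       kubectl get secrets -n gloo-mesh --context $REMOTE_CONTEXT relay-root-tls-secret relay-identity-token-secret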
      
  7. Add and update the Helm repository for the Gloo Mesh Enterprise relay agent.

    helm repo add gloo-mesh-agent https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent
    helm repo update
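
    Optionally, confirm that a chart version matching your management server is available in the repository. This check assumes that you set $GLOO_MESH_VERSION in an earlier step:
      helm search repo gloo-mesh-agent/gloo-mesh-agent --versions | grep $GLOO_MESH_VERSION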
    
  8. Prepare a Helm values file, either with production-level settings or with the default settings.

    Production-level settings: You can edit the values-data-plane.yaml sample values file to provide your own details for settings that are recommended for production-level deployments, including FIPS-compliant images, custom certificates, and disabling rate limiting and external authentication in the gloo-mesh namespace. For more information about these settings, see Best practices for production and the agent Helm values documentation.

    1. Download the sample values file from GitHub to your local workstation.
      curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/helm-install/2.0/values-data-plane.yaml > values-data-plane.yaml
      
    2. Update the Helm values file with the environment variables that you previously set for $REMOTE_CLUSTER and $MGMT_SERVER_NETWORKING_ADDRESS.
      envsubst < values-data-plane.yaml > values-data-plane-env.yaml
      
    3. Provide your own details for the settings that are recommended for production deployments, including custom certificates, disabling rate limiting and external authentication in the gloo-mesh namespace, and more. If you do not want to use a setting, comment it out.

    Default settings: Alternatively, you can start from the default values of the Helm chart.

    1. Save the default Helm values. For more information, review the Gloo Mesh Enterprise agent Helm values documentation.
      helm show values gloo-mesh-agent/gloo-mesh-agent --version $GLOO_MESH_VERSION > values-data-plane-env.yaml
      
    2. Edit the file to provide the required details.
      • For cluster, specify the value of $REMOTE_CLUSTER.
      • For relay.serverAddress, specify the value of $MGMT_SERVER_NETWORKING_ADDRESS.
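
    For reference, a minimal values file contains at least these two settings (a sketch based on the fields named above; all other values keep the chart defaults):
      # values-data-plane-env.yaml (minimal sketch)
      cluster: cluster-1                          # the value of $REMOTE_CLUSTER
      relay:
        serverAddress: <lb-address>:<grpc-port>   # the value of $MGMT_SERVER_NETWORKING_ADDRESS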

  9. Deploy the relay agent to the workload cluster.

    helm install gloo-mesh-agent gloo-mesh-agent/gloo-mesh-agent \
      --namespace gloo-mesh \
      --kube-context=$REMOTE_CONTEXT \
      --version $GLOO_MESH_VERSION \
      --values values-data-plane-env.yaml
    
    If you installed the Gloo Mesh management plane in insecure mode by including the --set insecure=true flag in the install command, include the --set insecure=true flag in each helm install gloo-mesh-agent command.
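
    You can wait for the agent deployment to become ready before you continue. For example:
      kubectl rollout status deployment/gloo-mesh-agent -n gloo-mesh --context $REMOTE_CONTEXT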
  10. Verify the registration by following the steps in Verifying the registration.

  11. Repeat steps 1 and 5 - 10 to register each workload cluster with Gloo Mesh. Remember to change the variables for each cluster name and context.

    export REMOTE_CLUSTER=<remote_cluster_name>
    export REMOTE_CONTEXT=<remote-cluster-context>
    

Registering with meshctl

You can use the CLI tool meshctl to register your workload clusters.

  1. In the management cluster, find the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server load balancer service. When you register the workload clusters in subsequent steps, the gloo-mesh-agent relay agent in each cluster accesses this address via a secure connection. Depending on your cloud provider, the load balancer is assigned either an external IP address or a hostname.

     For load balancers that expose an external IP address:
       MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
       MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
       echo $MGMT_SERVER_NETWORKING_ADDRESS

     For load balancers that expose a hostname:
       MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
       MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
       echo $MGMT_SERVER_NETWORKING_ADDRESS

     If $MGMT_SERVER_NETWORKING_ADDRESS is empty or incomplete, make sure that the gloo-mesh-mgmt-server service in the management cluster is reachable from outside the cluster, such as through a Kubernetes LoadBalancer or NodePort service.

  2. Register the workload cluster. The meshctl command completes the following:

    • Creates the gloo-mesh namespace
    • Copies the root CA certificate to the workload cluster
    • Copies the bootstrap token to the workload cluster
    • Installs the relay agent in the workload cluster
    • Creates a KubernetesCluster custom resource in the management cluster
    meshctl cluster register \
      --remote-context=$REMOTE_CONTEXT \
      --relay-server-address $MGMT_SERVER_NETWORKING_ADDRESS \
      $REMOTE_CLUSTER
    
    If you installed the Gloo Mesh management plane in insecure mode by running meshctl install --set insecure=true, include the --relay-server-insecure=true flag in each meshctl cluster register command.

    Example output:

    Registering cluster
    πŸ“ƒ Copying root CA relay-root-tls-secret.gloo-mesh to remote cluster from management cluster
    πŸ“ƒ Copying bootstrap token relay-identity-token-secret.gloo-mesh to remote cluster from management cluster
    πŸ’» Installing relay agent in the remote cluster
    Finished installing chart 'gloo-mesh-agent' as release gloo-mesh:gloo-mesh-agent
    πŸ“ƒ Creating remote.cluster KubernetesCluster CRD in management cluster
    ⌚ Waiting for relay agent to have a client certificate
             Checking...
             Checking...
    πŸ—‘ Removing bootstrap token
    βœ… Done registering cluster!
    
  3. Verify the registration by following the steps in Verifying the registration.

  4. Repeat these steps to register each workload cluster with Gloo Mesh. Remember to change the variables for each cluster name and context.

    export REMOTE_CLUSTER=<remote_cluster_name>
    export REMOTE_CONTEXT=<remote-cluster-context>
    

Verifying the registration

After you register a workload cluster, verify that the relay agent is successfully deployed and that the management cluster identified the workload cluster.

  1. Verify that the relay agent pod has a status of Running.

    kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
    

    Example output:

    NAME                                READY   STATUS    RESTARTS   AGE
    gloo-mesh-agent-64fc8cc9c5-v7b97   1/1     Running   0          25m
    
  2. Verify that each workload cluster is successfully registered with Gloo Mesh.

    kubectl get kubernetescluster -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME           AGE
    cluster-1      27s
    cluster-2      23s
    

Verify the relay connection

  1. Check that the relay connection between the management server and workload agents is healthy.

    1. Forward port 9091 of the gloo-mesh-mgmt-server pod to your localhost.
      kubectl port-forward -n gloo-mesh --context $MGMT_CONTEXT deploy/gloo-mesh-mgmt-server 9091
      
    2. In your browser, connect to http://localhost:9091/metrics.
    3. In the metrics output, look for the following lines. If the values are 1, the agents in the workload clusters are successfully registered with the management server. If the values are 0, the agents are not connected. A relay_push_clients_warmed value of 1 indicates that the management server is ready to push configuration to that agent.
      relay_pull_clients_connected{cluster="cluster-1"} 1
      relay_pull_clients_connected{cluster="cluster-2"} 1
      relay_push_clients_connected{cluster="cluster-1"} 1
      relay_push_clients_connected{cluster="cluster-2"} 1
      relay_push_clients_warmed{cluster="cluster-1"} 1
      relay_push_clients_warmed{cluster="cluster-2"} 1
      
    4. Take snapshots of the management server's input and output in case you want to refer to them later, such as when you open a Support issue.
      curl localhost:9091/snapshots/input -o input_snapshot.json 
      curl localhost:9091/snapshots/output -o output_snapshot.json
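
    If you prefer the command line over the browser, you can also query the metrics endpoint directly while the port-forward from the first substep is running. For example:
      curl -s localhost:9091/metrics | grep relay_pull_clients_connected
      curl -s localhost:9091/metrics | grep relay_push_clients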
      
  2. Check that the Gloo Mesh management services are running.

    1. Send a gRPC request to the external relay address of the Gloo Mesh management server that you retrieved during registration ($MGMT_SERVER_NETWORKING_ADDRESS).

      kubectl get secret --context $MGMT_CONTEXT -n gloo-mesh relay-root-tls-secret -o json | jq -r '.data["ca.crt"]' | base64 -d  > ca.crt
      grpcurl -authority enterprise-networking.gloo-mesh --cacert=./ca.crt $MGMT_SERVER_NETWORKING_ADDRESS list
      
    2. Verify that the following services are listed.

      envoy.service.accesslog.v3.AccessLogService
      envoy.service.metrics.v2.MetricsService
      envoy.service.metrics.v3.MetricsService
      grpc.reflection.v1alpha.ServerReflection
      relay.multicluster.skv2.solo.io.RelayCertificateService
      relay.multicluster.skv2.solo.io.RelayPullServer
      relay.multicluster.skv2.solo.io.RelayPushServer
      
  3. Check the logs on the gloo-mesh-mgmt-server pod on the management cluster for communication from the workload cluster.

    kubectl -n gloo-mesh --context $MGMT_CONTEXT logs deployment/gloo-mesh-mgmt-server | grep $REMOTE_CLUSTER
    

    Example output:

    {"level":"debug","ts":1616160185.5505846,"logger":"pull-resource-deltas","msg":"recieved request for delta: response_nonce:\"1\"","metadata":{":authority":["gloo-mesh-mgmt-server.gloo-mesh.svc.cluster.local:11100"],"content-type":["application/grpc"],"user-agent":["grpc-go/1.34.0"],"x-cluster-id":["remote.cluster"]},"peer":"10.244.0.17:40074"}
    

Optional: Setting up rate limiting and external authentication

To enable mTLS with rate limiting and external authentication, you must add an Istio sidecar injection directive for those components. Although you can enable an injection directive on the gloo-mesh namespace, this approach makes the management plane components dependent on the functionality of Istio's mutating webhook, which can be a fragile coupling and is not recommended as a best practice. In production setups, install the Gloo Mesh agent Helm chart with only the rate limiting and external authentication services enabled to the gloo-mesh-addons namespace, and label the gloo-mesh-addons namespace for Istio injection.

  1. Create the gloo-mesh-addons namespace.

    kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT
    
  2. In a values.yaml file, enable rate limiting and external authentication, and disable the relay agent.

    rate-limiter:
      enabled: true
    ext-auth-service:
      enabled: true
    glooMeshAgent:
      enabled: false
    
  3. Create a gloo-mesh-addons release from the Gloo Mesh agent Helm chart, to install only rate limiting and external authentication in the gloo-mesh-addons namespace.

    helm install gloo-mesh-agent-addons gloo-mesh-agent/gloo-mesh-agent \
       --namespace gloo-mesh-addons \
       --kube-context=$REMOTE_CONTEXT \
       --values values.yaml
    
  4. Label the gloo-mesh-addons namespace for Istio injection.

    kubectl --context $REMOTE_CONTEXT label ns gloo-mesh-addons istio-injection=enabled --overwrite
    
  5. Verify that the rate limiting and external authentication components are successfully installed.

    kubectl get pods -n gloo-mesh-addons --context $REMOTE_CONTEXT
    

    Example output:

    NAME                                     READY   STATUS    RESTARTS   AGE
    rate-limit-3d62244cdb-fcrvd              2/2     Running   0          4m2s
    ext-auth-service-3d62244cdb-fcrvd        2/2     Running   0          4m2s
    

Next, you can check out the guide for rate limiting to use this feature.

Optional: Configure the locality labels for the nodes

Gloo Mesh uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.

Verify that your nodes have locality labels

Verify that your nodes have at least region and zone labels. If they do, and you do not want to change the labels, you can skip the remaining steps. The following examples assume two registered workload clusters, with their contexts saved in the $REMOTE_CONTEXT1 and $REMOTE_CONTEXT2 environment variables.

kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'

Example output with region and zone labels:

..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"

Add locality labels to your nodes

If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same region label to each node, but a separate zone label per node. The values are not validated against your underlying infrastructure provider. The following example shows how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.

  1. Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the --overwrite flag in the command.
    kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
    kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west
    
  2. List the nodes in each cluster. Note the name for each node.
    kubectl get nodes --context $REMOTE_CONTEXT1
    kubectl get nodes --context $REMOTE_CONTEXT2
    
  3. Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the --overwrite flag in the command.
    kubectl label node <cluster-1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
    kubectl label node <cluster-1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
    kubectl label node <cluster-1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3
    
    kubectl label node <cluster-2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
    kubectl label node <cluster-2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
    kubectl label node <cluster-2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
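
    To confirm that the labels are applied, you can list the nodes with the label columns. For example:
    kubectl get nodes --context $REMOTE_CONTEXT1 -L topology.kubernetes.io/region -L topology.kubernetes.io/zone
    kubectl get nodes --context $REMOTE_CONTEXT2 -L topology.kubernetes.io/region -L topology.kubernetes.io/zone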
    

Next Steps

The Gloo Mesh Enterprise management plane and the workload clusters in the data plane can now communicate over mTLS to continuously discover and configure your service meshes and workloads.

Now that you have Gloo Mesh Enterprise up and running, check out some of the following resources to learn more about Gloo Mesh or try other Gloo Mesh features.