Install Gloo

Install the Gloo Platform management components in one cluster, and register workload clusters with Gloo Mesh.

Your Gloo setup consists of a management plane and a data plane.

Before you begin

  1. Save the Gloo Mesh Enterprise license key that you got from your Solo account representative as an environment variable. If you do not have a key yet, you can get a trial license by contacting an account representative. If you prefer to specify license keys in a secret instead, see Prepare to install.

    export GLOO_MESH_LICENSE_KEY=<license_key>
    
  2. Install the following CLI tools:

    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
    • meshctl, the Gloo command line tool for bootstrapping Gloo Platform, registering clusters, describing configured resources, and more.
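The "within one minor version" skew rule for kubectl can be expressed as a small shell helper. This is a sketch for a quick sanity check; the function name and example versions are illustrative and not part of the Gloo or Kubernetes tooling, and it only compares minor versions within the same major version.

```shell
# Return "yes" if the client minor version is within one of the
# server minor version, "no" otherwise. Assumes same major version.
skew_ok() {
  client_minor=${1#*.}; client_minor=${client_minor%%.*}
  server_minor=${2#*.}; server_minor=${server_minor%%.*}
  diff=$(( client_minor - server_minor ))
  [ "$diff" -ge -1 ] && [ "$diff" -le 1 ] && echo yes || echo no
}

skew_ok "1.24.0" "1.23.9"   # prints "yes"
skew_ok "1.21.0" "1.23.9"   # prints "no"
```

In practice, compare the output of `kubectl version --client` against the server version that your clusters report.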
  3. Create or use existing Kubernetes clusters. For a multicluster setup, you need at least two clusters. One cluster is set up as the Gloo management plane where Gloo Platform components are installed. The other cluster runs your Kubernetes workloads and service meshes. You can optionally add more workload clusters to your setup. The instructions in this guide assume one management cluster and two remote workload clusters. Note: The cluster name cannot include underscores (_).

  4. Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.

    export MGMT_CLUSTER=mgmt-cluster
    export REMOTE_CLUSTER1=cluster-1
    export REMOTE_CLUSTER2=cluster-2
    
  5. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SAN are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster-1-context>
    export REMOTE_CONTEXT2=<remote-cluster-2-context>
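Because underscores in context names break the SAN specification described in the note above, you can flag bad names before exporting them. This is a hypothetical helper, not part of meshctl:

```shell
# Flag context names that contain underscores, which are not
# FQDN-compliant in the SAN of the generated relay certificate.
check_context_name() {
  case "$1" in
    *_*) echo "invalid: $1" ;;
    *)   echo "ok: $1" ;;
  esac
}

check_context_name "mgmt-cluster-context"   # prints "ok: mgmt-cluster-context"
check_context_name "mgmt_cluster_context"   # prints "invalid: mgmt_cluster_context"
```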
    
    
  6. Set the Gloo Mesh Enterprise version. The latest version is used as an example. You can find other versions in the Changelog documentation. Append '-fips' for a FIPS-compliant image, such as '2.1.0-fips'. Do not include v before the version number.

    Gloo Platform version 2.1.0 is not compatible with previous 1.x releases and custom resources such as VirtualMesh or TrafficPolicy.

    export GLOO_VERSION=2.1.0
    
  7. To customize your installation in detail, such as for production environments, install with Helm. For quick installations, such as for testing environments, you can install with meshctl.

Install with Helm

Customize your Gloo setup by installing with the Gloo Platform Helm chart.

Install the management components

  1. Production installations: Review Best practices for production to prepare your optional security measures. For example, before you begin your Gloo installation, you can provide your own certificates and set up secure access to the Gloo UI.

  2. Install helm, the Kubernetes package manager.

  3. Add and update the Helm repositories for Gloo Platform.

    helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
    helm repo add gloo-mesh-agent https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent
    helm repo update
    
  4. Create the gloo-mesh namespace.

    kubectl create ns gloo-mesh --context $MGMT_CONTEXT
    
  5. Prepare a Helm values file for production-level settings or for default settings.

    You can edit the values-mgmt-plane.yaml values file to provide your own details for settings that are recommended for production-level deployments, including FIPS-compliant images, custom certificates, and OIDC authorization for the Gloo UI. Additionally, this values file includes a glooMeshMgmtServer.serviceOverrides section, which applies the recommended Amazon Web Services (AWS) annotations for modifying the deployed load balancer service. For more information about these settings, see Best practices for production and the Helm values documentation for each component.

    1. Production-level settings: Download the sample values file from GitHub to your local workstation.
      curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/helm-install/2.1/values-mgmt-plane.yaml > values-mgmt-plane.yaml
      
    2. Update the Helm values file with the environment variables that you previously set for $MGMT_CLUSTER, $GLOO_MESH_LICENSE_KEY, and $GLOO_VERSION. Save the updated file as values-mgmt-plane-env.yaml.
      • Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
        envsubst '${MGMT_CLUSTER},${GLOO_MESH_LICENSE_KEY},${GLOO_VERSION}' < values-mgmt-plane.yaml > values-mgmt-plane-env.yaml
        open values-mgmt-plane-env.yaml
        
    3. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following.
      • Provide your custom certificates in the glooMeshMgmtServer.relay section. Otherwise, you can enable the default Gloo CA relay certificates.
      • Optionally set up OIDC authorization for the Gloo UI in the glooMeshUi.auth section. OIDC is disabled by default.
      • For OpenShift clusters, set all instances of floatingUserId to true.
      • Review the other Helm value settings for changes that you might want to make. For example, you might use Gloo Mesh with other Gloo products such as Gloo Gateway and provide the glooGatewayLicenseKey.
    1. Default settings: Save the default Helm values. Note that the gloo-mesh-enterprise Helm chart bundles multiple components, including glooMeshMgmtServer, glooMeshUi, and glooMeshRedis. Each is versioned in step with the parent gloo-mesh-enterprise chart, and each has its own Helm values for advanced customization. For more information, review the Gloo management Helm values documentation.
      helm show values gloo-mesh-enterprise/gloo-mesh-enterprise --version $GLOO_VERSION > values-mgmt-plane-env.yaml
      open values-mgmt-plane-env.yaml
      
    2. Edit the file to provide the required details.
      • For glooMeshLicenseKey, specify your Gloo Mesh license key (value of $GLOO_MESH_LICENSE_KEY).
      • For global.cluster and mgmtClusterName, specify your management cluster name (value of $MGMT_CLUSTER).
      • For OpenShift clusters, set all instances of floatingUserId to true.
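As an alternative to editing the full default values file, the required settings from this step can be collected in a small override file and passed with an additional --values flag. This is a sketch under the key names described above; verify them against the chart's values documentation, and substitute the environment variables (for example with envsubst) before installing.

```yaml
# Minimal override sketch for the gloo-mesh-enterprise chart.
glooMeshLicenseKey: ${GLOO_MESH_LICENSE_KEY}
global:
  cluster: mgmt-cluster
mgmtClusterName: mgmt-cluster
```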

  6. Install the Gloo management Helm chart in the gloo-mesh namespace, including the customizations in your Helm values file.

    helm install gloo-mgmt gloo-mesh-enterprise/gloo-mesh-enterprise \
      --namespace gloo-mesh \
      --kube-context $MGMT_CONTEXT \
      --set licenseKey=$GLOO_MESH_LICENSE_KEY \
      --values values-mgmt-plane-env.yaml
    
  7. Verify that the Gloo component pods have a status of Running.

    kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME                                    READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-7cdcbcbd4-4s8wp   1/1     Running   0          30s
    gloo-mesh-redis-794d79b7df-r2rtp        1/1     Running   0          30s
    gloo-mesh-ui-748fd66f5c-lftcx           3/3     Running   0          30s
    prometheus-server-647b488bb-vg7t5       2/2     Running   0          30s
    
  8. Verify that the management plane is correctly installed. This check might take a few seconds to ensure that the Gloo pods are running and that any expected workload agents are running and connected in workload clusters.

    meshctl check --kubecontext $MGMT_CONTEXT
    

    Note that because no workload clusters are registered yet, the agent connectivity check returns a warning.

    Checking Gloo Mesh Management Cluster Installation
    
    🟢 Gloo Mgmt Server Deployment Status

    🟡 Gloo Mgmt Server Connectivity to Agents
       Hints:
       * No registered clusters detected. To register a remote cluster that has a deployed Gloo Mesh agent, add a KubernetesCluster CR.
          For more info, see: https://docs.solo.io/gloo-mesh-enterprise/latest/setup/installation/enterprise_installation/#helm-register
    
  9. Save the external address and port that were assigned by your cloud provider to the gloo-mesh-mgmt-server load balancer service. The gloo-mesh-agent relay agent in each cluster accesses this address via a secure connection.

    
    For clusters whose load balancer is assigned an IP address (such as GKE):

       MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
       MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
       echo $MGMT_SERVER_NETWORKING_ADDRESS
       
    
    For clusters whose load balancer is assigned a hostname (such as AWS ELB):

       MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
       MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
       echo $MGMT_SERVER_NETWORKING_ADDRESS
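The two variants above differ only in whether your cloud provider assigns an IP address or a hostname to the load balancer. A sketch that prefers whichever field is populated; the values and the port below are stand-ins for the kubectl queries above, not real cluster output:

```shell
# Stand-in values; in practice these come from the two jsonpath
# queries above ('.ingress[0].ip' and '.ingress[0].hostname').
LB_IP="203.0.113.10"
LB_HOSTNAME=""

# Use whichever field the provider populated, preferring the hostname.
MGMT_SERVER_NETWORKING_DOMAIN=${LB_HOSTNAME:-$LB_IP}
MGMT_SERVER_NETWORKING_PORT=9900   # placeholder; read the grpc port from the service as above
MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
echo "$MGMT_SERVER_NETWORKING_ADDRESS"   # prints "203.0.113.10:9900"
```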
       

Register workload clusters

Register each workload cluster with the management server by deploying the relay agent.

  1. For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you register another workload cluster.

    export REMOTE_CLUSTER=$REMOTE_CLUSTER1
    export REMOTE_CONTEXT=$REMOTE_CONTEXT1
    
  2. Create a KubernetesCluster resource in the management cluster to represent the workload cluster and store relevant data, such as the workload cluster's local domain.

    • The metadata.name must match the name of the workload cluster that you specify in the gloo-mesh-agent Helm chart in subsequent steps.
    • The spec.clusterDomain must match the local cluster domain of the Kubernetes cluster.
    • You can optionally give your cluster a label, such as env: prod, region: us-east, or another selector. Your workspaces can use the label to automatically add the cluster to the workspace.
    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
      name: ${REMOTE_CLUSTER}
      namespace: gloo-mesh
      labels:
        env: prod
    spec:
      clusterDomain: cluster.local
    EOF
    
  3. Create the gloo-mesh namespace.

    kubectl create ns gloo-mesh --context $REMOTE_CONTEXT
    
  4. Default certificates only: If you did not use custom certificates in your management plane Helm values file and instead used the default certificates to install the management components, you must copy the root CA certificate to a secret in the workload cluster so that the relay agent trusts the TLS certificate from the relay server. You must also copy the bootstrap token that is used for initial communication to the workload cluster. This token is used only to validate initial communication between the relay agent and server. After the gRPC connection is established, the relay server issues a client certificate to the relay agent to establish a mutually authenticated TLS session.

    1. Get the value of the root CA certificate from the management cluster and create a secret in the workload cluster.
      kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
      kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file ca.crt=ca.crt
      rm ca.crt
      
    2. Get the bootstrap token from the management cluster and create a secret in the workload cluster.
      kubectl get secret relay-identity-token-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.token}' | base64 -d > token
      kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file token=token
      rm token
      
  5. Prepare a Helm values file for production-level settings or for default settings.

    You can edit the values-data-plane.yaml values file to provide your own details for settings that are recommended for production-level deployments, including FIPS-compliant images, custom certificates and disabling rate limiting and external authentication in the gloo-mesh namespace. For more information about these settings, see Best practices for production and the agent Helm values documentation.

    1. Production-level settings: Download the sample values file from GitHub to your local workstation.
      curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/helm-install/2.1/values-data-plane.yaml > values-data-plane.yaml
      
    2. Update the Helm values file with the environment variables that you previously set for $REMOTE_CLUSTER, $MGMT_SERVER_NETWORKING_ADDRESS, and $GLOO_VERSION. Save the updated file as values-data-plane-env.yaml.
      • Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
        envsubst < values-data-plane.yaml > values-data-plane-env.yaml
        open values-data-plane-env.yaml
        
    3. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.
      • Provide your references to custom certificates in the relay section. Otherwise, you can enable the default Gloo CA relay certificates.
      • For OpenShift clusters, set floatingUserId to true.
      • Review the other Helm value settings for changes that you might want to make. For example, you might use deployment overrides to provide settings like node selectors.
    1. Default settings: Save the default Helm values. For more information, review the Gloo agent Helm values documentation.
      helm show values gloo-mesh-agent/gloo-mesh-agent --version $GLOO_VERSION > values-data-plane-env.yaml
      open values-data-plane-env.yaml
      
    2. Edit values-data-plane-env.yaml to provide the required details.
      • For cluster, specify the workload cluster's name (value of $REMOTE_CLUSTER).
      • For relay.serverAddress, specify the management server's IP address and port (value of $MGMT_SERVER_NETWORKING_ADDRESS).
      • For OpenShift clusters, set floatingUserId to true.
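As with the management plane, the required agent settings can be collected in a small override file and passed with an additional --values flag. This is a sketch under the key names described above; verify them against the agent chart's values documentation, and substitute the environment variables before installing.

```yaml
# Minimal override sketch for the gloo-mesh-agent chart.
cluster: ${REMOTE_CLUSTER}
relay:
  serverAddress: ${MGMT_SERVER_NETWORKING_ADDRESS}
```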

  6. Deploy the relay agent to the workload cluster.

    helm install gloo-agent gloo-mesh-agent/gloo-mesh-agent \
    --namespace gloo-mesh \
    --kube-context $REMOTE_CONTEXT \
    --values values-data-plane-env.yaml
    
  7. Verify that the relay agent pod has a status of Running. If not, try debugging the agent.

    kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
    

    Example output:

    NAME                                READY   STATUS    RESTARTS   AGE
    gloo-mesh-agent-64fc8cc9c5-v7b97    1/1     Running   0          30s
    
  8. Repeat steps 1 - 7 to register each workload cluster with Gloo.

  9. Verify that each workload cluster is successfully registered with Gloo.

    kubectl get kubernetescluster -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME           AGE
    cluster-1      27s
    cluster-2      23s
    
  10. Verify that the workload clusters are successfully identified by the management plane. If not, try debugging the relay connection. Note that this check might take a few seconds to ensure that the expected relay agents are now running and are connected to the relay server in the management cluster.

    meshctl check --kubecontext $MGMT_CONTEXT
    

    Example output:

    Checking Gloo Mesh Management Cluster Installation
    
    🟢 Gloo Mgmt Server Deployment Status
    
    🟢 Gloo Mgmt Server Connectivity to Agents
    +-----------+------------+--------------------------------------------------+
    |  CLUSTER  | REGISTERED |                  CONNECTED POD                   |
    +-----------+------------+--------------------------------------------------+
    | cluster-1 | true       | gloo-mesh/gloo-mesh-mgmt-server-676f4b9945-2pngd |
    +-----------+------------+--------------------------------------------------+
    | cluster-2 | true       | gloo-mesh/gloo-mesh-mgmt-server-676f4b9945-2pngd |
    +-----------+------------+--------------------------------------------------+
    
  11. Deploy Istio in each workload cluster.

Install with meshctl

Quickly install Gloo by using meshctl, such as for testing purposes.

Install the management components

Start by installing the Gloo management components in your management cluster.

  1. Install the Gloo management components in the management cluster.

    meshctl install creates a self-signed certificate authority for mTLS if you do not supply your own certificates. If you prefer to set up Gloo Mesh without secure communication for quick demonstrations, include the --set insecure=true flag. Note that the default self-signed CAs and insecure mode are not suitable for production environments.

    meshctl install --namespace gloo-mesh \
    --kubecontext $MGMT_CONTEXT \
    --license $GLOO_MESH_LICENSE_KEY \
    --version $GLOO_VERSION \
    --set global.cluster=$MGMT_CLUSTER \
    --set mgmtClusterName=$MGMT_CLUSTER
    
    For OpenShift clusters, include the floatingUserId settings:

    meshctl install --namespace gloo-mesh \
    --kubecontext $MGMT_CONTEXT \
    --license $GLOO_MESH_LICENSE_KEY \
    --version $GLOO_VERSION \
    --set global.cluster=$MGMT_CLUSTER \
    --set mgmtClusterName=$MGMT_CLUSTER \
    --set glooMeshMgmtServer.floatingUserId=true \
    --set glooMeshUi.floatingUserId=true \
    --set glooMeshRedis.floatingUserId=true
    
  2. Verify that the Gloo component pods have a status of Running.

    kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME                                    READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-7cdcbcbd4-4s8wp   1/1     Running   0          30s
    gloo-mesh-redis-794d79b7df-r2rtp        1/1     Running   0          30s
    gloo-mesh-ui-748fd66f5c-lftcx           3/3     Running   0          30s
    prometheus-server-647b488bb-vg7t5       2/2     Running   0          30s
    
  3. Verify that the management plane is correctly installed. This check might take a few seconds to ensure that the Gloo pods are running and that any expected workload agents are running and connected in workload clusters.

    meshctl check --kubecontext $MGMT_CONTEXT
    

    Note that because no workload clusters are registered yet, the agent connectivity check returns a warning.

    Checking Gloo Mesh Management Cluster Installation
    
    🟢 Gloo Mgmt Server Deployment Status

    🟡 Gloo Mgmt Server Connectivity to Agents
       Hints:
       * No registered clusters detected. To register a remote cluster that has a deployed Gloo Mesh agent, add a KubernetesCluster CR.
          For more info, see: https://docs.solo.io/gloo-mesh-enterprise/latest/setup/installation/enterprise_installation/#helm-register
    

Register workload clusters

Register each workload cluster with the management server by deploying the relay agent.

  1. Optional: For testing purposes, you can install the basic profile for Istio in your workload clusters. Otherwise, you can customize your Istio settings by deploying Istio in each workload cluster after you install the Gloo agent components.

    1. Save the following information as environment variables. For more information, see Get the Gloo Istio version that you want to use.

      • $REPO: A Gloo Istio repo key that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article.
      • $ISTIO_IMAGE: The Istio version, such as 1.15.3-solo.
      • $REVISION: Take the Istio major and minor version numbers and replace the period with a hyphen, such as 1-15.
      export REPO=<repo-key>
      export ISTIO_IMAGE=1.15.3-solo
      export REVISION=1-15
      
    2. Save the following details in a values-istio.yaml file.

      cat << EOF > values-istio.yaml
      managedInstallations:
        controlPlane:
          enabled: true
          overrides: {}
        defaultRevision: true
        enabled: true
        images:
          hub: $REPO
          tag: $ISTIO_IMAGE
        northSouthGateways:
        - enabled: true
          name: istio-ingressgateway
          overrides: {}
        revision: $REVISION
      EOF
      
  2. For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you register another workload cluster.

    export REMOTE_CLUSTER=$REMOTE_CLUSTER1
    export REMOTE_CONTEXT=$REMOTE_CONTEXT1
    
  3. Register the workload cluster. The meshctl command completes the following:

    • Creates the gloo-mesh namespace
    • Copies the root CA certificate to the workload cluster
    • Copies the bootstrap token to the workload cluster
    • Installs the relay agent in the workload cluster
    • Creates the KubernetesCluster custom resource in the management cluster
      meshctl cluster register $REMOTE_CLUSTER \
        --kubecontext $MGMT_CONTEXT \
        --remote-context $REMOTE_CONTEXT \
        --version $GLOO_VERSION \
        --gloo-mesh-agent-chart-values values-istio.yaml
      
      For OpenShift clusters, include the floatingUserId setting:

      meshctl cluster register $REMOTE_CLUSTER \
        --kubecontext $MGMT_CONTEXT \
        --remote-context $REMOTE_CONTEXT \
        --version $GLOO_VERSION \
        --set glooMeshAgent.floatingUserId=true \
        --gloo-mesh-agent-chart-values values-istio.yaml
      
    If you installed the Gloo management plane in insecure mode, include the --relay-server-insecure=true flag in this command.
  4. Verify that the relay agent pod in the workload cluster has a status of Running. If not, try debugging the agent.

    kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
    

    Example output:

    NAME                                READY   STATUS    RESTARTS   AGE
    gloo-mesh-agent-64fc8cc9c5-v7b97    1/1     Running   0          30s
    
  5. Repeat steps 2 - 4 to register each workload cluster with Gloo.

  6. Verify that each workload cluster is successfully registered with Gloo.

    kubectl get kubernetescluster -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME           AGE
    cluster-1      27s
    cluster-2      23s
    
  7. Verify that the workload clusters are successfully identified by the management plane. If not, try debugging the relay connection. Note that this check might take a few seconds to ensure that the expected relay agents are now running and are connected to the relay server in the management cluster.

    meshctl check --kubecontext $MGMT_CONTEXT
    

    Example output:

    Checking Gloo Mesh Management Cluster Installation
    
    🟢 Gloo Mgmt Server Deployment Status
    
    🟢 Gloo Mgmt Server Connectivity to Agents
    +-----------+------------+--------------------------------------------------+
    |  CLUSTER  | REGISTERED |                  CONNECTED POD                   |
    +-----------+------------+--------------------------------------------------+
    | cluster-1 | true       | gloo-mesh/gloo-mesh-mgmt-server-676f4b9945-2pngd |
    +-----------+------------+--------------------------------------------------+
    | cluster-2 | true       | gloo-mesh/gloo-mesh-mgmt-server-676f4b9945-2pngd |
    +-----------+------------+--------------------------------------------------+
    
  8. If you did not specify the managedInstallations section in a Helm values file, deploy Istio in each workload cluster.

Optional: Set up rate limiting and external authentication

To enable mTLS with rate limiting and external authentication, you must add an injection directive for those components. Although you can enable an injection directive on the gloo-mesh namespace, doing so makes the management plane components dependent on the functionality of Istio's mutating webhook, which is a fragile coupling and not a recommended best practice. In production setups, install the Gloo agent Helm chart with only the rate limiting and external authentication services enabled in the gloo-mesh-addons namespace, and label the gloo-mesh-addons namespace for Istio injection.

Want to modify the default deployment values for the external auth or rate limiting services, such as to set resource requests and limits? See Modify external auth or rate limiting subcharts.

  1. Add and update the Helm repositories for the Gloo agent.

    helm repo add gloo-mesh-agent https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent
    helm repo update
    
  2. Create the gloo-mesh-addons namespace.

    kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT
    
  3. In a values.yaml file, enable rate limiting and external authentication, and disable the relay agent.

    rate-limiter:
      enabled: true
    ext-auth-service:
      enabled: true
    glooMeshAgent:
      enabled: false
    
  4. Create a gloo-mesh-addons release from the Gloo agent Helm chart, to install only rate limiting and external authentication in the gloo-mesh-addons namespace.

    helm install gloo-agent-addons gloo-mesh-agent/gloo-mesh-agent \
       --namespace gloo-mesh-addons \
       --kube-context=$REMOTE_CONTEXT \
       --values values.yaml
    
  5. Label the gloo-mesh-addons namespace for Istio injection.

    kubectl --context $REMOTE_CONTEXT label ns gloo-mesh-addons istio-injection=enabled --overwrite
    
  6. Verify that the rate limiting and external authentication components are successfully installed.

    kubectl get pods -n gloo-mesh-addons --context $REMOTE_CONTEXT
    

    Example output:

    NAME                                     READY   STATUS    RESTARTS   AGE
    rate-limit-3d62244cdb-fcrvd              2/2     Running   0          4m2s
    ext-auth-service-3d62244cdb-fcrvd        2/2     Running   0          4m2s
    

Next, you can check out the guides for external auth and rate limiting policies.

Optional: Configure the locality labels for the nodes

Gloo Mesh uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.

Verify that your nodes have locality labels

Verify that your nodes have at least region and zone labels. If so, and you do not want to update the labels, you can skip the remaining steps.

kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'

Example output with region and zone labels:

..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"

Add locality labels to your nodes

If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same region label to each node, but a separate zone label per node. The values are not validated against your underlying infrastructure provider. The following example shows how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.

  1. Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the --overwrite flag in the command.
    kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
    kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west
    
  2. List the nodes in each cluster. Note the name for each node.
    kubectl get nodes --context $REMOTE_CONTEXT1
    kubectl get nodes --context $REMOTE_CONTEXT2
    
  3. Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the --overwrite flag in the command.
    kubectl label node <cluster-1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
    kubectl label node <cluster-1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
    kubectl label node <cluster-1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3
    
    kubectl label node <cluster-2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
    kubectl label node <cluster-2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
    kubectl label node <cluster-2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
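If you have many nodes, the per-node zone labels can be assigned in a loop that cycles through the zones. This sketch only prints the kubectl commands so you can review them before running; the node names and zones are illustrative stand-ins for your own.

```shell
# Zones to cycle through and the nodes to label; in practice the
# node list comes from 'kubectl get nodes -o name'.
ZONES="us-east-1 us-east-2 us-east-3"
NODES="node-a node-b node-c node-d"

i=0
for node in $NODES; do
  # Pick zone number (i mod 3) + 1 from the space-separated list.
  zone=$(echo "$ZONES" | cut -d' ' -f$(( i % 3 + 1 )))
  # Drop the echo to actually run the labeling command.
  echo "kubectl label node $node topology.kubernetes.io/zone=$zone"
  i=$(( i + 1 ))
done
```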
    

Next steps

Now that the Gloo management components are installed and workload clusters are registered, check out the following resources to explore Gloo Mesh capabilities: