Overview

In a multicluster setup, you install the Gloo management plane and gateway proxy in separate clusters.

  • Gloo management plane: When you install the Gloo management plane in a dedicated management cluster, a deployment named gloo-mesh-mgmt-server is created to translate and implement your Gloo configurations.
  • Data plane: Set up one or more workload clusters that are registered with and managed by the Gloo management plane in the management cluster. A deployment named gloo-mesh-agent is created to run the Gloo agent in each workload cluster. Additionally, you use the Gloo management plane to install an ingress gateway proxy in each workload cluster as part of Istio lifecycle management. With Gloo-managed installations, you no longer need to manually install and manage the istiod control plane and gateway proxy in each workload cluster. Instead, you provide the Istio configuration in your gloo-platform Helm chart, and Gloo translates it into a managed istiod control plane and gateway proxies in the clusters.

Before you begin

  1. Install the following command-line interface (CLI) tools.

    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
    • meshctl, the Solo command line tool.
        curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.4.16 sh -
      export PATH=$HOME/.gloo-mesh/bin:$PATH
        
  2. Set your Gloo Mesh Gateway license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license’s validity, you can run meshctl license check --key $(echo ${GLOO_MESH_GATEWAY_LICENSE_KEY} | base64 -w0).

      export GLOO_MESH_GATEWAY_LICENSE_KEY=<license_key>
      
  3. Set the Gloo Mesh Gateway version. This example uses the latest version. You can find other versions in the Changelog documentation. Append -fips for a FIPS-compliant image, such as 2.4.16-fips. Do not include v before the version number.

      export GLOO_VERSION=2.4.16
      
  4. Create at least two Kubernetes clusters, or use existing clusters. The instructions in this guide assume one management cluster and two workload clusters. Cluster names must be lowercase and alphanumeric, can include hyphens (-) but no other special characters, and must begin with a letter (not a number).

  5. For quick installations, such as for testing environments, you can install with meshctl. To customize your installation in detail, such as for production environments, install with Helm.

Install with meshctl

Quickly install Gloo Mesh Gateway by using meshctl, such as for testing purposes.

Management plane

Deploy the Gloo management plane into a dedicated management cluster.

  1. Install the Gloo management plane in your management cluster. This command uses a basic profile to create a gloo-mesh namespace and install the management plane components, such as the management server and Prometheus server, in your management cluster.
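
    For example, assuming that $MGMT_CLUSTER and $MGMT_CONTEXT hold your management cluster's name and kubeconfig context, a minimal command might look like the following sketch. The mgmt-server profile name and the licensing.glooGatewayLicenseKey setting are assumptions; verify them with meshctl install --help for your version.

      meshctl install --profiles mgmt-server \
        --kubecontext $MGMT_CONTEXT \
        --set common.cluster=$MGMT_CLUSTER \
        --set licensing.glooGatewayLicenseKey=$GLOO_MESH_GATEWAY_LICENSE_KEY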

  2. Verify that the management plane pods have a status of Running.

      kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
      

    Example output:

      NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
      
  3. Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
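
    For example, assuming the gateway is exposed by a LoadBalancer service named gloo-telemetry-gateway in the gloo-mesh namespace and listens on the standard OTLP gRPC port 4317 (both assumptions for this sketch):

      export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
      # On providers that assign hostnames instead of IP addresses, use .ingress[0].hostname instead.
      export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:4317
      echo $TELEMETRY_GATEWAY_ADDRESS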

  4. Create a workspace that selects all clusters and namespaces by default, and workspace settings that enable communication across clusters. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a global workspace that imports and exports all resources and namespaces, and a workspace settings resource in the gloo-mesh-config namespace. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting.

      kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: Workspace
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh
    spec:
      workloadClusters:
        - name: '*'
          namespaces:
            - name: '*'
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: gloo-mesh-config
    ---
    apiVersion: admin.gloo.solo.io/v2
    kind: WorkspaceSettings
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh-config
    spec:
      options:
        serviceIsolation:
          enabled: false
        federation:
          enabled: false
          serviceSelector:
          - {}
        eastWestGateways:
        - selector:
            labels:
              istio: eastwestgateway
    EOF
      

Data plane

Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.

  1. Register both workload clusters with the management server. These commands use basic profiles to install the Gloo agent, rate limit server, and external auth server in each workload cluster.
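
    For example, assuming that $REMOTE_CLUSTER1 and $REMOTE_CLUSTER2 hold the workload cluster names, and that the agent, ratelimit, and extauth profile names and these flags exist in your meshctl version (verify with meshctl cluster register --help), the commands might look like this sketch:

      meshctl cluster register $REMOTE_CLUSTER1 \
        --kubecontext $MGMT_CONTEXT \
        --remote-context $REMOTE_CONTEXT1 \
        --profiles agent,ratelimit,extauth \
        --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
      meshctl cluster register $REMOTE_CLUSTER2 \
        --kubecontext $MGMT_CONTEXT \
        --remote-context $REMOTE_CONTEXT2 \
        --profiles agent,ratelimit,extauth \
        --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
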
  2. Verify that the Gloo data plane components in each workload cluster are healthy. If not, try debugging the agent.

      meshctl check --kubecontext $REMOTE_CONTEXT1
    meshctl check --kubecontext $REMOTE_CONTEXT2
      

    Example output:

      🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | ext-auth-service               | 1/1   | Healthy
    gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    gloo-mesh | rate-limiter                   | 1/1   | Healthy
      
  3. Verify that your Gloo Mesh Gateway setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:

    • Your Gloo product licenses are valid and current.
    • The Gloo CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The agents in the workload clusters are successfully identified by the management server.

      meshctl check --kubecontext $MGMT_CONTEXT
      

    Example output:

      🟢 License status
    
    INFO  gloo-gateway enterprise license expiration is 25 Aug 24 10:38 CDT
    INFO  No GraphQL license module found for any product
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2  
      

Gateway proxies

Deploy gateway proxies in each workload cluster by following the steps in the Install gateway proxies by using the Istio and Gateway Lifecycle Manager section later in this guide.

Install with Helm

Customize your Gloo Mesh Gateway setup by installing with the Gloo Platform Helm chart. For more information, see the Gloo Helm chart overview.

Management plane

Deploy the Gloo management plane into a dedicated management cluster.

  1. Production installations: Review Best practices for production to prepare your optional security measures. For example, before you begin your Gloo installation, you can provide your own certificates to secure the management server and agent connection, and set up secure access to the Gloo UI.

  2. Install helm, the Kubernetes package manager.

  3. Save the name and kubeconfig context for your management cluster in environment variables.

      export MGMT_CLUSTER=<management-cluster-name>
    export MGMT_CONTEXT=<management-cluster-context>
      
  4. Add and update the Helm repository for Gloo.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
      
  5. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --kube-context $MGMT_CONTEXT
      
  6. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo management plane installation.
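
    A minimal sketch of such a values file might look like the following. The key names are based on the fields discussed later in this guide and the chart's documented structure, but treat them as assumptions and compare them against the chart's default values. Replace the placeholders or override them with --set flags at install time.

      # mgmt-server-values.yaml (sketch)
      licensing:
        glooGatewayLicenseKey: <license_key>
      common:
        cluster: <management-cluster-name>
      glooMgmtServer:
        enabled: true
      glooUi:
        enabled: true
      prometheus:
        enabled: true
      redis:
        deployment:
          enabled: true
      telemetryGateway:
        enabled: true
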
  7. Decide how you want to secure the relay connection between the Gloo management server and agents. In test and POC environments, you can use self-signed certificates to secure the connection. If you plan to use Gloo Mesh Gateway in production, it is recommended to bring your own certificates instead. For more information, see Setup options.

  8. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.

    Field | Description
    glooMgmtServer.resources.limits | Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    glooMgmtServer.safeMode and glooMgmtServer.safeStartWindow | Configure how you want the Gloo management server to handle translation after a Redis restart. For available options, see Redis safe mode options.
    glooMgmtServer.serviceOverrides.metadata.annotations | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    glooUi.auth | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    prometheus.enabled | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Best practices for collecting metrics in production.
    redis | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
    glooMgmtServer.serviceType and telemetryGateway.service.type (OpenShift) | In some OpenShift setups, you might not use load balancer service types. You can set these two deployment service types to ClusterIP, and expose them by using OpenShift routes after installation.
  9. Use the customizations in your Helm values file to install the Gloo management plane components in your management cluster.
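
    For example, assuming your customizations are in a file named mgmt-server-values.yaml (a hypothetical name for this sketch):

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values mgmt-server-values.yaml \
       --set common.cluster=$MGMT_CLUSTER \
       --set licensing.glooGatewayLicenseKey=$GLOO_MESH_GATEWAY_LICENSE_KEY \
       --kube-context $MGMT_CONTEXT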

  10. Verify that the management plane pods have a status of Running.

      kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
      

    Example output:

      NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
      
  11. Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
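
    For example, assuming the gateway is exposed by a LoadBalancer service named gloo-telemetry-gateway in the gloo-mesh namespace and listens on the standard OTLP gRPC port 4317 (both assumptions for this sketch):

      export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
      export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:4317
      echo $TELEMETRY_GATEWAY_ADDRESS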

  12. Save the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server service. The gloo-mesh-agent deployment in each workload cluster accesses this address over a secure connection.
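
    For example, assuming the service is exposed as a LoadBalancer and the relay port is 9900 (an assumption for this sketch):

      export MGMT_SERVER_IP=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
      export RELAY_ADDRESS=${MGMT_SERVER_IP}:9900
      echo $RELAY_ADDRESS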

Data plane

Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.

  1. For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you follow these steps to register another workload cluster.

      export REMOTE_CLUSTER=<workload_cluster_name>
    export REMOTE_CONTEXT=<workload_cluster_context>
      
  2. In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster’s local domain.

      kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
       name: ${REMOTE_CLUSTER}
       namespace: gloo-mesh
    spec:
       clusterDomain: cluster.local
    EOF
      
  3. In your workload cluster, apply the Gloo CRDs. Note: If you plan to manually install gateway proxies rather than using Solo’s gateway lifecycle manager, include the --set installIstioOperator=false flag to ensure that the Istio operator CRD is not managed by this Gloo CRD Helm release.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --kube-context $REMOTE_CONTEXT
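
    For example, if you install gateway proxies manually instead of using the lifecycle manager, the same command with the flag from the note above looks like this:

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --set installIstioOperator=false \
       --kube-context $REMOTE_CONTEXT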
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo data plane installation.
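
    A minimal sketch of such a values file might look like the following. The key names are assumptions based on the fields discussed in this guide and the chart's documented structure; compare them against the chart's default values, and replace the placeholders before you install.

      # agent-values.yaml (sketch)
      common:
        cluster: <workload_cluster_name>
      glooAgent:
        enabled: true
        relay:
          # Address and port that the management server relay is exposed on,
          # for example the value of $RELAY_ADDRESS that you saved earlier.
          serverAddress: <relay_address>
      extAuthService:
        enabled: true
      rateLimiter:
        enabled: true
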
  5. Depending on the method you chose to secure the relay connection, prepare the Helm values for the data plane installation. For more information, see the Setup options.

  6. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.

    Field | Description
    glooAgent.resources.limits | Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
    extAuthService.enabled | Set to true to install the external auth server add-on.
    rateLimiter.enabled | Set to true to install the rate limit server add-on.
  7. Use the customizations in your Helm values file to install the Gloo data plane components in your workload cluster.
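
    For example, assuming your customizations are in a file named agent-values.yaml (a hypothetical name for this sketch):

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values agent-values.yaml \
       --set common.cluster=$REMOTE_CLUSTER \
       --kube-context $REMOTE_CONTEXT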

  8. Verify that the Gloo data plane component pods are running. If not, try debugging the agent.

      meshctl check --kubecontext $REMOTE_CONTEXT
      

    Example output:

      🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | ext-auth-service               | 1/1   | Healthy
    gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    gloo-mesh | rate-limiter                   | 1/1   | Healthy
      
  9. Repeat steps 1 - 8 to register each workload cluster with Gloo.

  10. Verify that your Gloo Mesh Gateway setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:

    • Your Gloo product licenses are valid and current.
    • The Gloo CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The agents in the workload clusters are successfully identified by the management server.

      meshctl check --kubecontext $MGMT_CONTEXT
      

    Example output:

      🟢 License status
    
    INFO  gloo-gateway enterprise license expiration is 25 Aug 24 10:38 CDT
    INFO  No GraphQL license module found for any product
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2  
      

Gateway proxies

Deploy gateway proxies in each workload cluster.

Install gateway proxies by using the Istio and Gateway Lifecycle Manager

Streamline the gateway installation process by using the Gloo management plane to install Istio gateways in your clusters, as part of the Istio lifecycle management. By using a Gloo-managed installation, you no longer need to use istioctl to individually install the Istio control plane and gateways. Instead, you can supply IstioOperator configurations in Gloo resources. Gloo translates this configuration into an Istio control plane and gateway proxy in the cluster.

Before you begin, review the following considerations for using the Istio lifecycle manager.

  • Throughout this guide, you use example configuration files that have pre-filled values. You can update some of the values, but unexpected behaviors might occur. For example, if you change the default istio-ingressgateway name, you cannot also use Kubernetes horizontal pod autoscaling. For more information, see the Troubleshooting docs.
  • If you plan to run multiple revisions of Istio in your cluster and use discoverySelectors in each revision to discover the resources in specific namespaces, enable the glooMgmtServer.extraEnvs.IGNORE_REVISIONS_FOR_VIRTUAL_DESTINATION_TRANSLATION environment variable on the Gloo management server. For more information, see Multiple Istio revisions in the same cluster.
  • If your organization restricts elevated Kubernetes RBAC permissions for security reasons, you might need to install the Istio CNI plug-in. The OpenShift steps provide an example. For more information, see the Istio docs.
  • In multicluster setups, one gateway proxy for north-south traffic is deployed to each workload cluster. To learn about your gateway options, such as creating a global load balancer to route to each gateway IP address or registering each gateway IP address in one DNS entry, see the gateway deployment patterns page.

istiod control planes

Prepare an IstioLifecycleManager CR to manage istiod control planes.

  1. Review Supported versions to choose the Solo distribution of Istio that you want to use, and save the version information in the following environment variables.

    • REPO: The repo key for the Solo distribution of Istio. To get this key, log in to the Support Center and review the Istio images built by Solo.io support article.
    • ISTIO_IMAGE: The version that you want to use with the solo tag, such as 1.18.7-patch3-solo. You can optionally append other tags of Solo distributions of Istio as needed.
    • REVISION: Take the Istio major and minor versions and replace the periods with hyphens, such as 1-18.

      export REPO=<repo-key>
    export ISTIO_IMAGE=1.18.7-patch3-solo
    export REVISION=1-18
      
  2. Download the example file, istiod.yaml, which contains a basic IstioLifecycleManager configuration for the control plane.
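
    For example, assuming the file is published alongside the gateway example files that are used later in this guide (the exact URL is an assumption; adjust it if the file is located elsewhere):

      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-istiod.yaml > istiod.yaml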

  3. Update the example file with the environment variables that you previously set. Save the updated file as istiod-values.yaml.

    • For example, you can run a terminal command to substitute values:
        envsubst < istiod.yaml > istiod-values.yaml
        
  4. Verify that the configuration is correct. For example, in spec.installations.clusters, verify that entries exist for each workload cluster name. You can also further edit the file to provide your own details. For more information, see the API reference.

      open istiod-values.yaml
      
  5. Apply the IstioLifecycleManager CR to your management cluster.

      kubectl apply -f istiod-values.yaml --context $MGMT_CONTEXT
      
  6. In each workload cluster, verify that the Istio pods have a status of Running.

         kubectl get pods -n istio-system --context $REMOTE_CONTEXT1
       kubectl get pods -n istio-system --context $REMOTE_CONTEXT2
         

    Example output:

         NAME                            READY   STATUS    RESTARTS   AGE
       istiod-1-18-b65676555-g2vmr     1/1     Running   0          47s
       NAME                            READY   STATUS    RESTARTS   AGE
       istiod-1-18-7b96cb895-4nzv9     1/1     Running   0          43s
         

Ingress gateways

Prepare a GatewayLifecycleManager custom resource to manage the ingress gateways.

  1. Download the gm-ingress-gateway.yaml example file.

      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ingress-gateway.yaml > ingress-gateway.yaml
      
  2. Update the example file with the environment variables that you previously set. Save the updated file as ingress-gateway-values.yaml.

    • For example, you can run a terminal command to substitute values:
        envsubst < ingress-gateway.yaml > ingress-gateway-values.yaml
        
  3. Verify that the configuration is correct. You can also further edit the file to provide your own settings. For more information, see the API reference.

      open ingress-gateway-values.yaml
      
    • You can add cloud provider-specific load balancer annotations to the istioOperatorSpec.components.ingressGateways.k8s section, such as the following AWS annotations:
                ...
                k8s:
                  service:
                    ...
                  serviceAnnotations:
                    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
                    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
                    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
                    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
                    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<cert>"
                    service.beta.kubernetes.io/aws-load-balancer-type: external
        
      For testing environments only, you can deploy a revisionless installation by removing the gatewayRevision field.
  4. Apply the GatewayLifecycleManager CR to your management cluster.

      kubectl apply -f ingress-gateway-values.yaml --context $MGMT_CONTEXT
      
  5. In each workload cluster, verify that the ingress gateway pod is running.

      kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
    kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
      

    Example output for one cluster:

      NAME                                    READY   STATUS    RESTARTS   AGE
    istio-ingressgateway-665d46686f-nhh52   1/1     Running   0          106s
      
  6. In each workload cluster, verify that the load balancer service has an external address.

      kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
    kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
      

    Example output for one cluster:

      NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                         AGE
    istio-ingressgateway       LoadBalancer   10.96.252.49    <externalip>  15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP      2m2s
      
  7. Optional for OpenShift: Expose the gateways by using OpenShift routes.

      oc -n gloo-mesh-gateways expose svc istio-ingressgateway --port=http2 --context $REMOTE_CONTEXT1
    oc -n gloo-mesh-gateways expose svc istio-ingressgateway --port=http2 --context $REMOTE_CONTEXT2
      

East-west gateways

Deploy an Istio east-west gateway into each cluster in addition to the ingress gateway. In Gloo Mesh Gateway, the east-west gateways allow the ingress gateway in one cluster to route incoming traffic requests to services in another cluster.

  1. Download the example file, ew-gateway.yaml, which contains a basic GatewayLifecycleManager configuration for an east-west gateway.

      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ew-gateway.yaml > ew-gateway.yaml
      
  2. Update the example file with the environment variables that you previously set. Save the updated file as ew-gateway-values.yaml.

    • For example, you can run a terminal command to substitute values:
        envsubst < ew-gateway.yaml > ew-gateway-values.yaml
        
  3. Verify that the configuration is correct. You can also further edit the file to provide your own settings. For more information, see the API reference.

      open ew-gateway-values.yaml
      
    • For testing environments only, you can deploy a revisionless installation by removing the gatewayRevision field.
  4. Apply the GatewayLifecycleManager CR to your management cluster.

      kubectl apply -f ew-gateway-values.yaml --context $MGMT_CONTEXT
      
  5. In each workload cluster, verify that the east-west gateway pod is running.

      kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
    kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
      

    Example output for one cluster:

      NAME                                    READY   STATUS    RESTARTS   AGE
    istio-eastwestgateway-665d46686f-nhh52  1/1     Running   0          106s
      
  6. In each workload cluster, verify that the load balancer service has an external address.

      kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
    kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
      

    Example output for one cluster:

      NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                         AGE
    istio-eastwestgateway      LoadBalancer   10.96.252.49    <externalip>  15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP      2m2s
      

Optional: Configure the locality labels for the nodes

Gloo Mesh Gateway uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.

  • Cloud: Typically, your cloud provider sets the Kubernetes region and zone labels for each node automatically. Depending on the level of availability that you want, you might have clusters in the same region, but different zones. Or, each cluster might be in a different region, with nodes spread across zones.
  • On-premises: Depending on how you set up your cluster, you likely must set the region and zone labels for each node yourself. Additionally, consider setting a subzone label to specify nodes on the same rack or other more granular setups.
  1. Verify that your nodes have at least region and zone labels.

      kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
    kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'
      

    Example output with region and zone labels:

      ..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"
      
    • If your nodes have at least region and zone labels, and you do not want to update the labels, you can skip the remaining steps.
    • If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same region label to each node, but a separate zone label per node. The values are not validated against your underlying infrastructure provider. The following steps show how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.
  2. Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the --overwrite flag in the command.

      kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
    kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west
      
  3. List the nodes in each cluster. Note the name for each node.

      kubectl get nodes --context $REMOTE_CONTEXT1
    kubectl get nodes --context $REMOTE_CONTEXT2
      
  4. Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the --overwrite flag in the command.

      kubectl label node <cluster1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
    kubectl label node <cluster1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
    kubectl label node <cluster1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3
    
    kubectl label node <cluster2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
    kubectl label node <cluster2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
    kubectl label node <cluster2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
      

Next steps

Now that you have Gloo Mesh Gateway up and running, check out some of the following resources to learn more about your API Gateway and expand your routing and network capabilities: the traffic management guides, the Gloo Mesh Gateway feature documentation, and the help and support options.