Install Gloo Mesh

Use Helm to customize your setup of Gloo Mesh Enterprise in multiple clusters.

In a multicluster setup, you install the Gloo Mesh control plane and data plane in separate clusters.

Before you begin

  1. Create or use existing Kubernetes clusters. For a multicluster setup, you need at least two clusters. One cluster is set up as the Gloo Mesh control plane where the management components are installed. The other cluster is registered as your data plane and runs your Kubernetes workloads and Istio service mesh. You can optionally add more workload clusters to your setup. The instructions in this guide assume one management cluster and two workload clusters. Note: The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number).

  2. Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.

    export MGMT_CLUSTER=mgmt
    export REMOTE_CLUSTER1=cluster1
    export REMOTE_CLUSTER2=cluster2
    
  3. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SAN are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster1-context>
    export REMOTE_CONTEXT2=<remote-cluster2-context>
       
    
  4. Add your Gloo Mesh Enterprise license that you got from your Solo account representative. If you do not have a key yet, you can get a trial license by contacting an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

    export GLOO_MESH_LICENSE_KEY=<license_key>
    
  5. Set the Gloo Mesh Enterprise version. The latest version is used as an example. You can find other versions in the Changelog documentation. Append ‘-fips’ for a FIPS-compliant image, such as ‘2.4.0-beta1-fips’. Do not include v before the version number.

    Gloo Platform version 2.4.0-beta1 is not compatible with previous 1.x releases and custom resources such as VirtualMesh or TrafficPolicy.

    export GLOO_VERSION=2.4.0-beta1
    
  6. Install the following CLI tools. To verify the installations, you can run the version checks shown after this list:

    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
    • OpenShift only: oc, the OpenShift command line tool. Download the oc version that matches the minor version of the OpenShift clusters you plan to use.
    • meshctl, the Gloo command line tool for bootstrapping Gloo Platform, registering clusters, describing configured resources, and more. Be sure to download version 2.4.0-beta1, which uses the latest Gloo Mesh CRDs.
    • helm, the Kubernetes package manager.
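
    To verify the installations, you can check each tool's version. These are the standard version subcommands for these CLIs; the output format varies by tool.

    kubectl version --client
    oc version      # OpenShift only
    meshctl version
    helm version
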
  7. Production installations: Review Best practices for production to prepare your optional security measures. For example, before you begin your Gloo installation, you can provide your own certificates and set up secure access to the Gloo UI.

Install the control plane

Customize your Gloo Mesh setup by installing with the Gloo Platform Helm chart.

This guide uses the new gloo-platform Helm chart, which is available in Gloo Platform 2.3 and later. For more information about this chart, see the gloo-platform chart overview guide.

  1. Add and update the Helm repository for Gloo Platform.

    helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
    
  2. Apply the Gloo Platform CRDs to your cluster by creating a gloo-platform-crds Helm release.

    helm install gloo-platform-crds gloo-platform/gloo-platform-crds \
       --kube-context $MGMT_CONTEXT \
       --namespace=gloo-mesh \
       --create-namespace \
       --version $GLOO_VERSION
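
    To confirm that the CRDs were applied, you can list them. Gloo Platform CRDs belong to API groups that end in solo.io, such as admin.gloo.solo.io.

    kubectl get crds --context $MGMT_CONTEXT | grep solo.io
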
    
  3. Prepare a Helm values file for production-level settings, including FIPS-compliant images, OIDC authorization for the Gloo UI, and more. To get started, you can use the minimum settings in the mgmt-server profile as a basis for your values file. This profile enables all of the necessary components that are required for a Gloo Mesh control plane installation, such as the management server, as well as some optional components, such as the Gloo UI.

    Kubernetes: Download the mgmt-server profile, and open the file to review or edit the settings.

    curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/mgmt-server.yaml > mgmt-server.yaml
    open mgmt-server.yaml
    
    OpenShift: Create a values file with OpenShift-specific settings, such as floating user IDs, and open the file to review or edit the settings.

    cat >mgmt-server.yaml <<EOF
    common:
      cluster: $MGMT_CLUSTER
    glooMgmtServer:
      enabled: true
      floatingUserId: true
      serviceType: LoadBalancer
    glooUi:
      enabled: true
      floatingUserId: true
    prometheus:
      enabled: true
      server:
        securityContext: false
    redis:
      deployment:
        enabled: true
        floatingUserId: true
    telemetryGateway:
      enabled: true
      service:
        type: LoadBalancer
    EOF
    open mgmt-server.yaml
    

    Note: When you use the settings in this profile to install Gloo Mesh in OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.

  4. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following optional settings.

    Field | Description
    glooMgmtServer.relay | Secure the relay connection between the Gloo management server and agents. By default, Gloo Mesh generates self-signed certificates and keys for the root CA and uses these credentials to derive the intermediate CA, server, and client TLS certificates. This setup is not recommended for production. Instead, use your preferred PKI provider to generate and store your credentials, and to have more control over the certificate management process. For more information, see the relay setup options.
    glooMgmtServer.resources.limits | Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    glooMgmtServer.serviceOverrides.metadata.annotations | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    glooUi.auth | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    istioInstallations | Add an istioInstallations section to deploy managed Istio service meshes to each workload cluster.
    prometheus.enabled | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from it. For more information on each option, see Best practices for collecting metrics in production.
    redis | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
    OpenShift: glooMgmtServer.serviceType and telemetryGateway.service.type | In some OpenShift setups, you might not use load balancer service types. You can set these two service types to ClusterIP and expose them by using OpenShift routes after installation.


    For the full list of settings that you can configure, see the Helm values documentation for the gloo-platform chart.
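
    For reference, a minimal sketch of how some of these overrides might look in mgmt-server.yaml, using the example limits from the table and a placeholder cloud load balancer annotation:

    glooMgmtServer:
      enabled: true
      # Example resource limits from the table above
      resources:
        limits:
          cpu: 1000m
          memory: 1Gi
      serviceOverrides:
        metadata:
          annotations:
            # Placeholder annotation; replace with the load balancer
            # annotations that your cloud provider requires
            service.beta.kubernetes.io/aws-load-balancer-type: "external"
    prometheus:
      # Set to false if you bring your own Prometheus instance
      enabled: true
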

  5. Install the Gloo Mesh control plane in your management cluster, including the customizations in your Helm values file.

    helm install gloo-platform gloo-platform/gloo-platform \
       --kube-context $MGMT_CONTEXT \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values mgmt-server.yaml \
       --set common.cluster=$MGMT_CLUSTER \
       --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY
    

    Note: For quick testing, you can create an insecure connection between the management server and workload agents by including the --set common.insecure=true and --set glooMgmtServer.insecure=true flags.
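
    For example, a quick test installation might reuse the same install command with the insecure flags appended. Do not use this setup for production.

    helm install gloo-platform gloo-platform/gloo-platform \
       --kube-context $MGMT_CONTEXT \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values mgmt-server.yaml \
       --set common.cluster=$MGMT_CLUSTER \
       --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY \
       --set common.insecure=true \
       --set glooMgmtServer.insecure=true
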

  6. Verify that the control plane pods have a status of Running.

    kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
    
  7. Save the external address and port that were assigned by your cloud provider to the gloo-mesh-mgmt-server service. The gloo-mesh-agent relay agent in each cluster accesses this address via a secure connection. Depending on how the service is exposed, choose one of the following options.

    If the load balancer exposes an IP address:

    export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    export MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
    export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
    echo $MGMT_SERVER_NETWORKING_ADDRESS
    
    If the load balancer exposes a hostname, for example on AWS:

    export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    export MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
    export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
    echo $MGMT_SERVER_NETWORKING_ADDRESS
    
    OpenShift:

    1. Expose the management server by using an OpenShift route. Your route might look like the following.
      oc apply --context $MGMT_CONTEXT -n gloo-mesh -f- <<EOF
      kind: Route
      apiVersion: route.openshift.io/v1
      metadata:
        name: gloo-mesh-mgmt-server
        namespace: gloo-mesh
        annotations:
          # Needed for the different agents to connect to different replica instances of the management server deployment
          haproxy.router.openshift.io/balance: roundrobin
      spec:
        host: gloo-mesh-mgmt-server.<your_domain>.com
        to:
          kind: Service
          name: gloo-mesh-mgmt-server
          weight: 100
        port:
          targetPort: grpc
        tls:
          termination: passthrough
          insecureEdgeTerminationPolicy: Redirect
        wildcardPolicy: None
      EOF
      
    2. Save the management server's address, which consists of the route host and the route port. Note that the port is the route's own port, such as 443, and not the grpc port of the management server that the route points to.
      export MGMT_SERVER_NETWORKING_DOMAIN=$(oc get route -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.ingress[0].host}')
      export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:443
      echo $MGMT_SERVER_NETWORKING_ADDRESS
      

  8. Save the external address and port that were assigned by your cloud provider to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address. Depending on how the service is exposed, choose one of the following options.

    If the load balancer exposes an IP address:

    export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
    
    If the load balancer exposes a hostname, for example on AWS:

    export TELEMETRY_GATEWAY_HOSTNAME=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOSTNAME}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
    
    OpenShift:

    1. Expose the telemetry gateway by using an OpenShift route. Your route might look like the following.
      oc apply --context $MGMT_CONTEXT -n gloo-mesh -f- <<EOF
      kind: Route
      apiVersion: route.openshift.io/v1
      metadata:
        name: gloo-telemetry-gateway
        namespace: gloo-mesh
      spec:
        host: gloo-telemetry-gateway.<your_domain>.com
        to:
          kind: Service
          name: gloo-telemetry-gateway
          weight: 100
        port:
          targetPort: otlp
        tls:
          termination: passthrough
          insecureEdgeTerminationPolicy: Redirect
        wildcardPolicy: None
      EOF
      
    2. Save the telemetry gateway's address, which consists of the route host and the route port. Note that the port is the route's own port, such as 443, and not the otlp port of the telemetry gateway that the route points to.
      export TELEMETRY_GATEWAY_HOST=$(oc get route -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.ingress[0].host}')
      export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOST}:443
      echo $TELEMETRY_GATEWAY_ADDRESS
      

  9. Create a workspace that selects all clusters and namespaces by default. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a single workspace for everything. For more complex setups, such as creating a workspace for each team to enforce service isolation, set up federation, and even share resources by importing and exporting, see Organize team resources with workspaces.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: Workspace
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh
    spec:
      workloadClusters:
        - name: '*'
          namespaces:
            - name: '*'
    EOF
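
    To confirm that the workspace was created, you can list the Workspace resources. This assumes the plural resource name workspaces, which kubectl resolves from the CRD.

    kubectl get workspaces -n gloo-mesh --context $MGMT_CONTEXT
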
    
  10. Create a WorkspaceSettings resource for the workspace that configures service isolation and federation settings and selects the Istio east-west gateway.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: WorkspaceSettings
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh
    spec:
      options:
        serviceIsolation:
          enabled: false
        federation:
          enabled: false
          serviceSelector:
            - {}
        eastWestGateways:
          - selector:
              labels:
                istio: eastwestgateway
    EOF
    

Register workload clusters

Register each workload cluster with the management server by deploying the relay agent.

  1. For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you register another workload cluster.

    export REMOTE_CLUSTER=$REMOTE_CLUSTER1
    export REMOTE_CONTEXT=$REMOTE_CONTEXT1
    
  2. In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster's local domain.

    • The metadata.name must match the name of the workload cluster that you specify in the gloo-platform Helm chart in subsequent steps.
    • The spec.clusterDomain must match the local cluster domain of the Kubernetes cluster.
    • You can optionally give your cluster a label, such as env: prod, region: us-east, or another selector. Your workspaces can use the label to automatically add the cluster to the workspace.
    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
      name: ${REMOTE_CLUSTER}
      namespace: gloo-mesh
      labels:
        env: prod
    spec:
      clusterDomain: cluster.local
    EOF
    
  3. In your workload cluster, apply the Gloo Platform CRDs by creating a gloo-platform-crds Helm release.

    helm install gloo-platform-crds gloo-platform/gloo-platform-crds \
       --kube-context $REMOTE_CONTEXT \
       --namespace=gloo-mesh \
       --create-namespace \
       --version $GLOO_VERSION
    
  4. Prepare a Helm values file for production-level settings, including FIPS-compliant images, OIDC authorization for the Gloo UI, and more. To get started, you can use the minimum settings in the agent profile as a basis for your values file. This profile enables the Gloo agent and the Gloo Platform telemetry collector.

    Kubernetes: Download the agent profile, and open the file to review or edit the settings.

    curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/agent.yaml > agent.yaml
    open agent.yaml
    
    OpenShift: Create a values file with OpenShift-specific settings, such as a floating user ID, and open the file to review or edit the settings.

    cat >agent.yaml <<EOF
    glooAgent:
      enabled: true
      floatingUserId: true
    telemetryCollector:
      enabled: true
    EOF
    open agent.yaml
    

    Note: When you use the settings in this profile to install Gloo Mesh in OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.

  5. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following optional settings.

    Field | Description
    glooAgent.relay | Provide the certificate and secret details that correspond to your management server relay settings. For more information, see the relay setup options.
    glooAgent.resources.limits | Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.

    For the full list of settings that you can configure, see the Helm values documentation for the gloo-platform chart.
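
    For reference, a minimal sketch of how the resource limits override might look in agent.yaml, using the example values from the table:

    glooAgent:
      enabled: true
      # Example resource limits from the table above
      resources:
        limits:
          cpu: 500m
          memory: 512Mi
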

  6. Deploy the relay agent to the workload cluster.

    helm install gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --kube-context $REMOTE_CONTEXT \
       --version $GLOO_VERSION \
       --values agent.yaml \
       --set common.cluster=$REMOTE_CLUSTER \
       --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
       --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
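
    To watch the rollout, you can check that the data plane pods in the workload cluster reach a Running status.

    kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
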
    

  7. Optional: Install add-ons, such as the external auth and rate limit servers, in a separate Helm release. Only create this release if you did not enable the extAuthService and rateLimiter in your agent release.

    Kubernetes: Create and label the gloo-mesh-addons namespace for your Istio revision, and create the add-ons release. Update the ISTIO_REVISION value to match your Istio installation.

    export ISTIO_REVISION=1-17-2
    kubectl create namespace gloo-mesh-addons --context $REMOTE_CONTEXT
    kubectl label namespace gloo-mesh-addons istio.io/rev="${ISTIO_REVISION}" --overwrite --context $REMOTE_CONTEXT
    helm install gloo-agent-addons gloo-platform/gloo-platform \
       --namespace gloo-mesh-addons \
       --kube-context $REMOTE_CONTEXT \
       --create-namespace \
       --version $GLOO_VERSION \
       --set common.cluster=$REMOTE_CLUSTER \
       --set extAuthService.enabled=true \
       --set rateLimiter.enabled=true
    
    OpenShift:

    1. Elevate the permissions of the gloo-mesh-addons service account that will be created.
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons --context $REMOTE_CONTEXT
      
    2. Create the gloo-mesh-addons project, and create a NetworkAttachmentDefinition custom resource for the project.
      kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT
      cat <<EOF | oc -n gloo-mesh-addons create --context $REMOTE_CONTEXT -f -
      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: istio-cni
      EOF
      
    3. Create the add-ons release.
      export ISTIO_REVISION=1-17-2
      kubectl label namespace gloo-mesh-addons istio.io/rev="${ISTIO_REVISION}" --overwrite --context $REMOTE_CONTEXT
      helm install gloo-agent-addons gloo-platform/gloo-platform \
         --namespace gloo-mesh-addons \
         --kube-context $REMOTE_CONTEXT \
         --version $GLOO_VERSION \
         --set common.cluster=$REMOTE_CLUSTER \
         --set extAuthService.enabled=true \
         --set rateLimiter.enabled=true
      

  8. Verify that the Gloo data plane components are healthy. If not, try debugging the agent.

    meshctl check --kubecontext $REMOTE_CONTEXT
    

    Example output:

    🟢 CRD Version check
    
    
    🟢 Gloo Platform Deployment Status
    
    Namespace        | Name             | Ready | Status 
    gloo-mesh        | gloo-mesh-agent  | 1/1   | Healthy
    gloo-mesh-addons | ext-auth-service | 1/1   | Healthy
    gloo-mesh-addons | rate-limiter     | 1/1   | Healthy
    gloo-mesh-addons | redis            | 1/1   | Healthy
    
  9. Repeat steps 1 - 8 to register each workload cluster with Gloo.
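
    For example, before you repeat the steps for the second workload cluster, reset the environment variables from step 1:

    export REMOTE_CLUSTER=$REMOTE_CLUSTER2
    export REMOTE_CONTEXT=$REMOTE_CONTEXT2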

  10. Verify that your Gloo Mesh setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:

    • Your Gloo Platform product licenses are valid and current.
    • The Gloo Platform CRDs are installed at the correct version.
    • The control plane pods in the management cluster are running and healthy.
    • The agents in the workload clusters are successfully identified by the control plane.
    meshctl check --kubecontext $MGMT_CONTEXT
    

    Example output:

    🟢 License status
    
    INFO  gloo-mesh enterprise license expiration is 25 Aug 23 10:38 CDT
    INFO  No GraphQL license module found for any product
    
    🟢 CRD version check
    
    🟢 Gloo Platform deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
  11. If you set istioInstallations.enabled to false in your management cluster Helm installation, deploy Istio in each workload cluster.

Optional: Configure the locality labels for the nodes

Gloo Mesh uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.

Verify that your nodes have locality labels

Verify that your nodes have at least region and zone labels. If so, and you do not want to update the labels, you can skip the remaining steps.

kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'

Example output with region and zone labels:

..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"

Add locality labels to your nodes

If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same region label to each node, but a separate zone label per node. The values are not validated against your underlying infrastructure provider. The following example shows how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.

  1. Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the --overwrite flag in the command.
    kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
    kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west
    
  2. List the nodes in each cluster. Note the name for each node.
    kubectl get nodes --context $REMOTE_CONTEXT1
    kubectl get nodes --context $REMOTE_CONTEXT2
    
  3. Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the --overwrite flag in the command.
    kubectl label node <cluster1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
    kubectl label node <cluster1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
    kubectl label node <cluster1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3
    
    kubectl label node <cluster2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
    kubectl label node <cluster2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
    kubectl label node <cluster2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
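
  4. Optional: To confirm that the labels were applied, you can list the nodes with their locality labels shown as columns.
    kubectl get nodes --context $REMOTE_CONTEXT1 -L topology.kubernetes.io/region,topology.kubernetes.io/zone
    kubectl get nodes --context $REMOTE_CONTEXT2 -L topology.kubernetes.io/region,topology.kubernetes.io/zone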
    

Next steps

Now that Gloo Mesh Enterprise is installed, check out the following resources to explore Gloo Mesh capabilities: