Overview

In this guide, you deploy an ambient mesh to each workload cluster, create an east-west gateway in each cluster, and link the istiod control planes across cluster networks by using peering gateways. In the next guide, you can deploy the Bookinfo sample app to the ambient mesh in each cluster, and make select services available across the multicluster mesh. Incoming requests can then be routed from an ingress gateway, such as Gloo Gateway, to services in your mesh across all clusters.

The following diagram demonstrates an ambient mesh setup across multiple clusters.

Figure: Multicluster ambient mesh set up with the Solo distribution of Istio and Gloo Gateway.

For more information about in-mesh routing, check out Control in-mesh traffic with east-west gateways and waypoints. For more information about the components that are installed in these steps, see the ambient components overview.

Considerations

Before you set up a multicluster ambient mesh, review the following considerations and requirements.

License requirements

Version requirements

Review the following known Istio version requirements and restrictions.

  • Patch versions 1.26.0 and 1.26.1 of the Solo distribution of Istio lack support for FIPS-tagged images and ztunnel outlier detection. When upgrading to or installing 1.26, use patch version 1.26.1-patch0 or later only.
  • In the Solo distribution of Istio 1.25 and later, you can access enterprise-level features by passing your Solo license in the license.value or license.secretRef field of the Solo distribution of the istiod Helm chart, as shown in the sketch after this list. The Solo istiod Helm chart is strongly recommended because its safeguards, default settings, and upgrade handling help ensure a reliable and secure Istio deployment. Though it is not recommended, you can instead pass your license key to the open source istiod Helm chart by using the --set pilot.env.SOLO_LICENSE_KEY flag.
  • Multicluster setups require the Solo distribution of Istio version 1.24.3 or later (1.24.3-solo), including the Solo distribution of istioctl.
  • Due to a lack of support for the Istio CNI and iptables for the Istio proxy, you cannot run Istio (and therefore Gloo Mesh (OSS APIs)) on AWS Fargate. For more information, see the Amazon EKS issue.
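
For example, a minimal sketch of passing the license through the Solo istiod Helm chart might look like the following, using the repository and version variables that you set later in this guide. The OCI chart path and release name are assumptions; adjust them for your environment.

    helm upgrade --install istiod oci://${HELM_REPO}/istiod \
      --namespace istio-system \
      --version ${ISTIO_IMAGE} \
      --set license.value=${GLOO_MESH_LICENSE_KEY}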

Platform requirements

The steps in the following sections have options for deploying an ambient mesh to either Kubernetes or OpenShift clusters.

If you use OpenShift clusters, complete the required OpenShift cluster setup before you begin.

The commands for OpenShift in the following steps contain these required settings (see the sketch after this list):

  • Your Helm settings must include global.platform=openshift for Istio 1.24 and later. If you install Istio 1.23 or earlier, use profile=openshift instead of the global.platform setting.
  • Install the istio-cni and ztunnel Helm releases in the kube-system namespace, instead of the istio-system namespace.
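
For illustration, a sketch of these OpenShift settings with Helm. The OCI chart paths are assumptions, and on Istio 1.23 or earlier you would use profile=openshift instead of global.platform.

    # istiod with the required platform setting for Istio 1.24 and later
    helm upgrade --install istiod oci://${HELM_REPO}/istiod \
      --namespace istio-system \
      --set global.platform=openshift

    # ztunnel goes in kube-system on OpenShift, not istio-system
    helm upgrade --install ztunnel oci://${HELM_REPO}/ztunnel \
      --namespace kube-system \
      --set global.platform=openshift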

Revision and canary upgrade limitations

The upgrade guides in this documentation show you how to perform in-place upgrades for your Istio components, which is the recommended upgrade strategy.

Cross-cluster traffic addresses

In each cluster, you create an east-west gateway, which is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. In the Solo distribution of Istio 1.28 and later, you can use either LoadBalancer or NodePort addresses to resolve cross-cluster traffic requests through this gateway. Note that the NodePort method is considered alpha in Istio version 1.28.

LoadBalancer: In the standard LoadBalancer peering method, cross-cluster traffic through the east-west gateway resolves to its LoadBalancer address.

NodePort (alpha): If you prefer to use direct pod-to-pod traffic across clusters, you can annotate the east-west and peering gateways so that cross-cluster traffic resolves to NodePort addresses. This method allows you to avoid LoadBalancer services to reduce cross-cluster traffic costs. Review the following considerations:

  • The gateways must still be created with stable IP addresses, which are required for xDS communication with the istiod control plane in each cluster. NodePort peering applies to data-plane communication only: requests to services resolve to the NodePort instead of the LoadBalancer address. Also, the east-west gateway must have the topology.istio.io/cluster label.
  • If a node in a target cluster becomes inaccessible, such as during a restart or replacement, a delay can occur while the client cluster learns the new east-west gateway NodePort. During this window, you might see a connection error when cross-cluster traffic is sent to an east-west gateway that no longer accepts connections.
  • Only nodes where an east-west gateway pod is provisioned are considered targets for traffic.
  • Like LoadBalancer gateways, NodePort gateways support traffic from Envoy-based ingress gateways, waypoints, and sidecars.
  • This feature is in an alpha state. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Solo feature maturity.

The gateway creation steps later in this guide include options for either the LoadBalancer or NodePort method. A status condition on each east-west and remote peer gateway indicates which dataplane service type is in use, as shown in the sketch that follows.
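
For example, you can inspect the status conditions of the east-west gateway to check the service type in use. This sketch assumes the gateway is named istio-eastwest, matching the verification output later in this guide.

    kubectl get gateway istio-eastwest -n istio-eastwest \
      --context ${CLUSTER_CONTEXT} -o jsonpath='{.status.conditions}'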

Migrating from multicluster community Istio

If you previously used the multicluster feature in community Istio and want to migrate to multicluster peering in the Solo distribution of Istio, you can use the DISABLE_LEGACY_MULTICLUSTER environment variable, which is introduced in the Solo distribution of Istio version 1.28, to disable the community multicluster mechanisms. Multicluster in community Istio uses remote secrets that contain kubeconfigs to watch resources on remote clusters. This system is incompatible with the decentralized, push-based peering model in the Solo distribution of Istio. The variable causes istiod to ignore remote secrets so that it does not attempt to set up Kubernetes clients that connect to the remote clusters.

  • For fresh multicluster mesh installations with the Solo distribution of Istio, set this environment variable in your istiod settings, as shown in the sketch after this list. This setting serves as a recommended safety measure to prevent any use of remote secrets.
  • If you want to initiate a multicluster migration from community Istio, contact a Solo account representative. An account representative can help you set up two revisions of Istio that each select a different set of namespaces, and set the DISABLE_LEGACY_MULTICLUSTER variable on the revision that uses the Solo distribution of Istio for multicluster peering.
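
For a fresh installation, the setting might look like the following sketch, which passes the variable through the istiod chart's pilot.env map. The OCI chart path and release name are assumptions.

    helm upgrade --install istiod oci://${HELM_REPO}/istiod \
      --namespace istio-system \
      --set pilot.env.DISABLE_LEGACY_MULTICLUSTER="true"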

Multicluster setup method

To get started, choose one of the following methods for creating a multicluster mesh.

Option 1: Install and link new ambient meshes

In each cluster, use Helm to create the ambient mesh components. Then, create an east-west gateway so that traffic requests can be routed cross-cluster, and link clusters to enable cross-cluster service discovery.

Set up tools

  1. Set your Enterprise-level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

      export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
      
  2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions.

  3. Save the Solo distribution of Istio version.

      export ISTIO_VERSION=1.28.0
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
      
  4. Save the image and Helm repository information for the Solo distribution of Istio.

    • Istio 1.29 and later:

        export REPO=us-docker.pkg.dev/soloio-img/istio
      export HELM_REPO=us-docker.pkg.dev/soloio-img/istio-helm
        
    • Istio 1.28 and earlier: You must provide a repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.

        # 12-character hash at the end of the repo URL
      export REPO_KEY=<repo_key>
      export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
      export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
        
  5. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.

    1. Get the OS and architecture that you use on your machine.

        OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
      ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
      echo $OS
      echo $ARCH
        
    2. Download the Solo distribution of Istio binary and install istioctl.

      • Istio 1.29 and later:

          mkdir -p ~/.istioctl/bin
        curl -sSL https://storage.googleapis.com/soloio-istio-binaries/release/$ISTIO_IMAGE/istio-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
        mv ~/.istioctl/bin/istio-$ISTIO_IMAGE/bin/istioctl ~/.istioctl/bin/istioctl
        chmod +x ~/.istioctl/bin/istioctl
        
        export PATH=${HOME}/.istioctl/bin:${PATH}
          
      • Istio 1.28 and earlier:

          mkdir -p ~/.istioctl/bin
        curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
        chmod +x ~/.istioctl/bin/istioctl
        
        export PATH=${HOME}/.istioctl/bin:${PATH}
          
    3. Verify that the istioctl client runs the Solo distribution of Istio that you want to install.

        istioctl version --remote=false
        

      Example output:

        client version: 1.28.0-solo
        

Create a shared root of trust

Each cluster in the multicluster setup must have a shared root of trust. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
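
For example, with Istio's standard plug-in CA mechanism, you store the intermediate CA certificate for each cluster in a cacerts secret in the istio-system namespace. This is a minimal sketch that assumes a certs/${CLUSTER_NAME} directory containing certificates that chain to the shared root.

    kubectl create secret generic cacerts -n istio-system \
      --context ${CLUSTER_CONTEXT} \
      --from-file=certs/${CLUSTER_NAME}/ca-cert.pem \
      --from-file=certs/${CLUSTER_NAME}/ca-key.pem \
      --from-file=certs/${CLUSTER_NAME}/root-cert.pem \
      --from-file=certs/${CLUSTER_NAME}/cert-chain.pem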

Deploy ambient components

  1. Save the name and kubeconfig context of a cluster where you want to install Istio in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.

      export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
      
  2. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml --context ${CLUSTER_CONTEXT}
      
  3. Install the base chart, which contains the CRDs and cluster roles required to set up Istio.
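
    For example, a minimal sketch, assuming the charts are published as OCI artifacts in your Helm repository:

      helm upgrade --install istio-base oci://${HELM_REPO}/base \
        --namespace istio-system \
        --create-namespace \
        --version ${ISTIO_IMAGE} \
        --kube-context ${CLUSTER_CONTEXT}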

  4. Create the istiod control plane in your cluster.
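
    For example, a sketch of the istiod installation, using the ambient profile and the license, hub, tag, and network settings referenced elsewhere in this guide (the OCI chart path is an assumption):

      helm upgrade --install istiod oci://${HELM_REPO}/istiod \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        --kube-context ${CLUSTER_CONTEXT} \
        --set profile=ambient \
        --set global.hub=${REPO} \
        --set global.tag=${ISTIO_IMAGE} \
        --set global.network=${CLUSTER_NAME} \
        --set license.value=${GLOO_MESH_LICENSE_KEY}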

  5. Install the Istio CNI node agent daemonset.
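
    For example, a sketch with an assumed OCI chart path and the ambient profile. On OpenShift, install in kube-system and add --set global.platform=openshift.

      helm upgrade --install istio-cni oci://${HELM_REPO}/cni \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        --kube-context ${CLUSTER_CONTEXT} \
        --set profile=ambient \
        --set global.hub=${REPO} \
        --set global.tag=${ISTIO_IMAGE}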

  6. Install the ztunnel daemonset.
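
    For example, a sketch that reuses the hub and tag values noted in the upgrade steps later in this guide (the OCI chart path is an assumption; on OpenShift, install in kube-system):

      helm upgrade --install ztunnel oci://${HELM_REPO}/ztunnel \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        --kube-context ${CLUSTER_CONTEXT} \
        --set hub=${REPO} \
        --set tag=${ISTIO_IMAGE}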

  7. Verify that the components of the Istio ambient control and data plane are successfully installed. Because the Istio CNI and ztunnel are deployed as daemon sets, the number of CNI and ztunnel pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A --context ${CLUSTER_CONTEXT}
      

    Example output:

      istiod-85c4dfd97f-mncj5      1/1     Running   0          40s
    istio-cni-node-pr5rl         1/1     Running   0          9s
    istio-cni-node-pvmx2         1/1     Running   0          9s
    istio-cni-node-6q26l         1/1     Running   0          9s
    ztunnel-tvtzn                1/1     Running   0          7s
    ztunnel-vtpjm                1/1     Running   0          4s
    ztunnel-hllxg                1/1     Running   0          4s
      
  8. Label the istio-system namespace with the cluster’s network name, which you previously set to your cluster’s name in the global.network field of the istiod installation. The ambient control plane uses this label internally to group pods that exist in the same L3 network.

      kubectl label namespace istio-system --context ${CLUSTER_CONTEXT} topology.istio.io/network=${CLUSTER_NAME}
      
  9. Create an east-west gateway in the istio-eastwest namespace. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. You can use either LoadBalancer or NodePort addresses for cross-cluster traffic.
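
    For example, with the Solo distribution of istioctl, a sketch of creating the gateway with the default LoadBalancer service type. The istioctl multicluster expose usage shown here is an assumption; check the help output of your istioctl version.

      kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
      istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}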

  10. Verify that the east-west gateway is successfully deployed.

      kubectl get pods -n istio-eastwest --context ${CLUSTER_CONTEXT}
      
  11. For each cluster that you want to include in the multicluster ambient mesh setup, repeat these steps to install the ambient mesh components and east-west gateway in each cluster. Remember to change the cluster name and context variables each time you repeat the steps.

      export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
      

Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.

  1. Optional: Before you link clusters, you can check the individual readiness of each cluster for linking by running the istioctl multicluster check command for each cluster.

      istioctl multicluster check --context $CLUSTER_CONTEXT
      

    Before continuing to the next step, make sure that the following checks pass or fail as expected:
    ✅ The license in use by istiod supports multicluster.
    ✅ All istiod, ztunnel, and east-west gateway pods are healthy.
    ✅ The east-west gateway is programmed.
    ❌ Each remote peer gateway has a gloo.solo.io/PeeringSucceeded status of True. Note that this fails if you run this command prior to linking the clusters.

  2. Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters. The steps vary based on whether you have access to the kubeconfig files for each cluster.
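
    For example, if you have access to the kubeconfig contexts of all clusters, a sketch of linking two clusters. The istioctl multicluster link usage and the CLUSTER1_CONTEXT and CLUSTER2_CONTEXT variables are assumptions for illustration.

      istioctl multicluster link --namespace istio-eastwest \
        --contexts=${CLUSTER1_CONTEXT},${CLUSTER2_CONTEXT}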

  3. For each cluster, verify that peer linking was successful by running the istioctl multicluster check command.

      istioctl multicluster check --context $CLUSTER_CONTEXT
      

    In this example output for cluster1, the license is valid, all Istio pods are healthy, and the east-west gateway is programmed. The remote peer gateways for linking to cluster2 and cluster3 both have a gloo.solo.io/PeeringSucceeded status of True.

      ✅ License Check: license is valid for multicluster
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
    ✅ Peers Check: all clusters connected
      

Option 2: Upgrade and link existing ambient meshes

Upgrade your existing ambient meshes installed with Helm and link them to create a multicluster ambient mesh.

Set up tools

  1. Set your Enterprise-level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

      export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
      
  2. Save the version details and the image and Helm repository information for the Solo distribution of Istio that your ambient meshes run.

    • Istio 1.29 and later:
        export ISTIO_VERSION=1.28.0
      export ISTIO_IMAGE=${ISTIO_VERSION}-solo
      export REPO=us-docker.pkg.dev/soloio-img/istio
      export HELM_REPO=us-docker.pkg.dev/soloio-img/istio-helm
        
    • Istio 1.28 and earlier: Save the repo key for the minor version of the Solo distribution of Istio. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.
        export ISTIO_VERSION=1.28.0
      export ISTIO_IMAGE=${ISTIO_VERSION}-solo
      # 12-character hash at the end of the repo URL
      export REPO_KEY=<repo_key>
      export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
      export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
        
  3. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.

    1. Get the OS and architecture that you use on your machine.

        OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
      ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
      echo $OS
      echo $ARCH
        
    2. Download the Solo distribution of Istio binary and install istioctl.

      • Istio 1.29 and later:
          mkdir -p ~/.istioctl/bin
        curl -sSL https://storage.googleapis.com/soloio-istio-binaries/release/$ISTIO_IMAGE/istio-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
        mv ~/.istioctl/bin/istio-$ISTIO_IMAGE/bin/istioctl ~/.istioctl/bin/istioctl
        chmod +x ~/.istioctl/bin/istioctl
        
        export PATH=${HOME}/.istioctl/bin:${PATH}
          
      • Istio 1.28 and earlier:
          mkdir -p ~/.istioctl/bin
        curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
        chmod +x ~/.istioctl/bin/istioctl
        
        export PATH=${HOME}/.istioctl/bin:${PATH}
          
    3. Verify that the istioctl client runs the Solo distribution of Istio that your ambient meshes run.

        istioctl version --remote=false
        

      Example output:

        client version: 1.28.0-solo
        

Create a shared root of trust

Each cluster in the multicluster setup must have a shared root of trust. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.

Upgrade settings

In each cluster, update the ambient mesh components for multicluster, and create an east-west gateway so that traffic requests can be routed cross-cluster.

  1. Save the name and kubeconfig context of a cluster where you run an ambient mesh. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.

      export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
      
  2. Get the current values for the istiod Helm release in your cluster.

      helm get values --kube-context ${CLUSTER_CONTEXT} istiod -n istio-system -o yaml > istiod.yaml
      
  3. Update your Helm release with the following multicluster values. If you must update the Istio minor version, include the --set global.tag=${ISTIO_IMAGE} and --set global.hub=${REPO} flags too.
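
    For example, a sketch that reuses the saved values file and sets the network and license values referenced elsewhere in this guide (the OCI chart path is an assumption):

      helm upgrade istiod oci://${HELM_REPO}/istiod \
        --kube-context ${CLUSTER_CONTEXT} \
        --namespace istio-system \
        -f istiod.yaml \
        --set global.network=${CLUSTER_NAME} \
        --set license.value=${GLOO_MESH_LICENSE_KEY}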

  4. Verify that the istiod pods are successfully restarted. Note that it might take a few seconds for the pods to become available.

      kubectl get pods --context ${CLUSTER_CONTEXT} -n istio-system | grep istiod
      

    Example output:

      istiod-b84c55cff-tllfr   1/1     Running   0          58s
      
  5. Get the current values for the ztunnel Helm release in your cluster.
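
    For example, assuming the release is named ztunnel and installed in the istio-system namespace (kube-system on OpenShift):

      helm get values --kube-context ${CLUSTER_CONTEXT} ztunnel -n istio-system -o yaml > ztunnel.yaml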

  6. Update your Helm release with the following multicluster values. If you must update the Istio minor version, include the --set tag=${ISTIO_IMAGE} and --set hub=${REPO} flags too.
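
    For example, a sketch that reuses the saved values file. The OCI chart path and the multiCluster.clusterName value are assumptions; confirm the multicluster values for your version.

      helm upgrade ztunnel oci://${HELM_REPO}/ztunnel \
        --kube-context ${CLUSTER_CONTEXT} \
        --namespace istio-system \
        -f ztunnel.yaml \
        --set multiCluster.clusterName=${CLUSTER_NAME}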

  7. Verify that the ztunnel pods are successfully installed. Because the ztunnel is deployed as a daemon set, the number of pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A --context ${CLUSTER_CONTEXT} | grep ztunnel
      

    Example output:

      ztunnel-tvtzn             1/1     Running   0          7s
    ztunnel-vtpjm             1/1     Running   0          4s
    ztunnel-hllxg             1/1     Running   0          4s
      
  8. Label the istio-system namespace with the cluster’s network name, which you previously set to your cluster’s name in the global.network field of the istiod installation. The ambient control plane uses this label internally to group pods that exist in the same L3 network.

      kubectl label namespace istio-system --context ${CLUSTER_CONTEXT} topology.istio.io/network=${CLUSTER_NAME}
      
  9. Create an east-west gateway in the istio-eastwest namespace. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. You can use either LoadBalancer or NodePort addresses for cross-cluster traffic.

  10. Verify that the east-west gateway is successfully deployed.

      kubectl get pods -n istio-eastwest --context $CLUSTER_CONTEXT
      
  11. For each cluster that you want to add to the multicluster ambient mesh setup, repeat these steps to upgrade the Helm values and deploy an east-west gateway. Remember to change the cluster name and context variables each time you repeat the steps.

      export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
      

Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.

  1. Optional: Before you link clusters, you can check the individual readiness of each cluster for linking by running the istioctl multicluster check command for each cluster.

      istioctl multicluster check --context $CLUSTER_CONTEXT
      

    Before continuing to the next step, make sure that the following checks pass or fail as expected:
    ✅ The license in use by istiod supports multicluster.
    ✅ All istiod, ztunnel, and east-west gateway pods are healthy.
    ✅ The east-west gateway is programmed.
    ❌ Each remote peer gateway has a gloo.solo.io/PeeringSucceeded status of True. Note that this fails if you run this command prior to linking the clusters.

  2. Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters. The steps vary based on whether you have access to the kubeconfig files for each cluster.

  3. For each cluster, verify that peer linking was successful by running the istioctl multicluster check command.

      istioctl multicluster check --context $CLUSTER_CONTEXT
      

    In this example output for cluster1, the license is valid, all Istio pods are healthy, and the east-west gateway is programmed. The remote peer gateways for linking to cluster2 and cluster3 both have a gloo.solo.io/PeeringSucceeded status of True.

      ✅ License Check: license is valid for multicluster
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
    ✅ Peers Check: all clusters connected
      

Next: Add apps to the ambient mesh. For multicluster setups, this includes making specific services available across your linked cluster setup.

Option 3: Automatically link clusters (beta)

In each cluster, use Helm to create the ambient mesh components, and create an east-west gateway so that traffic requests can be routed cross-cluster. Then, use the Gloo management plane to automate multicluster linking, which enables cross-cluster service discovery.

Set up tools

  1. Set your Enterprise-level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

      export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
      
  2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions.

  3. Save the Solo distribution of Istio version.

      export ISTIO_VERSION=1.28.0
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
      
  4. Save the image and Helm repository information for the Solo distribution of Istio.

    • Istio 1.29 and later:

        export REPO=us-docker.pkg.dev/soloio-img/istio
      export HELM_REPO=us-docker.pkg.dev/soloio-img/istio-helm
        
    • Istio 1.28 and earlier: You must provide a repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.

        # 12-character hash at the end of the repo URL
      export REPO_KEY=<repo_key>
      export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
      export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
        
  5. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.

    1. Get the OS and architecture that you use on your machine.

        OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
      ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
      echo $OS
      echo $ARCH
        
    2. Download the Solo distribution of Istio binary and install istioctl.

      • Istio 1.29 and later:

          mkdir -p ~/.istioctl/bin
        curl -sSL https://storage.googleapis.com/soloio-istio-binaries/release/$ISTIO_IMAGE/istio-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
        mv ~/.istioctl/bin/istio-$ISTIO_IMAGE/bin/istioctl ~/.istioctl/bin/istioctl
        chmod +x ~/.istioctl/bin/istioctl
        
        export PATH=${HOME}/.istioctl/bin:${PATH}
          
      • Istio 1.28 and earlier:

          mkdir -p ~/.istioctl/bin
        curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
        chmod +x ~/.istioctl/bin/istioctl
        
        export PATH=${HOME}/.istioctl/bin:${PATH}
          
    3. Verify that the istioctl client runs the Solo distribution of Istio that you want to install.

        istioctl version --remote=false
        

      Example output:

        client version: 1.28.0-solo
        

Enable automatic peering of clusters

Upgrade Gloo Mesh in your multicluster setup to enable the ConfigDistribution feature flag and install the enterprise CRDs, which are required for Gloo Mesh to automate peering and distribute gateways between clusters.

  1. Upgrade your gloo-platform-crds Helm release in the management cluster to include the following settings.

      helm get values gloo-platform-crds -n gloo-mesh -o yaml --kube-context ${MGMT_CONTEXT} > mgmt-crds.yaml
    helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
        --kube-context ${MGMT_CONTEXT} \
        --namespace gloo-mesh \
        -f mgmt-crds.yaml \
        --set featureGates.ConfigDistribution=true \
        --set installEnterpriseCrds=true
      
  2. Upgrade your gloo-platform Helm release in the management cluster to include the following settings.

      helm get values gloo-platform -n gloo-mesh -o yaml --kube-context ${MGMT_CONTEXT} > mgmt-plane.yaml
    helm upgrade gloo-platform gloo-platform/gloo-platform \
        --kube-context ${MGMT_CONTEXT} \
        --namespace gloo-mesh \
        -f mgmt-plane.yaml \
        --set featureGates.ConfigDistribution=true
      
  3. Upgrade your gloo-platform-crds Helm release in each workload cluster to include the following settings. Repeat this step for each workload cluster.

      helm get values gloo-platform-crds -n gloo-mesh -o yaml --kube-context ${CLUSTER_CONTEXT} > crds.yaml
    helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
        --kube-context ${CLUSTER_CONTEXT} \
        --namespace gloo-mesh \
        -f crds.yaml \
        --set installEnterpriseCrds=true
      

Create a shared root of trust

Create a shared root of trust for each cluster in the multicluster setup, including the management cluster. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.

Deploy ambient components

  1. Save the name and kubeconfig context of a cluster where you want to install Istio in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context. Note that to use automated multicluster peering, you must complete these steps to install an ambient mesh in the management cluster as well as your workload clusters.

      export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
      
  2. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml --context ${CLUSTER_CONTEXT}
      
  3. Install the base chart, which contains the CRDs and cluster roles required to set up Istio.

  4. Create the istiod control plane in your cluster.

  5. Install the Istio CNI node agent daemonset. Note that although the CNI is included in this section, it is technically not part of the control plane or data plane.

  6. Verify that the istiod control plane and Istio CNI components are successfully installed. Because the Istio CNI is deployed as a daemon set, the number of CNI pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A --context ${CLUSTER_CONTEXT} | grep istio
      

    Example output:

      istio-system   istiod-85c4dfd97f-mncj5                             1/1     Running   0               40s
    istio-system   istio-cni-node-pr5rl                                1/1     Running   0               9s
    istio-system   istio-cni-node-pvmx2                                1/1     Running   0               9s
    istio-system   istio-cni-node-6q26l                                1/1     Running   0               9s
      
  7. Install the ztunnel daemonset.

  8. Verify that the ztunnel pods are successfully installed. Because the ztunnel is deployed as a daemon set, the number of pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A --context ${CLUSTER_CONTEXT} | grep ztunnel
      

    Example output:

      ztunnel-tvtzn             1/1     Running   0          7s
    ztunnel-vtpjm             1/1     Running   0          4s
    ztunnel-hllxg             1/1     Running   0          4s
      
  9. Label the istio-system namespace with the cluster’s network name, which you previously set to your cluster’s name in the global.network field of the istiod installation. The ambient control plane uses this label internally to group pods that exist in the same L3 network.

      kubectl label namespace istio-system --context ${CLUSTER_CONTEXT} topology.istio.io/network=${CLUSTER_NAME}
      
  10. Create an east-west gateway in the istio-eastwest namespace. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. You can use either LoadBalancer or NodePort addresses for cross-cluster traffic.

  11. Verify that the east-west gateway is successfully deployed.

      kubectl get pods -n istio-eastwest --context $CLUSTER_CONTEXT
      
  12. For each cluster that you want to include in the multicluster ambient mesh setup, including the management cluster, repeat these steps to install the ambient mesh components and an east-west gateway in each cluster. Remember to change the cluster name and context variables each time you repeat the steps.

      export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
      

Review remote peer gateways

After you complete the steps for each cluster, verify that Gloo Mesh successfully created and distributed the remote peering gateways. These gateways use the istio-remote GatewayClass, which allows the istiod control plane in each cluster to discover the east-west gateway addresses of other clusters. Gloo Mesh generates one istio-remote resource in the management cluster for each connected workload cluster, and then distributes the gateways to every cluster in the setup.

  1. Verify that an istio-remote gateway for each connected cluster is copied to the management cluster.

      kubectl get gateways -n istio-eastwest --context $MGMT_CONTEXT
      

    In this example output, the istio-remote gateways that were auto-generated for workload clusters cluster1 and cluster2 are copied to the management cluster, alongside the management cluster’s own istio-remote gateway and east-west gateway.

      NAMESPACE        NAME                            CLASS           ADDRESS                                                                   PROGRAMMED   AGE
    istio-eastwest   istio-eastwest                 istio-eastwest   a7f6f1a2611fc4eb3864f8d688622fd4-1234567890.us-east-1.elb.amazonaws.com   True         6s
    istio-eastwest   istio-remote-peer-cluster1     istio-remote     a5082fe9522834b8192a6513eb8c6b01-0987654321.us-east-1.elb.amazonaws.com   True         4s
    istio-eastwest   istio-remote-peer-cluster2     istio-remote     aaad62dc3ffb142a1bfc13df7fe9665b-5678901234.us-east-1.elb.amazonaws.com   True         4s
    istio-eastwest   istio-remote-peer-mgmt         istio-remote     a7f6f1a2611fc4eb3864f8d688622fd4-1234567890.us-east-1.elb.amazonaws.com   True         4s
      
  2. In each cluster, verify that all istio-remote gateways are successfully distributed to all workload clusters. This ensures that services in each workload cluster can now access the east-west gateways in other clusters of the multicluster mesh setup.

      kubectl get gateways -n istio-eastwest --context $CLUSTER_CONTEXT
      
  3. In each cluster, verify that peer linking was successful by running the istioctl multicluster check command.

      istioctl multicluster check --context $CLUSTER_CONTEXT
      

    In this example output for cluster1, the license is valid, all Istio pods are healthy, and the east-west gateway is programmed. The remote peer gateways for linking to cluster2 and cluster3 both have a gloo.solo.io/PeeringSucceeded status of True.

      ✅ License Check: license is valid for multicluster
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
    ✅ Peers Check: all clusters connected
      

Next

  • Add apps to the ambient mesh. For multicluster setups, this includes making specific services available across your linked cluster setup.
  • In a multicluster mesh, the east-west gateway serves as a ztunnel that allows traffic requests to flow across clusters, but it does not modify requests in any way. To control in-mesh traffic, you can instead apply policies to waypoint proxies that you create for a workload namespace.