Overview

In this guide, you deploy an ambient mesh to each workload cluster, create an east-west gateway in each cluster, and link the istiod control planes across cluster networks by using peering gateways. In the next guide, you can deploy the Bookinfo sample app to the ambient mesh in each cluster, and make select services available across the multicluster mesh. Incoming requests can then be routed from an ingress gateway, such as Solo Enterprise for kgateway, to services in your mesh across all clusters.

The following diagram demonstrates an ambient mesh setup across multiple clusters. For more information about the components that are installed in these steps, see the ambient components overview.

Figure: Multicluster ambient mesh set up with the Solo distribution of Istio and Solo Enterprise for kgateway.

Considerations

Before you set up a multicluster ambient mesh, review the following considerations and requirements.

License requirements

Multicluster capabilities require an Enterprise-level license for Solo Enterprise for Istio. If you do not have one, contact an account representative. For more information, see Licensing.

Version requirements

Review the following known Istio version requirements and restrictions.

  • If you use Istio version 1.27.7, 1.28.4, or 1.29.0 and later, and you install the Solo Enterprise for Istio management plane into a namespace other than gloo-mesh, you must allow that namespace by listing it in the DEBUG_ENDPOINT_AUTH_ALLOWED_NAMESPACES environment variable of your istiod installation. For more information, see the release notes.
  • Patch versions 1.26.0 and 1.26.1 of the Solo distribution of Istio lack support for FIPS-tagged images and ztunnel outlier detection. When you install or upgrade to 1.26, use patch version 1.26.1-patch0 or later.
  • In the Solo distribution of Istio 1.25 and later, you can access enterprise-level features by passing your Solo license in the license.value or license.secretRef field of the Solo distribution of the istiod Helm chart. The Solo istiod Helm chart is strongly recommended because it includes safeguards, default settings, and upgrade handling that help ensure a reliable and secure Istio deployment. Though it is not recommended, you can instead pass your license key in the open source istiod Helm chart by using the --set pilot.env.SOLO_LICENSE_KEY setting.
  • Multicluster setups require the Solo distribution of Istio version 1.24.3 or later (1.24.3-solo), including the Solo distribution of istioctl.
  • Due to a lack of support for the Istio CNI and iptables for the Istio proxy, you cannot run Istio (and therefore Solo Enterprise for Istio) on AWS Fargate. For more information, see the Amazon EKS issue.

Platform requirements

The steps in the following sections have options for deploying an ambient mesh to either Kubernetes or OpenShift clusters.

If you use OpenShift clusters, complete the OpenShift prerequisite steps before you begin.

The commands for OpenShift in the following steps contain these required settings:

  • For Istio 1.24 and later, your Helm settings must include global.platform=openshift. For Istio 1.23 and earlier, use profile=openshift instead of the global.platform setting. For an example, see the sketch after this list.
  • Install the istio-cni and ztunnel Helm releases in the kube-system namespace, instead of the istio-system namespace.
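
For example, assuming that the CNI chart is published at oci://${HELM_REPO}/cni (these variables are set in the setup steps later in this guide), an OpenShift installation of the Istio CNI might look like the following sketch:

  helm upgrade --install istio-cni oci://${HELM_REPO}/cni \
    --kube-context ${context1} \
    --namespace kube-system \
    --version ${ISTIO_IMAGE} \
    --set profile=ambient \
    --set global.platform=openshift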

Revision and canary upgrade limitations

The upgrade guides in this documentation show you how to perform in-place upgrades for your Istio components, which is the recommended upgrade strategy.

Multicluster setup method

To get started, choose one of the following methods for creating a multicluster mesh.

Option 1: Install and link new ambient meshes

In each cluster, use Helm to create the ambient mesh components. Then, create an east-west gateway so that traffic requests can be routed cross-cluster, and link clusters to enable cross-cluster service discovery.

Set up tools

  1. Set your Enterprise-level license for Solo Enterprise for Istio as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. Note that you might have previously saved this key in another variable, such as ${SOLO_LICENSE_KEY} or ${GLOO_MESH_LICENSE_KEY}.

      export SOLO_ISTIO_LICENSE_KEY=<enterprise_license_key>
      
  2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions.

  3. Save the Solo distribution of Istio version.

      export ISTIO_VERSION=1.27.8
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
      
  4. Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.

      # 12-character hash at the end of the repo URL
    export REPO_KEY=<repo_key>
    export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
    export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
      
  5. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands. This script automatically detects your OS and architecture, downloads the appropriate Solo distribution of Istio binary, and verifies the installation.

      bash <(curl -sSfL https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/install-istioctl.sh)
    export PATH=${HOME}/.istioctl/bin:${PATH}
      
  6. Save the names and kubeconfig contexts of each cluster. This guide uses two clusters as an example. To add more clusters to the multicluster setup, save a name and context variable for each additional cluster.

      export cluster1=<cluster1_name>
    export context1=<cluster1_context>
    export cluster2=<cluster2_name>
    export context2=<cluster2_context>
      

Create a shared root of trust

Each cluster in the multicluster setup must have a shared root of trust. To achieve this, you can provide a root certificate that is signed by a PKI provider, or create a custom root certificate for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
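
For example, the following sketch uses the self-signed certificate Makefile from the istio/istio repository to create a root CA and a per-cluster intermediate CA for testing purposes; in production, use certificates that are signed by your PKI provider. The cacerts secret name, the istio-system namespace, and the four file names are the standard inputs that istiod reads for a plugged-in CA.

  # Download Istio's certificate generation Makefiles.
  mkdir -p certs && cd certs
  curl -sSLO https://raw.githubusercontent.com/istio/istio/master/tools/certs/common.mk
  curl -sSLO https://raw.githubusercontent.com/istio/istio/master/tools/certs/Makefile.selfsigned.mk

  # Create a root CA, and an intermediate CA for each cluster.
  make -f Makefile.selfsigned.mk root-ca
  make -f Makefile.selfsigned.mk ${cluster1}-cacerts
  make -f Makefile.selfsigned.mk ${cluster2}-cacerts

  # Store each cluster's intermediate CA in the cacerts secret that istiod reads.
  kubectl create namespace istio-system --context ${context1}
  kubectl create secret generic cacerts -n istio-system --context ${context1} \
    --from-file=${cluster1}/ca-cert.pem \
    --from-file=${cluster1}/ca-key.pem \
    --from-file=${cluster1}/root-cert.pem \
    --from-file=${cluster1}/cert-chain.pem

  kubectl create namespace istio-system --context ${context2}
  kubectl create secret generic cacerts -n istio-system --context ${context2} \
    --from-file=${cluster2}/ca-cert.pem \
    --from-file=${cluster2}/ca-key.pem \
    --from-file=${cluster2}/root-cert.pem \
    --from-file=${cluster2}/cert-chain.pem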

Deploy ambient components

  1. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml --context ${context1}
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml --context ${context2}
      
  2. Install the base chart, which contains the CRDs and cluster roles required to set up Istio, in both clusters.
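
    As a minimal sketch, assuming that the base chart is published at oci://${HELM_REPO}/base, the Helm installation might look like the following in each cluster:

      helm upgrade --install istio-base oci://${HELM_REPO}/base \
        --kube-context ${context1} \
        --namespace istio-system \
        --create-namespace \
        --version ${ISTIO_IMAGE}

      helm upgrade --install istio-base oci://${HELM_REPO}/base \
        --kube-context ${context2} \
        --namespace istio-system \
        --create-namespace \
        --version ${ISTIO_IMAGE}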

    You can optionally verify that the CRDs are successfully installed in both clusters.

      kubectl get crds -l app.kubernetes.io/instance=istio-base --context ${context1}
    kubectl get crds -l app.kubernetes.io/instance=istio-base --context ${context2}
      

    Example output:

      NAME                                       CREATED AT
    authorizationpolicies.security.istio.io    2025-12-16T22:56:00Z
    destinationrules.networking.istio.io       2025-12-16T22:56:00Z
    envoyfilters.networking.istio.io           2025-12-16T22:56:00Z
    gateways.networking.istio.io               2025-12-16T22:56:00Z
    peerauthentications.security.istio.io      2025-12-16T22:56:00Z
    proxyconfigs.networking.istio.io           2025-12-16T22:56:00Z
    requestauthentications.security.istio.io   2025-12-16T22:56:00Z
    segments.admin.solo.io                     2025-12-16T22:56:00Z
    serviceentries.networking.istio.io         2025-12-16T22:56:00Z
    sidecars.networking.istio.io               2025-12-16T22:56:00Z
    telemetries.telemetry.istio.io             2025-12-16T22:56:00Z
    virtualservices.networking.istio.io        2025-12-16T22:56:00Z
    wasmplugins.extensions.istio.io            2025-12-16T22:56:00Z
    workloadentries.networking.istio.io        2025-12-16T22:56:00Z
    workloadgroups.networking.istio.io         2025-12-16T22:56:00Z
      
  3. Create the istiod control plane in both clusters.
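
    As a minimal sketch, assuming that the istiod chart is published at oci://${HELM_REPO}/istiod and that the multicluster values shown here (global.network, global.multiCluster.clusterName, license.value, and the pilot.env.ENABLE_PEERING_DISCOVERY environment variable) match your chart version, the installation might look like the following. Check the Solo istiod Helm chart reference for the exact values, and repeat the command with ${context2} and ${cluster2}.

      helm upgrade --install istiod oci://${HELM_REPO}/istiod \
        --kube-context ${context1} \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        --set profile=ambient \
        --set global.hub=${REPO} \
        --set global.tag=${ISTIO_IMAGE} \
        --set global.network=${cluster1} \
        --set global.multiCluster.clusterName=${cluster1} \
        --set license.value=${SOLO_ISTIO_LICENSE_KEY} \
        --set pilot.env.ENABLE_PEERING_DISCOVERY="true"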

  4. Install the Istio CNI node agent daemonset in both clusters.
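
    As a minimal sketch, assuming that the CNI chart is published at oci://${HELM_REPO}/cni, the installation might look like the following. On OpenShift, install into the kube-system namespace and add --set global.platform=openshift. Repeat the command with ${context2}.

      helm upgrade --install istio-cni oci://${HELM_REPO}/cni \
        --kube-context ${context1} \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        --set profile=ambient \
        --set global.hub=${REPO} \
        --set global.tag=${ISTIO_IMAGE}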

  5. Install the ztunnel daemonset in both clusters.
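
    As a minimal sketch, assuming that the ztunnel chart is published at oci://${HELM_REPO}/ztunnel and that the chart's env value sets the NETWORK environment variable that is described later in this guide, the installation might look like the following. Repeat the command with ${context2} and ${cluster2}.

      helm upgrade --install ztunnel oci://${HELM_REPO}/ztunnel \
        --kube-context ${context1} \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        --set hub=${REPO} \
        --set tag=${ISTIO_IMAGE} \
        --set env.NETWORK=${cluster1}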

  6. Verify that the components of the Istio ambient control and data plane are successfully installed in both clusters. Because the Istio CNI and ztunnel are deployed as daemon sets, the number of CNI and ztunnel pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A --context ${context1} | grep -E 'istio|ztunnel'
    kubectl get pods -A --context ${context2} | grep -E 'istio|ztunnel'
      

    Example output:

      istiod-85c4dfd97f-mncj5      1/1     Running   0          40s
    istio-cni-node-pr5rl         1/1     Running   0          9s
    istio-cni-node-pvmx2         1/1     Running   0          9s
    istio-cni-node-6q26l         1/1     Running   0          9s
    ztunnel-tvtzn                1/1     Running   0          7s
    ztunnel-vtpjm                1/1     Running   0          4s
    ztunnel-hllxg                1/1     Running   0          4s
      
  7. Optional: Check the istiod logs in both clusters to verify that the certificate you generated earlier is picked up by istiod.

      kubectl logs deploy/istiod -n istio-system --context ${context1} | grep x509
    kubectl logs deploy/istiod -n istio-system --context ${context2} | grep x509
      

    Example output:

      2025-12-16T22:59:06.783901Z     info    x509 cert - Issuer: "CN=Intermediate CA,O=Istio,L=cluster-1", Subject: "", SN: def320623729b8370172413749143836, NotBefore: "2025-12-16T22:57:06Z", NotAfter: "2035-12-14T22:59:06Z"
    2025-12-16T22:59:06.783937Z     info    x509 cert - Issuer: "CN=Root CA,O=Istio", Subject: "CN=Intermediate CA,O=Istio,L=cluster-1", SN: 452d45254328667ccf8434c64c79fe789612bb5a, NotBefore: "2025-12-16T22:54:30Z", NotAfter: "2035-12-14T22:54:30Z"
    2025-12-16T22:59:06.783966Z     info    x509 cert - Issuer: "CN=Root CA,O=Istio", Subject: "CN=Root CA,O=Istio", SN: 38dad68e56ab8c506ae07454801f65b134bd9580, NotBefore: "2025-12-16T22:54:30Z", NotAfter: "2035-12-14T22:54:30Z"
      
  8. Label the istio-system namespace with the clusters’ network names, which you previously set to each cluster name in the global.network field of the istiod installations. The ambient control plane uses this label internally to group pods that exist in the same L3 network.

      kubectl label namespace istio-system --context ${context1} topology.istio.io/network=${cluster1}
    kubectl label namespace istio-system --context ${context2} topology.istio.io/network=${cluster2}
      
  9. Create an east-west gateway in the istio-eastwest namespace of both clusters. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh.

      function create_ew_gateway() {
      context=${1:?context}
      cluster=${2:?cluster}
      kubectl create namespace istio-eastwest --context ${context}
      istioctl multicluster expose --namespace istio-eastwest --context ${context} --generate > ew-gateway-${cluster}.yaml
      kubectl apply -f ew-gateway-${cluster}.yaml --context ${context}
    }
    
    create_ew_gateway ${context1} ${cluster1}
    create_ew_gateway ${context2} ${cluster2}
      

    In this example of a generated Gateway resource, the istio-eastwest gatewayClassName is included by default when you install Istio in ambient mode. For customization options, see the gateway guide in the Istio docs.

      apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      labels:
        istio.io/expose-istiod: "15012"
        topology.istio.io/network: "cluster1"
        topology.kubernetes.io/region: "us-east"
        topology.kubernetes.io/zone: "us-east-1"
      name: istio-eastwest
      namespace: istio-eastwest
    spec:
      gatewayClassName: istio-eastwest
      listeners:
      - name: cross-network
        port: 15008
        protocol: HBONE
        tls:
          mode: Passthrough
      - name: xds-tls
        port: 15012
        protocol: TLS
        tls:
          mode: Passthrough
      
  10. Verify that the east-west gateway is successfully deployed in both clusters.

      kubectl get svc -n istio-eastwest --context ${context1}
    kubectl get svc -n istio-eastwest --context ${context2}
      

    Example output:

      NAME             TYPE           CLUSTER-IP       EXTERNAL-IP             PORT(S)                                           AGE
    istio-eastwest   LoadBalancer   172.20.205.104   <external_address>      15021:31655/TCP,15008:32699/TCP,15012:32166/TCP   55s
    NAME             TYPE           CLUSTER-IP       EXTERNAL-IP             PORT(S)                                           AGE
    istio-eastwest   LoadBalancer   172.20.21.117    <external_address>      15021:30324/TCP,15008:31875/TCP,15012:31050/TCP   77s
      

Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.

  1. Optional: Before you link clusters, you can check the individual readiness of each cluster for linking by running the istioctl multicluster check --precheck command. For more information about this command, see the CLI reference. If any checks fail, run the command with --verbose, and see Validate your multicluster setup.

      istioctl multicluster check --precheck --contexts="$context1,$context2"
      

    Before continuing to the next step, make sure that the following checks pass as expected:
    ✅ Relevant environment variables on istiod are supported.
    ✅ The license in use by istiod supports multicluster.
    ✅ All istiod, ztunnel, and east-west gateway pods are healthy.
    ✅ The east-west gateway is programmed.

  2. Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters. Note that you can either link the clusters bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from the services in any of the other linked clusters. In an asymmetrical setup, you allow one cluster to send requests to another cluster, but the other cluster cannot send requests back to the first cluster.
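
    For example, assuming that the istioctl multicluster link command accepts the same --contexts flag as the check command, a bi-directional link between the two clusters might look like the following:

      istioctl multicluster link --contexts="$context1,$context2"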

  3. Verify that peer linking was successful by running the istioctl multicluster check command. If any checks fail, run the command with --verbose, and see Validate your multicluster setup.

      istioctl multicluster check --contexts="$context1,$context2"
      

    In this example output, the remote peer gateways are successfully connected, and all other checks passed successfully. No global services exist because no app services are exposed across clusters in the multicluster mesh yet.

      === Cluster: cluster1 ===
    ✅ Incompatible Environment Variable Check: all relevant environment variables are valid
    ✅ License Check: license is valid for multicluster
    ✅ CNI DNS Capture Check: AMBIENT_DNS_CAPTURE is enabled
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway istio-eastwest/istio-eastwest): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
         ✅ istio-eastwest/istio-eastwest available at aab8471c7fcfa4a3c82f2d217b015d97-396238517.us-east-1.elb.amazonaws.com
    ✅ Peers Check: all clusters connected
         ✅ Connected to gloo-gateway-docs-mgt via ab46aa29a49914da789a6d5422aca279-541415195.us-east-2.elb.amazonaws.com
    ℹ️  Shared Services Check: no globally shared services found
    ====== 
    
    === Cluster: cluster2 ===
    ✅ Incompatible Environment Variable Check: all relevant environment variables are valid
    ✅ License Check: license is valid for multicluster
    ✅ CNI DNS Capture Check: AMBIENT_DNS_CAPTURE is enabled
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway istio-eastwest/istio-eastwest): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
         ✅ istio-eastwest/istio-eastwest available at ab46aa29a49914da789a6d5422aca279-541415195.us-east-2.elb.amazonaws.com
    ✅ Peers Check: all clusters connected
         ✅ Connected to gloo-mesh-core-docs-mgt via aab8471c7fcfa4a3c82f2d217b015d97-396238517.us-east-1.elb.amazonaws.com
    ℹ️  Shared Services Check: no globally shared services found
    ====== 
    
    ✅ Intermediate Certs Compatibility Check: all clusters have compatible intermediate certificates
    ✅ Network Configuration Check: all network configurations are valid
    ✅ Stale Workloads Check: skipped (flat network not detected)
      
  4. Optional: Verify that the istiod control plane for each peered cluster is included in each cluster’s proxy status list.

      istioctl proxy-status --context ${context1}
    istioctl proxy-status --context ${context2}
      

    Example output for cluster1, in which you can verify that the istiod control plane for cluster2 is listed:

      NAME                                               CLUSTER          ISTIOD                      VERSION              SUBSCRIBED TYPES
    istio-eastwest-67fd5679dc-fhsxs.istio-eastwest     cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
    istiod-6bc6765484-5bbhd.istio-system               cluster2         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     3 (FSDS,SGDS,WDS)
    ztunnel-5f8rb.kube-system                          cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
    ztunnel-f96kh.kube-system                          cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
    ztunnel-vtj4f.kube-system                          cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
      

Next: Add apps to the ambient mesh. For multicluster setups, this includes making specific services available across your linked cluster setup.

Option 2: Upgrade and link existing ambient meshes

Upgrade your existing ambient meshes installed with Helm and link them to create a multicluster ambient mesh.

Set up tools

  1. Set your Enterprise-level license for Solo Enterprise for Istio as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. Note that you might have previously saved this key in another variable, such as ${SOLO_LICENSE_KEY} or ${GLOO_MESH_LICENSE_KEY}.

      export SOLO_ISTIO_LICENSE_KEY=<enterprise_license_key>
      
  2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions.

  3. Save the Solo distribution of Istio version.

      export ISTIO_VERSION=1.27.8
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
      
  4. Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.

      # 12-character hash at the end of the repo URL
    export REPO_KEY=<repo_key>
    export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
    export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
      
  5. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands. This script automatically detects your OS and architecture, downloads the appropriate Solo distribution of Istio binary, and verifies the installation.

      bash <(curl -sSfL https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/install-istioctl.sh)
    export PATH=${HOME}/.istioctl/bin:${PATH}
      
  6. Save the names and kubeconfig contexts of each cluster. This guide uses two clusters as an example. To add more clusters to the multicluster setup, save a name and context variable for each additional cluster.

      export cluster1=<cluster1_name>
    export context1=<cluster1_context>
    export cluster2=<cluster2_name>
    export context2=<cluster2_context>
      

Create a shared root of trust

Each cluster in the multicluster setup must have a shared root of trust. To achieve this, you can provide a root certificate that is signed by a PKI provider, or create a custom root certificate for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.

Upgrade settings

In each cluster, update the ambient mesh components for multicluster, and create an east-west gateway so that traffic requests can be routed cross-cluster.

  1. Get the current values for the istiod Helm release in both clusters.

      function get_istiod_values() {
      context=${1:?context}
      cluster=${2:?cluster}
      helm get values --kube-context ${context} istiod -n istio-system -o yaml > istiod-${cluster}.yaml
    }
    
    get_istiod_values ${context1} ${cluster1}
    get_istiod_values ${context2} ${cluster2}
      
  2. Update your Helm release with the following multicluster values in both clusters. If you must update the Istio minor version, include the --set global.tag=${ISTIO_IMAGE} and --set global.hub=${REPO} flags too.
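
    As a minimal sketch, assuming that the istiod chart is published at oci://${HELM_REPO}/istiod and that the multicluster values shown here (global.network, global.multiCluster.clusterName, license.value, and the pilot.env.ENABLE_PEERING_DISCOVERY environment variable) match your chart version, the upgrade might look like the following. Check the Solo istiod Helm chart reference for the exact values, and repeat the command with ${context2} and ${cluster2}.

      helm upgrade istiod oci://${HELM_REPO}/istiod \
        --kube-context ${context1} \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        -f istiod-${cluster1}.yaml \
        --set global.network=${cluster1} \
        --set global.multiCluster.clusterName=${cluster1} \
        --set license.value=${SOLO_ISTIO_LICENSE_KEY} \
        --set pilot.env.ENABLE_PEERING_DISCOVERY="true"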

  3. Verify that the istiod pods are successfully restarted in both clusters. Note that it might take a few seconds for the pods to become available.

      kubectl get pods --context ${context1} -n istio-system | grep istiod
    kubectl get pods --context ${context2} -n istio-system | grep istiod
      

    Example output:

      istiod-b84c55cff-tllfr   1/1     Running   0          58s
      
  4. Optional: Check the istiod logs in both clusters to verify that the certificate you generated earlier is picked up by istiod.

      kubectl logs deploy/istiod -n istio-system --context ${context1} | grep x509
    kubectl logs deploy/istiod -n istio-system --context ${context2} | grep x509
      

    Example output:

      2025-12-16T22:59:06.783901Z     info    x509 cert - Issuer: "CN=Intermediate CA,O=Istio,L=cluster-1", Subject: "", SN: def320623729b8370172413749143836, NotBefore: "2025-12-16T22:57:06Z", NotAfter: "2035-12-14T22:59:06Z"
    2025-12-16T22:59:06.783937Z     info    x509 cert - Issuer: "CN=Root CA,O=Istio", Subject: "CN=Intermediate CA,O=Istio,L=cluster-1", SN: 452d45254328667ccf8434c64c79fe789612bb5a, NotBefore: "2025-12-16T22:54:30Z", NotAfter: "2035-12-14T22:54:30Z"
    2025-12-16T22:59:06.783966Z     info    x509 cert - Issuer: "CN=Root CA,O=Istio", Subject: "CN=Root CA,O=Istio", SN: 38dad68e56ab8c506ae07454801f65b134bd9580, NotBefore: "2025-12-16T22:54:30Z", NotAfter: "2035-12-14T22:54:30Z"
      
  5. Get the current values for the ztunnel Helm release in both clusters.
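
    For example, assuming that the ztunnel release is installed in the istio-system namespace (kube-system on OpenShift), the commands might look like the following:

      function get_ztunnel_values() {
        context=${1:?context}
        cluster=${2:?cluster}
        helm get values --kube-context ${context} ztunnel -n istio-system -o yaml > ztunnel-${cluster}.yaml
      }

      get_ztunnel_values ${context1} ${cluster1}
      get_ztunnel_values ${context2} ${cluster2}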

  6. Update your Helm release with the following multicluster values in both clusters. If you must update the Istio minor version, include the --set tag=${ISTIO_IMAGE} and --set hub=${REPO} flags too.
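
    As a minimal sketch, assuming that the ztunnel chart is published at oci://${HELM_REPO}/ztunnel and that the chart's env value sets the NETWORK environment variable that is described later in this guide, the upgrade might look like the following. Repeat the command with ${context2} and ${cluster2}.

      helm upgrade ztunnel oci://${HELM_REPO}/ztunnel \
        --kube-context ${context1} \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        -f ztunnel-${cluster1}.yaml \
        --set env.NETWORK=${cluster1}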

  7. Verify that the ztunnel pods are successfully installed in both clusters. Because the ztunnel is deployed as a daemon set, the number of pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A --context ${context1} | grep ztunnel
    kubectl get pods -A --context ${context2} | grep ztunnel
      

    Example output for one cluster:

      ztunnel-tvtzn             1/1     Running   0          7s
    ztunnel-vtpjm             1/1     Running   0          4s
    ztunnel-hllxg             1/1     Running   0          4s
      
  8. Label the istio-system namespace with the clusters’ network names, which you previously set to each cluster name in the global.network field of the istiod installations. The ambient control plane uses this label internally to group pods that exist in the same L3 network.

      kubectl label namespace istio-system --context ${context1} topology.istio.io/network=${cluster1}
    kubectl label namespace istio-system --context ${context2} topology.istio.io/network=${cluster2}
      
  9. Create an east-west gateway in the istio-eastwest namespace of both clusters. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh.

      function create_ew_gateway() {
      context=${1:?context}
      cluster=${2:?cluster}
      kubectl create namespace istio-eastwest --context ${context}
      istioctl multicluster expose --namespace istio-eastwest --context ${context} --generate > ew-gateway-${cluster}.yaml
      kubectl apply -f ew-gateway-${cluster}.yaml --context ${context}
    }
    
    create_ew_gateway ${context1} ${cluster1}
    create_ew_gateway ${context2} ${cluster2}
      

    In this example of a generated Gateway resource, the istio-eastwest gatewayClassName is included by default when you install Istio in ambient mode. For customization options, see the gateway guide in the Istio docs.

      apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      labels:
        istio.io/expose-istiod: "15012"
        topology.istio.io/network: "cluster1"
        topology.kubernetes.io/region: "us-east"
        topology.kubernetes.io/zone: "us-east-1"
      name: istio-eastwest
      namespace: istio-eastwest
    spec:
      gatewayClassName: istio-eastwest
      listeners:
      - name: cross-network
        port: 15008
        protocol: HBONE
        tls:
          mode: Passthrough
      - name: xds-tls
        port: 15012
        protocol: TLS
        tls:
          mode: Passthrough
      
  10. Verify that the east-west gateway is successfully deployed in both clusters.

      kubectl get svc -n istio-eastwest --context ${context1}
    kubectl get svc -n istio-eastwest --context ${context2}
      

    Example output:

      NAME             TYPE           CLUSTER-IP       EXTERNAL-IP             PORT(S)                                           AGE
    istio-eastwest   LoadBalancer   172.20.205.104   <external_address>      15021:31655/TCP,15008:32699/TCP,15012:32166/TCP   55s
    NAME             TYPE           CLUSTER-IP       EXTERNAL-IP             PORT(S)                                           AGE
    istio-eastwest   LoadBalancer   172.20.21.117    <external_address>      15021:30324/TCP,15008:31875/TCP,15012:31050/TCP   77s
      

Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.

  1. Optional: Before you link clusters, you can check the individual readiness of each cluster for linking by running the istioctl multicluster check --precheck command. For more information about this command, see the CLI reference. If any checks fail, run the command with --verbose, and see Validate your multicluster setup.

      istioctl multicluster check --precheck --contexts="$context1,$context2"
      

    Before continuing to the next step, make sure that the following checks pass as expected:
    ✅ Relevant environment variables on istiod are supported.
    ✅ The license in use by istiod supports multicluster.
    ✅ All istiod, ztunnel, and east-west gateway pods are healthy.
    ✅ The east-west gateway is programmed.

  2. Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters. Note that you can either link the clusters bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from the services in any of the other linked clusters. In an asymmetrical setup, you allow one cluster to send requests to another cluster, but the other cluster cannot send requests back to the first cluster.
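
    For example, a bi-directional link between the two clusters might look like the following sketch, assuming that the istioctl multicluster link command accepts the same --contexts flag as the check command:

      istioctl multicluster link --contexts="$context1,$context2"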

  3. Verify that peer linking was successful by running the istioctl multicluster check command. If any checks fail, run the command with --verbose, and see Validate your multicluster setup.

      istioctl multicluster check --contexts="$context1,$context2"
      

    In this example output, the remote peer gateways are successfully connected, and all other checks passed successfully. No global services exist because no app services are exposed across clusters in the multicluster mesh yet.

      === Cluster: cluster1 ===
    ✅ Incompatible Environment Variable Check: all relevant environment variables are valid
    ✅ License Check: license is valid for multicluster
    ✅ CNI DNS Capture Check: AMBIENT_DNS_CAPTURE is enabled
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway istio-eastwest/istio-eastwest): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
         ✅ istio-eastwest/istio-eastwest available at aab8471c7fcfa4a3c82f2d217b015d97-396238517.us-east-1.elb.amazonaws.com
    ✅ Peers Check: all clusters connected
         ✅ Connected to gloo-gateway-docs-mgt via ab46aa29a49914da789a6d5422aca279-541415195.us-east-2.elb.amazonaws.com
    ℹ️  Shared Services Check: no globally shared services found
    ====== 
    
    === Cluster: cluster2 ===
    ✅ Incompatible Environment Variable Check: all relevant environment variables are valid
    ✅ License Check: license is valid for multicluster
    ✅ CNI DNS Capture Check: AMBIENT_DNS_CAPTURE is enabled
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway istio-eastwest/istio-eastwest): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
         ✅ istio-eastwest/istio-eastwest available at ab46aa29a49914da789a6d5422aca279-541415195.us-east-2.elb.amazonaws.com
    ✅ Peers Check: all clusters connected
         ✅ Connected to gloo-mesh-core-docs-mgt via aab8471c7fcfa4a3c82f2d217b015d97-396238517.us-east-1.elb.amazonaws.com
    ℹ️  Shared Services Check: no globally shared services found
    ====== 
    
    ✅ Intermediate Certs Compatibility Check: all clusters have compatible intermediate certificates
    ✅ Network Configuration Check: all network configurations are valid
    ✅ Stale Workloads Check: skipped (flat network not detected)
      
  4. Optional: Verify that the istiod control plane for each peered cluster is included in each cluster’s proxy status list.

      istioctl proxy-status --context ${context1}
    istioctl proxy-status --context ${context2}
      

    Example output for cluster1, in which you can verify that the istiod control plane for cluster2 is listed:

      NAME                                               CLUSTER          ISTIOD                      VERSION              SUBSCRIBED TYPES
    istio-eastwest-67fd5679dc-fhsxs.istio-eastwest     cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
    istiod-6bc6765484-5bbhd.istio-system               cluster2         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     3 (FSDS,SGDS,WDS)
    ztunnel-5f8rb.kube-system                          cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
    ztunnel-f96kh.kube-system                          cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
    ztunnel-vtj4f.kube-system                          cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
      

Next: Add apps to the ambient mesh. For multicluster setups, this includes making specific services available across your linked cluster setup.

Option 3: Automatically link clusters (beta)

In each cluster, use Helm to create the ambient mesh components, and create an east-west gateway so that traffic requests can be routed cross-cluster. Then, use the Gloo management plane to automate multicluster linking, which enables cross-cluster service discovery.

Set up tools

  1. Set your Enterprise-level license for Solo Enterprise for Istio as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. Note that you might have previously saved this key in another variable, such as ${SOLO_LICENSE_KEY} or ${GLOO_MESH_LICENSE_KEY}.

      export SOLO_ISTIO_LICENSE_KEY=<enterprise_license_key>
      
  2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions.

  3. Save the Solo distribution of Istio version.

      export ISTIO_VERSION=1.27.8
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
      
  4. Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.

      # 12-character hash at the end of the repo URL
    export REPO_KEY=<repo_key>
    export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
    export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
      
  5. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands. This script automatically detects your OS and architecture, downloads the appropriate Solo distribution of Istio binary, and verifies the installation.

      bash <(curl -sSfL https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/install-istioctl.sh)
    export PATH=${HOME}/.istioctl/bin:${PATH}
      
  6. Save the names and kubeconfig contexts of each cluster. This guide uses two clusters as an example. To add more clusters to the multicluster setup, save a name and context variable for each additional cluster.

      export cluster1=<cluster1_name>
    export context1=<cluster1_context>
    export cluster2=<cluster2_name>
    export context2=<cluster2_context>
      

Enable automatic peering of clusters

Upgrade Solo Enterprise for Istio in your multicluster setup to enable the ConfigDistribution feature flag and install the enterprise CRDs, which are required for Solo Enterprise for Istio to automate peering and distribute gateways between clusters.

  1. In the cluster where you installed the management plane, upgrade your gloo-platform-crds Helm release to include the following settings.

      helm get values gloo-platform-crds -n gloo-mesh -o yaml --kube-context ${context1} > mgmt-crds.yaml
    helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
      --kube-context ${context1} \
      --namespace gloo-mesh \
      -f mgmt-crds.yaml \
      --set featureGates.ConfigDistribution=true \
      --set installEnterpriseCrds=true
      
  2. In the cluster where you installed the management plane, upgrade your gloo-platform Helm release to include the following settings.

      helm get values gloo-platform -n gloo-mesh -o yaml --kube-context ${context1} > mgmt-plane.yaml
    helm upgrade gloo-platform gloo-platform/gloo-platform \
      --kube-context ${context1} \
      --namespace gloo-mesh \
      -f mgmt-plane.yaml \
      --set featureGates.ConfigDistribution=true
      
  3. In each connected cluster, upgrade your gloo-platform-crds Helm release to include the following settings. Repeat this step for each workload cluster.

      helm get values gloo-platform-crds -n gloo-mesh -o yaml --kube-context ${context2} > crds.yaml
    helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
      --kube-context ${context2} \
      --namespace gloo-mesh \
      -f crds.yaml \
      --set installEnterpriseCrds=true
      

Create a shared root of trust

Each cluster in the multicluster setup must have a shared root of trust. To achieve this, you can provide a root certificate that is signed by a PKI provider, or create a custom root certificate for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.

Deploy ambient components

  1. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml --context ${context1}
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml --context ${context2}
      
  2. Install the base chart, which contains the CRDs and cluster roles required to set up Istio, in both clusters.

    You can optionally verify that the CRDs are successfully installed in both clusters.

      kubectl get crds -l app.kubernetes.io/instance=istio-base --context ${context1}
    kubectl get crds -l app.kubernetes.io/instance=istio-base --context ${context2}
      

    Example output:

      NAME                                       CREATED AT
    authorizationpolicies.security.istio.io    2025-12-16T22:56:00Z
    destinationrules.networking.istio.io       2025-12-16T22:56:00Z
    envoyfilters.networking.istio.io           2025-12-16T22:56:00Z
    gateways.networking.istio.io               2025-12-16T22:56:00Z
    peerauthentications.security.istio.io      2025-12-16T22:56:00Z
    proxyconfigs.networking.istio.io           2025-12-16T22:56:00Z
    requestauthentications.security.istio.io   2025-12-16T22:56:00Z
    segments.admin.solo.io                     2025-12-16T22:56:00Z
    serviceentries.networking.istio.io         2025-12-16T22:56:00Z
    sidecars.networking.istio.io               2025-12-16T22:56:00Z
    telemetries.telemetry.istio.io             2025-12-16T22:56:00Z
    virtualservices.networking.istio.io        2025-12-16T22:56:00Z
    wasmplugins.extensions.istio.io            2025-12-16T22:56:00Z
    workloadentries.networking.istio.io        2025-12-16T22:56:00Z
    workloadgroups.networking.istio.io         2025-12-16T22:56:00Z
      
  3. Create the istiod control plane in both clusters.

  4. Install the Istio CNI node agent daemonset in both clusters. Note that although the CNI is included in this section, it is technically not part of the control plane or data plane.

  5. Verify that the components of the Istio ambient control plane are successfully installed in both clusters. Because the Istio CNI is deployed as a daemon set, the number of CNI pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A --context ${context1} | grep istio
    kubectl get pods -A --context ${context2} | grep istio
      

    Example output:

      istio-system   istiod-85c4dfd97f-mncj5                             1/1     Running   0               40s
    istio-system   istio-cni-node-pr5rl                                1/1     Running   0               9s
    istio-system   istio-cni-node-pvmx2                                1/1     Running   0               9s
    istio-system   istio-cni-node-6q26l                                1/1     Running   0               9s
      
  6. Optional: Check the istiod logs in both clusters to verify that the certificate you generated earlier is picked up by istiod.

      kubectl logs deploy/istiod -n istio-system --context ${context1} | grep x509
    kubectl logs deploy/istiod -n istio-system --context ${context2} | grep x509
      

    Example output:

      2025-12-16T22:59:06.783901Z     info    x509 cert - Issuer: "CN=Intermediate CA,O=Istio,L=cluster-1", Subject: "", SN: def320623729b8370172413749143836, NotBefore: "2025-12-16T22:57:06Z", NotAfter: "2035-12-14T22:59:06Z"
    2025-12-16T22:59:06.783937Z     info    x509 cert - Issuer: "CN=Root CA,O=Istio", Subject: "CN=Intermediate CA,O=Istio,L=cluster-1", SN: 452d45254328667ccf8434c64c79fe789612bb5a, NotBefore: "2025-12-16T22:54:30Z", NotAfter: "2035-12-14T22:54:30Z"
    2025-12-16T22:59:06.783966Z     info    x509 cert - Issuer: "CN=Root CA,O=Istio", Subject: "CN=Root CA,O=Istio", SN: 38dad68e56ab8c506ae07454801f65b134bd9580, NotBefore: "2025-12-16T22:54:30Z", NotAfter: "2035-12-14T22:54:30Z"
      
  7. Install the ztunnel daemonset in both clusters.

  8. Verify that the ztunnel pods are successfully installed in both clusters. Because the ztunnel is deployed as a daemon set, the number of pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A --context ${context1} | grep ztunnel
    kubectl get pods -A --context ${context2} | grep ztunnel
      

    Example output:

      ztunnel-tvtzn             1/1     Running   0          7s
    ztunnel-vtpjm             1/1     Running   0          4s
    ztunnel-hllxg             1/1     Running   0          4s
      
  9. Label the istio-system namespace with the clusters’ network names, which you previously set to each cluster name in the global.network field of the istiod installations. The ambient control plane uses this label internally to group pods that exist in the same L3 network.

      kubectl label namespace istio-system --context ${context1} topology.istio.io/network=${cluster1}
    kubectl label namespace istio-system --context ${context2} topology.istio.io/network=${cluster2}
      
  10. Create an east-west gateway in the istio-eastwest namespace of both clusters. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh.

      function create_ew_gateway() {
      context=${1:?context}
      cluster=${2:?cluster}
      kubectl create namespace istio-eastwest --context ${context}
      istioctl multicluster expose --namespace istio-eastwest --context ${context} --generate > ew-gateway-${cluster}.yaml
      kubectl apply -f ew-gateway-${cluster}.yaml --context ${context}
    }
    
    create_ew_gateway ${context1} ${cluster1}
    create_ew_gateway ${context2} ${cluster2}

    In this example of a generated Gateway resource, the istio-eastwest gatewayClassName is included by default when you install Istio in ambient mode. For customization options, see the gateway guide in the Istio docs.

      apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      labels:
        istio.io/expose-istiod: "15012"
        topology.istio.io/network: "cluster1"
        topology.kubernetes.io/region: "us-east"
        topology.kubernetes.io/zone: "us-east-1"
      name: istio-eastwest
      namespace: istio-eastwest
    spec:
      gatewayClassName: istio-eastwest
      listeners:
      - name: cross-network
        port: 15008
        protocol: HBONE
        tls:
          mode: Passthrough
      - name: xds-tls
        port: 15012
        protocol: TLS
        tls:
          mode: Passthrough
      
  11. Verify that the east-west gateway is successfully deployed in both clusters.

      kubectl get svc -n istio-eastwest --context ${context1}
    kubectl get svc -n istio-eastwest --context ${context2}
      

    Example output:

      NAME             TYPE           CLUSTER-IP       EXTERNAL-IP             PORT(S)                                           AGE
    istio-eastwest   LoadBalancer   172.20.205.104   <external_address>      15021:31655/TCP,15008:32699/TCP,15012:32166/TCP   55s
    NAME             TYPE           CLUSTER-IP       EXTERNAL-IP             PORT(S)                                           AGE
    istio-eastwest   LoadBalancer   172.20.21.117    <external_address>      15021:30324/TCP,15008:31875/TCP,15012:31050/TCP   77s
      

Review remote peer gateways

After you complete the steps for each cluster, verify that Solo Enterprise for Istio successfully created and distributed the remote peering gateways. These gateways use the istio-remote GatewayClass, which allows the istiod control plane in each cluster to discover the east-west gateway addresses of other clusters. For each connected cluster, Solo Enterprise for Istio generates one istio-remote resource in the cluster where the management plane is deployed, and then distributes that gateway to each of the other clusters.

  1. Verify that an istio-remote gateway for each connected cluster is copied to the cluster where the management plane is deployed.

      kubectl get gateways -n istio-eastwest --context ${context1}
      

    In this example output, the istio-remote gateway that was auto-generated for connected cluster cluster2 is copied to cluster1 where the management plane is deployed, alongside cluster1’s own istio-remote gateway and east-west gateway.

      NAMESPACE        NAME                            CLASS           ADDRESS                                                                   PROGRAMMED   AGE
    istio-eastwest   istio-eastwest                 istio-eastwest   a7f6f1a2611fc4eb3864f8d688622fd4-1234567890.us-east-1.elb.amazonaws.com   True         6s
    istio-eastwest   istio-remote-peer-cluster1     istio-remote     a5082fe9522834b8192a6513eb8c6b01-0987654321.us-east-1.elb.amazonaws.com   True         4s
    istio-eastwest   istio-remote-peer-cluster2     istio-remote     aaad62dc3ffb142a1bfc13df7fe9665b-5678901234.us-east-1.elb.amazonaws.com   True         4s
      
  2. In each connected cluster, verify that all istio-remote gateways are successfully distributed to all workload clusters. This ensures that services in each workload cluster can now access the east-west gateways in other clusters of the multicluster mesh setup.

      kubectl get gateways -n istio-eastwest --context ${context2}
      
  3. Verify that peer linking was successful by running the istioctl multicluster check command. For more information about this command, see the CLI reference. If any checks fail, run the command with --verbose, and see Validate your multicluster setup.

      istioctl multicluster check --contexts="$context1,$context2"
      

    Example output:

      === Cluster: cluster1 ===
    ✅ Incompatible Environment Variable Check: all relevant environment variables are valid
    ✅ License Check: license is valid for multicluster
    ✅ CNI DNS Capture Check: AMBIENT_DNS_CAPTURE is enabled
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway istio-eastwest/istio-eastwest): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
         ✅ istio-eastwest/istio-eastwest available at aab8471c7fcfa4a3c82f2d217b015d97-396238517.us-east-1.elb.amazonaws.com
    ✅ Peers Check: all clusters connected
         ✅ Connected to gloo-gateway-docs-mgt via ab46aa29a49914da789a6d5422aca279-541415195.us-east-2.elb.amazonaws.com
    ℹ️  Shared Services Check: no globally shared services found
    ====== 
    

      === Cluster: cluster2 ===
    ✅ Incompatible Environment Variable Check: all relevant environment variables are valid
    ✅ License Check: license is valid for multicluster
    ✅ CNI DNS Capture Check: AMBIENT_DNS_CAPTURE is enabled
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway istio-eastwest/istio-eastwest): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
         ✅ istio-eastwest/istio-eastwest available at ab46aa29a49914da789a6d5422aca279-541415195.us-east-2.elb.amazonaws.com
    ✅ Peers Check: all clusters connected
         ✅ Connected to gloo-mesh-core-docs-mgt via aab8471c7fcfa4a3c82f2d217b015d97-396238517.us-east-1.elb.amazonaws.com
    ℹ️  Shared Services Check: no globally shared services found
    ====== 
    
    ✅ Intermediate Certs Compatibility Check: all clusters have compatible intermediate certificates
    ✅ Network Configuration Check: all network configurations are valid
    ✅ Stale Workloads Check: skipped (flat network not detected)

Next

  • Add apps to the ambient mesh. For multicluster setups, this includes making specific services available across your linked cluster setup.
  • In a multicluster mesh, the east-west gateway serves as a ztunnel that allows traffic requests to flow across clusters, but it does not modify requests in any way. To control in-mesh traffic, you can instead apply policies to waypoint proxies that you create for a workload namespace.

Optional: Validate your multicluster setup

Both before and after you link clusters into a multicluster mesh, you can use the istioctl multicluster check command, along with other observability checks, to verify multiple aspects of multicluster ambient mesh support and status.

istioctl multicluster check

You can use the istioctl multicluster check --precheck command to check the individual readiness of each cluster before you run istioctl multicluster link to link them in a multicluster mesh. After linking, run istioctl multicluster check again to confirm that the connections were successful. This command performs the checks listed in the following sections, which you can review to understand what each check validates. Additionally, if any of the checks fail, run the command with the --verbose option, and review the following troubleshooting recommendations.

  istioctl multicluster check --verbose --contexts="$context1,$context2"
  

For more information about this command, see the CLI reference.

Incompatible environment variables

Checks that the ENABLE_PEERING_DISCOVERY=true environment variable, and optionally K8S_SELECT_WORKLOAD_ENTRIES=true, is set correctly and is supported for multicluster ambient mesh.

Example verbose output:

  --- Incompatible Environment Variable Check ---

✅ Incompatible Environment Variable Check: K8S_SELECT_WORKLOAD_ENTRIES is valid ("")
✅ Incompatible Environment Variable Check: ENABLE_PEERING_DISCOVERY is valid ("true")
✅ Incompatible Environment Variable Check: all relevant environment variables are valid
  

If this check fails, check your environment variables in your istiod configuration, such as by running helm get values --kube-context ${CLUSTER_CONTEXT} istiod -n istio-system -o yaml, and update your configuration.

License validity

Checks whether the license in use by istiod is valid for multicluster ambient mesh. Multicluster capabilities require an Enterprise-level license for Solo Enterprise for Istio.

Example verbose output:

  --- License Check ---

✅ License Check: license is valid for multicluster
  

If your license does not support multicluster ambient mesh, contact your Solo account representative.

Pod health

Checks the health of the pods in the cluster. All istiod, ztunnel, and east-west gateway pods across the checked clusters must be healthy and running for the multicluster mesh to function correctly.

Example verbose output:

  --- Pod Check (istiod) ---

NAME                        READY     STATUS      RESTARTS     AGE
istiod-6d9cdf88cf-l47tf     1/1       Running     0            10m18s

✅ Pod Check (istiod): all pods healthy


--- Pod Check (ztunnel) ---

NAME              READY     STATUS      RESTARTS     AGE
ztunnel-dvlwk     1/1       Running     0            10m6s

✅ Pod Check (ztunnel): all pods healthy


--- Pod Check (eastwest gateway) ---

NAME                                READY     STATUS      RESTARTS     AGE
istio-eastwest-857b77fc5d-qgnrl     1/1       Running     0            9m33s

✅ Pod Check (eastwest gateway): all pods healthy
  

To check any unhealthy pods, run the following commands. Consider checking the pod logs, and review Debug Istio.

  kubectl get po -n istio-system
kubectl get po -n istio-eastwest
  

East-west gateway status

Checks the status of the east-west gateways in the cluster. When an east-west gateway is created, the gateway controller creates a Kubernetes service to expose the gateway. Once this service is correctly attached to the gateway and has an address assigned, the east-west gateway has a Programmed status of true.

Example verbose output:

  --- Gateway Check ---

Gateway: istio-eastwest
Addresses:
- 172.18.7.110
Status: programmed ✅

✅ Gateway Check: all eastwest gateways programmed
  

If the Programmed status is not true, an issue might exist with the address allocation for the service. Check the east-west gateway with a command such as kubectl get svc -n istio-eastwest, and verify that your cloud provider can correctly allocate addresses to the service.

Remote peer gateway status

Checks the status of the remote peer gateways in the cluster, which represent the other peered clusters in the multicluster setup. These remote gateways configure the connection between the local cluster’s istiod control plane, and the peered clusters’ remote networks to enable xDS communication between peers. When the initial network connection between istiod and a remote peer is made, the gateway’s gloo.solo.io/PeerConnected status updates to true. Then, when the full xDS sync occurs between peers, the gateway’s gloo.solo.io/PeeringSucceeded status also updates to true. This check ensures that both statuses are true.

Example verbose output:

  --- Peers Check ---

Cluster: cluster2
Addresses:
- 172.18.7.130
Conditions:
- Accepted: True
- Programmed: True
- gloo.solo.io/PeerConnected: True
- gloo.solo.io/PeeringSucceeded: True
- gloo.solo.io/PeerDataPlaneProgrammed: True
Status: connected ✅

✅ Peers Check: all clusters connected
  

If the connection is severed between the peers, the gloo.solo.io/PeerConnected status becomes false. A failed connection between peers can be due to either a misconfiguration in the peering setup, or a network issue blocking port 15008 on the remote cluster, which is the cross-network HBONE port that the east-west gateway listens on. Review the steps you took to link clusters together, such as the steps outlined in the Helm default network guide. Additionally, review any firewall rules or network policies that might block access through port 15008 on the remote cluster.

Intermediate certificate compatibility

Confirms the certificate compatibility between peered clusters. This check reads the root-cert.pem from the istio-ca-root-cert configmap in the istio-system namespace, and uses x509 certificate validation to confirm the root cert is compatible with all of the clusters’ ca-cert.pem intermediate certificate chains from the cacerts secret.

Example verbose output:

  --- Intermediate Certs Compatibility Check ---

ℹ  Intermediate Certs Compatibility Check: cluster cluster1 root certificate SHA256 sum: 6d18f32e134824c158d97f32618657c45d5a83839f838ada751757139481537e
ℹ  Intermediate Certs Compatibility Check: cluster cluster2 root certificate SHA256 sum: 6d18f32e134824c158d97f32618657c45d5a83839f838ada751757139481537e
✅ Intermediate Certs Compatibility Check: cluster cluster1 has compatible intermediate certificates with cluster cluster2 
✅ Intermediate Certs Compatibility Check: cluster cluster2 has compatible intermediate certificates with cluster cluster1 
✅ Intermediate Certs Compatibility Check: all clusters have compatible intermediate certificates
  

If this check fails because the root certs are not valid for each peered clusters’ intermediate certificate chain, you can check the istiod logs for TLS errors when attempting to communicate with a peered cluster, such as the following:

  2025-12-04T22:09:22.474517Z     warn    deltaadsc       disconnected, retrying in 24.735483751s: delta stream: rpc error: code = Unavailable desc = connection error: desc = "error reading server preface: remote error: tls: unknown certificate authority"       target=peering-cluster2
  

Ensure each cluster has a cacerts secret in the istio-system namespace. To regenerate invalid certificates for each cluster, follow the example steps in Create a shared root of trust.

Network configuration

Confirms the network configuration of the multicluster mesh. For multicluster peering setups that do not use a flat network topology, each cluster must occupy a unique network. The network name must be defined with the label topology.istio.io/network and set on both the istio-system namespace and the istio-eastwest gateway resource. The same network name must also be set as the NETWORK environment variable on the ztunnel daemonset. Each remote gateway that represents that cluster must have the topology.istio.io/network label equal to the network of the remote cluster.

Example verbose output:

  --- Network Configuration Check ---

✅ Cluster cluster1 has network: cluster1
✅ Eastwest gateway istio-eastwest/istio-eastwest has correct network label: cluster1
✅ Cluster cluster2 has network: cluster2
✅ Eastwest gateway istio-eastwest/istio-eastwest has correct network label: cluster2
✅ Remote gateway istio-eastwest/istio-remote-peer-cluster2 references network cluster2 (clusters: [cluster2])
✅ Remote gateway istio-eastwest/istio-remote-peer-cluster1 references network cluster1 (clusters: [cluster1])
✅ Network Configuration Check: all network configurations are valid
  

Mismatched network identities cause errors in cross-cluster communication, which leads to error logs in ztunnel pods that indicate a network timeout on the outbound communication. Notably, the destination address on these errors is a 240.X.X.X address, instead of the correct remote peer gateway address. You can run kubectl logs -l app=ztunnel -n istio-system --tail=10 --context ${CLUSTER_CONTEXT} | grep -iE "error|warn" to review logs such as the following:

  2025-11-18T16:14:53.490573Z     error   access  connection complete     src.addr=240.0.2.27:46802 src.workload="ratings-v1-5dc79b6bcd-zm8v6" src.namespace="bookinfo" src.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings" dst.addr=240.0.9.43:15008 dst.hbone_addr=240.0.9.43:9080 dst.service="productpage.bookinfo.mesh.internal" dst.workload="autogenflat.portfolio1-soloiopoc-cluster1.bookinfo.productpage-v1-54bb874995-hblwp.ee508601917c" dst.namespace="bookinfo" dst.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage" direction="outbound" bytes_sent=0 bytes_recv=0 duration="10001ms" error="connection timed out, maybe a NetworkPolicy is blocking HBONE port 15008: deadline has elapsed"
  

To troubleshoot these issues, be sure that you use unique network names to represent each cluster, and that you correctly labeled the cluster’s istio-system namespace with that network name, such as by running kubectl label namespace istio-system --context ${CLUSTER_CONTEXT} topology.istio.io/network=${CLUSTER_NAME}. You can also relabel the east-west gateway in the cluster, and the remote peer gateways in other clusters that represent this cluster.

Stale workload entries

In flat network setups, checks for any outdated workload entries that must be removed from the multicluster mesh. Stale workload entries might exist from pods that were deleted, but the autogenerated entries for those workloads were not correctly cleaned up. If you do not use a flat network topology, no autogenerated workload entries exist to be validated, and this check can be ignored.

Example verbose output for a non-flat network setup:

  --- Stale Workloads Check ---

⚠  Stale Workloads Check: no autogenflat workload entries found
  

If you use a flat network topology, and this check fails with stale workload entries, run kubectl get workloadentries -n istio-system | grep autogenflat to list the autogenerated workload entries in the remote cluster, and compare the list to the output of kubectl get pods in the source cluster for those workloads. You can safely delete the stale workload entries in the remote cluster for pods that no longer exist in the source cluster, such as by running kubectl delete workloadentry -n istio-system <entry_name>.

Further debugging and observability

For additional guidance around observing your multicluster ambient mesh, check out the observability overview, which contains links to guides on using logs, metrics, and traces in your Istio environment.

For additional guidance around debugging your multicluster ambient mesh, check out the Istio troubleshooting guide.