Overview

About the integration

SPIRE offers robust workload attestation capabilities that provide significantly more control over how, when, and whether identities are granted to workloads. The Solo distribution of Istio includes Enterprise support for using SPIRE node agents (over an Envoy SDS socket) to attest and grant identities to the ambient mesh workloads that they proxy. This allows Istio to use these identities for mTLS connections between the ambient mesh workloads.

With the SPIRE integration, the ztunnel can act as a trusted spire-agent delegate on the node by using the SPIRE DelegatedIdentity API. Ztunnel can integrate with SPIRE to leverage SPIRE’s existing node and workload attestation plugin framework directly, as well as request workload certificates that are issued by SPIRE on the basis of those attestations.

How it works

Community Istio natively supports a SPIRE integration with the sidecar dataplane mode, in which you must mount sockets or volumes in every workload. However, Solo Enterprise for Istio’s support for SPIRE in the ambient dataplane mode functions much more simply. To enable the SPIRE integration with ambient, you only need to register your workloads with SPIRE, and then continue to label your service namespaces for the ambient dataplane mode as usual. Every ambient workload is automatically assigned a SPIRE-managed identity and uses that identity for mTLS, without the need to mount sockets or volumes in every workload.

In ambient, the Layer 4 node proxy, ztunnel, is responsible for capturing and encrypting all pod-to-pod traffic, and for managing workload identities. In the SPIRE-enabled ambient mode, ztunnel obtains those identities directly from the SPIRE agent that runs on the same node, and thus acts as a trusted delegate of SPIRE. However, note that the SPIRE agent attests the workloads, not ztunnel.

This allows the ambient dataplane to integrate with SPIRE’s delegation API as a trusted delegate, while leveraging SPIRE’s multifactor node and workload attestation plugin frameworks directly. The ambient dataplane can request workload certificates issued by SPIRE on the basis of those attestations.

Review the following sequence diagram that shows how SPIRE attestation in ambient works.

Figure: SPIRE attestation for ambient workloads

  1. The ztunnel on the same node as the ambient-enrolled workload pod obtains the PID of the workload container in the pod.
  2. The ztunnel then requests the SPIRE agent on the same node to attest the identity of the workload using its PID.
  3. The SPIRE agent performs checks against the workload to determine whether it can grant the workload a trusted identity.
  4. If the checks succeed, the SPIRE agent returns a certificate (a SPIFFE X.509 SVID) to the ztunnel for the workload.
  5. The ztunnel enforces mTLS connections for the workload pod using the SPIRE-issued certificate.
  6. The workload pod can then make mTLS-secured connections to other ambient workloads.
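
After a workload is attested, its registration entry is visible on the SPIRE server. The following is a quick way to inspect the issued identities, assuming the default pod name from the SPIRE Helm chart that is installed later in this guide.

  # List registration entries known to the SPIRE server
  # (binary path assumes the official SPIRE server image)
  kubectl exec -n spire-server spire-server-0 -c spire-server -- \
    /opt/spire/bin/spire-server entry show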

To learn more about SPIRE, and how the SPIRE integration with Istio works in Solo Enterprise for Istio, check out this blog post.

Single cluster

Deploy an ambient mesh that uses SPIRE workload identity attestation.

Set up tools

Before you begin, set up the following tools and save details in environment variables.

  1. Set your Enterprise-level license for Solo Enterprise for Istio as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. Note that you might have previously saved this key in another variable, such as ${SOLO_LICENSE_KEY} or ${GLOO_MESH_LICENSE_KEY}.

      export SOLO_ISTIO_LICENSE_KEY=<enterprise_license_key>
      
  2. Save the name of your cluster, which you use in the SPIRE trust domain settings.

      export CLUSTER_NAME=<cluster_name>
      
  3. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table. Then, save the following details in environment variables.

    1. Save the Solo distribution of Istio patch version and tag.

        export ISTIO_VERSION=1.27.8
        # Change the tags as needed
        export ISTIO_IMAGE=${ISTIO_VERSION}-solo
    2. Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.

        # 12-character hash at the end of the minor version repo URL
        export REPO_KEY=<repo_key>
        export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
        export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
  4. Make sure that you have the OpenSSL version of openssl, not LibreSSL. The openssl version must be at least 1.1.

    1. Check your openssl version. If you see LibreSSL in the output, continue to the next step.
        openssl version
        
    2. Install the OpenSSL version (not LibreSSL). For example, you might use Homebrew.
        brew install openssl
        
    3. Review the output of the OpenSSL installation for the path of the binary file. You can add the binary's directory to your PATH, or call the full path whenever the following steps use an openssl command.
      • For example, openssl might be installed along the following path: /usr/local/opt/openssl@3/bin/
      • To run commands with the installed version of OpenSSL instead of the default LibreSSL, prefix them with the full path. For example: /usr/local/opt/openssl@3/bin/openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650...
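
If you script your environment setup, a small guard can catch an incompatible openssl early. A minimal sketch:

  # Fail fast if the default openssl is LibreSSL or older than 1.1
  if openssl version | grep -qE 'LibreSSL|OpenSSL (0\.|1\.0)'; then
    echo "Incompatible openssl; install OpenSSL 1.1+ and update your PATH" >&2
    exit 1
  fi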

Prepare SPIRE certificates

Create the root and intermediate CA for the SPIRE server. The SPIRE server later uses these CAs to create certificates for any attested workloads.

  1. Create a directory for the certificates, and save the CA certificate configurations.

      mkdir -p certs/{root-ca,intermediate-ca}
    cd certs
    
    cat >root-ca.cnf <<EOF
    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req
    prompt = no
    
    [req_distinguished_name]
    CN = SPIRE Root CA
    
    [v3_req]
    keyUsage = critical, keyCertSign, cRLSign
    basicConstraints = critical, CA:true, pathlen:2
    subjectKeyIdentifier = hash
    EOF
    
    cat >intermediate-ca.cnf <<EOF
    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req
    prompt = no
    
    [req_distinguished_name]
    CN = SPIRE Intermediate CA
    
    [v3_req]
    keyUsage = critical, keyCertSign, cRLSign
    basicConstraints = critical, CA:true, pathlen:1
    subjectKeyIdentifier = hash
    EOF
      
  2. Create the root CA and intermediate CA, and sign the intermediate CA with the root CA.

      # Create root CA
    openssl genrsa -out root-ca/root-ca.key 2048
    openssl req -new -x509 -key root-ca/root-ca.key -out root-ca/root-ca.crt -config root-ca.cnf -days 3650
    
    # Create intermediate CA
    openssl genrsa -out intermediate-ca/ca.key 2048
    openssl req -new -key intermediate-ca/ca.key -out intermediate-ca/ca.csr -config intermediate-ca.cnf -subj "/CN=SPIRE INTERMEDIATE CA"
    
    # Sign CSR with root CA
    openssl x509 -req -in intermediate-ca/ca.csr -CA root-ca/root-ca.crt -CAkey root-ca/root-ca.key -CAcreateserial \
      -out intermediate-ca/ca.crt -days 1825 -extensions v3_req -extfile intermediate-ca.cnf
    
    # Create the bundle file (intermediate + root)
    cat intermediate-ca/ca.crt root-ca/root-ca.crt > intermediate-ca/ca-chain.pem
    
    # Create the root CA bundle
    cp root-ca/root-ca.crt root-ca-bundle.pem
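
    Optionally, confirm that the intermediate CA chains back to the root before creating the secret:

      # Expect "intermediate-ca/ca.crt: OK"
      openssl verify -CAfile root-ca/root-ca.crt intermediate-ca/ca.crt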
      
  3. Create the spire-server namespace, and store the certificates in a secret that is ready to be mounted onto the SPIRE server.

      kubectl create namespace spire-server
    kubectl create secret generic spiffe-upstream-ca \
      --from-file=tls.crt=certs/intermediate-ca/ca.crt \
      --from-file=tls.key=certs/intermediate-ca/ca.key \
      --from-file=bundle.crt=certs/intermediate-ca/ca-chain.pem \
      -n spire-server
      

Install SPIRE

Use Helm to deploy SPIRE in each cluster.

  1. Add and update the SPIRE Helm repo.

      helm repo add spire https://spiffe.github.io/helm-charts-hardened/
    helm repo update spire
      
  2. Create the SPIRE CRDs and SPIRE Helm releases.

      helm upgrade -i spire-crds spire/spire-crds \
    --namespace spire-server \
    --create-namespace \
    --version 0.5.0 \
    --wait
    
    helm upgrade -i spire spire/spire \
    --namespace spire-server \
    --version 0.24.2 \
    -f - <<EOF
    # Source https://github.com/solo-io/istio/blob/build/release-1.23/tools/install-spire.sh
    global:
      spire:
        trustDomain: ${CLUSTER_NAME}
    spire-agent:
        authorizedDelegates:
            - "spiffe://${CLUSTER_NAME}/ns/istio-system/sa/ztunnel"
        sockets:
            admin:
                enabled: true
                mountOnHost: true
            hostBasePath: /run/spire/agent/sockets
        tolerations:
          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    spire-server:
      upstreamAuthority:
        disk:
          enabled: true
          secret:
            create: false
            name: "spiffe-upstream-ca"
    
    spiffe-csi-driver:
        tolerations:
          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    EOF
      
  3. Verify that the SPIRE server is deployed.

      kubectl -n spire-server wait --for=condition=Ready pods --all
      
  4. Configure SPIRE to issue certificates for the ambient mesh workloads.

      kubectl apply -f - <<EOF
    # Source https://github.com/solo-io/istio/blob/build/release-1.23/tools/install-spire.sh
    ---
    # ClusterSPIFFEID for ztunnel
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-ztunnel-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          app: "ztunnel"
    ---
    # ClusterSPIFFEID for waypoints
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-waypoint-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          istio.io/gateway-name: waypoint
    ---
    # ClusterSPIFFEID for workloads
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-ambient-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          istio.io/dataplane-mode: ambient
    EOF
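
    You can verify that the registration policies were created. The spire-controller-manager typically reports selection statistics in each resource's status.

      kubectl get clusterspiffeids
      kubectl describe clusterspiffeid istio-ambient-reg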
      

Any workloads that you later deploy to the ambient mesh will now be able to get mTLS certificates from SPIRE.

Install an ambient mesh with SPIRE enabled

Use Helm to create the ambient mesh components, with the SPIRE integration enabled.

  1. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
      
  2. Install the base chart, which contains the CRDs and cluster roles required to set up Istio.

      helm upgrade --install istio-base oci://${HELM_REPO}/base \
    --namespace istio-system \
    --create-namespace \
    --version ${ISTIO_IMAGE} \
    -f - <<EOF
    defaultRevision: ""
    profile: ambient
    EOF
      

    You can optionally verify that the CRDs are successfully installed by running the following command.

      kubectl get crds -l app.kubernetes.io/instance=istio-base
      

    Example output:

      NAME                                       CREATED AT
    authorizationpolicies.security.istio.io    2025-12-16T22:56:00Z
    destinationrules.networking.istio.io       2025-12-16T22:56:00Z
    envoyfilters.networking.istio.io           2025-12-16T22:56:00Z
    gateways.networking.istio.io               2025-12-16T22:56:00Z
    peerauthentications.security.istio.io      2025-12-16T22:56:00Z
    proxyconfigs.networking.istio.io           2025-12-16T22:56:00Z
    requestauthentications.security.istio.io   2025-12-16T22:56:00Z
    segments.admin.solo.io                     2025-12-16T22:56:00Z
    serviceentries.networking.istio.io         2025-12-16T22:56:00Z
    sidecars.networking.istio.io               2025-12-16T22:56:00Z
    telemetries.telemetry.istio.io             2025-12-16T22:56:00Z
    virtualservices.networking.istio.io        2025-12-16T22:56:00Z
    wasmplugins.extensions.istio.io            2025-12-16T22:56:00Z
    workloadentries.networking.istio.io        2025-12-16T22:56:00Z
    workloadgroups.networking.istio.io         2025-12-16T22:56:00Z
      
  3. Create the istiod control plane in your cluster.

      helm upgrade --install istiod oci://${HELM_REPO}/istiod \
    --namespace istio-system \
    --version ${ISTIO_IMAGE} \
    -f - <<EOF
    global:
      hub: ${REPO}
      proxy:
        clusterDomain: cluster.local
      tag: ${ISTIO_IMAGE}
    gateways:
      spire:
        workloads: true # SPIRE enabled
    meshConfig:
      accessLogFile: /dev/stdout
      defaultConfig:
        proxyMetadata:
          ISTIO_META_DNS_AUTO_ALLOCATE: "true"
          ISTIO_META_DNS_CAPTURE: "true"
      trustDomain: "${CLUSTER_NAME}"  # Matches the custom trustDomain in SPIRE settings
    env:
      PILOT_ENABLE_IP_AUTOALLOCATE: "true"
      PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true"
    pilot:
      cni:
        namespace: istio-system
        enabled: true
    profile: ambient
    license:
      value: ${SOLO_ISTIO_LICENSE_KEY}
      # Uncomment if you prefer to specify your license secret
      # instead of an inline value.
      # secretRef:
      #   name: 
      #   namespace: 
    EOF
      
  4. Install the Istio CNI node agent daemonset. Note that although the CNI is included in this section, it is technically not part of the control plane or data plane.

      helm upgrade --install istio-cni oci://${HELM_REPO}/cni \
    --namespace istio-system \
    --version ${ISTIO_IMAGE} \
    -f - <<EOF
    ambient:
      dnsCapture: true
    excludeNamespaces:
      - istio-system
      - kube-system
    global:
      hub: ${REPO}
      tag: ${ISTIO_IMAGE}
    profile: ambient
    EOF
      
  5. Verify that the components of the Istio ambient control plane are successfully installed. Because the Istio CNI is deployed as a daemon set, the number of CNI pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A | grep istio
      

    Example output:

      istio-system   istiod-85c4dfd97f-mncj5                             1/1     Running   0               40s
    istio-system   istio-cni-node-pr5rl                                1/1     Running   0               9s
    istio-system   istio-cni-node-pvmx2                                1/1     Running   0               9s
    istio-system   istio-cni-node-6q26l                                1/1     Running   0               9s
      
  6. Install the ztunnel daemonset.

      helm upgrade --install ztunnel oci://${HELM_REPO}/ztunnel \
    --namespace istio-system \
    --version ${ISTIO_IMAGE} \
    -f - <<EOF
    configValidation: true
    enabled: true
    env:
      L7_ENABLED: "true"
    hub: ${REPO}
    istioNamespace: istio-system
    namespace: istio-system
    profile: ambient
    proxy:
      clusterDomain: cluster.local
    spire:
      enabled: true # SPIRE enabled
    tag: ${ISTIO_IMAGE}
    terminationGracePeriodSeconds: 29
    variant: distroless
    EOF
      
  7. Verify that the ztunnel pods are successfully installed. Because the ztunnel is deployed as a daemon set, the number of pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A | grep ztunnel
      

    Example output:

      ztunnel-tvtzn             1/1     Running   0          7s
    ztunnel-vtpjm             1/1     Running   0          4s
    ztunnel-hllxg             1/1     Running   0          4s
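
    To confirm that the SPIRE integration is wired up, you can check that the ztunnel daemonset mounts the SPIRE agent socket path that you configured earlier (/run/spire/agent/sockets). The exact volume and mount names can vary by chart version.

      kubectl get ds ztunnel -n istio-system -o yaml | grep -i -B2 -A3 spire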
      

Deploy services to the ambient mesh

Add apps to the ambient mesh. Note that whenever you label a workload to add it to your ambient mesh, the ztunnel on the same node requests that the SPIRE agent perform workload attestation. The certificate issued for the workload enables it to initiate mTLS communication within the mesh.
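
For example, to enroll all workloads in a namespace, label the namespace as usual. As a minimal sketch, note that the istio-ambient-reg ClusterSPIFFEID that you created earlier selects pods by the istio.io/dataplane-mode: ambient label.

  # Enroll all workloads in a namespace into the ambient mesh
  kubectl label namespace <namespace> istio.io/dataplane-mode=ambient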

Multicluster

Deploy a multicluster ambient mesh that uses SPIRE workload identity attestation.

Set up tools

Before you begin, set up the following tools and save details in environment variables.

  1. Set your Enterprise-level license for Solo Enterprise for Istio as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. Note that you might have previously saved this key in another variable, such as ${SOLO_LICENSE_KEY} or ${GLOO_MESH_LICENSE_KEY}.

      export SOLO_ISTIO_LICENSE_KEY=<enterprise_license_key>
      
  2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions.

  3. Save the Solo distribution of Istio version.

      export ISTIO_VERSION=1.27.8
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
      
  4. Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.

      # 12-character hash at the end of the repo URL
    export REPO_KEY=<repo_key>
    export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
    export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
      
  5. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands. This script automatically detects your OS and architecture, downloads the appropriate Solo distribution of Istio binary, and verifies the installation.

      bash <(curl -sSfL https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/install-istioctl.sh)
    export PATH=${HOME}/.istioctl/bin:${PATH}
      
  6. Save the names and kubeconfig contexts of each cluster. This guide uses two clusters as an example. To add more clusters to the multicluster setup, save additional variables for them and repeat the following commands for each cluster.

      export cluster1=<cluster1_name>
    export cluster2=<cluster2_name>
    export context1=<cluster1_context>
    export context2=<cluster2_context>
      
  7. Make sure that you have the OpenSSL version of openssl, not LibreSSL. The openssl version must be at least 1.1.

    1. Check your openssl version. If you see LibreSSL in the output, continue to the next step.
        openssl version
        
    2. Install the OpenSSL version (not LibreSSL). For example, you might use Homebrew.
        brew install openssl
        
    3. Review the output of the OpenSSL installation for the path of the binary file. You can add the binary's directory to your PATH, or call the full path whenever the following steps use an openssl command.
      • For example, openssl might be installed along the following path: /usr/local/opt/openssl@3/bin/
      • To run commands with the installed version of OpenSSL instead of the default LibreSSL, prefix them with the full path. For example: /usr/local/opt/openssl@3/bin/openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650...

Prepare SPIRE certificates

Create a root CA, and one intermediate CA per cluster, for the SPIRE servers. Each SPIRE server later uses its CA to create certificates for any attested workloads.

  1. Create a directory for the certificates, and save the CA certificate configurations.

      mkdir -p certs/{root-ca,$cluster1,$cluster2}
    cd certs
    
    cat >root-ca.cnf <<EOF
    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req
    prompt = no
    
    [req_distinguished_name]
    CN = SPIRE Root CA
    
    [v3_req]
    keyUsage = critical, keyCertSign, cRLSign
    basicConstraints = critical, CA:true, pathlen:2
    subjectKeyIdentifier = hash
    EOF
    
    cat >intermediate-ca.cnf <<EOF
    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req
    prompt = no
    
    [req_distinguished_name]
    CN = SPIRE Intermediate CA
    
    [v3_req]
    keyUsage = critical, keyCertSign, cRLSign
    basicConstraints = critical, CA:true, pathlen:1
    subjectKeyIdentifier = hash
    EOF
      
  2. Create the root CA. Then create one intermediate CA for each workload cluster, and use the root CA to sign both intermediate CAs.

      # Create root CA
    openssl genrsa -out root-ca/root-ca.key 2048
    openssl req -new -x509 -key root-ca/root-ca.key -out root-ca/root-ca.crt -config root-ca.cnf -days 3650
    
    # Create cluster 1 intermediate CA
    openssl genrsa -out ${cluster1}/${cluster1}-ca.key 2048
    openssl req -new -key ${cluster1}/${cluster1}-ca.key -out ${cluster1}/${cluster1}-ca.csr -config intermediate-ca.cnf -subj "/CN=SPIRE ${cluster1} CA"
    # Sign cluster 1 CSR with root CA
    openssl x509 -req -in ${cluster1}/${cluster1}-ca.csr -CA root-ca/root-ca.crt -CAkey root-ca/root-ca.key -CAcreateserial \
      -out ${cluster1}/${cluster1}-ca.crt -days 1825 -extensions v3_req -extfile intermediate-ca.cnf
    
    # Create cluster 2 intermediate CA
    openssl genrsa -out ${cluster2}/${cluster2}-ca.key 2048
    openssl req -new -key ${cluster2}/${cluster2}-ca.key -out ${cluster2}/${cluster2}-ca.csr -config intermediate-ca.cnf -subj "/CN=SPIRE ${cluster2} CA"
    # Sign cluster 2 CSR with root CA
    openssl x509 -req -in ${cluster2}/${cluster2}-ca.csr -CA root-ca/root-ca.crt -CAkey root-ca/root-ca.key -CAcreateserial \
      -out ${cluster2}/${cluster2}-ca.crt -days 1825 -extensions v3_req -extfile intermediate-ca.cnf
    
    # Create the bundle file for cluster 1 (intermediate + root)
    cat ${cluster1}/${cluster1}-ca.crt root-ca/root-ca.crt > ${cluster1}/${cluster1}-ca-chain.pem
    
    # Create the bundle file for cluster 2 (intermediate + root)
    cat ${cluster2}/${cluster2}-ca.crt root-ca/root-ca.crt > ${cluster2}/${cluster2}-ca-chain.pem
    
    # Create the root CA bundle
    cp root-ca/root-ca.crt root-ca-bundle.pem
      
  3. Create the spire-server namespace in each cluster, and store the certificates in secrets that are ready to be mounted onto the SPIRE servers.

      function create_spire_certs() {
      context=${1:?context}
      cluster=${2:?cluster}
      kubectl --context=${context} create namespace spire-server
      kubectl --context=${context} create secret generic spiffe-upstream-ca \
        --from-file=tls.crt=certs/${cluster}/${cluster}-ca.crt \
        --from-file=tls.key=certs/${cluster}/${cluster}-ca.key \
        --from-file=bundle.crt=certs/${cluster}/${cluster}-ca-chain.pem \
        -n spire-server
    }
    
    create_spire_certs ${context1} ${cluster1}
    create_spire_certs ${context2} ${cluster2}
      

Install SPIRE

Use Helm to deploy SPIRE in each cluster.

  1. Add and update the SPIRE Helm repo.

      helm repo add spire https://spiffe.github.io/helm-charts-hardened/
    helm repo update spire
      
  2. Create the SPIRE CRDs Helm release in each cluster.

      for context in ${context1} ${context2}; do
      helm upgrade --kube-context=${context} -i spire-crds spire/spire-crds \
      --namespace spire-server \
      --create-namespace \
      --version 0.5.0 \
      --wait
    done
      
  3. Create the SPIRE Helm release in each cluster.

      function install_spire() {
      context=${1:?context}
      cluster=${2:?cluster}
      helm upgrade --kube-context=${context} -i spire spire/spire \
      --namespace spire-server \
      --version 0.24.2 \
      -f - <<EOF
    # Source https://github.com/solo-io/istio/blob/build/release-1.23/tools/install-spire.sh
    global:
      spire:
        trustDomain: ${cluster}
    spire-agent:
        authorizedDelegates:
            - "spiffe://${cluster}/ns/istio-system/sa/ztunnel"
        sockets:
            admin:
                enabled: true
                mountOnHost: true
            hostBasePath: /run/spire/agent/sockets
        tolerations:
          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    spire-server:
      upstreamAuthority:
        disk:
          enabled: true
          secret:
            create: false
            name: "spiffe-upstream-ca"
    
    spiffe-csi-driver:
        tolerations:
          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    EOF
    }
    
    install_spire ${context1} ${cluster1}
    install_spire ${context2} ${cluster2}
      
  4. Verify that the SPIRE servers are deployed.

      kubectl --context=${context1} -n spire-server wait --for=condition=Ready pods --all
    kubectl --context=${context2} -n spire-server wait --for=condition=Ready pods --all
      
  5. Configure SPIRE to issue certificates for the ambient mesh workloads.

      cat >cluster-spiffe-id.yaml <<EOF
    # Source https://github.com/solo-io/istio/blob/build/release-1.23/tools/install-spire.sh
    ---
    # ClusterSPIFFEID for ztunnel
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-ztunnel-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          app: "ztunnel"
    ---
    # ClusterSPIFFEID for waypoints
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-waypoint-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          istio.io/gateway-name: waypoint
    ---
    # ClusterSPIFFEID for workloads
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-ambient-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          istio.io/dataplane-mode: ambient
    EOF
    
    kubectl --context=${context1} apply -f cluster-spiffe-id.yaml
    kubectl --context=${context2} apply -f cluster-spiffe-id.yaml
      

Any workloads that you later deploy to the ambient mesh will now be able to get mTLS certificates from SPIRE.

Create a shared root of trust for istiod

Each cluster in the multicluster setup must share a common root of trust. You can achieve this by providing a root certificate that is signed by a PKI provider, or by creating a custom root certificate for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
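
For illustration, the following sketch shows the shape of the cacerts secret that istiod reads in each cluster, following Istio's plug-in CA file naming conventions. The file paths are placeholders for the per-cluster intermediate CA materials that you generate from your shared root.

  # Illustrative only: create istiod's plug-in CA secret in cluster1.
  # Replace the placeholder paths with your own per-cluster CA files,
  # and repeat for each cluster with its own intermediate.
  kubectl --context=${context1} create namespace istio-system
  kubectl --context=${context1} create secret generic cacerts -n istio-system \
    --from-file=ca-cert.pem=<cluster1_intermediate>.crt \
    --from-file=ca-key.pem=<cluster1_intermediate>.key \
    --from-file=root-cert.pem=<shared_root>.crt \
    --from-file=cert-chain.pem=<cluster1_intermediate_chain>.pem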

Install ambient meshes with SPIRE enabled

In each cluster, use Helm to create the ambient mesh components, with the SPIRE integration enabled.

  1. Apply the CRDs for the Kubernetes Gateway API to each cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml --context ${context1}
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml --context ${context2}
      
  2. Install the base chart, which contains the CRDs and cluster roles required to set up Istio, in both clusters.

      for context in ${context1} ${context2}; do
      helm upgrade --install istio-base oci://${HELM_REPO}/base \
      --namespace istio-system \
      --create-namespace \
      --kube-context ${context} \
      --version ${ISTIO_IMAGE} \
      -f - <<EOF
    defaultRevision: ""
    profile: ambient
    EOF
    done
      

    You can optionally verify that the CRDs are successfully installed in both clusters.

      kubectl get crds -l app.kubernetes.io/instance=istio-base --context ${context1}
    kubectl get crds -l app.kubernetes.io/instance=istio-base --context ${context2}
      

    Example output:

      NAME                                       CREATED AT
    authorizationpolicies.security.istio.io    2025-12-16T22:56:00Z
    destinationrules.networking.istio.io       2025-12-16T22:56:00Z
    envoyfilters.networking.istio.io           2025-12-16T22:56:00Z
    gateways.networking.istio.io               2025-12-16T22:56:00Z
    peerauthentications.security.istio.io      2025-12-16T22:56:00Z
    proxyconfigs.networking.istio.io           2025-12-16T22:56:00Z
    requestauthentications.security.istio.io   2025-12-16T22:56:00Z
    segments.admin.solo.io                     2025-12-16T22:56:00Z
    serviceentries.networking.istio.io         2025-12-16T22:56:00Z
    sidecars.networking.istio.io               2025-12-16T22:56:00Z
    telemetries.telemetry.istio.io             2025-12-16T22:56:00Z
    virtualservices.networking.istio.io        2025-12-16T22:56:00Z
    wasmplugins.extensions.istio.io            2025-12-16T22:56:00Z
    workloadentries.networking.istio.io        2025-12-16T22:56:00Z
    workloadgroups.networking.istio.io         2025-12-16T22:56:00Z
      
  3. Create the istiod control plane in both clusters.

      function install_istiod() {
      context=${1:?context}
      cluster=${2:?cluster}
      helm upgrade --install istiod oci://${HELM_REPO}/istiod \
      --namespace istio-system \
      --kube-context ${context} \
      --version ${ISTIO_IMAGE} \
      -f - <<EOF
    env:
      # Assigns IP addresses to multicluster services
      PILOT_ENABLE_IP_AUTOALLOCATE: "true"
      # Required when meshConfig.trustDomain is set
      PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true"
    global:
      hub: ${REPO}
      multiCluster:
        clusterName: ${cluster}
      network: ${cluster}
      proxy:
        clusterDomain: cluster.local
      tag: ${ISTIO_IMAGE}
    meshConfig:
      accessLogFile: /dev/stdout
      defaultConfig:
        proxyMetadata:
          ISTIO_META_DNS_AUTO_ALLOCATE: "true"
          ISTIO_META_DNS_CAPTURE: "true"
      trustDomain: "${cluster}"  # Matches the custom trustDomain in SPIRE settings
    gateways:
      spire:
        workloads: true # SPIRE enabled
    pilot:
      cni:
        namespace: istio-system
        enabled: true
    # Required to enable multicluster support
    platforms:
      peering:
        enabled: true
    profile: ambient
    license:
      value: ${SOLO_ISTIO_LICENSE_KEY}
      # Uncomment if you prefer to specify your license secret
      # instead of an inline value.
      # secretRef:
      #   name:
      #   namespace:
    EOF
    }
    
    install_istiod ${context1} ${cluster1}
    install_istiod ${context2} ${cluster2}
      
  4. Install the Istio CNI node agent daemonset in both clusters.

      for context in ${context1} ${context2}; do
      helm upgrade --install istio-cni oci://${HELM_REPO}/cni \
      --namespace istio-system \
      --kube-context ${context} \
      --version ${ISTIO_IMAGE} \
      -f - <<EOF
    ambient:
      dnsCapture: true
    excludeNamespaces:
      - istio-system
      - kube-system
    global:
      hub: ${REPO}
      tag: ${ISTIO_IMAGE}
    profile: ambient
    EOF
    done
      
  5. Install the ztunnel daemonset in both clusters.

      function install_ztunnel() {
      context=${1:?context}
      cluster=${2:?cluster}
      helm upgrade --install ztunnel oci://${HELM_REPO}/ztunnel \
      --namespace istio-system \
      --kube-context ${context} \
      --version ${ISTIO_IMAGE} \
      -f - <<EOF
    configValidation: true
    enabled: true
    env:
      L7_ENABLED: "true"
      # Required when a unique trust domain is set for each cluster
      SKIP_VALIDATE_TRUST_DOMAIN: "true"
    hub: ${REPO}
    istioNamespace: istio-system
    multiCluster:
      clusterName: ${cluster}
    namespace: istio-system
    network: ${cluster}
    profile: ambient
    proxy:
      clusterDomain: cluster.local
    spire:
      enabled: true # SPIRE enabled
    tag: ${ISTIO_IMAGE}
    terminationGracePeriodSeconds: 29
    variant: distroless
    EOF
    }
    
    install_ztunnel ${context1} ${cluster1}
    install_ztunnel ${context2} ${cluster2}
      
  6. Verify that the components of the Istio ambient control and data plane are successfully installed in both clusters. Because the Istio CNI and ztunnel are deployed as daemon sets, the number of CNI and ztunnel pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -A --context ${context1} | grep -E 'istio|ztunnel'
    kubectl get pods -A --context ${context2} | grep -E 'istio|ztunnel'
      

    Example output:

      istiod-85c4dfd97f-mncj5      1/1     Running   0          40s
    istio-cni-node-pr5rl         1/1     Running   0          9s
    istio-cni-node-pvmx2         1/1     Running   0          9s
    istio-cni-node-6q26l         1/1     Running   0          9s
    ztunnel-tvtzn                1/1     Running   0          7s
    ztunnel-vtpjm                1/1     Running   0          4s
    ztunnel-hllxg                1/1     Running   0          4s
      
  7. Label the istio-system namespace with the clusters’ network names, which you previously set to each cluster name in the global.network field of the istiod installations. The ambient control plane uses this label internally to group pods that exist in the same L3 network.

      kubectl label namespace istio-system --context ${context1} topology.istio.io/network=${cluster1}
    kubectl label namespace istio-system --context ${context2} topology.istio.io/network=${cluster2}
      

Create east-west gateways so that traffic requests can be routed cross-cluster. Then, link clusters to enable cross-cluster service discovery.

  1. Create an east-west gateway in the istio-eastwest namespace. An east-west gateway facilitates traffic between services in each cluster in your multicluster mesh.

      for context in ${context1} ${context2}; do
      kubectl create namespace istio-eastwest --context ${context}
      istioctl multicluster expose --namespace istio-eastwest --context ${context} --generate > ew-gateway.yaml
      kubectl apply -f ew-gateway.yaml --context ${context}
    done
      
  2. Verify that the east-west gateways are successfully deployed.

      kubectl get pods -n istio-eastwest --context ${context1}
    kubectl get pods -n istio-eastwest --context ${context2}
      
  3. Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.

    1. Optional: Before you link clusters, you can check the individual readiness of each cluster for linking by running the istioctl multicluster check --precheck command. For more information about this command, see the CLI reference. If any checks fail, run the command with --verbose, and see Validate your multicluster setup.

        istioctl multicluster check --precheck --contexts="$context1,$context2"
        

      Before continuing to the next step, make sure that the following checks pass as expected:
      ✅ Relevant environment variables on istiod are supported.
      ✅ The license in use by istiod supports multicluster.
      ✅ All istiod, ztunnel, and east-west gateway pods are healthy.
      ✅ The east-west gateway is programmed.

    2. Link the clusters. You can link them either bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from the services in any of the other linked clusters. In an asymmetrical setup, you allow one cluster to send requests to another cluster, but the other cluster cannot send requests back to the first cluster. An example bi-directional linking command follows.
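
      For example, to link the two example clusters bi-directionally, pass both contexts to the link command. The flag syntax mirrors the check commands in this section; see the CLI reference for asymmetric linking options.

        istioctl multicluster link --contexts="$context1,$context2"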

    3. Verify that peer linking was successful by running the istioctl multicluster check command. If any checks fail, run the command with --verbose, and see Validate your multicluster setup.

        istioctl multicluster check --contexts="$context1,$context2"
        

      In this example output, the remote peer gateways are successfully connected, the intermediate certificates are compatible between the clusters, each cluster has a unique, properly configured network, and no stale workloads were found because no autogenerated workload entries existed in the clusters prior to peering. If you do have preexisting autogenerated workload entries, the check verifies whether all entries are up to date.

        === Cluster: cluster1 ===
      ✅ Incompatible Environment Variable Check: all relevant environment variables are valid
      ✅ License Check: license is valid for multicluster
      ✅ Pod Check (istiod): all pods healthy
      ✅ Pod Check (ztunnel): all pods healthy
      ✅ Pod Check (eastwest gateway): all pods healthy
      ✅ Gateway Check: all eastwest gateways programmed
      ✅ Peers Check: all clusters connected
      ====== 
      
      === Cluster: cluster2 ===
      ✅ Incompatible Environment Variable Check: all relevant environment variables are valid
      ✅ License Check: license is valid for multicluster
      ✅ Pod Check (istiod): all pods healthy
      ✅ Pod Check (ztunnel): all pods healthy
      ✅ Pod Check (eastwest gateway): all pods healthy
      ✅ Gateway Check: all eastwest gateways programmed
      ✅ Peers Check: all clusters connected
      ====== 
      
      ✅ Intermediate Certs Compatibility Check: all clusters have compatible intermediate certificates
      ✅ Network Configuration Check: all network configurations are valid
      ⚠  Stale Workloads Check: no autogenflat workload entries found
        
    4. Optional: Verify that the istiod control plane for each peered cluster is included in each cluster’s proxy status list.

        istioctl proxy-status --context ${context1}
      istioctl proxy-status --context ${context2}
        

      Example output for cluster1, in which you can verify that the istiod control plane for cluster2 is listed:

        NAME                                               CLUSTER          ISTIOD                      VERSION              SUBSCRIBED TYPES
      istio-eastwest-67fd5679dc-fhsxs.istio-eastwest     cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
      istiod-6bc6765484-5bbhd.istio-system               cluster2         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     3 (FSDS,SGDS,WDS)
      ztunnel-5f8rb.kube-system                          cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
      ztunnel-f96kh.kube-system                          cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
      ztunnel-vtj4f.kube-system                          cluster1         istiod-7b7c9cc4c6-bdm9c     1.27.8-solo-fips     2 (WADS,WDS)
        

Deploy services to the multicluster mesh

Add apps to the ambient mesh. This includes labeling services so that they are included in the ambient mesh, and making the services available across your linked cluster setup.

Note that whenever you label a workload to add it to your ambient mesh, the ztunnel on the same node requests that the SPIRE agent perform workload attestation. The certificate issued for the workload enables it to initiate mTLS communication within the mesh.

Optional: Validate your multicluster setup

Both before and after you link clusters into a multicluster mesh, you can use the istioctl multicluster check command, along with other observability checks, to verify multiple aspects of multicluster ambient mesh support and status.

istioctl multicluster check

You can use the istioctl multicluster check --precheck command to check the individual readiness of each cluster before you run istioctl multicluster link to link them in a multicluster mesh. Run istioctl multicluster check again after linking to confirm that the connections were successful. This command performs the checks that are listed in the following sections, which you can review to understand what each check validates. If any of the checks fail, run the command with the --verbose option, and review the following troubleshooting recommendations.

  istioctl multicluster check --verbose --contexts="$context1,$context2"
  

For more information about this command, see the CLI reference.

Incompatible environment variables

Checks whether the ENABLE_PEERING_DISCOVERY and, optionally, K8S_SELECT_WORKLOAD_ENTRIES environment variables are set to values that are incompatible with or unsupported for multicluster ambient mesh.

Example verbose output:

  --- Incompatible Environment Variable Check ---

✅ Incompatible Environment Variable Check: K8S_SELECT_WORKLOAD_ENTRIES is valid ("")
✅ Incompatible Environment Variable Check: ENABLE_PEERING_DISCOVERY is valid ("true")
✅ Incompatible Environment Variable Check: all relevant environment variables are valid
  

If this check fails, check your environment variables in your istiod configuration, such as by running helm get values --kube-context ${CLUSTER_CONTEXT} istiod -n istio-system -o yaml, and update your configuration.

License validity

Checks whether the license in use by istiod is valid for multicluster ambient mesh. Multicluster capabilities require an Enterprise-level license for Solo Enterprise for Istio.

Example verbose output:

  --- License Check ---

✅ License Check: license is valid for multicluster
  

If your license does not support multicluster ambient mesh, contact your Solo account representative.

Pod health

Checks the health of the pods in the cluster. All istiod, ztunnel, and east-west gateway pods across the checked clusters must be healthy and running for the multicluster mesh to function correctly.

Example verbose output:

  --- Pod Check (istiod) ---

NAME                        READY     STATUS      RESTARTS     AGE
istiod-6d9cdf88cf-l47tf     1/1       Running     0            10m18s

✅ Pod Check (istiod): all pods healthy


--- Pod Check (ztunnel) ---

NAME              READY     STATUS      RESTARTS     AGE
ztunnel-dvlwk     1/1       Running     0            10m6s

✅ Pod Check (ztunnel): all pods healthy


--- Pod Check (eastwest gateway) ---

NAME                                READY     STATUS      RESTARTS     AGE
istio-eastwest-857b77fc5d-qgnrl     1/1       Running     0            9m33s

✅ Pod Check (eastwest gateway): all pods healthy
  

To check any unhealthy pods, run the following commands. Consider checking the pod logs, and review Debug Istio.

  kubectl get po -n istio-system
kubectl get po -n istio-eastwest
  

East-west gateway status

Checks the status of the east-west gateways in the cluster. When an east-west gateway is created, the gateway controller creates a Kubernetes service to expose the gateway. Once this service is correctly attached to the gateway and has an address assigned, the east-west gateway has a Programmed status of true.

Example verbose output:

  --- Gateway Check ---

Gateway: istio-eastwest
Addresses:
- 172.18.7.110
Status: programmed ✅

✅ Gateway Check: all eastwest gateways programmed
  

If the Programmed status is not true, an issue might exist with the address allocation for the service. Check the east-west gateway with a command such as kubectl get svc -n istio-eastwest, and verify that your cloud provider can correctly allocate addresses to the service.
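
You can also read the Programmed condition directly from the Gateway resource. This sketch assumes the gateway name and namespace that are used in this guide.

  # The PROGRAMMED column should show True
  kubectl get gateway istio-eastwest -n istio-eastwest --context ${CLUSTER_CONTEXT}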

Remote peer gateway status

Checks the status of the remote peer gateways in the cluster, which represent the other peered clusters in the multicluster setup. These remote gateways configure the connection between the local cluster's istiod control plane and the peered clusters' remote networks to enable xDS communication between peers. When the initial network connection between istiod and a remote peer is made, the gateway's gloo.solo.io/PeerConnected status updates to true. Then, when the full xDS sync occurs between peers, the gateway's gloo.solo.io/PeeringSucceeded status also updates to true. This check ensures that both statuses are true.

Example verbose output:

  --- Peers Check ---

Cluster: cluster2
Addresses:
- 172.18.7.130
Conditions:
- Accepted: True
- Programmed: True
- gloo.solo.io/PeerConnected: True
- gloo.solo.io/PeeringSucceeded: True
- gloo.solo.io/PeerDataPlaneProgrammed: True
Status: connected ✅

✅ Peers Check: all clusters connected
  

If the connection is severed between the peers, the gloo.solo.io/PeerConnected status becomes false. A failed connection between peers can be due to either a misconfiguration in the peering setup, or a network issue blocking port 15008 on the remote cluster, which is the cross-network HBONE port that the east-west gateway listens on. Review the steps you took to link clusters together, such as the steps outlined in the Helm default network guide. Additionally, review any firewall rules or network policies that might block access through port 15008 on the remote cluster.
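
To rule out a network block, you can test whether the HBONE port on the remote east-west gateway is reachable, such as with netcat. Replace the placeholder with the remote gateway address from the Peers Check output.

  # A timeout here suggests a firewall or network policy blocking HBONE
  nc -zv <remote_gateway_address> 15008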

Intermediate certificate compatibility

Confirms the certificate compatibility between peered clusters. This check reads the root-cert.pem from the istio-ca-root-cert configmap in the istio-system namespace, and uses X.509 certificate validation to confirm that the root cert is compatible with all of the clusters' ca-cert.pem intermediate certificate chains from the cacerts secret.

Example verbose output:

  --- Intermediate Certs Compatibility Check ---

ℹ  Intermediate Certs Compatibility Check: cluster cluster1 root certificate SHA256 sum: 6d18f32e134824c158d97f32618657c45d5a83839f838ada751757139481537e
ℹ  Intermediate Certs Compatibility Check: cluster cluster2 root certificate SHA256 sum: 6d18f32e134824c158d97f32618657c45d5a83839f838ada751757139481537e
✅ Intermediate Certs Compatibility Check: cluster cluster1 has compatible intermediate certificates with cluster cluster2 
✅ Intermediate Certs Compatibility Check: cluster cluster2 has compatible intermediate certificates with cluster cluster1 
✅ Intermediate Certs Compatibility Check: all clusters have compatible intermediate certificates
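
You can reproduce a simplified version of this check manually with openssl. This sketch assumes the default istio-ca-root-cert configmap and cacerts secret names.

  # Root cert that istiod distributes in cluster1
  kubectl --context=${context1} get configmap istio-ca-root-cert -n istio-system \
    -o jsonpath='{.data.root-cert\.pem}' > /tmp/root-cert.pem
  # Intermediate cert from cluster2's cacerts secret
  kubectl --context=${context2} get secret cacerts -n istio-system \
    -o jsonpath='{.data.ca-cert\.pem}' | base64 -d > /tmp/ca-cert.pem
  # Expect "/tmp/ca-cert.pem: OK"
  openssl verify -CAfile /tmp/root-cert.pem /tmp/ca-cert.pem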
  

If this check fails because the root certs are not valid for each peered cluster's intermediate certificate chain, you can check the istiod logs for TLS errors when istiod attempts to communicate with a peered cluster, such as the following:

  2025-12-04T22:09:22.474517Z     warn    deltaadsc       disconnected, retrying in 24.735483751s: delta stream: rpc error: code = Unavailable desc = connection error: desc = "error reading server preface: remote error: tls: unknown certificate authority"       target=peering-cluster2
  

Ensure each cluster has a cacerts secret in the istio-system namespace. To regenerate invalid certificates for each cluster, follow the example steps in Create a shared root of trust.

Network configuration

Confirms the network configuration of the multicluster mesh. For multicluster peering setups that do not use a flat network topology, each cluster must occupy a unique network. The network name must be defined with the label topology.istio.io/network and set on both the istio-system namespace and the istio-eastwest gateway resource. The same network name must also be set as the NETWORK environment variable on the ztunnel daemonset. Each remote gateway that represents that cluster must have the topology.istio.io/network label equal to the network of the remote cluster.

Example verbose output:

  --- Network Configuration Check ---

✅ Cluster cluster1 has network: cluster1
✅ Eastwest gateway istio-eastwest/istio-eastwest has correct network label: cluster1
✅ Cluster cluster2 has network: cluster2
✅ Eastwest gateway istio-eastwest/istio-eastwest has correct network label: cluster2
✅ Remote gateway istio-eastwest/istio-remote-peer-cluster2 references network cluster2 (clusters: [cluster2])
✅ Remote gateway istio-eastwest/istio-remote-peer-cluster1 references network cluster1 (clusters: [cluster1])
✅ Network Configuration Check: all network configurations are valid
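
To inspect these settings yourself, you can check the namespace labels and the ztunnel daemonset's NETWORK environment variable directly.

  # The topology.istio.io/network label should match the cluster's network name
  kubectl get namespace istio-system --context ${CLUSTER_CONTEXT} --show-labels
  # The NETWORK environment variable should match the same network name
  kubectl get ds ztunnel -n istio-system --context ${CLUSTER_CONTEXT} -o yaml | grep -A1 'name: NETWORK'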
  

Mismatched network identities cause errors in cross-cluster communication, which leads to error logs in ztunnel pods that indicate a network timeout on the outbound communication. Notably, the destination address on these errors is a 240.X.X.X address, instead of the correct remote peer gateway address. You can run kubectl logs -l app=ztunnel -n istio-system --tail=10 --context ${CLUSTER_CONTEXT} | grep -iE "error|warn" to review logs such as the following:

  2025-11-18T16:14:53.490573Z     error   access  connection complete     src.addr=240.0.2.27:46802 src.workload="ratings-v1-5dc79b6bcd-zm8v6" src.namespace="bookinfo" src.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings" dst.addr=240.0.9.43:15008 dst.hbone_addr=240.0.9.43:9080 dst.service="productpage.bookinfo.mesh.internal" dst.workload="autogenflat.portfolio1-soloiopoc-cluster1.bookinfo.productpage-v1-54bb874995-hblwp.ee508601917c" dst.namespace="bookinfo" dst.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage" direction="outbound" bytes_sent=0 bytes_recv=0 duration="10001ms" error="connection timed out, maybe a NetworkPolicy is blocking HBONE port 15008: deadline has elapsed"
  

To troubleshoot these issues, be sure that you use unique network names to represent each cluster, and that you correctly labeled the cluster’s istio-system namespace with that network name, such as by running kubectl label namespace istio-system --context ${CLUSTER_CONTEXT} topology.istio.io/network=${CLUSTER_NAME}. You can also relabel the east-west gateway in the cluster, and the remote peer gateways in other clusters that represent this cluster.

Stale workload entries

In flat network setups, checks for any outdated workload entries that must be removed from the multicluster mesh. Stale workload entries might exist from pods that were deleted, but the autogenerated entries for those workloads were not correctly cleaned up. If you do not use a flat network topology, no autogenerated workload entries exist to be validated, and this check can be ignored.

Example verbose output for a non-flat network setup:

  --- Stale Workloads Check ---

⚠  Stale Workloads Check: no autogenflat workload entries found
  

If you use a flat network topology, and this check fails with stale workload entries, run kubectl get workloadentries -n istio-system | grep autogenflat to list the autogenerated workload entries in the remote cluster, and compare the list to the output of kubectl get pods in the source cluster for those workloads. You can safely delete the stale workload entries in the remote cluster for pods that no longer exist in the source cluster, such as by running kubectl delete workloadentry <entry_name> -n istio-system.

Further debugging and observability

For additional guidance around observing your multicluster ambient mesh, check out the observability overview, which contains links to guides on using logs, metrics, and traces in your Istio environment.

For additional guidance around debugging your multicluster ambient mesh, check out the Istio troubleshooting guide.