About flat networking

In a traditional multicluster setup, you typically use a different network for each cluster. To connect these networks and route traffic across clusters, you need an east-west gateway.

In a flat network setup, also referred to as a single network setup, all your clusters use the same network. Because of that, services can communicate with each other by using a service’s IP address directly, and no east-west gateway is required to handle data plane traffic. To avoid IP address conflicts, each cluster is typically assigned a unique IP CIDR range. You can establish a flat network by using a VPN, a CNI that supports the Border Gateway Protocol (BGP), or another overlay solution for all of your clusters.

For Istio to treat your clusters as one flat network, all your clusters must use the same network name in the topology.istio.io/network label and the spec.values.global.network field. Istio then routes traffic by using a service’s IP address directly and does not send traffic through an east-west gateway.

About this guide

In this guide, you explore how to set up Istio in a multicluster setup that uses a flat network. You complete the following tasks:

  • Set up two kind clusters that are connected by using a flat network.
  • Install the Solo distribution of Istio in each cluster by using the flat network topology and ambient profile. The ambient profile is required, even if you plan to run sidecar-injected Istio workloads in your cluster.
  • Peer the clusters by using east-west and peering gateways. These gateways are used to allow communication between istiod instances.
  • Expose a service globally across both clusters.
  • Test routing between clusters by using the service’s IP address directly instead of the east-west gateway.

Before you begin

  1. Set your Enterprise level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

      export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
      
  2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions.

  3. Save the Solo distribution of Istio version.

      export ISTIO_VERSION=1.28.0
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
      
  4. Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.

      # 12-character hash at the end of the repo URL
    export REPO_KEY=<repo_key>
    export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
    export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
      
  5. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.

    1. Get the OS and architecture that you use on your machine.

        OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
      ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
      echo $OS
      echo $ARCH
        
    2. Download the Solo distribution of Istio binary and install istioctl.

        mkdir -p ~/.istioctl/bin
      curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
      chmod +x ~/.istioctl/bin/istioctl
      
      export PATH=${HOME}/.istioctl/bin:${PATH}
        
    3. Verify that the istioctl client runs the Solo distribution of Istio that you want to install.

        istioctl version --remote=false
        

      Example output:

        client version: 1.28.0-solo
        

Set up a flat network test environment

Set up a kind cluster environment that consists of two kind clusters that are connected by using a flat network. To establish a flat network, you create these kind clusters with custom pod and service subnets, and connect these subnets with custom routes.

  1. Install the cloud-provider-kind CLI. This CLI allows you to assign external IP addresses to services of type LoadBalancer. An external IP address is required for the east-west gateways to enable proper peering between multiple clusters.

      go install sigs.k8s.io/cloud-provider-kind@latest
      
  2. Start the cloud-provider-kind CLI.

      sudo cloud-provider-kind
      
  3. In a separate terminal window, download and run the setup-clusters.sh script. This script sets up the following components.

    • A kind cluster cluster-1 that uses the 10.10.0.0/16 CIDR for the pod subnet and the 10.255.10.0/24 CIDR for the service subnet.
    • Another kind cluster cluster-2 that uses the 10.20.0.0/16 CIDR for the pod subnet and the 10.255.20.0/24 CIDR for the service subnet.
    • Routes between cluster-1 and cluster-2 to allow communication between pods by using the pod’s IP address directly.

      curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/refs/heads/main/gloo-mesh/istio-install/manual/flat-network/setup-clusters.sh -o setup-clusters.sh
    chmod +x setup-clusters.sh
    ./setup-clusters.sh
      
  4. Save the name and context for both of your clusters in an environment variable.

      REMOTE_CLUSTER1="cluster-1"
    REMOTE_CLUSTER2="cluster-2"
    REMOTE_CONTEXT1="kind-${REMOTE_CLUSTER1}"
    REMOTE_CONTEXT2="kind-${REMOTE_CLUSTER2}"
      

Install Istio

Install the Solo distribution of Istio in each of your clusters.

  1. Create a shared root of trust.

    Each cluster in the multicluster setup must have a shared root of trust. This can be achieved by providing a root certificate that is signed by a PKI provider, or by creating a custom root certificate for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
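
    For example, if you create a root certificate and a unique intermediate CA certificate for each cluster, you can provide the certificates to Istio in a cacerts secret in the istio-system namespace. The following commands are a minimal sketch for cluster-1; the certificate file names are placeholders for the files that you generated with your PKI provider or CA tool.

      # Sketch only: repeat for cluster-2 with its own intermediate CA files and $REMOTE_CONTEXT2
      kubectl --context=${REMOTE_CONTEXT1} create namespace istio-system
      kubectl --context=${REMOTE_CONTEXT1} create secret generic cacerts -n istio-system \
        --from-file=ca-cert.pem \
        --from-file=ca-key.pem \
        --from-file=root-cert.pem \
        --from-file=cert-chain.pem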

  2. Install the Kubernetes Gateway API in both of your clusters. The API is required to spin up east-west gateways later.

      for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
      kubectl --context=$ctx apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml
    done
      
  3. Install the base chart, which contains the CRDs and cluster roles required to set up Istio.
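
    As a minimal sketch, you can install the base chart from the OCI Helm registry that you saved in $HELM_REPO. The chart name and version format are assumptions based on the upstream Istio Helm charts; check the Solo distribution Helm registry for the exact values.

      # Sketch only: chart location and version format are assumptions
      for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
      helm upgrade --install istio-base oci://${HELM_REPO}/base \
        --namespace istio-system \
        --create-namespace \
        --version ${ISTIO_IMAGE} \
        --kube-context ${ctx}
      done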

  4. Create the istiod control plane in each cluster. Note that for Istio to treat your clusters as one flat network, all clusters must use the same network name in the global.network field. The network name is an arbitrary value and can be set to a string of your choice. In this example, flat-network is used as the network name.
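
    As a sketch, an istiod installation for cluster-1 with the ambient profile and the flat-network network name might look like the following. The chart values are assumptions based on the upstream istiod Helm chart; the Solo distribution might require additional values, such as a license setting.

      # Sketch only: repeat for cluster-2 with $REMOTE_CONTEXT2 and $REMOTE_CLUSTER2
      helm upgrade --install istiod oci://${HELM_REPO}/istiod \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        --kube-context ${REMOTE_CONTEXT1} \
        --set profile=ambient \
        --set global.hub=${REPO} \
        --set global.tag=${ISTIO_IMAGE} \
        --set global.network=flat-network \
        --set global.multiCluster.clusterName=${REMOTE_CLUSTER1}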

  5. Install the Istio CNI node agent daemonset. Note that although the CNI is included in this section, it is technically not part of the control plane or data plane.
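
    The following commands are a minimal sketch of the CNI installation, assuming the cni chart from the same Helm registry; adjust the values for your setup.

      # Sketch only: chart location and values are assumptions
      for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
      helm upgrade --install istio-cni oci://${HELM_REPO}/cni \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        --kube-context ${ctx} \
        --set profile=ambient \
        --set global.hub=${REPO} \
        --set global.tag=${ISTIO_IMAGE}
      done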

  6. Install the ztunnel daemonset. Make sure to use the same network name in the network field as you used in istiod’s global.network field. In this example, flat-network is used as the network name.
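
    The following commands are a minimal sketch of the ztunnel installation for cluster-1, assuming the ztunnel chart from the same Helm registry; the chart values are assumptions, so verify them against your chart version.

      # Sketch only: repeat for cluster-2 with $REMOTE_CONTEXT2 and $REMOTE_CLUSTER2
      helm upgrade --install ztunnel oci://${HELM_REPO}/ztunnel \
        --namespace istio-system \
        --version ${ISTIO_IMAGE} \
        --kube-context ${REMOTE_CONTEXT1} \
        --set hub=${REPO} \
        --set tag=${ISTIO_IMAGE} \
        --set multiCluster.clusterName=${REMOTE_CLUSTER1} \
        --set network=flat-network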

  7. Label the istio-system namespace with the topology.istio.io/network=flat-network label. Note that for Istio to treat your clusters as one flat network, the network name in this label must match the network name that you set in the spec.values.global.network field.

      kubectl --context=$REMOTE_CONTEXT1 label namespace istio-system topology.istio.io/network=flat-network --overwrite
    kubectl --context=$REMOTE_CONTEXT2 label namespace istio-system topology.istio.io/network=flat-network --overwrite
      
  8. Verify that the Istio control plane components are up and running.

      kubectl get pods -n istio-system --context $REMOTE_CONTEXT1     
    kubectl get pods -n istio-system --context $REMOTE_CONTEXT2
      

    Example output:

      NAME                      READY   STATUS    RESTARTS   AGE
    istio-cni-node-v8x2f      1/1     Running   0          1m
    istiod-54c79986dd-g9lxq   1/1     Running   0          1m
    ztunnel-6gccd             1/1     Running   0          50s
    NAME                      READY   STATUS    RESTARTS   AGE
    istio-cni-node-v5sbf      1/1     Running   0          1m
    istiod-76cf85c5d5-z8cjr   1/1     Running   0          1m
    ztunnel-wc9w9             1/1     Running   0          50s
      
Peer clusters

Peer the clusters so that the istiod instances can communicate with each other and discover services across clusters.

  1. Create an east-west gateway in the istio-eastwest namespace. In each cluster, the east-west gateway is implemented as a ztunnel and exposes its xDS server for remote clusters to connect to. This setup facilitates traffic between services across clusters in your multicluster mesh.

      for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
      kubectl create namespace istio-eastwest --context $ctx
      istioctl --context=$ctx multicluster expose --namespace istio-eastwest
    done
      
  2. Verify that the east-west gateways have a PROGRAMMED status of True.

      for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
      kubectl get gateway -n istio-eastwest --context $ctx
    done
      
  3. Link clusters to enable cross-cluster service discovery. In this flat network setup, the east-west gateways are used for communication between the istiod instances, while data plane traffic is routed to pod IP addresses directly. The steps vary based on whether you have access to the kubeconfig files for each cluster.
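
    If you have access to the kubeconfig contexts of both clusters from the same machine, a minimal sketch of linking them with istioctl might look like the following; the exact flags can vary by istioctl version.

      # Sketch only: flags are assumptions; run 'istioctl multicluster link --help' to confirm
      istioctl multicluster link --namespace istio-eastwest \
        --contexts=${REMOTE_CONTEXT1},${REMOTE_CONTEXT2}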

  4. Verify that peer linking between your clusters was successful.

      for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
      istioctl multicluster check --context $ctx
    done
      

    Example output:

      ✅ License Check: license is valid for multicluster
    ✅ Pod Check (istiod): all pods healthy
    ✅ Pod Check (ztunnel): all pods healthy
    ✅ Pod Check (eastwest gateway): all pods healthy
    ✅ Gateway Check: all eastwest gateways programmed
    ✅ Peers Check: all clusters connected
      

Great job! You have successfully peered multiple clusters in a flat network setup.

Test connectivity

  1. Create the demo namespace and add it to the ambient mesh. Then, deploy the sleep sample app into it. You use this app as a client to test connectivity to the services in the mesh later.

      for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
      kubectl --context=$ctx create namespace demo
      kubectl --context=$ctx label namespace demo istio.io/dataplane-mode=ambient
      kubectl --context=$ctx apply -n demo -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/refs/heads/main/gloo-mesh/istio-install/manual/flat-network/client/sleep-client.yaml
    done
      
  2. Deploy the global service app. The app is configured to print out Hello version: v1 in cluster-1 and Hello version: v2 in cluster-2. The service has the label solo.io/service-scope: global, which exposes the app under a common domain name global-service.demo.mesh.internal across both of your clusters. For more information, see Make services available across clusters.

      curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/refs/heads/main/gloo-mesh/istio-install/manual/flat-network/services/global/global-service.yaml -o global-service.yaml
    sed 's/VERSION_PLACEHOLDER/v1/g' global-service.yaml | kubectl --context=$REMOTE_CONTEXT1 apply -n demo -f -
    sed 's/VERSION_PLACEHOLDER/v2/g' global-service.yaml | kubectl --context=$REMOTE_CONTEXT2 apply -n demo -f -
      
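    The key part of the manifest is the service-scope label on the Kubernetes service. The following is a simplified, hypothetical excerpt; the actual manifest in the repository also includes the Deployment.

      # Hypothetical excerpt for illustration only
      apiVersion: v1
      kind: Service
      metadata:
        name: global-service
        namespace: demo
        labels:
          solo.io/service-scope: global
      spec:
        selector:
          app: global-service
        ports:
        - name: http
          port: 5000
          targetPort: 5000
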
  3. Verify that you see the following Istio resources in each cluster.

    • An Istio ServiceEntry for the global service name.
    • An Istio WorkloadEntry with the IP address for each global service pod that runs in the opposite cluster. For example, cluster-1 has two WorkloadEntries. Each WorkloadEntry points to the IP address of one global service instance in cluster-2. The WorkloadEntry resources are used to route traffic to the pod IP addresses directly without the need for the east-west gateway.

      for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
      kubectl get serviceentry -n istio-system --context $ctx
      kubectl get workloadentry -n istio-system --context $ctx
    done
      

    Example output:

      NAME                          HOSTS                                   LOCATION   RESOLUTION   AGE
    autogen.demo.global-service   ["global-service.demo.mesh.internal"]              STATIC       2m25s
    NAME                                                                     AGE     ADDRESS
    autogen.cluster-2.demo.global-service                                    2m26s   
    autogenflat.cluster-2.demo.global-service-d955b94d5-jfzkt.eacd81ea03fd   2m25s   10.20.0.13
    autogenflat.cluster-2.demo.global-service-d955b94d5-tgmcn.eacd81ea03fd   2m25s   10.20.0.12
    
    NAME                          HOSTS                                   LOCATION   RESOLUTION   AGE
    autogen.demo.global-service   ["global-service.demo.mesh.internal"]              STATIC       2m26s
    NAME                                                                      AGE     ADDRESS
    autogen.cluster-1.demo.global-service                                     2m26s   
    autogenflat.cluster-1.demo.global-service-5d86865969-552td.eacd81ea03fd   2m25s   10.10.0.17
    autogenflat.cluster-1.demo.global-service-5d86865969-tp5hr.eacd81ea03fd   2m25s   10.10.0.18
      
  4. Send a curl request from the example sleep app in cluster-1 to the global service name. In your CLI output, verify that you see replies from the global service app from both of your clusters.

      kubectl --context=$REMOTE_CONTEXT1 exec -n demo deploy/sleep -- sh -c "
    for i in \$(seq 1 10); do
      curl -s global-service.demo.mesh.internal:5000/hello
      echo
    done" | grep -o 'version: v[0-9]' | sort | uniq -c
      

    Example output:

      5 version: v1
    5 version: v2
      

    Note that you might see a different CLI output, such as 6 version: v1 4 version: v2. The more requests you send, the closer you get to a 50:50 distribution of requests.

  5. Get the name of the ztunnel instance in cluster-1.

      kubectl get pods -n istio-system --context $REMOTE_CONTEXT1 | grep ztunnel
      
  6. Review the ztunnel logs. In your output, verify that you see log entries from the sleep app in cluster-1 to one of the global service IP addresses in cluster-2. In this example, the sleep app in cluster-1 has an IP address of 10.10.0.14 and reaches out to the global service instance with the 10.20.0.12 IP address in cluster-2.

      kubectl logs <ztunnel-pod> -n istio-system --context $REMOTE_CONTEXT1
      

    Example output:

      2025-09-11T19:42:33.699609Z	info	access	connection complete	src.addr=10.10.0.14:40292 src.workload="sleep-746f9d766c-pmk5f" src.namespace="demo" src.identity="spiffe://cluster.local/ns/demo/sa/sleep" dst.addr=10.20.0.12:15008 dst.hbone_addr=10.20.0.12:5000 dst.service="global-service.demo.mesh.internal" dst.workload="autogenflat.cluster-2.demo.global-service-d955b94d5-7bkck.eacd81ea03fd" dst.namespace="demo" dst.identity="spiffe://cluster.local/ns/demo/sa/default" direction="outbound" bytes_sent=107 bytes_recv=218 duration="40ms"
      

Next

You can now follow other guides to expand your ambient mesh capabilities.

  • Enroll other apps in your ambient mesh and expose them globally. For example, you can follow the example guides to add the Bookinfo sample app to the multicluster mesh and expose it across multiple clusters.
  • In a multicluster mesh, the east-west gateway serves as a ztunnel that allows traffic requests to flow across clusters, but it does not modify requests in any way. To control in-mesh traffic, you can instead apply policies to waypoint proxies that you create for a workload namespace.

Cleanup

You can optionally remove the kind cluster setup in this guide.

  1. Remove the kind clusters.

      kind delete cluster --name cluster-1
    kind delete cluster --name cluster-2
      
  2. Remove the cloud-provider-kind CLI from your machine. The CLI binary is typically installed in the bin directory of your Go path.

      go env GOPATH
    rm "$(go env GOPATH)/bin/cloud-provider-kind"