Flat networking (advanced)
Enable multicluster service discovery and mesh communication by using Istio’s flat network peering capability.
This feature requires your mesh to be installed with the Solo distribution of Istio and an Enterprise-level license for Gloo Mesh (OSS APIs). Contact your account representative to obtain a valid license.
About flat networking
In a traditional multicluster setup, you typically use different networks for each cluster. To connect these different networks and route traffic across clusters, you need to use an east-west gateway.
In a flat network setup, also referred to as single network, all your clusters use the same network. Because of that, services can communicate with each other by using a service’s IP address directly. No east-west gateway is required to handle dataplane traffic. To avoid IP address conflicts, each cluster typically is assigned a unique IP CIDR range. You can establish a flat network by using a VPN, a CNI that supports the Border Gateway Protocol (BGP), or other overlay solutions for all of your clusters.
For Istio to assume a flat network, all your clusters must use the same network name in the topology.istio.io/network label and the spec.values.global.network field. Istio then routes traffic by using a service's IP address directly and does not send traffic through an east-west gateway.
Installing Istio in a flat networking topology is an advanced setup. Use it only if your environment already uses a flat network topology in which all your clusters share the same network. If your clusters use different networks, you must use an east-west gateway to route traffic across clusters. To install Istio in a multicluster setup where clusters use different networks, choose between installing Istio with the Gloo Operator or Helm.
About this guide
In this guide, you explore how to set up Istio in a flat network multicluster topology. You complete the following tasks:
- Set up two kind clusters that are connected by using a flat network.
- Install the Solo distribution of Istio in each cluster by using the flat network topology and ambient profile. The ambient profile is required, even if you plan to run sidecar-injected Istio workloads in your cluster.
- Peer the clusters by using east-west and peering gateways. These gateways are used to allow communication between istiod instances.
- Expose a service globally across both clusters.
- Test routing between clusters by using the service’s IP address directly instead of the east-west gateway.
Before you begin
Set your Enterprise-level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.
```shell
export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
```
Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions.
Save the Solo distribution of Istio version.
```shell
export ISTIO_VERSION=1.28.0
export ISTIO_IMAGE=${ISTIO_VERSION}-solo
```
Save the image and Helm repository information for the Solo distribution of Istio.
Istio 1.29 and later:
```shell
export REPO=us-docker.pkg.dev/soloio-img/istio
export HELM_REPO=us-docker.pkg.dev/soloio-img/istio-helm
```
Istio 1.28 and earlier: You must provide a repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL `us-docker.pkg.dev/gloo-mesh/istio-<repo-key>`, which you can find in the "Istio images built by Solo.io" support article.
```shell
# 12-character hash at the end of the repo URL
export REPO_KEY=<repo_key>
export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
```
Get the Solo distribution of Istio binary and install `istioctl`, which you use for multicluster linking and gateway commands.

Get the OS and architecture that you use on your machine.
```shell
OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
echo $OS
echo $ARCH
```
Download the Solo distribution of Istio binary and install `istioctl`.

Istio 1.29 and later:
```shell
mkdir -p ~/.istioctl/bin
curl -sSL https://storage.googleapis.com/soloio-istio-binaries/release/$ISTIO_IMAGE/istio-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
mv ~/.istioctl/bin/istio-$ISTIO_IMAGE/bin/istioctl ~/.istioctl/bin/istioctl
chmod +x ~/.istioctl/bin/istioctl
export PATH=${HOME}/.istioctl/bin:${PATH}
```
Istio 1.28 and earlier:
```shell
mkdir -p ~/.istioctl/bin
curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
chmod +x ~/.istioctl/bin/istioctl
export PATH=${HOME}/.istioctl/bin:${PATH}
```
Verify that the `istioctl` client runs the Solo distribution of Istio that you want to install.
```shell
istioctl version --remote=false
```
Example output:
```
client version: 1.28.0-solo
```
Set up a flat network test environment
Set up a kind cluster environment that consists of two kind clusters that are connected by using a flat network. To establish a flat network, you create these kind clusters with custom pod and service subnets, and connect these subnets with custom routes.
Use this setup only to test the flat networking capability locally. If you already have clusters in a flat networking setup, you can skip this step and continue with Install Istio.
Install the `cloud-provider-kind` CLI. This CLI allows you to assign an external IP address to services of type LoadBalancer. An external IP address is required for the east-west gateways to enable proper peering between multiple clusters.
```shell
go install sigs.k8s.io/cloud-provider-kind@latest
```
Start the `cloud-provider-kind` CLI.
```shell
sudo cloud-provider-kind
```
In a separate terminal window, download and run the `setup-clusters.sh` script. This script sets up the following components:
- A kind cluster `cluster-1` that uses the `10.10.0.0/16` CIDR for the pod subnet and the `10.255.10.0/24` CIDR for the service subnet.
- A kind cluster `cluster-2` that uses the `10.20.0.0/16` CIDR for the pod subnet and the `10.255.20.0/24` CIDR for the service subnet.
- Routes between `cluster-1` and `cluster-2` to allow communication between pods by using the pod's IP address directly.
```shell
curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/refs/heads/main/gloo-mesh/istio-install/manual/flat-network/setup-clusters.sh -o setup-clusters.sh
chmod +x setup-clusters.sh
./setup-clusters.sh
```
Save the name and context for both of your clusters in environment variables.
```shell
REMOTE_CLUSTER1="cluster-1"
REMOTE_CLUSTER2="cluster-2"
REMOTE_CONTEXT1="kind-${REMOTE_CLUSTER1}"
REMOTE_CONTEXT2="kind-${REMOTE_CLUSTER2}"
```
Install Istio
Install the Solo distribution of Istio in your cluster.
Create a shared root of trust.
Each cluster in the multicluster setup must have a shared root of trust. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
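As a minimal local sketch of that shared root of trust, you can create a self-signed root certificate and sign one intermediate CA per cluster with openssl. This is illustrative only, not a production PKI: all subjects and filenames are assumptions, and in production you would use a PKI provider instead.

```shell
# Minimal sketch of a shared root of trust; use a real PKI provider in production.
# Self-signed root certificate shared by all clusters.
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -days 3650 \
  -subj "/O=Istio/CN=Root CA" \
  -keyout root-key.pem -out root-cert.pem

# Intermediate CA key and signing request for cluster-1 (repeat per cluster).
openssl req -newkey rsa:4096 -nodes \
  -subj "/O=Istio/CN=Intermediate CA cluster-1" \
  -keyout ca-key.pem -out ca-csr.pem

# Sign the intermediate with the shared root so every cluster chains to it.
printf "basicConstraints=critical,CA:TRUE\nkeyUsage=critical,keyCertSign\n" > ca.ext
openssl x509 -req -sha256 -days 3650 \
  -CA root-cert.pem -CAkey root-key.pem -set_serial 1 -extfile ca.ext \
  -in ca-csr.pem -out ca-cert.pem

# Confirm the intermediate chains back to the shared root.
openssl verify -CAfile root-cert.pem ca-cert.pem
```

Each cluster's intermediate certificate and key are then supplied to istiod, typically as a `cacerts` secret in the `istio-system` namespace that contains the intermediate certificate, its key, the root certificate, and the certificate chain.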
Install the Kubernetes Gateway API in both of your clusters. The API is required to spin up east-west gateways later.
```shell
for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
  kubectl --context=$ctx apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml
done
```
Install the `base` chart, which contains the CRDs and cluster roles required to set up Istio.

Create the `istiod` control plane in your cluster. Note that for Istio to assume a flat network, all clusters must use the same network name in the `global.network` field. The network name is an arbitrary value and can be set to a string of your choice. In this example, `flat-network` is used as the network name.

Install the Istio CNI node agent daemonset. Note that although the CNI is included in this section, it is technically not part of the control plane or data plane.
Install the ztunnel daemonset. Make sure to use the same network name in the `network` field as you used in istiod's `global.network` field. In this example, `flat-network` is used as the network name.

Label the `istio-system` namespace with the `topology.istio.io/network=flat-network` label. Note that for Istio to assume a flat network, the network name in this label must match the network name that you set in the `spec.values.global.network` field.
```shell
kubectl --context=$REMOTE_CONTEXT1 label namespace istio-system topology.istio.io/network=flat-network --overwrite
kubectl --context=$REMOTE_CONTEXT2 label namespace istio-system topology.istio.io/network=flat-network --overwrite
```
Verify that the Istio control plane components are up and running.
```shell
kubectl get pods -n istio-system --context $REMOTE_CONTEXT1
kubectl get pods -n istio-system --context $REMOTE_CONTEXT2
```
Example output:
```
NAME                      READY   STATUS    RESTARTS   AGE
istio-cni-node-v8x2f      1/1     Running   0          1m
istiod-54c79986dd-g9lxq   1/1     Running   0          1m
ztunnel-6gccd             1/1     Running   0          50s

NAME                      READY   STATUS    RESTARTS   AGE
istio-cni-node-v5sbf      1/1     Running   0          1m
istiod-76cf85c5d5-z8cjr   1/1     Running   0          1m
ztunnel-wc9w9             1/1     Running   0          50s
```
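The base, istiod, CNI, and ztunnel installations in the steps above can be sketched with Helm. The chart names mirror the upstream Istio Helm charts; the exact values are assumptions and may differ for your Solo distribution setup, so treat this as a sketch rather than the definitive install commands.

```shell
# Sketch only: install the Istio charts from the Solo Helm repository in each cluster.
# Chart names mirror the upstream Istio charts; values are illustrative assumptions.
for cluster in $REMOTE_CLUSTER1 $REMOTE_CLUSTER2; do
  ctx="kind-${cluster}"
  # CRDs and cluster roles.
  helm upgrade --install istio-base oci://$HELM_REPO/base \
    --kube-context $ctx --namespace istio-system --create-namespace \
    --version $ISTIO_IMAGE
  # istiod control plane; the same global.network value in every cluster
  # is what lets Istio treat the clusters as one flat network.
  helm upgrade --install istiod oci://$HELM_REPO/istiod \
    --kube-context $ctx --namespace istio-system --version $ISTIO_IMAGE \
    --set profile=ambient \
    --set global.hub=$REPO --set global.tag=$ISTIO_IMAGE \
    --set global.multiCluster.clusterName=$cluster \
    --set global.network=flat-network
  # Istio CNI node agent.
  helm upgrade --install istio-cni oci://$HELM_REPO/cni \
    --kube-context $ctx --namespace istio-system --version $ISTIO_IMAGE \
    --set profile=ambient \
    --set global.hub=$REPO --set global.tag=$ISTIO_IMAGE
  # ztunnel; the network value must match istiod's global.network.
  helm upgrade --install ztunnel oci://$HELM_REPO/ztunnel \
    --kube-context $ctx --namespace istio-system --version $ISTIO_IMAGE \
    --set hub=$REPO --set tag=$ISTIO_IMAGE \
    --set network=flat-network
done
```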
Link clusters
Create an east-west gateway in the `istio-eastwest` namespace. In each cluster, the east-west gateway is implemented as a ztunnel and exposes its xDS server for remote clusters to connect to. This setup facilitates traffic between services across clusters in your multicluster mesh.
```shell
for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
  kubectl create namespace istio-eastwest --context $ctx
  istioctl --context=$ctx multicluster expose --namespace istio-eastwest
done
```
Verify that the east-west gateways have a `PROGRAMMED` status of `True`.
```shell
for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
  kubectl get gateway -n istio-eastwest --context $ctx
done
```
Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters. The steps vary based on whether you have access to the kubeconfig files for each cluster.
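When both cluster contexts are available in your local kubeconfig, the linking step can be sketched with the same Solo istioctl multicluster tooling used for the expose and check commands in this guide. The exact flags below are an assumption for this sketch; consult the linking steps for your kubeconfig situation.

```shell
# Sketch: link the peered clusters so each istiod can discover the other's services.
# Assumes both contexts are present in your local kubeconfig.
istioctl multicluster link \
  --namespace istio-eastwest \
  --contexts=$REMOTE_CONTEXT1,$REMOTE_CONTEXT2
```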
Verify that peer linking between your clusters was successful.
```shell
for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
  istioctl multicluster check --context $ctx
done
```
Example output:
```
✅ License Check: license is valid for multicluster
✅ Pod Check (istiod): all pods healthy
✅ Pod Check (ztunnel): all pods healthy
✅ Pod Check (eastwest gateway): all pods healthy
✅ Gateway Check: all eastwest gateways programmed
✅ Peers Check: all clusters connected
```
Great job! You have successfully peered multiple clusters in a flat network setup.
Test connectivity
Create the `demo` namespace and add it to the ambient mesh. Then, deploy the sleep sample app into it. You use this app as a client to test connectivity to the services in the mesh later.
```shell
for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
  kubectl --context=$ctx create namespace demo
  kubectl --context=$ctx label namespace demo istio.io/dataplane-mode=ambient
  kubectl --context=$ctx apply -n demo -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/refs/heads/main/gloo-mesh/istio-install/manual/flat-network/client/sleep-client.yaml
done
```
Deploy the global service app. The app is configured to print out `Hello version: v1` in `cluster-1` and `Hello version: v2` in `cluster-2`. The service has the label `solo.io/service-scope: global`, which exposes the app under the common domain name `global-service.demo.mesh.internal` across both of your clusters. For more information, see Make services available across clusters.
```shell
curl -L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/refs/heads/main/gloo-mesh/istio-install/manual/flat-network/services/global/global-service.yaml -o global-service.yaml
sed 's/VERSION_PLACEHOLDER/v1/g' global-service.yaml | kubectl --context=$REMOTE_CONTEXT1 apply -n demo -f -
sed 's/VERSION_PLACEHOLDER/v2/g' global-service.yaml | kubectl --context=$REMOTE_CONTEXT2 apply -n demo -f -
```
Verify that you see the following Istio resources in your cluster.
- An Istio ServiceEntry for the global service name.
- An Istio WorkloadEntry with the IP address for each global service pod that runs in the opposite cluster. For example, `cluster-1` has two WorkloadEntries. Each WorkloadEntry points to the IP address of one global service instance in `cluster-2`. The WorkloadEntry resources are used to route traffic to the pod IP addresses directly without the need for the east-west gateway.
```shell
for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
  kubectl get serviceentry -n istio-system --context $ctx
  kubectl get workloadentry -n istio-system --context $ctx
done
```
Example output:
```
NAME                          HOSTS                                   LOCATION   RESOLUTION   AGE
autogen.demo.global-service   ["global-service.demo.mesh.internal"]              STATIC       2m25s

NAME                                                                     AGE     ADDRESS
autogen.cluster-2.demo.global-service                                    2m26s
autogenflat.cluster-2.demo.global-service-d955b94d5-jfzkt.eacd81ea03fd   2m25s   10.20.0.13
autogenflat.cluster-2.demo.global-service-d955b94d5-tgmcn.eacd81ea03fd   2m25s   10.20.0.12

NAME                          HOSTS                                   LOCATION   RESOLUTION   AGE
autogen.demo.global-service   ["global-service.demo.mesh.internal"]              STATIC       2m26s

NAME                                                                     AGE     ADDRESS
autogen.cluster-1.demo.global-service                                    2m26s
autogenflat.cluster-1.demo.global-service-5d86865969-552td.eacd81ea03fd  2m25s   10.10.0.17
autogenflat.cluster-1.demo.global-service-5d86865969-tp5hr.eacd81ea03fd  2m25s   10.10.0.18
```
Send a curl request from the example sleep app in `cluster-1` to the global service name. In your CLI output, verify that you see replies from the global service app from both of your clusters.
```shell
kubectl --context=$REMOTE_CONTEXT1 exec -n demo deploy/sleep -- sh -c "
for i in \$(seq 1 10); do
  curl -s global-service.demo.mesh.internal:5000/hello
  echo
done" | grep -o 'version: v[0-9]' | sort | uniq -c
```
Example output:
```
5 version: v1
5 version: v2
```
Note that you might see a different distribution, such as `6 version: v1` and `4 version: v2`. The more requests you send, the closer you get to a 50:50 distribution of requests.

Get the name of the ztunnel instance in `cluster-1`.
```shell
kubectl get pods -n istio-system --context $REMOTE_CONTEXT1 | grep ztunnel
```
Review the ztunnel logs. In your output, verify that you see log entries from the sleep app in `cluster-1` to one of the global service IP addresses in `cluster-2`. In this example, the sleep app in `cluster-1` has an IP address of `10.10.0.14` and reaches out to the global service instance with the `10.20.0.12` IP address in `cluster-2`.
```shell
kubectl logs <ztunnel-pod> -n istio-system --context $REMOTE_CONTEXT1
```
Example output:
```
2025-09-11T19:42:33.699609Z info access connection complete src.addr=10.10.0.14:40292 src.workload="sleep-746f9d766c-pmk5f" src.namespace="demo" src.identity="spiffe://cluster.local/ns/demo/sa/sleep" dst.addr=10.20.0.12:15008 dst.hbone_addr=10.20.0.12:5000 dst.service="global-service.demo.mesh.internal" dst.workload="autogenflat.cluster-2.demo.global-service-d955b94d5-7bkck.eacd81ea03fd" dst.namespace="demo" dst.identity="spiffe://cluster.local/ns/demo/sa/default" direction="outbound" bytes_sent=107 bytes_recv=218 duration="40ms"
```
Next
You can now follow other guides to expand your ambient mesh capabilities.
- Enroll other apps in your ambient mesh and expose them globally. For example, you can follow the example guides to add the Bookinfo sample app to the multicluster mesh and expose it across multiple clusters.
- In a multicluster mesh, the east-west gateway serves as a ztunnel that allows traffic requests to flow across clusters, but it does not modify requests in any way. To control in-mesh traffic, you can instead apply policies to waypoint proxies that you create for a workload namespace.

If you want to apply the `istio.io/ingress-use-waypoint=true` label to a service that is exposed globally across clusters in the mesh, you must maintain namespace sameness for the service instances. In other words, you must maintain identical manifests, including labels, for each service instance in the same namespace of each cluster. If one of the service instances in any cluster has the `istio.io/ingress-use-waypoint=true` label, the global service uses the waypoint for ingress traffic.
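As a sketch, attaching a waypoint to the demo namespace and directing a globally exposed service's ingress traffic through it might look like the following. The service name is the example from this guide; to maintain namespace sameness, run the same commands against each cluster's context.

```shell
# Sketch: create a waypoint for the demo namespace and route ingress through it.
# Repeat in every cluster so the global service instances stay identical.
istioctl waypoint apply -n demo --context $REMOTE_CONTEXT1
kubectl --context $REMOTE_CONTEXT1 -n demo label service global-service \
  istio.io/ingress-use-waypoint=true --overwrite
```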
Cleanup
You can optionally remove the kind cluster setup in this guide.
Remove the kind clusters.
```shell
kind delete cluster --name cluster-1
kind delete cluster --name cluster-2
```
Remove the `cloud-provider-kind` CLI from your machine. By default, `go install` places binaries in `$(go env GOPATH)/bin`, which is typically `$HOME/go/bin`.
```shell
rm "$(go env GOPATH)/bin/cloud-provider-kind"
```