In this guide, you deploy an ambient mesh to each workload cluster, create an east-west gateway in each cluster, and link the istiod control planes across cluster networks by using peering gateways. In the next guide, you can deploy the Bookinfo sample app to the ambient mesh in each cluster, and make select services available across the multicluster mesh. Incoming requests can then be routed from an ingress gateway, such as Gloo Gateway, to services in your mesh across all clusters.
The following diagram demonstrates an ambient mesh setup across multiple clusters.
Figure: Multicluster ambient mesh set up with the Solo distribution of Istio and Gloo Gateway.
Review the following known Istio version requirements and restrictions.
Patch versions 1.26.0 and 1.26.1 of the Solo distribution of Istio lack support for FIPS-tagged images and ztunnel outlier detection. When you install or upgrade to 1.26, be sure to use patch version 1.26.1-patch0 or later.
In the Solo distribution of Istio 1.25 and later, you can access enterprise-level features by passing your Solo license in the license.value or license.secretRef field of the Solo distribution of the istiod Helm chart. The Solo istiod Helm chart is strongly recommended because it includes safeguards, default settings, and upgrade handling that help ensure a reliable and secure Istio deployment. Though it is not recommended, you can instead pass your license key in the open source istiod Helm chart by using the --set pilot.env.SOLO_LICENSE_KEY field.
Multicluster setups require the Solo distribution of Istio version 1.24.3 or later (1.24.3-solo), including the Solo distribution of istioctl.
Because AWS Fargate does not support the Istio CNI or the iptables rules that the Istio proxy requires, you cannot run Istio (and therefore Gloo Mesh (OSS APIs)) on AWS Fargate. For more information, see the Amazon EKS issue.
The commands for OpenShift in the following steps contain these required settings:
Your Helm settings must include global.platform=openshift for Istio 1.24 and later. If you instead install Istio 1.23 or earlier, you must use profile=openshift instead of the global.platform setting.
Install the istio-cni and ztunnel Helm releases in the kube-system namespace instead of the istio-system namespace, as shown in the sketch after this list.
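For example, the following command is a minimal sketch of a ztunnel installation on OpenShift. It assumes that the ztunnel chart is published at oci://${HELM_REPO}/ztunnel and that the chart accepts top-level hub and tag values, which mirror upstream Istio Helm conventions and might differ in your setup.
# Install ztunnel in kube-system with the OpenShift platform setting
helm upgrade --install ztunnel oci://${HELM_REPO}/ztunnel \
  --namespace kube-system \
  --kube-context ${CLUSTER_CONTEXT} \
  --version ${ISTIO_IMAGE} \
  --set global.platform=openshift \
  --set hub=${REPO} \
  --set tag=${ISTIO_IMAGE}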
The upgrade guides in this documentation show you how to perform in-place upgrades for your Istio components, which is the recommended upgrade strategy.
In each cluster, you create an east-west gateway, which is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. In the Solo distribution of Istio 1.28 and later, you can use either LoadBalancer or NodePort addresses to resolve cross-cluster traffic requests through this gateway. Note that the NodePort method is considered alpha in Istio version 1.28.
LoadBalancer: In the standard LoadBalancer peering method, cross-cluster traffic through the east-west gateway resolves to its LoadBalancer address.
NodePort (alpha): If you prefer to use direct pod-to-pod traffic across clusters, you can annotate the east-west and peering gateways so that cross-cluster traffic resolves to NodePort addresses (see the example commands after this list). This method avoids LoadBalancer services, which can reduce cross-cluster traffic costs. Review the following considerations:
Note that the gateways must still be created with stable IP addresses, which are required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data-plane communication, in that requests to services resolve to the NodePort instead of the LoadBalancer address. Also, the east-west gateway must have the topology.istio.io/cluster label.
If a node in a target cluster becomes inaccessible, such as during a restart or replacement, a delay can occur in the connection from the client cluster that must become aware of the new east-west gateway NodePort. In this case, you might see a connection error when trying to send cross-cluster traffic to an east-west gateway that is no longer accepting connections.
Only nodes where an east-west gateway pod is provisioned are considered targets for traffic.
Like LoadBalancer gateways, NodePort gateways support traffic from Envoy-based ingress gateways, waypoints, and sidecars.
This feature is in an alpha state. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Solo feature maturity.
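For example, the following commands are a minimal sketch of how you might opt an existing east-west gateway into NodePort peering. The gateway name istio-eastwest matches the examples later in this guide, and the annotation key is the one described in the linking steps; verify both against your own resources.
# Annotate the east-west gateway so cross-cluster traffic resolves to its NodePort
kubectl annotate gateway istio-eastwest -n istio-eastwest \
  peering.solo.io/preferred-data-plane-service-type=NodePort \
  --context ${CLUSTER_CONTEXT}
# Ensure the required cluster label is set on the east-west gateway
kubectl label gateway istio-eastwest -n istio-eastwest \
  topology.istio.io/cluster=${CLUSTER_NAME} \
  --context ${CLUSTER_CONTEXT}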
The steps in the following guide to create the gateways include options for either the LoadBalancer or NodePort method. A status condition on each east-west and remote peer gateway indicates which data plane service type is in use.
If you previously used the multicluster feature in community Istio and now want to migrate to multicluster peering in the Solo distribution of Istio, you can set the DISABLE_LEGACY_MULTICLUSTER environment variable, which is introduced in the Solo distribution of Istio version 1.28, to disable the community multicluster mechanisms. Multicluster in community Istio uses remote secrets that contain kubeconfigs to watch resources on remote clusters. This system is incompatible with the decentralized, push-based peering model in the Solo distribution of Istio. The variable causes istiod to ignore remote secrets so that it does not attempt to set up Kubernetes clients that connect to them.
For fresh multicluster mesh installations with the Solo distribution of Istio, use this environment variable in your istiod settings. This setting serves as a recommended safety measure to prevent any use of remote secrets.
If you want to initiate a multicluster migration from community Istio, contact a Solo account representative. An account representative can help you set up two revisions of Istio that each select a different set of namespaces, and set the DISABLE_LEGACY_MULTICLUSTER variable on the revision that uses the Solo distribution of Istio for multicluster peering.
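For reference, this variable goes in the env section of your istiod Helm values, as shown in the full installation examples later in this guide:
env:
  # Disable community Istio multicluster mechanisms
  DISABLE_LEGACY_MULTICLUSTER: "true"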
Option 3: Automatically link clusters by using the Gloo management plane. Note that this method is a beta feature.
Option 1: Install and link new ambient meshes
In each cluster, use Helm to create the ambient mesh components. Then, create an east-west gateway so that traffic requests can be routed cross-cluster, and link clusters to enable cross-cluster service discovery.
Set your Enterprise level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.
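For example:
export GLOO_MESH_LICENSE_KEY=<license_key>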
Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.
# 12-character hash at the end of the repo URL
export REPO_KEY=<repo_key>
export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.
Get the OS and architecture that you use on your machine.
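For example, you can detect and save these values with the following commands. This is a sketch; adjust the architecture mappings if your platform reports different values.
OS=$(uname | tr '[:upper:]' '[:lower:]')
# Map the machine architecture to the naming that release assets typically use
ARCH=$(uname -m | sed -e 's/x86_64/amd64/' -e 's/aarch64/arm64/')
echo $OS $ARCH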
Each cluster in the multicluster setup must have a shared root of trust. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
By default, the Istio CA generates a self-signed root certificate and key, and uses them to sign the workload certificates. For more information, see the Plug in CA Certificates guide in the community Istio documentation.
For demo installations, you can run the following function to quickly generate and plug in the certificates and key for the Istio CA:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
cd istio-${ISTIO_VERSION}
mkdir -p certs
pushd certs
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
function create_cacerts_secret() {
  context=${1:?context}
  cluster=${2:?cluster}
  make -f ../tools/certs/Makefile.selfsigned.mk ${cluster}-cacerts
  kubectl --context=${context} create ns istio-system || true
  kubectl --context=${context} create secret generic cacerts -n istio-system \
    --from-file=${cluster}/ca-cert.pem \
    --from-file=${cluster}/ca-key.pem \
    --from-file=${cluster}/root-cert.pem \
    --from-file=${cluster}/cert-chain.pem
}
create_cacerts_secret ${REMOTE_CONTEXT1} ${REMOTE_CLUSTER1}
create_cacerts_secret ${REMOTE_CONTEXT2} ${REMOTE_CLUSTER2}
cd ../..
To enhance the security of your setup even further and have full control over the Istio CA lifecycle, you can generate and store the root and intermediate CA certificates and keys with your own PKI provider. You can then use tools such as cert-manager to send certificate signing requests on behalf of istiod to your PKI provider. Cert-manager stores the signed intermediate certificates and keys in the cacerts Kubernetes secret so that istiod can use these credentials to issue leaf certificates for the workloads in the service mesh. You can set up cert-manager to also check the certificates and renew them before they expire.
AWS Private CA issuer and cert-manager: For an architectural overview of this certificate setup, see Bring your own Istio CAs with AWS. For steps on how to deploy this certificate setup, check out this Solo.io blog post. Be sure to repeat the steps so that a cacerts secret exists in each cluster.
Save the name and kubeconfig context of a cluster where you want to install Istio in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.
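For example, for the first cluster:
export CLUSTER_NAME=cluster1
export CLUSTER_CONTEXT=<cluster1_context>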
Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.
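For example, you can apply the standard channel of the Kubernetes Gateway API CRDs. The version shown here is an example; use the version that your Istio release supports.
kubectl apply --context ${CLUSTER_CONTEXT} -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml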
Verify that the components of the Istio ambient control and data plane are successfully installed. Because the Istio CNI and ztunnel are deployed as daemon sets, the number of CNI and ztunnel pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.
Label the istio-system namespace with the cluster’s network name, which you previously set to your cluster’s name in the global.network field of the istiod installation. The ambient control plane uses this label internally to group pods that exist in the same L3 network.
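For example, using the standard Istio network label:
kubectl label namespace istio-system topology.istio.io/network=${CLUSTER_NAME} --context ${CLUSTER_CONTEXT}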
Create an east-west gateway in the istio-eastwest namespace. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. You can use either LoadBalancer or NodePort addresses for cross-cluster traffic.
Use the following istioctl command to quickly create the east-west gateway. To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command. Cross-cluster traffic through this gateway resolves to the LoadBalancer address.
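For example, assuming your version of the Solo istioctl provides the multicluster expose subcommand:
istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}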
Note that the gateway must still be created with a stable IP address, which is required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data plane communication, in that requests to services resolve to the NodePort instead of the gateway’s stable IP address.
Use the following istioctl command to quickly create the east-west gateway. To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command. Note that the HBONE listener that is created by default on this gateway is unused, because traffic is routed through the NodePort directly.
Verify that the east-west gateway is successfully deployed.
kubectl get pods -n istio-eastwest --context ${CLUSTER_CONTEXT}
For each cluster that you want to include in the multicluster ambient mesh setup, repeat these steps to install the ambient mesh components and an east-west gateway. Remember to change the cluster name and context variables each time you repeat the steps.
Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.
Optional: Before you link clusters, you can check the individual readiness of each cluster for linking by running the istioctl multicluster check command for each cluster.
Before continuing to the next step, make sure that the following checks pass or fail as expected:
✅ The license in use by istiod supports multicluster.
✅ All istiod, ztunnel, and east-west gateway pods are healthy.
✅ The east-west gateway is programmed.
❌ Each remote peer gateway has a gloo.solo.io/PeeringSucceeded status of True. Note that this check fails if you run the command before you link the clusters.
Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters. The steps vary based on whether you have access to the kubeconfig files for each cluster.
Verify that the contexts for the clusters that you want to include in the multicluster mesh are listed in your kubeconfig file.
kubectl config get-contexts
In the output, note the names of the cluster contexts, which you use in the next step to link the clusters.
If you have multiple kubeconfig files, you can generate a merged kubeconfig file by running the following command.
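A typical way to merge kubeconfig files with kubectl (the file paths are placeholders):
KUBECONFIG=~/.kube/config1:~/.kube/config2 kubectl config view --flatten > merged-kubeconfig.yaml
export KUBECONFIG=merged-kubeconfig.yaml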
Using the names of the cluster contexts, link the clusters so that they can communicate. Note that you can either link the clusters bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from the services in any of the other linked clusters. In an asymmetrical setup, you allow one cluster to send requests to another cluster, but the other cluster cannot send requests back to the first cluster.
Bi-directional: You can use the following istioctl command to quickly link the clusters bi-directionally. In each cluster, Gateway resources are created that use the istio-remote GatewayClass. This class allows the gateways to connect to other clusters by using the addresses of the east-west gateways.
istioctl multicluster link --namespace istio-eastwest --contexts=<context1>,<context2>,<context3>
To take a look at the Gateway resources that this command creates, you can include the --generate flag in the command.
Example output for two clusters:
Gateway istio-eastwest/istio-remote-peer-cluster1 applied to cluster "<cluster2_context>" pointing to cluster "<cluster1_context>" (network "cluster1")
Gateway istio-eastwest/istio-remote-peer-cluster2 applied to cluster "<cluster1_context>" pointing to cluster "<cluster2_context>" (network "cluster2")
Asymmetrical: You can use the following istioctl command to quickly link the clusters asymmetrically. The services in the cluster in the --from flag can send requests to services in the cluster in the --to flag, but sending requests in the reverse direction is not permitted.
istioctl multicluster link --namespace istio-eastwest --from <context1> --to <context2>
To take a look at the Gateway resources that this command creates, you can include the --generate flag in the command.
For example, this command allows services in cluster1’s mesh to send requests to services in cluster2’s mesh through cluster2’s east-west gateway. However, the reverse is not permitted: services in cluster2’s mesh cannot send requests through cluster1’s east-west gateway to services in cluster1.
istioctl multicluster link --namespace istio-eastwest --from cluster1 --to cluster2
Example output:
Gateway istio-eastwest/istio-remote-peer-cluster2 applied to cluster "<cluster1_context>" pointing to cluster "<cluster2_context>" (network "cluster2")
NodePort-based cross-cluster traffic: If you want to use NodePorts instead of the gateway LoadBalancer IP address for cross-cluster traffic, annotate the generated peering gateways so that cross-cluster traffic through them resolves to the NodePort address. As with the east-west gateways, the peering gateways are still created with a stable IP address, which is required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data plane communication, in that requests to services resolve to the NodePort instead of the gateway’s stable IP address. Note that the HBONE listener that is created by default on this gateway is unused, because traffic is routed through the NodePort directly.
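For example, the following commands are a sketch for two clusters, using the generated peer gateway names from the previous example output:
# In cluster1, annotate the peer gateway that points to cluster2
kubectl annotate gateway istio-remote-peer-cluster2 -n istio-eastwest \
  peering.solo.io/preferred-data-plane-service-type=NodePort \
  --context <cluster1_context>
# In cluster2, annotate the peer gateway that points to cluster1
kubectl annotate gateway istio-remote-peer-cluster1 -n istio-eastwest \
  peering.solo.io/preferred-data-plane-service-type=NodePort \
  --context <cluster2_context>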
If you do not have access to all kubeconfig files for the clusters you want to link, or if you cannot combine the cluster contexts into one kubeconfig file, you can link the clusters by declaratively creating istio-remote peer gateways.
Note that you can either link the clusters bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from the services in any of the other linked clusters. In an asymmetrical setup, you allow one cluster to send requests to another cluster, but the other cluster cannot send requests back to the first cluster.
Bi-directional: You can use the following Gateway resources to create an istio-remote peer gateway in each cluster. The istio-remote GatewayClass allows the gateways to connect to other clusters by using the addresses of the east-west gateways.
Get the addresses of the east-west gateway in each cluster. The following commands show examples for two clusters.
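For example, you might read each gateway's address from its Gateway API status. This is a sketch; depending on your load balancer, the address can be a hostname or an IP.
export CLUSTER1_EW_ADDRESS=$(kubectl get gateway istio-eastwest -n istio-eastwest \
  --context <cluster1_context> -o jsonpath='{.status.addresses[0].value}')
export CLUSTER2_EW_ADDRESS=$(kubectl get gateway istio-eastwest -n istio-eastwest \
  --context <cluster2_context> -o jsonpath='{.status.addresses[0].value}')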
Using the east-west gateway addresses, create a Gateway resource in each cluster to represent the other cluster.
In the labels section, be sure to update the locality labels according to the region and zone of each cluster. For more information about locality support, see the release notes.
If you want to use NodePorts instead of the gateway LoadBalancer IP address for cross-cluster traffic, uncomment the peering.solo.io/preferred-data-plane-service-type: NodePort annotation from each Gateway resource. As with the east-west gateways, the peering gateways must still be created with a stable IP address, which is required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data plane communication, in that requests to services resolve to the NodePort instead of the gateway’s stable IP address. Additionally, you can comment out the HBONE listener in each gateway, because traffic is routed through the NodePort directly.
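The following resource is a minimal sketch of what the peer gateway in cluster1, representing cluster2, can look like, modeled on the output of istioctl multicluster link --generate. The listener setup, locality label values, and address are assumptions; verify them against the output that your version of istioctl generates.
kubectl apply --context <cluster1_context> -f - <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: istio-remote-peer-cluster2
  namespace: istio-eastwest
  labels:
    topology.istio.io/network: cluster2
    # Update the locality labels for the peer cluster's region and zone
    topology.kubernetes.io/region: us-east-1
    topology.kubernetes.io/zone: us-east-1a
  # Uncomment to resolve cross-cluster traffic to NodePort addresses
  # annotations:
  #   peering.solo.io/preferred-data-plane-service-type: NodePort
spec:
  gatewayClassName: istio-remote
  addresses:
  # East-west gateway address of the peer cluster (hostname or IP)
  - type: Hostname
    value: ${CLUSTER2_EW_ADDRESS}
  listeners:
  # With NodePort peering, you can comment out this HBONE listener
  - name: cross-network
    port: 15008
    protocol: HBONE
    tls:
      mode: Passthrough
  - name: xds-tls
    port: 15012
    protocol: TLS
    tls:
      mode: Passthrough
EOF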
Asymmetrical: You can use the following Gateway resources to create an istio-remote peer gateway in only some clusters. The istio-remote GatewayClass allows the gateway in one cluster to connect to another cluster by using the address of the east-west gateway, but sending requests in the reverse direction is not permitted.
Get the address of the east-west gateway in the cluster that you want to send traffic to. The following command shows an example for cluster2.
Using the east-west gateway address, create a Gateway resource in the cluster that you want to send requests from. For example, this Gateway resource allows services in cluster1’s mesh to send requests to services in cluster2’s mesh through cluster2’s east-west gateway. However, the reverse is not permitted: services in cluster2’s mesh cannot send requests through cluster1’s east-west gateway to services in cluster1.
In the labels section, be sure to update the locality labels according to the region and zone of each cluster. For more information about locality support, see the release notes.
If you want to use NodePorts instead of the gateway LoadBalancer IP address for cross-cluster traffic, uncomment the peering.solo.io/preferred-data-plane-service-type: NodePort annotation from each Gateway resource. As with the east-west gateways, the peering gateway must still be created with a stable IP address, which is required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data plane communication, in that requests to services resolve to the NodePort instead of the gateway’s stable IP address. Additionally, you can comment out the HBONE listener in each gateway, because traffic is routed through the NodePort directly.
In each cluster, verify that peer linking was successful by running the istioctl multicluster check command.
istioctl multicluster check --context $CLUSTER_CONTEXT
In this example output for cluster1, the license is valid, all Istio pods are healthy, and the east-west gateway is programmed. The remote peer gateways for linking to cluster2 and cluster3 both have a gloo.solo.io/PeeringSucceeded status of True.
✅ License Check: license is valid for multicluster
✅ Pod Check (istiod): all pods healthy
✅ Pod Check (ztunnel): all pods healthy
✅ Pod Check (eastwest gateway): all pods healthy
✅ Gateway Check: all eastwest gateways programmed
✅ Peers Check: all clusters connected
Option 2: Upgrade and link existing ambient meshes
Set your Enterprise level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.
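For example:
export GLOO_MESH_LICENSE_KEY=<license_key>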
Save the details for the version of the Solo distribution of Istio that your ambient meshes run.
Save the Solo distribution of Istio patch version and tag.
export ISTIO_VERSION=1.28.0
# Change the tags as needed
export ISTIO_IMAGE=1.28.0-solo
Save the repo key for the minor version of the Solo distribution of Istio. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.
# 12-character hash at the end of the repo URL
export REPO_KEY=<repo_key>
export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.
Get the OS and architecture that you use on your machine.
Each cluster in the multicluster setup must have a shared root of trust. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
By default, the Istio CA generates a self-signed root certificate and key, and uses them to sign the workload certificates. For more information, see the Plug in CA Certificates guide in the community Istio documentation.
For demo installations, you can run the following function to quickly generate and plug in the certificates and key for the Istio CA:
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
cd istio-${ISTIO_VERSION}
mkdir -p certs
pushd certs
make -f ../tools/certs/Makefile.selfsigned.mk root-ca
function create_cacerts_secret() {
  context=${1:?context}
  cluster=${2:?cluster}
  make -f ../tools/certs/Makefile.selfsigned.mk ${cluster}-cacerts
  kubectl --context=${context} create ns istio-system || true
  kubectl --context=${context} create secret generic cacerts -n istio-system \
    --from-file=${cluster}/ca-cert.pem \
    --from-file=${cluster}/ca-key.pem \
    --from-file=${cluster}/root-cert.pem \
    --from-file=${cluster}/cert-chain.pem
}
create_cacerts_secret ${REMOTE_CONTEXT1} ${REMOTE_CLUSTER1}
create_cacerts_secret ${REMOTE_CONTEXT2} ${REMOTE_CLUSTER2}
cd ../..
To enhance the security of your setup even further and have full control over the Istio CA lifecycle, you can generate and store the root and intermediate CA certificates and keys with your own PKI provider. You can then use tools such as cert-manager to send certificate signing requests on behalf of istiod to your PKI provider. Cert-manager stores the signed intermediate certificates and keys in the cacerts Kubernetes secret so that istiod can use these credentials to issue leaf certificates for the workloads in the service mesh. You can set up cert-manager to also check the certificates and renew them before they expire.
AWS Private CA issuer and cert-manager: For an architectural overview of this certificate setup, see Bring your own Istio CAs with AWS. For steps on how to deploy this certificate setup, check out this Solo.io blog post. Be sure to repeat the steps so that a cacerts secret exists in each cluster.
In each cluster, update the ambient mesh components for multicluster, and create an east-west gateway so that traffic requests can be routed cross-cluster.
Save the name and kubeconfig context of a cluster where you run an ambient mesh. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.
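For example, for the first cluster:
export CLUSTER_NAME=cluster1
export CLUSTER_CONTEXT=<cluster1_context>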
Update your Helm release with the following multicluster values. If you must update the Istio minor version, include the --set global.tag=${ISTIO_IMAGE} and --set global.hub=${REPO} flags too.
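The following command is a minimal sketch of such an upgrade. It reuses your existing values and adds the multicluster settings that are shown in full in the installation examples elsewhere in this guide; verify the value names against your chart version.
helm upgrade istiod oci://${HELM_REPO}/istiod \
  --namespace istio-system \
  --kube-context ${CLUSTER_CONTEXT} \
  --version ${ISTIO_IMAGE} \
  --reuse-values \
  --set-string env.DISABLE_LEGACY_MULTICLUSTER=true \
  --set-string env.PILOT_ENABLE_IP_AUTOALLOCATE=true \
  --set platforms.peering.enabled=true \
  --set license.value=${GLOO_MESH_LICENSE_KEY}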
If you prefer to specify your license secret instead of an inline value, you can include --set license.secretRef.name=<name> and --set license.secretRef.namespace=<namespace>.
Update your Helm release with the following multicluster values. If you must update the Istio minor version, include the --set tag=${ISTIO_IMAGE} and --set hub=${REPO} flags too.
Verify that the ztunnel pods are successfully installed. Because the ztunnel is deployed as a daemon set, the number of pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.
kubectl get pods -A --context ${CLUSTER_CONTEXT} | grep ztunnel
Label the istio-system namespace with the cluster’s network name, which you previously set to your cluster’s name in the global.network field of the istiod installation. The ambient control plane uses this label internally to group pods that exist in the same L3 network.
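For example, using the standard Istio network label:
kubectl label namespace istio-system topology.istio.io/network=${CLUSTER_NAME} --context ${CLUSTER_CONTEXT}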
Create an east-west gateway in the istio-eastwest namespace. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. You can use either LoadBalancer or NodePort addresses for cross-cluster traffic.
Use the following istioctl command to quickly create the east-west gateway. To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command. Cross-cluster traffic through this gateway resolves to the LoadBalancer address.
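For example, assuming your version of the Solo istioctl provides the multicluster expose subcommand:
istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}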
Note that the gateway must still be created with a stable IP address, which is required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data plane communication, in that requests to services resolve to the NodePort instead of the gateway’s stable IP address.
Use the following istioctl command to quickly create the east-west gateway. To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command. Note that the HBONE listener that is created by default on this gateway is unused, because traffic is routed through the NodePort directly.
Verify that the east-west gateway is successfully deployed.
kubectl get pods -n istio-eastwest --context $CLUSTER_CONTEXT
For each cluster that you want to add to the multicluster ambient mesh setup, repeat these steps to upgrade the Helm values and deploy an east-west gateway. Remember to change the cluster name and context variables each time you repeat the steps.
Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.
Optional: Before you link clusters, you can check the individual readiness of each cluster for linking by running the istioctl multicluster check command for each cluster.
Before continuing to the next step, make sure that the following checks pass or fail as expected:
✅ The license in use by istiod supports multicluster.
✅ All istiod, ztunnel, and east-west gateway pods are healthy.
✅ The east-west gateway is programmed.
❌ Each remote peer gateway has a gloo.solo.io/PeeringSucceeded status of True. Note that this check fails if you run the command before you link the clusters.
Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters. The steps vary based on whether you have access to the kubeconfig files for each cluster.
Verify that the contexts for the clusters that you want to include in the multicluster mesh are listed in your kubeconfig file.
kubectl config get-contexts
In the output, note the names of the cluster contexts, which you use in the next step to link the clusters.
If you have multiple kubeconfig files, you can generate a merged kubeconfig file by running the following command.
Using the names of the cluster contexts, link the clusters so that they can communicate. Note that you can either link the clusters bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from the services in any of the other linked clusters. In an asymmetrical setup, you allow one cluster to send requests to another cluster, but the other cluster cannot send requests back to the first cluster.
Bi-directional: You can use the following istioctl command to quickly link the clusters bi-directionally. In each cluster, Gateway resources are created that use the istio-remote GatewayClass. This class allows the gateways to connect to other clusters by using the addresses of the east-west gateways.
istioctl multicluster link --namespace istio-eastwest --contexts=<context1>,<context2>,<context3>
To take a look at the Gateway resources that this command creates, you can include the --generate flag in the command.
Example output for two clusters:
Gateway istio-eastwest/istio-remote-peer-cluster1 applied to cluster "<cluster2_context>" pointing to cluster "<cluster1_context>" (network "cluster1")
Gateway istio-eastwest/istio-remote-peer-cluster2 applied to cluster "<cluster1_context>" pointing to cluster "<cluster2_context>" (network "cluster2")
Asymmetrical: You can use the following istioctl command to quickly link the clusters asymmetrically. The services in the cluster in the --from flag can send requests to services in the cluster in the --to flag, but sending requests in the reverse direction is not permitted.
istioctl multicluster link --namespace istio-eastwest --from <context1> --to <context2>
To take a look at the Gateway resources that this command creates, you can include the --generate flag in the command.
For example, this command allows services in cluster1’s mesh to send requests to services in cluster2’s mesh through cluster2’s east-west gateway. However, the reverse is not permitted: services in cluster2’s mesh cannot send requests through cluster1’s east-west gateway to services in cluster1.
istioctl multicluster link --namespace istio-eastwest --from cluster1 --to cluster2
Example output:
Gateway istio-eastwest/istio-remote-peer-cluster2 applied to cluster "<cluster1_context>" pointing to cluster "<cluster2_context>" (network "cluster2")
NodePort-based cross-cluster traffic: If you want to use NodePorts instead of the gateway LoadBalancer IP address for cross-cluster traffic, annotate the generated peering gateways so that cross-cluster traffic through them resolves to the NodePort address. As with the east-west gateways, the peering gateways are still created with a stable IP address, which is required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data plane communication, in that requests to services resolve to the NodePort instead of the gateway’s stable IP address. Note that the HBONE listener that is created by default on this gateway is unused, because traffic is routed through the NodePort directly.
If you do not have access to all kubeconfig files for the clusters you want to link, or if you cannot combine the cluster contexts into one kubeconfig file, you can link the clusters by declaratively creating istio-remote peer gateways.
Note that you can either link the clusters bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from the services in any of the other linked clusters. In an asymmetrical setup, you allow one cluster to send requests to another cluster, but the other cluster cannot send requests back to the first cluster.
Bi-directional: You can use the following Gateway resources to create an istio-remote peer gateway in each cluster. The istio-remote GatewayClass allows the gateways to connect to other clusters by using the addresses of the east-west gateways.
Get the addresses of the east-west gateway in each cluster. The following commands show examples for two clusters.
Using the east-west gateway addresses, create a Gateway resource in each cluster to represent the other cluster.
In the labels section, be sure to update the locality labels according to the region and zone of each cluster. For more information about locality support, see the release notes.
If you want to use NodePorts instead of the gateway LoadBalancer IP address for cross-cluster traffic, uncomment the peering.solo.io/preferred-data-plane-service-type: NodePort annotation from each Gateway resource. As with the east-west gateways, the peering gateways must still be created with a stable IP address, which is required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data plane communication, in that requests to services resolve to the NodePort instead of the gateway’s stable IP address. Additionally, you can comment out the HBONE listener in each gateway, because traffic is routed through the NodePort directly.
Asymmetrical: You can use the following Gateway resources to create an istio-remote peer gateway in only some clusters. The istio-remote GatewayClass allows the gateway in one cluster to connect to another cluster by using the address of the east-west gateway, but sending requests in the reverse direction is not permitted.
Get the address of the east-west gateway in the cluster that you want to send traffic to. The following command shows an example for cluster2.
Using the east-west gateway address, create a Gateway resource in the cluster that you want to send requests from. For example, this Gateway resource allows services in cluster1’s mesh to send requests to services in cluster2’s mesh through cluster2’s east-west gateway. However, the reverse is not permitted: services in cluster2’s mesh cannot send requests through cluster1’s east-west gateway to services in cluster1.
In the labels section, be sure to update the locality labels according to the region and zone of each cluster. For more information about locality support, see the release notes.
If you want to use NodePorts instead of the gateway LoadBalancer IP address for cross-cluster traffic, uncomment the peering.solo.io/preferred-data-plane-service-type: NodePort annotation from each Gateway resource. As with the east-west gateways, the peering gateway must still be created with a stable IP address, which is required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data plane communication, in that requests to services resolve to the NodePort instead of the gateway’s stable IP address. Additionally, you can comment out the HBONE listener in each gateway, because traffic is routed through the NodePort directly.
In each cluster, verify that peer linking was successful by running the istioctl multicluster check command.
istioctl multicluster check --context $CLUSTER_CONTEXT
In this example output for cluster1, the license is valid, all Istio pods are healthy, and the east-west gateway is programmed. The remote peer gateways for linking to cluster2 and cluster3 both have a gloo.solo.io/PeeringSucceeded status of True.
✅ License Check: license is valid for multicluster
✅ Pod Check (istiod): all pods healthy
✅ Pod Check (ztunnel): all pods healthy
✅ Pod Check (eastwest gateway): all pods healthy
✅ Gateway Check: all eastwest gateways programmed
✅ Peers Check: all clusters connected
Next: Add apps to the ambient mesh. For multicluster setups, this includes making specific services available across your linked cluster setup.
In each cluster, use Helm to create the ambient mesh components, and create an east-west gateway so that traffic requests can be routed cross-cluster. Then, use the Gloo management plane to automate multicluster linking, which enables cross-cluster service discovery.
Review the following considerations:
Automated multicluster peering is a beta feature. For more information, see Solo feature maturity.
This feature requires an Enterprise level license for Gloo Mesh.
Automated peering requires Istio to be installed in the same cluster that the Gloo management plane is deployed to.
Set your Enterprise level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.
Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.
# 12-character hash at the end of the repo URL
export REPO_KEY=<repo_key>
export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.
Get the OS and architecture that you use on your machine.
Upgrade Gloo Mesh in your multicluster setup to enable the ConfigDistribution feature flag and install the enterprise CRDs, which are required for Gloo Mesh to automate peering and distribute gateways between clusters.
These steps assume you already installed Gloo Mesh, and show you how to upgrade your Helm install values. If you have not yet installed Gloo Mesh, follow the steps in Set up multicluster management.
Upgrade your gloo-platform-crds Helm release in the management cluster to include the following settings.
Create a shared root of trust for each cluster in the multicluster setup, including the management cluster. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
By default, the Istio CA generates a self-signed root certificate and key, and uses them to sign the workload certificates. For more information, see the Plug in CA Certificates guide in the community Istio documentation.
For demo installations, you can run the following function to quickly generate and plug in the certificates and key for the Istio CA:
To enhance the security of your setup even further and have full control over the Istio CA lifecycle, you can generate and store the root and intermediate CA certificates and keys with your own PKI provider. You can then use tools such as cert-manager to send certificate signing requests on behalf of istiod to your PKI provider. Cert-manager stores the signed intermediate certificates and keys in the cacerts Kubernetes secret so that istiod can use these credentials to issue leaf certificates for the workloads in the service mesh. You can set up cert-manager to also check the certificates and renew them before they expire.
AWS Private CA issuer and cert-manager: For an architectural overview of this certificate setup, see Bring your own Istio CAs with AWS. For steps on how to deploy this certificate setup, check out this Solo.io blog post. Be sure to repeat the steps so that a cacerts secret exists in each cluster.
Save the name and kubeconfig context of a cluster where you want to install Istio in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context. Note that to use automated multicluster peering, you must complete these steps to install an ambient mesh in the management cluster as well as your workload clusters.
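For example, for the first workload cluster:
export CLUSTER_NAME=cluster1
export CLUSTER_CONTEXT=<cluster1_context>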
Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.
helm upgrade --install istiod oci://${HELM_REPO}/istiod \
--namespace istio-system \
--kube-context ${CLUSTER_CONTEXT} \
--version ${ISTIO_IMAGE} \
-f - <<EOF
env:
# Enables automatic creation of remote peer gateways
PEERING_AUTOMATIC_LOCAL_GATEWAY: "true"
# Assigns IP addresses to multicluster services
PILOT_ENABLE_IP_AUTOALLOCATE: "true"
# Disable community Istio multicluster mechanisms
DISABLE_LEGACY_MULTICLUSTER: "true"
# Disable selecting workload entries for local service routing.
# Required for Gloo VirtualDestination functionality.
PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false"
# Required when meshConfig.trustDomain is set
PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true"
global:
hub: ${REPO}
multiCluster:
clusterName: ${CLUSTER_NAME}
network: ${CLUSTER_NAME}
proxy:
clusterDomain: cluster.local
tag: ${ISTIO_IMAGE}
meshConfig:
accessLogFile: /dev/stdout
defaultConfig:
proxyMetadata:
ISTIO_META_DNS_AUTO_ALLOCATE: "true"
ISTIO_META_DNS_CAPTURE: "true"
# Assign each cluster a unique trust domain to apply policies to specific clusters
trustDomain: "${CLUSTER_NAME}.local"
pilot:
cni:
namespace: istio-system
enabled: true
# Required to enable multicluster support
platforms:
peering:
enabled: true
profile: ambient
license:
value: ${GLOO_MESH_LICENSE_KEY}
# Uncomment if you prefer to specify your license secret
# instead of an inline value.
# secretRef:
# name:
# namespace:
EOF
helm upgrade --install istiod oci://${HELM_REPO}/istiod \
--namespace istio-system \
--kube-context ${CLUSTER_CONTEXT} \
--version ${ISTIO_IMAGE} \
-f - <<EOF
env:
# Assigns IP addresses to multicluster services
PILOT_ENABLE_IP_AUTOALLOCATE: "true"
# Disable community Istio multicluster mechanisms
DISABLE_LEGACY_MULTICLUSTER: "true"
# Disable selecting workload entries for local service routing.
# Required for Gloo VirtualDestination functionality.
PILOT_ENABLE_K8S_SELECT_WORKLOAD_ENTRIES: "false"
# Required when meshConfig.trustDomain is set
PILOT_SKIP_VALIDATE_TRUST_DOMAIN: "true"
global:
hub: ${REPO}
multiCluster:
clusterName: ${CLUSTER_NAME}
network: ${CLUSTER_NAME}
platform: openshift
proxy:
clusterDomain: cluster.local
tag: ${ISTIO_IMAGE}
meshConfig:
accessLogFile: /dev/stdout
defaultConfig:
proxyMetadata:
ISTIO_META_DNS_AUTO_ALLOCATE: "true"
ISTIO_META_DNS_CAPTURE: "true"
# Assign each cluster a unique trust domain to apply policies to specific clusters
trustDomain: "${CLUSTER_NAME}.local"
pilot:
cni:
namespace: kube-system
enabled: true
# Required to enable multicluster support
platforms:
peering:
enabled: true
profile: ambient
license:
value: ${GLOO_MESH_LICENSE_KEY}
# Uncomment if you prefer to specify your license secret
# instead of an inline value.
# secretRef:
# name:
# namespace:
EOF
Install the Istio CNI node agent daemon set. Note that although the CNI is included in this section, it is technically not part of the control plane or data plane.
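The following command is a minimal sketch of the CNI installation, assuming the cni chart is published in the same OCI Helm repo as istiod. On OpenShift, use the kube-system namespace and include the global.platform=openshift setting.
helm upgrade --install istio-cni oci://${HELM_REPO}/cni \
  --namespace istio-system \
  --kube-context ${CLUSTER_CONTEXT} \
  --version ${ISTIO_IMAGE} \
  --set profile=ambient \
  --set global.hub=${REPO} \
  --set global.tag=${ISTIO_IMAGE}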
Verify that the components of the Istio ambient control plane are successfully installed. Because the Istio CNI is deployed as a daemon set, the number of CNI pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.
kubectl get pods -A --context ${CLUSTER_CONTEXT} | grep istio
Verify that the ztunnel pods are successfully installed. Because the ztunnel is deployed as a daemon set, the number of pods equals the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.
kubectl get pods -A --context ${CLUSTER_CONTEXT} | grep ztunnel
Label the istio-system namespace with the cluster’s network name, which you previously set to your cluster’s name in the global.network field of the istiod installation. The ambient control plane uses this label internally to group pods that exist in the same L3 network.
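For example, using the standard Istio network label:
kubectl label namespace istio-system topology.istio.io/network=${CLUSTER_NAME} --context ${CLUSTER_CONTEXT}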
Create an east-west gateway in the istio-eastwest namespace. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. You can use either LoadBalancer or NodePort addresses for cross-cluster traffic.
Use the following istioctl command to quickly create the east-west gateway. To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command. Cross-cluster traffic through this gateway resolves to the LoadBalancer address.
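For example, assuming your version of the Solo istioctl provides the multicluster expose subcommand:
istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}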
Note that the gateway must still be created with a stable IP address, which is required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data plane communication, in that requests to services resolve to the NodePort instead of the gateway’s stable IP address.
Use the following istioctl command to quickly create the east-west gateway. To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command. Note that the HBONE listener that is created by default on this gateway is unused, because traffic is routed through the NodePort directly.
Verify that the east-west gateway is successfully deployed.
kubectl get pods -n istio-eastwest --context $CLUSTER_CONTEXT
For each cluster that you want to include in the multicluster ambient mesh setup, including the management cluster, repeat these steps to install the ambient mesh components and an east-west gateway in each cluster. Remember to change the cluster name and context variables each time you repeat the steps.
After you complete the steps for each cluster, verify that Gloo Mesh successfully created and distributed the remote peering gateways. These gateways use the istio-remote GatewayClass, which allows the istiod control plane in each cluster to discover the east-west gateway addresses of the other clusters. Gloo Mesh generates one istio-remote gateway in the management cluster for each connected workload cluster, and then distributes these gateways to each cluster.
Verify that an istio-remote gateway for each connected cluster is copied to the management cluster.
kubectl get gateways -n istio-eastwest --context $MGMT_CONTEXT
In this example output, the istio-remote gateways that were auto-generated for workload clusters cluster1 and cluster2 are copied to the management cluster, alongside the management cluster’s own istio-remote gateway and east-west gateway.
NAMESPACE NAME CLASS ADDRESS PROGRAMMED AGE
istio-eastwest istio-eastwest istio-eastwest a7f6f1a2611fc4eb3864f8d688622fd4-1234567890.us-east-1.elb.amazonaws.com True 6s
istio-eastwest istio-remote-peer-cluster1 istio-remote a5082fe9522834b8192a6513eb8c6b01-0987654321.us-east-1.elb.amazonaws.com True 4s
istio-eastwest istio-remote-peer-cluster2 istio-remote aaad62dc3ffb142a1bfc13df7fe9665b-5678901234.us-east-1.elb.amazonaws.com True 4s
istio-eastwest istio-remote-peer-mgmt istio-remote a7f6f1a2611fc4eb3864f8d688622fd4-1234567890.us-east-1.elb.amazonaws.com True 4s
In each cluster, verify that all istio-remote gateways are successfully distributed to all workload clusters. This ensures that services in each workload cluster can now access the east-west gateways in other clusters of the multicluster mesh setup.
kubectl get gateways -n istio-eastwest --context $CLUSTER_CONTEXT
In each cluster, verify that peer linking was successful by running the istioctl multicluster check command.
istioctl multicluster check --context $CLUSTER_CONTEXT
In this example output for cluster1, the license is valid, all Istio pods are healthy, and the east-west gateway is programmed. The remote peer gateways for linking to cluster2 and cluster3 both have a gloo.solo.io/PeeringSucceeded status of True.
✅ License Check: license is valid for multicluster
✅ Pod Check (istiod): all pods healthy
✅ Pod Check (ztunnel): all pods healthy
✅ Pod Check (eastwest gateway): all pods healthy
✅ Gateway Check: all eastwest gateways programmed
✅ Peers Check: all clusters connected
Next: Add apps to the ambient mesh. For multicluster setups, this includes making specific services available across your linked cluster setup.
In a multicluster mesh, the east-west gateway serves as a ztunnel that allows traffic requests to flow across clusters, but it does not modify requests in any way. To control in-mesh traffic, you can instead apply policies to waypoint proxies that you create for a workload namespace.
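For example, the following command is a minimal sketch that deploys a waypoint proxy for a workload namespace with istioctl; the namespace is a placeholder. Policies that you attach to the waypoint then apply to in-mesh traffic for that namespace.
istioctl waypoint apply -n <namespace> --enroll-namespace --context ${CLUSTER_CONTEXT}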