Considerations

Before you install a multicluster sidecar mesh, review the following considerations and requirements.

License requirements

A Gloo Mesh Enterprise license is required. If you do not have one, contact an account representative. For more information about providing license keys, see Licensing.

Version requirements

In Gloo Mesh version 2.7 and later, multicluster setups require the Solo distribution of Istio version 1.24.3 or later (1.24.3-solo), including the Solo distribution of istioctl.
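
To check the Gloo Mesh version that your management cluster currently runs, you can use the meshctl CLI. This is an optional check that assumes meshctl is installed and that ${MGMT_CONTEXT} points to your management cluster.

    meshctl version --kubecontext ${MGMT_CONTEXT}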

Components

In the following steps, you install the Istio ambient components in each workload cluster so that you can create east-west gateways and establish multicluster peering, even if you plan to use a sidecar mesh. However, sidecar mesh setups continue to use sidecar injection for your workloads, and your workloads are not added to an ambient mesh. For more information about running both ambient and sidecar components in one mesh setup, see Ambient-sidecar interoperability.

Revision and canary upgrade limitations

The upgrade guides in this documentation show you how to perform in-place upgrades for your Istio components, which is the recommended upgrade strategy.

Set up tools

  1. Save the following environment details and install the Solo distribution of the istioctl binary.

    1. Set your Enterprise-level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

           export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
           
    2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions. In Gloo Mesh version 2.7 and later, multicluster setups require version 1.24.3 or later.

    3. Save the details for the version of the Solo distribution of Istio that you want to install.
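
       For example, you might export details like the following, where the values are placeholders for the Istio version, the Solo image tag, and the repository key of your chosen release:

           export ISTIO_VERSION=<istio_version>
           export ISTIO_IMAGE=${ISTIO_VERSION}-solo
           export REPO_KEY=<solo_istio_repo_key>

       The download and installation commands in the next step use these variables.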

    4. Get the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.

      1. Get the OS and architecture that you use on your machine.

             OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
           ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
           echo $OS
           echo $ARCH
             
      2. Download the Solo distribution of Istio binary and install istioctl.

             mkdir -p ~/.istioctl/bin
           curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
           chmod +x ~/.istioctl/bin/istioctl
           
           export PATH=${HOME}/.istioctl/bin:${PATH}
             
      3. Verify that the istioctl client runs the Solo distribution of Istio that you want to install.

             istioctl version --remote=false
             

        Example output:

             client version: 1.25.2-solo
             

  2. Deploy Istio to each cluster and link clusters. These steps vary based on whether you want to use Gloo Mesh to automatically link clusters or manually link clusters yourself. Note that automatic linking is a beta feature and requires Istio to be installed in the same cluster as the Gloo Mesh management plane.

Manually link clusters

In each cluster, use the Gloo Operator to create the service mesh components. Then, create an east-west gateway so that traffic requests can be routed cross-cluster, and link clusters to enable cross-cluster service discovery.

Create a shared root of trust

Each cluster in the multicluster setup must have a shared root of trust. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
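
For example, if you generate your own root and intermediate certificates, a common Istio approach is to provide them in a cacerts secret in the Istio installation namespace of each cluster before you deploy the mesh components. The following sketch assumes that you already created the certificate files with the standard Istio plug-in CA file names.

    kubectl create namespace istio-system --context ${CLUSTER_CONTEXT}
    kubectl create secret generic cacerts -n istio-system --context ${CLUSTER_CONTEXT} \
      --from-file=ca-cert.pem \
      --from-file=ca-key.pem \
      --from-file=root-cert.pem \
      --from-file=cert-chain.pem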

Deploy mesh components

  1. Save the name and kubeconfig context of a cluster where you want to install Istio in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.

      export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
      
  2. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license by using the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

      helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
      --version 0.2.3 \
      -n gloo-mesh \
      --create-namespace \
      --kube-context ${CLUSTER_CONTEXT} \
      --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
      
  3. Verify that the operator pod is running.

      kubectl get pods -n gloo-mesh --context ${CLUSTER_CONTEXT} -l app.kubernetes.io/name=gloo-operator
      

    Example output:

      gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
      
  4. Create a ServiceMeshController custom resource to configure an Istio installation. For a description of each configurable field, see the ServiceMeshController reference. If you need to set more advanced Istio configuration, you can also create a gloo-extensions-config configmap.

      kubectl apply -n gloo-mesh --context ${CLUSTER_CONTEXT} -f -<<EOF
    apiVersion: operator.gloo.solo.io/v1
    kind: ServiceMeshController
    metadata:
      name: managed-istio
      labels:
        app.kubernetes.io/name: managed-istio
    spec:
      cluster: ${CLUSTER_NAME}
      network: ${CLUSTER_NAME}
      dataplaneMode: Ambient # required for multicluster setups
      installNamespace: istio-system
      version: ${ISTIO_VERSION}
    EOF
      
  5. Verify that the ServiceMeshController is ready. In the Status section of the output, make sure that all statuses are True, and that the phase is SUCCEEDED.

      kubectl describe servicemeshcontroller -n gloo-mesh managed-istio --context ${CLUSTER_CONTEXT}
      

    Example output:

      ...
    Status:
      Conditions:
        Last Transition Time:  2024-12-27T20:47:01Z
        Message:               Manifests initialized
        Observed Generation:   1
        Reason:                ManifestsInitialized
        Status:                True
        Type:                  Initialized
        Last Transition Time:  2024-12-27T20:47:02Z
        Message:               CRDs installed
        Observed Generation:   1
        Reason:                CRDInstalled
        Status:                True
        Type:                  CRDInstalled
        Last Transition Time:  2024-12-27T20:47:02Z
        Message:               Deployment succeeded
        Observed Generation:   1
        Reason:                DeploymentSucceeded
        Status:                True
        Type:                  ControlPlaneDeployed
        Last Transition Time:  2024-12-27T20:47:02Z
        Message:               Deployment succeeded
        Observed Generation:   1
        Reason:                DeploymentSucceeded
        Status:                True
        Type:                  CNIDeployed
        Last Transition Time:  2024-12-27T20:47:02Z
        Message:               Deployment succeeded
        Observed Generation:   1
        Reason:                DeploymentSucceeded
        Status:                True
        Type:                  WebhookDeployed
        Last Transition Time:  2024-12-27T20:47:02Z
        Message:               All conditions are met
        Observed Generation:   1
        Reason:                SystemReady
        Status:                True
        Type:                  Ready
      Phase:                   SUCCEEDED
    Events:                    <none>
      
  6. Verify that the istiod control plane, Istio CNI, and ztunnel pods are running.

      kubectl get pods -n istio-system --context ${CLUSTER_CONTEXT}
      

    Example output:

      NAME                          READY   STATUS    RESTARTS   AGE
    istio-cni-node-6s5nk          1/1     Running   0          2m53s
    istio-cni-node-blpz4          1/1     Running   0          2m53s
    istiod-gloo-bb86b959f-msrg7   1/1     Running   0          2m45s
    istiod-gloo-bb86b959f-w29cm   1/1     Running   0          3m
    ztunnel-mx8nw                 1/1     Running   0          2m52s
    ztunnel-w8r6c                 1/1     Running   0          2m52s
      
  7. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more. Replace <version> with the Kubernetes Gateway API version that you want to use.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v<version>/standard-install.yaml --context ${CLUSTER_CONTEXT}
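
    To confirm that the CRDs are registered, you can list them:

      kubectl get crds --context ${CLUSTER_CONTEXT} | grep gateway.networking.k8s.io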
      
  8. Create an east-west gateway in the istio-eastwest namespace. An east-west gateway facilitates traffic between services in each cluster in your multicluster mesh.

    • You can use the following istioctl command to quickly create the east-west gateway.
        kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
      istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}
        
    • To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command.
        kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
      istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT} --generate
        
      In this example output, the gatewayClassName istio-eastwest is used, which is included by default when you install Istio in ambient mode.
        apiVersion: gateway.networking.k8s.io/v1
      kind: Gateway
      metadata:
        labels:
          istio.io/expose-istiod: "15012"
          topology.istio.io/network: "<cluster_network_name>"
        name: istio-eastwest
        namespace: istio-eastwest
      spec:
        gatewayClassName: istio-eastwest
        listeners:
        - name: cross-network
          port: 15008
          protocol: HBONE
          tls:
            mode: Passthrough
        - name: xds-tls
          port: 15012
          protocol: TLS
          tls:
            mode: Passthrough
        
  9. Verify that the east-west gateway is successfully deployed.

      kubectl get pods -n istio-eastwest --context $CLUSTER_CONTEXT
      
  10. For each cluster that you want to include in the multicluster mesh setup, repeat these steps to install the Gloo Operator, service mesh components, and east-west gateway in each cluster. Remember to change the cluster name and context variables each time you repeat the steps.

      export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
      

Link clusters

Link clusters to enable cross-cluster service discovery and to allow traffic to be routed through east-west gateways across clusters.

  1. Verify that the contexts for the clusters that you want to include in the multicluster mesh are listed in your kubeconfig file.

      kubectl config get-contexts
      
    • In the output, note the names of the cluster contexts, which you use in the next step to link the clusters.
    • If you have multiple kubeconfig files, you can generate a merged kubeconfig file by running the following command.
        KUBECONFIG=<kubeconfig_file1>.yaml:<file2>.yaml:<file3>.yaml kubectl config view --flatten
        
  2. Using the names of the cluster contexts, link the clusters so that they can communicate. You can link the clusters either bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from services in any of the other linked clusters. In an asymmetric setup, one cluster can send requests to another cluster, but the other cluster cannot send requests back to the first cluster.
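
     For example, a bi-directional link between two clusters might use the multicluster link command of the Solo distribution of istioctl, as in the following sketch. The context names are placeholders for the cluster contexts from your kubeconfig file; verify the exact command and flags for your istioctl version.

       istioctl multicluster link --contexts=<cluster1_context>,<cluster2_context>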

Automatically link clusters (beta)

In each cluster, use the Gloo Operator to create the service mesh components, and create an east-west gateway so that traffic requests can be routed cross-cluster. Then, use the Gloo Mesh management plane to automate multicluster linking, which enables cross-cluster service discovery.

Enable automatic peering of clusters

Upgrade Gloo Mesh in your multicluster setup to enable the ConfigDistribution feature flag and install the enterprise CRDs, which are required for Gloo Mesh to automate peering and distribute gateways between clusters.

  1. Upgrade your gloo-platform-crds Helm release in the management cluster to include the following settings.

      helm get values gloo-platform-crds -n gloo-mesh -o yaml --kube-context ${MGMT_CONTEXT} > mgmt-crds.yaml
    helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
        --kube-context ${MGMT_CONTEXT} \
        --namespace gloo-mesh \
        -f mgmt-crds.yaml \
        --set featureGates.ConfigDistribution=true \
        --set installEnterpriseCrds=true
      
  2. Upgrade your gloo-platform Helm release in the management cluster to include the following settings.

      helm get values gloo-platform -n gloo-mesh -o yaml --kube-context ${MGMT_CONTEXT} > mgmt-plane.yaml
    helm upgrade gloo-platform gloo-platform/gloo-platform \
        --kube-context ${MGMT_CONTEXT} \
        --namespace gloo-mesh \
        -f mgmt-plane.yaml \
        --set featureGates.ConfigDistribution=true
      
  3. Upgrade your gloo-platform-crds Helm release in each workload cluster to include the following settings. Repeat this step for each workload cluster.

      helm get values gloo-platform-crds -n gloo-mesh -o yaml --kube-context ${CLUSTER_CONTEXT} > crds.yaml
    helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
        --kube-context ${CLUSTER_CONTEXT} \
        --namespace gloo-mesh \
        -f crds.yaml \
        --set installEnterpriseCrds=true
      

Create a shared root of trust

Create a shared root of trust for each cluster in the multicluster setup, including the management cluster. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.

Deploy mesh components

  1. Save the name and kubeconfig context of a cluster where you want to install Istio in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context. Note that to use automated multicluster peering, you must complete these steps to install a service mesh in the management cluster as well as your workload clusters.

      export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
      
  2. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more. Replace <version> with the Kubernetes Gateway API version that you want to use.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v<version>/standard-install.yaml --context ${CLUSTER_CONTEXT}
      
  3. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license by using the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

      helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
      --version 0.2.3 \
      -n gloo-mesh \
      --create-namespace \
      --kube-context ${CLUSTER_CONTEXT} \
      --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
      
  4. Verify that the operator pod is running.

      kubectl get pods -n gloo-mesh --context ${CLUSTER_CONTEXT} -l app.kubernetes.io/name=gloo-operator
      

    Example output:

      gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
      
  5. Apply the following configmap and ServiceMeshController for the Gloo Operator to enable multicluster peering and deploy a service mesh.

      kubectl apply -n gloo-mesh --context ${CLUSTER_CONTEXT} -f -<<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: gloo-extensions-config
      namespace: gloo-mesh
    data:
      beta: |
        serviceMeshController:
          multiClusterMode: Peering
      values.istiod: |
        env:
          PEERING_AUTOMATIC_LOCAL_GATEWAY: "true"
    ---
    apiVersion: operator.gloo.solo.io/v1
    kind: ServiceMeshController
    metadata:
      name: managed-istio
      labels:
        app.kubernetes.io/name: managed-istio
    spec:
      cluster: ${CLUSTER_NAME}
      network: ${CLUSTER_NAME}
      dataplaneMode: Ambient # required for multicluster setups
      installNamespace: istio-system
      version: ${ISTIO_VERSION}
    EOF
      
  6. Verify that the istiod control plane, Istio CNI, and ztunnel pods are running.

      kubectl get pods -n istio-system --context ${CLUSTER_CONTEXT}
      

    Example output:

      NAME                          READY   STATUS    RESTARTS   AGE
    istio-cni-node-6s5nk          1/1     Running   0          2m53s
    istio-cni-node-blpz4          1/1     Running   0          2m53s
    istiod-gloo-bb86b959f-msrg7   1/1     Running   0          2m45s
    istiod-gloo-bb86b959f-w29cm   1/1     Running   0          3m
    ztunnel-mx8nw                 1/1     Running   0          2m52s
    ztunnel-w8r6c                 1/1     Running   0          2m52s
      
  7. Create an east-west gateway in the istio-eastwest namespace. An east-west gateway facilitates traffic between services in each cluster in your multicluster mesh.

    • You can use the following istioctl command to quickly create the east-west gateway.
        kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
      istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}
        
    • To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command.
        kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
      istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT} --generate
        
      In this example output, the gatewayClassName istio-eastwest is used, which is included by default when you install Istio in ambient mode.
        apiVersion: gateway.networking.k8s.io/v1
      kind: Gateway
      metadata:
        labels:
          istio.io/expose-istiod: "15012"
          topology.istio.io/network: "<cluster_network_name>"
        name: istio-eastwest
        namespace: istio-eastwest
      spec:
        gatewayClassName: istio-eastwest
        listeners:
        - name: cross-network
          port: 15008
          protocol: HBONE
          tls:
            mode: Passthrough
        - name: xds-tls
          port: 15012
          protocol: TLS
          tls:
            mode: Passthrough
        
  8. For each cluster that you want to include in the multicluster mesh setup, including the management cluster, repeat these steps to install the Gloo Operator, service mesh components, and east-west gateway in each cluster. Remember to change the cluster name and context variables each time you repeat the steps.

      export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
      

Review remote peer gateways

After you complete the steps for each cluster, verify that Gloo Mesh successfully created and distributed the remote peering gateways. These gateways use the istio-remote GatewayClass, which allows the istiod control plane in each cluster to discover the east-west gateway addresses of the other clusters. Gloo Mesh generates one istio-remote resource in the management cluster for each connected cluster, and then distributes the gateways to each cluster in the setup.

  1. Verify that an istio-remote gateway for each connected cluster is copied to the management cluster.

      kubectl get gateways -n istio-eastwest --context $MGMT_CONTEXT
      

    In this example output, the istio-remote gateways that were auto-generated for workload clusters cluster1 and cluster2 are copied to the management cluster, alongside the management cluster’s own istio-remote gateway and east-west gateway.

      NAMESPACE        NAME                            CLASS           ADDRESS                                                                   PROGRAMMED   AGE
    istio-eastwest   istio-eastwest                 istio-eastwest   a7f6f1a2611fc4eb3864f8d688622fd4-1234567890.us-east-1.elb.amazonaws.com   True         6s
    istio-eastwest   istio-remote-peer-cluster1     istio-remote     a5082fe9522834b8192a6513eb8c6b01-0987654321.us-east-1.elb.amazonaws.com   True         4s
    istio-eastwest   istio-remote-peer-cluster2     istio-remote     aaad62dc3ffb142a1bfc13df7fe9665b-5678901234.us-east-1.elb.amazonaws.com   True         4s
    istio-eastwest   istio-remote-peer-mgmt         istio-remote     a7f6f1a2611fc4eb3864f8d688622fd4-1234567890.us-east-1.elb.amazonaws.com   True         4s
      
  2. In each cluster, verify that all istio-remote gateways are successfully distributed to all workload clusters. This ensures that services in each workload cluster can now access the east-west gateways in other clusters of the multicluster mesh setup.

      kubectl get gateways -n istio-eastwest --context $CLUSTER_CONTEXT
      

Next

Add apps to the sidecar mesh. For multicluster setups, this includes making specific services available across your linked cluster setup.
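
For example, after a service is in the mesh, Gloo Mesh multicluster setups typically mark it as available to other linked clusters by labeling it as global. The following is a sketch: the solo.io/service-scope label is an assumption that you should verify for your Gloo Mesh version, and the service, namespace, and context values are placeholders.

    kubectl label service <service> -n <namespace> --context ${CLUSTER_CONTEXT} solo.io/service-scope=global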

ServiceMeshController reference

Review the following configurable fields for the ServiceMeshController custom resource.

  • cluster: The name of the cluster to install Istio into. This value is required to set the trust domain field in multicluster environments.
  • dataplaneMode: The dataplane mode to use. Supported values: Ambient or Sidecar. Default: Ambient.
  • distribution: Optional: A specific distribution of the Istio version, such as the standard or FIPS image distribution. Supported values: Standard or FIPS. Default: Standard.
  • image.repository: Optional: An Istio image repository, such as to use an image from a private registry. Default: The Solo distribution of Istio repository for the Istio minor version.
  • image.secrets: Optional: A list of secrets to use for pulling images from a container registry. The secrets must be of type kubernetes.io/dockerconfigjson and exist in the installNamespace that you install Istio in.
  • installNamespace: The namespace to install the service mesh components into. If you set installNamespace to a namespace other than gloo-system, gloo-mesh, or istio-system, you must include the --set manager.env.WATCH_NAMESPACES=<namespace> setting. Default: istio-system.
  • network: The default network where workload endpoints exist. A network is a logical grouping of workloads that exist in the same Layer 3 domain. Workloads in the same network can communicate with each other directly, while workloads in different networks require an east-west gateway to establish connectivity. This value is required in multi-network environments. For example, an easy way to identify the network of in-mesh workloads in one cluster is to use the cluster's name for the network, such as cluster1.
  • onConflict: Optional: How to resolve conflicting Istio configuration, if the configuration in this ServiceMeshController conflicts with existing Istio resources in the cluster. Supported values: Force or Abort. Default: Abort.
    • Force: The existing resources are updated with the new configuration.
    • Abort: The installation configured in this ServiceMeshController is aborted, and the existing resources remain unchanged.
  • repository.secrets: Optional: A list of secrets to use for pulling manifests from an artifact registry. The secrets must be of type kubernetes.io/dockerconfigjson and can exist in any namespace, such as the same namespace that you create the ServiceMeshController in.
  • repository.insecureSkipVerify: Optional: If set to true, the repository server's certificate chain and hostname are not verified. Supported values: true or false.
  • scalingProfile: Optional: The istiod control plane scaling settings to use. In large environments, set to Large. Supported values: Default, Demo, or Large. Default: Default.
    • Default sets the following scaling values:
      • resources.requests.cpu=1000m
      • resources.requests.memory=1Gi
      • resources.limits.cpu=2000m
      • resources.limits.memory=2Gi
      • autoscaleEnabled=true
      • autoscaleMin=2
      • autoscaleMax=25
      • cpu.targetAverageUtilization=80
    • Demo sets the following scaling values:
      • autoscaleEnabled=false
      • resources.requests.cpu=250m
    • Large sets the following scaling values:
      • resources.requests.cpu=4000m
      • resources.requests.memory=4Gi
      • resources.limits.cpu=4000m
      • resources.limits.memory=4Gi
      • autoscaleEnabled=true
      • autoscaleMin=4
      • autoscaleMax=50
      • cpu.targetAverageUtilization=75
  • trafficCaptureMode: Optional: The traffic capture mode to use. Supported values: Auto or InitContainer. Default: Auto.
    • Auto: The most suitable traffic capture mode is automatically selected based on the environment, such as using a CNI to capture traffic.
    • InitContainer: An init container is used for the traffic capture. This setting can be used only when the dataplaneMode is Sidecar.
  • trustDomain: The trust domain for Istio workloads. Default: If cluster is set, defaults to that value. If cluster is unset, defaults to cluster.local.
  • version: The Istio patch version to install. For more information, see Supported Solo distributions of Istio. Supported values: Any Istio version supported for your Gloo version.

Advanced settings configuration

You can set advanced Istio configuration by creating a configmap. For example, you might need to specify settings for istiod such as discovery selectors, pod and service annotations, affinities, tolerations, or node selectors.

The following gloo-extensions-config example configmap sets all possible fields for demonstration purposes.

  apiVersion: v1
kind: ConfigMap
metadata:
  name: gloo-extensions-config
  namespace: gloo-mesh
data:
  stable: |
    serviceMeshController:
      istiod:
        discoverySelectors:
          - matchLabels:
              foo: bar
        topology:
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: foo
                      operator: In
                      values:
                      - bar
                  topologyKey: foo.io/bar
                weight: 80
          nodeSelector:
            foo: bar
          tolerations:
            - key: t1
              operator: Equal
              value: v1
        deployment:
          podAnnotations:
            foo: bar
        serviceAnnotations:
          foo: bar
  beta: |
    serviceMeshController:
      cni:
        confDir: /foo/bar
        binDir: /foo/bar
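
To use these advanced settings, create the configmap in the gloo-mesh namespace of the cluster before you create or update the ServiceMeshController, similar to the automated peering steps earlier in this guide. For example, if you save the example above to a file named gloo-extensions-config.yaml (a hypothetical filename):

    kubectl apply --context ${CLUSTER_CONTEXT} -f gloo-extensions-config.yaml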