Gloo Operator
Use the Gloo Operator to link service meshes across multiple clusters.
Overview
In this guide, you deploy a sidecar mesh to each workload cluster, create an east-west gateway in each cluster, and link the istiod control planes across cluster networks by using peering gateways. In the next guide, you can deploy the Bookinfo sample app to the sidecar mesh in each cluster, and make select services available across the multicluster mesh. Incoming requests can then be routed from an ingress gateway, such as Gloo Gateway, to services in your mesh across all clusters.
The following diagram demonstrates a service mesh setup across multiple clusters.


Considerations
Before you install a multicluster sidecar mesh, review the following considerations and requirements.
License requirements
Multicluster capabilities require an Enterprise level license for Gloo Mesh. If you do not have one, contact an account representative.
Version requirements
Review the following known Istio version requirements and restrictions.
- Patch versions 1.26.0 and 1.26.1 of the Solo distribution of Istio lack support for FIPS-tagged images and ztunnel outlier detection. When upgrading or installing 1.26, be sure to use patch version `1.26.1-patch0` and later only.
- In the Solo distribution of Istio 1.25 and later, you can access enterprise-level features by passing your Solo license in the `license.value` or `license.secretRef` field of the Solo distribution of the istiod Helm chart. The Solo istiod Helm chart is strongly recommended due to the included safeguards, default settings, and upgrade handling that ensure a reliable and secure Istio deployment. Though it is not recommended, you can pass your license key in the open source istiod Helm chart by using the `--set pilot.env.SOLO_LICENSE_KEY` flag.
- Multicluster setups require the Solo distribution of Istio version 1.24.3 or later (`1.24.3-solo`), including the Solo distribution of `istioctl`.
- Because AWS Fargate does not support the Istio CNI or iptables for the Istio proxy, you cannot run Istio (and therefore Gloo Mesh (OSS APIs)) on AWS Fargate. For more information, see the Amazon EKS issue.
Components
In the following steps, you install the Istio ambient components in each workload cluster to successfully create east-west gateways and establish multicluster peering, even if you plan to use a sidecar mesh. However, sidecar mesh setups continue to use sidecar injection for your workloads. Your workloads are not added to an ambient mesh. For more information about running both ambient and sidecar components in one mesh setup, see Ambient-sidecar interoperability.
Revision and canary upgrade limitations
The upgrade guides in this documentation show you how to perform in-place upgrades for your Istio components, which is the recommended upgrade strategy.
Cross-cluster traffic addresses
In each cluster, you create an east-west gateway, which is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. In the Solo distribution of Istio 1.28 and later, you can use either LoadBalancer or NodePort addresses to resolve cross-cluster traffic requests through this gateway. Note that the NodePort method is considered alpha in Istio version 1.28.
LoadBalancer: In the standard LoadBalancer peering method, cross-cluster traffic through the east-west gateway resolves to its LoadBalancer address.
NodePort (alpha): If you prefer to use direct pod-to-pod traffic across clusters, you can annotate the east-west and peering gateways so that cross-cluster traffic resolves to NodePort addresses. This method allows you to avoid LoadBalancer services to reduce cross-cluster traffic costs. Review the following considerations:
- Note that the gateways must still be created with stable IP addresses, which are required for xDS communication with the istiod control plane in each cluster. NodePort peering is used for data-plane communication, in that requests to services resolve to the NodePort instead of the LoadBalancer address. Also, the east-west gateway must have the `topology.istio.io/cluster` label.
- If a node in a target cluster becomes inaccessible, such as during a restart or replacement, a delay can occur in the connection from the client cluster, which must become aware of the new east-west gateway NodePort. In this case, you might see a connection error when trying to send cross-cluster traffic to an east-west gateway that is no longer accepting connections.
- Only nodes where an east-west gateway pod is provisioned are considered targets for traffic.
- Like LoadBalancer gateways, NodePort gateways support traffic from Envoy-based ingress gateways, waypoints, and sidecars.
- This feature is in an alpha state. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Solo feature maturity.
The steps in the following guide to create the gateways include options for either the LoadBalancer or NodePort method. A status condition on each east-west and remote peer gateway indicates which dataplane service type is in use.
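For reference, an east-west gateway generally has the following shape. This sketch is illustrative only: the cluster name, listener details, and port are assumptions based on typical ambient east-west gateways, and the exact annotation that switches the gateway to NodePort resolution is not shown here. The `topology.istio.io/cluster` label is the required element noted above.

```yaml
# Illustrative east-west gateway sketch. The topology.istio.io/cluster
# label is required for NodePort peering; "cluster1" is an assumed name,
# and the listener details may differ in the gateway that you generate.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: istio-eastwest
  namespace: istio-eastwest
  labels:
    topology.istio.io/cluster: cluster1
spec:
  gatewayClassName: istio-eastwest
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
```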
Migrating from multicluster community Istio
If you previously used the multicluster feature in community Istio, and want to now migrate to multicluster peering in the Solo distribution of Istio, the DISABLE_LEGACY_MULTICLUSTER environment variable is introduced in the Solo distribution of Istio version 1.28 to disable the community multicluster mechanisms. Multicluster in community Istio uses remote secrets that contain kubeconfigs to watch resources on remote clusters. This system is incompatible with the decentralized, push-based model for peering in the Solo distribution of Istio. This variable causes istiod to ignore remote secrets so that it does not attempt to set up Kubernetes clients to connect to them.
- For fresh multicluster mesh installations with the Solo distribution of Istio, use this environment variable in your istiod settings. This setting serves as a recommended safety measure to prevent any use of remote secrets.
- If you want to initiate a multicluster migration from community Istio, contact a Solo account representative. An account representative can help you set up two revisions of Istio that each select a different set of namespaces, and set the `DISABLE_LEGACY_MULTICLUSTER` variable on the revision that uses the Solo distribution of Istio for multicluster peering.
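As a sketch of how this variable can be set, assuming that you install istiod with the Solo istiod Helm chart, the environment variable can be passed through the chart's `pilot.env` map. The release name, chart reference, and use of `--reuse-values` here are assumptions; adjust them to match your actual istiod installation.

```shell
# Sketch: set DISABLE_LEGACY_MULTICLUSTER on istiod through the Helm chart.
# Release name and chart reference are assumptions for illustration.
helm upgrade istiod oci://${HELM_REPO}/istiod \
  --version ${ISTIO_IMAGE} \
  -n istio-system \
  --reuse-values \
  --set pilot.env.DISABLE_LEGACY_MULTICLUSTER=true
```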
Set up tools
Save the following environment details and install the Solo distribution of the `istioctl` binary.

1. Set your Enterprise level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.
   ```sh
   export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
   ```
2. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions. Then, save the Solo distribution of Istio version.
   ```sh
   export ISTIO_VERSION=1.28.1-patch0
   export ISTIO_IMAGE=${ISTIO_VERSION}-solo
   ```
3. Save the image and Helm repository information for the Solo distribution of Istio.
   - Istio 1.29 and later:
     ```sh
     export REPO=us-docker.pkg.dev/soloio-img/istio
     export HELM_REPO=us-docker.pkg.dev/soloio-img/istio-helm
     ```
   - Istio 1.28 and earlier: You must provide a repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL `us-docker.pkg.dev/gloo-mesh/istio-<repo-key>`, which you can find in the Istio images built by Solo.io support article.
     ```sh
     # 12-character hash at the end of the repo URL
     export REPO_KEY=<repo_key>
     export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
     export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
     ```
4. Get the Solo distribution of Istio binary and install `istioctl`, which you use for multicluster linking and gateway commands.
   1. Get the OS and architecture that you use on your machine.
      ```sh
      OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
      ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
      echo $OS
      echo $ARCH
      ```
   2. Download the Solo distribution of Istio binary and install `istioctl`.
      - Istio 1.29 and later:
        ```sh
        mkdir -p ~/.istioctl/bin
        curl -sSL https://storage.googleapis.com/soloio-istio-binaries/release/$ISTIO_IMAGE/istio-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
        mv ~/.istioctl/bin/istio-$ISTIO_IMAGE/bin/istioctl ~/.istioctl/bin/istioctl
        chmod +x ~/.istioctl/bin/istioctl
        export PATH=${HOME}/.istioctl/bin:${PATH}
        ```
      - Istio 1.28 and earlier:
        ```sh
        mkdir -p ~/.istioctl/bin
        curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
        chmod +x ~/.istioctl/bin/istioctl
        export PATH=${HOME}/.istioctl/bin:${PATH}
        ```
   3. Verify that the `istioctl` client runs the Solo distribution of Istio that you want to install.
      ```sh
      istioctl version --remote=false
      ```
      Example output:
      ```
      client version: 1.28.1-patch0-solo
      ```
Deploy Istio to each cluster and link clusters. These steps vary based on whether you want to manually link meshes across clusters or use Gloo Mesh to automatically link clusters. Note that automatic linking is a beta feature, and requires Istio to be installed in the same cluster that the Gloo management plane is deployed to.
Option 1: Manually link meshes
In each cluster, use the Gloo Operator to create the service mesh components. Then, create an east-west gateway so that traffic requests can be routed cross-cluster, and link clusters to enable cross-cluster service discovery.
Create a shared root of trust
Each cluster in the multicluster setup must have a shared root of trust. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
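For example, one common way to provide the shared root is Istio's standard plug-in CA mechanism: store the root certificate and a cluster-specific intermediate CA, signed by that root, in a `cacerts` secret in each cluster before you install the mesh. This sketch assumes the plug-in CA file naming convention; verify the file names and namespace against your PKI setup.

```shell
# Sketch: provide a cluster-specific intermediate CA signed by the shared
# root, using Istio's standard cacerts secret. File paths are placeholders
# for the output of your PKI provider or certificate tooling.
kubectl create namespace istio-system --context ${CLUSTER_CONTEXT}
kubectl create secret generic cacerts -n istio-system \
  --context ${CLUSTER_CONTEXT} \
  --from-file=ca-cert.pem \
  --from-file=ca-key.pem \
  --from-file=root-cert.pem \
  --from-file=cert-chain.pem
```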
Deploy mesh components
1. Save the name and kubeconfig context of a cluster where you want to install Istio in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context.
   ```sh
   export CLUSTER_NAME=<cluster-name>
   export CLUSTER_CONTEXT=<cluster-context>
   ```
2. Install the Gloo Operator to the `gloo-mesh` namespace. This operator deploys and manages your Istio installation. For more information, see the Helm reference. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh (OSS APIs) automatically creates for your license in the `--set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys` flag instead.
   ```sh
   helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
     --version 0.4.2 \
     -n gloo-mesh \
     --create-namespace \
     --kube-context ${CLUSTER_CONTEXT} \
     --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
   ```
3. Verify that the operator pod is running.
   ```sh
   kubectl get pods -n gloo-mesh --context ${CLUSTER_CONTEXT} -l app.kubernetes.io/name=gloo-operator
   ```
   Example output:
   ```
   gloo-operator-78d58d5c7b-lzbr5   1/1     Running   0     48s
   ```
4. Create a ServiceMeshController custom resource to configure an Istio installation. For a description of each configurable field, see the ServiceMeshController reference. If you need to set more advanced Istio configuration, you can also create a gloo-extensions-config configmap.
   ```sh
   kubectl apply -n gloo-mesh --context ${CLUSTER_CONTEXT} -f - <<EOF
   apiVersion: operator.gloo.solo.io/v1
   kind: ServiceMeshController
   metadata:
     name: managed-istio
     labels:
       app.kubernetes.io/name: managed-istio
   spec:
     cluster: ${CLUSTER_NAME}
     network: ${CLUSTER_NAME}
     dataplaneMode: Ambient # required for multicluster setups
     installNamespace: istio-system
     version: ${ISTIO_VERSION}
   EOF
   ```
   Note that the operator detects your cloud provider and cluster platform, and configures the necessary settings for that platform for you. For example, if you create an ambient mesh in an OpenShift cluster, no OpenShift-specific settings are required in the ServiceMeshController, because the operator automatically applies the appropriate settings for OpenShift and your specific cloud provider. If you set the `installNamespace` to a namespace other than `gloo-system`, `gloo-mesh`, or `istio-system`, you must include the `--set manager.env.WATCH_NAMESPACES=<namespace>` setting.
5. Verify that the ServiceMeshController is ready. In the `Status` section of the output, make sure that all statuses are `True`, and that the phase is `SUCCEEDED`.
   ```sh
   kubectl describe servicemeshcontroller -n gloo-mesh managed-istio --context ${CLUSTER_CONTEXT}
   ```
   Example output:
   ```
   ...
   Status:
     Conditions:
       Last Transition Time:  2024-12-27T20:47:01Z
       Message:               Manifests initialized
       Observed Generation:   1
       Reason:                ManifestsInitialized
       Status:                True
       Type:                  Initialized
       Last Transition Time:  2024-12-27T20:47:02Z
       Message:               CRDs installed
       Observed Generation:   1
       Reason:                CRDInstalled
       Status:                True
       Type:                  CRDInstalled
       Last Transition Time:  2024-12-27T20:47:02Z
       Message:               Deployment succeeded
       Observed Generation:   1
       Reason:                DeploymentSucceeded
       Status:                True
       Type:                  ControlPlaneDeployed
       Last Transition Time:  2024-12-27T20:47:02Z
       Message:               Deployment succeeded
       Observed Generation:   1
       Reason:                DeploymentSucceeded
       Status:                True
       Type:                  CNIDeployed
       Last Transition Time:  2024-12-27T20:47:02Z
       Message:               Deployment succeeded
       Observed Generation:   1
       Reason:                DeploymentSucceeded
       Status:                True
       Type:                  WebhookDeployed
       Last Transition Time:  2024-12-27T20:47:02Z
       Message:               All conditions are met
       Observed Generation:   1
       Reason:                SystemReady
       Status:                True
       Type:                  Ready
     Phase:  SUCCEEDED
   Events:   <none>
   ```
6. Verify that the istiod control plane, Istio CNI, and ztunnel pods are running.
   ```sh
   kubectl get pods -n istio-system --context ${CLUSTER_CONTEXT}
   ```
   Example output:
   ```
   NAME                          READY   STATUS    RESTARTS   AGE
   istio-cni-node-6s5nk          1/1     Running   0          2m53s
   istio-cni-node-blpz4          1/1     Running   0          2m53s
   istiod-gloo-bb86b959f-msrg7   1/1     Running   0          2m45s
   istiod-gloo-bb86b959f-w29cm   1/1     Running   0          3m
   ztunnel-mx8nw                 1/1     Running   0          2m52s
   ztunnel-w8r6c                 1/1     Running   0          2m52s
   ```
7. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the `Gateway` resource, and more.
   ```sh
   kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml --context ${CLUSTER_CONTEXT}
   ```
8. Create an east-west gateway in the `istio-eastwest` namespace. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. You can use either LoadBalancer or NodePort addresses for cross-cluster traffic.
9. Verify that the east-west gateway is successfully deployed.
   ```sh
   kubectl get pods -n istio-eastwest --context $CLUSTER_CONTEXT
   ```
10. For each cluster that you want to include in the multicluster mesh setup, repeat these steps to install the Gloo Operator, service mesh components, and east-west gateway in each cluster. Remember to change the cluster name and context variables each time you repeat the steps.
    ```sh
    export CLUSTER_NAME=<cluster-name>
    export CLUSTER_CONTEXT=<cluster-context>
    ```
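If your version of the Solo distribution of `istioctl` includes the `istioctl multicluster expose` helper, the east-west gateway creation step above can be sketched as follows. Confirm the exact command and flags for your version before you run it.

```shell
# Sketch: create an east-west gateway in the istio-eastwest namespace.
# Assumes the `istioctl multicluster expose` helper from the Solo
# distribution of istioctl; flags may vary by version.
kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}
```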
Link clusters
Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters.
1. Optional: Before you link clusters, you can check the individual readiness of each cluster for linking by running the `istioctl multicluster check` command for each cluster.
   ```sh
   istioctl multicluster check --context $CLUSTER_CONTEXT
   ```
   Before continuing to the next step, make sure that the following checks pass or fail as expected:
   - ✅ The license in use by istiod supports multicluster.
   - ✅ All istiod, ztunnel, and east-west gateway pods are healthy.
   - ✅ The east-west gateway is programmed.
   - ❌ Each remote peer gateway has a `gloo.solo.io/PeeringSucceeded` status of `True`. Note that this check fails if you run the command prior to linking the clusters.
2. Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters. The steps vary based on whether you have access to the kubeconfig files for each cluster.
3. For each cluster, verify that peer linking was successful by running the `istioctl multicluster check` command.
   ```sh
   istioctl multicluster check --context $CLUSTER_CONTEXT
   ```
   In this example output for cluster1, the license is valid, all Istio pods are healthy, and the east-west gateway is programmed. The remote peer gateways for linking to cluster2 and cluster3 both have a `gloo.solo.io/PeeringSucceeded` status of `True`.
   ```
   ✅ License Check: license is valid for multicluster
   ✅ Pod Check (istiod): all pods healthy
   ✅ Pod Check (ztunnel): all pods healthy
   ✅ Pod Check (eastwest gateway): all pods healthy
   ✅ Gateway Check: all eastwest gateways programmed
   ✅ Peers Check: all clusters connected
   ```
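When you have kubeconfig access to every cluster, the linking step can be sketched with the `istioctl multicluster link` helper from the Solo distribution of `istioctl`. The context variable names are placeholders, and you should confirm the flags for your version.

```shell
# Sketch: link all clusters in one command by passing each kubeconfig
# context. Context variable names are placeholders for your environment.
istioctl multicluster link --namespace istio-eastwest \
  --contexts=${CLUSTER1_CONTEXT},${CLUSTER2_CONTEXT},${CLUSTER3_CONTEXT}
```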
Option 2: Automatically link clusters (beta)
In each cluster, use the Gloo Operator to create the service mesh components, and create an east-west gateway so that traffic requests can be routed cross-cluster. Then, use the Gloo management plane to automate multicluster linking, which enables cross-cluster service discovery.
Review the following considerations:
- Automated multicluster peering is a beta feature. For more information, see Solo feature maturity.
- This feature requires an Enterprise level license for Gloo Mesh.
- Automated peering requires Istio to be installed in the same cluster that the Gloo management plane is deployed to.
Enable automatic peering of clusters
Upgrade Gloo Mesh in your multicluster setup to enable the ConfigDistribution feature flag and install the enterprise CRDs, which are required for Gloo Mesh to automate peering and distribute gateways between clusters.
These steps assume you already installed Gloo Mesh, and show you how to upgrade your Helm install values. If you have not yet installed Gloo Mesh, follow the steps in Set up multicluster management.
1. Upgrade your `gloo-platform-crds` Helm release in the management cluster to include the following settings.
   ```sh
   helm get values gloo-platform-crds -n gloo-mesh -o yaml --kube-context ${MGMT_CONTEXT} > mgmt-crds.yaml
   helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
     --kube-context ${MGMT_CONTEXT} \
     --namespace gloo-mesh \
     -f mgmt-crds.yaml \
     --set featureGates.ConfigDistribution=true \
     --set installEnterpriseCrds=true
   ```
2. Upgrade your `gloo-platform` Helm release in the management cluster to include the following settings.
   ```sh
   helm get values gloo-platform -n gloo-mesh -o yaml --kube-context ${MGMT_CONTEXT} > mgmt-plane.yaml
   helm upgrade gloo-platform gloo-platform/gloo-platform \
     --kube-context ${MGMT_CONTEXT} \
     --namespace gloo-mesh \
     -f mgmt-plane.yaml \
     --set featureGates.ConfigDistribution=true
   ```
3. Upgrade your `gloo-platform-crds` Helm release in each workload cluster to include the following settings. Repeat this step for each workload cluster.
   ```sh
   helm get values gloo-platform-crds -n gloo-mesh -o yaml --kube-context ${CLUSTER_CONTEXT} > crds.yaml
   helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
     --kube-context ${CLUSTER_CONTEXT} \
     --namespace gloo-mesh \
     -f crds.yaml \
     --set installEnterpriseCrds=true
   ```
Create a shared root of trust
Create a shared root of trust for each cluster in the multicluster setup, including the management cluster. This can be achieved by providing a root certificate signed by a PKI provider, or a custom root certificate created for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
Deploy mesh components
1. Save the name and kubeconfig context of a cluster where you want to install Istio in the following environment variables. Each time you repeat the steps in this guide, you change these variables to the next workload cluster’s name and context. Note that to use automated multicluster peering, you must complete these steps to install a service mesh in the management cluster as well as your workload clusters.
   ```sh
   export CLUSTER_NAME=<cluster-name>
   export CLUSTER_CONTEXT=<cluster-context>
   ```
2. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the `Gateway` resource, and more.
   ```sh
   kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml --context ${CLUSTER_CONTEXT}
   ```
3. Install the Gloo Operator to the `gloo-mesh` namespace. This operator deploys and manages your Istio installation. For more information, see the Helm reference. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh (OSS APIs) automatically creates for your license in the `--set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys` flag instead.
   ```sh
   helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
     --version 0.4.2 \
     -n gloo-mesh \
     --create-namespace \
     --kube-context ${CLUSTER_CONTEXT} \
     --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
   ```
4. Verify that the operator pod is running.
   ```sh
   kubectl get pods -n gloo-mesh --context ${CLUSTER_CONTEXT} -l app.kubernetes.io/name=gloo-operator
   ```
   Example output:
   ```
   gloo-operator-78d58d5c7b-lzbr5   1/1     Running   0     48s
   ```
5. Apply the following configmap and ServiceMeshController for the Gloo Operator to enable multicluster peering and deploy a service mesh.
   ```sh
   kubectl apply -n gloo-mesh --context ${CLUSTER_CONTEXT} -f - <<EOF
   apiVersion: v1
   kind: ConfigMap
   metadata:
     name: gloo-extensions-config
     namespace: gloo-mesh
   data:
     beta: |
       serviceMeshController:
         multiClusterMode: Peering
     values.istiod: |
       env:
         PEERING_AUTOMATIC_LOCAL_GATEWAY: "true"
   ---
   apiVersion: operator.gloo.solo.io/v1
   kind: ServiceMeshController
   metadata:
     name: managed-istio
     labels:
       app.kubernetes.io/name: managed-istio
   spec:
     cluster: ${CLUSTER_NAME}
     network: ${CLUSTER_NAME}
     dataplaneMode: Ambient # required for multicluster setups
     installNamespace: istio-system
     version: ${ISTIO_VERSION}
   EOF
   ```
   Note that the operator detects your cloud provider and cluster platform, and configures the necessary settings for that platform for you. For example, if you create an ambient mesh in an OpenShift cluster, no OpenShift-specific settings are required in the ServiceMeshController, because the operator automatically applies the appropriate settings for OpenShift and your specific cloud provider. If you set the `installNamespace` to a namespace other than `gloo-system`, `gloo-mesh`, or `istio-system`, you must include the `--set manager.env.WATCH_NAMESPACES=<namespace>` setting.
6. Verify that the istiod control plane, Istio CNI, and ztunnel pods are running.
   ```sh
   kubectl get pods -n istio-system --context ${CLUSTER_CONTEXT}
   ```
   Example output:
   ```
   NAME                          READY   STATUS    RESTARTS   AGE
   istio-cni-node-6s5nk          1/1     Running   0          2m53s
   istio-cni-node-blpz4          1/1     Running   0          2m53s
   istiod-gloo-bb86b959f-msrg7   1/1     Running   0          2m45s
   istiod-gloo-bb86b959f-w29cm   1/1     Running   0          3m
   ztunnel-mx8nw                 1/1     Running   0          2m52s
   ztunnel-w8r6c                 1/1     Running   0          2m52s
   ```
7. Create an east-west gateway in the `istio-eastwest` namespace. In each cluster, the east-west gateway is implemented as a ztunnel that facilitates traffic between services across clusters in your multicluster mesh. You can use either LoadBalancer or NodePort addresses for cross-cluster traffic.
8. For each cluster that you want to include in the multicluster mesh setup, including the management cluster, repeat these steps to install the Gloo Operator, service mesh components, and east-west gateway in each cluster. Remember to change the cluster name and context variables each time you repeat the steps.
   ```sh
   export CLUSTER_NAME=<cluster-name>
   export CLUSTER_CONTEXT=<cluster-context>
   ```
Review remote peer gateways
After you complete the steps for each cluster, verify that Gloo Mesh successfully created and distributed the remote peering gateways. These gateways use the istio-remote GatewayClass, which allows the istiod control plane in each cluster to discover the east-west gateway addresses of other clusters. Gloo Mesh generates one istio-remote resource in the management cluster for each connected workload cluster, and then distributes the gateway to each cluster respectively.
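Based on the example output in this section, a distributed remote peer gateway generally has the following shape. This sketch is illustrative only: Gloo Mesh generates these resources for you, and the address and cluster name shown here are placeholders.

```yaml
# Illustrative istio-remote gateway, as auto-generated by Gloo Mesh.
# The name suffix and address are placeholders for your environment.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: istio-remote-peer-cluster1
  namespace: istio-eastwest
spec:
  gatewayClassName: istio-remote
  addresses:
  - type: Hostname
    value: a5082fe9522834b8192a6513eb8c6b01-0987654321.us-east-1.elb.amazonaws.com
```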
1. Verify that an `istio-remote` gateway for each connected cluster is copied to the management cluster.
   ```sh
   kubectl get gateways -n istio-eastwest --context $MGMT_CONTEXT
   ```
   In this example output, the `istio-remote` gateways that were auto-generated for workload clusters cluster1 and cluster2 are copied to the management cluster, alongside the management cluster’s own `istio-remote` gateway and east-west gateway.
   ```
   NAMESPACE        NAME                         CLASS            ADDRESS                                                                    PROGRAMMED   AGE
   istio-eastwest   istio-eastwest               istio-eastwest   a7f6f1a2611fc4eb3864f8d688622fd4-1234567890.us-east-1.elb.amazonaws.com   True         6s
   istio-eastwest   istio-remote-peer-cluster1   istio-remote     a5082fe9522834b8192a6513eb8c6b01-0987654321.us-east-1.elb.amazonaws.com   True         4s
   istio-eastwest   istio-remote-peer-cluster2   istio-remote     aaad62dc3ffb142a1bfc13df7fe9665b-5678901234.us-east-1.elb.amazonaws.com   True         4s
   istio-eastwest   istio-remote-peer-mgmt       istio-remote     a7f6f1a2611fc4eb3864f8d688622fd4-1234567890.us-east-1.elb.amazonaws.com   True         4s
   ```
2. In each cluster, verify that all `istio-remote` gateways are successfully distributed to all workload clusters. This ensures that services in each workload cluster can now access the east-west gateways in other clusters of the multicluster mesh setup.
   ```sh
   kubectl get gateways -n istio-eastwest --context $CLUSTER_CONTEXT
   ```
3. In each cluster, verify that peer linking was successful by running the `istioctl multicluster check` command.
   ```sh
   istioctl multicluster check --context $CLUSTER_CONTEXT
   ```
   In this example output for cluster1, the license is valid, all Istio pods are healthy, and the east-west gateway is programmed. The remote peer gateways for linking to cluster2 and cluster3 both have a `gloo.solo.io/PeeringSucceeded` status of `True`.
   ```
   ✅ License Check: license is valid for multicluster
   ✅ Pod Check (istiod): all pods healthy
   ✅ Pod Check (ztunnel): all pods healthy
   ✅ Pod Check (eastwest gateway): all pods healthy
   ✅ Gateway Check: all eastwest gateways programmed
   ✅ Peers Check: all clusters connected
   ```
Next
Add apps to the sidecar mesh. For multicluster setups, this includes making specific services available across your linked cluster setup.
ServiceMeshController reference
Review the commonly configured fields for the ServiceMeshController custom resource. For the full list of available options, see the ServiceMeshController API reference.
| Setting | Description | Supported values | Default |
|---|---|---|---|
| `cluster` | The name of the cluster to install Istio into. This value is required to set the trust domain field in multicluster environments. | | |
| `dataplaneMode` | The dataplane mode to use. | `Ambient` or `Sidecar` | `Ambient` |
| `distribution` | Optional: A specific distribution of the Istio version, such as the standard or FIPS image distribution. | `Standard` or `FIPS` | `Standard` |
| `image.repository` | Optional: An Istio image repository, such as to use an image from a private registry. | | The Solo distribution of Istio repo for the Istio minor version. |
| `image.secrets` | Optional: A list of secrets to use for pulling images from a container registry. The secret list must be of type `kubernetes.io/dockerconfigjson` and exist in the `installNamespace` that you install Istio in. | | |
| `installNamespace` | Namespace to install the service mesh components into. If you set the `installNamespace` to a namespace other than `gloo-system`, `gloo-mesh`, or `istio-system`, you must include the `--set manager.env.WATCH_NAMESPACES=<namespace>` setting. | | `istio-system` |
| `network` | The default network where workload endpoints exist. A network is a logical grouping of workloads that exist in the same Layer 3 domain. Workloads in the same network can directly communicate with each other, while workloads in different networks require an east-west gateway to establish connectivity. This value is required in multi-network environments. For example, an easy way to identify the network of in-mesh workloads in one cluster is to simply use the cluster’s name for the network, such as `cluster1`. | | |
| `onConflict` | Optional: How to resolve conflicting Istio configuration, if the configuration in this ServiceMeshController conflicts with existing Istio resources in the cluster. | `Force` or `Abort` | `Abort` |
| `repository.secrets` | Optional: A list of secrets to use for pulling manifests from an artifact registry. The secret list must be of type `kubernetes.io/dockerconfigjson` and can exist in any namespace, such as the same namespace that you create the ServiceMeshController in. | | |
| `repository.insecureSkipVerify` | Optional: If set to `true`, the repository server’s certificate chain and hostname are not verified. | `true` or `false` | |
| `scalingProfile` | Optional: The istiod control plane scaling settings to use. In large environments, set to `Large`. | `Default`, `Demo`, or `Large` | `Default` |
| `trafficCaptureMode` | Optional: Traffic capture mode to use. | `Auto` or `InitContainer` | `Auto` |
| `trustDomain` | The trustDomain for Istio workloads. | | If `cluster` is set, defaults to that value. If `cluster` is unset, defaults to `cluster.local`. |
| `version` | The Istio patch version to install. For more information, see Supported Solo distributions of Istio. | Any Istio version supported for your Gloo version | |
Advanced settings configuration
You can set advanced Istio configuration by creating a configmap. For example, you might need to specify settings for istiod such as discovery selectors, pod and service annotations, affinities, tolerations, or node selectors.
Note that you must name the configmap gloo-extensions-config and create it in the same namespace as the gloo-operator, such as gloo-mesh or gloo-system.
The following gloo-extensions-config example configmap sets all possible fields for demonstration purposes. Note that some guides in this documentation set define Helm extension settings, such as `data.values.istiod`, for specific use cases. These settings are used only when necessary, and are not recommended for other general use cases.
apiVersion: v1
kind: ConfigMap
metadata:
name: gloo-extensions-config
namespace: gloo-mesh
data:
stable: |
serviceMeshController:
istiod:
discoverySelectors:
- matchLabels:
foo: bar
topology:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: foo
operator: In
values:
- bar
topologyKey: foo.io/bar
weight: 80
nodeSelector:
foo: bar
tolerations:
- key: t1
operator: Equal
value: v1
deployment:
podAnnotations:
foo: bar
serviceAnnotations:
foo: bar
beta: |
serviceMeshController:
cni:
confDir: /foo/bar
binDir: /foo/bar