Multicluster
Deploy Gloo Mesh across multiple clusters to gain valuable insights into your Istio service meshes.
Gloo Mesh deploys alongside your Istio installations in single or multicluster environments, and gives you instant insights into your Istio environment through a custom dashboard.
You can follow this guide to quickly get started with Gloo Mesh. To learn more about the benefits and architecture, see About. To customize your installation with Helm instead, see the advanced installation guide.
Before you begin
Install the following command-line (CLI) tools.
- `kubectl`, the Kubernetes command line tool. Download the `kubectl` version that is within one minor version of the Kubernetes clusters you plan to use.
- `meshctl`, the Solo command line tool.
  ```sh
  curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.8.0-rc1 sh -
  export PATH=$HOME/.gloo-mesh/bin:$PATH
  ```
- `helm`, the Kubernetes package manager.
Create or use at least two existing Kubernetes clusters. The instructions in this guide assume one management cluster and two workload clusters.
- The cluster name must be lowercase and alphanumeric, with no special characters except hyphens (-), and must begin with a letter, not a number. A quick check sketch follows the exports below.
Set the names of your clusters as environment variables. If your clusters have different names than the following examples, specify those names instead.
```sh
export MGMT_CLUSTER=mgmt
export REMOTE_CLUSTER1=cluster1
export REMOTE_CLUSTER2=cluster2
```
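If you want to sanity-check these names against the rule above, here is a minimal bash sketch; the regex is an illustrative approximation of the constraint, not an official validator.

```sh
# Flag names that are not lowercase alphanumeric-with-hyphens starting with a letter.
for name in "$MGMT_CLUSTER" "$REMOTE_CLUSTER1" "$REMOTE_CLUSTER2"; do
  [[ "$name" =~ ^[a-z][a-z0-9-]*$ ]] || echo "invalid cluster name: $name"
done
```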
- Save the kubeconfig contexts for your clusters. Run `kubectl config get-contexts`, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SANs are not FQDN-compliant. You can rename a context by running `kubectl config rename-context "<oldcontext>" <newcontext>`.
  ```sh
  export MGMT_CONTEXT=<management-cluster-context>
  export REMOTE_CONTEXT1=<remote-cluster1-context>
  export REMOTE_CONTEXT2=<remote-cluster2-context>
  ```
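Optionally, scan the contexts for underscores before continuing; this sketch just automates the note above.

```sh
# Underscores in a context name break the SAN/FQDN requirement described above.
for ctx in "$MGMT_CONTEXT" "$REMOTE_CONTEXT1" "$REMOTE_CONTEXT2"; do
  case "$ctx" in
    *_*) echo "rename this context before continuing: $ctx" ;;
  esac
done
```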
Save your Enterprise-level license key for Gloo Mesh as an environment variable. This license is required for multicluster mesh functionality. Contact your account representative to obtain a valid license.

```sh
export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
```
Install Gloo Mesh
In a multicluster setup, you deploy the Gloo management plane into a dedicated management cluster, and the Gloo data plane into one or more workload clusters that run Istio service meshes.
Management plane
Deploy the Gloo management plane into a dedicated management cluster.
Install Gloo Mesh in your management cluster. This command uses a basic profile to create a `gloo-mesh` namespace and install the Gloo management plane components, such as the management server and Prometheus server, in your management cluster. For more information, check out the CLI install profiles.

```sh
meshctl install --profiles gloo-mesh-mgmt \
  --kubecontext $MGMT_CONTEXT \
  --set common.cluster=$MGMT_CLUSTER \
  --set licensing.glooMeshCoreLicenseKey=$GLOO_MESH_LICENSE_KEY
```

This guide assumes one dedicated management cluster and two Istio workload clusters that you register with the management cluster. If you plan to register the management cluster so that it can also function as a workload cluster, include `--set telemetryGateway.enabled=true` in this command.

Verify that the management plane pods have a status of `Running`.

```sh
kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
```
Example output:
```
NAME                                      READY   STATUS    RESTARTS   AGE
gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
gloo-telemetry-collector-agent-mf5rw      1/1     Running   0          30s
gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
```
Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
```sh
export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
echo $TELEMETRY_GATEWAY_ADDRESS
```
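If `$TELEMETRY_GATEWAY_ADDRESS` comes back empty, the cloud load balancer might not be provisioned yet. A small wait-loop sketch, using the same service and namespace as the commands above:

```sh
# Poll until the load balancer reports an ingress hostname or IP.
until kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT \
    -o jsonpath="{.status.loadBalancer.ingress[0]}" | grep -q .; do
  echo "waiting for an external address..."
  sleep 5
done
```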
Data plane
Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named `gloo-mesh-agent` runs the Gloo agent in each workload cluster.
Register both workload clusters with the management server. These commands use a basic profile to create a `gloo-mesh` namespace and install the Gloo data plane components, such as the Gloo agent. For more information, check out the CLI install profiles.

```sh
meshctl cluster register $REMOTE_CLUSTER1 \
  --kubecontext $MGMT_CONTEXT \
  --profiles gloo-mesh-agent \
  --remote-context $REMOTE_CONTEXT1 \
  --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS

meshctl cluster register $REMOTE_CLUSTER2 \
  --kubecontext $MGMT_CONTEXT \
  --profiles gloo-mesh-agent \
  --remote-context $REMOTE_CONTEXT2 \
  --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
```
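As a quick confirmation, you can list the registered clusters in the management cluster; this sketch assumes the default `gloo-mesh` namespace and the `KubernetesCluster` resource that Gloo uses to represent workload clusters.

```sh
# Both workload clusters are expected to appear in the output.
kubectl get kubernetesclusters -n gloo-mesh --context $MGMT_CONTEXT
```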
Verify that the Gloo data plane components in each workload cluster are healthy. If not, try debugging the agent.
```sh
kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT1
kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT2
```
Example output:
```
NAME                                   READY   STATUS    RESTARTS   AGE
gloo-mesh-agent-8ffc775c4-tk2z5        2/2     Running   0          90s
gloo-telemetry-collector-agent-g8p7x   1/1     Running   0          90s
gloo-telemetry-collector-agent-mp2wd   1/1     Running   0          90s
```
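If an agent pod is not healthy, a reasonable first debugging step is to check its logs for connection errors; the `gloo-mesh-agent` deployment name comes from the registration step above.

```sh
# Look for errors connecting to the management server or telemetry gateway.
kubectl logs -n gloo-mesh deploy/gloo-mesh-agent --context $REMOTE_CONTEXT1
```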
Verify that your Gloo Mesh setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:
- Your Gloo product licenses are valid and current.
- The Gloo CRDs are installed at the correct version.
- The management plane pods in the management cluster are running and healthy.
- The agents in the workload clusters are successfully identified by the management server.
```sh
meshctl check --kubecontext $MGMT_CONTEXT
```
Example output:
```
🟢 License status

 INFO  gloo-mesh enterprise license expiration is 25 Aug 24 10:38 CDT

🟢 CRD version check

🟢 Gloo deployment status

Namespace | Name                           | Ready | Status
gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
gloo-mesh | prometheus-server              | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 2/2   | Healthy

🟢 Mgmt server connectivity to workload agents

Cluster  | Registered | Connected Pod
cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6

Connected Pod                                    | Clusters
gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2
```
Deploy Istio
Use the Gloo Operator to deploy and link service meshes in each workload cluster.
The following guide uses the Solo.io multicluster peering functionality to link clusters together and set up routing between them. This feature requires your mesh to be installed with the Solo distribution of Istio, and an Enterprise-level license for Gloo Mesh. Contact your account representative to obtain a valid license.

In addition, you must install the Istio ambient components into your cluster to successfully create east-west gateways and establish multicluster peering, even if you plan to use a sidecar mesh. However, sidecar mesh setups continue to use sidecar injection for your workloads; your workloads are not added to an ambient mesh.
Get the Solo distribution of Istio binary and install `istioctl`, which you use for multicluster linking and gateway commands.

Get the operating system, architecture, and Istio repository details, and save them in environment variables.

```sh
OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
echo $OS
echo $ARCH

ISTIO_VERSION=1.25.2
ISTIO_IMAGE=${ISTIO_VERSION}-solo
REPO_KEY=e038d180f90a
```
Download the Solo distribution of Istio binary and install `istioctl`.

```sh
mkdir -p ~/.istioctl/bin
curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
chmod +x ~/.istioctl/bin/istioctl
export PATH=${HOME}/.istioctl/bin:${PATH}
```
Verify that the `istioctl` client runs the Solo distribution of Istio that you want to install.

```sh
istioctl version --remote=false
```
Example output:
```
client version: 1.25.2-solo
```
Create a shared root of trust for the workload clusters. These example commands use the Istio CA to generate a self-signed root certificate and key, and use them to sign the workload certificates. For more information, see the Plug in CA Certificates guide in the community Istio documentation.
```sh
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=${ISTIO_VERSION} sh -
cd istio-${ISTIO_VERSION}
mkdir -p certs
pushd certs
make -f ../tools/certs/Makefile.selfsigned.mk root-ca

function create_cacerts_secret() {
  context=${1:?context}
  cluster=${2:?cluster}
  make -f ../tools/certs/Makefile.selfsigned.mk ${cluster}-cacerts
  kubectl --context=${context} create ns istio-system || true
  kubectl --context=${context} create secret generic cacerts -n istio-system \
    --from-file=${cluster}/ca-cert.pem \
    --from-file=${cluster}/ca-key.pem \
    --from-file=${cluster}/root-cert.pem \
    --from-file=${cluster}/cert-chain.pem
}

create_cacerts_secret ${REMOTE_CONTEXT1} ${REMOTE_CLUSTER1}
create_cacerts_secret ${REMOTE_CONTEXT2} ${REMOTE_CLUSTER2}
```
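Cross-cluster mTLS works only if both clusters chain to the same root, so it can be worth confirming that the two `cacerts` secrets embed an identical root certificate. A minimal sketch, assuming you are still in the `certs` directory and have `openssl` installed (on macOS, swap `sha256sum` for `shasum -a 256`):

```sh
# Inspect the shared root certificate's subject and expiry.
openssl x509 -in root-cert.pem -noout -subject -enddate

# The hash of root-cert.pem must match across both clusters.
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  kubectl get secret cacerts -n istio-system --context ${context} \
    -o jsonpath='{.data.root-cert\.pem}' | base64 -d | sha256sum
done
```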
Apply the CRDs for the Kubernetes Gateway API to each cluster. These CRDs are required to create components such as waypoint proxies for L7 traffic policies, gateways with the `Gateway` resource, and more.

```sh
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml --context ${context}
done
```
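To confirm that the CRDs landed, you can list them by their API group.

```sh
# Expect CRDs such as gateways.gateway.networking.k8s.io and httproutes.gateway.networking.k8s.io.
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  kubectl get crds --context ${context} | grep gateway.networking.k8s.io
done
```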
Install the Gloo Operator in the `gloo-mesh` namespace of each cluster. This operator deploys and manages your Istio installations.

```sh
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
    --version 0.2.3 \
    -n gloo-mesh \
    --create-namespace \
    --kube-context ${context} \
    --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
done
```
Verify that the operator pods are running.
```sh
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  kubectl get pods -n gloo-mesh --context ${context} | grep operator
done
```
Apply the following ServiceMeshController resource for the Gloo Operator to create an Istio installation.
```sh
function apply_smc() {
  context=${1:?context}
  cluster=${2:?cluster}
  kubectl apply -n gloo-mesh --context ${context} -f - <<EOF
apiVersion: operator.gloo.solo.io/v1
kind: ServiceMeshController
metadata:
  name: managed-istio
  labels:
    app.kubernetes.io/name: managed-istio
spec:
  cluster: ${cluster}
  network: ${cluster}
  dataplaneMode: Ambient # required for multicluster setups
  installNamespace: istio-system
  version: ${ISTIO_VERSION}
EOF
}

apply_smc ${REMOTE_CONTEXT1} ${REMOTE_CLUSTER1}
apply_smc ${REMOTE_CONTEXT2} ${REMOTE_CLUSTER2}
```
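While the operator reconciles, you can watch the ServiceMeshController status; a sketch that reuses the `managed-istio` name from the manifest above:

```sh
# The status section reports the progress of the managed Istio installation.
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  kubectl describe servicemeshcontroller managed-istio -n gloo-mesh --context ${context}
done
```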
Note that the operator detects your cloud provider and cluster platform, and configures the necessary settings for that platform for you. For example, if you create an ambient mesh in an OpenShift cluster, no OpenShift-specific settings are required in the ServiceMeshController, because the operator automatically sets the appropriate settings for OpenShift and your specific cloud provider.

If you set the `installNamespace` to a namespace other than `gloo-system`, `gloo-mesh`, or `istio-system`, you must include the `--set manager.env.WATCH_NAMESPACES=<namespace>` setting.

Verify that the components of the Istio control plane and data plane are successfully installed. Because the ztunnel and the CNI are deployed as daemon sets, the number of ztunnel pods and the number of CNI pods each equal the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.
```sh
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  kubectl get pods -n istio-system --context ${context}
done
```
Example output for one cluster:
```
NAME                          READY   STATUS    RESTARTS   AGE
istio-cni-node-6s5nk          1/1     Running   0          2m53s
istio-cni-node-blpz4          1/1     Running   0          2m53s
istiod-gloo-bb86b959f-msrg7   1/1     Running   0          2m45s
istiod-gloo-bb86b959f-w29cm   1/1     Running   0          3m
ztunnel-mx8nw                 1/1     Running   0          2m52s
ztunnel-w8r6c                 1/1     Running   0          2m52s
```
Create an east-west gateway in the `istio-eastwest` namespace of each cluster to facilitate traffic between services in each cluster in your multicluster mesh.

```sh
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  kubectl create namespace istio-eastwest --context ${context}
  istioctl multicluster expose --namespace istio-eastwest --context ${context}
done
```
Link clusters to enable cross-cluster service discovery and allow traffic to be routed through east-west gateways across clusters. In each cluster, Gateway resources are created that use the `istio-remote` GatewayClass, which allows the gateways to connect to other clusters by using the clusters' contexts.

```sh
istioctl multicluster link --namespace istio-eastwest --contexts=${REMOTE_CONTEXT1},${REMOTE_CONTEXT2}
```
Verify that east-west and remote peering gateways are successfully created in each cluster.
```sh
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  kubectl get gateways -A --context ${context}
done
```
Example output:
```
NAMESPACE        NAME             CLASS            ADDRESS                                                                    PROGRAMMED   AGE
istio-eastwest   istio-eastwest   istio-eastwest   a7e3c1cb7895b4feaa69bc82ae276e0a-1234567890.us-east-1.elb.amazonaws.com    True         55s
istio-eastwest   cluster1         istio-remote     a7e3c1cb7895b4feaa69bc82ae276e0a-1234567890.us-east-1.elb.amazonaws.com    True         54s

NAMESPACE        NAME             CLASS            ADDRESS                                                                    PROGRAMMED   AGE
istio-eastwest   istio-eastwest   istio-eastwest   abfb3c4472a93442b9cb08f11d0d95cb-0987654321.us-east-1.elb.amazonaws.com    True         52s
istio-eastwest   cluster2         istio-remote     abfb3c4472a93442b9cb08f11d0d95cb-0987654321.us-east-1.elb.amazonaws.com    True         52s
```
Deploy a sample app
To analyze your service mesh with Gloo Mesh, be sure to include your services in the mesh.
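In an ambient mesh, inclusion is typically a namespace label. The following minimal sketch deploys the upstream Istio `httpbin` sample into cluster1 and enrolls it in the mesh; the app choice is illustrative, and the service-scope label for cross-cluster reachability is an assumption to verify against your Gloo Mesh version's docs.

```sh
# Create a namespace and enroll it in the ambient mesh.
kubectl create namespace httpbin --context ${REMOTE_CONTEXT1}
kubectl label namespace httpbin istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT1}

# Deploy the standard Istio httpbin sample.
kubectl apply -n httpbin --context ${REMOTE_CONTEXT1} \
  -f https://raw.githubusercontent.com/istio/istio/release-1.25/samples/httpbin/httpbin.yaml

# Optionally make the service reachable from the other cluster through the
# east-west gateways (assumption: solo.io/service-scope is the label your
# Gloo Mesh version uses for multicluster service scope).
kubectl label service httpbin -n httpbin solo.io/service-scope=global --context ${REMOTE_CONTEXT1}
```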
Optional: Expose apps with an ingress gateway
You can optionally deploy an ingress gateway to send requests to sample apps from outside the multicluster service mesh. To review your options, such as deploying Gloo Gateway as an ingress gateway, see the ingress gateway guide for ambient or sidecar meshes.
Explore the UI
Use the Gloo UI to evaluate the health and efficiency of your service mesh. You can review the analysis and insights for your service mesh, such as recommendations to harden your Istio environment and steps to implement them in your environment.
Launch the dashboard
Open the Gloo UI. The Gloo UI is served from the `gloo-mesh-ui` service on port 8090. You can connect by using the `meshctl` or `kubectl` CLIs, as shown in the sketch below.
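Either of the following works in a default installation; `meshctl dashboard` port-forwards and opens the UI for you, while the `kubectl` variant is the manual equivalent.

```sh
# Option 1: let meshctl handle the port-forward and open your browser.
meshctl dashboard --kubecontext $MGMT_CONTEXT

# Option 2: port-forward manually, then open http://localhost:8090.
kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090 --context $MGMT_CONTEXT
```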
Review your Dashboard for an at-a-glance overview of your Gloo Mesh environment. Environment insights, health, status, inventories, security, and more are summarized in the following cards:
- Analysis and Insights: Gloo Mesh recommendations for how to improve your Istio setups.
- Gloo and Istio health: A status check of the Gloo Mesh and Istio installations in each cluster.
- Certificates Expiry: Validity timelines for your root and intermediate Istio certificates.
- Cluster Services: Inventory of services across all clusters in your Gloo Mesh setup, and whether those services are in a service mesh or not.
- Istio FIPS: FIPS compliance checks for the `istiod` control planes and Istio data plane workloads.
- Zero Trust: Number of service mesh workloads that receive only mutual TLS (mTLS)-encrypted traffic, and number of external services that are accessed from the mesh.
Figure: Gloo UI dashboard
Check insights
Review the insights for your environment. Gloo Mesh comes with an insights engine that automatically analyzes your Istio setups for health issues. These issues are displayed in the UI along with recommendations to harden your Istio setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment.
From the Dashboard, click on any of the insights cards to open the Insights page, or go to the Home > Insights page directly.
On the Insights page, you can view recommendations to harden your Istio setup, and steps to implement them in your environment. Gloo Mesh analyzes your setup, and returns individual insights that contain information about errors and warnings in your environment, best practices you can use to improve your configuration and security, and more.
Figure: Insights page

Select the insight that you want to resolve. The details modal shows more data about the insight, such as the time when it was last observed in your environment and, if applicable, the extended settings or configuration that the insight applies to.
Figure: Example insight

Click the Target YAML tab to see the resource file that the insight references, and click the View Resolution Steps tab to see guidance, such as steps for fixing warnings and errors in your resource configuration or recommendations for improving your security and setup.
Next steps
Now that you have Gloo Mesh and Istio up and running, check out some of the following resources to learn more about Gloo Mesh and expand your service mesh capabilities.
Istio:
- Find out more about the hardened Istio `n-4` version support built into Solo distributions of Istio.
- Check out the Istio docs to configure and deploy Istio routing resources.
- Monitor and observe your Istio environment with Gloo Mesh’s built-in telemetry tools.
- When it's time to upgrade Istio: for ambient installations, see Upgrade Gloo-managed ambient meshes or Upgrade ambient service meshes with Helm.
Gloo Mesh:
- Customize your Gloo Mesh installation with a Helm-based setup.
Help and support:
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community Slack.
- Try out one of the Gloo workshops.
Cleanup
If you no longer need this quick-start Gloo Mesh environment, you can follow the steps in the uninstall guide.