Gloo Mesh deploys alongside your Istio installations in single or multicluster environments, and gives you instant insights into your Istio environment through a custom dashboard.

You can follow this guide to quickly get started with Gloo Mesh. To learn more about the benefits and architecture, see About. To customize your installation with Helm instead, see the advanced installation guide.

Before you begin

  1. Install the following command-line interface (CLI) tools.

    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
    • meshctl, the Solo command line tool.
      curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.7.0 sh -
      export PATH=$HOME/.gloo-mesh/bin:$PATH
    • helm, the Kubernetes package manager.
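      For example, you can install helm with its official get-helm-3 script (one convenient option among Helm's documented install methods):
      curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash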
  2. Create or use at least two existing Kubernetes clusters. The instructions in this guide assume one management cluster and two workload clusters.

    • Cluster names must be lowercase alphanumeric, with no special characters except hyphens (-), and must begin with a letter, not a number.
  3. Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.

    export MGMT_CLUSTER=mgmt
    export REMOTE_CLUSTER1=cluster1
    export REMOTE_CLUSTER2=cluster2
  4. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SAN are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster1-context>
    export REMOTE_CONTEXT2=<remote-cluster2-context>
  5. Set your Gloo Mesh license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license’s validity, you can run meshctl license check --key $(echo ${GLOO_MESH_LICENSE_KEY} | base64 -w0).

    export GLOO_MESH_LICENSE_KEY=<license_key>

Install Gloo Mesh

In a multicluster setup, you deploy the Gloo management plane into a dedicated management cluster, and the Gloo data plane into one or more workload clusters that run Istio service meshes.

Management plane

Deploy the Gloo management plane into a dedicated management cluster.

  1. Install Gloo Mesh in your management cluster. This command uses a basic profile to create a gloo-mesh namespace and install the Gloo management plane components, such as the management server and Prometheus server, in your management cluster. For more information, check out the CLI install profiles.

    meshctl install --profiles gloo-core-mgmt \
      --kubecontext $MGMT_CONTEXT \
      --set common.cluster=$MGMT_CLUSTER \
      --set licensing.glooMeshCoreLicenseKey=$GLOO_MESH_LICENSE_KEY
  2. Verify that the management plane pods have a status of Running.

    kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT

    Example output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
  3. Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.

    export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
    export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS

Data plane

Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.

  1. Register both workload clusters with the management server. These commands use a basic profile to create a gloo-mesh namespace and install the Gloo data plane components, such as the Gloo agent. For more information, check out the CLI install profiles.

    meshctl cluster register $REMOTE_CLUSTER1 \
      --kubecontext $MGMT_CONTEXT \
      --profiles gloo-core-agent \
      --remote-context $REMOTE_CONTEXT1 \
      --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
    
    meshctl cluster register $REMOTE_CLUSTER2 \
      --kubecontext $MGMT_CONTEXT \
      --profiles gloo-core-agent \
      --remote-context $REMOTE_CONTEXT2 \
      --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
  2. Verify that the Gloo data plane components in each workload cluster are healthy. If not, try debugging the agent.

    meshctl check --kubecontext $REMOTE_CONTEXT1
    meshctl check --kubecontext $REMOTE_CONTEXT2

    Example output:

    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
  3. Verify that your Gloo Mesh setup is correctly installed. If not, try debugging the relay connection.

    meshctl check --kubecontext $MGMT_CONTEXT

    Note that this check might take a few seconds to verify that:

    • Your Gloo product licenses are valid and current.
    • The Gloo CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The agents in the workload clusters are successfully identified by the management server.

    Example output:

    🟢 License status
    
    INFO  gloo-mesh enterprise license expiration is 25 Aug 24 10:38 CDT
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2

Deploy Istio

Use Helm to deploy a service mesh in each workload cluster.

  1. Set environment variables for the Solo distribution of Istio that you want to install. You can find these values in the Ambient section of the Istio images built by Solo.io support article.

    # Solo distribution of Istio patch version
    # in the format 1.x.x, with no tags
    export ISTIO_VERSION=1.24.2
    # Repo key for the minor version of the Solo distribution of Istio
    # This is the 12-character hash at the end of the repo URL: 'us-docker.pkg.dev/gloo-mesh/istio-<repo-key>'
    export REPO_KEY=<repo_key>
    
    # Solo distribution of Istio patch version and tag,
    # image repo, Helm repo, and binary repo
    export ISTIO_IMAGE=${ISTIO_VERSION}-solo
    export REPO=us-docker.pkg.dev/gloo-mesh/istio-${REPO_KEY}
    export HELM_REPO=us-docker.pkg.dev/gloo-mesh/istio-helm-${REPO_KEY}
    export BINARY_REPO=https://console.cloud.google.com/storage/browser/istio-binaries-${REPO_KEY}/${ISTIO_IMAGE}
  2. Download the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.

    1. Navigate to the storage repository for the Solo distribution of Istio binaries.
      open ${BINARY_REPO}
    2. Download the tar.gz file for your system, such as istio-1.24.2-solo-osx-amd64.tar.gz.
    3. Extract the downloaded tar.gz file.
    4. Navigate to the package directory and add the istioctl client to your system’s PATH.
      cd istio-${ISTIO_IMAGE}
      export PATH=$PWD/bin:$PATH
    5. Verify that the istioctl client runs the Solo distribution of Istio that you want to install.
      istioctl version --remote=false
      Example output:
      client version: 1.24.2-solo
  3. Create a shared root of trust for the workload clusters. These example commands use the Istio CA to generate a self-signed root certificate and key, and use them to sign the workload certificates. For more information, see the Plug in CA Certificates guide in the community Istio documentation.

    # in directory 'istio-${ISTIO_IMAGE}'
    mkdir -p certs
    pushd certs
    make -f ../tools/certs/Makefile.selfsigned.mk root-ca
    
    function create_cacerts_secret() {
      context=${1:?context}
      cluster=${2:?cluster}
      make -f ../tools/certs/Makefile.selfsigned.mk ${cluster}-cacerts
      kubectl --context=${context} create ns istio-system || true
      kubectl --context=${context} create secret generic cacerts -n istio-system \
        --from-file=${cluster}/ca-cert.pem \
        --from-file=${cluster}/ca-key.pem \
        --from-file=${cluster}/root-cert.pem \
        --from-file=${cluster}/cert-chain.pem
    }
    
    create_cacerts_secret ${REMOTE_CONTEXT1} ${REMOTE_CLUSTER1}
    create_cacerts_secret ${REMOTE_CONTEXT2} ${REMOTE_CLUSTER2}
  4. Save the name and kubeconfig context of a cluster where you want to install Istio in the following environment variables, starting with cluster1. When you repeat the following steps later, you change these variables to cluster2’s name and context, so that you install a mesh in both cluster1 and cluster2.

    export CLUSTER_NAME=${REMOTE_CLUSTER1}
    export CLUSTER_CONTEXT=${REMOTE_CONTEXT1}
  5. If you use Google Kubernetes Engine (GKE) clusters, create the following ResourceQuota in the istio-system namespace. For more information about this requirement, see the community Istio documentation.

    kubectl --context ${CLUSTER_CONTEXT} -n istio-system apply -f - <<EOF
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: gcp-critical-pods
      namespace: istio-system
    spec:
      hard:
        pods: 1000
      scopeSelector:
        matchExpressions:
        - operator: In
          scopeName: PriorityClass
          values:
          - system-node-critical
    EOF
  6. Create releases for the Istio Helm charts, which install the following control and data plane components. A hedged sketch of these Helm commands follows the list.

    • base: CRDs and cluster roles required to install Istio
    • istiod: the istiod control plane
    • cni: the Istio CNI daemonset
    • ztunnel: the ztunnel daemonset
    • Google Kubernetes Engine (GKE) only: For each of these Helm charts, you must include --set global.platform=gke.
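
    The following commands are a hedged sketch of those four releases, based on the Helm repo and image variables that you set earlier and the main revision that the later verification and linking steps expect. Exact chart values vary by Istio version and mesh type, so confirm the full value sets against the Solo documentation before you run them.

    # Sketch only: confirm chart values for your Istio version.
    # On GKE, add --set global.platform=gke to each release.
    helm upgrade --install istio-base oci://${HELM_REPO}/base \
      --namespace istio-system --create-namespace \
      --version ${ISTIO_IMAGE} \
      --kube-context ${CLUSTER_CONTEXT} \
      --set defaultRevision=main

    helm upgrade --install istiod-main oci://${HELM_REPO}/istiod \
      --namespace istio-system \
      --version ${ISTIO_IMAGE} \
      --kube-context ${CLUSTER_CONTEXT} \
      --set global.hub=${REPO} \
      --set global.tag=${ISTIO_IMAGE} \
      --set global.network=${CLUSTER_NAME} \
      --set global.multiCluster.clusterName=${CLUSTER_NAME} \
      --set profile=ambient \
      --set revision=main

    helm upgrade --install istio-cni oci://${HELM_REPO}/cni \
      --namespace istio-system \
      --version ${ISTIO_IMAGE} \
      --kube-context ${CLUSTER_CONTEXT} \
      --set profile=ambient

    # The multiCluster.clusterName value is assumed from the Solo ztunnel chart
    helm upgrade --install ztunnel oci://${HELM_REPO}/ztunnel \
      --namespace istio-system \
      --version ${ISTIO_IMAGE} \
      --kube-context ${CLUSTER_CONTEXT} \
      --set hub=${REPO} \
      --set tag=${ISTIO_IMAGE} \
      --set multiCluster.clusterName=${CLUSTER_NAME}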
  7. Verify that the components of the Istio control and data plane are successfully installed. Because ztunnel and the CNI are deployed as daemonsets, the number of ztunnel pods and the number of CNI pods each equal the number of nodes in your cluster. Note that it might take a few seconds for the pods to become available.

    kubectl get pods -n istio-system --context ${CLUSTER_CONTEXT}

    Example output:

    NAME                           READY   STATUS    RESTARTS   AGE
    istio-cni-node-79m9c           1/1     Running   0          40s
    istio-cni-node-m7nhz           1/1     Running   0          40s
    istiod-main-579fdd9cdc-m22mf   1/1     Running   0          40s
    ztunnel-4vjgg                  1/1     Running   0          40s
    ztunnel-rbh5m                  1/1     Running   0          40s
  8. Label the istio-system namespace with the cluster’s network name, which you previously set to your cluster’s name in the global.network field of the istiod installation. The control plane uses this label internally to group pods that exist in the same L3 network.

    kubectl label namespace istio-system --context ${CLUSTER_CONTEXT} topology.istio.io/network=${CLUSTER_NAME}
  9. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml --context ${CLUSTER_CONTEXT}
  10. Create an east-west gateway in the istio-eastwest namespace to facilitate traffic between services in each cluster in your multicluster mesh.

    kubectl create namespace istio-eastwest --context ${CLUSTER_CONTEXT}
    istioctl multicluster expose --namespace istio-eastwest --context ${CLUSTER_CONTEXT}
    kubectl get pods -n istio-eastwest --context ${CLUSTER_CONTEXT}
  11. Repeat steps 4 through 10 to install the CRDs, control plane, and data plane in cluster2. Be sure to reset the $CLUSTER_NAME and $CLUSTER_CONTEXT environment variables to the values for cluster2, as shown in the following example.
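
    For example, before you repeat the steps for the second cluster:

    export CLUSTER_NAME=${REMOTE_CLUSTER2}
    export CLUSTER_CONTEXT=${REMOTE_CONTEXT2}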

  12. Link the clusters together, which enables cross-cluster service discovery and allows traffic to be routed through east-west gateways across clusters.

    1. Verify that your cluster contexts are both listed in your kubeconfig file.
      kubectl config get-contexts
      • If you have multiple kubeconfig files, you can generate a merged kubeconfig by running the following command and saving the output to a file.
        KUBECONFIG=<kubeconfig_file1>.yaml:<kubeconfig_file2>.yaml kubectl config view --flatten
    2. Using the cluster contexts, link the clusters bi-directionally. This command creates an istio-remote Gateway resource in each cluster that points to the other cluster. Note that these gateways are used for peering identification only. Traffic requests are routed through the east-west gateway that you created earlier.
      istioctl multicluster link --contexts=${REMOTE_CONTEXT1},${REMOTE_CONTEXT2} \
        --namespace istio-eastwest \
        --revision main
      Example output:
      Gateway istio-eastwest/istio-remote-peer-cluster1 applied to cluster "<cluster2_context>" pointing to cluster "<cluster1_context>" (network "cluster1")
      Gateway istio-eastwest/istio-remote-peer-cluster2 applied to cluster "<cluster1_context>" pointing to cluster "<cluster2_context>" (network "cluster2")

Deploy a sample app

To analyze your service mesh with Gloo Mesh, be sure to include your services in the mesh. In an ambient mesh, you include a service by labeling its namespace with istio.io/dataplane-mode=ambient, as shown in the following example.
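
As a minimal example, you can deploy the upstream Istio Bookinfo sample app in cluster1 and add it to the ambient mesh by labeling its namespace. The bookinfo namespace name and the sample manifest URL are illustrative; any app joins the mesh the same way.

    # Create an app namespace and label it so that its workloads join the ambient mesh
    kubectl create namespace bookinfo --context ${REMOTE_CONTEXT1}
    kubectl label namespace bookinfo istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT1}

    # Deploy the Bookinfo sample (manifest path assumed from the upstream Istio repo)
    kubectl apply -n bookinfo --context ${REMOTE_CONTEXT1} \
      -f https://raw.githubusercontent.com/istio/istio/release-1.24/samples/bookinfo/platform/kube/bookinfo.yaml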

Optional: Expose apps with an ingress gateway

You can optionally deploy an ingress gateway to send requests to sample apps from outside the multicluster service mesh. To review your options, such as deploying Gloo Gateway as an ingress gateway, see the ingress gateway guide for ambient or sidecar meshes.

Explore the UI

Use the Gloo UI to evaluate the health and efficiency of your service mesh. You can review the analysis and insights for your service mesh, such as recommendations to harden your Istio environment and steps to implement them.

Launch the dashboard

  1. Open the Gloo UI. The Gloo UI is served from the gloo-mesh-ui service on port 8090. You can connect by using the meshctl or kubectl CLIs.
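
    For example, you can port-forward the gloo-mesh-ui service and open http://localhost:8090 in your browser. The meshctl dashboard command also opens the UI for you.

    # Option 1: let meshctl port-forward and open the dashboard
    meshctl dashboard --kubecontext $MGMT_CONTEXT

    # Option 2: port-forward with kubectl, then open http://localhost:8090
    kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090 --context $MGMT_CONTEXT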

  2. Review your Dashboard for an at-a-glance overview of your Gloo Mesh environment. Environment insights, health, status, inventories, security, and more are summarized in the following cards:

    • Analysis and Insights: Gloo Mesh recommendations for how to improve your Istio setups.
    • Gloo and Istio health: A status check of the Gloo Mesh and Istio installations in each cluster.
    • Certificates Expiry: Validity timelines for your root and intermediate Istio certificates.
    • Cluster Services: Inventory of services across all clusters in your Gloo Mesh setup, and whether those services are in a service mesh or not.
    • Istio FIPS: FIPS compliance checks for the istiod control planes and Istio data plane workloads.
    • Zero Trust: Number of service mesh workloads that receive only mutual TLS (mTLS)-encrypted traffic, and number of external services that are accessed from the mesh.

Figure: Gloo UI dashboard

Check insights

Review the insights for your environment. Gloo Mesh comes with an insights engine that automatically analyzes your Istio setups for health issues. These issues are displayed in the UI along with recommendations to harden your Istio setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment.

  1. From the Dashboard, click on any of the insights cards to open the Insights page, or go to the Home > Insights page directly.

  2. On the Insights page, you can view recommendations to harden your Istio setup, and steps to implement them in your environment. Gloo Mesh analyzes your setup, and returns individual insights that contain information about errors and warnings in your environment, best practices you can use to improve your configuration and security, and more.

    Figure: Insights page
  3. Select the insight that you want to resolve. The details modal shows more data about the insight, such as the time when it was last observed in your environment, and if applicable, the extended settings or configuration that the insight applies to.

    Figure: Example insight
  4. Click the Target YAML tab to see the resource file that the insight references. Then, click the View Resolution Steps tab for guidance, such as steps to fix warnings and errors in your resource configuration, or recommendations to improve your security and setup.

Next steps

Now that you have Gloo Mesh and Istio up and running, check out some of the following resources to learn more about Gloo Mesh and expand your service mesh capabilities.

Istio:

  • For ambient installations, see Upgrade Gloo-managed ambient meshes or Upgrade ambient service meshes with Helm.

Gloo Mesh:

  • Customize your Gloo Mesh installation with a Helm-based setup.
  • Explore insights to review and improve your setup’s health and security posture.
  • When it’s time to upgrade Gloo Mesh, see the upgrade guide.

Cleanup

If you no longer need this quick-start Gloo Mesh environment, you can follow the steps in the uninstall guide.