Gloo Mesh deploys alongside your Istio installations in single or multicluster environments, and gives you instant insights into your Istio environment through a custom dashboard. In a multicluster setup, you deploy the Gloo management plane into a dedicated management cluster, and the Gloo data plane into one or more workload clusters that run Istio service meshes.

You can follow this guide to quickly install Gloo Mesh (OSS APIs) with default values, or customize settings for an advanced Gloo Mesh (OSS APIs) installation. To learn more about the benefits and architecture of Gloo Mesh (OSS APIs), see About.

Install Gloo Mesh (OSS APIs) with meshctl

Use default values provided by the meshctl CLI installation profiles to quickly deploy Gloo Mesh (OSS APIs) alongside your service mesh.

Before you begin

  1. Install the following command-line (CLI) tools.

    • helm, the Kubernetes package manager.
    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
    • meshctl, the Solo command line tool.
        curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.11.0 sh -
      export PATH=$HOME/.gloo-mesh/bin:$PATH
        
  2. Create or use at least two existing Kubernetes clusters. The instructions in this guide assume one management cluster and two workload clusters.

    • The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number) to follow the Kubernetes DNS label standard.
  3. Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.

      export MGMT_CLUSTER=mgmt
    export REMOTE_CLUSTER1=cluster1
    export REMOTE_CLUSTER2=cluster2
      
  4. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SAN are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
      export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster1-context>
    export REMOTE_CONTEXT2=<remote-cluster2-context>
      
  5. Set your Premium or Enterprise Solo license key for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license’s validity, you can run meshctl license check --key $(echo ${GLOO_MESH_LICENSE_KEY} | base64 -w0).

      export GLOO_MESH_LICENSE_KEY=<license_key>
      

Management plane

Deploy the Gloo management plane into a dedicated management cluster.

  1. Install Gloo Mesh (OSS APIs) in your management cluster. This command uses a basic profile to create a gloo-mesh namespace and install the Gloo management plane components, such as the management server and Prometheus server, in your management cluster. For more information, check out the CLI install profiles.

      meshctl install --profiles gloo-mesh-mgmt \
      --kubecontext $MGMT_CONTEXT \
      --set common.cluster=$MGMT_CLUSTER \
      --set licensing.glooMeshCoreLicenseKey=$GLOO_MESH_LICENSE_KEY
      
  2. Verify that the management plane pods have a status of Running.

      kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
      

    Example output:

      NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
    gloo-telemetry-collector-agent-mf5rw      1/1     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
      
  3. Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.

      export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
    export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
      

Data plane

Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.

  1. Register both workload clusters with the management server. These commands use a basic profile to create a gloo-mesh namespace and install the Gloo data plane components, such as the Gloo agent. For more information, check out the CLI install profiles.

      meshctl cluster register $REMOTE_CLUSTER1 \
      --kubecontext $MGMT_CONTEXT \
      --profiles gloo-mesh-agent \
      --remote-context $REMOTE_CONTEXT1 \
      --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
    
    meshctl cluster register $REMOTE_CLUSTER2 \
      --kubecontext $MGMT_CONTEXT \
      --profiles gloo-mesh-agent \
      --remote-context $REMOTE_CONTEXT2 \
      --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
      
  2. Verify that the Gloo data plane components in each workload cluster are healthy. If not, try debugging the agent.

      kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT1
    kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT2
      

    Example output:

      NAME                                   READY   STATUS    RESTARTS   AGE
    gloo-mesh-agent-8ffc775c4-tk2z5        2/2     Running   0          90s
    gloo-telemetry-collector-agent-g8p7x   1/1     Running   0          90s
    gloo-telemetry-collector-agent-mp2wd   1/1     Running   0          90s
      
  3. Verify that your Gloo Mesh (OSS APIs) setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:

    • Your Gloo product licenses are valid and current.
    • The Gloo CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The agents in the workload clusters are successfully identified by the management server.
    • Any Istio installation versions are compatible with the installed Gloo version.
      meshctl check --kubecontext $MGMT_CONTEXT
      

    Example output:

      🟢 License status
    
    INFO  gloo-mesh enterprise license expiration is 25 Aug 24 10:38 CDT
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 2/2   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2
    
    🟢 Istio compatibility check
    
    All Istio versions found are compatible
      

Install Gloo Mesh (OSS APIs) with Helm

Use Helm to customize an advanced Gloo Mesh (OSS APIs) installation.

Before you begin

  1. Install the following command-line (CLI) tools.

    • helm, the Kubernetes package manager.
    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
    • meshctl, the Solo command line tool.
        curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.11.0 sh -
      export PATH=$HOME/.gloo-mesh/bin:$PATH
        
  2. Set your Premium or Enterprise Solo license key for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license’s validity, you can run meshctl license check --key $(echo ${GLOO_MESH_LICENSE_KEY} | base64 -w0).

      export GLOO_MESH_LICENSE_KEY=<license_key>
      
  3. Set the Gloo Mesh (OSS APIs) version. This example uses the latest version. You can find other versions in the Changelog documentation. Append -fips for a FIPS-compliant image, such as 2.11.0-fips. Do not include v before the version number.

      export GLOO_VERSION=2.11.0
      
  4. Create or use at least two existing Kubernetes clusters. The instructions in this guide assume one management cluster and two workload clusters.

    • The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number) to follow the Kubernetes DNS label standard.
  5. Production installations: Review Best practices for production to prepare your optional security measures. For example, before you begin your Gloo installation, you can provide your own certificates to secure the management server and agent connection, and set up secure access to the Gloo UI.

In a multicluster setup, you deploy the Gloo management plane into a dedicated management cluster, and the Gloo data plane into one or more workload clusters that run Istio service meshes.

Management plane

Deploy the Gloo management plane into a dedicated management cluster.

  1. Save the name and kubeconfig context for your management cluster in environment variables.

      export MGMT_CLUSTER=<management-cluster-name>
    export MGMT_CONTEXT=<management-cluster-context>
      
  2. Add and update the Helm repository for Gloo.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
      
  3. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --set installEnterpriseCrds=false \
       --kube-context $MGMT_CONTEXT
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo management plane installation.
      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-mesh-mgmt.yaml > mgmt-plane.yaml
    open mgmt-plane.yaml
      
  5. Decide how you want to secure the relay connection between the Gloo management server and agents. In test and POC environments, you can use self-signed certificates to secure the connection. If you plan to use Gloo Mesh (OSS APIs) in production, it is recommended to bring your own certificates instead. For more information, see Setup options.

  6. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.

    Field | Description
    glooMgmtServer.resources.limits | Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    glooMgmtServer.safeMode and glooMgmtServer.safeStartWindow | Configure how you want the Gloo management server to handle translation after a Redis restart. For available options, see Redis safe mode options.
    glooMgmtServer.serviceOverrides.metadata.annotations | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    glooUi.auth | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    prometheus.enabled | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Best practices for collecting metrics in production.
    redis | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
    glooMgmtServer.serviceType and telemetryGateway.service.type (OpenShift) | In some OpenShift setups, you might not use load balancer service types. You can set these two service types to ClusterIP, and expose them by using OpenShift routes after installation.
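
    For reference, the following snippet is a hypothetical excerpt of mgmt-plane.yaml that shows how several of these fields nest in the values file. The example values and the AWS annotation are placeholders; verify the exact field paths against the Helm chart reference documentation for your version.

      glooMgmtServer:
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
        safeMode: true
        serviceOverrides:
          metadata:
            annotations:
              # Placeholder AWS load balancer annotation; replace with your provider's settings.
              service.beta.kubernetes.io/aws-load-balancer-type: external
      prometheus:
        enabled: true
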
  7. Use the customizations in your Helm values file to install the Gloo management plane components in your management cluster.
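
    As a minimal sketch, the following command installs the gloo-platform chart from the Helm repository that you added earlier, with your values file. The chart name gloo-platform/gloo-platform is an assumption based on the CRD chart name, and the common.cluster and licensing.glooMeshCoreLicenseKey settings mirror the meshctl installation earlier in this guide; verify the flags against your chart version.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values mgmt-plane.yaml \
       --set common.cluster=$MGMT_CLUSTER \
       --set licensing.glooMeshCoreLicenseKey=$GLOO_MESH_LICENSE_KEY \
       --kube-context $MGMT_CONTEXT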

  8. Verify that the management plane pods have a status of Running.

      kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
      

    Example output:

      NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
    gloo-telemetry-collector-agent-mf5rw      1/1     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
      
  9. Save the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server service. The Gloo agent in each workload cluster accesses this address over a secure connection.
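
     For example, the following commands mirror the telemetry gateway lookup from the meshctl section. The RELAY_ADDRESS variable name is introduced here for use in later steps, and the commands assume that the relay port of the gloo-mesh-mgmt-server service is named grpc; verify the port name with kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server.

       export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
       export MGMT_SERVER_NETWORKING_PORT=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
       export RELAY_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
       echo $RELAY_ADDRESS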

  10. Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
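
      You can reuse the same lookup commands as in the meshctl section:

       export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
       export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
       export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
       echo $TELEMETRY_GATEWAY_ADDRESS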

Data plane

Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.

  1. For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you follow these steps to register another workload cluster.

      export REMOTE_CLUSTER=<workload_cluster_name>
    export REMOTE_CONTEXT=<workload_cluster_context>
      
  2. In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster’s local domain.

      kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
       name: ${REMOTE_CLUSTER}
       namespace: gloo-mesh
    spec:
       clusterDomain: cluster.local
    EOF
      
  3. In your workload cluster, apply the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --set installEnterpriseCrds=false \
       --kube-context $REMOTE_CONTEXT
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo data plane installation.
      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-mesh-agent.yaml > data-plane.yaml
    open data-plane.yaml
      
  5. Depending on the method you chose to secure the relay connection, prepare the Helm values for the data plane installation. For more information, see the Setup options.

  6. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.

    Field | Description
    glooAgent.resources.limits | Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
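
    For example, these limits might nest as follows in data-plane.yaml. This is a sketch; verify the field path against the Helm chart reference documentation.

      glooAgent:
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
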
  7. Use the customizations in your Helm values file to install the Gloo data plane components in your workload cluster.
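
    As a minimal sketch, the following command installs the same gloo-platform chart with your data plane values. The glooAgent.relay.serverAddress and telemetryCollector.config.exporters.otlp.endpoint settings are assumptions that point the agent at the relay and telemetry addresses you saved in the management plane steps; verify both field paths against your chart version.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values data-plane.yaml \
       --set common.cluster=$REMOTE_CLUSTER \
       --set glooAgent.relay.serverAddress=$RELAY_ADDRESS \
       --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS \
       --kube-context $REMOTE_CONTEXT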

  8. Verify that the Gloo data plane component pods are running. If not, try debugging the agent.

      kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
      

    Example output:

      NAME                                   READY   STATUS    RESTARTS   AGE
    gloo-mesh-agent-8ffc775c4-tk2z5        2/2     Running   0          90s
    gloo-telemetry-collector-agent-g8p7x   1/1     Running   0          90s
    gloo-telemetry-collector-agent-mp2wd   1/1     Running   0          90s
      
  9. Repeat steps 1 - 8 to register each workload cluster with Gloo.

  10. Verify that your Gloo Mesh (OSS APIs) setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:

    • Your Gloo product licenses are valid and current.
    • The Gloo CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The agents in the workload clusters are successfully identified by the management server.
    • Any Istio installation versions are compatible with the installed Gloo version.
      meshctl check --kubecontext $MGMT_CONTEXT
      

    Example output:

      🟢 License status
    
    INFO  gloo-mesh enterprise license expiration is 25 Aug 24 10:38 CDT
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 2/2   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2
    
    🟢 Istio compatibility check
    
    All Istio versions found are compatible
      
  11. If you have not installed Istio in each workload cluster yet, see the guides for installing Istio in sidecar or ambient mode.

Explore the UI

Now that the management plane is up and running, launch the Gloo UI to evaluate the health and efficiency of your service mesh. You can review the analysis and insights for your service mesh, such as recommendations to harden your Istio environment and steps to implement them.

Launch the dashboard

  1. Open the Gloo UI. The Gloo UI is served from the gloo-mesh-ui service on port 8090. You can connect by using the meshctl or kubectl CLIs.
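
    For example, the following commands show two common ways to connect. The meshctl dashboard command opens the UI in your browser; the kubectl port-forward sketch assumes the default gloo-mesh-ui deployment name, after which you can open http://localhost:8090.

      # Option 1: Open the UI with meshctl.
      meshctl dashboard --kubecontext $MGMT_CONTEXT

      # Option 2: Port-forward the UI deployment locally.
      kubectl port-forward -n gloo-mesh deployment/gloo-mesh-ui 8090 --context $MGMT_CONTEXT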

  2. Review your Dashboard for an at-a-glance overview of your Gloo Mesh (OSS APIs) environment. Environment insights, health, status, inventories, security, and more are summarized in the dashboard cards. For more information about all available features, see Explore the UI.


    Figure: Gloo UI dashboard

Check insights

Review the insights for your environment. Gloo Mesh (OSS APIs) comes with an insights engine that automatically analyzes your Istio setups for health issues. These issues are displayed in the UI along with recommendations to harden your Istio setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment.

  1. From the Dashboard, click on any of the insights cards to open the Insights page, or go to the Insights page directly.

  2. On the Insights page, you can view recommendations to harden your Istio setup, and steps to implement them in your environment. Gloo Mesh (OSS APIs) analyzes your setup, and returns individual insights that contain information about errors and warnings in your environment, best practices you can use to improve your configuration and security, and more.

    Figure: Insights page

  3. Select the insight that you want to resolve. The details modal shows more data about the insight, such as the time when it was last observed in your environment, and if applicable, the extended settings or configuration that the insight applies to.

    Figure: Example insight

  4. Click the Target YAML tab to see the resource file that the insight references. Then click the Resolution Steps tab for guidance, such as steps to fix warnings and errors in your resource configuration, or recommendations to improve your security and setup.

Next steps

Check out some of the following resources to learn more about Gloo Mesh (OSS APIs) and expand your service mesh capabilities.

Gloo Mesh (OSS APIs):

  • Explore the Gloo UI.
  • Review insights to improve your setup’s health and security posture.
  • When it’s time to upgrade Gloo Mesh (OSS APIs), see the upgrade guide.

Istio:

  • If you have not installed Istio yet, see the guides for installing an ambient mesh.

Help and support: