Gloo Mesh Core deploys alongside your Istio installations in single or multicluster environments, and gives you instant insights into your Istio environment through a custom dashboard.

You can follow this guide to customize settings for an advanced Gloo Mesh Core installation. To learn more about the benefits and architecture, see About.

Before you begin

  1. Install the following command-line (CLI) tools.

    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
    • meshctl, the Solo command line tool.
        curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.7.0-beta1 sh -
      export PATH=$HOME/.gloo-mesh/bin:$PATH
        
    • helm, the Kubernetes package manager.
  2. Set your Gloo Mesh Core license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license’s validity, you can run meshctl license check --key $(echo ${GLOO_MESH_CORE_LICENSE_KEY} | base64 -w0).

      export GLOO_MESH_CORE_LICENSE_KEY=<license_key>
      
  3. Set the Gloo Mesh Core version. This example uses the latest version. You can find other versions in the Changelog documentation. Append -fips for a FIPS-compliant image, such as 2.7.0-beta1-fips. Do not include v before the version number.

      export GLOO_VERSION=2.7.0-beta1
      
  4. Create or use an existing Kubernetes cluster for a single-cluster setup, or for a multicluster setup, at least two existing Kubernetes clusters. The instructions in this guide assume one management cluster and two workload clusters.

    • The cluster name must be lowercase and alphanumeric, can include hyphens (-) as the only special character, and must begin with a letter (not a number).
  5. Production installations: Review Best practices for production to prepare your optional security measures. For example, before you begin your Gloo installation, you can provide your own certificates to secure the management server and agent connection, and set up secure access to the Gloo UI.

Single cluster

Install all Gloo Mesh Core components in the same cluster as your Istio service mesh.

  1. Save the name of your cluster as an environment variable.

      export CLUSTER_NAME=<cluster_name>
      
  2. Add and update the Helm repository for Gloo.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
      
  3. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --set installEnterpriseCrds=false
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a single-cluster Gloo Mesh Core installation.

      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-core-single-cluster.yaml > gloo-single.yaml
    open gloo-single.yaml
      
  5. Decide how you want to secure the relay connection between the Gloo management server and agent. In test and POC environments, you can use Gloo self-signed certificates to secure the connection. If you plan to use Gloo Mesh Core in production, it is recommended to bring your own certificates instead. For more information, see Setup options.

  6. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following.

    Field | Description
    glooAgent.resources.limits | Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
    glooMgmtServer.resources.limits | Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    glooMgmtServer.safeMode and glooMgmtServer.safeStartWindow | Configure how you want the Gloo management server to handle translation after a Redis restart. For available options, see Redis safe mode options.
    glooMgmtServer.serviceOverrides.metadata.annotations | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    glooUi.auth | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    prometheus.enabled | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Best practices for collecting metrics in production.
    redis | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
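
    For example, a minimal sketch of additions to gloo-single.yaml that expands the resource limit settings from this table into YAML might look like the following. The limits are the illustrative values from the table; size them for your own workloads.

      # Illustrative additions to gloo-single.yaml; adjust the limits to your environment.
      glooAgent:
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
      glooMgmtServer:
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
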
  7. Use the customizations in your Helm values file to install the Gloo Mesh Core components in your cluster.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values gloo-single.yaml \
        --set common.cluster=$CLUSTER_NAME \
        --set licensing.glooMeshCoreLicenseKey=$GLOO_MESH_CORE_LICENSE_KEY
      
  8. Verify that your Gloo Mesh Core setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds while it verifies that:

    • Your Gloo product license is valid and current.
    • The Gloo CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The Gloo agent is running and connected to the management server.
      meshctl check
      

    Example output:

      🟢 License status
    
    INFO  gloo-mesh-core enterprise license expiration is 25 Aug 24 10:38 CDT
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster | Registered | Connected Pod                                   
    test    | true       | gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv | 1  
      
  9. If you have not installed Istio yet, see the guides for installing Istio in sidecar or ambient mode.

Multicluster

In a multicluster setup, you deploy the Gloo management plane into a dedicated management cluster, and the Gloo data plane into one or more workload clusters that run Istio service meshes.

Management plane

Deploy the Gloo management plane into a dedicated management cluster.

  1. Save the name and kubeconfig context for your management cluster in environment variables.

      export MGMT_CLUSTER=<management-cluster-name>
    export MGMT_CONTEXT=<management-cluster-context>
      
  2. Add and update the Helm repository for Gloo.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
      
  3. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --set installEnterpriseCrds=false \
       --kube-context $MGMT_CONTEXT
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo management plane installation.

      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-core-mgmt.yaml > mgmt-plane.yaml
    open mgmt-plane.yaml
      
  5. Decide how you want to secure the relay connection between the Gloo management server and agents. In test and POC environments, you can use self-signed certificates to secure the connection. If you plan to use Gloo Mesh Core in production, it is recommended to bring your own certificates instead. For more information, see Setup options.

  6. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following.

    Field | Description
    glooMgmtServer.resources.limits | Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    glooMgmtServer.safeMode and glooMgmtServer.safeStartWindow | Configure how you want the Gloo management server to handle translation after a Redis restart. For available options, see Redis safe mode options.
    glooMgmtServer.serviceOverrides.metadata.annotations | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    glooUi.auth | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    prometheus.enabled | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Best practices for collecting metrics in production.
    redis | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
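
    For example, a sketch of additions to mgmt-plane.yaml that combines several of these settings might look like the following. The load balancer annotation is only a placeholder for whatever your cloud provider requires, and the safeMode value is an example; review Redis safe mode options before you change it.

      # Illustrative additions to mgmt-plane.yaml; adjust each setting to your environment.
      glooMgmtServer:
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
        # Example only: review Redis safe mode options before enabling safe mode.
        safeMode: true
        serviceOverrides:
          metadata:
            annotations:
              # Placeholder annotation; replace with the load balancer annotations your provider needs.
              service.beta.kubernetes.io/aws-load-balancer-type: "external"
      # Keep the default Prometheus instance enabled, or set to false to bring your own.
      prometheus:
        enabled: true
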
  7. Use the customizations in your Helm values file to install the Gloo management plane components in your management cluster.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
       --kube-context $MGMT_CONTEXT \
       -n gloo-mesh \
       --version $GLOO_VERSION \
       --values mgmt-plane.yaml \
       --set common.cluster=$MGMT_CLUSTER \
       --set licensing.glooMeshCoreLicenseKey=$GLOO_MESH_CORE_LICENSE_KEY
      
  8. Verify that the management plane pods have a status of Running.

      kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
      

    Example output:

      NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
      
  9. Save the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server service. The Gloo agent (gloo-mesh-agent) in each workload cluster accesses this address over a secure connection.

      export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
    export MGMT_SERVER_NETWORKING_PORT=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
    export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
    echo $MGMT_SERVER_NETWORKING_ADDRESS
      

  10. Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.

      export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
    export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
      

Data plane

Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.

  1. For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you follow these steps to register another workload cluster.

      export REMOTE_CLUSTER=<workload_cluster_name>
    export REMOTE_CONTEXT=<workload_cluster_context>
      
  2. In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster’s local domain.

      kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
       name: ${REMOTE_CLUSTER}
       namespace: gloo-mesh
    spec:
       clusterDomain: cluster.local
    EOF
      
  3. In your workload cluster, apply the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
       --namespace=gloo-mesh \
       --create-namespace \
       --version=$GLOO_VERSION \
       --set installEnterpriseCrds=false \
       --kube-context $REMOTE_CONTEXT
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo data plane installation.

      curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/gloo-core-agent.yaml > data-plane.yaml
    open data-plane.yaml
      
  5. Depending on the method you chose to secure the relay connection, prepare the Helm values for the data plane installation. For more information, see Setup options.

  6. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following.

    Field | Description
    glooAgent.resources.limits | Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
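
    For example, a sketch of the corresponding addition to data-plane.yaml, using the illustrative limits from the table:

      # Illustrative addition to data-plane.yaml; size the limits for your workload cluster.
      glooAgent:
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
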
  7. Use the customizations in your Helm values file to install the Gloo data plane components in your workload cluster.

      helm upgrade -i gloo-platform gloo-platform/gloo-platform \
        --kube-context $REMOTE_CONTEXT \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values data-plane.yaml \
        --set common.cluster=$REMOTE_CLUSTER \
        --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
        --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
      
  8. Verify that the Gloo data plane component pods are running. If not, try debugging the agent.

      meshctl check --kubecontext $REMOTE_CONTEXT
      

    Example output:

      🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
      
  9. Repeat steps 1 - 8 to register each workload cluster with Gloo.

  10. Verify that your Gloo Mesh Core setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds while it verifies that:

    • Your Gloo product licenses are valid and current.
    • The Gloo CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The agents in the workload clusters are successfully identified by the management server.
      meshctl check --kubecontext $MGMT_CONTEXT
      

    Example output:

      🟢 License status
    
    INFO  gloo-mesh-core enterprise license expiration is 25 Aug 24 10:38 CDT
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
    Connected Pod                                    | Clusters
    gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2  
      
  11. If you have not installed Istio in each workload cluster yet, see the guides for installing Istio in sidecar or ambient mode.

Next steps

Now that you have Gloo Mesh Core and Istio up and running, check out some of the following resources to learn more about Gloo Mesh Core and expand your service mesh capabilities.

Istio:

  • If you have not installed Istio yet, see the guides for installing Istio in sidecar or ambient mode in each cluster.

Gloo Mesh Core:

  • Explore insights to review and improve your setup’s health and security posture.
  • When it’s time to upgrade Gloo Mesh Core, see the upgrade guide.

Help and support: