Gloo Mesh Core deploys alongside your Istio installations in single or multicluster environments, and gives you instant insights into your Istio environment through a custom dashboard. You can follow this guide to customize settings for an advanced Gloo Mesh Core installation. To learn more about the benefits and architecture, see About.

Before you begin

  1. If you have not already, install Helm, the Kubernetes package manager.

  2. Set your Gloo license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.
      export GLOO_MESH_CORE_LICENSE_KEY=<GLOO_MESH_CORE_LICENSE_KEY>
      

Install Gloo Mesh Core

Deploy the Gloo Mesh Core components into one cluster that runs Istio, or across a multicluster Istio environment.

Single cluster

Install all Gloo Mesh Core components in the same cluster as your Istio service mesh.

  1. Create or use an existing Kubernetes cluster, and save the name of the cluster as an environment variable. Note: The cluster name must be lowercase and alphanumeric, can include hyphens (-) but no other special characters, and must begin with a letter (not a number).

      export CLUSTER_NAME=<cluster_name>
      
  2. Set the Gloo version. This example uses the latest version. You can find other versions in the Changelog documentation. Append -fips for a FIPS-compliant image, such as 2.5.0-beta2-fips. Do not include v before the version number.
      export GLOO_VERSION=2.5.0-beta2
      
  3. Add and update the Helm repository for Gloo Mesh Core.

      helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
      helm repo update
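
    To verify that the repository was added and to see the available chart versions, you can optionally run a standard Helm search:

      helm search repo gloo-platform --versions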
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following example file as a basis. These settings enable all components that are required for a single-cluster Gloo Mesh Core installation.

    cat >gloo-mesh-core-single.yaml <<EOF
    common:
      cluster: ${CLUSTER_NAME}
    glooAgent:
      enabled: true
      relay:
        serverAddress: gloo-mesh-mgmt-server.gloo-mesh:9900
      runAsSidecar: true
    glooAnalyzer:
      enabled: true
      runAsSidecar: true
    glooMgmtServer:
      createGlobalWorkspace: true
      enabled: true
      insights:
        enabled: true
      policyApis:
        enabled: false
      registerCluster: true
    glooInsightsEngine:
      enabled: true
      runAsSidecar: false
    glooUi:
      enabled: true
    licensing:
      glooMeshCoreLicenseKey: ${GLOO_MESH_CORE_LICENSE_KEY}
    prometheus:
      enabled: true
    redis:
      deployment:
        enabled: true
    telemetryCollector:
      enabled: true
      config:
        exporters:
          otlp:
            endpoint: gloo-telemetry-gateway.gloo-mesh:4317
    telemetryCollectorCustomization:
      pipelines: 
        logs/analyzer: 
          enabled: true
    telemetryGateway:
      enabled: false
    EOF
      
  5. Edit the file to provide your own details, such as the following optional settings. To see all the fields that you can set for the Helm chart, run helm show values gloo-platform/gloo-platform --version $GLOO_VERSION > all-values.yaml. You can also see these fields in the Helm values documentation. A short sketch that combines some of these settings follows the list.

    • glooMgmtServer.resources.limits: Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    • glooMgmtServer.serviceOverrides.metadata.annotations: Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    • glooUi.auth: Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    • prometheus.enabled: Enable or disable the default Prometheus instance. Prometheus is required to scrape metrics from specific workloads so that you can visualize workload communication in the Gloo UI Graph.
    • redis: Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
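
    For example, the following snippet is a minimal sketch of how some of these optional settings might look in your values file. The resource limits and the AWS load balancer annotation are illustrative values, not requirements, so replace them with values that fit your environment.

      glooMgmtServer:
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
        serviceOverrides:
          metadata:
            annotations:
              # Illustrative AWS annotation; use the annotations that your load balancer requires.
              service.beta.kubernetes.io/aws-load-balancer-type: external
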
  6. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
      --namespace=gloo-mesh \
      --create-namespace \
      --version=$GLOO_VERSION \
      --set installEnterpriseCrds=false
      
  7. Use the customizations in your Helm values file to install the Gloo Mesh Core components in your cluster.

      helm upgrade -i gloo-mesh-core gloo-platform/gloo-platform \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values gloo-mesh-core-single.yaml
      
  8. Verify that Gloo Mesh Core installed correctly. This check might take a few seconds to verify that:

    • Your Gloo Mesh Core product license is valid and current.
    • The Gloo CRDs installed at the correct version.
    • The Gloo pods are running and healthy.
    • The Gloo agent is running and connected to the management server.
      meshctl check
      

Multicluster

In a multicluster setup, you deploy the Gloo Mesh Core control plane into a dedicated management cluster, and the Gloo data plane into one or more workload clusters that run Istio service meshes.

Install the control plane

Deploy the Gloo Mesh Core control plane into a dedicated management cluster.

  1. Create or use at least two existing Kubernetes clusters. The instructions in this guide assume one management cluster and two workload clusters. Note: Cluster names must be lowercase and alphanumeric, can include hyphens (-) but no other special characters, and must begin with a letter (not a number).

  2. Set the names of your clusters from your infrastructure provider. If your clusters have different names, specify those names instead.

      export MGMT_CLUSTER=mgmt
      export REMOTE_CLUSTER1=cluster1
      export REMOTE_CLUSTER2=cluster2
      
  3. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SANs are not FQDN-compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
      export MGMT_CONTEXT=<management-cluster-context>
      export REMOTE_CONTEXT1=<remote-cluster1-context>
      export REMOTE_CONTEXT2=<remote-cluster2-context>
      
  4. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following example file as a basis. These settings enable all components that are required for a Gloo Mesh Core control plane installation.

    cat >control-plane.yaml <<EOF
    common:
      cluster: ${MGMT_CLUSTER}
    glooMgmtServer:
      enabled: true
      policyApis:
        enabled: false
    glooInsightsEngine:
      enabled: true
      runAsSidecar: false
    glooUi:
      enabled: true
    licensing:
      glooMeshCoreLicenseKey: ${GLOO_MESH_CORE_LICENSE_KEY}
    prometheus:
      enabled: true
    telemetryCollector:
      enabled: true
    telemetryGateway:
      enabled: true
    telemetryGatewayCustomization:
      pipelines:
        logs/redis_stream:
          enabled: true
    EOF
      
  5. Edit the file to provide your own details, such as the following optional settings. To see all the fields that you can set for the Helm chart, run helm show values gloo-platform/gloo-platform --version $GLOO_VERSION > all-values.yaml. You can also see these fields in the Helm values documentation. A short sketch follows the list.

    • glooMgmtServer.relay: Secure the relay connection between the Gloo management server and agents. By default, Gloo Mesh Core generates self-signed certificates and keys for the root CA and uses these credentials to derive the intermediate CA and the server and client TLS certificates. This setup is not recommended for production. Instead, use your preferred PKI provider to generate and store your credentials, and to have more control over the certificate management process.
    • glooMgmtServer.resources.limits: Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    • glooMgmtServer.serviceOverrides.metadata.annotations: Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    • glooUi.auth: Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    • prometheus.enabled: Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information, see the Prometheus customization options.
    • redis: Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
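
    For example, the following snippet is a minimal sketch of disabling the built-in Prometheus and Redis so that you can bring your own instances instead. This sketch assumes that you separately set up your own Prometheus server and backing Redis database as described in the linked guides.

      prometheus:
        enabled: false
      redis:
        deployment:
          enabled: false
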
  6. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
      --namespace=gloo-mesh \
      --create-namespace \
      --kube-context $MGMT_CONTEXT \
      --version=$GLOO_VERSION \
      --set installEnterpriseCrds=false
      
  7. Use the customizations in your Helm values file to install the Gloo Mesh Core control plane components in your management cluster.

      helm upgrade -i gloo-mesh-core gloo-platform/gloo-platform \
        --kube-context $MGMT_CONTEXT \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values control-plane.yaml
      

    Note: For quick testing, you can create an insecure connection between the management server and workload agents by including the --set common.insecure=true and --set glooMgmtServer.insecure=true flags.

  8. Verify that the control plane pods have a status of Running.

      kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
      
  9. Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service as the TELEMETRY_GATEWAY_ADDRESS environment variable. The OTel collector agents in each workload cluster send metrics to this address.
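
    For example, assuming that your cloud provider exposes the gloo-telemetry-gateway service as a LoadBalancer with an external IP address and that the gateway accepts OTLP traffic on the default port 4317, you might save the address as follows. If your provider assigns a hostname instead of an IP address, use .hostname instead of .ip in the jsonpath expression. TELEMETRY_GATEWAY_IP is a scratch variable for this example.

      export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:4317
      echo $TELEMETRY_GATEWAY_ADDRESS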

  10. Save the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server service as the MGMT_SERVER_NETWORKING_ADDRESS environment variable. The Gloo agent in each workload cluster connects to this address over a secure connection.
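
    For example, assuming that your cloud provider exposes the gloo-mesh-mgmt-server service as a LoadBalancer with an external IP address and that the server accepts relay traffic on the default port 9900 (the same port as in the single-cluster relay address earlier in this guide), you might save the address as follows. MGMT_SERVER_NETWORKING_IP is a scratch variable for this example.

      export MGMT_SERVER_NETWORKING_IP=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_IP}:9900
      echo $MGMT_SERVER_NETWORKING_ADDRESS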

Install the data plane

Register each workload cluster with the Gloo Mesh Core control plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.

  1. For the workload cluster that you want to register with Gloo Mesh Core, set the following environment variables. You update these variables each time you follow these steps to register another workload cluster.

      export REMOTE_CLUSTER=$REMOTE_CLUSTER1
      export REMOTE_CONTEXT=$REMOTE_CONTEXT1
      
  2. In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster’s local domain.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
       name: ${REMOTE_CLUSTER}
       namespace: gloo-mesh
    spec:
       clusterDomain: cluster.local
    EOF
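
    To confirm that the resource was created, you can list the KubernetesCluster resources in the management cluster:

      kubectl get kubernetesclusters -n gloo-mesh --context $MGMT_CONTEXT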
      
  3. Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following example file as a basis. These settings enable all components that are required to install the Gloo data plane components in the workload cluster.

    cat >data-plane.yaml <<EOF
    common:
      cluster: ${REMOTE_CLUSTER}
    glooAgent:
      enabled: true
      relay:
        serverAddress: ${MGMT_SERVER_NETWORKING_ADDRESS}
    glooAnalyzer:
      enabled: true
      runAsSidecar: true
    telemetryCollector:
      enabled: true
      config:
        exporters:
          otlp:
            endpoint: ${TELEMETRY_GATEWAY_ADDRESS}
    telemetryCollectorCustomization:
      pipelines: 
        logs/analyzer: 
          enabled: true
    EOF
      
  4. Edit the file to provide your own details, such as the following optional settings. To see all the fields that you can set for the Helm chart, run helm show values gloo-platform/gloo-platform --version $GLOO_VERSION > all-values.yaml. You can also see these fields in the Helm values documentation. A short sketch follows the list.

    • glooAgent.relay: Provide the certificate and secret details that correspond to your management server relay settings.
    • glooAgent.resources.limits: Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
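
    For example, the following snippet is a minimal sketch of setting those resource limits in your data plane values file. The values shown are illustrative.

      glooAgent:
        resources:
          limits:
            cpu: 500m
            memory: 512Mi
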
  5. Install the Gloo CRDs.

      helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
      --namespace=gloo-mesh \
      --create-namespace \
      --kube-context $REMOTE_CONTEXT \
      --version=$GLOO_VERSION \
      --set installEnterpriseCrds=false
      
  6. Use the customizations in your Helm values file to install the Gloo Mesh Core data plane components in your workload cluster.

      helm upgrade -i gloo-mesh-core gloo-platform/gloo-platform \
        --kube-context $REMOTE_CONTEXT \
        -n gloo-mesh \
        --version $GLOO_VERSION \
        --values data-plane.yaml
      
  7. Verify that the Gloo data plane components are healthy. If not, try debugging the agent.

      meshctl check --kubecontext $REMOTE_CONTEXT
      
  8. Repeat steps 1 - 7 to register each workload cluster with Gloo.

  9. Verify that your multicluster Gloo Mesh Core setup installed correctly. Note that this check might take a few seconds to verify that:

    • Your Gloo Mesh Core product license is valid and current.
    • The Gloo CRDs installed at the correct version.
    • The Gloo pods are running and healthy.
    • The Gloo agent is running and connected to the management server.
      meshctl check --kubecontext $MGMT_CONTEXT
      

Next steps

Now that you have Gloo Mesh Core up and running, check out the following guides to expand your service mesh capabilities.

  • Explore Gloo Mesh Core insights that can help you improve your Istio configuration and security posture.
  • Find out more about hardened Istio n-4 version support built into Solo Istio images.
  • Use Gloo Mesh Core to quickly install and manage your service mesh for you with service mesh lifecycle management.
  • Monitor and observe your Istio environment with Gloo Mesh Core’s built-in telemetry tools.
  • When it’s time to upgrade Gloo Mesh Core, see the upgrade guide.