Install Gloo Gateway in a multicluster setup

Use Helm to customize your setup of Gloo Gateway in multiple clusters.

In a multicluster setup, you install the Gloo Gateway management plane and gateway proxy in separate clusters.

To set up the Gloo Gateway management plane and gateway proxy in one cluster instead, see the Single cluster setup guide.

Before you begin

  1. Create or use existing Kubernetes or OpenShift clusters. For a multicluster setup, you need at least two clusters. One cluster is set up as the Gloo Gateway management plane where the management components are installed. The other cluster is registered as your data plane and runs your Kubernetes workloads and gateway proxy. You can optionally add more workload clusters to your setup. The instructions in this guide assume one management cluster and two workload clusters. Note: The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number).
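
    For example, you can sanity-check a planned cluster name against this rule with a quick shell test. This is a minimal sketch; CLUSTER_NAME is a placeholder for the name you plan to use.

    # Hypothetical name check: lowercase alphanumeric and hyphens, starting with a letter.
    CLUSTER_NAME=cluster1
    if [[ "$CLUSTER_NAME" =~ ^[a-z][a-z0-9-]*$ ]]; then echo "valid"; else echo "invalid"; fi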

  2. Save the names of your clusters from your infrastructure provider as environment variables. If your clusters have different names, specify those names instead.

    export MGMT_CLUSTER=mgmt
    export REMOTE_CLUSTER1=cluster1
    export REMOTE_CLUSTER2=cluster2
    
  3. Save the kubeconfig contexts for your clusters as environment variables. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column.
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster1-context>
    export REMOTE_CONTEXT2=<remote-cluster2-context>
    

    Note: Do not use cluster context names with underscores (_). The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SAN are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
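
    For example, if your provider generated a context name that contains underscores, you can rename it before you continue. The old context name below is hypothetical; substitute the name that kubectl config get-contexts shows for your cluster.

    kubectl config get-contexts
    kubectl config rename-context "gke_my-project_us-east1_cluster1" cluster1
    export REMOTE_CONTEXT1=cluster1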

  4. Set your Gloo Gateway license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

    export GLOO_GATEWAY_LICENSE_KEY=<gloo-gateway-license-key>
    

    To check your license's validity, you can run meshctl license check --key $(echo ${GLOO_GATEWAY_LICENSE_KEY} | base64 -w0).

  5. Set the Gloo Gateway version as an environment variable. The latest version is used as an example. You can find other versions in the Changelog documentation. Append -fips for a FIPS-compliant image, such as 2.5.3-fips. Do not include v before the version number.

    export GLOO_VERSION=2.5.3
    
  6. Install the following CLI tools:

    • meshctl, the Gloo command line tool for bootstrapping Gloo Platform, registering clusters, describing configured resources, and more. Be sure to download version 2.5.3, which uses the latest Gloo Gateway CRDs. An example download command follows this list.
    • kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes cluster you plan to use.
    • OpenShift only: oc, the OpenShift command line tool. Download the oc version that is the same minor version as the OpenShift cluster that you plan to use.
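
    For example, meshctl is typically downloaded with an installer script similar to the following. This is a sketch; verify the exact URL and version variable against the Solo documentation for your release.

    # Download meshctl 2.5.3 and add it to your PATH (installer details may vary).
    curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.5.3 sh -
    export PATH=$HOME/.gloo-mesh/bin:$PATH
    meshctl version
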
  7. For quick installations, such as for testing environments, you can install with meshctl. To customize your installation in detail, such as for production environments, install with Helm.

Install with meshctl

Quickly install Gloo Gateway by using meshctl, such as for testing purposes.

The meshctl install steps assume that you want to secure the connection between the Gloo management server and agents by using mutual TLS with self-signed TLS certificates. If you want to customize this setup and use simple TLS instead, or if you want to bring your own TLS certificates, follow the Install with Helm steps.

Install the management plane

Start by installing the Gloo Gateway management plane in your management cluster.

  1. Install the Gloo Gateway management plane in the management cluster. This command uses a basic profile to create a gloo-mesh namespace and install the management plane components, such as the management server and Prometheus server, in your management cluster.

    meshctl install creates a self-signed certificate authority for mTLS if you do not supply your own certificates. If you prefer to set up Gloo Gateway without secure communication for quick demonstrations, include the --set common.insecure=true flag. Note that using the default self-signed CAs or using insecure mode is not suitable for production environments.
    meshctl install --profiles mgmt-server \
      --kubecontext $MGMT_CONTEXT \
      --set common.cluster=$MGMT_CLUSTER \
      --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY
    

    Note:

    • Need to use OpenShift routes instead of load balancer service types? Follow the OpenShift steps in the Helm section instead.
    • After you run the following command in OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.
    1. Elevate the permission for the gloo-mesh service account.
      oc adm policy add-scc-to-group hostmount-anyuid system:serviceaccounts:gloo-mesh --context $MGMT_CONTEXT
      
    2. Install the Gloo management plane in the management cluster.
      meshctl install --profiles mgmt-server-openshift \
        --kubecontext $MGMT_CONTEXT \
        --set common.cluster=$MGMT_CLUSTER \
        --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY
      

  2. Verify that the management plane pods have a status of Running.

    kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
    
  3. Save the external address and port that were assigned by your cloud provider to the Gloo OpenTelemetry (OTel) gateway load balancer service. The OTel collector agents in each workload cluster send metrics to this address.

    # If your load balancer is assigned an IP address:
    export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
    
    # If your load balancer is assigned a hostname instead, such as on AWS:
    export TELEMETRY_GATEWAY_HOSTNAME=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOSTNAME}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
    

  4. Create a workspace that selects all clusters and namespaces by default, and workspace settings that enable communication across clusters. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a global workspace that imports and exports all resources and namespaces, and a workspace settings resource in the gloo-mesh-config namespace. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: Workspace
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh
    spec:
      workloadClusters:
        - name: '*'
          namespaces:
            - name: '*'
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: gloo-mesh-config
    ---
    apiVersion: admin.gloo.solo.io/v2
    kind: WorkspaceSettings
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh-config
    spec:
      options:
        serviceIsolation:
          enabled: false
        federation:
          enabled: false
          serviceSelector:
          - {}
        eastWestGateways:
        - selector:
            labels:
              istio: eastwestgateway
    EOF
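
    To confirm that the resources were created, you can list them in the management cluster. This is a quick check; output varies by environment.

    kubectl get workspaces -n gloo-mesh --context $MGMT_CONTEXT
    kubectl get workspacesettings -n gloo-mesh-config --context $MGMT_CONTEXT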
    

Register workload clusters

Register each workload cluster with the management server by deploying the relay agent.

  1. Prepare the gloo-mesh-addons namespace.

    kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT1
    kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT2
    
    1. Elevate the permissions of the gloo-mesh-addons service account that will be created. These permissions allow the Istio sidecars to make use of a user ID that is normally restricted by OpenShift. For more information, see the Istio on OpenShift documentation.
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons --context $REMOTE_CONTEXT1
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons --context $REMOTE_CONTEXT2
      
    2. Create the gloo-mesh-addons project, and create a NetworkAttachmentDefinition custom resource for the project.
      kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT1
      
      cat <<EOF | oc -n gloo-mesh-addons create --context $REMOTE_CONTEXT1 -f -
      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: istio-cni
      EOF
      
      kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT2
      
      cat <<EOF | oc -n gloo-mesh-addons create --context $REMOTE_CONTEXT2 -f -
      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: istio-cni
      EOF
      

  2. Register both workload clusters with the management server. The meshctl command completes the following:

    • Creates the gloo-mesh namespace
    • Copies the root CA certificate to the workload cluster
    • Copies the bootstrap token to the workload cluster
    • Uses basic profiles to install the Gloo agent, rate limit server, and external auth server in the workload cluster
    • Creates a KubernetesCluster custom resource in the management cluster
      meshctl cluster register $REMOTE_CLUSTER1 \
        --kubecontext $MGMT_CONTEXT \
        --remote-context $REMOTE_CONTEXT1 \
        --profiles agent,ratelimit,extauth \
        --version $GLOO_VERSION \
        --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS 
      
      meshctl cluster register $REMOTE_CLUSTER2 \
        --kubecontext $MGMT_CONTEXT \
        --remote-context $REMOTE_CONTEXT2 \
        --profiles agent,ratelimit,extauth \
        --version $GLOO_VERSION \
        --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
      
      1. Elevate the permissions for the gloo-mesh service account.
        oc adm policy add-scc-to-group hostmount-anyuid system:serviceaccounts:gloo-mesh --context $REMOTE_CONTEXT1
        oc adm policy add-scc-to-group hostmount-anyuid system:serviceaccounts:gloo-mesh --context $REMOTE_CONTEXT2
        
      2. Register the workload clusters.
        meshctl cluster register $REMOTE_CLUSTER1 \
         --kubecontext $MGMT_CONTEXT \
         --remote-context $REMOTE_CONTEXT1 \
         --profiles agent-openshift,ratelimit,extauth \
         --version $GLOO_VERSION \
         --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
            
        meshctl cluster register $REMOTE_CLUSTER2 \
         --kubecontext $MGMT_CONTEXT \
         --remote-context $REMOTE_CONTEXT2 \
         --profiles agent-openshift,ratelimit,extauth \
         --version $GLOO_VERSION \
         --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
        

      Note: In OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.

      If you installed the Gloo management plane in insecure mode, include the --relay-server-insecure=true flag in this command.

  3. Verify that the Gloo data plane components are healthy. If not, try debugging the agent.

    meshctl check --kubecontext $REMOTE_CONTEXT1
    meshctl check --kubecontext $REMOTE_CONTEXT2
    

    Example output:

    🟢 CRD Version check
    
    🟢 Gloo deployment status
    
    Namespace        | Name                           | Ready | Status 
    gloo-mesh        | gloo-mesh-agent                | 1/1   | Healthy
    gloo-mesh-addons | ext-auth-service               | 1/1   | Healthy
    gloo-mesh-addons | rate-limiter                   | 1/1   | Healthy
    gloo-mesh-addons | redis                          | 1/1   | Healthy
    gloo-mesh        | gloo-telemetry-collector-agent | 3/3   | Healthy
    
  4. Verify that your Gloo Gateway setup is correctly installed. This check might take a few seconds to verify that:

    • Your Gloo Platform product licenses are valid and current.
    • The Gloo Platform CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The agents in the workload clusters are successfully identified by the management server.
    meshctl check --kubecontext $MGMT_CONTEXT
    

    Example output:

    🟢 License status
    
     INFO  gloo-mesh enterprise license expiration is 25 Aug 24 10:38 CDT
     INFO  Valid GraphQL license module found
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status 
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
  5. Deploy Istio gateway proxies in each workload cluster.

Install with Helm

Customize your Gloo Gateway setup by installing with the Gloo Platform Helm chart.

Install the management plane

  1. Production installations: Review Best practices for production to prepare your security measures. For example, before you begin your Gloo installation, you can provide your own certificates and set up secure access to the Gloo UI.

  2. Install helm, the Kubernetes package manager.

  3. Add and update the Helm repository for Gloo Platform.

    helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
    helm repo update
    
  4. Apply the Gloo Platform CRDs to your cluster by creating a gloo-platform-crds Helm release.

    helm install gloo-platform-crds gloo-platform/gloo-platform-crds \
       --kube-context $MGMT_CONTEXT \
       --namespace=gloo-mesh \
       --create-namespace \
       --version $GLOO_VERSION
    
  5. Prepare a Helm values file for production-level settings, including FIPS-compliant images, OIDC authorization for the Gloo UI, and more. To get started, you can use the minimum settings in the mgmt-server profile as a basis for your values file. This profile enables all of the necessary components that are required for a Gloo Gateway management plane installation, such as the management server, as well as some optional components, such as the Gloo UI.

    # Kubernetes clusters:
    curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/mgmt-server.yaml > mgmt-server.yaml
    open mgmt-server.yaml
    
    # OpenShift clusters:
    curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/mgmt-server-openshift.yaml > mgmt-server.yaml
    open mgmt-server.yaml
    

    Note: When you use the settings in this profile to install Gloo Gateway in OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.

  6. Decide how you want to secure the relay connection between the Gloo management server and agents. In test and POC environments, you can use Gloo Gateway self-signed certificates to secure the connection. If you plan to use Gloo Gateway in production, it is recommended to bring your own certificates instead. For more information, see Setup options.

    Use Gloo Gateway to create a self-signed server TLS certificate that the Gloo management server presents when workload cluster agents connect. This setup is not recommended for production, but can be used for testing purposes. For more information, see Self-signed server TLS certificate.

    In your Helm values file for the management server, add the following values.

    glooMgmtServer:
      enabled: true
      extraEnvs:
        RELAY_DISABLE_CLIENT_CERTIFICATE_AUTHENTICATION:
          value: "true"  
        RELAY_TOKEN: 
          value: "My token"
    
    Helm value descriptions:

    • RELAY_TOKEN: Specify the relay token that the Gloo management server and agent use to establish initial trust. When you install Gloo Gateway and set RELAY_DISABLE_CLIENT_CERTIFICATE_AUTHENTICATION to true, the connection between the Gloo management server and agent is automatically secured by using simple, server-side TLS. In a simple TLS setup, only the management server presents a certificate to authenticate its identity. The identity of the agent is not verified. To ensure that only trusted agents connect to the management server, the relay identity token is used. The relay identity token can be any string value and is stored in the relay-identity-token-secret Kubernetes secret on the management cluster. You must set the same value in glooAgent.extraEnvs.RELAY_TOKEN.value when installing Gloo Gateway in a workload cluster to allow Gloo agents to connect to the Gloo management server.
    • RELAY_DISABLE_CLIENT_CERTIFICATE_AUTHENTICATION: Set this value to true to not require a client TLS certificate from the Gloo agent to prove the agent's identity and establish the connection with the management server. This setting is required when you want to use simple TLS to secure the connection between the Gloo management server and agent.
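
    After you install the management plane later in this guide, you can confirm that the relay token was stored by checking for the secret described above. This is an optional verification step.

    kubectl get secret relay-identity-token-secret -n gloo-mesh --context $MGMT_CONTEXT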

    Use your preferred PKI provider to create the server TLS certificate for the Gloo management server. For more information, see Bring your own server TLS certificate.

    1. Create your root CA credentials and derive a server TLS certificate. To create the TLS certificate, you can use your preferred PKI provider or follow the steps in BYO server certificate to create a custom server TLS certificate by using OpenSSL. Make sure that you have the following Kubernetes secrets in the management cluster:

      • relay-server-tls-secret-custom: The key and TLS certificate for the Gloo management server, and root CA certificate information for the certificate chain. Note that you can use a different name, but you cannot use relay-server-tls-secret as this name is reserved by the Gloo management server when creating self-signed root CAs and server TLS certificates.
      • gloo-telemetry-gateway-tls-secret-custom: The key and TLS certificate for the Gloo telemetry gateway, and root CA certificate information for the certificate chain. It is recommended to use the same credentials that you use for the Gloo management server. Note that you can use a different name, but you cannot use gloo-telemetry-gateway-tls-secret as this name is reserved by the Gloo management server when creating self-signed certificates.
      • telemetry-root-secret: The root CA certificate for the certificate chain.
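
      A minimal sketch of how you might create these secrets, assuming you already have ca.crt, tls.crt, and tls.key PEM files from your PKI provider. The key names shown follow the generic-secret convention in the BYO server certificate guide; adjust them to match your certificates.

      kubectl create secret generic relay-server-tls-secret-custom -n gloo-mesh --context $MGMT_CONTEXT \
        --from-file=ca.crt=ca.crt --from-file=tls.crt=tls.crt --from-file=tls.key=tls.key
      kubectl create secret generic gloo-telemetry-gateway-tls-secret-custom -n gloo-mesh --context $MGMT_CONTEXT \
        --from-file=ca.crt=ca.crt --from-file=tls.crt=tls.crt --from-file=tls.key=tls.key
      kubectl create secret generic telemetry-root-secret -n gloo-mesh --context $MGMT_CONTEXT \
        --from-file=ca.crt=ca.crt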
    2. In your Helm values file for the management server, add the following values.

      glooMgmtServer:
        enabled: true
        relay:
          tlsSecret:
            name: relay-server-tls-secret-custom
        extraEnvs:
          RELAY_DISABLE_CLIENT_CERTIFICATE_AUTHENTICATION:
            value: "true"  
          RELAY_TOKEN: 
            value: "My token"
      telemetryCollector: 
        enabled: true
        extraVolumes: 
          - name: root-ca
            secret:
              defaultMode: 420
              optional: true
              secretName: telemetry-root-secret
          - configMap:
              items:
                - key: relay
                  path: relay.yaml
              name: gloo-telemetry-collector-config
            name: telemetry-configmap
          - hostPath:
              path: /var/run/cilium
              type: DirectoryOrCreate
            name: cilium-run
      telemetryGateway:
        enabled: true
        extraVolumes:
        - name: tls-keys
          secret:
            secretName:  gloo-telemetry-gateway-tls-secret-custom
            defaultMode: 420
        - name: telemetry-configmap
          configMap:
            name: gloo-telemetry-gateway-config
            items:
              - key: relay
                path: relay.yaml
      telemetryGatewayCustomization:
        disableCertGeneration: true
      
      Helm value descriptions:

      • relay.tlsSecret.name: Add the name of the Kubernetes secret with the custom server TLS secret that you created earlier.
      • RELAY_TOKEN: Specify the relay token that the Gloo management server and agent use to establish initial trust. When you install Gloo Gateway and set RELAY_DISABLE_CLIENT_CERTIFICATE_AUTHENTICATION to true, the connection between the Gloo management server and agent is automatically secured by using simple, server-side TLS. In a simple TLS setup, only the management server presents a certificate to authenticate its identity. The identity of the agent is not verified. To ensure that only trusted agents connect to the management server, the relay identity token is used. The relay identity token can be any string value and is stored in the relay-identity-token-secret Kubernetes secret on the management cluster. You must set the same value in glooAgent.extraEnvs.RELAY_TOKEN.value when you install the Gloo agent to allow the Gloo agent to connect to the Gloo management server.
      • RELAY_DISABLE_CLIENT_CERTIFICATE_AUTHENTICATION: Set this value to true to not require a client TLS certificate from the Gloo agent to prove the agent's identity and establish the connection with the management server. This setting is required when you want to use simple TLS to secure the connection between the Gloo management server and agent.
      • telemetryGateway.extraVolumes: Add the gloo-telemetry-gateway-tls-secret-custom Kubernetes secret that you created earlier to the tls-keys volume. Make sure that you also add the other volumes to your telemetry gateway configuration.
      • telemetryCollector.extraVolumes: Add the telemetry-root-secret Kubernetes secret that you created earlier to the root-ca volume. Make sure that you also add the other volumes to your telemetry collector configuration.

    You can use Gloo to generate self-signed certificates and keys for the root and intermediate CA. These credentials are used to derive the server TLS certificate for the Gloo management server and client TLS certificate for the Gloo agent. Note that this setup is not recommended for production, but can be used for testing purposes. For more information, see Self-signed CAs with automatic client certificate rotation.



    In your Helm values file for the management server, add the following values. Note that mTLS is the default mode in Gloo Gateway and does not require any additional configuration on the management server.

    glooMgmtServer:
      enabled: true
    

    Use your preferred PKI provider to generate your root CA and intermediate CA credentials. You then store the intermediate CA credentials in your cluster so that you can leverage Gloo Gateway's built-in capability to automatically issue and sign client TLS certificates for Gloo agents. For more information, see Bring your own CAs with automatic client TLS certificate rotation.

    1. Create your root CA, intermediate CA, server, and telemetry gateway credentials, as well as relay identity token, and store them as the following Kubernetes secrets in the gloo-mesh namespace on the management cluster:

      • relay-root-tls-secret that holds the root CA certificate
      • relay-tls-signing-secret that holds the intermediate CA credentials
      • relay-server-tls-secret that holds the server key, TLS certificate and certificate chain information for the Gloo management server
      • relay-identity-token-secret that holds the relay identity token value
      • telemetry-root-secret that holds the root CA certificate for the certificate chain
      • gloo-telemetry-gateway-tls-secret that holds the key, TLS certificate and certificate chain for the Gloo telemetry gateway

      You can use your preferred PKI provider to create the TLS certificates or follow the steps in BYO server certificate with managed client certificate to create these TLS credentials with OpenSSL.
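
      Before you continue, you can verify that the secrets exist in the management cluster; the names must match the list above.

      kubectl get secrets -n gloo-mesh --context $MGMT_CONTEXT | grep -E 'relay-|telemetry-root-secret|gloo-telemetry-gateway-tls-secret'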

    2. In your Helm values file for the management server, add the following values.

      glooMgmtServer:
        enabled: true
        relay:
          disableCaCertGeneration: true
          signingTlsSecret:
            name: relay-tls-signing-secret
            namespace: gloo-mesh
          tlsSecret:
            name: relay-server-tls-secret
            namespace: gloo-mesh
          tokenSecret:
            key: token
            name: relay-identity-token-secret
            namespace: gloo-mesh
      telemetryCollector: 
        enabled: true
        extraVolumes: 
          - name: root-ca
            secret:
              defaultMode: 420
              optional: true
              secretName: telemetry-root-secret
          - configMap:
              items:
                - key: relay
                  path: relay.yaml
              name: gloo-telemetry-collector-config
            name: telemetry-configmap
          - hostPath:
              path: /var/run/cilium
              type: DirectoryOrCreate
            name: cilium-run
      telemetryGateway:
        enabled: true
        extraVolumes:
        - name: tls-keys
          secret:
            secretName:  gloo-telemetry-gateway-tls-secret
            defaultMode: 420
        - name: telemetry-configmap
          configMap:
            name: gloo-telemetry-gateway-config
            items:
              - key: relay
                path: relay.yaml
      telemetryGatewayCustomization:
        disableCertGeneration: true
      
      Helm value descriptions:

      • glooMgmtServer.relay.disableCaCertGeneration: Disable the generation of self-signed certificates to secure the relay connection between the Gloo management server and agent.
      • glooMgmtServer.relay.signingTlsSecret: Add the name and namespace of the Kubernetes secret that holds the intermediate CA credentials that you created earlier.
      • glooMgmtServer.relay.tlsSecret: Add the name and namespace of the Kubernetes secret that holds the server TLS certificate for the Gloo management server that you created earlier.
      • glooMgmtServer.relay.tokenSecret: Add the name, namespace, and key of the Kubernetes secret that holds the relay identity token that you created earlier.
      • telemetryGateway.extraVolumes: Add the gloo-telemetry-gateway-tls-secret Kubernetes secret that you created earlier to the tls-keys volume. Make sure that you also add the other volumes to your telemetry gateway configuration.
      • telemetryCollector.extraVolumes: Add the telemetry-root-secret Kubernetes secret that you created earlier to the root-ca volume. Make sure that you also add the other volumes to your telemetry collector configuration.

    Use your preferred PKI provider to create the root CA, server TLS, client TLS, and telemetry gateway certificates. Then, store these certificates in the cluster so that the Gloo agent uses a client TLS certificate when establishing the first connection with the Gloo management server. No relay identity tokens are used. With this approach, you cannot use Gloo Gateway's built-in client TLS certificate rotation capabilities. Instead, you must set up your own processes to monitor the expiration of your certificates and to rotate them before they expire.

    For more information, see Bring your own CAs and client TLS certificates.

    1. Create your root CA, server, client, and telemetry gateway TLS certificates. You can use your preferred PKI provider to do that or follow the steps in Create certificates with OpenSSL to create custom TLS certificates by using OpenSSL. Then, store the following information in Kubernetes secrets in the gloo-mesh namespace on the management cluster.

      • relay-server-tls-secret that holds the server key, TLS certificate and certificate chain information for the Gloo management server
      • gloo-telemetry-gateway-tls-secret that holds the key, TLS certificate and certificate chain for the Gloo telemetry gateway
      • telemetry-root-secret that holds the root CA certificate for the certificate chain
    2. In your Helm values file for the management server, add the following values.

      glooMgmtServer:
        enabled: true
        relay:
          disableCa: true
          disableCaCertGeneration: true
          disableTokenGeneration: true
          tlsSecret:
            name: relay-server-tls-secret
            namespace: gloo-mesh
          tokenSecret:
            key: null
            name: null
            namespace: null
      telemetryCollector: 
        enabled: true
        extraVolumes: 
          - name: root-ca
            secret:
              defaultMode: 420
              optional: true
              secretName: telemetry-root-secret
          - configMap:
              items:
                - key: relay
                  path: relay.yaml
              name: gloo-telemetry-collector-config
            name: telemetry-configmap
          - hostPath:
              path: /var/run/cilium
              type: DirectoryOrCreate
            name: cilium-run
      telemetryGateway:
        enabled: true
        extraVolumes:
        - name: tls-keys
          secret:
            secretName:  gloo-telemetry-gateway-tls-secret
            defaultMode: 420
        - name: telemetry-configmap
          configMap:
            name: gloo-telemetry-gateway-config
            items:
              - key: relay
                path: relay.yaml
      telemetryGatewayCustomization:
        disableCertGeneration: true
      
      Helm value descriptions:

      • relay.disableCa: Disable the generation of self-signed root and intermediate CA certificates and the use of identity tokens to establish initial trust between the Gloo management server and agent.
      • relay.disableCaCertGeneration: Disable the generation of self-signed certificates to secure the relay connection between the Gloo management server and agent.
      • relay.disableTokenGeneration: Disable the generation of relay identity tokens.
      • relay.tlsSecret: Add the name and namespace of the Kubernetes secret that holds the server TLS certificate for the Gloo management server that you created earlier.
      • relay.tokenSecret: Set all values to null to instruct the Gloo management server to not use identity tokens to establish initial trust with Gloo agents.
      • telemetryGateway.extraVolumes: Add the gloo-telemetry-gateway-tls-secret Kubernetes secret that you created earlier to the tls-keys volume. Make sure that you also add the other volumes to your telemetry gateway configuration.
      • telemetryCollector.extraVolumes: Add the telemetry-root-secret Kubernetes secret that you created earlier to the root-ca volume. Make sure that you also add the other volumes to your telemetry collector configuration.

  7. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following optional settings.

    Field descriptions:

    • glooMgmtServer.resources.limits: Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
    • glooMgmtServer.serviceOverrides.metadata.annotations: Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
    • glooUi.auth: Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
    • prometheus.enabled: Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Customization options.
    • redis: Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
    • glooMgmtServer.serviceType and telemetryGateway.service.type (OpenShift only): In some OpenShift setups, you might not use load balancer service types. You can set these two deployment service types to ClusterIP, and expose them by using OpenShift routes after installation.
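
    For example, a values snippet with resource limits and a sample load balancer annotation might look like the following. This is a sketch; the AWS annotation is only an illustration, so use the annotations that your environment requires.

    glooMgmtServer:
      resources:
        limits:
          cpu: 1000m
          memory: 1Gi
      serviceOverrides:
        metadata:
          annotations:
            # Example only: replace with the annotations for your load balancer.
            service.beta.kubernetes.io/aws-load-balancer-type: "external"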

    For more information about the settings you can configure:

  8. Install the Gloo Gateway management plane in your management cluster, including the customizations in your Helm values file.

    helm install gloo-platform gloo-platform/gloo-platform \
       --kube-context $MGMT_CONTEXT \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values mgmt-server.yaml \
       --set common.cluster=$MGMT_CLUSTER \
       --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY
    
    1. Elevate the permissions for the gloo-mesh service account.
      oc adm policy add-scc-to-group hostmount-anyuid system:serviceaccounts:gloo-mesh --context $MGMT_CONTEXT
      
    2. Install the Gloo Gateway management plane in your management cluster.
      helm install gloo-platform gloo-platform/gloo-platform \
         --kube-context $MGMT_CONTEXT \
         --namespace gloo-mesh \
         --version $GLOO_VERSION \
         --values mgmt-server.yaml \
         --set common.cluster=$MGMT_CLUSTER \
         --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY
      

  9. Verify that the management plane pods have a status of Running.

    kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
    

    Example output:

    NAME                                      READY   STATUS    RESTARTS   AGE
    gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
    gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
    gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
    gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
    prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
    
  10. Save the external address and port that were assigned by your cloud provider to the gloo-mesh-mgmt-server service. The gloo-mesh-agent relay agent in each cluster accesses this address via a secure connection.

    # If your load balancer is assigned an IP address:
    export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    export MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
    export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
    echo $MGMT_SERVER_NETWORKING_ADDRESS
    
    # If your load balancer is assigned a hostname instead, such as on AWS:
    export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    export MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
    export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
    echo $MGMT_SERVER_NETWORKING_ADDRESS
    
    1. Expose the management server by using an OpenShift route. Your route might look like the following.
      oc apply --context $MGMT_CONTEXT -n gloo-mesh -f- <<EOF
      kind: Route
      apiVersion: route.openshift.io/v1
      metadata:
        name: gloo-mesh-mgmt-server
        namespace: gloo-mesh
        annotations:
          # Needed for the different agents to connect to different replica instances of the management server deployment
          haproxy.router.openshift.io/balance: roundrobin
      spec:
        host: gloo-mesh-mgmt-server.<your_domain>.com
        to:
          kind: Service
          name: gloo-mesh-mgmt-server
          weight: 100
        port:
          targetPort: grpc
        tls:
          termination: passthrough
          insecureEdgeTerminationPolicy: Redirect
        wildcardPolicy: None
      EOF
      
    2. Save the management server's address, which consists of the route host and the route port. Note that the port is the route's own port, such as 443, and not the grpc port of the management server that the route points to.
      export MGMT_SERVER_NETWORKING_DOMAIN=$(oc get route -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.ingress[0].host}')
      export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:443
      echo $MGMT_SERVER_NETWORKING_ADDRESS
      

  11. Save the external address and port that were assigned by your cloud provider to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.

    # If your load balancer is assigned an IP address:
    export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
    
    # If your load balancer is assigned a hostname instead, such as on AWS:
    export TELEMETRY_GATEWAY_HOSTNAME=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOSTNAME}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
    
    1. Expose the telemetry gateway by using an OpenShift route. Your route might look like the following.
      oc apply --context $MGMT_CONTEXT -n gloo-mesh -f- <<EOF
      kind: Route
      apiVersion: route.openshift.io/v1
      metadata:
        name: gloo-telemetry-gateway
        namespace: gloo-mesh
      spec:
        host: gloo-telemetry-gateway.<your_domain>.com
        to:
          kind: Service
          name: gloo-telemetry-gateway
          weight: 100
        port:
          targetPort: otlp
        tls:
          termination: passthrough
          insecureEdgeTerminationPolicy: Redirect
        wildcardPolicy: None
      EOF
      
    2. Save the telemetry gateway's address, which consists of the route host and the route port. Note that the port is the route's own port, such as 443, and not the otlp port of the telemetry gateway that the route points to.
      export TELEMETRY_GATEWAY_HOST=$(oc get route -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.ingress[0].host}')
      export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOST}:443
      echo $TELEMETRY_GATEWAY_ADDRESS
      

  12. Create a workspace that selects all clusters and namespaces by default, and workspace settings that enable communication across clusters. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a single workspace for everything. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting. For more information, see Set up multitenancy with workspaces.

    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: Workspace
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh
    spec:
      workloadClusters:
        - name: '*'
          namespaces:
            - name: '*'
    ---
    apiVersion: v1
    kind: Namespace
    metadata:
      name: gloo-mesh-config
    ---
    apiVersion: admin.gloo.solo.io/v2
    kind: WorkspaceSettings
    metadata:
      name: $MGMT_CLUSTER
      namespace: gloo-mesh-config
    spec:
      options:
        serviceIsolation:
          enabled: false
        federation:
          enabled: false
          serviceSelector:
          - {}
        eastWestGateways:
        - selector:
            labels:
              istio: eastwestgateway
    EOF
    

Register workload clusters

Register each workload cluster with the management server by deploying the relay agent.

  1. In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster's local domain.

    • The metadata.name must match the name of the workload cluster that you will specify in the gloo-platform Helm chart in subsequent steps.
    • The spec.clusterDomain must match the local cluster domain of the Kubernetes cluster.
    • You can optionally give your cluster a label, such as env: prod, region: us-east, or another selector. Your workspaces can use the label to automatically add the cluster to the workspace.
    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
      name: $REMOTE_CLUSTER1
      namespace: gloo-mesh
      labels:
        env: prod
    spec:
      clusterDomain: cluster.local
    EOF
    
    kubectl apply --context $MGMT_CONTEXT -f- <<EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: KubernetesCluster
    metadata:
      name: $REMOTE_CLUSTER2
      namespace: gloo-mesh
      labels:
        env: prod
    spec:
      clusterDomain: cluster.local
    EOF
    
  2. In your workload cluster, apply the Gloo Platform CRDs by creating a gloo-platform-crds Helm release. Note: If you plan to manually deploy and manage your Istio installation in workload clusters rather than using Solo's Istio lifecycle manager, include the --set installIstioOperator=false flag to ensure that the Istio operator CRD is not managed by this Gloo CRD Helm release.

    helm install gloo-platform-crds gloo-platform/gloo-platform-crds \
    --kube-context $REMOTE_CONTEXT1 \
    --namespace=gloo-mesh \
    --create-namespace \
    --version $GLOO_VERSION
    
    helm install gloo-platform-crds gloo-platform/gloo-platform-crds \
    --kube-context $REMOTE_CONTEXT2 \
    --namespace=gloo-mesh \
    --create-namespace \
    --version $GLOO_VERSION
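
    If you manage Istio yourself, the CRD release might look like the following instead, with the flag from the note above. Shown for the first workload cluster; repeat for the second.

    helm install gloo-platform-crds gloo-platform/gloo-platform-crds \
    --kube-context $REMOTE_CONTEXT1 \
    --namespace=gloo-mesh \
    --create-namespace \
    --version $GLOO_VERSION \
    --set installIstioOperator=false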
    
  3. Prepare a Helm values file for production-level settings, including FIPS-compliant images, OIDC authorization for the Gloo UI, and more. To get started, you can use the minimum settings in the agent profile as a basis for your values file. This profile enables the Gloo agent and the Gloo Platform telemetry collector.

    # Kubernetes clusters:
    curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/agent.yaml > agent.yaml
    open agent.yaml
    
    # OpenShift clusters:
    curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/agent-openshift.yaml > agent.yaml
    open agent.yaml
    

    Note: When you use the settings in this profile to install Gloo Gateway in OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.

  4. Depending on the method you chose to secure the relay connection, prepare the Helm values for the data plane installation. For more information, see the Setup options.

    In your Helm values file for the workload cluster, add the following values.

    glooAgent:
      enabled: true
      extraEnvs:
        RELAY_DISABLE_SERVER_CERTIFICATE_VALIDATION:
          value: "true"  
        RELAY_TOKEN: 
          value: "My token"
    telemetryCollector:
      enabled: true
    telemetryCollectorCustomization:
      skipVerify: true
    
    Helm value descriptions:

    • RELAY_TOKEN: The relay token to establish initial trust between the Gloo management server and the agent. The relay token is saved in memory on the Gloo agent. You must set the same value that you set in glooMgmtServer.extraEnvs.RELAY_TOKEN.value when you installed the Gloo Gateway management plane to allow Gloo agents to connect to the Gloo management server.
    • RELAY_DISABLE_SERVER_CERTIFICATE_VALIDATION: Set to true to skip validating the server TLS certificate that the Gloo management server presents. This setting is required to configure the relay connection for TLS.
    • telemetryCollectorCustomization.skipVerify: Set to true to skip validation of the server certificate that the Gloo telemetry gateway presents. By default, the Gloo telemetry gateway uses the same TLS certificates that the Gloo management server uses for the relay connection. If you configure the relay connection for TLS, you must set skipVerify to true on the telemetry collector agent.
    1. Make sure that the following Kubernetes secret exists in the gloo-mesh namespace on each workload cluster.

      • telemetry-root-secret that holds the root CA certificate for the certificate chain

      You can use the steps in BYO server certificate as guidance for creating this secret in the workload cluster.

    2. In your Helm values file for the workload cluster, add the following values.

      glooAgent:
        enabled: true
        extraEnvs:
          RELAY_DISABLE_SERVER_CERTIFICATE_VALIDATION:
            value: "true"  
          RELAY_TOKEN: 
            value: "My token"
      telemetryCollector: 
        enabled: true
        extraVolumes: 
          - name: root-ca
            secret:
              defaultMode: 420
              optional: true
              secretName: telemetry-root-secret
          - configMap:
              items:
                - key: relay
                  path: relay.yaml
              name: gloo-telemetry-collector-config
            name: telemetry-configmap
          - hostPath:
              path: /var/run/cilium
              type: DirectoryOrCreate
            name: cilium-run
      telemetryCollectorCustomization:
        skipVerify: true
      
      Helm value descriptions:

      • RELAY_TOKEN: The relay token to establish initial trust between the Gloo management server and the agent. The relay token is saved in memory on the Gloo agent. You must set the same value that you set in glooMgmtServer.extraEnvs.RELAY_TOKEN.value when you installed the Gloo Gateway management plane to allow Gloo agents to connect to the Gloo management server.
      • RELAY_DISABLE_SERVER_CERTIFICATE_VALIDATION: Set to true to skip validating the server TLS certificate that the Gloo management server presents. This setting is required to configure the relay connection for TLS.
      • telemetryCollector.extraVolumes: Add the telemetry-root-secret Kubernetes secret that you created earlier to the root-ca volume. Make sure that you also add the other volumes to your telemetry collector configuration.
      • telemetryCollectorCustomization.skipVerify: Set to true to skip validation of the server certificate that the Gloo telemetry gateway presents. By default, the Gloo telemetry gateway uses the same TLS certificates that the Gloo management server uses for the relay connection. If you configure the relay connection for TLS, you must set skipVerify to true on the telemetry collector agent.
    1. Get the value of the root CA certificate from the management cluster and create a secret in the workload cluster.

      kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
      kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT1 --from-file ca.crt=ca.crt
      rm ca.crt
      
      kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
      kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT2 --from-file ca.crt=ca.crt
      rm ca.crt
      
    2. Get the identity token from the management cluster and create a secret in the workload cluster.

      kubectl get secret relay-identity-token-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.token}' | base64 -d > token
      kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context $REMOTE_CONTEXT1 --from-file token=token
      rm token
      
      kubectl get secret relay-identity-token-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.token}' | base64 -d > token
      kubectl create secret generic relay-identity-token-secret -n gloo-mesh --context $REMOTE_CONTEXT2 --from-file token=token
      rm token
      
    3. Make sure that the following Helm values are added to your values file.

      glooAgent:
        enabled: true
      
    1. Make sure that the following Kubernetes secrets exist in the gloo-mesh namespace on each workload cluster.

      • relay-root-tls-secret that holds the root CA certificate
      • relay-identity-token-secret that holds the relay identity token value
      • telemetry-root-secret that holds the root CA certificate for the certificate chain

      You can use the steps in BYO server certificate with managed client certificate as guidance for creating these secrets in the workload cluster.

    2. In your Helm values file for the agent, add the following values.

      glooAgent:
        enabled: true
        relay:
          authority: gloo-mesh-mgmt-server.gloo-mesh
          clientTlsSecretRotationGracePeriodRatio: ""
          rootTlsSecret:
            name: relay-root-tls-secret
            namespace: gloo-mesh
          tokenSecret:
            key: token
            name: relay-identity-token-secret
            namespace: gloo-mesh
      telemetryCollector: 
        enabled: true
        extraVolumes: 
          - name: root-ca
            secret:
              defaultMode: 420
              optional: true
              secretName: telemetry-root-secret
          - configMap:
              items:
                - key: relay
                  path: relay.yaml
              name: gloo-telemetry-collector-config
            name: telemetry-configmap
          - hostPath:
              path: /var/run/cilium
              type: DirectoryOrCreate
            name: cilium-run
      
      Helm value descriptions:

      • glooAgent.relay.rootTlsSecret: Add the name and namespace of the Kubernetes secret that holds the root CA credentials that you copied from the management cluster to the workload cluster.
      • glooAgent.relay.tokenSecret: Add the name, namespace, and key of the Kubernetes secret that holds the relay identity token that you copied from the management cluster to the workload cluster.
      • telemetryCollector.extraVolumes: Add the telemetry-root-secret Kubernetes secret that you created earlier to the root-ca volume. Make sure that you also add the other volumes to your telemetry collector configuration.
    1. Make sure that the following Kubernetes secrets exist in the gloo-mesh namespace on each workload cluster.

      • $REMOTE_CLUSTER1-tls-cert that holds the key, TLS certificate, and certificate chain information for the Gloo agent
      • telemetry-root-secret that holds the root CA certificate for the certificate chain

      You can use the steps in Create certificates with OpenSSL as guidance for creating these secrets in the workload cluster.

    2. In your Helm values file for the agent, add the following values. Replace <client-tls-secret-name> with the name of the Kubernetes secret that holds the client TLS certificate, and add the name of the Kubernetes secret that holds the telemetry gateway certificate to the root-ca telemetry collector volume.

      glooAgent:
        enabled: true
        relay:
          authority: gloo-mesh-mgmt-server.gloo-mesh
          clientTlsSecretRotationGracePeriodRatio: ""
          clientTlsSecret: 
            name: <client-tls-secret-name>
            namespace: gloo-mesh
          tokenSecret:
            key: null
            name: null
            namespace: null
      telemetryCollector:
        enabled: true
        extraVolumes: 
          - name: root-ca
            secret:
              defaultMode: 420
              optional: true
              secretName: telemetry-root-secret
          - configMap:
              items:
                - key: relay
                  path: relay.yaml
              name: gloo-telemetry-collector-config
            name: telemetry-configmap
          - hostPath:
              path: /var/run/cilium
              type: DirectoryOrCreate
            name: cilium-run
      
      Helm value descriptions:

      • glooAgent.relay.clientTlsSecret: Add the name and namespace of the Kubernetes secret that holds the client TLS certificate for the Gloo agent that you created earlier.
      • glooAgent.relay.tokenSecret: Set all values to null to instruct the Gloo agent to not use identity tokens to establish initial trust with the Gloo management server.
      • telemetryCollector.extraVolumes: Add the name of the Kubernetes secret that holds the Gloo telemetry gateway certificate that you created earlier to the root-ca volume. Make sure that you also add the configMap and hostPath volumes to your configuration.

  5. Edit the file to provide your own details for settings that are recommended for production deployments, such as the following optional settings.

    Field descriptions:

    • glooAgent.resources.limits: Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
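
    For example, the corresponding values snippet might look like this:

    glooAgent:
      resources:
        limits:
          cpu: 500m
          memory: 512Mi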

    For more information about the settings you can configure:

  6. Deploy the relay agent to the workload cluster.

    helm install gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --kube-context $REMOTE_CONTEXT1 \
       --version $GLOO_VERSION \
       --values agent.yaml \
       --set common.addonsNamespace=gloo-mesh-addons \
       --set common.cluster=$REMOTE_CLUSTER1 \
       --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
       --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
    
    helm install gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --kube-context $REMOTE_CONTEXT2 \
       --version $GLOO_VERSION \
       --values agent.yaml \
       --set common.cluster=$REMOTE_CLUSTER2 \
       --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
       --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
    
    1. Elevate the permissions for the gloo-mesh service account.
      oc adm policy add-scc-to-group hostmount-anyuid system:serviceaccounts:gloo-mesh --context $REMOTE_CONTEXT1
      oc adm policy add-scc-to-group hostmount-anyuid system:serviceaccounts:gloo-mesh --context $REMOTE_CONTEXT2
      
    2. Deploy the relay agent to each workload cluster.
      helm install gloo-platform gloo-platform/gloo-platform \
         --namespace gloo-mesh \
         --kube-context $REMOTE_CONTEXT1 \
         --version $GLOO_VERSION \
         --values agent.yaml \
         --set common.addonsNamespace=gloo-mesh-addons \
         --set common.cluster=$REMOTE_CLUSTER1 \
         --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
         --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
      
      helm install gloo-platform gloo-platform/gloo-platform \
         --namespace gloo-mesh \
         --kube-context $REMOTE_CONTEXT2 \
         --version $GLOO_VERSION \
         --values agent.yaml \
         --set common.addonsNamespace=gloo-mesh-addons \
         --set common.cluster=$REMOTE_CLUSTER2 \
         --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
         --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
      

  7. Verify that the Gloo data plane components are healthy. If not, try debugging the agent.

    meshctl check --kubecontext $REMOTE_CONTEXT1
    meshctl check --kubecontext $REMOTE_CONTEXT2
    

    Example output:

    🟢 CRD Version check
    
    🟢 Gloo deployment status
    
    Namespace        | Name             | Ready | Status 
    gloo-mesh        | gloo-mesh-agent  | 1/1   | Healthy
    gloo-mesh-addons | ext-auth-service | 1/1   | Healthy
    gloo-mesh-addons | rate-limiter     | 1/1   | Healthy
    gloo-mesh-addons | redis            | 1/1   | Healthy
    
  8. Optional: Install add-ons, such as the external auth and rate limit servers, in a separate Helm release. Only create this release if you did not enable the extAuthService and rateLimiter in your main installation release. For OpenShift clusters, follow the numbered substeps instead. After the release completes, you can spot-check the add-on pods as shown at the end of this step.

    Want to expose your APIs through a developer portal? You must include some extra Helm settings. To install, see Portal.

    helm install gloo-agent-addons gloo-platform/gloo-platform \
       --namespace gloo-mesh-addons \
       --kube-context $REMOTE_CONTEXT1 \
       --create-namespace \
       --version $GLOO_VERSION \
       --set common.cluster=$REMOTE_CLUSTER1 \
       --set extAuthService.enabled=true \
       --set rateLimiter.enabled=true
    
    helm install gloo-agent-addons gloo-platform/gloo-platform \
       --namespace gloo-mesh-addons \
       --kube-context $REMOTE_CONTEXT2 \
       --create-namespace \
       --version $GLOO_VERSION \
       --set common.cluster=$REMOTE_CLUSTER2 \
       --set extAuthService.enabled=true \
       --set rateLimiter.enabled=true
    
    1. Elevate the permissions of the gloo-mesh-addons service account that will be created.
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons --context $REMOTE_CONTEXT1
      oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons --context $REMOTE_CONTEXT2
      
    2. Create the gloo-mesh-addons project, and create a NetworkAttachmentDefinition custom resource for the project.
      kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT1
      kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT2
      cat <<EOF | oc -n gloo-mesh-addons create  --context $REMOTE_CONTEXT1 -f -
      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: istio-cni
      EOF
      cat <<EOF | oc -n gloo-mesh-addons create  --context $REMOTE_CONTEXT2 -f -
      apiVersion: "k8s.cni.cncf.io/v1"
      kind: NetworkAttachmentDefinition
      metadata:
        name: istio-cni
      EOF
      
    3. Create the add-ons release.
      helm install gloo-agent-addons gloo-platform/gloo-platform \
         --namespace gloo-mesh-addons \
         --kube-context $REMOTE_CONTEXT1 \
         --version $GLOO_VERSION \
         --set common.cluster=$REMOTE_CLUSTER1 \
         --set extAuthService.enabled=true \
         --set rateLimiter.enabled=true
      
      helm install gloo-agent-addons gloo-platform/gloo-platform \
         --namespace gloo-mesh-addons \
         --kube-context $REMOTE_CONTEXT2 \
         --version $GLOO_VERSION \
         --set common.cluster=$REMOTE_CLUSTER2 \
         --set extAuthService.enabled=true \
         --set rateLimiter.enabled=true
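
    To spot-check the add-ons directly, you can list the pods in the gloo-mesh-addons namespace of each workload cluster. Based on the earlier example output, you can expect ext-auth-service, rate-limiter, and redis pods to reach a Running state after a short while.

    kubectl get pods -n gloo-mesh-addons --context $REMOTE_CONTEXT1
    kubectl get pods -n gloo-mesh-addons --context $REMOTE_CONTEXT2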
      

  9. Repeat steps 1 - 8 for any additional workload clusters that you want to register with Gloo.

  10. Verify that your Gloo Gateway setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:

    • Your Gloo Platform product licenses are valid and current.
    • The Gloo Platform CRDs are installed at the correct version.
    • The management plane pods in the management cluster are running and healthy.
    • The agents in the workload clusters are successfully identified by the management server.
    meshctl check --kubecontext $MGMT_CONTEXT
    

    Example output:

    🟢 License status
    
    INFO  gloo-mesh enterprise license expiration is 25 Aug 24 10:38 CDT
    INFO  Valid GraphQL license module found
    
    🟢 CRD version check
    
    🟢 Gloo deployment status
    
    Namespace | Name                           | Ready | Status 
    gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
    gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
    gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
    gloo-mesh | prometheus-server              | 1/1   | Healthy
    gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
    
    🟢 Mgmt server connectivity to workload agents
    
    Cluster  | Registered | Connected Pod                                   
    cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
    
  11. Deploy Istio gateway proxies in each workload cluster.

Install gateway proxies by using the Istio Lifecycle Manager

Streamline the gateway installation process by using the Gloo management plane to install Istio gateways in your clusters as part of Istio lifecycle management. By using a Gloo-managed installation, you no longer need to use istioctl to individually install the Istio control plane and gateways. Instead, you supply IstioOperator configurations in Gloo resources, and Gloo translates this configuration into an Istio control plane and gateway proxy in each cluster.
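
For orientation, the following abbreviated sketch shows what an IstioLifecycleManager resource can look like. The revision, hub, and tag values are examples or placeholders only; the gm-istiod.yaml file that you download in the next steps is the configuration to actually use.

apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
    # One installation entry per Istio revision that Gloo manages
    - revision: 1-20
      clusters:
        - name: cluster1
          # Workloads in this cluster use this revision by default
          defaultRevision: true
        - name: cluster2
          defaultRevision: true
      istioOperatorSpec:
        # Standard IstioOperator settings that Gloo applies in each cluster
        profile: minimal
        hub: <istio-image-repository>
        tag: 1.20.2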

  1. Save the names of your clusters from your infrastructure provider as environment variables. If your clusters have different names, specify those names instead.

    export REMOTE_CLUSTER1=<cluster1>
    export REMOTE_CLUSTER2=<cluster2>
    ...
    
  2. Save the kubeconfig contexts for your clusters as environment variables. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column.
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster1-context>
    export REMOTE_CONTEXT2=<remote-cluster2-context>
    ...
    
  3. Prepare an IstioLifecycleManager resource to manage istiod control planes.

    1. Download the example file. For OpenShift clusters, instead follow the nested substeps to elevate permissions, create the gloo-mesh-gateways project, and download the OpenShift-specific example file.
      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-istiod.yaml > gm-istiod.yaml
      
      1. Elevate the permissions of the following service accounts that will be created. These permissions allow the Istio sidecars to make use of a user ID that is normally restricted by OpenShift. For more information, see the Istio on OpenShift documentation.
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system --context $REMOTE_CONTEXT1
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-gateways --context $REMOTE_CONTEXT1
        # Update revision as needed
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:gm-iop-1-20 --context $REMOTE_CONTEXT1
        
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:istio-system --context $REMOTE_CONTEXT2
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-gateways --context $REMOTE_CONTEXT2
        # Update revision as needed
        oc adm policy add-scc-to-group anyuid system:serviceaccounts:gm-iop-1-20 --context $REMOTE_CONTEXT2
        
      2. Create the gloo-mesh-gateways project, and create a NetworkAttachmentDefinition custom resource for the project.
        kubectl create ns gloo-mesh-gateways --context $REMOTE_CONTEXT1
        cat <<EOF | oc --context $REMOTE_CONTEXT1 -n gloo-mesh-gateways create -f -
        apiVersion: "k8s.cni.cncf.io/v1"
        kind: NetworkAttachmentDefinition
        metadata:
          name: istio-cni
        EOF
        
        kubectl create ns gloo-mesh-gateways --context $REMOTE_CONTEXT2
        cat <<EOF | oc --context $REMOTE_CONTEXT2 -n gloo-mesh-gateways create -f -
        apiVersion: "k8s.cni.cncf.io/v1"
        kind: NetworkAttachmentDefinition
        metadata:
          name: istio-cni
        EOF
        
      3. Download the gm-istiod.yaml example file.
        curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-istiod-openshift.yaml > gm-istiod.yaml
        
    2. Update the example file with the environment variables that you previously set, and optionally further edit the file to provide your own details. Save the updated file as gm-istiod-values.yaml. For more information, see the API reference.
      • Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
        envsubst < gm-istiod.yaml > gm-istiod-values.yaml
        open gm-istiod-values.yaml
        
    3. Apply the IstioLifecycleManager resource to your management cluster.
      kubectl apply -f gm-istiod-values.yaml --context $MGMT_CONTEXT
      
  4. Prepare a GatewayLifecycleManager custom resource to manage the ingress gateways. An abbreviated example of this resource is shown after the following substeps.

    1. Download the gm-ingress-gateway.yaml example file.
      curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ingress-gateway.yaml > gm-ingress-gateway.yaml
      
    2. Update the example file with the environment variables that you previously set, and optionally further edit the file to provide your own details. Save the updated file as gm-ingress-gateway-values.yaml. For more information, see the API reference.
      • Tip: Instead of updating the file manually, try running a terminal command to substitute values, such as the following command.
        envsubst < gm-ingress-gateway.yaml > gm-ingress-gateway-values.yaml
        open gm-ingress-gateway-values.yaml
        
    3. Apply the GatewayLifecycleManager resource to your management cluster.
      kubectl apply -f gm-ingress-gateway-values.yaml --context $MGMT_CONTEXT
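
    For reference, the following abbreviated sketch shows what a GatewayLifecycleManager resource can look like. Field values such as the revision and service type are examples only; the downloaded gm-ingress-gateway.yaml file is the configuration to actually use.

    apiVersion: admin.gloo.solo.io/v2
    kind: GatewayLifecycleManager
    metadata:
      name: istio-ingressgateway
      namespace: gloo-mesh
    spec:
      installations:
        # One installation entry per gateway revision that Gloo manages
        - gatewayRevision: 1-20
          clusters:
            - name: cluster1
              # Marks this revision's gateway as the active one in the cluster
              activeGateway: true
            - name: cluster2
              activeGateway: true
          istioOperatorSpec:
            profile: empty
            components:
              ingressGateways:
                - name: istio-ingressgateway
                  namespace: gloo-mesh-gateways
                  enabled: true
                  k8s:
                    service:
                      type: LoadBalancer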
      
  5. Verify that the namespaces for your Istio installations are created in each workload cluster.

    kubectl get ns --context $REMOTE_CONTEXT1
    kubectl get ns --context $REMOTE_CONTEXT2
    

    For example, the gm-iop-1-20, gloo-mesh-gateways, and istio-system namespaces are created:

    NAME               STATUS   AGE
    default            Active   56m
    gloo-mesh          Active   36m
    gm-iop-1-20        Active   91s
    gloo-mesh-gateways Active   90s
    istio-system       Active   91s
    kube-node-lease    Active   57m
    kube-public        Active   57m
    kube-system        Active   57m
    
  6. In each namespace, verify that the Istio resources that you specified in your Istio operator configuration are successfully installing. For example, verify that the Istio control plane pods are running.

    kubectl get all -n gm-iop-1-20 --context $REMOTE_CONTEXT1
    

    Example output:

    NAME                                            READY   STATUS    RESTARTS   AGE
    pod/istio-operator-1-20-678fd95cc6-ltbvl   1/1     Running   0          4m12s
    
    NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    service/istio-operator-1-20   ClusterIP   10.204.15.247   <none>        8383/TCP   4m12s
    
    NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/istio-operator-1-20   1/1     1            1           4m12s
    
    NAME                                                  DESIRED   CURRENT   READY   AGE
    replicaset.apps/istio-operator-1-20-678fd95cc6   1         1         1       4m12s
    
    kubectl get all -n istio-system --context $REMOTE_CONTEXT1
    

    Example output:

    NAME                                   READY   STATUS    RESTARTS   AGE
    pod/istiod-1-20-b65676555-g2vmr   1/1     Running   0          8m57s
    
    NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                                 AGE
    service/istiod-1-20   ClusterIP   10.204.6.56   <none>        15010/TCP,15012/TCP,443/TCP,15014/TCP   8m56s
    
    NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/istiod-1-20   1/1     1            1           8m57s
    
    NAME                                         DESIRED   CURRENT   READY   AGE
    replicaset.apps/istiod-1-20-b65676555   1         1         1       8m57s
    
    NAME                                                   REFERENCE                     TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    horizontalpodautoscaler.autoscaling/istiod-1-20   Deployment/istiod-1-20   1%/80%    1         5         1          8m58s
    

    Note that the gateways might take a few minutes to be created.

    kubectl get all -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
    

    Example output:

    NAME                                                   READY   STATUS    RESTARTS   AGE
    pod/istio-ingressgateway-1-20-77d5f76bc8-j6qkp    1/1     Running   0          2m18s
    
    NAME                                      TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                                      AGE
    service/istio-ingressgateway              LoadBalancer   10.44.4.140    34.150.235.221   15021:31321/TCP,80:32525/TCP,443:31826/TCP   2m16s
    
    NAME                                              READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/istio-ingressgateway-1-20    1/1     1            1           2m18s
    
    NAME                                                         DESIRED   CURRENT   READY   AGE
    replicaset.apps/istio-ingressgateway-1-20-77d5f76bc8    1         1         1       2m18s
    
    NAME                                                                  REFERENCE                                    TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
    horizontalpodautoscaler.autoscaling/istio-ingressgateway-1-20    Deployment/istio-ingressgateway-1-20    4%/80%          1         5         1          2m19s
    

  7. Optional for OpenShift: Expose the gateways by using OpenShift routes.

    oc -n gloo-mesh-gateways expose svc istio-ingressgateway --port=http2 --context $REMOTE_CONTEXT1
    oc -n gloo-mesh-gateways expose svc istio-ingressgateway --port=http2 --context $REMOTE_CONTEXT2
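
    To look up the host name that OpenShift assigned, you can query the route. This assumes the default behavior of oc expose, which names the route after the service.

    oc -n gloo-mesh-gateways get route istio-ingressgateway --context $REMOTE_CONTEXT1 -o jsonpath='{.spec.host}'
    oc -n gloo-mesh-gateways get route istio-ingressgateway --context $REMOTE_CONTEXT2 -o jsonpath='{.spec.host}'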
    

In multicluster setups, one Gloo Gateway for north-south traffic is deployed to each workload cluster. To learn about your gateway options, such as creating a global load balancer to route to each gateway IP address or registering each gateway IP address in one DNS entry, see the gateway deployment patterns page.
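
For example, to collect the per-cluster gateway addresses that you might register behind a global load balancer or a single DNS entry, you can read the external address of each istio-ingressgateway service. On cloud providers that assign host names instead of IP addresses, use the hostname field in the JSON path instead of ip.

kubectl get svc istio-ingressgateway -n gloo-mesh-gateways --context $REMOTE_CONTEXT1 \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
kubectl get svc istio-ingressgateway -n gloo-mesh-gateways --context $REMOTE_CONTEXT2 \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'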

Optional: Configure locality labels for nodes

Gloo Gateway uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.

Verify that your nodes have locality labels

Verify that your nodes have at least region and zone labels. If so, and you do not want to update the labels, you can skip the remaining steps.

kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'

Example output with region and zone labels:

..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"

Add locality labels to your nodes

If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same region label to each node, but a separate zone label per node. The values are not validated against your underlying infrastructure provider. The following example shows how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.

  1. Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the --overwrite flag in the command.
    kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
    kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west
    
  2. List the nodes in each cluster. Note the name for each node.
    kubectl get nodes --context $REMOTE_CONTEXT1
    kubectl get nodes --context $REMOTE_CONTEXT2
    
  3. Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the --overwrite flag in the command.
    kubectl label node <cluster1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
    kubectl label node <cluster1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
    kubectl label node <cluster1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3
    
    kubectl label node <cluster2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
    kubectl label node <cluster2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
    kubectl label node <cluster2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
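
    To confirm that the labels were applied, you can list the nodes with the region and zone labels shown as columns.

    kubectl get nodes --context $REMOTE_CONTEXT1 -L topology.kubernetes.io/region -L topology.kubernetes.io/zone
    kubectl get nodes --context $REMOTE_CONTEXT2 -L topology.kubernetes.io/region -L topology.kubernetes.io/zone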
    

Next steps

Now that Gloo Gateway is installed, check out the following resources to explore Gloo Gateway capabilities: