These docs use Gloo Mesh Enterprise APIs to manage your sidecar service mesh. To manage your service mesh with the Kubernetes Gateway API instead, see the Gloo Mesh docs.
Install with Helm
Use Helm to customize the settings of your multicluster Gloo Mesh Enterprise installation.
Gloo Mesh Enterprise is a multicluster and multimesh management plane that is based on hardened, open-source projects like Envoy and Istio. With Gloo Mesh, you can unify the configuration, operation, and visibility of service-to-service connectivity across your distributed applications. These apps can run in different virtual machines (VMs) or Kubernetes clusters on premises or in various cloud providers, and even in different service meshes.
You can follow this guide to customize settings for an advanced Gloo Mesh Enterprise installation. To learn more about the benefits and architecture, see About.
Set your Gloo Mesh Enterprise license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license’s validity, you can run meshctl license check --key $(echo ${GLOO_MESH_LICENSE_KEY} | base64 -w0).
export GLOO_MESH_LICENSE_KEY=<license_key>
Set the Gloo Mesh Enterprise version. This example uses the latest version. You can find other versions in the Changelog documentation. Append -fips for a FIPS-compliant image, such as 2.4.16-fips. Do not include v before the version number.
export GLOO_VERSION=2.4.16
Create at least two Kubernetes clusters, or use existing ones. The instructions in this guide assume one management cluster and two workload clusters.
Cluster names must be lowercase and alphanumeric, can include hyphens (-) as the only special character, and must begin with a letter, not a number.
Production installations: Review Best practices for production to prepare your optional security measures. For example, before you begin your Gloo installation, you can provide your own certificates to secure the management server and agent connection, and set up secure access to the Gloo UI.
In a multicluster setup, you deploy the Gloo management plane into a dedicated management cluster, and the Gloo data plane into one or more workload clusters that run Istio service meshes.
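The steps in this guide reference your clusters through environment variables. As a starting point, you might set variables such as the following sketch, where the cluster names and contexts are placeholder values for your own setup (the variable names match the commands used later in this guide):
# Cluster names as you want them registered with Gloo (example values).
export MGMT_CLUSTER=mgmt-cluster
export REMOTE_CLUSTER1=cluster1
export REMOTE_CLUSTER2=cluster2
# kubectl contexts that point to each cluster.
export MGMT_CONTEXT=<management-cluster-context>
export REMOTE_CONTEXT1=<workload-cluster-1-context>
export REMOTE_CONTEXT2=<workload-cluster-2-context>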
Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo management plane installation.
curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/mgmt-server.yaml > mgmt-plane.yaml
open mgmt-plane.yaml
OpenShift:
curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/mgmt-server-openshift.yaml > mgmt-plane.yaml
open mgmt-plane.yaml
Note: In OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more information, see this article.
Decide how you want to secure the relay connection between the Gloo management server and agents. In test and POC environments, you can use self-signed certificates to secure the connection. If you plan to use Gloo Mesh Enterprise in production, it is recommended to bring your own certificates instead. For more information, see Setup options.
Use your preferred PKI provider to generate your root CA and intermediate CA credentials. You then store the intermediate CA credentials in your cluster so that you can leverage Gloo Mesh Enterprise’s built-in capability to automatically issue and sign client TLS certificates for Gloo agents. For more information, see Bring your own CAs with automatic client TLS certificate rotation.
Create your root CA, intermediate CA, server, and telemetry gateway credentials, as well as a relay identity token, and store them as the following Kubernetes secrets in the gloo-mesh namespace on the management cluster. For an example of creating one of these secrets, see the sketch after this list.
relay-root-tls-secret that holds the root CA certificate
relay-tls-signing-secret that holds the intermediate CA credentials
relay-server-tls-secret that holds the server key, TLS certificate and certificate chain information for the Gloo management server
relay-identity-token-secret that holds the relay identity token value
telemetry-root-secret that holds the root CA certificate for the certificate chain
gloo-telemetry-gateway-tls-secret that holds the key, TLS certificate and certificate chain for the Gloo telemetry gateway
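For example, a minimal sketch of storing the root CA certificate, assuming your PKI provider gave you a PEM-encoded file named root-ca.crt and that the secret stores the certificate under the ca.crt key:
# Store the root CA certificate in the management cluster (root-ca.crt is a placeholder file name).
kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT \
  --from-file ca.crt=root-ca.crt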
In your Helm values file for the management server, add the following values. A combined sketch follows this list.
glooMgmtServer.relay.disableCaCertGeneration
Disable the generation of self-signed certificates to secure the relay connection between the Gloo management server and agent.
glooMgmtServer.relay.signingTlsSecret
Add the name and namespace of the Kubernetes secret that holds the intermediate CA credentials that you created earlier.
glooMgmtServer.relay.tlsSecret
Add the name and namespace of the Kubernetes secret that holds the server TLS certificate for the Gloo management server that you created earlier.
glooMgmtServer.relay.tokenSecret
Add the name, namespace, and key of the Kubernetes secret that holds the relay identity token that you created earlier.
telemetryGateway.extraVolumes
Add the gloo-telemetry-gateway-tls-secret-custom Kubernetes secret that you created earlier to the tls-keys volume. Make sure that you also add the other volumes to your telemetry gateway configuration.
telemetryCollector.extraVolumes
Add the telemetry-root-secret Kubernetes secret that you created earlier to the root-ca volume. Make sure that you also add the other volumes to your telemetry collector configuration.
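Pieced together, the relay section of your management plane Helm values might look like the following sketch. The secret names match the ones that you created earlier, the token key is assumed to be token, and the telemetry gateway and collector volumes are omitted for brevity:
glooMgmtServer:
  relay:
    # Use your own intermediate CA instead of a generated self-signed CA.
    disableCaCertGeneration: true
    signingTlsSecret:
      name: relay-tls-signing-secret
      namespace: gloo-mesh
    tlsSecret:
      name: relay-server-tls-secret
      namespace: gloo-mesh
    tokenSecret:
      name: relay-identity-token-secret
      namespace: gloo-mesh
      key: token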
Use your preferred PKI provider to create the root CA, server TLS, client TLS, and telemetry gateway certificates. Then, store these certificates in the cluster so that the Gloo agent uses a client TLS certificate when establishing the first connection with the Gloo management server. No relay identity tokens are used. With this approach, you cannot use Gloo Mesh Enterprise’s built-in client TLS certificate rotation capabilities. Instead, you must set up your own processes to monitor the expiration of your certificates and to rotate them before they expire. For more information, see Bring your own CAs and client TLS certificates.
Create your root CA, server, client, and telemetry gateway TLS certificates. You can use your preferred PKI provider to do that or follow the steps in Create certificates with OpenSSL to create custom TLS certificates by using OpenSSL. Then, you store the following information in a Kubernetes secret in the gloo-mesh namespace on the management cluster.
relay-server-tls-secret that holds the server key, TLS certificate and certificate chain information for the Gloo management server
gloo-telemetry-gateway-tls-secret that holds the key, TLS certificate and certificate chain for the Gloo telemetry gateway
telemetry-root-secret that holds the root CA certificate for the certificate chain
In your Helm values file for the management server, add the following values. A combined sketch follows this list.
relay.disableCa
Disable the generation of self-signed root and intermediate CA certificates and the use of identity tokens to establish initial trust between the Gloo management server and agent.
relay.disableCaCertGeneration
Disable the generation of self-signed certificates to secure the relay connection between the Gloo management server and agent.
relay.disableTokenGeneration
Disable the generation of relay identity tokens.
relay.tlsSecret
Add the name and namespace of the Kubernetes secret that holds the server TLS certificate for the Gloo management server that you created earlier.
relay.tokenSecret
Set all values to null to instruct the Gloo management server to not use identity tokens to establish initial trust with Gloo agents.
telemetryGateway.extraVolumes
Add the gloo-telemetry-gateway-tls-secret Kubernetes secret that you created earlier to the tls-keys volume. Make sure that you also add the other volumes to your telemetry gateway configuration.
telemetryCollector.extraVolumes
Add the telemetry-root-secret Kubernetes secret that you created earlier to the root-ca volume. Make sure that you also add the other volumes to your telemetry collector configuration.
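Pieced together, and assuming that the relay.* fields from this list are set under the glooMgmtServer section of the chart, your values might look like the following sketch (telemetry gateway and collector volumes omitted for brevity):
glooMgmtServer:
  relay:
    # No Gloo-managed CA, no self-signed certificates, no identity tokens.
    disableCa: true
    disableCaCertGeneration: true
    disableTokenGeneration: true
    tlsSecret:
      name: relay-server-tls-secret
      namespace: gloo-mesh
    # Null out the token secret so that no identity token is used.
    tokenSecret:
      name: null
      namespace: null
      key: null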
Do not use self-signed certs for production. This setup is recommended for testing purposes only.
You can use Gloo to generate self-signed certificates and keys for the root and intermediate CA. These credentials are used to derive the server TLS certificate for the Gloo management server and client TLS certificate for the Gloo agent. Note that this setup is not recommended for production, but can be used for testing purposes. For more information, see Self-signed CAs with automatic client certificate rotation.
In your Helm values file for the management server, add the following values. Note that mTLS is the default mode in Gloo Mesh Enterprise and does not require any additional configuration on the management server.
glooMgmtServer:
  enabled: true
Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings. An example override snippet follows the table.
For more information about the settings you can configure:
See all possible fields for the Helm chart by running helm show values gloo-platform/gloo-platform --version $GLOO_VERSION > all-values.yaml. You can also see these fields in the Helm values documentation.
Field | Description
glooMgmtServer.resources.limits | Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
glooMgmtServer.serviceOverrides | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
glooUi.auth | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
prometheus.enabled | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Best practices for collecting metrics in production.
redis | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
glooMgmtServer.serviceType and telemetryGateway.service.type (OpenShift) | In some OpenShift setups, you might not use load balancer service types. You can set these two deployment service types to ClusterIP, and expose them by using OpenShift routes after installation.
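For example, a sketch that overrides two of these settings, adding the resource limits from the table and disabling the default Prometheus instance in favor of your own:
glooMgmtServer:
  resources:
    limits:
      cpu: 1000m
      memory: 1Gi
prometheus:
  enabled: false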
Use the customizations in your Helm values file to install the Gloo management plane components in your management cluster.
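The exact commands depend on your Helm setup. A minimal sketch, assuming the gloo-platform Helm repository and the mgmt-plane.yaml values file that you prepared earlier:
# Add the Gloo Platform Helm repository.
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update
# Install the Gloo custom resource definitions.
helm upgrade --install gloo-platform-crds gloo-platform/gloo-platform-crds \
  --namespace=gloo-mesh \
  --create-namespace \
  --version=$GLOO_VERSION \
  --kube-context $MGMT_CONTEXT
# Install the management plane components with your custom values.
helm upgrade --install gloo-platform gloo-platform/gloo-platform \
  --namespace=gloo-mesh \
  --version=$GLOO_VERSION \
  --values mgmt-plane.yaml \
  --set common.cluster=$MGMT_CLUSTER \
  --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY \
  --kube-context $MGMT_CONTEXT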
To install Gloo Mesh Enterprise with an additional Gloo Mesh Gateway license so that you can apply Gloo policies to the ingress gateway, add the --set licensing.glooGatewayLicenseKey=$GLOO_MESH_GATEWAY_LICENSE_KEY option to the helm upgrade command.
Verify that the management plane pods have a status of Running.
kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
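A sketch of saving the address, assuming your load balancer exposes an IP address (use .hostname instead of .ip in the jsonpath if your provider assigns a hostname):
export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
echo $TELEMETRY_GATEWAY_ADDRESS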
Save the telemetry gateway’s address, which consists of the route host and the route port. Note that the port is the route’s own port, such as 443, and not the otlp port of the telemetry gateway that the route points to.
Save the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server service. The gloo-mesh-agent agent in each workload cluster accesses this address via a secure connection.
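A sketch of saving the address, where the wildcard jsonpath picks up either the IP address or the hostname that your provider assigns:
export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].*}')
export MGMT_SERVER_NETWORKING_PORT=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
echo $MGMT_SERVER_NETWORKING_ADDRESS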
Expose the management server by using an OpenShift route. Your route might look like the following.
oc apply --context $MGMT_CONTEXT -n gloo-mesh -f- <<EOF
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  name: gloo-mesh-mgmt-server
  namespace: gloo-mesh
  annotations:
    # Needed for the different agents to connect to different replica instances of the management server deployment
    haproxy.router.openshift.io/balance: roundrobin
spec:
  host: gloo-mesh-mgmt-server.<your_domain>.com
  to:
    kind: Service
    name: gloo-mesh-mgmt-server
    weight: 100
  port:
    targetPort: grpc
  tls:
    termination: passthrough
    insecureEdgeTerminationPolicy: Redirect
  wildcardPolicy: None
EOF
Save the management server’s address, which consists of the route host and the route port. Note that the port is the route’s own port, such as 443, and not the grpc port of the management server that the route points to.
Create a workspace that selects all clusters and namespaces by default, and workspace settings that enable communication across clusters. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a global workspace that imports and exports all resources and namespaces, and a workspace settings resource in the gloo-mesh-config namespace. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting.
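A sketch of such a global workspace and its settings follows. The resource names and the gloo-mesh-config namespace are conventions that you can adapt, and the east-west gateway selector assumes gateways labeled istio: eastwestgateway:
kubectl create namespace gloo-mesh-config --context $MGMT_CONTEXT
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  name: $MGMT_CLUSTER
  namespace: gloo-mesh
spec:
  workloadClusters:
    # Select all clusters and all namespaces.
    - name: '*'
      namespaces:
        - name: '*'
EOF
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
  name: $MGMT_CLUSTER
  namespace: gloo-mesh-config
spec:
  options:
    serviceIsolation:
      enabled: false
    federation:
      enabled: false
      serviceSelector:
        - {}
    eastWestGateways:
      - selector:
          labels:
            istio: eastwestgateway
EOF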
Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named gloo-mesh-agent runs the Gloo agent in each workload cluster.
For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you follow these steps to register another workload cluster.
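For example, for the first workload cluster (the REMOTE_CLUSTER and REMOTE_CONTEXT names are the ones used in the sketches that follow):
export REMOTE_CLUSTER=$REMOTE_CLUSTER1
export REMOTE_CONTEXT=$REMOTE_CONTEXT1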
In the management cluster, create a KubernetesCluster resource to represent the workload cluster and store relevant data, such as the workload cluster’s local domain.
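A minimal sketch, assuming the default cluster.local cluster domain:
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: KubernetesCluster
metadata:
  name: $REMOTE_CLUSTER
  namespace: gloo-mesh
spec:
  clusterDomain: cluster.local
EOF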
In your workload cluster, apply the Gloo CRDs. Note: If you plan to manually deploy and manage your Istio installation in workload clusters rather than using Solo’s Istio lifecycle manager, include the --set installIstioOperator=false flag to ensure that the Istio operator CRD is not managed by this Gloo CRD Helm release.
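A sketch of applying the CRDs with Helm, assuming the gloo-platform repository that you added earlier:
helm upgrade --install gloo-platform-crds gloo-platform/gloo-platform-crds \
  --namespace=gloo-mesh \
  --create-namespace \
  --version=$GLOO_VERSION \
  --kube-context $REMOTE_CONTEXT
# Append --set installIstioOperator=false if you manage Istio yourself.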
Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo data plane installation.
curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/agent.yaml > data-plane.yaml
open data-plane.yaml
OpenShift:
curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/agent-openshift.yaml > data-plane.yaml
open data-plane.yaml
Note: In OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift PodSecurity "restricted:v1.24" profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more information, see this article.
Depending on the method you chose to secure the relay connection, prepare the Helm values for the data plane installation. For more information, see Setup options.
Make sure that the following Kubernetes secrets exist in the gloo-mesh namespace on each workload cluster.
relay-root-tls-secret that holds the root CA certificate
relay-identity-token-secret that holds the relay identity token value
telemetry-root-secret that holds the root CA certificate for the certificate chain
In your Helm values file for the agent, add the following values. A combined sketch follows this list.
glooAgent.relay.rootTlsSecret
Add the name and namespace of the Kubernetes secret that holds the root CA credentials that you copied from the management cluster to the workload cluster.
glooAgent.relay.tokenSecret
Add the name, namespace, and key of the Kubernetes secret that holds the relay identity token that you copied from the management cluster to the workload cluster.
telemetryCollector.extraVolumes
Add the telemetry-root-secret Kubernetes secret that you created earlier to the root-ca volume. Make sure that you also add the other volumes to your telemetry collector configuration.
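Pieced together, the relay section of your agent Helm values might look like the following sketch (the token key is assumed to be token; telemetry collector volumes omitted for brevity):
glooAgent:
  relay:
    rootTlsSecret:
      name: relay-root-tls-secret
      namespace: gloo-mesh
    tokenSecret:
      name: relay-identity-token-secret
      namespace: gloo-mesh
      key: token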
Make sure that the following Kubernetes secrets exist in the gloo-mesh namespace on each workload cluster.
$REMOTE_CLUSTER1-tls-cert that holds the key, TLS certificate, and certificate chain information for the Gloo agent
telemetry-root-secret that holds the root CA certificate for the certificate chain
You can use the steps in Create certificates with OpenSSL as guidance for how to create these secrets in the workload cluster.
In your Helm values file for the agent, add the following values. A combined sketch follows this list. Replace <client-tls-secret-name> with the name of the Kubernetes secret that holds the client TLS certificate, and add the name of the Kubernetes secret that holds the telemetry gateway certificate to the root-ca telemetry collector volume.
glooAgent.relay.clientTlsSecret
Add the name and namespace of the Kubernetes secret that holds the client TLS certificate for the Gloo agent that you created earlier.
glooAgent.relay.tokenSecret
Set all values to null to instruct the Gloo agent not to use identity tokens to establish initial trust with the Gloo management server.
telemetryCollector.extraVolumes
Add the name of the Kubernetes secret that holds the Gloo telemetry gateway certificate that you created earlier to the root-ca volume. Make sure that you also add the configMap and hostPath volumes to your configuration.
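Pieced together, the relay section of your agent Helm values might look like the following sketch (telemetry collector volumes omitted for brevity):
glooAgent:
  relay:
    clientTlsSecret:
      name: <client-tls-secret-name>
      namespace: gloo-mesh
    # Null out the token secret so that no identity token is used.
    tokenSecret:
      name: null
      namespace: null
      key: null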
Do not use self-signed certs for production. This setup is recommended for testing purposes only.
Get the value of the root CA certificate from the management cluster and create a secret in the workload cluster.
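A sketch of copying the certificate, assuming the secret stores it under the ca.crt key:
# Extract the root CA certificate from the management cluster.
kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
# Create the matching secret in the workload cluster.
kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT \
  --from-file ca.crt=ca.crt
rm ca.crt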
See all possible fields for the Helm chart by running helm show values gloo-platform/gloo-platform --version $GLOO_VERSION > all-values.yaml. You can also see these fields in the Helm values documentation.
Field | Description
glooAgent.resources.limits | Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
extAuthService.enabled | Set to true to install the external auth server add-on.
rateLimiter.enabled | Set to true to install the rate limit server add-on.
Use the customizations in your Helm values file to install the Gloo data plane components in your workload cluster.
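A minimal sketch, assuming the data-plane.yaml values file that you prepared earlier and the management server and telemetry gateway addresses that you saved when you installed the management plane:
helm upgrade --install gloo-platform gloo-platform/gloo-platform \
  --namespace=gloo-mesh \
  --version=$GLOO_VERSION \
  --values data-plane.yaml \
  --set common.cluster=$REMOTE_CLUSTER \
  --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
  --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS \
  --kube-context $REMOTE_CONTEXT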
Repeat steps 1 - 8 to register each workload cluster with Gloo.
Verify that your Gloo Mesh Enterprise setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:
Your Gloo product licenses are valid and current.
The Gloo CRDs are installed at the correct version.
The management plane pods in the management cluster are running and healthy.
The agents in the workload clusters are successfully identified by the management server.
meshctl check --kubecontext $MGMT_CONTEXT
Example output:
🟢 License status
INFO gloo-mesh enterprise license expiration is 25 Aug 24 10:38 CDT
🟢 CRD version check
🟢 Gloo deployment status
Namespace | Name | Ready | Status
gloo-mesh | gloo-mesh-mgmt-server | 1/1 | Healthy
gloo-mesh | gloo-mesh-redis | 1/1 | Healthy
gloo-mesh | gloo-mesh-ui | 1/1 | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3 | Healthy
gloo-mesh | gloo-telemetry-gateway | 1/1 | Healthy
gloo-mesh | prometheus-server | 1/1 | Healthy
🟢 Mgmt server connectivity to workload agents
Cluster | Registered | Connected Pod
cluster1 | true | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
cluster2 | true | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
Connected Pod | Clusters
gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2
Optional: Configure the locality labels for the nodes
Gloo Mesh Enterprise uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.
Cloud: Typically, your cloud provider sets the Kubernetes region and zone labels for each node automatically. Depending on the level of availability that you want, you might have clusters in the same region, but different zones. Or, each cluster might be in a different region, with nodes spread across zones.
On-premises: Depending on how you set up your cluster, you likely must set the region and zone labels for each node yourself. Additionally, consider setting a subzone label to specify nodes on the same rack or other more granular setups.
Verify that your nodes have at least region and zone labels.
kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'
If your nodes have at least region and zone labels, and you do not want to update the labels, you can skip the remaining steps.
If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same region label to each node, but a separate zone label per node. The values are not validated against your underlying infrastructure provider. The following steps show how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.
Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the --overwrite flag in the command.
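For example, with hypothetical region values for two clusters in different regions:
kubectl label nodes --all topology.kubernetes.io/region=us-east --context $REMOTE_CONTEXT1
kubectl label nodes --all topology.kubernetes.io/region=us-west --context $REMOTE_CONTEXT2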
Now that you have Gloo Mesh Enterprise up and running, check out some of the following resources to learn more about Gloo Mesh and expand your service mesh capabilities.
Gloo Mesh Enterprise:
Apply Gloo policies to manage the security and resiliency of your service mesh environment.