Install with Helm
Use Helm to customize the settings of your multicluster Gloo Mesh Enterprise installation.
Gloo Mesh Enterprise is a multicluster and multimesh management plane that is based on hardened, open-source projects like Envoy and Istio. With Gloo Mesh, you can unify the configuration, operation, and visibility of service-to-service connectivity across your distributed applications. These apps can run in different virtual machines (VMs) or Kubernetes clusters on premises or in various cloud providers, and even in different service meshes.
You can follow this guide to customize settings for an advanced Gloo Mesh Enterprise installation. To learn more about the benefits and architecture, see About.
Before you begin
Install the following command-line (CLI) tools.
- `kubectl`, the Kubernetes command line tool. Download the `kubectl` version that is within one minor version of the Kubernetes clusters you plan to use.
- `meshctl`, the Solo command line tool.
  ```sh
  curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.6.6 sh -
  export PATH=$HOME/.gloo-mesh/bin:$PATH
  ```
- `helm`, the Kubernetes package manager.
Set your Gloo Mesh Enterprise license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license's validity, you can run `meshctl license check --key $(echo ${GLOO_MESH_LICENSE_KEY} | base64 -w0)`.
```sh
export GLOO_MESH_LICENSE_KEY=<license_key>
```
If you plan to deploy an ingress gateway to manage ingress traffic to mesh workloads and you want to apply policies, such as rate limits, external authentication, or Web Application Firewalls to that gateway, you must also have a Gloo Mesh Gateway license. Without a Gloo Mesh Gateway license, you can only set up simple routing rules to match and forward traffic to mesh workloads. Save the Gloo Mesh Gateway license key as an additional environment variable.
```sh
export GLOO_MESH_GATEWAY_LICENSE_KEY=<license-key>
```
Set the Gloo Mesh Enterprise version. This example uses the latest version. You can find other versions in the Changelog documentation. Append `-fips` for a FIPS-compliant image, such as `2.6.6-fips`. Do not include `v` before the version number.
```sh
export GLOO_VERSION=2.6.6
```
Create or use at least two existing Kubernetes clusters. The instructions in this guide assume one management cluster and two workload clusters.
- Cluster names must be lowercase and alphanumeric, can contain hyphens (-) but no other special characters, and must begin with a letter (not a number).
- Production installations: Review Best practices for production to prepare your optional security measures. For example, before you begin your Gloo installation, you can provide your own certificates to secure the management server and agent connection, and set up secure access to the Gloo UI.
Install Gloo Mesh Enterprise
In a multicluster setup, you deploy the Gloo management plane into a dedicated management cluster, and the Gloo data plane into one or more workload clusters that run Istio service meshes.
Management plane
Deploy the Gloo management plane into a dedicated management cluster.
Save the name and kubeconfig context for your management cluster in environment variables.
```sh
export MGMT_CLUSTER=<management-cluster-name>
export MGMT_CONTEXT=<management-cluster-context>
```
Add and update the Helm repository for Gloo.
```sh
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update
```
Install the Gloo CRDs.
```sh
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
  --namespace=gloo-mesh \
  --create-namespace \
  --version=$GLOO_VERSION \
  --kube-context $MGMT_CONTEXT
```
Prepare a Helm values file to provide your customizations. To get started, you can use minimum settings such as those in the following profile as a basis. These settings enable all components that are required for a Gloo management plane installation.
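The upstream minimum profile is not reproduced here. As a rough starting point, a management plane values file might look like the following sketch; the file name `mgmt-plane.yaml` is an assumption reused in later commands, and you should verify each field against the chart's values.
```yaml
# mgmt-plane.yaml (example name): minimal management plane settings.
# Sketch only; verify fields with `helm show values gloo-platform/gloo-platform`.
common:
  # Name of the cluster that the management plane runs in
  cluster: mgmt-cluster
glooMgmtServer:
  enabled: true
  # Expose the management server so that workload cluster agents can connect
  serviceType: LoadBalancer
glooUi:
  enabled: true
prometheus:
  # Built-in Prometheus for metrics; disable to bring your own
  enabled: true
redis:
  deployment:
    # Built-in Redis; disable to bring your own backing database
    enabled: true
telemetryGateway:
  enabled: true
  service:
    type: LoadBalancer
```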
Decide how you want to secure the relay connection between the Gloo management server and agents. In test and POC environments, you can use self-signed certificates to secure the connection. If you plan to use Gloo Mesh Enterprise in production, it is recommended to bring your own certificates instead. For more information, see Setup options.
Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.
For more information about the settings you can configure:
- See Best practices for production.
- See all possible fields for the Helm chart by running `helm show values gloo-platform/gloo-platform --version v2.6.6 > all-values.yaml`. You can also see these fields in the Helm values documentation.
| Field | Description |
|-------|-------------|
| `glooInsightsEngine.enabled` | Enable the Gloo insights engine, which is recommended to help you improve the security and observability of your environment by creating actionable Istio insights. |
| `glooMgmtServer.resources.limits` | Add resource limits for the `gloo-mesh-mgmt-server` pod, such as `cpu: 1000m` and `memory: 1Gi`. |
| `glooMgmtServer.safeMode` and `glooMgmtServer.safeStartWindow` | Configure how you want the Gloo management server to handle translation after a Redis restart. For available options, see Redis safe mode options. |
| `glooMgmtServer.serviceOverrides.metadata.annotations` | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides. |
| `glooUi.auth` | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication. |
| `prometheus.enabled` | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Best practices for collecting metrics in production. |
| `redis` | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases. |
| OpenShift: `glooMgmtServer.serviceType` and `telemetryGateway.service.type` | In some OpenShift setups, you might not use load balancer service types. You can set these two deployment service types to `ClusterIP`, and expose them by using OpenShift routes after installation. |

Use the customizations in your Helm values file to install the Gloo management plane components in your management cluster.
To install Gloo Mesh Enterprise with an additional Gloo Mesh Gateway license so that you can apply Gloo policies to the ingress gateway, add the `--set licensing.glooGatewayLicenseKey=$GLOO_MESH_GATEWAY_LICENSE_KEY` option to the `helm upgrade` command.

Verify that the management plane pods have a status of `Running`.
```sh
kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
```
Example output:
```
NAME                                      READY   STATUS    RESTARTS   AGE
gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
```
Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
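One way to capture the address is shown in the following sketch, which assumes the service exposes a load balancer IP (on providers such as AWS, use `.hostname` instead) and names its OTLP port `otlp`:
```sh
# Sketch: capture the OTel gateway's external address and port.
export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway \
  --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway \
  --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
echo $TELEMETRY_GATEWAY_ADDRESS
```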
Save the external address and port that your cloud provider assigned to the `gloo-mesh-mgmt-server` service. The `gloo-mesh-agent` in each workload cluster accesses this address via a secure connection.
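A sketch for capturing the address, assuming a load balancer IP (use `.hostname` on providers such as AWS) and a relay port named `grpc`:
```sh
# Sketch: capture the management server's external address and relay port.
export MGMT_SERVER_NETWORKING_IP=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server \
  --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export MGMT_SERVER_NETWORKING_PORT=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server \
  --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_IP}:${MGMT_SERVER_NETWORKING_PORT}
echo $MGMT_SERVER_NETWORKING_ADDRESS
```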
Create a workspace that selects all clusters and namespaces by default, and workspace settings that enable communication across clusters. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a global workspace that imports and exports all resources and namespaces, and a workspace settings resource in the `gloo-mesh-config` namespace. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting.
```sh
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
name: $MGMT_CLUSTER
namespace: gloo-mesh
spec:
workloadClusters:
- name: '*'
namespaces:
- name: '*'
---
apiVersion: v1
kind: Namespace
metadata:
name: gloo-mesh-config
---
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
name: $MGMT_CLUSTER
namespace: gloo-mesh-config
spec:
options:
serviceIsolation:
enabled: false
federation:
enabled: false
serviceSelector:
- {}
eastWestGateways:
- selector:
labels:
istio: eastwestgateway
EOF
```
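Optionally, confirm that the resources were created. This check is a convenience, and the plural resource names are assumptions based on the CRD kinds:
```sh
# Sketch: confirm the workspace and workspace settings exist.
kubectl get workspaces -n gloo-mesh --context $MGMT_CONTEXT
kubectl get workspacesettings -n gloo-mesh-config --context $MGMT_CONTEXT
```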
Data plane
Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named `gloo-mesh-agent` runs the Gloo agent in each workload cluster.
For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you follow these steps to register another workload cluster.
```sh
export REMOTE_CLUSTER=<workload_cluster_name>
export REMOTE_CONTEXT=<workload_cluster_context>
```
In the management cluster, create a `KubernetesCluster` resource to represent the workload cluster and store relevant data, such as the workload cluster's local domain.
```sh
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: KubernetesCluster
metadata:
  name: ${REMOTE_CLUSTER}
  namespace: gloo-mesh
spec:
  clusterDomain: cluster.local
EOF
```
In your workload cluster, apply the Gloo CRDs. Note: If you plan to manually deploy and manage your Istio installation in workload clusters rather than using Solo's Istio lifecycle manager, include the `--set installIstioOperator=false` flag to ensure that the Istio operator CRD is not managed by this Gloo CRD Helm release.
```sh
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
  --namespace=gloo-mesh \
  --create-namespace \
  --version=$GLOO_VERSION \
  --kube-context $REMOTE_CONTEXT
```
Prepare a Helm values file to provide your customizations. To get started, you can use minimum settings such as those in the following profile as a basis. These settings enable all components that are required for a Gloo data plane installation.
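As a rough starting point, a data plane values file might look like the following sketch. The file name `data-plane.yaml` and the relay and telemetry fields are assumptions to verify against the chart's values; the placeholders refer to the addresses you saved during the management plane setup.
```yaml
# data-plane.yaml (example name): minimal data plane settings.
# Sketch only; verify fields with `helm show values gloo-platform/gloo-platform`.
common:
  # Name of this workload cluster, matching its KubernetesCluster resource
  cluster: cluster1
glooAgent:
  enabled: true
  relay:
    # External address of the gloo-mesh-mgmt-server service saved earlier,
    # for example 203.0.113.10:9900 (or pass it with --set at install time)
    serverAddress: <mgmt-server-address>
telemetryCollector:
  enabled: true
  config:
    exporters:
      otlp:
        # External address of the Gloo OTel gateway saved earlier,
        # for example 203.0.113.11:4317
        endpoint: <telemetry-gateway-address>
```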
Depending on the method you chose to secure the relay connection, prepare the Helm values for the data plane installation. For more information, see the Setup options.
Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.
For more information about the settings you can configure:
- See Best practices for production.
- See all possible fields for the Helm chart by running `helm show values gloo-platform/gloo-platform --version v2.6.6 > all-values.yaml`. You can also see these fields in the Helm values documentation.
| Field | Description |
|-------|-------------|
| `glooAgent.resources.limits` | Add resource limits for the `gloo-mesh-agent` pod, such as `cpu: 500m` and `memory: 512Mi`. |
| `glooAnalyzer.enabled` | Enable the Gloo insights analyzer, which is recommended to help you improve the security and observability of your environment by creating actionable Istio insights. |
| `extAuthService.enabled` | Set to `true` to install the external auth server add-on. |
| `rateLimiter.enabled` | Set to `true` to install the rate limit server add-on. |

Use the customizations in your Helm values file to install the Gloo data plane components in your workload cluster.
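As a sketch, assuming the `data-plane.yaml` values file from the earlier example, with the saved addresses passed in as overrides:
```sh
# Sketch: install the data plane with your customized values file.
helm upgrade -i gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --version $GLOO_VERSION \
  --values data-plane.yaml \
  --set common.cluster=$REMOTE_CLUSTER \
  --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
  --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS \
  --kube-context $REMOTE_CONTEXT
```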
Verify that the Gloo data plane component pods are running. If not, try debugging the agent.
```sh
meshctl check --kubecontext $REMOTE_CONTEXT
```
Example output:
```
🟢 Gloo deployment status

Namespace | Name                           | Ready | Status
gloo-mesh | ext-auth-service               | 1/1   | Healthy
gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
gloo-mesh | rate-limiter                   | 1/1   | Healthy
```
Repeat steps 1 - 8 to register each workload cluster with Gloo.
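For example, before repeating the steps for a second cluster, you might update the variables; the names are placeholders:
```sh
# Sketch: point the variables at the next workload cluster, then repeat
# the registration steps (KubernetesCluster, CRDs, Helm install, check).
export REMOTE_CLUSTER=<second_workload_cluster_name>
export REMOTE_CONTEXT=<second_workload_cluster_context>
```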
Verify that your Gloo Mesh Enterprise setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:
- Your Gloo product licenses are valid and current.
- The Gloo CRDs are installed at the correct version.
- The management plane pods in the management cluster are running and healthy.
- The agents in the workload clusters are successfully identified by the management server.
```sh
meshctl check --kubecontext $MGMT_CONTEXT
```
Example output:
```
🟢 License status

 INFO  gloo-mesh enterprise license expiration is 25 Aug 24 10:38 CDT

🟢 CRD version check

🟢 Gloo deployment status

Namespace | Name                           | Ready | Status
gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
gloo-mesh | prometheus-server              | 1/1   | Healthy

🟢 Mgmt server connectivity to workload agents

Cluster  | Registered | Connected Pod
cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6

Connected Pod                                    | Clusters
gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2
```
Istio
Deploy Istio on each workload cluster.
- To deploy managed Istio installations, see Install Istio by using the Istio Lifecycle Manager.
- For Amazon EKS clusters, you can use the Solo distribution of Istio add-on to install Istio.
- To instead manage Istio yourself, see Manually deploy Istio.
Optional: Configure the locality labels for the nodes
Gloo Mesh Enterprise uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.
- Cloud: Typically, your cloud provider sets the Kubernetes `region` and `zone` labels for each node automatically. Depending on the level of availability that you want, you might have clusters in the same region, but different zones. Or, each cluster might be in a different region, with nodes spread across zones.
- On-premises: Depending on how you set up your cluster, you likely must set the `region` and `zone` labels for each node yourself. Additionally, consider setting a `subzone` label to specify nodes on the same rack or other more granular setups.
Verify that your nodes have at least `region` and `zone` labels.
```sh
kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'
```
Example output with `region` and `zone` labels:
```
..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"
```
- If your nodes have at least `region` and `zone` labels, and you do not want to update the labels, you can skip the remaining steps.
- If your nodes do not already have `region` and `zone` labels, you must add the labels. Depending on your cluster setup, you might add the same `region` label to each node, but a separate `zone` label per node. The values are not validated against your underlying infrastructure provider. The following steps show how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.
Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the `--overwrite` flag in the command.
```sh
kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west
```
List the nodes in each cluster. Note the name for each node.
```sh
kubectl get nodes --context $REMOTE_CONTEXT1
kubectl get nodes --context $REMOTE_CONTEXT2
```
Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the `--overwrite` flag in the command.
```sh
kubectl label node <cluster1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
kubectl label node <cluster1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
kubectl label node <cluster1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3
kubectl label node <cluster2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
kubectl label node <cluster2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
kubectl label node <cluster2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
```
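To confirm the result, one option is a `custom-columns` listing like the following sketch (the column names are illustrative):
```sh
# List each node with its region and zone labels.
# Dots inside label keys are escaped with backslashes.
kubectl get nodes --context $REMOTE_CONTEXT1 -o custom-columns='NAME:.metadata.name,REGION:.metadata.labels.topology\.kubernetes\.io/region,ZONE:.metadata.labels.topology\.kubernetes\.io/zone'
```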
Next steps
Now that you have Gloo Mesh Enterprise up and running, check out some of the following resources to learn more about Gloo Mesh and expand your service mesh capabilities.
Gloo Mesh Enterprise:
- Enable insights to review and improve your setup’s health and security posture.
- Apply Gloo policies to manage the security and resiliency of your service mesh environment.
- Organize team resources with workspaces.
- When it’s time to upgrade Gloo Mesh Enterprise, see the upgrade guide.
Istio:
- Find out more about hardened Istio `n-4` version support built into Solo distributions of Istio.
- Review how Gloo Mesh Enterprise custom resources are automatically translated into Istio resources.
- Monitor and observe your Istio environment with Gloo Mesh Enterprise’s built-in telemetry tools.
- When it’s time to upgrade Istio, use Gloo Mesh Enterprise to upgrade managed Istio installations.
Help and support:
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community Slack.
- Try out one of the Gloo workshops.