Installation options
Learn about your options for installing Gloo Mesh in your environment.
Choose whether you want to deploy Gloo Mesh in one cluster or across multiple clusters.
Single cluster
Gloo Mesh is fully functional when the management plane (management server) and data plane (agent and service mesh) both run within the same cluster. You can easily install both the management plane and data plane components by using one installation process. If you choose to install the components in separate processes, ensure that you use the same cluster name in both processes.
Multicluster
A multicluster Gloo Mesh setup consists of one management cluster that you install the Gloo management plane (management server) in, and one or more workload clusters that serve as the data plane (agent and service mesh). By running the management plane in a dedicated management cluster, you can ensure that no workload pods consume cluster resources that might impede management processes. Many guides throughout the documentation use one management cluster and two workload clusters as an example setup.
Sidecar deployment options
You can deploy some Gloo components as either standalone pods or as sidecar containers to other component pods. Deploying components as sidecars can help reduce the amount of compute resources required to run Gloo Mesh.
The following components can be deployed either as standalone pods or as sidecars. For more information about the installed components, review the Gloo Mesh architecture.
Component deployed as a sidecar | Main component pod | Installation setting |
---|---|---|
Gloo agent | Gloo management server | glooAgent.runAsSidecar: true. Note that the agent is available as a sidecar only in single-cluster environments. |
Gloo insights engine | Gloo management server | glooInsightsEngine.runAsSidecar: true |
Gloo analyzer | | glooAnalyzer.runAsSidecar: true |
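For example, the settings in the table map to Helm values that you can pass at install or upgrade time. The following is a minimal sketch that assumes a single-cluster Helm installation of the gloo-platform chart; the release name and gloo-mesh namespace are assumptions, so adjust them for your setup.

```shell
# Sketch: enable sidecar deployment for the agent, insights engine, and
# analyzer. The agent sidecar applies only to single-cluster setups.
# Release name and namespace are assumptions for this example.
helm upgrade --install gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --set glooAgent.runAsSidecar=true \
  --set glooInsightsEngine.runAsSidecar=true \
  --set glooAnalyzer.runAsSidecar=true
```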
Installation methods
After you decide on a single or multicluster environment, choose whether to use the meshctl CLI or Helm charts to install Gloo Mesh.
CLI install profiles
Gloo packages profiles in the meshctl CLI for quick Gloo Mesh installations. Profiles provide basic Helm settings for a minimum installation, and are suitable for testing setups. Because the profiles provide standard setups, they can also be useful starting points for building a customized and robust set of Helm installation values.
In your meshctl install and meshctl cluster register commands, you can specify one or more profiles in the --profile flag. Multiple profiles can be applied in a comma-delimited list, in which merge priority is left to right. Note that any values that you specify in the --set or --gloo-mesh-agent-chart-values flags have the highest merge priority.
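To illustrate the merge order, the following sketch applies a profile and then overrides one of its values with --set, which takes the highest priority. The specific override value is an assumption for this example.

```shell
# Sketch: the profile supplies the base Helm values, and --set overrides
# any matching value from the profile. The override shown is illustrative.
meshctl install \
  --profile gloo-core-single-cluster \
  --set glooInsightsEngine.runAsSidecar=true
```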
The following profiles are supported. You can review the Helm settings in a profile by running curl https://storage.googleapis.com/gloo-platform/helm-profiles/2.5.13/<profile>.yaml > profile-values.yaml.
Profile | Use case | Deployed components |
---|---|---|
gloo-core-single-cluster | Install all Gloo Mesh components into a single-cluster Kubernetes setup. | Gloo management server, Gloo UI, Gloo insights engine, Gloo agent, Gloo analyzer, Gloo OpenTelemetry (OTel) collector agents, Prometheus, Redis |
gloo-core-mgmt | In a multicluster Kubernetes setup, install the Gloo management plane in a dedicated cluster. | Gloo management server, Gloo UI, Gloo insights engine, Gloo OTel gateway, Prometheus, Redis |
gloo-core-agent | In a multicluster Kubernetes setup, register a workload cluster that runs an Istio service mesh with the management plane. | Gloo agent, Gloo analyzer, Gloo OTel collector agents |
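As a sketch, a multicluster setup uses the management and agent profiles from the table in separate commands. The context and cluster name variables, and the common.cluster setting, are assumptions for this example; defer to the installation guides for the exact flags.

```shell
# Sketch: install the management plane in a dedicated cluster, then
# register a workload cluster. $MGMT_CONTEXT, $REMOTE_CONTEXT, and
# $REMOTE_CLUSTER are placeholders for your environment.
meshctl install --kubecontext $MGMT_CONTEXT \
  --profile gloo-core-mgmt

meshctl cluster register --kubecontext $MGMT_CONTEXT \
  --remote-context $REMOTE_CONTEXT \
  --profile gloo-core-agent \
  $REMOTE_CLUSTER
```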
Helm charts
To extensively customize the settings of your Gloo Mesh installation, you can use the gloo-platform and gloo-platform-crds Helm charts.
Installation Helm chart
All components for a full Gloo Mesh installation are available in the gloo-platform Helm chart.
Helm installations allow for extensive customization of Gloo settings, and are suitable for proof-of-concept or production setups. Within the gloo-platform chart, you can find the configuration options for all components in the following sections.
Component section | Description |
---|---|
clickhouse | Configuration for the ClickHouse deployment, which stores logs from Gloo telemetry collector agents. See the Bitnami ClickHouse Helm chart for the complete set of values. |
common | Common values shared across components. When applicable, these can be overridden in specific components. |
demo | Demo-specific features that simplify quick test setups. Do not use in production. |
experimental | Deprecated: Use featureGates fields instead. |
extAuthService | Configuration for the Gloo external authentication service. |
featureGates | Experimental features for Gloo. Disabled by default. |
glooAgent | Configuration for the Gloo agent. |
glooAnalyzer | Configuration for the Gloo analyzer, which gathers data on Gloo and Istio components. |
glooInsightsEngine | Configuration for the Gloo insights engine, which creates Solo insights. |
glooMgmtServer | Configuration for the Gloo management server. |
glooNetwork | Gloo Network agent configuration options. |
glooPortalServer | Configuration for the Gloo Portal server deployment. |
glooSpireServer | Configuration for the Gloo Spire server deployment. |
glooUi | Configuration for the Gloo UI. |
istioInstallations | Configuration for deploying managed Istio control plane and gateway installations by using the Istio lifecycle manager. The istioInstallations Helm settings can be helpful for simple use cases to set up Istio quickly, such as single cluster Gloo Mesh Gateway demos. Otherwise, install Istio by using the IstioLifecycleManager and GatewayLifecycleManager custom resources. |
jaeger | Configuration for the Gloo Jaeger instance. |
licensing | Gloo product licenses. |
postgresql | Configuration for Gloo PostgreSQL instance. |
prometheus | Helm values for configuring Prometheus. See the Prometheus Helm chart for the complete set of values. |
rateLimiter | Configuration for the Gloo rate limiting service. |
redis | Configuration for the default Redis instance. |
telemetryCollector | Configuration for the Gloo telemetry collector agents. See the OpenTelemetry Helm chart for the complete set of values. |
telemetryCollectorCustomization | Optional customization for the Gloo telemetry collector agents. |
telemetryGateway | Configuration for the Gloo telemetry gateway. See the OpenTelemetry Helm chart for the complete set of values. |
telemetryGatewayCustomization | Optional customization for the Gloo telemetry gateway. |
To see all fields that you can set for the chart, run the following command.
helm show values gloo-platform/gloo-platform --version v2.5.13 > all-values.yaml
For more information about each field, see the Helm values documentation. To set up Gloo Mesh with Helm, see the advanced installation guide.
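As a rough sketch, a chart-based installation adds the Helm repository, customizes the values file from the command above, and installs the chart. The repository URL, release name, and namespace here are assumptions, so defer to the advanced installation guide for the exact steps.

```shell
# Sketch: install the gloo-platform chart with a customized values file.
# The repo URL, release name, and namespace are assumptions.
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update
helm install gloo-platform gloo-platform/gloo-platform \
  --version v2.5.13 \
  --namespace gloo-mesh --create-namespace \
  --values all-values.yaml   # your edited copy of the chart's default values
```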
CRD Helm chart
All CRDs that are required for a Gloo Mesh installation are available in the gloo-platform-crds Helm chart.
By default, this Helm chart installs all CRDs that are available in Gloo, including CRDs that you can use only if you have a Gloo Mesh Enterprise or Gloo Mesh Gateway license. To install only the CRDs that are relevant to Gloo Mesh, set installEnterpriseCrds to false. To see all CRD installation options, see the Helm values documentation.
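For example, the setting can be passed at install time. This sketch assumes a release named gloo-platform-crds in the gloo-mesh namespace; adjust for your setup.

```shell
# Sketch: install only the CRDs relevant to Gloo Mesh, skipping the
# Enterprise and Gateway CRDs. Release and namespace names are assumptions.
helm install gloo-platform-crds gloo-platform/gloo-platform-crds \
  --version v2.5.13 \
  --namespace gloo-mesh --create-namespace \
  --set installEnterpriseCrds=false
```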
When you set installEnterpriseCrds to false, the following CRDs are installed:
certificaterequests.internal.gloo.solo.io
dashboards.admin.gloo.solo.io
discoveredcnis.internal.gloo.solo.io
discoveredgateways.internal.gloo.solo.io
gatewaylifecyclemanagers.admin.gloo.solo.io
issuedcertificates.internal.gloo.solo.io
istiolifecyclemanagers.admin.gloo.solo.io
kubernetesclusters.admin.gloo.solo.io
meshes.internal.gloo.solo.io
podbouncedirectives.internal.gloo.solo.io
workspaces.admin.gloo.solo.io
workspacesettings.admin.gloo.solo.io
xdsconfigs.internal.gloo.solo.io
If you already installed the chart, you can run kubectl get crds -A | grep gloo.solo.io to see the installed CRDs.
Supported platforms
You can install Gloo Mesh on Kubernetes or OpenShift clusters. For more information about the requirements for clusters on each platform, see the System requirements.
Kubernetes
Gloo Mesh and Istio are fully supported on Kubernetes clusters. Throughout the installation guides, use installation commands that are labeled for use with Kubernetes.
OpenShift
Gloo Mesh is fully supported on OpenShift clusters. However, there are some changes you must make to allow Gloo Mesh and Istio to run on an OpenShift cluster. To make these changes, use commands throughout the installation guides that are labeled for use with OpenShift. For more information about the required changes, see the Istio on OpenShift documentation.
Gloo settings
Dynamic user ID: The pods of all Gloo component deployments must be assigned a dynamic user ID for the Istio sidecar to use. However, OpenShift does not permit this user ID by default. In the installation guides, follow the OpenShift commands to use OpenShift-specific install profiles or Helm commands, which include the floatingUserId=true installation setting for each Gloo component.
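For example, the OpenShift-specific setting looks like the following sketch when applied through Helm; the subset of components, release name, and namespace are assumptions, so repeat the setting for each Gloo component that you deploy.

```shell
# Sketch: set floatingUserId=true per Gloo component on OpenShift so pods
# receive a dynamic user ID. Components shown are an illustrative subset.
helm upgrade --install gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --set glooMgmtServer.floatingUserId=true \
  --set glooAgent.floatingUserId=true \
  --set glooUi.floatingUserId=true
```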
Istio settings
- Helm chart settings: If you install Istio by using the Istio Helm charts, your Helm settings must include profile=openshift.
- (Istio 1.19 and earlier only) Service account permissions: For any pods that require an Istio sidecar, such as your workload pods, you must elevate the permissions of the service account for that namespace. These elevated permissions allow the pods to use a user ID that is normally restricted by OpenShift. In the installation guides, you follow the OpenShift commands to elevate the service account permissions for the Istio and your workload projects.
- (Istio 1.18 and earlier only) Network attachment definition: The CNI on OpenShift requires a NetworkAttachmentDefinition in each workload project in order to invoke the istio-cni plug-in. For each workload project where you deploy applications in your service mesh, you must create a NetworkAttachmentDefinition resource. For example, follow the OpenShift steps when you deploy the sample apps to create the resource in each sample app project.
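The version-specific Istio steps above can be sketched as follows. The $WORKLOAD_PROJECT placeholder is an assumption; the exact commands in the installation guides for your Istio version take precedence.

```shell
# Sketch of the OpenShift-specific Istio steps. $WORKLOAD_PROJECT is a
# placeholder for each project that runs sidecar-injected workloads.

# (Istio 1.19 and earlier) Elevate the service account permissions so
# sidecar pods can use the otherwise-restricted user ID.
oc adm policy add-scc-to-group anyuid system:serviceaccounts:$WORKLOAD_PROJECT

# (Istio 1.18 and earlier) Create a NetworkAttachmentDefinition so the
# CNI can invoke the istio-cni plug-in in the workload project.
cat <<EOF | oc -n $WORKLOAD_PROJECT apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: istio-cni
EOF
```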