Install Gloo Gateway in a multicluster setup
Use Helm to customize your setup of Gloo Gateway in multiple clusters.
In a multicluster setup, you install the Gloo Gateway control plane and gateway proxy in separate clusters.
- Gloo Gateway control plane: When you install the Gloo Gateway control plane in a dedicated management cluster, a deployment named `gloo-mesh-mgmt-server` is created to translate and implement your Gloo configurations.
- Data plane: Set up one or more workload clusters that are registered with and managed by the Gloo Gateway control plane in the management cluster. A deployment named `gloo-mesh-agent` is created to run the Gloo agent in each workload cluster. Additionally, you use the Gloo Gateway control plane to install an ingress gateway proxy in each workload cluster, as part of the Istio lifecycle management. By using Gloo-managed installations, you no longer need to manually install and manage the `istiod` control plane and gateway proxy in each workload cluster. Instead, you provide the Istio configuration in your `gloo-platform` Helm chart, and Gloo translates this configuration into a managed `istiod` control plane and gateway proxies in the clusters.
To set up the Gloo Gateway control plane and gateway proxy in one cluster instead, see the Single cluster setup guide.
Before you begin
- Create or use existing Kubernetes clusters. For a multicluster setup, you need at least two clusters. One cluster is set up as the Gloo Gateway control plane where the management components are installed. The other cluster is registered as your data plane and runs your Kubernetes workloads and gateway proxy. You can optionally add more workload clusters to your setup. The instructions in this guide assume one management cluster and two workload clusters. Note: The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number).
- Save the names of your clusters from your infrastructure provider as environment variables. If your clusters have different names, specify those names instead.
  ```sh
  export MGMT_CLUSTER=mgmt
  export REMOTE_CLUSTER1=cluster1
  export REMOTE_CLUSTER2=cluster2
  ```
- Save the kubeconfig contexts for your clusters as environment variables. Run `kubectl config get-contexts`, look for your cluster in the `CLUSTER` column, and get the context name in the `NAME` column.
  ```sh
  export MGMT_CONTEXT=<management-cluster-context>
  export REMOTE_CONTEXT1=<remote-cluster1-context>
  export REMOTE_CONTEXT2=<remote-cluster2-context>
  ```
  Note: Do not use cluster context names with underscores (`_`). The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SANs are not FQDN compliant. You can rename a context by running `kubectl config rename-context "<oldcontext>" <newcontext>`.
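  For example, a quick way to spot offending context names and rename them might look like the following sketch. The long context name is a hypothetical example; substitute the names from your own kubeconfig.
  ```sh
  # List any kubeconfig contexts whose names contain an underscore.
  kubectl config get-contexts -o name | grep '_' || echo "No underscores found"

  # Hypothetical example: rename an underscore-heavy context to a hyphenated alias
  # before you export the context variables above.
  kubectl config rename-context "gke_my-project_us-east1_cluster1" gke-my-project-us-east1-cluster1
  ```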
- Set your Gloo Gateway license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.
  ```sh
  export GLOO_GATEWAY_LICENSE_KEY=<gloo-gateway-license-key>
  ```
- Set the Gloo Gateway version as an environment variable. The latest version is used as an example. You can find other versions in the Changelog documentation. Append `-fips` for a FIPS-compliant image, such as `2.4.0-beta1-fips`. Do not include `v` before the version number.
  ```sh
  export GLOO_VERSION=2.4.0-beta1
  ```
- Install the following CLI tools:
  - `meshctl`, the Gloo command line tool for bootstrapping Gloo Platform, registering clusters, describing configured resources, and more. Be sure to download version `2.4.0-beta1`, which uses the latest Gloo Gateway CRDs.
  - `kubectl`, the Kubernetes command line tool. Download the `kubectl` version that is within one minor version of the Kubernetes cluster you plan to use.
  - OpenShift only: `oc`, the OpenShift command line tool. Download the `oc` version that matches the minor version of the OpenShift cluster you plan to use.
- For quick installations, such as for testing environments, you can install with `meshctl`. To customize your installation in detail, such as for production environments, install with Helm.
Install with meshctl
Quickly install Gloo Gateway by using `meshctl`, such as for testing purposes.
Install the control plane
Start by installing the Gloo Gateway control plane in your management cluster.
- Install the Gloo Gateway control plane in the management cluster. This command uses a basic profile to create a `gloo-mesh` namespace and install the control plane components, such as the management server and Prometheus server, in your management cluster. `meshctl install` creates a self-signed certificate authority for mTLS if you do not supply your own certificates. If you prefer to set up Gloo Gateway without secure communication for quick demonstrations, include the `--set common.insecure=true` flag. Note that using the default self-signed CAs or using insecure mode is not suitable for production environments.
  ```sh
  meshctl install --profiles mgmt-server \
    --kubecontext $MGMT_CONTEXT \
    --set common.cluster=$MGMT_CLUSTER \
    --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY
  ```
  Note:
  - Need to use OpenShift routes instead of load balancer service types? Follow the OpenShift steps in the Helm section instead.
  - In OpenShift 4.11 and later, after you run the following command you might see warnings for pods and containers that violate the OpenShift `PodSecurity "restricted:v1.24"` profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.

  OpenShift:
  ```sh
  meshctl install --profiles mgmt-server \
    --kubecontext $MGMT_CONTEXT \
    --set common.cluster=$MGMT_CLUSTER \
    --set glooMgmtServer.floatingUserId=true \
    --set glooUi.floatingUserId=true \
    --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY \
    --set prometheus.server.securityContext=false \
    --set redis.deployment.floatingUserId=true
  ```
- Verify that the control plane pods have a status of `Running`.
  ```sh
  kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
  ```
  Example output:
  ```
  NAME                                      READY   STATUS    RESTARTS   AGE
  gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
  gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
  gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
  gloo-telemetry-collector-agent-fcc82      1/1     Running   0          30s
  gloo-telemetry-collector-agent-npnjn      1/1     Running   0          30s
  gloo-telemetry-collector-agent-p8jwj      1/1     Running   0          30s
  gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
  prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
  ```
- Save the external address and port that were assigned by your cloud provider to the Gloo OpenTelemetry (OTel) gateway load balancer service. The OTel collector agents in each workload cluster send metrics to this address.
  If your load balancer is assigned an IP address:
  ```sh
  export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
  export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
  echo $TELEMETRY_GATEWAY_ADDRESS
  ```
  If your load balancer is assigned a hostname:
  ```sh
  export TELEMETRY_GATEWAY_HOSTNAME=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
  export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOSTNAME}:${TELEMETRY_GATEWAY_PORT}
  echo $TELEMETRY_GATEWAY_ADDRESS
  ```
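  Optionally, you can sanity-check that the load balancer address was populated before you continue. This is a minimal sketch, assuming the service exposes the OTLP port as shown above; an address that starts with a colon means the ingress IP or hostname is not assigned yet.
  ```sh
  kubectl get svc gloo-telemetry-gateway -n gloo-mesh --context $MGMT_CONTEXT
  if [[ "$TELEMETRY_GATEWAY_ADDRESS" == :* ]]; then
    echo "Load balancer address not ready yet; wait a minute and re-run the export commands"
  else
    echo "Telemetry gateway address: $TELEMETRY_GATEWAY_ADDRESS"
  fi
  ```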
- Create a workspace that selects all clusters and namespaces by default. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a single workspace for everything. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting.
  ```sh
  kubectl apply --context $MGMT_CONTEXT -f- <<EOF
  apiVersion: admin.gloo.solo.io/v2
  kind: Workspace
  metadata:
    name: $MGMT_CLUSTER
    namespace: gloo-mesh
  spec:
    workloadClusters:
      - name: '*'
        namespaces:
          - name: '*'
  EOF
  ```
- Create a workspace settings resource for the workspace that enables federation across clusters and selects the Istio east-west gateway.
  ```sh
  kubectl apply --context $MGMT_CONTEXT -f- <<EOF
  apiVersion: admin.gloo.solo.io/v2
  kind: WorkspaceSettings
  metadata:
    name: $MGMT_CLUSTER
    namespace: gloo-mesh
  spec:
    options:
      serviceIsolation:
        enabled: false
      federation:
        enabled: false
        serviceSelector:
          - {}
      eastWestGateways:
        - selector:
            labels:
              istio: eastwestgateway
  EOF
  ```
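  To confirm that the workspace and its settings were created, a quick check such as the following might help. The fully qualified resource names are used here in case short names differ across Gloo versions.
  ```sh
  kubectl get workspaces.admin.gloo.solo.io,workspacesettings.admin.gloo.solo.io \
    -n gloo-mesh --context $MGMT_CONTEXT
  ```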
Register workload clusters
Register each workload cluster with the management server by deploying the relay agent.
- Prepare the `gloo-mesh-addons` namespace.
  ```sh
  kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT1
  kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT2
  ```
  OpenShift:
  - Elevate the permissions of the `gloo-mesh-addons` service account that will be created.
    ```sh
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons --context $REMOTE_CONTEXT1
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons --context $REMOTE_CONTEXT2
    ```
  - Create the `gloo-mesh-addons` project, and create a NetworkAttachmentDefinition custom resource for the project.
    ```sh
    kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT1
    cat <<EOF | oc -n gloo-mesh-addons create --context $REMOTE_CONTEXT1 -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF

    kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT2
    cat <<EOF | oc -n gloo-mesh-addons create --context $REMOTE_CONTEXT2 -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    ```
- Register the workload clusters. The `meshctl` command completes the following:
  - Creates the `gloo-mesh` namespace
  - Copies the root CA certificate to the workload cluster
  - Copies the bootstrap token to the workload cluster
  - Installs the relay agent, rate limit server, and external auth server in the workload cluster
  - Creates the KubernetesCluster CRD in the management cluster

  ```sh
  meshctl cluster register $REMOTE_CLUSTER1 \
    --kubecontext $MGMT_CONTEXT \
    --remote-context $REMOTE_CONTEXT1 \
    --profiles agent,ratelimit,extauth \
    --version $GLOO_VERSION \
    --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS

  meshctl cluster register $REMOTE_CLUSTER2 \
    --kubecontext $MGMT_CONTEXT \
    --remote-context $REMOTE_CONTEXT2 \
    --profiles agent,ratelimit,extauth \
    --version $GLOO_VERSION \
    --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
  ```
  OpenShift:
  ```sh
  meshctl cluster register $REMOTE_CLUSTER1 \
    --kubecontext $MGMT_CONTEXT \
    --remote-context $REMOTE_CONTEXT1 \
    --profiles agent,ratelimit,extauth \
    --set glooAgent.floatingUserId=true \
    --version $GLOO_VERSION \
    --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS

  meshctl cluster register $REMOTE_CLUSTER2 \
    --kubecontext $MGMT_CONTEXT \
    --remote-context $REMOTE_CONTEXT2 \
    --profiles agent,ratelimit,extauth \
    --set glooAgent.floatingUserId=true \
    --version $GLOO_VERSION \
    --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
  ```
  Note: In OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift `PodSecurity "restricted:v1.24"` profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article. If you installed the Gloo management plane in insecure mode, include the `--relay-server-insecure=true` flag in this command.
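  If you want to confirm what the registration created in the management cluster before running the health checks, a quick look at the KubernetesCluster resources might look like this. The fully qualified resource name is used in case short names vary by version.
  ```sh
  kubectl get kubernetesclusters.admin.gloo.solo.io -n gloo-mesh --context $MGMT_CONTEXT
  ```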
- Verify that the Gloo data plane components are healthy. If not, try debugging the agent.
  ```sh
  meshctl check --kubecontext $REMOTE_CONTEXT1
  meshctl check --kubecontext $REMOTE_CONTEXT2
  ```
  Example output:
  ```
  🟢 CRD Version check

  🟢 Gloo Platform Deployment Status
  Namespace        | Name             | Ready | Status
  gloo-mesh        | gloo-mesh-agent  | 1/1   | Healthy
  gloo-mesh-addons | ext-auth-service | 1/1   | Healthy
  gloo-mesh-addons | rate-limiter     | 1/1   | Healthy
  gloo-mesh-addons | redis            | 1/1   | Healthy
  ```
- Verify that your Gloo Gateway setup is correctly installed. This check might take a few seconds to verify that:
  - Your Gloo Platform product licenses are valid and current.
  - The Gloo Platform CRDs are installed at the correct version.
  - The control plane pods in the management cluster are running and healthy.
  - The agents in the workload clusters are successfully identified by the control plane.

  ```sh
  meshctl check --kubecontext $MGMT_CONTEXT
  ```
  Example output:
  ```
  🟢 License status

  INFO  gloo-mesh enterprise license expiration is 25 Aug 23 10:38 CDT
  INFO  Valid GraphQL license module found

  🟢 CRD version check

  🟢 Gloo Platform deployment status
  Namespace | Name                           | Ready | Status
  gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
  gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
  gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
  gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
  gloo-mesh | prometheus-server              | 1/1   | Healthy
  gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy

  🟢 Mgmt server connectivity to workload agents
  Cluster  | Registered | Connected Pod
  cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
  cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
  ```
- In each workload cluster, deploy managed gateway proxies with the gateway lifecycle manager.
Install with Helm
Customize your Gloo Gateway setup by installing with the Gloo Platform Helm chart.
This guide uses the new `gloo-platform` Helm chart, which is available in Gloo Platform 2.3 and later. For more information about this chart, see the `gloo-platform` chart overview guide.
Install the control plane
- Production installations: Review Best practices for production to prepare your security measures. For example, before you begin your Gloo installation, you can provide your own certificates and set up secure access to the Gloo UI.
- Install `helm`, the Kubernetes package manager.
- Add and update the Helm repository for Gloo Platform.
  ```sh
  helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
  helm repo update
  ```
- Apply the Gloo Platform CRDs to your cluster by creating a `gloo-platform-crds` Helm release.
  ```sh
  helm install gloo-platform-crds gloo-platform/gloo-platform-crds \
    --kube-context $MGMT_CONTEXT \
    --namespace=gloo-mesh \
    --create-namespace \
    --version $GLOO_VERSION
  ```
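  Optionally, spot-check that the CRDs were applied. The exact list varies by Gloo version, but the API groups end in solo.io, so a simple filter is usually enough:
  ```sh
  kubectl get crds --context $MGMT_CONTEXT | grep 'solo.io'
  ```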
- Prepare a Helm values file for production-level settings, including FIPS-compliant images, OIDC authorization for the Gloo UI, and more. To get started, you can use the minimum settings in the `mgmt-server` profile as a basis for your values file. This profile enables all of the necessary components that are required for a Gloo Gateway control plane installation, such as the management server, as well as some optional components, such as the Gloo UI.
  ```sh
  curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/mgmt-server.yaml > mgmt-server.yaml
  open mgmt-server.yaml
  ```
  OpenShift:
  ```sh
  cat >mgmt-server.yaml <<EOF
  common:
    cluster: $MGMT_CLUSTER
  glooMgmtServer:
    enabled: true
    floatingUserId: true
    serviceType: LoadBalancer
  glooUi:
    enabled: true
    floatingUserId: true
  prometheus:
    enabled: true
    server:
      securityContext: false
  redis:
    deployment:
      enabled: true
      floatingUserId: true
  telemetryGateway:
    enabled: true
    service:
      type: LoadBalancer
  EOF
  ```
  Note: When you use the settings in this profile to install Gloo Gateway in OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift `PodSecurity "restricted:v1.24"` profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.
- Edit the file to provide your own details for settings that are recommended for production deployments, such as the following optional settings.

  | Field | Description |
  |-------|-------------|
  | `glooMgmtServer.relay` | Secure the relay connection between the Gloo management server and agents. By default, Gloo Gateway generates self-signed certificates and keys for the root CA and uses these credentials to derive the intermediate CA, server, and client TLS certificates. This setup is not recommended for production. Instead, use your preferred PKI provider to generate and store your credentials, and to have more control over the certificate management process. For more information, see the relay Setup options. |
  | `glooMgmtServer.resources.limits` | Add resource limits for the `gloo-mesh-mgmt-server` pod, such as `cpu: 1000m` and `memory: 1Gi`. |
  | `glooMgmtServer.serviceOverrides.metadata.annotations` | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides. |
  | `glooUi.auth` | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication. |
  | `istioInstallations` | Add an `istioInstallations` section to deploy managed gateway proxies to each workload cluster. For an example of how this section might look, expand "Example: istioInstallations section" after this table. NOTE: To manage the gateway proxies yourself instead of with the gateway lifecycle manager in this Helm chart, set `istioInstallations.enabled` to `false`, and manually deploy gateway proxies in each workload cluster after you complete this guide. |
  | `prometheus.enabled` | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Best practices for collecting metrics in production. |
  | `redis` | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases. |
  | OpenShift: `glooMgmtServer.serviceType` and `telemetryGateway.service.type` | In some OpenShift setups, you might not use load balancer service types. You can set these two deployment service types to `ClusterIP`, and expose them by using OpenShift routes after installation. |

  For more information about the settings you can configure:
  - See Best practices for production.
  - See all possible fields for the Helm chart by running `helm show values gloo-platform/gloo-platform --version v2.4.0-beta1 > all-values.yaml`. You can also see these fields in the Helm values documentation.
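  For example, to script the resource limit edit that the table recommends instead of editing the file by hand, you might use a YAML processor. This sketch assumes the mikefarah yq v4 CLI is installed; the limit values are only illustrative.
  ```sh
  # Add recommended resource limits for the management server to the values file.
  yq -i '.glooMgmtServer.resources.limits.cpu = "1000m"' mgmt-server.yaml
  yq -i '.glooMgmtServer.resources.limits.memory = "1Gi"' mgmt-server.yaml

  # Review the result before installing.
  yq '.glooMgmtServer' mgmt-server.yaml
  ```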
- Install the Gloo Gateway control plane in your management cluster, including the customizations in your Helm values file.
  ```sh
  helm install gloo-platform gloo-platform/gloo-platform \
    --kube-context $MGMT_CONTEXT \
    --namespace gloo-mesh \
    --version $GLOO_VERSION \
    --values mgmt-server.yaml \
    --set common.cluster=$MGMT_CLUSTER \
    --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY
  ```
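  To confirm that the release deployed before checking individual pods, you can review its Helm status; the release and namespace names here match the install command above.
  ```sh
  helm list --namespace gloo-mesh --kube-context $MGMT_CONTEXT
  helm status gloo-platform --namespace gloo-mesh --kube-context $MGMT_CONTEXT
  ```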
- Verify that the control plane pods have a status of `Running`.
  ```sh
  kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
  ```
  Example output:
  ```
  NAME                                      READY   STATUS    RESTARTS   AGE
  gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
  gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
  gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
  gloo-telemetry-collector-agent-fcc82      1/1     Running   0          30s
  gloo-telemetry-collector-agent-npnjn      1/1     Running   0          30s
  gloo-telemetry-collector-agent-p8jwj      1/1     Running   0          30s
  gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
  prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
  ```
- Save the external address and port that were assigned by your cloud provider to the `gloo-mesh-mgmt-server` service. The `gloo-mesh-agent` relay agent in each cluster accesses this address via a secure connection.
  If your load balancer is assigned an IP address:
  ```sh
  export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  export MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
  export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
  echo $MGMT_SERVER_NETWORKING_ADDRESS
  ```
  If your load balancer is assigned a hostname:
  ```sh
  export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  export MGMT_SERVER_NETWORKING_PORT=$(kubectl -n gloo-mesh get service gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
  export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
  echo $MGMT_SERVER_NETWORKING_ADDRESS
  ```
  OpenShift:
  - Expose the management server by using an OpenShift route. Your route might look like the following.
    ```sh
    oc apply --context $MGMT_CONTEXT -n gloo-mesh -f- <<EOF
    kind: Route
    apiVersion: route.openshift.io/v1
    metadata:
      name: gloo-mesh-mgmt-server
      namespace: gloo-mesh
      annotations:
        # Needed for the different agents to connect to different replica instances of the management server deployment
        haproxy.router.openshift.io/balance: roundrobin
    spec:
      host: gloo-mesh-mgmt-server.<your_domain>.com
      to:
        kind: Service
        name: gloo-mesh-mgmt-server
        weight: 100
      port:
        targetPort: grpc
      tls:
        termination: passthrough
        insecureEdgeTerminationPolicy: Redirect
      wildcardPolicy: None
    EOF
    ```
  - Save the management server's address, which consists of the route host and the route port. Note that the port is the route's own port, such as `443`, and not the `grpc` port of the management server that the route points to.
    ```sh
    export MGMT_SERVER_NETWORKING_DOMAIN=$(oc get route -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.status.ingress[0].host}')
    export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:443
    echo $MGMT_SERVER_NETWORKING_ADDRESS
    ```
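  Before registering workload clusters, you can optionally confirm that the relay address is reachable from your machine. This is a minimal sketch that assumes netcat (nc) is installed locally and that your network allows direct access to the load balancer or route.
  ```sh
  # Split the host and port from the saved address and test the TCP connection.
  nc -zv "${MGMT_SERVER_NETWORKING_ADDRESS%:*}" "${MGMT_SERVER_NETWORKING_ADDRESS##*:}"
  ```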
- Save the external address and port that were assigned by your cloud provider to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
  If your load balancer is assigned an IP address:
  ```sh
  export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
  export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
  echo $TELEMETRY_GATEWAY_ADDRESS
  ```
  If your load balancer is assigned a hostname:
  ```sh
  export TELEMETRY_GATEWAY_HOSTNAME=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
  export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOSTNAME}:${TELEMETRY_GATEWAY_PORT}
  echo $TELEMETRY_GATEWAY_ADDRESS
  ```
  OpenShift:
  - Expose the telemetry gateway by using an OpenShift route. Your route might look like the following.
    ```sh
    oc apply --context $MGMT_CONTEXT -n gloo-mesh -f- <<EOF
    kind: Route
    apiVersion: route.openshift.io/v1
    metadata:
      name: gloo-telemetry-gateway
      namespace: gloo-mesh
    spec:
      host: gloo-telemetry-gateway.<your_domain>.com
      to:
        kind: Service
        name: gloo-telemetry-gateway
        weight: 100
      port:
        targetPort: otlp
      tls:
        termination: passthrough
        insecureEdgeTerminationPolicy: Redirect
      wildcardPolicy: None
    EOF
    ```
  - Save the telemetry gateway's address, which consists of the route host and the route port. Note that the port is the route's own port, such as `443`, and not the `otlp` port of the telemetry gateway that the route points to.
    ```sh
    export TELEMETRY_GATEWAY_HOST=$(oc get route -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.ingress[0].host}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_HOST}:443
    echo $TELEMETRY_GATEWAY_ADDRESS
    ```
- Create a workspace that selects all clusters and namespaces by default. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a single workspace for everything. For more complex setups, such as creating a workspace for each team to enforce service isolation, set up federation, and even share resources by importing and exporting, see Organize team resources with workspaces.
  ```sh
  kubectl apply --context $MGMT_CONTEXT -f- <<EOF
  apiVersion: admin.gloo.solo.io/v2
  kind: Workspace
  metadata:
    name: $MGMT_CLUSTER
    namespace: gloo-mesh
  spec:
    workloadClusters:
      - name: '*'
        namespaces:
          - name: '*'
  EOF
  ```
- Create a workspace settings resource for the workspace that enables federation across clusters and selects the Istio east-west gateway.
  ```sh
  kubectl apply --context $MGMT_CONTEXT -f- <<EOF
  apiVersion: admin.gloo.solo.io/v2
  kind: WorkspaceSettings
  metadata:
    name: $MGMT_CLUSTER
    namespace: gloo-mesh
  spec:
    options:
      serviceIsolation:
        enabled: false
      federation:
        enabled: false
        serviceSelector:
          - {}
      eastWestGateways:
        - selector:
            labels:
              istio: eastwestgateway
  EOF
  ```
Register workload clusters
Register each workload cluster with the management server by deploying the relay agent.
- For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you register another workload cluster.
  ```sh
  export REMOTE_CLUSTER=$REMOTE_CLUSTER1
  export REMOTE_CONTEXT=$REMOTE_CONTEXT1
  ```
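  For example, when you later repeat these steps for the second workload cluster, you point the same variables at that cluster instead:
  ```sh
  export REMOTE_CLUSTER=$REMOTE_CLUSTER2
  export REMOTE_CONTEXT=$REMOTE_CONTEXT2
  ```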
- In the management cluster, create a `KubernetesCluster` resource to represent the workload cluster and store relevant data, such as the workload cluster's local domain.
  - The `metadata.name` must match the name of the workload cluster that you will specify in the `gloo-platform` Helm chart in subsequent steps.
  - The `spec.clusterDomain` must match the local cluster domain of the Kubernetes cluster.
  - You can optionally give your cluster a label, such as `env: prod`, `region: us-east`, or another selector. Your workspaces can use the label to automatically add the cluster to the workspace.

  ```sh
  kubectl apply --context $MGMT_CONTEXT -f- <<EOF
  apiVersion: admin.gloo.solo.io/v2
  kind: KubernetesCluster
  metadata:
    name: ${REMOTE_CLUSTER}
    namespace: gloo-mesh
    labels:
      env: prod
  spec:
    clusterDomain: cluster.local
  EOF
  ```
- In your workload cluster, apply the Gloo Platform CRDs by creating a `gloo-platform-crds` Helm release.
  ```sh
  helm install gloo-platform-crds gloo-platform/gloo-platform-crds \
    --kube-context $REMOTE_CONTEXT \
    --namespace=gloo-mesh \
    --create-namespace \
    --version $GLOO_VERSION
  ```
- Prepare a Helm values file for production-level settings, including FIPS-compliant images, OIDC authorization for the Gloo UI, and more. To get started, you can use the minimum settings in the `agent` profile as a basis for your values file. This profile enables the Gloo agent and the Gloo Platform telemetry collector.
  ```sh
  curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/agent.yaml > agent.yaml
  open agent.yaml
  ```
  OpenShift:
  ```sh
  curl https://storage.googleapis.com/gloo-platform/helm-profiles/$GLOO_VERSION/agent-openshift.yaml > agent.yaml
  open agent.yaml
  ```
  Note: When you use the settings in this profile to install Gloo Gateway in OpenShift 4.11 and later, you might see warnings for pods and containers that violate the OpenShift `PodSecurity "restricted:v1.24"` profile, due to the elevated permissions required by Istio. You can ignore these warnings. For more info, see this article.
Edit the file to provide your own details for settings that are recommended for production deployments, such as the following optional settings.
Field Decription glooAgent.relay
Provide the certificate and secret details that correspond to your management server relay settings. For more information, see the relay Setup options. glooAgent.resources.limits
Add resource limits for the gloo-mesh-mgmt-server
pod, such ascpu: 500m
andmemory: 512Mi
.For more information about the settings you can configure:
- See Best practices for production.
- See all possible fields for the Helm chart by running
helm show values gloo-platform/gloo-platform --version v2.4.0-beta1 > all-values.yaml
. You can also see these fields in the Helm values documentation.
- Deploy the relay agent to the workload cluster.
  ```sh
  helm install gloo-platform gloo-platform/gloo-platform \
    --namespace gloo-mesh \
    --kube-context $REMOTE_CONTEXT \
    --version $GLOO_VERSION \
    --values agent.yaml \
    --set common.cluster=$REMOTE_CLUSTER \
    --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
    --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
  ```
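  Optionally, before running the health checks, you can watch the agent pod come up and skim its logs for connection errors. The deployment name matches the agent deployment described earlier in this guide; log wording varies by version.
  ```sh
  kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
  kubectl logs deploy/gloo-mesh-agent -n gloo-mesh --context $REMOTE_CONTEXT --tail=20
  ```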
- Verify that the Gloo data plane components are healthy. If not, try debugging the agent.
  ```sh
  meshctl check --kubecontext $REMOTE_CONTEXT
  ```
  Example output:
  ```
  🟢 CRD Version check

  🟢 Gloo Platform Deployment Status
  Namespace        | Name             | Ready | Status
  gloo-mesh        | gloo-mesh-agent  | 1/1   | Healthy
  gloo-mesh-addons | ext-auth-service | 1/1   | Healthy
  gloo-mesh-addons | rate-limiter     | 1/1   | Healthy
  gloo-mesh-addons | redis            | 1/1   | Healthy
  ```
- Optional: Install add-ons, such as the external auth and rate limit servers, in a separate Helm release. Only create this release if you did not enable the `extAuthService` and `rateLimiter` in your main installation release. Want to expose your APIs through a developer portal? You must include some extra Helm settings. To install, see Portal.
  ```sh
  helm install gloo-agent-addons gloo-platform/gloo-platform \
    --namespace gloo-mesh-addons \
    --kube-context $REMOTE_CONTEXT \
    --create-namespace \
    --version $GLOO_VERSION \
    --set common.cluster=$REMOTE_CLUSTER \
    --set extAuthService.enabled=true \
    --set rateLimiter.enabled=true
  ```
  OpenShift:
  - Elevate the permissions of the `gloo-mesh-addons` service account that will be created.
    ```sh
    oc adm policy add-scc-to-group anyuid system:serviceaccounts:gloo-mesh-addons --context $REMOTE_CONTEXT
    ```
  - Create the `gloo-mesh-addons` project, and create a NetworkAttachmentDefinition custom resource for the project.
    ```sh
    kubectl create ns gloo-mesh-addons --context $REMOTE_CONTEXT
    cat <<EOF | oc -n gloo-mesh-addons create --context $REMOTE_CONTEXT -f -
    apiVersion: "k8s.cni.cncf.io/v1"
    kind: NetworkAttachmentDefinition
    metadata:
      name: istio-cni
    EOF
    ```
  - Create the add-ons release.
    ```sh
    helm install gloo-agent-addons gloo-platform/gloo-platform \
      --namespace gloo-mesh-addons \
      --kube-context $REMOTE_CONTEXT \
      --version $GLOO_VERSION \
      --set common.cluster=$REMOTE_CLUSTER \
      --set extAuthService.enabled=true \
      --set rateLimiter.enabled=true
    ```
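  To confirm that the add-ons deployed, you can list the pods in the add-ons namespace; the expected workloads are the external auth service, rate limiter, and its Redis instance, as shown in the health check output later in this guide.
  ```sh
  kubectl get pods -n gloo-mesh-addons --context $REMOTE_CONTEXT
  ```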
- Repeat the previous steps to register each remaining workload cluster with Gloo.
- Verify that your Gloo Gateway setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:
  - Your Gloo Platform product licenses are valid and current.
  - The Gloo Platform CRDs are installed at the correct version.
  - The control plane pods in the management cluster are running and healthy.
  - The agents in the workload clusters are successfully identified by the control plane.

  ```sh
  meshctl check --kubecontext $MGMT_CONTEXT
  ```
  Example output:
  ```
  🟢 License status

  INFO  gloo-mesh enterprise license expiration is 25 Aug 23 10:38 CDT
  INFO  Valid GraphQL license module found

  🟢 CRD version check

  🟢 Gloo Platform deployment status
  Namespace | Name                           | Ready | Status
  gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
  gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
  gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
  gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
  gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
  gloo-mesh | prometheus-server              | 1/1   | Healthy
  gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy

  🟢 Mgmt server connectivity to workload agents
  Cluster  | Registered | Connected Pod
  cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
  cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
  ```
- If you did not deploy managed gateway proxies with your Helm installation, manually deploy gateway proxies in each workload cluster.
Optional: Configure locality labels for nodes
Gloo Gateway uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.
- Cloud: Typically, your cloud provider sets the Kubernetes `region` and `zone` labels for each node automatically. Depending on the level of availability that you want, you might have clusters in the same region, but different zones. Or, each cluster might be in a different region, with nodes spread across zones.
- On-premises: Depending on how you set up your cluster, you likely must set the `region` and `zone` labels for each node yourself. Additionally, consider setting a `subzone` label to specify nodes on the same rack or other more granular setups.
Verify that your nodes have locality labels
Verify that your nodes have at least `region` and `zone` labels. If so, and you do not want to update the labels, you can skip the remaining steps.
```sh
kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'
```
Example output with `region` and `zone` labels:
```
..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"
```
Add locality labels to your nodes
If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same `region` label to each node, but a separate `zone` label per node. The values are not validated against your underlying infrastructure provider. The following example shows how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.
- Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the `--overwrite` flag in the command.
  ```sh
  kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
  kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west
  ```
- List the nodes in each cluster. Note the name for each node.
  ```sh
  kubectl get nodes --context $REMOTE_CONTEXT1
  kubectl get nodes --context $REMOTE_CONTEXT2
  ```
- Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the `--overwrite` flag in the command.
  ```sh
  kubectl label node <cluster1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
  kubectl label node <cluster1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
  kubectl label node <cluster1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3

  kubectl label node <cluster2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
  kubectl label node <cluster2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
  kubectl label node <cluster2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
  ```
Next steps
Now that Gloo Gateway is installed, check out the following resources to explore Gloo Gateway capabilities:
- Organize team resources with workspaces.
- Deploy sample apps in your cluster to follow the guides in the documentation.
- Configure HTTP or HTTPS listeners for your gateway.
- Review routing examples, such as header matching, redirects, or direct responses that you can configure for your API Gateway.
- Explore traffic policies that you can apply to your routes and upstream services.
- Learn more about Gloo Gateway and its features in Concepts.