Install in a multicluster setup
Install the Gloo Mesh Gateway management plane and data plane separately across multiple clusters.
Overview
In a multicluster setup, you install the Gloo management plane and gateway proxy in separate clusters.
- Gloo management plane: When you install the Gloo management plane in a dedicated management cluster, a deployment named `gloo-mesh-mgmt-server` is created to translate and implement your Gloo configurations.
- Data plane: Set up one or more workload clusters that are registered with and managed by the Gloo management plane in the management cluster. A deployment named `gloo-mesh-agent` is created to run the Gloo agent in each workload cluster. Additionally, you use the Gloo management plane to install an ingress gateway proxy in each workload cluster, as part of the Istio lifecycle management. By using Gloo-managed installations, you no longer need to manually install and manage the `istiod` control plane and gateway proxy in each workload cluster. Instead, you provide the Istio configuration in your `gloo-platform` Helm chart, and Gloo translates this configuration into managed `istiod` control planes and gateway proxies in the clusters.
Before you begin
Install the following command-line (CLI) tools.

- `kubectl`, the Kubernetes command line tool. Download the `kubectl` version that is within one minor version of the Kubernetes clusters you plan to use.
- `meshctl`, the Solo command line tool.

  ```sh
  curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.6.6 sh -
  export PATH=$HOME/.gloo-mesh/bin:$PATH
  ```
Set your Gloo Mesh Gateway license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license's validity, you can run `meshctl license check --key $(echo ${GLOO_MESH_GATEWAY_LICENSE_KEY} | base64 -w0)`.

```sh
export GLOO_MESH_GATEWAY_LICENSE_KEY=<license_key>
```
Set the Gloo Mesh Gateway version. This example uses the latest version. You can find other versions in the Changelog documentation. Append `-fips` for a FIPS-compliant image, such as `2.6.6-fips`. Do not include `v` before the version number.

```sh
export GLOO_VERSION=2.6.6
```
Create or use at least two existing Kubernetes clusters. The instructions in this guide assume one management cluster and two workload clusters. Cluster names must be lowercase and alphanumeric, can include hyphens (-) but no other special characters, and must begin with a letter (not a number).
For quick installations, such as for testing environments, you can install with `meshctl`. To customize your installation in detail, such as for production environments, install with Helm.
Install with meshctl
Quickly install Gloo Mesh Gateway by using `meshctl`, such as for testing purposes.
The `meshctl` install steps assume that you want to secure the connection between the Gloo management server and agents by using mutual TLS with self-signed TLS certificates. If you want to customize this setup and use simple TLS instead, or if you want to bring your own TLS certificates, follow the Install with Helm steps.
Management plane
Deploy the Gloo management plane into a dedicated management cluster.
Install the Gloo management plane in your management cluster. This command uses a basic profile to create a `gloo-mesh` namespace and install the management plane components, such as the management server and Prometheus server, in your management cluster. `meshctl install` creates a self-signed certificate authority for mTLS if you do not supply your own certificates. If you prefer to set up Gloo Mesh Gateway without secure communication for quick demonstrations, include the `--set common.insecure=true` flag. Note that neither the default self-signed CAs nor insecure mode is suitable for production environments.
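A minimal sketch of what the install command might look like, assuming a `gloo-gateway-mgmt` profile name and that your management cluster name and kubeconfig context are saved in `$MGMT_CLUSTER` and `$MGMT_CONTEXT`; verify the profile and flags with `meshctl install --help` for your version:

```sh
# Sketch only: the profile name is an assumption; confirm with 'meshctl install --help'.
meshctl install --profiles gloo-gateway-mgmt \
  --kubecontext $MGMT_CONTEXT \
  --set common.cluster=$MGMT_CLUSTER \
  --set licensing.glooGatewayLicenseKey=$GLOO_MESH_GATEWAY_LICENSE_KEY
```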
Verify that the management plane pods have a status of `Running`.

```sh
kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
```
Example output:
```
NAME                                      READY   STATUS    RESTARTS   AGE
gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
```
Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
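For example, you might capture the address in environment variables for later use when you register workload clusters; the service name and the `otlp` port name here are assumptions based on a default installation:

```sh
# Assumes the default 'gloo-telemetry-gateway' service with a port named 'otlp'.
export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
```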
Create a workspace that selects all clusters and namespaces by default, and workspace settings that enable communication across clusters. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a global workspace that imports and exports all resources and namespaces, and a workspace settings resource in the `gloo-mesh-config` namespace. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting.

```sh
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  name: $MGMT_CLUSTER
  namespace: gloo-mesh
spec:
  workloadClusters:
    - name: '*'
      namespaces:
        - name: '*'
---
apiVersion: v1
kind: Namespace
metadata:
  name: gloo-mesh-config
---
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
  name: $MGMT_CLUSTER
  namespace: gloo-mesh-config
spec:
  options:
    serviceIsolation:
      enabled: false
    federation:
      enabled: false
      serviceSelector:
        - {}
    eastWestGateways:
      - selector:
          labels:
            istio: eastwestgateway
EOF
```
Data plane
Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named `gloo-mesh-agent` runs the Gloo agent in each workload cluster.
- Register both workload clusters with the management server. These commands use basic profiles to install the Gloo agent, rate limit server, and external auth server in each workload cluster.
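A sketch of what the registration commands might look like, assuming profile names such as `gloo-gateway-agent`, `ratelimit`, and `extauth`, and `$REMOTE_CLUSTER1`/`$REMOTE_CONTEXT1` variables for your workload clusters; confirm the exact flags and profiles with `meshctl cluster register --help`:

```sh
# Sketch only: profile names and variables are assumptions for this guide.
meshctl cluster register $REMOTE_CLUSTER1 \
  --kubecontext $MGMT_CONTEXT \
  --remote-context $REMOTE_CONTEXT1 \
  --profiles gloo-gateway-agent,ratelimit,extauth \
  --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS

meshctl cluster register $REMOTE_CLUSTER2 \
  --kubecontext $MGMT_CONTEXT \
  --remote-context $REMOTE_CONTEXT2 \
  --profiles gloo-gateway-agent,ratelimit,extauth \
  --telemetry-server-address $TELEMETRY_GATEWAY_ADDRESS
```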
Verify that the Gloo data plane components in each workload cluster are healthy. If not, try debugging the agent.
```sh
meshctl check --kubecontext $REMOTE_CONTEXT1
meshctl check --kubecontext $REMOTE_CONTEXT2
```
Example output:
```
🟢 Gloo deployment status
Namespace | Name                           | Ready | Status
gloo-mesh | ext-auth-service               | 1/1   | Healthy
gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
gloo-mesh | rate-limiter                   | 1/1   | Healthy
```
Verify that your Gloo Mesh Gateway setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:
- Your Gloo product licenses are valid and current.
- The Gloo CRDs are installed at the correct version.
- The management plane pods in the management cluster are running and healthy.
- The agents in the workload clusters are successfully identified by the management server.
```sh
meshctl check --kubecontext $MGMT_CONTEXT
```
Example output:
```
🟢 License status
 INFO  gloo-gateway enterprise license expiration is 25 Aug 24 10:38 CDT

🟢 CRD version check

🟢 Gloo deployment status
Namespace | Name                           | Ready | Status
gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
gloo-mesh | prometheus-server              | 1/1   | Healthy

🟢 Mgmt server connectivity to workload agents
Cluster  | Registered | Connected Pod
cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6

Connected Pod                                    | Clusters
gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2
```
Gateway proxies
Deploy gateway proxies in each workload cluster.
- To deploy managed gateway installations, see Install gateway proxies by using the Istio and Gateway Lifecycle Manager.
- To instead manage Istio gateways yourself, see Manually install gateway proxies.
Install with Helm
Customize your Gloo Mesh Gateway setup by installing with the Gloo Platform Helm chart. For more information, see the Gloo Helm chart overview.
Management plane
Deploy the Gloo management plane into a dedicated management cluster.
Production installations: Review Best practices for production to prepare your optional security measures. For example, before you begin your Gloo installation, you can provide your own certificates to secure the management server and agent connection, and set up secure access to the Gloo UI.
Install `helm`, the Kubernetes package manager.

Save the name and kubeconfig context for your management cluster in environment variables.

```sh
export MGMT_CLUSTER=<management-cluster-name>
export MGMT_CONTEXT=<management-cluster-context>
```
Add and update the Helm repository for Gloo.
```sh
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update
```
Install the Gloo CRDs.
```sh
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
  --namespace=gloo-mesh \
  --create-namespace \
  --version=$GLOO_VERSION \
  --kube-context $MGMT_CONTEXT
```
- Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo management plane installation.
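A minimal sketch of what such a values file might contain, assuming default key names from the `gloo-platform` Helm chart; compare it against the profile in the Gloo documentation before you use it:

```yaml
# mgmt-plane-values.yaml (sketch): minimal management plane settings.
# Key names assume gloo-platform chart defaults; verify with
# 'helm show values gloo-platform/gloo-platform --version $GLOO_VERSION'.
licensing:
  glooGatewayLicenseKey: <license_key>
common:
  cluster: <management-cluster-name>
glooMgmtServer:
  enabled: true
  serviceType: LoadBalancer
glooUi:
  enabled: true
prometheus:
  enabled: true
redis:
  deployment:
    enabled: true
telemetryGateway:
  enabled: true
  service:
    type: LoadBalancer
```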
Decide how you want to secure the relay connection between the Gloo management server and agents. In test and POC environments, you can use self-signed certificates to secure the connection. If you plan to use Gloo Mesh Gateway in production, it is recommended to bring your own certificates instead. For more information, see Setup options.
Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.

For more information about the settings you can configure:
- See Best practices for production.
- See all possible fields for the Helm chart by running `helm show values gloo-platform/gloo-platform --version v2.6.6 > all-values.yaml`. You can also see these fields in the Helm values documentation.
| Field | Description |
| ----- | ----------- |
| `glooInsightsEngine.enabled` | Enable the Gloo insights engine, which is recommended to help you improve the security and observability of your environment by creating actionable Istio insights. |
| `glooMgmtServer.resources.limits` | Add resource limits for the `gloo-mesh-mgmt-server` pod, such as `cpu: 1000m` and `memory: 1Gi`. |
| `glooMgmtServer.safeMode` and `glooMgmtServer.safeStartWindow` | Configure how you want the Gloo management server to handle translation after a Redis restart. For available options, see Redis safe mode options. |
| `glooMgmtServer.serviceOverrides.metadata.annotations` | Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides. |
| `glooUi.auth` | Set up OIDC authorization for the Gloo UI. For more information, see UI authentication. |
| `prometheus.enabled` | Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from the server. For more information on each option, see Best practices for collecting metrics in production. |
| `redis` | Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases. |
| OpenShift: `glooMgmtServer.serviceType` and `telemetryGateway.service.type` | In some OpenShift setups, you might not use load balancer service types. You can set these two service types to `ClusterIP`, and expose them by using OpenShift routes after installation. |

Use the customizations in your Helm values file to install the Gloo management plane components in your management cluster.
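For example, the install command might look like the following sketch, assuming your customizations are saved in a file named `mgmt-plane-values.yaml`:

```sh
# Sketch only: adjust the values file name and settings to your environment.
helm upgrade -i gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --version $GLOO_VERSION \
  --values mgmt-plane-values.yaml \
  --set common.cluster=$MGMT_CLUSTER \
  --set licensing.glooGatewayLicenseKey=$GLOO_MESH_GATEWAY_LICENSE_KEY \
  --kube-context $MGMT_CONTEXT
```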
Verify that the management plane pods have a status of `Running`.

```sh
kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
```
Example output:
```
NAME                                      READY   STATUS    RESTARTS   AGE
gloo-mesh-mgmt-server-56c495796b-cx687    1/1     Running   0          30s
gloo-mesh-redis-8455d49c86-f8qhw          1/1     Running   0          30s
gloo-mesh-ui-65b6b6df5f-bf4vp             3/3     Running   0          30s
gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          30s
gloo-telemetry-gateway-6547f479d5-r4zm6   1/1     Running   0          30s
prometheus-server-57cd8c74d4-2bc7f        2/2     Running   0          30s
```
Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each workload cluster send metrics to this address.
Save the external address and port that your cloud provider assigned to the `gloo-mesh-mgmt-server` service. The `gloo-mesh-agent` in each workload cluster accesses this address via a secure connection.
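For example, you might save the address as follows; the `grpc` port name is an assumption based on a default service definition:

```sh
# Assumes the default 'gloo-mesh-mgmt-server' service with a port named 'grpc'.
export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
export MGMT_SERVER_NETWORKING_PORT=$(kubectl get svc -n gloo-mesh gloo-mesh-mgmt-server --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
export RELAY_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
```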
Data plane
Register each workload cluster with the Gloo management plane by deploying Gloo data plane components. A deployment named `gloo-mesh-agent` runs the Gloo agent in each workload cluster.
For the workload cluster that you want to register with Gloo, set the following environment variables. You update these variables each time you follow these steps to register another workload cluster.

```sh
export REMOTE_CLUSTER=<workload_cluster_name>
export REMOTE_CONTEXT=<workload_cluster_context>
```
In the management cluster, create a `KubernetesCluster` resource to represent the workload cluster and store relevant data, such as the workload cluster's local domain.

```sh
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: KubernetesCluster
metadata:
  name: ${REMOTE_CLUSTER}
  namespace: gloo-mesh
spec:
  clusterDomain: cluster.local
EOF
```
In your workload cluster, apply the Gloo CRDs. Note: If you plan to manually install gateway proxies rather than using Solo's gateway lifecycle manager, include the `--set installIstioOperator=false` flag to ensure that the Istio operator CRD is not managed by this Gloo CRD Helm release.

```sh
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
  --namespace=gloo-mesh \
  --create-namespace \
  --version=$GLOO_VERSION \
  --kube-context $REMOTE_CONTEXT
```
- Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a Gloo data plane installation.
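A minimal sketch of what a data plane values file might contain, again assuming default key names from the `gloo-platform` chart, such as `glooAgent.relay.serverAddress`; compare it against the profile in the Gloo documentation:

```yaml
# data-plane-values.yaml (sketch): minimal agent settings.
# Key names assume gloo-platform chart defaults; verify with
# 'helm show values gloo-platform/gloo-platform --version $GLOO_VERSION'.
common:
  cluster: <workload_cluster_name>
glooAgent:
  enabled: true
  relay:
    serverAddress: <relay_address>   # management server address and port that you saved earlier
telemetryCollector:
  enabled: true
  config:
    exporters:
      otlp:
        endpoint: <telemetry_gateway_address>
```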
Depending on the method you chose to secure the relay connection, prepare the Helm values for the data plane installation. For more information, see the Setup options.
Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.

For more information about the settings you can configure:
- See Best practices for production.
- See all possible fields for the Helm chart by running `helm show values gloo-platform/gloo-platform --version v2.6.6 > all-values.yaml`. You can also see these fields in the Helm values documentation.
| Field | Description |
| ----- | ----------- |
| `glooAgent.resources.limits` | Add resource limits for the `gloo-mesh-agent` pod, such as `cpu: 500m` and `memory: 512Mi`. |
| `glooAnalyzer.enabled` | Enable the Gloo insights analyzer, which is recommended to help you improve the security and observability of your environment by creating actionable Istio insights. |
| `extAuthService.enabled` | Set to `true` to install the external auth server add-on. |
| `rateLimiter.enabled` | Set to `true` to install the rate limit server add-on. |

Use the customizations in your Helm values file to install the Gloo data plane components in your workload cluster.
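For example, the install command might look like the following sketch, assuming your customizations are saved in a file named `data-plane-values.yaml`:

```sh
# Sketch only: adjust the values file name and settings to your environment.
helm upgrade -i gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --version $GLOO_VERSION \
  --values data-plane-values.yaml \
  --set common.cluster=$REMOTE_CLUSTER \
  --set licensing.glooGatewayLicenseKey=$GLOO_MESH_GATEWAY_LICENSE_KEY \
  --kube-context $REMOTE_CONTEXT
```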
Verify that the Gloo data plane component pods are running. If not, try debugging the agent.
```sh
meshctl check --kubecontext $REMOTE_CONTEXT
```
Example output:
```
🟢 Gloo deployment status
Namespace | Name                           | Ready | Status
gloo-mesh | ext-auth-service               | 1/1   | Healthy
gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
gloo-mesh | rate-limiter                   | 1/1   | Healthy
```
Repeat steps 1 - 8 to register each workload cluster with Gloo.
Verify that your Gloo Mesh Gateway setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:
- Your Gloo product licenses are valid and current.
- The Gloo CRDs are installed at the correct version.
- The management plane pods in the management cluster are running and healthy.
- The agents in the workload clusters are successfully identified by the management server.
```sh
meshctl check --kubecontext $MGMT_CONTEXT
```
Example output:
```
🟢 License status
 INFO  gloo-gateway enterprise license expiration is 25 Aug 24 10:38 CDT

🟢 CRD version check

🟢 Gloo deployment status
Namespace | Name                           | Ready | Status
gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
gloo-mesh | gloo-telemetry-gateway         | 1/1   | Healthy
gloo-mesh | prometheus-server              | 1/1   | Healthy

🟢 Mgmt server connectivity to workload agents
Cluster  | Registered | Connected Pod
cluster1 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6
cluster2 | true       | gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6

Connected Pod                                    | Clusters
gloo-mesh/gloo-mesh-mgmt-server-65bd557b95-v8qq6 | 2
```
Gateway proxies
Deploy gateway proxies in each workload cluster.
- To deploy managed gateway installations, see Install gateway proxies by using the Istio and Gateway Lifecycle Manager.
- To instead manage Istio gateways yourself, see Manually install gateway proxies.
Install gateway proxies by using the Istio and Gateway Lifecycle Manager
Streamline the gateway installation process by using the Gloo management plane to install Istio gateways in your clusters, as part of the Istio lifecycle management. By using a Gloo-managed installation, you no longer need to use `istioctl` to individually install the Istio control plane and gateways. Instead, you can supply IstioOperator configurations in Gloo resources. Gloo translates this configuration into an Istio control plane and gateway proxy in the cluster.
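For orientation, an `IstioLifecycleManager` resource might be shaped like the following sketch; the field names follow the `admin.gloo.solo.io/v2` API, but treat the exact layout as an assumption and rely on the example files that you download later in this guide for authoritative configuration:

```yaml
# Sketch of an IstioLifecycleManager shape; see the istiod.yaml example file
# later in this guide for the authoritative configuration.
apiVersion: admin.gloo.solo.io/v2
kind: IstioLifecycleManager
metadata:
  name: istiod-control-plane
  namespace: gloo-mesh
spec:
  installations:
    - revision: 1-22            # omit for a revisionless test installation
      clusters:
        - name: cluster1        # one entry per workload cluster
          defaultRevision: true
        - name: cluster2
          defaultRevision: true
      istioOperatorSpec:        # standard IstioOperator configuration
        profile: minimal
        hub: <repo-key>         # repo key for the Solo distribution of Istio
        tag: 1.22.5-patch0-solo
```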
Before you begin, review the following considerations for using the Istio lifecycle manager.
- Throughout this guide, you use example configuration files that have pre-filled values. You can update some of the values, but unexpected behaviors might occur. For example, if you change the default `istio-ingressgateway` name, you cannot also use Kubernetes horizontal pod autoscaling. For more information, see the Troubleshooting docs.
- If you plan to run multiple revisions of Istio in your cluster and use `discoverySelectors` in each revision to discover the resources in specific namespaces, enable the `glooMgmtServer.extraEnvs.IGNORE_REVISIONS_FOR_VIRTUAL_DESTINATION_TRANSLATION` environment variable on the Gloo management server. For more information, see Multiple Istio revisions in the same cluster.
- If your organization restricts elevated Kubernetes RBAC permissions for security reasons, you might need to install the Istio CNI plug-in. The OpenShift steps provide an example. For more information, see the Istio docs.
- In multicluster setups, one gateway proxy for north-south traffic is deployed to each workload cluster. To learn about your gateway options, such as creating a global load balancer to route to each gateway IP address or registering each gateway IP address in one DNS entry, see the gateway deployment patterns page.
Istio 1.22 is supported only as patch version `1.22.1-patch0` and later. Do not use patch versions 1.22.0 and 1.22.1, which contain bugs that impact several Gloo Mesh Gateway routing features that rely on virtual destinations. Additionally, in Istio 1.22.0-1.22.3, the `ISTIO_DELTA_XDS` environment variable must be set to `false`. For more information, see this upstream Istio issue. Note that this issue is resolved in Istio 1.22.4.
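If you must run one of the affected 1.22 patch versions, you can set the variable through the standard IstioOperator values overrides in your `IstioLifecycleManager` configuration; the exact field placement below is an assumption based on upstream IstioOperator conventions:

```yaml
# Sketch: disable delta xDS on istiod for Istio 1.22.0-1.22.3.
istioOperatorSpec:
  values:
    pilot:
      env:
        ISTIO_DELTA_XDS: "false"
```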
Gloo Mesh Gateway version 2.6 supports Istio version 1.21. However, a bug was identified when upgrading from Istio version 1.20 or lower to Istio version 1.21 and later while being on Gloo Mesh Gateway version 2.6. This bug can lead to disabled JWT authentication and authorization policies that fail close, which means that the gateway rejects requests as unauthenticated on any route that is protected by a JWT policy. Note that this bug will be fixed in a future 2.6 patch release. Do not upgrade to Istio version 1.21 and later until this patch is available. For more information, see the release notes.
Istio 1.20 is supported only as patch version `1.20.1-patch1` and later. Do not use patch versions 1.20.0 and 1.20.1, which contain bugs that impact several Gloo Mesh Gateway features that rely on Istio ServiceEntries.
If you have multiple external services that use the same host and plan to use Istio 1.20, 1.21, or 1.22, you must use patch versions 1.20.7, 1.21.3, or 1.22.1-patch0 or later to ensure that the Istio service entry that is created for those external services is correct.
istiod control planes

Prepare an `IstioLifecycleManager` CR to manage `istiod` control planes.
Review Supported versions to choose the Solo distribution of Istio that you want to use, and save the version information in the following environment variables.

- `REPO`: The repo key for the Solo distribution of Istio, which you can get by logging in to the Support Center and reviewing the "Istio images built by Solo.io" support article.
- `ISTIO_IMAGE`: The version that you want to use with the `solo` tag, such as `1.22.5-patch0-solo`. You can optionally append other tags of Solo distributions of Istio as needed.
- `REVISION`: Take the Istio major and minor versions and replace the periods with hyphens, such as `1-22`.

For testing environments only, you can deploy a revisionless installation. Revisionless installations permit in-place upgrades, which are quicker than the canary-based upgrades that revisioned installations require. To omit a revision, do not set a revision environment variable. Then in the following sections, edit the sample `IstioLifecycleManager` and `GatewayLifecycleManager` files that you download to remove the `revision` and `gatewayRevision` fields. Note that if you deploy multiple Istio installations in the same cluster, only one installation can be revisionless.

```sh
export REPO=<repo-key>
export ISTIO_IMAGE=1.22.5-patch0-solo
export REVISION=1-22
```
Download the example file, `istiod.yaml`, which contains a basic `IstioLifecycleManager` configuration for the control plane.

Update the example file with the environment variables that you previously set, and save the updated file as `istiod-values.yaml`. For example, you can run a terminal command to substitute values:

```sh
envsubst < istiod.yaml > istiod-values.yaml
```

Verify that the configuration is correct. For example, in `spec.installations.clusters`, verify that entries exist for each workload cluster name. You can also further edit the file to provide your own details. For more information, see the API reference.

```sh
open istiod-values.yaml
```

For testing environments only, you can deploy a revisionless installation by removing the `revision` fields.

Apply the `IstioLifecycleManager` CR to your management cluster.

```sh
kubectl apply -f istiod-values.yaml --context $MGMT_CONTEXT
```
In each workload cluster, verify that the Istio pods have a status of `Running`.

```sh
kubectl get pods -n istio-system --context $REMOTE_CONTEXT1
kubectl get pods -n istio-system --context $REMOTE_CONTEXT2
```
Example output:
```
NAME                          READY   STATUS    RESTARTS   AGE
istiod-1-22-b65676555-g2vmr   1/1     Running   0          47s

NAME                          READY   STATUS    RESTARTS   AGE
istiod-1-22-7b96cb895-4nzv9   1/1     Running   0          43s
```
Ingress gateways
Prepare a `GatewayLifecycleManager` custom resource to manage the ingress gateways.
Download the `gm-ingress-gateway.yaml` example file.

```sh
curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ingress-gateway.yaml > gm-ingress-gateway.yaml
```
Update the example file with the environment variables that you previously set, and save the updated file as `ingress-gateway-values.yaml`. For example, you can run a terminal command to substitute values:

```sh
envsubst < gm-ingress-gateway.yaml > ingress-gateway-values.yaml
```

Verify that the configuration is correct. You can also further edit the file to provide your own settings. For more information, see the API reference.

```sh
open ingress-gateway-values.yaml
```

- You can add cloud provider-specific load balancer annotations to the `istioOperatorSpec.components.ingressGateways.k8s` section, such as the following AWS annotations:

  ```yaml
  ...
        k8s:
          service:
            ...
          serviceAnnotations:
            service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
            service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
            service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
            service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
            service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<cert>"
            service.beta.kubernetes.io/aws-load-balancer-type: external
  ```

- For testing environments only, you can deploy a revisionless installation by removing the `gatewayRevision` field.
Apply the `GatewayLifecycleManager` CR to your management cluster.

```sh
kubectl apply -f ingress-gateway-values.yaml --context $MGMT_CONTEXT
```
In each workload cluster, verify that the ingress gateway pod is running.
```sh
kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
```
Example output for one cluster:
```
NAME                                    READY   STATUS    RESTARTS   AGE
istio-ingressgateway-665d46686f-nhh52   1/1     Running   0          106s
```
In each workload cluster, verify that the load balancer service has an external address.
```sh
kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
```
Example output for one cluster:
```
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                      AGE
istio-ingressgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
```
AWS clusters: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the ingress gateway service, verify that the health check shows a healthy state. Gloo configures the ingress gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.

Optional for OpenShift: Expose the gateways by using OpenShift routes.

```sh
oc -n gloo-mesh-gateways expose svc istio-ingressgateway --port=http2 --context $REMOTE_CONTEXT1
oc -n gloo-mesh-gateways expose svc istio-ingressgateway --port=http2 --context $REMOTE_CONTEXT2
```
East-west gateways
Deploy an Istio east-west gateway into each cluster in addition to the ingress gateway. In Gloo Mesh Gateway, the east-west gateways allow the ingress gateway in one cluster to route incoming traffic requests to services in another cluster.
When Gloo Mesh Gateway routes incoming requests across clusters through the east-west gateway, the communication from Gloo Mesh Gateway to the east-west gateway is secured with mTLS. However, when your app is deployed without Istio sidecars, the east-west gateway uses plaintext to route the request to your app. To secure communication to your apps with mTLS instead, consider using Gloo Mesh Enterprise alongside Gloo Mesh Gateway to set up an Istio service mesh for your workloads.

Additionally, cross-cluster routing through the east-west gateway in Gloo Mesh Gateway is supported only for incoming requests from a client that is external to your cluster environment. You can use Gloo Mesh Enterprise to also route from service to service within your cluster environment by using mTLS connections through the east-west gateway.
Download the example file, `ew-gateway.yaml`, which contains a basic `GatewayLifecycleManager` configuration for an east-west gateway.

```sh
curl -0L https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/istio-install/gm-managed/gm-ew-gateway.yaml > ew-gateway.yaml
```
Update the example file with the environment variables that you previously set, and save the updated file as `ew-gateway-values.yaml`. For example, you can run a terminal command to substitute values:

```sh
envsubst < ew-gateway.yaml > ew-gateway-values.yaml
```

Verify that the configuration is correct. You can also further edit the file to provide your own settings. For more information, see the API reference.

```sh
open ew-gateway-values.yaml
```

- For testing environments only, you can deploy a revisionless installation by removing the `revision` field.

Apply the `GatewayLifecycleManager` CR to your management cluster.

```sh
kubectl apply -f ew-gateway-values.yaml --context $MGMT_CONTEXT
```
In each workload cluster, verify that the east-west gateway pod is running.
```sh
kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
kubectl get pods -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
```
Example output for one cluster:
```
NAME                                     READY   STATUS    RESTARTS   AGE
istio-eastwestgateway-665d46686f-nhh52   1/1     Running   0          106s
```
In each workload cluster, verify that the load balancer service has an external address.
```sh
kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT1
kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT2
```
Example output for one cluster:
```
NAME                    TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                      AGE
istio-eastwestgateway   LoadBalancer   10.96.252.49   <externalip>   15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
```
AWS clusters: For the Elastic Load Balancer (ELB) instance that is automatically created for you to back the east-west gateway service, verify that the health check shows a healthy state. Gloo configures the gateway to listen on HTTPS port 15443. However, when the ELB is created, the first port that is defined in the Kubernetes service manifest is used to perform the health check. This port might be different from the port that Gloo configures. For your ELB health check to pass, you might need to configure the load balancer to run the health check on port 15443.
Optional: Configure the locality labels for the nodes
Gloo Mesh Gateway uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.
- Cloud: Typically, your cloud provider sets the Kubernetes `region` and `zone` labels for each node automatically. Depending on the level of availability that you want, you might have clusters in the same region, but different zones. Or, each cluster might be in a different region, with nodes spread across zones.
- On-premises: Depending on how you set up your cluster, you likely must set the `region` and `zone` labels for each node yourself. Additionally, consider setting a `subzone` label to specify nodes on the same rack or other more granular setups.
Verify that your nodes have at least `region` and `zone` labels.

```sh
kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'
```

Example output with `region` and `zone` labels:

```
..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"
```
- If your nodes have at least `region` and `zone` labels, and you do not want to update the labels, you can skip the remaining steps.
- If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same `region` label to each node, but a separate `zone` label per node. The values are not validated against your underlying infrastructure provider. The following steps show how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.
Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the `--overwrite` flag in the command.

```sh
kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west
```
List the nodes in each cluster. Note the name for each node.
```sh
kubectl get nodes --context $REMOTE_CONTEXT1
kubectl get nodes --context $REMOTE_CONTEXT2
```
Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the `--overwrite` flag in the command.

```sh
kubectl label node <cluster1_node-1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
kubectl label node <cluster1_node-2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
kubectl label node <cluster1_node-3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3
kubectl label node <cluster2_node-1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
kubectl label node <cluster2_node-2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
kubectl label node <cluster2_node-3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
```
Next steps
Now that you have Gloo Mesh Gateway up and running, check out some of the following resources to learn more about your API Gateway and expand your routing and network capabilities.
Traffic management:
- Deploy sample apps in your cluster to follow the guides in the documentation.
- Configure HTTP or HTTPS listeners for your gateway.
- Review routing examples, such as header matching, redirects, or direct responses that you can configure for your API Gateway.
- Explore traffic management policies that you can apply to your routes and upstream services. For example, you might apply the proxy protocol policy to your API Gateway so that it preserves connection information such as the originating client IP address.
Gloo Mesh Gateway:
- Monitor and observe your environment with Gloo Mesh Gateway’s built-in telemetry tools.
- Apply Gloo policies to manage the security and resiliency of your service mesh environment.
- Organize team resources with workspaces.
- When it’s time to upgrade Gloo Mesh Gateway, see the upgrade guide.
Help and support:
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community Slack.
- Try out one of the Gloo workshops.