Review the following minimum requirements and other recommendations for your Gloo Gateway setup.
Number of clusters
Single-cluster: Gloo Gateway is fully functional when the control plane (management server) and data plane (agent and gateway proxy) both run within the same cluster. You can easily install both the control and data plane components by using one installation process. If you choose to install the components in separate processes, ensure that you use the same name for the cluster during both processes. Many guides throughout the documentation use one cluster as an example setup.
Multicluster: A multicluster Gloo Gateway setup consists of one management cluster that the Gloo Gateway control plane (management server) is installed in, and one or more workload clusters that serve as the data plane (agent and gateway proxies). By running the control plane in a dedicated management cluster, you can ensure that no workload pods consume cluster resources that might impede management processes. Because many guides throughout the documentation use one cluster as an example setup instead of a multicluster environment, consider applying resources in your management cluster instead of individual workload clusters. For more information, see Workspace configuration in the management plane.
Review the following recommendations and considerations when creating clusters for your Gloo environment.
For any clusters that you plan to register as workload clusters: The cluster name must be lowercase and alphanumeric, with no special characters except a hyphen (-), and must begin with a letter, not a number.
Additionally, cluster context names cannot include underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SANs are not FQDN compliant. You can rename a context by running `kubectl config rename-context "<oldcontext>" <newcontext>`.
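As a quick sanity check, the naming rules above can be sketched in shell. The helper function names are illustrative only, not part of Gloo:

```shell
# Minimal sketch: validate proposed cluster and context names against the
# rules above before registering a workload cluster.

# Cluster names: lowercase alphanumeric plus hyphens, starting with a letter.
is_valid_cluster_name() {
  echo "$1" | grep -Eq '^[a-z][a-z0-9-]*$'
}

# Context names: no underscores, because the context name becomes a SAN in
# the generated certificate and underscores are not FQDN compliant.
is_valid_context_name() {
  case "$1" in
    *_*) return 1 ;;
    *)   return 0 ;;
  esac
}

is_valid_cluster_name "cluster-1"      && echo "cluster-1: ok"
is_valid_cluster_name "1cluster"       || echo "1cluster: rejected"
is_valid_context_name "gke_my-proj_us" || echo "context with underscores: rejected"
```

Running a check like this before registration is cheaper than debugging a failed certificate handshake after the fact.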
Throughout the guides in this documentation, a single-cluster setup and a three-cluster setup are used as examples.
- Single-cluster: When an example name is required, the name `mgmt` is used. Otherwise, you can save the name of your cluster in the `$CLUSTER_NAME` environment variable, and the context of your cluster in the `$MGMT_CONTEXT` environment variable.
- Multicluster: When example names are required, the names `mgmt`, `cluster1`, and `cluster2` are used. Otherwise, you can save the names of your clusters in the `$MGMT_CLUSTER`, `$REMOTE_CLUSTER1`, and `$REMOTE_CLUSTER2` environment variables, and the contexts of your clusters in the `$MGMT_CONTEXT`, `$REMOTE_CONTEXT1`, and `$REMOTE_CONTEXT2` environment variables.
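For example, you might set the environment variables for a three-cluster setup as follows. The cluster and context values are placeholders to substitute with your own; the companion variable names beyond `$REMOTE_CLUSTER2` are assumptions that follow the same naming convention:

```shell
# Example only: placeholder names for a three-cluster setup. Substitute the
# names and contexts of your own clusters.
export MGMT_CLUSTER=mgmt
export REMOTE_CLUSTER1=cluster1
export REMOTE_CLUSTER2=cluster2
export MGMT_CONTEXT=mgmt-context
export REMOTE_CONTEXT1=cluster1-context
export REMOTE_CONTEXT2=cluster2-context

# To find the right context values for your clusters, you can run:
#   kubectl config get-contexts -o name
echo "$REMOTE_CLUSTER2"
```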
Gloo Gateway is supported on the following platforms:
- OpenShift: Some changes are required to allow the Istio ingress gateway to run on OpenShift clusters. To make these changes, use commands throughout the installation guides that are labeled for use with OpenShift. For more information, see Installation options.
Note that in multicluster setups, you can use both Kubernetes and OpenShift clusters.
Note: Be sure to verify that your cluster's Kubernetes or OpenShift version is supported for Gloo Platform.
Size and memory
For a minimum Gloo Gateway setup, the following sizes are recommended:
- Management cluster nodes - 2 vCPU and 8 GB memory
- Workload cluster nodes (for a multicluster setup) - 2 vCPU and 8 GB memory
For a more robust Gloo Gateway setup, the following sizes are recommended:
- Management cluster nodes - 2 vCPU and 8 GB memory
- Workload cluster nodes (for a multicluster setup) - 4 vCPU and 16 GB memory
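You can compare your node capacity against these recommendations with a small shell check. The live `kubectl` query is shown as a comment; the numeric comparison itself is a sketch:

```shell
# Sketch: check a node's capacity against the minimum recommendation of
# 2 vCPU and 8 GB memory per node.
meets_minimum() {
  cpu="$1"    # vCPU count
  mem_gb="$2" # memory in GB
  [ "$cpu" -ge 2 ] && [ "$mem_gb" -ge 8 ]
}

# On a live cluster, list node capacity with:
#   kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,MEM:.status.capacity.memory'

meets_minimum 2 8 && echo "2 vCPU / 8 GB: ok"
meets_minimum 1 4 || echo "1 vCPU / 4 GB: undersized"
```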
Gloo Platform offers n-3 version support. Within each Gloo Platform version, different open source project versions are supported, including Solo Istio n-4 version support.
The following versions of Gloo Platform are supported with the compatible open source project versions of Istio and Kubernetes. Later versions of the open source projects that are released after Gloo Platform might also work, but are not tested as part of the Gloo Platform release.
| Gloo Platform | Release date | Solo Istio | Kubernetes |
|---------------|--------------|------------|------------|
| 2.4 | 28 Aug 2023 | 1.14 - 1.18 | 1.21 - 1.26 |
| 2.3 | 17 Apr 2023 | 1.14 - 1.18 | 1.20 - 1.25 |
| 2.2 | 20 Jan 2023 | 1.14 - 1.18 | 1.20 - 1.24 |
| 2.1 | 21 Oct 2022 | 1.13 - 1.16 | 1.18 - 1.23 |
Keep in mind that Gloo Platform offers n-4 security patching support only with Solo Istio versions, not community Istio versions. Solo Istio versions support the same patch versions as community Istio. You can review community Istio patch versions in the Istio release documentation. You must run the latest Gloo Platform patch version to get the backported Istio support.
Supported Istio versions by Kubernetes or OpenShift version
The supported versions of Istio and of Kubernetes or OpenShift depend on each other. For example, if you plan to use Gloo Platform with Istio 1.15, you must make sure that you use a Kubernetes or OpenShift version that is compatible with Istio 1.15. The same is true in reverse: if you decide on a specific Kubernetes or OpenShift version first, you must find an Istio version that is compatible with it.
To find a list of supported Kubernetes versions in Istio, see the Istio docs. For supported OpenShift versions, go to the OpenShift knowledgebase (requires login).
Known Istio issues
- The `WasmDeploymentPolicy` Gloo CR is currently unsupported in Istio version 1.18.
- Istio versions 1.17 and later and Gloo Platform 2.4 and later do not support the Gloo legacy metrics pipeline. If you run the legacy metrics pipeline, be sure that you set up the Gloo OpenTelemetry (OTel) pipeline instead in your new or existing Gloo Gateway installation.
- For FIPS-compliant builds of Istio 1.17.2 and 1.16.4, you must use the `-patch1` versions of the latest Istio builds published by Solo, such as `1.17.2-patch1-solo-fips` for Solo Istio version 1.17. These patch versions fix a FIPS-related issue introduced in the upstream Envoy code. In 1.17.3 and later, FIPS compliance is available in the `-fips` tags of regular Solo Istio builds, such as `1.17.3-solo-fips`.
- Istio versions 1.14.0 - 1.14.3 have a known issue about unused endpoints failing to be deleted. Additionally, version 1.14.4 has a known issue about short hostnames causing Kubernetes service and ServiceEntry conflicts. Both issues are resolved in Istio 1.14.5.
Additionally, the following Gloo Platform features require specific versions.
| Gloo Platform feature | Required versions |
|-----------------------|-------------------|
| Gloo-managed Istio installations (Istio and gateway lifecycle manager) | Gloo Platform 2.1.0 or later, and Istio version 1.15.4 or later |
| Verification of Gloo Platform Helm charts | Gloo Platform 2.3.1 or later |
| GraphQL add-on | Gloo Platform version 2.1.0 or later, and Istio version 1.16.1 or later |
| AWS Lambda default request and response transformations | Istio version 1.15.1 or later |
For more information, see Supported versions.
Load balancer connectivity
To test access to the ingress gateway proxy in your Gloo Gateway environment, ensure that your cluster setup enables you to externally access LoadBalancer services on the workload clusters.
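For example, you might verify connectivity with a quick check like the following. The namespace and service name (`gloo-mesh-gateways`, `istio-ingressgateway`) are assumptions based on a default installation; adjust them to match yours.

```shell
# Cloud providers report a LoadBalancer address as either an IP or a
# hostname; this helper returns whichever is set.
lb_address() {
  if [ -n "$1" ]; then echo "$1"; else echo "$2"; fi
}

# Against a live cluster (requires kubectl access; names are assumptions):
#   IP=$(kubectl get svc istio-ingressgateway -n gloo-mesh-gateways \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
#   HOST=$(kubectl get svc istio-ingressgateway -n gloo-mesh-gateways \
#     -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
#   curl -s -o /dev/null -w "%{http_code}\n" "http://$(lb_address "$IP" "$HOST")/"

lb_address "203.0.113.10" ""    # prints the IP when it is set
lb_address "" "lb.example.com"  # falls back to the hostname
```

Checking both fields matters because some providers (such as AWS) populate only the hostname field for LoadBalancer services.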
Port and repo access from cluster networks
If you have restrictions for your cluster networks in your cloud infrastructure provider, you must open ports, protocols, and image repositories to install Gloo and Gloo Gateway, and to allow your Gloo installation to communicate with the Gloo Gateway APIs. For example, you might have firewalls set up on the public network of your clusters so that they do not have default access to all public endpoints. The following sections detail the required and optional ports and repositories that your management and workload clusters must access.
Required: In your firewall or network rules for the management cluster, open the following required ports and repositories.
| Name | Port | Protocol | Source | Destination | Network | Description |
|------|------|----------|--------|-------------|---------|-------------|
| Management server images | - | - | IP addresses of management cluster nodes | - | Public | Allow the management server images to be pulled into the management cluster. |
| Redis image | - | - | IP addresses of management cluster nodes | - | Public | Allow the Redis image to be installed in the management cluster to store OIDC ID tokens for the Gloo UI. |
| Agent communication | 9900 | TCP | ClusterIPs of agents on workload clusters | IP addresses of management cluster nodes | Cluster network | Allow the agents on workload clusters to connect to the management server. |
Optional: In your firewall or network rules for the management cluster, open the following optional ports as needed.
| Name | Port | Protocol | Source | Destination | Network | Description |
|------|------|----------|--------|-------------|---------|-------------|
| Healthchecks | 8090 | TCP | Check initiator | IP addresses of management cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the cluster | Allow healthchecks to the management server. |
| Prometheus | 9091 | TCP | Scraper | IP addresses of management cluster nodes | Public | Scrape your Prometheus metrics from a different server, or a similar metrics setup. |
| OpenTelemetry gateway | 4317 | TCP | OpenTelemetry agent | IP addresses of management cluster nodes | Public | Collect telemetry data, such as metrics, logs, and traces, to show in Gloo observability tools. |
| Other tools | - | - | - | - | Public | For any other tools that you use in your Gloo Gateway environment, consult the tool's documentation to ensure that you allow the correct ports. |
Required: In your firewall or network rules for the workload clusters, open the following required ports and repositories.
| Name | Port | Protocol | Source | Destination | Network | Description |
|------|------|----------|--------|-------------|---------|-------------|
| Agent image | - | - | IP addresses of workload cluster nodes | - | Public | Allow the agent image to be pulled into the workload cluster. |
| Ingress gateway | 80 and/or 443 | HTTP, HTTPS | - | Gateway load balancer IP address | Public or private network | Allow incoming traffic requests to the ingress gateway proxy. |
| East-west gateway | 15443 | TCP | Node IP addresses of other workload clusters | Gateway load balancer IP address on one workload cluster | Cluster network | Only applicable if Gloo Gateway is used in combination with Gloo Mesh Enterprise: Allow services in one workload cluster to access the east-west gateway for services in another cluster. Repeat this rule for the east-west gateway on each workload cluster. Note that this port can be customized in your east-west gateway configuration. |
Optional: In your firewall or network rules for the workload clusters, open the following optional ports as needed.
| Name | Port | Protocol | Source | Destination | Network | Description |
|------|------|----------|--------|-------------|---------|-------------|
| Agent healthchecks | 8090 | TCP | Check initiator | IP addresses of workload cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the cluster | Allow healthchecks to the Gloo agent. |
| Istio Pilot | 15017 | HTTPS | IP addresses of workload cluster nodes | - | Public | Depending on your cloud provider, you might need to open ports for Istio to be installed. For example, in GKE clusters, you must open port 15017 for the Pilot discovery validation webhook. For more ports and requirements, see Ports used by Istio. |
| Istio healthchecks | 15021 | HTTP | Check initiator | IP addresses of workload cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the cluster | Allow healthchecks on path `/healthz/ready`. |
| Envoy telemetry | 15090 | HTTP | Scraper | IP addresses of workload cluster nodes | Public | Scrape your Prometheus metrics from a different server, or a similar metrics setup. |
Port and repo access from local systems
If corporate network policies prevent access from your local system to public endpoints via proxies or firewalls:
- Allow access to `https://run.solo.io/meshctl/install` to install the `meshctl` CLI.
- Allow access to the Gloo Gateway Helm repository, `https://storage.googleapis.com/gloo-platform/helm-charts`, to install Gloo Gateway via the `helm` CLI.
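Once those endpoints are reachable, the corresponding setup commands look like the following sketch. The network-dependent commands are shown as comments so you can adapt them to your proxy configuration:

```shell
# Endpoints that must be reachable from your local system.
MESHCTL_INSTALL_URL="https://run.solo.io/meshctl/install"
GLOO_HELM_REPO="https://storage.googleapis.com/gloo-platform/helm-charts"

# Install the meshctl CLI (downloads a script from run.solo.io):
#   curl -sL "$MESHCTL_INSTALL_URL" | sh
# Add the Gloo Platform Helm repository:
#   helm repo add gloo-platform "$GLOO_HELM_REPO"
echo "$GLOO_HELM_REPO"
```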
Reserved ports and pod requirements
Review the following platform docs that outline which ports are reserved, so that you do not use these ports for other functions in your apps. Other services that you use, such as a database or application monitoring tool, might reserve additional ports.
Request size limit
The maximum request payload size that can be sent to the Gloo Gateway proxy is 1 MiB.