System requirements
Review the following minimum requirements and other recommendations for your Gloo Mesh Enterprise setup.
Number of clusters
Single-cluster: Gloo Mesh is fully functional when the control plane (management server) and data plane (agent and service mesh) both run within the same cluster. You can easily install both the control and data plane components by using one installation process. If you choose to install the components in separate processes, ensure that you use the same name for the cluster during both processes.
Multicluster: A multicluster Gloo Mesh setup consists of one management cluster that the Gloo Mesh control plane (management server) is installed in, and one or more workload clusters that serve as the data plane (agent and service mesh). By running the control plane in a dedicated management cluster, you can ensure that no workload pods consume cluster resources that might impede management processes. Many guides throughout the documentation use one management cluster and two workload clusters as an example setup.
Cluster details
Review the following recommendations and considerations when creating clusters for your Gloo Mesh environment.
Name
For any clusters that you plan to register as workload clusters, the cluster name cannot include underscores (`_`).

Additionally, cluster context names cannot include underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SANs are not FQDN compliant. You can rename a context by running `kubectl config rename-context "<oldcontext>" <newcontext>`.
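For example, contexts that cloud providers generate often contain underscores. The following minimal sketch renames such a context to an underscore-free name; the old context name is hypothetical.

```sh
# Rename an automatically generated context (hypothetical GKE-style name)
# to a short, underscore-free name that is valid as a SAN.
kubectl config rename-context "gke_my-project_us-east1_cluster-1" cluster1

# Verify the new context name.
kubectl config get-contexts
```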
Throughout the guides in this documentation, a single-cluster setup and a three-cluster setup are used as examples.
- Single-cluster: When an example name is required, the name `mgmt` is used. Otherwise, you can save the name of your cluster in the `$CLUSTER_NAME` environment variable, and the context of your cluster in the `$MGMT_CONTEXT` environment variable.
- Multicluster: When example names are required, the names `mgmt`, `cluster1`, and `cluster2` are used. Otherwise, you can save the names of your clusters in the `$MGMT_CLUSTER`, `$REMOTE_CLUSTER1`, and `$REMOTE_CLUSTER2` environment variables, and the contexts of your clusters in the `$MGMT_CONTEXT`, `$REMOTE_CONTEXT1`, and `$REMOTE_CONTEXT2` environment variables.
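For example, the following commands set the environment variables for a multicluster setup. The cluster and context names are placeholders; replace them with the values from your own `kubectl config get-contexts` output.

```sh
# Cluster names as registered with Gloo Mesh (placeholders).
export MGMT_CLUSTER=mgmt
export REMOTE_CLUSTER1=cluster1
export REMOTE_CLUSTER2=cluster2

# Kubeconfig contexts for each cluster (placeholders).
export MGMT_CONTEXT=mgmt
export REMOTE_CONTEXT1=cluster1
export REMOTE_CONTEXT2=cluster2
```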
Size and memory
For a minimum Gloo Mesh setup, the following sizes are recommended:
- Management cluster nodes - 2 vCPU and 8 GB memory
- Workload cluster nodes - 2 vCPU and 8 GB memory
For a more robust Gloo Mesh setup, the following sizes are recommended:
- Management cluster nodes - 2 vCPU and 8 GB memory
- Workload cluster nodes - 4 vCPU and 16 GB memory
Version
Solo supports `n-3` versions for Gloo Platform. Within each Gloo Platform version, different open source project versions are supported, including `n-4` version support for Gloo Istio.
Gloo Platform
The following versions of Gloo Platform are supported with the compatible open source project versions of Istio and Kubernetes. Later versions of the open source projects that are released after Gloo Platform might also work, but are not tested as part of the Gloo Platform release.
| Gloo Platform | Release date | Gloo Istio* | Kubernetes† |
|---|---|---|---|
| 2.3 | 17 Apr 2023 | 1.13 - 1.17 | 1.20 - 1.25 |
| 2.2 | 20 Jan 2023 | 1.13 - 1.16 | 1.19 - 1.24 |
| 2.1 | 21 Oct 2022 | 1.13 - 1.16 | 1.18 - 1.23 |
| 2.0 | 13 May 2022 | 1.9 - 1.13 | 1.17 - 1.23 |
| 1.2 | 04 Nov 2021 | 1.9 - 1.12 | 1.17 - 1.23 |
Gloo Istio
Keep in mind that Gloo Platform offers `n-4` security patching support only with Gloo Istio versions, not community Istio versions. Gloo Istio versions support the same patch versions as community Istio. You can review community Istio patch versions in the Istio release documentation. You must run the latest Gloo Platform patch version to get the backported Istio support.
Supported Istio versions by Kubernetes or OpenShift version
The supported versions of Istio and of Kubernetes or OpenShift depend on each other. For example, if you plan to use Gloo Platform with Istio 1.15, you must make sure that you use a Kubernetes or OpenShift version that is compatible with Istio 1.15. The same is true in reverse: if you already decided on a specific Kubernetes or OpenShift version, you must find an Istio version that is compatible with it.
To find a list of supported Kubernetes versions in Istio, see the Istio docs. For supported OpenShift, go to the OpenShift knowledgebase (requires login).
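Before you compare against these compatibility lists, you can check the versions that you currently run with standard CLI commands, for example:

```sh
# Kubernetes server version of the cluster in your current context.
kubectl version

# Istio control plane and sidecar versions in the cluster, if Istio is installed.
istioctl version
```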
Known Istio issues
- Istio version 1.17 does not support the Gloo legacy metrics pipeline, which is installed as the default metrics pipeline in Gloo Mesh installations. Before you upgrade or install Istio with version 1.17, be sure that you set up the Gloo OpenTelemetry (OTel) pipeline instead in your new or existing Gloo Mesh installation.
- For any FIPS-compliant builds, you must use the `-patch1` versions of the latest Istio versions published by Solo, such as `1.17.2-patch1-solo-fips` for Gloo Istio version 1.17. These patch versions fix a FIPS-related issue introduced in the upstream Envoy code.
- Istio versions 1.14.0 - 1.14.3 have a known issue about unused endpoints failing to be deleted. Additionally, version 1.14.4 has a known issue about short hostnames causing Kubernetes service and ServiceEntry conflicts. Both issues are resolved in Istio 1.14.5.
- Istio versions 1.13.0 - 1.13.3 have a known issue about service entry hostname expansion. The issue is resolved in Istio 1.13.4.
Gloo features
Additionally, the following Gloo Platform features require specific versions.
| Gloo Platform feature | Required versions |
|---|---|
| Gloo-managed Istio installations (Istio and gateway lifecycle manager) | Gloo Platform 2.1.0 or later, and Istio version 1.15.4 or later |
| Verification of Gloo Platform Helm charts | Gloo Platform 2.3.1 or later |
| GraphQL add-on | Gloo Platform version 2.1.0 or later, and Istio version 1.16.1 or later |
| AWS Lambda default request and response transformations | Istio version 1.15.1 or later |
For more information, see Supported versions.
Load balancer connectivity
If you also run Gloo Gateway and want to test connectivity to the Istio ingress gateway in your Gloo environment, ensure that your cluster setup enables you to externally access LoadBalancer services on the workload clusters.
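As a quick check, you can verify that the gateway's LoadBalancer service receives an external address and responds to a test request. The namespace and service name in this sketch are typical Istio defaults and might differ in your environment.

```sh
# Confirm that the ingress gateway service gets an external IP or hostname.
# Namespace and service name are typical Istio defaults; adjust to your setup.
kubectl get svc -n istio-system istio-ingressgateway

# Send a test request to the external address (placeholder) on port 80.
curl -v http://<EXTERNAL_ADDRESS>/
```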
Port and repo access from cluster networks
If you have restrictions for your cluster networks in your cloud infrastructure provider, you must open ports, protocols, and image repositories to install Gloo Mesh Enterprise and to allow your Gloo Mesh installation to communicate with the Gloo Mesh APIs. For example, you might have firewalls set up on the public network of your clusters so that they do not have default access to all public endpoints. The following sections detail the required and optional ports and repositories that your management and workload clusters must access.
Need to install Gloo Mesh in a disconnected environment, such as an on-premises datacenter or clusters that run on an intranet or private network only? Check out Install Gloo Mesh in air-gapped environments.
Management cluster
Required: In your firewall or network rules for the management cluster, open the following required ports and repositories.
| Name | Port | Protocol | Source | Destination | Network | Description |
|---|---|---|---|---|---|---|
| Management server images | - | - | IP addresses of management cluster nodes | https://gcr.io/gloo-mesh | Public | Allow the `gloo-mesh-mgmt-server` image to be installed and updated in the management cluster. |
| Redis image | - | - | IP addresses of management cluster nodes | docker.io/redis | Public | Allow the Redis image to be installed in the management cluster to store OIDC ID tokens for the Gloo Mesh UI. |
| Agent communication | 9900 | TCP | ClusterIPs of agents on workload clusters | IP addresses of management cluster nodes | Cluster network | Allow the `gloo-mesh-agent` on each workload cluster to send data to the `gloo-mesh-mgmt-server` in the management cluster. |
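As an illustration only, the following sketch opens the agent communication port with a Google Cloud firewall rule. The rule name, network, and source range are hypothetical, and the equivalent configuration differs for other cloud providers.

```sh
# Hypothetical example: allow workload cluster agents to reach the
# management server on port 9900 over a shared VPC network.
gcloud compute firewall-rules create allow-gloo-agent-9900 \
  --network my-vpc \
  --direction INGRESS \
  --allow tcp:9900 \
  --source-ranges 10.0.0.0/8
```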
Optional: In your firewall or network rules for the management cluster, open the following optional ports as needed.
| Name | Port | Protocol | Source | Destination | Network | Description |
|---|---|---|---|---|---|---|
| Healthchecks | 8090 | TCP | Check initiator | IP addresses of management cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the service mesh | Allow healthchecks to the management server. |
| Prometheus | 9091 | TCP | Scraper | IP addresses of management cluster nodes | Public | Scrape your Prometheus metrics from a different server, or a similar metrics setup. |
| Other tools | - | - | - | - | Public | For any other tools that you use in your Gloo Mesh environment, consult the tool's documentation to ensure that you allow the correct ports. For example, if you use tools such as cert-manager to generate and manage the Gloo Mesh certificates for your setup, consult the cert-manager platform reference. |
Workload clusters
Required: In your firewall or network rules for the workload clusters, open the following required ports and repositories.
| Name | Port | Protocol | Source | Destination | Network | Description |
|---|---|---|---|---|---|---|
| Agent image | - | - | IP addresses of workload cluster nodes | https://gcr.io/gloo-mesh | Public | Allow the `gloo-mesh-agent` image to be installed and updated in workload clusters. |
| Ingress gateway | 80 and/or 443 | HTTP, HTTPS | - | Gateway load balancer IP address | Public or private network | Allow incoming traffic requests to the Istio ingress gateway. |
| East-west gateway | 15443 | TCP | Node IP addresses of other workload clusters | Gateway load balancer IP address on one workload cluster | Cluster network | Allow services in one workload cluster to access the mesh's east-west gateway for services in another cluster. Repeat this rule for the east-west gateway on each workload cluster. Note that you can customize this port in the `spec.options.eastWestGatewaySelector.hostInfo.port` setting of your workspace settings resource. |
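To confirm the port that your east-west gateway actually exposes before you create the corresponding firewall rule, you can inspect the gateway service. The namespace and service name in this sketch are assumptions based on a common Gloo Mesh gateway layout; adjust them to match your installation.

```sh
# List the gateway services and their ports (namespace is an assumption).
kubectl get svc -n gloo-mesh-gateways

# Show the full port mapping of the east-west gateway service (name is an assumption).
kubectl get svc -n gloo-mesh-gateways istio-eastwestgateway -o yaml
```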
Optional: In your firewall or network rules for the workload clusters, open the following optional ports as needed.
| Name | Port | Protocol | Source | Destination | Network | Description |
|---|---|---|---|---|---|---|
| Agent healthchecks | 8090 | TCP | Check initiator | IP addresses of workload cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the service mesh | Allow healthchecks to the Gloo Mesh agent. |
| Istio Pilot | 15017 | HTTPS | IP addresses of workload cluster nodes | - | Public | Depending on your cloud provider, you might need to open ports for Istio to be installed. For example, in GKE clusters, you must open port 15017 for the Pilot discovery validation webhook. For more ports and requirements, see Ports used by Istio. |
| Istio healthchecks | 15021 | HTTP | Check initiator | IP addresses of workload cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the service mesh | Allow healthchecks on path `/healthz/ready`. |
| Envoy telemetry | 15090 | HTTP | Scraper | IP addresses of workload cluster nodes | Public | Scrape your Prometheus metrics from a different server, or a similar metrics setup. |
| VM onboarding | 15012, 15443 | TCP | Gateway load balancer IP addresses on workload clusters | VMs | Cluster network | To add virtual machines to your Gloo Mesh setup, allow traffic and updates through east-west routing from the workload clusters to the VMs. |
Port and repo access from local systems
If corporate network policies prevent your local system from accessing public endpoints via proxies or firewalls, allow access to the following endpoints:
- Allow access to https://run.solo.io/meshctl/install to install the `meshctl` CLI.
- Allow access to the Gloo Mesh Helm repository, https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise, to install Gloo Mesh Enterprise via the `helm` CLI.
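For example, the following commands are a typical way to exercise those endpoints from your workstation; if they succeed, access is sufficiently open. The Helm repository name used here is just a local alias.

```sh
# Download and install the meshctl CLI.
curl -sL https://run.solo.io/meshctl/install | sh

# Add the Gloo Mesh Enterprise Helm repository ("gloo-mesh-enterprise" is a local alias).
helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
helm repo update
```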
Reserved ports and pod requirements
Review the following service mesh and platform docs that outline which ports are reserved, so that you do not use these ports for other functions in your apps. You might also use other services, such as a database or application monitoring tool, that reserve additional ports.