Review the following minimum requirements and other recommendations for your Gloo Mesh Core setup.
Number of clusters
Single-cluster: Gloo Mesh Core is fully functional when the control plane (management server) and data plane (agent and service mesh) both run within the same cluster. You can easily install both the control and data plane components by using one installation process. If you choose to install the components in separate processes, ensure that you use the same name for the cluster during both processes.
Multicluster: A multicluster Gloo Mesh Core setup consists of one management cluster that you install the Gloo control plane (management server) in, and one or more workload clusters that serve as the data plane (agent and service mesh). By running the control plane in a dedicated management cluster, you can ensure that no workload pods consume cluster resources that might impede management processes. Many guides throughout the documentation use one management cluster and two workload clusters as an example setup.
Review the following recommendations and considerations when creating clusters for your Gloo Mesh Core environment.
- The cluster name must be lowercase and alphanumeric, with no special characters except hyphens (-), and must begin with a letter (not a number).
- Cluster context names cannot include underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SANs are not FQDN-compliant. You can rename a context by running `kubectl config rename-context "<oldcontext>" <newcontext>`.
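The naming rules above can be checked with a small shell function before you create or register a cluster. This is an illustrative sketch, not part of the Gloo tooling:

```shell
#!/usr/bin/env bash
# Validate a cluster name: lowercase alphanumeric plus hyphens,
# beginning with a letter.
valid_cluster_name() {
  [[ "$1" =~ ^[a-z][a-z0-9-]*$ ]]
}

# Validate a kubeconfig context name: no underscores, because the
# context name is used as a SAN in the generated certificate.
valid_context_name() {
  [[ "$1" != *_* ]]
}

valid_cluster_name "cluster1" && echo "cluster1: ok"
valid_cluster_name "Cluster_1" || echo "Cluster_1: invalid"
valid_context_name "gke_my-project_us-east1_cluster1" || echo "context needs renaming"
```

If a context fails the check, rename it with `kubectl config rename-context` as shown above before you register the cluster.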
Throughout the guides in this documentation, examples use a single-cluster setup and a three-cluster setup.
- Single-cluster: When a guide requires an example name, the examples use `mgmt`. Otherwise, you can save the name of your cluster in the `$CLUSTER_NAME` environment variable, and the context of your cluster in a corresponding context environment variable.
- Multicluster: When a guide requires example names, the examples use names such as `cluster2`. Otherwise, you can save the names of your clusters in environment variables such as `$REMOTE_CLUSTER2`, and the contexts of your clusters in corresponding context environment variables.
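As an illustration, the environment-variable convention described above might look like the following for a three-cluster setup. Only `$CLUSTER_NAME` and `$REMOTE_CLUSTER2` appear in the text above; the other variable names and all values here are placeholders, so substitute your own:

```shell
# Example names for a three-cluster setup; substitute your own.
export MGMT_CLUSTER=mgmt          # placeholder variable for the management cluster
export REMOTE_CLUSTER1=cluster1   # placeholder variable for the first workload cluster
export REMOTE_CLUSTER2=cluster2

# Contexts typically mirror your kubeconfig; these are placeholders.
export MGMT_CONTEXT=mgmt
export REMOTE_CONTEXT1=cluster1
export REMOTE_CONTEXT2=cluster2

# The variables can then be reused throughout the guides, for example:
# kubectl get nodes --context "$MGMT_CONTEXT"
echo "management: $MGMT_CLUSTER, workloads: $REMOTE_CLUSTER1 $REMOTE_CLUSTER2"
```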
Review the following table to choose a supported Kubernetes version for the Gloo Mesh Core and Solo Istio versions that you want to install. For more information, see the supported versions.
The following versions of Gloo Mesh Core are supported with the compatible open source project versions of Istio and Kubernetes. Later versions of the open source projects that are released after Gloo Mesh Core might also work, but are not tested as part of the Gloo Mesh Core release.
| Gloo Mesh Core | Release date | Supported Solo Istio versions and related Kubernetes versions tested by Solo |
|---|---|---|
Load balancer connectivity
If you use an Istio ingress gateway and want to test connectivity through it in your Gloo environment, ensure that your cluster setup enables you to externally access LoadBalancer services on the workload clusters.
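One way to verify external access is to read the ingress gateway's external address and probe it. This sketch assumes an ingress gateway service named `istio-ingressgateway` in the `istio-system` namespace, which is the Istio default; adjust the names to your installation:

```shell
# Print the external address of the ingress gateway's LoadBalancer service.
# Falls back to the hostname field for providers (such as AWS) that assign
# DNS names instead of IPs.
ingress_address() {
  local ns="${1:-istio-system}" svc="${2:-istio-ingressgateway}"
  local addr
  addr=$(kubectl get svc "$svc" -n "$ns" \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  if [ -z "$addr" ]; then
    addr=$(kubectl get svc "$svc" -n "$ns" \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  fi
  echo "$addr"
}

# Usage: curl -s "http://$(ingress_address)/" to confirm reachability.
```

If the function prints nothing, the service has not received an external address, which usually means your cluster setup does not support external `LoadBalancer` services yet.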
Port and repo access from cluster networks
If you have restrictions for your cluster networks in your cloud infrastructure provider, you must open the required ports and protocols and allow access to the required image repositories so that you can install Gloo Mesh Core and your Gloo installation can communicate with the Solo APIs. For example, you might have firewall rules set up on the public network of your clusters so that they do not have default access to all public endpoints. The following sections detail the required and optional ports and repositories that your management and workload clusters must access.
In your firewall or network rules for the management cluster, open the following required ports and repositories.
| Name | Port | Protocol | Source | Destination | Network type | Description |
|---|---|---|---|---|---|---|
| Management server images | - | - | IP addresses of management cluster nodes | - | Public | Allow installation and updates of the management server images. |
| Redis image | - | - | IP addresses of management cluster nodes | - | Public | Allow installation of the Redis image in the management cluster to store OIDC ID tokens for the Gloo UI. |
| Agent communication | 9900 | TCP | ClusterIPs of agents on workload clusters | IP addresses of management cluster nodes | Cluster network | Allow the Gloo agents on workload clusters to connect to the management server. |
In your firewall or network rules for the management cluster, open the following optional ports as needed.
| Name | Port | Protocol | Source | Destination | Network type | Description |
|---|---|---|---|---|---|---|
| Healthchecks | 8090 | TCP | Check initiator | IP addresses of management cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the service mesh | Allow healthchecks to the management server. |
| Prometheus | 9091 | TCP | Scraper | IP addresses of management cluster nodes | Public | Scrape your Prometheus metrics from a different server, or a similar metrics setup. |
| OpenTelemetry gateway | 4317 | TCP | OpenTelemetry agent | IP addresses of management cluster nodes | Public | Collect telemetry data, such as metrics, logs, and traces, to show in Gloo observability tools. |
| Other tools | - | - | - | - | Public | For any other tools that you use in your Gloo Mesh Core environment, consult the tool's documentation to ensure that you allow the correct ports. |
In your firewall or network rules for the workload clusters, open the following required ports and repositories.
| Name | Port | Protocol | Source | Destination | Network type | Description |
|---|---|---|---|---|---|---|
| Agent image | - | - | IP addresses of workload cluster nodes | - | Public | Allow installation and updates of the Gloo agent image. |
In your firewall or network rules for the workload clusters, open the following optional ports as needed.
| Name | Port | Protocol | Source | Destination | Network type | Description |
|---|---|---|---|---|---|---|
| Agent healthchecks | 8090 | TCP | Check initiator | IP addresses of workload cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the service mesh | Allow healthchecks to the Gloo agent. |
| Istio Pilot | 15017 | HTTPS | IP addresses of workload cluster nodes | - | Public | Depending on your cloud provider, you might need to open ports to install Istio. For example, in GKE clusters, you must open port 15017 for the Pilot discovery validation webhook. For more ports and requirements, see Ports used by Istio. |
| Istio healthchecks | 15021 | HTTP | Check initiator | IP addresses of workload cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the service mesh | Allow healthchecks on the `/healthz/ready` path. |
| Envoy telemetry | 15090 | HTTP | Scraper | IP addresses of workload cluster nodes | Public | Scrape your Prometheus metrics from a different server, or a similar metrics setup. |
Port and repo access from local systems
If corporate network policies restrict access from your local system to public endpoints through proxies or firewall rules, allow access to the following endpoints:

- `https://run.solo.io/meshctl/install`, to install the `meshctl` CLI.
- The Gloo Mesh Core Helm repository, `https://storage.googleapis.com/gloo-platform/helm-charts`, to install Gloo Mesh Core via the `helm` CLI.
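To confirm that your local system can reach these endpoints through your corporate proxy, you can probe them with `curl`. This is a generic reachability sketch, not a Solo-provided tool; `curl` automatically honors the usual `https_proxy`/`HTTPS_PROXY` environment variables:

```shell
# Return success if the URL answers with an HTTP status below 400.
url_reachable() {
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1")
  [ -n "$code" ] && [ "$code" -ge 200 ] && [ "$code" -lt 400 ]
}

for url in \
  "https://run.solo.io/meshctl/install" \
  "https://storage.googleapis.com/gloo-platform/helm-charts"; do
  if url_reachable "$url"; then
    echo "ok:      $url"
  else
    echo "blocked: $url"
  fi
done
```

A `blocked` result means your proxy or firewall rules still prevent access and must be updated before installation.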
Reserved ports and pod requirements
Review the following Istio and platform docs that outline reserved ports, so that you do not use these ports for other functions in your apps. Other services that you use, such as a database or app monitoring tool, might also reserve additional ports.
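For illustration, a quick way to guard against collisions is to compare your app's ports against the ports that Istio sidecars commonly reserve. The list below reflects well-known Istio defaults, but verify it against the Istio documentation for your version:

```shell
# Ports commonly reserved by Istio sidecars (verify for your Istio version):
# 15000 Envoy admin, 15001 outbound traffic, 15006 inbound traffic,
# 15020 merged telemetry, 15021 health checks, 15090 Envoy Prometheus.
ISTIO_RESERVED_PORTS="15000 15001 15006 15020 15021 15090"

# Return success if the given port is on the reserved list.
port_is_reserved() {
  local p
  for p in $ISTIO_RESERVED_PORTS; do
    [ "$p" = "$1" ] && return 0
  done
  return 1
}

port_is_reserved 15021 && echo "15021 is reserved; pick another port"
port_is_reserved 8080  || echo "8080 is free of Istio reserved ports"
```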