System requirements
Review the following minimum requirements and other recommendations for your Gloo Network for Cilium setup.
Number of clusters
Single-cluster: Gloo Network is fully functional when the management plane (management server) and data plane (agent and service mesh) both run within the same cluster. You can install both the management plane and data plane components in one installation process. If you instead install the components in separate processes, ensure that you use the same cluster name in both processes.
Multicluster: A multicluster Gloo Network setup consists of one management cluster that the Gloo Network management server is installed in, and one or more workload clusters that serve as the data plane (agent and service mesh). By running the management plane in a dedicated management cluster, you can ensure that no workload pods consume cluster resources that might impede management processes. Many guides throughout the documentation use one management cluster and two workload clusters as an example setup.
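For example, in a single-cluster setup that installs the management plane and data plane as separate Helm releases, pass the same cluster name to both. The following is a minimal sketch that assumes the `gloo-platform` Helm chart and its `common.cluster`, `glooMgmtServer.enabled`, and `glooAgent.enabled` values; check the values reference for your chart version.

```sh
# Install the management plane, registering the cluster as "mgmt"
helm install gloo-platform-mgmt gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --set common.cluster=mgmt \
  --set glooMgmtServer.enabled=true

# Install the data plane in the same cluster, reusing the SAME cluster name
helm install gloo-platform-agent gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --set common.cluster=mgmt \
  --set glooAgent.enabled=true
```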
Cluster details
Review the following recommendations and considerations when creating clusters for your Gloo Network environment.
Platform requirements for Cilium
To install the Solo distribution of the Cilium CNI, check Cilium’s specific requirements for Kubernetes machine types and infrastructure platforms.
- For the recommended settings for clusters on each infrastructure provider, see the Cilium quick installation guide.
- For more details about machine requirements, see the Cilium system requirements doc.
Name
The cluster name must be lowercase and alphanumeric, can include hyphens (-) but no other special characters, and must begin with a letter, not a number.
Cluster context names cannot include underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SANs are not FQDN-compliant. You can rename a context by running `kubectl config rename-context "<oldcontext>" <newcontext>`.
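For example, GKE context names include underscores by default. The following commands, which use a hypothetical project and zone, rename such a context to a compliant name:

```sh
# Show the current context name, for example gke_my-project_us-east1-b_cluster1
kubectl config current-context

# Rename the underscore-separated context to an FQDN-compliant name
kubectl config rename-context "gke_my-project_us-east1-b_cluster1" cluster1
```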
Throughout the guides in this documentation, examples use a single-cluster setup and a three-cluster setup.
- Single-cluster: When a guide requires an example name, the examples use `mgmt`. Otherwise, you can save the name of your cluster in the `$CLUSTER_NAME` environment variable, and the context of your cluster in the `$MGMT_CONTEXT` environment variable.
- Multicluster: When a guide requires example names, the examples use `mgmt`, `cluster1`, and `cluster2`. Otherwise, you can save the names of your clusters in the `$MGMT_CLUSTER`, `$REMOTE_CLUSTER1`, and `$REMOTE_CLUSTER2` environment variables, and the contexts of your clusters in the `$MGMT_CONTEXT`, `$REMOTE_CONTEXT1`, and `$REMOTE_CONTEXT2` environment variables.
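For example, for the multicluster setup, you might set the variables as follows. The cluster and context names are example values; substitute your own.

```sh
# Example names and contexts for a one-management, two-workload cluster setup
export MGMT_CLUSTER=mgmt
export REMOTE_CLUSTER1=cluster1
export REMOTE_CLUSTER2=cluster2
export MGMT_CONTEXT=mgmt
export REMOTE_CONTEXT1=cluster1
export REMOTE_CONTEXT2=cluster2
```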
Version
The following versions of Gloo Network are supported with the compatible open source project versions of Istio and Kubernetes. Later versions of the open source projects that are released after Gloo Network might also work, but are not tested as part of the Gloo Network release.
Feature gates
To review the required Gloo Network versions for specific features that you can optionally enable, see Feature gates.
For more information, see Supported versions.
Load balancer connectivity
If you use an Istio ingress gateway and want to test connectivity through it in your Gloo environment, ensure that your cluster setup enables you to externally access LoadBalancer services on the workload clusters.
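For example, you can verify that LoadBalancer services receive an external address on a workload cluster. The service name and namespace below assume a default Istio ingress gateway installation; adjust them for your setup.

```sh
# Check that the ingress gateway service receives an external IP or hostname.
# A persistent <pending> EXTERNAL-IP means the cluster cannot provision load balancers.
kubectl get svc istio-ingressgateway -n istio-system --context $REMOTE_CONTEXT1
```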
Port and repo access from cluster networks
If you have restrictions for your cluster networks in your cloud infrastructure provider, you must open ports, protocols, and image repositories to install Gloo Network and to allow your Gloo installation to communicate with the Solo APIs. For example, you might have firewall rules set up on the public network of your clusters so that they do not have default access to all public endpoints. The following sections detail the required and optional ports and repositories that your management and workload clusters must access.
Management cluster
Required
In your firewall or network rules for the management cluster, open the following required ports and repositories.
Name | Port | Protocol | Source | Destination | Network | Description |
---|---|---|---|---|---|---|
Agent communication | 9900 | TCP | ClusterIPs of agents on workload clusters | IP addresses of management cluster nodes | Cluster network | Allow the gloo-mesh-agent on each workload cluster to send data to the gloo-mesh-mgmt-server in the management cluster. |
Management server images | - | - | IP addresses of management cluster nodes | https://gcr.io/gloo-mesh | Public | Allow installation and updates of the gloo-mesh image in the management cluster. |
Redis image | - | - | IP addresses of management cluster nodes | docker.io/redis | Public | Allow installation of the Redis image in the management cluster to store OIDC ID tokens for the Gloo UI. |
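As an example, on Google Cloud you might open the agent communication port with a firewall rule such as the following. The rule name, network, source ranges, and target tags are hypothetical placeholders for your own network values.

```sh
# Allow Gloo agents on workload clusters to reach the management server on port 9900
gcloud compute firewall-rules create allow-gloo-agent-traffic \
  --network=my-mgmt-network \
  --allow=tcp:9900 \
  --source-ranges=10.0.0.0/8 \
  --target-tags=gloo-mgmt-nodes
```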
Optional
In your firewall or network rules for the management cluster, open the following optional ports as needed.
Name | Port | Protocol | Source | Destination | Network | Description |
---|---|---|---|---|---|---|
Healthchecks | 8090 | TCP | Check initiator | IP addresses of management cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside service mesh | Allow healthchecks to the management server. |
OpenTelemetry gateway | 4317 | TCP | OpenTelemetry agent | IP addresses of management cluster nodes | Public | Collect telemetry data, such as metrics, logs, and traces to show in Gloo observability tools. |
Solo distributions of Cilium images | - | - | IP addresses of management cluster nodes | If you plan to use the Solo distribution of the Cilium CNI: A repo key for Solo distributions of Cilium images that you can get by logging in to the Support Center and reviewing the Cilium images built by Solo.io support article | Public | Allow installation and updates of the Solo distribution of the Cilium image in the management clusters. |
Prometheus | 9091 | TCP | Scraper | IP addresses of management cluster nodes | Public | Scrape your Prometheus metrics from a different server, or a similar metrics setup. |
Other tools | - | - | - | - | Public | For any other tools that you use in your Gloo environment, consult the tool’s documentation to ensure that you allow the correct ports. For example, if you use tools such as cert-manager to generate and manage the Gloo certificates for your setup, consult the cert-manager platform reference. |
Workload clusters
Required
In your firewall or network rules for the workload clusters, open the following required ports and repositories.
Name | Port | Protocol | Source | Destination | Network | Description |
---|---|---|---|---|---|---|
Agent image | - | - | IP addresses of workload cluster nodes | https://gcr.io/gloo-mesh | Public | Allow installation and updates of the gloo-mesh image in workload clusters. |
Optional
In your firewall or network rules for the workload clusters, open the following optional ports as needed.
Name | Port | Protocol | Source | Destination | Network | Description |
---|---|---|---|---|---|---|
Agent healthchecks | 8090 | TCP | Check initiator | IP addresses of workload cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside service mesh | Allow healthchecks to the Gloo agent. |
Envoy telemetry | 15090 | HTTP | Scraper | IP addresses of workload cluster nodes | Public | Scrape your Prometheus metrics from a different server, or a similar metrics setup. |
Solo distributions of Cilium | - | - | IP addresses of workload cluster nodes | If you plan to use the Solo distribution of the Cilium CNI: A repo key for Solo distributions of Cilium images that you can get by logging in to the Support Center and reviewing the Cilium images built by Solo.io support article | Public | Allow installation and updates of the Solo distribution of the Cilium image in the workload clusters. |
Port and repo access from local systems
If corporate network policies prevent access from your local system to public endpoints via proxies or firewall rules:
- Allow access to `https://run.solo.io/meshctl/install` to install the `meshctl` CLI tool.
- Allow access to the Gloo Helm repository, `https://storage.googleapis.com/gloo-platform/helm-charts`, to install Gloo Network via the `helm` CLI.
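For example, after access is allowed, the following commands download the CLI and register the Helm repository. The `gloo-platform` repository alias is an arbitrary local name.

```sh
# Download and install the meshctl CLI tool
curl -sL https://run.solo.io/meshctl/install | sh

# Add the Gloo Helm repository under a local alias
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update
```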
Reserved ports and pod requirements
Review the following service mesh and platform docs that outline what ports are reserved, so that you do not use these ports for other functions in your apps. You might use other services such as a database or application monitoring tool that reserve additional ports.
Considerations for running Cilium and Istio on EKS
If you plan to run Istio with sidecar injection and the Cilium CNI in a tunneling mode (`VXLAN` or `GENEVE`) on an Amazon EKS cluster, the Istio control plane `istiod` is not reachable by the Kubernetes API server by default.
Istio uses Kubernetes admission webhooks to inject sidecar proxies into pods. In EKS environments, the Cilium CNI cannot run on the nodes where the Kubernetes API server is deployed, which leads to communication issues when the API server tries to inject Istio sidecars into pods.
You can choose from the following options to allow istiod to communicate with the Kubernetes API server:
- Configure istiod with direct access to the networking stack of the underlying host node by setting `hostNetwork` to `true`, as shown in the following Istio Lifecycle Manager example:
  ```yaml
  # Traffic management
  components:
    pilot:
      k8s:
        overlays:
        - kind: Deployment
          name: istiod-1-20
          patches:
          - path: spec.template.spec.hostNetwork
            value: true
  ```
- Chain the Cilium CNI with the `aws-vpc-cni`, as in the sketch after this list. For more information, see the Cilium documentation.
- Choose a different Cilium routing mode instead, such as eBPF-based routing. For more information about available modes, see the Cilium documentation.
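For example, CNI chaining with the AWS VPC CNI is typically configured through Cilium Helm values such as the following. These value names reflect upstream Cilium conventions and might differ for the Solo distribution of Cilium; verify them against the installation guide for your version.

```sh
# Install Cilium in CNI chaining mode on top of the AWS VPC CNI.
# With chaining, the AWS VPC CNI handles routing, so no tunneling mode is needed.
helm install cilium cilium/cilium \
  --namespace kube-system \
  --set cni.chainingMode=aws-cni \
  --set cni.exclusive=false \
  --set enableIPv4Masquerade=false \
  --set routingMode=native \
  --set endpointRoutes.enabled=true
```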