Prepare to install

Before you install Gloo Mesh Enterprise, review the following recommendations and requirements to prepare your environment.

Licensing

Gloo Platform offers separate licenses for each product, such as Gloo Gateway, Gloo Mesh Enterprise, and Gloo Network. Additionally, licenses are offered for the GraphQL module.

  1. Decide which product and module licenses you need for your Gloo Platform environment. For example, to install Gloo Mesh, you need at least a Gloo Mesh Enterprise license key, or for a trial installation, a Gloo trial license key. For more information, see Licensed products and modules.
  2. Contact an account representative to get your license keys.
  3. Save your license keys as environment variables, which are used throughout the setup guides in this documentation. For example, to set the Gloo Mesh Enterprise license as an environment variable:
    export GLOO_MESH_LICENSE_KEY=<gloo-mesh-license-key>
    
  4. Decide how you want to provide your license keys during installation.
    • Provide license keys directly: When you install the Gloo management components in your cluster, you can provide license keys directly in your Helm values file or the meshctl install command. For example, to provide a Gloo Mesh Enterprise license, you can provide the key string as the value for the glooMeshLicenseKey field in your Helm values file, or provide the --set glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY flag in your meshctl install command.
    • Provide license keys in a secret: You can specify your license keys by creating a secret before you install Gloo Mesh.
      1. Create a secret with your license keys in the gloo-mesh namespace of your management cluster. Note that the values in the data section must be base64-encoded, and that you can omit the entries for any licenses that you do not have. On macOS, run base64 without the -w0 flag.
        cat << EOF | kubectl apply --context $MGMT_CONTEXT -n gloo-mesh -f -
        apiVersion: v1
        kind: Secret
        type: Opaque
        metadata:
          name: license-secret
          namespace: gloo-mesh
        data:
          gloo-mesh-license-key: $(echo -n "$GLOO_MESH_LICENSE_KEY" | base64 -w0)
          gloo-network-license-key: $(echo -n "$GLOO_NETWORK_LICENSE_KEY" | base64 -w0)
          gloo-gateway-license-key: $(echo -n "$GLOO_GATEWAY_LICENSE_KEY" | base64 -w0)
          gloo-trial-license-key: $(echo -n "$GLOO_TRIAL_LICENSE_KEY" | base64 -w0)
        EOF
        
      2. When you install the Gloo management components in your cluster, specify the secret name in your Helm values file or the meshctl install command. For example, to provide a Gloo Mesh Enterprise license, you can provide the secret name as the value for the licenseSecretName field in your Helm values file, or provide the --set licenseSecretName=license-secret flag in your meshctl install command.

        Currently, you must also provide a license key, such as the Gloo Mesh Enterprise license key, in the Helm licenseKey value or in the --license flag of meshctl install.
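The two approaches might look like the following in a Helm values file. This is a hedged sketch: the glooMeshLicenseKey, licenseSecretName, and licenseKey field names come from the steps above, and license-secret matches the example secret name; verify the exact fields against the Helm chart reference for your version.

```yaml
# Option 1: Provide the license key string directly (placeholder value shown).
glooMeshLicenseKey: <gloo-mesh-license-key>

# Option 2: Reference a pre-created secret instead of inline keys.
# Note that currently a license key must still be provided in the
# licenseKey value or in the --license flag of meshctl install.
licenseSecretName: license-secret
```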

System requirements

Review the following minimum requirements and other recommendations for your Gloo setup.

Number of clusters

Single-cluster: Gloo is fully functional when the management and agent components both run within the same cluster. You can easily install both the management and agent components by using one installation process. If you choose to install management components and agent components in separate processes, ensure that you use the same name for the cluster during both processes.

Multicluster: A typical, multicluster Gloo setup consists of one management cluster that the Gloo management components are installed in, and one or more workload clusters that run your workloads and are registered with and managed by the management cluster. By running the management components in a dedicated cluster, you can ensure that no workload pods consume cluster resources that might impede management processes. Many guides throughout the documentation use one management cluster and two workload clusters as an example setup.
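Guides in this documentation refer to cluster contexts through environment variables such as $MGMT_CONTEXT, which is used in the license secret example earlier on this page. As a sketch, with hypothetical context names (the variable names for the workload clusters are illustrative), you might set:

```shell
# Hypothetical context names; list yours with 'kubectl config get-contexts'.
export MGMT_CONTEXT=mgmt-cluster
export REMOTE_CONTEXT1=cluster-1
export REMOTE_CONTEXT2=cluster-2
```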

Cluster details

Review the following recommendations and considerations when creating clusters for your Gloo Mesh environment.

Name

For any clusters that you plan to register as workload clusters, the cluster name cannot include underscores (_).

Additionally, cluster context names cannot include underscores. The context name is used as a SAN specification in the generated certificate that connects workload clusters to the management cluster, and underscores in SANs are not FQDN-compliant. You can rename a context by running kubectl config rename-context "<old-context>" <new-context>.
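As a sketch, assuming a hypothetical GKE-style context name, you can derive an FQDN-compliant name by swapping underscores for hyphens before running the rename command:

```shell
#!/bin/sh
# Hypothetical context name that contains underscores (common in GKE).
CONTEXT="gke_my-project_us-east1_cluster-1"

# Underscores are not FQDN-compliant in SANs, so replace them with hyphens.
NEW_CONTEXT=$(echo "$CONTEXT" | tr '_' '-')

# Print the rename command to run against your kubeconfig.
echo "kubectl config rename-context \"$CONTEXT\" $NEW_CONTEXT"
```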

Throughout the guides in this documentation, a single-cluster setup and a three-cluster setup are used as examples.

Size and memory

The minimum recommended size for each cluster in your setup is at least 2 vCPUs and 8 GB of memory per node.

For a more robust service mesh setup, the following sizes are recommended:

Version

Solo supports n-3 versions of Gloo Platform. Within each Gloo Platform version, different open source project versions are supported, including n-4 version support for Gloo Istio.

Gloo Platform

The following versions of Gloo Platform are supported with the compatible open source project versions of Istio and Kubernetes. Later versions of the open source projects that are released after Gloo Platform might also work, but are not tested as part of the Gloo Platform release.

| Gloo Platform | Release date | Gloo Istio* | Kubernetes |
|---------------|--------------|-------------|------------|
| 2.2 | TBD | 1.11 - 1.15 | 1.18 - 1.23 |
| 2.1 | 21 Oct 2022 | 1.11 - 1.15 | 1.18 - 1.23 |
| 2.0 | 13 May 2022 | 1.9 - 1.13 | 1.17 - 1.23 |
| 1.2 | 04 Nov 2021 | 1.9 - 1.12 | 1.17 - 1.23 |

Gloo Istio

Keep in mind that Gloo Platform offers n-4 security patching support only with Gloo Istio versions, not community Istio versions. Gloo Istio versions support the same patch versions as community Istio. You can review community Istio patch versions in the Istio release documentation. You must run the latest Gloo Platform patch version to get the backported Istio support.

Istio by Kubernetes or OpenShift version

The supported Istio and Kubernetes or OpenShift versions are dependent on each other. For example, you cannot use Gloo Platform with Istio 1.15 on a Kubernetes 1.25 cluster, because Istio 1.15 does not support Kubernetes 1.25. Similarly, the supported OpenShift versions depend on which Istio and Kubernetes versions each OpenShift release supports. Review the following supported versions table, or refer to the Istio docs and OpenShift knowledge base (requires login) for more information.

| Istio version | Kubernetes version | OpenShift version |
|---------------|--------------------|-------------------|
| 1.11 | 1.18 - 1.22 | 4.5 - 4.9 |
| 1.12 | 1.19 - 1.22 | 4.6 - 4.9 |
| 1.13 | 1.20 - 1.23 | 4.7 - 4.10 |
| 1.14 | 1.20 - 1.23 | 4.7 - 4.10 |
| 1.15 | 1.20 - 1.23 | 4.7 - 4.10 |

Known Istio issues

Gloo features

Additionally, the following Gloo Platform features require specific versions.

| Gloo Platform feature | Required versions |
|-----------------------|-------------------|
| XSLT filter | Istio 1.11 or later |
| Gloo-managed Istio installations | Gloo Platform 2.1.0 or later |
| GraphQL add-on | Gloo Platform version 2.1.0 or later, and Istio version 1.14.5 or later |

For more information, see Supported versions.

Load balancer connectivity

If you also run Gloo Gateway and want to test connectivity to the Istio ingress gateway in your Gloo environment, ensure that your cluster setup enables you to externally access LoadBalancer services on the workload clusters.

Port and repo access from cluster networks

If you have restrictions for your cluster networks in your cloud infrastructure provider, you must open ports, protocols, and image repositories to install Gloo Mesh Enterprise and to allow your Gloo Mesh installation to communicate with the Gloo Mesh APIs. For example, you might have firewalls set up on the public network of your clusters so that they do not have default access to all public endpoints. The following sections detail the required and optional ports and repositories that your management and workload clusters must access.

Need to install Gloo Mesh in a disconnected environment, such as an on-premises datacenter or clusters that run on an intranet or private network only? Check out Install Gloo Mesh in air-gapped environments.

Management cluster

Required: In your firewall or network rules for the management cluster, open the following required ports and repositories.

| Name | Port | Protocol | Source | Destination | Network | Description |
|------|------|----------|--------|-------------|---------|-------------|
| Management server images | - | - | IP addresses of management cluster nodes | https://gcr.io/gloo-mesh | Public | Allow the gloo-mesh-mgmt-server image to be installed and updated in the management cluster. |
| Redis image | - | - | IP addresses of management cluster nodes | docker.io/redis | Public | Allow the Redis image to be installed in the management cluster to store OIDC ID tokens for the Gloo Mesh UI. |
| Agent communication | 9900 | TCP | ClusterIPs of agents on workload clusters | IP addresses of management cluster nodes | Cluster network | Allow the gloo-mesh-agent on each workload cluster to send data to the gloo-mesh-mgmt-server in the management cluster. |
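If you also enforce Kubernetes NetworkPolicies inside the management cluster, a policy along the following lines admits agent traffic on port 9900. This is a hedged sketch: the app: gloo-mesh-mgmt-server label is an assumption about how the management server pods are labeled, so verify the labels in your installation before applying it.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-agent-relay
  namespace: gloo-mesh
spec:
  # Assumed pod label; check your gloo-mesh-mgmt-server pods.
  podSelector:
    matchLabels:
      app: gloo-mesh-mgmt-server
  policyTypes:
    - Ingress
  ingress:
    - ports:
        # Agent-to-management-server relay port from the table above.
        - protocol: TCP
          port: 9900
```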

Optional: In your firewall or network rules for the management cluster, open the following optional ports as needed.

| Name | Port | Protocol | Source | Destination | Network | Description |
|------|------|----------|--------|-------------|---------|-------------|
| Healthchecks | 8090 | TCP | Check initiator | IP addresses of management cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the service mesh | Allow healthchecks to the management server. |
| Prometheus | 9091 | TCP | Scraper | IP addresses of management cluster nodes | Public | Scrape your Prometheus metrics from a different server, or a similar metrics setup. |
| Other tools | - | - | - | - | Public | For any other tools that you use in your Gloo Mesh environment, consult the tool's documentation to ensure that you allow the correct ports. For example, if you use tools such as cert-manager to generate and manage the Gloo Mesh certificates for your setup, consult the cert-manager platform reference. |

Workload clusters

Required: In your firewall or network rules for the workload clusters, open the following required ports and repositories.

| Name | Port | Protocol | Source | Destination | Network | Description |
|------|------|----------|--------|-------------|---------|-------------|
| Agent image | - | - | IP addresses of workload cluster nodes | https://gcr.io/gloo-mesh | Public | Allow the gloo-mesh-agent image to be installed and updated in workload clusters. |
| Ingress gateway | 80 and/or 443 | HTTP, HTTPS | - | Gateway load balancer IP address | Public or private network | Allow incoming traffic requests to the Istio ingress gateway. |
| East-west gateway | 15443 | TCP | Node IP addresses of other workload clusters | Gateway load balancer IP address on one workload cluster | Cluster network | Allow services in one workload cluster to access the mesh's east-west gateway for services in another cluster. Repeat this rule for the east-west gateway on each workload cluster. Note that you can customize this port in the spec.options.eastWestGatewaySelector.hostInfo.port setting of your workspace settings resource. |
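For example, to change the default east-west gateway port of 15443, you might set the field path that the table names in your workspace settings resource. The following fragment is a hedged sketch built from that field path; the resource name and port value are placeholders, so verify the structure against the WorkspaceSettings API reference for your Gloo Platform version.

```yaml
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
  name: my-workspace-settings  # placeholder name
  namespace: gloo-mesh
spec:
  options:
    eastWestGatewaySelector:
      hostInfo:
        # Custom east-west gateway port; the default is 15443.
        port: 16443
```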

Optional: In your firewall or network rules for the workload clusters, open the following optional ports as needed.

| Name | Port | Protocol | Source | Destination | Network | Description |
|------|------|----------|--------|-------------|---------|-------------|
| Agent healthchecks | 8090 | TCP | Check initiator | IP addresses of workload cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the service mesh | Allow healthchecks to the Gloo Mesh agent. |
| Istio Pilot | 15017 | HTTPS | IP addresses of workload cluster nodes | - | Public | Depending on your cloud provider, you might need to open ports for Istio to be installed. For example, in GKE clusters, you must open port 15017 for the Pilot discovery validation webhook. For more ports and requirements, see Ports used by Istio. |
| Istio healthchecks | 15021 | HTTP | Check initiator | IP addresses of workload cluster nodes | Public or cluster network, depending on whether checks originate from outside or inside the service mesh | Allow healthchecks on path /healthz/ready. |
| Envoy telemetry | 15090 | HTTP | Scraper | IP addresses of workload cluster nodes | Public | Scrape your Prometheus metrics from a different server, or a similar metrics setup. |
| VM onboarding | 15012, 15443 | TCP | Gateway load balancer IP addresses on workload clusters | VMs | Cluster network | To add virtual machines to your Gloo Mesh setup, allow traffic and updates through east-west routing from the workload clusters to the VMs. |

Port and repo access from local systems

If corporate network policies prevent access from your local system to public endpoints via proxies or firewalls, work with your network administrator to allow your local system to reach the endpoints that the installation requires, such as the image repositories that are listed in the previous sections.

Reserved ports and pod requirements

Review the following service mesh and platform docs that outline which ports are reserved, so that you do not use these ports for other functions in your apps. You might also use other services, such as a database or application monitoring tool, that reserve additional ports.