Install with Helm
Use Helm to customize the settings of your Gloo Mesh Gateway installation in one cluster.
Gloo Mesh Gateway is a feature-rich, Kubernetes-native ingress controller and next-generation API gateway. With Gloo Mesh Gateway, you get function-level routing, discovery capabilities, tight integration with leading open-source projects, and support for legacy apps, microservices, and serverless. Gloo Mesh Gateway is uniquely designed to support hybrid applications in which multiple technologies, architectures, protocols, and clouds can coexist. To learn more about the benefits and architecture, see About.
Overview
In this guide, you customize Helm settings for an advanced Gloo Mesh Gateway installation in a single-cluster environment. To use a dedicated cluster for your Gloo management plane, and install gateway proxies in one or more workload clusters instead, see the multicluster setup guide.
These settings install the Gloo management plane and gateway proxy together in one cluster, as shown in the following diagram.
- Gloo Mesh Gateway management plane: When you install the Gloo management plane, a deployment named gloo-mesh-mgmt-server is created to translate and implement your Gloo configurations. Because you include the glooAgent.enabled: true setting in the installation values file, the cluster is also registered to be managed by Gloo. A deployment named gloo-mesh-agent is created to run the Gloo agent as part of the Gloo data plane.
- Gateway proxy: Use the Gloo management plane to install an ingress gateway proxy in your cluster as part of the Istio lifecycle management. By using a Gloo-managed installation, you no longer need to manually install and manage the istiod control plane and gateway proxy. Instead, you provide the Istio configuration in your gloo-platform Helm chart, and Gloo translates this configuration into a managed istiod control plane and gateway proxy in the cluster.
Before you begin
Install the following command-line (CLI) tools.
- kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
- meshctl, the Solo command line tool.
  curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.5.10 sh -
  export PATH=$HOME/.gloo-mesh/bin:$PATH
- helm, the Kubernetes package manager.
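To confirm that the tools are installed and on your PATH, you can check their versions, for example:

# Verify that each CLI tool is available
kubectl version --client
meshctl version
helm version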
Set your Gloo Mesh Gateway license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license's validity, you can run meshctl license check --key $(echo ${GLOO_MESH_GATEWAY_LICENSE_KEY} | base64 -w0).

export GLOO_MESH_GATEWAY_LICENSE_KEY=<license_key>
Set the Gloo Mesh Gateway version. This example uses the latest version. You can find other versions in the Changelog documentation. Append -fips for a FIPS-compliant image, such as 2.5.10-fips. Do not include v before the version number.

export GLOO_VERSION=2.5.10
Create or use an existing Kubernetes or OpenShift cluster, and save the cluster name in an environment variable. Note: The cluster name must be alphanumeric with no special characters except a hyphen (-), lowercase, and begin with a letter (not a number).
export CLUSTER_NAME=<cluster_name>
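As a quick sanity check, you can verify that the name you chose matches these constraints. The regular expression below is an illustrative approximation of the naming rule, not an official validation.

# Prints OK if the name is lowercase, begins with a letter,
# and contains only letters, numbers, and hyphens
echo "$CLUSTER_NAME" | grep -Eq '^[a-z][a-z0-9-]*$' && echo "cluster name OK" || echo "cluster name is invalid"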
- Production installations: Review Best practices for production to prepare your optional security measures. For example, before you begin your Gloo installation, you can provide your own certificates to secure the management server and agent connection, and set up secure access to the Gloo UI.
- Running your own Prometheus server: In Gloo version 2.5.0, the prometheus.io/port: "<port_number>" annotation was removed from the Gloo management server and agent. However, the prometheus.io/scrape: true annotation is still present. If you have another Prometheus instance that runs in your cluster, and it is not set up with custom scraping jobs for the Gloo management server and agent, the instance automatically scrapes all ports on the management server and agent pods. This can lead to error messages in the management server and agent logs. This issue is resolved in version 2.5.2. For options to resolve this issue in earlier versions, see Run another Prometheus instance alongside the built-in one.
Install Gloo Mesh Gateway
Add and update the Helm repository for Gloo.
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update
Install the Gloo CRDs.
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
   --namespace=gloo-mesh \
   --create-namespace \
   --version=$GLOO_VERSION
Prepare a Helm values file to provide your customizations. To get started, you can use the minimum settings in the following profile as a basis. These settings enable all components that are required for a single-cluster Gloo Mesh Gateway installation.
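For example, a minimal single-cluster values file (saved here as gloo-gateway-single.yaml, an example name) might look like the following sketch. The top-level keys mirror the fields referenced later in this guide, but the exact structure and defaults of the official profile can differ by version, so verify them against the Helm values documentation.

# gloo-gateway-single.yaml: minimal sketch of single-cluster settings (verify against your chart version)
common:
  cluster: ""            # set at install time, for example with --set common.cluster=$CLUSTER_NAME
glooMgmtServer:
  enabled: true          # management plane runs in this cluster
glooAgent:
  enabled: true          # registers this cluster with the management plane
glooUi:
  enabled: true          # Gloo UI dashboard
prometheus:
  enabled: true          # built-in Prometheus instance
telemetryCollector:
  enabled: true          # assumption: key name for the telemetry collector in your chart version
extAuthService:
  enabled: true          # external auth add-on, shown in the verification output later in this guide
rateLimiter:
  enabled: true          # rate limit add-on, shown in the verification output later in this guide
istioInstallations:
  enabled: true          # Gloo-managed istiod and ingress gateway; see the example istioInstallations settings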
Decide how you want to secure the relay connection between the Gloo management server and agent. In test and POC environments, you can use Gloo self-signed certificates to secure the connection. If you plan to use Gloo Mesh Gateway in production, it is recommended to bring your own certificates instead. For more information, see Setup options.
Edit the file to provide your own details for settings that are recommended for production deployments, such as the following settings.
For more information about the settings you can configure:
- See Best practices for production.
- See all possible fields for the Helm chart by running helm show values gloo-platform/gloo-platform --version v2.5.10 > all-values.yaml. You can also see these fields in the Helm values documentation.
- glooAgent.resources.limits: Add resource limits for the gloo-mesh-agent pod, such as cpu: 500m and memory: 512Mi.
- glooMgmtServer.resources.limits: Add resource limits for the gloo-mesh-mgmt-server pod, such as cpu: 1000m and memory: 1Gi.
- glooMgmtServer.safeMode and glooMgmtServer.safeStartWindow: Configure how you want the Gloo management server to handle translation after a Redis restart. For available options, see Redis safe mode options.
- glooMgmtServer.serviceOverrides.metadata.annotations: Add annotations for the management server load balancer as needed, such as AWS-specific load balancer annotations. For more information, see Deployment and service overrides.
- glooUi.auth: Set up OIDC authorization for the Gloo UI. For more information, see UI authentication.
- extAuthService.enabled: Set to true to install the external auth server add-on.
- istioInstallations: The default supported version of Istio is used to install the managed gateway proxies. You can instead choose the Istio version that you want to use for your gateway proxy. For an example of how this section might look, see Example istioInstallations settings. Note: To manage the gateway proxies yourself instead of using the gateway lifecycle manager in this Helm chart, set istioInstallations.enabled to false, and manually deploy gateway proxies.
- prometheus.enabled: Disable the default Prometheus instance as needed to provide your own. Otherwise, you can keep the default Prometheus server enabled, and deploy a production-level server to scrape metrics from it. For more information on each option, see Best practices for collecting metrics in production.
- rateLimiter.enabled: Set to true to install the rate limit server add-on.
- redis: Disable the default Redis deployment and provide your own backing database as needed. For more information, see Backing databases.
- glooMgmtServer.serviceType and telemetryGateway.service.type (OpenShift): In some OpenShift setups, you might not use load balancer service types. You can set these two service types to ClusterIP, and expose them by using OpenShift routes after installation.

Use the customizations in your Helm values file to install the Gloo Mesh Gateway components in your cluster.
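For reference, the install command typically looks like the following sketch, assuming your customizations are saved in gloo-gateway-single.yaml (the example file name used above). The --set keys for the cluster name and license key are assumptions based on common gloo-platform chart fields; confirm them in the Helm values documentation before you run the command.

# Sketch only: verify the --set keys against your chart version
helm upgrade -i gloo-platform gloo-platform/gloo-platform \
   --namespace=gloo-mesh \
   --version=$GLOO_VERSION \
   --values gloo-gateway-single.yaml \
   --set common.cluster=$CLUSTER_NAME \
   --set licensing.glooGatewayLicenseKey=$GLOO_MESH_GATEWAY_LICENSE_KEY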
Create a workspace that selects all clusters and namespaces by default, and workspace settings that enable communication across clusters. Gloo workspaces let you organize team resources across Kubernetes namespaces and clusters. In this example, you create a global workspace that imports and exports all resources and namespaces, and a workspace settings resource in the gloo-mesh-config namespace. Later, as your teams grow, you can create a workspace for each team, to enforce service isolation, set up federation, and even share resources by importing and exporting.

kubectl apply -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  name: $CLUSTER_NAME
  namespace: gloo-mesh
spec:
  workloadClusters:
    - name: '*'
      namespaces:
        - name: '*'
---
apiVersion: v1
kind: Namespace
metadata:
  name: gloo-mesh-config
---
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
  name: $CLUSTER_NAME
  namespace: gloo-mesh-config
spec:
  options:
    serviceIsolation:
      enabled: false
    federation:
      enabled: false
      serviceSelector:
        - {}
    eastWestGateways:
      - selector:
          labels:
            istio: eastwestgateway
EOF
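To confirm that the workspace resources were created, you can list them. This is a quick check that assumes the default plural resource names for the admin.gloo.solo.io CRDs.

# List the workspace and its settings
kubectl get workspaces -n gloo-mesh
kubectl get workspacesettings -n gloo-mesh-config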
Verify the installation
Verify that your Gloo Mesh Gateway setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:
- Your Gloo product license is valid and current.
- The Gloo CRDs are installed at the correct version.
- The management plane pods in the management cluster are running and healthy.
- The Gloo agent is running and connected to the management server.
meshctl check
Example output:
🟢 License status

 INFO  gloo-gateway enterprise license expiration is 25 Aug 24 10:38 CDT
 INFO  Valid GraphQL license module found

🟢 CRD version check

🟢 Gloo deployment status

Namespace | Name                           | Ready | Status
gloo-mesh | ext-auth-service               | 1/1   | Healthy
gloo-mesh | gloo-mesh-agent                | 1/1   | Healthy
gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
gloo-mesh | prometheus-server              | 1/1   | Healthy
gloo-mesh | rate-limiter                   | 1/1   | Healthy

🟢 Mgmt server connectivity to workload agents

Cluster | Registered | Connected Pod
test    | true       | gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv

Connected Pod                                    | Clusters
gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv | 1
Verify that the gateway proxy service is created and assigned an external address. It might take a few minutes for the load balancer to deploy. Note: If you did not deploy a managed gateway proxy through the Helm installation, manually deploy the gateway proxy instead.
kubectl get svc -n gloo-mesh-gateways
Example output:
NAME                   TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                                                       AGE
istio-ingressgateway   LoadBalancer   10.XX.X.XXX   34.XXX.XXX.XXX   15021:30826/TCP,80:31257/TCP,443:30673/TCP,15443:30789/TCP   48s
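To reuse the gateway address in later routing guides, you can save it in an environment variable. This sketch assumes a load balancer that reports an IP address; if your cloud provider returns a hostname instead (as AWS load balancers do), swap the ip field in the jsonpath for hostname. The variable name is only an example.

# Save the external address of the ingress gateway
export INGRESS_GW_ADDRESS=$(kubectl get svc -n gloo-mesh-gateways istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $INGRESS_GW_ADDRESS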
Example istioInstallations settings
You can customize the istioInstallations section of your Helm values file in the following ways.
- hub: Specify a Solo repo key that you can get by logging in to the Support Center and reviewing the Istio images built by Solo.io support article.
- tag: Specify the version that you downloaded, and append the solo tag, which is required to use many enterprise features, such as 1.20.7-patch0-solo. You can append other tags for the Solo distribution of Istio, as described in About the Solo distribution of Istio. Note: The Istio lifecycle manager is supported only for Istio versions 1.15.4 or later.
- revision and gatewayRevision: Take the Istio major and minor versions and replace the period with a hyphen, such as 1-20. Note: For testing environments only, you can deploy a revisionless installation by omitting the revision and gatewayRevision fields entirely. Revisionless installations permit in-place upgrades, which are quicker than the canary-based upgrades that are required for revisioned installations. Note that if you deploy multiple Istio installations in the same cluster, only one installation can be revisionless.
- k8s.serviceAnnotations: You can optionally provide cloud provider-specific service annotations for the gateway load balancer, such as the following annotations for AWS.
istioInstallations:
  controlPlane:
    enabled: true
    installations:
      - istioOperatorSpec:
          hub: $repo_key
          tag: 1.20.7-patch0-solo
        revision: 1-20
  enabled: true
  northSouthGateways:
    - enabled: true
      name: istio-ingressgateway
      installations:
        - clusters:
            - activeGateway: true
              name: $CLUSTER_NAME
          gatewayRevision: 1-20
          istioOperatorSpec:
            hub: $repo_key
            tag: 1.20.7-patch0-solo
            # Optional load balancer annotations
            components:
              ingressGateways:
                - enabled: true
                  k8s:
                    serviceAnnotations:
                      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
                      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
                      service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
                      service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
                      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:<cert>"
                      service.beta.kubernetes.io/aws-load-balancer-type: external
Next steps
Now that you have Gloo Mesh Gateway up and running, check out some of the following resources to learn more about your API Gateway and expand your routing and network capabilities.
Traffic management:
- Deploy sample apps in your cluster to follow the guides in the documentation.
- Configure HTTP or HTTPS listeners for your gateway.
- Review routing examples, such as header matching, redirects, or direct responses that you can configure for your API Gateway.
- Explore traffic management policies that you can apply to your routes and upstream services. For example, you might apply the proxy protocol policy to your API Gateway so that it preserves connection information such as the originating client IP address.
Gloo Mesh Gateway:
- Monitor and observe your environment with Gloo Mesh Gateway’s built-in telemetry tools.
Help and support:
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community Slack.
- Try out one of the Gloo workshops.