Best practices for production
Review the following recommended practices for preparing optional security measures and setting up Gloo in a production environment.
Deployment model
A production Gloo Gateway setup consists of one management cluster that the Gloo Gateway management components are installed in, and one or more workload clusters that run gateway proxies which are registered with and managed by Gloo Gateway. The management cluster serves as the control plane, and the workload clusters serve as the data plane, as depicted in the following diagram.
By default, the management server is deployed with one replica. To increase availability, you can increase the number of replicas that you deploy in the management cluster. Additionally, you can create multiple management clusters, and deploy one or more replicas of the management server to each cluster. For more information, see High availability and disaster recovery.
In a production deployment, you typically want to avoid installing the control plane into a workload cluster that also runs a gateway proxy and other app workloads. Although Gloo Gateway remains fully functional when the management and agent components run within the same cluster, workload pods can become noisy neighbors that consume cluster resources and constrain the management processes. This constraint on the management processes can in turn affect the other workload clusters that the management components oversee. You can prevent such resource contention by following Kubernetes best practices, such as node affinity, resource requests, and resource limits. Note that you must also use the same cluster name during both the control plane installation and cluster registration.
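For example, the following sketch uses the deploymentOverrides field, which is described later on this page, to run multiple management server replicas and pin them to dedicated nodes. The replica count and node label are placeholder assumptions to adapt to your environment.
glooMgmtServer:
  deploymentOverrides:
    spec:
      replicas: 3   # run multiple management server replicas for higher availability
      template:
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      # hypothetical label for nodes reserved for Gloo management components
                      - key: dedicated
                        operator: In
                        values: ["gloo-management"]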
Control plane settings
Before you install the Gloo Gateway control plane into your management cluster, review the following options to help secure your installation. Each section details the benefits of the security option, and the necessary settings to specify in a Helm values file to use during your Helm installation.
You can see all possible fields for the Helm chart by running the following command:
helm show values gloo-platform/gloo-platform --version v2.4.3 > all-values.yaml
You can also review these fields in the Helm values documentation.
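If you have not added the Helm repository yet, a typical sequence looks like the following. The repository URL is the one commonly used for the Gloo Platform charts; confirm it against the installation guide for your version.
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update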
Licensing
During installation, you can provide your license key strings directly in license fields such as glooGatewayLicenseKey. For a more secure setup, you might want to provide those license keys in a secret instead.
- Before you install Gloo Gateway, create a secret with your license keys in the gloo-mesh namespace of your management cluster.
cat << EOF | kubectl apply -n gloo-mesh -f -
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: license-secret
  namespace: gloo-mesh
data: # values must be base64-encoded license key strings
  gloo-mesh-license-key: ""
  gloo-network-license-key: ""
  gloo-gateway-license-key: ""
  gloo-trial-license-key: ""
EOF
- When you install the Gloo Gateway control plane in your management cluster, specify the secret name as the value for the licensing.licenseSecretName field in your Helm values file.
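For example, the corresponding section of your Helm values file might look like the following, assuming the license-secret name from the previous step.
licensing:
  licenseSecretName: license-secret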
FIPS-compliant image
If your environment runs workloads that require federal information processing compliance, you can use images of Gloo Gateway components that are specially built to comply with NIST FIPS. Open the values.yaml file, search for the image section, and append -fips to the tag, such as in the following example.
...
glooMgmtServer:
  image:
    pullPolicy: IfNotPresent
    registry: gcr.io/gloo-mesh
    repository: gloo-mesh-mgmt-server
    tag: 2.4.3-fips
Certificate management
Gloo's default behavior is to create self-signed certificates at install time to handle bootstrapping mTLS connectivity between the management server and agent components of Gloo. To use these default certificates, leave the glooMgmtServer.relay.disableCa and glooMgmtServer.relay.disableCaCertGeneration values set to false. If you prefer to set up Gloo without secure communication for quick demonstrations, include the --set insecure=true flag.
In production installations, do not use the default root CA certificate and intermediate signing CAs that are automatically generated and self-signed by Gloo. Instead, add automation so that the certificates can be easily rotated as described in the certificate management guide.
To supply your custom certificates during Gloo installation:
- Select the certificate management approach that you want to use, such as AWS Certificate Manager, HashiCorp Vault, or your own custom certs.
- As you follow those instructions, make sure that you create relay forwarding and identity secrets in the management and workload clusters.
- As you follow those instructions, modify your Helm values file to use the custom CAs, such as in the following glooMgmtServer section. Note that you might need to update the tlsSecret and signingTlsSecret name values, depending on your certificate setup.
common:
  insecure: false
glooMgmtServer:
  insecure: false
  relay:
    disableCa: true
    disableCaCertGeneration: true
    signingTlsSecret:
      name: relay-tls-signing-secret
    tlsSecret:
      name: relay-server-tls-secret
Deployment and service overrides
In some cases, you might need to modify the default deployment of the glooMgmtServer with your own Kubernetes resources. You can specify resources and annotations for the management server deployment in the glooMgmtServer.deploymentOverrides field, and resources and annotations for the service that exposes the deployment in the glooMgmtServer.serviceOverrides field.
Most commonly, the serviceOverrides section specifies cloud provider-specific annotations that might be required for your environment. For example, the following section applies the recommended Amazon Web Services (AWS) annotations for modifying the created load balancer service.
glooMgmtServer:
  serviceOverrides:
    metadata:
      annotations:
        # AWS-specific annotations
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "9900"
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "tcp"
        service.beta.kubernetes.io/aws-load-balancer-type: external
        service.beta.kubernetes.io/aws-load-balancer-scheme: internal
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: TCP
        service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 10.0.50.50, 10.0.64.50
        service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-0478784f04c486de5, subnet-09d0cf74c0117fcf3
        service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.connection_termination.enabled=true,deregistration_delay.timeout_seconds=1
  # Kubernetes load balancer service type
  serviceType: LoadBalancer
...
In less common cases, you might want to provide other resources, like a config map or service account. This example shows how you might use the deploymentOverrides field to add a config map as a volume to the management server deployment.
glooMgmtServer:
  deploymentOverrides:
    spec:
      template:
        spec:
          # Config map sources are defined as volumes on the pod spec
          volumes:
            - name: envoy-config
              configMap:
                name: my-custom-envoy-config
...
UI authentication
The Gloo UI supports OpenID Connect (OIDC) authentication from common providers such as Google, Okta, and Auth0. Users who access the UI are required to authenticate with the OIDC provider, and all requests to retrieve data from the API are authenticated.
You can configure OIDC authentication for the UI by providing your OIDC provider details in the glooUi section, such as in the following example.
...
glooUi:
  enabled: true
  auth:
    enabled: true
    backend: oidc
    oidc:
      appUrl: # The URL that the UI for the OIDC app is available at, from the DNS and other ingress settings that expose the OIDC app UI service.
      clientId: # From the OIDC provider
      clientSecret: # From the OIDC provider. Stored in a secret.
      clientSecretName: dashboard
      issuerUrl: # The issuer URL from the OIDC provider, usually something like 'https://<domain>.<provider_url>/'.
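For example, you might create the referenced client secret before the Helm installation. This is a hedged sketch: the key name oidc-client-secret is an assumption, so verify the expected key against the Helm values documentation for your version.
# The key name below is an assumption; verify it for your Gloo version.
kubectl create secret generic dashboard --namespace gloo-mesh \
  --from-literal=oidc-client-secret=$OIDC_CLIENT_SECRET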
Redis instance
By default, a Redis instance is deployed for certain control plane components, such as the Gloo management server and Gloo UI. For a production deployment, you can disable the default Redis deployment and provide your own backing database instead.
For more information, see Backing databases.
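For example, a hedged sketch of disabling the built-in Redis and pointing the management server at an external instance might look like the following. The field names redis.deployment.enabled and glooMgmtServer.redis.address are assumptions based on typical 2.x chart layouts; verify them against all-values.yaml before you use them.
redis:
  deployment:
    enabled: false   # assumed field: do not deploy the built-in Redis
glooMgmtServer:
  redis:
    address: my-redis.example.com:6379   # hypothetical external Redis endpoint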
Prometheus metrics
By default, a Prometheus instance is deployed with the control plane Helm chart to collect metrics for the Gloo management server. For a production deployment, you can either replace the built-in Prometheus server with your own instance, or locally federate metrics and provide them to your production monitoring system. For more information on each option, see Best practices for collecting metrics in production.
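For example, if you bring your own Prometheus instance, you might disable the built-in server during installation. The prometheus.enabled field name is an assumption; verify it against all-values.yaml for your version, and configure your own monitoring system to scrape the management server metrics as described in the linked best practices.
prometheus:
  enabled: false   # assumed field: do not deploy the built-in Prometheus server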
Data plane settings
Before you register workload clusters with Gloo, review the following options to help secure your registration. Each section details the benefits of the security option, and the necessary settings to specify in a Helm values file to use during cluster registration.
You can see all possible fields for the Helm chart by running the following command:
helm show values gloo-platform/gloo-platform --version v2.4.3 > all-values.yaml
You can also review these fields in the Helm values documentation.
FIPS-compliant image
If your environment runs workloads that require federal information processing compliance, you can use images of Gloo Gateway components that are specially built to comply with NIST FIPS. Open the values.yaml file, search for the image section, and append -fips to the tag, such as in the following example.
...
glooAgent:
  image:
    pullPolicy: IfNotPresent
    registry: gcr.io/gloo-mesh
    repository: gloo-mesh-agent
    tag: 2.4.3-fips
Certificate management
If you use the default self-signed certificates during Gloo installation, you can follow the steps in the installation documentation to use these certificates during cluster registration. If you set up Gloo without secure communication for quick demonstrations, include the --set insecure=true flag during registration. Note that using the default self-signed certificate authorities (CAs) or using insecure mode is not suitable for production environments.
In production environments, use the same custom certificates during cluster registration that you set up for the Gloo installation:
- Ensure that when you installed Gloo Gateway, you set up the relay certificates, such as with AWS Certificate Manager, HashiCorp Vault, or your own custom certs, including the relay forwarding and identity secrets in the management and workload clusters.
- The relay certificate instructions include steps to modify your Helm values file to use the custom CAs, such as in the following relay section. Note that you might need to update the clientTlsSecret name and rootTlsSecret name values, depending on your certificate setup.
common:
  insecure: false
glooAgent:
  insecure: false
  relay:
    authority: gloo-mesh-mgmt-server.gloo-mesh
    clientTlsSecret:
      name: gloo-mesh-agent-$REMOTE_CLUSTER-tls-cert
      namespace: gloo-mesh
    rootTlsSecret:
      name: relay-root-tls-secret
      namespace: gloo-mesh
    serverAddress: $MGMT_SERVER_NETWORKING_ADDRESS
...
Kubernetes RBAC
For information about controlling access to your Gloo resources with Kubernetes role-based access control (RBAC), see User access.
To review the permissions of deployed Gloo components such as the management server and agent, see Gloo component permissions.
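For example, a minimal sketch of a read-only Role for Gloo custom resources in a single namespace might look like the following. The API group names and namespace are assumptions; verify the groups with kubectl api-resources in your cluster, and bind the Role to your users or groups with a RoleBinding as usual.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gloo-resources-viewer
  namespace: web-team        # hypothetical team namespace
rules:
  - apiGroups:
      # assumed Gloo API groups; verify with kubectl api-resources
      - networking.gloo.solo.io
      - apimanagement.gloo.solo.io
    resources: ["*"]
    verbs: ["get", "list", "watch"]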