Best practices for production
Review the recommended practices for preparing optional security measures and setting up Gloo Mesh Enterprise in a production environment.
Deployment model
A production Gloo Mesh Enterprise setup consists of one management cluster that the Gloo management components are installed in, and one or more workload clusters that run service meshes which are registered with and managed by Gloo Mesh Enterprise. The management cluster serves as the management plane, and the workload clusters serve as the data plane, as depicted in the following diagram.
By default, the management server is deployed with one replica. To increase availability, you can increase the number of replicas that you deploy in the management cluster.
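As a sketch, you might raise the replica count through your Helm values. The exact field depends on your chart version; this example assumes the deploymentOverrides mechanism that is described later on this page, so verify it against the Helm values documentation:

```yaml
glooMgmtServer:
  deploymentOverrides:
    spec:
      replicas: 3   # run three management server replicas for higher availability
```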
In a production deployment, you typically want to avoid installing the management plane into a workload cluster that also runs a service mesh. Although Gloo Mesh Enterprise remains fully functional when the management and agent components run in the same cluster, you might have noisy neighbor concerns in which workload pods consume cluster resources and constrain the management processes. This constraint on management processes can in turn affect other workload clusters that the management components oversee. You can mitigate resource consumption issues by using Kubernetes best practices, such as node affinity, resource requests, and resource limits. Note that you must also use the same name for the cluster during both the management plane installation and cluster registration.
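For example, a sketch of isolating the management server with resource requests, limits, and a node selector through deploymentOverrides. The container name and node label are assumptions; adjust them to match your environment and the rendered deployment:

```yaml
glooMgmtServer:
  deploymentOverrides:
    spec:
      template:
        spec:
          # Hypothetical label that you add to dedicated management nodes
          nodeSelector:
            dedicated: gloo-management
          containers:
          # Container name is an assumption; check the rendered deployment
          - name: gloo-mesh-mgmt-server
            resources:
              requests:
                cpu: "1"
                memory: 1Gi
              limits:
                memory: 2Gi
```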
Management plane settings
Before you install the Gloo management plane into your management cluster, review the following options to help secure your installation. Each section details the benefits of the security option, and the necessary settings to specify in a Helm values file to use during your Helm installation.
You can see all possible fields for the Helm chart by running the following command:
helm show values gloo-platform/gloo-platform --version v2.3.24 > all-values.yaml
You can also review these fields in the Helm values documentation.
Certificate management
When you install Gloo Mesh Enterprise by using meshctl or the instructions in the getting started guide, Gloo Mesh Enterprise generates a self-signed root CA certificate and key that are used to generate the server TLS certificate for the Gloo management server. In addition, an intermediate CA certificate and key are generated that are used to sign client TLS certificates for every Gloo agent. For more information about the default setup, see Self-signed CAs with automatic client certificate rotation.
Using self-signed certificates and keys for the root CA and storing them on the management cluster is not a recommended security practice. The root CA certificate and key are highly sensitive; if compromised, they can be used to issue certificates for all agents in a workload cluster. In a production-level setup, make sure that the root CA credentials are properly stored with your preferred PKI provider, such as AWS Private CA, Google Cloud CA, or Vault, and that you use a certificate management tool, such as cert-manager, to automate issuing and renewing certificates.
Use the following links to learn about your setup options in production:
- TLS: Bring your own server TLS certificate
- mTLS: Bring your own CAs with automatic client TLS certificate rotation
- mTLS: Bring your own CAs and client TLS certificates
Overrides for default components
In some cases, you might need to modify the default deployment or service for the Gloo Mesh Enterprise components, such as the management server or agent. To do so, you can configure the deploymentOverrides and serviceOverrides settings for each component in your Helm values file. Then, you can upgrade your Gloo Mesh Enterprise installation to apply the new settings. Keep in mind that the component might be restarted to apply the new settings.
For settings that are key-value dictionaries, the overrides replace any existing keys in the default template. If the overrides do not match any existing keys, then the override values are added to the existing values, such as the following example.
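For instance, a purely illustrative sketch of how annotation keys merge; the annotation names here are made up for this example:

```yaml
# Default template (illustrative):
#   metadata:
#     annotations:
#       app.kubernetes.io/name: gloo-mesh-mgmt-server
glooMgmtServer:
  serviceOverrides:
    metadata:
      annotations:
        example.com/team: platform
# Result: the service keeps app.kubernetes.io/name and gains example.com/team
```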
For settings that are lists, the overrides replace any existing lists in the default template, such as the following example.
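For example, if the default deployment template defines a tolerations list, an override replaces the whole list rather than merging individual entries. The toleration values here are illustrative:

```yaml
glooMgmtServer:
  deploymentOverrides:
    spec:
      template:
        spec:
          # This list replaces any default tolerations entirely
          tolerations:
          - key: dedicated
            operator: Equal
            value: gloo
            effect: NoSchedule
```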
Example service override
Most commonly, the serviceOverrides section specifies cloud provider-specific annotations that might be required for your environment. For example, the following section applies the recommended Amazon Web Services (AWS) annotations to modify the created load balancer service.
glooMgmtServer:
serviceOverrides:
metadata:
annotations:
# AWS-specific annotations
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "9900"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "tcp"
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-scheme: internal
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: TCP
service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 10.0.50.50, 10.0.64.50
service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-0478784f04c486de5, subnet-09d0cf74c0117fcf3
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.connection_termination.enabled=true,deregistration_delay.timeout_seconds=1
# Kubernetes load balancer service type
serviceType: LoadBalancer
...
You can apply service overrides to the following components:
glooAgent
glooAnalyzer
glooInsightsEngine
glooMgmtServer
glooPortalServer
glooSpireServer
glooUi
redis
redisStore: for the management plane (insights and snapshot) and data plane (external auth service and rate limiter)
Example deployment overrides
For some components, you might want to modify the default deployment settings, such as the metadata or resource limits for CPU and memory. Or, you might want to provide your own resource, such as a config map, service account, or volume, that you mount to the deployment. This example shows how you might use the deploymentOverrides settings to specify a config map for a volume.
glooMgmtServer:
deploymentOverrides:
spec:
template:
spec:
volumes:
- name: envoy-config
configMap:
name: my-custom-envoy-config
...
You can apply deployment overrides to the following components:
glooAgent
glooAnalyzer
glooInsightsEngine
glooMgmtServer
glooPortalServer
glooSpireServer
glooUi
redis
redisStore: for the management plane (insights and snapshot) and data plane (external auth service and rate limiter)
FIPS-compliant image
If your environment runs workloads that require federal information processing compliance, you can use images of Gloo Mesh Enterprise components that are specially built to comply with NIST FIPS. Open the values.yaml file, search for the image section, and append -fips to the tag, such as in the following example.
...
glooMgmtServer:
image:
pullPolicy: IfNotPresent
registry: gcr.io/gloo-mesh
repository: gloo-mesh-mgmt-server
tag: 2.3.24-fips
Licensing
During installation, you can provide your license key strings directly in license fields such as glooMeshLicenseKey. For a more secure setup, you might want to provide those license keys in a secret named license-secret instead. For more information, see Provide your license key during installation.
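As a hedged sketch, the secret-based approach might look like the following. The licenseSecretName field name is an assumption; confirm it, and the expected keys inside the secret, against the Helm values documentation:

```yaml
licensing:
  # Name of a pre-created secret in the installation namespace
  licenseSecretName: license-secret
```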
Prometheus metrics
By default, a Prometheus instance is deployed with the management plane Helm chart to collect metrics for the Gloo management server. For a production deployment, you can either replace the built-in Prometheus server with your own instance, or remove high cardinality labels. For more information on each option, see Customization options.
Redis instance
By default, a Redis instance is deployed for certain management plane components, such as the Gloo management server and Gloo UI. For a production deployment, you can disable the default Redis deployment and provide your own backing instance instead.
For more information, see Backing databases.
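A hedged sketch of pointing the management plane at your own Redis instance; the exact field paths are assumptions, so verify them in the Helm values documentation:

```yaml
# Disable the built-in Redis deployment
redis:
  deployment:
    enabled: false
# Point the management server at your external instance (hypothetical address)
glooMgmtServer:
  redis:
    address: redis.mycompany.example:6379
```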
Break up large Envoy filters
Some Gloo policies, such as JWT or other external auth policies, are translated into Envoy filters during the Gloo translation process. These Envoy filters are stored in the Kubernetes data store etcd alongside other Gloo configurations and applied to the ingress gateway or sidecar proxy to enforce the policies. In environments where you apply policies to many apps and routes, the Envoy filter can become very large and exceed the maximum object size limit in etcd. When that limit is reached, etcd rejects the new configuration from Istio, which leads to policies not being applied and enforced properly.
To prevent this issue in your environment, set the EXPERIMENTAL_SEGMENT_ENVOY_FILTERS_BY_MATCHER environment variable on the Gloo management server to instruct the server to break up large Envoy filters into multiple smaller Envoy filters. In your Helm values file for the Gloo management server, add the following snippet:
glooMgmtServer:
extraEnvs:
EXPERIMENTAL_SEGMENT_ENVOY_FILTERS_BY_MATCHER:
value: "true"
Important: To safely upgrade and ensure that existing Envoy filters are correctly re-created, the Gloo management server and the Istio control plane istiod must temporarily be scaled down to 0 replicas. This upgrade procedure can have the following implications for your environment:
- Delayed configuration updates: During the upgrade, the Gloo management server and istiod control plane are temporarily scaled down. Because of that, the propagation of configuration changes to the sidecar or gateway proxy, such as new routing rules or security policies, is delayed. This can cause inconsistencies in traffic management and policy enforcement.
- Complex environments with long translation times: If you have a complex environment and your average translation time regularly takes more than 60 seconds, scaling down istiod might have unexpected impacts and delay the time for your traffic to return to normal.
- New pods cannot be added to the mesh: The Istio control plane istiod implements the sidecar injection webhook. When the control plane is scaled down, sidecar injection does not work and new pods cannot be added to the service mesh. You can manually inject sidecars into your pods. However, keep in mind that these pods do not receive traffic, because endpoint discovery is also disabled while the Istio control plane is scaled down. After the control plane is scaled back up, pods are automatically injected with sidecars and added to the mesh.
- mTLS certificate issues: If certificates expire while the Istio control plane is not available, mutual TLS between services in the mesh might be impacted.
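The scale-down procedure above might be sketched as follows. The gloo-mesh and istio-system namespaces and the deployment names are assumptions based on a default installation; adjust them to your setup:

```sh
# Temporarily scale down the management server and the Istio control plane
kubectl scale deployment gloo-mesh-mgmt-server -n gloo-mesh --replicas=0
kubectl scale deployment istiod -n istio-system --replicas=0

# ...perform the Helm upgrade that sets the environment variable...

# Scale both components back up
kubectl scale deployment gloo-mesh-mgmt-server -n gloo-mesh --replicas=1
kubectl scale deployment istiod -n istio-system --replicas=1
```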
Note that the EXPERIMENTAL_SEGMENT_ENVOY_FILTERS_BY_MATCHER environment variable is removed in Gloo Mesh Enterprise version 2.5.0, because the Envoy filter segmentation is promoted to standard behavior and enabled by default. You no longer need to set the environment variable. If you want to enable this feature in version 2.3.x or 2.4.x, use the upgrade steps in version 2.5 as general guidance for how to safely scale down the Gloo management server, Gloo agent, and istiod, and re-create the Envoy filters in your environment.
UI authentication
The Gloo UI supports OpenID Connect (OIDC) authentication from common providers such as Google, Okta, and Auth0. Users who access the UI are required to authenticate with the OIDC provider, and all requests to retrieve data from the API are authenticated.
You can configure OIDC authentication for the UI by providing your OIDC provider details in the glooUi section, such as in the following example.
...
glooUi:
enabled: true
auth:
enabled: true
backend: oidc
oidc:
appUrl: # The URL that the UI for the OIDC app is available at, from the DNS and other ingress settings that expose the OIDC app UI service.
clientId: # From the OIDC provider
clientSecret: # From the OIDC provider. Stored in a secret.
clientSecretName: dashboard
issuerUrl: # The issuer URL from the OIDC provider, usually something like 'https://<domain>.<provider_url>/'.
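The clientSecret value is read from a Kubernetes secret, named dashboard in this example. As a hedged sketch, you might create that secret as follows; the secret key name is an assumption, so check the Gloo UI documentation for the expected key:

```sh
# Hypothetical key name; verify against the Gloo UI auth documentation
kubectl create secret generic dashboard -n gloo-mesh \
  --from-literal=oidc-client-secret=$CLIENT_SECRET
```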
Data plane settings
Before you register workload clusters with Gloo Mesh Enterprise, review the following options to help secure your registration. Each section details the benefits of the security option, and the necessary settings to specify in a Helm values file to use during your Helm registration.
You can see all possible fields for the Helm chart by running the following command:
helm show values gloo-platform/gloo-platform --version v2.3.24 > all-values.yaml
You can also review these fields in the Helm values documentation.
FIPS-compliant image
If your environment runs workloads that require federal information processing compliance, you can use images of Gloo Mesh Enterprise components that are specially built to comply with NIST FIPS. Open the values.yaml file, search for the image section, and append -fips to the tag, such as in the following example.
...
glooAgent:
image:
pullPolicy: IfNotPresent
registry: gcr.io/gloo-mesh
repository: gloo-mesh-agent
tag: 2.3.24-fips
Certificate management
If you use the default self-signed certificates during Gloo Mesh Enterprise installation, you can follow the steps in the cluster registration documentation to use these certificates during cluster registration. If you set up Gloo Mesh Enterprise without secure communication for quick demonstrations, include the --set insecure=true flag during registration. Note that using the default self-signed certificate authorities (CAs) or using insecure mode is not suitable for production environments.
In production environments, you use the same custom certificates that you set up for Gloo Mesh Enterprise installation during cluster registration:
Ensure that when you installed Gloo Mesh Enterprise, you set up the relay certificates, such as with AWS Certificate Manager, HashiCorp Vault, or your own custom certs, including the relay forwarding and identity secrets in the management and workload clusters.
The relay certificate instructions include steps to modify your Helm values file to use the custom CAs, such as in the following relay section. Note that you might need to update the clientTlsSecret name and rootTlsSecret name values, depending on your certificate setup.
common:
  insecure: false
glooAgent:
  insecure: false
  relay:
    authority: gloo-mesh-mgmt-server.gloo-mesh
    clientTlsSecret:
      name: gloo-mesh-agent-$REMOTE_CLUSTER-tls-cert
      namespace: gloo-mesh
    rootTlsSecret:
      name: relay-root-tls-secret
      namespace: gloo-mesh
    serverAddress: $MGMT_SERVER_NETWORKING_ADDRESS
...
Kubernetes RBAC
To review the permissions of deployed Gloo components such as the management server and agent, see Gloo component permissions.