Deployment model

A production Gloo Mesh Core setup consists of one management cluster that the Gloo management components are installed in, and one or more workload clusters that run service meshes, which are registered with and managed by Gloo Mesh Core. The management cluster serves as the management plane, and the workload clusters serve as the data plane, as depicted in the following diagram.

By default, the management server is deployed with one replica. To increase availability, you can increase the number of replicas that you deploy in the management cluster.
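
For example, the following sketch uses the deploymentOverrides setting that is described later on this page to run multiple replicas of the management server. This is a sketch only; your version of the Helm chart might expose a dedicated replica count setting instead.

glooMgmtServer:
  deploymentOverrides:
    spec:
      # Run three management server replicas for higher availability.
      replicas: 3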

In a production deployment, you typically want to avoid installing the management plane into a workload cluster that also runs a service mesh. Although Gloo Mesh Core remains fully functional when the management and agent components both run within the same cluster, you might have noisy neighbor concerns in which workload pods consume cluster resources and potentially constrain the management processes. This constraint on management processes can in turn affect other workload clusters that the management components oversee. However, you can prevent resource consumption issues by using Kubernetes best practices, such as node affinity, resource requests, and resource limits; see the sketch after the following diagram. Note that if you run the management and agent components in the same cluster, you must use the same name for the cluster during both the management plane installation and cluster registration.

Figure of a multicluster Gloo quick-start architecture, with a dedicated management cluster.
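
For example, if you do run the management components alongside workloads, the following sketch applies a node affinity constraint through the deploymentOverrides setting that is described later on this page. The node label in this example is hypothetical; resource requests and limits can be set in a similar way.

glooMgmtServer:
  deploymentOverrides:
    spec:
      template:
        spec:
          # Hypothetical node label that reserves dedicated nodes
          # for the Gloo management components.
          nodeSelector:
            dedicated: gloo-management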

Management plane settings

Before you install the Gloo management plane into your management cluster, review the following options to help secure your installation. Each section details the benefits of the security option and the settings to specify in your Helm values file during installation.

Certificate management

When you install Gloo Mesh Core by using meshctl or the instructions that are provided in the getting started guide, Gloo Mesh Core generates a self-signed root CA certificate and key that is used to generate the server TLS certificate for the Gloo management server. In addition, an intermediate CA certificate and key are generated that are used to sign client TLS certificates for every Gloo agent. For more information about the default setup, see Self-signed CAs with automatic client certificate rotation.
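
To inspect the root CA certificate that the default setup generates, you can decode it from the management cluster, such as in the following sketch. The secret name matches the relay-root-tls-secret default that is used later on this page; the data key (ca.crt) is an assumption that might differ in your setup.

# Print the subject and validity dates of the generated root CA certificate.
# The secret name (relay-root-tls-secret) is the default, and the data key
# (ca.crt) is an assumption that might differ in your setup.
kubectl get secret relay-root-tls-secret -n gloo-mesh \
  -o jsonpath='{.data.ca\.crt}' | base64 -d | openssl x509 -noout -subject -dates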

Using self-signed certificates and keys for the root CA and storing them on the management cluster is not a recommended security practice. The root CA certificate and key are highly sensitive information and, if compromised, can be used to issue certificates for all agents in a workload cluster. In a production-level setup, make sure that the root CA credentials are properly stored with your preferred PKI provider, such as AWS Private CA, Google Cloud CA, or Vault, and that you use a certificate management tool, such as cert-manager, to automate issuing and renewing certificates.

To learn about your setup options in production, see the certificate management guides.

Overrides for default components

In some cases, you might need to modify the default deployment or service for the Gloo Mesh Core components, such as the management server or agent. To do so, you can configure the deploymentOverrides and serviceOverrides settings for each component in your Helm values file. Then, you can upgrade your Gloo Mesh Core installation to apply these new settings. Keep in mind that the component might be restarted in order to apply the new settings.

For settings that are key-value dictionaries, the overrides replace any existing keys in the default template. If the overrides do not match any existing keys, then the override values are added to the existing values, as shown in the following example.
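
For instance, in the following hypothetical snippet, the app label matches an existing key and is replaced, while the team label is new and is added.

# Hypothetical default template metadata
metadata:
  labels:
    app: gloo-mesh-mgmt-server

# Override in your Helm values file
serviceOverrides:
  metadata:
    labels:
      app: my-custom-name   # existing key, so the value is replaced
      team: platform        # new key, so the value is added

# Result after the override is applied
metadata:
  labels:
    app: my-custom-name
    team: platform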

For settings that are lists, the overrides replace any existing lists in the default template, as shown in the following example.
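
For instance, in the following hypothetical snippet, the override does not append to the default volumes list, but replaces it entirely.

# Hypothetical default template list
spec:
  template:
    spec:
      volumes:
        - name: default-volume
          emptyDir: {}

# Override in your Helm values file
deploymentOverrides:
  spec:
    template:
      spec:
        volumes:
          - name: custom-volume
            configMap:
              name: my-config

# Result: the volumes list contains only 'custom-volume'.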

Example service override

Most commonly, the serviceOverrides section specifies cloud provider-specific annotations that might be required for your environment. For example, the following section applies the recommended Amazon Web Services (AWS) annotations for modifying the created load balancer service.

  
glooMgmtServer:
  serviceOverrides:
    metadata:
      annotations:
        # AWS-specific annotations
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "9900"
        service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "tcp"

        service.beta.kubernetes.io/aws-load-balancer-type: external
        service.beta.kubernetes.io/aws-load-balancer-scheme: internal
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: TCP
        service.beta.kubernetes.io/aws-load-balancer-private-ipv4-addresses: 10.0.50.50, 10.0.64.50
        service.beta.kubernetes.io/aws-load-balancer-subnets: subnet-0478784f04c486de5, subnet-09d0cf74c0117fcf3
        service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: deregistration_delay.connection_termination.enabled=true,deregistration_delay.timeout_seconds=1
  # Kubernetes load balancer service type
  serviceType: LoadBalancer
  ...
  

You can apply service overrides to the following components:

  • glooAgent
  • glooAnalyzer
  • glooInsightsEngine
  • glooMgmtServer
  • glooPortalServer
  • glooSpireServer
  • glooUi
  • redis
  • redisStore for the management plane (insights and snapshot) and data plane (external auth service and rate limiter)

Example deployment overrides

For some components, you might want to modify the default deployment settings, such as the metadata or resource limits for CPU and memory. Or, you might want to provide your own resource such as a config map, service account, or volume that you mount to the deployment. This example shows how you might use the deploymentOverrides to specify a config map for a volume.

  
glooMgmtServer:
  deploymentOverrides:
    spec:
      template:
        spec:
          volumes:
            - name: envoy-config
              configMap:
                name: my-custom-envoy-config
  ...
  

You can apply deployment overrides to the following components:

  • extAuthService
  • glooAgent
  • glooAnalyzer
  • glooInsightsEngine
  • glooMgmtServer
  • glooPortalServer
  • glooSpireServer
  • glooUi
  • rateLimiter
  • redis
  • redisStore for the management plane (insights and snapshot) and data plane (external auth service and rate limiter)

FIPS-compliant image

If your environment runs workloads that require federal information processing compliance, you can use images of Gloo Mesh Core components that are specially built to comply with NIST FIPS. Open the values.yaml file, search for the image section, and append -fips to the tag, such as in the following example.

  ...
glooMgmtServer:
  image:
    pullPolicy: IfNotPresent
    registry: gcr.io/gloo-mesh
    repository: gloo-mesh-mgmt-server
    tag: 2.7.0-beta1-fips
  

Licensing

During installation, you can provide your license key strings directly in license fields such as glooMeshLicenseKey. For a more secure setup, you might want to provide those license keys in a secret named license-secret instead. For more information, see Provide your license key during installation.
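
For example, you might create the license-secret with a command similar to the following sketch. The key name within the secret is a placeholder here; see Provide your license key during installation for the exact format that your installation expects.

# Sketch only: store a license key in a secret instead of a plain Helm value.
# The secret key name (gloo-mesh-license-key) is a placeholder; check the
# licensing guide for the exact keys that your installation expects.
kubectl create secret generic license-secret -n gloo-mesh \
  --from-literal=gloo-mesh-license-key=$GLOO_MESH_LICENSE_KEY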

Prometheus metrics

By default, a Prometheus instance is deployed with the management plane Helm chart to collect metrics for the Gloo management server. For a production deployment, you can either replace the built-in Prometheus server with your own instance, or remove high cardinality labels. For more information on each option, see Customization options.
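
For example, to replace the built-in Prometheus server with your own instance, your Helm values might start with a sketch similar to the following. The exact setting names depend on your Gloo Mesh Core version; see Customization options for the supported values.

# Sketch only: disable the built-in Prometheus server so that you can
# scrape the management server metrics with your own instance instead.
prometheus:
  enabled: false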

Redis instance

By default, a Redis instance is deployed for certain management plane components, such as the Gloo management server and Gloo UI. For a production deployment, you can disable the default Redis deployment and provide your own backing instance instead.
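
For example, your Helm values for an external instance might look similar to the following sketch. The setting names here are assumptions; see Backing databases for the supported values in your version.

# Sketch only: disable the built-in Redis deployment and point the
# management plane to your own backing instance. The setting names here
# are assumptions; see the Backing databases documentation.
redis:
  deployment:
    enabled: false
  address: my-external-redis.example.com:6379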

For more information, see Backing databases.

Redis safe mode

By default, safe mode is enabled so that the Gloo management server translates custom Gloo resources only when the complete context of all workload clusters is populated in Redis or in the management server’s local memory.

If you disabled safe mode during your tests, such as to register a workload cluster with connectivity issues, it is recommended that you re-enable safe mode before you move to production. To enable safe mode, follow these general steps:

  1. Scale down the number of Gloo management server pods to 0.

      kubectl scale deployment gloo-mesh-mgmt-server --replicas=0 -n gloo-mesh
      
  2. Upgrade your Gloo Mesh Core installation, and add the following settings to the Helm values file for the Gloo management plane.

      
    glooMgmtServer:
      safeMode: true
      
  3. Scale the Gloo management server back up to the number of desired replicas. The following example uses 1 replica.

      kubectl scale deployment gloo-mesh-mgmt-server --replicas=1 -n gloo-mesh
      

To learn more about safe mode and how translation works in Gloo Mesh Core, see Safe mode.

Redis I/O threads

If you plan to use the built-in Redis instance in production and you experience performance issues, you can increase the number of I/O threads in Redis by using the redis.deployment.ioThreads Helm option. Redis is mostly single-threaded; however, some operations, such as UNLINK or slow I/O accesses, can be performed on side threads. Increasing the number of side threads can help maximize the performance of Redis, because these operations can run in parallel.

If you set I/O threads, the Redis pod must be restarted during the upgrade so that the changes can be applied. During the restart, the input snapshots from all connected Gloo agents are removed from the Redis cache. If you also update settings in the Gloo management server that require the management server pod to restart, the management server’s local memory is cleared and all Gloo agents are disconnected. Although the Gloo agents attempt to reconnect to send their input snapshots and re-populate the Redis cache, some agents might take longer to connect or fail to connect at all. To ensure that the Gloo management server halts translation until the input snapshots of all workload cluster agents are present in Redis, it is recommended that you enable safe mode on the management server in the same upgrade in which you update the I/O threads for the Redis pod. For more information, see Safe mode. Note that in version 2.6.0 and later, safe mode is enabled by default.

To update I/O side threads in Redis as part of your Gloo Mesh Core upgrade:

  1. Scale down the number of Gloo management server pods to 0.

      kubectl scale deployment gloo-mesh-mgmt-server --replicas=0 -n gloo-mesh
      
  2. Upgrade Gloo Mesh Core and use the following settings in your Helm values file for the management server. Make sure to also increase the number of CPU cores to one core per thread, and add an additional CPU core for the main Redis thread. The following example also enables safe mode on the Gloo management server to ensure translation is done with the complete context of all workload clusters.

      
    glooMgmtServer:
      safeMode: true
    redis: 
      deployment: 
        ioThreads: 2
        resources: 
          requests: 
            cpu: 3
          limits: 
            cpu: 3
      
  3. Scale the Gloo management server back up to the number of desired replicas. The following example uses 1 replica.

      kubectl scale deployment gloo-mesh-mgmt-server --replicas=1 -n gloo-mesh
      

UI authentication

The Gloo UI supports OpenID Connect (OIDC) authentication from common providers such as Google, Okta, and Auth0. Users who access the UI are required to authenticate with the OIDC provider, and all requests to retrieve data from the API are authenticated.

You can configure OIDC authentication for the UI by providing your OIDC provider details in the glooUi section, such as the following.

  ...
glooUi:
  enabled: true
  auth:
    enabled: true
    backend: oidc
    oidc:
      appUrl: # The URL that the Gloo UI is available at, based on the DNS and other ingress settings that expose the Gloo UI service.
      clientId: # From the OIDC provider
      clientSecret: # From the OIDC provider. Stored in a secret.
      clientSecretName: dashboard
      issuerUrl: # The issuer URL from the OIDC provider, usually something like 'https://<domain>.<provider_url>/'.
  
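
The clientSecretName value refers to a Kubernetes secret that stores the client secret from your OIDC provider. You might create that secret with a command similar to the following sketch; the key name within the secret is a placeholder, so check the Gloo UI authentication guide for the exact format.

# Sketch only: store the OIDC client secret in the secret that the
# clientSecretName field references. The key name (client-secret) is a
# placeholder; check the Gloo UI authentication guide for the exact format.
kubectl create secret generic dashboard -n gloo-mesh \
  --from-literal=client-secret=$OIDC_CLIENT_SECRET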

Data plane settings

Before you register workload clusters with Gloo Mesh Core, review the following options to help secure your registration. Each section details the benefits of the security option, and the necessary settings to specify in a Helm values file to use during your Helm registration.

FIPS-compliant image

If your environment runs workloads that require federal information processing compliance, you can use images of Gloo Mesh Core components that are specially built to comply with NIST FIPS. Open the values.yaml file, search for the image section, and append -fips to the tag, such as in the following example.

  ...
glooAgent:
  image:
    pullPolicy: IfNotPresent
    registry: gcr.io/gloo-mesh
    repository: gloo-mesh-agent
    tag: 2.7.0-beta1-fips
  

Certificate management

If you use the default self-signed certificates during Gloo Mesh Core installation, you can follow the steps in the cluster registration documentation to use these certificates during cluster registration. If you set up Gloo Mesh Core without secure communication for quick demonstrations, include the --set insecure=true flag during registration. Note that the default self-signed certificate authorities (CAs) and insecure mode are not suitable for production environments.

In production environments, you use the same custom certificates during cluster registration that you set up for the Gloo Mesh Core installation:

  1. Ensure that when you installed Gloo Mesh Core, you set up the relay certificates, such as with AWS Certificate Manager, HashiCorp Vault, or your own custom certs, including the relay forwarding and identity secrets in the management and workload clusters.

  2. The relay certificate instructions include steps to modify your Helm values file to use the custom CAs, such as in the following relay section. Note that you might need to update the clientTlsSecret and rootTlsSecret name values, depending on your certificate setup.

      
    common:
      insecure: false
    glooAgent:
      insecure: false
      relay:
        authority: gloo-mesh-mgmt-server.gloo-mesh
        clientTlsSecret:
          name: gloo-mesh-agent-$REMOTE_CLUSTER-tls-cert
          namespace: gloo-mesh
        rootTlsSecret:
          name: relay-root-tls-secret
          namespace: gloo-mesh
        serverAddress: $MGMT_SERVER_NETWORKING_ADDRESS
    ...
      

Kubernetes RBAC

To review the permissions of deployed Gloo components such as the management server and agent, see Gloo component permissions.
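
For a quick review of what is deployed in your own clusters, you can also list the Gloo service accounts and the cluster-wide bindings directly, such as with the following commands.

# List the service accounts that the Gloo components run as.
kubectl get serviceaccounts -n gloo-mesh

# Find the cluster-wide role bindings for the Gloo components.
kubectl get clusterrolebindings | grep gloo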