As a platform administrator, you can set up the Gloo Mesh rate limit server while registering your workload clusters. By default, one rate limit server is deployed per cluster. If you have more specific rate limiting requirements, you can set up more servers, such as one per workspace, or create the rate limit server in a different namespace.

Then, you might delegate the configuration and settings for the rate limit server to the app owner or workspace administrator.

For more information about how the rate limit server resources work together, see About rate limiting.

Rate limit server setup

During Gloo Mesh registration, set up your workload clusters to use the Envoy Go/gRPC rate limit service. The server must be set up, configured, and healthy for rate limit policies to work. Platform admins typically install the server, because they have access to modify the Gloo Mesh Enterprise agent installation on your workload clusters.

To set up the rate limiter, see the workload cluster setup guide.

During the initial setup or a later upgrade, you can optionally update the following rate limiter settings. For more information, see the override settings documentation.

  • Number of servers: By default, one rate limit server is deployed per cluster. If you have more specific rate limiting requirements, you can set up more servers, such as one per workspace.
  • Number of replicas: You can increase the number of replicas that the rate limiter deployment creates. Each replica stores the rate limit counters in a shared backing Redis database that you set up. For example, you can use the built-in Redis instance or bring your own.
  • Other deployment settings: You might want to update other settings, such as the config maps, volumes, or resource limits for CPU and memory.
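The override settings above are typically applied through Helm values when you install or upgrade the agent. The following sketch is illustrative only: the value keys vary by Gloo Mesh version and chart, so verify every key against the override settings reference for your release before applying it.

```yaml
# Hypothetical Helm values overrides for the rate limit server.
# Key names are illustrative assumptions; check your chart's values reference.
rate-limiter:
  enabled: true
  rateLimiter:
    deployment:
      replicas: 3             # scale out the rate limiter replicas
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
  redis:
    enabled: false            # disable the built-in Redis to bring your own
  redisUrl: redis.my-namespace.svc.cluster.local:6379  # hypothetical external Redis address
```

Because all replicas share the backing Redis database, scaling the replica count changes availability but not the counter semantics.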

Rate limit server config

Configure the descriptors with the rate limiting rules for the server to accept. You can reuse the same config for multiple servers. To rate limit a request, the action from the client config must match one of the descriptors in the server config. You must create a rate limit server configuration before you can use rate limit policies. The platform admin, app owner, or workspace admin might configure the rate limit server.

Review the following example YAML file for a rate limit server config. For more information, see the API docs.

apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerConfig
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-server-config
  namespace: gloo-mesh
spec:
  destinationServers:
  - port:
      number: 8083
    ref:
      cluster: cluster-1
      name: rate-limiter
      namespace: gloo-mesh
  raw:
    descriptors:
    - key: generic_key
      rateLimit:
        requestsPerUnit: 1
        unit: DAY
      value: counter

Review the following settings to understand this configuration.

  • spec.destinationServers: This example uses the default rate-limiter server in the gloo-mesh namespace on port 8083.
  • spec.raw.descriptors: Set up a raw configuration for the rate limit server to enforce for your policies. Make sure that any rate limit client config that you create does not conflict with this server config. In this example, one rate limit descriptor is set up for requests that match the key: value label of generic_key: counter. These requests are rate limited to 1 per day. For more information, see the descriptors API reference.
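The generic_key descriptor above counts all matching requests against one shared counter. In the Envoy rate limit service's descriptor semantics, a descriptor that sets a key but omits the value matches any value for that key and keeps a separate counter per distinct value. The following hypothetical fragment sketches a per-user limit; user-id is an illustrative key name that your client config would have to send as an action.

```yaml
# Hypothetical server-side descriptor (illustrative key name).
# With no `value` set, each distinct user-id reported by the client
# gets its own counter of 100 requests per minute.
raw:
  descriptors:
  - key: user-id
    rateLimit:
      requestsPerUnit: 100
      unit: MINUTE
```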

Rate limit server settings

Optionally configure how a client, such as a sidecar or gateway proxy, connects to the rate limit server, including connection details like a request timeout. Rate limit server settings are an optional resource unless you have multiple servers per cluster, or the rate limit server has a non-default name or namespace.

If you don't create rate limit server settings, you must select the server to use in the rate limit policy or the rate limit client config. The app owner or app developer might create the rate limit server settings.

Review the following example YAML file for rate limit server settings. For more information, see the API docs.

apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerSettings
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-server
  namespace: bookinfo
spec:
  destinationServer:
    port:
      number: 8083
    ref:
      cluster: cluster-1
      name: rate-limiter
      namespace: gloo-mesh

Review the following settings to understand this configuration.

  • spec.destinationServer: This example connects to the default rate-limiter server in the gloo-mesh namespace on port 8083. No special connection settings, such as timeouts or denials, are set.
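If you do want connection settings, the server settings resource is where they go. The following variant is a sketch only: the requestTimeout and denyOnFail field names are assumptions based on similar rate limit settings APIs, so confirm them against the RateLimitServerSettings API docs before use.

```yaml
# Hypothetical variant that fails closed and bounds the rate limit check.
# The requestTimeout and denyOnFail field names are assumptions; verify
# against the RateLimitServerSettings API reference.
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerSettings
metadata:
  name: rl-server-strict
  namespace: bookinfo
spec:
  destinationServer:
    port:
      number: 8083
    ref:
      cluster: cluster-1
      name: rate-limiter
      namespace: gloo-mesh
  requestTimeout: 200ms   # assumed field: give up on slow rate limit checks
  denyOnFail: true        # assumed field: deny requests if the server is unreachable
```

Failing closed trades availability for enforcement: if the rate limit server is down, requests are denied rather than allowed through unlimited.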

Rate limit client config

Configure the actions for the Envoy client to take by matching each action to a server descriptor. You can reuse the same client config for multiple destinations or routes. You must create a rate limit client configuration before you can use rate limit policies. The operator or app owner might configure the rate limit client.

Review the following example YAML file for a rate limit client config. For more information, see the API docs.

apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitClientConfig
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-client-config
  namespace: bookinfo
spec:
  raw:
    rateLimits:
    - actions:
      - genericKey:
          descriptorValue: counter
      limit:
        dynamicMetadata:
          metadataKey:
            key: envoy.filters.http.ext_authz
            path:
            - key: opa_auth
            - key: rateLimit

Review the following settings to understand this configuration.

  • spec.raw: Set up a raw-style configuration for the rate limit client (the Envoy proxy) to enforce for your policies. Make sure that this rate limit client config does not conflict with the server config. In this example, the action generic_key: counter matches the expected descriptor in the server config. For other possible rate limiting actions, such as on request headers, see the API docs.
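A rate limit policy is what ties the three resources together by referencing them for a route or destination. The following sketch assumes a RateLimitPolicy shape with applyToRoutes and config reference fields; treat the field names as assumptions and verify them against the rate limit policy API docs, and note that the route label is a hypothetical selector.

```yaml
# Hypothetical policy wiring together the server config, server settings,
# and client config from this page. Field names are assumptions; verify
# against the RateLimitPolicy API reference.
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitPolicy
metadata:
  name: rl-policy
  namespace: bookinfo
spec:
  applyToRoutes:
  - route:
      labels:
        route: ratelimited        # hypothetical route label selector
  config:
    serverSettings:
      name: rl-server
      namespace: bookinfo
    ratelimitClientConfig:
      name: rl-client-config
      namespace: bookinfo
    ratelimitServerConfig:
      name: rl-server-config
      namespace: gloo-mesh
```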