Control the rate of requests to destinations within the service mesh. The following example shows you how to create a basic rate limit policy that applies to a destination, based on a generic key.

For more in-depth examples of the Envoy and Set-style rate limiting APIs, see More rate limit policy examples.

Before you begin

  1. Complete the multicluster getting started guide to set up the following testing environment.

    • Three clusters along with environment variables for the clusters and their Kubernetes contexts.
    • The Gloo meshctl CLI, along with other CLI tools such as kubectl and istioctl.
    • The Gloo management server in the management cluster, and the Gloo agents in the workload clusters.
    • Istio installed in the workload clusters.
    • A simple Gloo workspace setup.
  2. Install Bookinfo and other sample apps.
  3. Make sure that the rate limiting service is installed and running. If not, install the rate limiting service.

      kubectl get pods --context $REMOTE_CONTEXT1 -A -l app=rate-limiter
      
  4. To use rate limiting policies, you must create the required RateLimitServerConfig, RateLimitServerSettings, and RateLimitClientConfig resources. To create these resources, you can either follow the Rate limit server setup guide or use the example resources in steps 1 - 3 of the Verify rate limit policies section. For a quick way to check whether these resources already exist, see the commands after this list.
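
If you already completed the Rate limit server setup guide, you can check for existing resources before you create them again. The following commands are a minimal sketch that assumes the resource names and namespaces used in the examples on this page (rl-server-config in gloo-mesh, and rl-server and rl-client-config in bookinfo).

  # Check for an existing rate limit server config in the gloo-mesh namespace.
  kubectl get RateLimitServerConfig rl-server-config -n gloo-mesh --context $REMOTE_CONTEXT1
  # Check for existing server settings and client config in the bookinfo namespace.
  kubectl get RateLimitServerSettings rl-server -n bookinfo --context $REMOTE_CONTEXT1
  kubectl get RateLimitClientConfig rl-client-config -n bookinfo --context $REMOTE_CONTEXT1

If a command returns a NotFound error, create that resource by following the steps in the Verify rate limit policies section.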

Configure rate limit policies

You can apply a rate limit policy at the destination level. For more information, see Applying policies.

Review the following sample configuration file. Then, continue to the Verify rate limit policies section for example steps to check that rate limiting is working.

  apiVersion: trafficcontrol.policy.gloo.solo.io/v2
  kind: RateLimitPolicy
  metadata:
    annotations:
      cluster.solo.io/cluster: ""
    name: rl-policy
    namespace: bookinfo
  spec:
    applyToDestinations:
    - port:
        number: 9080
      selector:
        labels:
          app: reviews
    config:
      ratelimitClientConfig:
        name: rl-client-config
      ratelimitServerConfig:
        name: rl-server-config
        namespace: gloo-mesh
      serverSettings:
        name: rl-server

Review the following settings to understand this configuration.

  • applyToDestinations: Use labels to apply the policy to destinations. Destinations might be a Kubernetes service, VirtualDestination, or ExternalService (if supported by the policy). If you do not specify any destinations or routes, the policy applies to all destinations in the workspace by default. If you do not specify any destinations but you do specify a route, the policy applies to the route but to no destinations.
  • config: The ratelimitServerConfig setting is required. The serverSettings and ratelimitClientConfig settings are optional, and can be added manually to the policy. In this example, the rate limit policy refers to the client config, server config, and server settings that you created before you began. For more information, see Rate limit server setup.
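
For example, if you omit the applyToDestinations section entirely, the policy applies to all destinations in the workspace, as described above. The following variant is a minimal sketch of that workspace-wide scope. The policy name rl-policy-workspace-wide is only illustrative; the config section reuses the same references as the example above.

  apiVersion: trafficcontrol.policy.gloo.solo.io/v2
  kind: RateLimitPolicy
  metadata:
    name: rl-policy-workspace-wide
    namespace: bookinfo
  spec:
    # No applyToDestinations section, so the policy applies to all
    # destinations in the workspace by default.
    config:
      ratelimitClientConfig:
        name: rl-client-config
      ratelimitServerConfig:
        name: rl-server-config
        namespace: gloo-mesh
      serverSettings:
        name: rl-server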

Verify rate limit policies

  1. Create a rate limit server config with the rate limiting rules that the server accepts. For more information, see Rate limit server setup. Note: Change cluster-1 as needed to the actual name of the workload cluster that runs the rate limiting service, such as the value of $REMOTE_CLUSTER1 from the multicluster setup.

kubectl apply --context $REMOTE_CONTEXT1 -f - << EOF
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerConfig
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-server-config
  namespace: gloo-mesh
spec:
  destinationServers:
  - port:
      number: 8083
    ref:
      cluster: cluster-1
      name: rate-limiter
      namespace: gloo-mesh
  raw:
    descriptors:
    - key: generic_key
      rateLimit:
        requestsPerUnit: 1
        unit: DAY
      value: counter
EOF
      
  2. Create a rate limit server settings resource to control how clients connect to the rate limit server. For more information, see Rate limit server setup. Note: Change cluster-1 as needed to the actual name of the workload cluster that runs the rate limiting service, such as the value of $REMOTE_CLUSTER1 from the multicluster setup.

kubectl apply --context $REMOTE_CONTEXT1 -f - << EOF
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerSettings
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-server
  namespace: bookinfo
spec:
  destinationServer:
    port:
      number: 8083
    ref:
      cluster: cluster-1
      name: rate-limiter
      namespace: gloo-mesh
EOF
      
  3. Create a rate limit client config that defines the rate limiting actions to take. For more information, see Rate limit server setup.

kubectl apply --context $REMOTE_CONTEXT1 -f - << EOF
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitClientConfig
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-client-config
  namespace: bookinfo
spec:
  raw:
    rateLimits:
    - actions:
      - genericKey:
          descriptorValue: counter
      limit:
        dynamicMetadata:
          metadataKey:
            key: envoy.filters.http.ext_authz
            path:
            - key: opa_auth
            - key: rateLimit
EOF
      
  4. Create the rate limit policy that you reviewed in the previous section.

kubectl apply --context $REMOTE_CONTEXT1 -f - << EOF
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitPolicy
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-policy
  namespace: bookinfo
spec:
  applyToDestinations:
  - port:
      number: 9080
    selector:
      labels:
        app: reviews
  config:
    ratelimitClientConfig:
      name: rl-client-config
    ratelimitServerConfig:
      name: rl-server-config
      namespace: gloo-mesh
    serverSettings:
      name: rl-server
EOF
      
  5. Send a request to the app. Create a temporary curl pod in the bookinfo namespace so that you can test the app setup. You can also use this method in Kubernetes 1.23 or later, although an ephemeral container might be simpler.

    1. Create the curl pod.
        kubectl run -it -n bookinfo --context ${REMOTE_CONTEXT1} curl --image=curlimages/curl:7.73.0 --rm -- sh
        
    2. Send a request to the reviews app from within the curl pod to test east-west rate limiting.
        curl http://reviews:9080/reviews/1 -v
        
  6. Repeat the request a few times. Because the rate limit policy limits requests to 1 per day, subsequent requests result in a 429 Too Many Requests error. For a quick way to script this check, see the sketch after these steps.

  7. Exit the temporary curl pod. The pod deletes itself.

      exit
      
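
The following commands are a minimal sketch of the check in step 6, run from inside the temporary curl pod. They assume only the reviews service and the 1-request-per-day limit that you configured earlier. If you prefer not to wait for the daily counter to reset between tests, you can lower the limit in the RateLimitServerConfig, for example by setting unit: MINUTE.

  # Run from inside the temporary curl pod. The first request typically
  # returns 200; once the limit of 1 request per day is reached, subsequent
  # requests return 429.
  for i in 1 2 3; do
    curl -s -o /dev/null -w "request $i: HTTP %{http_code}\n" http://reviews:9080/reviews/1
  done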

Cleanup

You can optionally remove the resources that you set up as part of this guide.
  kubectl --context $REMOTE_CONTEXT1 -n gloo-mesh delete RateLimitServerConfig rl-server-config
  kubectl --context $REMOTE_CONTEXT1 -n bookinfo delete RateLimitServerSettings rl-server
  kubectl --context $REMOTE_CONTEXT1 -n bookinfo delete RateLimitClientConfig rl-client-config
  kubectl --context $REMOTE_CONTEXT1 -n bookinfo delete RateLimitPolicy rl-policy
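
To confirm that the cleanup is complete, you can list the rate limiting resources again. The following commands are a quick sketch that assumes the same names and namespaces used throughout this guide; each command should report that no resources are found.

  kubectl --context $REMOTE_CONTEXT1 -n gloo-mesh get RateLimitServerConfig
  kubectl --context $REMOTE_CONTEXT1 -n bookinfo get RateLimitServerSettings,RateLimitClientConfig,RateLimitPolicy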