Basic rate limit policy
Apply a basic rate limiting policy.
Control the rate of requests to destinations within the service mesh. The following example shows you how to create a basic rate limit policy that applies to a destination or route, based on a generic key.
For more in-depth examples of the Envoy and Set-style rate limiting APIs, see More rate limit policy examples.
If you import or export resources across workspaces, your policies might not apply. For more information, see Import and export policies.
Before you begin
This guide assumes that you use the same names for components like clusters, workspaces, and namespaces as in the getting started guide. If you use different names, make sure to update the sample configuration files in this guide.
- Set up Gloo Mesh Gateway in a single cluster.
- Install Bookinfo and other sample apps.
- Configure an HTTP listener on your gateway and set up basic routing for the sample apps.
- Make sure that the rate limiting service is installed and running. If not, install the rate limiting service.
  kubectl get pods -A -l app=rate-limiter
To use rate limiting policies, you must create the required RateLimitServerConfig, RateLimitServerSettings, and RateLimitClientConfig resources. To create these resources, you can either follow the Rate limit server setup guide or use the example resources in the Verify rate limit policies section of this guide.
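If you already created these resources, you can optionally confirm that they exist before you continue. This check assumes the example resource names and namespaces that are used later in this guide.

kubectl get RateLimitServerConfig rl-server-config -n gloo-mesh
kubectl get RateLimitServerSettings rl-server -n bookinfo
kubectl get RateLimitClientConfig rl-client-config -n bookinfo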
Configure rate limit policies
You can apply a rate limit policy at the destination level. For more information, see Applying policies.
This policy currently does not support selecting VirtualDestinations as a destination.
You cannot apply this policy to a route that already has a redirect, rewrite, or direct response action. Keep in mind that these actions might not be explicitly defined in the route configuration. For example, invalid routes, such as routes with a wrong backing destination, are automatically replaced with a direct response action. First, verify that your route configuration is correct. Then, decide whether to apply the policy. To apply the policy, remove any redirect, rewrite, or direct response actions from the route. To keep those actions and not apply the policy, change the route labels of either the policy or the route so that they no longer match.
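If you want to apply the policy at the destination level instead, you use the applyToDestinations selector rather than applyToRoutes. The following snippet is only a sketch, assuming a Kubernetes ratings service on port 9080; the selector structure and values are assumptions, so adjust them to your environment and remember that VirtualDestinations are not supported.

applyToDestinations:
# Assumed selector shape and values for illustration only.
- port:
    number: 9080
  selector:
    labels:
      app: ratings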
Review the following sample configuration files. Continue to the Verify rate limit policies section for example steps on how to check that rate limiting is working.
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitPolicy
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-policy
  namespace: bookinfo
spec:
  applyToRoutes:
  - route:
      labels:
        route: ratings
  config:
    phase:
      postAuthz: {}
    ratelimitClientConfig:
      name: rl-client-config
    ratelimitServerConfig:
      name: rl-server-config
      namespace: gloo-mesh
    serverSettings:
      name: rl-server
Review the following table to understand this configuration.
Setting | Description
---|---
applyToRoutes | Use labels to configure which routes to apply the policy to. This example label matches the app and route from the example route table that you apply separately. If omitted, and you do not have another selector such as applyToDestinations, the policy applies to all routes in the workspace.
config | The ratelimitServerConfig is required. The serverSettings and ratelimitClientConfig are optional, and can be added manually in the policy. In this example, the rate limit policy refers to the client config, server config, and server settings that you create in the Verify rate limit policies section. For more information, see Rate limit server setup.
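For reference, the applyToRoutes labels in this policy must match labels that you set on a route in your route table. The following snippet is a sketch of what such a route might look like if you followed the getting started setup; the host, virtual gateway reference, and port are assumptions, so compare them against your own route table.

apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: www-example-com
  namespace: bookinfo
spec:
  hosts:
  - www.example.com
  # Assumed virtual gateway name and namespace from the getting started setup.
  virtualGateways:
  - name: istio-ingressgateway
    namespace: bookinfo
  http:
  - name: ratings
    # The rate limit policy's applyToRoutes selector matches this label.
    labels:
      route: ratings
    matchers:
    - uri:
        prefix: /ratings
    forwardTo:
      destinations:
      - ref:
          name: ratings
          namespace: bookinfo
        # Assumed port of the Bookinfo ratings service.
        port:
          number: 9080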
Verify rate limit policies
Create a rate limit server config with the rate limiting rules that the server accepts. For more information, see Rate limit server setup. Note: Change cluster-1 as needed to your cluster's actual name (the value of $CLUSTER_NAME).

kubectl apply -f - << EOF
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerConfig
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-server-config
  namespace: gloo-mesh
spec:
  destinationServers:
  - port:
      number: 8083
    ref:
      cluster: cluster-1
      name: rate-limiter
      namespace: gloo-mesh
  raw:
    descriptors:
    - key: generic_key
      rateLimit:
        requestsPerUnit: 1
        unit: DAY
      value: counter
EOF
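The 1-request-per-DAY limit is intentionally strict so that you can easily trigger it while you verify the policy. To allow more traffic, you change only the rateLimit fields of the descriptor. For example, the following sketch (with an assumed MINUTE unit) allows 3 requests per minute for the same generic_key counter.

raw:
  descriptors:
  - key: generic_key
    rateLimit:
      # Assumed values for illustration; adjust to your needs.
      requestsPerUnit: 3
      unit: MINUTE
    value: counter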
Create a rate limit server settings resource to control how clients connect to the rate limit server. For more information, see Rate limit server setup. Note: Change cluster-1 as needed to your cluster's actual name (the value of $CLUSTER_NAME).

kubectl apply -f - << EOF
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerSettings
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-server
  namespace: bookinfo
spec:
  destinationServer:
    port:
      number: 8083
    ref:
      cluster: cluster-1
      name: rate-limiter
      namespace: gloo-mesh
EOF
Create a rate limit client config to set up the rate limiting actions to take. For more information, see Rate limit server setup.

kubectl apply -f - << EOF
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitClientConfig
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-client-config
  namespace: bookinfo
spec:
  raw:
    rateLimits:
    - actions:
      - genericKey:
          descriptorValue: counter
      limit:
        dynamicMetadata:
          metadataKey:
            key: envoy.filters.http.ext_authz
            path:
            - key: opa_auth
            - key: rateLimit
EOF
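In this client config, the genericKey action sends the static descriptor value counter to the rate limit server, where it matches the generic_key descriptor in the server config. If you later want to count requests by a request attribute instead, the raw rate limits support other Envoy-style actions. The following sketch assumes the requestHeaders action from the Envoy rate limit API and would also require a matching user-id descriptor in the server config.

raw:
  rateLimits:
  - actions:
    # Assumed action and field names based on the Envoy requestHeaders action.
    - requestHeaders:
        descriptorKey: user-id
        headerName: X-User-Id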
Create the rate limit policy that you reviewed in the previous section.

kubectl apply -f - << EOF
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitPolicy
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-policy
  namespace: bookinfo
spec:
  applyToRoutes:
  - route:
      labels:
        route: ratings
  config:
    phase:
      postAuthz: {}
    ratelimitClientConfig:
      name: rl-client-config
    ratelimitServerConfig:
      name: rl-server-config
      namespace: gloo-mesh
    serverSettings:
      name: rl-server
EOF
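Optionally, confirm that the policy was created before you test it.

kubectl get RateLimitPolicy rl-policy -n bookinfo -o yaml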
Send a request to the app.
- HTTP:
curl -vik --connect-timeout 1 --max-time 5 --resolve www.example.com:80:${INGRESS_GW_IP} http://www.example.com:80/ratings/1
- HTTPS:
curl -vik --connect-timeout 1 --max-time 5 --resolve www.example.com:443:${INGRESS_GW_IP} https://www.example.com:443/ratings/1
Repeat the request a few times. Because the rate limit policy limits requests to 1 per day, the first request succeeds and subsequent requests result in a 429 - Too Many Requests error.
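To see the limit being enforced without scanning the verbose output, you can send a few requests in a row and print only the status codes. The following loop is a sketch for the HTTP listener; adjust the host and port if you use HTTPS. After the first allowed request, the remaining requests return 429.

# Send three requests and print only the HTTP status code of each.
for i in 1 2 3; do
  curl -s -o /dev/null -w "%{http_code}\n" --resolve www.example.com:80:${INGRESS_GW_IP} http://www.example.com:80/ratings/1
done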
Cleanup
You can optionally remove the resources that you set up as part of this guide.
kubectl -n gloo-mesh delete RateLimitServerConfig rl-server-config
kubectl -n bookinfo delete RateLimitServerSettings rl-server
kubectl -n bookinfo delete RateLimitClientConfig rl-client-config
kubectl -n bookinfo delete RateLimitPolicy rl-policy