# Basic rate limit policy
Control the rate of requests to destinations within the service mesh. The following example shows you how to create a basic rate limit policy that applies to a destination, based on a generic key.
For more in-depth examples of the Envoy and Set-style rate limiting APIs, see More rate limit policy examples.
If you import or export resources across workspaces, your policies might not apply. For more information, see Import and export policies.
## Before you begin
- Complete the multicluster getting started guide to set up the following testing environment.
  - Three clusters along with environment variables for the clusters and their Kubernetes contexts.
  - The Gloo Platform CLI, `meshctl`, along with other CLI tools such as `kubectl` and `istioctl`.
  - The Gloo management server in the management cluster, and the Gloo agents in the workload clusters.
  - Istio installed in the workload clusters.
  - A simple Gloo workspace setup.
  - Bookinfo and other sample apps installed.
- Make sure that the rate limiting service is installed and running. If not, install the rate limiting service.

  ```sh
  kubectl get pods --context ${REMOTE_CONTEXT1} -A -l app=rate-limiter
  ```
- To use rate limiting policies, you must create the required `RateLimitServerConfig`, `RateLimitServerSettings`, and `RateLimitClientConfig` resources. To create these resources, you can either follow the Rate limit server setup guide or use the example resources in step 2 of the Verify rate limit policies section.
## Configure rate limit policies
You can apply a rate limit policy at the destination level. For more information, see Applying policies.
When you create the policy with a destination selector, only Kubernetes services can be specified in the `applyToDestinations` section. Gloo virtual destinations and Gloo external services are not supported.
Review the following sample configuration files. Continue to the Verify rate limit policies section for example steps on how to check that rate limiting is working.
```yaml
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitPolicy
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-policy
  namespace: bookinfo
spec:
  applyToDestinations:
  - port:
      number: 9080
    selector:
      labels:
        app: reviews
  config:
    ratelimitClientConfig:
      name: rl-client-config
    ratelimitServerConfig:
      name: rl-server-config
      namespace: gloo-mesh-addons
    serverSettings:
      name: rl-server
```
| Setting | Description |
| --- | --- |
| `spec.applyToDestinations` | Configure which destinations to apply the policy to, by using labels. Destinations can be a Kubernetes service, VirtualDestination, or ExternalService. If you do not specify any destinations or routes, the policy applies to all destinations in the workspace by default. If you do not specify any destinations but you do specify a route, the policy applies to the route but to no destinations. In this example, the rate limit policy applies to all destinations in the workspace with the `app: reviews` label. |
| `config` | The `ratelimitServerConfig` is required. The `serverSettings` and `ratelimitClientConfig` are optional, and can be added manually in the policy. In this example, the rate limit policy refers to the client config, server config, and server settings that you created before you began. For more information, see Rate limit server setup. |
## Verify rate limit policies
- Make sure that the rate limit server is installed and running. If not, install the rate limiter.

  ```sh
  kubectl get pods --context ${REMOTE_CONTEXT1} -A -l app=rate-limiter
  ```
- Create the rate limit server resources that are required to use rate limiting policies. For more information about these resources, see Rate limit server setup. Note: Change `cluster-1` as needed to your cluster's actual name.

```sh
kubectl --context ${REMOTE_CONTEXT1} apply -f - <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerConfig
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-server-config
  namespace: gloo-mesh-addons
spec:
  destinationServers:
  - port:
      number: 8083
    ref:
      cluster: cluster-1
      name: rate-limiter
      namespace: gloo-mesh-addons
  raw:
    descriptors:
    - key: generic_key
      rateLimit:
        requestsPerUnit: 1
        unit: DAY
      value: counter
---
apiVersion: admin.gloo.solo.io/v2
kind: RateLimitServerSettings
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-server
  namespace: bookinfo
spec:
  destinationServer:
    port:
      number: 8083
    ref:
      cluster: cluster-1
      name: rate-limiter
      namespace: gloo-mesh-addons
---
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitClientConfig
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-client-config
  namespace: bookinfo
spec:
  raw:
    rateLimits:
    - actions:
      - genericKey:
          descriptorValue: counter
EOF
```
- Apply the example rate limit policy in your example setup.

```sh
kubectl --context ${REMOTE_CONTEXT1} apply -f - <<EOF
apiVersion: trafficcontrol.policy.gloo.solo.io/v2
kind: RateLimitPolicy
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: rl-policy
  namespace: bookinfo
spec:
  applyToDestinations:
  - port:
      number: 9080
    selector:
      labels:
        app: reviews
  config:
    ratelimitClientConfig:
      name: rl-client-config
    ratelimitServerConfig:
      name: rl-server-config
      namespace: gloo-mesh-addons
    serverSettings:
      name: rl-server
EOF
```
- Send a request to the reviews app from within a `curl` pod to test east-west rate limiting. Choose one of the following options.

  Temporary curl pod: Create a temporary curl pod in the `bookinfo` namespace, so that you can test the app setup. You can also use this method in Kubernetes 1.23 or later, but an ephemeral container might be simpler, as shown in the other option.

  1. Create the curl pod.

     ```sh
     kubectl run -it -n bookinfo --context $REMOTE_CONTEXT1 curl \
       --image=curlimages/curl:7.73.0 --rm -- sh
     ```

  2. Send a request to the reviews app.

     ```sh
     curl http://reviews:9080/reviews/1 -v
     ```

     Example output:

     ```json
     {
       "id": "1",
       "podname": "reviews-v2-65c4dc6fdc-6xlhj",
       "clustername": "null",
       "reviews": [
         {
           "reviewer": "Reviewer1",
           "text": "An extremely entertaining play by Shakespeare. The slapstick humour is refreshing!",
           "rating": {"stars": 5, "color": "black"}
         },
         {
           "reviewer": "Reviewer2",
           "text": "Absolutely fun and entertaining. The play lacks thematic depth when compared to other plays by Shakespeare.",
           "rating": {"stars": 4, "color": "black"}
         }
       ]
     }
     ```

  Ephemeral container: Use the `kubectl debug` command to create an ephemeral curl container in the deployment. This way, the curl container inherits any permissions from the app that you want to test. If you don't run Kubernetes 1.23 or later, you can deploy a separate curl pod or manually add the curl container as shown in the other option.

  ```sh
  kubectl --context ${REMOTE_CONTEXT1} -n bookinfo debug -i pods/$(kubectl get pod --context ${REMOTE_CONTEXT1} -l app=reviews -A -o jsonpath='{.items[0].metadata.name}') --image=curlimages/curl -- curl -v http://reviews:9080/reviews/1
  ```

  If the output has an error about `EphemeralContainers`, see Ephemeral containers don't work when testing Bookinfo.
- Repeat the request. Because the rate limit policy limits requests to 1 per day, the request results in a `429 - Too Many Requests` error.
- Exit the curl pod.
- Optional: Clean up the Gloo resources that you created to test this policy.

  ```sh
  kubectl --context $REMOTE_CONTEXT1 -n gloo-mesh-addons delete RateLimitServerConfig rl-server-config
  kubectl --context $REMOTE_CONTEXT1 -n bookinfo delete RateLimitServerSettings rl-server
  kubectl --context $REMOTE_CONTEXT1 -n bookinfo delete RateLimitClientConfig rl-client-config
  kubectl --context $REMOTE_CONTEXT1 -n bookinfo delete RateLimitPolicy rl-policy
  ```
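To see how the rate limit server resources fit together, consider the path a request takes: the `RateLimitClientConfig`'s `genericKey` action attaches a descriptor entry `("generic_key", "counter")` to each request, and the rate limit server matches that entry against the `RateLimitServerConfig` descriptors to find the applicable limit. The following Python snippet is a minimal sketch of this Envoy-style matching, not the actual Gloo rate limiter code:

```python
# Minimal sketch of Envoy-style descriptor matching, mirroring the example
# RateLimitServerConfig and RateLimitClientConfig above. Illustration only;
# this is not the Gloo rate limiter's actual implementation.

# Descriptors from the RateLimitServerConfig.
server_descriptors = [
    {"key": "generic_key", "value": "counter",
     "rate_limit": {"requests_per_unit": 1, "unit": "DAY"}},
]

def descriptor_for_request(client_actions):
    """Build the descriptor entries that a request generates from the client config actions."""
    entries = []
    for action in client_actions:
        if "genericKey" in action:
            # A genericKey action always emits the fixed key "generic_key".
            entries.append(("generic_key", action["genericKey"]["descriptorValue"]))
    return entries

def find_limit(entries):
    """Return the configured limit that matches the request's descriptor, if any."""
    for desc in server_descriptors:
        for key, value in entries:
            if desc["key"] == key and desc["value"] == value:
                return desc["rate_limit"]
    return None

# Actions from the RateLimitClientConfig.
client_actions = [{"genericKey": {"descriptorValue": "counter"}}]

entries = descriptor_for_request(client_actions)
print(entries)              # [('generic_key', 'counter')]
print(find_limit(entries))  # {'requests_per_unit': 1, 'unit': 'DAY'}
```

If no server descriptor matches the request's entries, no limit applies and the request is allowed, which is why the client and server configs must agree on the descriptor key and value.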
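The `429 - Too Many Requests` response in the verification steps follows from the server's counting semantics: with `requestsPerUnit: 1` and `unit: DAY`, the first matching request in a day increments the counter to 1 and is allowed, and every further request in the same window is rejected. The following Python snippet is a rough sketch of this fixed-window accounting, again an illustration rather than the rate limiter's actual code:

```python
import time

# Rough sketch of fixed-window rate limiting for the descriptor
# ("generic_key", "counter") with requestsPerUnit: 1, unit: DAY.
# Illustration only; not the Gloo rate limiter's actual code.

DAY = 24 * 60 * 60
LIMIT = 1  # requestsPerUnit from the example RateLimitServerConfig

counters = {}  # (descriptor, window) -> number of hits in that window

def check_request(descriptor, now=None):
    """Return an HTTP-like status: 200 if under the limit, 429 otherwise."""
    now = time.time() if now is None else now
    window = int(now // DAY)  # all requests in the same day share one window
    key = (descriptor, window)
    counters[key] = counters.get(key, 0) + 1
    return 200 if counters[key] <= LIMIT else 429

desc = ("generic_key", "counter")
print(check_request(desc, now=0))        # 200: first request of the day
print(check_request(desc, now=60))       # 429: limit of 1/day already used
print(check_request(desc, now=DAY + 1))  # 200: new day, new window
```

This is also why repeating the `curl` request immediately fails while the same request succeeds again the next day: the counter resets only when a new window starts.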