Set up caching
Deploy the caching server and start caching responses from upstream services.
This is an Enterprise-only feature that requires a Gloo Gateway Enterprise license.
About
When caching is enabled during installation, a caching server deployment is automatically created for you and managed by Gloo Gateway. Then, you configure an HTTP or HTTPS listener on your gateway to cache responses for upstream services. When the listener routes a request to an upstream, the caching server automatically caches the response if it contains a cache-control response header. All subsequent requests receive the cached response until the cache entry expires.
For more information, see About response caching.
In this guide, you complete the following tasks:
- Enable the Gloo Gateway caching server.
- Deploy the Envoy caching service. You use this service to try out caching with response validation.
- Configure caching for an HTTP listener.
- Verify response caching:
  - Use the httpbin app from the Get started guide to test response caching without validation.
  - Use the Envoy caching app to test response caching with validation.
Before you begin
Follow the Get started guide to install Gloo Gateway, set up a gateway resource, and deploy the httpbin sample app.
Get the external address of the gateway and save it in an environment variable.
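For example, if you followed the Get started guide, the proxy service for the http gateway is typically named gloo-proxy-http in the gloo-system namespace. The service name and the INGRESS_GW_ADDRESS variable name are assumptions based on that guide; adjust them to match your setup.
# Assumes the proxy service for the 'http' gateway is named 'gloo-proxy-http'; verify the name in your cluster.
export INGRESS_GW_ADDRESS=$(kubectl get svc -n gloo-system gloo-proxy-http -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $INGRESS_GW_ADDRESS
# Note: Some cloud providers expose a hostname instead of an IP address. In that case, use .ingress[0].hostname.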
Deploy the caching server
Get the Helm values for your current installation.
helm get values gloo-gateway -n gloo-system -o yaml > gloo-gateway.yaml
open gloo-gateway.yaml
Add the following values to your Helm values file to enable the caching server.
global:
  extensions:
    caching:
      enabled: true # Enable the caching server.
By default, the caching server uses the Redis instance that is deployed with Gloo Gateway. To use your own Redis instance, such as in production deployments, set the following Helm values (see the example after this list):
- Set redis.disabled to true to disable the default Redis instance.
- Set redis.service.name to the name of the Redis service instance. If the instance is an external service, set the endpoint of the external service as the value.
- For other Redis override settings, see the Redis section of the Enterprise Helm chart values.
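For example, a values snippet that disables the built-in Redis instance and points the caching server to your own Redis service might look like the following. The name my-redis is a placeholder for illustration only.
global:
  extensions:
    caching:
      enabled: true # Enable the caching server.
redis:
  disabled: true # Disable the default Redis instance.
  service:
    name: my-redis # Placeholder: name or endpoint of your own Redis instance.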
Upgrade your Helm installation.
helm upgrade -n gloo-system gloo-gateway glooe/gloo-ee \
  -f gloo-gateway.yaml \
  --version=1.17.3
Verify that the caching server is deployed.
kubectl --namespace gloo-system get all | grep caching
Example output:
pod/caching-service-5d7f867cdc-bhmqp             1/1     Running        0        74s
service/caching-service            ClusterIP     10.76.11.242   <none>   8085/TCP   77s
deployment.apps/caching-service              1/1     1         1        77s
replicaset.apps/caching-service-5d7f867cdc   1       1         1        76s
Deploy the Envoy caching service
Create a namespace for the Envoy caching service.
kubectl create ns envoy-caching
Deploy the caching app and expose it with a Kubernetes service.
kubectl apply -n envoy-caching -f- <<EOF
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: service1
  name: service1
spec:
  containers:
  - image: ghcr.io/huzlak/caching-service:0.2
    name: service1
    ports:
    - name: http
      containerPort: 8000
    readinessProbe:
      httpGet:
        port: 8000
        path: /service/1/no-cache
---
apiVersion: v1
kind: Service
metadata:
  name: service1
spec:
  ports:
  - port: 8000
    name: http
    targetPort: http
  selector:
    app: service1
EOF
Verify that the app is up and running.
kubectl get pods -n envoy-caching
Example output:
NAME       READY   STATUS    RESTARTS   AGE
service1   1/1     Running   0          23h
Create an HTTPRoute resource to route incoming requests on the caching.example.com domain to the Envoy caching service.
kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: envoy-caching-route
  namespace: envoy-caching
  labels:
    example: envoy-caching-route
spec:
  parentRefs:
  - name: http
    namespace: gloo-system
  hostnames:
  - "caching.example.com"
  rules:
  - backendRefs:
    - name: service1
      port: 8000
EOF
Send a request to the Envoy caching service and verify that you get back a 200 HTTP response code.
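For example, you can send a request to the /service/1/no-cache endpoint of the Envoy caching service. The path is an assumption based on the readiness probe of the sample app, and INGRESS_GW_ADDRESS is the environment variable from the Before you begin section; adjust both to match your setup.
curl -vik http://$INGRESS_GW_ADDRESS:8080/service/1/no-cache -H "host: caching.example.com:8080"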
Example output:
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 126
cache-control: max-age=0, no-cache
etag: "ae3959c591c5e69e5175fbc2c8b272f15883952b"
date: Mon, 24 Jun 2024 20:22:47 GMT
server: envoy
x-envoy-upstream-service-time: 6
Configure and verify caching for an HTTP listener
In the following example, you configure caching for the HTTP listener on the http gateway that you set up as part of the Get started guide. Alternatively, you can configure the cachingServer section in your Gloo Settings resource to enable HTTP caching for all gateways by default.
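A minimal sketch of that Settings alternative follows, assuming the default Settings resource named default in the gloo-system namespace. This guide does not use this approach; if you do, edit your existing Settings resource instead of applying a new one so that you do not overwrite other settings.
apiVersion: gloo.solo.io/v1
kind: Settings
metadata:
  name: default
  namespace: gloo-system
spec:
  # ...keep your existing settings...
  cachingServer:
    cachingServiceRef:
      name: caching-service
      namespace: gloo-system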
Then, you try out caching with and without response validation by using the following apps:
- httpbin: The /cache/{value} endpoint is used to show how caching works without response validation. This app is deployed as part of the Get started guide.
- Envoy caching service: The /valid-for-minute endpoint is used to show how caching works with response validation. This app is deployed as part of this guide.
Create an HttpListenerOption resource to configure response caching for all services that are served by an HTTP or HTTPS listener. Enabling caching for a specific service or upstream is currently not supported. Note that for listener-level caching to take effect, the cachingServer option of the Settings resource must be disabled.
kubectl apply -f- <<EOF
apiVersion: gateway.solo.io/v1
kind: HttpListenerOption
metadata:
  name: caching
  namespace: gloo-system
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: http
  options:
    caching:
      cachingServiceRef:
        name: caching-service
        namespace: gloo-system
EOF
Try out caching without response validation by using the /cache/{value} endpoint of the httpbin app. Send a request to the /cache/{value} endpoint. The {value} variable specifies the number of seconds that you want to cache the response for. In this example, the response is cached for 30 seconds. In your CLI output, verify that you get back the cache-control response header with a max-age=30 value. This response header triggers Gloo Gateway to cache the response.
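For example, assuming that the httpbin route from the Get started guide is reachable on the www.example.com domain and that the gateway address is stored in the INGRESS_GW_ADDRESS environment variable, the request might look like the following.
curl -vik http://$INGRESS_GW_ADDRESS:8080/cache/30 -H "host: www.example.com:8080"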
Example output:
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< access-control-allow-credentials: true
access-control-allow-credentials: true
< access-control-allow-origin: *
access-control-allow-origin: *
< cache-control: public, max-age=30
cache-control: public, max-age=30
< content-type: application/json; encoding=utf-8
content-type: application/json; encoding=utf-8
< date: Mon, 24 Jun 2024 20:53:12 GMT
date: Mon, 24 Jun 2024 20:53:12 GMT
< content-length: 603
content-length: 603
< x-envoy-upstream-service-time: 2
x-envoy-upstream-service-time: 2
< server: envoy
server: envoy
< x-envoy-decorator-operation: httpbin.httpbin.svc.cluster.local:8000/*
x-envoy-decorator-operation: httpbin.httpbin.svc.cluster.local:8000/*
{
  "args": {},
  "headers": {
    "Accept": [
      "*/*"
    ],
    "Host": [
      "www.example.com:8080"
    ],
    "If-Modified-Since": [
      "Mon, 24 Jun 2024 20:40:41 GMT"
    ],
    "User-Agent": [
      "curl/7.77.0"
    ],
    "X-B3-Sampled": [
      "0"
    ],
    "X-B3-Spanid": [
      "939316d334cefd4b"
    ],
    "X-B3-Traceid": [
      "6b20a80619a46d1b939316d334cefd4b"
    ],
    "X-Forwarded-Proto": [
      "http"
    ],
    "X-Request-Id": [
      "62dfa4c8-0155-4b9a-8ee1-69250ebdc71a"
    ]
  }
Send another request to the same endpoint within the 30-second timeframe. In your CLI output, verify that you get back the original response. In addition, check that an age response header is returned, indicating the age of the cached response in seconds, and that the date header uses the date and time of the original response.
Example output:
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< access-control-allow-origin: *
access-control-allow-origin: *
< server: envoy
server: envoy
< date: Mon, 24 Jun 2024 20:53:12 GMT
date: Mon, 24 Jun 2024 20:53:12 GMT
< x-envoy-upstream-service-time: 2
x-envoy-upstream-service-time: 2
< cache-control: public, max-age=30
cache-control: public, max-age=30
< x-envoy-decorator-operation: httpbin.httpbin.svc.cluster.local:8000/*
x-envoy-decorator-operation: httpbin.httpbin.svc.cluster.local:8000/*
< content-length: 603
content-length: 603
< access-control-allow-credentials: true
access-control-allow-credentials: true
< content-type: application/json; encoding=utf-8
content-type: application/json; encoding=utf-8
< age: 21
age: 21
...
Wait until the 30 seconds pass and the cached response becomes stale. Send another request to the same endpoint. Verify that you get back a fresh response and that no age header is returned.
Example output:
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< access-control-allow-credentials: true
access-control-allow-credentials: true
< access-control-allow-origin: *
access-control-allow-origin: *
< cache-control: public, max-age=30
cache-control: public, max-age=30
< content-type: application/json; encoding=utf-8
content-type: application/json; encoding=utf-8
< date: Mon, 24 Jun 2024 21:05:45 GMT
date: Mon, 24 Jun 2024 21:05:45 GMT
< content-length: 603
content-length: 603
< x-envoy-upstream-service-time: 2
x-envoy-upstream-service-time: 2
< server: envoy
server: envoy
< x-envoy-decorator-operation: httpbin.httpbin.svc.cluster.local:8000/*
x-envoy-decorator-operation: httpbin.httpbin.svc.cluster.local:8000/*
Try out caching with response validation by using the Envoy caching service. Response validation must be implemented in the upstream service directly. The service must be capable of reading the date and time that is sent in the If-Modified-Since request header and of checking whether the response has changed since then. Send a request to the /valid-for-minute endpoint. The endpoint is configured to cache the response for 1 minute (cache-control: max-age=60). When the response becomes stale after 1 minute, the response validation process starts.
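For example, the request might look like the following. The /service/1 path prefix is an assumption based on the readiness probe of the sample app; adjust the path if your app serves the endpoint at a different location.
curl -vik http://$INGRESS_GW_ADDRESS:8080/service/1/valid-for-minute -H "host: caching.example.com:8080"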
Example output:
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< content-type: text/html; charset=utf-8
content-type: text/html; charset=utf-8
< content-length: 99
content-length: 99
< cache-control: max-age=60
cache-control: max-age=60
< custom-header: any value
custom-header: any value
< etag: "324ce9104e113743300a847331bb942ab7ace81a"
etag: "324ce9104e113743300a847331bb942ab7ace81a"
< date: Mon, 24 Jun 2024 21:35:55 GMT
date: Mon, 24 Jun 2024 21:35:55 GMT
< server: envoy
server: envoy
< x-envoy-upstream-service-time: 5
x-envoy-upstream-service-time: 5
<
This response will stay fresh for one minute
Response generated at: Mon, 24 Jun 2024 21:35:55 GMT
Send another request to the same endpoint within the 1-minute timeframe. Because the response is cached for 1 minute, the original response is returned with an age header indicating the number of seconds that passed since the original response was sent. Make sure that the date header and response body include the same information as in the original response.
Example output:
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< server: envoy
server: envoy
< date: Mon, 24 Jun 2024 21:35:55 GMT
date: Mon, 24 Jun 2024 21:35:55 GMT
< content-length: 99
content-length: 99
< cache-control: max-age=60
cache-control: max-age=60
< etag: "324ce9104e113743300a847331bb942ab7ace81a"
etag: "324ce9104e113743300a847331bb942ab7ace81a"
< x-envoy-upstream-service-time: 5
x-envoy-upstream-service-time: 5
< custom-header: any value
custom-header: any value
< content-type: text/html; charset=utf-8
content-type: text/html; charset=utf-8
< age: 3
age: 3
<
This response will stay fresh for one minute
Response generated at: Mon, 24 Jun 2024 21:35:55 GMT
After the 1 minute passes and the cached response becomes stale, send another request to the same endpoint. The Envoy caching app is configured to automatically add the If-Modified-Since header to each request to trigger the response validation process. In addition, the app is configured to always return a 304 Not Modified HTTP response code to indicate that the response has not changed. When the Gloo Gateway caching server receives the 304 HTTP response code, the caching server fetches the original response from Redis and sends it back to the client.
You can verify that the response validation succeeded when the date response header is updated with the date and time of your new request, the age response header is removed, and the response body contains the same information as in the original response.
Example output:
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< cache-control: max-age=60
cache-control: max-age=60
< custom-header: any value
custom-header: any value
< etag: "324ce9104e113743300a847331bb942ab7ace81a"
etag: "324ce9104e113743300a847331bb942ab7ace81a"
< date: Mon, 24 Jun 2024 21:46:40 GMT
date: Mon, 24 Jun 2024 21:46:40 GMT
< server: envoy
server: envoy
< x-envoy-upstream-service-time: 6
x-envoy-upstream-service-time: 6
< content-length: 99
content-length: 99
< content-type: text/html; charset=utf-8
content-type: text/html; charset=utf-8
<
This response will stay fresh for one minute
Response generated at: Mon, 24 Jun 2024 21:35:55 GMT
Because the Envoy caching app is configured to always return a 304 HTTP response code, you continue to see the cached response no matter how many requests you send to the app. To reset the app and force it to return a fresh response, you must restart the service1 pod in the envoy-caching namespace and the redis* pod in the gloo-system namespace, as shown in the sketch that follows.
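A minimal sketch of that reset follows. Because service1 is a standalone pod, deleting it removes it entirely, so re-apply the pod and service manifest from the Deploy the Envoy caching service section afterward. The Redis deployment name redis is an assumption based on the default Gloo Gateway Enterprise installation; verify the name in your cluster.
# Delete the sample app pod, then re-apply its manifest from earlier in this guide.
kubectl delete pod service1 -n envoy-caching
# Restart the default Redis deployment to clear cached entries. The name 'redis' is an assumption; verify with 'kubectl get deploy -n gloo-system'.
kubectl rollout restart deployment/redis -n gloo-system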
Cleanup
You can remove the resources that you created in this guide.
kubectl delete pod service1 -n envoy-caching
kubectl delete service service1 -n envoy-caching
kubectl delete httproute envoy-caching-route -n envoy-caching
kubectl delete namespace envoy-caching
kubectl delete HttpListenerOption caching -n gloo-system