Set up soft session affinity between a client and a backend service by using consistent hashing algorithms.

About session affinity

Session affinity allows you to route requests for a particular session to the same backend service instance that served the initial request. This setup is particularly useful if you have a backend service that performs expensive operations and caches the output or data for subsequent requests. With session affinity, you make sure that the expensive operation is performed only once and that subsequent requests can be served from the backend's cache, which can significantly reduce operational costs and improve response times for your clients.

About consistent hashing

Gloo Gateway allows you to set up soft session affinity between a client and a backend service by using the Ringhash or Maglev consistent hashing algorithm. The hashing algorithm takes a property of the request, such as a cookie, header, or source IP address, and uses the hash of that property to consistently map the request to a backend service instance. In subsequent requests, as long as the client sends the same request property, the request is routed to the same backend service instance.

Request properties are configured in the loadBalancer.hashPolicies section of a BackendConfigPolicy. The header, cookie, and sourceIP hash policies are mutually exclusive, in that a request can have only one property that the algorithm uses for hashing. However, you can define multiple hash policies within one BackendConfigPolicy by using the terminal field on each hash policy. If a policy has the terminal: true setting and the policy is matched, any subsequent hash policies are skipped. This field is useful for defining fallback policies and for limiting the time spent generating hash keys.
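As an illustration, a loadBalancer section that hashes on a header when it is present and falls back to the client source IP might look like the following sketch. The x-user-id header name is hypothetical, and you should verify the exact field names against the BackendConfigPolicy API reference for your Gloo Gateway version.

```yaml
loadBalancer:
  hashPolicies:
    # If the request carries the x-user-id header, hash on it and
    # skip all later policies because terminal is true.
    - header:
        name: x-user-id
      terminal: true
    # Fallback policy: hash on the client's source IP address.
    - sourceIP: {}
```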

Before you begin

  1. Follow the Get started guide to install Gloo Gateway.

  2. Follow the Sample app guide to create a gateway proxy with an HTTP listener and deploy the httpbin sample app.

  3. Get the external address of the gateway and save it in an environment variable.
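For example, if the gateway is exposed through a LoadBalancer service, a command such as the following might work. The service name http and the namespace kgateway-system are assumptions based on a typical sample setup; adjust them to match your installation.

```shell
# Look up the external address of the gateway's LoadBalancer service
# and save it in an environment variable for later requests.
export INGRESS_GW_ADDRESS=$(kubectl get svc -n kgateway-system http \
  -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
echo $INGRESS_GW_ADDRESS
```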

Set up consistent hashing

Choose one of the following consistent hashing algorithms to get started:

Ringhash

Ringhash allows you to tune the ring size to balance memory usage against load distribution precision. This way, you get more fine-grained control over how traffic is distributed across endpoints. However, this configurability might come at a performance cost, depending on your setup. To learn more about Ringhash, see the Envoy documentation.

  1. Create a BackendConfigPolicy that uses the request property of your choice.

  2. Continue with Verify consistent hashing.
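For example, a BackendConfigPolicy that uses Ringhash and hashes on a cookie might look like the following sketch. The policy name, targetRefs, cookie settings, and ring sizes are illustrative assumptions; verify the exact field names and apiVersion against the BackendConfigPolicy API reference for your Gloo Gateway version.

```yaml
apiVersion: gateway.kgateway.dev/v1alpha1
kind: BackendConfigPolicy
metadata:
  name: httpbin-hash
  namespace: httpbin
spec:
  targetRefs:
    - group: ""
      kind: Service
      name: httpbin
  loadBalancer:
    ringHash:
      # Larger rings give a more even load distribution
      # at the cost of more memory.
      minimumRingSize: 1024
      maximumRingSize: 2048
    hashPolicies:
      - cookie:
          name: session-affinity
          ttl: 60s
          path: /
        terminal: true
```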

Maglev

Maglev uses a fixed lookup table of 65,537 entries that is optimized for fast request routing with deterministic performance. This option is well-suited for general-purpose workloads that do not require custom tuning. For more information, see the Envoy docs.

  1. Create a BackendConfigPolicy that uses the request property of your choice.

  2. Continue with Verify consistent hashing.
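For example, the following sketch uses Maglev with a header hash policy. The policy name, targetRefs, and header name are illustrative assumptions; verify the exact field names and apiVersion against the BackendConfigPolicy API reference for your Gloo Gateway version. Note that the maglev section takes no tuning parameters, because the lookup table size is fixed.

```yaml
apiVersion: gateway.kgateway.dev/v1alpha1
kind: BackendConfigPolicy
metadata:
  name: httpbin-hash
  namespace: httpbin
spec:
  targetRefs:
    - group: ""
      kind: Service
      name: httpbin
  loadBalancer:
    # Maglev has no tuning options; the table size is fixed.
    maglev: {}
    hashPolicies:
      - header:
          name: x-user-id
        terminal: true
```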

Verify consistent hashing

Send a few requests to the httpbin app and verify that all requests are served by the same backend instance.

  1. Scale the httpbin app up to two instances.

      kubectl scale deployment httpbin -n httpbin --replicas=2
      
  2. Verify that another instance of the httpbin app is created.

      kubectl get pods -n httpbin
      

    Example output:

      NAME                      READY   STATUS    RESTARTS   AGE
      httpbin-8d557795f-86hzg   3/3     Running   0          54s
      httpbin-8d557795f-h8ks9   3/3     Running   0          26m
      
  3. Test consistent hashing by sending multiple requests to the httpbin app and verifying that all requests are served by the same backend instance. Note that the verification steps vary depending on the hashing policy that you defined.
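For example, if you configured a header hash policy, you might send 10 requests that all carry the same header value so that they hash to the same backend instance. The INGRESS_GW_ADDRESS variable, the www.example.com host header, and the x-user-id header name are assumptions based on the sample setup and the example policy; adjust them to match your route and hash policy.

```shell
# Send 10 requests with the same hash header value. All requests
# should be routed to the same httpbin instance.
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -H "host: www.example.com" \
    -H "x-user-id: user-1" \
    http://$INGRESS_GW_ADDRESS:8080/headers
done
```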

  4. Check the logs of each httpbin instance. Verify that only one of the instances served all 10 of the subsequent requests that you made.

      kubectl logs -n httpbin <httpbin-pod>
      

    Example output for one pod, in which all 10 subsequent requests (timestamps at 17:20) follow the initial request (timestamp at 17:17):

      Defaulted container "httpbin" out of: httpbin, curl
      go-httpbin listening on http://0.0.0.0:8080
      time="2025-07-16T17:17:09.8479" status=200 method="GET" uri="/headers" size_bytes=440 duration_ms=0.10 user_agent="curl/8.7.1" client_ip=10.244.0.7
      time="2025-07-16T17:20:32.1077" status=200 method="GET" uri="/headers" size_bytes=445 duration_ms=0.04 user_agent="curl/8.7.1" client_ip=10.244.0.7
      time="2025-07-16T17:20:40.7017" status=200 method="GET" uri="/headers" size_bytes=445 duration_ms=0.05 user_agent="curl/8.7.1" client_ip=10.244.0.7
      time="2025-07-16T17:20:49.5744" status=200 method="GET" uri="/headers" size_bytes=515 duration_ms=0.04 user_agent="curl/8.7.1" client_ip=10.244.0.7
      ...
      

Cleanup

You can remove the resources that you created in this guide.

  1. Scale the httpbin app back down.

      kubectl scale deployment httpbin -n httpbin --replicas=1
      
  2. Delete the resources that you created.

      kubectl delete BackendConfigPolicy httpbin-hash -n httpbin