Session Affinity
For certain applications deployed across multiple replicas, it may be desirable to route all traffic from a single client session to the same instance of the application. This can help reduce latency through better use of caches. This load balancer behavior is referred to as Session Affinity or Sticky Sessions. Gloo Gateway exposes Envoy’s full session affinity capabilities, as described below.
Configuration overview
There are two steps to configuring session affinity:
- Set a hashing load balancer on the upstream specification.
  - This can be either Envoy's Ring Hash or Maglev load balancer.
- Define the hash key parameters on the desired routes.
  - These can include any combination of headers, cookies, and the source IP address.
Below, we show how to configure Gloo Gateway to use hashing load balancers and demonstrate a common cookie-based hashing strategy using a Ring Hash load balancer.
Upstream Plugin Configuration
- Whether an upstream was discovered by Gloo Gateway or created manually, just add the loadBalancerConfig spec to your upstream.
- Either a ringHash or a maglev load balancer must be specified to achieve session affinity. Some examples are shown below.
- To determine whether a Ring Hash or Maglev load balancer is best for your use case, please review the details in Envoy's load balancer selection docs.
  - In many cases, either load balancer will work.
Configure a Ring Hash Load Balancer on an Upstream
- Full reference specification:
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
annotations:
labels:
discovered_by: kubernetesplugin
name: default-session-affinity-app-80
namespace: gloo-system
spec:
kube:
selector:
name: session-affinity-app
serviceName: session-affinity-app
serviceNamespace: default
servicePort: 80
loadBalancerConfig:
ringHash:
ringHashConfig:
maximumRingSize: "200"
minimumRingSize: "10"
- Optional fields omitted:
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
annotations:
labels:
discovered_by: kubernetesplugin
name: default-session-affinity-app-80
namespace: gloo-system
spec:
kube:
selector:
name: session-affinity-app
serviceName: session-affinity-app
serviceNamespace: default
servicePort: 80
loadBalancerConfig:
ringHash: {}
Configure a Maglev Load Balancer on an Upstream
- There are no configurable parameters for Maglev load balancers:
loadBalancerConfig:
maglev: {}
Route Plugin Configuration
- Full reference specification:
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
name: default
namespace: gloo-system
spec:
virtualHost:
domains:
- '*'
routes:
- matchers:
- exact: /route1
routeAction:
single:
upstream:
name: default-session-affinity-app-80
namespace: gloo-system
options:
lbHash:
hashPolicies: # (1)
- header: x-test-affinity
terminal: true # (2)
- header: origin # (3)
- sourceIp: true # (4)
- cookie: # (5)
name: gloo
path: /abc
ttl: 1s # (6)
prefixRewrite: /count
Notes on hash policies
1. One or more hashPolicies may be specified.
2. Ordering of hash policies matters because any hash policy can be terminal (optional, default: false). If Envoy is able to create a hash key with the policies it has processed up to and including a terminal policy, it ignores the subsequent policies. This can be used to implement a content-contingent hashing optimization: for example, if an x-unique-id header is available, Envoy can save time by ignoring the later identifiers.
3. header policies indicate headers that should be included in the hash key.
4. The sourceIp policy indicates that the request's source IP address should be included in the hash key.
5. cookie policies indicate that the specified cookie should be included in the hash key.
   - name, required, identifies the cookie.
   - path, optional, sets the cookie path.
6. ttl, optional: Envoy can be configured to create cookies by setting the ttl parameter. If the specified cookie is not present on the request, Envoy will create it and add it to the response (a quick way to observe this is shown after these notes).
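To observe cookie creation in practice, the sketch below sends one request without a cookie and prints any Set-Cookie headers on the response. It assumes the example route above is applied and that glooctl is available; the exact cookie attributes Envoy emits may vary by version.
# The first request carries no "gloo" cookie, so Envoy generates one
# and returns it on the response.
curl -s -D - -o /dev/null "$(glooctl proxy url)/route1" | grep -i '^set-cookie'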
For additional insights, please refer to Envoy’s route hash policy documentation.
Tutorial: Cookie-based route hashing
The following tutorial walks through the steps involved in configuring and verifying session affinity.
Summary
- Before enabling session affinity, each instance of our “Counter” app will service our requests in turn (Round Robin).
  - This will result in non-incrementing responses, such as [1,1,1,2,2,2,3,3,…].
- After enabling cookie-based session affinity, a single instance of our “Counter” app will service all requests.
  - This will produce incrementing responses, such as [4,5,6,…].
Requirements
- A Kubernetes cluster with Gloo Gateway installed.
  - At least two nodes in the cluster.
- Permission to deploy a DaemonSet and edit Gloo Gateway resources.
Deploy a sample app in a DaemonSet
DaemonSets are one type of resource that may benefit from session affinity. A DaemonSet ensures that all (or some) nodes run a given Pod. Depending on your architecture, you may have node-local caches that you want to associate with segments of your traffic. Session affinity can help steer requests from a given client to a consistent node.
Overview of the “Counter” application
We will use a very simple “counter” app to demonstrate session affinity configuration. The counter simply reports how many requests have been made to the /count endpoint. Without session affinity, subsequent requests will return a non-monotonically increasing response. For example, on a fresh deployment, your first request will be handled by node 1 and return a count of 1. Your second request will be handled by node 2 and also return a count of 1. After you enable session affinity, repeated requests will return a strictly increasing count response.
The source code for the session affinity app is available here.
The core logic is shown below.
package main
import (
"fmt"
"net/http"
"os"
)
func main() {
if err := App(); err != nil {
os.Exit(1)
}
}
var (
countUrl = "/count"
helpMsg = fmt.Sprintf(`Simple counter app for testing Gloo Gateway
%v - reports number of times the %v path was queried`, countUrl, countUrl)
)
func App() error {
count := 0
http.HandleFunc(countUrl, func(w http.ResponseWriter, r *http.Request) {
count++
if _, err := fmt.Fprint(w, count); err != nil {
fmt.Printf("error with request: %v\n", err)
}
})
http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
if _, err := fmt.Fprint(w, helpMsg); err != nil {
fmt.Printf("error with request: %v\n", err)
}
})
return http.ListenAndServe("0.0.0.0:8080", nil)
}
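To try the app outside the cluster, you can run it locally and query both endpoints. This is a sketch that assumes Go is installed and the code above is saved as main.go:
go run main.go &                     # start the counter in the background
sleep 1                              # give the server a moment to bind :8080
curl -s localhost:8080/count; echo   # prints 1
curl -s localhost:8080/count; echo   # prints 2
curl -s localhost:8080/; echo        # prints the help message
kill %1                              # stop the background server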
Apply the DaemonSet
The following command will create our DaemonSet and a matching Service.
kubectl apply -f - << EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: session-affinity-app
spec:
selector:
matchLabels:
name: session-affinity-app
app: session-affinity-app
template:
metadata:
labels:
name: session-affinity-app
app: session-affinity-app
spec:
containers:
- name: session-affinity-app
image: soloio/session-affinity-app:0.0.3
resources:
limits:
memory: 10Mi
requests:
cpu: 10m
memory: 10Mi
---
apiVersion: v1
kind: Service
metadata:
name: session-affinity-app
spec:
selector:
name: session-affinity-app
ports:
- protocol: TCP
port: 80
targetPort: 8080
EOF
If you deployed the app to a namespace other than the default namespace, you will need to adjust the following commands accordingly.
Gloo Gateway will have discovered the session-affinity-app service and created an Upstream from it.
Now create a route to the app with glooctl:
glooctl add route --path-exact /route1 --dest-name default-session-affinity-app-80 --prefix-rewrite /count --name default
In a browser, navigate to this route, /route1, on your gateway's URL (you can find this with glooctl proxy url). If you refresh the page, you should observe a non-incrementing count. For example, in a cluster with three nodes, you should see something like the following sequence:
1,1,1,2,2,2,3,3,3,4,4,4,...
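You can run the same check from the command line. This sketch assumes glooctl is on your PATH and issues nine requests in a row:
URL=$(glooctl proxy url)
# Without session affinity, requests round-robin across the DaemonSet
# pods, so the per-pod counts repeat instead of incrementing strictly.
for i in $(seq 1 9); do curl -s "$URL/route1"; echo; done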
Apply the session affinity configuration
Configure the upstream
Use kubectl edit upstream -n gloo-system default-session-affinity-app-80 and apply the changes shown below to set a hashing load balancer on the app's upstream.
apiVersion: gloo.solo.io/v1
kind: Upstream
metadata:
annotations:
labels:
discovered_by: kubernetesplugin
name: default-session-affinity-app-80
namespace: gloo-system
spec:
kube:
selector:
name: session-affinity-app
serviceName: session-affinity-app
serviceNamespace: default
servicePort: 80
loadBalancerConfig:
ringHash:
ringHashConfig:
maximumRingSize: "200"
minimumRingSize: "10"
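As a non-interactive alternative to kubectl edit, the same change can be applied as a merge patch; this sketch assumes the upstream name and namespace shown above:
kubectl patch upstream default-session-affinity-app-80 -n gloo-system \
  --type merge \
  -p '{"spec":{"loadBalancerConfig":{"ringHash":{"ringHashConfig":{"maximumRingSize":"200","minimumRingSize":"10"}}}}}'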
Configure the route
Now configure your route to produce hash keys based on a cookie. Update the route with kubectl edit virtualservice -n gloo-system default and apply the changes shown below.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
name: default
namespace: gloo-system
spec:
virtualHost:
domains:
- '*'
routes:
- matchers:
- exact: /route1
routeAction:
single:
upstream:
name: default-session-affinity-app-80
namespace: gloo-system
options:
lbHash:
hashPolicies:
- cookie:
name: gloo
path: /abc
ttl: 10s
prefixRewrite: /count
Return to the app in your browser and refresh the page a few times. You should see an increasing count similar to this:
5,6,7,8,...
Now that you have configured cookie-based sticky sessions, web requests from your browser will be served by the same instance of the counter app (unless you delete the cookie).
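To verify the same behavior from the command line, one approach is to capture the Envoy-generated cookie and replay it explicitly. Passing the cookie with -H sidesteps curl's cookie path matching, since the cookie's path (/abc) differs from the request path (/route1); the header parsing below is a sketch and may need adjustment for your environment.
URL=$(glooctl proxy url)
# Grab the cookie Envoy sets on the first response (name=value only).
COOKIE=$(curl -s -D - -o /dev/null "$URL/route1" \
  | awk -F': ' 'tolower($1)=="set-cookie" {print $2}' | cut -d';' -f1)
# Replay the cookie; each response should now increment by one.
for i in 1 2 3; do curl -s -H "Cookie: $COOKIE" "$URL/route1"; echo; done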
Stateful Session Filter (Enterprise Only)
Envoy provides another method of implementing sticky sessions using the Stateful Session filter, which provides “strong” stickiness.
This example uses the session affinity app resources that are created in the Apply the DaemonSet section. No additional modifications to the upstream or virtual service are required.
Requirements
- A Kubernetes cluster with at least two nodes and Gloo Gateway Enterprise installed.
- Permission to deploy a DaemonSet and edit Gloo Gateway resources.
Cookie-based stateful session filter
When the cookie-based stateful session filter is enabled, a hash of the upstream that serves the request is stored in a statefulsessioncookie cookie. In subsequent requests, the same upstream resource is used to fulfill the request.
1. Edit the gateway proxy.

   kubectl edit gateways.gateway.solo.io -n gloo-system gateway-proxy

2. Add the following configuration to the spec section of your gateway to enable the cookie-based stateful session filter.

   spec:
     bindAddress: '::'
     bindPort: 8080
     httpGateway:
       options:
         statefulSession:
           cookieBased:
             cookie:
               name: statefulsessioncookie
               path: /route1
               ttl: 60s
     proxyNames:
     - gateway-proxy
     ssl: false
     useProxyProto: false

3. Get the URL of the gateway proxy.

   glooctl proxy url

4. Open a web browser and navigate to the /route1 path. For example, if your gateway proxy is http://34.111.222.111:80, type http://34.111.222.111:80/route1 into your web browser.

5. Refresh the page a couple of times. Verify that you see an increasing count as the requests are now all directed to the same upstream.

   Example output:

   5,6,7,8,...
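You can verify the same behavior with curl by using a cookie jar. Because the cookie's path is /route1, curl sends it back automatically on repeat requests; the jar file location below is arbitrary:
URL=$(glooctl proxy url)
# First request stores the statefulsessioncookie in the jar.
curl -s -c /tmp/gloo-session.txt "$URL/route1"; echo
# Subsequent requests replay it, so the count keeps increasing.
curl -s -b /tmp/gloo-session.txt "$URL/route1"; echo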
Header-based stateful session filter
When the header-based stateful session filter is enabled for a route, a statefulsessionheader header is returned with the hash of the upstream that served the request. You must send this header in subsequent requests to enable session stickiness.
1. Edit the gateway proxy.

   kubectl edit gateways.gateway.solo.io -n gloo-system gateway-proxy

2. Add the following configuration to the spec section of your gateway to enable the header-based stateful session filter.

   spec:
     bindAddress: '::'
     bindPort: 8080
     httpGateway:
       options:
         statefulSession:
           headerBased:
             headerName: statefulsessionheader
     proxyNames:
     - gateway-proxy
     ssl: false
     useProxyProto: false

3. Get the URL of the gateway proxy.

   glooctl proxy url

4. Send a request to the /route1 path. Requests to the /route1 path return a statefulsessionheader header that you can send in subsequent requests to enable session stickiness. Because headers are not automatically applied by the browser, it is easier to test this behavior with a curl request.

   curl -v $(glooctl proxy url)/route1

   Example output:

   < HTTP/1.1 200 OK
   < date: Tue, 11 Jun 2024 15:11:23 GMT
   < content-length: 1
   < content-type: text/plain; charset=utf-8
   < x-envoy-upstream-service-time: 10
   < statefulsessionheader: MTAuMjQ0LjAuNDU6ODA4MA==
   < server: envoy
   <
   * Connection #0 to host 127.0.0.1 left intact
   3%

5. Send another request to the /route1 path and include the statefulsessionheader header that was returned in the previous step. Verify that you see an increased count in your response as the requests are now all directed to the same upstream.

   curl -v -H "statefulsessionheader: MTAuMjQ0LjAuNDU6ODA4MA==" $(glooctl proxy url)/route1