Connection pools for TCP
Set up connection pool settings such as keepalive for TCP protocols.
About
With a keepalive connection policy, the kernel sends probe packets that contain only an acknowledgement flag (ACK) to the TCP socket of the destination. If the destination responds with its own acknowledgement (ACK), the connection is considered alive. If a configured number of consecutive probes go unanswered, the connection is considered dead and the destination is removed from the load balancing pool.
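These probes and their timing are controlled by OS-level settings unless a policy overrides them. On a Linux host, you can check the kernel defaults with `sysctl`, for example:

```sh
# Idle time before the kernel starts sending keepalive probes (default 7200s, or 2 hours)
sysctl net.ipv4.tcp_keepalive_time
# Number of unanswered probes before the connection is declared dead (default 9)
sysctl net.ipv4.tcp_keepalive_probes
# Time between individual keepalive probes (default 75s)
sysctl net.ipv4.tcp_keepalive_intvl
```

A Gloo connection policy lets you override these values for selected destinations without changing the OS configuration of your nodes.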
You can use the connection policy to keep connections alive and avoid 503 errors. Consider the setup in the following figure.
In the web workspace, your web app needs to call the recommendation app, which is in a different cluster and workspace. Because you have a Gloo virtual destination, the web app can easily reach the recommendation app by calling `recommendation.global`. The request goes to the load balancer that backs the east-west gateway, such as an AWS ELB or NLB or an Azure load balancer. Then, the east-west gateway sends the request to the recommendation app.
Say that your web app sends requests to the recommendation app only infrequently. Because no traffic is sent for a while, the load balancer times out and terminates the connection. However, the web app client is not aware of this and still treats its connection to the gateway as open. When the web app now sends a request to the recommendation app, the connection fails and the web app gets back a 503 HTTP response.
To avoid this scenario, you apply a Gloo connection policy to your virtual destination that sends a keepalive probe to the target endpoint every minute. Because the keepalive traffic keeps the connection from looking idle, the load balancer no longer closes it. You can set a Gloo connection policy for every virtual destination to help prevent load balancers from closing idle connections.
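For example, a connection policy along the following lines implements the keepalive behavior from this scenario. This is only a sketch: the policy name, the namespace, and the `60s` value are illustrative, and the full example that this guide walks through follows in the next section.

```sh
# Sketch only: probes start after 60s of idleness, so the connection never
# looks idle long enough for the load balancer to terminate it. Choose a time
# that is shorter than your load balancer's idle timeout.
kubectl apply -f- <<EOF
apiVersion: resilience.policy.gloo.solo.io/v2
kind: ConnectionPolicy
metadata:
  name: keepalive-every-minute
  namespace: web
spec:
  applyToDestinations:
  - kind: VIRTUAL_DESTINATION
    selector: {}
  config:
    tcp:
      tcpKeepalive:
        time: "60s"
EOF
```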
For more information, see the following resources.
- Gloo connection policy API docs
- Gloo virtual destination docs
- TCP keepalive overview from the Linux Documentation Project
Want to enable TCP keepalive for downstream services, such as AWS NLBs that you set up in front of your ingress gateway? See the Listener connection policy instead.
If you import or export resources across workspaces, your policies might not apply. For more information, see Import and export policies.
Before you begin
This guide assumes that you use the same names for components like clusters, workspaces, and namespaces as in the getting started guide. If you use different names, make sure to update the sample configuration files in this guide.
- Set up Gloo Mesh Gateway in a single cluster.
- Install Bookinfo and other sample apps.
- Configure an HTTP listener on your gateway and set up basic routing for the sample apps.
Configure connection policies
You can apply a connection policy at the destination level. For more information, see Applying policies.
This policy currently does not support selecting ExternalServices as a destination.
The following example applies a TCP keepalive configuration for every destination in the workspace.
```yaml
apiVersion: resilience.policy.gloo.solo.io/v2
kind: ConnectionPolicy
metadata:
  name: tcp-keepalive
  namespace: bookinfo
spec:
  applyToDestinations:
  - kind: VIRTUAL_DESTINATION
    selector: {}
  config:
    tcp:
      tcpKeepalive:
        time: "700s"
        probes: 20
        interval: "10s"
```
Review the following table to understand this configuration. For more information, see the API docs.
Setting | Description |
---|---|
applyToDestinations | Use labels to apply the policy to destinations. Destinations might be a Kubernetes service, VirtualDestination, or ExternalService (if supported by the policy). If you do not specify any destinations or routes, the policy applies to all destinations in the workspace by default. If you do not specify any destinations but you do specify a route, the policy applies to the route but to no destinations. |
config | Configure the connection settings of the low-level networking protocol to apply to the selected destinations. To set connection pool settings for TCP destinations, use `tcp` as the protocol. For HTTP connection pool settings, use `http`. The connection policy in this guide shows how to configure connection pool settings for a TCP destination. To find an example for an HTTP connection policy, see Connection pool settings for HTTP. |
maxConnections | Set the maximum number of connections to the destination host. In this example, no value is set, so the default of `2^32-1` applies. For a sketch that also sets this field, see the example after this table. |
connectTimeout | Set the TCP connection timeout, which must be greater than or equal to `1ms`. In this example, no timeout is set, so the default of `10s` applies. |
tcpKeepalive | Set the TCP keepalive settings. |
time | The duration of time that a connection can be idle before keepalive probes are sent. Set this value as an integer plus a unit of time, in the format `1h`, `1m`, `1s`, or `1ms`. The value must be at least `1ms` and defaults to the OS-level setting, which is `7200s` (2 hours) in Linux. This example sets the time to a shorter duration of `700s`. |
probes | Set the maximum number of TCP keepalive probes to send before determining that the connection is dead. If omitted, the value defaults to the OS configuration, which is typically `9` in Linux. This example allows more probes, `20`. |
interval | The duration of time between keepalive probes. Set this value as an integer plus a unit of time, in the format `1h`, `1m`, `1s`, or `1ms`. The value must be at least `1ms` and defaults to the OS-level setting, which is `75s` in Linux. This example sets the interval to a shorter duration of `10s`. |
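As a sketch of how the settings in this table fit together, the following variation of the example policy also sets `maxConnections` and `connectTimeout` alongside `tcpKeepalive` under `tcp`, mirroring the Istio connection pool settings that these fields are based on. The values are illustrative, not recommendations.

```sh
kubectl apply -f- <<EOF
apiVersion: resilience.policy.gloo.solo.io/v2
kind: ConnectionPolicy
metadata:
  name: tcp-connection-pool
  namespace: bookinfo
spec:
  applyToDestinations:
  - kind: VIRTUAL_DESTINATION
    selector: {}
  config:
    tcp:
      # Illustrative values: cap the connections per destination host and fail
      # connection attempts that take longer than 5 seconds.
      maxConnections: 1024
      connectTimeout: "5s"
      tcpKeepalive:
        time: "700s"
        probes: 20
        interval: "10s"
EOF
```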
Verify connection policies
Create a virtual destination for the ratings app.
```sh
kubectl apply -n bookinfo -f- <<EOF
apiVersion: networking.gloo.solo.io/v2
kind: VirtualDestination
metadata:
  name: ratings-global
  namespace: bookinfo
spec:
  hosts:
  - ratings.global
  ports:
  - number: 80
    protocol: HTTP
    targetPort:
      name: http
  services:
  - labels:
      app: ratings
EOF
```
Apply the example connection policy that selects all virtual destinations.
```sh
kubectl apply -f- <<EOF
apiVersion: resilience.policy.gloo.solo.io/v2
kind: ConnectionPolicy
metadata:
  name: tcp-keepalive
  namespace: bookinfo
spec:
  applyToDestinations:
  - kind: VIRTUAL_DESTINATION
    selector: {}
  config:
    tcp:
      tcpKeepalive:
        time: "700s"
        probes: 20
        interval: "10s"
EOF
```
Verify that an Istio destination rule is created for the ratings virtual destination.
```sh
kubectl get destinationrule -n bookinfo --context $REMOTE_CONTEXT1
```
Example output:
```
NAME                                                              HOST             AGE
ratings-global-virtual-destinat-2ab46384c8b40be3cfab9740ac8fb2c   ratings.global   11s
```
Describe the Istio destination rule.
```sh
kubectl describe destinationrule <destination-rule> -n bookinfo --context $REMOTE_CONTEXT1
```
In the output, verify that the Connection Pool settings include the Tcp Keepalive that your policy configures.
```
Traffic Policy:
  Port Level Settings:
    Connection Pool:
      Tcp:
        Tcp Keepalive:
          Interval:  10s
          Probes:    20
          Time:      700s
```
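Optionally, you can also check whether the keepalive settings were pushed down to the Envoy configuration of a client-side proxy. The following command is a sketch: it assumes that you have `istioctl` installed and uses a placeholder pod name. In the Envoy cluster dump, the settings appear under `upstreamConnectionOptions.tcpKeepalive` on the clusters that route to the virtual destination.

```sh
# Sketch: dump the Envoy cluster configuration of a sidecar and search for
# the keepalive settings that the destination rule configures.
istioctl proxy-config cluster <pod> -n bookinfo --context $REMOTE_CONTEXT1 -o json \
  | grep -B 2 -A 6 "tcpKeepalive"
```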
Cleanup
You can optionally remove the resources that you set up as part of this guide.
```sh
kubectl -n bookinfo delete VirtualDestination ratings-global
kubectl -n bookinfo delete ConnectionPolicy tcp-keepalive
```