Listener connection
Configure connection settings between downstream services and a gateway listener. Choose from the following options:
About read and write buffer limits
By default, Gloo Gateway is set up with 1MiB of request read and write buffer for each gateway listener. For large requests that must be buffered and that exceed the default buffer limit, Gloo Gateway either disconnects the connection to the downstream service if headers were already sent, or returns a 500 HTTP response code. To make sure that large requests can be sent and received, you can specify the maximum number of bytes that you want to allow to be buffered between the gateway and the downstream service.
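For example, to observe the limit in action, you could send a request body that is larger than the configured limit through a route that buffers the request. The following is a minimal sketch, assuming the httpbin sample app is exposed through the gateway at `www.example.com` and that a filter on the route requires request buffering:

```sh
# Send a 2 MiB request body. If the route buffers the request and the body
# exceeds the 1 MiB default limit, the gateway returns a 500 HTTP response
# code, or resets the connection if response headers were already sent.
head -c 2097152 /dev/zero | curl -v http://www.example.com/post \
  -H "Content-Type: application/octet-stream" \
  --data-binary @-
```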
About TCP keepalive settings for downstream apps
Configure TCP keepalive for downstream services to keep a connection to the gateway open during long idle periods. A typical use case for this setting is deploying an AWS NLB instance in front of Gloo Gateway. The AWS NLB has an idle timeout of 350 seconds that cannot be changed. Without TCP keepalive, the connection from the gateway to the AWS NLB silently closes after 350 seconds of idle time. However, the connection from the client to the AWS NLB remains intact. When the client sends a request to the AWS NLB, the request cannot be forwarded to the gateway anymore and the NLB returns a reset packet (RST). Because the client cannot interpret the RST packet, the connection to the AWS NLB is closed.
To make sure that the connection from the AWS NLB to the gateway remains open, even during long idle periods, you can configure TCP keepalive for a listener.
Want to enable TCP keepalive for upstream services? See the Connection pool settings for TCP policy instead.
If you import or export resources across workspaces, your policies might not apply. For more information, see Import and export policies.
Before you begin
Follow the getting started instructions to:
- Set up Gloo Gateway in a single cluster.
- Deploy sample apps.
- Configure an HTTP listener on your gateway and set up basic routing for the sample apps.
Configure listener connection policies
You can apply a connection policy at the gateway listener level. For more information, see Applying policies.
The following example sets up a maximum buffer limit of 16 KiB (16384 bytes) for each connection to the HTTP listener on the istio-ingressgateway virtual gateway.
```yaml
apiVersion: resilience.policy.gloo.solo.io/v2
kind: ListenerConnectionPolicy
metadata:
  name: listener-connection
  namespace: httpbin
spec:
  config:
    perConnectionBufferLimitBytes: 16384
  applyToListeners:
  - virtualGateway:
      name: istio-ingressgateway
      namespace: bookinfo
      cluster: $CLUSTER_NAME
    port:
      number: 80
```
Review the following table to understand this configuration. For more information, see the API docs.
Setting | Description |
---|---|
`spec.config.perConnectionBufferLimitBytes` | The maximum number of bytes that you want to allow to be buffered for each connection between the gateway and a downstream service. The default value is 1MiB. |
`spec.applyToListeners` | The gateway listener that you want to apply this policy to. To select a gateway listener, you must reference the virtual gateway and the port number that the listener is configured for. To learn more about gateway listeners, see Listener overview. |
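To try out the policy, you can apply it and confirm that it was created. The following is a minimal sketch; the file name is hypothetical, and the policy name and namespace match the example above:

```sh
# Save the example policy to a file (name is arbitrary) and apply it.
kubectl apply -f listener-connection-policy.yaml

# Inspect the policy to confirm that it was created.
kubectl get listenerconnectionpolicy listener-connection -n httpbin -o yaml
```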
In the following example, the connection between the gateway listener and the downstream service must be idle for 240 seconds before TCP keepalive probes are sent. After 240 seconds, the gateway starts to send TCP probes in 1 minute intervals. If the TCP keepalive packet is acknowledged by the downstream service, the connection is considered healthy. If the packet is not acknowledged, the gateway sends another probe after the 1 minute interval. If no response is received after 5 TCP keepalive probes, the connection to the downstream service is considered failed and the gateway closes the connection.
```yaml
apiVersion: resilience.policy.gloo.solo.io/v2
kind: ListenerConnectionPolicy
metadata:
  annotations:
    cluster.solo.io/cluster: ""
  name: http-buffer
  namespace: bookinfo
spec:
  applyToListeners:
  - port:
      number: 443
    virtualGateway:
      cluster: cluster-1
      name: istio-ingressgateway
      namespace: bookinfo
  config:
    tcpKeepalive:
      interval: 1m
      probes: 5
      time: 240s
```
Setting | Description |
---|---|
`spec.config.tcpKeepalive.interval` | The time duration between TCP keepalive probes. Example formats include `1ms`, `1s`, `1m`, or `1h`. For more information about the time value format, see the Google protocol buffer documentation. |
`spec.config.tcpKeepalive.probes` | The maximum number of TCP keepalive packets that are sent before a connection between the gateway and the downstream service is considered failed. |
`spec.config.tcpKeepalive.time` | The time duration a connection between the gateway and the downstream service must be idle before TCP keepalive probes are sent. Example formats include `1ms`, `1s`, `1m`, or `1h`. For more information about the time value format, see the Google protocol buffer documentation. |
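To confirm that keepalive is active on the gateway's downstream sockets, one option is to inspect socket timers from inside the gateway pod. The following is a sketch, assuming that the gateway pod image ships the `ss` utility and that the pod has the label `istio=ingressgateway`:

```sh
# Find the gateway pod and list its TCP sockets with timer information.
# Sockets with keepalive enabled report a "keepalive" timer in the output.
GW_POD=$(kubectl get pod -n gloo-mesh-gateways -l istio=ingressgateway \
  -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n gloo-mesh-gateways "$GW_POD" -- ss -tno
```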
Verify listener connection policies
- Apply the listener connection policy in your cluster.

  ```sh
  kubectl apply -f- <<EOF
  apiVersion: resilience.policy.gloo.solo.io/v2
  kind: ListenerConnectionPolicy
  metadata:
    name: listener-connection
    namespace: bookinfo
  spec:
    config:
      perConnectionBufferLimitBytes: 16384
      tcpKeepalive:
        interval: 1m
        probes: 5
        time: 240s
    applyToListeners:
    - virtualGateway:
        name: istio-ingressgateway
        namespace: bookinfo
        cluster: $CLUSTER_NAME
      port:
        number: 80
  EOF
  ```
- Check that the configuration is applied in the Envoy filter.

  ```sh
  kubectl get envoyfilter istio-ingressgateway-listener-8080-listener-connection -n gloo-mesh-gateways -o yaml
  ```
  Example output:

  ```yaml
  apiVersion: networking.istio.io/v1alpha3
  kind: EnvoyFilter
  metadata:
    annotations:
      cluster.solo.io/cluster: gloo-gateway-docs-mgt
    creationTimestamp: "2023-06-05T18:05:26Z"
    generation: 1
    labels:
      context.mesh.gloo.solo.io/cluster: gloo-gateway-docs-mgt
      context.mesh.gloo.solo.io/namespace: gloo-mesh-gateways
      context.mesh.gloo.solo.io/workspace: gloo-gateway-docs-mgt
      gloo.solo.io/parent_cluster: gloo-gateway-docs-mgt
      gloo.solo.io/parent_group: networking.gloo.solo.io
      gloo.solo.io/parent_kind: VirtualGateway
      gloo.solo.io/parent_name: istio-ingressgateway
      gloo.solo.io/parent_namespace: bookinfo
      gloo.solo.io/parent_version: v2
      reconciler.mesh.gloo.solo.io/name: translator
    name: istio-ingressgateway-listener-8080-listener-connection
    namespace: gloo-mesh-gateways
    resourceVersion: "18104008"
    uid: 962d88bf-7373-45f9-98bc-510de612d3ec
  spec:
    configPatches:
    - applyTo: LISTENER
      match:
        listener:
          portNumber: 8080
      patch:
        operation: MERGE
        value:
          per_connection_buffer_limit_bytes: 16384
    - applyTo: LISTENER
      match:
        listener:
          portNumber: 8080
      patch:
        operation: MERGE
        value:
          socket_options:
          - description: enable keep-alive
            int_value: 1
            level: 1
            name: 9
            state: STATE_PREBIND
          - description: idle time before first keep-alive probe is sent
            int_value: 240
            level: 6
            name: 4
            state: STATE_PREBIND
          - description: keep-alive interval
            int_value: 60
            level: 6
            name: 5
            state: STATE_PREBIND
          - description: keep-alive probes count
            int_value: 5
            level: 6
            name: 6
            state: STATE_PREBIND
    workloadSelector:
      labels:
        istio: ingressgateway
  ```
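  Note that the EnvoyFilter targets listener port 8080 even though the policy selects port 80: on the default istio-ingressgateway deployment, service port 80 forwards to container port 8080, which is the port that Envoy actually listens on. The numeric socket options map to standard Linux constants: level 1, name 9 is `SOL_SOCKET`/`SO_KEEPALIVE`, and level 6 (`IPPROTO_TCP`) with names 4, 5, and 6 corresponds to `TCP_KEEPIDLE`, `TCP_KEEPINTVL`, and `TCP_KEEPCNT`, matching the `time` (240s), `interval` (60s), and `probes` (5) values from the policy.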
Cleanup
You can clean up the listener connection policy that you created in this guide by running the following command.
```sh
kubectl delete listenerconnectionpolicy listener-connection -n bookinfo
```
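If you also want to confirm that the translated Envoy configuration was removed along with the policy, you can check that the generated EnvoyFilter no longer exists:

```sh
# The EnvoyFilter that Gloo Gateway generated for the policy is removed
# automatically when the policy is deleted.
kubectl get envoyfilter -n gloo-mesh-gateways
```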