eBPF-based acceleration

With Gloo Network's built-in eBPF capability, you can shorten the data path for a request and accelerate packet processing in the kernel, which reduces network latency for your apps. Review the following Layer 4 and Layer 7 networking examples to learn how eBPF bypasses stacks in the OSI network model.

Layer 4 networking

Review how Gloo Network's eBPF capabilities can accelerate the data path for requests and reduce network latency in your cluster.

Layer 4 networking without Gloo Network

To understand eBPF-based acceleration, let's first see what Layer 4 networking looks like without Gloo Network's eBPF capabilities. In the following image, pod A wants to send a request to pod B. The request must travel through the entire OSI network stack to get from pod A to pod B.

Figure: Layer 4 networking example without Gloo Network

Layer 4 networking with Gloo Network

With Gloo Network's eBPF integration, requests do not need to be processed by all layers of the OSI network stack. Instead, the TCP/IP networking stack is bypassed in the kernel, and data is directly written to the socket of the target pod. This approach significantly reduces network latency for your request.

Figure: Layer 4 networking example with Gloo Network
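The socket-level bypass that this example describes is built on the kernel's sockmap and socket-redirect facilities. The following is a minimal, illustrative sketch of that mechanism, not Gloo Network's actual implementation: a `sockops` program records established TCP sockets in a map, and an `sk_msg` program redirects sent data straight to the peer socket, skipping the TCP/IP stack for node-local connections. It requires an eBPF toolchain (for example, `clang -O2 -target bpf` with libbpf headers) and kernel support.

```c
// Illustrative eBPF sockmap sketch (sockops + sk_msg), not a product artifact.
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct sock_key {
    __u32 sip4;   // source IPv4 address
    __u32 dip4;   // destination IPv4 address
    __u32 sport;  // source port
    __u32 dport;  // destination port
};

struct {
    __uint(type, BPF_MAP_TYPE_SOCKHASH);
    __uint(max_entries, 65535);
    __type(key, struct sock_key);
    __type(value, int);
} sock_map SEC(".maps");

// Runs on TCP connection events: store established sockets in the map.
SEC("sockops")
int bpf_sockops(struct bpf_sock_ops *skops)
{
    if (skops->op == BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB ||
        skops->op == BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB) {
        struct sock_key key = {
            .sip4  = skops->local_ip4,
            .dip4  = skops->remote_ip4,
            .sport = skops->local_port,
            // remote_port is in network byte order, unlike local_port
            .dport = bpf_ntohl(skops->remote_port),
        };
        bpf_sock_hash_update(skops, &sock_map, &key, BPF_ANY);
    }
    return 0;
}

// Runs on sendmsg: redirect the payload directly to the peer socket,
// bypassing the TCP/IP stack for connections tracked in the map.
SEC("sk_msg")
int bpf_redir(struct sk_msg_md *msg)
{
    struct sock_key peer = {
        // Look up the connection from the peer's point of view.
        .sip4  = msg->remote_ip4,
        .dip4  = msg->local_ip4,
        .sport = bpf_ntohl(msg->remote_port),
        .dport = msg->local_port,
    };
    bpf_msg_redirect_hash(msg, &sock_map, &peer, BPF_F_INGRESS);
    return SK_PASS;
}

char _license[] SEC("license") = "GPL";
```

When both sockets of a connection live on the same node, the `sk_msg` redirect delivers data socket-to-socket, which is why the per-request path in the image above is so much shorter.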

Layer 7 networking in a service mesh

The benefits of eBPF-based acceleration with Gloo Network are even more significant if you use Gloo Network in a Gloo Mesh-managed service mesh. When you use Gloo Network as a standalone CNI, you can use the built-in Cilium support to create Layer 7 network policies for your cluster. However, to enforce these policies, an Envoy proxy process is started in the Cilium agent. The Envoy proxy intercepts the traffic to and from each pod, which is similar to how Gloo Mesh Enterprise intercepts traffic by using Envoy sidecars (Istio sidecar architecture) or ztunnels (Istio sidecarless architecture).
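For example, a Cilium Layer 7 policy that the embedded Envoy proxy enforces might look like the following. The app labels, port, and HTTP path are placeholders for illustration.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-get-products
spec:
  endpointSelector:
    matchLabels:
      app: productpage        # placeholder label for the target app
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend     # placeholder label for the allowed client
      toPorts:
        - ports:
            - port: "9080"
              protocol: TCP
          rules:
            http:
              - method: GET
                path: "/products/.*"
```

Because the `rules.http` section matches on HTTP methods and paths, enforcing this policy requires a Layer 7 proxy, which is why the Cilium agent starts Envoy.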

Review what eBPF-based acceleration looks like when an Envoy proxy is involved, and how Gloo Network's eBPF capabilities significantly shorten the data path.

Layer 7 networking in a service mesh without Gloo Network

In a traditional Istio service mesh that uses a sidecar architecture, all pod-to-pod communication involves the TCP/IP stack of the OSI network model. This is because the Linux kernel's netfilter capabilities are used to intercept and route the traffic to and from the sidecar proxy (Envoy). This approach can lead to degraded performance in the service mesh.

The following image shows a typical data path for a request from app A to app B.

Figure: Layer 7 networking example without Gloo Network
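The netfilter-based interception described above is typically set up with iptables NAT rules similar to the following simplified sketch. The ports shown are Istio's defaults for the sidecar's outbound (15001) and inbound (15006) listeners; a real sidecar installation adds further chains and exclusions.

```shell
# Simplified sketch of the iptables rules an Istio sidecar relies on.
# Outbound: traffic leaving the app container is redirected to the
# sidecar proxy's outbound port.
iptables -t nat -A OUTPUT -p tcp -j REDIRECT --to-ports 15001

# Inbound: traffic arriving at the pod is redirected to the sidecar
# proxy's inbound port before it reaches the app.
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-ports 15006
```

Every redirected packet still traverses the kernel's TCP/IP stack on each hop, which is the overhead that the eBPF-based approach in the next section removes.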

Layer 7 networking in a service mesh with Gloo Network and Gloo Mesh Enterprise

When you use Gloo Mesh Enterprise and Gloo Network together, you can accelerate request processing with eBPF for Istio workloads in your service mesh and reduce network latency. Instead of using iptables rules to redirect ingress and egress traffic between an app container and the injected Istio sidecar proxy, eBPF intercepts the traffic. With eBPF, packets to and from apps can be forwarded directly from one socket to another.

Enabling eBPF-based acceleration in your service mesh does not change the way the service mesh works. You can still use all the Layer 7 traffic policies that you are used to. Your Layer 7 policies are automatically translated into Istio and Cilium policies, and enforced on Layers 3-7 of the OSI network stack. With this defense-in-depth approach, you can create a multi-layer defense mechanism that addresses many attack vectors and protects your apps from being compromised.

Figure: Layer 7 networking example with eBPF sidecar acceleration in Gloo Mesh Enterprise and Gloo Network