What is Cilium and eBPF?
Cilium is an open source technology and a highly scalable Kubernetes Container Network Interface (CNI) that provides cloud-native networking connectivity, security, and observability for container-based workloads, such as those running in Kubernetes or Docker. To provide advanced networking and security controls, Cilium leverages the Linux kernel technology eBPF.
What is eBPF?
eBPF is a revolutionary Linux kernel technology that lets you safely run sandboxed programs in the operating system kernel. With eBPF, you can extend the kernel's capabilities at runtime without changing the kernel code or loading kernel modules.
eBPF programs are event-driven and run only when their event occurs, such as when a certain system call is initiated. To attach a program to an event, pre-defined kernel hooks and probes are used. When the program is loaded at a hook or probe, the kernel verifier validates it for safety. For a program to be considered safe, it must not crash or harm the kernel and must always run to completion. If the program is considered safe, it is translated into machine-specific instructions by the Just-in-Time (JIT) compiler. For more information about the eBPF architecture and how programs are compiled, see the eBPF documentation.
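As an illustration of the event-driven model, a minimal eBPF program attached to a kernel tracepoint might look like the following sketch. This is not Cilium code: the map and function names are hypothetical, and compiling it requires clang with the `bpf` target plus the libbpf headers.

```c
// Minimal sketch of an eBPF program that counts execve() system calls.
// Build (assumption): clang -O2 -target bpf -c count_execve.c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

// A single-slot array map that user space can read to retrieve the count.
struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} exec_count SEC(".maps");

// Attached to the sys_enter_execve tracepoint; the kernel verifier
// checks this function for safety before it is JIT-compiled and run.
SEC("tracepoint/syscalls/sys_enter_execve")
int count_execve(void *ctx)
{
    __u32 key = 0;
    __u64 *val = bpf_map_lookup_elem(&exec_count, &key);
    if (val)
        __sync_fetch_and_add(val, 1);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Each time any process on the node calls execve(), the kernel runs this function at the tracepoint and increments the shared counter; no kernel recompilation or module loading is involved.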
Because eBPF programs run within the kernel, they come with the following benefits:
- Privileged processes only: Only privileged processes can load eBPF programs.
- Safe for the kernel: All programs must pass the safety validation step before they are run in the kernel. A program is considered safe if it does not crash or harm the kernel, and always runs to completion. In addition, eBPF programs have limited access to kernel functions and resources.
- Flexibility: Because eBPF programs do not require changes to the kernel source code or the development of new kernel modules, you can enhance kernel capabilities more quickly without maintaining custom kernel modules or waiting for new kernel releases.
- Efficient: eBPF programs are translated into native machine instructions, which makes them as efficient as natively compiled kernel code or code that is loaded as a kernel module.
- Enhanced monitoring and tracing: Because eBPF programs run in the kernel, they can detect, monitor, and trace incoming and outgoing packets as well as system calls. You can use this capability to send only a relevant subset of data to a user-space program, or to allow or block certain packets or system calls.
eBPF was initially developed for user programs, such as tcpdump, to filter the TCP/IP network packets that are sent from the kernel space into the user space. With the trend of containerizing software and building cloud-native applications, eBPF is now used for a wide variety of use cases, such as:
- High performance networking
- Advanced load balancing for cloud-native, multicluster and multi-cloud environments
- Security enforcement at container runtime
- Application tracing
- Performance monitoring and troubleshooting
You can write your own eBPF programs, typically in a restricted subset of C that is compiled to eBPF bytecode, or leverage projects, such as BumbleBee, to create the eBPF program for you.
How is eBPF used in Cilium?
Cilium provides an abstraction on top of the eBPF API that lets you implement intelligent networking policies and load balancing between workloads without the need to write actual eBPF code. Instead, you specify your network definition and policies in a YAML or JSON file, and let Cilium translate these definitions into eBPF programs.
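As a sketch of what such a YAML definition looks like, the following hypothetical CiliumNetworkPolicy allows only pods labeled `app: frontend` to reach pods labeled `app: backend` on TCP port 8080. The names and port are illustrative; Cilium translates the declarative rules into eBPF programs on each node.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  # Pods that this policy applies to.
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    # Only traffic from frontend pods is allowed...
    - fromEndpoints:
        - matchLabels:
            app: frontend
      # ...and only on TCP port 8080 (a layer 3/4 rule).
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
```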
As a Kubernetes CNI, Cilium is deployed as a daemon set on each node of the Kubernetes cluster. The daemon set enforces the networking policies you define and is responsible for translating your network definitions into eBPF programs. You can choose to enforce network policies on layer 3 or 4, but Cilium can also protect apps that use layer 7 protocols, such as HTTP, gRPC, or Kafka. Pods can communicate with each other over an overlay network with VXLAN tunneling for packet encapsulation or by utilizing the routing tables of the Linux host.
How does Cilium fit into the OSI model?
With Cilium, you can choose to apply network policies on layer 3, 4, and 7 of the OSI network model. For layer 3 and 4 network policies, disallowed packets are automatically dropped on the respective layer. Because the application layer (layer 7) is not involved in this communication, the client that sends the request does not receive an HTTP response code that it can use to analyze the reason a request was denied.
For network policies that are applied on layer 7, such as an access policy that allows only app A to talk to app B, denied requests return an HTTP response code to the client that includes the reason for the denial.
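A layer 7 rule of this kind can be sketched as follows. In this hypothetical policy, app A may call app B, but only with `GET` requests to paths under `/public/`; other HTTP requests are denied at the application layer, so the client receives an HTTP error response rather than a silent packet drop. All labels and paths are illustrative.

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: app-b-allow-app-a-http
spec:
  endpointSelector:
    matchLabels:
      app: app-b
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: app-a
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
          # The presence of an http rules section makes this a
          # layer 7 policy: requests are parsed and filtered by
          # method and path instead of being dropped at layer 3/4.
          rules:
            http:
              - method: "GET"
                path: "/public/.*"
```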
Why does Solo use eBPF and Cilium?
Solo is committed to providing valuable app networking and service connectivity solutions for customers, and believes that eBPF can greatly reduce network latency while enhancing security and observability in a cluster environment. Solo started investigating the benefits of integrating eBPF into a service mesh early on. With open source projects, such as BumbleBee, Solo provides a tool to easily create, build, run, and distribute eBPF programs without the need to learn eBPF.
To reduce network latency in a service mesh, Solo recently added support for eBPF as part of Gloo Mesh Enterprise. Instead of using iptables rules to redirect ingress and egress traffic from an app container to the injected Istio sidecar proxy and vice versa, eBPF is used to intercept the traffic and shorten the data path in a service mesh. With eBPF, packets to and from apps can be directly forwarded from one socket to the other. This setup reduces network latency and the necessary packet processing in the kernel.
Gloo Network complements the Solo application and networking stack and enables customers to combine Gloo Mesh's Layer 7 capabilities with eBPF's security and policy enforcement features on Layer 3 and 4 of the OSI networking model via Cilium. This integration gives you more flexibility and deeper control over connection handling, load balancing, and redirects, and you can decide if you want to enforce policies before, after, or when traffic reaches your workloads. To learn more about how Cilium is integrated into Gloo Network, see the About Gloo Network pages.