Benefits

Gloo Gateway is a feature-rich, Kubernetes-native ingress controller and next-generation API gateway. With Gloo Gateway, you get function-level routing, discovery capabilities, tight integration with leading open source projects, and support for legacy apps, microservices, and serverless workloads.

Built on Istio's ingress gateway model, Gloo Gateway uses an Envoy proxy as the ingress gateway to manage and control traffic that enters your Kubernetes cluster. You use custom resources, such as Gloo virtual gateways, route tables, and policies, to implement security measures that meet your business and app requirements and to simplify the configuration of ingress traffic rules.

Because these resources offer declarative, API-driven configuration, you can easily integrate Gloo Gateway into your existing GitOps and CI/CD workflows.
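
For illustration, a virtual gateway and route table might be declared along the following lines. This is a minimal sketch: the names, namespaces, labels, and app details are hypothetical, and the exact fields can vary by Gloo Gateway version, so check the API reference for your release.

```yaml
# Hypothetical example: a virtual gateway that selects the gateway proxy and
# delegates routing to route tables, plus a route table that forwards
# requests for www.example.com to an in-cluster service.
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
  name: ingress-gateway
  namespace: gloo-gateways
spec:
  workloads:
    - selector:
        labels:
          istio: ingressgateway   # the gateway proxy deployment
  listeners:
    - port:
        number: 80
      http: {}
      allowedRouteTables:
        - host: '*'
---
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: www-example-com
  namespace: gloo-gateways
spec:
  hosts:
    - www.example.com
  virtualGateways:
    - name: ingress-gateway
      namespace: gloo-gateways
  http:
    - name: app
      matchers:
        - uri:
            prefix: /
      forwardTo:
        destinations:
          - ref:
              name: myapp
              namespace: default
            port:
              number: 8080
```

Because these manifests are plain Kubernetes resources, you can store them in Git and apply them through your existing delivery pipeline.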

Review the key benefits that you get with Gloo Gateway.

With Gloo Gateway, you get a Layer 7 load-balancing solution that is built on open source projects. Envoy is a graduated CNCF project, and Istio is part of the CNCF as well. Solo is a leader within both of these communities and can help you get the most value out of your investment in open source technology. With this open source foundation, you can configure a portable, vendor-neutral solution across cloud providers.

Gloo Gateway is uniquely designed to support hybrid applications, in which multiple technologies, architectures, protocols, and clouds can coexist. For example, by using virtual gateway and route table resources, you can set up intelligent routing within a single cluster or across clusters. In addition, you can use external services to route to endpoints that are hosted outside of your Kubernetes cluster, such as an on-prem database.
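
As an illustration, an endpoint that runs outside the cluster, such as an on-prem database, might be represented with an external service resource roughly as follows. The host, port, and names are placeholders, and field names may differ slightly between versions.

```yaml
# Hypothetical example: make an on-prem database host available to routing rules.
apiVersion: networking.gloo.solo.io/v2
kind: ExternalService
metadata:
  name: on-prem-db
  namespace: gloo-gateways
spec:
  hosts:
    - db.corp.internal
  ports:
    - name: postgres
      number: 5432
      protocol: TCP
```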

Figure: Gloo Gateway provides multicluster load balancing and routing.

Gloo Gateway provides a suite of traffic policies for the advanced traffic management that distributed, cloud-native apps require. Highlights of these policies include the following benefits:

  • Upgrading services through canary deployments that shift traffic to different versions based on a customizable percentage (see the sketch after this list).
  • Mirroring, or copying, requests to a “shadow” environment so that you can test upgrades before rolling out to production.
  • Adding resiliency to your apps with timeouts, retries, and circuit breaking.
  • Injecting faults to simulate abnormal conditions and perform stress tests of your apps.
  • Manipulating request and response headers to inject or remove information specific to your apps, network, infrastructure, or environment.
  • Transforming requests in a number of different ways, from simple HTTP redirects or prefix rewrites, to more advanced header and body manipulations for identity-based routing.
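
For instance, a canary shift in a route table might look roughly like the following, where two versions of a service receive a weighted split of traffic. The service names, weights, and labels are illustrative only; verify the exact destination fields against the routing API reference for your version.

```yaml
# Hypothetical example: send 90% of traffic to v1 and 10% to the v2 canary.
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: reviews-canary
  namespace: gloo-gateways
spec:
  hosts:
    - reviews.example.com
  virtualGateways:
    - name: ingress-gateway
      namespace: gloo-gateways
  http:
    - name: reviews
      labels:
        route: reviews        # policies can select this route by label
      forwardTo:
        destinations:
          - ref:
              name: reviews-v1
              namespace: default
            port:
              number: 9080
            weight: 90
          - ref:
              name: reviews-v2
              namespace: default
            port:
              number: 9080
            weight: 10
```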

The policy “filters” that you can use with Gloo Gateway are highly extensible, and set you up to adopt cutting-edge technologies such as WebAssembly (Wasm), GraphQL, and eBPF.

Figure: Gloo Gateway provides a suite of capabilities to transform, shift, and otherwise control traffic.

Gloo Gateway can terminate TLS sessions before they reach your apps. You can configure the virtual gateway to use your own TLS certificates for each domain that it listens on. This way, you can use different certificates for different apps to meet your security standards.
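
A TLS-terminating listener might be sketched as follows, referencing a Kubernetes TLS secret for one domain; you could add further listeners with different secrets for other domains. The secret name and host are placeholders, and the TLS fields may vary by version.

```yaml
# Hypothetical example: terminate TLS for www.example.com with a certificate
# stored in a Kubernetes secret of type kubernetes.io/tls.
apiVersion: networking.gloo.solo.io/v2
kind: VirtualGateway
metadata:
  name: https-gateway
  namespace: gloo-gateways
spec:
  listeners:
    - port:
        number: 443
      http: {}
      tls:
        mode: SIMPLE
        secretName: www-example-com-tls
      allowedRouteTables:
        - host: www.example.com
```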

You can also integrate identity providers with external authentication and authorization policies. Then, Gloo Gateway can make routing decisions based on the identity of the requestor.
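
The shape of such an external auth policy might look roughly like the following OIDC example. The server reference, client details, and route labels are placeholders; the exact schema depends on your Gloo Gateway version and how the external auth server is deployed.

```yaml
# Hypothetical example: require an OIDC login for routes labeled route: reviews.
apiVersion: security.policy.gloo.solo.io/v2
kind: ExtAuthPolicy
metadata:
  name: oidc-auth
  namespace: gloo-gateways
spec:
  applyToRoutes:
    - route:
        labels:
          route: reviews
  config:
    server:
      name: ext-auth-server
      namespace: gloo-addons
    glooAuth:
      configs:
        - oauth2:
            oidcAuthorizationCode:
              clientId: my-client-id
              clientSecretRef:
                name: oidc-client-secret
                namespace: gloo-addons
              issuerUrl: https://idp.example.com/
              appUrl: https://www.example.com
              callbackPath: /callback
```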

Figure: Gloo Gateway integrates with identity providers to provide external auth, as well as certificate managers to secure traffic with mutual TLS.

You can apply several different policies, such as authentication, web application firewall (WAF), and rate limiting, to prevent threats before they reach the workloads in your cluster.
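
For example, a web application firewall policy might be sketched roughly as follows, attaching a custom ModSecurity rule to selected routes. The rule, labels, and names are illustrative only, and the policy fields may differ by version.

```yaml
# Hypothetical example: apply a custom ModSecurity rule to routes labeled route: reviews.
apiVersion: security.policy.gloo.solo.io/v2
kind: WAFPolicy
metadata:
  name: block-scanners
  namespace: gloo-gateways
spec:
  applyToRoutes:
    - route:
        labels:
          route: reviews
  config:
    customInterventionMessage: "This request was rejected"
    customRuleSets:
      - ruleStr: |
          SecRuleEngine On
          SecRule REQUEST_HEADERS:User-Agent "scanner" "id:107,phase:1,deny,status:403,msg:'blocked scanner'"
```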

Figure: Gloo Gateway can apply access policies such as web application firewalls and rate limiting to prevent threats before they reach your environment.

Gloo Gateway provides a variety of observability features to help you analyze your setup and the traffic that flows through your API Gateways. Metrics are automatically generated by the API Gateway and sent to the built-in Prometheus server. You can open the Prometheus UI and use PromQL queries to analyze the traffic that was processed by your API Gateway. Some of the metrics are also summarized and displayed in the Gloo UI. You can further use the Gloo UI to review the Kubernetes and Gloo Gateway resources that you set up, such as virtual gateways, route tables, or traffic policies.

You can use this data to detect failures, troubleshoot bottlenecks, and find ways to improve the performance and reliability of the services in your cluster.

Figure: Gloo Gateway offers a variety of observability tools, including a Prometheus-backed user interface, to give you insights into your environment.

You can centrally manage and configure your gateway proxies across Kubernetes namespaces and clusters by using the Gloo management and data plane architecture, and custom resources such as Gloo workspaces and virtual gateways. That way, you can reduce the management overhead for your resources and decrease the risk of configuration drift.
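
A workspace that groups gateway namespaces from two clusters might be declared roughly like this. The cluster and namespace names are placeholders, and the exact fields depend on your Gloo management plane version.

```yaml
# Hypothetical example: group gateway namespaces from two clusters into one workspace.
apiVersion: admin.gloo.solo.io/v2
kind: Workspace
metadata:
  name: gateways
  namespace: gloo-mesh
spec:
  workloadClusters:
    - name: cluster1
      namespaces:
        - name: gloo-gateways
    - name: cluster2
      namespaces:
        - name: gloo-gateways
```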

Figure: Gloo Gateway lets you centrally manage and apply configuration across multiple API Gateways.