This doc set is for users of the Gloo Mesh Gateway product, which sets up an Istio ingress gateway. If you use the Gloo Gateway product to set up an ingress gateway with the Kubernetes Gateway API instead, see the Gloo Gateway docs.
Federated services
Explore routing scenarios for services that are federated across clusters.
Gloo Mesh Gateway federates Gloo and Kubernetes resources so that services can communicate with each other across clusters within the workspace. To federate the resources, Gloo does the following:
Discovers Gloo and Kubernetes resources in each workspace. As part of discovery, Gloo identifies the workspace settings to decide which resources should be available in which clusters and namespaces.
Translates the Gloo and Kubernetes resources into underlying Istio resources such as virtual services, service entries, and Envoy filters.
Copies the underlying Istio resources to each cluster and namespace that the Gloo or Kubernetes resource belongs to.
Because workspaces can span clusters, federation makes the resource available across clusters. For more details on the discovery and translation process, see Relay architecture.
Gloo can federate ungrouped services for an entire workspace, or groupings of services that you define in select Gloo custom resources. You can also use both types of federation together. In such a case, the behavior is determined by whether the services that back the host that you call are ungrouped or grouped.
You can enable federation for a workspace in the workspace settings. Then, Gloo creates separate Istio service entries in each cluster for the Kubernetes services of the workspace. In this type of federation, the destinations are not grouped for you. As such, you can’t take advantage of all the routing capabilities of Gloo resources, such as attaching policies to all the service entries at once. Instead, the recommended way to federate is to use Gloo resources that group destinations, like virtual destinations or external services.
When you create one of these Gloo custom resources, Gloo automatically federates the resource across namespaces and clusters within the workspace. This type of federation lets you group the destinations, set up routes, and apply policies consistently.
Testing-only: Ungrouped federation at the workspace level
In the workspace settings, you can configure federation and the east-west gateway to use for multicluster traffic. Then, Gloo federates each service to every namespace within that workspace, as well as to any workspace that imports the service.
As part of the federation process, Gloo creates Istio service entries in each cluster for the Kubernetes services in a workspace, with unique hostnames in the format <service_name>.<namespace>.svc.<cluster_name>.<host_suffix>. Then, you can route to the federated host directly or by creating a route table for that host.
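For example, a WorkspaceSettings resource along the lines of the following sketch enables federation. The workspace name, namespace, host suffix, and gateway labels are placeholders, and the exact fields can vary by Gloo version, so check the API docs for your setup.

```yaml
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
  # Placeholder names; use your own workspace and config namespace.
  name: my-workspace
  namespace: gloo-mesh-config
spec:
  options:
    federation:
      # Federate the Kubernetes services in this workspace.
      enabled: true
      # Federated hostnames use this suffix:
      # <service_name>.<namespace>.svc.<cluster_name>.global
      hostSuffix: global
    eastWestGateways:
    # Send multicluster traffic through gateways with this label.
    - selector:
        labels:
          istio: eastwestgateway
```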
Workspace-level federation is not intended for high availability failover. Unlike with Gloo virtual destinations, workspace-level federation does not group the destinations for traffic. Depending on the number of services, this type of federation might cause issues at scale. You cannot attach other Gloo resources such as policies to these federated Kubernetes services. As such, workspace-level federation is for simple testing scenarios only. Otherwise, use virtual destinations or external services with route tables.
Flip through the following diagrams to understand how routing works with certain workspace-level federation scenarios.
In the following example, Gloo federates the services for the apps in each cluster. The hostnames follow the format <service_name>.<namespace>.svc.<cluster_name>.<host_suffix>. You can choose the <host_suffix> in the workspace settings.
The client in cluster1 can call either hostname to get the service that backs the hostname.
app.ns.svc.cluster1-host returns the response from app-v1 in cluster1.
app.ns.svc.cluster2-host returns the response from app-v2 in cluster2. Because the backing service is in a different cluster than the client, the request is routed through the east-west gateway.
In this scenario, app-v2 in cluster2 has multiple replicas that back the app.ns.svc.cluster2-host federated hostname.
The client in cluster1 can call either hostname to get the service that backs the hostname.
app.ns.svc.cluster1-host returns the response from app-v1 in cluster1.
app.ns.svc.cluster2-host returns the response from one of the replicas of app-v2 in cluster2. The east-west gateway automatically load balances between the replicas.
In this scenario, instead of calling the federated hostnames directly, the client calls a route that is defined in a route table.
The route table configures different routes to forward to each cluster’s app.
You can set weights for the routes, such as to prefer a route that returns responses from the local cluster (see the sample route table after this scenario).
Requests on route1 are returned by app-v1 in cluster1.
Requests on route2 are returned by app-v2 in cluster2.
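A route table for this scenario might look like the following sketch. The app.example.com host, route names, paths, and ports are placeholders; each destination ref points to the app's Kubernetes service in one cluster.

```yaml
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: app-routes
  namespace: ns
spec:
  hosts:
  # Placeholder hostname that the client calls.
  - app.example.com
  http:
  - name: route1
    matchers:
    - uri:
        prefix: /route1
    forwardTo:
      destinations:
      # The app's Kubernetes service in the local cluster.
      - ref:
          name: app
          namespace: ns
          cluster: cluster1
        port:
          number: 80
  - name: route2
    matchers:
    - uri:
        prefix: /route2
    forwardTo:
      destinations:
      # The app's Kubernetes service in the remote cluster. Traffic
      # travels through the east-west gateway.
      - ref:
          name: app
          namespace: ns
          cluster: cluster2
        port:
          number: 80
```

To split traffic on a single route instead, you can list multiple destinations under one forwardTo section and give each destination a weight.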
Besides setting route weights, you can use route tables in combination with policies to manage traffic. For example, you can apply an outlier detection policy to the federated service to force client traffic to the local instance of the app.
The route table configures different routes to forward to the federated hostnames for each cluster’s app.
An outlier detection policy applies to the Kubernetes service for app-v1.
Requests on route1 or route2 are returned by app-v1 in cluster1 because app-v1 is local to the client.
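The outlier detection policy in this scenario might look like the following sketch, which selects the app's Kubernetes service directly. The names and thresholds are placeholder values to adjust for your environment.

```yaml
apiVersion: resilience.policy.gloo.solo.io/v2
kind: OutlierDetectionPolicy
metadata:
  name: app-outlier-detection
  namespace: ns
spec:
  applyToDestinations:
  # Select the Kubernetes service for app-v1 in cluster1.
  - kind: SERVICE
    selector:
      name: app
      namespace: ns
      cluster: cluster1
  config:
    # Eject an endpoint after 5 consecutive errors, checking every
    # 10 seconds, and keep it out of the pool for at least 30 seconds.
    consecutiveErrors: 5
    interval: 10s
    baseEjectionTime: 30s
```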
Keep in mind that workspace-level federation and route tables perform load balancing only, not failover.
The route table configures different routes to forward to each cluster’s app.
Requests on route1 are returned by app-v1 in cluster1. If app-v1 in cluster1 fails, requests on route1 still go to cluster1, and 503 responses are returned to the client.
Requests on route2 are returned by app-v2 in cluster2.
To avoid 503 responses, you can apply an outlier detection policy to the Kubernetes service that backs the federated hostname for the app. Then, unhealthy replicas of app-v2 are removed from the load balancing pool.
To set up failover, use virtual destinations instead of workspace-level federation.
The client in cluster1 can call either hostname to get the service that backs the hostname.
app.ns.svc.cluster1-host returns the response from app-v1 in cluster1.
app.ns.svc.cluster2-host returns the response from one of the replicas of app-v2 in cluster2. The east-west gateway automatically load balances between the replicas.
The outlier detection policy on the app service detects when a replica becomes unhealthy and removes it from load balancing. This way, responses for app.ns.svc.cluster2-host are still successful.
Recommended: Grouped federation with Gloo custom resources
When you create a virtual destination or external service, Gloo groups together and federates the backing services. Federation makes the service that the resource represents available in each namespace within the workspace, across clusters, and even in other workspaces if you set up importing and exporting.
This way, you get intelligent, multicluster routing for the services that the virtual destinations or external services select. You also get consistent ingress control for the routes that you define in route tables. Additionally, you can attach Gloo policies to these resources, such as to secure and shift traffic. Depending on the type of policy and your workspace settings, these policies might even apply across workspaces.
Flip through the following scenarios to understand how routing works for federated, grouped resources.
In the following example, you create a virtual destination to federate access to services across clusters.
The client in cluster1 calls the app.global hostname that is defined in the virtual destination, which groups together the app-v1 and app-v2 services.
Responses are load balanced from both app-v1 in cluster1 and app-v2 in cluster2.
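A virtual destination for this scenario might look like the following sketch. The app.global host, the app: app label, and the port are placeholders; the label selector is assumed to match the app services in every cluster of the workspace.

```yaml
apiVersion: networking.gloo.solo.io/v2
kind: VirtualDestination
metadata:
  name: app-global
  namespace: ns
spec:
  hosts:
  # Clients anywhere in the workspace can call this hostname.
  - app.global
  services:
  # Group all Kubernetes services with this label, across clusters.
  - labels:
      app: app
  ports:
  - number: 80
    protocol: HTTP
```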
To force local traffic, you can apply an outlier detection policy to the virtual destination.
The client in cluster1 calls the app.global hostname that is defined in the virtual destination.
Responses are returned from app-v1 in cluster1 that is local to the client.
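A sketch of such a policy follows, assuming the app-global virtual destination from the previous example. Because the policy selects the virtual destination, it applies to all the backing services at once; the thresholds are placeholder values.

```yaml
apiVersion: resilience.policy.gloo.solo.io/v2
kind: OutlierDetectionPolicy
metadata:
  name: app-global-outlier-detection
  namespace: ns
spec:
  applyToDestinations:
  # Select the virtual destination instead of individual services.
  - kind: VIRTUAL_DESTINATION
    selector:
      name: app-global
      namespace: ns
  config:
    consecutiveErrors: 5
    interval: 10s
    baseEjectionTime: 30s
```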
If you don’t have a local service in the same cluster, then requests are load balanced across backing services in other clusters.
The client in cluster1 calls the app.global hostname that is defined in the virtual destination.
Responses are load balanced from the remote app-v2 in cluster2 and app-v3 in cluster3.
The virtual destination with an outlier detection policy removes any unhealthy services from load balancing. As long as other healthy services back the virtual destination, the client continues to get back successful responses.
The client in cluster1 calls the app.global hostname that is defined in the virtual destination.
The outlier detection policy on the virtual destination ensures that the unhealthy app-v1 in cluster1 is removed from load balancing.
Responses are load balanced from the remote app-v2 in cluster2 and app-v3 in cluster3.
You can also decide where failover traffic goes with a failover policy. In the following example, the failover policy applies to the virtual destination and specifies that traffic from cluster1 fails over to cluster2.
The client in cluster1 calls the app.global hostname that is defined in the virtual destination.
The outlier detection policy on the virtual destination ensures that the unhealthy app-v1 in cluster1 is removed from load balancing.
The failover policy on the virtual destination ensures that responses are returned only from app-v2 in cluster2.
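A failover policy for this scenario might look like the following sketch. Failover is expressed through locality mappings, so this example assumes that cluster1 runs in the us-east region and cluster2 in us-west; adjust the localities to your topology, and pair the policy with an outlier detection policy like the previous example so that unhealthy endpoints are detected.

```yaml
apiVersion: resilience.policy.gloo.solo.io/v2
kind: FailoverPolicy
metadata:
  name: app-failover
  namespace: ns
spec:
  applyToDestinations:
  - kind: VIRTUAL_DESTINATION
    selector:
      name: app-global
      namespace: ns
  config:
    localityMappings:
    # When endpoints in cluster1's locality (assumed us-east) are
    # unhealthy, fail traffic over to cluster2's locality (us-west).
    - from:
        region: us-east
      to:
      - region: us-west
```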
Virtual destinations can also federate services outside the service mesh by selecting an external service.
The client in cluster1 calls the app.global hostname that is defined in the virtual destination.
Responses are load balanced from both app-v1 in cluster1 and app-v2 outside the mesh.
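To model this scenario, you might pair an external service with a virtual destination that selects it, as in the following sketch. The app.vm.example.com hostname, resource names, and ports are placeholders, and the externalServices selector is assumed to be supported by your Gloo version.

```yaml
apiVersion: networking.gloo.solo.io/v2
kind: ExternalService
metadata:
  name: app-external
  namespace: ns
spec:
  hosts:
  # Placeholder hostname of the app instance outside the mesh.
  - app.vm.example.com
  ports:
  - name: http
    number: 80
    protocol: HTTP
---
apiVersion: networking.gloo.solo.io/v2
kind: VirtualDestination
metadata:
  name: app-global
  namespace: ns
spec:
  hosts:
  - app.global
  # Group the in-mesh app services ...
  services:
  - labels:
      app: app
  # ... together with the external service.
  externalServices:
  - name: app-external
    namespace: ns
  ports:
  - number: 80
    protocol: HTTP
```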
You can also implement similar scenarios as covered previously, such as forcing local traffic, setting up outlier detection, and configuring failover. For more information, see Routing to external services.
By using virtual destinations in combination with route tables, you get greater control over routing. The virtual destinations let you treat a set of similar services in different clusters as a group. But what if you want to route to only some of the backing services?
Instead of making separate virtual destinations, you can use the subset feature of route tables. In the following example, the route table selects the virtual destination for the app. Then, the subsets for the routes control the behavior.
For requests on route1 in cluster1, the client gets back responses only from app-v1, not app-v2.
For requests on route2 in cluster2, the client gets back responses only from app-v3, not app-v4.
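A route table with subsets might look like the following sketch, assuming the backing workloads carry version labels such as version: v1. The host, route names, paths, and labels are placeholders.

```yaml
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: app-subset-routes
  namespace: ns
spec:
  hosts:
  - app.global
  http:
  - name: route1
    matchers:
    - uri:
        prefix: /route1
    forwardTo:
      destinations:
      - ref:
          name: app-global
          namespace: ns
        kind: VIRTUAL_DESTINATION
        # Only endpoints whose pods have this label get traffic.
        subset:
          version: v1
  - name: route2
    matchers:
    - uri:
        prefix: /route2
    forwardTo:
      destinations:
      - ref:
          name: app-global
          namespace: ns
        kind: VIRTUAL_DESTINATION
        subset:
          version: v3
```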
For more route table possibilities, see the API docs.