Multi-cluster routing with virtual destinations
You can easily set up intelligent multi-cluster routing for active-active and active-passive workloads in your service mesh by using virtual destinations. With virtual destinations, you can define unique internal hostnames for apps that are spread across multiple clusters, and enable service discovery for these apps within the mesh by adding the hostnames to the service mesh registry.
For more information about how to set up intra-mesh routing, see Route across clusters.
How does it work?
To reach a multi-cluster endpoint, your apps inside the mesh simply send a request to the internal hostname, and Gloo Mesh then routes the request to the desired destination. Your apps do not need to know where the target app is deployed within the mesh. Instead, Gloo Mesh takes into account the context of the client request, such as the client location, where the target app is deployed within the mesh, and the routing rules that you defined to ensure that the request is routed in an efficient and secure manner.
Let’s take a look at an example multi-cluster setup to see how routing is done for active-active workloads in Kubernetes and how you can improve this scenario by using Gloo Mesh.
Multi-cluster routing in Kubernetes
In the following example, client A in cluster 1 wants to send a request to app B. The app B deployment consists of multiple instances that are spread across multiple clusters. To achieve multi-cluster routing in Kubernetes, all participating clusters must be added to a global load balancer. The global load balancer is assigned a custom domain, such as mydomain.com, and each app in the cluster must be configured to serve on a specific path, such as app-b. For client A to reach app B, client A must send a request over the internet to the global load balancer by using the mydomain.com/app-b address. The global load balancer uses round-robin load balancing to select a cluster and routes the request to that cluster's ingress gateway, which then forwards the request to app B.
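In this traditional setup, each cluster typically exposes app B on the shared path through its ingress, for example with a Kubernetes Ingress resource. The following sketch is illustrative only; the resource name, namespace, and service name are assumptions:

```yaml
# Illustrative Ingress for one cluster in the traditional setup.
# The global load balancer forwards mydomain.com/app-b to this cluster's
# ingress gateway, which routes the path to the local app B service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-b            # hypothetical resource name
  namespace: default     # assumed namespace
spec:
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /app-b
        pathType: Prefix
        backend:
          service:
            name: app-b  # hypothetical name of the app B service
            port:
              number: 80
```

Note that this Ingress must be created and kept in sync in every cluster that runs app B, which is part of the operational overhead described next.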
A major challenge in this setup is knowing where app B is deployed. The global load balancer is aware of the ingress gateway IP addresses and whether or not a certain cluster can be reached. However, the global load balancer does not know if app B is actually deployed in cluster 2 or 3, or whether app B is healthy to accept requests. Furthermore, client A must always leave the service mesh and go over the internet to send the request to the global load balancer, even though a local instance of app B exists.
Multi-cluster routing with Gloo Mesh
Now let’s look at how you can improve this scenario with virtual destinations in Gloo Mesh. To expose app B within the service mesh, you define a unique internal hostname, such as app-b.mesh.internal, and assign a target port to it, such as 80. Then, you use labels to select the services in your clusters that expose app B. Gloo Mesh automatically discovers the app B service instances and creates the corresponding Istio custom resources, such as service entries, virtual services, and destination rules, to enable peer-to-peer multi-cluster routing.
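A virtual destination for this example might look like the following sketch, based on the Gloo Mesh v2 VirtualDestination API. The metadata and label values are assumptions for illustration; verify the field names against the API reference for your Gloo Mesh version:

```yaml
apiVersion: networking.gloo.solo.io/v2
kind: VirtualDestination
metadata:
  name: app-b
  namespace: gloo-mesh   # assumed namespace
spec:
  # Unique internal hostname that clients in the mesh use to reach app B
  hosts:
  - app-b.mesh.internal
  # Select the app B services in all workload clusters by label
  services:
  - labels:
      app: app-b         # assumed label on the app B services
  # Target port that the hostname serves on
  ports:
  - number: 80
    protocol: HTTP
```

Because the services are selected by label, new app B instances in any registered cluster are picked up automatically, without changing this resource.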
Because Gloo Mesh is aware of where app B is deployed and whether an instance is healthy enough to accept incoming requests, any request from client A to app-b.mesh.internal:80 is routed directly to the closest app B instance without leaving the service mesh. In the following diagram, the closest instance is deployed within the same cluster.
But what happens if no local app B instance exists or the local instance becomes unavailable? Gloo Mesh automatically updates the routing rules and gives priority to the next closest app B instance in another cluster. In the following diagram, the next closest app B instance is deployed to cluster 2. Instead of leaving the service mesh and going to the global load balancer, the request is now routed to the east-west gateway of cluster 2 directly, which forwards the request to app B.
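Under the hood, this failover behavior builds on Istio's locality-aware load balancing, which requires outlier detection to be enabled so that unhealthy endpoints can be ejected. The following is a rough sketch of the kind of Istio traffic policy involved, not the exact resources that Gloo Mesh generates for you:

```yaml
# Sketch of locality-aware failover in Istio terms (illustrative only).
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-b-locality   # hypothetical resource name
spec:
  host: app-b.mesh.internal
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true          # prefer endpoints in the client's locality
    outlierDetection:          # required for locality failover to take effect
      consecutive5xxErrors: 3  # eject an endpoint after 3 consecutive 5xx errors
      interval: 10s
      baseEjectionTime: 30s
```

With virtual destinations, Gloo Mesh manages this configuration for you, so you do not need to author or maintain such resources per cluster.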
With location-aware, priority-based routing, you ensure that requests are always routed in an efficient and secure manner, which reduces network latency and response times and improves availability and reliability for your services within the mesh. Instead of duplicating the same deployments across all clusters as you would in a traditional Kubernetes setup, you can optimize your clusters and deploy your apps where you need them, without configuring and updating custom routing rules for each cluster.