Routing based on locality

Providing high availability for applications across clusters, zones, and regions can be a significant challenge. Source traffic should be routed to the closest available destination, or to a failover destination if issues occur. In this guide, you will use a VirtualDestination to accomplish locality-based failover.

Gloo Mesh provides the ability to configure a VirtualDestination, a virtual traffic destination composed of one or more selected services. The composing services are configured with outlier detection, which gives the system the ability to detect unresponsive services. Traffic is automatically sorted into priority levels by proximity to the originating service, and fails over to lower-priority destinations when higher-priority destinations become unhealthy.

Before you begin

To illustrate these concepts, this guide builds on the setup from the Guides top-level document. Be sure to review the assumptions and satisfy the prerequisites described there.

Set the following variables in your environment:


MGMT_CONTEXT=your_mgmt_context
REMOTE_CONTEXT1=your_first_remote_context
REMOTE_CONTEXT2=your_second_remote_context
MGMT_CLUSTER=your_mgmt_cluster_name
REMOTE_CLUSTER1=your_first_cluster_name
REMOTE_CLUSTER2=your_second_cluster_name

For example:

MGMT_CONTEXT=kind-mgmt
REMOTE_CONTEXT1=kind-cluster-1
REMOTE_CONTEXT2=kind-cluster-2
MGMT_CLUSTER=mgmt
REMOTE_CLUSTER1=cluster-1
REMOTE_CLUSTER2=cluster-2
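
If you are unsure which context names correspond to your clusters, you can list the contexts in your kubeconfig:

kubectl config get-contexts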

Configure the locality labels for the nodes

Gloo Mesh uses Kubernetes labels on the nodes in your clusters to indicate locality for the services that run on the nodes. For more information, see the Kubernetes topology and Istio locality documentation.

Verify that your nodes have locality labels

Verify that your nodes have at least region and zone labels. If they do, and you do not want to update the labels, you can skip the remaining steps and continue with Create the VirtualDestination.

kubectl get nodes --context $REMOTE_CONTEXT1 -o jsonpath='{.items[*].metadata.labels}'
kubectl get nodes --context $REMOTE_CONTEXT2 -o jsonpath='{.items[*].metadata.labels}'

Example output with region and zone labels:

..."topology.kubernetes.io/region":"us-east","topology.kubernetes.io/zone":"us-east-2"

Add locality labels to your nodes

If your nodes do not already have region and zone labels, you must add the labels. Depending on your cluster setup, you might add the same region label to each node, but a separate zone label per node. The values are not validated against your underlying infrastructure provider. The following example shows how you might label multizone clusters in two different regions, but you can adapt the steps for your actual setup.

  1. Label all the nodes in each cluster for the region. If your nodes have incorrect region labels, include the --overwrite flag in the command.
    kubectl label nodes --all --context $REMOTE_CONTEXT1 topology.kubernetes.io/region=us-east
    kubectl label nodes --all --context $REMOTE_CONTEXT2 topology.kubernetes.io/region=us-west
    
  2. List the nodes in each cluster. Note the name for each node.
    kubectl get nodes --context $REMOTE_CONTEXT1
    kubectl get nodes --context $REMOTE_CONTEXT2
    
  3. Label each node in each cluster for the zone. If your nodes have incorrect zone labels, include the --overwrite flag in the command.
    kubectl label node <cluster1-node1> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-1
    kubectl label node <cluster1-node2> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-2
    kubectl label node <cluster1-node3> --context $REMOTE_CONTEXT1 topology.kubernetes.io/zone=us-east-3
    
    kubectl label node <cluster2-node1> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-1
    kubectl label node <cluster2-node2> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-2
    kubectl label node <cluster2-node3> --context $REMOTE_CONTEXT2 topology.kubernetes.io/zone=us-west-3
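
  4. Optionally, verify the labels by listing the region and zone for each node. The -L flag adds the value of the given label as a column in the output.
    kubectl get nodes --context $REMOTE_CONTEXT1 -L topology.kubernetes.io/region -L topology.kubernetes.io/zone
    kubectl get nodes --context $REMOTE_CONTEXT2 -L topology.kubernetes.io/region -L topology.kubernetes.io/zone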
    

Create the VirtualDestination

Now we will create the VirtualDestination for the reviews service in the management cluster. The VirtualDestination is composed of the reviews services on both cluster-1 and cluster-2. If the reviews service on the local (cluster-1) cluster is unhealthy, requests will automatically be routed to the reviews service on cluster-2.


apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualDestination
metadata:
  name: bookinfo-global
  namespace: gloo-mesh
spec:
  hostname: reviews.global
  port:
    number: 9080
    protocol: http
  localized:
    outlierDetection:
      consecutiveErrors: 1
      maxEjectionPercent: 100
      interval: 5s
      baseEjectionTime: 120s
    destinationSelectors:
    - kubeServiceMatcher:
        labels:
          app: reviews
  virtualMesh:
    name: virtual-mesh
    namespace: gloo-mesh

kubectl --context $MGMT_CONTEXT apply -f - << EOF
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualDestination
metadata:
  name: bookinfo-global
  namespace: gloo-mesh
spec:
  hostname: reviews.global
  port:
    number: 9080
    protocol: http
  localized:
    outlierDetection:
      consecutiveErrors: 1
      maxEjectionPercent: 100
      interval: 5s
      baseEjectionTime: 120s
    destinationSelectors:
    - kubeServiceMatcher:
        labels:
          app: reviews
  virtualMesh:
    name: virtual-mesh
    namespace: gloo-mesh
EOF

For demonstration purposes, we're setting consecutiveErrors to 1 and maxEjectionPercent to 100 to more easily trigger the failover. These values are likely too aggressive for production scenarios.
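
As a point of comparison, a production configuration would typically tolerate more errors before ejecting an endpoint. The following sketch shows the same outlierDetection block with less aggressive, purely illustrative values; the right numbers depend on your application's error tolerance and are assumptions, not recommendations.

    outlierDetection:
      consecutiveErrors: 5      # illustrative value, not a recommendation
      maxEjectionPercent: 50
      interval: 10s
      baseEjectionTime: 30s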

The virtualMesh field indicates which control planes the VirtualDestination is visible to; it will be visible to all meshes in the VirtualMesh. Alternatively, a list of meshes can be supplied here.

Once applied, run the following:

kubectl --context $MGMT_CONTEXT -n gloo-mesh get virtualdestination/bookinfo-global -oyaml

and you should see the following status:

status:
  observedGeneration: "1"
  state: ACCEPTED
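
To check only the state without printing the full resource, you can extract it with a JSONPath expression:

kubectl --context $MGMT_CONTEXT -n gloo-mesh get virtualdestination bookinfo-global -o jsonpath='{.status.state}'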

Demonstrate locality routing functionality

To demonstrate locality routing functionality, configure a TrafficPolicy to shift requests that target the reviews service on the first remote cluster to the reviews VirtualDestination that you previously created.


apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  name: reviews-shift-failover
  namespace: bookinfo
spec:
  destinationSelector:
  - kubeServiceRefs:
      services:
      - clusterName: $REMOTE_CLUSTER1
        name: reviews
        namespace: bookinfo
  policy:
    trafficShift:
      destinations:
      - virtualDestination:
          name: bookinfo-global
          namespace: gloo-mesh

kubectl --context $MGMT_CONTEXT apply -f - << EOF
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  name: reviews-shift-failover
  namespace: bookinfo
spec:
  destinationSelector:
  - kubeServiceRefs:
      services:
      - clusterName: $REMOTE_CLUSTER1
        name: reviews
        namespace: bookinfo
  policy:
    trafficShift:
      destinations:
      - virtualDestination:
          name: bookinfo-global
          namespace: gloo-mesh
EOF
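
As with the VirtualDestination, you can inspect the TrafficPolicy to confirm it was accepted; assuming the same approval workflow, its status should also report an ACCEPTED state:

kubectl --context $MGMT_CONTEXT -n bookinfo get trafficpolicy reviews-shift-failover -oyaml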

Now we can test the traffic shift by accessing the reviews service from the Bookinfo product page. Port-forward the productpage pod with the following command and open your web browser to localhost:9080/productpage.

kubectl --context $REMOTE_CONTEXT1 -n bookinfo port-forward deployments/productpage-v1 9080

Reloading the page a few times should show the “Book Reviews” section with either no stars (for requests routed to the reviews-v1 pod) or black stars (for requests routed to the reviews-v2 pod). This is the desired behavior: the product page requests originate in the local cluster and are routed to a local destination.

Recall from the multicluster setup guide that reviews-v1 and reviews-v2 exist only on cluster-1 and reviews-v3 exists only on cluster-2; we'll use this to distinguish which cluster requests are routed to.
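
If you prefer to check from the command line, the following sketch samples the product page a few times and counts red stars in the returned HTML. It assumes the port-forward above is still running and that the productpage renders the star color returned by the reviews service (red for reviews-v3, black for reviews-v2); before the failover every request should report 0, and you can re-run the loop after triggering the failover below.

for i in $(seq 1 10); do
  # Count lines containing red stars; 0 means the request was served locally by reviews-v1 or reviews-v2
  curl -s localhost:9080/productpage | grep -c 'color="red"'
done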

Now, to trigger the failover, we'll modify the reviews-v1 and reviews-v2 deployments on cluster-1 so that their web servers no longer run.

Run the following commands:

kubectl --context $REMOTE_CONTEXT1 -n bookinfo patch deploy reviews-v1 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}'
kubectl --context $REMOTE_CONTEXT1 -n bookinfo patch deploy reviews-v2 --patch '{"spec": {"template": {"spec": {"containers": [{"name": "reviews","command": ["sleep", "20h"]}]}}}}'
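
Optionally, wait for the patched deployments to finish rolling out before refreshing the page:

kubectl --context $REMOTE_CONTEXT1 -n bookinfo rollout status deploy/reviews-v1
kubectl --context $REMOTE_CONTEXT1 -n bookinfo rollout status deploy/reviews-v2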

Once the modified deployments have rolled out, refresh the product page. You should now see reviews with red stars, corresponding to reviews-v3, which exists only on cluster-2. This demonstrates that requests are failing locally and are instead being routed to the remote instance.

To restore the disabled reviews-v1 and reviews-v2, run the following:

kubectl --context $REMOTE_CONTEXT1 -n bookinfo patch deployment reviews-v1 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'
kubectl --context $REMOTE_CONTEXT1 -n bookinfo patch deployment reviews-v2 --type json -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/command"}]'

Once the deployments have rolled out, reloading the product page should show reviews with no stars or black stars, indicating that our localized virtual destination is once again routing requests to the local service in cluster-1.

Next steps

In this guide, you successfully configured locality-based, cross-cluster failover using a VirtualDestination and a TrafficPolicy. To learn how to apply Traffic Policies to the VirtualDestination you just created, see Traffic policies for virtual destinations.