Getting started

Get started with Gloo Mesh Gateway by testing multicluster traffic management and advanced routing.

About

In this guide: Learn to use Gloo Mesh Gateway for incoming network traffic, also called “ingress,” “edge,” or “north-south” traffic. You start by setting up a simple route through one ingress gateway. Then, you configure detailed routes through multiple gateways across multiple clusters.

Benefits: By using Gloo Mesh Gateway for north-south routing, you can use the Gloo Mesh management plane to configure ingress gateways and routes across multiple clusters and service meshes in your environment. This allows you to configure your north-south routing setup alongside your east-west microservice routing, and use the Gloo Mesh observability suite to monitor all traffic flows in your environment.

Context in your environment: Commonly, the Istio ingress gateway, which is exposed by a Kubernetes load balancer service, is the targeted listener in north-south routing configurations. For example deployment patterns, see Example architectures.

Resources: In this guide, you configure VirtualGateway, VirtualHost, RouteTable, and TrafficPolicy resources. For more information, see Gloo Mesh Gateway custom resources.

Example use case: The sections in this guide use the Bookinfo sample app to demonstrate a multicluster routing setup, in which multiple istio-ingressgateways route north-south traffic to Bookinfo services across multiple clusters. Additionally, a route table is used to demonstrate a separation of responsibilities for various teams in an organization.

Before you begin

  1. Set the names of your clusters from your infrastructure provider. In this guide, the cluster names mgmt-cluster, cluster-1, and cluster-2 are used.
    export MGMT_CLUSTER=<management_cluster_name>
    export REMOTE_CLUSTER1=<remote_cluster_1_name>
    export REMOTE_CLUSTER2=<remote_cluster_2_name>
    
  2. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column.
    export MGMT_CONTEXT=<management_cluster_context>
    export REMOTE_CONTEXT1=<remote_cluster_1_context>
    export REMOTE_CONTEXT2=<remote_cluster_2_context>
    
  3. Install Istio into the istio-system root namespace of each workload cluster. If you deployed the istio-ingressgateway into another namespace, such as the istio-ingress namespace, be sure to change the namespace in the resources in this gateway guide.
  4. Deploy the Bookinfo sample app across the two workload clusters into the bookinfo namespace, such that cluster-1 runs the app with versions 1 and 2 of the reviews service (reviews-v1 and reviews-v2), and cluster-2 runs version 3 of the reviews service (reviews-v3).
  5. Install the Gloo Mesh management components into the gloo-mesh namespace of your management cluster.

    Basic Gateway features are available with a standard Gloo Mesh Enterprise license. To use advanced gateway features, purchase a Gloo Mesh Gateway license and use this gateway license during Gloo Mesh installation.

  6. Register the workload clusters with Gloo Mesh.

When these prerequisites are complete, the workload environment looks like the following. Note that no routes are configured yet for the istio-ingressgateway in cluster-1.

Bookinfo Multicluster

Setting up a basic route

To start, set up a route to call the ratings service from bookinfo on cluster-1.

  1. Create a VirtualGateway resource in the management cluster named demo-gateway. This resource selects:

    • One gateway, the istio-ingressgateway in cluster-1. In subsequent sections you can deploy this configuration to multiple gateways at once.
    • The port the gateway listener runs on, named http2. This corresponds to port 8080 on the istio-ingressgateway service.
    • The type of gateway listener, HTTP.
    • The virtual host for www.example.com and a route to /ratings, which forwards traffic to the ratings service.
    cat << EOF | kubectl apply --context $MGMT_CONTEXT -f -
    apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
    kind: VirtualGateway
    metadata:
      name: demo-gateway
      namespace: gloo-mesh
    spec:
      ingressGatewaySelectors:
      - portName: http2
        destinationSelectors:
        - kubeServiceMatcher:
            clusters:
            - cluster-1
            labels:
              istio: ingressgateway
            namespaces:
            - istio-system
      connectionHandlers:
      - http:
          routeConfig:
          - virtualHost:
              domains:
              - www.example.com
              routes:
              - matchers:
                - uri:
                    prefix: /ratings
                name: ratings
                routeAction:
                  destinations:
                  - kubeService:
                      clusterName: cluster-1
                      name: ratings
                      namespace: bookinfo
    EOF
    
  2. Get the address of the Istio ingress gateway on cluster-1.

    
    If the load balancer assigns an IP address:

       CLUSTER_1_INGRESS_ADDRESS=$(kubectl --context $REMOTE_CONTEXT1 get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

    If the load balancer assigns a hostname instead, such as an AWS load balancer:

       CLUSTER_1_INGRESS_ADDRESS=$(kubectl --context $REMOTE_CONTEXT1 get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')


    Note: Depending on your environment, you might see <pending> instead of an external IP address. For example, if you are testing locally in kind or minikube, or if you have insufficient permissions in your cloud platform, you can instead port-forward local port 8081 to the http2 port of the ingress gateway deployment:

    kubectl --context $REMOTE_CONTEXT1 -n istio-system port-forward deploy/istio-ingressgateway 8081:8080
    
  3. Test the route by sending a request to the ratings service. If you used port forwarding, substitute localhost:8081 for the ingress address.

    curl -H "Host: www.example.com" $CLUSTER_1_INGRESS_ADDRESS/ratings/1
    

    Example response:

    {"id":1,"ratings":{"Reviewer1":5,"Reviewer2":4}}
    

Congratulations, you configured your first route with Gloo Mesh Gateway!

Splitting out the VirtualHost

In the previous section, the basic example included the VirtualHost in-line within the VirtualGateway resource. This works great for simple setups or quick tests. However, as your deployments grow in complexity and you add more routes, route options, and matchers, housing the VirtualHost in-line within the VirtualGateway resource can become unwieldy. Instead, you can break up the VirtualHost and the VirtualGateway into two separate resources. Not only is this easier to maintain, but it also allows organizations to split responsibility of gateway configuration and host configuration across multiple teams, if required. This separation is also recommended for production deployments.

In the following configuration, the demo-virtualhost resource contains the same data as the in-line VirtualHost, but is deployed separately to the gloo-mesh namespace in the management cluster. In the VirtualGateway resource, the virtualHostSelector selects the namespaces that the VirtualHost resource is deployed to.

apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualGateway
metadata:
  name: demo-gateway
  namespace: gloo-mesh
spec:
  connectionHandlers:
  - http:
      routeConfig:
      - virtualHostSelector:
          namespaces:
          - "gloo-mesh"
  ingressGatewaySelectors:
  - portName: http2
    destinationSelectors:
    - kubeServiceMatcher:
        clusters:
        - cluster-1
        labels:
          istio: ingressgateway
        namespaces:
        - istio-system
---
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualHost
metadata:
  name: demo-virtualhost
  namespace: gloo-mesh
spec:
  domains:
  - www.example.com
  routes:
  - matchers:
    - uri:
        prefix: /ratings
    name: ratings
    routeAction:
      destinations:
      - kubeService:
          clusterName: cluster-1
          name: ratings
          namespace: bookinfo
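
After you apply both resources, the route behaves the same as in the in-line example. You can verify this with the same request as before, assuming the CLUSTER_1_INGRESS_ADDRESS variable from the previous section:

```shell
# The /ratings route now resolves through the separately deployed VirtualHost
curl -H "Host: www.example.com" $CLUSTER_1_INGRESS_ADDRESS/ratings/1
```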

Routing to multiple clusters

So far, traffic has been routed to services in the same cluster that the gateway itself runs in. However, routing traffic to services in other clusters is just as simple. If you followed the previous examples, you have a VirtualHost named demo-virtualhost that routes to the ratings service in cluster-1. In this environment, a copy of the service also runs in cluster-2. To route to it, change the clusterName in the kubeService definition of the VirtualHost:

apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualHost
metadata:
  name: demo-virtualhost
  namespace: gloo-mesh
spec:
  domains:
  - www.example.com
  routes:
  - matchers:
    - uri:
        prefix: /ratings
    name: ratings
    routeAction:
      destinations:
      - kubeService:
          clusterName: cluster-2
          name: ratings
          namespace: bookinfo

Similarly, we can easily split traffic across both clusters, and specify weights for the traffic ratio sent to each:

apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualHost
metadata:
  name: demo-virtualhost
  namespace: gloo-mesh
spec:
  domains:
  - www.example.com
  routes:
  - matchers:
    - uri:
        prefix: /ratings
    name: ratings
    routeAction:
      destinations:
      - kubeService:
          clusterName: cluster-1
          name: ratings
          namespace: bookinfo
        weight: 75
      - kubeService:
          clusterName: cluster-2
          name: ratings
          namespace: bookinfo
        weight: 25

This configuration sends about three quarters of the traffic to the ratings service in cluster-1, with the remaining quarter going to the ratings service in cluster-2. Note that these weights are relative; if they are omitted, traffic is split evenly across all destinations.
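
Because the weights are relative, the same 3:1 split can be expressed with any values in that ratio. For example, the following routeAction fragment is equivalent to the 75/25 configuration above:

```yaml
    routeAction:
      destinations:
      - kubeService:
          clusterName: cluster-1
          name: ratings
          namespace: bookinfo
        weight: 3    # 3 of every 4 requests go to cluster-1
      - kubeService:
          clusterName: cluster-2
          name: ratings
          namespace: bookinfo
        weight: 1    # 1 of every 4 requests goes to cluster-2
```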

Delegating to a RouteTable

In the previous sections, you separated the VirtualHost from the VirtualGateway. Optionally, you can go one step further and delegate some of the routing to a separate RouteTable resource. The following example shows the same configuration as before, but split across a VirtualGateway, VirtualHost, and RouteTable:

apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualGateway
metadata:
  name: demo-gateway
  namespace: gloo-mesh
spec:
  connectionHandlers:
  - http:
      routeConfig:
      - virtualHostSelector:
          namespaces:
          - gloo-mesh
  ingressGatewaySelectors:
  - portName: http2
    destinationSelectors:
    - kubeServiceMatcher:
        clusters:
        - cluster-1
        labels:
          istio: ingressgateway
        namespaces:
        - istio-system

---
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualHost
metadata:
  name: demo-virtualhost
  namespace: gloo-mesh
spec:
  domains:
  - www.example.com
  routes:
  - matchers:
    - uri:
        prefix: /
    delegateAction:
      selector:
        namespaces:
        - gloo-mesh


---
apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: RouteTable
metadata:
  name: demo-routetable
  namespace: gloo-mesh
spec:
  routes:
  - matchers:
    - uri:
        prefix: /ratings
    name: ratings
    routeAction:
      destinations:
      - kubeService:
          clusterName: cluster-1
          name: ratings
          namespace: bookinfo
        weight: 75
      - kubeService:
          clusterName: cluster-2
          name: ratings
          namespace: bookinfo
        weight: 25

Functionally, this is still the same routing configuration as the previous VirtualGateway and VirtualHost configuration, but you now have the flexibility to break up the configuration into different logical areas. For example, the application team that configures the RouteTable is often best suited to determine all of the routes within their application, while the operations team keeps control over which domains that application is served on.

Configuring route options

Routes can be configured with policies by using route options. Anything that you can set in a Gloo Mesh TrafficPolicy can also be set at the route level. For example, the following configuration adds a custom header to the response:

apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: RouteTable
metadata:
  name: demo-routetable
  namespace: gloo-mesh
spec:
  routes:
  - matchers:
    - uri:
        prefix: /ratings
    name: ratings
    routeAction:
      destinations:
      - kubeService:
          clusterName: cluster-1
          name: ratings
          namespace: bookinfo
    options:
      headerManipulation:
        appendResponseHeaders:
          "x-my-custom-header": "example"

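To confirm that the header is added, you can send a request and inspect the response headers, assuming the ingress address variable from the basic route section:

```shell
# Use -i to print response headers; look for 'x-my-custom-header: example'
curl -i -H "Host: www.example.com" $CLUSTER_1_INGRESS_ADDRESS/ratings/1
```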
Configuring these policies at the route level is only one option. Alternatively, or in addition, you can configure these options at the VirtualHost level or in a TrafficPolicy resource.

Here is how to set the same options by using a TrafficPolicy instead:

apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: RouteTable
metadata:
  name: demo-routetable
  namespace: gloo-mesh
spec:
  routes:
  - matchers:
    - uri:
        prefix: /ratings
    name: ratings
    routeAction:
      destinations:
      - kubeService:
          clusterName: cluster-1
          name: ratings
          namespace: bookinfo
---
apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  namespace: gloo-mesh
  name: add-header-policy
spec:
  routeSelector:
  - routeTableRefs:
    - name: demo-routetable
      namespace: gloo-mesh
  policy:
    headerManipulation:
      appendResponseHeaders:
        "x-my-custom-header": "example"

The benefit of defining such policies in a TrafficPolicy resource is that they can be reused to configure both ingress traffic and east-west traffic within the mesh. To share a policy this way, ensure that the TrafficPolicy has a routeSelector for ingress traffic and a destinationSelector for east-west traffic:

apiVersion: networking.mesh.gloo.solo.io/v1
kind: TrafficPolicy
metadata:
  namespace: gloo-mesh
  name: add-header-policy
spec:
  destinationSelector:
  - kubeServiceRefs:
      services:
      - clusterName: cluster-1
        name: ratings
        namespace: bookinfo
  routeSelector:
  - routeTableRefs:
    - name: demo-routetable
      namespace: gloo-mesh
  policy:
    headerManipulation:
      appendResponseHeaders:
        "x-my-custom-header": "example"
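
Because this TrafficPolicy also selects the ratings destination, the header is applied to in-mesh traffic as well. As a hypothetical check, you might send an in-mesh request to the ratings service from another Bookinfo workload that has curl available, such as the ratings-v1 deployment that Istio's own examples use for this purpose:

```shell
# In-mesh request to ratings; the response headers should include x-my-custom-header
kubectl --context $REMOTE_CONTEXT1 -n bookinfo exec deploy/ratings-v1 -c ratings -- \
  curl -si http://ratings.bookinfo:9080/ratings/1
```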

Configuring multiple ingress gateways

If you look at the VirtualGateway, under ingressGatewaySelectors[0].destinationSelectors[0].kubeServiceMatcher.clusters, you can see that it currently matches only the Istio ingress gateway on cluster-1. Notice, however, that clusters is an array. If you add the Istio ingress gateway from cluster-2, these gateway settings also apply there. If you send requests to this second gateway, you can still reach all of the routes, regardless of which cluster the service is ultimately served from.

The following updated VirtualGateway configures the ingress gateways on both cluster-1 and cluster-2:

apiVersion: networking.enterprise.mesh.gloo.solo.io/v1beta1
kind: VirtualGateway
metadata:
  name: demo-gateway
  namespace: gloo-mesh
spec:
  connectionHandlers:
  - http:
      routeConfig:
      - virtualHostSelector:
          namespaces:
          - "gloo-mesh"
  ingressGatewaySelectors:
  - portName: http2
    destinationSelectors:
    - kubeServiceMatcher:
        clusters:
        - cluster-1
        - cluster-2
        labels:
          istio: ingressgateway
        namespaces:
        - istio-system

You can now make requests, such as with curl, to either ingress gateway. Both gateways have identical routes configured, even though the routes might point to services that are split across both clusters.
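
For example, you can get the address of the ingress gateway in cluster-2 in the same way as for cluster-1, and send the same request to it:

```shell
# Address of the cluster-2 ingress gateway (use .hostname instead of .ip
# if your load balancer returns a hostname)
CLUSTER_2_INGRESS_ADDRESS=$(kubectl --context $REMOTE_CONTEXT2 get svc -n istio-system istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# The same route works through the second gateway
curl -H "Host: www.example.com" $CLUSTER_2_INGRESS_ADDRESS/ratings/1
```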