About ambient mesh

Solo collaborated with Google to develop ambient mesh, a new “sidecarless” architecture for the Istio service mesh. Ambient mesh uses node-level ztunnels to route and secure Layer 4 traffic between pods with mutual TLS (mTLS). Waypoint proxies enforce Layer 7 traffic policies whenever needed. To onboard apps into the ambient mesh, you simply label the namespace the app belongs to. Because no sidecars need to be injected into your apps, ambient mesh significantly reduces the complexity of adopting a service mesh.

To learn more about ambient, see the ambient mesh documentation.

About this guide

In this guide, you set up a multicluster ambient mesh and expose apps across multiple clusters with a global hostname. Then, you use your gateway proxy to load balance ambient mesh traffic across your clusters.

This guide assumes that you have two clusters, ${REMOTE_CLUSTER1} and ${REMOTE_CLUSTER2}, that you want to install ambient meshes in and link together. Gloo Gateway is installed in ${REMOTE_CLUSTER1} alongside your ambient mesh workloads. To try out the multicluster routing capabilities, you deploy the Bookinfo app in both clusters. Then, you expose the productpage app across clusters with a global hostname, productpage.bookinfo.mesh.internal. Gloo Gateway uses the global hostname to route traffic to the productpage apps in both clusters.

Gloo Gateway as an ingress gateway to a multicluster ambient mesh

Before you begin

  1. Follow the Get started guide to install Gloo Gateway.

  2. Follow the Sample app guide to create a gateway proxy with an HTTP listener and deploy the httpbin sample app.

  3. Get the external address of the gateway and save it in an environment variable.
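
     For example, if your gateway proxy is exposed with a LoadBalancer service, you might retrieve its address as follows. The gloo-proxy-http service name and the gloo-system namespace are assumptions based on the default setup from the Get started guide; adjust them to match your environment.

     ```shell
     # Look up the external address of the gateway proxy's LoadBalancer service.
     # On cloud providers that assign hostnames instead of IPs (such as AWS),
     # use '.status.loadBalancer.ingress[0].hostname' instead.
     export INGRESS_GW_ADDRESS=$(kubectl get svc gloo-proxy-http -n gloo-system \
       -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
     echo $INGRESS_GW_ADDRESS
     ```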

Step 1: Set up a multicluster ambient mesh

  1. Follow the multicluster ambient mesh setup guide in the Gloo Mesh documentation to install ambient in two clusters, ${REMOTE_CLUSTER1} and ${REMOTE_CLUSTER2}. The steps include setting up a shared root of trust, installing ambient in each cluster, and linking both clusters to create your multicluster ambient mesh.

  2. Install Bookinfo in your multicluster setup and expose the productpage app across both clusters with a global hostname.

  3. Label the gloo-system namespace to add it to your ambient mesh. This label ensures that traffic from the gateway proxy to your apps is secured via mTLS.

      kubectl label ns gloo-system istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT1}
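
      To confirm that the namespace was added to the mesh, you can check its labels. This is an optional sanity check; the exact label output varies with your setup.

      ```shell
      # Verify that the ambient dataplane-mode label is present on the namespace.
      kubectl get ns gloo-system --context ${REMOTE_CONTEXT1} --show-labels
      ```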
      

Step 2: Set up multicluster routing

  1. Before setting up routing through the ingress gateway, verify multicluster routing within the mesh.

    1. Make sure that you can route from the ratings app to the global hostname that the productpage apps are exposed on.

        kubectl -n bookinfo --context ${REMOTE_CONTEXT1} debug -i pods/$(kubectl get pod -l app=ratings \
      --context ${REMOTE_CONTEXT1} -n bookinfo -o jsonpath='{.items[0].metadata.name}') \
      --image=curlimages/curl -- curl -vik http://productpage.bookinfo.mesh.internal:9080/productpage
        
    2. Scale down the productpage app in ${REMOTE_CLUSTER1}.

        kubectl scale deployment productpage-v1 -n bookinfo --context ${REMOTE_CONTEXT1} --replicas=0
        
    3. Repeat the request to the productpage app. Because the productpage app is scaled down in ${REMOTE_CLUSTER1}, traffic is forced to go to the productpage app in ${REMOTE_CLUSTER2}. Verify that you continue to see a 200 HTTP response code.

        kubectl -n bookinfo --context ${REMOTE_CONTEXT1} debug -i pods/$(kubectl get pod -l app=ratings \
      --context ${REMOTE_CONTEXT1} -n bookinfo -o jsonpath='{.items[0].metadata.name}') \
      --image=curlimages/curl -- curl -vik http://productpage.bookinfo.mesh.internal:9080/productpage
        
    4. Scale up the productpage app in ${REMOTE_CLUSTER1}.

        kubectl scale deployment productpage-v1 -n bookinfo --context ${REMOTE_CONTEXT1} --replicas=1
        
  2. Create an HTTPRoute that exposes the global hostname for the productpage app on the /productpage prefix path of the http Gateway that you created in the Get started tutorial.

      kubectl apply --context ${REMOTE_CONTEXT1} -f- <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: productpage
      namespace: gloo-system
    spec:
      parentRefs:
        - name: http
          namespace: gloo-system
      rules:
        - matches:
          - path:
              type: PathPrefix
              value: /productpage
          backendRefs:
            - name: productpage.bookinfo.mesh.internal
              port: 9080
              kind: Hostname
              group: networking.istio.io
    EOF
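
     After applying the HTTPRoute, you can inspect its status to confirm that it was accepted by the http Gateway. This is an optional check; the exact status conditions depend on your Gloo Gateway version.

     ```shell
     # Review the HTTPRoute status for an 'Accepted' condition from the http Gateway.
     kubectl get httproute productpage -n gloo-system --context ${REMOTE_CONTEXT1} -o yaml
     ```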
      
  3. Verify multicluster routing through the ingress gateway.

    1. Send a request through the ingress gateway along the /productpage path. Verify that you get back a 200 HTTP response code.

        curl -I http://$INGRESS_GW_ADDRESS:8080/productpage     
        

      Example output:

        HTTP/1.1 200 OK
      content-type: text/html; charset=utf-8
      content-length: 5179
      server: envoy
      x-envoy-upstream-service-time: 133
        
    2. Scale down the productpage app in ${REMOTE_CLUSTER1}.

        kubectl scale deployment productpage-v1 -n bookinfo --context ${REMOTE_CONTEXT1} --replicas=0
        
    3. Repeat the request along the /productpage path. Because the productpage app is scaled down in ${REMOTE_CLUSTER1}, traffic is forced to go to the productpage app in ${REMOTE_CLUSTER2}. Verify that you continue to see a 200 HTTP response code.

        curl -I http://$INGRESS_GW_ADDRESS:8080/productpage     
        

      Example output:

        HTTP/1.1 200 OK
      content-type: text/html; charset=utf-8
      content-length: 5179
      server: envoy
      x-envoy-upstream-service-time: 133
        
    4. Scale up the productpage app in ${REMOTE_CLUSTER1}.

        kubectl scale deployment productpage-v1 -n bookinfo --context ${REMOTE_CONTEXT1} --replicas=1
        

Optional: Review ambient traffic in the Gloo UI

Use the Gloo UI graph to visualize the traffic flow through your ambient mesh, and open the built-in Prometheus expression browser to verify that traffic between services is secured via mutual TLS.

Use the Gloo UI graph

  1. Install or upgrade the Gloo UI. Be sure to include your license key for the Solo distribution of Istio in your Gloo UI Helm values so that you can review ambient mesh traffic in the Gloo UI graph. If you already installed the Gloo UI, follow the upgrade guide to add your license key to your installation.

  2. Port-forward the gloo-mesh-ui service on 8090.

      kubectl port-forward -n gloo-system svc/gloo-mesh-ui 8090:8090 --context $REMOTE_CONTEXT1
      
  3. Open your browser and connect to http://localhost:8090.

      open http://localhost:8090/
      
  4. Go to Graph.

  5. Verify that you see traffic between the gateway proxy and the Bookinfo app as shown in the following image.

Figure: Gloo UI Graph

View metrics

  1. Port-forward the built-in Prometheus expression browser.

      kubectl -n gloo-mesh port-forward deploy/prometheus-server 9091
      
  2. Open the Prometheus expression browser.
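
     With the port-forward from the previous step running, the expression browser is reachable on localhost port 9091.

     ```shell
     # Open the Prometheus expression browser in your default browser (macOS).
     # On Linux, use xdg-open instead.
     open http://localhost:9091
     ```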

  3. Enter istio_requests_total{destination_workload_namespace="httpbin"} into the query field and review the results. Verify that you see a SPIFFE ID for the source and destination workload and that the connection_security_policy is set to mutual_tls. Example output:

      istio_requests_total{app="gloo-telemetry-collector-agent", cluster="gloo-gateway-ambient-mgt", 
    collector_pod="gloo-telemetry-collector-79f767f765-bqqhb", component="standalone-collector", 
    connection_security_policy="mutual_tls", destination_cluster="gloo-gateway-ambient-mgt", 
    destination_principal="spiffe://gloo-gateway-ambient-mgt/ns/httpbin/sa/httpbin", 
    destination_service="httpbin.httpbin.svc.cluster.local", destination_workload="httpbin", 
    destination_workload_id="httpbin.httpbin.gloo-gateway-ambient-mgt", 
    destination_workload_namespace="httpbin", namespace="istio-system", reporter="destination", 
    response_code="200", response_flags="-", source_cluster="gloo-gateway-ambient-mgt",
    source_principal="spiffe://gloo-gateway-ambient-mgt/ns/gloo-system/sa/gloo-proxy-http", 
    source_workload="gloo-proxy-http", source_workload_namespace="gloo-system",
    workload_id="gloo-proxy-http.gloo-system.gloo-gateway-ambient-mgt"}
      

Next

Now that you have set up Gloo Gateway as the ingress gateway for your multicluster ambient mesh, you can further control and secure ingress traffic with Gloo Gateway policies.