About ambient mesh

Solo collaborated with Google to develop ambient mesh, a new “sidecarless” architecture for the Istio service mesh. Ambient mesh uses node-level ztunnels to route and secure Layer 4 traffic between pods with mutual TLS (mTLS). Waypoint proxies enforce Layer 7 traffic policies whenever needed. To onboard apps into the ambient mesh, you simply label the namespace the app belongs to. Because no sidecars need to be injected into your apps, ambient mesh significantly reduces the complexity of adopting a service mesh.

To learn more about ambient, see the ambient mesh documentation.

About this guide

In this guide, you learn how to use Gloo Gateway as the ingress gateway to route traffic to a single-cluster or multicluster ambient service mesh.

Before you begin

  1. Follow the Get started guide to install Gloo Gateway, set up a gateway resource, and deploy the httpbin sample app.

  2. Get the external address of the gateway and save it in an environment variable.
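
    For example, if your gateway proxy follows the naming from the get started guide (a gloo-proxy-http service in the gloo-system namespace), a command like the following typically works. Adjust the service name and namespace if your setup differs.

      # Save the gateway's external address (hostname or IP) in an environment variable.
      export INGRESS_GW_ADDRESS=$(kubectl get svc -n gloo-system gloo-proxy-http -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
      echo $INGRESS_GW_ADDRESS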

Single cluster

Set up an ambient mesh in the same cluster where you installed Gloo Gateway, and use the gateway proxy as the ingress for the workloads in your ambient mesh.

Step 1: Set up an ambient mesh

Set up an ambient mesh in your cluster to secure service-to-service communication with mutual TLS.

  • Ambient mesh with the Solo distribution of Istio: Follow the instructions in the Gloo Mesh documentation to Deploy Istio in ambient mode. These instructions use the Solo distribution of Istio, which is a hardened Istio image provided by Solo. You do not need to create an Istio ingress gateway, as you configure Gloo Gateway as the ingress gateway for your ambient mesh.
  • Community ambient mesh: You can install the community version of ambient mesh by following the ambient mesh quickstart tutorial. This tutorial uses a script to quickly set up an ambient mesh in your cluster. You do not need to create an Istio ingress gateway, as you configure Gloo Gateway as the ingress gateway for your ambient mesh.
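
For reference, a minimal community install with istioctl looks roughly like the following sketch. The profile and flags are assumptions based on the upstream ambient profile; treat the linked guides as the source of truth for versions and settings.

      # Install Istio in ambient mode (ztunnel and istio-cni, no sidecar injection).
      istioctl install --set profile=ambient --skip-confirmation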

Step 2: Set up Gloo Gateway for ingress

To set up Gloo Gateway as the ingress gateway for your ambient mesh, you simply add all the namespaces that you want to secure to your ambient mesh, including the namespace that your gateway proxy is deployed to.

  1. Add the gloo-system and httpbin namespaces to your ambient mesh. Use the same command to add other namespaces in your cluster. The label instructs istiod to configure a ztunnel socket on all the pods in that namespace so that traffic to these pods is secured via mutual TLS (mTLS).

      kubectl label ns gloo-system istio.io/dataplane-mode=ambient
    kubectl label ns httpbin istio.io/dataplane-mode=ambient
      
  2. Send a request to the httpbin app and verify that you get back a 200 HTTP response code. All traffic from the gateway is automatically intercepted by a ztunnel that is co-located on the same node as the gateway. The ztunnel collects Layer 4 metrics before it forwards the request to the ztunnel that is co-located on the same node as the httpbin app. The connection between ztunnels is secured via mutual TLS.
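
    For example, if you exposed httpbin on the www.example.com host and port 8080 as in the get started guide, you can send a request like the following. Adjust the host header, port, and path to match your HTTPRoute.

      # Request the httpbin /headers endpoint through the gateway; expect an HTTP 200 response.
      curl -i http://$INGRESS_GW_ADDRESS:8080/headers -H "host: www.example.com"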

  3. Verify that traffic between the gateway proxy and the httpbin app is secured via mutual TLS. Depending on your setup, you can check the ztunnel logs, as in the following example, or review the traffic in the Gloo UI as described later in this guide.
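
    The following sketch greps the ztunnel access logs for SPIFFE identities. It assumes the default installation, where ztunnel runs in the istio-system namespace with the app=ztunnel label; the exact log format varies by Istio version.

      # Entries that pair the gateway's identity with the httpbin identity indicate mTLS-secured traffic.
      kubectl logs -n istio-system -l app=ztunnel --tail=100 | grep spiffe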

Step 3 (optional): Expose the Bookinfo sample app

Deploy the Bookinfo sample app to your ambient mesh, and verify that Gloo Gateway correctly routes requests to its services.

Add Bookinfo to the ambient mesh

For testing purposes, you can deploy Bookinfo, the Istio sample app, and add it to your ambient mesh. Note that if you already followed the example to deploy Bookinfo in the Gloo Mesh docs, you can continue to the next section.

  1. Create the bookinfo namespace, and label it with the istio.io/dataplane-mode=ambient label. This label adds all Bookinfo services that you create in the namespace to the ambient mesh.

      kubectl create ns bookinfo
    kubectl label namespace bookinfo istio.io/dataplane-mode=ambient
      
  2. Deploy the Bookinfo app.

      # deploy bookinfo application components for all versions
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.23.4/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app'
    # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml
    # deploy all bookinfo service accounts
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.23.4/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
      
  3. Verify that the Bookinfo app is deployed successfully.

      kubectl get pods,svc -n bookinfo
      
  4. Verify that you can access the ratings app from the product page app.

      kubectl -n bookinfo debug -i pods/$(kubectl get pod -l app=productpage -A -o jsonpath='{.items[0].metadata.name}') --image=curlimages/curl -- curl -v http://ratings:9080/ratings/1
      

    Example output:

      ...
    < HTTP/1.1 200 OK
    < Content-type: application/json
    < Date: Tue, 24 Dec 2024 20:58:23 GMT
    < Connection: keep-alive
    < Keep-Alive: timeout=5
    < Transfer-Encoding: chunked
    < 
    { [59 bytes data]
    100    48    0    48    0     0   2549      0 --:--:-- --:--:-- --:--:--  2666
    * Connection #0 to host ratings left intact
    {"id":1,"ratings":{"Reviewer1":5,"Reviewer2":4}}
      

Route to Bookinfo services

To expose the app to incoming requests, you create an HTTPRoute resource that references the productpage microservice.

  1. Create an HTTPRoute resource that defines routing rules for each microservice path.

      kubectl apply -n bookinfo -f- <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: bookinfo
    spec:
      parentRefs:
      - name: http
        namespace: gloo-system
      rules:
      - matches:
        - path:
            type: Exact
            value: /productpage
        - path:
            type: PathPrefix
            value: /static
        - path:
            type: Exact
            value: /login
        - path:
            type: Exact
            value: /logout
        - path:
            type: PathPrefix
            value: /api/v1/products
        backendRefs:
          - name: productpage
            port: 9080
    EOF
      
  2. Verify that Gloo Gateway correctly routes traffic requests to Bookinfo services in your ambient mesh by opening the product page in your web browser.

      open http://$INGRESS_GW_ADDRESS:8080/productpage 
      

Multicluster

Set up a multicluster ambient mesh and expose apps across multiple clusters with a global hostname. Then, use your gateway proxy to load balance ambient mesh traffic across your clusters.

This guide assumes that you have two clusters, ${REMOTE_CLUSTER1} and ${REMOTE_CLUSTER2}, that you want to install ambient meshes in and link together. Gloo Gateway is installed in ${REMOTE_CLUSTER1} alongside your ambient mesh workloads. To try out the multicluster routing capabilities, you deploy the Bookinfo app in both clusters. Then, you expose the productpage app across clusters with a global hostname, productpage.bookinfo.mesh.internal. Gloo Gateway uses the global hostname to route traffic to the productpage apps in both clusters.

Step 1: Set up a multicluster ambient mesh

  1. Follow the multicluster ambient mesh setup guide in the Gloo Mesh documentation to install ambient in two clusters, ${REMOTE_CLUSTER1} and ${REMOTE_CLUSTER2}. The steps include setting up a shared root of trust, installing ambient in each cluster, and linking both clusters to create your multicluster ambient mesh. The guide describes several installation methods that you can choose from.

  2. Install Bookinfo in your multicluster setup and expose the productpage app across both clusters with a global hostname.
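
    If you want a rough idea of the shape of this step, the following sketch deploys the Bookinfo manifests from earlier in this guide into both clusters and adds each bookinfo namespace to the ambient mesh. Here, ${REMOTE_CONTEXT2} is assumed to be the kubeconfig context for ${REMOTE_CLUSTER2}, mirroring the ${REMOTE_CONTEXT1} naming used elsewhere in this guide. The Gloo Mesh guide then walks you through exposing the productpage service on the productpage.bookinfo.mesh.internal global hostname; follow it for the exact labels and settings.

      # Deploy Bookinfo into both clusters and add it to the ambient mesh (sketch; the linked guide is authoritative).
      for CTX in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
          kubectl --context ${CTX} create ns bookinfo
          kubectl --context ${CTX} label ns bookinfo istio.io/dataplane-mode=ambient
          kubectl --context ${CTX} -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.23.4/samples/bookinfo/platform/kube/bookinfo.yaml
      done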

Step 2: Set up Gloo Gateway for ingress

  1. Get the Helm values for your current Gloo Gateway installation.

      helm get values gloo -n gloo-system -o yaml --kube-context ${REMOTE_CONTEXT1} > gloo-gateway.yaml
    open gloo-gateway.yaml
      
  2. Add the following values to the Helm values file to enable multicluster ambient support.

      
    gloo: 
      gloo:
        deployment:
          customEnv:
            - name: GG_AMBIENT_MULTINETWORK
              value: "true"
      
  3. Upgrade your Gloo Gateway installation.

      helm upgrade -n gloo-system gloo glooe/gloo-ee \
    --kube-context ${REMOTE_CONTEXT1} -f gloo-gateway.yaml \
    --version=1.18.14
      
  4. Add the gloo-system namespace to your ambient mesh. This label ensures that traffic from the gateway proxy to your apps is secured via mTLS.

      kubectl label ns gloo-system istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT1}
      

Step 3: Set up multicluster routing

  1. Before setting up routing through the ingress gateway, verify multicluster routing within the mesh.

    1. Make sure that you can route from the ratings app to the global hostname that the productpage apps are exposed on.

        kubectl -n bookinfo --context ${REMOTE_CONTEXT1} debug -i pods/$(kubectl get pod -l app=ratings \
      --context ${REMOTE_CONTEXT1} -A -o jsonpath='{.items[0].metadata.name}') \
      --image=curlimages/curl -- curl -vik http://productpage.bookinfo.mesh.internal:9080/productpage
        
    2. Scale down the productpage app in ${REMOTE_CLUSTER1}.

        kubectl scale deployment productpage-v1 -n bookinfo --context ${REMOTE_CONTEXT1} --replicas=0
        
    3. Repeat the request to the productpage app. Because the productpage app is scaled down in ${REMOTE_CLUSTER1}, traffic is forced to go to the productpage app in ${REMOTE_CLUSTER2}. Verify that you continue to see a 200 HTTP response code.

        kubectl -n bookinfo --context ${REMOTE_CONTEXT1} debug -i pods/$(kubectl get pod -l app=ratings \
      --context ${REMOTE_CONTEXT1} -A -o jsonpath='{.items[0].metadata.name}') \
      --image=curlimages/curl -- curl -vik http://productpage.bookinfo.mesh.internal:9080/productpage
        
    4. Scale up the productpage app in ${REMOTE_CLUSTER1}.

        kubectl scale deployment productpage-v1 -n bookinfo --context ${REMOTE_CONTEXT1} --replicas=1
        
  2. Create an HTTPRoute that exposes the global hostname for the productpage app on the /productpage path prefix of the http Gateway that you created in the get started tutorial.

      kubectl apply --context ${REMOTE_CONTEXT1} -f- <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: productpage
      namespace: gloo-system
    spec:
      parentRefs:
        - name: http
          namespace: gloo-system
      rules:
        - matches:
          - path:
              type: PathPrefix
              value: /productpage
          backendRefs:
            - name: productpage.bookinfo.mesh.internal
              port: 9080
              kind: Hostname
              group: networking.istio.io
    EOF
      
  3. Verify multicluster routing through the ingress gateway.

    1. Send a request through the ingress gateway along the /productpage path. Verify that you get back a 200 HTTP response code.

        curl -I http://$INGRESS_GW_ADDRESS:8080/productpage     
        

      Example output:

        HTTP/1.1 200 OK
      content-type: text/html; charset=utf-8
      content-length: 5179
      server: envoy
      x-envoy-upstream-service-time: 133
        
    2. Scale down the productpage app in ${REMOTE_CLUSTER1}.

        kubectl scale deployment productpage-v1 -n bookinfo --context ${REMOTE_CONTEXT1} --replicas=0
        
    3. Repeat the request along the /productpage path. Because the productpage app is scaled down in ${REMOTE_CLUSTER1}, traffic is forced to go to the productpage app in ${REMOTE_CLUSTER2}. Verify that you continue to see a 200 HTTP response code.

        curl -I http://$INGRESS_GW_ADDRESS:8080/productpage     
        

      Example output:

        HTTP/1.1 200 OK
      content-type: text/html; charset=utf-8
      content-length: 5179
      server: envoy
      x-envoy-upstream-service-time: 133
        
    4. Scale up the productpage app in ${REMOTE_CLUSTER1}.

        kubectl scale deployment productpage-v1 -n bookinfo --context ${REMOTE_CONTEXT1} --replicas=1
        

Optional: Review ambient traffic in the Gloo UI

If you installed the Gloo UI, you can use the Gloo UI graph to visualize the traffic flow through your ambient mesh, and open the built-in Prometheus expression browser to verify that traffic between services is secured via mutual TLS.

Use the Gloo UI graph

  1. Port-forward the gloo-mesh-ui service on 8090.

      kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090
      
  2. Open your browser and connect to http://localhost:8090.

  3. Go to Observability > Graph.

  4. Verify that you see a lock icon for traffic between the gateway proxy and the httpbin app. The lock icon indicates that the connection is secured via mutual TLS.

View metrics

  1. Port-forward the built-in Prometheus server.

      kubectl -n gloo-mesh port-forward deploy/prometheus-server 9091
      
  2. Open the Prometheus expression browser by connecting to http://localhost:9091 in your browser.

  3. Enter istio_requests_total{destination_workload_namespace="httpbin"} into the query field and review the results. Verify that you see a SPIFFE ID for the source and destination workload and that the connection_security_policy is set to mutual_tls. Example output:

      istio_requests_total{app="gloo-telemetry-collector-agent", cluster="gloo-gateway-ambient-mgt", 
    collector_pod="gloo-telemetry-collector-79f767f765-bqqhb", component="standalone-collector", 
    connection_security_policy="mutual_tls", destination_cluster="gloo-gateway-ambient-mgt", 
    destination_principal="spiffe://gloo-gateway-ambient-mgt/ns/httpbin/sa/httpbin", 
    destination_service="httpbin.httpbin.svc.cluster.local", destination_workload="httpbin", 
    destination_workload_id="httpbin.httpbin.gloo-gateway-ambient-mgt", 
    destination_workload_namespace="httpbin", namespace="istio-system", reporter="destination", 
    response_code="200", response_flags="-", source_cluster="gloo-gateway-ambient-mgt",
    source_principal="spiffe://gloo-gateway-ambient-mgt/ns/gloo-system/sa/gloo-proxy-http", 
    source_workload="gloo-proxy-http", source_workload_namespace="gloo-system",
    workload_id="gloo-proxy-http.gloo-system.gloo-gateway-ambient-mgt"}
      

Next

Now that you set up Gloo Gateway as the ingress gateway for your ambient mesh, you can further control and secure ingress traffic with Gloo Gateway policies.