Add services to the mesh

Now that Istio is up and running, you can create service namespaces for your teams to run app workloads in, and onboard them to the ambient mesh.

Apply the istio.io/dataplane-mode=ambient label to each workload namespace. This label adds all pods in the namespace to the ambient mesh.

  kubectl label namespace <service_ns> istio.io/dataplane-mode=ambient
  

After you label the namespace, all incoming and outgoing traffic for the pods is automatically intercepted and secured by a ztunnel socket that is exposed on each pod. This socket is configured by the ztunnel instance that runs on the same node as the pod. The source pod's ztunnel socket forwards traffic to the ztunnel socket of the target pod. If the target pod runs on a different node, its ztunnel socket is configured by the ztunnel instance on that node. Communication between ztunnel sockets is secured via mutual TLS. For more information, see the component overview.
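To confirm that a namespace is part of the mesh, you can print the label value. The following sketch uses the same `<service_ns>` placeholder as the command above; adjust it to your namespace name.

```shell
# Show the dataplane-mode label next to the namespace; a namespace in the
# ambient mesh reports "ambient" in the DATAPLANE-MODE column.
kubectl get namespace <service_ns> -L istio.io/dataplane-mode

# To remove the namespace from the mesh later, delete the label by
# appending a dash to the label key.
kubectl label namespace <service_ns> istio.io/dataplane-mode-
```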

Bookinfo example

Single cluster

For testing purposes, you can deploy Bookinfo, the Istio sample app, and add it to your ambient mesh. You can also verify that traffic is routed through the ztunnels in your cluster by checking the ztunnel logs.

  1. Create the bookinfo namespace, and label it with the istio.io/dataplane-mode=ambient label. This label adds all Bookinfo services that you create in the namespace to the ambient mesh.

      kubectl create ns bookinfo
      kubectl label namespace bookinfo istio.io/dataplane-mode=ambient
      
  2. Deploy the Bookinfo app.

      # deploy bookinfo application components for all versions
      kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.28.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app'
      # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
      kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml
      # deploy all bookinfo service accounts
      kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.28.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
      
  3. Verify that the Bookinfo app is deployed successfully.

      kubectl get pods,svc -n bookinfo
      
  4. Verify that you can access the ratings app from the product page app.

      kubectl -n bookinfo debug -i pods/$(kubectl get pod -n bookinfo -l app=productpage -o jsonpath='{.items[0].metadata.name}') --image=curlimages/curl -- curl -v http://ratings:9080/ratings/1
      

    Example output:

      ...
      < HTTP/1.1 200 OK
      < Content-type: application/json
      < Date: Tue, 24 Dec 2024 20:58:23 GMT
      < Connection: keep-alive
      < Keep-Alive: timeout=5
      < Transfer-Encoding: chunked
      < 
      { [59 bytes data]
      100    48    0    48    0     0   2549      0 --:--:-- --:--:-- --:--:--  2666
      * Connection #0 to host ratings left intact
      {"id":1,"ratings":{"Reviewer1":5,"Reviewer2":4}}
      
  5. Optional: Verify that traffic flows through the ztunnel by getting the logs of the ztunnel that is co-located with the ratings app.

    1. Get the name of the node that the ratings app is deployed to.

        kubectl get pods -n bookinfo -o wide | grep ratings
        

      In this example output, ip-10-0-6-27.us-east-2.compute.internal is the name of the node.

        ratings-v1-7c9cd8db6d-8t62f       1/1     Running   0          3m9s    10.0.13.100   ip-10-0-6-27.us-east-2.compute.internal   <none>           <none>
        
    2. List the ztunnels in your cluster and note the name of the ztunnel that is deployed to the same node as the ratings app.

        kubectl get pods -n istio-system -o wide | grep ztunnel
        

      In this example output, ztunnel-tvtzn is deployed to the same node as the ratings pod.

        ztunnel-tvtzn             1/1     Running   0          16m   10.0.5.167   ip-10-0-6-27.us-east-2.compute.internal   <none>           <none>
        ztunnel-vtpjm             1/1     Running   0          16m   10.0.1.204   ip-10-0-8-23.us-east-2.compute.internal   <none>           <none>
        
    3. Get the logs of the ztunnel pod that runs on the same node as the ratings app. Make sure that you see an access log message for the request that the product page app sent to ratings.

        kubectl logs -n istio-system <ztunnel-pod-name>
        

      Example output:

        2024-06-21T16:33:13.093929Z	info	access	connection complete	src.addr=10.XX.X.XX:46103 src.workload="productpage-v1-78dd566f6f-jcrtj" src.namespace="bookinfo" src.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage" dst.addr=10.XX.X.XX:9080 dst.hbone_addr=10.XX.X.XX:9080 dst.service="ratings.bookinfo.svc.cluster.local" dst.workload="ratings-v1-7c9cd8db6d-dph55" dst.namespace="bookinfo" dst.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings" direction="inbound" bytes_sent=222 bytes_recv=84 duration="4ms"
        
    4. Port-forward the ztunnel pod on port 15020.

        kubectl -n istio-system port-forward pod/<ztunnel_pod_name> 15020
        
    5. Open http://localhost:15020/stats/prometheus in your browser to view Istio Layer 4 metrics that the ztunnel emitted, such as istio_tcp_sent_bytes_total or istio_tcp_connections_closed_total. These metrics are forwarded to the built-in Prometheus server and are used by the Gloo UI to visualize traffic between workloads in the ambient mesh.

      Example output:

        istio_tcp_sent_bytes_total{reporter="destination",source_workload="productpage-v1",source_canonical_service="productpage",source_canonical_revision="v1",source_workload_namespace="bookinfo",source_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage",source_app="productpage",source_version="v1",source_cluster="gloo-mesh-docs-ambient-mgt",destination_service="unknown",destination_service_namespace="unknown",destination_service_name="unknown",destination_workload="ratings-v1",destination_canonical_service="ratings",destination_canonical_revision="v1",destination_workload_namespace="bookinfo",destination_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings",destination_app="ratings",destination_version="v1",destination_cluster="gloo-mesh-docs-ambient-mgt",request_protocol="tcp",response_flags="-",connection_security_policy="mutual_tls",response_code="",grpc_response_status=""} 398
        
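When you scan the ztunnel access logs in step 5, the src.identity and dst.identity fields are the quickest way to confirm that traffic between the expected service accounts is secured by mutual TLS. As a minimal sketch, you can pull those SPIFFE identities out of a log line with grep. The LOG variable below is a trimmed copy of the example access log, used only for illustration; in a live cluster you would pipe the kubectl logs output into the same grep.

```shell
# Trimmed copy of the example ztunnel access log line shown above.
LOG='src.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage" dst.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings" direction="inbound"'

# grep -o prints each SPIFFE identity on its own line, in order of
# appearance: first the source identity, then the destination identity.
# Prints:
#   spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage
#   spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings
printf '%s\n' "$LOG" | grep -o 'spiffe://[^"]*'
```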

Multicluster

For testing purposes, you can deploy the Bookinfo sample app across multiple clusters and add the app services to your ambient mesh. You can also verify that traffic is routed through the ztunnels in each cluster by checking the ztunnel logs.

This example deploys the same Bookinfo app to two clusters, and adds them to the ambient mesh in each cluster.

  1. Save the kubeconfig contexts for two clusters that you deployed an ambient mesh to and linked together.

      export REMOTE_CONTEXT1=<cluster1-context>
      export REMOTE_CONTEXT2=<cluster2-context>
      
  2. Create the bookinfo namespace in each cluster, and label them with the istio.io/dataplane-mode=ambient label. This label adds all Bookinfo services that you create in each namespace to the ambient mesh in the respective cluster.

      for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
        kubectl --context ${context} create ns bookinfo
        kubectl --context ${context} label namespace bookinfo istio.io/dataplane-mode=ambient
      done
      
  3. Deploy the Bookinfo app to each cluster.

      for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
        # deploy bookinfo application components for all versions
        kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.28.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app'
        # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
        kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml
        # deploy all bookinfo service accounts
        kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.28.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
        # deploy individual services for each microservice version
        kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.28.1/samples/bookinfo/platform/kube/bookinfo-versions.yaml
      done
      
  4. Verify that the Bookinfo app is deployed successfully.

      kubectl --context ${REMOTE_CONTEXT1} get pods,svc -n bookinfo
      kubectl --context ${REMOTE_CONTEXT2} get pods,svc -n bookinfo
      
  5. Optional: You can verify that traffic flows through the ztunnel in one cluster by getting the logs of the ztunnel that is co-located with the ratings app. Note that this simply helps you review how traffic flows in the ambient mesh of one cluster; you make services available globally across meshes in the next section.

    1. Verify that you can access the ratings app from the product page app.

        kubectl --context ${REMOTE_CONTEXT1} -n bookinfo debug -i pods/$(kubectl --context ${REMOTE_CONTEXT1} get pod -n bookinfo -l app=productpage -o jsonpath='{.items[0].metadata.name}') --image=curlimages/curl -- curl -v http://ratings:9080/ratings/1
        

      Example output:

        ...
        < HTTP/1.1 200 OK
        < Content-type: application/json
        < Date: Tue, 24 Dec 2024 20:58:23 GMT
        < Connection: keep-alive
        < Keep-Alive: timeout=5
        < Transfer-Encoding: chunked
        < 
        { [59 bytes data]
        100    48    0    48    0     0   2549      0 --:--:-- --:--:-- --:--:--  2666
        * Connection #0 to host ratings left intact
        {"id":1,"ratings":{"Reviewer1":5,"Reviewer2":4}}
        
    2. Get the name of the node that the ratings app is deployed to.

        kubectl --context ${REMOTE_CONTEXT1} get pods -n bookinfo -o wide | grep ratings
        

      In this example output, ip-10-0-6-27.us-east-2.compute.internal is the name of the node.

        ratings-v1-7c9cd8db6d-8t62f       1/1     Running   0          3m9s    10.0.13.100   ip-10-0-6-27.us-east-2.compute.internal   <none>           <none>
        
    3. List the ztunnels in your cluster and note the name of the ztunnel that is deployed to the same node as the ratings app.

        kubectl --context ${REMOTE_CONTEXT1} get pods -n istio-system -o wide | grep ztunnel
        

      In this example output, ztunnel-tvtzn is deployed to the same node as the ratings pod.

        ztunnel-tvtzn             1/1     Running   0          16m   10.0.5.167   ip-10-0-6-27.us-east-2.compute.internal   <none>           <none>
        ztunnel-vtpjm             1/1     Running   0          16m   10.0.1.204   ip-10-0-8-23.us-east-2.compute.internal   <none>           <none>
        
    4. Get the logs of the ztunnel pod that runs on the same node as the ratings app. Make sure that you see an access log message for the request that the product page app sent to ratings.

        kubectl --context ${REMOTE_CONTEXT1} logs -n istio-system <ztunnel-pod-name>
        

      Example output:

        2024-06-21T16:33:13.093929Z	info	access	connection complete	src.addr=10.XX.X.XX:46103 src.workload="productpage-v1-78dd566f6f-jcrtj" src.namespace="bookinfo" src.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage" dst.addr=10.XX.X.XX:9080 dst.hbone_addr=10.XX.X.XX:9080 dst.service="ratings.bookinfo.svc.cluster.local" dst.workload="ratings-v1-7c9cd8db6d-dph55" dst.namespace="bookinfo" dst.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings" direction="inbound" bytes_sent=222 bytes_recv=84 duration="4ms"
        
    5. Port-forward the ztunnel pod on port 15020.

        kubectl --context ${REMOTE_CONTEXT1} -n istio-system port-forward pod/<ztunnel_pod_name> 15020
        
    6. Open http://localhost:15020/stats/prometheus in your browser to view Istio Layer 4 metrics that the ztunnel emitted, such as istio_tcp_sent_bytes_total or istio_tcp_connections_closed_total. These metrics are forwarded to the built-in Prometheus server and are used by the Gloo UI to visualize traffic between workloads in the ambient mesh.

      Example output:

        istio_tcp_sent_bytes_total{reporter="destination",source_workload="productpage-v1",source_canonical_service="productpage",source_canonical_revision="v1",source_workload_namespace="bookinfo",source_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage",source_app="productpage",source_version="v1",source_cluster="gloo-mesh-docs-ambient-mgt",destination_service="unknown",destination_service_namespace="unknown",destination_service_name="unknown",destination_workload="ratings-v1",destination_canonical_service="ratings",destination_canonical_revision="v1",destination_workload_namespace="bookinfo",destination_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings",destination_app="ratings",destination_version="v1",destination_cluster="gloo-mesh-docs-ambient-mgt",request_protocol="tcp",response_flags="-",connection_security_policy="mutual_tls",response_code="",grpc_response_status=""} 398
        
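The optional verification steps above match node names by eye. If you prefer, the lookup can be scripted. The following is a sketch that assumes the `app=ratings` and `app=ztunnel` pod labels that the Bookinfo and ztunnel manifests typically set; verify the labels in your clusters before relying on them.

```shell
# Find the node that the ratings pod runs on.
NODE=$(kubectl --context ${REMOTE_CONTEXT1} get pods -n bookinfo \
  -l app=ratings -o jsonpath='{.items[0].spec.nodeName}')

# Find the ztunnel pod on that node, then fetch its logs.
ZTUNNEL=$(kubectl --context ${REMOTE_CONTEXT1} get pods -n istio-system \
  -l app=ztunnel --field-selector spec.nodeName=${NODE} \
  -o jsonpath='{.items[0].metadata.name}')
kubectl --context ${REMOTE_CONTEXT1} logs -n istio-system ${ZTUNNEL}
```

The field selector restricts the listing to pods scheduled on the ratings node, so the first item is the co-located ztunnel.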

Next