Add apps to the ambient mesh
Add services in existing namespaces to your ambient mesh, or deploy the Bookinfo sample app without sidecars to try out traffic routing in your ambient mesh.
Add services to the mesh
Now that Istio is up and running, you can create service namespaces for your teams to run app workloads in, and onboard them to the ambient mesh.
Label each workload namespace with the istio.io/dataplane-mode=ambient label, which adds all pods in the namespace to the ambient mesh.

kubectl label namespace <service_ns> istio.io/dataplane-mode=ambient
After you label the namespace, all incoming and outgoing traffic for the pods is automatically intercepted and secured by a ztunnel socket that is exposed on each pod. The socket is configured by the ztunnel instance that is co-located on the same node as the pod, and it forwards traffic to the ztunnel socket of the target pod. If the target pod is located on a different node, its socket is configured by the ztunnel instance on that node. The communication between the ztunnel sockets is secured via mutual TLS. For more information, see the component overview.
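To spot-check that a namespace is enrolled in the ambient mesh, you can read back the label. This is a minimal sketch; <service_ns> is a placeholder for your own namespace.

# Show all labels on the namespace, including istio.io/dataplane-mode
kubectl get namespace <service_ns> --show-labels

# Or print just the label value; the expected output is "ambient"
kubectl get namespace <service_ns> -o jsonpath='{.metadata.labels.istio\.io/dataplane-mode}'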
Make services available across clusters
If you linked clusters to form a multicluster ambient mesh, you can make a service available across clusters throughout the multicluster mesh by using the solo.io/service-scope=global service label. When you apply this label to a service, it does not change the service. Instead, a new global service is created with a hostname in the format <name>.<namespace>.mesh.internal.
For the service that you want to be accessible from multiple clusters, apply the solo.io/service-scope=global label.

kubectl label service <name> -n <namespace> --context ${CLUSTER_CONTEXT} solo.io/service-scope=global
Verify that the global service entry with a hostname in the format <svc_name>.<namespace>.mesh.internal is created in the istio-system namespace. This hostname makes the endpoint for your service available across the multicluster mesh.

kubectl get serviceentry -n istio-system --context ${CLUSTER_CONTEXT}

Example output:

NAME                             HOSTS                                       LOCATION   RESOLUTION   AGE
autogen.<namespace>.<svc_name>   ["<svc_name>.<namespace>.mesh.internal"]               STATIC       94s
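To review the details of the generated entry, such as its hosts and resolution, you can print it by name. This is a sketch that assumes the autogen.<namespace>.<svc_name> name from the example output above.

# Inspect the generated global service entry in full
kubectl get serviceentry autogen.<namespace>.<svc_name> -n istio-system --context ${CLUSTER_CONTEXT} -o yaml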
If you have a service of the same name and namespace in another cluster of the mesh, label that service too. This way, the service's endpoint is added to the global service's hostname, which increases the availability of your service in the multicluster mesh. For more information, see Namespace sameness.

kubectl label service <name> -n <namespace> --context <other_context> solo.io/service-scope=global
Optional: Modify the way that traffic is routed to service endpoints for the global service by applying the networking.istio.io/traffic-distribution annotation to each service instance. For more information, see Endpoint traffic control.

kubectl annotate service <name> -n <namespace> networking.istio.io/traffic-distribution=<mode>
For an example of deploying an app to a multicluster mesh, see Bookinfo example: Multicluster.
Namespace sameness
You might have the same app service in the same namespace in multiple clusters. When you label the service in one cluster, a global service is created in the ambient mesh for that service's endpoint. However, you must also label the service in the other cluster to include its endpoint in the global service's hostname. After you label each service, all instances of that app in the multicluster mesh are exposed by the global service. By adhering to the principle of namespace sameness, the global service's hostname unifies the endpoints for each service across the clusters.
For example, you might have a myapp service in the stage namespace of cluster1, and a myapp service in the stage namespace of cluster2. If you label the service in cluster1 with solo.io/service-scope=global, a global service is created with the hostname myapp.stage.mesh.internal. However, until you label the service in cluster2, that service is not automatically included in the global service's endpoints.
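As a minimal sketch of this scenario, assuming kubeconfig contexts named cluster1 and cluster2 (hypothetical names for your own contexts), you might label both services as follows.

# Label the myapp service in cluster1, which creates the global service
# with the hostname myapp.stage.mesh.internal
kubectl label service myapp -n stage --context cluster1 solo.io/service-scope=global

# Label the same service in cluster2 so that its endpoint is added
# to the same global service
kubectl label service myapp -n stage --context cluster2 solo.io/service-scope=global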
Endpoint traffic control
If an in-mesh service makes a request to a service that is exposed globally, and a healthy endpoint for that service is available locally in the same cluster network, the request is routed to that local instance by default. If no healthy endpoints of that service are available locally, requests are routed to healthy endpoints in remote cluster networks. An endpoint is considered healthy if the app pod that the local service exposes is in a ready state.
To modify this default behavior for global services, you can apply the networking.istio.io/traffic-distribution annotation to a service that has the solo.io/service-scope=global label. The annotation supports the following values:
- PreferNetwork: Route requests to available endpoints within the source's same network first. This is the default for global services.
- PreferClose: Route requests to an available endpoint that is closest to the source first, considering the zone, region, and network. For example, you might have a global service with two endpoints in zone us-west and one endpoint in us-east. When sending traffic from a client in us-west, all traffic routes to the two us-west endpoints. If one of those endpoints becomes unhealthy, all traffic routes to the remaining endpoint in us-west. If that endpoint becomes unhealthy, traffic routes to the us-east endpoint.
- PreferRegion: Route requests to an available endpoint that is closest to the source first, considering the region and network.
- Any: Consider all endpoints equally, and route requests to a random available endpoint.
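For instance, to prefer topologically close endpoints for a hypothetical global service named myapp in the stage namespace, you might set the annotation as follows. Substitute your own service name, namespace, and cluster context.

# Prefer the closest healthy endpoints, considering zone, region, and network
kubectl annotate service myapp -n stage --context <cluster_context> networking.istio.io/traffic-distribution=PreferClose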
Recall that the directionality that you choose when you link the clusters can also affect how services can communicate. For example, if you choose to asymmetrically link your clusters, services in one cluster might not be able to send requests to remote endpoints for services in another cluster.
Bookinfo example: Single cluster
For testing purposes, you can deploy Bookinfo, the Istio sample app, and add it to your ambient mesh. You can also verify that traffic is routed through the ztunnels in your cluster by checking the ztunnel logs.
Create the bookinfo namespace, and label it with the istio.io/dataplane-mode=ambient label. This label adds all Bookinfo services that you create in the namespace to the ambient mesh.

kubectl create ns bookinfo
kubectl label namespace bookinfo istio.io/dataplane-mode=ambient
Deploy the Bookinfo app.
# deploy bookinfo application components for all versions
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.24.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app'

# deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml

# deploy all bookinfo service accounts
kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.24.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
Verify that the Bookinfo app is deployed successfully.
kubectl get pods,svc -n bookinfo
Verify that you can access the ratings app from the product page app.
kubectl -n bookinfo debug -i pods/$(kubectl get pod -l app=productpage -A -o jsonpath='{.items[0].metadata.name}') --image=curlimages/curl -- curl -v http://ratings:9080/ratings/1
Example output:
...
< HTTP/1.1 200 OK
< Content-type: application/json
< Date: Tue, 24 Dec 2024 20:58:23 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
{ [59 bytes data]
100    48    0    48    0     0   2549      0 --:--:-- --:--:-- --:--:--  2666
* Connection #0 to host ratings left intact
{"id":1,"ratings":{"Reviewer1":5,"Reviewer2":4}}
Optional: Verify that traffic flows through the ztunnel by getting the logs of the ztunnel that is co-located with the ratings app.
Get the name of the node that the ratings app is deployed to.
kubectl get pods -n bookinfo -o wide | grep ratings
In this example output, ip-10-0-6-27.us-east-2.compute.internal is the name of the node.

ratings-v1-7c9cd8db6d-8t62f   1/1   Running   0   3m9s   10.0.13.100   ip-10-0-6-27.us-east-2.compute.internal   <none>   <none>
List the ztunnels in your cluster and note the name of the ztunnel that is deployed to the same node as the ratings app.
kubectl get pods -n istio-system -o wide | grep ztunnel
In this example output, ztunnel-tvtzn is deployed to the same node as the ratings pod.

ztunnel-tvtzn   1/1   Running   0   16m   10.0.5.167   ip-10-0-6-27.us-east-2.compute.internal   <none>   <none>
ztunnel-vtpjm   1/1   Running   0   16m   10.0.1.204   ip-10-0-8-23.us-east-2.compute.internal   <none>   <none>
Get the logs of the ztunnel pod that runs on the same node as the ratings app. Make sure that you see an access log message for the request that the product page app sent to ratings.

kubectl logs -n istio-system <ztunnel-pod-name>
Example output:
2024-06-21T16:33:13.093929Z info access connection complete src.addr=10.XX.X.XX:46103 src.workload="productpage-v1-78dd566f6f-jcrtj" src.namespace="bookinfo" src.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage" dst.addr=10.XX.X.XX:9080 dst.hbone_addr=10.XX.X.XX:9080 dst.service="ratings.bookinfo.svc.cluster.local" dst.workload="ratings-v1-7c9cd8db6d-dph55" dst.namespace="bookinfo" dst.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings" direction="inbound" bytes_sent=222 bytes_recv=84 duration="4ms"
Port-forward the ztunnel pod on port 15020.
kubectl -n istio-system port-forward pod/<ztunnel_pod_name> 15020
Open localhost:15020/stats/prometheus in your browser to view Istio Layer 4 metrics that were emitted by the ztunnel, such as istio_tcp_sent_bytes_total or istio_tcp_connections_closed_total. These metrics are forwarded to the built-in Prometheus server and are used by the Gloo UI to visualize traffic between workloads in the ambient mesh.

Example output:
istio_tcp_sent_bytes_total{reporter="destination",source_workload="productpage-v1",source_canonical_service="productpage",source_canonical_revision="v1",source_workload_namespace="bookinfo",source_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage",source_app="productpage",source_version="v1",source_cluster="gloo-mesh-docs-ambient-mgt",destination_service="unknown",destination_service_namespace="unknown",destination_service_name="unknown",destination_workload="ratings-v1",destination_canonical_service="ratings",destination_canonical_revision="v1",destination_workload_namespace="bookinfo",destination_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings",destination_app="ratings",destination_version="v1",destination_cluster="gloo-mesh-docs-ambient-mgt",request_protocol="tcp",response_flags="-",connection_security_policy="mutual_tls",response_code="",grpc_response_status=""} 398
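If you prefer the command line, you can also scrape the metrics endpoint through the active port-forward. This is a minimal sketch that filters for the Layer 4 TCP metrics mentioned above.

# Fetch the ztunnel metrics through the port-forward and filter for the L4 TCP metrics
curl -s http://localhost:15020/stats/prometheus | grep -E 'istio_tcp_(sent_bytes_total|connections_closed_total)'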
Bookinfo example: Multicluster
For testing purposes, you can deploy the Bookinfo sample app across multiple clusters, add the app services to your ambient mesh, and make the services available across clusters in the mesh.
Deploy Bookinfo to the ambient mesh
This example deploys the same Bookinfo app to two clusters, and adds them to the ambient mesh in each cluster.
Save the kubeconfig contexts for two clusters that you deployed an ambient mesh to and linked together.
export REMOTE_CONTEXT1=<cluster1-context>
export REMOTE_CONTEXT2=<cluster2-context>
Create the bookinfo namespace in each cluster, and label it with the istio.io/dataplane-mode=ambient label. This label adds all Bookinfo services that you create in each namespace to the ambient mesh in the respective cluster.

for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  kubectl --context ${context} create ns bookinfo
  kubectl --context ${context} label namespace bookinfo istio.io/dataplane-mode=ambient
done
Deploy the Bookinfo app to each cluster.
for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
  # deploy bookinfo application components for all versions
  kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.24.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app'

  # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
  kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml

  # deploy all bookinfo service accounts
  kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.24.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
done
Verify that the Bookinfo app is deployed successfully.
kubectl --context ${REMOTE_CONTEXT1} get pods,svc -n bookinfo
kubectl --context ${REMOTE_CONTEXT2} get pods,svc -n bookinfo
Optional: You can verify that traffic flows through the ztunnel in one cluster by getting the logs of the ztunnel that is co-located with the ratings app. Note that this simply helps you review how traffic flows in the ambient mesh of one cluster; you make services available globally across meshes in the next section.
Verify that you can access the ratings app from the product page app.
kubectl --context ${REMOTE_CONTEXT1} -n bookinfo debug -i pods/$(kubectl --context ${REMOTE_CONTEXT1} get pod -l app=productpage -A -o jsonpath='{.items[0].metadata.name}') --image=curlimages/curl -- curl -v http://ratings:9080/ratings/1
Example output:
...
< HTTP/1.1 200 OK
< Content-type: application/json
< Date: Tue, 24 Dec 2024 20:58:23 GMT
< Connection: keep-alive
< Keep-Alive: timeout=5
< Transfer-Encoding: chunked
<
{ [59 bytes data]
100    48    0    48    0     0   2549      0 --:--:-- --:--:-- --:--:--  2666
* Connection #0 to host ratings left intact
{"id":1,"ratings":{"Reviewer1":5,"Reviewer2":4}}
Get the name of the node that the ratings app is deployed to.
kubectl --context ${REMOTE_CONTEXT1} get pods -n bookinfo -o wide | grep ratings
In this example output, ip-10-0-6-27.us-east-2.compute.internal is the name of the node.

ratings-v1-7c9cd8db6d-8t62f   1/1   Running   0   3m9s   10.0.13.100   ip-10-0-6-27.us-east-2.compute.internal   <none>   <none>
List the ztunnels in your cluster and note the name of the ztunnel that is deployed to the same node as the ratings app.
kubectl --context ${REMOTE_CONTEXT1} get pods -n istio-system -o wide | grep ztunnel
In this example output, ztunnel-tvtzn is deployed to the same node as the ratings pod.

ztunnel-tvtzn   1/1   Running   0   16m   10.0.5.167   ip-10-0-6-27.us-east-2.compute.internal   <none>   <none>
ztunnel-vtpjm   1/1   Running   0   16m   10.0.1.204   ip-10-0-8-23.us-east-2.compute.internal   <none>   <none>
Get the logs of the ztunnel pod that runs on the same node as the ratings app. Make sure that you see an access log message for the request that the product page app sent to ratings.

kubectl --context ${REMOTE_CONTEXT1} logs -n istio-system <ztunnel-pod-name>
Example output:
2024-06-21T16:33:13.093929Z info access connection complete src.addr=10.XX.X.XX:46103 src.workload="productpage-v1-78dd566f6f-jcrtj" src.namespace="bookinfo" src.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage" dst.addr=10.XX.X.XX:9080 dst.hbone_addr=10.XX.X.XX:9080 dst.service="ratings.bookinfo.svc.cluster.local" dst.workload="ratings-v1-7c9cd8db6d-dph55" dst.namespace="bookinfo" dst.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings" direction="inbound" bytes_sent=222 bytes_recv=84 duration="4ms"
Port-forward the ztunnel pod on port 15020.
kubectl --context ${REMOTE_CONTEXT1} -n istio-system port-forward pod/<ztunnel_pod_name> 15020
Open localhost:15020/stats/prometheus in your browser to view Istio Layer 4 metrics that were emitted by the ztunnel, such as istio_tcp_sent_bytes_total or istio_tcp_connections_closed_total. These metrics are forwarded to the built-in Prometheus server and are used by the Gloo UI to visualize traffic between workloads in the ambient mesh.

Example output:
istio_tcp_sent_bytes_total{reporter="destination",source_workload="productpage-v1",source_canonical_service="productpage",source_canonical_revision="v1",source_workload_namespace="bookinfo",source_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage",source_app="productpage",source_version="v1",source_cluster="gloo-mesh-docs-ambient-mgt",destination_service="unknown",destination_service_namespace="unknown",destination_service_name="unknown",destination_workload="ratings-v1",destination_canonical_service="ratings",destination_canonical_revision="v1",destination_workload_namespace="bookinfo",destination_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings",destination_app="ratings",destination_version="v1",destination_cluster="gloo-mesh-docs-ambient-mgt",request_protocol="tcp",response_flags="-",connection_security_policy="mutual_tls",response_code="",grpc_response_status=""} 398
Expose services across clusters
To make Bookinfo globally available in the multicluster setup, you label each productpage service so that both productpage endpoints are available behind one global service hostname.
Label the productpage service in each cluster to create one productpage global service.

kubectl --context ${REMOTE_CONTEXT1} label service productpage -n bookinfo solo.io/service-scope=global
kubectl --context ${REMOTE_CONTEXT2} label service productpage -n bookinfo solo.io/service-scope=global
Apply the networking.istio.io/traffic-distribution=Any annotation to the services. This annotation allows requests to the productpage global service to be routed to each service endpoint equally.

kubectl --context ${REMOTE_CONTEXT1} annotate service productpage -n bookinfo networking.istio.io/traffic-distribution=Any
kubectl --context ${REMOTE_CONTEXT2} annotate service productpage -n bookinfo networking.istio.io/traffic-distribution=Any
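To confirm that the annotation is set, you can read it back from each service. This is a minimal check; the expected output in both clusters is Any.

# Print the traffic-distribution annotation on the productpage service
kubectl --context ${REMOTE_CONTEXT1} get service productpage -n bookinfo -o jsonpath='{.metadata.annotations.networking\.istio\.io/traffic-distribution}'
kubectl --context ${REMOTE_CONTEXT2} get service productpage -n bookinfo -o jsonpath='{.metadata.annotations.networking\.istio\.io/traffic-distribution}'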
Verify that the global service entry with the productpage.bookinfo.mesh.internal hostname is created in the istio-system namespace.

kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1} | grep bookinfo
Example output:
autogen.bookinfo.productpage ["productpage.bookinfo.mesh.internal"] STATIC 94s
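To check that the global hostname is reachable from in-mesh workloads, you can send a request to it from the productpage deployment in one cluster. This is a sketch that assumes curl is available in the productpage container (the productpage-with-curl variant that you deployed earlier includes it) and that the global hostname resolves from in-mesh workloads. The expected output is an HTTP 200 status code.

# Request the global productpage hostname from an in-mesh workload in cluster 1
kubectl --context ${REMOTE_CONTEXT1} -n bookinfo exec deploy/productpage-v1 -- curl -s -o /dev/null -w "%{http_code}\n" http://productpage.bookinfo.mesh.internal:9080/productpage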
The productpage services for each Bookinfo instance are now unified behind one hostname, which increases the availability of the Bookinfo app. You can now use this global service hostname in routing configurations. For example, to expose the productpage global service hostname with an ingress gateway, continue with the Bookinfo example in the ingress gateway guide.
Next
- Expose apps in your mesh with an ingress gateway.
- Launch the Gloo UI to review the Istio insights that were captured for your ambient setup. Gloo Mesh Core comes with an insights engine that automatically analyzes your Istio setups for health issues. Then, Gloo shares these issues along with recommendations to harden your Istio setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment. For more information, see Insights.
- Check out the Istio docs to learn more about ambient mode.
- When it’s time to upgrade your ambient mesh, you can perform a safe in-place upgrade by using the Gloo operator or Helm.