Add services to the mesh

Now that Istio is up and running, you can create service namespaces for your teams to run app workloads in, and onboard them to the ambient mesh.

Label each workload namespace with the istio.io/dataplane-mode=ambient label, which adds all pods in the namespace to an ambient mesh.

  kubectl label namespace <service_ns> istio.io/dataplane-mode=ambient
  

After you label the namespace, all incoming and outgoing traffic for the pods is automatically intercepted and secured by the ztunnel socket that is exposed on each pod. The socket is configured by the ztunnel instance that is co-located on the same node as the pod. The ztunnel socket forwards traffic to the ztunnel socket of the target pod. If the target pod is located on a different node, its ztunnel socket is configured by the ztunnel instance that is co-located on that node. The communication between the ztunnel sockets is secured via mutual TLS. For more information, see the component overview.
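For example, you can confirm that a namespace is enrolled in the ambient mesh by checking its labels. This is a minimal check; replace <service_ns> with the namespace that you labeled.

  # Verify that the istio.io/dataplane-mode=ambient label is set on the namespace
  kubectl get namespace <service_ns> --show-labels
  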

Make services available across clusters

If you linked clusters to form a multicluster ambient or sidecar mesh, you can make a service available across clusters throughout the multicluster mesh by using the solo.io/service-scope=global service label. When you apply this label to a service, it does not change the service. Instead, a new global service is created with a hostname in the format <name>.<namespace>.mesh.internal.

You can then use this internal hostname for in-mesh routing. For example, you might create a Gateway resource in cluster1 to manage incoming traffic requests for your app in cluster2. To make the app accessible across clusters, you label its service with solo.io/service-scope=global, which generates a global service hostname. To route requests to the app, you create an HTTPRoute resource that references this global service hostname. The ingress gateway can then use this global service hostname in the HTTPRoute to route incoming traffic requests through the east-west gateway across clusters to your app.
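For illustration, an HTTPRoute that routes to a global service hostname might look like the following sketch. The example reuses the myapp service in the stage namespace that is described in the Namespace sameness section. The gateway reference in parentRefs and the port are placeholders for your own setup, and the backendRefs entry follows the same pattern as the subset routing example later on this page.

  apiVersion: gateway.networking.k8s.io/v1
  kind: HTTPRoute
  metadata:
    name: myapp-global
    namespace: stage
  spec:
    parentRefs:
    # Placeholder: reference the Gateway that accepts ingress traffic in cluster1
    - name: istio-ingressgateway
      namespace: istio-ingress
    rules:
    - backendRefs:
      # Global service hostname that the solo.io/service-scope=global label generates
      - name: myapp.stage.mesh.internal
        port: 8080
  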

  1. For the service that you want to be accessible from multiple clusters, apply the solo.io/service-scope=global label.

      kubectl label service <name> -n <namespace> --context ${CLUSTER_CONTEXT} solo.io/service-scope=global
      
  2. Verify that the global service entry with a hostname in the format <svc_name>.<namespace>.mesh.internal is created in the istio-system namespace. This hostname makes the endpoint for your service available across the multicluster mesh.

      kubectl get serviceentry -n istio-system --context ${CLUSTER_CONTEXT}
      

    Example output:

      NAME                             HOSTS                                    LOCATION     RESOLUTION   AGE
    autogen.<namespace>.<svc_name>   ["<svc_name>.<namespace>.mesh.internal"]              STATIC       94s
      
  3. If you have a service of the same name and namespace in another cluster of the mesh, label that service too. This way, the service’s endpoint is added to the global service’s hostname, which increases the availability of your service in the multicluster mesh. For more information, see Namespace sameness.

      kubectl label service <name> -n <namespace> --context <other_context> solo.io/service-scope=global
      
  4. Optional: Modify the way that traffic is routed to service endpoints for the global service by applying the networking.istio.io/traffic-distribution annotation to each service instance. For more information, see Endpoint traffic control.

      kubectl annotate service <name> -n <namespace> networking.istio.io/traffic-distribution=<mode>
      

For an example of deploying an app to a multicluster mesh, see Bookinfo example: Multicluster.
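As an alternative to the kubectl label and annotate commands in the previous steps, you can set the label and the optional annotation directly in the service manifest. The following sketch uses placeholder names and ports.

  apiVersion: v1
  kind: Service
  metadata:
    name: myapp            # placeholder service name
    namespace: stage       # placeholder namespace
    labels:
      solo.io/service-scope: global
    annotations:
      # Optional; PreferNetwork is the default for global services
      networking.istio.io/traffic-distribution: PreferNetwork
  spec:
    selector:
      app: myapp
    ports:
    - port: 8080
      targetPort: 8080
  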

Namespace sameness

You might have the same app service in the same namespace in multiple clusters. When you label the service in one cluster, a global service is created in the mesh for that service’s endpoint. However, you must also label the service in the other cluster to include its endpoint in the global service’s hostname. When you label each service, all instances of that app in the multicluster mesh are exposed by the global service. By adhering to the principle of namespace sameness, the global service’s hostname unifies the endpoints for each service across the clusters.

For example, you might have a myapp service in the stage namespace of cluster1, and a myapp service in the stage namespace of cluster2. If you label the service in cluster1 with solo.io/service-scope=global, a global service is created with the hostname myapp.stage.mesh.internal. However, until you label the service in cluster2, the service is not automatically included in the global service’s endpoints.

Endpoint traffic control

If an in-mesh service makes a request to a service that is exposed globally, and a healthy endpoint for that service is available locally in the same cluster network, the request is routed to that local instance by default. If no healthy endpoints of that service are available locally, requests are routed to healthy endpoints in remote cluster networks. An endpoint is considered healthy if the app pod exposed by that local service is in a ready state.
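To check which local endpoints are currently available for a service, you can inspect the EndpointSlices that back it; each endpoint reports a ready condition. The service name and namespace are placeholders.

  # List the EndpointSlices for the service; the endpoint conditions show whether each address is ready
  kubectl get endpointslices -n <namespace> -l kubernetes.io/service-name=<name> -o yaml
  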

To modify this default behavior for global services, you can apply the networking.istio.io/traffic-distribution=<setting> annotation to a service that has the solo.io/service-scope=global label.

  • PreferNetwork: Route requests to available endpoints within the source’s same network first. This is the default for global services.
  • PreferClose: Route requests to an available endpoint that is closest to the source first, considering the zone, region, and network. For example, you might have a global service with two endpoints in zone us-west and one endpoint in us-east. When sending traffic from a client in us-west, all traffic routes to the two us-west endpoints. If one of those endpoints becomes unhealthy, all traffic routes to the remaining endpoint in us-west. If that endpoint becomes unhealthy, traffic routes to the us-east endpoint.
  • PreferRegion: Route requests to an available endpoint that is closest to the source first, considering the region and network.
  • Any: Consider all endpoints equally, and route requests to a random available endpoint.
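For example, to switch a global service from the default PreferNetwork behavior to PreferClose, update the annotation as follows. The service name, namespace, and context are placeholders.

  kubectl annotate service <name> -n <namespace> --context ${CLUSTER_CONTEXT} --overwrite networking.istio.io/traffic-distribution=PreferClose
  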

Modifying the original service

By default, when you apply the solo.io/service-scope=global label to a service, it does not change the original service. Instead, a new global service is created with a hostname in the format <name>.<namespace>.mesh.internal, and the original service remains unchanged with local endpoints only.

In the Solo distribution of Istio version 1.25 and later, you can instead modify the original service to include remote endpoints by applying the solo.io/service-scope=global-only label to the service.

By using this option, you can configure a service to span multiple clusters without changing your configuration or applications. However, you cannot control whether other services access only the local endpoints of the service, such as by configuring the networking.istio.io/traffic-distribution=PreferNetwork annotation, or whether they access both the local and remote endpoints. Before you apply the solo.io/service-scope=global-only label, ensure that your application can unconditionally handle cross-cluster requests.
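Applying the label follows the same pattern as the solo.io/service-scope=global label. The following command is a sketch with placeholder values; include the --overwrite flag if the service already has a different solo.io/service-scope value.

  kubectl label service <name> -n <namespace> --context ${CLUSTER_CONTEXT} --overwrite solo.io/service-scope=global-only
  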

Subset routing

Traditional subset routing in Istio, such as configuring routing rules for different app subsets in a DestinationRule, is currently not supported for cross-cluster routing when using multicluster peering. Instead, the recommended approach is to use services to represent specific subsets, and an HTTPRoute resource to manage cross-cluster routing to each “subset” service.
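For instance, a “subset” service selects a single version of a workload by its version label, while the shared app label still groups all versions. The following sketch shows the general shape, similar to the per-version services in the upstream Bookinfo bookinfo-versions.yaml manifest that the multicluster example on this page deploys.

  apiVersion: v1
  kind: Service
  metadata:
    name: reviews-v1
    namespace: bookinfo
    labels:
      app: reviews
  spec:
    ports:
    - name: http
      port: 9080
    selector:
      app: reviews
      version: v1    # selects only the v1 pods of the reviews workload
  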

For example, say that you have three clusters that you linked together to create a multicluster mesh.

  1. You deploy the Bookinfo app across the clusters by creating the ratings and productpage microservices in cluster0, reviews-v1 in cluster1, and reviews-v2 in cluster2.
  2. You then abstract the reviews subsets into individual services, such as one service called reviews-v1 and one service called reviews-v2, instead of using just one reviews service for both.
  3. To ensure that the services are accessible across the multicluster mesh, you label them with solo.io/service-scope=global, so that they now have internal global hostnames such as reviews-v1.bookinfo.mesh.internal and reviews-v2.bookinfo.mesh.internal.
  4. To manage routing rules for each service, you create an HTTPRoute similar to the following example. This HTTPRoute routes requests to each global service hostname. You can then implement routing rules to each “subset” service, such as by using header matching to send requests from a user called “blackstar” to only reviews-v2.
      apiVersion: gateway.networking.k8s.io/v1
      kind: HTTPRoute
      metadata:
        name: reviews
      spec:
        parentRefs:
        - group: ""
          kind: Service
          name: reviews
          port: 9080
        rules:
        - matches:
          - headers:
            - name: end-user
              value: jason
          backendRefs:
          - name: reviews-v1.bookinfo.mesh.internal
            port: 9080
        - matches:
          - headers:
            - name: end-user
              value: blackstar
          backendRefs:
          - name: reviews-v2.bookinfo.mesh.internal
            port: 9080
      

Bookinfo example: Single cluster

For testing purposes, you can deploy Bookinfo, the Istio sample app, and add it to your ambient mesh. You can also verify that traffic is routed through the ztunnels in your cluster by checking the ztunnel logs.

  1. Create the bookinfo namespace, and label it with the istio.io/dataplane-mode=ambient label. This label adds all Bookinfo services that you create in the namespace to the ambient mesh.

      kubectl create ns bookinfo
    kubectl label namespace bookinfo istio.io/dataplane-mode=ambient
      
  2. Deploy the Bookinfo app.

      # deploy bookinfo application components for all versions
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.25.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app'
    # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml
    # deploy all bookinfo service accounts
    kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.25.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
      
  3. Verify that the Bookinfo app is deployed successfully.

      kubectl get pods,svc -n bookinfo
      
  4. Verify that you can access the ratings app from the product page app.

      kubectl -n bookinfo debug -i pods/$(kubectl get pod -l app=productpage -A -o jsonpath='{.items[0].metadata.name}') --image=curlimages/curl -- curl -v http://ratings:9080/ratings/1
      

    Example output:

      ...
    < HTTP/1.1 200 OK
    < Content-type: application/json
    < Date: Tue, 24 Dec 2024 20:58:23 GMT
    < Connection: keep-alive
    < Keep-Alive: timeout=5
    < Transfer-Encoding: chunked
    < 
    { [59 bytes data]
    100    48    0    48    0     0   2549      0 --:--:-- --:--:-- --:--:--  2666
    * Connection #0 to host ratings left intact
    {"id":1,"ratings":{"Reviewer1":5,"Reviewer2":4}}
      
  5. Optional: Verify that traffic flows through the ztunnel by getting the logs of the ztunnel that is co-located with the ratings app.

    1. Get the name of the node that the ratings app is deployed to.

        kubectl get pods -n bookinfo -o wide | grep ratings
        

      In this example output, ip-10-0-6-27.us-east-2.compute.internal is the name of the node.

        ratings-v1-7c9cd8db6d-8t62f       1/1     Running   0          3m9s    10.0.13.100   ip-10-0-6-27.us-east-2.compute.internal   <none>           <none>
        
    2. List the ztunnels in your cluster and note the name of the ztunnel that is deployed to the same node as the ratings app.

        kubectl get pods -n istio-system -o wide | grep ztunnel
        

      In this example output, ztunnel-tvtzn is deployed to the same node as the ratings pod.

        ztunnel-tvtzn             1/1     Running   0          16m   10.0.5.167   ip-10-0-6-27.us-east-2.compute.internal   <none>           <none>
      ztunnel-vtpjm             1/1     Running   0          16m   10.0.1.204   ip-10-0-8-23.us-east-2.compute.internal   <none>           <none>
        
    3. Get the logs of the ztunnel pod that runs on the same node as the ratings app. Make sure that you see an access log message for the request that the product page app sent to ratings.

        kubectl logs -n istio-system <ztunnel-pod-name>
        

      Example output:

        2024-06-21T16:33:13.093929Z	info	access	connection complete	src.addr=10.XX.X.XX:46103 src.workload="productpage-v1-78dd566f6f-jcrtj" src.namespace="bookinfo" src.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage" dst.addr=10.XX.X.XX:9080 dst.hbone_addr=10.XX.X.XX:9080 dst.service="ratings.bookinfo.svc.cluster.local" dst.workload="ratings-v1-7c9cd8db6d-dph55" dst.namespace="bookinfo" dst.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings" direction="inbound" bytes_sent=222 bytes_recv=84 duration="4ms"
        
    4. Port-forward the ztunnel pod on port 15020.

        kubectl -n istio-system port-forward pod/<ztunnel_pod_name> 15020
        
    5. Open localhost:15020/stats/prometheus in your browser to view Istio Layer 4 metrics that were emitted by the ztunnel, such as istio_tcp_sent_bytes_total or istio_tcp_connections_closed_total. These metrics are forwarded to the built-in Prometheus server and are used by the Gloo UI to visualize traffic between workloads in the ambient mesh.

      Example output:

        istio_tcp_sent_bytes_total{reporter="destination",source_workload="productpage-v1",source_canonical_service="productpage",source_canonical_revision="v1",source_workload_namespace="bookinfo",source_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage",source_app="productpage",source_version="v1",source_cluster="gloo-mesh-docs-ambient-mgt",destination_service="unknown",destination_service_namespace="unknown",destination_service_name="unknown",destination_workload="ratings-v1",destination_canonical_service="ratings",destination_canonical_revision="v1",destination_workload_namespace="bookinfo",destination_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings",destination_app="ratings",destination_version="v1",destination_cluster="gloo-mesh-docs-ambient-mgt",request_protocol="tcp",response_flags="-",connection_security_policy="mutual_tls",response_code="",grpc_response_status=""} 398
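
      Instead of opening the endpoint in a browser, you can also scrape the metrics from the command line while the port-forward from the previous step is running. The following command is a sketch that filters for one of the metric names mentioned above.

        curl -s localhost:15020/stats/prometheus | grep istio_tcp_sent_bytes_total
        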
        

Bookinfo example: Multicluster

For testing purposes, you can deploy the Bookinfo sample app across multiple clusters, add the app services to your ambient mesh, and make the services available across clusters in the mesh.

Deploy Bookinfo to the ambient mesh

This example deploys the same Bookinfo app to two clusters, and adds them to the ambient mesh in each cluster.

  1. Save the kubeconfig contexts for two clusters that you deployed an ambient mesh to and linked together.

      export REMOTE_CONTEXT1=<cluster1-context>
    export REMOTE_CONTEXT2=<cluster2-context>
      
  2. Create the bookinfo namespace in each cluster, and label them with the istio.io/dataplane-mode=ambient label. This label adds all Bookinfo services that you create in each namespace to the ambient mesh in the respective cluster.

      for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
      kubectl --context ${context} create ns bookinfo
      kubectl --context ${context} label namespace bookinfo istio.io/dataplane-mode=ambient
    done
      
  3. Deploy the Bookinfo app to each cluster.

      for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
      # deploy bookinfo application components for all versions
      kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.25.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app'
      # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
      kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml
      # deploy all bookinfo service accounts
      kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.25.2/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
      # deploy individual services for each microservice version
      kubectl --context ${context} -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.25.2/samples/bookinfo/platform/kube/bookinfo-versions.yaml
    done
      
  4. Verify that the Bookinfo app is deployed successfully.

      kubectl --context ${REMOTE_CONTEXT1} get pods,svc -n bookinfo
    kubectl --context ${REMOTE_CONTEXT2} get pods,svc -n bookinfo
      
  5. Optional: You can verify that traffic flows through the ztunnel in one cluster by getting the logs of the ztunnel that is co-located with the ratings app. Note that this simply helps you review how traffic flows in the ambient mesh of one cluster; you make services available across clusters in the next section.

    1. Verify that you can access the ratings app from the product page app.

        kubectl --context ${REMOTE_CONTEXT1} -n bookinfo debug -i pods/$(kubectl --context ${REMOTE_CONTEXT1} get pod -l app=productpage -A -o jsonpath='{.items[0].metadata.name}') --image=curlimages/curl -- curl -v http://ratings:9080/ratings/1
        

      Example output:

        ...
      < HTTP/1.1 200 OK
      < Content-type: application/json
      < Date: Tue, 24 Dec 2024 20:58:23 GMT
      < Connection: keep-alive
      < Keep-Alive: timeout=5
      < Transfer-Encoding: chunked
      < 
      { [59 bytes data]
      100    48    0    48    0     0   2549      0 --:--:-- --:--:-- --:--:--  2666
      * Connection #0 to host ratings left intact
      {"id":1,"ratings":{"Reviewer1":5,"Reviewer2":4}}
        
    2. Get the name of the node that the ratings app is deployed to.

        kubectl --context ${REMOTE_CONTEXT1} get pods -n bookinfo -o wide | grep ratings
        

      In this example output, ip-10-0-6-27.us-east-2.compute.internal is the name of the node.

        ratings-v1-7c9cd8db6d-8t62f       1/1     Running   0          3m9s    10.0.13.100   ip-10-0-6-27.us-east-2.compute.internal   <none>           <none>
        
    3. List the ztunnels in your cluster and note the name of the ztunnel that is deployed to the same node as the ratings app.

        kubectl --context ${REMOTE_CONTEXT1} get pods -n istio-system -o wide | grep ztunnel
        

      In this example output, ztunnel-tvtzn is deployed to the same node as the ratings pod.

        ztunnel-tvtzn             1/1     Running   0          16m   10.0.5.167   ip-10-0-6-27.us-east-2.compute.internal   <none>           <none>
      ztunnel-vtpjm             1/1     Running   0          16m   10.0.1.204   ip-10-0-8-23.us-east-2.compute.internal   <none>           <none>
        
    4. Get the logs of the ztunnel pod that runs on the same node as the ratings app. Make sure that you see an access log message for the request that the product page app sent to ratings.

        kubectl --context ${REMOTE_CONTEXT1} logs -n istio-system <ztunnel-pod-name>
        

      Example output:

        2024-06-21T16:33:13.093929Z	info	access	connection complete	src.addr=10.XX.X.XX:46103 src.workload="productpage-v1-78dd566f6f-jcrtj" src.namespace="bookinfo" src.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage" dst.addr=10.XX.X.XX:9080 dst.hbone_addr=10.XX.X.XX:9080 dst.service="ratings.bookinfo.svc.cluster.local" dst.workload="ratings-v1-7c9cd8db6d-dph55" dst.namespace="bookinfo" dst.identity="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings" direction="inbound" bytes_sent=222 bytes_recv=84 duration="4ms"
        
    5. Port-forward the ztunnel pod on port 15020.

        kubectl --context ${REMOTE_CONTEXT1} -n istio-system port-forward pod/<ztunnel_pod_name> 15020
        
    6. Open localhost:15020/stats/prometheus in your browser to view Istio Layer 4 metrics that were emitted by the ztunnel, such as istio_tcp_sent_bytes_total or istio_tcp_connections_closed_total. These metrics are forwarded to the built-in Prometheus server and are used by the Gloo UI to visualize traffic between workloads in the ambient mesh.

      Example output:

        istio_tcp_sent_bytes_total{reporter="destination",source_workload="productpage-v1",source_canonical_service="productpage",source_canonical_revision="v1",source_workload_namespace="bookinfo",source_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-productpage",source_app="productpage",source_version="v1",source_cluster="gloo-mesh-docs-ambient-mgt",destination_service="unknown",destination_service_namespace="unknown",destination_service_name="unknown",destination_workload="ratings-v1",destination_canonical_service="ratings",destination_canonical_revision="v1",destination_workload_namespace="bookinfo",destination_principal="spiffe://cluster.local/ns/bookinfo/sa/bookinfo-ratings",destination_app="ratings",destination_version="v1",destination_cluster="gloo-mesh-docs-ambient-mgt",request_protocol="tcp",response_flags="-",connection_security_policy="mutual_tls",response_code="",grpc_response_status=""} 398
        

Expose services across clusters

To make Bookinfo globally available in the multicluster setup, you label each productpage service so that both productpage endpoints are available behind one global service hostname.

  1. Label the productpage service in each cluster to create one productpage global service.

      kubectl --context ${REMOTE_CONTEXT1} label service productpage -n bookinfo solo.io/service-scope=global
    kubectl --context ${REMOTE_CONTEXT2} label service productpage -n bookinfo solo.io/service-scope=global
      
  2. Apply the networking.istio.io/traffic-distribution=Any annotation to the services. This annotation allows requests to the productpage global service to be routed to each service endpoint equally.

      kubectl --context ${REMOTE_CONTEXT1} annotate service productpage -n bookinfo networking.istio.io/traffic-distribution=Any
    kubectl --context ${REMOTE_CONTEXT2} annotate service productpage -n bookinfo networking.istio.io/traffic-distribution=Any
      
  3. Verify that the global service entry with the productpage.bookinfo.mesh.internal hostname is created in the istio-system namespace.

      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1} | grep bookinfo
    kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT2} | grep bookinfo
      

    Example output:

      autogen.bookinfo.productpage   ["productpage.bookinfo.mesh.internal"]              STATIC       94s
      
  4. Use the ratings app to send a request to the productpage.bookinfo.mesh.internal global hostname. Verify that you get back a 200 HTTP response code.

      kubectl -n bookinfo --context $REMOTE_CONTEXT1 debug -i pods/$(kubectl get pod -l app=ratings \
    --context $REMOTE_CONTEXT1 -A -o jsonpath='{.items[0].metadata.name}') \
    --image=curlimages/curl -- curl -vik http://productpage.bookinfo.mesh.internal:9080/productpage
      

    Example output:

      Defaulting debug container name to debugger-htnfh.
    If you don't see a command prompt, try pressing enter.
    * HTTP 1.0, assume close after body
    < HTTP/1.0 200 OK
    < Content-Type: text/html; charset=utf-8
    < Content-Length: 5179
    < Server: Werkzeug/0.14.1 Python/3.6.8
    < Date: Thu, 24 Apr 2025 18:14:41 GMT
    < 
    { [885 bytes data]
    100  5179  100  5179    0     0   5245      0 --:--:-- --:--:-- --:--:--  5241
    * shutting down connection #0
    HTTP/1.0 200 OK
    Content-Type: text/html; charset=utf-8
    Content-Length: 5179
    Server: Werkzeug/0.14.1 Python/3.6.8
    Date: Thu, 24 Apr 2025 18:14:41 GMT
    

    <!DOCTYPE html> <html> <head> <title>Simple Bookstore App</title> …

    The productpage services for each Bookinfo instance are now unified behind one hostname, which increases the availability of the Bookinfo app. You can now use this global service hostname in routing configurations. For example, to expose the productpage global service hostname with an ingress gateway, continue with the Bookinfo example in the ingress gateway guide.

  5. Scale down the productpage app in the first cluster ($REMOTE_CONTEXT1).

      kubectl scale deployment productpage-v1 -n bookinfo --context $REMOTE_CONTEXT1 --replicas=0
      
  6. Repeat the request from the ratings app to the productpage app. Because the productpage app in the first cluster is unavailable, all traffic is automatically routed to the productpage app in the second cluster through the east-west gateway. Verify that you get back a 200 HTTP response code.

      kubectl -n bookinfo --context $REMOTE_CONTEXT1 debug -i pods/$(kubectl get pod -l app=ratings \
    --context $REMOTE_CONTEXT1 -A -o jsonpath='{.items[0].metadata.name}') \
    --image=curlimages/curl -- curl -vik http://productpage.bookinfo.mesh.internal:9080/productpage
      

    Example output:

      Defaulting debug container name to debugger-htnfh.
    If you don't see a command prompt, try pressing enter.
    * HTTP 1.0, assume close after body
    < HTTP/1.0 200 OK
    < Content-Type: text/html; charset=utf-8
    < Content-Length: 5179
    < Server: Werkzeug/0.14.1 Python/3.6.8
    < Date: Thu, 24 Apr 2025 18:14:41 GMT
    < 
    { [885 bytes data]
    100  5179  100  5179    0     0   5245      0 --:--:-- --:--:-- --:--:--  5241
    * shutting down connection #0
    HTTP/1.0 200 OK
    Content-Type: text/html; charset=utf-8
    Content-Length: 5179
    Server: Werkzeug/0.14.1 Python/3.6.8
    Date: Thu, 24 Apr 2025 18:14:41 GMT
    

    <!DOCTYPE html> <html> <head> <title>Simple Bookstore App</title> …

  7. If you also installed Gloo Mesh, open the Gloo UI. If you do not have Gloo Mesh installed, you can follow the Get started guide.

      meshctl dashboard --kubecontext $MGMT_CONTEXT
      
  8. Navigate to Observability > Graph and verify that you see the traffic flow for your multicluster setup. Note that you might need to select both clusters and the bookinfo and istio-eastwest namespaces.

  9. Scale the productpage deployment back up in the first cluster ($REMOTE_CONTEXT1).

      kubectl scale deployment productpage-v1 -n bookinfo --context $REMOTE_CONTEXT1 --replicas=1
      

Next