In this guide, you deploy two apps to a multicluster mesh that uses namespace sameness: an example httpbin app, and a client app from which to test connectivity to the httpbin app. First, you make the apps available throughout the multicluster mesh by using the default global hostnames, before any segments exist. Then, you shift the apps into two environment-based segments, dev and prod, so that the apps move to segment-distinct hostnames. Finally, you narrow the scope of app availability from the entire multicluster mesh to within the segment only.

For more information about the concepts covered in this guide, review the overview of multitenancy and namespace sameness, and how segments overcome namespace sameness challenges.

Before you begin

  1. Install a multicluster ambient mesh.
  2. If you have not already, save the kubeconfig contexts of each cluster where an ambient mesh is installed. The examples in this guide assume two workload clusters.
      export REMOTE_CONTEXT1=<cluster1-context>
    export REMOTE_CONTEXT2=<cluster2-context>
      

Deploy and globally expose sample apps

  1. Run the following commands to deploy an httpbin app named in-ambient in each cluster.

      kubectl apply --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
    kubectl apply --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
      
  2. Run the following commands to deploy a client app named client-in-ambient in each cluster.

      kubectl apply --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
    kubectl apply --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
      
  3. Verify that the in-ambient app and client-in-ambient client app are deployed successfully.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin get pods
    kubectl --context ${REMOTE_CONTEXT2} -n httpbin get pods
      

    Example output:

      NAME                                  READY   STATUS    RESTARTS   AGE
    client-in-ambient-5c64bb49cd-w3dmw    1/1     Running   0          4s
    in-ambient-5c64bb49cd-m9kwm           1/1     Running   0          4s
      
  4. Label the httpbin namespace to add the apps to the ambient mesh.

      kubectl label ns httpbin istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT1}
    kubectl label ns httpbin istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT2}
      
  5. Before introducing segments, expose your apps across the multicluster mesh by creating standard mesh.internal global hostnames.

    1. In each cluster, label each service with the solo.io/service-scope=global label.
        for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
        kubectl label service in-ambient -n httpbin --context ${context} solo.io/service-scope=global
        kubectl label service client-in-ambient -n httpbin --context ${context} solo.io/service-scope=global
      done
        
    2. Verify that a global service entry with a hostname in the format <svc_name>.httpbin.mesh.internal is created in the istio-system namespace for each labeled service. This default mesh.internal hostname makes the endpoint for your service available across the multicluster mesh.
        kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1}
      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT2}
        
      Example output:
        NAME                                 HOSTS                                         LOCATION        RESOLUTION   AGE
      autogen.httpbin.client-in-ambient    ["client-in-ambient.httpbin.mesh.internal"]                   STATIC       16s
      autogen.httpbin.in-ambient           ["in-ambient.httpbin.mesh.internal"]                          STATIC       18s
      NAME                                 HOSTS                                         LOCATION        RESOLUTION   AGE
      autogen.httpbin.client-in-ambient    ["client-in-ambient.httpbin.mesh.internal"]                   STATIC       18s
      autogen.httpbin.in-ambient           ["in-ambient.httpbin.mesh.internal"]                          STATIC       20s
        
    3. To verify that standard multicluster load balancing across the default mesh.internal domain works, scale down the in-ambient app in cluster 1.
        kubectl scale deployment in-ambient -n httpbin --context ${REMOTE_CONTEXT1} --replicas=0
        
    4. In cluster 1, send a few curl requests from the client app to the in-ambient app by using the app’s mesh.internal hostname. Because the in-ambient app in cluster 1 is unavailable, all traffic is automatically routed to the in-ambient app in cluster 2 through the east-west gateway.
        kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.mesh.internal:8000/hostname; done"
        
      Verify that you get back the name of the in-ambient app instance in cluster 2.
        {
        "hostname": "in-ambient-b86fbcb48-6rvhp"
      }
      {
        "hostname": "in-ambient-b86fbcb48-6rvhp"
      }
      {
        "hostname": "in-ambient-b86fbcb48-6rvhp"
      }
      {
        "hostname": "in-ambient-b86fbcb48-6rvhp"
      }
      {
        "hostname": "in-ambient-b86fbcb48-6rvhp"
      }
        
    5. Scale the in-ambient deployment back up in cluster 1.
        kubectl scale deployment in-ambient -n httpbin --context $REMOTE_CONTEXT1 --replicas=1
        

Create segments

When both clusters participate in the implicit default segment, traffic is evenly spread across the workloads in cluster 1 and cluster 2 through the mesh.internal global hostnames. This coupling works well when services are identical in name and namespace through namespace sameness, but can become a liability in more complex multitenant environments. For example, when teams run different environment tiers such as dev and prod with the same namespaces and service names, the endpoints for all of the identical services are unified under one global hostname, regardless of environment. Segments remedy this problem by assigning a dedicated DNS suffix to each logical environment, so that hostnames in the format <svc_name>.<namespace>.<segment_domain> can be used. For more examples of common multitenancy problems that segments can resolve, review the example segment scenarios.
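The hostname scheme that segments introduce can be sketched in shell. This is illustrative only: the hostname_for helper is hypothetical and not part of any product CLI, and the domains match the cluster.dev and cluster.prod segments that you create later in this guide.

```shell
# Hypothetical helper: compose the hostname that the mesh generates for a
# globally exposed service, given its name, namespace, and domain suffix.
hostname_for() {
  echo "$1.$2.$3"   # <svc_name>.<namespace>.<domain>
}

hostname_for in-ambient httpbin mesh.internal   # default, before any segments exist
hostname_for in-ambient httpbin cluster.dev     # after the cluster joins the dev-segment
hostname_for in-ambient httpbin cluster.prod    # after the cluster joins the prod-segment
```

Only the domain suffix changes when a cluster moves into a segment; the service name and namespace parts of the hostname stay the same.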

  1. Define the dev-segment and prod-segment in both clusters.

    • These segments define the cluster.dev and cluster.prod domains for the globally exposed services in each segment.
    • Segments must always be created in the istio-system namespace.
    • Always deploy the same segment resources to all peered clusters in your multicluster mesh environment. In this example, cluster 1 serves as the dev environment, and cluster 2 serves as the prod environment. However, you must create both segment resources in both clusters.
      for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
      kubectl apply --context ${context} -f - <<EOF
      apiVersion: admin.solo.io/v1alpha1
      kind: Segment
      metadata:
        name: dev-segment
        namespace: istio-system
      spec:
        domain: cluster.dev
      ---
      apiVersion: admin.solo.io/v1alpha1
      kind: Segment
      metadata:
        name: prod-segment
        namespace: istio-system
      spec:
        domain: cluster.prod
      EOF
    done
      
  2. Assign cluster 1 to the dev-segment and cluster 2 to the prod-segment by labeling the istio-system namespaces. Note that a cluster can belong to only one segment at a time.

      kubectl --context ${REMOTE_CONTEXT1} label namespace istio-system admin.solo.io/segment=dev-segment --overwrite
    kubectl --context ${REMOTE_CONTEXT2} label namespace istio-system admin.solo.io/segment=prod-segment --overwrite
      
  3. Verify that hostnames in the format <svc_name>.httpbin.cluster.<env> are now created for the services. Because the clusters are now partitioned into their own environment segments, the legacy mesh.internal domain no longer maps to either segment, and its service entries are removed.

      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1}
    kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT2}
      

    Example output:

      NAME                                 HOSTS                                         LOCATION        RESOLUTION   AGE
    autogen.<segment-name>.httpbin.client-in-ambient    ["client-in-ambient.httpbin.<segment-domain>"]                   STATIC       16s
    autogen.<segment-name>.httpbin.in-ambient           ["in-ambient.httpbin.<segment-domain>"]                          STATIC       18s
    NAME                                 HOSTS                                         LOCATION        RESOLUTION   AGE
    autogen.<segment-name>.httpbin.client-in-ambient    ["client-in-ambient.httpbin.<segment-domain>"]                   STATIC       18s
    autogen.<segment-name>.httpbin.in-ambient           ["in-ambient.httpbin.<segment-domain>"]                          STATIC       20s
      
  4. Verify that the legacy domain is no longer routable. Repeat the requests in cluster 1 from the client-in-ambient app to the in-ambient app by using the app’s mesh.internal hostname. This time, the request fails to resolve the domain.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- curl -v in-ambient.httpbin.mesh.internal:8000/hostname
      
  5. Verify that the client app in the dev-segment of cluster 1 can now reach the in-ambient app in the dev-segment of cluster 1 through its cluster.dev hostname.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.cluster.dev:8000/hostname; done"
      
  6. Verify that the client app in the dev-segment of cluster 1 can also reach the in-ambient app in the prod-segment of cluster 2 through its hostname. Because the app services are labeled with solo.io/service-scope=global, they are reachable by their segment hostnames throughout the multicluster mesh, regardless of which segment the request originates from. In the next section, you limit this scope to the individual segment partitions.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.cluster.prod:8000/hostname; done"
      

Change hostname visibility to the segment

Now that apps are partitioned into environment segments and have segment hostnames, you can next limit hostname visibility to only the segment. This optional step involves changing the solo.io/service-scope label on the in-ambient service to segment, so the hostnames are visible across clusters only within the app’s segment. For more information, see Global vs segment scope.
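As a rough mental model, the value of the solo.io/service-scope label maps to a visibility rule. The helper below is purely illustrative, not part of any product API; it only restates the behavior described in this guide.

```shell
# Illustrative only: map a solo.io/service-scope label value to the
# cross-cluster visibility behavior described in this guide.
visibility_for_scope() {
  case "$1" in
    global)  echo "reachable from every cluster in the multicluster mesh" ;;
    segment) echo "reachable only from clusters in the same segment" ;;
    *)       echo "not exposed beyond the local cluster" ;;
  esac
}

visibility_for_scope global
visibility_for_scope segment
```

A service without either label value keeps the default Kubernetes behavior: it is reachable only within its own cluster.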

  1. Change the scope of the in-ambient service hostnames to segment so that the in-ambient.httpbin.cluster.dev and in-ambient.httpbin.cluster.prod hostnames are visible across clusters, but only within each segment.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-scope=segment --overwrite
    kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-scope=segment --overwrite
      
  2. Verify that the client app in the dev-segment of cluster 1 can still reach the in-ambient app in the same segment through its cluster.dev hostname.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.cluster.dev:8000/hostname; done"
      

    Example output:

      {
      "hostname": "in-ambient-b86fbcb48-d4x2p"
    }
    ...
      
  3. Verify that the client app in the dev-segment of cluster 1 now cannot reach the in-ambient app in the prod-segment of cluster 2 through its hostname. Because the in-ambient services are labeled with solo.io/service-scope=segment, they are reachable by their segment hostnames only from other apps within the same segment. For example, if a third cluster in this setup also belonged to the prod-segment, apps in that cluster could reach the in-ambient app in the prod-segment of cluster 2 through its hostname.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.cluster.prod:8000/hostname; done"
      

    Example output: the curl requests fail because the cluster.prod hostname cannot be resolved from the dev-segment, so no hostname is returned.

      command terminated with exit code 6
      

Take over local service traffic

Finally, you can “take over” cluster-local traffic to the service. This optional step involves applying the solo.io/service-takeover=true label to the in-ambient service, so that any requests to the service, including requests from local services within the same cluster network, are always routed to the service’s <name>.<namespace>.<segment_domain> hostname and not to the service’s <name>.<namespace>.svc.cluster.local cluster-local hostname. By using this option, you can configure a service to span multiple clusters without changing your configuration or applications.

For more information, see Local traffic takeover.
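Conceptually, takeover swaps the target of the cluster-local name for the segment hostname. The sketch below is illustrative only; takeover_target is a hypothetical helper that restates the mapping, not a real command.

```shell
# Illustrative only: with solo.io/service-takeover=true, a request to the
# cluster-local name is routed as if it were sent to the segment hostname.
takeover_target() {
  # takeover_target <svc_name> <namespace> <segment_domain>
  echo "$1.$2.svc.cluster.local -> $1.$2.$3"
}

takeover_target in-ambient httpbin cluster.dev
```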

  1. To demonstrate the service takeover capabilities across clusters, put both clusters in the dev-segment so that they act as a single logical environment. The in-ambient and client-in-ambient services are now accessible through the in-ambient.httpbin.cluster.dev and client-in-ambient.httpbin.cluster.dev hostnames, respectively.

      kubectl --context ${REMOTE_CONTEXT2} label namespace istio-system admin.solo.io/segment=dev-segment --overwrite
      
  2. Enable service takeover on the in-ambient services. This label tells Istio to rewrite requests for the local Kubernetes DNS name (.svc.cluster.local) to the globally aware service hostname.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-takeover=true --overwrite
    kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-takeover=true --overwrite
      
  3. Send requests to the cluster-local DNS name in cluster 1. Even though the client uses .svc.cluster.local in its request, service takeover forwards requests across the segment, providing seamless multicluster routing.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.svc.cluster.local:8000/hostname; done"
      

    Example output:

      {
      "hostname": "in-ambient-b86fbcb48-6rvhp"
    }
    ...
      

Cleanup

You can optionally remove the resources that you set up as part of this guide.
  1. Example httpbin apps:

    • If you want to keep the example apps in your multicluster mesh, remove the service takeover labels and revert the service scope to global.
        kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-takeover- --overwrite
      kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-takeover- --overwrite
      kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-scope=global --overwrite
      kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-scope=global --overwrite
        
    • If you no longer need the example apps, uninstall them and delete the httpbin namespaces.
        kubectl delete --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
      kubectl delete --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
      kubectl delete --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
      kubectl delete --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
        
  2. Remove the segment labels from the istio-system namespaces.

      kubectl --context ${REMOTE_CONTEXT1} label namespace istio-system admin.solo.io/segment- --overwrite
    kubectl --context ${REMOTE_CONTEXT2} label namespace istio-system admin.solo.io/segment- --overwrite
      
  3. Delete the segment resources.

      kubectl --context ${REMOTE_CONTEXT1} delete segment dev-segment -n istio-system
    kubectl --context ${REMOTE_CONTEXT1} delete segment prod-segment -n istio-system
    kubectl --context ${REMOTE_CONTEXT2} delete segment dev-segment -n istio-system
    kubectl --context ${REMOTE_CONTEXT2} delete segment prod-segment -n istio-system