Create segments (alpha)
Group clusters in your multicluster ambient mesh into logical segments with their own domain suffixes and isolated service discovery.
In this guide, you deploy two apps to a multicluster mesh that uses namespace sameness: an example httpbin app, and a client app from which to test connectivity to the httpbin app. First, you make the apps available throughout the multicluster mesh by using the default global hostnames. Then, you shift the apps into two environment-based segments, dev and prod, so that the apps use segment-distinct hostnames. Finally, you narrow the scope of app availability from the entire multicluster mesh to within each segment only.
For more information about the concepts covered in this guide, review the overview of multitenancy and namespace sameness, and how segments overcome namespace sameness challenges.
This feature requires your mesh to be installed with the Solo distribution of Istio and an Enterprise-level license for Gloo Mesh (OSS APIs). Contact your account representative to obtain a valid license.

The segments feature is in the alpha state. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Solo feature maturity.
Before you begin
- Install a multicluster ambient mesh.
- If you have not already, save the kubeconfig contexts of each cluster where an ambient mesh is installed. The examples in this guide assume two workload clusters.
  ```shell
  export REMOTE_CONTEXT1=<cluster1-context>
  export REMOTE_CONTEXT2=<cluster2-context>
  ```
Deploy and globally expose sample apps
1. Deploy an httpbin app called `in-ambient` in each cluster.

   ```shell
   kubectl apply --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
   kubectl apply --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
   ```

2. Deploy a client app called `client-in-ambient` in each cluster.

   ```shell
   kubectl apply --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
   kubectl apply --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
   ```

3. Verify that the `in-ambient` app and the `client-in-ambient` client app are deployed successfully.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} -n httpbin get pods
   kubectl --context ${REMOTE_CONTEXT2} -n httpbin get pods
   ```

   Example output:

   ```
   NAME                                 READY   STATUS    RESTARTS   AGE
   client-in-ambient-5c64bb49cd-w3dmw   1/1     Running   0          4s
   in-ambient-5c64bb49cd-m9kwm          1/1     Running   0          4s
   ```

4. Label the `httpbin` namespace to add the apps to the ambient mesh.

   ```shell
   kubectl label ns httpbin istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT1}
   kubectl label ns httpbin istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT2}
   ```

5. Before introducing segments, expose your apps across the multicluster mesh by creating standard `mesh.internal` global hostnames.

   1. In each cluster, label each service with the `solo.io/service-scope=global` label.

      ```shell
      for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
        kubectl label service in-ambient -n httpbin --context ${context} solo.io/service-scope=global
        kubectl label service client-in-ambient -n httpbin --context ${context} solo.io/service-scope=global
      done
      ```

   2. Verify that a global service entry with a hostname in the format `<svc_name>.httpbin.mesh.internal` is created for the labeled services in the `istio-system` namespace. This default `mesh.internal` hostname makes the endpoint for your service available across the multicluster mesh.

      ```shell
      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1}
      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT2}
      ```

      Example output:

      ```
      NAME                                HOSTS                                         LOCATION   RESOLUTION   AGE
      autogen.httpbin.client-in-ambient   ["client-in-ambient.httpbin.mesh.internal"]              STATIC       16s
      autogen.httpbin.in-ambient          ["in-ambient.httpbin.mesh.internal"]                     STATIC       18s
      NAME                                HOSTS                                         LOCATION   RESOLUTION   AGE
      autogen.httpbin.client-in-ambient   ["client-in-ambient.httpbin.mesh.internal"]              STATIC       18s
      autogen.httpbin.in-ambient          ["in-ambient.httpbin.mesh.internal"]                     STATIC       20s
      ```

   3. To verify that standard multicluster load balancing across the default `mesh.internal` domain works, scale down the in-ambient app in `$REMOTE_CLUSTER1`.

      ```shell
      kubectl scale deployment in-ambient -n httpbin --context ${REMOTE_CONTEXT1} --replicas=0
      ```

   4. In `$REMOTE_CLUSTER1`, send a few curl requests from the client app to the in-ambient app by using the app's `mesh.internal` hostname. Because the in-ambient app in `$REMOTE_CLUSTER1` is unavailable, all traffic is automatically routed to the in-ambient app in `$REMOTE_CLUSTER2` through the east-west gateway.

      ```shell
      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.mesh.internal:8000/hostname; done"
      ```

      Verify that you get back the name of the in-ambient app instance in `$REMOTE_CLUSTER2`. Example output:

      ```
      { "hostname": "in-ambient-b86fbcb48-6rvhp" }
      { "hostname": "in-ambient-b86fbcb48-6rvhp" }
      { "hostname": "in-ambient-b86fbcb48-6rvhp" }
      { "hostname": "in-ambient-b86fbcb48-6rvhp" }
      { "hostname": "in-ambient-b86fbcb48-6rvhp" }
      ```

   5. Scale the in-ambient deployment back up in `$REMOTE_CLUSTER1`.

      ```shell
      kubectl scale deployment in-ambient -n httpbin --context ${REMOTE_CONTEXT1} --replicas=1
      ```
Create segments
When both clusters participate in the implicit default segment, traffic is evenly spread across the workloads in `$REMOTE_CLUSTER1` and `$REMOTE_CLUSTER2` through the `mesh.internal` global hostnames. This coupling works well when services are identical in name and namespace through namespace sameness, but it can become a liability in more complex multitenant environments. For example, when teams run different environment tiers such as dev and prod with the same namespaces and service names, the endpoints for all of the identical services are unified under one global hostname, regardless of environment. Segments remedy this problem by assigning a dedicated DNS suffix to each logical environment, so that hostnames in the format `<svc_name>.<namespace>.<segment_domain>` can be used. For more examples of common multitenancy problems that segments can resolve, review the example segment scenarios.
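As a quick illustration of the naming scheme only (plain shell string handling, not a Gloo command), a segment hostname is composed from the service name, the namespace, and the segment's domain:

```shell
# Illustrative helper only: composes a hostname in the format
# <svc_name>.<namespace>.<segment_domain>.
segment_hostname() {
  svc="$1"; ns="$2"; domain="$3"
  printf '%s.%s.%s\n' "$svc" "$ns" "$domain"
}

segment_hostname in-ambient httpbin cluster.dev    # in-ambient.httpbin.cluster.dev
segment_hostname in-ambient httpbin cluster.prod   # in-ambient.httpbin.cluster.prod
```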
1. Define the `dev-segment` and `prod-segment` segments in both clusters.

   - These segments define the `cluster.dev` and `cluster.prod` domains for the globally exposed services in each segment.
   - Segments must always be created in the `istio-system` namespace.
   - Always deploy the same segment resources to all peered clusters in your multicluster mesh environment. In this example, `$REMOTE_CLUSTER1` serves as the `dev` environment and `$REMOTE_CLUSTER2` serves as the `prod` environment. However, you must create both segment resources in both clusters.

   ```shell
   for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
   kubectl apply --context ${context} -f - <<EOF
   apiVersion: admin.solo.io/v1alpha1
   kind: Segment
   metadata:
     name: dev-segment
     namespace: istio-system
   spec:
     domain: cluster.dev
   ---
   apiVersion: admin.solo.io/v1alpha1
   kind: Segment
   metadata:
     name: prod-segment
     namespace: istio-system
   spec:
     domain: cluster.prod
   EOF
   done
   ```
2. Assign `$REMOTE_CLUSTER1` to the `dev-segment` and `$REMOTE_CLUSTER2` to the `prod-segment` by labeling the `istio-system` namespaces. Note that a cluster can belong to only one segment at a time.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} label namespace istio-system admin.solo.io/segment=dev-segment --overwrite
   kubectl --context ${REMOTE_CONTEXT2} label namespace istio-system admin.solo.io/segment=prod-segment --overwrite
   ```

3. Verify that hostnames in the format `<svc_name>.httpbin.cluster.<env>` are now created for the services. With the clusters partitioned into their own environment segments, the legacy `mesh.internal` domain no longer maps to either segment, and its service entries are removed.

   ```shell
   kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1}
   kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT2}
   ```

   Example output:

   ```
   NAME                                               HOSTS                                            LOCATION   RESOLUTION   AGE
   autogen.<segment-name>.httpbin.client-in-ambient   ["client-in-ambient.httpbin.<segment-domain>"]              STATIC       16s
   autogen.<segment-name>.httpbin.in-ambient          ["in-ambient.httpbin.<segment-domain>"]                     STATIC       18s
   NAME                                               HOSTS                                            LOCATION   RESOLUTION   AGE
   autogen.<segment-name>.httpbin.client-in-ambient   ["client-in-ambient.httpbin.<segment-domain>"]              STATIC       18s
   autogen.<segment-name>.httpbin.in-ambient          ["in-ambient.httpbin.<segment-domain>"]                     STATIC       20s
   ```

4. Verify that the legacy domain is no longer routable. Repeat the requests in `$REMOTE_CLUSTER1` from the client app to the in-ambient app by using the app's `mesh.internal` hostname. This time, the request fails to resolve the domain.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- curl -v in-ambient.httpbin.mesh.internal:8000/hostname
   ```

5. Verify that the client app in the `dev-segment` of `$REMOTE_CLUSTER1` can now reach the in-ambient app in the `dev-segment` of `$REMOTE_CLUSTER1` through its `cluster.dev` hostname.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.cluster.dev:8000/hostname; done"
   ```

6. Verify that the client app in the `dev-segment` of `$REMOTE_CLUSTER1` can also reach the in-ambient app in the `prod-segment` of `$REMOTE_CLUSTER2` through its hostname. Because the app services are labeled with `solo.io/service-scope=global`, they are reachable by their segment hostname throughout the multicluster mesh, regardless of which segment the request originates from. In the next section, you limit this scope to the individual segment partitions.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.cluster.prod:8000/hostname; done"
   ```
Change hostname visibility to the segment
Now that the apps are partitioned into environment segments and have segment hostnames, you can limit hostname visibility to only the segment. This optional step involves changing the `solo.io/service-scope` label on the `in-ambient` service to `segment`, so that the hostnames are visible across clusters only within the app's segment. For more information, see Global vs segment scope.
While the service scope label can be a useful tool for partitioning service traffic in your multicluster mesh, do not use it as a security feature. If you need to ensure access control between apps in your mesh, use Istio AuthorizationPolicies.
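For example, a minimal Istio `AuthorizationPolicy` along these lines could enforce that only the client app's service account can call the in-ambient workloads. This is a hedged sketch: the policy name and the `app: in-ambient` selector label are assumptions about the sample app, not taken from this guide.

```yaml
# Hypothetical example: allow only the client-in-ambient service account
# to reach workloads labeled app: in-ambient. Names are illustrative.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: in-ambient-allow-client
  namespace: httpbin
spec:
  selector:
    matchLabels:
      app: in-ambient
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/httpbin/sa/client-in-ambient
```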
1. Change the scope of the `in-ambient` service hostnames to `segment` so that the `in-ambient.httpbin.cluster.dev` and `in-ambient.httpbin.cluster.prod` hostnames are visible across clusters, but only within each segment.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-scope=segment --overwrite
   kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-scope=segment --overwrite
   ```

2. Verify that the client app in the `dev-segment` of `$REMOTE_CLUSTER1` can still reach the in-ambient app in the `dev-segment` of `$REMOTE_CLUSTER1` through its `cluster.dev` hostname.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.cluster.dev:8000/hostname; done"
   ```

   Example output:

   ```
   { "hostname": "in-ambient-b86fbcb48-d4x2p" }
   ...
   ```

3. Verify that the client app in the `dev-segment` of `$REMOTE_CLUSTER1` can no longer reach the in-ambient app in the `prod-segment` of `$REMOTE_CLUSTER2` through its hostname. Because the in-ambient services are labeled with `solo.io/service-scope=segment`, they are reachable by their segment hostname across clusters, but only from other apps within the same segment. For example, if a third cluster in this setup also belonged to the `prod-segment`, apps within that cluster could reach the in-ambient app in the `prod-segment` of `$REMOTE_CLUSTER2` through its hostname.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.cluster.prod:8000/hostname; done"
   ```
Take over local service traffic
Finally, you can "take over" cluster-local traffic to the service. This optional step involves applying the `solo.io/service-takeover=true` label to the `in-ambient` service, so that any requests to the service, including requests from local services within the same cluster network, are always routed to the service's `<name>.<namespace>.<segment_domain>` hostname instead of the service's `<name>.<namespace>.svc.cluster.local` local hostname. By using this option, you can configure a service to span multiple clusters without changing your configuration or applications.
For more information, see Local traffic takeover.
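Conceptually, the takeover rewrite maps the cluster-local name onto the segment hostname. The following plain-shell helper only mirrors that mapping for illustration; it is not something the takeover feature requires you to run:

```shell
# Illustrative only: mirrors the hostname mapping that service takeover
# performs transparently inside the mesh.
takeover_hostname() {
  localname="$1"; segment_domain="$2"
  # Strip the .svc.cluster.local suffix, then append the segment domain.
  printf '%s.%s\n' "${localname%.svc.cluster.local}" "$segment_domain"
}

takeover_hostname in-ambient.httpbin.svc.cluster.local cluster.dev
# in-ambient.httpbin.cluster.dev
```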
1. To demonstrate the service takeover capabilities across clusters, put both clusters in the `dev-segment` so that they act as a single logical environment. Both `in-ambient` services and both `client-in-ambient` services are now accessible through the `in-ambient.httpbin.cluster.dev` and `client-in-ambient.httpbin.cluster.dev` hostnames, respectively.

   ```shell
   kubectl --context ${REMOTE_CONTEXT2} label namespace istio-system admin.solo.io/segment=dev-segment --overwrite
   ```

2. Enable service takeover on the `in-ambient` services. This label tells Istio to rewrite the local Kubernetes DNS name (`.svc.cluster.local`) to the globally aware service hostname.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-takeover=true --overwrite
   kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-takeover=true --overwrite
   ```

3. Send requests to the cluster-local DNS name in `$REMOTE_CLUSTER1`. Even though the client uses `.svc.cluster.local` in its request, service takeover forwards the requests across the segment, providing seamless multicluster routing.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.svc.cluster.local:8000/hostname; done"
   ```

   Example output:

   ```
   { "hostname": "in-ambient-b86fbcb48-6rvhp" }
   ...
   ```
Next
- Expose apps in your mesh with an ingress gateway.
- Control traffic by creating a waypoint proxy.
- If you haven't yet, install the Gloo management plane. The management plane includes the Gloo UI, which lets you review the Istio insights that were captured for your ambient setup. Gloo Mesh (OSS APIs) comes with an insights engine that automatically analyzes your Istio setups for health issues. These issues are displayed in the UI along with recommendations to harden your Istio setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment. For more information, see Insights.
- When it’s time to upgrade your ambient mesh, you can perform a safe in-place upgrade by using the Gloo Operator or Helm.
Cleanup
You can optionally remove the resources that you set up as part of this guide.

1. Clean up the example httpbin apps.

   - If you want to keep the example apps in your multicluster mesh, remove the service takeover labels and revert the service scope to global.

     ```shell
     kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-takeover- --overwrite
     kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-takeover- --overwrite
     kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-scope=global --overwrite
     kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-scope=global --overwrite
     ```

   - If you no longer need the example apps, uninstall them and delete the `httpbin` namespaces.

     ```shell
     kubectl delete --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
     kubectl delete --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
     kubectl delete --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
     kubectl delete --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
     ```

2. Remove the segment labels from the `istio-system` namespaces.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} label namespace istio-system admin.solo.io/segment- --overwrite
   kubectl --context ${REMOTE_CONTEXT2} label namespace istio-system admin.solo.io/segment- --overwrite
   ```

3. Delete the segment resources.

   ```shell
   kubectl --context ${REMOTE_CONTEXT1} delete segment dev-segment -n istio-system
   kubectl --context ${REMOTE_CONTEXT1} delete segment prod-segment -n istio-system
   kubectl --context ${REMOTE_CONTEXT2} delete segment dev-segment -n istio-system
   kubectl --context ${REMOTE_CONTEXT2} delete segment prod-segment -n istio-system
   ```