In this guide, you deploy two apps to a multicluster mesh that uses namespace sameness: an example httpbin app, and a client app from which to test connectivity to the httpbin app. First, you make the apps available throughout the multicluster mesh by using the default global hostnames. Then, you partition the apps into two environment-based segments, dev and prod, so that the apps shift to segment-distinct hostnames. Finally, you shift the scope of app availability from the entire multicluster mesh to within each segment only.

For more information about the concepts covered in this guide, review the overview of multitenancy and namespace sameness, and how segments overcome namespace sameness challenges.

Before you begin

  1. Install a multicluster ambient mesh.
  2. If you have not already, save the kubeconfig contexts of each cluster where an ambient mesh is installed. The examples in this guide assume two workload clusters.
      export REMOTE_CONTEXT1=<cluster1-context>
    export REMOTE_CONTEXT2=<cluster2-context>
      

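Before you continue, you can optionally sanity-check that both variables are set. The following guard is an illustrative sketch, not part of any product CLI:

```shell
# Illustrative guard: confirm that both kubeconfig context variables
# from the step above are set before running the rest of this guide.
check_contexts() {
  [ -n "${REMOTE_CONTEXT1:-}" ] && [ -n "${REMOTE_CONTEXT2:-}" ]
}

if check_contexts; then
  echo "Both contexts are set."
else
  echo "Export REMOTE_CONTEXT1 and REMOTE_CONTEXT2 before continuing." >&2
fi
```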
Deploy and globally expose sample apps

  1. Run the following commands to deploy an httpbin app named in-ambient in each cluster.

      kubectl apply --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
    kubectl apply --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
      
  2. Run the following commands to deploy a client app named client-in-ambient in each cluster.

      kubectl apply --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
    kubectl apply --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
      
  3. Verify that the in-ambient app and client-in-ambient client app are deployed successfully.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin get pods
    kubectl --context ${REMOTE_CONTEXT2} -n httpbin get pods
      

    Example output:

      NAME                                  READY   STATUS    RESTARTS   AGE
    client-in-ambient-5c64bb49cd-w3dmw    1/1     Running   0          4s
    in-ambient-5c64bb49cd-m9kwm           1/1     Running   0          4s
      
  4. Label the httpbin namespace to add the apps to the ambient mesh.

      kubectl label ns httpbin istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT1}
    kubectl label ns httpbin istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT2}
      
  5. Before introducing segments, expose your apps across the multicluster mesh by creating standard mesh.internal global hostnames.

    1. In each cluster, label each service with the solo.io/service-scope=global label.
        for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
        kubectl label service in-ambient -n httpbin --context ${context} solo.io/service-scope=global
        kubectl label service client-in-ambient -n httpbin --context ${context} solo.io/service-scope=global
      done
        
    2. Verify that the global service entry with a hostname in the format <svc_name>.httpbin.mesh.internal is created for the labeled services in the istio-system namespace. This default mesh.internal hostname makes the endpoint for your service available across the multicluster mesh.
        kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1}
      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT2}
        
      Example output:
        NAME                                 HOSTS                                         LOCATION        RESOLUTION   AGE
      autogen.httpbin.client-in-ambient    ["client-in-ambient.httpbin.mesh.internal"]                   STATIC       16s
      autogen.httpbin.in-ambient           ["in-ambient.httpbin.mesh.internal"]                          STATIC       18s
      NAME                                 HOSTS                                         LOCATION        RESOLUTION   AGE
      autogen.httpbin.client-in-ambient    ["client-in-ambient.httpbin.mesh.internal"]                   STATIC       18s
      autogen.httpbin.in-ambient           ["in-ambient.httpbin.mesh.internal"]                          STATIC       20s
        
    3. To verify that standard multicluster load balancing across the default mesh.internal domain is working, scale down the in-ambient app in cluster 1.
        kubectl scale deployment in-ambient -n httpbin --context ${REMOTE_CONTEXT1} --replicas=0
        
    4. In cluster 1, send a few curl requests from the client app to the in-ambient app, using the app’s mesh.internal domain. Because the in-ambient app in cluster 1 is unavailable, all traffic is automatically routed to the in-ambient app in cluster 2 through the east-west gateway.
        kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.mesh.internal:8000/hostname; done"
        
      Verify that you get back the name of the in-ambient app instance in cluster 2.
        {
        "hostname": "in-ambient-b86fbcb48-6rvhp"
      }
      {
        "hostname": "in-ambient-b86fbcb48-6rvhp"
      }
      {
        "hostname": "in-ambient-b86fbcb48-6rvhp"
      }
      {
        "hostname": "in-ambient-b86fbcb48-6rvhp"
      }
      {
        "hostname": "in-ambient-b86fbcb48-6rvhp"
      }
        
    5. Scale the in-ambient deployment back up in cluster 1.
        kubectl scale deployment in-ambient -n httpbin --context $REMOTE_CONTEXT1 --replicas=1
        
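Each curl request above returns a small JSON object. If you want to script the verification, a helper like the following (a hypothetical sketch, not part of any CLI) can extract the pod name from each response:

```shell
# extract_hostname: print the value of the "hostname" field from the
# JSON response returned by the sample app's /hostname endpoint.
# Assumes the single-field response shape shown in the example output.
extract_hostname() {
  sed -n 's/.*"hostname": *"\([^"]*\)".*/\1/p'
}

echo '{ "hostname": "in-ambient-b86fbcb48-6rvhp" }' | extract_hostname
# prints: in-ambient-b86fbcb48-6rvhp
```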

Create segments

When both clusters participate in the implicit default segment, traffic is evenly spread across the workloads in cluster 1 and cluster 2 through the mesh.internal global hostnames. This coupling works well when services share the same name and namespace through namespace sameness, but can become a liability in more complex multitenant environments. For example, when teams run different environment tiers such as dev and prod with the same namespaces and service names, the endpoints for all of the identical services are unified under one global hostname, regardless of environment. Segments remedy this problem by assigning a dedicated DNS suffix to each logical environment, so that hostnames in the format <svc_name>.<namespace>.<segment_domain> can be used. For more examples of common multitenancy problems that segments can resolve, review the example segment scenarios.
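The hostname scheme described above is plain string construction, sketched here as an illustrative helper only:

```shell
# segment_hostname: build the <svc_name>.<namespace>.<segment_domain>
# hostname that a globally exposed service gets inside a segment.
segment_hostname() {
  echo "${1}.${2}.${3}"
}

segment_hostname in-ambient httpbin cluster.dev    # in-ambient.httpbin.cluster.dev
segment_hostname in-ambient httpbin cluster.prod   # in-ambient.httpbin.cluster.prod
```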

  1. Define the dev-segment and prod-segment in both clusters.

    • These segments define the cluster.dev and cluster.prod domains for the globally exposed services in each segment.
    • Segments must always be created in the istio-system namespace.
    • Always deploy the same segment resources to all peered clusters in your multicluster mesh environment. In this example, cluster 1 serves as the dev environment, and cluster 2 serves as the prod environment. However, you must create both segment resources in both clusters.
      for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
      kubectl apply --context ${context} -f - <<EOF
    apiVersion: admin.solo.io/v1alpha1
    kind: Segment
    metadata:
      name: dev-segment
      namespace: istio-system
    spec:
      domain: cluster.dev
    ---
    apiVersion: admin.solo.io/v1alpha1
    kind: Segment
    metadata:
      name: prod-segment
      namespace: istio-system
    spec:
      domain: cluster.prod
    EOF
    done
      
  2. Assign cluster 1 to the dev-segment and cluster 2 to the prod-segment by labeling the istio-system namespaces. Note that a cluster can belong to only one segment at a time.

      kubectl --context ${REMOTE_CONTEXT1} label namespace istio-system admin.solo.io/segment=dev-segment --overwrite
    kubectl --context ${REMOTE_CONTEXT2} label namespace istio-system admin.solo.io/segment=prod-segment --overwrite
      
  3. Verify that hostnames in the format <svc_name>.httpbin.cluster.<env> are now created for the services. With the clusters now partitioned into their own environment segments, the legacy mesh.internal domain no longer maps to either segment, and the ServiceEntries for it are removed.

      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1}
    kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT2}
      

    Example output:

      NAME                                 HOSTS                                         LOCATION        RESOLUTION   AGE
    autogen.<segment-name>.httpbin.client-in-ambient    ["client-in-ambient.httpbin.<segment-domain>"]                   STATIC       16s
    autogen.<segment-name>.httpbin.in-ambient           ["in-ambient.httpbin.<segment-domain>"]                          STATIC       18s
    NAME                                 HOSTS                                         LOCATION        RESOLUTION   AGE
    autogen.<segment-name>.httpbin.client-in-ambient    ["client-in-ambient.httpbin.<segment-domain>"]                   STATIC       18s
    autogen.<segment-name>.httpbin.in-ambient           ["in-ambient.httpbin.<segment-domain>"]                          STATIC       20s
      
  4. Verify that the legacy domain is no longer routable. Repeat the requests in cluster 1 from the client-in-ambient app to the in-ambient app, using the app’s mesh.internal hostname. This time, the request fails because the domain cannot be resolved.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- curl -v in-ambient.httpbin.mesh.internal:8000/hostname
      
  5. Verify that the client app in the dev-segment of cluster 1 can now reach the in-ambient app in the same segment through its cluster.dev hostname.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.cluster.dev:8000/hostname; done"
      
  6. Verify that the client app in the dev-segment of cluster 1 can also reach the in-ambient app in the prod-segment of cluster 2 through its hostname. Because the app services are labeled with solo.io/service-scope=global, they are reachable by their segment hostname throughout the multicluster mesh, regardless of which segment the request originates from. Later in this guide, you limit this scope to only the individual segment partitions.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.cluster.prod:8000/hostname; done"
      

Set up global service aliases

When you add clusters to a segment, any service that is globally exposed is assigned a dedicated segment-specific hostname in the format <svc_name>.<namespace>.<segment_domain>. This hostname replaces the default hostname that is assigned to globally exposed services.

You might want to use a different hostname pattern for your global services. For example, your organization might already use a specific hostname pattern that ensures unique hostnames within and across segments. Starting in the Solo distribution of Istio version 1.29, you can specify hostname alias patterns in the Segment resource. To learn more about global service aliasing, see Global service aliasing.
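Conceptually, alias generation substitutes template variables such as {service} into the pattern. The following is a minimal sketch of that substitution for the simple case only; the actual implementation validates full URI Templates per the RFC:

```shell
# expand_alias: substitute the {service} variable in an alias pattern.
# Handles only the simple {service} variable; label-based variables
# are omitted from this sketch.
expand_alias() {
  pattern=$1; service=$2
  echo "$pattern" | sed "s/{service}/$service/"
}

expand_alias "{service}.cluster.dev" in-ambient        # in-ambient.cluster.dev
expand_alias "{service}.api.cluster.prod" in-ambient   # in-ambient.api.cluster.prod
```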

  1. Update the Segment resources that you created in the previous section to create hostname aliases with custom hostname patterns for your global services. The pattern must follow the URI Template RFC to be properly validated. Aliases can be used alongside the default segment-specific hostnames to address a service in the segment.

    The following Segment resources define hostname aliases with the following patterns:

    • Pattern 1, {service}.cluster.dev (simple overwrite): An alias with this pattern is created for all global services. It includes the service name and the cluster.dev domain, but omits the namespace.
    • Pattern 2, {service}.{labels['cluster.dev/prod']}.cluster.dev (label-based alias): An alias with this pattern is created only for services that have a cluster.dev/prod label. If that label is set, the pattern extracts the label value and adds it to the hostname alias. For example, assume you label the in-ambient service in the dev segment with cluster.dev/prod=dev. The generated alias for this service is in-ambient.dev.cluster.dev.
    • Pattern 3, {service}.api.cluster.dev (label selector match): An alias with this pattern is created only for services that have a tier=backend label. For example, the alias for the in-ambient service in the dev segment is in-ambient.api.cluster.dev.
      for context in ${REMOTE_CONTEXT1} ${REMOTE_CONTEXT2}; do
      kubectl apply --context ${context} -f - <<EOF
    apiVersion: admin.solo.io/v1alpha1
    kind: Segment
    metadata:
      name: dev-segment
      namespace: istio-system
    spec:
      domain: cluster.dev
      aliases:
      # Pattern 1: Simple overwrite
      # Simplify the default segment-specific hostnames by removing the namespace. 
      # Use this pattern if you can guarantee unique service names across namespaces.
      - pattern: "{service}.cluster.dev"
    
      # Pattern 2: Label-based alias
      # Label services with a specific label and add the label value to the hostname alias.
      # In this example, services must be labeled with cluster.dev/prod, such as cluster.dev/prod=dev.
      # The dev value is extracted and added to the hostname alias. 
      - pattern: "{service}.{labels['cluster.dev/prod']}.cluster.dev"
    
      # Pattern 3: Label selector match
      # Aliases are generated only if the service has the tier=backend label.
      - pattern: "{service}.api.cluster.dev"
        selector:
          matchLabels:
            tier: backend
    ---
    apiVersion: admin.solo.io/v1alpha1
    kind: Segment
    metadata:
      name: prod-segment
      namespace: istio-system
    spec:
      domain: cluster.prod
      aliases:
      # Pattern 1: Simple overwrite
      # Simplify the default segment-specific hostnames by removing the namespace. 
      # Use this pattern if you can guarantee unique service names across namespaces.
      - pattern: "{service}.cluster.prod"
    
      # Pattern 2: Label-based alias
      # Label services with a specific label and add the label value to the hostname alias.
      # In this example, services must be labeled with cluster.dev/prod, such as cluster.dev/prod=prod.
      # The prod value is extracted and added to the hostname alias. 
      - pattern: "{service}.{labels['cluster.dev/prod']}.cluster.prod"
    
      # Pattern 3: Label selector match
      # Aliases are generated only if the service has the tier=backend label.
      - pattern: "{service}.api.cluster.prod"
        selector:
          matchLabels:
            tier: backend
    EOF
    done
      
  2. Verify that hostname aliases are added to the existing ServiceEntries. Note that because your services are not labeled with cluster.dev/prod or tier=backend, only aliases that follow pattern 1 are created. The aliases are added as a comma-separated list in the solo.io/service-aliases annotation.

      kubectl get serviceentry \
    -n istio-system \
    --context ${REMOTE_CONTEXT1} \
    -o yaml | grep -A2 solo.io/service-aliases
    
    kubectl get serviceentry \
    -n istio-system \
    --context ${REMOTE_CONTEXT2} \
    -o yaml | grep -A2 solo.io/service-aliases
      

    Example output:

            solo.io/service-aliases: client-in-ambient.cluster.dev
     creationTimestamp: "2026-02-12T20:10:39Z"
     generation: 1
    --
       solo.io/service-aliases: in-ambient.cluster.dev
     creationTimestamp: "2026-02-12T20:10:39Z"
     generation: 1
    --
       solo.io/service-aliases: client-in-ambient.cluster.prod
     creationTimestamp: "2026-02-12T20:10:39Z"
     generation: 2
    --
       solo.io/service-aliases: in-ambient.cluster.prod
     creationTimestamp: "2026-02-12T20:10:39Z"
     generation: 1
       solo.io/service-aliases: client-in-ambient.cluster.dev
     creationTimestamp: "2026-02-12T20:10:39Z"
     generation: 4
    --
       solo.io/service-aliases: in-ambient.cluster.dev
     creationTimestamp: "2026-02-12T20:10:39Z"
     generation: 3
    --
       solo.io/service-aliases: client-in-ambient.cluster.prod
     creationTimestamp: "2026-02-12T20:10:39Z"
     generation: 1
    --
       solo.io/service-aliases: in-ambient.cluster.prod
     creationTimestamp: "2026-02-12T20:10:39Z"
     generation: 1
      
  3. Verify that the client-in-ambient app in the dev-segment of cluster 1 can now reach the in-ambient app in the same segment through the in-ambient.cluster.dev alias.

      kubectl --context ${REMOTE_CONTEXT1} \
      -n httpbin \
      exec -it deploy/client-in-ambient \
      -- sh -c "
        for i in \$(seq 1 5); do 
          curl -s in-ambient.cluster.dev:8000/hostname
        done"
      

    Example output:

      {
      "hostname": "in-ambient-5cdcdf5d4b-d75td"
    }
    ...
      
  4. Label the in-ambient service in the dev-segment with the cluster.dev/prod=dev and tier=backend labels. These labels trigger alias creation according to patterns 2 and 3.

      kubectl label service in-ambient -n httpbin --context $REMOTE_CONTEXT1 cluster.dev/prod=dev
    kubectl label service in-ambient -n httpbin --context $REMOTE_CONTEXT1 tier=backend
      
  5. Verify that you see two more hostname aliases for the in-ambient service in cluster 1:

    • in-ambient.api.cluster.dev: This alias is generated because the service matches the tier=backend label. Alias generation follows the {service}.api.cluster.dev pattern.
    • in-ambient.dev.cluster.dev: This alias is generated because the service has the cluster.dev/prod=dev label. To generate the alias, the value of the label (in this case: dev) is extracted to replace {labels['cluster.dev/prod']} in the {service}.{labels['cluster.dev/prod']}.cluster.dev hostname pattern.
      kubectl get serviceentry autogen.dev-segment.httpbin.in-ambient -n istio-system -o yaml --context $REMOTE_CONTEXT1 | grep -A2 solo.io/service-aliases
      

    Example output:

      solo.io/service-aliases: in-ambient.api.cluster.dev,in-ambient.cluster.dev,in-ambient.dev.cluster.dev
      
  6. Verify that the in-ambient service in cluster 1 can now be reached through the default segment-specific hostname in-ambient.httpbin.cluster.dev and three aliases: in-ambient.api.cluster.dev, in-ambient.cluster.dev, and in-ambient.dev.cluster.dev.

      kubectl --context ${REMOTE_CONTEXT1} \
      -n httpbin \
      exec -it deploy/client-in-ambient \
      -- sh -c "
        for i in \$(seq 1 5); do 
          curl -s in-ambient.httpbin.cluster.dev:8000/hostname
        done"
    
    kubectl --context ${REMOTE_CONTEXT1} \
      -n httpbin \
      exec -it deploy/client-in-ambient \
      -- sh -c "
        for i in \$(seq 1 5); do 
          curl -s in-ambient.api.cluster.dev:8000/hostname
        done"
    
    kubectl --context ${REMOTE_CONTEXT1} \
      -n httpbin \
      exec -it deploy/client-in-ambient \
      -- sh -c "
        for i in \$(seq 1 5); do 
          curl -s in-ambient.cluster.dev:8000/hostname
        done"
    
    kubectl --context ${REMOTE_CONTEXT1} \
      -n httpbin \
      exec -it deploy/client-in-ambient \
      -- sh -c "
        for i in \$(seq 1 5); do 
          curl -s in-ambient.dev.cluster.dev:8000/hostname
        done"
      
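To summarize the three patterns, the set of dev-segment aliases a service receives can be sketched as a function of its labels. This is illustrative logic only, mirroring the Segment resource above:

```shell
# dev_aliases <service> [env_label_value] [tier_label_value]
# Prints the dev-segment aliases for a service:
#   pattern 1 always applies;
#   pattern 2 applies only if the cluster.dev/prod label is set;
#   pattern 3 applies only if the service has tier=backend.
dev_aliases() {
  svc=$1; env=${2:-}; tier=${3:-}
  echo "${svc}.cluster.dev"
  [ -n "$env" ] && echo "${svc}.${env}.cluster.dev"
  [ "$tier" = "backend" ] && echo "${svc}.api.cluster.dev"
  return 0
}

dev_aliases in-ambient dev backend
# in-ambient.cluster.dev
# in-ambient.dev.cluster.dev
# in-ambient.api.cluster.dev
```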

Change hostname visibility to the segment

Now that apps are partitioned into environment segments and have segment hostnames, you can limit hostname visibility to the segment only. This optional step involves changing the solo.io/service-scope label on the in-ambient service to segment, so that the hostnames are visible across clusters only within the app’s segment. For more information, see Global vs segment scope.
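The visibility rule can be summarized as: a hostname resolves for a client if the service scope is global, or if the scope is segment and the client and service are in the same segment. A hypothetical sketch:

```shell
# reachable <scope> <client_segment> <service_segment>
# Returns 0 (success) if a client in client_segment can resolve the
# service's segment hostname, per the scope rule described above.
reachable() {
  scope=$1; client=$2; service=$3
  [ "$scope" = "global" ] || { [ "$scope" = "segment" ] && [ "$client" = "$service" ]; }
}

reachable segment dev-segment dev-segment && echo "resolves"
reachable segment dev-segment prod-segment || echo "does not resolve"
```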

  1. Change the scope of the in-ambient service hostnames to segment so that the in-ambient.httpbin.cluster.dev and in-ambient.httpbin.cluster.prod hostnames are visible across clusters, but only within each segment.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-scope=segment --overwrite
    kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-scope=segment --overwrite
      
  2. Verify that the client app in the dev-segment of cluster 1 can still reach the in-ambient app in the same segment through its cluster.dev hostname.

      kubectl --context ${REMOTE_CONTEXT1} \
      -n httpbin \
      exec -it deploy/client-in-ambient \
      -- sh -c "
        for i in \$(seq 1 5); do 
          curl -s in-ambient.httpbin.cluster.dev:8000/hostname
        done"
      

    Example output:

      {
      "hostname": "in-ambient-b86fbcb48-d4x2p"
    }
    ...
      
  3. Verify that the client app in the dev-segment of cluster 1 now cannot reach the in-ambient app in the prod-segment of cluster 2 through its hostname. Because the in-ambient app services are labeled with solo.io/service-scope=segment, they are reachable by their segment hostname only from other apps within the same segment. For example, if a third cluster in this setup also belonged to the prod-segment, apps in that cluster could still reach the in-ambient app in the prod-segment of cluster 2 through its hostname. The request fails with curl exit code 6, which indicates that the hostname could not be resolved.

      kubectl --context ${REMOTE_CONTEXT1} \
      -n httpbin \
      exec -it deploy/client-in-ambient \
      -- sh -c "
        for i in \$(seq 1 5); do 
          curl -s in-ambient.httpbin.cluster.prod:8000/hostname
        done"
      

    Example output:

      command terminated with exit code 6
      

Take over local service traffic

Finally, you can “take over” cluster-local traffic to the service. This optional step involves applying the solo.io/service-takeover=true label to the in-ambient service so that any requests to the service, including requests from local services within the same cluster network, are always routed to the service’s <name>.<namespace>.<segment_domain> hostname, and not to the service’s <name>.<namespace>.svc.cluster.local local hostname. By using this option, you can configure a service to span multiple clusters without changing your configuration or applications.

For more information, see Local traffic takeover.
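Conceptually, the takeover label changes which hostname a request to the cluster-local DNS name effectively targets. A hypothetical sketch of that mapping follows; the real rewrite happens in the mesh data plane, not in shell:

```shell
# resolve_name <service> <namespace> <segment_domain> <takeover:true|false>
# Shows which hostname a request to <service>.<namespace>.svc.cluster.local
# effectively targets when service takeover is on or off.
resolve_name() {
  svc=$1; ns=$2; domain=$3; takeover=$4
  if [ "$takeover" = "true" ]; then
    echo "${svc}.${ns}.${domain}"
  else
    echo "${svc}.${ns}.svc.cluster.local"
  fi
}

resolve_name in-ambient httpbin cluster.dev true    # in-ambient.httpbin.cluster.dev
resolve_name in-ambient httpbin cluster.dev false   # in-ambient.httpbin.svc.cluster.local
```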

  1. To demonstrate the service takeover capabilities across clusters, put both clusters in the dev-segment so they can act as a single logical environment. Both in-ambient services and client-in-ambient services are now accessible through the in-ambient.httpbin.cluster.dev and client-in-ambient.httpbin.cluster.dev hostnames, respectively.

      kubectl --context ${REMOTE_CONTEXT2} label namespace istio-system admin.solo.io/segment=dev-segment --overwrite
      
  2. Enable service takeover on the in-ambient services. This label tells Istio to route requests for the local Kubernetes DNS name (.svc.cluster.local) to the globally aware service hostname instead.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-takeover=true --overwrite
    kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-takeover=true --overwrite
      
  3. Send traffic requests to the cluster-local DNS name in cluster 1. Even though the client uses .svc.cluster.local in its request, service takeover forwards requests across the segment, providing seamless multicluster routing.

      kubectl --context ${REMOTE_CONTEXT1} -n httpbin exec -it deploy/client-in-ambient -- sh -c "for i in \$(seq 1 5); do curl -s in-ambient.httpbin.svc.cluster.local:8000/hostname; done"
      

    Example output:

      {
      "hostname": "in-ambient-b86fbcb48-6rvhp"
    }
    ...
      

Cleanup

You can optionally remove the resources that you set up as part of this guide.
  1. Example httpbin apps:

    • If you want to keep the example apps in your multicluster mesh, remove the service takeover labels and revert the service scope to global.
        kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-takeover- --overwrite
      kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-takeover- --overwrite
      kubectl --context ${REMOTE_CONTEXT1} -n httpbin label svc in-ambient solo.io/service-scope=global --overwrite
      kubectl --context ${REMOTE_CONTEXT2} -n httpbin label svc in-ambient solo.io/service-scope=global --overwrite
        
    • If you no longer need the example apps, uninstall them and delete the httpbin namespaces.
        kubectl delete --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
      kubectl delete --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/client-in-ambient.yaml
      kubectl delete --context ${REMOTE_CONTEXT1} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
      kubectl delete --context ${REMOTE_CONTEXT2} -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/gloo-mesh/sample-apps/in-ambient.yaml
        
  2. Remove the segment labels from the istio-system namespaces.

      kubectl --context ${REMOTE_CONTEXT1} label namespace istio-system admin.solo.io/segment- --overwrite
    kubectl --context ${REMOTE_CONTEXT2} label namespace istio-system admin.solo.io/segment- --overwrite
      
  3. Delete the segment resources.

      kubectl --context ${REMOTE_CONTEXT1} delete segment dev-segment -n istio-system
    kubectl --context ${REMOTE_CONTEXT1} delete segment prod-segment -n istio-system
    kubectl --context ${REMOTE_CONTEXT2} delete segment dev-segment -n istio-system
    kubectl --context ${REMOTE_CONTEXT2} delete segment prod-segment -n istio-system