Make services available across clusters
After you enroll your apps in a multicluster mesh, make apps available to other services across clusters.
About
This feature requires your mesh to be installed with the Solo distribution of Istio and an Enterprise-level license for Solo Enterprise for Istio. Contact your account representative to obtain a valid license.
If you linked clusters to form a multicluster ambient or sidecar mesh, you can make a service available across the multicluster mesh by applying the `solo.io/service-scope` label to a service or namespace. When you apply this label, a new global service is created with a hostname in the format `<name>.<namespace>.mesh.internal` if you do not use segments, or `<name>.<namespace>.<segment_domain>` if you do.
You can then use this internal hostname for in-mesh routing. For example, you might create a `Gateway` resource in `cluster1` to manage incoming traffic requests for your app in `cluster2`. To make the app accessible across clusters, you label its service with `solo.io/service-scope=global`, which generates a global service hostname. To route requests to the app, you create an `HTTPRoute` resource that references this global service hostname. The ingress gateway can then use this global service hostname in the `HTTPRoute` to route incoming traffic requests through the east-west gateway across clusters to your app.
For detailed information about and considerations for making services available across clusters, see Overview.
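As a sketch of that routing pattern, an `HTTPRoute` might reference the global hostname as its backend. The gateway name and namespace, the external hostname, and the port in this example are hypothetical placeholders, not values defined in this guide, and it assumes Istio's Gateway API support for hostname-based backend references.

```yaml
# Sketch: route ingress traffic on a Gateway in cluster1 to a global
# service hostname. Gateway name/namespace, www.example.com, and the
# port are illustrative assumptions.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: productpage-global
  namespace: bookinfo
spec:
  parentRefs:
    - name: ingress-gateway        # assumed Gateway name in cluster1
      namespace: istio-ingress     # assumed Gateway namespace
  hostnames:
    - "www.example.com"
  rules:
    - backendRefs:
        # Istio lets a backendRef target a mesh hostname (such as a
        # global service hostname) instead of a Kubernetes Service.
        - group: networking.istio.io
          kind: Hostname
          name: productpage.bookinfo.mesh.internal
          port: 9080
```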
Before you begin
Make sure that your apps are enrolled in the ambient mesh.
Expose services across clusters
Your options for exposing services across clusters vary based on whether you organized your apps into segments (alpha).
- Option 1 (default): Make services available across clusters
- Option 2 (alpha): Make segmented services available across clusters
Option 1 (default): Make services available across clusters
Follow these steps to make a service available across your multicluster mesh. To try out a sample app first, skip to Example: Bookinfo.
Apply the `solo.io/service-scope=global` label to either an individual service that you want to be accessible from multiple clusters, or to an entire namespace so that global hostnames are created for each service in that namespace.
- Service:
  ```
  kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-scope=global
  ```
- Namespace:
  ```
  kubectl label namespace <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-scope=global
  ```

If you need to opt individual services out of the global scope that is applied to the entire namespace, you can run `kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-scope=cluster` to return that service's exposure scope to the cluster network.
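If you manage your services declaratively, you can set the same scope in the Service manifest instead of labeling it imperatively. The service name, namespace, selector, and port below are illustrative placeholders.

```yaml
# Declarative equivalent of the kubectl label command: a Service with
# the service-scope label set at creation time. Name, namespace, and
# port are assumed values for illustration.
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: myns
  labels:
    solo.io/service-scope: global   # expose this service across the mesh
spec:
  selector:
    app: myapp
  ports:
    - port: 8080
      targetPort: 8080
```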
Verify that a global service entry with a hostname in the format `<svc_name>.<namespace>.mesh.internal` is created for the labeled service in the `istio-system` namespace, or that multiple global service entries are created for all services in the labeled namespace. This hostname makes the endpoint for your service available across the multicluster mesh.
```
kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1}
```
Example output:
```
NAME                             HOSTS                                       LOCATION   RESOLUTION   AGE
autogen.<namespace>.<svc_name>   ["<svc_name>.<namespace>.mesh.internal"]               STATIC       94s
```

If you have a service of the same name and namespace in another cluster of the mesh, or a namespace with the same name and services, label that service or namespace too. This way, the service's endpoint is added to the global service's hostname, which increases the availability of your service in the multicluster mesh. For more information, see Namespace sameness.
- Service:
  ```
  kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-scope=global
  ```
- Namespace:
  ```
  kubectl label namespace <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-scope=global
  ```
Optional: To ensure that all traffic requests to the service are routed to its global hostname, apply the `solo.io/service-takeover=true` label to each service instance, or to each namespace. For more information, see Local traffic takeover.
- Service:
  ```
  kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-takeover=true
  kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-takeover=true
  ```
- Namespace:
  ```
  kubectl label namespace <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-takeover=true
  kubectl label namespace <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-takeover=true
  ```
Optional: Modify the way that traffic is routed to service endpoints for the global service by applying the `networking.istio.io/traffic-distribution` annotation to each service instance. By default, requests are routed first to available endpoints within the source's own cluster network (`PreferNetwork`). Note that if you applied the `solo.io/service-takeover=true` label, you must choose a mode other than `PreferNetwork`. For more information and available traffic distribution modes, see Locality routing and traffic distribution.
```
kubectl annotate service <name> -n <namespace> --context ${REMOTE_CONTEXT1} networking.istio.io/traffic-distribution=<mode>
kubectl annotate service <name> -n <namespace> --context ${REMOTE_CONTEXT2} networking.istio.io/traffic-distribution=<mode>
```

Optional: If you also installed Solo Enterprise for Istio, you can review the global service hostnames in the Gloo UI. If you do not have Solo Enterprise for Istio installed, you can follow the management plane installation guide.
- Open the Gloo UI.
  ```
  meshctl dashboard --kubecontext $MGMT_CONTEXT
  ```
- Navigate to Global Services and verify that you see the global hostname for your service.

  Figure: Global Services page in the Gloo UI
Check out the recommended next steps, such as using the global hostname in ingress gateway routing configurations.
Example: Bookinfo
After you add the Bookinfo services to your multicluster ambient mesh, you can then make the services available across clusters.
To make Bookinfo globally available in the multicluster setup, label each `productpage` service so that both `productpage` endpoints are available behind one global service hostname.
```
kubectl --context ${REMOTE_CONTEXT1} label service productpage -n bookinfo solo.io/service-scope=global
kubectl --context ${REMOTE_CONTEXT2} label service productpage -n bookinfo solo.io/service-scope=global
```

Apply the `networking.istio.io/traffic-distribution=Any` annotation to the services. This annotation allows requests to the `productpage` global service to be routed to each service endpoint equally.
```
kubectl --context ${REMOTE_CONTEXT1} annotate service productpage -n bookinfo networking.istio.io/traffic-distribution=Any
kubectl --context ${REMOTE_CONTEXT2} annotate service productpage -n bookinfo networking.istio.io/traffic-distribution=Any
```

Verify that the global service entry with the `productpage.bookinfo.mesh.internal` hostname is created in the `istio-system` namespace.
```
kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1} | grep bookinfo
kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT2} | grep bookinfo
```
Example output:
```
autogen.bookinfo.productpage   ["productpage.bookinfo.mesh.internal"]   STATIC   94s
```

Don't see the global service entries? Make sure that you installed the Gloo Operator with an Enterprise-level license for Solo Enterprise for Istio. If you update your Gloo Operator or Helm installations with an Enterprise license, be sure to restart istiod and ztunnel in each cluster by running `kubectl rollout restart deploy istiod-gloo -n istio-system` and `kubectl rollout restart daemonset ztunnel -n istio-system`.

Use the ratings app to send a request to the `productpage.bookinfo.mesh.internal` global hostname. Verify that you get back a 200 HTTP response code.
```
kubectl -n bookinfo --context $REMOTE_CONTEXT1 debug -i pods/$(kubectl get pod -l app=ratings \
  --context $REMOTE_CONTEXT1 -A -o jsonpath='{.items[0].metadata.name}') \
  --image=curlimages/curl -- curl -vik http://productpage.bookinfo.mesh.internal:9080/productpage
```
Example output:
```
Defaulting debug container name to debugger-htnfh.
If you don't see a command prompt, try pressing enter.
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/html; charset=utf-8
< Content-Length: 5179
< Server: Werkzeug/0.14.1 Python/3.6.8
< Date: Thu, 24 Apr 2025 18:14:41 GMT
<
{ [885 bytes data]
100  5179  100  5179    0     0   5245      0 --:--:-- --:--:-- --:--:--  5241
* shutting down connection #0
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 5179
Server: Werkzeug/0.14.1 Python/3.6.8
Date: Thu, 24 Apr 2025 18:14:41 GMT

<!DOCTYPE html>
<html>
  <head>
    <title>Simple Bookstore App</title>
...
```

The `productpage` services for each Bookinfo instance are now unified behind one hostname, which increases the availability of the Bookinfo app.

By default, if a service instance exists in the same cluster as the client that sends the request, the request is routed to that service's endpoint in the global hostname. To verify that the ratings app can reach the endpoint of the other `productpage` instance across clusters, scale down the `productpage` app in one cluster.
Scale down the `productpage` app in `$REMOTE_CLUSTER1`.
```
kubectl scale deployment productpage-v1 -n bookinfo --context $REMOTE_CONTEXT1 --replicas=0
```

Repeat the request from the ratings app to the `productpage` app. Because the `productpage` app in `$REMOTE_CLUSTER1` is unavailable, all traffic is automatically routed to the `productpage` app in `$REMOTE_CLUSTER2` through the east-west gateway. Verify that you get back a 200 HTTP response code.
```
kubectl -n bookinfo --context $REMOTE_CONTEXT1 debug -i pods/$(kubectl get pod -l app=ratings \
  --context $REMOTE_CONTEXT1 -A -o jsonpath='{.items[0].metadata.name}') \
  --image=curlimages/curl -- curl -vik http://productpage.bookinfo.mesh.internal:9080/productpage
```
Example output:
```
Defaulting debug container name to debugger-htnfh.
If you don't see a command prompt, try pressing enter.
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Content-Type: text/html; charset=utf-8
< Content-Length: 5179
< Server: Werkzeug/0.14.1 Python/3.6.8
< Date: Thu, 24 Apr 2025 18:14:41 GMT
<
{ [885 bytes data]
100  5179  100  5179    0     0   5245      0 --:--:-- --:--:-- --:--:--  5241
* shutting down connection #0
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 5179
Server: Werkzeug/0.14.1 Python/3.6.8
Date: Thu, 24 Apr 2025 18:14:41 GMT

<!DOCTYPE html>
<html>
  <head>
    <title>Simple Bookstore App</title>
...
```

Scale the `productpage` deployment back up in `$REMOTE_CLUSTER1`.
```
kubectl scale deployment productpage-v1 -n bookinfo --context $REMOTE_CONTEXT1 --replicas=1
```
Optional: If you also installed Solo Enterprise for Istio, you can review the global service hostnames in the Gloo UI. If you do not have Solo Enterprise for Istio installed, you can follow the management plane installation guide.
- Open the Gloo UI.
  ```
  meshctl dashboard --kubecontext $MGMT_CONTEXT
  ```
- Navigate to Global Services and verify that you see the global hostname for your service.

  Figure: Global Services page in the Gloo UI
Check out the recommended next steps, such as using the global hostname in ingress gateway routing configurations.
Option 2 (alpha): Make segmented services available across clusters
Follow these steps to make a service available across your multicluster mesh. To try out a sample app first, check out the httpbin app example in the segments guide.
The multitenancy segments feature is in the alpha state. Alpha features are likely to change, are not fully tested, and are not supported for production. For more information, see Solo feature maturity.
If you organized your apps into segments (alpha), you can customize how apps are exposed across clusters.
Apply the `solo.io/service-scope` label to either an individual service that you want to be accessible from multiple clusters, or to an entire namespace so that global hostnames are created for each service in that namespace. The label value depends on the scope of exposure that you want to achieve. For more information, see Global vs segment scope.

Verify that a global service entry with a hostname in the format `<svc_name>.<namespace>.<segment_domain>` is created for the labeled service in the `istio-system` namespace, or that multiple global service entries are created for all services in the labeled namespace. This hostname makes the endpoint for your service available across the multicluster mesh.
```
kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1}
```
Example output:
```
NAME                             HOSTS                                          LOCATION   RESOLUTION   AGE
autogen.<namespace>.<svc_name>   ["<svc_name>.<namespace>.<segment_domain>"]               STATIC       94s
```

If you have a service of the same name and namespace in another cluster of the mesh, or a namespace with the same name and services, label that service or namespace too. This way, the service's endpoint is added to the global service's hostname, which increases the availability of your service in the multicluster mesh. For more information, see Namespace sameness.
Optional: To ensure that all traffic requests to the service are routed to its global hostname, apply the `solo.io/service-takeover=true` label to each service instance, or to each namespace. For more information, see Local traffic takeover.
- Service:
  ```
  kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-takeover=true
  kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-takeover=true
  ```
- Namespace:
  ```
  kubectl label namespace <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-takeover=true
  kubectl label namespace <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-takeover=true
  ```
Optional: Modify the way that traffic is routed to service endpoints for the global service by applying the `networking.istio.io/traffic-distribution` annotation to each service instance. By default, requests are routed first to available endpoints within the source's own cluster network (`PreferNetwork`). Note that if you applied the `solo.io/service-takeover=true` label, you must choose a mode other than `PreferNetwork`. For more information and available traffic distribution modes, see Locality routing and traffic distribution.
```
kubectl annotate service <name> -n <namespace> --context ${REMOTE_CONTEXT1} networking.istio.io/traffic-distribution=<mode>
kubectl annotate service <name> -n <namespace> --context ${REMOTE_CONTEXT2} networking.istio.io/traffic-distribution=<mode>
```

Optional: If you also installed Solo Enterprise for Istio, you can review the global service hostnames in the Gloo UI. If you do not have Solo Enterprise for Istio installed, you can follow the management plane installation guide.
- Open the Gloo UI.
  ```
  meshctl dashboard --kubecontext $MGMT_CONTEXT
  ```
- Navigate to Global Services and verify that you see the global hostname for your service.

  Figure: Global Services page in the Gloo UI
Check out the recommended next steps, such as using the global hostname in ingress gateway routing configurations.
Example: httpbin
For an example of adding apps to segments and making them available either globally across the multicluster mesh or within the segment in the multicluster mesh, check out the httpbin app example in the segments guide.
Next
- Expose apps in your mesh with an ingress gateway, and use your service's global hostname in routing configurations. For example, to expose the `productpage` global service hostname with an ingress gateway, continue with the Bookinfo example in the ingress gateway guide.
- Control traffic by creating a waypoint proxy.
- If you haven’t yet, install the Gloo management plane. The management plane includes the Gloo UI, which allows you to review the Istio insights that were captured for your ambient setup. Solo Enterprise for Istio comes with an insights engine that automatically analyzes your Istio setups for health issues. These issues are displayed in the UI along with recommendations to harden your Istio setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment. For more information, see Insights.
- When it’s time to upgrade your ambient mesh, you can perform a safe in-place upgrade by using the Gloo Operator or Helm.