About

If you linked clusters to form a multicluster ambient or sidecar mesh, you can make a service available across the mesh by applying the solo.io/service-scope label to a service or namespace. When you apply this label, a new global service is created with a hostname in the format <name>.<namespace>.mesh.internal if you do not use segments, or <name>.<namespace>.<segment_domain> if you do.

You can then use this internal hostname for in-mesh routing. For example, you might create a Gateway resource in cluster1 to manage incoming traffic for your app in cluster2. To make the app accessible across clusters, you label its service with solo.io/service-scope=global, which generates a global service hostname. Then, you create an HTTPRoute resource that references this hostname. The ingress gateway uses the HTTPRoute to route incoming traffic through the east-west gateway across clusters to your app.
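As an illustrative sketch only, such an HTTPRoute might look like the following. The app name, namespace, port, and Gateway reference are placeholders for your own setup, and the Hostname backend kind is Istio's Gateway API extension for routing to hostnames that are not backed by a local Kubernetes service.

    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: myapp-global        # placeholder name
      namespace: myapp          # placeholder namespace
    spec:
      parentRefs:
      - name: ingress-gateway   # placeholder: your Gateway resource in cluster1
      rules:
      - backendRefs:
        - kind: Hostname
          group: networking.istio.io
          name: myapp.myapp.mesh.internal   # global hostname created by the service-scope label
          port: 8080                        # placeholder port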

For detailed information about and considerations for making services available across clusters, see Overview.

Before you begin

Make sure that your apps are enrolled in the ambient mesh.

Expose services across clusters

Your options for exposing services across clusters vary based on whether you organized your apps into segments (alpha).

Option 1 (default): Make services available across clusters

Follow these steps to make a service available across your multicluster mesh. To try out a sample app first, skip to Example: Bookinfo.

  1. Apply the solo.io/service-scope=global label to either an individual service that you want to be accessible from multiple clusters, or to an entire namespace so that global hostnames are created for each service in that namespace.

    • Service:
        kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-scope=global
        
    • Namespace:
        kubectl label namespace <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-scope=global
        
  2. Verify that a global service entry with a hostname in the format <svc_name>.<namespace>.mesh.internal is created in the istio-system namespace for the labeled service, or that one is created for each service in the labeled namespace. This hostname makes the endpoint for your service available across the multicluster mesh.

      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1}
      

    Example output:

      NAME                             HOSTS                                      LOCATION   RESOLUTION   AGE
      autogen.<namespace>.<svc_name>   ["<svc_name>.<namespace>.mesh.internal"]              STATIC       94s
      
  3. If you have a service of the same name and namespace in another cluster of the mesh, or a namespace with the same name and services, label that service or namespace too. This way, the service’s endpoint is added behind the global service’s hostname, which increases the availability of your service in the multicluster mesh. For more information, see Namespace sameness.

    • Service:
        kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-scope=global
        
    • Namespace:
        kubectl label namespace <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-scope=global
        
  4. Optional: To ensure that all traffic requests to the service are routed to its global hostname, apply the solo.io/service-takeover=true label to each service instance, or to each namespace. For more information, see Local traffic takeover.

    • Service:
        kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-takeover=true
        
        kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-takeover=true
        
    • Namespace:
        kubectl label namespace <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-takeover=true
        
        kubectl label namespace <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-takeover=true
        
  5. Optional: Modify the way that traffic is routed to the global service’s endpoints by applying the networking.istio.io/traffic-distribution annotation to each service instance. By default, requests are routed first to available endpoints in the same cluster network as the source (PreferNetwork). Note that if you applied the solo.io/service-takeover=true label, you must choose a mode other than PreferNetwork. For more information and available traffic distribution modes, see Locality routing and traffic distribution.

      kubectl annotate service <name> -n <namespace> --context ${REMOTE_CONTEXT1} networking.istio.io/traffic-distribution=<mode>
      kubectl annotate service <name> -n <namespace> --context ${REMOTE_CONTEXT2} networking.istio.io/traffic-distribution=<mode>
      
  6. Optional: If you also installed Solo Enterprise for Istio, you can review the global service hostnames in the Gloo UI. If you do not have Solo Enterprise for Istio installed, you can follow the management plane installation guide.

    1. Open the Gloo UI.
        meshctl dashboard --kubecontext $MGMT_CONTEXT
        
    2. Navigate to Global Services and verify that you see the global hostname for your service.
      Figure: Global Services page in the Gloo UI
  7. Check out the recommended next steps, such as using the global hostname in ingress gateway routing configurations.

Example: Bookinfo

After you add the Bookinfo services to your multicluster ambient mesh, you can then make the services available across clusters.

  1. To make Bookinfo globally available in the multicluster setup, label each productpage service so that both productpage endpoints are available behind one global service hostname.

      kubectl --context ${REMOTE_CONTEXT1} label service productpage -n bookinfo solo.io/service-scope=global
      kubectl --context ${REMOTE_CONTEXT2} label service productpage -n bookinfo solo.io/service-scope=global
      
  2. Apply the networking.istio.io/traffic-distribution=Any annotation to the services. This annotation allows requests to the productpage global service to be routed to each service endpoint equally.

      kubectl --context ${REMOTE_CONTEXT1} annotate service productpage -n bookinfo networking.istio.io/traffic-distribution=Any
      kubectl --context ${REMOTE_CONTEXT2} annotate service productpage -n bookinfo networking.istio.io/traffic-distribution=Any
      
  3. Verify that the global service entry with the productpage.bookinfo.mesh.internal hostname is created in the istio-system namespace.

      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1} | grep bookinfo
      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT2} | grep bookinfo
      

    Example output:

      autogen.bookinfo.productpage   ["productpage.bookinfo.mesh.internal"]              STATIC       94s
      
  4. Use the ratings app to send a request to the productpage.bookinfo.mesh.internal global hostname. Verify that you get back a 200 HTTP response code.

      kubectl -n bookinfo --context $REMOTE_CONTEXT1 debug -i pods/$(kubectl get pod -l app=ratings \
    --context $REMOTE_CONTEXT1 -A -o jsonpath='{.items[0].metadata.name}') \
    --image=curlimages/curl -- curl -vik http://productpage.bookinfo.mesh.internal:9080/productpage
      

    Example output:

      Defaulting debug container name to debugger-htnfh.
    If you don't see a command prompt, try pressing enter.
    * HTTP 1.0, assume close after body
    < HTTP/1.0 200 OK
    < Content-Type: text/html; charset=utf-8
    < Content-Length: 5179
    < Server: Werkzeug/0.14.1 Python/3.6.8
    < Date: Thu, 24 Apr 2025 18:14:41 GMT
    < 
    { [885 bytes data]
    100  5179  100  5179    0     0   5245      0 --:--:-- --:--:-- --:--:--  5241
    * shutting down connection #0
    HTTP/1.0 200 OK
    Content-Type: text/html; charset=utf-8
    Content-Length: 5179
    Server: Werkzeug/0.14.1 Python/3.6.8
    Date: Thu, 24 Apr 2025 18:14:41 GMT
    
    <!DOCTYPE html>
    <html>
      <head>
        <title>Simple Bookstore App</title>
    ...
      

    The productpage services for each Bookinfo instance are now unified behind one hostname, which increases the availability of the Bookinfo app.

  5. By default, if a service instance exists in the same cluster as the client that sends the request, requests to the global hostname are routed to that local endpoint. To verify that the ratings app can reach the endpoint of the other productpage instance across clusters, scale down the productpage app in one cluster.

    1. Scale down the productpage app in $REMOTE_CLUSTER1.

        kubectl scale deployment productpage-v1 -n bookinfo --context $REMOTE_CONTEXT1 --replicas=0
        
    2. Repeat the request from the ratings app to the productpage app. Because the productpage app in $REMOTE_CLUSTER1 is unavailable, all traffic is automatically routed to the productpage app in $REMOTE_CLUSTER2 through the east-west gateway. Verify that you get back a 200 HTTP response code.

        kubectl -n bookinfo --context $REMOTE_CONTEXT1 debug -i pods/$(kubectl get pod -l app=ratings \
      --context $REMOTE_CONTEXT1 -A -o jsonpath='{.items[0].metadata.name}') \
      --image=curlimages/curl -- curl -vik http://productpage.bookinfo.mesh.internal:9080/productpage
        

      Example output:

        Defaulting debug container name to debugger-htnfh.
      If you don't see a command prompt, try pressing enter.
      * HTTP 1.0, assume close after body
      < HTTP/1.0 200 OK
      < Content-Type: text/html; charset=utf-8
      < Content-Length: 5179
      < Server: Werkzeug/0.14.1 Python/3.6.8
      < Date: Thu, 24 Apr 2025 18:14:41 GMT
      < 
      { [885 bytes data]
      100  5179  100  5179    0     0   5245      0 --:--:-- --:--:-- --:--:--  5241
      * shutting down connection #0
      HTTP/1.0 200 OK
      Content-Type: text/html; charset=utf-8
      Content-Length: 5179
      Server: Werkzeug/0.14.1 Python/3.6.8
      Date: Thu, 24 Apr 2025 18:14:41 GMT
      
      <!DOCTYPE html>
      <html>
        <head>
          <title>Simple Bookstore App</title>
      ...
        
    3. Scale the productpage deployment back up in $REMOTE_CLUSTER1.

        kubectl scale deployment productpage-v1 -n bookinfo --context $REMOTE_CONTEXT1 --replicas=1
        
  6. Optional: If you also installed Solo Enterprise for Istio, you can review the global service hostnames in the Gloo UI. If you do not have Solo Enterprise for Istio installed, you can follow the management plane installation guide.

    1. Open the Gloo UI.
        meshctl dashboard --kubecontext $MGMT_CONTEXT
        
    2. Navigate to Global Services and verify that you see the global hostname for your service.
      Figure: Global Services page in the Gloo UI
  7. Check out the recommended next steps, such as using the global hostname in ingress gateway routing configurations.

Option 2 (alpha): Make segmented services available across clusters

If you organized your apps into segments (alpha), you can customize how apps are exposed across clusters. Follow these steps to make a segmented service available across your multicluster mesh. To try out a sample app first, check out the httpbin app example in the segments guide.

  1. Apply the solo.io/service-scope label to either an individual service that you want to be accessible from multiple clusters, or to an entire namespace so that global hostnames are created for each service in that namespace. The label value depends on the scope of exposure you want to achieve. For more information, see Global vs segment scope.
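    As a sketch, labeling a service or namespace for segmented exposure might look like the following. The <scope> placeholder stands for the label value that matches your segment setup, as described in Global vs segment scope.

      kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-scope=<scope>
      
      kubectl label namespace <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-scope=<scope>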

  2. Verify that a global service entry with a hostname in the format <svc_name>.<namespace>.<segment_domain> is created in the istio-system namespace for the labeled service, or that one is created for each service in the labeled namespace. This hostname makes the endpoint for your service available across the multicluster mesh.

      kubectl get serviceentry -n istio-system --context ${REMOTE_CONTEXT1}
      

    Example output:

      NAME                             HOSTS                                         LOCATION   RESOLUTION   AGE
      autogen.<namespace>.<svc_name>   ["<svc_name>.<namespace>.<segment_domain>"]              STATIC       94s
      
  3. If you have a service of the same name and namespace in another cluster of the mesh, or a namespace with the same name and services, label that service or namespace too. This way, the service’s endpoint is added behind the global service’s hostname, which increases the availability of your service in the multicluster mesh. For more information, see Namespace sameness.

  4. Optional: To ensure that all traffic requests to the service are routed to its global hostname, apply the solo.io/service-takeover=true label to each service instance, or to each namespace. For more information, see Local traffic takeover.

    • Service:
        kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-takeover=true
        
        kubectl label service <name> -n <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-takeover=true
        
    • Namespace:
        kubectl label namespace <namespace> --context ${REMOTE_CONTEXT1} solo.io/service-takeover=true
        
        kubectl label namespace <namespace> --context ${REMOTE_CONTEXT2} solo.io/service-takeover=true
        
  5. Optional: Modify the way that traffic is routed to the global service’s endpoints by applying the networking.istio.io/traffic-distribution annotation to each service instance. By default, requests are routed first to available endpoints in the same cluster network as the source (PreferNetwork). Note that if you applied the solo.io/service-takeover=true label, you must choose a mode other than PreferNetwork. For more information and available traffic distribution modes, see Locality routing and traffic distribution.

      kubectl annotate service <name> -n <namespace> --context ${REMOTE_CONTEXT1} networking.istio.io/traffic-distribution=<mode>
      kubectl annotate service <name> -n <namespace> --context ${REMOTE_CONTEXT2} networking.istio.io/traffic-distribution=<mode>
      
  6. Optional: If you also installed Solo Enterprise for Istio, you can review the global service hostnames in the Gloo UI. If you do not have Solo Enterprise for Istio installed, you can follow the management plane installation guide.

    1. Open the Gloo UI.
        meshctl dashboard --kubecontext $MGMT_CONTEXT
        
    2. Navigate to Global Services and verify that you see the global hostname for your service.
      Figure: Global Services page in the Gloo UI
  7. Check out the recommended next steps, such as using the global hostname in ingress gateway routing configurations.

Example: httpbin

For an example of adding apps to segments and making them available either globally across the multicluster mesh or within the segment in the multicluster mesh, check out the httpbin app example in the segments guide.

Next