Overview
Review considerations and best practices for making in-mesh apps available to other services across clusters.
About
This feature requires your mesh to be installed with the Solo distribution of Istio and an Enterprise-level license for Gloo Mesh (OSS APIs). Contact your account representative to obtain a valid license.
If you linked clusters to form a multicluster ambient or sidecar mesh, you can make a service available across clusters throughout the multicluster mesh by applying the solo.io/service-scope label to a service or namespace. When you apply this label, a new global service with a custom hostname is created for you. If you use segments, the hostname follows the format <name>.<namespace>.<segment_domain>. If you do not use segments, the hostname uses the <name>.<namespace>.mesh.internal format.
You can then use this internal hostname for in-mesh routing. For example, you might create a Gateway resource in cluster1 to manage incoming requests for your app in cluster2. To make the app accessible across clusters, you label its service with solo.io/service-scope=global, which generates a global service hostname. To route requests to the app, you create an HTTPRoute resource that references this global service hostname. The ingress gateway can then use this global service hostname in the HTTPRoute to route incoming requests through the east-west gateway across clusters to your app.
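The flow described above can be sketched in two manifests. This is a minimal illustration, not a complete setup: the names myapp, apps, my-gateway, and the port 8080 are assumptions, and the HTTPRoute follows the same backendRefs pattern as the subset routing example later in this topic.

```yaml
# In cluster2: label the app's service to expose it across the mesh.
apiVersion: v1
kind: Service
metadata:
  name: myapp            # assumed app name
  namespace: apps        # assumed namespace
  labels:
    solo.io/service-scope: global
spec:
  selector:
    app: myapp
  ports:
  - port: 8080           # assumed port
---
# In cluster1: route ingress traffic to the generated global hostname,
# which the east-west gateway resolves to endpoints in cluster2.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: myapp
  namespace: apps
spec:
  parentRefs:
  - name: my-gateway     # assumed Gateway resource in cluster1
    kind: Gateway
    group: gateway.networking.k8s.io
  rules:
  - backendRefs:
    - name: myapp.apps.mesh.internal   # global hostname created by the label
      port: 8080
```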
Namespace sameness
Namespace sameness means that namespaces with the same name on different clusters are treated as the same logical namespace. When you have services with identical names in namespaces with the same name across multiple clusters and mark them as a global service, Istio considers all service instances as a single logical service with aggregated endpoints.
To achieve namespace sameness in a multicluster setup, you might have the same app service in the same namespace in multiple clusters. When you label the service in one cluster, a global service is created in the mesh with that service’s endpoint. However, you must also label the service in the other cluster to include its endpoint in the global service. When you label each service, all instances of that app in the multicluster mesh are exposed by the global service. By adhering to the principle of namespace sameness, the global service’s hostname unifies the endpoints of each service across the clusters.
For example, you might have a myapp service in the stage namespace of cluster1, and a myapp service in the stage namespace of cluster2. If you label the service in cluster1 with solo.io/service-scope=global, a global service is created with the hostname myapp.stage.mesh.internal. However, until you label the service in cluster2, that service’s endpoints are not included in the global service. Alternatively, you can apply the solo.io/service-scope=global label to the stage namespace in each cluster. In this case, a global hostname is created for each service that exists in the namespace. You can then label the same namespace in another cluster to unify the endpoints for each service under their global hostnames.
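To follow the namespace sameness example above, you apply the same label in each cluster so that both endpoints join the global service. The names myapp and stage come from the example; the selector and port are assumptions.

```yaml
# Apply this manifest in both cluster1 and cluster2 so that the
# endpoints from each cluster are unified under myapp.stage.mesh.internal.
apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: stage
  labels:
    solo.io/service-scope: global
spec:
  selector:
    app: myapp       # assumed workload label
  ports:
  - port: 8080       # assumed port
```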
Global vs segment scope with solo.io/service-scope
By labeling a service or namespace with solo.io/service-scope, you make a service available across clusters throughout the multicluster mesh. Depending on whether you use segments, you can set the scope of exposure through the following label values.
- solo.io/service-scope=global: Make the service, or all services in the namespace, available across all clusters in the mesh. This is the only supported value to expose apps across clusters if you do not use segments. If you use segments, you can use this value to expose the service globally in all peered clusters. Both apps in the same segment and apps that are not in the same segment can see the service.
- solo.io/service-scope=segment: If you use segments, and want the service, or all services in a namespace, to be visible across clusters within the app’s segment only, use the segment value. For example, if your team owns only a subset of the clusters in the multicluster mesh, you might want to unify the endpoints of your app across your clusters for failover, but not expose the app to other teams’ clusters.
Do not use the service scope label as a security feature. If you need to ensure access control between apps in your multicluster mesh, use Istio AuthorizationPolicies.
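Because the service scope label only controls visibility, access control belongs in an Istio AuthorizationPolicy as noted above. The following is a minimal sketch: the workload label app: myapp, the stage and web namespaces, and the frontend service account are all assumptions for illustration.

```yaml
# Allow only the frontend service account to reach the myapp workload;
# all other in-mesh traffic to myapp is denied by this ALLOW policy.
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: myapp-allow-frontend
  namespace: stage
spec:
  selector:
    matchLabels:
      app: myapp       # assumed workload label
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/web/sa/frontend   # assumed client identity
```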
Locality routing and traffic distribution
If an in-mesh service makes a request to a service that is exposed globally, and a healthy endpoint for that service is available locally in the same cluster network, the request is routed to that local instance by default. If no healthy endpoints of that service are available locally, requests are routed to healthy endpoints in remote cluster networks. An endpoint is considered healthy if the app pod exposed by that local service is in a ready state.
To modify this default behavior for global services, you can apply the networking.istio.io/traffic-distribution=<setting> annotation to a service that has the solo.io/service-scope label.
- PreferNetwork (default): Route requests to available endpoints within the source’s same network first. Requests to the service from within the same cluster network route to the service’s local hostname, and only route to the global hostname if a service instance in the same cluster is not available. This is the default for global services.
- PreferClose: Route requests to an available endpoint that is closest to the source first, considering the zone, region, and network. For example, you might have a global service with two endpoints in zone us-west and one endpoint in us-east. When sending traffic from a client in us-west, all traffic routes to the two us-west endpoints. If one of those endpoints becomes unhealthy, all traffic routes to the remaining endpoint in us-west. If that endpoint also becomes unhealthy, traffic routes to the us-east endpoint.
- PreferRegion: Route requests to an available endpoint that is closest to the source first, considering the region and network.
- Any: Consider all endpoints equally, and route requests to a random available endpoint.
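The annotation is applied alongside the service scope label on the same service. In this sketch, the service name, namespace, selector, and port are assumptions; the label and annotation keys come from this topic.

```yaml
# Prefer the closest endpoints (zone, then region, then network)
# instead of the default PreferNetwork behavior.
apiVersion: v1
kind: Service
metadata:
  name: myapp            # assumed app name
  namespace: stage       # assumed namespace
  labels:
    solo.io/service-scope: global
  annotations:
    networking.istio.io/traffic-distribution: PreferClose
spec:
  selector:
    app: myapp
  ports:
  - port: 8080           # assumed port
```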
Recall that the directionality that you choose when you link the clusters can also affect how services can communicate. For example, if you choose to asymmetrically link your clusters, services in one cluster might not be able to send requests to remote endpoints for services in another cluster.
Local traffic takeover with solo.io/service-takeover
By default, when you apply the solo.io/service-scope label to a service or namespace, it does not change the original service. Instead, a new global service is created with a hostname in the format <name>.<namespace>.mesh.internal (or <name>.<namespace>.<segment_domain> if you use segments), and the original service remains unchanged with local endpoints only. Requests to the service from within the same cluster network still use the service’s <name>.<namespace>.svc.cluster.local local hostname by default.
In the Solo distribution of Istio version 1.27.2 and later, after you expose a service across clusters with the solo.io/service-scope label, you can instead modify the original service to include remote endpoints by applying the solo.io/service-takeover=true label to the service or to the service’s namespace. This label ensures that any requests to the service, including requests from local services within the same cluster network, are always routed to the service’s global <name>.<namespace>.mesh.internal (or <name>.<namespace>.<segment_domain> if you use segments) hostname, and not to the service’s <name>.<namespace>.svc.cluster.local local hostname. In this sense, you “take over” traffic to the service’s local hostname so that it is instead always routed to the global hostname.
By using this option, you can configure a service to span multiple clusters without changing your configuration or applications. However, you lose the ability to control whether other services access only local endpoints of the service, such as by configuring the networking.istio.io/traffic-distribution=PreferNetwork annotation, or both local and remote endpoints. Before you apply the solo.io/service-takeover=true label, ensure that your application can handle cross-cluster requests unconditionally.
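Combining the two labels described above looks as follows. The service name, namespace, selector, and port are assumptions; note that the label value "true" must be quoted in YAML.

```yaml
# With takeover enabled, even local in-cluster requests to this service
# resolve to the global hostname and its aggregated endpoints.
apiVersion: v1
kind: Service
metadata:
  name: myapp            # assumed app name
  namespace: stage       # assumed namespace
  labels:
    solo.io/service-scope: global
    solo.io/service-takeover: "true"
spec:
  selector:
    app: myapp
  ports:
  - port: 8080           # assumed port
```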
Subset routing
Traditional subset routing in Istio, such as configuring routing rules to different app subsets in a DestinationRule, is currently not supported for cross-cluster routing when using multicluster peering. Instead, the recommended approach is to represent each subset with its own service, and to use an HTTPRoute resource to manage cross-cluster routing to each “subset” service.
For example, say that you have three clusters that you linked together to create a multicluster mesh.
- You deploy the Bookinfo app across the clusters by creating the ratings and productpage microservices in cluster0, reviews-v1 in cluster1, and reviews-v2 in cluster2.
- You then abstract the reviews subsets into individual services, such as one service called reviews-v1 and one service called reviews-v2, instead of using just one reviews service for both.
- To ensure that each service is accessible across the multicluster mesh, you label them with solo.io/service-scope=global, so that they now have internal global hostnames such as reviews-v1.bookinfo.mesh.internal and reviews-v2.bookinfo.mesh.internal.
- To manage routing rules for each service, you create an HTTPRoute similar to the following example. This HTTPRoute routes requests to each global service hostname. You can then implement routing rules for each “subset” service, such as by using header matching to send requests from a user called “blackstar” only to reviews-v2.
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: reviews
spec:
  parentRefs:
  - group: ""
    kind: Service
    name: reviews
    port: 9080
  rules:
  - matches:
    - headers:
      - name: end-user
        value: jason
    backendRefs:
    - name: reviews-v1.bookinfo.mesh.internal
      port: 9080
  - matches:
    - headers:
      - name: end-user
        value: blackstar
    backendRefs:
    - name: reviews-v2.bookinfo.mesh.internal
      port: 9080
```
Next
Now that you understand the best practices for multicluster mesh app availability, follow the steps to make services available across clusters.