Identity / Trust Domain

Service Mesh Hub can help unify the root identity between multiple service mesh installations, so that each mesh's intermediate certificates are signed by the same Root CA and end-to-end mTLS between clusters and services can be established correctly.

Service Mesh Hub establishes trust based on the trust model defined by the user: is there complete shared trust with a common root and identity, or is there limited trust between clusters, with traffic gated by egress and ingress gateways?

In this guide, we’ll explore the shared trust model between two Istio clusters and how Service Mesh Hub simplifies and orchestrates the processes needed for this to happen.

Before you begin

Before proceeding, be sure to review the assumptions and satisfy the pre-requisites from the Guides top-level document.

Verify that identity in the two clusters is different

We can inspect the certificate chain used to establish mTLS between Istio services in the management-plane-context cluster and the remote-cluster-context cluster, and compare them to confirm they are different. One way to see the certificates is to use the openssl s_client tool with the -showcerts parameter when calling between two services. Let’s try it on the management-plane-context cluster:

kubectl --context management-plane-context exec -it deploy/reviews-v1 -c istio-proxy \
-- openssl s_client -showcerts -connect ratings.default:9080

You should see the certificate chain in the output, among other handshake-related information. The last certificate in the chain is the root certificate:

Certificate chain
 0 s:
   i:O = cluster.local
 1 s:O = cluster.local
   i:O = cluster.local

Run the same command against the remote-cluster-context cluster and compare the output. For the reviews service in that cluster we have to use deploy/reviews-v3, since the reviews-v1 deployment used in the previous command doesn’t exist there:

kubectl --context remote-cluster-context exec -it deploy/reviews-v3 -c istio-proxy \
-- openssl s_client -showcerts -connect ratings.default:9080

You should notice that the root certificates that signed the workload certificates are different. Let’s unify those into a shared trust model of identity.
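A quicker comparison than eyeballing PEM blocks is to fingerprint the last (root) certificate each side presents. This sketch uses the same deployments and contexts as above; the root_fingerprint helper is our own shorthand, not part of Istio or Service Mesh Hub tooling:

```shell
# Fingerprint the last (root) certificate presented by the ratings service.
root_fingerprint() {
  kubectl --context "$1" exec deploy/"$2" -c istio-proxy -- \
    sh -c 'openssl s_client -showcerts -connect ratings.default:9080 </dev/null 2>/dev/null' |
    # Keep only the last PEM block in the chain (the root cert).
    awk '/BEGIN CERTIFICATE/{buf=""} {buf=buf $0 ORS} /END CERTIFICATE/{last=buf} END{printf "%s", last}' |
    openssl x509 -noout -fingerprint -sha256
}

root_fingerprint management-plane-context reviews-v1
root_fingerprint remote-cluster-context reviews-v3
# Different fingerprints here confirm the two meshes use different roots.
```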

Creating a Virtual Mesh

Service Mesh Hub uses the Virtual Mesh Custom Resource to configure a Virtual Mesh, which is a logical grouping of one or more service meshes for the purposes of federation according to some parameters. Let’s take a look at a VirtualMesh configuration that can unify our two service meshes and establish a shared trust model for identity:

apiVersion: networking.smh.solo.io/v1alpha1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: service-mesh-hub
spec:
  displayName: "Demo Mesh Federation"
  certificateAuthority:
    builtin:
      ttlDays: 356
      rsaKeySizeBytes: 4096
      orgName: "service-mesh-hub"
  federation:
    mode: PERMISSIVE
  shared: {}
  enforceAccessControl: false
  meshes:
  - name: istio-istio-system-management-plane
    namespace: service-mesh-hub
  - name: istio-istio-system-new-remote-cluster
    namespace: service-mesh-hub
Understanding VirtualMesh

The first part of the spec contains the parameters for establishing shared identity and federation. In this case, we tell Service Mesh Hub to create a Root CA using the parameters specified above (TTL, key size, org name, etc.). We could instead have used an existing Root CA by referencing an existing secret:

  certificateAuthority:
    provided:
      certificate:
        name: root-ca-name
        namespace: root-ca-namespace
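If you provide your own Root CA instead of using the builtin one, the referenced secret must exist before the VirtualMesh is applied. Here is a minimal sketch of generating a Root CA with openssl and storing it in such a secret; the file names and secret data keys are illustrative assumptions, so check your Service Mesh Hub version’s docs for the exact keys it expects:

```shell
# Generate a self-signed Root CA (parameters chosen to mirror the builtin CA
# defaults shown above).
openssl req -x509 -newkey rsa:4096 -nodes -days 356 \
  -subj "/O=service-mesh-hub" \
  -keyout root-key.pem -out root-cert.pem

# Store it in the secret referenced by the VirtualMesh. The data keys Service
# Mesh Hub expects may differ; these names are an assumption.
kubectl --context management-plane-context create namespace root-ca-namespace
kubectl --context management-plane-context create secret generic root-ca-name \
  -n root-ca-namespace \
  --from-file=root-cert.pem \
  --from-file=root-key.pem
```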

We also specify the federation mode to be PERMISSIVE, which makes services available between the meshes. You can control this later by specifying different global service properties.

Lastly, we are creating the VirtualMesh with two different service meshes: istio-istio-system-management-plane and istio-istio-system-new-remote-cluster. Any meshes that should be part of this virtual grouping and federation can be listed here.

Applying VirtualMesh

If you saved this VirtualMesh CR to a file named demo-virtual-mesh.yaml, you can apply it like this:

kubectl --context management-plane-context apply -f demo-virtual-mesh.yaml

At this point we need to bounce the istiod control plane. This is necessary because Istio picks up the CA for Citadel at startup and does not rotate it often enough; this behavior is being improved in future versions of Istio.

kubectl --context management-plane-context \
delete pod -n istio-system -l app=istiod 
kubectl --context remote-cluster-context \
delete pod -n istio-system -l app=istiod 

Note that after you bounce the control plane, it may still take some time for the workload certificates to be re-issued with the new CA. You can force the workloads to reload them by bouncing the workloads as well. For example, for the bookinfo sample running in the default namespace:

kubectl --context management-plane-context delete po --all
kubectl --context remote-cluster-context delete po --all

Creating this resource instructs Service Mesh Hub to establish a shared root identity across the clusters in the Virtual Mesh, as well as federate the services. The next sections of this document explain some of the pieces of how this works.

Understanding the shared root process

When we create the VirtualMesh CR, set the trust model to shared, and configure the Root CA parameters, Service Mesh Hub kicks off the process of unifying identity under a shared root. First, Service Mesh Hub will either create a new Root CA (if builtin is used) or use the Root CA provided in the referenced secret.

Then Service Mesh Hub uses a Certificate Signing Request (CSR) agent on each cluster to create a new key/cert pair that will form the intermediate CA used by the mesh on that cluster. The agent then creates a Certificate Signing Request, represented by the VirtualMeshCertificateSigningRequest CR, and Service Mesh Hub signs the certificate with the Root CA specified in the VirtualMesh. At that point, we want the mesh (Istio in this case) to pick up the new intermediate CA and start using it for its workloads.
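The signing flow above can be sketched locally with openssl. Service Mesh Hub’s CSR agent performs the equivalent steps in-cluster; the file names and subjects here are purely illustrative:

```shell
# 1. The Root CA (what the VirtualMesh's builtin CA provides).
openssl req -x509 -newkey rsa:4096 -nodes -days 356 \
  -subj "/O=service-mesh-hub" -keyout root-key.pem -out root-cert.pem

# 2. An intermediate key and CSR (what the CSR agent creates on each cluster).
openssl req -newkey rsa:4096 -nodes \
  -subj "/O=cluster-intermediate" -keyout intermediate-key.pem -out intermediate.csr

# 3. Sign the CSR with the Root CA (what Service Mesh Hub does centrally).
openssl x509 -req -in intermediate.csr -CA root-cert.pem -CAkey root-key.pem \
  -CAcreateserial -days 180 -out intermediate-cert.pem

# 4. Verify the intermediate chains back to the shared root.
openssl verify -CAfile root-cert.pem intermediate-cert.pem
# prints: intermediate-cert.pem: OK
```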

Service Mesh Hub Architecture

To verify, let’s check the VirtualMeshCertificateSigningRequest CR in remote-cluster-context:

kubectl --context remote-cluster-context \
get virtualmeshcertificatesigningrequest -n service-mesh-hub

We should see this on the remote cluster:

NAME                              AGE
istio-virtual-mesh-cert-request   3m15s

If we do the same on the management-plane-context cluster, we should see a VirtualMeshCertificateSigningRequest there as well.

Lastly, let’s verify that the cacerts secret, which Istio’s Citadel can use, was created in the istio-system namespace:

kubectl --context management-plane-context get secret -n istio-system cacerts 
NAME      TYPE                      DATA   AGE
cacerts   5      8m10s
kubectl --context remote-cluster-context get secret -n istio-system cacerts 
NAME      TYPE                      DATA   AGE
cacerts   5      8m34s
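To confirm both clusters now share the same root, you can compare the root certificate inside each cacerts secret. This sketch assumes the secret follows Istio’s plugin-CA layout with a root-cert.pem data key, which may vary by version:

```shell
# Extract and fingerprint the root cert from each cluster's cacerts secret.
kubectl --context management-plane-context -n istio-system get secret cacerts \
  -o jsonpath='{.data.root-cert\.pem}' | base64 --decode | \
  openssl x509 -noout -fingerprint -sha256

kubectl --context remote-cluster-context -n istio-system get secret cacerts \
  -o jsonpath='{.data.root-cert\.pem}' | base64 --decode | \
  openssl x509 -noout -fingerprint -sha256

# Matching fingerprints mean both meshes chain to the same Root CA.
```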

In the previous section, we bounced the Istio control plane to pick up these intermediate certs. Again, this is being improved in future versions of Istio.

Multi-cluster mesh federation

Once trust has been established, Service Mesh Hub will start federating services so that they are accessible across clusters. Behind the scenes, Service Mesh Hub handles the networking (possibly through egress and ingress gateways, and possibly affected by user-defined traffic and access policies) and ensures requests to the service resolve and are routed to the right destination. Users can fine-tune which services are federated where by editing the virtual mesh.

For example, you can see what Istio ServiceEntry objects were created. On the management-plane-context cluster you can see:

kubectl --context management-plane-context \
get serviceentry -n istio-system
NAME   HOSTS   LOCATION        RESOLUTION   AGE
...    []      MESH_INTERNAL   DNS          62m
...    []      MESH_INTERNAL   DNS          62m
...    []      MESH_INTERNAL   DNS          62m

On the remote-cluster-context cluster, you can see:

kubectl --context remote-cluster-context \
get serviceentry -n istio-system
NAME   HOSTS   LOCATION        RESOLUTION   AGE
...    []      MESH_INTERNAL   DNS          63m
...    []      MESH_INTERNAL   DNS          63m
...    []      MESH_INTERNAL   DNS          63m
...    []      MESH_INTERNAL   DNS          63m
...    []      MESH_INTERNAL   DNS          63m

See it in action

Check out “Part Two” of the “Dive into Service Mesh Hub” video series to see this in action.

Next steps

At this point, you should be able to route traffic across your clusters with end-to-end mTLS. You can verify the certificates by following the same approach we used earlier in this guide.

Now that you have a single logical “virtual mesh” you can begin configuring it with an API that is aware of this VirtualMesh concept. In the next sections, you can apply access control and traffic policies.