Federated Trust and Identity

Gloo Mesh can help unify the root identity across multiple service mesh installations, so that any intermediates are signed by the same root CA and end-to-end mTLS between clusters and services can be established correctly.

Gloo Mesh will establish trust based on the trust model defined by the user: is there complete shared trust and a common root and identity? Or is there limited trust between clusters, with traffic gated by egress and ingress gateways?

In this guide, we'll explore the shared trust model between two Istio clusters and how Gloo Mesh simplifies and orchestrates the processes needed for this to happen.

Before you begin

Before working through this guide, be sure to review the assumptions and satisfy the prerequisites from the Guides top-level document.

Ensure you have the correct context names set in your environment:
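The commands in this guide assume one kubeconfig context per cluster, exported as environment variables. The context names below are placeholders; list yours with `kubectl config get-contexts` and substitute:

```shell
# Placeholder context names -- replace with the contexts for your own clusters.
export MGMT_CONTEXT=mgmt-cluster
export REMOTE_CONTEXT=remote-cluster
```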


Enforce mTLS

Apply the following YAML to both your management plane and remote cluster, assuming that istio-system is the root namespace for the Istio deployment:

apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT

kubectl apply --context $MGMT_CONTEXT -f - << EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF

kubectl apply --context $REMOTE_CONTEXT -f - << EOF
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
  name: "default"
  namespace: "istio-system"
spec:
  mtls:
    mode: STRICT
EOF

This is an Istio setting. For more, see: https://istio.io/latest/docs/concepts/security/

Verify identity in two clusters is different

We can inspect the certificate chain used to establish mTLS between Istio services in the mgmt-cluster and remote-cluster clusters and compare them to confirm they are different. One way to see the certificates is to use the openssl s_client tool with the -showcerts flag when calling between two services. Let's try it on the mgmt-cluster:

kubectl --context $MGMT_CONTEXT -n bookinfo exec -it deploy/reviews-v1 -c istio-proxy \
-- openssl s_client -showcerts -connect ratings.bookinfo:9080

You should see the certificate chain in the output, among other handshake-related information. The last certificate in the chain is the root certificate:

Certificate chain
 0 s:
   i:O = cluster.local
 1 s:O = cluster.local
   i:O = cluster.local

Run the same command in the remote-cluster and compare the output. For the reviews service running in the remote-cluster, we have to use deploy/reviews-v3, since the reviews-v1 deployment we used in the previous command doesn't exist on that cluster:

kubectl --context $REMOTE_CONTEXT -n bookinfo exec -it deploy/reviews-v3 -c istio-proxy \
-- openssl s_client -showcerts -connect ratings.bookinfo:9080

You should notice that the root certificates that signed the workload certificates are different. Let's unify those into a shared trust model of identity.
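Eyeballing two long openssl dumps is error prone. A small helper can pull out just the issuer of the last certificate in each chain for a side-by-side comparison. This is a sketch: `last_issuer` and `root_issuer` are helpers defined here for illustration, not part of Gloo Mesh or Istio, and it assumes the same bookinfo deployments used above.

```shell
# Helper (defined here for illustration): print the issuer of the last
# certificate in `openssl s_client -showcerts` output -- i.e. the root.
last_issuer() { awk -F'i:' '/^[[:space:]]*i:/ { iss = $2 } END { print iss }'; }

# Print the root issuer as seen from a workload in the given context.
# $1 = kube context, $2 = deployment to exec into
root_issuer() {
  kubectl --context "$1" -n bookinfo exec "deploy/$2" -c istio-proxy -- \
    sh -c 'openssl s_client -showcerts -connect ratings.bookinfo:9080 </dev/null 2>/dev/null' \
    | last_issuer
}

# Compare the two -- before the VirtualMesh is applied they will differ:
#   root_issuer $MGMT_CONTEXT reviews-v1
#   root_issuer $REMOTE_CONTEXT reviews-v3
```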

Creating a Virtual Mesh

Gloo Mesh uses the VirtualMesh custom resource to configure a virtual mesh: a logical grouping of one or more service meshes, federated according to parameters you define. Let's take a look at a VirtualMesh configuration that can unify our two service meshes and establish a shared trust model for identity:

apiVersion: networking.mesh.gloo.solo.io/v1
kind: VirtualMesh
metadata:
  name: virtual-mesh
  namespace: gloo-mesh
spec:
  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        generated: {}
  federation: {}
  meshes:
  - name: istiod-istio-system-mgmt-cluster
    namespace: gloo-mesh
  - name: istiod-istio-system-remote-cluster
    namespace: gloo-mesh

Understanding VirtualMesh

In the mtlsConfig section, we can see the parameters establishing shared identity and federation. In this case, we tell Gloo Mesh to generate a root CA (parameters such as TTL, key size, and org name can also be specified).

We could have also configured an existing Root CA by providing an existing secret:

  mtlsConfig:
    autoRestartPods: true
    shared:
      rootCertificateAuthority:
        secret:
          name: root-ca-name
          namespace: root-ca-namespace

See the section on User Provided Certificates below for details on how to format the certificate as a Kubernetes Secret.

We also specify the federation mode to be PERMISSIVE. This means we'll make services available between meshes. You can control this later by specifying different global service properties.

Lastly, we are creating the VirtualMesh with two different service meshes: istiod-istio-system-mgmt-cluster and istiod-istio-system-remote-cluster. We can have any meshes defined here that should be part of this virtual grouping and federation.

User Provided Certificates

A root certificate for a VirtualMesh must be supplied to Gloo Mesh as a Secret formatted as follows:

apiVersion: v1
kind: Secret
metadata:
  name: providedrootcert
  namespace: default
type: Opaque
data:
  key.pem: {private key file}
  root-cert.pem: {root CA certificate file}

Given a root certificate file root-cert.pem and its associated private key file key.pem, this secret can be created by running:

kubectl -n default create secret generic providedrootcert --from-file=root-cert.pem --from-file=key.pem

An example root certificate and private key file can be generated by following this guide and running make root-ca.

Note that the name/namespace of the provided root cert cannot be cacerts/istio-system as that is used by Gloo Mesh for carrying out the CSR (certificate signing request) procedure that unifies the trust root between Meshes in the VirtualMesh.

Applying VirtualMesh

If you saved this VirtualMesh CR to a file named demo-virtual-mesh.yaml, you can apply it like this:

kubectl --context $MGMT_CONTEXT apply -f demo-virtual-mesh.yaml

Notice the autoRestartPods: true in the mtlsConfig stanza. This instructs Gloo Mesh to restart the Istio pods in the relevant clusters.

This is due to a limitation of Istio: the Istio control plane picks up the CA used by Citadel at startup and does not reload it often enough. This is being improved in future versions of Istio.

If you wish to perform this step manually, set autoRestartPods: false and run the following:

meshctl mesh restart --mesh-name istiod-istio-system-mgmt-cluster

Note that after you bounce the control plane, it may still take time for the workload certs to be re-issued with the new CA. You can force the workloads to reload by bouncing them as well. For example, for the bookinfo sample running in the bookinfo namespace:

kubectl --context $MGMT_CONTEXT -n bookinfo delete po --all
kubectl --context $REMOTE_CONTEXT -n bookinfo delete po --all

Creating this resource will instruct Gloo Mesh to establish a shared root identity across the clusters in the virtual mesh, as well as federate the services. The next sections of this document explain some of the pieces of how this works.

Understanding the Shared Root Process

When we create the VirtualMesh CR, set the trust model to shared, and configure the Root CA parameters, Gloo Mesh will kick off the process to unify the identity to a shared root. First, Gloo Mesh will either create the Root CA specified (if generated is used) or use the supplied CA information.

Then Gloo Mesh will use a Certificate Request (CR) agent on each of the affected clusters to create a new key/cert pair that will form an intermediate CA used by the mesh on that cluster. It will then create a Certificate Request, represented by the CertificateRequest CR.

Gloo Mesh will sign the certificate with the Root CA specified in the VirtualMesh. At that point, we will want the mesh (Istio in this case) to pick up the new intermediate CA and start using that for its workloads.

Gloo Mesh Architecture

To verify, let's check the IssuedCertificates CR on the remote cluster:

kubectl --context $REMOTE_CONTEXT \
get issuedcertificates -n gloo-mesh

We should see this on the remote cluster:

NAME                                 AGE
istiod-istio-system-remote-cluster   3m15s

If we do the same on the mgmt-cluster, we should also see an IssuedCertificates entry there as well.

Lastly, let's verify that the cacerts secret was created in the istio-system namespace, where Istio's Citadel can use it:

kubectl --context $MGMT_CONTEXT get secret -n istio-system cacerts 

NAME      TYPE                                          DATA   AGE
cacerts   certificates.mesh.gloo.solo.io/issued_certificate   5      20s
kubectl --context $REMOTE_CONTEXT get secret -n istio-system cacerts 

NAME      TYPE                                          DATA   AGE
cacerts   certificates.mesh.gloo.solo.io/issued_certificate   5      5m3s
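To confirm that both clusters now share the same root, you can decode the root certificate from each cacerts secret and compare subjects. This sketch assumes the secret follows Istio's plugin-CA layout, with the root stored under a root-cert.pem key; `cacerts_root_subject` is a helper defined here, not a Gloo Mesh command.

```shell
# Print the subject of the root certificate stored in a cluster's cacerts
# secret; run for both contexts and compare -- the subjects should match.
cacerts_root_subject() {
  kubectl --context "$1" -n istio-system get secret cacerts \
    -o jsonpath='{.data.root-cert\.pem}' | base64 -d | openssl x509 -noout -subject
}

# Usage:
#   cacerts_root_subject $MGMT_CONTEXT
#   cacerts_root_subject $REMOTE_CONTEXT
```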

In the previous section, we bounced the Istio control plane to pick up these intermediate certs. Again, this is being improved in future versions of Istio.

Multi-cluster mesh federation

Once trust has been established, Gloo Mesh will start federating services so that they are accessible across clusters. Behind the scenes, Gloo Mesh will handle the networking – possibly through egress and ingress gateways, and possibly affected by user-defined traffic and access policies – and ensure requests to the service will resolve and be routed to the right destination. Users can fine-tune which services are federated where by editing the virtual mesh.

For example, you can see which Istio ServiceEntry objects were created. On the mgmt-cluster, you can see:

kubectl --context $MGMT_CONTEXT \
  get serviceentry -n istio-system
NAME                                                          HOSTS                                                           LOCATION        RESOLUTION   AGE
istio-ingressgateway.istio-system.svc.remote-cluster.global   [istio-ingressgateway.istio-system.svc.remote-cluster.global]   MESH_INTERNAL   DNS          6m2s
ratings.bookinfo.svc.remote-cluster.global                    [ratings.bookinfo.svc.remote-cluster.global]                    MESH_INTERNAL   DNS          6m2s
reviews.bookinfo.svc.remote-cluster.global                    [reviews.bookinfo.svc.remote-cluster.global]                    MESH_INTERNAL   DNS          6m2s

On the remote-cluster, you can see:

kubectl --context $REMOTE_CONTEXT \
get serviceentry -n istio-system
NAME                                                        HOSTS                                                         LOCATION        RESOLUTION   AGE
details.bookinfo.svc.mgmt-cluster.global                    [details.bookinfo.svc.mgmt-cluster.global]                    MESH_INTERNAL   DNS          2m5s
istio-ingressgateway.istio-system.svc.mgmt-cluster.global   [istio-ingressgateway.istio-system.svc.mgmt-cluster.global]   MESH_INTERNAL   DNS          5m18s
productpage.bookinfo.svc.mgmt-cluster.global                [productpage.bookinfo.svc.mgmt-cluster.global]                MESH_INTERNAL   DNS          55s
ratings.bookinfo.svc.mgmt-cluster.global                    [ratings.bookinfo.svc.mgmt-cluster.global]                    MESH_INTERNAL   DNS          7m2s
reviews.bookinfo.svc.mgmt-cluster.global                    [reviews.bookinfo.svc.mgmt-cluster.global]                    MESH_INTERNAL   DNS          90s
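With these ServiceEntries in place, a workload in one cluster can call a service in the other by its federated hostname. A quick smoke test, sketched as a helper defined here (assuming the upstream bookinfo sample, whose ratings container ships curl):

```shell
# Call the remote cluster's reviews service from the management cluster,
# using the federated hostname from the ServiceEntry above.
call_remote_reviews() {
  kubectl --context "$MGMT_CONTEXT" -n bookinfo exec deploy/ratings-v1 -c ratings -- \
    curl -s http://reviews.bookinfo.svc.remote-cluster.global:9080/reviews/0
}

# Usage: call_remote_reviews
# This should return a response served from the remote-cluster workload.
```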

See it in action

Check out “Part Two” of the “Dive into Gloo Mesh” video series (note that the video content reflects Gloo Mesh v0.6.1):

Next steps

At this point, you should be able to route traffic across your clusters with end-to-end mTLS. You can verify the certs following the same approach we did earlier in this section.

Now that you have a single logical “virtual mesh” you can begin configuring it with an API that is aware of this VirtualMesh concept. In the next sections, you can apply access control and traffic policies.