Overview

About the integration

SPIRE offers robust workload attestation capabilities that provide significantly more control over how, when, and whether identities are granted to workloads. The Solo distribution of Istio includes Enterprise support for using SPIRE node agents (over an Envoy SDS socket) to attest and grant identities to the ambient mesh workloads that they proxy. Istio can then use these identities for mTLS connections between ambient mesh workloads.

With the SPIRE integration, the ztunnel can act as a trusted spire-agent delegate on the node by using the SPIRE DelegatedIdentity API. Through this API, ztunnel directly leverages SPIRE’s existing node and workload attestation plugin framework, and requests workload certificates that SPIRE issues on the basis of those attestations.

How it works

Community Istio natively supports a SPIRE integration in the sidecar dataplane mode, which requires you to mount sockets or volumes in every workload. Gloo Mesh’s support for SPIRE in the ambient dataplane mode is much simpler: you only need to register your workloads with SPIRE, and then continue to label your service namespaces for the ambient dataplane mode as usual. Every ambient workload is automatically assigned a SPIRE-managed identity and uses that identity for mTLS, without the need to mount sockets or volumes in every workload.

In ambient, the Layer 4 node proxy, ztunnel, is responsible for capturing and encrypting all pod-to-pod traffic, and for managing workload identities. In the SPIRE-enabled ambient mode, ztunnel obtains those identities directly from the SPIRE agent that runs on the same node, and thus acts as a trusted delegate of SPIRE. Note that the SPIRE agent attests each workload individually; ztunnel only retrieves the resulting identities on the workloads’ behalf.

This allows the ambient dataplane to integrate with SPIRE’s delegation API as a trusted delegate, while leveraging SPIRE’s multifactor node and workload attestation plugin frameworks directly. The ambient dataplane can request workload certificates issued by SPIRE on the basis of those attestations.

Review the following sequence diagram that shows how SPIRE attestation in ambient works.

Figure: SPIRE attestation for ambient workloads

  1. The ztunnel on the same node as the ambient-enrolled workload pod obtains the PID of the workload container in the pod.
  2. The ztunnel then requests that the SPIRE agent on the same node attest the identity of the workload by using its PID.
  3. The SPIRE agent performs checks against the workload to determine whether it can grant the workload a trusted identity.
  4. If the checks succeed, the SPIRE agent returns a certificate (a SPIFFE X.509 SVID) to the ztunnel for the workload.
  5. The ztunnel enforces mTLS connections for the workload pod by using the SPIRE-issued certificate.
  6. The workload pod can then make mTLS-secured connections to other ambient workloads.

To learn more about SPIRE, and how the SPIRE integration with Istio works in Gloo Mesh, check out this blog post.

Single cluster

Deploy an ambient mesh that uses SPIRE workload identity attestation.

Set up tools

Before you begin, set up the following tools and save details in environment variables.

  1. Set your Enterprise-level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

      export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
      
  2. Save the name of your cluster, which you use in the SPIRE trust domain settings.

      export CLUSTER_NAME=<cluster_name>
      
  3. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions table, and save it in an environment variable. Note that the Gloo Operator installs the Solo distribution of Istio by default for the version that you specify, so neither the -solo image tag nor the repo key is required.

      export ISTIO_VERSION=1.26.0
      
  4. Make sure that you have the OpenSSL version of openssl, not LibreSSL. The openssl version must be at least 1.1.

    1. Check your openssl version. If the output shows OpenSSL 1.1 or later, you can skip the rest of this step. If you see LibreSSL in the output, continue to the next substep.
        openssl version
        
    2. Install the OpenSSL version (not LibreSSL). For example, you might use Homebrew.
        brew install openssl
        
    3. Review the output of the OpenSSL installation for the path of the binary file. You can add this directory to your PATH, or call the entire path whenever the following steps use an openssl command.
      • For example, openssl might be installed along the following path: /usr/local/opt/openssl@3/bin/
      • To make sure that your terminal uses this installed version of OpenSSL, and not the default LibreSSL, you can prefix commands with the full path, such as /usr/local/opt/openssl@3/bin/openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650..., or update your PATH as shown in the following example.
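
      For example, a minimal way to prefer the newly installed OpenSSL in your current shell session might look like the following sketch. The Homebrew path is an assumption; use the path from your own installation output.

        # Put the OpenSSL binary directory first on the PATH for this session
        export PATH="/usr/local/opt/openssl@3/bin:$PATH"
        # Confirm that the output now reports OpenSSL, not LibreSSL
        openssl version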

Prepare SPIRE certificates

Create the root and intermediate CA for the SPIRE server. The SPIRE server later uses these CAs to create certificates for any attested workloads.

  1. Create a directory for the certificates, and save the CA certificate configurations.

      mkdir -p certs/{root-ca,intermediate-ca}
    cd certs
    
    cat >root-ca.cnf <<EOF
    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req
    prompt = no
    
    [req_distinguished_name]
    CN = SPIRE Root CA
    
    [v3_req]
    keyUsage = critical, keyCertSign, cRLSign
    basicConstraints = critical, CA:true, pathlen:2
    subjectKeyIdentifier = hash
    EOF
    
    cat >intermediate-ca.cnf <<EOF
    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req
    prompt = no
    
    [req_distinguished_name]
    CN = SPIRE Intermediate CA
    
    [v3_req]
    keyUsage = critical, keyCertSign, cRLSign
    basicConstraints = critical, CA:true, pathlen:1
    subjectKeyIdentifier = hash
    EOF
      
  2. Create the root CA and intermediate CA, and sign the intermediate CA with the root CA.

      # Create root CA
    openssl genrsa -out root-ca/root-ca.key 2048
    openssl req -new -x509 -key root-ca/root-ca.key -out root-ca/root-ca.crt -config root-ca.cnf -days 3650
    
    # Create intermediate CA
    openssl genrsa -out intermediate-ca/ca.key 2048
    openssl req -new -key intermediate-ca/ca.key -out intermediate-ca/ca.csr -config intermediate-ca.cnf -subj "/CN=SPIRE Intermediate CA"
    
    # Sign CSR with root CA
    openssl x509 -req -in intermediate-ca/ca.csr -CA root-ca/root-ca.crt -CAkey root-ca/root-ca.key -CAcreateserial \
      -out intermediate-ca/ca.crt -days 1825 -extensions v3_req -extfile intermediate-ca.cnf
    
    # Create the bundle file (intermediate + root)
    cat intermediate-ca/ca.crt root-ca/root-ca.crt > intermediate-ca/ca-chain.pem
    
    # Create the root CA bundle
    cp root-ca/root-ca.crt root-ca-bundle.pem
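
    Optionally, you can verify that the intermediate CA chains back to the root CA before you create the secret. This check uses only the files that you created above:

      # Expected output: intermediate-ca/ca.crt: OK
      openssl verify -CAfile root-ca/root-ca.crt intermediate-ca/ca.crt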
      
  3. Create the spire-server namespace, and store the certificates in a secret that the SPIRE server later mounts.

      # Return to the directory that contains the certs directory
    cd ..
    
    kubectl create namespace spire-server
    kubectl create secret generic spiffe-upstream-ca \
      --from-file=tls.crt=certs/intermediate-ca/ca.crt \
      --from-file=tls.key=certs/intermediate-ca/ca.key \
      --from-file=bundle.crt=certs/intermediate-ca/ca-chain.pem \
      -n spire-server
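
    To confirm that the secret contains the expected tls.crt, tls.key, and bundle.crt keys, you can describe it:

      kubectl describe secret spiffe-upstream-ca -n spire-server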
      

Install SPIRE

Use Helm to deploy SPIRE in each cluster.

  1. Add and update the SPIRE Helm repo.

      helm repo add spire https://spiffe.github.io/helm-charts-hardened/
    helm repo update spire
      
  2. Create the SPIRE CRDs and SPIRE Helm releases.

      helm upgrade -i spire-crds spire/spire-crds \
    --namespace spire-server \
    --create-namespace \
    --version 0.5.0 \
    --wait
    
    helm upgrade -i spire spire/spire \
    --namespace spire-server \
    --version 0.24.2 \
    -f - <<EOF
    # Source https://github.com/solo-io/istio/blob/build/release-1.23/tools/install-spire.sh
    global:
      spire:
        trustDomain: ${CLUSTER_NAME}
    spire-agent:
        authorizedDelegates:
            - "spiffe://${CLUSTER_NAME}/ns/istio-system/sa/ztunnel"
        sockets:
            admin:
                enabled: true
                mountOnHost: true
            hostBasePath: /run/spire/agent/sockets
        tolerations:
          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    spire-server:
      upstreamAuthority:
        disk:
          enabled: true
          secret:
            create: false
            name: "spiffe-upstream-ca"
    
    spiffe-csi-driver:
        tolerations:
          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    EOF
      
  3. Verify that the SPIRE server is deployed.

      kubectl -n spire-server wait --for=condition=Ready pods --all
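
    Example output, in which the pod names vary by installation:

      pod/spire-agent-6pnvs condition met
      pod/spire-server-0 condition met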
      
  4. Configure SPIRE to issue certificates for the ambient mesh workloads.

      kubectl apply -f - <<EOF
    # Source https://github.com/solo-io/istio/blob/build/release-1.23/tools/install-spire.sh
    ---
    # ClusterSPIFFEID for ztunnel
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-ztunnel-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          app: "ztunnel"
    ---
    # ClusterSPIFFEID for waypoints
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-waypoint-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          istio.io/gateway-name: waypoint
    ---
    # ClusterSPIFFEID for workloads
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-ambient-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          istio.io/dataplane-mode: ambient
    EOF
      

Any workloads that you later deploy to the ambient mesh will now be able to get mTLS certificates from SPIRE.
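To spot-check these registrations, you can list the entries that the SPIRE server creates as pods come up. This sketch assumes the default names from the SPIRE Helm chart, such as the spire-server-0 pod, the spire-server container, and the server's default API socket path:

    kubectl exec -n spire-server spire-server-0 -c spire-server -- \
      /opt/spire/bin/spire-server entry show \
      -socketPath /tmp/spire-server/private/api.sock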

Install an ambient mesh with SPIRE enabled

Use the Gloo Operator to create the ambient mesh components, with the SPIRE integration enabled.

  1. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. For more information, see the Helm reference. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license in the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

      helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
      --version 0.2.4 \
      -n gloo-mesh \
      --create-namespace \
      --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
      
  2. Verify that the operator pod is running.

      kubectl get pods -n gloo-mesh -l app.kubernetes.io/name=gloo-operator
      

    Example output:

      gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
      
  3. Apply the following ConfigMap and ServiceMeshController resources for the Gloo Operator to enable the SPIRE integration and deploy an ambient mesh.

      kubectl apply -n gloo-mesh -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: gloo-extensions-config
      namespace: gloo-mesh
    data:
      values.istiod: |
        gateways:
          spire:
            workloads: true
      values.istio-ztunnel: |
        spire:
          enabled: true
    ---
    apiVersion: operator.gloo.solo.io/v1
    kind: ServiceMeshController
    metadata:
      name: managed-istio
      labels:
        app.kubernetes.io/name: managed-istio
    spec:
      dataplaneMode: Ambient
      installNamespace: istio-system
      version: ${ISTIO_VERSION}
    EOF
      
  4. Verify that the istiod control plane, Istio CNI, and ztunnel pods are running.

      kubectl get pods -n istio-system
      

    Example output:

      NAME                          READY   STATUS    RESTARTS   AGE
    istio-cni-node-6s5nk          1/1     Running   0          2m53s
    istio-cni-node-blpz4          1/1     Running   0          2m53s
    istiod-gloo-bb86b959f-msrg7   1/1     Running   0          2m45s
    istiod-gloo-bb86b959f-w29cm   1/1     Running   0          3m
    ztunnel-mx8nw                 1/1     Running   0          2m52s
    ztunnel-w8r6c                 1/1     Running   0          2m52s
      
  5. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml
      

Deploy services to the ambient mesh

Add apps to the ambient mesh. Note that whenever you label a workload to add it to your ambient mesh, the ztunnel on the same node requests that the SPIRE agent perform workload attestation. The certificate that SPIRE issues for the workload enables it to initiate mTLS communication within the mesh.
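
For example, the following sketch adds every workload in a hypothetical default namespace to the mesh, and then inspects the workload certificates that ztunnel holds, which are SPIRE-issued SVIDs when the integration is enabled. The istioctl ztunnel-config certificates subcommand is available in recent Istio versions with ambient support.

    # Label the namespace to add its workloads to the ambient mesh
    kubectl label namespace default istio.io/dataplane-mode=ambient
    
    # Inspect the certificates that a ztunnel pod holds for the workloads on its node
    ZTUNNEL_POD=$(kubectl get pods -n istio-system -l app=ztunnel -o jsonpath='{.items[0].metadata.name}')
    istioctl ztunnel-config certificates "$ZTUNNEL_POD.istio-system"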

Multicluster

Deploy a multicluster ambient mesh that uses SPIRE workload identity attestation.

Set up tools

Before you begin, set up the following tools and save details in environment variables.

  1. Set your Enterprise-level license for Gloo Mesh as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing.

      export GLOO_MESH_LICENSE_KEY=<enterprise_license_key>
      
  2. Save the names and contexts of your clusters. The example steps in this guide assume one management cluster where the Gloo Mesh management plane is installed, and two registered workload clusters where you want to install ambient meshes.

    1. Set the names of your clusters from your infrastructure provider.
        export MGMT_CLUSTER=<mgmt-cluster-name>
      export REMOTE_CLUSTER1=<workload-cluster1-name>
      export REMOTE_CLUSTER2=<workload-cluster2-name>
        
    2. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SAN are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
        export MGMT_CONTEXT=<management-cluster-context>
      export REMOTE_CONTEXT1=<workload-cluster1-context>
      export REMOTE_CONTEXT2=<workload-cluster2-context>
        
  3. Save the details for the version of the Solo distribution of Istio that you want to install.

    1. Choose the version of Istio that you want to install or upgrade to by reviewing the supported versions. In Gloo Mesh version 2.7 and later, multicluster setups require version 1.24.3 or later.
    2. Save the Solo distribution of Istio patch version.
        export ISTIO_VERSION=1.26.0
      export ISTIO_IMAGE=${ISTIO_VERSION}-solo
        
    3. Save the repo key for the minor version of the Solo distribution of Istio that you want to install. This is the 12-character hash at the end of the repo URL us-docker.pkg.dev/gloo-mesh/istio-<repo-key>, which you can find in the Istio images built by Solo.io support article.
        # 12-character hash at the end of the repo URL
      export REPO_KEY=<repo_key>
        
    4. Get the OS and architecture that you use on your machine.
        OS=$(uname | tr '[:upper:]' '[:lower:]' | sed -E 's/darwin/osx/')
      ARCH=$(uname -m | sed -E 's/aarch/arm/; s/x86_64/amd64/; s/armv7l/armv7/')
      echo $OS
      echo $ARCH
        
    5. Download the Solo distribution of Istio binary and install istioctl, which you use for multicluster linking and gateway commands.
        mkdir -p ~/.istioctl/bin
      curl -sSL https://storage.googleapis.com/istio-binaries-$REPO_KEY/$ISTIO_IMAGE/istioctl-$ISTIO_IMAGE-$OS-$ARCH.tar.gz | tar xzf - -C ~/.istioctl/bin
      chmod +x ~/.istioctl/bin/istioctl
      
      export PATH=${HOME}/.istioctl/bin:${PATH}
        
    6. Verify that the istioctl client runs the Solo distribution of Istio that you want to install.
        istioctl version --remote=false
        
      Example output:
        client version: 1.26.0-solo
        
  4. Make sure that you have the OpenSSL version of openssl, not LibreSSL. The openssl version must be at least 1.1.

    1. Check your openssl version. If the output shows OpenSSL 1.1 or later, you can skip the rest of this step. If you see LibreSSL in the output, continue to the next substep.
        openssl version
        
    2. Install the OpenSSL version (not LibreSSL). For example, you might use Homebrew.
        brew install openssl
        
    3. Review the output of the OpenSSL installation for the path of the binary file. You can add this directory to your PATH, or call the entire path whenever the following steps use an openssl command.
      • For example, openssl might be installed along the following path: /usr/local/opt/openssl@3/bin/
      • To make sure that your terminal uses this installed version of OpenSSL, and not the default LibreSSL, you can prefix commands with the full path, such as /usr/local/opt/openssl@3/bin/openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650..., or update your PATH to prefer this binary.

Prepare SPIRE certificates

Create a shared root CA and one intermediate CA for the SPIRE server in each cluster. The SPIRE servers later use these CAs to create certificates for any attested workloads.

  1. Create a directory for the certificates, and save the CA certificate configurations.

      mkdir -p certs/{root-ca,${REMOTE_CLUSTER1},${REMOTE_CLUSTER2}}
    cd certs
    
    cat >root-ca.cnf <<EOF
    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req
    prompt = no
    
    [req_distinguished_name]
    CN = SPIRE Root CA
    
    [v3_req]
    keyUsage = critical, keyCertSign, cRLSign
    basicConstraints = critical, CA:true, pathlen:2
    subjectKeyIdentifier = hash
    EOF
    
    cat >intermediate-ca.cnf <<EOF
    [req]
    distinguished_name = req_distinguished_name
    req_extensions = v3_req
    prompt = no
    
    [req_distinguished_name]
    CN = SPIRE Intermediate CA
    
    [v3_req]
    keyUsage = critical, keyCertSign, cRLSign
    basicConstraints = critical, CA:true, pathlen:1
    subjectKeyIdentifier = hash
    EOF
      
  2. Create the root CA. Then create one intermediate CA for each workload cluster, and use the root CA to sign both intermediate CAs.

      # Create root CA
    openssl genrsa -out root-ca/root-ca.key 2048
    openssl req -new -x509 -key root-ca/root-ca.key -out root-ca/root-ca.crt -config root-ca.cnf -days 3650
    
    # Create cluster 1 intermediate CA
    openssl genrsa -out ${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca.key 2048
    openssl req -new -key ${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca.key -out ${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca.csr -config intermediate-ca.cnf -subj "/CN=SPIRE ${REMOTE_CLUSTER1} CA"
    # Sign cluster 1 CSR with root CA
    openssl x509 -req -in ${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca.csr -CA root-ca/root-ca.crt -CAkey root-ca/root-ca.key -CAcreateserial \
      -out ${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca.crt -days 1825 -extensions v3_req -extfile intermediate-ca.cnf
    
    # Create cluster 2 intermediate CA
    openssl genrsa -out ${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca.key 2048
    openssl req -new -key ${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca.key -out ${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca.csr -config intermediate-ca.cnf -subj "/CN=SPIRE ${REMOTE_CLUSTER2} CA"
    # Sign cluster 2 CSR with root CA
    openssl x509 -req -in ${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca.csr -CA root-ca/root-ca.crt -CAkey root-ca/root-ca.key -CAcreateserial \
      -out ${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca.crt -days 1825 -extensions v3_req -extfile intermediate-ca.cnf
    
    # Create the bundle file for cluster 1 (intermediate + root)
    cat ${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca.crt root-ca/root-ca.crt > ${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca-chain.pem
    
    # Create the bundle file for cluster 2 (intermediate + root)
    cat ${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca.crt root-ca/root-ca.crt > ${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca-chain.pem
    
    # Create the root CA bundle
    cp root-ca/root-ca.crt root-ca-bundle.pem
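
    Optionally, you can verify that both intermediate CAs chain back to the shared root CA. This check uses only the files that you created above:

      # Expected output: one "OK" line per certificate
      openssl verify -CAfile root-ca/root-ca.crt \
        ${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca.crt \
        ${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca.crt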
      
  3. Create the spire-server namespace in each cluster, and store the certificates in secrets that the SPIRE servers later mount.

      # Return to the directory that contains the certs directory
    cd ..
    
    kubectl --context=${REMOTE_CONTEXT1} create namespace spire-server
    kubectl --context=${REMOTE_CONTEXT1} create secret generic spiffe-upstream-ca \
      --from-file=tls.crt=certs/${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca.crt \
      --from-file=tls.key=certs/${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca.key \
      --from-file=bundle.crt=certs/${REMOTE_CLUSTER1}/${REMOTE_CLUSTER1}-ca-chain.pem \
      -n spire-server
    
    kubectl --context=${REMOTE_CONTEXT2} create namespace spire-server
    kubectl --context=${REMOTE_CONTEXT2} create secret generic spiffe-upstream-ca \
      --from-file=tls.crt=certs/${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca.crt \
      --from-file=tls.key=certs/${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca.key \
      --from-file=bundle.crt=certs/${REMOTE_CLUSTER2}/${REMOTE_CLUSTER2}-ca-chain.pem \
      -n spire-server
      

Install SPIRE

Use Helm to deploy SPIRE in each cluster.

  1. Add and update the SPIRE Helm repo.

      helm repo add spire https://spiffe.github.io/helm-charts-hardened/
    helm repo update spire
      
  2. Create the SPIRE CRDs Helm release in each cluster.

      helm upgrade --kube-context=${REMOTE_CONTEXT1} -i spire-crds spire/spire-crds \
    --namespace spire-server \
    --create-namespace \
    --version 0.5.0 \
    --wait
    
    helm upgrade --kube-context=${REMOTE_CONTEXT2} -i spire-crds spire/spire-crds \
    --namespace spire-server \
    --create-namespace \
    --version 0.5.0 \
    --wait
      
  3. Create the SPIRE Helm release in each cluster.

      helm upgrade --kube-context=${REMOTE_CONTEXT1} -i spire spire/spire \
    --namespace spire-server \
    --version 0.24.2 \
    -f - <<EOF
    # Source https://github.com/solo-io/istio/blob/build/release-1.23/tools/install-spire.sh
    global:
      spire:
        trustDomain: ${REMOTE_CLUSTER1}
    spire-agent:
        authorizedDelegates:
            - "spiffe://${REMOTE_CLUSTER1}/ns/istio-system/sa/ztunnel"
        sockets:
            admin:
                enabled: true
                mountOnHost: true
            hostBasePath: /run/spire/agent/sockets
        tolerations:
          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    spire-server:
      upstreamAuthority:
        disk:
          enabled: true
          secret:
            create: false
            name: "spiffe-upstream-ca"
    
    spiffe-csi-driver:
        tolerations:
          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    EOF
    
    helm upgrade --kube-context=${REMOTE_CONTEXT2} -i spire spire/spire \
    --namespace spire-server \
    --version 0.24.2 \
    -f - <<EOF
    # Source https://github.com/solo-io/istio/blob/build/release-1.23/tools/install-spire.sh
    global:
      spire:
        trustDomain: ${REMOTE_CLUSTER2}
    spire-agent:
        authorizedDelegates:
            - "spiffe://${REMOTE_CLUSTER2}/ns/istio-system/sa/ztunnel"
        sockets:
            admin:
                enabled: true
                mountOnHost: true
            hostBasePath: /run/spire/agent/sockets
        tolerations:
          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    spire-server:
      upstreamAuthority:
        disk:
          enabled: true
          secret:
            create: false
            name: "spiffe-upstream-ca"
    
    spiffe-csi-driver:
        tolerations:
          - effect: NoSchedule
            operator: Exists
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoExecute
            operator: Exists
    EOF
      
  4. Verify that the SPIRE servers are deployed.

      kubectl --context=${REMOTE_CONTEXT1} -n spire-server wait --for=condition=Ready pods --all
    kubectl --context=${REMOTE_CONTEXT2} -n spire-server wait --for=condition=Ready pods --all
      
  5. Configure SPIRE to issue certificates for the ambient mesh workloads.

      cat >cluster-spiffe-id.yaml <<EOF
    # Source https://github.com/solo-io/istio/blob/build/release-1.23/tools/install-spire.sh
    ---
    # ClusterSPIFFEID for ztunnel
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-ztunnel-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          app: "ztunnel"
    ---
    # ClusterSPIFFEID for waypoints
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-waypoint-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          istio.io/gateway-name: waypoint
    ---
    # ClusterSPIFFEID for workloads
    apiVersion: spire.spiffe.io/v1alpha1
    kind: ClusterSPIFFEID
    metadata:
      name: istio-ambient-reg
    spec:
      spiffeIDTemplate: "spiffe://{{ .TrustDomain }}/ns/{{ .PodMeta.Namespace }}/sa/{{ .PodSpec.ServiceAccountName }}"
      podSelector:
        matchLabels:
          istio.io/dataplane-mode: ambient
    EOF
    
    kubectl --context=${REMOTE_CONTEXT1} apply -f cluster-spiffe-id.yaml
    kubectl --context=${REMOTE_CONTEXT2} apply -f cluster-spiffe-id.yaml
      

Any workloads that you later deploy to the ambient mesh will now be able to get mTLS certificates from SPIRE.

Create a shared root of trust for istiod

Each cluster in the multicluster setup must share a common root of trust. You can achieve this by providing a root certificate signed by a PKI provider, or a custom root certificate that you create for this purpose. The root certificate signs a unique intermediate CA certificate for each cluster.
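
For example, one way to follow the same pattern as the SPIRE certificates above is to sign one istiod intermediate CA per cluster with a common root, and to provide each intermediate as the cacerts secret that Istio reads as its plugin CA. The istiod directory and file names in this sketch are assumptions; you can reuse the openssl steps from the SPIRE section to generate the files.

    # Cluster 1: provide the istiod intermediate CA as the Istio plugin CA.
    # The chain file contains the intermediate certificate followed by the root.
    kubectl --context=${REMOTE_CONTEXT1} create namespace istio-system  # skip if it already exists
    kubectl --context=${REMOTE_CONTEXT1} create secret generic cacerts -n istio-system \
      --from-file=ca-cert.pem=istiod/${REMOTE_CLUSTER1}-ca.crt \
      --from-file=ca-key.pem=istiod/${REMOTE_CLUSTER1}-ca.key \
      --from-file=root-cert.pem=root-ca/root-ca.crt \
      --from-file=cert-chain.pem=istiod/${REMOTE_CLUSTER1}-ca-chain.pem
    
    # Repeat with ${REMOTE_CONTEXT2} and the cluster 2 intermediate CA.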

Install ambient meshes with SPIRE enabled

In each cluster, use the Gloo Operator to create the ambient mesh components, with the SPIRE integration enabled.

  1. Install the Gloo Operator to the gloo-mesh namespace. This operator deploys and manages your Istio installation. For more information, see the Helm reference. Note that if you already installed Gloo Mesh, you can optionally reference the secret that Gloo Mesh automatically creates for your license in the --set manager.env.SOLO_ISTIO_LICENSE_KEY_SECRET_REF=gloo-mesh/license-keys flag instead.

      helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
      --version 0.2.4 \
      -n gloo-mesh \
      --create-namespace \
      --kube-context ${REMOTE_CONTEXT1} \
      --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
    
    helm install gloo-operator oci://us-docker.pkg.dev/solo-public/gloo-operator-helm/gloo-operator \
      --version 0.2.4 \
      -n gloo-mesh \
      --create-namespace \
      --kube-context ${REMOTE_CONTEXT2} \
      --set manager.env.SOLO_ISTIO_LICENSE_KEY=${GLOO_MESH_LICENSE_KEY}
      
  2. Verify that the operator pod is running.

      kubectl get pods -n gloo-mesh --context ${REMOTE_CONTEXT1} -l app.kubernetes.io/name=gloo-operator
    kubectl get pods -n gloo-mesh --context ${REMOTE_CONTEXT2} -l app.kubernetes.io/name=gloo-operator
      

    Example output:

      gloo-operator-78d58d5c7b-lzbr5     1/1     Running   0          48s
      
  3. Apply the following ConfigMap and ServiceMeshController resources for the Gloo Operator to enable the SPIRE integration and deploy an ambient mesh.

      kubectl apply -n gloo-mesh --context ${REMOTE_CONTEXT1} -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: gloo-extensions-config
      namespace: gloo-mesh
    data:
      values.istiod: |
        gateways:
          spire:
            workloads: true
      values.istio-ztunnel: |
        spire:
          enabled: true
    ---
    apiVersion: operator.gloo.solo.io/v1
    kind: ServiceMeshController
    metadata:
      name: managed-istio
      labels:
        app.kubernetes.io/name: managed-istio
    spec:
      cluster: ${REMOTE_CLUSTER1}
      network: ${REMOTE_CLUSTER1}
      dataplaneMode: Ambient
      installNamespace: istio-system
      version: ${ISTIO_VERSION}
    EOF
    
    kubectl apply -n gloo-mesh --context ${REMOTE_CONTEXT2} -f - <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: gloo-extensions-config
      namespace: gloo-mesh
    data:
      values.istiod: |
        gateways:
          spire:
            workloads: true
      values.istio-ztunnel: |
        spire:
          enabled: true
    ---
    apiVersion: operator.gloo.solo.io/v1
    kind: ServiceMeshController
    metadata:
      name: managed-istio
      labels:
        app.kubernetes.io/name: managed-istio
    spec:
      cluster: ${REMOTE_CLUSTER2}
      network: ${REMOTE_CLUSTER2}
      dataplaneMode: Ambient
      installNamespace: istio-system
      version: ${ISTIO_VERSION}
    EOF
      
  4. Verify that the istiod control plane, Istio CNI, and ztunnel pods are running.

      kubectl get pods -n istio-system --context ${REMOTE_CONTEXT1}
    kubectl get pods -n istio-system --context ${REMOTE_CONTEXT2}
      

    Example output:

      NAME                          READY   STATUS    RESTARTS   AGE
    istio-cni-node-6s5nk          1/1     Running   0          2m53s
    istio-cni-node-blpz4          1/1     Running   0          2m53s
    istiod-gloo-bb86b959f-msrg7   1/1     Running   0          2m45s
    istiod-gloo-bb86b959f-w29cm   1/1     Running   0          3m
    ztunnel-mx8nw                 1/1     Running   0          2m52s
    ztunnel-w8r6c                 1/1     Running   0          2m52s
      

Create east-west gateways so that traffic requests can be routed cross-cluster. Then, link clusters to enable cross-cluster service discovery.

  1. Apply the CRDs for the Kubernetes Gateway API to your cluster, which are required to create components such as waypoint proxies for L7 traffic policies, gateways with the Gateway resource, and more.

      kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml --context ${REMOTE_CONTEXT1}
    kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml --context ${REMOTE_CONTEXT2}
      
  2. Create an east-west gateway in the istio-eastwest namespace. An east-west gateway facilitates traffic between services in each cluster in your multicluster mesh. To take a look at the Gateway resource that this command creates, you can include the --generate flag in the command.

      kubectl create namespace istio-eastwest --context ${REMOTE_CONTEXT1}
    istioctl multicluster expose --namespace istio-eastwest --context ${REMOTE_CONTEXT1}
    
    kubectl create namespace istio-eastwest --context ${REMOTE_CONTEXT2}
    istioctl multicluster expose --namespace istio-eastwest --context ${REMOTE_CONTEXT2}
      
  3. Verify that the east-west gateways are successfully deployed.

      kubectl get pods -n istio-eastwest --context ${REMOTE_CONTEXT1}
    kubectl get pods -n istio-eastwest --context ${REMOTE_CONTEXT2}
      
  4. Using the names of the cluster contexts, link the clusters so that they can communicate, such as with the example command that follows. Note that you can link the clusters either bi-directionally or asymmetrically. In a standard bi-directional setup, services in any of the linked clusters can send requests to and receive requests from services in any of the other linked clusters. In an asymmetrical setup, you allow one cluster to send requests to another cluster, but the other cluster cannot send requests back to the first cluster.
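
    For example, a standard bi-directional link between the two workload clusters might look like the following sketch, which assumes the istioctl multicluster link subcommand that the Solo distribution of Istio provides alongside the expose subcommand used above:

      istioctl multicluster link --contexts=${REMOTE_CONTEXT1},${REMOTE_CONTEXT2}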

Deploy services to the multicluster mesh

Add apps to the ambient mesh. This includes labeling services so that they are included in the ambient mesh, and making the services available across your linked cluster setup.

Note that whenever you label a workload to add it to your ambient mesh, the ztunnel on the same node requests that the SPIRE agent perform workload attestation. The certificate that SPIRE issues for the workload enables it to initiate mTLS communication within the mesh.
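
For example, the following sketch labels a hypothetical httpbin namespace for ambient mode in both clusters, and then marks the service as global. The solo.io/service-scope=global label is an assumption based on how the Solo distribution typically exposes services across linked clusters; check the multicluster routing documentation for your version for the exact label.

    # Add the namespace's workloads to the ambient mesh in each cluster
    kubectl label namespace httpbin istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT1}
    kubectl label namespace httpbin istio.io/dataplane-mode=ambient --context ${REMOTE_CONTEXT2}
    
    # Mark the service as global so that it is discoverable across the linked clusters
    kubectl label service httpbin -n httpbin solo.io/service-scope=global --context ${REMOTE_CONTEXT1}
    kubectl label service httpbin -n httpbin solo.io/service-scope=global --context ${REMOTE_CONTEXT2}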