About

Vault is a popular open source secret management tool that you can use to set up a secure public key infrastructure (PKI) and manage TLS certificates. In this setup, you install a Vault instance in the Gloo management cluster to serve as the root certificate authority (CA). The root CA certificate and private key that are stored in Vault are used to sign and issue Istio intermediate CA certificates.

To enable Gloo Mesh to automatically derive intermediate CA certificates from the root CA in Vault, istiod is injected with the istiod-agent sidecar, which authenticates and sends certificate signing requests to Vault. Instead of writing the intermediate CA certificate and key to the cacerts Kubernetes secret, the istiod-agent stores the credentials in memory, which adds a layer of security: when the istiod-agent sidecar is deleted, the private key is deleted with it. Istiod reads the intermediate CA key directly from the istiod-agent memory when it signs leaf certificates for the workloads in the service mesh.

For more information about this approach, see Option 4: Integrate with Vault.

Before you begin

  1. Complete the multicluster getting started guide to set up the following testing environment.

    • Three clusters along with environment variables for the clusters and their Kubernetes contexts.
    • The Gloo meshctl CLI, along with other CLI tools such as kubectl and istioctl.
    • The Gloo management server in the management cluster, and the Gloo agents in the workload clusters.
    • Istio installed in the workload clusters.
    • A simple Gloo workspace setup.
  2. Install Bookinfo and other sample apps.
  3. The openssl version must be at least 1.1.

    1. Check your openssl version. If you see LibreSSL in the output, continue to the next step.
      openssl version
    2. Install OpenSSL (not LibreSSL). For example, you might use Homebrew.
      brew install openssl
    3. Review the output of the OpenSSL installation for the path of the binary file. You can add that directory to your PATH, or call the full path whenever the following steps use an openssl command.
      • For example, openssl might be installed along the following path: /usr/local/opt/openssl@3/bin/
      • To run commands, you can append the path so that your terminal uses this installed version of OpenSSL, and not the default LibreSSL. /usr/local/opt/openssl@3/bin/openssl req -new -newkey rsa:4096 -x509 -sha256 -days 3650...
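      • To export the binary to your PATH instead, you can prepend the directory (assuming the example path above) and verify that openssl now resolves to OpenSSL rather than LibreSSL:
        export PATH="/usr/local/opt/openssl@3/bin:$PATH"
        openssl version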
  4. Save the kubeconfig contexts for your clusters. Run kubectl config get-contexts, look for your cluster in the CLUSTER column, and get the context name in the NAME column. Note: Do not use context names with underscores. The generated certificate that connects workload clusters to the management cluster uses the context name as a SAN specification, and underscores in SANs are not FQDN compliant. You can rename a context by running kubectl config rename-context "<oldcontext>" <newcontext>.
    export MGMT_CLUSTER=<mgmt-cluster-name>
    export REMOTE_CLUSTER1=<remote-cluster-1-name>
    export REMOTE_CLUSTER2=<remote-cluster-2-name>
    export MGMT_CONTEXT=<management-cluster-context>
    export REMOTE_CONTEXT1=<remote-cluster-1-context>
    export REMOTE_CONTEXT2=<remote-cluster-2-context>
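
    For example, with the three-cluster setup from the getting started guide, the exports might look like the following (your cluster and context names can differ):

    export MGMT_CLUSTER=mgmt
    export REMOTE_CLUSTER1=cluster1
    export REMOTE_CLUSTER2=cluster2
    export MGMT_CONTEXT=mgmt
    export REMOTE_CONTEXT1=cluster1
    export REMOTE_CONTEXT2=cluster2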

Install and set up Vault

  1. If you have not done so already, add the HashiCorp Helm repository.

    helm repo add hashicorp https://helm.releases.hashicorp.com --kube-context ${MGMT_CONTEXT}
    helm repo update
  2. Generate a root CA certificate and key for Vault. You can update the -subj field to your own organization or domain.

    openssl req -new -newkey rsa:4096 -x509 -sha256 \
      -days 3650 -nodes -out root-cert.pem -keyout root-key.pem \
      -subj "/O=solo.io"
  3. In the management cluster, install Vault in dev mode and enable debugging logs. For more information about setting up Vault in Kubernetes, see the Vault docs.

    helm install -n vault vault hashicorp/vault --set "injector.enabled=false" --set "server.logLevel=debug" --set "server.dev.enabled=true" --set "server.service.type=LoadBalancer" --kube-context="${MGMT_CONTEXT}" --create-namespace
    kubectl --context $MGMT_CONTEXT wait --for condition=Ready -n vault pod/vault-0

    Example output:

    pod/vault-0 condition met
  4. Enable the Vault userpass auth method and create an admin user.

    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c 'vault auth enable userpass'
    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c 'vault write auth/userpass/users/admin password=admin policies=admins'

    Example output:

    Success! Enabled userpass auth method at: userpass/
    Success! Data written to: auth/userpass/users/admin
  5. Enable Kubernetes authentication in Vault at a separate path for each workload cluster.

    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault auth enable -path=kube-${REMOTE_CLUSTER1}-mesh-auth kubernetes"
    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault auth enable -path=kube-${REMOTE_CLUSTER2}-mesh-auth kubernetes"

    Example output:

    Success! Enabled kubernetes auth method at: kube-cluster1-mesh-auth/
    Success! Enabled kubernetes auth method at: kube-cluster2-mesh-auth/
  6. Get the API token for the istiod service account in each workload cluster, and save the tokens in the SA_TOKEN_C1 and SA_TOKEN_C2 environment variables. Note that depending on your Kubernetes version, the token is automatically created for you or must be created manually.
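    For example, on Kubernetes 1.24 and later, token secrets are no longer created automatically for service accounts, so you can create the tokens manually. The following sketch assumes that the istiod service account is named istiod-service-account in the istio-system namespace:

      SA_TOKEN_C1=$(kubectl --context $REMOTE_CONTEXT1 -n istio-system create token istiod-service-account)
      SA_TOKEN_C2=$(kubectl --context $REMOTE_CONTEXT2 -n istio-system create token istiod-service-account)

    On earlier Kubernetes versions, you can instead read the token from the service account's automatically created secret, as the example script at the end of this guide does.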

  7. Get the CA certificate for the service account.

    SA_CA_CRT_C1=$(kubectl config view --raw -o json | jq -r --arg wc $REMOTE_CONTEXT1 '. as $c | $c.contexts[] | select(.name == $wc) as $context | $c.clusters[] | select(.name == $context.context.cluster) | .cluster."certificate-authority-data"'| base64 -d)
    SA_CA_CRT_C2=$(kubectl config view --raw -o json | jq -r --arg wc $REMOTE_CONTEXT2 '. as $c | $c.contexts[] | select(.name == $wc) as $context | $c.clusters[] | select(.name == $context.context.cluster) | .cluster."certificate-authority-data"'| base64 -d)
    echo $SA_CA_CRT_C1
    echo $SA_CA_CRT_C2

    Example output:

    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
  8. Get the address of each workload cluster.

    K8S_ADDR_C1=$(kubectl config view -o json | jq -r --arg wc $REMOTE_CONTEXT1 '. as $c | $c.contexts[] | select(.name == $wc) as $context | $c.clusters[] | select(.name == $context.context.cluster) | .cluster.server ')
    K8S_ADDR_C2=$(kubectl config view -o json | jq -r --arg wc $REMOTE_CONTEXT2 '. as $c | $c.contexts[] | select(.name == $wc) as $context | $c.clusters[] | select(.name == $context.context.cluster) | .cluster.server ')
    echo $K8S_ADDR_C1
    echo $K8S_ADDR_C2

    Example output:

    https://34.xxx.xxx.xxx
    https://35.xxx.xxx.xxx
  9. Set the Kubernetes auth config for Vault to the mounted service account token.

    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault write auth/kube-${REMOTE_CLUSTER1}-mesh-auth/config \
    token_reviewer_jwt="$SA_TOKEN_C1" \
    kubernetes_host="$K8S_ADDR_C1" \
    kubernetes_ca_cert='$SA_CA_CRT_C1' \
    disable_local_ca_jwt="true" \
    issuer='https://kubernetes.default.svc.cluster.local'"
    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault write auth/kube-${REMOTE_CLUSTER2}-mesh-auth/config \
    token_reviewer_jwt="$SA_TOKEN_C2" \
    kubernetes_host="$K8S_ADDR_C2" \
    kubernetes_ca_cert='$SA_CA_CRT_C2' \
    disable_local_ca_jwt="true" \
    issuer='https://kubernetes.default.svc.cluster.local'"

    Example output:

    Success! Data written to: auth/kube-cluster1-mesh-auth/config
    Success! Data written to: auth/kube-cluster2-mesh-auth/config
  10. Bind the istiod service account to the Vault PKI policy.

    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault write \
    auth/kube-${REMOTE_CLUSTER1}-mesh-auth/role/gen-int-ca-istio-${REMOTE_CLUSTER1}-mesh \
    bound_service_account_names=istiod-service-account \
    bound_service_account_namespaces=istio-system \
    policies=gen-int-ca-istio-${REMOTE_CLUSTER1}-mesh \
    ttl=720h"
    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault write \
    auth/kube-${REMOTE_CLUSTER2}-mesh-auth/role/gen-int-ca-istio-${REMOTE_CLUSTER2}-mesh \
    bound_service_account_names=istiod-service-account \
    bound_service_account_namespaces=istio-system \
    policies=gen-int-ca-istio-${REMOTE_CLUSTER2}-mesh \
    ttl=720h"

    Example output:

    Success! Data written to: auth/kube-cluster1-mesh-auth/role/gen-int-ca-istio-cluster1-mesh
    Success! Data written to: auth/kube-cluster2-mesh-auth/role/gen-int-ca-istio-cluster2-mesh
  11. Initialize the Vault PKI.

    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c 'vault secrets enable pki'

    Example output:

    Success! Enabled the pki secrets engine at: pki/
  12. Set the Vault CA to the pem_bundle of the root certificate and key that you generated earlier.

    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault write -format=json pki/config/ca pem_bundle=\"$(cat root-key.pem root-cert.pem)\""

    Example output:

    {
      "request_id": "2aa29fd6-9fa3-3edd-2f8b-2a0e4c007e8c",
      "lease_id": "",
      "lease_duration": 0,
      "renewable": false,
      "data": {
        "imported_issuers": null,
        "imported_keys": null,
        "mapping": {
          "aa877391-b4f2-045d-63da-33521c91dc68": "8257875c-4016-f28e-288b-ecca33065097"
        }
      },
      "warnings": null
    }
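
    To confirm that the root CA was imported, you can optionally read the certificate back from the PKI engine:

    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault read -field=certificate pki/cert/ca"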
  13. Enable a Vault intermediate cert path for each workload cluster.

    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault secrets enable -path=pki_int_${REMOTE_CLUSTER1} pki"
    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault secrets enable -path=pki_int_${REMOTE_CLUSTER2} pki"

    Example output:

    Success! Enabled the pki secrets engine at: pki_int_cluster1/
    Success! Enabled the pki secrets engine at: pki_int_cluster2/
  14. Set the policy for the intermediate cert path. Because the command is wrapped in single quotes, the environment variables are not expanded: replace ${REMOTE_CLUSTER1} and ${REMOTE_CLUSTER2} with your actual cluster names before you run the commands.

    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c 'vault policy write gen-int-ca-istio-${REMOTE_CLUSTER1}-mesh - <<EOF
    path "pki_int_${REMOTE_CLUSTER1}/*" {
    capabilities = ["create", "read", "update", "delete", "list"]
    }
    path "pki/cert/ca" {
    capabilities = ["read"]
    }
    path "pki/root/sign-intermediate" {
    capabilities = ["create", "read", "update", "list"]
    }
    EOF'
    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c 'vault policy write gen-int-ca-istio-${REMOTE_CLUSTER2}-mesh - <<EOF
    path "pki_int_${REMOTE_CLUSTER2}/*" {
    capabilities = ["create", "read", "update", "delete", "list"]
    }
    path "pki/cert/ca" {
    capabilities = ["read"]
    }
    path "pki/root/sign-intermediate" {
    capabilities = ["create", "read", "update", "list"]
    }
    EOF'

    Example output:

    Success! Uploaded policy: gen-int-ca-istio-cluster1-mesh
    Success! Uploaded policy: gen-int-ca-istio-cluster2-mesh

Now that Vault is set up in your clusters, you can use Vault as an intermediate CA provider. If you see any errors, review the troubleshooting section.

Update Gloo RBAC permissions

The istiod-agent sidecar in each cluster must be able to read and modify Gloo resources. To grant the necessary RBAC permissions, update the Gloo agent Helm release, either by adding the following snippet to the YAML configuration file in your GitOps pipeline or directly with the helm upgrade command.

  1. Follow the steps to Get your Helm chart values for the Gloo agent deployment.

  2. Add the following code to your Helm values file.
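    The following snippet is adapted from the example script at the end of this guide. It assumes that the istiod service account is named istiod-service-account in the istio-system namespace; the example script uses the name istiod instead, so match the name to your Istio installation.

    istiodSidecar:
      createRoleBinding: true
      istiodServiceAccount:
        name: istiod-service-account
        namespace: istio-system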

  3. Set the Gloo Mesh Enterprise version. This example uses the latest version. You can find other versions in the Changelog documentation. Append -fips for a FIPS-compliant image, such as 2.7.0-beta1-fips. Do not include v before the version number.

    export GLOO_VERSION=2.7.0-beta1
  4. Make sure that you have the Helm repo for the Gloo agent. Note that you might have a different name for the Helm repo, such as gloo-mesh-agent.

    helm repo add gloo-agent https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent --kube-context ${REMOTE_CONTEXT1}
    helm repo update --kube-context ${REMOTE_CONTEXT1}
  5. Upgrade the Gloo agent Helm chart with the required RBAC permissions. Note that you might have a different name for the Helm repo, such as gloo-mesh-agent.

    helm upgrade -n gloo-mesh gloo-agent gloo-agent/gloo-mesh-agent --kube-context="${REMOTE_CONTEXT1}" --version=$GLOO_VERSION -f values-data-plane-env.yaml
  6. Repeat these steps for each workload cluster.

Modify istiod

So far, you set up the Gloo agent on each cluster to use Vault to obtain the intermediate CA. Now, you can modify your Istio installation to support fetching and dynamically reloading the intermediate CA from Vault.

These steps vary based on your Istio installation method.
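
If you installed Istio manually, the vault-modify-istiod function in the example script at the end of this guide shows one possible approach: it patches the istiod deployment with the gloo-mesh-istiod-agent init container and sidecar, and replaces the cacerts secret volume with an in-memory emptyDir so that the intermediate CA key is never written to disk.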

Enable Vault as an intermediate CA provider

Now, federate the two meshes together by using Gloo with Vault to establish trusted communication across the service meshes. The following steps use the REMOTE_CLUSTER and REMOTE_CONTEXT environment variables to refer to the workload cluster that you are currently configuring. Set them for each cluster in turn, such as export REMOTE_CLUSTER=$REMOTE_CLUSTER1 REMOTE_CONTEXT=$REMOTE_CONTEXT1.

  1. Get the endpoint for the Vault service in the management cluster.
    export VAULT_ENDPOINT="http://$(kubectl get svc/vault -n vault --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].*}')"
    echo $VAULT_ENDPOINT
    Example output:
    http://35.xxx.xxx.xxx
  2. Get the name of your Istio mesh.
    export MESH=$(kubectl get meshes -n gloo-mesh --context $REMOTE_CONTEXT -o jsonpath='{.items[*].metadata.name}')
    echo $MESH
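    Example output, where the mesh name typically follows the istiod-<namespace>-<cluster> pattern that the example script uses (your mesh name might differ):

    istiod-istio-system-cluster1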
  3. Create a root trust policy for the workload cluster so that the istiod agent on the workload cluster knows how to communicate with Vault on the management cluster. Note that the selector in this policy matches istiod deployments that have the vault: ${REMOTE_CLUSTER} label, which the example script at the end of this guide applies with kubectl label deployment istiod -n istio-system vault=${REMOTE_CLUSTER}. For more information about root trust policies, see the API docs.
    kubectl apply --context ${REMOTE_CONTEXT} -f - << EOF
    apiVersion: admin.gloo.solo.io/v2
    kind: RootTrustPolicy
    metadata:
      name: ${REMOTE_CLUSTER}
      namespace: gloo-mesh
    spec:
      applyToMeshes:
      - istio:
          clusterSelector:
            mesh: ${MESH}
          namespace: istio-system
          selector:
            app: istiod
            vault: ${REMOTE_CLUSTER}
      config:
        agentCa:
          vault:
            caPath: pki/root/sign-intermediate
            csrPath: pki_int_${REMOTE_CLUSTER}/intermediate/generate/exported
            server: $VAULT_ENDPOINT:8200
            kubernetesAuth:
              mountPath: /v1/auth/kube-${REMOTE_CLUSTER}-mesh-auth
              role: gen-int-ca-istio-${REMOTE_CLUSTER}-mesh
    EOF
  4. Restart the istiod deployment. Note that you cannot update Istio resources until istiod is running again.
    kubectl rollout restart deployment -l app=istiod -n istio-system --context ${REMOTE_CONTEXT}
  5. Repeat the previous steps for each workload cluster with Istio.

Verify traffic uses the root CA

Now that the Istio control plane is patched with the gloo-mesh-istiod-agent sidecar, you can verify that all of the service mesh traffic is secured by using the root CA that you generated for Vault in the previous section.

To verify, you can check the root-cert.pem in the istio-ca-root-cert config map that Istio propagates for the initial TLS connection. The following example checks the propagated root-cert.pem against the local certificate that you supplied to Vault in the previous section.

  1. Check the Vault version that the management cluster runs.

    kubectl --context="${MGMT_CONTEXT}" exec -n vault vault-0 -- /bin/sh -c "vault version"

    Example output:

    Vault v1.11.3 (17250b25303c6418c283c95b1d5a9c9f16174fe8), built 2022-08-26T10:27:10Z
  2. Check the root trust policy for errors.

    kubectl describe RootTrustPolicy ${REMOTE_CLUSTER} -n gloo-mesh --context ${REMOTE_CONTEXT}
  3. Check the mesh for errors.

    kubectl describe mesh ${MESH} -n gloo-mesh --context ${REMOTE_CONTEXT}
  4. From your terminal, navigate to the same directory as the root-cert.pem file that you previously created. Or, if you are using an existing Vault deployment, save the root certificate as root-cert.pem.

  5. Check the difference between the root certificate that istiod uses and the Vault root certificate. If installed correctly, the files are the same.

    kubectl --context=$REMOTE_CONTEXT get cm -n bookinfo istio-ca-root-cert -ojson | jq -r  '.data["root-cert.pem"]' | diff -q root-cert.pem -
  6. If you see that the files differ, check the istiod logs.

    kubectl logs -n istio-system --context ${REMOTE_CONTEXT} $(kubectl get pods -n istio-system -l app=istiod --context ${REMOTE_CONTEXT} | cut -d" " -f1 | tail -1) > istiod-logs.txt
  7. Check the issued certificates for errors.

    kubectl describe issuedcertificates -n istio-system --context ${REMOTE_CONTEXT}

For more troubleshooting steps, see Troubleshoot errors with the Vault setup or Debug Istio.

Rotate certificates for Istio workloads

When certificates are issued, pods that are managed by Istio must be restarted to ensure they pick up the new certificates. The certificate issuer creates a PodBounceDirective, which contains the namespaces and labels of the pods that must be restarted. For more information about how certificate rotation works in Istio, review the video series in this blog post.

Note: To avoid potential downtime for your apps in production, disable the PodBounceDirective feature by setting autoRestartPods to false. Then, control pod restarts in another way, such as a rolling update.

  1. Get your root trust policies.

    kubectl get roottrustpolicy --context ${MGMT_CONTEXT} -A
  2. In the root trust policy, verify that autoRestartPods is not set to true. If it is, change the value to false.

    kubectl edit roottrustpolicy --context ${MGMT_CONTEXT} -n <namespace> <root-trust-policy>

    Example output:

    apiVersion: admin.gloo.solo.io/v2
    kind: RootTrustPolicy
    metadata:
      name: istio-ingressgateway
      namespace: gloo-mesh
    spec:
      config:
        autoRestartPods: false
        ...
  3. If you updated the autoRestartPods field:

    1. Ensure that the pods pick up the new certificates by restarting the istiod pod in each remote cluster.
      kubectl --context ${REMOTE_CONTEXT} -n istio-system patch deployment istiod \
        -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
    2. Restart your app pods that are managed by Istio, such as by using a rolling update strategy, as shown in the example after this list.
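    For example, you might trigger a rolling restart of the sample app deployments in the bookinfo namespace (assuming the Bookinfo apps from the setup guide):
      kubectl rollout restart deployment -n bookinfo --context ${REMOTE_CONTEXT}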

Troubleshoot errors with the Vault setup

If you have errors with the steps to install Vault, review the following table.

  • Error: Error from server (NotFound): pods "vault-0" not found or Error from server (BadRequest): pod vault-0 does not have a host assigned
    Description: The Vault pod might not be running. Check the pod status, troubleshoot any issues, wait for the pod to start, and try again.
  • Error: * path is already in use
    Description: You already set up that path. If you already ran the script, you can ignore this message.
  • Error: Error writing data to pki/config/ca: Error making API request. Code: 400. Errors: * the given certificate is not marked for CA use and cannot be used with this backend command terminated with exit code 2
    Description: If you are using macOS, you might have the default LibreSSL version. Set up OpenSSL instead. For more information, see Before you begin.

Example script

You can review or adapt the following example script for your own use.

Environment details:

  • 3 cluster setup: 1 management cluster and 2 workload clusters
  • Gloo installed on all clusters
  • Istio installed on the workload clusters, including the httpbin sample app

The script organizes the functions into the following commands that you can run.

  1. Copy the GitHub Gist, also rendered after these steps.

  2. Make sure to update the environment variables at the beginning of the script for the Gloo version, management, and workload cluster contexts that you want to use.

  3. Read and execute the Vault script.

    source ~/Downloads/lib.sh
  4. Execute the Vault functions in order. If you notice errors, try running them one at a time, or refer to the troubleshooting section.

    • Run all functions at once:
      vault-install-all
    • Run functions separately, one at a time, in the same order as vault-install-all:
    1. Install Vault on the management cluster.
      vault-install
    2. Enable Vault userpass authentication.
      vault-enable-basic-auth
    3. Enable Vault authentication for Kubernetes.
      vault-enable-kube-auth
    4. Set up the CA in Vault.
      vault-setup-ca
    5. Update the Gloo agent Helm releases with the required RBAC permissions.
      vault-update-gloo-mesh-rbac
    6. Patch istiod with the istiod-agent init container and sidecar.
      vault-modify-istiod
    7. Apply a root trust policy for each workload cluster.
      vault-apply-roottrustpolicy
  5. Verify Vault. Note that this verification assumes you have httpbin on each workload cluster in the httpbin namespace.

    vault-verify
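
    Example output for each workload cluster when the propagated root certificate matches the Vault root certificate:

    Vault is your intermediate CA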

Example script gist


#!/bin/bash
export GLOO_MESH_VERSION="v2.1.0-beta22"
export CLUSTER1=cluster1
export CLUSTER2=cluster2
# Ensure the MGMT env var is set; if not, default it to mgmt.
if [[ -z "${MGMT}" ]]
then
export MGMT=mgmt
fi
echo Using Management Context: $MGMT
echo
vault-install(){
# Install Vault on Kubernetes using Helm
helm repo add hashicorp https://helm.releases.hashicorp.com
helm repo update hashicorp
echo
echo
if [[ $(openssl version | grep OpenSSL) ]]
then
echo Using: $(openssl version)
else
echo "This script does not support LibreSSL. Install OpenSSL (brew install openssl) and add it to your PATH."
exit 1
fi
echo
echo
echo
echo "Generating root CA certificate and key for Vault..."
openssl req -new -newkey rsa:4096 -x509 -sha256 \
-days 3650 -nodes -out root-cert.pem -keyout root-key.pem \
-subj "/O=solo.io"
echo
echo
echo "Install Vault on the management cluster and add the root CA to the Vault deployment."
echo
# Install Vault in dev mode
echo
echo "Installing Vault in dev mode"
#helm install -n vault vault hashicorp/vault --version=0.20.1 --set "injector.enabled=false" --set "server.dev.enabled=true" --set "server.service.type=LoadBalancer" --kube-context="${MGMT}" --create-namespace
helm install -n vault vault hashicorp/vault --set "injector.enabled=false" --set "server.logLevel=debug" --set "server.dev.enabled=true" --set "server.service.type=LoadBalancer" --kube-context="${MGMT}" --create-namespace
# Wait for Vault to come up.
# Don't use 'kubectl rollout' because Vault is a statefulset without a rolling deployment.
kubectl --context="${MGMT}" wait --for=condition=Ready -n vault pod/vault-0
sleep 10
echo
echo
echo "Vault is installed on kube context $MGMT and ready to be used with Istio"
echo
echo "This Hashicorp Vault setup was derived from the \"Manage Istio Certificates with Vault\" documentation here: https://docs.solo.io/gloo-mesh-enterprise/main/setup/prod/certs/vault-certs/vault-istio/"
echo
echo "Note that The vault server service is exposed via type LoadBalancer on the the management server via the following IP: $(kubectl get svc/vault -n vault -o wide --context $MGMT -o jsonpath='{.status.loadBalancer.ingress[0].*}')"
echo
echo
}
vault-enable-basic-auth(){
# Enable Vault userpass.
echo
echo Enabling userpass for Vault.
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c 'vault auth enable userpass'
# Add an admin user to userpass.
echo
echo "Adding user admin/admin to Vault userpass."
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c 'vault write auth/userpass/users/admin \
password=admin \
policies=admins'
}
vault-enable-kube-auth(){
# Enable Vault auth for Kubernetes.
echo
echo Enabling Vault auth for Kubernetes.
#kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c 'vault auth enable kubernetes'
# CLUSTER1
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c 'vault auth enable -path=kube-cluster1-mesh-auth kubernetes'
# Policy for intermediate signing.
# Note: this reads the token from the service account's auto-created token secret,
# which Kubernetes 1.24+ no longer creates; on newer clusters, create the token with 'kubectl create token' instead.
VAULT_SA_NAME_C1=$(kubectl --context $CLUSTER1 get sa istiod -n istio-system -o jsonpath="{.secrets[*]['name']}")
SA_TOKEN_C1=$(kubectl --context $CLUSTER1 get secret $VAULT_SA_NAME_C1 -n istio-system -o 'go-template={{ .data.token }}' | base64 --decode)
SA_CA_CRT_C1=$(kubectl config view --raw -o json \
| jq -r --arg wc $CLUSTER1 '. as $c | $c.contexts[] | select(.name == $wc) as $context | $c.clusters[] | select(.name == $context.context.cluster) | .cluster."certificate-authority-data"' \
| base64 -d)
K8S_ADDR_C1=$(kubectl config view -o json \
| jq -r --arg wc $CLUSTER1 '. as $c | $c.contexts[] | select(.name == $wc) as $context | $c.clusters[] | select(.name == $context.context.cluster) | .cluster.server')
# Set Kubernetes auth config for Vault to the mounted token
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c "vault write auth/kube-cluster1-mesh-auth/config \
token_reviewer_jwt="$SA_TOKEN_C1" \
kubernetes_host="$K8S_ADDR_C1" \
kubernetes_ca_cert='$SA_CA_CRT_C1' \
disable_local_ca_jwt="true" \
issuer='https://kubernetes.default.svc.cluster.local'"
# Bind the istiod service account to the PKI policy
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c "vault write \
auth/kube-cluster1-mesh-auth/role/gen-int-ca-istio-cluster1-mesh \
bound_service_account_names=istiod \
bound_service_account_namespaces=istio-system \
policies=gen-int-ca-istio-cluster1-mesh \
ttl=720h"
# CLUSTER2
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c 'vault auth enable -path=kube-cluster2-mesh-auth kubernetes'
# Policy for intermediate signing
VAULT_SA_NAME_C2=$(kubectl --context $CLUSTER2 get sa istiod -n istio-system -o jsonpath="{.secrets[*]['name']}")
SA_TOKEN_C2=$(kubectl --context $CLUSTER2 get secret $VAULT_SA_NAME_C2 -n istio-system -o 'go-template={{ .data.token }}' | base64 --decode)
SA_CA_CRT_C2=$(kubectl config view --raw -o json \
| jq -r --arg wc $CLUSTER2 '. as $c | $c.contexts[] | select(.name == $wc) as $context | $c.clusters[] | select(.name == $context.context.cluster) | .cluster."certificate-authority-data"' \
| base64 -d)
K8S_ADDR_C2=$(kubectl config view -o json \
| jq -r --arg wc $CLUSTER2 '. as $c | $c.contexts[] | select(.name == $wc) as $context | $c.clusters[] | select(.name == $context.context.cluster) | .cluster.server')
# Set Kubernetes auth config for Vault to the mounted token
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c "vault write auth/kube-cluster2-mesh-auth/config \
token_reviewer_jwt="$SA_TOKEN_C2" \
kubernetes_host="$K8S_ADDR_C2" \
kubernetes_ca_cert='$SA_CA_CRT_C2' \
disable_local_ca_jwt="true" \
issuer='https://kubernetes.default.svc.cluster.local'"
# Bind the istiod service account to the PKI policy
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c "vault write \
auth/kube-cluster2-mesh-auth/role/gen-int-ca-istio-cluster2-mesh \
bound_service_account_names=istiod \
bound_service_account_namespaces=istio-system \
policies=gen-int-ca-istio-cluster2-mesh \
ttl=720h"
# Initialize the Vault PKI.
echo
echo "Initializing the Vault PKI."
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c 'vault secrets enable pki'
}
vault-setup-ca(){
# Set the Vault CA to the pem_bundle.
echo
echo "Setting the Vault CA to the pem_bundle."
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c "vault write -format=json pki/config/ca pem_bundle=\"$(cat root-key.pem root-cert.pem)\""
# Initialize the Vault intermediate cert path.
echo
echo "Initializing the Vault intermediate cert path."
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c 'vault secrets enable -path=pki_int_cluster1 pki'
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c 'vault secrets enable -path=pki_int_cluster2 pki'
# Set the policy for the intermediate cert path.
# CLUSTER1
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c 'vault policy write gen-int-ca-istio-cluster1-mesh - <<EOF
path "pki_int_cluster1/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
path "pki/cert/ca" {
capabilities = ["read"]
}
path "pki/root/sign-intermediate" {
capabilities = ["create", "read", "update", "list"]
}
EOF'
# CLUSTER2
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c 'vault policy write gen-int-ca-istio-cluster2-mesh - <<EOF
path "pki_int_cluster2/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
path "pki/cert/ca" {
capabilities = ["read"]
}
path "pki/root/sign-intermediate" {
capabilities = ["create", "read", "update", "list"]
}
EOF'
#rm root-cert.pem root-key.pem
}
# Enable the necessary RBAC permissions. Update the gloo-mesh-agent Helm release on both clusters.
vault-update-gloo-mesh-rbac(){
echo
echo "Enable the necessary RBAC permissions. Update the gloo-mesh-agent Helm release on both clusters."
echo
for cluster in ${CLUSTER1} ${CLUSTER2}; do
helm get values -n gloo-mesh gloo-mesh-agent --kube-context="${cluster}" > $cluster-values.yaml
echo "istiodSidecar:" >> $cluster-values.yaml
echo " createRoleBinding: true" >> $cluster-values.yaml
echo " istiodServiceAccount:" >> $cluster-values.yaml
echo " name: istiod" >> $cluster-values.yaml
echo " namespace: istio-system" >> $cluster-values.yaml
helm repo update gloo-mesh-agent --kube-context="${cluster}"
helm upgrade -n gloo-mesh gloo-mesh-agent gloo-mesh-agent/gloo-mesh-agent --kube-context="${cluster}" --version=${GLOO_MESH_VERSION} -f ${cluster}-values.yaml
rm $cluster-values.yaml
done
}
vault-modify-istiod(){
export MGMT_PLANE_VERSION=$(meshctl version --kubecontext $MGMT | jq '.server[].components[] | select(.componentName == "gloo-mesh-mgmt-server") | .images[] | select(.name == "gloo-mesh-mgmt-server") | .version')
echo
echo "Modifying istiod"
echo
for cluster in ${CLUSTER1} ${CLUSTER2}; do
kubectl patch -n istio-system deployment/istiod --context $cluster --patch '{
"spec": {
"template": {
"spec": {
"initContainers": [
{
"args": [
"init-container"
],
"env": [
{
"name": "PILOT_CERT_PROVIDER",
"value": "istiod"
},
{
"name": "POD_NAME",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.name"
}
}
},
{
"name": "POD_NAMESPACE",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.namespace"
}
}
},
{
"name": "SERVICE_ACCOUNT",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "spec.serviceAccountName"
}
}
}
],
"volumeMounts": [
{
"mountPath": "/etc/cacerts",
"name": "cacerts"
}
],
"imagePullPolicy": "IfNotPresent",
"image": "gcr.io/gloo-mesh/gloo-mesh-istiod-agent:2.1.0-beta22",
"name": "istiod-agent-init"
}
],
"containers": [
{
"args": [
"sidecar"
],
"env": [
{
"name": "PILOT_CERT_PROVIDER",
"value": "istiod"
},
{
"name": "POD_NAME",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.name"
}
}
},
{
"name": "POD_NAMESPACE",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.namespace"
}
}
},
{
"name": "SERVICE_ACCOUNT",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "spec.serviceAccountName"
}
}
}
],
"volumeMounts": [
{
"mountPath": "/etc/cacerts",
"name": "cacerts"
}
],
"imagePullPolicy": "IfNotPresent",
"image": "gcr.io/gloo-mesh/gloo-mesh-istiod-agent:2.1.0-beta22",
"name": "istiod-agent"
}
],
"volumes": [
{
"name": "cacerts",
"secret": null,
"emptyDir": {
"medium": "Memory"
}
}
]
}
}
}
}'
done
}
vault-apply-roottrustpolicy(){
# Each workload cluster gets its own RootTrustPolicy.
VAULT_ENDPOINT="http://$(kubectl get svc/vault -n vault --context $MGMT -o jsonpath='{.status.loadBalancer.ingress[0].*}')"
cat << EOF | kubectl apply --context=${CLUSTER1} -f -
apiVersion: admin.gloo.solo.io/v2
kind: RootTrustPolicy
metadata:
name: ${CLUSTER1}
namespace: gloo-mesh
spec:
applyToMeshes:
- istio:
clusterSelector:
mesh: istiod-istio-system-${CLUSTER1}
namespace: istio-system
selector:
app: istiod
vault: cluster1
config:
agentCa:
vault:
caPath: pki/root/sign-intermediate
csrPath: pki_int_cluster1/intermediate/generate/exported
server: $VAULT_ENDPOINT:8200
kubernetesAuth:
mountPath: /v1/auth/kube-cluster1-mesh-auth
role: gen-int-ca-istio-cluster1-mesh
EOF
kubectl rollout restart deployment istiod -n istio-system --context ${CLUSTER1}
sleep 21
cat << EOF | kubectl apply --context=${CLUSTER2} -f -
apiVersion: admin.gloo.solo.io/v2
kind: RootTrustPolicy
metadata:
name: ${CLUSTER2}
namespace: gloo-mesh
spec:
applyToMeshes:
- istio:
clusterSelector:
mesh: istiod-istio-system-${CLUSTER2}
namespace: istio-system
selector:
app: istiod
vault: cluster2
config:
agentCa:
vault:
caPath: pki/root/sign-intermediate
csrPath: pki_int_cluster2/intermediate/generate/exported
server: $VAULT_ENDPOINT:8200
kubernetesAuth:
mountPath: /v1/auth/kube-cluster2-mesh-auth
role: gen-int-ca-istio-cluster2-mesh
EOF
kubectl rollout restart deployment istiod -n istio-system --context ${CLUSTER2}
}
vault-verify(){
echo -----------------------------------------------------------------------
echo
echo
kubectl --context="${MGMT}" exec -n vault vault-0 -- /bin/sh -c "vault version"
echo
echo
echo Vault Server SVC LB IP: $(kubectl get svc/vault -n vault --context $MGMT -o jsonpath='{.status.loadBalancer.ingress[0].*}')
echo
echo
echo Verify traffic uses the root CA
echo
echo $CLUSTER1
if kubectl --context=$CLUSTER1 get cm -n httpbin istio-ca-root-cert -ojson | jq -r '.data["root-cert.pem"]' | diff -q root-cert.pem - >/dev/null; then
echo "Vault is your intermediate CA"
else
echo "Vault is NOT your intermediate CA"
fi
echo
echo
echo $CLUSTER2
if kubectl --context=$CLUSTER2 get cm -n httpbin istio-ca-root-cert -ojson | jq -r '.data["root-cert.pem"]' | diff -q root-cert.pem - >/dev/null; then
echo "Vault is your intermediate CA"
else
echo "Vault is NOT your intermediate CA"
fi
echo
echo
echo -------------------------------------------------------------------------
echo
echo
echo "> kubectl get pods -n istio-system -l app=istiod --context cluster1"
kubectl get pods -n istio-system -l app=istiod --context ${CLUSTER1}
echo
kubectl logs -n istio-system --context ${CLUSTER1} $(kubectl get pods -n istio-system -l app=istiod --context ${CLUSTER1} | cut -d" " -f1 | tail -1) -c istiod-agent-init
echo
echo
echo -------------------------------------------------------------------------
echo
echo
echo "> kubectl get pods -n istio-system -l app=istiod --context cluster2"
kubectl get pods -n istio-system -l app=istiod --context ${CLUSTER2}
echo
kubectl logs -n istio-system --context ${CLUSTER2} $(kubectl get pods -n istio-system -l app=istiod --context ${CLUSTER2} | cut -d" " -f1 | tail -1) -c istiod-agent-init
echo
echo
echo
}
# Install Everything
vault-install-all(){
vault-install
vault-enable-basic-auth
vault-enable-kube-auth
vault-setup-ca
vault-update-gloo-mesh-rbac
vault-modify-istiod
vault-apply-roottrustpolicy
}
# Install everything, including supporting components.
vault-up(){
k3d-up
istio-install
kubectl label deployment istiod -n istio-system vault=${CLUSTER1} --context ${CLUSTER1}
kubectl label deployment istiod -n istio-system vault=${CLUSTER2} --context ${CLUSTER2}
gloo-mesh-install
vault-install-all
}
vault-down(){
k3d-down
}
vault-uninstall(){
helm uninstall vault -n vault --kube-context ${MGMT}
kubectl delete ns vault --context ${MGMT}
}
vault-reinstall(){
k3d-down
vault-up
}
# Provide the cluster context as the argument
vault-debug(){
kubectl logs -n istio-system --context $1 $(kubectl get pods -n istio-system -l app=istiod --context $1 | cut -d" " -f1 | tail -1) -c istiod-agent-init
echo
echo -----------
echo
kubectl describe RootTrustPolicy -n gloo-mesh --context $1
echo
echo -----------
echo
kubectl describe meshes -n gloo-mesh --context $1
echo
echo -----------
kubectl describe issuedcertificates.internal.gloo.solo.io istiod-istio-system-$1 -n istio-system --context $1
echo
echo
}