External auth with Dex example

Use Dex as an OIDC provider for both authentication and authorization to the Gloo Mesh UI.

The following instructions are for Linux operating systems. The example uses kind for a demo Kubernetes cluster and Dex for the OIDC provider.

You can follow a similar process for your production setup. After you complete the demonstration steps, continue to the section on configuring OIDC values in your Gloo Mesh deployment.

Before you begin

  1. Make sure that you install the following tools.
    • docker, to run Dex as the local OIDC provider.
    • kind to create a local Kubernetes cluster.
    • krew, the Kubernetes plug-in manager.
    • kubectl to manage Kubernetes clusters.
    • openssl to generate certificates for your OIDC provider.
  2. Optional: Review the information about how authentication and authorization work with the Gloo Mesh UI.

Create certificates for the OIDC provider

You can test out these steps with a self-signed certificate that uses OpenSSL.

  1. Download the gencerts.sh script.
  2. Optional, depending on your file permissions: Make the script executable.
    cd ~/Downloads
    chmod +x ./gencerts.sh
    
  3. Run the script.
    ./gencerts.sh
    

The script generates certificates for the OIDC provider in an ssl folder, such as in the following example.

ls ssl/       
ca-key.pem ca.pem     cert.pem   csr.pem    key.pem    req.cnf
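If you want to understand what such a script does before you run it, the following is a minimal sketch of a similar certificate-generation flow. The subject names and validity period are assumptions for this demo; the actual gencerts.sh script also writes a CSR config (req.cnf) and sets subject alternative names for the oidc hostname.

```shell
# Minimal sketch of a cert-generation flow similar to gencerts.sh.
# Assumptions: the CN values and 365-day validity are illustrative only.
mkdir -p ssl

# Create the CA key and a self-signed CA certificate.
openssl genrsa -out ssl/ca-key.pem 2048
openssl req -x509 -new -key ssl/ca-key.pem -days 365 \
  -subj "/CN=oidc-ca" -out ssl/ca.pem

# Create the server key and a certificate signing request.
openssl genrsa -out ssl/key.pem 2048
openssl req -new -key ssl/key.pem -subj "/CN=oidc" -out ssl/csr.pem

# Sign the server certificate with the CA.
openssl x509 -req -in ssl/csr.pem -CA ssl/ca.pem -CAkey ssl/ca-key.pem \
  -CAcreateserial -days 365 -out ssl/cert.pem

# Verify that the server certificate chains to the CA.
openssl verify -CAfile ssl/ca.pem ssl/cert.pem
```

The last command prints `ssl/cert.pem: OK` when the certificate verifies against the CA.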

Create a demo cluster and set up Dex as your OIDC provider

Set up Dex as the OIDC provider for your Kubernetes cluster. To unify authentication and authorization, the cluster's OIDC provider must match the provider that you want to use to authenticate to the Gloo Mesh UI.

  1. Download the kindconfig.yaml configuration file. Note that the API server is set up with the Dex OIDC information.
  2. Create a Kubernetes cluster locally with the kind configuration file.
    kind create cluster --config=kindconfig.yaml
    
  3. Download the dex.yaml configuration file. This configuration file refers to the certificates that you previously generated. It also configures the redirect URLs for accessing the Gloo Mesh UI on the local host.
  4. Run Dex as the OIDC provider for your cluster. You can choose to run Dex via a Docker command, or install Dex as a deployment in your cluster via Helm.
    Docker option:
    1. Create an OIDC provider with Dex. Note that Dex is added to the kind network so that the Kubernetes API server can reach the OIDC provider.

      docker run -p 5557:5557 -d --network=kind -v ${PWD}:/data:ro -v ${PWD}/ssl:/ssl:ro --name oidc ghcr.io/dexidp/dex:v2.31.0 dex serve /data/dex.yaml
      
    2. Modify the /etc/hosts file so that the OIDC provider can be reached by using the issuer URL. Depending on your file system settings, you might need to run the following command with elevated permissions such as sudo.

      echo $(docker inspect oidc -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}') oidc | sudo tee -a /etc/hosts
      

      Example output:

      172.18.0.3 oidc
      
    3. Make sure you can reach the OIDC provider.

      curl --cacert ssl/ca.pem https://oidc:5557/dex/.well-known/openid-configuration
      
    Helm option:
    1. Add the Dex Helm chart repository.

      helm repo add dex https://charts.dexidp.io --kube-context ${MGMT_CONTEXT}
      
    2. Create a namespace for Dex.

      kubectl --context ${MGMT_CONTEXT} create namespace dex
      
    3. Install Dex via Helm.

      helm upgrade --install dex dex/dex \
       --kube-context ${MGMT_CONTEXT} \
       --namespace dex \
       --values "${SCRIPT_DIR}/dex-helm-$SETUP-env-values.yaml"
      
    4. Make sure that your deployment rolled out successfully.

      kubectl --context ${MGMT_CONTEXT} -n dex rollout status deployment dex
      
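For reference, the dex.yaml file that you downloaded typically contains settings along the following lines. This is an illustrative sketch only, and the downloaded file is authoritative: the callback paths and password hashes shown here are assumptions, while the issuer, client IDs, secret, and static users match the values used in the rest of this guide.

```yaml
# Illustrative sketch of a Dex configuration for this demo.
issuer: https://oidc:5557/dex
storage:
  type: memory
web:
  https: 0.0.0.0:5557
  tlsCert: /ssl/cert.pem
  tlsKey: /ssl/key.pem
staticClients:
  # Client for the Kubernetes API server and kubectl oidc-login.
  - id: kubernetes
    name: Kubernetes
    secret: ZXhhbXBsZS1hcHAtc2VjcmV0
    redirectURIs:
      - http://localhost:8000   # default local listener of kubectl oidc-login
  # Client for the Gloo Mesh UI.
  - id: dashboard
    name: Gloo Mesh UI
    secret: ZXhhbXBsZS1hcHAtc2VjcmV0
    redirectURIs:
      - http://localhost:8090/oidc-callback   # assumption: the UI's callback path
enablePasswordDB: true
staticPasswords:
  - email: admin@example.com
    hash: "$2a$10$..."   # bcrypt hash of the demo password
    username: admin
  - email: user@example.com
    hash: "$2a$10$..."   # bcrypt hash of the demo password
    username: user
```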

Optional: Verify your OIDC setup

You can check that your OIDC setup works by configuring your kubectl CLI client to authenticate with Dex.

  1. To log in to Kubernetes with an OIDC provider and kubectl, install the OIDC login plug-in.
    kubectl krew install oidc-login
    
  2. Set up the OIDC credentials with the OIDC client information that you previously generated from the script.
    kubectl oidc-login setup --oidc-issuer-url=https://oidc:5557/dex --oidc-client-id=kubernetes --oidc-client-secret=ZXhhbXBsZS1hcHAtc2VjcmV0 --certificate-authority=${PWD}/ssl/ca.pem --oidc-extra-scope=email
    
  3. Follow the steps that the plug-in suggests. In particular, add the user's OIDC access tokens to your kubectl config.
    kubectl config set-credentials oidc-user \
        --exec-api-version=client.authentication.k8s.io/v1beta1 \
        --exec-command=kubectl \
        --exec-arg=oidc-login \
        --exec-arg=get-token \
        --exec-arg=--oidc-issuer-url=https://oidc:5557/dex \
        --exec-arg=--oidc-client-id=kubernetes \
        --exec-arg=--oidc-client-secret=ZXhhbXBsZS1hcHAtc2VjcmV0 \
        --exec-arg=--oidc-extra-scope=email \
        --exec-arg=--certificate-authority=${PWD}/ssl/ca.pem
    
  4. Check that your user can view the Kubernetes resources that they were granted access to with cluster RBAC rules.
    kubectl --user oidc-user get pods
    
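The ID token that oidc-login retrieves is a JWT whose payload carries the claims, such as email, that the Kubernetes API server maps to a username. The following sketch shows how to decode such a payload. The sample token is fabricated for illustration; substitute the token that `kubectl oidc-login get-token` prints for your user.

```shell
# An OIDC ID token has three base64url-encoded segments: header.payload.signature.
# This sample token is fabricated for illustration; substitute the real token
# from `kubectl oidc-login get-token`.
SAMPLE_PAYLOAD=$(printf '%s' '{"iss":"https://oidc:5557/dex","email":"admin@example.com"}' \
  | base64 | tr -d '\n=' | tr '+/' '-_')
SAMPLE_TOKEN="header.${SAMPLE_PAYLOAD}.signature"

# Extract the payload segment and convert base64url back to standard base64.
payload=$(printf '%s' "$SAMPLE_TOKEN" | cut -d. -f2 | tr '_-' '/+')
# Restore padding to a multiple of four characters.
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s\n' "$payload" | base64 -d
# → {"iss":"https://oidc:5557/dex","email":"admin@example.com"}
```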

Install Gloo Mesh and configure the Gloo Mesh UI for Dex

The following steps are for a demonstration setup only. For more detailed steps such as for a production-level deployment, see Install Gloo Mesh.

  1. Install Gloo Mesh in your kind cluster.
    1. Set the Gloo Mesh Enterprise version as an environment variable.
      export GLOO_MESH_VERSION=2.0.13
      
    2. Add the Gloo Mesh Helm repositories.
      helm repo add gloo-mesh-enterprise https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-enterprise
      helm repo add gloo-mesh-agent https://storage.googleapis.com/gloo-mesh-enterprise/gloo-mesh-agent
      helm repo update
      
    3. Create the gloo-mesh namespace in your cluster.
      kubectl create ns gloo-mesh
      
    4. Install the Gloo Mesh management server.
      helm upgrade --install gloo-mesh-enterprise \
          gloo-mesh-enterprise/gloo-mesh-enterprise \
          --namespace gloo-mesh \
          --version ${GLOO_MESH_VERSION} \
          --set licenseKey=${GLOO_MESH_LICENSE_KEY} \
          --set mgmtClusterName=kind \
          --set devMode=true \
          --set verbose=true \
          --set glooMeshMgmtServer.relay.disableCaCertGeneration=true \
          --set glooMeshUi.enabled=true \
          --set insecure=true
      
    5. Install the Gloo Mesh agent.
      helm upgrade --install gloo-mesh-agent \
          gloo-mesh-agent/gloo-mesh-agent \
          --namespace gloo-mesh \
          --version ${GLOO_MESH_VERSION} \
          --set cluster=kind \
          --set relay.serverAddress=gloo-mesh-mgmt-server.gloo-mesh.svc.cluster.local:9900 \
          --set ext-auth-service.enabled=false \
          --set rate-limiter.enabled=false \
          --set verbose=true \
          --set devMode=true \
          --set insecure=true
      
  2. Download the gm_resources.yaml configuration file that you need to set up your Gloo Mesh environment, including KubernetesCluster, Workspace, and WorkspaceSettings resources.
  3. Apply the Gloo Mesh resources to your cluster.
    kubectl apply -f gm_resources.yaml
    
  4. Create a ConfigMap with the root CA.
    kubectl create configmap -n gloo-mesh oidc-root-ca --from-file=ca.crt=ssl/ca.pem
    
  5. Download the dashboard-settings.yaml configuration file to make the Gloo Mesh UI use the same OIDC provider and settings as the Kubernetes cluster. Note that the userMapping section in the Dashboard custom resource matches the cluster settings from the kindconfig.yaml file that you previously downloaded. For more options, see the API documentation.
    apiVersion: admin.gloo.solo.io/v2
    kind: Dashboard
    metadata:
      name: settings
      namespace: gloo-mesh
    spec:
      authz:
        multiClusterRbac: {}
      authn:
        oidc:
          caCertConfigmapName: oidc-root-ca
          userMapping:
            usernameClaim: "email"
            usernamePrefix: "oidc:"
          appUrl: http://localhost:8090/
          clientId: dashboard
          clientSecretName: dashboard
          issuerUrl: https://oidc:5557/dex
          scopes:
          - openid
          - profile
          - email
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: dashboard
      namespace: gloo-mesh
    stringData:
      oidc-client-secret: ZXhhbXBsZS1hcHAtc2VjcmV0
    
    
  6. Apply the dashboard configuration to your cluster.
    kubectl apply -f dashboard-settings.yaml
    
  7. Download the Kubernetes rbac.yaml configuration file to give permissions to your OIDC users. Note that the configuration is for one user that matches your dex.yaml configuration, admin@example.com.
    
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: oidc-admin
    subjects:
      - kind: User
        name: oidc:admin@example.com
    roleRef:
      kind: ClusterRole
      name: cluster-admin
      apiGroup: rbac.authorization.k8s.io
    
  8. Apply the RBAC configuration to your cluster.
    kubectl apply -f rbac.yaml
    
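As a side note, the client secret ZXhhbXBsZS1hcHAtc2VjcmV0 that appears throughout this demo is the well-known Dex example value: the Base64 encoding of example-app-secret, used literally as the secret string. You can confirm the encoding as follows, and you should replace the value with a randomly generated secret for any non-demo setup.

```shell
# The demo client secret is the Base64 encoding of "example-app-secret".
# The Base64 string itself is what the OIDC clients use, so treat it as
# public and replace it outside of demos.
printf '%s' 'example-app-secret' | base64
# → ZXhhbXBsZS1hcHAtc2VjcmV0
printf '%s' 'ZXhhbXBsZS1hcHAtc2VjcmV0' | base64 -d
# → example-app-secret
```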

Log in to the Gloo Mesh UI

Test that authentication and authorization with Dex work in the Gloo Mesh UI.

  1. Open a port on your local machine to access the Gloo Mesh UI.
    kubectl port-forward -n gloo-mesh deploy/gloo-mesh-ui 8090
    
  2. Open the Gloo Mesh UI in your browser: http://localhost:8090.
  3. Log in to the Gloo Mesh UI with one of the users from your Dex OIDC configuration, admin@example.com or user@example.com. The users have different views, depending on their RBAC permissions.
    • admin@example.com: This user can authenticate to the Gloo Mesh UI. Additionally, the user is authorized to view all resources through the cluster-admin role, as described in the rbac.yaml file.
    • user@example.com: This user can authenticate to the Gloo Mesh UI because the user is in the dex.yaml OIDC configuration. However, without an RBAC role, the user is not authorized to view any resources in the Gloo Mesh UI.
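If you want user@example.com to view resources without granting full admin rights, you can bind that user to the built-in view ClusterRole, which provides read-only access to most namespaced resources. This is an optional extension of the demo; the binding name below is arbitrary.

```yaml
# Optional: grant user@example.com read-only access with the built-in
# "view" ClusterRole. The "oidc:" username prefix must match the prefix
# configured on the API server and in the Dashboard userMapping.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-user-view
subjects:
  - kind: User
    name: oidc:user@example.com
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
```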

Demo cleanup

To clean up the resources from your local machine, run the following commands.

docker rm -f oidc
kind delete cluster
rm -rf ssl

Debug your demo setup

To troubleshoot connection problems between the OIDC provider and your cluster, review the Kubernetes API server logs.

docker exec -ti kind-control-plane crictl ps
APISERVERID="$(docker exec -ti kind-control-plane crictl ps --name kube-apiserver -q|tr -d '\r')"
docker exec -ti kind-control-plane crictl logs "$APISERVERID"
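In the logs, check that the API server started with the OIDC flags from kindconfig.yaml. The following is a sketch of what that file typically configures for kind; the exact values and mount paths are assumptions, and the file you downloaded is authoritative. The username claim and prefix are what make Kubernetes see the Dex user as oidc:admin@example.com.

```yaml
# Sketch of the OIDC-related parts of a kindconfig.yaml (illustrative only).
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraMounts:
      # Assumption: the CA cert is mounted into the control-plane node.
      - hostPath: /path/to/ssl/ca.pem
        containerPath: /etc/ca.pem
    kubeadmConfigPatches:
      - |
        kind: ClusterConfiguration
        apiServer:
          extraArgs:
            oidc-issuer-url: https://oidc:5557/dex
            oidc-client-id: kubernetes
            oidc-username-claim: email
            oidc-username-prefix: "oidc:"
            oidc-ca-file: /etc/ca.pem
```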

To check the OIDC token that kubectl uses, run the following command from the directory where you generated the ssl folder with the gencerts.sh script.

kubectl oidc-login get-token --oidc-issuer-url=https://oidc:5557/dex --oidc-client-id=kubernetes --oidc-client-secret=ZXhhbXBsZS1hcHAtc2VjcmV0 --oidc-extra-scope=email --certificate-authority=${PWD}/ssl/ca.pem