RBAC for Gloo Portal

Use Kubernetes role-based access control (RBAC) to control user access to Gloo Portal resources in your clusters.

Your cloud provider might have an identity and access management (IAM) service that automatically synchronizes Kubernetes RBAC with IAM permissions. Make sure that the user or group that you want to grant access to has the proper permissions from your cloud provider. For more information, check your cloud provider's IAM docs.
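For example, on Amazon EKS, the aws-auth ConfigMap in the kube-system namespace maps IAM principals to Kubernetes users and groups that you can then reference as subjects in role bindings. The following is a sketch only; the role ARN, username, and group name are placeholders for your own values.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Placeholder ARN: map a hypothetical IAM role to a Kubernetes group
    # that you can use as a subject in a RoleBinding.
    - rolearn: arn:aws:iam::111122223333:role/portal-viewers
      username: portal-viewer
      groups:
        - gloo-portal-viewers
```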

List Gloo Portal API groups and resources

To list the Gloo Portal resources, their related API groups, and possible verbs to use in Kubernetes RBAC, run the following command.

kubectl api-resources -o wide | grep portal.gloo

Example admin, edit, and view roles

Refer to the following examples for the Gloo Portal API groups and resources that you can add to rules in Kubernetes RBAC roles or cluster roles. The examples are organized by the verbs that are allowed in the default Kubernetes admin, edit, and view cluster roles.

The following examples include only Gloo Portal custom resources. You might also need permissions for other Kubernetes resources or custom resources, such as secrets, to work with API keys.
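For example, to let users work with API keys that are stored in Kubernetes secrets, you might add a stanza like the following to the same rules list. This is a sketch; scope the verbs to what each role actually needs in your environment.

```yaml
- apiGroups:
  - ""          # the core API group, which includes secrets
  resources:
  - secrets
  verbs:
  - get
  - list
```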

Admin:

rules:
- apiGroups:
  - portal.gloo.solo.io
  resources:
  - adminuisettings
  - apidocs
  - apiproducts
  - environments
  - groups
  - portals
  - routes
  - storages
  - users
  verbs:
  - create
  - delete
  - deletecollection
  - get
  - list
  - patch
  - update
  - watch
Edit:

rules:
- apiGroups:
  - portal.gloo.solo.io
  resources:
  - adminuisettings
  - apidocs
  - apiproducts
  - environments
  - groups
  - portals
  - routes
  - storages
  - users
  verbs:
  - get
  - list
  - patch
  - update
  - watch
View:

rules:
- apiGroups:
  - portal.gloo.solo.io
  resources:
  - adminuisettings
  - apidocs
  - apiproducts
  - environments
  - groups
  - portals
  - routes
  - storages
  - users
  verbs:
  - get
  - list
  - watch
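Instead of binding these rules directly, you can aggregate them into the default Kubernetes user-facing cluster roles. The following sketch adds view-level permissions for a subset of Gloo Portal resources to the built-in view cluster role by using the standard aggregation label; adjust the resources and verbs for your own roles.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: gloo-portal-aggregate-view
  labels:
    # Standard label that the default view cluster role aggregates.
    rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups:
  - portal.gloo.solo.io
  resources:
  - apidocs
  - apiproducts
  - environments
  - portals
  verbs:
  - get
  - list
  - watch
```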

Set up Kubernetes RBAC for Gloo Portal resources

  1. List the Gloo Portal resources, their related API groups, and possible verbs.

    kubectl api-resources -o wide | grep portal.gloo
    

    Example output:

    NAME            SHORTNAMES APIVERSION                  NAMESPACED KIND            VERBS
    adminuisettings            portal.gloo.solo.io/v1beta1 true       AdminUiSettings [create delete deletecollection get list patch update watch]
    apidocs                    portal.gloo.solo.io/v1beta1 true       APIDoc          [create delete deletecollection get list patch update watch]
    apiproducts                portal.gloo.solo.io/v1beta1 true       APIProduct      [create delete deletecollection get list patch update watch]
    environments    env        portal.gloo.solo.io/v1beta1 true       Environment     [create delete deletecollection get list patch update watch]
    groups                     portal.gloo.solo.io/v1beta1 true       Group           [create delete deletecollection get list patch update watch]
    portals                    portal.gloo.solo.io/v1beta1 true       Portal          [create delete deletecollection get list patch update watch]
    routes                     portal.gloo.solo.io/v1beta1 true       Route           [create delete deletecollection get list patch update watch]
    storages                   portal.gloo.solo.io/v1beta1 true       Storage         [create delete deletecollection get list patch update watch]
    users                      portal.gloo.solo.io/v1beta1 true       User            [create delete deletecollection get list patch update watch]
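    If you maintain roles by hand, you can turn the NAME column of this output into entries for a role's resources list. The following sketch uses sample lines that mimic the output above; with a live cluster, pipe the kubectl api-resources output in instead.

```shell
# Sample lines that mimic the api-resources output above; in practice,
# replace the printf with: kubectl api-resources -o wide | grep portal.gloo
printf '%s\n' \
  'apidocs      portal.gloo.solo.io/v1beta1 true APIDoc' \
  'apiproducts  portal.gloo.solo.io/v1beta1 true APIProduct' \
  'portals      portal.gloo.solo.io/v1beta1 true Portal' |
awk '{print "  - " $1}'   # the first column becomes a YAML list entry
```

    Example output:

      - apidocs
      - apiproducts
      - portals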
    
  2. Optional: Get the details of an existing role or cluster role to modify or use as a starting point for a new configuration file. For example, you might get the default Kubernetes cluster roles admin, edit, and view.

    For a namespaced role:

    1. Get the name of the existing role that you want to modify.
      kubectl get roles -A
      
    2. Get the configuration of the role that you want to modify and save it as a local YAML file.
      kubectl get role $ROLE -n $NAMESPACE -o yaml > $ROLE.yaml
      
    For a cluster role:

    1. Get the name of the existing cluster role that you want to modify.
      kubectl get clusterroles
      
    2. Get the configuration of the cluster role that you want to modify and save it as a local YAML file.
      kubectl get clusterrole $CLUSTER_ROLE -o yaml > $CLUSTER_ROLE.yaml
      

  3. Create a new role or cluster role, or update an existing one. In the rules section, add a stanza for the Gloo Portal resources that you want to control permissions for. Use the API groups, resource names, and verbs that you previously retrieved. For a full list, see Example admin, edit, and view roles. The following example creates a view-only role for the Gloo Portal adminuisettings, apidocs, apiproducts, environments, portals, and storages resources, but not for the groups, routes, or users resources.

    kubectl apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: gloo-portal
      name: gloo-portal-view
    rules:
    - apiGroups:
      - portal.gloo.solo.io
      resources:
      - adminuisettings
      - apidocs
      - apiproducts
      - environments
      - portals
      - storages
      verbs:
      - get
      - list
      - watch
    EOF
    
  4. Create a service account in the same namespace as your role to test permissions.

    kubectl create serviceaccount gloo-portal-rbac-service-account -n gloo-portal
    
  5. Create a role binding or cluster role binding, or update an existing one, to map the user or service account as a subject of the role or cluster role that you created or updated. The following example creates a role binding for the service account from the previous step. For more information, see the Kubernetes docs.

    kubectl apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: gloo-portal-view-role-binding
      namespace: gloo-portal
    subjects:
    - kind: ServiceAccount
      name: gloo-portal-rbac-service-account
      namespace: gloo-portal
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: gloo-portal-view
    EOF
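    To grant the same permissions across all namespaces instead, bind a cluster role with a cluster role binding. The following sketch assumes that you created the rules as a ClusterRole named gloo-portal-view; the namespaced Role from the earlier example cannot be referenced by a ClusterRoleBinding.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gloo-portal-view-cluster-role-binding
subjects:
- kind: ServiceAccount
  name: gloo-portal-rbac-service-account
  namespace: gloo-portal
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gloo-portal-view   # assumes a ClusterRole, not the namespaced Role
```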
    
  6. Check the permissions that the service account has.

    kubectl auth can-i get portals --as=system:serviceaccount:gloo-portal:gloo-portal-rbac-service-account -n gloo-portal
    kubectl auth can-i get users --as=system:serviceaccount:gloo-portal:gloo-portal-rbac-service-account -n gloo-portal
    kubectl auth can-i get portals --as=system:serviceaccount:gloo-portal:gloo-portal-rbac-service-account
    

    Example output:

    • yes: The service account can get portals in the gloo-portal namespace, as expected.
    • no: The service account cannot get users in the gloo-portal namespace, because the role only gives viewer permissions for select Gloo Portal resources, not users resources.
    • no: The service account cannot get portals in the default namespace, because the role and role binding are scoped to the gloo-portal namespace.
    You can also list all of the permissions that the service account has in the namespace.

    kubectl auth can-i --list --as=system:serviceaccount:gloo-portal:gloo-portal-rbac-service-account -n gloo-portal
    

    Example output:

    Resources                           Non-Resource URLs Resource Names Verbs
    adminuisettings.portal.gloo.solo.io []                []             [get list watch]
    apidocs.portal.gloo.solo.io         []                []             [get list watch]
    apiproducts.portal.gloo.solo.io     []                []             [get list watch]
    environments.portal.gloo.solo.io    []                []             [get list watch]
    portals.portal.gloo.solo.io         []                []             [get list watch]
    storages.portal.gloo.solo.io        []                []             [get list watch]
    

  7. Verify that the service account can get the resources.

    1. Get and decode the token from the secret for the service account. Note that the base64 flag is -D on macOS and -d on Linux.

      kubectl get secrets -n gloo-portal $(kubectl get serviceaccount gloo-portal-rbac-service-account -n gloo-portal -o=jsonpath='{.secrets[0].name}') -o=jsonpath='{.data.token}' | base64 -D

      In Kubernetes 1.24 and later, token secrets are no longer created automatically for service accounts. Instead, you can request a short-lived token directly.

      kubectl create token gloo-portal-rbac-service-account -n gloo-portal
      
    2. Save the token output of the previous step as an environment variable.

      export SA_TOKEN=<ey...>
      
    3. Get the cluster endpoint for API access.

      kubectl get endpoints | grep kubernetes
      

      Example output:

      NAME         ENDPOINTS           AGE
      kubernetes   34.xx.xxx.xxx:443   1d
      
    4. Save the cluster endpoint without the port as an environment variable.

      export CLUSTER_ENDPOINT=<34.xx.xxx.xxx>
      
    5. Send some curl requests to the cluster endpoint with the service account token. Note that some succeed and some fail based on the permissions of the service account.

      curl -k  https://$CLUSTER_ENDPOINT/apis/portal.gloo.solo.io/v1beta1/portals -H "Authorization: Bearer $SA_TOKEN"
      curl -k  https://$CLUSTER_ENDPOINT/apis/portal.gloo.solo.io/v1beta1/namespaces/gloo-portal/portals -H "Authorization: Bearer $SA_TOKEN"
      curl -k https://$CLUSTER_ENDPOINT/apis/portal.gloo.solo.io/v1beta1/namespaces/gloo-portal/users -H "Authorization: Bearer $SA_TOKEN"
      

      Example output:

      • The first request fails because the service account does not have permissions to list portals for the entire cluster.
      • The second request succeeds because the service account can list portals in the gloo-portal namespace.
      • The third request fails because the service account cannot list users.
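      When a request is denied, the API server returns a Status object with the reason Forbidden. The following sketch extracts that field with grep so that scripts can detect denials; the sample body mimics a real response, and with a live cluster you would pipe the curl output in instead (jq is a cleaner choice if it is installed).

```shell
# Sample Status body that mimics an API server response for a forbidden
# request; in practice, pipe the curl output in instead.
response='{"kind":"Status","status":"Failure","reason":"Forbidden","code":403}'
echo "$response" | grep -o '"reason":"[^"]*"'
# prints: "reason":"Forbidden"
```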
  8. Optional: Clean up the resources that you created.

    kubectl delete -n gloo-portal role gloo-portal-view
    kubectl delete -n gloo-portal rolebinding gloo-portal-view-role-binding
    kubectl delete -n gloo-portal serviceaccount gloo-portal-rbac-service-account