Portal

Review the following troubleshooting information for the Gloo Platform developer portal. For more information, see the Portal setup guides.

Debug the portal

  1. Check that the portal server pod is running.
    kubectl get pods -n gloo-mesh-addons -l app=gloo-mesh-portal-server
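
    Example output, with the pod in a Running state (the pod name suffix, restart count, and age differ in your cluster):

    NAME                                       READY   STATUS    RESTARTS   AGE
    gloo-mesh-portal-server-7d6b9c5f8d-x2k4p   1/1     Running   0          3h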
    
  2. Check the logs of the Gloo portal server in your workload cluster. To view logs recorded since a relative duration such as 5s, 2m, or 3h, you can specify the --since <duration> flag.
    kubectl logs -n gloo-mesh-addons pods/$(kubectl get pod -l app=gloo-mesh-portal-server -n gloo-mesh-addons -o jsonpath='{.items[0].metadata.name}')
    

    Optionally, you can format the output with jq or save it in a local file so that you can read and analyze the output more easily.

    kubectl logs -n gloo-mesh-addons pods/$(kubectl get pod -l app=gloo-mesh-portal-server -n gloo-mesh-addons -o jsonpath='{.items[0].metadata.name}') > gloo-mesh-portal-server.json
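
    For example, to show only error-level messages, you can pipe the logs to jq. This sketch assumes that each log line is a JSON object with level and msg fields, as in the example messages later on this page; lines that are not valid JSON cause jq parse errors.

    kubectl logs -n gloo-mesh-addons pods/$(kubectl get pod -l app=gloo-mesh-portal-server -n gloo-mesh-addons -o jsonpath='{.items[0].metadata.name}') | jq -r 'select(.level == "error") | .msg'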
    
  3. Review the portal server logs for common error messages, such as the failed watch errors that are described later on this page.
  4. Check the state of other Gloo components that the portal server uses.
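    For example, you can list all pods in the add-ons namespace. In a default setup, this namespace also contains the external auth and rate limit servers that the portal's usage plans rely on; verify that these pods are in a Running state.

    kubectl get pods -n gloo-mesh-addons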
  5. Make sure that you created all of the portal-related custom resources, and check the configuration for any errors.
    kubectl get apidocs,ratelimitserverconfigs,ratelimitconfigs,ratelimitserversettings,ratelimitclientconfigs,ratelimitpolicies,extauthpolicies,extauthservers,routetables,portals,portalgroups,virtualgateways -A
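
    To investigate an individual resource, review its status for errors. The following sketch assumes a route table named tracks-rt in the gloo-mesh-gateways namespace; replace the kind, name, and namespace with your own values.

    kubectl get routetable tracks-rt -n gloo-mesh-gateways -o jsonpath='{.status}'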
    

Debug Keycloak OAuth issues

Use the following general steps if you encounter issues during your Keycloak setup to enable OAuth for your portal resources.

  1. Make sure that the status of the external auth policy shows ACCEPTED.

    kubectl get extauthpolicy oidc-auth -n gloo-mesh-addons -o yaml
    
  2. Get the authconfig resource that was created for your policy and make sure that it shows ACCEPTED.

    kubectl get authconfig -n gloo-mesh-addons -o yaml
    
  3. If you used environment variables, such as $KEYCLOAK_CLIENT or $KEYCLOAK_URL, make sure that the external auth policy contains the actual values of these variables. If the policy contains the literal variable names, the values were not properly substituted when you applied the resource.
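
    For example, the following sketch checks that the variables are set in your current shell, and then searches the applied policy for any literal variable names:

    # Verify that the variables are set in your shell.
    echo "Client: $KEYCLOAK_CLIENT, URL: $KEYCLOAK_URL"
    # If this grep finds a match, the policy contains the literal variable names instead of their values.
    kubectl get extauthpolicy oidc-auth -n gloo-mesh-addons -o yaml | grep KEYCLOAK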

  4. To get detailed logs for the external auth service, change the log level to DEBUG.

    1. Edit the external auth service.

      kubectl edit deploy -n gloo-mesh-addons ext-auth-service
      
    2. In the env section of the container spec (spec.template.spec.containers), find the LOG_LEVEL environment variable and set it to DEBUG.

      ...
      spec:
        containers:
        - env:
          - name: LOG_LEVEL
            value: DEBUG
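
      Alternatively, you can set the variable without opening an editor; this also triggers a new rollout of the deployment:

      kubectl set env deploy/ext-auth-service -n gloo-mesh-addons LOG_LEVEL=DEBUG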
      
    3. Make sure that the external auth service pod restarts.

      kubectl get po -n gloo-mesh-addons -l app=ext-auth-service
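
      You can also wait for the rollout to finish before you proceed:

      kubectl rollout status deploy/ext-auth-service -n gloo-mesh-addons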
      
  5. In a separate terminal, get the logs of the external auth service.

    kubectl logs -n gloo-mesh-addons pods/$(kubectl get pod -l app=ext-auth-service -n gloo-mesh-addons -o jsonpath='{.items[0].metadata.name}')
    
  6. Send the curl request that is failing and review the logs that are returned by the external auth service.

    For example, you might see a log such as the following. Make sure that the access token that you included in the request is correct and try again.

    {"level":"debug","ts":1685026072.6853623,"logger":"ext-auth.ext-auth-service","msg":"no token present. request is not authorized","version":"0.35.2","x-request-id":"70439884-2d47-41b4-902f-08273b0b326b"}
    

Failed to watch a portal resource

When you create a portal, the portal watches several custom resources such as ApiDocs and PortalConfigs.

What's happening

After creating a Portal custom resource, you still cannot access the Gloo Platform Portal APIs, such as to list API products or review usage plans.

You do not have any PortalConfig internal resources in the same namespace as your Portal custom resources.

You might see error messages in the portal server logs, such as the following.

{"level":"error","ts":"2023-06-11T03:20:29.333Z","caller":"cache/reflector.go:138","msg":"pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:167: Failed to watch *v2.PortalConfig: failed to list *v2.PortalConfig: Get \"https://10.xx.xx.x:443/apis/internal.gloo.solo.io/v2/portalconfigs?resourceVersion=22969674\": dial tcp 10.xx.xx.x:443: i/o timeout\n","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:138\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:222\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:156\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:220\nk8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:56\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:73"}

{"level":"error","ts":"2023-06-11T03:20:38.901Z","caller":"cache/reflector.go:138","msg":"pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:167: Failed to watch *v2.ApiDoc: failed to list *v2.ApiDoc: Get \"https://10.xx.xx.x:443/apis/apimanagement.gloo.solo.io/v2/apidocs?resourceVersion=22970006\": dial tcp 10.xx.xx.x:443: i/o timeout\n","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:138\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:222\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:156\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:220\nk8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:56\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:73"}

Why it's happening

The Portal custom resource requires several components to be set up. You configure some of these components manually, such as a route table that bundles your APIs into API products, or policies that create usage plans. Other components are automatically configured for you when you install portal or configure resources, such as Kubernetes RBAC roles, ApiDocs, and an internal PortalConfig. If any of these resources are misconfigured or missing, your portal might not work.

How to fix it

  1. Confirm that the Kubernetes CRDs and API resources for Gloo Platform Portal are installed in your cluster. If not, upgrade to reinstall the Gloo Platform CRD Helm chart.

    kubectl get crds | grep -e portal -e apidoc
    kubectl api-resources | grep -e portal -e apidoc
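
    At minimum, the API groups from the watch errors earlier on this page must be present. As a quick sketch, you can check for the two CRDs directly:

    kubectl get crds apidocs.apimanagement.gloo.solo.io portalconfigs.internal.gloo.solo.io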
    
  2. Confirm that the portal cluster role has permissions to list the custom resources. If not, add the permissions. If the cluster role does not exist, you can create it or upgrade Gloo Platform with Portal enabled.

    kubectl get clusterrole gloo-mesh-portal-server-gloo-mesh -o yaml
    

    Example output of a correct cluster role:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        meta.helm.sh/release-name: gloo-platform
        meta.helm.sh/release-namespace: gloo-mesh-addons
      labels:
        app: gloo-mesh-portal-server
        app.kubernetes.io/managed-by: Helm
      name: gloo-mesh-portal-server-gloo-mesh
    rules:
    - apiGroups:
      - apimanagement.gloo.solo.io
      resources:
      - apidocs
      verbs:
      - get
      - list
      - watch
    - apiGroups:
      - internal.gloo.solo.io
      resources:
      - portalconfigs
      verbs:
      - get
      - list
      - watch
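
    You can also verify the permissions directly by impersonating the portal server's service account (shown in the cluster role binding in the next step). Both commands must print yes:

    kubectl auth can-i list apidocs.apimanagement.gloo.solo.io --as=system:serviceaccount:gloo-mesh-addons:gloo-mesh-portal-server
    kubectl auth can-i list portalconfigs.internal.gloo.solo.io --as=system:serviceaccount:gloo-mesh-addons:gloo-mesh-portal-server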
    
  3. Confirm that the portal cluster role binding for the cluster role refers to the correct service account of the portal server. If the cluster role binding refers to a service account that does not exist or that is in a different namespace than the portal server, upgrade to reinstall the Gloo Platform CRD Helm chart.

    kubectl get clusterrolebinding gloo-mesh-portal-server-gloo-mesh -o yaml
    

    Example output:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        meta.helm.sh/release-name: gloo-platform
        meta.helm.sh/release-namespace: gloo-mesh
      creationTimestamp: "2023-06-12T19:33:08Z"
      labels:
        app: gloo-mesh-portal-server
        app.kubernetes.io/managed-by: Helm
      name: gloo-mesh-portal-server-gloo-mesh
      resourceVersion: "24490979"
      uid: 44b10923-f27d-4a38-ae45-7fa8a8b83de9
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: gloo-mesh-portal-server-gloo-mesh
    subjects:
    - kind: ServiceAccount
      name: gloo-mesh-portal-server
      namespace: gloo-mesh-addons
    
  4. Review your Portal configuration.

    kubectl get portals -A
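
    Then review the status of a specific portal for errors. Replace the placeholders with your own values:

    kubectl get portal <portal-name> -n <namespace> -o yaml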
    
  5. Confirm the selected domains, portal backend selectors, usage plans, and APIs. Some common errors include the following:

    • Domains: The route table with the domain does not exist, or does not include any of the selected APIs.
    • Portal backend: The portal backend is in a different namespace than the Portal custom resource that you configured. Create the portal in the same namespace as the backend. To check where the portal backend is deployed, run kubectl get pods -A -l app=gloo-mesh-portal-server.
    • Usage plans: Not every selected API has a usage plan. Make sure to apply external auth and rate limiting policies to every API that you want to expose through the portal.
    • APIs: Not every selected API has the label that you use. Make sure that each route (not just the route table) has the label, as in the sketch after this list.
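
    For example, the following hypothetical RouteTable excerpt shows a route that carries both an API label and a usage plan label. The label keys and values here match the CORS policy example later on this page, but depend on your own setup.

    # Hypothetical excerpt: labels are set on each route, not only on the route table.
    spec:
      http:
        - name: tracks-api
          labels:
            api: tracks
            usagePlans: dev-portal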

The try-it-out feature in the developer portal does not work

When you create a frontend application for your developer portal, users can review a catalog of your API products. This catalog includes the OpenAPI docs, which might offer a Try it out feature, as shown in the following figure.

Figure: Screenshot of a Tracks OpenAPI doc with a Try it out button

What's happening

The Try it out feature does not work. The test request returns no response, even when properly formatted.

Why it's happening

Your portal configuration or OpenAPI spec might be incorrect. Additionally, you might have policies or other network security rules that prevent the feature from working.

How to fix it

  1. Debug your Portal server.
  2. Make sure that you set up the frontend application for the developer portal correctly. Common errors include the following:
    • Selecting the wrong or an incorrectly configured route table for your API products
    • Incorrect OpenAPI spec in the ApiDoc resource for your APIs
    • An error in the external authentication setup for users to access the portal, such as the OIDC provider
  3. In the ApiDoc for your API, check that the full URL for your OpenAPI spec is correct. For example, you might need to replace a hostname api.example.com with the full path and port, such as http://api.example.com:31080.
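
    For a quick reachability check, you can request the spec URL directly and confirm that it returns the OpenAPI document. The URL here is the hypothetical example from this step; replace it with the URL from your ApiDoc.

    curl -i http://api.example.com:31080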
  4. You might need to apply a CORS policy to allow certain headers and origins, such as localhost. The following example uses the Tracks API product that you configured in the Portal guide.
    kubectl apply -f - << EOF
    apiVersion: security.policy.gloo.solo.io/v2
    kind: CORSPolicy
    metadata:
      name: dev-portal-cors
      namespace: gloo-mesh-gateways
    spec:
      applyToRoutes:
        - route:
            labels:
              api: tracks
              usagePlans: dev-portal
      config:
        allowCredentials: true
        allowHeaders:
          - "Content-Type"
          - "api-key"
        allowMethods:
          - GET
          - POST
          - DELETE
          - PUT
          - OPTIONS
        allowOrigins:
          - prefix: http://localhost
    EOF
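
    To verify that the policy takes effect, you can send a CORS preflight request and check for Access-Control-Allow-* response headers. The gateway host and API path in this sketch are hypothetical; use your own values, and the origin that your portal frontend runs on.

    curl -i -X OPTIONS "http://${INGRESS_HOST}/trackapi/tracks" \
      -H "Origin: http://localhost:3000" \
      -H "Access-Control-Request-Method: GET" \
      -H "Access-Control-Request-Headers: api-key"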