Failed to watch a portal resource
Troubleshoot issues with the portal server watching other resources.
When you create a portal, the portal server watches several custom resources, such as ApiDocs and PortalConfigs.
What’s happening
After creating a Portal custom resource, you still cannot access the Gloo Platform Portal APIs, such as listing API products or reviewing usage plans.
You do not have any PortalConfig internal resources in the same namespace as your Portal custom resources.
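To confirm, list the PortalConfig and ApiDoc resources directly. A quick check, assuming your Portal custom resources are in the gloo-mesh namespace (adjust the namespace to match your setup):
kubectl get portalconfigs,apidocs -n gloo-mesh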
You might see error messages such as the following in the portal server logs.
{"level":"error","ts":"2023-06-11T03:20:29.333Z","caller":"cache/reflector.go:138","msg":"pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:167: Failed to watch *v2.PortalConfig: failed to list *v2.PortalConfig: Get \"https://10.xx.xx.x:443/apis/internal.gloo.solo.io/v2/portalconfigs?resourceVersion=22969674\": dial tcp 10.xx.xx.x:443: i/o timeout\n","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:138\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:222\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:156\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:220\nk8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:56\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:73"}
{"level":"error","ts":"2023-06-11T03:20:38.901Z","caller":"cache/reflector.go:138","msg":"pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:167: Failed to watch *v2.ApiDoc: failed to list *v2.ApiDoc: Get \"https://10.xx.xx.x:443/apis/apimanagement.gloo.solo.io/v2/apidocs?resourceVersion=22970006\": dial tcp 10.xx.xx.x:443: i/o timeout\n","stacktrace":"k8s.io/client-go/tools/cache.DefaultWatchErrorHandler\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:138\nk8s.io/client-go/tools/cache.(*Reflector).Run.func1\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:222\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:156\nk8s.io/client-go/tools/cache.(*Reflector).Run\n\t/root/go/pkg/mod/k8s.io/client-go@v0.24.8/tools/cache/reflector.go:220\nk8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:56\nk8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1\n\t/root/go/pkg/mod/k8s.io/apimachinery@v0.24.8/pkg/util/wait/wait.go:73"}
Why it’s happening
The Portal custom resource requires several components to be set up. Some of these components you configure manually, such as a route table that bundles your APIs into API products, or the policies that create usage plans. Other components are configured automatically when you install the portal or configure resources, such as Kubernetes RBAC roles, ApiDocs, and an internal PortalConfig. If any of these resources are misconfigured or missing, your portal might not work.
How to fix it
Confirm that the Kubernetes CRDs and API resources for Gloo Platform Portal are installed in your cluster. If not, upgrade to reinstall the Gloo Platform CRD Helm chart.
kubectl get crds -A | grep -e portal -e apidoc
kubectl api-resources | grep -e portal -e apidoc
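If the CRDs are missing, a Helm upgrade of the CRD chart reapplies them. A sketch, assuming the chart is installed as the gloo-platform-crds release in the gloo-mesh namespace from the gloo-platform Helm repo, where $GLOO_VERSION is your Gloo Platform version (substitute your own release name, namespace, and version):
helm upgrade gloo-platform-crds gloo-platform/gloo-platform-crds \
  --namespace gloo-mesh \
  --version $GLOO_VERSION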
Confirm that the portal cluster role has permissions to list the custom resources. If not, add the permissions. If the cluster role does not exist, you can create it or upgrade Gloo Platform with Portal enabled.
kubectl get clusterrole gloo-mesh-portal-server-gloo-mesh -o yaml
Example output of a correct cluster role:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    meta.helm.sh/release-name: gloo-platform
    meta.helm.sh/release-namespace: gloo-mesh
  labels:
    app: gloo-mesh-portal-server
    app.kubernetes.io/managed-by: Helm
  name: gloo-mesh-portal-server-gloo-mesh
rules:
- apiGroups:
  - apimanagement.gloo.solo.io
  resources:
  - apidocs
  - apischemadiscoveries
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - internal.gloo.solo.io
  resources:
  - portalconfigs
  verbs:
  - get
  - list
  - watch
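To verify the permissions without reading the rules manually, you can impersonate the portal server's service account with kubectl auth can-i. A quick check, assuming the default service account name gloo-mesh-portal-server in the gloo-mesh namespace (both are shown in the cluster role binding in the next step); each command prints yes when the permission is in place:
kubectl auth can-i list portalconfigs.internal.gloo.solo.io --as=system:serviceaccount:gloo-mesh:gloo-mesh-portal-server
kubectl auth can-i watch apidocs.apimanagement.gloo.solo.io --as=system:serviceaccount:gloo-mesh:gloo-mesh-portal-server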
Confirm that the cluster role binding for the portal cluster role refers to the correct service account of the portal server. If the cluster role binding refers to a service account that does not exist or that is in a different namespace than the portal server, upgrade to reinstall the Gloo Platform Helm chart.
kubectl get clusterrolebinding gloo-mesh-portal-server-gloo-mesh -o yaml
Example output:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    meta.helm.sh/release-name: gloo-platform
    meta.helm.sh/release-namespace: gloo-mesh
  creationTimestamp: "2023-06-12T19:33:08Z"
  labels:
    app: gloo-mesh-portal-server
    app.kubernetes.io/managed-by: Helm
  name: gloo-mesh-portal-server-gloo-mesh
  resourceVersion: "24490979"
  uid: 44b10923-f27d-4a38-ae45-7fa8a8b83de9
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gloo-mesh-portal-server-gloo-mesh
subjects:
- kind: ServiceAccount
  name: gloo-mesh-portal-server
  namespace: gloo-mesh
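Also confirm that the service account in the subjects list actually exists, and that the portal server pod runs with it. A sketch, assuming the portal server is deployed in the gloo-mesh namespace (adjust if you installed the portal add-on elsewhere):
kubectl get serviceaccount gloo-mesh-portal-server -n gloo-mesh
kubectl get pods -n gloo-mesh -l app=gloo-mesh-portal-server -o jsonpath='{.items[0].spec.serviceAccountName}'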
Review your Portal configuration.
kubectl get portals -A
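To inspect a specific portal's full spec and status, you can dump the resource as YAML; a sketch, assuming a portal named developer-portal in the gloo-mesh namespace (substitute the name and namespace from the previous output):
kubectl get portal developer-portal -n gloo-mesh -o yaml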
Confirm the selected domains, portal backend selectors, usage plans, and APIs. Some common errors include the following; a sample Portal configuration follows the list for reference.
- Domains: The route table with the domain does not exist, or does not include any of the selected APIs.
- Portal backend: The portal backend is in a different namespace than the Portal custom resource that you configured. Create the portal in the same namespace as the backend. To check where the portal backend is deployed, run the following command:
kubectl get pods -A -l app=gloo-mesh-portal-server
- Usage plans: Not every selected API has a usage plan. Make sure to apply external auth and rate limiting policies to every API that you want to expose through the portal.
- APIs: Not every selected API has the label that the portal selects. Make sure that each route (not just the route table) has the label.
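For reference, the following minimal Portal sketch ties these settings together. The names, namespace, domain, plan names, and labels are placeholder assumptions, and field names can vary across Gloo Platform versions, so compare against the Portal API reference for your release.
apiVersion: apimanagement.gloo.solo.io/v2
kind: Portal
metadata:
  name: developer-portal
  namespace: gloo-mesh       # create the portal in the same namespace as the portal backend
spec:
  domains:                   # must match a host on a route table that exposes your APIs
  - developer.example.com
  usagePlans:                # plan names that your external auth and rate limiting policies define
  - bronze
  apis:                      # label selector; each route (not just the route table) needs this label
  - labels:
      portal: developer-portal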