Sometimes, you might create Gloo resources and expect a certain result, such as a policy applying to traffic or a gateway listening for traffic. If you do not get the expected result, try the following general debugging steps.

  1. Check the Gloo resources in your management and remote clusters, such as by listing them with `kubectl get`.
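For example, you might list the main Gloo custom resources in each cluster. This is a sketch: the resource kinds shown are common Gloo Platform CRDs, and `$MGMT_CONTEXT` and `$REMote_CONTEXT` stand in for your own kubeconfig contexts.

```shell
# List workspace resources in the management cluster.
kubectl get workspaces,workspacesettings -A --context $MGMT_CONTEXT

# List networking resources in a remote cluster.
kubectl get virtualgateways,routetables -A --context $REMOTE_CONTEXT
```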

  2. Describe the resource, and look at the global status, events, and other areas for more information.

      kubectl describe virtualgateway $GATEWAY_NAME -n $NAMESPACE
      
  3. If you see an error or warning in the global status:

    1. Launch the Gloo UI, find the resource in the Resources > Solo page, and click View YAML. You can review workspace and cluster-specific details about the status in the UI.
    2. If the resource seems to work, such as a policy taking effect, even though the status shows as unhealthy, you might have stale state in Redis. Restart the Redis pod and check whether the health status returns to normal.
  4. Verify that the resource is in the workspace that you expect it to be in. You can check the resource’s namespace against the namespaces that are included in the workspace resource on the management cluster.

      kubectl describe workspace -n gloo-mesh --context $MGMT_CONTEXT
      

    In the output, check the Status sections to make sure that the workspace includes each cluster and namespace that you want to be part of the workspace.

      Status:
        Clusters:
          Name:  cluster1
          Namespaces:
            bookinfo
            default
            gloo-mesh
        ...
      
  5. Modify any workspaces that might have conflicting namespaces, such as the default workspace, because namespaces can belong to only one workspace. For more information about how workspace conflicts can impact your setup, review the concept docs.

  6. Verify that related resources are in the same workspace, or are exported and imported appropriately. For example, your virtual gateway must be in the same workspace as the route table and policy that you want it to work with. Common workspace errors include the following:

    • Importing workspace-b in the WorkspaceSettings of workspace-a, but not exporting to workspace-a in the WorkspaceSettings of workspace-b.
    • Mismatching namespaces or resources for import and export. For example, you might scope exporting to a particular app-a namespace, but the resources you meant to export are in a different app-b namespace.
    • Sharing only some of the resources that you need. For example, you might want to delegate routes from one workspace to a route table in another workspace. You set up import and export rules for the route tables, but still don’t get the delegated routes to work. This can happen when the backing destinations for those routes are not in your workspace. You must also import those Kubernetes services, Gloo virtual destinations, or Gloo external services.
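To illustrate the first pitfall, both sides of the relationship must be configured. The following sketch pairs two WorkspaceSettings, using hypothetical workspace names; the `importFrom` and `exportTo` fields are based on the Gloo Platform v2 API, so verify the exact schema against your version.

```yaml
# In workspace-a: import from workspace-b.
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
  name: workspace-a
  namespace: gloo-mesh
spec:
  importFrom:
  - workspaces:
    - name: workspace-b
---
# In workspace-b: export to workspace-a.
# Without this, the import above has no effect.
apiVersion: admin.gloo.solo.io/v2
kind: WorkspaceSettings
metadata:
  name: workspace-b
  namespace: gloo-mesh
spec:
  exportTo:
  - workspaces:
    - name: workspace-a
```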
  7. If applicable, verify that the ports are set up appropriately on the resource and the backing service. For example, if your virtual gateway listens on port 80, make sure that the Kubernetes service for the gateway deployment also exposes port 80.
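One way to compare the two is to print the port sections of each resource. The `$GATEWAY_SERVICE` variable is a placeholder for the name of the Kubernetes service that backs your gateway deployment.

```shell
# Print the listener ports on the virtual gateway.
kubectl get virtualgateway $GATEWAY_NAME -n $NAMESPACE -o yaml --context $REMOTE_CONTEXT | grep -A 3 "port"

# Print the ports that the backing Kubernetes service exposes.
kubectl get service $GATEWAY_SERVICE -n $NAMESPACE -o yaml --context $REMOTE_CONTEXT | grep -A 5 "ports:"
```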

  8. Check the logs of the management server in the management cluster for accepted or translated resource messages. You might find common translation errors, such as a resource missing from the expected namespace or cluster. Create the resource or correct the resource configuration, and try again.
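A hedged example of pulling those logs, assuming the default `gloo-mesh-mgmt-server` deployment name in the `gloo-mesh` namespace; your installation might use different names.

```shell
# Search the management server logs for translation errors and warnings.
kubectl logs deploy/gloo-mesh-mgmt-server -n gloo-mesh --context $MGMT_CONTEXT | grep -i -E "error|warn"
```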

  9. Check the logs of agent pods on the remote cluster that the resource is created in.
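For example, assuming the default `gloo-mesh-agent` deployment name, which might differ in your installation:

```shell
# Check the agent logs in the remote cluster where the resource is created.
kubectl logs deploy/gloo-mesh-agent -n gloo-mesh --context $REMOTE_CONTEXT
```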

  10. Check for the Gloo resource’s translated Istio resources in the same namespace. If the Istio resource exists, describe the resource and make sure that its configuration matches what you expect based on the Gloo resource configuration. If no Istio resource exists, try debugging your Gloo components. For example, your Gloo resources might not belong to the expected Gloo workspace, cluster, or namespace.
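For example, you can list the Istio networking resources in the namespace. The fully qualified names avoid clashes with similarly named resources from other APIs, such as the Kubernetes Gateway API.

```shell
# List Istio resources that Gloo translation might have created.
kubectl get gateways.networking.istio.io,virtualservices.networking.istio.io,destinationrules.networking.istio.io -n $NAMESPACE --context $REMOTE_CONTEXT
```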

  11. If you upgraded Gloo versions recently, make sure that you applied the CRDs as part of the upgrade.
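To spot-check the CRDs, you can list the ones from Solo API groups and compare their creation timestamps against your upgrade date.

```shell
# List the Gloo CRDs in the management cluster.
kubectl get crds --context $MGMT_CONTEXT | grep solo.io
```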

  12. If you continue to notice errors about resources being in a loop of re-creating or moving from healthy to unhealthy states, try restarting the Redis pod.
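A constant churn of statuses can indicate stale translation state in Redis. The following sketch assumes the default `gloo-mesh-redis` deployment name in the `gloo-mesh` namespace; verify yours with `kubectl get deployments -n gloo-mesh`.

```shell
# Restart the Redis deployment in the management cluster and wait for rollout.
kubectl rollout restart deployment gloo-mesh-redis -n gloo-mesh --context $MGMT_CONTEXT
kubectl rollout status deployment gloo-mesh-redis -n gloo-mesh --context $MGMT_CONTEXT
```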