Policies

After applying a policy, you might notice unexpected behavior or, conversely, no effect on your traffic.

Before you begin, review the Policy enforcement concepts to learn more about how policies are applied, workspace import and export considerations, and which policies are supported for each type of gateway.

  1. Check your Gloo policy resources in the Gloo UI or your terminal.

    In the Gloo UI:
    1. Launch the Gloo UI, such as by running meshctl dashboard.
    2. From the menu, click Policies.
    3. Find and click your policy.
    4. Review the YAML configuration, applied routes and destinations, and other details.
    In your terminal:
    1. List your policy resources.
      kubectl get ratelimitserverconfigs,ratelimitconfigs,ratelimitserversettings,ratelimitclientconfigs,ratelimitpolicies,wasmdeploymentpolicies,externalendpoints,externalservices,activehealthcheckpolicies,accesslogpolicies,failoverpolicies,faultinjectionpolicies,listenerconnectionpolicies,loadbalancerpolicies,outlierdetectionpolicies,jwtpolicies,wafpolicies,retrytimeoutpolicies,accesspolicies,connectionpolicies,corspolicies,csrfpolicies,dlppolicies,extauthpolicies,extauthserver,mirrorpolicies,transformationpolicies,authorizationpolicies,headermanipulationpolicies,proxyprotocolpolicies,trimproxyconfigpolicies -A
      
    2. Describe and get the configuration details of the resource that you want to check. Replace the example namespace and policy with your own details.
      kubectl describe -n gloo-mesh-addons extauthpolicies my-ext-auth-policy
      kubectl get -n gloo-mesh-addons extauthpolicies my-ext-auth-policy -o yaml
      

  2. In your policy details, check for common issues, such as the following:

    • Any configuration error messages, including the messages in the following table
    • Underlying servers that must exist, such as the rate limit or external auth server configurations for those policies
    • Correctly selected routes, destinations, or workloads
    • Correctly imported or exported resources
    • Typos in resource labels, names, namespaces
    Message: conflicts with existing policy applied to route. Skipping applying <policy name>
    Description: For some policy types, such as JWT or header manipulation, only one policy can apply to a route at a time. If the policy selects several routes by a shared label, the policy still applies to the other routes that do not have a conflicting policy. The policy with the oldest creationTimestamp takes precedence.
    Steps to resolve: Revise the policies so that only one policy selects the route. To review which policies are applied to a route, check the route's status, such as in the Gloo UI.
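
    For example, the following sketch shows two header manipulation policies that select routes with the same hypothetical route: ratings label. Because only one policy of this type can apply to a route, the newer add-env-header policy is skipped for those routes. The policy names, namespace, headers, and label are placeholders for your own details.

      apiVersion: trafficcontrol.policy.gloo.solo.io/v2
      kind: HeaderManipulationPolicy
      metadata:
        name: add-team-header
        namespace: web-team
      spec:
        applyToRoutes:
        - route:
            labels:
              route: ratings
        config:
          appendRequestHeaders:
          - name: x-team
            value: web
      ---
      # Newer policy that selects the same route label. It is skipped for
      # those routes with a "conflicts with existing policy" message.
      apiVersion: trafficcontrol.policy.gloo.solo.io/v2
      kind: HeaderManipulationPolicy
      metadata:
        name: add-env-header
        namespace: web-team
      spec:
        applyToRoutes:
        - route:
            labels:
              route: ratings
        config:
          appendRequestHeaders:
          - name: x-env
            value: prod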
  3. For policies that select imported and exported resources: In general, Gloo workspaces are set up to give you control of your microservices. When it comes to sharing policies across workspaces, the following general rules apply:

    • Policies that select destinations (applyToDestinations) or workloads (applyToWorkloads) do not apply across workspaces, even if their underlying destinations are imported. This way, each workspace owner can control the client-side policies that are applied to their own destinations.
    • Policies that select routes (applyToRoutes) or the destinations of routes (applyToRouteDestination) do still apply across workspaces. This way, you control how your services are accessed via the routes, even if those routes are exported to other workspaces.
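
    For example, the following retry policy sketch selects routes, so it can also affect routes that are exported to other workspaces. If the policy instead used applyToDestinations, it would apply only within its own workspace, even for destinations that are imported elsewhere. The names and label are placeholders.

      apiVersion: resilience.policy.gloo.solo.io/v2
      kind: RetryTimeoutPolicy
      metadata:
        name: ratings-retry
        namespace: web-team
      spec:
        # Route selectors can apply across workspaces when the route is exported.
        applyToRoutes:
        - route:
            labels:
              route: ratings
        config:
          retries:
            attempts: 3
            perTryTimeout: 2s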

  4. Confirm that the policy selects the correct, healthy upstream service.

    For policies that select routes: Confirm that the underlying routing resources are healthy.

    1. Check the details of your policy to find what it applies to, such as routes with the usagePlans: dev-portal label in the following example.
      kubectl get ratelimitpolicy.trafficcontrol.policy.gloo.solo.io/tracks-rate-limit -o yaml
      
      ...
      spec:
        applyToRoutes:
        - route:
            labels:
              usagePlans: dev-portal
      
    2. Check the details of the route table that the route is in to confirm that the label matches and to identify the upstream that traffic for that route is forwarded to.
      kubectl get routetable.networking.gloo.solo.io/tracks-rt -n gloo-mesh-gateways -o yaml
      
      ...
      spec:
        http:
        - forwardTo:
            destinations:
            - port:
                number: 5000
              ref:
                name: tracks-rest-api
                namespace: tracks
            pathRewrite: /
          labels:
            usagePlans: dev-portal
          matchers:
          - uri:
              prefix: /
          name: tracks-api
      
    3. In the route table, check that you selected the right host and virtual gateway instance. Note: If the route table is delegated, the host and virtual gateway information is in the parent route table.
      ...
      spec:
        hosts:
        - api.example.com
        virtualGateways:
        - name: istio-ingressgateway
          namespace: gloo-mesh-gateways
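
      For example, a delegated setup might look like the following sketch, where a hypothetical parent route table owns the host and virtual gateway and delegates matching traffic to child route tables by label.

      apiVersion: networking.gloo.solo.io/v2
      kind: RouteTable
      metadata:
        name: parent-rt
        namespace: gloo-mesh-gateways
      spec:
        hosts:
        - api.example.com
        virtualGateways:
        - name: istio-ingressgateway
          namespace: gloo-mesh-gateways
        http:
        - matchers:
          - uri:
              prefix: /
          # Matching traffic is routed by the child route tables
          # that have this label.
          delegate:
            routeTables:
            - labels:
                table: tracks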
      
    4. Verify that the upstream for the route is running, such as in the following example.
      kubectl get pods -n tracks
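
      In the output, check that the pods are in a Running state. In a sidecar mesh, a READY count such as 2/2 indicates that both the app container and the Istio sidecar proxy are up. The pod name in this example output is hypothetical.

      NAME                               READY   STATUS    RESTARTS   AGE
      tracks-rest-api-7d4f9b6c5d-x2x7k   2/2     Running   0          3d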
      

    For policies that select destinations:

    1. From your policy configuration, find the selected destination and its kind, such as all virtual destinations in the following example.
      spec:
        applyToDestinations:
        - kind: VIRTUAL_DESTINATION
          selector: {}
      
    2. Confirm that the underlying virtual destinations, external services, or Kubernetes services are healthy. Review the details for any errors, and note the underlying workload for the destination.
      kubectl get virtualdestinations,externalservices,externalendpoints,services,externalworkloads -A
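
      To review one destination in more detail, you can get its full configuration and check the status section for reported errors. The resource name and namespace in this example are placeholders.

      kubectl get virtualdestination tracks-vd -n tracks -o yaml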
      
    3. Confirm that the underlying workloads are healthy and are part of the Istio service mesh when you use Gloo Mesh.
      kubectl get pods -A
      

    For policies that select workloads:

    1. From your policy configuration, find the selected workload and its label selector, such as app: ratings in the following example.
      spec:
        applyToWorkloads:
        - selector:
            labels:
              app: ratings
      
    2. Confirm that the underlying workloads are healthy, are part of the Istio service mesh if you use Gloo Mesh, and have an associated Kubernetes service or external service. If no workloads match the label selector from your policy, label the correct workload and try again.
      kubectl get pods -A -l app=ratings
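
      If the label is missing, you can add it to the workload's pod template so that the policy selects the workload. The following sketch assumes a hypothetical ratings-v1 deployment in the bookinfo namespace.

      kubectl patch deployment ratings-v1 -n bookinfo --type merge -p '{"spec":{"template":{"metadata":{"labels":{"app":"ratings"}}}}}'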
      

  5. Check the underlying Istio and Envoy resources that set up routing behavior in your Gloo environment. For Gloo Gateway, these resources are typically in the same namespace as your ingress gateway (such as gloo-mesh-gateways). For Gloo Mesh, these resources can be in all of the namespaces in your service mesh.

    1. Check the Istio virtual service for the host and route that you previously identified.
      kubectl get virtualservice -A
      kubectl get virtualservice <virtual-service-name> -n gloo-mesh-gateways -o yaml
      

      In the output, verify that the host, route, service, and gateway all match what you expect.

      gateways:
      - <virtual-gateway-name>
      hosts:
      - api.example.com
      http:
      - match:
        - sourceLabels:
            app: istio-ingressgateway
            istio: ingressgateway
            revision: 1-20
          uri:
            prefix: /trackapi/
        name: tracks-api-tracks-rt.gloo-mesh-gateways.cluster1-portal--api-example-com-rt.gloo-mesh-gateways.cluster1-portal
        rewrite:
          uri: /
        route:
        - destination:
            host: tracks-rest-api.tracks.svc.cluster.local
            port:
              number: 5000
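
      If the virtual service does not match what you expect, you can also run the Istio analyzer to check the namespace for common misconfigurations.

      istioctl analyze -n gloo-mesh-gateways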
      
    2. Check the matching Istio gateway.
      kubectl get gateways -A
      kubectl get gateway <virtual-gateway-name> -n gloo-mesh-gateways -o yaml
      

      In the output, confirm that the gateway, host, and protocol all match what you expect.

      ...
      spec:
        selector:
          app: istio-ingressgateway
          istio: ingressgateway
          revision: 1-20
        servers:
        - hosts:
          - api.example.com
          port:
            name: http-8080-api-example-com
            number: 8080
            protocol: HTTP
      
    3. Check the Envoy filter for the gateway.
      kubectl get envoyfilters -A
      kubectl get envoyfilter <istio-ingressgateway-pod-name> -n gloo-mesh-gateways -o yaml  
      

      In the output, verify the route configuration details, such as the HTTP route type, gateway context, route name, and Envoy filter details for the policy that you applied.

      ...
      spec:
        configPatches:
        - applyTo: HTTP_ROUTE
          match:
            context: GATEWAY
            routeConfiguration:
              vhost:
                route:
                  name: tracks-api-tracks-rt.gloo-mesh-gateways.cluster1-portal--api-example-com-rt.gloo-mesh-gateways.cluster1-portal
          patch:
            operation: MERGE
            value:
              route:
                rateLimits:
                - actions:
                  - genericKey:
                      descriptorValue: gloo-mesh-addons.usage-plans-gloo-mesh-addons-cluster1-portal-rate-limiter
                  - genericKey:
                      descriptorValue: solo.setDescriptor.uniqueValue
                  - requestHeaders:
                      descriptorKey: usagePlan
                      headerName: x-solo-plan
                      skipIfAbsent: true
                  - metadata:
                      descriptorKey: userId
                      metadataKey:
                        key: envoy.filters.http.ext_authz
                        path:
                        - key: userId
                  stage: 1
      
  6. Review the Envoy dashboard to check the filter order of the applied policies. For more information about filter order, see the Life of a Request in the Envoy docs.

    1. Get the names of the pods that run an Istio proxy, such as your Istio ingress gateway or a workload pod with an Istio-injected sidecar. The following example uses the default ingress gateway.
      kubectl get pods -n gloo-mesh-gateways
      
    2. Launch the Envoy dashboard for your gateway with the name of the pod that you previously retrieved.
      istioctl dashboard envoy -n gloo-mesh-gateways <istio-ingressgateway-pod-name>
      
    3. In your browser that opens to http://localhost:15000, click the config_dump command. Optionally, you can enter dynamic_listeners in the "The resource to dump" box to filter the output to the resource that you want to review.
    4. In the config dump, search for dynamic_listeners for the 8080 HTTP port.
    5. In the listener's filter_chains, review the order of the filters to find your policy. Typically, your policy's filter is part of the HTTP Connection Manager (envoy.filters.network.http_connection_manager).
    6. If your policy is not in the expected order, you might need to check for multiple, conflicting policies, or use a phase or stage setting to change the policy's order, if your policy type supports one. For an example, see the sketch after this list.
    7. If your policy is not in the filter chain, you might need to fix the policy configuration and re-apply the policy.
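
    For example, some policy types, such as transformation policies, support a phase setting. The following sketch assumes a hypothetical transformation policy; check the API reference for your policy type to confirm whether it supports a phase or stage field.

      apiVersion: trafficcontrol.policy.gloo.solo.io/v2
      kind: TransformationPolicy
      metadata:
        name: modify-response
        namespace: web-team
      spec:
        applyToRoutes:
        - route:
            labels:
              route: ratings
        config:
          # Run this transformation filter before external auth, instead of
          # in its default position in the filter chain.
          phase:
            preAuthz:
              priority: 0
          response:
            injaTemplate:
              headers:
                x-transformed:
                  text: "true"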

    Example filter chain output: In the following example, the rate limit filter runs after the external auth filter. The rate limit filter includes the configuration details that you set in the Gloo rate limit custom resources.

    "dynamic_listeners": [
     {
      "name": "0.0.0.0_8080",
      "active_state": {
       "version_info": "2023-04-12T16:51:47Z/9",
       "listener": {
        "@type": "type.googleapis.com/envoy.config.listener.v3.Listener",
        "name": "0.0.0.0_8080",
        "address": {
         "socket_address": {
          "address": "0.0.0.0",
          "port_value": 8080
         }
        },
        "filter_chains": [
         {
          "filters": [
           {
            "name": "istio_authn",
            "typed_config": {
             "@type": "type.googleapis.com/udpa.type.v1.TypedStruct",
             "type_url": "type.googleapis.com/io.istio.network.authn.Config"
            }
           },
           {
            "name": "envoy.filters.network.http_connection_manager",
            ...
            {
              "name": "envoy.filters.http.ext_authz",
              "typed_config": {
               "@type": "type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz",
               "grpc_service": {
                "envoy_grpc": {
                 "cluster_name": "outbound|8083||ext-auth-service.gloo-mesh-addons.svc.cluster.local",
                 "authority": "outbound_.8083_._.ext-auth-service.gloo-mesh-addons.svc.cluster.local"
                },
                "timeout": "2s"
               },
               "metadata_context_namespaces": [
                "envoy.filters.http.jwt_authn"
               ],
               "transport_api_version": "V3"
              }
             },
             {
              "name": "envoy.filters.http.ratelimit",
              "typed_config": {
               "@type": "type.googleapis.com/envoy.extensions.filters.http.ratelimit.v3.RateLimit",
               "domain": "solo.io",
               "stage": 1,
               "request_type": "both",
               "timeout": "0.100s",
               "rate_limit_service": {
                "grpc_service": {
                 "envoy_grpc": {
                  "cluster_name": "outbound|8083||rate-limiter.gloo-mesh-addons.svc.cluster.local",
                  "authority": "outbound_.8083_._.rate-limiter.gloo-mesh-addons.svc.cluster.local"
                 }
                },
                "transport_api_version": "V3"
               }
              }
             },
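
    You can also dump the same listener configuration from your terminal instead of the browser, such as with the following istioctl command and the name of the gateway pod that you previously retrieved.

      istioctl proxy-config listeners -n gloo-mesh-gateways <istio-ingressgateway-pod-name> --port 8080 -o json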