Verify in-cluster routing

Before you apply a policy, verify that the productpage service can route to the ratings-v1 and ratings-v2 services within the same service mesh in cluster1.

  1. In a separate tab in your terminal, expose the Bookinfo product page so that you can open it from your local host.

    1. Enable port-forwarding on the product page deployment.
        kubectl -n bookinfo port-forward deployment/productpage-v1 9080:9080
        
    2. Open your browser to http://localhost:9080/productpage?u=normal.
  2. Refresh the page a few times to see the stars in the Book ratings column change. Black stars indicate a response from ratings-v2, no stars indicates ratings-v1, and red stars indicate ratings-v3.

    Figure: Bookinfo product page UI

Apply fault injection

Apply a fault injection policy to the ratings service to delay requests. A delay simulates an overloaded upstream service or network issues, and can help you verify that your apps are resilient to these failures.

  1. Create a temporary curl or debug pod and use it to send a request to the ratings app. The request succeeds because no access policy is applied yet.
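
    For example, you might run a short-lived curl pod. The curlimages/curl image and the /ratings/1 path are illustrative assumptions based on the standard Bookinfo ratings API, not commands from this guide:

      kubectl run -it --rm curl-tester -n bookinfo --image=curlimages/curl --restart=Never -- \
        curl -s http://ratings:9080/ratings/1

    Without the fault injection policy, the ratings app responds immediately.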

  2. Create a fault injection policy to delay responses from the ratings app by 10 seconds, and a route table specifically for testing access to the ratings app.

    kubectl apply -f- <<EOF
    apiVersion: resilience.policy.gloo.solo.io/v2
    kind: FaultInjectionPolicy
    metadata:
      name: faultinjection-basic-delay
      namespace: bookinfo
    spec:
      applyToRoutes:
        - route:
            labels:
              route: ratings
      config:
        delay:
          fixedDelay: 10s
    ---
    apiVersion: networking.gloo.solo.io/v2
    kind: RouteTable
    metadata:
      name: ratings-rt
      namespace: bookinfo
    spec:
      hosts:
      - ratings
      http:
      - forwardTo:
          destinations:
          - ref:
              name: ratings
              namespace: bookinfo
        labels:
          route: ratings
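      # An empty selector ({}) applies this route table to all workloads.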
      workloadSelectors:
      - {}
    EOF
      
  3. Send another request to the ratings app by using the same method as in step 1. Note that this time, the app’s response is delayed due to the fault injection.
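
    To observe the delay, you can time the request with curl's built-in timing output. This sketch reuses the illustrative curl pod from step 1:

      kubectl run -it --rm curl-tester -n bookinfo --image=curlimages/curl --restart=Never -- \
        curl -s -o /dev/null -w "HTTP %{http_code} in %{time_total}s\n" http://ratings:9080/ratings/1

    With the policy applied, the reported total time is a little over 10 seconds.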

Explore the UI

Use the Gloo UI to evaluate the health and efficiency of your service mesh. You can review analysis and insights for your service mesh, such as recommendations to harden your Istio environment and the steps to implement them.

  1. Open the Gloo UI. The Gloo UI is served from the gloo-mesh-ui service on port 8090. You can connect by using the meshctl or kubectl CLIs.
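
    For example, use either of the connection methods that are shown in detail later in this guide:

      # Option 1: Use meshctl to open the dashboard.
      meshctl dashboard

      # Option 2: Port-forward the gloo-mesh-ui service, then open http://localhost:8090.
      kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090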

  2. Review the Dashboard page, which presents an at-a-glance look at the health of workspaces and clusters that make up your Gloo setup.

    • In the Workspaces pane, you can review the workspace that was automatically created for you in your Gloo setup.
    • In the Clusters pane, you can review the workload clusters that are currently connected to your Gloo setup.
    Figure: Overview UI screenshot
  3. Verify the details of the fault injection policy that you created in the previous section.

    1. Click the Resources tab to open the Solo resources page.
    2. In the row for your policy, faultinjection-basic-delay, click View Policy.
    3. Review the details of the policy, such as the ratings route that it applies to.
    4. Click View YAML.
    5. Scroll to the end of the YAML output to verify that the policy has a state of ACCEPTED.
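
      If you prefer the command line, you can retrieve the same YAML, including the status section, with kubectl:

        kubectl get faultinjectionpolicy faultinjection-basic-delay -n bookinfo -o yaml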

Optional: Apply a Cilium network policy

If you installed the Solo distribution of the Cilium CNI, deploy a demo app to visualize Cilium network traffic in the Gloo UI, and try out a Cilium network policy to secure and control traffic flows between app microservices.

  1. Deploy the Cilium Star Wars demo app in your cluster.

    1. Create a namespace for the demo app, and include the starwars services in your service mesh.

        kubectl create ns starwars
        kubectl label ns starwars istio-injection=enabled
        
    2. Deploy the demo app, which includes tiefighter, xwing, and deathstar pods, and a deathstar service. The tiefighter and deathstar pods have the org=empire label, and the xwing pod has the org=alliance label.

        kubectl -n starwars apply -f https://raw.githubusercontent.com/cilium/cilium/$CILIUM_VERSION/examples/minikube/http-sw-app.yaml
        
    3. Verify that the demo pods and service are running.

        kubectl get pods,svc -n starwars
        

      Example output:

      NAME                             READY   STATUS    RESTARTS   AGE
      pod/deathstar-6fb5694d48-5hmds   1/1     Running   0          107s
      pod/deathstar-6fb5694d48-fhf65   1/1     Running   0          107s
      pod/tiefighter                   1/1     Running   0          107s
      pod/xwing                        1/1     Running   0          107s
      
      NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
      service/deathstar    ClusterIP   10.96.110.8   <none>        80/TCP    107s
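
      The network policy that you create later in this section matches pods by their org and class labels. To view these labels, you can add label columns to the pod listing; the -L flag is standard kubectl:

        kubectl get pods -n starwars -L org,class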
        
  2. Generate some network traffic by sending requests from the xwing and tiefighter pods to the deathstar service.

    kubectl exec xwing -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
    kubectl exec tiefighter -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
      

    Example output for both commands:

    Ship landed
    Ship landed
      
  3. View network traffic information in the Gloo UI.

    1. Open the Gloo UI.

      • meshctl:
          meshctl dashboard
          
      • kubectl:
        1. Port-forward the gloo-mesh-ui service on 8090.
            kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090
            
        2. Open your browser and connect to http://localhost:8090.
    2. From the left-hand navigation, click Observability > Graph.

    3. View the network graph for the Star Wars app. The graph is automatically generated based on which apps talk to each other.

      Figure: Gloo UI network graph for the Star Wars app
  4. Create a Cilium network policy that allows only apps that have the org=empire label to access the deathstar app. After you create this access policy, only the tiefighter pod can access the deathstar app.

    kubectl apply -f - << EOF
    apiVersion: "cilium.io/v2"
    kind: CiliumNetworkPolicy
    metadata:
      name: "rule1"
      namespace: starwars
    spec:
      description: "L3-L4 policy to restrict deathstar access to empire ships only"
      endpointSelector:
        matchLabels:
          org: empire
          class: deathstar
      ingress:
      - fromEndpoints:
        - matchLabels:
            org: empire
        toPorts:
        - ports:
          - port: "80"
            protocol: TCP
    EOF
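
    To confirm that the policy is in place, list Cilium network policies by using the cnp short name, the same name that the cleanup steps use later:

      kubectl get cnp -n starwars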
      
  5. Send another request from the tiefighter pod to the deathstar service.

      kubectl exec tiefighter -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
      

    The request succeeds, because requests from apps with the org=empire label are permitted. Example output:

      Ship landed
      
  6. Send another request from the xwing pod to the deathstar service.

      kubectl exec xwing -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
      

    This request hangs, because requests from apps without the org=empire label are not permitted. No Layer 7 HTTP response code is returned, because the request is dropped at Layer 3. You can press Ctrl+C to stop the curl request, or wait for it to time out.
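
    To avoid waiting for the full timeout, you can bound the request yourself. The --max-time flag is standard curl and stops the command after 10 seconds:

      kubectl exec xwing -n starwars -- curl -s --max-time 10 -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing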

  7. You can also check Prometheus metrics to verify whether the policy allowed or blocked requests.

    1. Open the Prometheus expression browser.
      • meshctl: For more information, see the CLI documentation.
          meshctl proxy prometheus
          
      • kubectl:
        1. Port-forward the prometheus-server deployment on 9091.
            kubectl -n gloo-mesh port-forward deploy/prometheus-server 9091
            
        2. Open your browser and connect to http://localhost:9091.
    2. In the Expression search bar, paste the following query and click Execute.
        rate(hubble_drop_total{destination_workload_id=~"deathstar.+"}[5m])
        
    3. Verify that you can see requests from the xwing pod to the deathstar service that were dropped because of the network policy.

Next steps

Now that you have Gloo Mesh Enterprise and Istio up and running, check out some of the following resources to learn more about Gloo Mesh and expand your service mesh capabilities.

Gloo Mesh Enterprise:

Istio: You can now use Gloo to manage your Istio service mesh resources. You don’t need to configure any Istio resources directly going forward.

Cilium: If you installed the Solo distribution of the Cilium CNI in your cluster:

Help and support:

Cleanup

You can optionally remove the resources that you set up as part of this guide.

  1. Delete the fault injection policy and testing route table.

    kubectl delete FaultInjectionPolicy faultinjection-basic-delay -n bookinfo
    kubectl delete RouteTable ratings-rt -n bookinfo
      
  2. If you installed the Solo distribution of the Cilium CNI, remove the demo app resources and namespace.

    kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/$CILIUM_VERSION/examples/minikube/http-sw-app.yaml -n starwars
    kubectl delete cnp rule1 -n starwars
    kubectl delete ns starwars
      
  3. If you no longer need this quick-start Gloo Mesh environment, you can follow the steps in the uninstall guide.