Apply a policy and explore the UI
Apply a fault injection policy to the ratings service. Then, you can check out the policy and other Gloo resources in the Gloo UI.
Apply fault injection
Apply a fault injection policy to the ratings service to delay requests. A delay simulates an overloaded upstream service or network issues, and can help you build more resilient apps.
Verify that you can successfully send requests to the ratings app from within the mesh.
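For example, you might curl the ratings app from another in-mesh workload. This is a minimal sketch that assumes the standard Bookinfo setup, where the `reviews-v2` deployment runs in the `bookinfo` namespace and the ratings app listens on port 9080; `$REMOTE_CONTEXT1` stands in for your workload cluster's context. Adjust the names for your environment.

```sh
# Hedged example: send an in-mesh request to the ratings app from the reviews-v2 pod
kubectl exec --context $REMOTE_CONTEXT1 -n bookinfo deploy/reviews-v2 -c reviews \
  -- curl -sS ratings:9080/ratings/1
```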
Create a fault injection policy to delay responses from the ratings app by 10 seconds, and a route table specifically for testing access to the ratings app.
```yaml
kubectl apply --context $MGMT_CONTEXT -f- <<EOF
apiVersion: resilience.policy.gloo.solo.io/v2
kind: FaultInjectionPolicy
metadata:
  name: faultinjection-basic-delay
  namespace: bookinfo
spec:
  applyToRoutes:
    - route:
        labels:
          route: ratings
  config:
    delay:
      fixedDelay: 10s
---
apiVersion: networking.gloo.solo.io/v2
kind: RouteTable
metadata:
  name: ratings-rt
  namespace: bookinfo
spec:
  hosts:
    - ratings
  http:
    - forwardTo:
        destinations:
          - ref:
              name: ratings
              namespace: bookinfo
      labels:
        route: ratings
  workloadSelectors:
    - {}
EOF
```
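To confirm that both resources were created, you can list them by their fully qualified names, which are derived from the `apiVersion` values in the YAML above.

```sh
# Both resources should be listed in the bookinfo namespace
kubectl get faultinjectionpolicies.resilience.policy.gloo.solo.io,routetables.networking.gloo.solo.io \
  -n bookinfo --context $MGMT_CONTEXT
```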
Send another request to the ratings app by using the same method as in step 1. Note that this time, the app’s response is delayed due to the fault injection.
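To make the delay visible, you can wrap the same request in `time` (a sketch reusing the assumptions from the earlier example); the response should now take about 10 seconds longer than before.

```sh
# The fault injection policy adds a fixed 10s delay to the response
time kubectl exec --context $REMOTE_CONTEXT1 -n bookinfo deploy/reviews-v2 -c reviews \
  -- curl -sS ratings:9080/ratings/1
```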
Explore the UI
Use the Gloo UI to evaluate the health and efficiency of your service mesh. You can review the analysis and insights for your service mesh, such as recommendations to harden your Istio environment and steps to implement them.
Open the Gloo UI. The Gloo UI is served from the `gloo-mesh-ui` service on port 8090. You can connect by using the `meshctl` or `kubectl` CLIs.
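For example, use either CLI; these are the same commands that the Cilium section later in this guide uses.

```sh
# Option 1: open the dashboard with meshctl
meshctl dashboard

# Option 2: port-forward the gloo-mesh-ui service, then open http://localhost:8090
kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090
```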
Review the Overview page, which presents an at-a-glance look at the health of workspaces and clusters that make up your Gloo setup.
- In the Workspaces pane, you can review the workspace that was automatically created for you in your Gloo setup.
- In the Clusters pane, you can review the workload clusters that are currently connected to your Gloo setup.
Verify the details of the fault injection policy that you created in the previous section.
- Click the Policies tab to open the Policy Rules page.
- Click the name of your policy, `faultinjection-basic-delay`.
- Review the details of the policy, such as the `ratings` route that it applies to.
- Click View YAML.
- Scroll to the end of the YAML output to verify that the policy has a `state` of `ACCEPTED`.
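As an alternative to the View YAML button, you can run a similar check from the CLI. This is a minimal sketch; the fully qualified resource name comes from the policy's `apiVersion`.

```sh
# Print the policy YAML; the status at the end should report state: ACCEPTED
kubectl get faultinjectionpolicies.resilience.policy.gloo.solo.io faultinjection-basic-delay \
  -n bookinfo --context $MGMT_CONTEXT -o yaml
```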
Optional: Apply a Cilium network policy
If you installed the Solo distribution of the Cilium CNI, deploy a demo app to visualize Cilium network traffic in the Gloo UI, and try out a Cilium network policy to secure and control traffic flows between app microservices.
Deploy the Cilium Star Wars demo app in your cluster.
Create a namespace for the demo app, and include the starwars services in your service mesh.
```sh
kubectl create ns starwars
kubectl label ns starwars istio-injection=enabled
```
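To double-check that the namespace is labeled for sidecar injection:

```sh
# The LABELS column should include istio-injection=enabled
kubectl get ns starwars --show-labels
```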
Deploy the demo app, which includes `tiefighter`, `xwing`, and `deathstar` pods, and a `deathstar` service. The `tiefighter` and `deathstar` pods have the `org=empire` label, and the `xwing` pod has the `org=alliance` label.

```sh
kubectl -n starwars apply -f https://raw.githubusercontent.com/cilium/cilium/$CILIUM_VERSION/examples/minikube/http-sw-app.yaml
```
Verify that the demo pods and service are running.
```sh
kubectl get pods,svc -n starwars
```
Example output:
```
NAME                             READY   STATUS    RESTARTS   AGE
pod/deathstar-6fb5694d48-5hmds   1/1     Running   0          107s
pod/deathstar-6fb5694d48-fhf65   1/1     Running   0          107s
pod/tiefighter                   1/1     Running   0          107s
pod/xwing                        1/1     Running   0          107s

NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/deathstar   ClusterIP   10.96.110.8   <none>        80/TCP    107s
```
Generate some network traffic by sending requests from the xwing and tiefighter pods to the deathstar service.
```sh
kubectl exec xwing -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
kubectl exec tiefighter -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
```
Example output for both commands:
```
Ship landed
Ship landed
```
View network traffic information in the Gloo UI.
Open the Gloo UI.

- `meshctl`:

  ```sh
  meshctl dashboard
  ```

- `kubectl`:

  1. Port-forward the `gloo-mesh-ui` service on 8090.

     ```sh
     kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090
     ```

  2. Open your browser and connect to http://localhost:8090.
From the left-hand navigation, click Observability > Graph.
View the network graph for the Star Wars app. The graph is automatically generated based on which apps talk to each other.
Create a Cilium network policy that allows only apps that have the `org=empire` label to access the deathstar app. After you create this access policy, only the tiefighter pod can access the deathstar app.

```yaml
kubectl apply -f - << EOF
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
  namespace: starwars
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
    - fromEndpoints:
        - matchLabels:
            org: empire
      toPorts:
        - ports:
            - port: "80"
              protocol: TCP
EOF
```
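To confirm that the policy is in place, list the Cilium network policies in the namespace. `cnp` is the short name for `CiliumNetworkPolicy`, the same one that the cleanup section of this guide uses.

```sh
kubectl get cnp -n starwars
```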
Send another request from the tiefighter pod to the deathstar service.
```sh
kubectl exec tiefighter -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
```
The request succeeds, because requests from apps with the `org=empire` label are permitted. Example output:

```
Ship landed
```
Send another request from the xwing pod to the deathstar service.
```sh
kubectl exec xwing -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
```
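Note that this command hangs by design, as the next step explains. If you prefer it to return on its own, a hedged variant adds curl's standard `--max-time` flag:

```sh
# Give up after 10 seconds instead of waiting for the default timeout
kubectl exec xwing -n starwars -- curl -s --max-time 10 -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
```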
This request hangs, because requests from apps without the `org=empire` label are not permitted. No Layer 7 HTTP response code is returned, because the request is dropped at Layer 3. You can enter `control+c` to stop the curl request, or wait for it to time out.

You can also check the metrics to verify that the policy allowed or blocked requests.
- Open the Prometheus expression browser.

  - `meshctl`: For more information, see the CLI documentation.

    ```sh
    meshctl proxy prometheus
    ```

  - `kubectl`:

    1. Port-forward the `prometheus-server` deployment on 9091.

       ```sh
       kubectl -n gloo-mesh port-forward deploy/prometheus-server 9091
       ```

    2. Open your browser and connect to http://localhost:9091.
- In the Expression search bar, paste the following query and click Execute.

  ```
  rate(hubble_drop_total{destination_workload_id=~"deathstar.+"}[5m])
  ```
- Verify that you can see requests from the xwing pod to the deathstar service that were dropped because of the network policy.
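If you also have the Hubble CLI set up for your Cilium installation (an assumption; this guide does not install or configure it), you can inspect the dropped flows directly:

```sh
# Show dropped flows in the starwars namespace
hubble observe --namespace starwars --verdict DROPPED
```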
Next steps
Now that you have Gloo Mesh Enterprise and Istio up and running, check out some of the following resources to learn more about Gloo Mesh and expand your service mesh capabilities.
Gloo Mesh Enterprise:
- To see how Gloo Mesh helps you create a secure, multicluster service mesh, check out the multitenancy, federation, and isolation tutorial to configure Gloo Mesh for a multitenancy use case.
- Customize your Gloo Mesh installation with a Helm-based setup.
Istio: With Gloo Mesh Enterprise and Istio installed, you can use Gloo to manage your Istio service mesh resources. You don't need to configure any Istio resources directly going forward.
- Find out more about hardened Istio `n-4` version support built into Solo distributions of Istio.
- Review how Gloo Mesh Enterprise custom resources are automatically translated into Istio resources.
- Monitor and observe your Istio environment with Gloo Mesh Enterprise’s built-in telemetry tools.
- When it’s time to upgrade Istio, use Gloo Mesh Enterprise to upgrade managed Istio installations.
Cilium: If you installed the Solo distribution of the Cilium CNI in your clusters:
- Find out more about hardened Cilium `n-4` version support built into Solo distributions of Cilium images.
- Enable additional flow logs to monitor network traffic in your cluster.
- Import the Cilium Grafana dashboard to monitor the health of your Cilium CNI.
- Apply more Cilium network policies by following the Cilium docs.
Help and support:
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community Slack.
- Try out one of the Gloo workshops.
Cleanup
You can optionally remove the resources that you set up as part of this guide.
Delete the fault injection policy and testing route table.
```sh
kubectl delete FaultInjectionPolicy faultinjection-basic-delay -n bookinfo --context $MGMT_CONTEXT
kubectl delete RouteTable ratings-rt -n bookinfo --context $MGMT_CONTEXT
```
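To verify that both resources are gone, you can rerun the list command from earlier in this guide; it should return no resources.

```sh
kubectl get faultinjectionpolicies.resilience.policy.gloo.solo.io,routetables.networking.gloo.solo.io \
  -n bookinfo --context $MGMT_CONTEXT
```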
If you installed the Solo distribution of the Cilium CNI, remove the demo app resources and namespace.
```sh
kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/$CILIUM_VERSION/examples/minikube/http-sw-app.yaml -n starwars
kubectl delete cnp rule1 -n starwars
kubectl delete ns starwars
```
If you no longer need this quick-start Gloo Mesh environment, you can follow the steps in the uninstall guide.