Single cluster
Deploy Gloo Mesh Core to gain valuable insights into your Istio service mesh.
Gloo Mesh Core deploys alongside your Istio installation in a single-cluster environment, and gives you instant insights into your Istio service mesh through a custom dashboard. You can follow this guide to quickly get started with Gloo Mesh Core. To learn more about the benefits and architecture, see About. To customize your installation with Helm instead, see the advanced installation guide.
Before you begin
Install the following command-line (CLI) tools.
- kubectl, the Kubernetes command line tool. Download the kubectl version that is within one minor version of the Kubernetes clusters you plan to use.
- meshctl, the Solo command line tool.
  curl -sL https://run.solo.io/meshctl/install | GLOO_MESH_VERSION=v2.6.0-beta2 sh -
  export PATH=$HOME/.gloo-mesh/bin:$PATH
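The export above only affects the current shell session. A quick way to confirm that the meshctl install directory made it onto your PATH (a sketch, assuming the default install location $HOME/.gloo-mesh/bin):

```shell
# Check whether the default meshctl install directory is on PATH.
export PATH=$HOME/.gloo-mesh/bin:$PATH
case ":$PATH:" in
  *":$HOME/.gloo-mesh/bin:"*) echo "meshctl dir on PATH" ;;
  *) echo "meshctl dir missing from PATH" ;;
esac
```

To make the change permanent, add the export line to your shell profile, such as ~/.bashrc or ~/.zshrc.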
Create or use an existing Kubernetes cluster, and save the cluster name in an environment variable.
- The cluster name must be lowercase and alphanumeric with no special characters except hyphens (-), and must begin with a letter (not a number).
- Cilium CNI: To use the Solo distribution of the Cilium CNI with Gloo Mesh Core, follow the steps in Install the Solo distribution of the Cilium CNI to prepare your clusters. Note that if you plan to use Gloo Mesh Core and Istio in an EKS environment, see Considerations for running Cilium and Istio on EKS.
export CLUSTER_NAME=<cluster_name>
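To catch a bad name before you install anything, you can check the proposed value against the rules above. This helper function is hypothetical, not part of Gloo Mesh Core:

```shell
# Hypothetical helper: validate a cluster name against the rules above:
# lowercase, alphanumeric plus hyphens only, beginning with a letter.
valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9-]*$'
}

valid_cluster_name "demo-cluster-1" && echo "demo-cluster-1: ok"
valid_cluster_name "1cluster"       || echo "1cluster: invalid (starts with a number)"
valid_cluster_name "My_Cluster"     || echo "My_Cluster: invalid (uppercase or underscore)"
```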
Set your Gloo Mesh Core license key as an environment variable. If you do not have one, contact an account representative. If you prefer to specify license keys in a secret instead, see Licensing. To check your license's validity, you can run meshctl license check --key $(echo ${GLOO_MESH_CORE_LICENSE_KEY} | base64 -w0).

export GLOO_MESH_CORE_LICENSE_KEY=<license_key>
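The license check expects the key as a single base64 line; with GNU coreutils base64, -w0 disables the default 76-column line wrapping so even a long key is emitted unwrapped. An illustration with a placeholder value, not a real license key:

```shell
# Placeholder value only: -w0 prints the base64 encoding on one line.
printf 'example-license-key' | base64 -w0
# → ZXhhbXBsZS1saWNlbnNlLWtleQ==
```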
Create a YAML file with the following values to configure the TLS connection between the Gloo management server and agent. The following example uses My token as the relay identity token value, but you can use any string value. The relay token is used by the Gloo agent when establishing the first connection to the Gloo management server. Initial trust is established only when the relay identity token that the agent presents matches the relay token that the Gloo management server uses; the Gloo agent and management server then proceed to establish a simple TLS connection. In a simple TLS setup, only the management server presents a certificate to authenticate its identity. The identity of the agent is not verified.

cat > values.yaml <<EOF
glooMgmtServer:
  extraEnvs:
    RELAY_DISABLE_CLIENT_CERTIFICATE_AUTHENTICATION:
      value: "true"
    RELAY_TOKEN:
      value: "My token"
glooAgent:
  extraEnvs:
    RELAY_DISABLE_SERVER_CERTIFICATE_VALIDATION:
      value: "true"
    RELAY_TOKEN:
      value: "My token"
EOF
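Because the relay token acts as a shared secret for establishing initial trust, a random value is a safer choice than the "My token" placeholder. A sketch, assuming openssl is available:

```shell
# Generate a random 32-character hex token to use in place of "My token"
# in both the glooMgmtServer and glooAgent RELAY_TOKEN values.
RELAY_TOKEN=$(openssl rand -hex 16)
printf '%s' "$RELAY_TOKEN" | grep -Eq '^[0-9a-f]{32}$' && echo "token generated"
```

Remember to set the same token value for both the management server and the agent, since initial trust is established only when the two values match.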
Install Gloo Mesh Core
Install all Gloo Mesh Core components in the same cluster as your Istio service mesh.
Install Gloo Mesh Core in your cluster. This command uses a basic profile to create a gloo-mesh namespace and install the Gloo control and data plane components. For more information, check out the CLI install profiles.

Verify that your Gloo Mesh Core setup is correctly installed. If not, try debugging the relay connection. Note that this check might take a few seconds to verify that:
- Your Gloo product license is valid and current.
- The Gloo CRDs are installed at the correct version.
- The management plane pods in the management cluster are running and healthy.
- The Gloo agent is running and connected to the management server.
meshctl check
Example output:
🟢 License status

 INFO  gloo-mesh-core enterprise license expiration is 25 Aug 24 10:38 CDT
 INFO  No GraphQL license module found for any product

🟢 CRD version check

🟢 Gloo deployment status

Namespace | Name                           | Ready | Status
gloo-mesh | gloo-mesh-mgmt-server          | 1/1   | Healthy
gloo-mesh | gloo-mesh-redis                | 1/1   | Healthy
gloo-mesh | gloo-mesh-ui                   | 1/1   | Healthy
gloo-mesh | gloo-telemetry-collector-agent | 3/3   | Healthy
gloo-mesh | prometheus-server              | 1/1   | Healthy

🟢 Mgmt server connectivity to workload agents

Cluster | Registered | Connected Pod
test    | true       | gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv

Connected Pod                                    | Clusters
gloo-mesh/gloo-mesh-mgmt-server-558cddbbd7-rf2hv | 1
Deploy Istio
Check whether an Istio control plane already exists.
kubectl get pods -n istio-system
Deploy a sample app
To analyze your service mesh with Gloo Mesh Core, be sure to include your services in the mesh.
- If you already deployed apps that you want to include in the mesh, you can run the following command to label the service namespace for Istio sidecar injection.
kubectl label ns <namespace> istio-injection=enabled
- If you don’t have any apps yet, you can deploy Bookinfo, the Istio sample app.
- Create the bookinfo namespace and label it for Istio injection so that the services become part of the service mesh.
  kubectl create ns bookinfo
  kubectl label ns bookinfo istio-injection=enabled
- Deploy the Bookinfo app.
  # deploy bookinfo application components for all versions less than v3
  kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.20.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'app,version notin (v3)'
  # deploy an updated product page with extra container utilities such as 'curl' and 'netcat'
  kubectl -n bookinfo apply -f https://raw.githubusercontent.com/solo-io/gloo-mesh-use-cases/main/policy-demo/productpage-with-curl.yaml
  # deploy all bookinfo service accounts
  kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.20.1/samples/bookinfo/platform/kube/bookinfo.yaml -l 'account'
- Verify that the Bookinfo app deployed successfully.
  kubectl get pods -n bookinfo
  kubectl get svc -n bookinfo
Explore the UI
Use the Gloo UI to evaluate the health and efficiency of your service mesh. You can review the analysis and insights for your service mesh, such as recommendations to harden your Istio environment and steps to implement them.
Launch the dashboard
Open the Gloo UI. The Gloo UI is served from the gloo-mesh-ui service on port 8090. You can connect by using the meshctl or kubectl CLIs.

Review your Dashboard for an at-a-glance overview of your Gloo Mesh Core environment. Environment insights, health, status, inventories, security, and more are summarized in the following cards:
- Analysis and Insights: Gloo Mesh Core recommendations for how to improve your Istio setup, and if installed, your Cilium setup.
- Gloo, Istio, and Cilium health: A status check of the Gloo Mesh Core, Istio, and if installed, Cilium installations in your cluster.
- Certificates Expiry: Validity timelines for your root and intermediate Istio certificates.
- Cluster Services: Inventory of services in your Gloo Mesh Core setup, and whether those services are in a service mesh or not.
- Istio FIPS: FIPS compliance checks for the istiod control plane and Istio data plane workloads.
- Zero Trust: Number of service mesh workloads that receive only mutual TLS (mTLS)-encrypted traffic, and number of external services that are accessed from the mesh.
If you installed the Cilium CNI, click the Cilium tab in the Gloo, Istio, and Cilium health card. Verify that the Solo distribution of the Cilium CNI was discovered.
Check insights
Review the insights for your environment. Gloo Mesh Core comes with an insights engine that automatically analyzes your Istio and Cilium setups for health issues. Then, Gloo shares these issues along with recommendations to harden your Istio and Cilium setups. The insights give you a checklist to address issues that might otherwise be hard to detect across your environment.
On the Analysis and Insights card of the dashboard, you can quickly see a summary of the insights for your environment, including how many insights are available at each severity level, and the type of insight.
View the list of insights by clicking the Details button, or go to the Insights page.
On the Insights page, you can view recommendations to harden your Istio setup (and Cilium setup, if installed), along with steps to implement them in your environment. Gloo Mesh Core analyzes your setup, and returns individual insights that contain information about errors and warnings in your environment, best practices you can use to improve your configuration and security, and more.
On an insight that you want to resolve, click Details. The details modal shows more data about the insight, such as the time when it was last observed in your environment, and if applicable, the extended settings or configuration that the insight applies to.
Click the Target YAML tab to see the resource file that the insight references, and click the View Resolution Steps tab to see guidance such as steps for fixing warnings and errors in your resource configuration or recommendations for improving your security and setup.
Optional: Apply a Cilium network policy
If you installed the Solo distribution of the Cilium CNI, deploy a demo app to visualize Cilium network traffic in the Gloo UI, and try out a Cilium network policy to secure and control traffic flows between app microservices.
Deploy the Cilium Star Wars demo app in your cluster.
Create a namespace for the demo app, and include the starwars services in your service mesh.
kubectl create ns starwars
kubectl label ns starwars istio-injection=enabled
Deploy the demo app, which includes tiefighter, xwing, and deathstar pods, and a deathstar service. The tiefighter and deathstar pods have the org=empire label, and the xwing pod has the org=alliance label.

kubectl -n starwars apply -f https://raw.githubusercontent.com/cilium/cilium/$CILIUM_VERSION/examples/minikube/http-sw-app.yaml
Verify that the demo pods and service are running.
kubectl get pods,svc -n starwars
Example output:
NAME                             READY   STATUS    RESTARTS   AGE
pod/deathstar-6fb5694d48-5hmds   1/1     Running   0          107s
pod/deathstar-6fb5694d48-fhf65   1/1     Running   0          107s
pod/tiefighter                   1/1     Running   0          107s
pod/xwing                        1/1     Running   0          107s

NAME                TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/deathstar   ClusterIP   10.96.110.8   <none>        80/TCP    107s
Generate some network traffic by sending requests from the xwing and tiefighter pods to the deathstar service.
kubectl exec xwing -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
kubectl exec tiefighter -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
Example output for both commands:
Ship landed
Ship landed
View network traffic information in the Gloo UI.
Open the Gloo UI.

- meshctl:
  meshctl dashboard
- kubectl:
  - Port-forward the gloo-mesh-ui service on 8090.
    kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090
  - Open your browser and connect to http://localhost:8090.
From the left-hand navigation, click Observability > Graph.
View the network graph for the Star Wars app. The graph is automatically generated based on which apps talk to each other.
Create a Cilium network policy that allows only apps that have the org=empire label to access the deathstar app. After you create this access policy, only the tiefighter pod can access the deathstar app.

kubectl apply -f - << EOF
apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "rule1"
  namespace: starwars
spec:
  description: "L3-L4 policy to restrict deathstar access to empire ships only"
  endpointSelector:
    matchLabels:
      org: empire
      class: deathstar
  ingress:
  - fromEndpoints:
    - matchLabels:
        org: empire
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
EOF
Send another request from the tiefighter pod to the deathstar service.
kubectl exec tiefighter -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
The request succeeds, because requests from apps with the org=empire label are permitted. Example output:

Ship landed
Send another request from the xwing pod to the deathstar service.
kubectl exec xwing -n starwars -- curl -s -XPOST deathstar.starwars.svc.cluster.local/v1/request-landing
This request hangs, because requests from apps without the org=empire label are not permitted. No Layer 7 HTTP response code is returned, because the request is dropped on Layer 3. You can enter control+c to stop the curl request, or wait for it to time out.

You can also check the metrics to verify that the policy allowed or blocked requests.
- Open the Prometheus expression browser.
  - meshctl: For more information, see the CLI documentation.
    meshctl proxy prometheus
  - kubectl:
    - Port-forward the prometheus-server deployment on 9091.
      kubectl -n gloo-mesh port-forward deploy/prometheus-server 9091
    - Open your browser and connect to localhost:9091/.
- In the Expression search bar, paste the following query and click Execute.
rate(hubble_drop_total{destination_workload_id=~"deathstar.+"}[5m])
- Verify that you can see requests from the xwing pod to the deathstar service that were dropped because of the network policy.
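As a worked example of what this query returns (illustrative numbers, not output from the demo): rate() over a [5m] window is the average per-second increase of the counter, so if hubble_drop_total for the deathstar workload rose by 30 during the 300-second window, the result is 0.1 drops per second:

```shell
# Worked arithmetic for rate(hubble_drop_total{...}[5m]):
# counter increase over the window, divided by the window length in seconds.
awk 'BEGIN { printf "%.1f drops per second\n", 30 / 300 }'
# → 0.1 drops per second
```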
Next steps
Now that you have Gloo Mesh Core and Istio up and running, check out some of the following resources to learn more about Gloo Mesh Core and expand your service mesh capabilities.
Istio:
- Find out more about hardened Istio n-4 version support built into Solo distributions of Istio.
- Check out the Istio docs to configure and deploy Istio routing resources.
- Monitor and observe your Istio environment with Gloo Mesh Core’s built-in telemetry tools.
- When it’s time to upgrade Istio, use Gloo Mesh Core to upgrade managed Istio installations.
Cilium: If you installed the Solo distribution of the Cilium CNI in your clusters:
- Find out more about hardened Cilium n-4 version support built into Solo distributions of Cilium images.
- Enable additional flow logs to monitor network traffic in your cluster.
- Import the Cilium Grafana dashboard to monitor the health of your Cilium CNI.
- Apply more Cilium network policies by following the Cilium docs.
Gloo Mesh Core:
- Customize your Gloo Mesh Core installation with a Helm-based setup.
Help and support:
- Talk to an expert to get advice or build out a proof of concept.
- Join the #gloo-mesh channel in the Solo.io community slack.
- Try out one of the Gloo workshops.
Cleanup
If you installed the Solo distribution of the Cilium CNI, remove the demo app resources and namespace.
kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/$CILIUM_VERSION/examples/minikube/http-sw-app.yaml -n starwars
kubectl delete cnp rule1 -n starwars
kubectl delete ns starwars
If you no longer need this quick-start Gloo Mesh Core environment, you can follow the steps in the uninstall guide.