Set up the UI
Install the Gloo UI to get an at-a-glance view of the configuration, health, and compliance status of your Gloo Gateway setup and the workloads in your cluster.
To learn more about the features of the Gloo UI, see About the Gloo UI.
- In single cluster setups, you can install the Gloo UI in the same cluster alongside your Gloo Gateway installation.
- In multicluster setups, you can enable the Gloo UI relay architecture components that help you relay metrics, logs, and insights from each cluster to the Gloo UI component. This observability setup lets you use the Gloo UI as a single pane of glass for all your clusters. To learn more about the relay architecture, its components, and how they communicate with each other, see Relay architecture in the Gloo Mesh documentation.
The Gloo UI is an Enterprise-only feature that requires a Gloo Gateway Enterprise license.
Before you begin
Follow the Get started guide to install Gloo Gateway, set up a gateway resource, and deploy the httpbin sample app.
Get the external address of the gateway and save it in an environment variable.
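For example, if your gateway is exposed through a LoadBalancer service, you can look up its address as follows. This is a sketch: the service name `gloo-proxy-http` and the variable name `INGRESS_GW_ADDRESS` are assumptions based on the Get started guide, so adjust them to match your setup.

```sh
# Sketch: look up the external address of the gateway's LoadBalancer service.
# The service name (gloo-proxy-http) and variable name are assumptions; adjust as needed.
export INGRESS_GW_ADDRESS=$(kubectl get svc -n gloo-system gloo-proxy-http \
  -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
echo $INGRESS_GW_ADDRESS
```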
Set up the Gloo UI
Single cluster
Use these instructions to install the Gloo UI in the same cluster as Gloo Gateway. The Gloo UI analyzes your Gloo Gateway setup and provides metrics and insights to you.
Set the name of your cluster and your Gloo Gateway license key as environment variables.
```sh
export CLUSTER_NAME=<cluster-name>
export GLOO_GATEWAY_LICENSE_KEY=<license-key>
```

Add the Helm repo for the Gloo UI.
```sh
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update
```

Install the custom resources for the Gloo UI.
```sh
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
  --namespace=gloo-system \
  --version=2.7.7 \
  --set installEnterpriseCrds=false
```

Install the Gloo UI and configure it for Gloo Gateway. If you also installed an ambient mesh by using the Solo distribution of Istio, see the Gloo Gateway and ambient mesh tab.
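The exact installation command depends on the tab you follow in the Gloo documentation. As a minimal sketch, a single-cluster installation with the `gloo-platform` chart might look like the following; the value names shown here, such as `glooUi.enabled` and `licensing.glooGatewayLicenseKey`, are assumptions, so verify them against the chart's values reference for your version.

```sh
# A minimal sketch, not the exact documented command. The Helm values below
# (glooUi, glooAnalyzer, licensing.glooGatewayLicenseKey, prometheus,
# telemetryCollector) are assumptions; verify against the gloo-platform
# chart's values reference for version 2.7.7.
helm upgrade -i gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-system \
  --version=2.7.7 \
  --set common.cluster=$CLUSTER_NAME \
  --set common.adminNamespace=gloo-system \
  --set glooUi.enabled=true \
  --set glooAnalyzer.enabled=true \
  --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY \
  --set prometheus.enabled=true \
  --set telemetryCollector.enabled=true
```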
Verify that the Gloo UI components are successfully installed.
```sh
kubectl get pods -n gloo-system
```

Example output:
```
NAME                                        READY   STATUS    RESTARTS   AGE
extauth-f7695bf7f-f6dkt                     1/1     Running   0          10m
gloo-587b79d556-tpvfj                       1/1     Running   0          10m
gloo-mesh-ui-66db8d9584-kgjld               3/3     Running   0          72m
gloo-telemetry-collector-68b8cf6f49-zhx87   1/1     Running   0          57m
prometheus-server-7484d8bfd-tx5s4           2/2     Running   0          72m
rate-limit-557dcb857f-9zq2t                 1/1     Running   0          10m
redis-5d6c6bcd4-cnmbm                       1/1     Running   0          10m
```

Send a few requests to the httpbin app.
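For example, you can send requests through the gateway like this. The `INGRESS_GW_ADDRESS` variable, the port, and the host header are assumptions based on the Get started guide; adjust them to your route configuration.

```sh
# Sketch: generate a few requests through the gateway to the httpbin app.
# The variable, port (8080), and host header are assumptions from the
# Get started guide; adjust to your setup.
for i in {1..5}; do
  curl -i http://$INGRESS_GW_ADDRESS:8080/headers -H "host: www.example.com"
done
```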
Open the Gloo UI.
Port-forward the Gloo UI pod.
```sh
kubectl port-forward deployment/gloo-mesh-ui -n gloo-system 8090
```

Open the Gloo UI dashboard.
```sh
open http://localhost:8090/dashboard
```
Figure: Gloo UI dashboard
Go to Observability > Graph to see the Gloo UI Graph. Select your cluster from the Cluster drop-down list, and the `httpbin` and `gloo-system` namespaces from the Namespace drop-down list. Verify that you see requests from the gateway proxy to the httpbin app. Note that it might take a few seconds for the graph to show the requests that you sent.


Multicluster
If you have a multicluster setup, such as when you use Gloo Gateway as an ingress gateway to a multicluster mesh, you can set up the Gloo UI to capture metrics, logs, and insights for all of your clusters.
To capture telemetry data across multiple clusters, the Gloo UI uses a relay architecture that consists of a management server, which is typically installed in the same cluster as your Gloo Gateway installation, and agents that are installed in every cluster that you want to collect telemetry data from.
The following guide assumes a two-cluster setup: cluster1 serves as the cluster where you install the relay management components for the Gloo UI, and cluster2 serves as a cluster that you want to collect telemetry data from. If you have more clusters to collect telemetry data from, repeat the same steps in each of them.
In the clusters that you want to collect telemetry data from, you create a Gloo agent that performs service and mesh discovery. If you already installed the Gloo UI in a standalone cluster that you now want to add to a multicluster setup, you must first edit your existing Helm release for the Gloo UI to set glooUi.discovery.enabled to false. This disables discovery by the Gloo UI to prevent conflicts with discovery that the agent will now perform instead in the multicluster setup.
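For example, a standalone installation might be updated as follows. This is a sketch: the release name `gloo-platform` and the `--reuse-values` approach are assumptions, so match them to how you originally installed the Gloo UI.

```sh
# Sketch: disable Gloo UI discovery in an existing standalone installation.
# The release name (gloo-platform) is an assumption; use your actual release.
helm upgrade gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-system \
  --reuse-values \
  --set glooUi.discovery.enabled=false
```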
Set the names and kubeconfig contexts of your clusters, and your Gloo Gateway license key, as environment variables. In this example:
- `REMOTE_CLUSTER1` serves as the management cluster where you install the relay management components. This cluster is typically the same cluster where you installed Gloo Gateway. However, it can also be any other cluster.
- `REMOTE_CLUSTER2` is a cluster from which you want to collect telemetry data. Any collected telemetry data from this cluster is automatically sent to the management cluster.

```sh
export REMOTE_CLUSTER1=<cluster1-name>
export REMOTE_CLUSTER2=<cluster2-name>
export REMOTE_CONTEXT1=<cluster1-context>
export REMOTE_CONTEXT2=<cluster2-context>
export GLOO_GATEWAY_LICENSE_KEY=<license-key>
```

Add the Helm repo for the Gloo UI.
```sh
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
```

Install the custom resources for the Gloo UI in each of your clusters.
```sh
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
  --kube-context $REMOTE_CONTEXT1 \
  --namespace=gloo-system \
  --version=2.7.7 \
  --set installEnterpriseCrds=false
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
  --kube-context $REMOTE_CONTEXT2 \
  --namespace=gloo-system \
  --version=2.7.7 \
  --set installEnterpriseCrds=false
```

In the cluster where you plan to install the Gloo UI components for managing the telemetry data relay, create one KubernetesCluster resource for each cluster that you want to collect telemetry data from. In this example command, you create a KubernetesCluster resource that references cluster2 in the management cluster, cluster1.
```sh
kubectl apply --context $REMOTE_CONTEXT1 -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: KubernetesCluster
metadata:
  name: ${REMOTE_CLUSTER2}
  namespace: gloo-system
spec:
  clusterDomain: cluster.local
EOF
```

Install the Gloo UI components, including the management server, in one of your clusters. This cluster serves as the management cluster for the telemetry data relay. You typically use the same cluster that Gloo Gateway is installed in, but you can also install it in a different cluster. If you also installed an ambient mesh by using the Solo distribution of Istio, see the Gloo Gateway and ambient mesh tab.
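As with the single-cluster installation, the exact command comes from the tab you follow in the Gloo documentation. A minimal sketch of the management-plane installation might look like the following; value names such as `glooMgmtServer.enabled` and `telemetryGateway.enabled` are assumptions, so verify them against the `gloo-platform` chart's values reference.

```sh
# A minimal sketch, not the exact documented command. The Helm values below
# are assumptions; verify against the gloo-platform chart for version 2.7.7.
helm upgrade -i gloo-platform gloo-platform/gloo-platform \
  --kube-context $REMOTE_CONTEXT1 \
  --namespace gloo-system \
  --version=2.7.7 \
  --set common.cluster=$REMOTE_CLUSTER1 \
  --set common.adminNamespace=gloo-system \
  --set glooMgmtServer.enabled=true \
  --set glooUi.enabled=true \
  --set licensing.glooGatewayLicenseKey=$GLOO_GATEWAY_LICENSE_KEY \
  --set prometheus.enabled=true \
  --set telemetryGateway.enabled=true \
  --set telemetryCollector.enabled=true
```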
Verify that the Gloo UI components are successfully installed.
```sh
kubectl get pods -n gloo-system --context $REMOTE_CONTEXT1
```

Example output:
```
NAME                                        READY   STATUS    RESTARTS   AGE
gloo-mesh-mgmt-server-65b8f4b6cc-fcv27      1/1     Running   0          2m31s
gloo-mesh-redis-5485f9f785-jq7n5            1/1     Running   0          10m
gloo-mesh-ui-b8b46f698-2wdzt                3/3     Running   0          58s
gloo-proxy-http-86554f88d4-ql7bn            1/1     Running   0          2d
gloo-telemetry-collector-68b8cf6f49-hd7hj   1/1     Running   0          10m
gloo-telemetry-gateway-85456cb66c-mghb7     1/1     Running   0          10m
prometheus-server-7484d8bfd-bqzcc           2/2     Running   0          10m
```

Save the external address and port that your cloud provider assigned to the `gloo-mesh-mgmt-server` service. The `gloo-mesh-agent` agent in each cluster accesses this address via a secure connection.

```sh
export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-system gloo-mesh-mgmt-server --context $REMOTE_CONTEXT1 -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
export MGMT_SERVER_NETWORKING_PORT=$(kubectl get svc -n gloo-system gloo-mesh-mgmt-server --context $REMOTE_CONTEXT1 -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
echo $MGMT_SERVER_NETWORKING_ADDRESS
```

Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each cluster send metrics to this address.
```sh
export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-system gloo-telemetry-gateway --context $REMOTE_CONTEXT1 -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-system gloo-telemetry-gateway --context $REMOTE_CONTEXT1 -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
echo $TELEMETRY_GATEWAY_ADDRESS
```

Get the value of the root CA certificate from the management server and create a secret in every other cluster in your setup.
```sh
kubectl get secret relay-root-tls-secret -n gloo-system --context $REMOTE_CONTEXT1 -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
kubectl create secret generic relay-root-tls-secret -n gloo-system --context $REMOTE_CONTEXT2 --from-file ca.crt=ca.crt
rm ca.crt
```

Get the identity token from the management server and create a secret in every other cluster in your setup.
```sh
kubectl get secret relay-identity-token-secret -n gloo-system --context $REMOTE_CONTEXT1 -o jsonpath='{.data.token}' | base64 -d > token
kubectl create secret generic relay-identity-token-secret -n gloo-system --context $REMOTE_CONTEXT2 --from-file token=token
rm token
```

Enable the Gloo UI agent and telemetry pipeline in each cluster that you want to collect telemetry data from.
```sh
helm upgrade -i gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-system \
  --kube-context $REMOTE_CONTEXT2 \
  --version=2.7.7 \
  --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS \
  --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
  -f - <<EOF
common:
  cluster: $REMOTE_CLUSTER2
  adminNamespace: "gloo-system"
glooAgent:
  enabled: true
glooAnalyzer:
  enabled: true
telemetryCollector:
  enabled: true
  mode: deployment
  replicaCount: 1
EOF
```

Verify that the relay and telemetry components that the Gloo UI requires are successfully installed.
```sh
kubectl get pods -n gloo-system --context $REMOTE_CONTEXT2
```

Example output:
```
gloo-mesh-agent-6c88546cc6-wjhwl            2/2     Running   0          91m
gloo-telemetry-collector-68b8cf6f49-27gpq   1/1     Running   0          86m
```

Send a few requests to the httpbin app.
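As in the single-cluster steps, you can generate traffic through the gateway so that the graph has data to show. The `INGRESS_GW_ADDRESS` variable, port, and host header below are assumptions based on the Get started guide.

```sh
# Sketch: send traffic through the gateway in cluster1 to the httpbin app.
# The variable, port (8080), and host header are assumptions; adjust to your setup.
for i in {1..5}; do
  curl -i http://$INGRESS_GW_ADDRESS:8080/headers -H "host: www.example.com"
done
```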
Open the Gloo UI.
Port-forward the `gloo-mesh-ui` service on 8090.

```sh
kubectl port-forward -n gloo-system svc/gloo-mesh-ui 8090:8090 --context $REMOTE_CONTEXT1
```

Open your browser and connect to http://localhost:8090.
```sh
open http://localhost:8090/
```

Figure: Gloo UI dashboard

Go to Observability > Graph to see the Gloo UI Graph. Select your clusters from the Cluster drop-down list, and the `httpbin` and `gloo-system` namespaces from the Namespace drop-down list. Verify that you see requests from the gateway proxy to the httpbin app in cluster1. Note that it might take a few seconds for the graph to show the requests that you sent.


Next
Continue by exploring the features of the Gloo UI.