The information in this documentation applies to users who want to use Gloo Gateway proxies with the Kubernetes Gateway API. If you want to use the Gloo Edge API instead, see the Gloo Gateway (Gloo Edge API) documentation.
Set up the UI
Install the Gloo UI to get an at-a-glance view of the configuration, health, and compliance status of your Gloo Gateway setup and the workloads in your cluster.
To learn more about the features of the Gloo UI, see About the Gloo UI.
- In single cluster setups, you can install the Gloo UI in the same cluster alongside your Gloo Gateway installation.
- In multicluster setups, you can enable the Gloo UI relay architecture components that help you relay metrics, logs, and insights from each cluster to the Gloo UI component. This observability setup lets you use the Gloo UI as a single pane of glass for all your clusters. To learn more about the relay architecture, its components, and how they communicate with each other, see Relay architecture in the Gloo Mesh documentation.
The Gloo UI is an Enterprise-only feature that requires a Gloo Gateway Enterprise license.
Before you begin
Follow the Get started guide to install Gloo Gateway, set up a gateway resource, and deploy the httpbin sample app.
Get the external address of the gateway and save it in an environment variable.
export INGRESS_GW_ADDRESS=$(kubectl get svc -n gloo-system gloo-proxy-http -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
echo $INGRESS_GW_ADDRESS
Alternatively, if your environment does not assign an external load balancer address, such as in a local test cluster, port-forward the gateway proxy instead.
kubectl port-forward deployment/gloo-proxy-http -n gloo-system 8080:8080
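If you use the port-forward option, you can point the gateway address variable at the forwarded port so that the later request commands work unchanged. A minimal sketch, assuming the port-forward from the previous command is running:
export INGRESS_GW_ADDRESS=localhost
curl -i http://$INGRESS_GW_ADDRESS:8080/headers -H "host: www.example.com"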
Set up the Gloo UI
Single cluster
Use these instructions to install the Gloo UI in the same cluster as Gloo Gateway. The Gloo UI analyzes your Gloo Gateway setup and provides metrics and insights.
Set the name of your cluster and your Gloo Gateway license key as environment variables.
export CLUSTER_NAME=<cluster-name>
export GLOO_GATEWAY_LICENSE_KEY=<license-key>
Add the Helm repo for the Gloo UI.
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update
Install the custom resources for the Gloo UI.
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
  --namespace=gloo-system \
  --version=2.7.0 \
  --set installEnterpriseCrds=false
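Optionally, confirm that the custom resource definitions were registered before you continue. This quick check assumes that the Gloo CRDs all use API groups that end in solo.io:
kubectl get crds | grep solo.io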
Install the Gloo UI and configure it for Gloo Gateway.
helm upgrade -i gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-system \
  --version=2.7.0 \
  -f - <<EOF
common:
  adminNamespace: "gloo-system"
  cluster: $CLUSTER_NAME
featureGates:
  insightsConfiguration: true
glooInsightsEngine:
  enabled: true
glooAnalyzer:
  enabled: true
glooUi:
  enabled: true
licensing:
  glooGatewayLicenseKey: $GLOO_GATEWAY_LICENSE_KEY
prometheus:
  enabled: true
telemetryCollector:
  enabled: true
  mode: deployment
  replicaCount: 1
EOF
Verify that the Gloo UI components are successfully installed.
kubectl get pods -n gloo-system
Example output:
NAME                                        READY   STATUS    RESTARTS   AGE
extauth-f7695bf7f-f6dkt                     1/1     Running   0          10m
gloo-587b79d556-tpvfj                       1/1     Running   0          10m
gloo-mesh-ui-66db8d9584-kgjld               3/3     Running   0          72m
gloo-telemetry-collector-68b8cf6f49-zhx87   1/1     Running   0          57m
prometheus-server-7484d8bfd-tx5s4           2/2     Running   0          72m
rate-limit-557dcb857f-9zq2t                 1/1     Running   0          10m
redis-5d6c6bcd4-cnmbm                       1/1     Running   0          10m
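If you script this setup, you can wait for the UI deployment to become ready instead of polling the pod status manually, for example:
kubectl rollout status deployment/gloo-mesh-ui -n gloo-system --timeout=120s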
Send a few requests to the httpbin app.
for i in {1..10}; do curl -i http://$INGRESS_GW_ADDRESS:8080/headers \
  -H "host: www.example.com"; done
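Each request should return a 200 OK response. Because the httpbin /headers endpoint echoes the request headers, the response body resembles the following sketch; the exact headers and JSON formatting depend on the httpbin version that you deployed:
HTTP/1.1 200 OK
content-type: application/json

{
  "headers": {
    "Accept": [
      "*/*"
    ],
    "Host": [
      "www.example.com"
    ]
  }
}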
Open the Gloo UI.
Port-forward the Gloo UI pod.
kubectl port-forward deployment/gloo-mesh-ui -n gloo-system 8090
Open the Gloo UI dashboard.
open http://localhost:8090/dashboard
Go to Observability > Graph to see the Gloo UI Graph. Select your cluster from the Cluster drop-down list, and the httpbin and gloo-system namespaces from the Namespace drop-down list. Verify that you see requests from the gateway proxy to the httpbin app. Note that it might take a few seconds for the graph to show the requests that you sent.
Multicluster
If you have a multicluster setup, such as when you use Gloo Gateway as an ingress gateway for a multicluster service mesh, you can set up the Gloo UI to capture metrics, logs, and insights for all of your clusters.
To capture telemetry data across multiple clusters, the Gloo UI uses a relay architecture: a management server that is typically installed in the same cluster as your Gloo Gateway installation, and an agent that is installed in every cluster that you want to collect telemetry data from.
The following guide assumes a two-cluster setup. Cluster1 serves as the cluster that you install the relay management components for the Gloo UI into, and cluster2 serves as a cluster that you want to collect telemetry data from. If you have more clusters that you want to collect telemetry data from, repeat the agent setup steps in each of those clusters.
Set the names and kubeconfig contexts of your clusters, and your Gloo Gateway license key, as environment variables. In this example, CLUSTER_NAME1 serves as the management cluster into which you install the relay management components. This is typically the same cluster where you installed Gloo Gateway, but it can be any other cluster. CLUSTER_NAME2 is a cluster from which you want to collect telemetry data. Telemetry data that is collected in this cluster is automatically sent to the management cluster.
export CLUSTER_NAME1=<cluster-name1>
export CLUSTER_NAME2=<cluster-name2>
export CLUSTER_CONTEXT1=<cluster-context1>
export CLUSTER_CONTEXT2=<cluster-context2>
export GLOO_GATEWAY_LICENSE_KEY=<license-key>
Add the Helm repo for the Gloo UI.
helm repo add gloo-platform https://storage.googleapis.com/gloo-platform/helm-charts
helm repo update
Install the custom resources for the Gloo UI in each of your clusters.
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
  --kube-context $CLUSTER_CONTEXT1 \
  --namespace=gloo-system \
  --version=2.7.0 \
  --set installEnterpriseCrds=false
helm upgrade -i gloo-platform-crds gloo-platform/gloo-platform-crds \
  --kube-context $CLUSTER_CONTEXT2 \
  --namespace=gloo-system \
  --version=2.7.0 \
  --set installEnterpriseCrds=false
In the cluster where you plan to install the Gloo UI components for managing the telemetry data relay, create one KubernetesCluster resource for each cluster that you want to collect telemetry data from. In this example command, you create a KubernetesCluster resource that references cluster2 in the management cluster, cluster1.
kubectl apply --context $CLUSTER_CONTEXT1 -f- <<EOF
apiVersion: admin.gloo.solo.io/v2
kind: KubernetesCluster
metadata:
  name: ${CLUSTER_NAME2}
  namespace: gloo-system
spec:
  clusterDomain: cluster.local
EOF
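To confirm that the registration resource exists in the management cluster, you can list the KubernetesCluster resources, for example:
kubectl get kubernetesclusters -n gloo-system --context $CLUSTER_CONTEXT1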
Install the Gloo UI components, including the management server, in one of your clusters. This cluster serves as the management cluster for the telemetry data relay. You typically use the same cluster that Gloo Gateway is installed in, but you can also use a different cluster.
helm upgrade -i gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-system \
  --kube-context $CLUSTER_CONTEXT1 \
  --version=2.7.0 \
  -f - <<EOF
common:
  adminNamespace: "gloo-system"
  cluster: $CLUSTER_NAME1
featureGates:
  insightsConfiguration: true
glooInsightsEngine:
  enabled: true
  runAsSidecar: false
glooAnalyzer:
  enabled: true
glooUi:
  enabled: true
licensing:
  glooGatewayLicenseKey: $GLOO_GATEWAY_LICENSE_KEY
prometheus:
  enabled: true
redis:
  deployment:
    enabled: true
telemetryCollector:
  enabled: true
  mode: deployment
  replicaCount: 1
telemetryGateway:
  enabled: true
glooMgmtServer:
  enabled: true
EOF
Verify that the Gloo UI components are successfully installed.
kubectl get pods -n gloo-system --context $CLUSTER_CONTEXT1
Example output:
NAME                                        READY   STATUS    RESTARTS   AGE
gloo-mesh-mgmt-server-65b8f4b6cc-fcv27      1/1     Running   0          2m31s
gloo-mesh-redis-5485f9f785-jq7n5            1/1     Running   0          10m
gloo-mesh-ui-b8b46f698-2wdzt                3/3     Running   0          58s
gloo-proxy-http-86554f88d4-ql7bn            1/1     Running   0          2d
gloo-telemetry-collector-68b8cf6f49-hd7hj   1/1     Running   0          10m
gloo-telemetry-gateway-85456cb66c-mghb7     1/1     Running   0          10m
prometheus-server-7484d8bfd-bqzcc           2/2     Running   0          10m
Save the external address and port that your cloud provider assigned to the gloo-mesh-mgmt-server service. The gloo-mesh-agent in each cluster accesses this address via a secure connection.
export MGMT_SERVER_NETWORKING_DOMAIN=$(kubectl get svc -n gloo-system gloo-mesh-mgmt-server --context $CLUSTER_CONTEXT1 -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
export MGMT_SERVER_NETWORKING_PORT=$(kubectl get svc -n gloo-system gloo-mesh-mgmt-server --context $CLUSTER_CONTEXT1 -o jsonpath='{.spec.ports[?(@.name=="grpc")].port}')
export MGMT_SERVER_NETWORKING_ADDRESS=${MGMT_SERVER_NETWORKING_DOMAIN}:${MGMT_SERVER_NETWORKING_PORT}
echo $MGMT_SERVER_NETWORKING_ADDRESS
Save the external address and port that your cloud provider assigned to the Gloo OpenTelemetry (OTel) gateway service. The OTel collector agents in each cluster send metrics to this address.
export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-system gloo-telemetry-gateway --context $CLUSTER_CONTEXT1 -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
export TELEMETRY_GATEWAY_PORT=$(kubectl get svc -n gloo-system gloo-telemetry-gateway --context $CLUSTER_CONTEXT1 -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
echo $TELEMETRY_GATEWAY_ADDRESS
Get the value of the root CA certificate from the management server and create a secret in every other cluster in your setup.
kubectl get secret relay-root-tls-secret -n gloo-system --context $CLUSTER_CONTEXT1 -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
kubectl create secret generic relay-root-tls-secret -n gloo-system --context $CLUSTER_CONTEXT2 --from-file ca.crt=ca.crt
rm ca.crt
Get the identity token from the management server and create a secret in every other cluster in your setup.
kubectl get secret relay-identity-token-secret -n gloo-system --context $CLUSTER_CONTEXT1 -o jsonpath='{.data.token}' | base64 -d > token
kubectl create secret generic relay-identity-token-secret -n gloo-system --context $CLUSTER_CONTEXT2 --from-file token=token
rm token
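Before you install the agent, you can optionally confirm that both relay secrets now exist in the workload cluster, for example:
kubectl get secrets -n gloo-system --context $CLUSTER_CONTEXT2 | grep relay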
Enable the Gloo UI agent and telemetry pipeline in each cluster that you want to collect telemetry data from.
helm upgrade -i gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-system \
  --kube-context $CLUSTER_CONTEXT2 \
  --version=2.7.0 \
  --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS \
  --set glooAgent.relay.serverAddress=$MGMT_SERVER_NETWORKING_ADDRESS \
  -f - <<EOF
common:
  cluster: $CLUSTER_NAME2
  adminNamespace: "gloo-system"
glooAgent:
  enabled: true
glooAnalyzer:
  enabled: true
telemetryCollector:
  enabled: true
  mode: deployment
  replicaCount: 1
EOF
Verify that the relay and telemetry components that the Gloo UI requires are successfully installed.
kubectl get pods -n gloo-system --context $CLUSTER_CONTEXT2
Example output:
NAME                                        READY   STATUS    RESTARTS   AGE
gloo-mesh-agent-6c88546cc6-wjhwl            2/2     Running   0          91m
gloo-telemetry-collector-68b8cf6f49-27gpq   1/1     Running   0          86m
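To further confirm that the agent established the relay connection to the management server, you can search the agent logs for relay-related messages. This check is a sketch rather than part of the documented steps, and the exact log lines vary by version:
kubectl logs deployment/gloo-mesh-agent -n gloo-system --context $CLUSTER_CONTEXT2 | grep -i relay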
Next
Continue by exploring the features of the Gloo UI.