Connect the Gloo UI to OpenShift Prometheus
Configure the Gloo UI to read metrics from OpenShift’s built-in Prometheus instance.
By default, the Gloo UI reads metrics from the Prometheus instance that is built into Solo Enterprise for Istio to populate the Gloo UI Graph and other views. If you run on OpenShift, you can instead configure the Gloo UI to read metrics from OpenShift's built-in Prometheus instance.
Before you begin
Follow the steps to forward metrics to the built-in OpenShift Prometheus instance so that the Gloo UI can read them from there.
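As one hedged illustration of what that prerequisite typically involves: OpenShift scrapes metrics from user workloads, such as Istio-enabled pods, only when user workload monitoring is turned on. In many OpenShift 4.x versions, this is done with a `cluster-monitoring-config` ConfigMap similar to the following sketch; check the OpenShift monitoring documentation for your cluster version before applying it.

```yaml
# Illustrative sketch only: enables user workload monitoring so that
# workload metrics become queryable through thanos-querier.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true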
Single cluster
Get the current values of the Helm release for your Solo Enterprise for Istio installation. Note that your Helm release might have a different name.
```sh
helm get values gloo-platform -n gloo-mesh -o yaml > gloo-single.yaml
open gloo-single.yaml
```

In your Helm values file, add the following values.
```yaml
glooUi:
  # Default URL for OpenShift's built-in Prometheus for workload monitoring.
  prometheusUrl: https://thanos-querier.openshift-monitoring.svc:9091
  # The bearer token to access the Prometheus instance. This token is automatically extracted and mounted to the Gloo UI pod.
  prometheusBearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  # The public key that is used to authenticate with the Prometheus instance. This key is automatically mounted to the Gloo UI pod.
  prometheusCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  # Set to true to skip the validation of the Prometheus server TLS certificate.
  prometheusSkipTLSVerify: true
```

Upgrade your Helm release. Change the release name as needed.
```sh
helm upgrade gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  -f gloo-single.yaml \
  --version ${MGMT_VERSION}
```

Verify that the Gloo UI restarts successfully.
```sh
kubectl get pods -n gloo-mesh | grep ui
```

Open the Gloo UI. The Gloo UI is served from the `gloo-mesh-ui` service on port 8090. You can connect by using the `meshctl` or `kubectl` CLIs.

- meshctl: For more information, see the CLI documentation.

  ```sh
  meshctl dashboard
  ```

- kubectl:

  1. Port-forward the `gloo-mesh-ui` service on 8090.

     ```sh
     kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090
     ```

  2. Open your browser and connect to http://localhost:8090.
Send a request to the sample app.

```sh
curl -vik http://www.example.com:80/productpage --resolve www.example.com:80:$INGRESS_GW_ADDRESS
```

In the Gloo UI, navigate to Graph and verify that the Gloo UI Graph is populated for that request.
Optional: Now that you set up the Gloo UI to use the OpenShift Prometheus instance, disable the Prometheus instance that is built into Solo Enterprise for Istio.
- In your Helm values file, add the following values.

  ```yaml
  prometheus:
    enabled: false
  ```

- Upgrade your Helm release. Change the release name as needed.

  ```sh
  helm upgrade gloo-platform gloo-platform/gloo-platform \
    --namespace gloo-mesh \
    -f gloo-single.yaml \
    --version ${MGMT_VERSION}
  ```

- Verify that the Prometheus pod is removed.

  ```sh
  kubectl get pods -n gloo-mesh
  ```
Multicluster
Get the current Helm values of the management plane release.
```sh
helm get values gloo-platform -n gloo-mesh -o yaml --kube-context ${context1} > mgmt-server.yaml
open mgmt-server.yaml
```

In your Helm values file for the management plane, add the following values.
```yaml
glooUi:
  # Default URL for OpenShift's built-in Prometheus for workload monitoring.
  prometheusUrl: https://thanos-querier.openshift-monitoring.svc:9091
  # DO NOT OVERWRITE. The bearer token to access the Prometheus instance. This token is automatically extracted and mounted to the Gloo UI pod.
  prometheusBearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
  # DO NOT OVERWRITE. The public key that is used to authenticate with the Prometheus instance. This key is automatically mounted to the Gloo UI pod.
  prometheusCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  # Set to true to skip the validation of the Prometheus server TLS certificate.
  prometheusSkipTLSVerify: true
```

Upgrade the Helm release with the updated values.
```sh
helm upgrade gloo-platform gloo-platform/gloo-platform \
  --kube-context ${context1} \
  --namespace gloo-mesh \
  -f mgmt-server.yaml \
  --version ${MGMT_VERSION}
```

Verify that the Gloo UI redeployed successfully.
```sh
kubectl get pods --context ${context1} -n gloo-mesh | grep ui
```

Open the Gloo UI. The Gloo UI is served from the `gloo-mesh-ui` service on port 8090 in the cluster where the management plane is deployed. You can connect by using the `meshctl` or `kubectl` CLIs.

- meshctl: For more information, see the CLI documentation.

  ```sh
  meshctl dashboard --kube-context ${context1}
  ```

- kubectl:

  1. Port-forward the `gloo-mesh-ui` service on 8090.

     ```sh
     kubectl port-forward -n gloo-mesh --context ${context1} svc/gloo-mesh-ui 8090:8090
     ```

  2. Open your browser and connect to http://localhost:8090.
Send a request to the sample app.

```sh
curl -vik http://www.example.com:80/productpage --resolve www.example.com:80:$INGRESS_GW_ADDRESS
```

In the Gloo UI, navigate to Graph and verify that the Gloo UI Graph is populated for that request.
Optional: Now that you set up the Gloo UI to use the OpenShift Prometheus instance, disable the Prometheus instance that is built into Solo Enterprise for Istio.
In the Helm values file for the management plane release, add the following values.
```yaml
prometheus:
  enabled: false
```

Upgrade the Helm release with the updated values.
```sh
helm upgrade gloo-platform gloo-platform/gloo-platform \
  --kube-context ${context1} \
  --namespace gloo-mesh \
  -f mgmt-server.yaml \
  --version ${MGMT_VERSION}
```

Verify that the Prometheus pod is removed.
```sh
kubectl get pods -n gloo-mesh --context ${context1}
```