By default, the Gloo UI reads metrics from the built-in Prometheus instance to populate the Gloo UI graph and other observability data. If you run on OpenShift, you can instead configure the Gloo UI to read metrics from OpenShift's built-in Prometheus for workload monitoring, rather than from the Prometheus instance that is built into Gloo Mesh Core.

Before you begin

Follow the steps to forward metrics to the built-in OpenShift Prometheus instance so that the Gloo UI can read them from there.
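In addition, user workload monitoring must be enabled in your OpenShift cluster. If it is not, you can typically enable it with a ConfigMap similar to the following sketch. This reflects a default OpenShift 4.x setup and is not part of the Gloo configuration; check the OpenShift documentation for your version.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cluster-monitoring-config
      namespace: openshift-monitoring
    data:
      config.yaml: |
        # Starts the user workload monitoring stack alongside the platform monitoring stack.
        enableUserWorkload: true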

Single cluster

  1. Get the current values of the Helm release for your Gloo Mesh Core installation. Note that your Helm release might have a different name.

      helm get values gloo-platform -n gloo-mesh -o yaml > gloo-single.yaml
      open gloo-single.yaml
      
  2. In your Helm values file, add the following values.

      
    glooUI:
      # Default URL for OpenShift's built-in Prometheus for workload monitoring.
      prometheusUrl: https://thanos-querier.openshift-monitoring.svc:9091
      # DO NOT OVERWRITE. The bearer token to access the Prometheus instance. This token is automatically extracted and mounted to the Gloo UI pod.
      prometheusBearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      # DO NOT OVERWRITE. The CA certificate that is used to verify the Prometheus server's TLS certificate. This certificate is automatically mounted to the Gloo UI pod.
      prometheusCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      # Set to true to skip validation of the Prometheus server's TLS certificate.
      prometheusSkipTLSVerify: true
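     Note: The Gloo UI's service account must also be authorized to query the Thanos Querier endpoint. On OpenShift, this is typically done by granting the cluster-monitoring-view cluster role. The following sketch assumes that the service account is named gloo-mesh-ui; adjust the name and namespace to match your installation.

      # Grant the (assumed) gloo-mesh-ui service account read access to cluster metrics.
      oc adm policy add-cluster-role-to-user cluster-monitoring-view -z gloo-mesh-ui -n gloo-mesh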
      
  3. Upgrade your Helm release. Change the release name as needed.

      helm upgrade gloo-platform gloo-platform/gloo-platform \
      --namespace gloo-mesh \
      -f gloo-single.yaml \
      --version $GLOO_VERSION
      
  4. Verify that the Gloo UI restarts successfully.

      kubectl get pods -n gloo-mesh | grep ui
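     Example output. The pod name, ready count, and age vary by installation; the name shown here is hypothetical.

      gloo-mesh-ui-65bfd4b6f-8rjj2   3/3   Running   0   47s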
      
  5. Open the Gloo UI.

      meshctl dashboard
      
  6. Send a request to the Bookinfo sample app.

      curl -vik http://www.example.com:80/productpage --resolve www.example.com:80:$INGRESS_GW_IP
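     The INGRESS_GW_IP environment variable holds the external address of your ingress gateway. If it is not set, you can look it up with a command similar to the following sketch, which assumes that the gateway service is named istio-ingressgateway in the gloo-mesh-gateways namespace; adjust both names to your setup.

      export INGRESS_GW_IP=$(kubectl get svc istio-ingressgateway -n gloo-mesh-gateways -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
      echo $INGRESS_GW_IP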
      
  7. In the Gloo UI, go to Observability > Graph and verify that the graph is populated with data for the request that you sent.
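     To confirm that the Gloo UI can read metrics, you can also query the Thanos Querier directly from inside the Gloo UI pod by using the mounted token. This sketch assumes that the deployment is named gloo-mesh-ui and that curl is available in the container.

      kubectl exec -n gloo-mesh deploy/gloo-mesh-ui -- sh -c \
      'curl -sk -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
      "https://thanos-querier.openshift-monitoring.svc:9091/api/v1/query?query=istio_requests_total"'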

  8. Optional: Now that the Gloo UI uses the OpenShift Prometheus instance, you can disable the Prometheus instance that is built into Gloo Mesh Core.

    1. In your Helm values file, add the following values.
        
      prometheus:
        enabled: false
        
    2. Upgrade your Helm release. Change the release name as needed.

        helm upgrade gloo-platform gloo-platform/gloo-platform \
        --namespace gloo-mesh \
        -f gloo-single.yaml \
        --version $GLOO_VERSION
        
    3. Verify that the Prometheus pod is removed.

        kubectl get pods -n gloo-mesh
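        The output no longer includes a Prometheus pod. To check directly, you can filter the pod list; the command returns no results when the pod is gone.

        kubectl get pods -n gloo-mesh | grep prometheus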
        

Multicluster

  1. Get the current values of the Helm release for the management cluster. Note that your Helm release might have a different name.

      helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $MGMT_CONTEXT > mgmt-server.yaml
      open mgmt-server.yaml
      
  2. In your Helm values file for the management cluster, add the following values.

      
    glooUI:
      # Default URL for OpenShift's built-in Prometheus for workload monitoring.
      prometheusUrl: https://thanos-querier.openshift-monitoring.svc:9091
      # DO NOT OVERWRITE. The bearer token to access the Prometheus instance. This token is automatically extracted and mounted to the Gloo UI pod.
      prometheusBearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
      # DO NOT OVERWRITE. The CA certificate that is used to verify the Prometheus server's TLS certificate. This certificate is automatically mounted to the Gloo UI pod.
      prometheusCAFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      # Set to true to skip validation of the Prometheus server's TLS certificate.
      prometheusSkipTLSVerify: true
      
  3. Upgrade your Helm release in the management cluster. Change the release name as needed.

      helm upgrade gloo-platform gloo-platform/gloo-platform \
      --kube-context $MGMT_CONTEXT \
      --namespace gloo-mesh \
      -f mgmt-server.yaml \
      --version $GLOO_VERSION
      
  4. Verify that the Gloo UI redeploys successfully.

      kubectl get pods --context $MGMT_CONTEXT -n gloo-mesh | grep ui
      
  5. Open the Gloo UI.

      meshctl dashboard --kube-context $MGMT_CONTEXT
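     If you cannot use meshctl, you can port-forward the Gloo UI service instead. This sketch assumes that the service is named gloo-mesh-ui and serves the dashboard on port 8090; adjust as needed, then open http://localhost:8090 in your browser.

      kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090 --context $MGMT_CONTEXT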
      
  6. Send a request to the Bookinfo sample app.

      curl -vik http://www.example.com:80/productpage --resolve www.example.com:80:$INGRESS_GW_IP
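     As in the single-cluster steps, set INGRESS_GW_IP to the external address of your ingress gateway, and add the --context flag for the cluster that runs the gateway.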
      
  7. In the Gloo UI, go to Observability > Graph and verify that the graph is populated with data for the request that you sent.

  8. Optional: Now that the Gloo UI uses the OpenShift Prometheus instance, you can disable the Prometheus instance that is built into Gloo Mesh Core.

    1. In your Helm values file for the management cluster, add the following values.

      prometheus:
        enabled: false
        
    2. Upgrade your Helm release for the management cluster. Change the release name as needed.

        helm upgrade gloo-platform gloo-platform/gloo-platform \
        --kube-context $MGMT_CONTEXT \
        --namespace gloo-mesh \
        -f mgmt-server.yaml \
        --version $GLOO_VERSION
        
    3. Verify that the Prometheus pod is removed.

        kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT