Set up the metrics pipeline
Gloo offers two options for collecting metrics from your workloads and making them available to Prometheus. Choose one of the following options to get started with metrics in Prometheus.
- Set up the default metrics pipeline
- Set up the Gloo OpenTelemetry (OTel) metrics collector pipeline (alpha)
For more information about these metrics pipeline options, see Metrics pipeline options.
Set up the default metrics pipeline
When you follow the getting started guide, the default metrics pipeline is set up automatically for you.
- Check if the Prometheus server is running in your Gloo management cluster.
kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT | grep prometheus
Example output:
prometheus-server-647b488bb-wxlzh 2/2 Running 0 66m
- If no Prometheus server is set up in your management cluster, you can enable the Prometheus server by upgrading your Helm release with the following command.
helm upgrade gloo-mgmt gloo-mesh-enterprise/gloo-mesh-enterprise \
  --namespace gloo-mesh \
  --kube-context=${MGMT_CONTEXT} \
  --set prometheus.enabled=true \
  --version ${GLOO_VERSION} \
  --values values-mgmt-plane-env.yaml
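Optionally, you can wait for the Prometheus rollout to finish before continuing. The following sketch assumes that the deployment is named prometheus-server, matching the pod name in the example output above; adjust the name if your installation differs.
# Wait for the Prometheus server deployment to finish rolling out
# (assumes the deployment is named "prometheus-server").
kubectl rollout status deployment/prometheus-server -n gloo-mesh --context $MGMT_CONTEXT
# Confirm that the pod is running.
kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT | grep prometheus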
Set up the Gloo OpenTelemetry (OTel) metrics collector pipeline (alpha)
The OpenTelemetry (OTel) metrics pipeline is released as an alpha feature. Functionality might change without prior notice in future releases. Do not use this feature in production environments.
Depending on your setup, you have the following options to set up the Gloo OTel metrics pipeline.
- Gloo Mesh is already installed with the default metrics pipeline: If you have an existing Gloo Mesh Enterprise installation that uses the default metrics pipeline, you can install the OTel metrics pipeline alongside the default one. With this approach, you can try out the OTel metrics pipeline without losing any data. If you decide that you want to deprecate the default pipeline, make sure that metrics for all of your workloads are available via the Gloo metrics endpoint first. Then, you can upgrade your Gloo Mesh Enterprise Helm installation and disable the default metrics pipeline by using the --set legacyMetricsPipeline.enabled=false Helm option.
- Gloo Mesh is not yet installed: You can follow the steps in this guide to install the Gloo OTel metrics pipeline.
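If you manage your Helm settings in a values file rather than with --set flags, the migration settings might look like the following sketch. The key paths mirror the flags used in this guide, so verify them against the Helm values reference for your version.
# Hypothetical extra values file with the same settings as the --set flags in this guide.
cat <<'EOF' > otel-pipeline-values.yaml
metricsgateway:
  enabled: true
legacyMetricsPipeline:
  enabled: false
EOF
# Helm accepts multiple --values flags, so you can pass this file alongside your
# existing management plane values file, for example:
# helm upgrade gloo-mgmt gloo-mesh-enterprise/gloo-mesh-enterprise \
#   --namespace gloo-mesh \
#   --kube-context=${MGMT_CONTEXT} \
#   --version ${UPGRADE_VERSION} \
#   --values values-mgmt-plane-env.yaml \
#   --values otel-pipeline-values.yaml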
Check out the following steps to set up the Gloo OTel metrics pipeline.
- Enable the Gloo metrics gateway in the Gloo management cluster. You can optionally change the default resource requests and resource limits for the Gloo metrics gateway by changing the metricsgateway.resources.* Helm values.
helm upgrade --install gloo-mgmt gloo-mesh-enterprise/gloo-mesh-enterprise \
  --namespace gloo-mesh \
  --set metricsgateway.enabled=true \
  --set metricsgateway.resources.requests.cpu=300m \
  --set metricsgateway.resources.requests.memory="1Gi" \
  --set metricsgateway.resources.limits.cpu=600m \
  --set metricsgateway.resources.limits.memory="2Gi" \
  --kube-context=${MGMT_CONTEXT} \
  --version ${UPGRADE_VERSION} \
  --values values-mgmt-plane-env.yaml
If you want to fully migrate to the Gloo OTel metrics pipeline, you can add the --set legacyMetricsPipeline.enabled=false Helm option to this command to disable the default metrics pipeline.
- Verify that all pods in the gloo-mesh namespace are up and running, and that you see a gloo-metrics-gateway* pod.
kubectl get deployments -n gloo-mesh --context $MGMT_CONTEXT
- Get the external IP address of the load balancer service that was created for the Gloo metrics gateway.
METRICS_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-metrics-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
METRICS_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-metrics-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
METRICS_GATEWAY_ADDRESS=${METRICS_GATEWAY_IP}:${METRICS_GATEWAY_PORT}
echo $METRICS_GATEWAY_ADDRESS
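On cloud providers that expose load balancers through a DNS name instead of an IP address (for example, AWS), the ip field in the service status is empty. In that case, you can read the hostname field instead; this sketch assumes the same service name and port as the previous commands.
# If your load balancer exposes a hostname instead of an IP (common on AWS),
# read the hostname field from the service status instead.
METRICS_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-metrics-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
METRICS_GATEWAY_ADDRESS=${METRICS_GATEWAY_IP}:${METRICS_GATEWAY_PORT}
echo $METRICS_GATEWAY_ADDRESS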
- Set up OTel collectors in all your workload clusters. You can optionally change the default resource requests and resource limits for the Gloo metrics collector agents by changing the metricscollector.resources.* Helm values.
helm upgrade gloo-agent gloo-mesh-agent/gloo-mesh-agent \
  --namespace gloo-mesh \
  --set metricscollector.enabled=true \
  --set metricscollector.config.exporters.otlp.endpoint=${METRICS_GATEWAY_ADDRESS} \
  --set metricscollector.resources.requests.cpu=500m \
  --set metricscollector.resources.requests.memory="1Gi" \
  --set metricscollector.resources.limits.cpu=2 \
  --set metricscollector.resources.limits.memory="2Gi" \
  --kube-context=${REMOTE_CONTEXT} \
  --version ${UPGRADE_VERSION} \
  --values values-data-plane-env.yaml
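If you have more than one workload cluster, run the same upgrade against each cluster's context. The following is a minimal sketch that assumes hypothetical contexts $REMOTE_CONTEXT1 and $REMOTE_CONTEXT2 and that all clusters share the same data plane values file; adjust it to your environment.
# Repeat the collector upgrade for each workload cluster context.
# $REMOTE_CONTEXT1 and $REMOTE_CONTEXT2 are placeholders for your contexts.
for ctx in $REMOTE_CONTEXT1 $REMOTE_CONTEXT2; do
  helm upgrade gloo-agent gloo-mesh-agent/gloo-mesh-agent \
    --namespace gloo-mesh \
    --set metricscollector.enabled=true \
    --set metricscollector.config.exporters.otlp.endpoint=${METRICS_GATEWAY_ADDRESS} \
    --kube-context=${ctx} \
    --version ${UPGRADE_VERSION} \
    --values values-data-plane-env.yaml
done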
- Verify that the Gloo metrics collector agents are deployed in your cluster. Because the agents are deployed as a daemon set, the number of metrics collector agent pods equals the number of worker nodes in your cluster.
kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
Example output:
NAME                                 READY   STATUS    RESTARTS   AGE
gloo-mesh-agent-d89944685-mmgtt      1/1     Running   0          83m
gloo-metrics-collector-agent-5cwn5   1/1     Running   0          107s
gloo-metrics-collector-agent-7czjb   1/1     Running   0          107s
gloo-metrics-collector-agent-jxmnv   1/1     Running   0          107s
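To confirm that you have one collector pod per node, you can compare the collector pod count with the node count; the two numbers typically match, although nodes that are not visible to kubectl or that are tainted can cause small differences.
# Compare the number of collector pods with the number of nodes in the cluster.
kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT | grep -c gloo-metrics-collector-agent
kubectl get nodes --context $REMOTE_CONTEXT --no-headers | wc -l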
- Generate traffic for the apps in your cluster. For example, if you set up the Bookinfo app as part of the getting started guide, you can open the product page app in your browser to generate traffic, or use the curl loop sketched after the following steps.
- Open a port on your local machine for the product page app.
kubectl port-forward deploy/productpage-v1 -n bookinfo --context $REMOTE_CONTEXT 9080
- Open the product page in your browser.
open http://localhost:9080/productpage?u=normal
- Refresh the page a couple of times to generate traffic.
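Alternatively, you can generate traffic from the command line. The following is a small sketch that sends repeated requests through the same port-forward that you opened in the first step.
# Send 20 requests to the product page through the local port-forward.
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:9080/productpage?u=normal"
done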
- Open the Gloo UI.
meshctl dashboard --kubecontext=$MGMT_CONTEXT
- Verify that metrics were populated for your workloads by looking at the UI Graph.
- You can optionally review the raw metrics by opening the Prometheus UI and entering istio_requests_total in the expression search bar.
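If you prefer to query Prometheus from the command line instead of the expression search bar, you can port-forward the Prometheus server and call its HTTP query API. This sketch assumes that the deployment is named prometheus-server and listens on the default Prometheus port 9090; adjust both if your installation differs.
# Port-forward the Prometheus server (assumes deployment name "prometheus-server"
# and the default Prometheus port 9090), then query the HTTP API for the metric.
kubectl port-forward deploy/prometheus-server -n gloo-mesh --context $MGMT_CONTEXT 9090 &
sleep 5
curl -s 'http://localhost:9090/api/v1/query?query=istio_requests_total' | head -c 500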