Set up the pipeline
The Gloo OpenTelemetry (OTel) pipeline is released as an alpha feature. Functionality might change without prior notice in future releases. Do not use this feature in production environments.
Depending on your setup, you have the following options to set up the Gloo OTel metrics pipeline.
- Opt in to use the Gloo OTel pipeline alongside the default metrics pipeline: You can set up the Gloo OTel pipeline alongside the default metrics pipeline in your Gloo Mesh Enterprise installation. This approach lets you try out the OTel metrics pipeline and compare it to the default metrics pipeline.
- Migrate from the default metrics pipeline to the Gloo OTel pipeline: If you decide to deprecate the default metrics pipeline and fully migrate to the Gloo OTel pipeline, first make sure that metrics for all of your workloads are available via the Gloo metrics gateway. Then, you can upgrade your Gloo Mesh Enterprise Helm installation and disable the default metrics pipeline by using the `--set legacyMetricsPipeline.enabled=false` Helm option (or the equivalent values file sketch after this list).
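Both options toggle the same Helm values. As a minimal sketch, the following heredoc writes a values file that is equivalent to the `--set` flags used in the management cluster step below, so that you can pass one `--values` file instead of repeating flags. The file name is illustrative.

```sh
# Illustrative values file; mirrors the --set flags used in the steps below.
cat <<EOF > metrics-pipeline-values.yaml
legacyMetricsPipeline:
  # Set to false when you fully migrate to the OTel pipeline.
  enabled: true
metricsgateway:
  enabled: true
  resources:
    requests:
      cpu: 300m
      memory: 1Gi
    limits:
      cpu: 600m
      memory: 2Gi
EOF
```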
Review the following steps to set up the Gloo OTel metrics pipeline with or without the default metrics pipeline.
- Enable the Gloo metrics gateway in the Gloo management cluster. You can optionally change the default resource requests and limits for the Gloo metrics gateway by changing the `metricsgateway.resources.*` Helm values.

  ```sh
  helm upgrade --install gloo-platform gloo-platform/gloo-platform \
    --namespace gloo-mesh \
    --version $GLOO_VERSION \
    --values gloo-gateway.yaml \
    --set common.cluster=$MGMT_CLUSTER \
    --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY \
    --set legacyMetricsPipeline.enabled=true \
    --set metricsgateway.enabled=true \
    --set metricsgateway.resources.requests.cpu=300m \
    --set metricsgateway.resources.requests.memory="1Gi" \
    --set metricsgateway.resources.limits.cpu=600m \
    --set metricsgateway.resources.limits.memory="2Gi"
  ```
  If you installed Gloo Mesh by using the `gloo-mesh-enterprise`, `gloo-mesh-agent`, and other included Helm charts, or by using `meshctl` version 2.2 or earlier, these Helm charts are considered legacy. Migrate your legacy installation to the new `gloo-platform` Helm chart. Until you migrate, enable the metrics gateway in your legacy installation with the following command:

  ```sh
  helm upgrade --install gloo-mgmt gloo-mesh-enterprise/gloo-mesh-enterprise \
    --namespace gloo-mesh \
    --version $GLOO_VERSION \
    --values values-mgmt-plane-env.yaml \
    --set glooMeshLicenseKey=${GLOO_MESH_LICENSE_KEY} \
    --set global.cluster=$MGMT_CLUSTER \
    --set legacyMetricsPipeline.enabled=true \
    --set metricsgateway.enabled=true \
    --set metricsgateway.resources.requests.cpu=300m \
    --set metricsgateway.resources.requests.memory="1Gi" \
    --set metricsgateway.resources.limits.cpu=600m \
    --set metricsgateway.resources.limits.memory="2Gi"
  ```
  Make sure to include your Helm values when you upgrade, either in a configuration file that you pass in the `--values` flag or with `--set` flags. Otherwise, any previous custom values that you set might be overwritten. In single-cluster setups, this might mean that your Gloo agent and ingress gateways are removed. For more information, see Get your Helm chart values in the upgrade guide.

  If you want to fully migrate to the Gloo OTel metrics pipeline, set `--set legacyMetricsPipeline.enabled=false` instead to disable the default metrics pipeline.
- Verify that all pods in the `gloo-mesh` namespace are up and running, and that you see a `gloo-metrics-gateway*` pod.

  ```sh
  kubectl get deployments -n gloo-mesh --context $MGMT_CONTEXT
  ```
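  Optionally, wait for the rollout to finish before you continue. A minimal check, assuming the deployment is named `gloo-metrics-gateway` to match the service and pod names in this guide:

  ```sh
  # Block until the metrics gateway deployment finishes rolling out.
  kubectl rollout status deployment/gloo-metrics-gateway -n gloo-mesh --context $MGMT_CONTEXT
  ```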
- Get the external IP address and port of the load balancer service that was created for the Gloo metrics gateway.

  ```sh
  METRICS_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-metrics-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  METRICS_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-metrics-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
  METRICS_GATEWAY_ADDRESS=${METRICS_GATEWAY_IP}:${METRICS_GATEWAY_PORT}
  echo $METRICS_GATEWAY_ADDRESS
  ```
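  Note: Some cloud providers, such as AWS with ELBs, assign a DNS hostname instead of an IP address to load balancer services. If `METRICS_GATEWAY_IP` comes back empty, read the `hostname` field instead:

  ```sh
  # For load balancers that expose a DNS name instead of an IP address:
  METRICS_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-metrics-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  ```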
- Set up Gloo metrics collector agents in all your workload clusters. You can optionally change the default resource requests and limits for the agents by changing the `metricscollector.resources.*` Helm values.

  ```sh
  helm upgrade --install gloo-platform gloo-platform/gloo-platform \
    --namespace gloo-mesh \
    --version $GLOO_VERSION \
    --values agent.yaml \
    --set common.cluster=$REMOTE_CLUSTER \
    --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY \
    --set metricscollector.enabled=true \
    --set metricscollector.config.exporters.otlp.endpoint=${METRICS_GATEWAY_ADDRESS} \
    --set metricscollector.resources.requests.cpu=500m \
    --set metricscollector.resources.requests.memory="1Gi" \
    --set metricscollector.resources.limits.cpu=2 \
    --set metricscollector.resources.limits.memory="2Gi"
  ```
  If you installed Gloo Mesh by using the `gloo-mesh-enterprise`, `gloo-mesh-agent`, and other included Helm charts, or by using `meshctl` version 2.2 or earlier, these Helm charts are considered legacy. Migrate your legacy installation to the new `gloo-platform` Helm chart. Until you migrate, enable the metrics collector agents in your legacy installation with the following command:

  ```sh
  helm upgrade --install gloo-agent gloo-mesh-agent/gloo-mesh-agent \
    --namespace gloo-mesh \
    --version $GLOO_VERSION \
    --values values-data-plane-env.yaml \
    --set glooMeshLicenseKey=${GLOO_MESH_LICENSE_KEY} \
    --set global.cluster=$REMOTE_CLUSTER \
    --set metricscollector.enabled=true \
    --set metricscollector.config.exporters.otlp.endpoint=${METRICS_GATEWAY_ADDRESS} \
    --set metricscollector.resources.requests.cpu=500m \
    --set metricscollector.resources.requests.memory="1Gi" \
    --set metricscollector.resources.limits.cpu=2 \
    --set metricscollector.resources.limits.memory="2Gi"
  ```
  Make sure to include your Helm values when you upgrade, either in a configuration file that you pass in the `--values` flag or with `--set` flags. Otherwise, any previous custom values that you set might be overwritten. In single-cluster setups, this might mean that your Gloo agent and ingress gateways are removed. For more information, see Get your Helm chart values in the upgrade guide.
- Verify that the Gloo metrics collector agents are deployed in your cluster. Because the agents are deployed as a daemon set, the number of metrics collector agent pods equals the number of worker nodes in your cluster.

  ```sh
  kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
  ```

  Example output:

  ```
  NAME                                 READY   STATUS    RESTARTS   AGE
  gloo-mesh-agent-d89944685-mmgtt      1/1     Running   0          83m
  gloo-metrics-collector-agent-5cwn5   1/1     Running   0          107s
  gloo-metrics-collector-agent-7czjb   1/1     Running   0          107s
  gloo-metrics-collector-agent-jxmnv   1/1     Running   0          107s
  ```
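  If a collector pod is missing or crash-looping, or metrics do not show up later, check the collector logs for export errors. A quick sketch, assuming the daemon set is named `gloo-metrics-collector-agent` to match the pod names in the example output:

  ```sh
  # Tail recent collector logs and look for OTLP export errors to the metrics gateway.
  kubectl logs daemonset/gloo-metrics-collector-agent -n gloo-mesh --context $REMOTE_CONTEXT --tail=50
  ```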
- Generate traffic for the apps in your cluster. For example, if you set up the Bookinfo app as part of the getting started guide, you can open the product page app in your browser to generate traffic.
  - Open a port on your local machine for the product page app.

    ```sh
    kubectl port-forward deploy/productpage-v1 -n bookinfo --context $REMOTE_CONTEXT 9080
    ```

  - Open the product page in your browser.

    ```sh
    open http://localhost:9080/productpage?u=normal
    ```

  - Refresh the page a couple of times to generate traffic, or use the curl loop that follows this list.
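  Instead of refreshing the browser manually, you can generate a burst of traffic from the command line while the port-forward is running:

  ```sh
  # Send 20 requests to the port-forwarded product page.
  for i in $(seq 1 20); do
    curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:9080/productpage?u=normal"
  done
  ```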
- Open the Gloo UI.

  ```sh
  meshctl dashboard --kubecontext=$MGMT_CONTEXT
  ```
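  If `meshctl` is not available, you can usually reach the UI by port-forwarding its service directly. The service name `gloo-mesh-ui` and port `8090` are assumptions based on a default Gloo Platform installation; verify them with `kubectl get svc -n gloo-mesh` first.

  ```sh
  # Alternative to meshctl: port-forward the Gloo UI service (name and port assumed).
  kubectl port-forward -n gloo-mesh svc/gloo-mesh-ui 8090:8090 --context $MGMT_CONTEXT
  open http://localhost:8090
  ```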
- Verify that metrics were populated for your workloads by looking at the UI Graph.
- Optional: Review the raw metrics by opening the Prometheus UI and entering `istio_requests_total` in the expression search bar.
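  For example, to see a per-workload request rate instead of raw counters, you can enter a query such as `sum(rate(istio_requests_total[5m])) by (destination_workload, destination_workload_namespace)`. The `destination_workload` labels are part of Istio's standard metrics; adjust the query if your deployment uses different labels.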