Set up the pipeline

The Gloo OpenTelemetry (OTel) pipeline is released as an alpha feature. Functionality might change without prior notice in future releases. Do not use this feature in production environments.

Depending on your setup, you can run the Gloo OTel metrics pipeline alongside the default metrics pipeline or replace the default pipeline entirely. Review the following steps to set up the Gloo OTel metrics pipeline either way.

  1. Enable the Gloo metrics gateway in the Gloo management cluster. You can optionally change the default resource requests and resource limits for the Gloo metrics gateway by changing the metricsgateway.resources.* Helm values.

       helm upgrade --install gloo-mgmt gloo-mesh-enterprise/gloo-mesh-enterprise \
       --namespace gloo-mesh \
       --set legacyMetricsPipeline.enabled=true \
       --set metricsgateway.enabled=true \
       --set metricsgateway.resources.requests.cpu=300m \
       --set metricsgateway.resources.requests.memory="1Gi" \
       --set metricsgateway.resources.limits.cpu=600m \
       --set metricsgateway.resources.limits.memory="2Gi" \
       --kube-context=${MGMT_CONTEXT} \
       --version ${UPGRADE_VERSION} \
       --values values-mgmt-plane-env.yaml

    If you want to fully migrate to the Gloo OTel metrics pipeline, you can change the --set legacyMetricsPipeline.enabled=false Helm option to disable the default metrics pipeline.
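    The same configuration can also be captured in your Helm values file instead of individual `--set` flags. The following is a sketch of what that section of values-mgmt-plane-env.yaml might look like, assuming the chart keys mirror the flags shown above; verify the exact keys against the values reference for your chart version.

    ```yaml
    # Sketch only: keys assumed to mirror the --set flags above.
    legacyMetricsPipeline:
      enabled: true   # set to false to fully migrate to the OTel pipeline
    metricsgateway:
      enabled: true
      resources:
        requests:
          cpu: 300m
          memory: 1Gi
        limits:
          cpu: 600m
          memory: 2Gi
    ```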

  2. Verify that all pods in the gloo-mesh namespace are up and running, and that you see a gloo-metrics-gateway* pod.

    kubectl get pods -n gloo-mesh --context $MGMT_CONTEXT
  3. Get the external IP address and port of the load balancer service that was created for the Gloo metrics gateway, and combine them into the address that the metrics collector agents send metrics to.

     METRICS_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-metrics-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
     METRICS_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-metrics-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
     METRICS_GATEWAY_ADDRESS=${METRICS_GATEWAY_IP}:${METRICS_GATEWAY_PORT}
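    The collector agents in the next step expect the gateway address in `IP:port` form. As a quick sanity check of that format, the snippet below combines two placeholder values (the IP and port shown are illustrative only, not output from your cluster):

    ```shell
    # Placeholder values for illustration only; use the IP and port
    # retrieved from your own gloo-metrics-gateway service.
    METRICS_GATEWAY_IP=203.0.113.10
    METRICS_GATEWAY_PORT=4317
    METRICS_GATEWAY_ADDRESS=${METRICS_GATEWAY_IP}:${METRICS_GATEWAY_PORT}
    echo "${METRICS_GATEWAY_ADDRESS}"
    ```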
  4. Set up Gloo metrics collector agents in all your workload clusters. You can optionally change the default resource requests and resource limits for the agents by changing the metricscollector.resources.* Helm values.

       helm upgrade gloo-agent gloo-mesh-agent/gloo-mesh-agent \
       --namespace gloo-mesh \
       --set metricscollector.enabled=true \
       --set metricscollector.config.exporters.otlp.endpoint=${METRICS_GATEWAY_ADDRESS} \
       --set metricscollector.resources.requests.cpu=500m \
       --set metricscollector.resources.requests.memory="1Gi" \
       --set metricscollector.resources.limits.cpu=2 \
       --set metricscollector.resources.limits.memory="2Gi" \
       --kube-context=${REMOTE_CONTEXT} \
       --version ${UPGRADE_VERSION} \
       --values values-data-plane-env.yaml
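    As with the management cluster, you might prefer to keep these settings in your values file rather than passing `--set` flags. A sketch of the equivalent section of values-data-plane-env.yaml, assuming the chart keys mirror the flags above; confirm against the gloo-mesh-agent chart's values reference.

    ```yaml
    # Sketch only: keys assumed to mirror the --set flags above.
    metricscollector:
      enabled: true
      config:
        exporters:
          otlp:
            endpoint: "<METRICS_GATEWAY_ADDRESS>"  # the IP:port value from step 3
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: "2"
          memory: 2Gi
    ```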

  5. Verify that the Gloo metrics collector agents are deployed in your cluster. Because the agents are deployed as a daemon set, the number of metrics collector agent pods equals the number of worker nodes in your cluster.

    kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT

    Example output:

    NAME                                 READY   STATUS    RESTARTS      AGE
    gloo-mesh-agent-d89944685-mmgtt      1/1     Running   0             83m
    gloo-metrics-collector-agent-5cwn5   1/1     Running   0             107s
    gloo-metrics-collector-agent-7czjb   1/1     Running   0             107s
    gloo-metrics-collector-agent-jxmnv   1/1     Running   0             107s
  6. Generate traffic for the apps in your cluster. For example, if you set up the Bookinfo app as part of the getting started guide, you can open the product page app in your browser to generate traffic.

    1. Open a port on your local machine for the product page app.
      kubectl port-forward deploy/productpage-v1 -n bookinfo --context $REMOTE_CONTEXT 9080
    2. Open the product page in your browser.
      open http://localhost:9080/productpage?u=normal
    3. Refresh the page a couple of times to generate traffic.
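    Instead of refreshing manually, you can script a small burst of traffic. A minimal sketch, assuming the port-forward from the first sub-step is still running on localhost:9080:

    ```shell
    # Send 10 requests to the product page; '|| true' keeps the loop going
    # even if an individual request fails (for example, if the port-forward drops).
    for i in $(seq 1 10); do
      curl -s -o /dev/null "http://localhost:9080/productpage?u=normal" || true
    done
    echo "sent 10 requests"
    ```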
  7. Open the Gloo UI.

    meshctl dashboard --kubecontext=$MGMT_CONTEXT
  8. Verify that metrics were populated for your workloads by looking at the UI Graph.

  9. You can optionally review the raw metrics by opening the Prometheus UI and entering istio_requests_total in the expression search bar.