Set up the pipeline

The Gloo OpenTelemetry (OTel) pipeline is released as an alpha feature. Functionality might change without prior notice in future releases. Do not use this feature in production environments.

Review the following steps to set up the Gloo OTel metrics pipeline, either alongside the default metrics pipeline or in place of it, depending on your setup.

  1. Enable the Gloo metrics gateway in the Gloo management cluster. You can optionally change the default resource requests and resource limits for the Gloo metrics gateway by changing the metricsgateway.resources.* Helm values.

    helm upgrade --install gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values gloo-gateway.yaml \
       --set common.cluster=$MGMT_CLUSTER \
       --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY \
       --set legacyMetricsPipeline.enabled=true \
       --set metricsgateway.enabled=true \
       --set metricsgateway.resources.requests.cpu=300m \
       --set metricsgateway.resources.requests.memory="1Gi" \
       --set metricsgateway.resources.limits.cpu=600m \
       --set metricsgateway.resources.limits.memory="2Gi"
    

    If you installed Gloo Mesh by using the gloo-mesh-enterprise, gloo-mesh-agent, and other included Helm charts, or by using meshctl version 2.2 or earlier, these Helm charts are considered legacy. Migrate your legacy installation to the new gloo-platform Helm chart.

    helm upgrade --install gloo-mgmt gloo-mesh-enterprise/gloo-mesh-enterprise \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values values-mgmt-plane-env.yaml \
       --set glooMeshLicenseKey=${GLOO_MESH_LICENSE_KEY} \
       --set global.cluster=$MGMT_CLUSTER \
       --set legacyMetricsPipeline.enabled=true \
       --set metricsgateway.enabled=true \
       --set metricsgateway.resources.requests.cpu=300m \
       --set metricsgateway.resources.requests.memory="1Gi" \
       --set metricsgateway.resources.limits.cpu=600m \
       --set metricsgateway.resources.limits.memory="2Gi"
    

    Make sure to include your Helm values when you upgrade, either in a configuration file that you pass with the --values flag or directly with --set flags. Otherwise, any custom values that you previously set might be overwritten. In single-cluster setups, this might mean that your Gloo agent and ingress gateways are removed. For more information, see Get your Helm chart values in the upgrade guide.

    If you want to fully migrate to the Gloo OTel metrics pipeline, set legacyMetricsPipeline.enabled=false in your Helm values instead to disable the default metrics pipeline.

  2. Verify that all components in the gloo-mesh namespace are up and running, and that the deployments include a gloo-metrics-gateway deployment.

    kubectl get deployments -n gloo-mesh --context $MGMT_CONTEXT
    
  3. Get the external IP of the load balancer service that was created for the Gloo metrics gateway.

    METRICS_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-metrics-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    METRICS_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-metrics-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    METRICS_GATEWAY_ADDRESS=${METRICS_GATEWAY_IP}:${METRICS_GATEWAY_PORT}
    echo $METRICS_GATEWAY_ADDRESS
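
    On some cloud providers, such as AWS, the load balancer publishes a hostname instead of an IP address, so the ip jsonpath returns an empty value. A minimal fallback sketch, assuming the same service name and namespace as above and that your provider sets the hostname field instead:

    ```shell
    # Some cloud load balancers (for example, AWS ELB) publish a hostname instead of an IP.
    METRICS_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-metrics-gateway --context $MGMT_CONTEXT \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    if [ -z "$METRICS_GATEWAY_IP" ]; then
      # Fall back to the hostname field when the ip field is empty.
      METRICS_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-metrics-gateway --context $MGMT_CONTEXT \
        -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    fi
    echo $METRICS_GATEWAY_IP
    ```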
    
  4. Set up Gloo metrics collector agents in all your workload clusters. You can optionally change the default resource requests and resource limits for the agents by changing the metricscollector.resources.* Helm values.

    helm upgrade --install gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values agent.yaml \
       --set common.cluster=$REMOTE_CLUSTER \
       --set licensing.glooMeshLicenseKey=$GLOO_MESH_LICENSE_KEY \
       --set metricscollector.enabled=true \
       --set metricscollector.config.exporters.otlp.endpoint=${METRICS_GATEWAY_ADDRESS} \
       --set metricscollector.resources.requests.cpu=500m \
       --set metricscollector.resources.requests.memory="1Gi" \
       --set metricscollector.resources.limits.cpu=2 \
       --set metricscollector.resources.limits.memory="2Gi"
    

    If you installed Gloo Mesh by using the gloo-mesh-enterprise, gloo-mesh-agent, and other included Helm charts, or by using meshctl version 2.2 or earlier, these Helm charts are considered legacy. Migrate your legacy installation to the new gloo-platform Helm chart.

    helm upgrade --install gloo-agent gloo-mesh-agent/gloo-mesh-agent \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values values-data-plane-env.yaml \
       --set glooMeshLicenseKey=${GLOO_MESH_LICENSE_KEY} \
       --set global.cluster=$REMOTE_CLUSTER \
       --set metricscollector.enabled=true \
       --set metricscollector.config.exporters.otlp.endpoint=${METRICS_GATEWAY_ADDRESS} \
       --set metricscollector.resources.requests.cpu=500m \
       --set metricscollector.resources.requests.memory="1Gi" \
       --set metricscollector.resources.limits.cpu=2 \
       --set metricscollector.resources.limits.memory="2Gi"
    

    Make sure to include your Helm values when you upgrade, either in a configuration file that you pass with the --values flag or directly with --set flags. Otherwise, any custom values that you previously set might be overwritten. In single-cluster setups, this might mean that your Gloo agent and ingress gateways are removed. For more information, see Get your Helm chart values in the upgrade guide.

  5. Verify that the Gloo metrics collector agents are deployed in your cluster. Because the agents are deployed as a daemon set, the number of metrics collector agent pods equals the number of worker nodes in your cluster.

    kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
    

    Example output:

    NAME                                 READY   STATUS    RESTARTS      AGE
    gloo-mesh-agent-d89944685-mmgtt      1/1     Running   0             83m
    gloo-metrics-collector-agent-5cwn5   1/1     Running   0             107s
    gloo-metrics-collector-agent-7czjb   1/1     Running   0             107s
    gloo-metrics-collector-agent-jxmnv   1/1     Running   0             107s
    
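
    Because the collector runs as a daemon set, you can cross-check the counts directly. A quick sketch, assuming the collector pod names start with gloo-metrics-collector-agent as in the example output:

    ```shell
    # Number of worker nodes in the cluster.
    kubectl get nodes --context $REMOTE_CONTEXT --no-headers | wc -l

    # Number of metrics collector pods; the two counts should match.
    kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT --no-headers \
      | grep -c gloo-metrics-collector-agent
    ```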
  6. Generate traffic for the apps in your cluster. For example, if you set up the Bookinfo app as part of the getting started guide, you can open the product page app in your browser to generate traffic.

    1. Open a port on your local machine for the product page app.
      kubectl port-forward deploy/productpage-v1 -n bookinfo --context $REMOTE_CONTEXT 9080
      
    2. Open the product page in your browser.
      open http://localhost:9080/productpage?u=normal
      
    3. Refresh the page a couple of times to generate traffic.
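
    Instead of refreshing the page manually, you can send a burst of requests from the command line. A minimal sketch, assuming the port-forward from the previous step is still running:

    ```shell
    # Send 20 requests to the product page through the local port-forward
    # and print the HTTP status code of each response.
    for i in $(seq 1 20); do
      curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:9080/productpage?u=normal"
    done
    ```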
  7. Open the Gloo UI.

    meshctl dashboard --kubecontext=$MGMT_CONTEXT
    
  8. Verify that metrics are populated for your workloads by reviewing the Graph page in the Gloo UI.

  9. You can optionally review the raw metrics by opening the Prometheus UI and entering istio_requests_total in the expression search bar.
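
    To check the same data without a browser, you can query the Prometheus HTTP API directly. A sketch, assuming the bundled Prometheus is exposed by a prometheus-server service on port 80 in the gloo-mesh namespace (service name and port can differ in your setup):

    ```shell
    # Forward the Prometheus server to a local port in the background.
    kubectl port-forward svc/prometheus-server -n gloo-mesh --context $MGMT_CONTEXT 9091:80 &
    sleep 2

    # Query the istio_requests_total metric through the HTTP API.
    curl -s 'http://localhost:9091/api/v1/query?query=istio_requests_total'
    ```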