About the telemetry pipeline

You can gain insights into the health and performance of your cluster components by using the Gloo telemetry pipeline. Built on top of the OpenTelemetry open source project, the Gloo telemetry pipeline helps you to collect and export telemetry data, such as metrics, logs, traces, and Gloo insights, and to visualize this data by using Gloo observability tools.

Review the information on this page to learn more about the Gloo telemetry pipeline and how to use it in your cluster.

Setup

The Gloo telemetry pipeline is set up by default when you follow one of the installation guides.

To use the Gloo UI graph to visualize network traffic, you must set the telemetryCollector.enabled Helm setting to true in each cluster in your environment. If you installed Gloo Mesh at version 2.5 or later, this setting is enabled by default. If you installed Gloo Mesh in a multicluster environment at version 2.4 or earlier, be sure to enable this setting in the Helm values for your management cluster.
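
For reference, the following Helm values enable the pipeline components. The helm upgrade command is a sketch that assumes a release named gloo-platform installed from the gloo-platform chart in the gloo-mesh namespace; adjust it to match your installation.

telemetryCollector:
  enabled: true
# In multicluster setups, also enable the telemetry gateway in the management cluster.
telemetryGateway:
  enabled: true

helm upgrade gloo-platform gloo-platform/gloo-platform -n gloo-mesh --reuse-values -f telemetry-values.yaml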

To see the receivers, processors, and exporters that are set up by default for you, run the following commands:

kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml
kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml

Disable the telemetry pipeline

If you want to disable the Gloo telemetry pipeline, follow the Upgrade guide and add the following configuration to your Helm values file:

telemetryCollector:
  enabled: false
telemetryGateway:
  enabled: false

Disabling the Gloo telemetry pipeline removes the Gloo telemetry gateway and collector agent pods from your cluster. If you previously collected telemetry data and did not export it to another observability tool, that data is removed. To keep telemetry data, consider exporting it to other observability tools, such as Prometheus, Jaeger, or your own, before you disable the telemetry pipeline.

Customize the pipeline

The Gloo telemetry pipeline is set up with pre-built pipelines that use a variety of receivers, processors, and exporters to collect and store telemetry data in your cluster. You can enable and disable these pipelines as part of your Helm installation, and you can set up additional receivers, processors, and exporters of your own.

Because the Gloo telemetry pipeline is built on top of the OpenTelemetry open source project, you also have the option to add your own custom receivers, processors, and exporters to the pipeline. For more information, see the pipeline architecture information in the OpenTelemetry documentation.
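
As a sketch of what a custom addition can look like, the following Helm values add an OTLP exporter to the collector agents and wire it into a new metrics pipeline. The telemetryCollectorCustomization.extraExporters and extraPipelines fields, the exporter name otlp/my-backend, and the endpoint are assumptions for illustration; the receivers and processors that you reference must already exist in the collector configuration shown in the following commands.

telemetryCollectorCustomization:
  extraExporters:
    # Hypothetical OTLP backend; replace with your own exporter configuration.
    otlp/my-backend:
      endpoint: my-backend.example.com:4317
  extraPipelines:
    metrics/my-backend:
      receivers:
      - prometheus
      processors:
      - batch
      exporters:
      - otlp/my-backend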

To see the receivers, processors, and exporters that are set up by default for you, run the following commands:

kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml
kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml

To add more telemetry data to the Gloo telemetry pipeline, see Customize the pipeline.

Architecture

The Gloo telemetry pipeline is decoupled from the core functionality of the Gloo management server and agents, and consists of two main components: the Gloo telemetry collector agent and the Gloo telemetry gateway.

The following sections describe how these components are set up in single cluster and multicluster environments.

In single cluster setups, only a Gloo telemetry collector agent is deployed to the cluster. The agent is configured to scrape metrics from workloads in the cluster, and to enrich the data, such as by adding workload IDs that you can later use to filter metrics. In addition, it receives other telemetry data, such as traces and access logs. Depending on the type of telemetry data, the collector agent then forwards this data to other observability tools, such as Jaeger, as shown in the following image.

You also have the option to set up your own exporter to forward telemetry data to an observability tool of your choice. For an example of how to export data to Datadog, see Forward metrics to Datadog.

Gloo telemetry pipeline architecture in single cluster setups

In multicluster setups, Gloo telemetry collector agents are deployed to the management cluster and each workload cluster. The agents are configured to scrape metrics from workloads in the cluster, and to enrich the data, such as by adding workload IDs that you can later use to filter metrics. In addition, they receive other telemetry data, such as traces and access logs.

A Gloo telemetry gateway is also deployed to the management cluster and exposed with a Kubernetes load balancer service. The gateway consolidates data in the Gloo management plane so that it can be forwarded to the built-in Gloo observability tools. The collector agents in each workload cluster send all their telemetry data to the telemetry gateway's service endpoint. You can choose from a set of pre-built pipelines to configure how the telemetry gateway forwards telemetry data within the cluster.
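
The following snippet sketches how the collector agents in a workload cluster are typically pointed at this endpoint through Helm values. The telemetryCollector.config.exporters.otlp.endpoint field and port 4317 are assumptions for illustration; use the values from your own installation.

telemetryCollector:
  enabled: true
  config:
    exporters:
      otlp:
        # Address of the Gloo telemetry gateway's load balancer service in the management cluster.
        endpoint: <gateway_address>:4317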

You also have the option to forward telemetry data to an observability tool of your choice by adding custom exporters to either the telemetry gateway or each collector agent. The option that is right for you depends on the size of your environment, the amount of telemetry data that you want to export, and the compute resources that are available to the Gloo telemetry pipeline components. For an example of how to export data to Datadog, see Forward metrics to Datadog.
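
As a sketch of the gateway-side option, the following values add a Datadog exporter to the telemetry gateway and a pipeline that uses it. The telemetryGatewayCustomization fields and the Datadog exporter settings are assumptions based on the OpenTelemetry Datadog exporter; follow Forward metrics to Datadog for the supported configuration.

telemetryGatewayCustomization:
  extraExporters:
    datadog:
      api:
        # Hypothetical reference to your Datadog API key and site.
        key: ${DATADOG_API_KEY}
        site: datadoghq.com
  extraPipelines:
    metrics/datadog:
      receivers:
      - otlp
      processors:
      - batch
      exporters:
      - datadog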

Gloo telemetry pipeline architecture in multicluster setups

The diagram shows the default ports that are added as prometheus.io/port: "<port_number>" pod annotations to the workloads that expose metrics. This port is automatically used by the Gloo collector agent, Gloo telemetry gateway, and Prometheus to scrape the metrics from these workloads. You can change the port by changing the pod annotation. However, keep in mind that changing the default scraping ports might lead to unexpected results, because Gloo Mesh Enterprise processes might depend on the default setting.


Telemetry data

Learn more about the telemetry data that is collected by the Gloo telemetry pipeline.

Metrics

When you enable the Gloo telemetry pipeline, the collector agents and, if applicable, the telemetry gateway are configured to collect metrics in your Gloo Mesh Enterprise environment.

Gloo telemetry collector agents scrape metrics from workloads in your cluster, such as the Gloo agents and management server, the Istio control plane, and the workloads’ sidecar proxies. To determine the workloads that need to be scraped and find the port where metrics are exposed, the prometheus.io/scrape: "true" and prometheus.io/port: "<port_number>" pod annotations are used. All Gloo components that expose metrics and all Istio- and Cilium-specific workloads are automatically deployed with these annotations.
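
For example, a workload that serves metrics on port 15090 (a hypothetical port for illustration) carries annotations similar to the following in its pod template:

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "15090"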

In Gloo Mesh Enterprise version 2.5.0, the prometheus.io/port: "<port_number>" annotation was removed from the Gloo management server and agent. However, the prometheus.io/scrape: true annotation is still present. If you have another Prometheus instance that runs in your cluster, and it is not set up with custom scraping jobs for the Gloo management server and agent, the instance automatically scrapes all ports on the management server and agent pods. This can lead to error messages in the management server and agent logs. To resolve this issue, see Run another Prometheus instance alongside the built-in one.

The agents then enrich and convert the metrics. For example, the ID of the source and destination workload is added to the metrics so that you can filter the metrics for the workload that you are interested in.

The built-in Prometheus server is set up to scrape metrics from the Gloo telemetry collector agents. In multicluster setups, the Prometheus server also scrapes metrics from the Gloo telemetry gateway in the management cluster that receives metrics from all workload clusters.

Observability tools, such as the Gloo UI or the Gloo operations dashboard, read metrics from Prometheus and visualize this data so that you can monitor the health of the Gloo Mesh Enterprise components and Istio workloads, and receive alerts if an issue is detected. For more information, see the Prometheus overview.

Compute instance metadata

You can configure the Gloo telemetry pipeline to collect metadata about the compute instances, such as virtual machines, that the workload cluster is deployed to, so that you can visualize your Gloo Mesh setup across your cloud provider infrastructure network. The metadata is added as labels to metrics, which are exposed on the Gloo telemetry collector agent (single cluster) or sent to the Gloo telemetry gateway (multicluster), where they can be scraped by the built-in Prometheus server. You can then use the Prometheus expression browser to analyze these metrics.
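
To browse these and other collected metrics locally, you can port-forward the built-in Prometheus server and open the expression browser. The deployment name prometheus-server is an assumption; check the actual name in the gloo-mesh namespace of your cluster.

kubectl port-forward deploy/prometheus-server -n gloo-mesh 9090

Then open http://localhost:9090 in your browser to run queries against the collected metrics.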

For more information, see Collect compute instance metadata.

Traces

You can configure the Gloo telemetry pipeline to collect traces that your workloads emit and to forward these traces to the built-in Jaeger tracing platform that is embedded in the Gloo UI. Note that workloads must be instrumented to emit traces before the pipeline can collect them.

To add traces to the Gloo telemetry pipeline, you must configure the collector agents to pick up the traces and forward them to the built-in Jaeger platform directly (single cluster) or to the Gloo telemetry gateway where they can be forwarded to Jaeger (multicluster). You can also customize the pipeline to forward traces to your Jaeger platform instead.

For more information, see Add Istio request traces.
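
The following Helm values sketch how these pre-built pipelines can be toggled, assuming that pipelines are enabled through a pipelines block under the telemetryCollectorCustomization and telemetryGatewayCustomization sections; the exact fields are described in the tracing guide.

telemetryCollectorCustomization:
  pipelines:
    traces/istio:
      enabled: true
# Multicluster only: forward traces from the telemetry gateway to Jaeger.
telemetryGatewayCustomization:
  pipelines:
    traces/jaeger:
      enabled: true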

Cilium metrics and flow logs

If your cluster uses the Cilium CNI, some Cilium-specific metrics are collected by default to visualize network communication in the Gloo UI graph. To add more Cilium, Hubble, and eBPF-specific metrics to the Gloo telemetry pipeline so that you can access them with the expression browser of the built-in Prometheus server, you can enable a pre-built Cilium processor on the Gloo telemetry collector agent. The processor exposes the metrics on the collector agent (single cluster) or sends them to the Gloo telemetry gateway (multicluster) where they can be scraped by the built-in Prometheus server. For more information, see Add Cilium metrics.

In addition, you have the option to add Hubble network flows to the Gloo telemetry collector agent configuration. Flow logs are exposed on the collector agent (single cluster) or sent to the Gloo telemetry gateway (multicluster), and can be accessed by using the meshctl hubble observe command. You can optionally set up a custom exporter to export these logs to an observability tool of your choice, such as Redis. For more information, see Add Cilium flow logs.

Built-in telemetry pipelines

The Gloo telemetry pipeline is set up with default pipelines that you can enable to collect telemetry data in your cluster. For an example of how to enable a pipeline in your Helm values, see the sketch after the following tables.

In single cluster setups:

Telemetry data Collector agent pipeline Description
Metrics metrics/ui The metrics/ui pipeline is enabled by default and collects the metrics that are required for the Gloo UI graph. Metrics in the collector agent are then scraped by the built-in Prometheus server so that they can be provided to Gloo observability tools. To view the metrics that are captured with this pipeline, see Default metrics in the pipeline.
Compute metadata metrics/otlp_relay The metrics/otlp_relay pipeline collects metadata about the compute instances, such as virtual machines, that the workload cluster is deployed to, and adds the metadata as labels on metrics. The metrics are exposed on the Gloo telemetry collector agent where they can be scraped by the built-in Prometheus server. For more information, see Collect compute instance metadata.
Traces traces/istio The traces/istio pipeline collects request traces from Istio-enabled workloads and sends them to the built-in Jaeger platform or a custom Jaeger instance. For more information, see Add Istio request traces.
Cilium metrics metrics/cilium The metrics/cilium pipeline collects Cilium, Hubble, and eBPF-specific metrics. Metrics are exposed on the Gloo telemetry collector agent where they are scraped by the built-in Prometheus server. You can access the metrics by using the Prometheus expression browser. Note that your cluster must be set up to use the Cilium CNI for Cilium metrics to be collected. For more information, see Add Cilium metrics.
Cilium flow logs logs/cilium_flows The logs/cilium_flows pipeline collects Hubble flow logs for the workloads in the cluster. Flow logs are exposed on the Gloo telemetry collector agent. You can access the flow logs with the meshctl hubble observe command. Note that your cluster must be set up to use the Cilium CNI for flow logs to be collected. For more information, see Add Cilium flow logs.
In multicluster setups:

Telemetry data Collector agent pipeline Gateway pipeline Description
Metrics metrics/ui metrics/prometheus The metrics/ui pipeline collects the metrics that are required for the Gloo UI graph and forwards these metrics to the Gloo telemetry gateway. This pipeline is enabled by default. To view the metrics that are captured with this pipeline, see Default metrics in the pipeline. The metrics/prometheus pipeline is enabled by default and collects metrics from various sources, such as the Gloo management server, Gloo Platform, Istio, Cilium, and Gloo telemetry pipeline components. The built-in Prometheus server is configured to scrape all metrics from the Gloo telemetry gateway, including the metrics that were sent by the Gloo telemetry collector agents.
Compute metadata metrics/otlp_relay N/A The metrics/otlp_relay pipeline collects metadata about the compute instances, such as virtual machines, that the cluster is deployed to, and adds the metadata as labels on metrics. The metrics are sent to the Gloo telemetry gateway where they can be scraped by the built-in Prometheus server. For more information, see Collect compute instance metadata.
Traces traces/istio traces/jaeger The traces/istio pipeline collects request traces from Istio-enabled workloads and sends them to the built-in Jaeger platform or a custom Jaeger instance by using the traces/jaeger pipeline. For more information, see Add Istio request traces.
Cilium metrics metrics/cilium N/A The metrics/cilium pipeline collects Cilium, Hubble, and eBPF-specific metrics. Metrics are sent to the Gloo telemetry gateway where they are scraped by the built-in Prometheus server. You can access the metrics by using the Prometheus expression browser. Note that your cluster must be set up to use the Cilium CNI for Cilium metrics to be collected. For more information, see Add Cilium metrics.
Cilium flow logs logs/cilium_flows N/A The logs/cilium_flows pipeline collects Hubble flow logs for the workloads in the cluster. Flow logs are sent to the Gloo telemetry gateway where you can access them with the meshctl hubble observe command. Note that your cluster must be set up to use the Cilium CNI for flow logs to be collected. For more information, see Add Cilium flow logs.
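
For example, to turn on the Cilium-related pipelines from the tables, you can enable them in your Helm values. The pipelines block under telemetryCollectorCustomization is an assumption for illustration; see Add Cilium metrics and Add Cilium flow logs for the supported settings.

telemetryCollectorCustomization:
  pipelines:
    metrics/cilium:
      enabled: true
    logs/cilium_flows:
      enabled: true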

Default metrics in the pipeline

By default, the Gloo telemetry pipeline is configured to scrape the metrics that are required for the Gloo UI from various workloads in your cluster by using the metrics/ui and metrics/prometheus pipelines. The built-in Prometheus server is configured to scrape metrics from the Gloo collector agent (single cluster), or Gloo telemetry gateway and collector agent (multicluster). To reduce cardinality in the Gloo telemetry pipeline, only a few labels are collected for each metric. For more information, see Metric labels.

Review the metrics that are available in the Gloo telemetry pipeline. You can set up additional receivers to scrape other metrics, or forward the metrics to other observability tools, such as Datadog, by creating your own custom exporter for the Gloo telemetry gateway. For an example setup, see Forward metrics to Datadog.

Istio proxy metrics

Metric Description
istio_requests_total The number of requests that were processed for an Istio proxy.
istio_request_duration_milliseconds The time it takes for a request to reach its destination in milliseconds.
istio_request_duration_milliseconds_bucket The number of requests that fall into each request duration histogram bucket, in milliseconds.
istio_request_duration_milliseconds_count The total number of Istio requests since the Istio proxy was last started.
istio_request_duration_milliseconds_sum The sum of all request durations since the last start of the Istio proxy.
istio_tcp_sent_bytes_total The total number of bytes that are sent in a response.
istio_tcp_received_bytes_total The total number of bytes that are received in a request.
istio_tcp_connections_opened_total The total number of open connections to an Istio proxy.

Istiod metrics

Metric Description
pilot_proxy_convergence_time The time it takes between applying a configuration change and the Istio proxy receiving the configuration change.

Cilium metrics

Metric Description
hubble_flows_processed_total The total number of network flows that were processed by the Cilium agent.
hubble_drop_total The total number of packets that were dropped by the Cilium agent.

Gloo management server metrics

Metric Description
gloo_mesh_reconciler_time_sec_bucket The time the Gloo management server needs to sync with the Gloo agents in the workload clusters to apply the translated resources. This metric is captured in seconds for the following intervals (buckets): 1, 2, 5, 10, 15, 30, 50, 80, 100, and 200.
gloo_mesh_redis_sync_err The number of times the Gloo management server could not read from or write to the Gloo Redis instance.
gloo_mesh_redis_write_time_sec The time it takes in seconds for the Gloo management server to write to the Redis database.
gloo_mesh_translation_time_sec_bucket The time the Gloo management server needs to translate Gloo resources into Istio, Envoy, or Cilium resources. This metric is captured in seconds for the following intervals (buckets): 1, 2, 5, 10, 15, 20, 25, 30, 45, 60, and 120.
gloo_mesh_translator_concurrency The number of translation operations that the Gloo management server can perform at the same time.
relay_pull_clients_connected The number of Gloo agents that are connected to the Gloo management server.
relay_push_clients_warmed The number of Gloo agents that are ready to accept updates from the Gloo management server.
solo_io_gloo_gateway_license The number of minutes until the Gloo Gateway license expires. To prevent your management server from crashing when the license expires, make sure to upgrade the license before expiration.
solo_io_gloo_mesh_license The number of minutes until the Gloo Mesh Enterprise license expires. To prevent your management server from crashing when the license expires, make sure to upgrade the license before expiration.
solo_io_gloo_network_license The number of minutes until the Gloo Network for Cilium license expires. To prevent your management server from crashing when the license expires, make sure to upgrade the license before expiration.
translation_error The number of translation errors that were reported by the Gloo management server.
translation_warning The number of translation warnings that were reported by the Gloo management server.

Gloo telemetry pipeline metrics

Metric Description
otelcol_processor_refused_metric_points The number of metrics that were refused by the Gloo telemetry pipeline. For example, metrics might be refused to prevent collector agents from being overloaded in the case of insufficient memory resources.
otelcol_processor_refused_spans The number of spans that were refused by the memory_limiter processor in the Gloo telemetry pipeline to prevent collector agents from being overloaded.
otelcol_exporter_queue_capacity The amount of telemetry data that can be stored in memory while waiting on a worker in the collector agent to become available to send the data.
otelcol_exporter_queue_size The amount of telemetry data that is currently stored in memory. If the size is equal to or larger than otelcol_exporter_queue_capacity, new telemetry data is rejected.
otelcol_loadbalancer_backend_latency The time the collector agents need to export telemetry data.
otelcol_exporter_send_failed_spans The number of telemetry data spans that could not be sent to a backend.

Metric labels

To reduce cardinality in the Gloo telemetry pipeline, only the following labels are collected for each metric.

Metric group Labels
Istio ["cluster", "collector_pod", "connection_security_policy", "destination_cluster", "destination_principal", "destination_service", "destination_workload", "destination_workload_id", "destination_workload_namespace", "gloo_mesh", "namespace", "pod_name", "reporter", "response_code", "source_cluster", "source_principal", "source_workload", "source_workload_namespace", "version", "workload_id"]
Telemetry pipeline ["app", "cluster", "collector_name", "collector_pod", "component", "exporter", "namespace", "pod_template_generation", "processor", "service_version"]
Hubble ["app", "cluster", "collector_pod", "component", "destination", "destination_cluster", "destination_pod", "destination_workload", "destination_workload_id", "destination_workload_namespace", "k8s_app", "namespace", "pod", "protocol", "source", "source_cluster", "source_pod", "source_workload", "source_workload_namespace", "subtype", "type", "verdict", "workload_id"]
Cilium (if enabled in Gloo telemetry pipeline) ["action", "address_type", "api_call", "app", "arch", "area", "cluster", "collector_pod", "component", "direction", "endpoint_state", "enforcement", "equal", "error", "event_type", "family", "k8s_app", "le", "level", "map_name", "method", "name", "namespace", "operation", "outcome", "path", "pod", "pod_template_generation", "protocol", "reason", "return_code", "revision", "scope", "source", "source_cluster", "source_node_name", "status", "subsystem", "target_cluster", "target_node_ip", "target_node_name", "target_node_type", "type", "valid", "value", "version"]
eBPF (if enabled in Gloo telemetry pipeline) ["app", "client_addr", "cluster", "code", "collector_pod", "component", "destination", "local_addr", "namespace", "pod", "pod_template_generation", "remote_identity", "server_identity", "source"]