Customize the pipeline
You can customize the Gloo OTel pipeline and set up additional receivers, processors, and exporters in your pipeline.
To see the receivers, processors, and exporters that are set up by default, run the following commands:
kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml
kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml
- Receivers: Receivers listen on a network port to receive telemetry data. For example, you can set up receivers to scrape extra Prometheus targets. To see an example, check out Scrape the Gloo management server pod with the telemetry gateway.
- Processors: Processors transform the data before it is forwarded to the next processor or to an exporter. For example, you can use processors to drop unwanted data or to generate new data.
- Exporters: Exporters forward the data they get to a destination on the local or remote network. For example, you can use an exporter to forward your data to a remote Thanos cluster, Mimir cluster, or a third-party provider, such as Datadog, Honeycomb, and others. To see an example, check out Forward data to Datadog.
For more information about receivers, processors, and exporters, see the pipeline architecture information in the OpenTelemetry documentation.
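To illustrate how these components fit together, the following is a minimal, generic OTel collector configuration (a sketch of the general pattern, not the Gloo defaults): receivers feed a pipeline, processors sit in the middle, and exporters terminate it.

```yaml
receivers:
  otlp:                # listen for incoming OTLP telemetry
    protocols:
      grpc: {}
processors:
  batch: {}            # batch telemetry before it is exported
exporters:
  prometheus:          # expose metrics for Prometheus to scrape
    endpoint: "0.0.0.0:9091"
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```

The `extraReceivers`, `extraProcessors`, and `extraPipelines` Helm settings shown later in this topic are merged into a configuration of this shape.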
Scrape the Gloo management server pod with the telemetry gateway
Set up an extra receiver to scrape metrics from the Gloo management server and manipulate them so that they can be forwarded to an exporter.
- Add the following configuration to your values file for the Gloo Gateway installation Helm chart. This configuration sets up a scraping job for the Gloo management server. In addition, regex expressions are used to drop or manipulate metrics. The data is then forwarded to the default prometheus exporter.

```yaml
telemetryGatewayCustomization:
  extraReceivers:
    prometheus/gloo-mgmt:
      config:
        scrape_configs:
        - job_name: gloo-mesh-mgmt-server-otel
          honor_labels: true
          kubernetes_sd_configs:
          - namespaces:
              names:
              - gloo-mesh
            role: pod
          relabel_configs:
          - action: keep
            regex: gloo-mesh-mgmt-server|gloo-mesh-ui
            source_labels:
            - __meta_kubernetes_pod_label_app
          - action: keep
            regex: true
            source_labels:
            - __meta_kubernetes_pod_annotation_prometheus_io_scrape
          - action: drop
            regex: true
            source_labels:
            - __meta_kubernetes_pod_annotation_prometheus_io_scrape_slow
          - action: replace
            regex: (https?)
            source_labels:
            - __meta_kubernetes_pod_annotation_prometheus_io_scheme
            target_label: __scheme__
          - action: replace
            regex: (.+)
            source_labels:
            - __meta_kubernetes_pod_annotation_prometheus_io_path
            target_label: __metrics_path__
          - action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $$1:$$2
            source_labels:
            - __address__
            - __meta_kubernetes_pod_annotation_prometheus_io_port
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_annotation_prometheus_io_param_(.+)
            replacement: __param_$$1
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - action: replace
            source_labels:
            - __meta_kubernetes_namespace
            target_label: namespace
          - action: replace
            source_labels:
            - __meta_kubernetes_pod_name
            target_label: pod
          - action: drop
            regex: Pending|Succeeded|Failed|Completed
            source_labels:
            - __meta_kubernetes_pod_phase
  extraPipelines:
    metrics/gloo-mgmt:
      receivers:
      - prometheus/gloo-mgmt # Prometheus scrape config for the management server
      processors:
      - memory_limiter
      - batch
      exporters:
      - prometheus # Prometheus deployed by Gloo
```
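A note on the `__address__` relabel rule: the doubled `$$` escapes Helm templating, so the rendered Prometheus configuration uses the replacement `$1:$2`, which joins the pod address with the port from the `prometheus.io/port` annotation. As a quick illustration in Python (Prometheus uses RE2, but this particular pattern behaves the same under Python's `re`):

```python
import re

# Pattern from the relabel rule above; Helm renders the replacement $$1:$$2 as $1:$2.
pattern = re.compile(r"([^:]+)(?::\d+)?;(\d+)")

# Prometheus joins the source_labels with ';' before matching, for example:
# __address__ = "10.0.0.5:8080", prometheus.io/port annotation = "9091"
joined = "10.0.0.5:8080;9091"

# Drop any existing port from the address and append the annotated port.
new_address = pattern.sub(r"\1:\2", joined)
print(new_address)  # 10.0.0.5:9091
```

The same rule also works when `__address__` has no port: `"10.0.0.5;9091"` likewise becomes `10.0.0.5:9091`.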
- Follow the Upgrade Gloo Gateway guide to apply the changes in your environment. For multicluster environments, upgrade only the Gloo management server with your updated values file; no upgrade of the Gloo agent is required.
- Verify that the configmap for the telemetry gateway is updated with the values that you set in the values file.
kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml
- Perform a rollout restart of the gateway deployment to force your configmap changes to be applied in the telemetry gateway pods.
kubectl rollout restart -n gloo-mesh deployment/gloo-telemetry-gateway
Forward data to Datadog
Set up an extra exporter to forward pipeline metrics to your Datadog instance.
If you want to set up an exporter for a different destination, update your configuration in the values file for the installation Helm chart accordingly, and use the same steps as outlined in this topic to update your pipeline. You can find an overview of supported providers in the OpenTelemetry documentation.
- Get the URL and the API key to log in to your Datadog instance. For more information about setting up an API key, see the Datadog documentation.
- Add the following configuration to your values file for the Gloo Gateway installation Helm chart. When you add this configuration to your Gloo OTel pipeline, all pipeline-specific metrics are forwarded to your Datadog instance.

```yaml
telemetryGatewayCustomization:
  extraExporters:
    datadog:
      api:
        site: <datadog-site> # Example: datadoghq.eu
        key: <datadog-api-key>
  extraPipelines:
    metrics/workload-clusters:
      receivers:
      - otlp # Metrics received by the collector in workload clusters
      processors:
      - memory_limiter
      - batch
      exporters:
      - datadog # Exporter specified above as the new destination
```
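If you prefer not to store the API key in your values file, the OTel collector supports environment variable expansion in its configuration with the `${env:VAR}` syntax. This is a sketch under the assumption that your chart passes the exporter settings through unmodified and that you can set the variable on the telemetry gateway pod; `DD_API_KEY` is a hypothetical variable name, not a Gloo default.

```yaml
telemetryGatewayCustomization:
  extraExporters:
    datadog:
      api:
        site: <datadog-site>
        key: ${env:DD_API_KEY} # hypothetical variable; must be present in the telemetry gateway pod
```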
- Follow the Upgrade Gloo Gateway guide to apply the changes in your environment. Multicluster environments only: Because the Gloo telemetry gateway receives all telemetry data for your workloads, Gloo Platform, and the metrics pipeline, new exporter configurations are ideally added to the telemetry gateway in the management cluster so that you can forward all telemetry data to your preferred destination. To apply the changes, upgrade only the Gloo management server with the updated values file; no upgrade of the Gloo agent is required. To optionally apply the changes to a Gloo OTel collector agent in one or all workload clusters, update the Gloo agent Helm release in each workload cluster with the settings from your values file. Note that you must replace telemetryGatewayCustomization with telemetryCollectorCustomization in your Helm values file so that your changes are applied to the OTel collector agents in your workload clusters.
- Depending on where you applied the changes, verify that the configmap for the telemetry gateway or collector agent pods is updated with the values that you set in the values file.
kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml
kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml
- Depending on where you applied the changes, perform a rollout restart of the gateway deployment or the collector daemon set to force your configmap changes to be applied to the telemetry gateway or collector agent pods.
kubectl rollout restart -n gloo-mesh deployment/gloo-telemetry-gateway
kubectl rollout restart -n gloo-mesh daemonset/gloo-telemetry-collector-agent
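For the workload-cluster case described in the upgrade step above, the exporter settings move under the telemetryCollectorCustomization key. A sketch with the same Datadog placeholders follows; note that the receivers available to the collector agent can differ from those of the gateway, so adjust any extra pipeline definition accordingly.

```yaml
telemetryCollectorCustomization:
  extraExporters:
    datadog:
      api:
        site: <datadog-site> # Example: datadoghq.eu
        key: <datadog-api-key>
```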