Customize the pipeline

You can customize the Gloo OTel pipeline and set up additional receivers, processors, and exporters in your pipeline.

To see the receivers, processors, and exporters that are set up by default for you, run the following commands:

    kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml
    
    kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml

For more information about receivers, processors, and exporters, see the pipeline architecture information in the OpenTelemetry documentation.
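For orientation, an OpenTelemetry collector configuration wires named receivers, processors, and exporters together in its `service.pipelines` section. The following minimal sketch uses illustrative component names and values; it is not the Gloo default configuration:

```yaml
receivers:
  otlp:                       # accept OTLP metrics over gRPC
    protocols:
      grpc: {}
processors:
  batch: {}                   # buffer and batch data before export
exporters:
  prometheus:
    endpoint: "0.0.0.0:9091"  # expose metrics for Prometheus to scrape
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```

The `extraReceivers`, `extraExporters`, and `extraPipelines` Helm values in this guide add entries to the corresponding sections of this generated configuration.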

Scrape the Gloo management server pod with the telemetry gateway

Set up an extra receiver to scrape metrics from the Gloo management server and manipulate them so that they can be forwarded to an exporter.

  1. Add the following configuration to your values file for the Gloo Gateway installation Helm chart. This configuration sets up a scraping job for the Gloo management server. In addition, relabeling rules use regular expressions to drop or rewrite metrics. The data is then forwarded to the default Prometheus exporter.

    telemetryGatewayCustomization:
      extraReceivers:
        prometheus/gloo-mgmt:
          config:
            scrape_configs:
            - job_name: gloo-mesh-mgmt-server-otel
              honor_labels: true
              kubernetes_sd_configs:
              - namespaces:
                  names:
                  - gloo-mesh
                role: pod
              relabel_configs:
              - action: keep
                regex: gloo-mesh-mgmt-server|gloo-mesh-ui
                source_labels:
                - __meta_kubernetes_pod_label_app
              - action: keep
                regex: true
                source_labels:
                - __meta_kubernetes_pod_annotation_prometheus_io_scrape
              - action: drop
                regex: true
                source_labels:
                - __meta_kubernetes_pod_annotation_prometheus_io_scrape_slow
              - action: replace
                regex: (https?)
                source_labels:
                - __meta_kubernetes_pod_annotation_prometheus_io_scheme
                target_label: __scheme__
              - action: replace
                regex: (.+)
                source_labels:
                - __meta_kubernetes_pod_annotation_prometheus_io_path
                target_label: __metrics_path__
              - action: replace
                regex: ([^:]+)(?::\d+)?;(\d+)
                replacement: $$1:$$2
                source_labels:
                - __address__
                - __meta_kubernetes_pod_annotation_prometheus_io_port
                target_label: __address__
              - action: labelmap
                regex: __meta_kubernetes_pod_annotation_prometheus_io_param_(.+)
                replacement: __param_$$1
              - action: labelmap
                regex: __meta_kubernetes_pod_label_(.+)
              - action: replace
                source_labels:
                - __meta_kubernetes_namespace
                target_label: namespace
              - action: replace
                source_labels:
                - __meta_kubernetes_pod_name
                target_label: pod
              - action: drop
                regex: Pending|Succeeded|Failed|Completed
                source_labels:
                - __meta_kubernetes_pod_phase
      extraPipelines:
        metrics/gloo-mgmt:
          receivers:
          - prometheus/gloo-mgmt # Prometheus scrape config for mgmt-server
          processors:
          - memory_limiter
          - batch
          exporters:
          - prometheus # Prometheus deployed by Gloo.
    
  2. Follow the Upgrade Gloo Gateway guide to apply the changes in your environment. For multicluster environments, upgrade only the Gloo management server with your updated values file, as no upgrade of the Gloo agent is required.

  3. Verify that the configmap for the telemetry gateway is updated with the values that you set in the values file.

    kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml
    
  4. Perform a rollout restart of the gateway deployment to force your configmap changes to be applied to the telemetry gateway pods.

    kubectl rollout restart -n gloo-mesh deployment/gloo-telemetry-gateway
    
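To see what the address-rewriting relabel rule in step 1 does, consider how Prometheus evaluates it: the `source_labels` values are joined with a semicolon, matched against the regex, and the replacement (written `$$1:$$2` in the values file because the OTel collector requires `$` to be escaped as `$$`) rebuilds the scrape address from the pod address and the `prometheus.io/port` annotation. A quick Python sketch of the same substitution, with example label values:

```python
import re

# Prometheus joins the source_labels values with ";" before matching:
# __address__ = "10.0.0.5:9090", prometheus.io/port annotation = "9091"
joined = "10.0.0.5:9090;9091"

# Same regex as the relabel rule; Prometheus's $1:$2 replacement
# is written as \1:\2 in Python syntax.
pattern = r"([^:]+)(?::\d+)?;(\d+)"
new_address = re.sub(pattern, r"\1:\2", joined)
print(new_address)  # 10.0.0.5:9091
```

So a pod discovered at 10.0.0.5:9090 that carries the annotation `prometheus.io/port: "9091"` is scraped at 10.0.0.5:9091 instead.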

Forward data to Datadog

Set up an extra exporter to forward pipeline metrics to your Datadog instance.

If you want to set up an exporter for a different destination, update your configuration in the values file for the installation Helm chart accordingly, and use the same steps as outlined in this topic to update your pipeline. You can find an overview of supported providers in the OpenTelemetry documentation.

  1. Get the URL and the API key to log into your Datadog instance. For more information about setting up an API key, see the Datadog documentation.

  2. Add the following configuration to your values file for the Gloo Gateway installation Helm chart. When you add this configuration to your Gloo OTel pipeline, a set of default metrics are forwarded to your Datadog instance. For more information, see Exported metrics. If you want to additionally forward Gloo management server-specific metrics, follow the steps in Scrape the Gloo management server pod with the telemetry gateway.

    telemetryGatewayCustomization:
      extraExporters:
        datadog:
          api:
            site: <datadog-site>  # Example: datadoghq.eu
            key: <datadog-api-key>
      extraPipelines:
        metrics/workload-clusters:
          receivers:
          - otlp # Metrics received by the Collector in workload clusters.
          processors:
          - memory_limiter
          - batch
          exporters:
          - datadog # Exporter specified above as the new destination.
    
  3. Follow the Upgrade Gloo Gateway guide to apply the changes in your environment.

    Multicluster environments only: Because the Gloo telemetry gateway receives all telemetry data for your workloads, Gloo Platform, and the metrics pipeline, new exporter configurations are ideally added to the telemetry gateway in the management cluster so that all telemetry data is forwarded to your preferred destination. To apply the changes, upgrade only the Gloo management server with the updated values file; no upgrade of the Gloo agent is required. To optionally apply the changes to the Gloo OTel collector agents in one or all workload clusters, update the Gloo agent Helm release in each workload cluster with the same settings. Note that you must replace telemetryGatewayCustomization with telemetryCollectorCustomization in that values file so that the changes are applied to the OTel collector agents.

  4. Depending on where you applied the changes, verify that the configmap for the telemetry gateway or collector agent pods is updated with the values that you set in the values file.

    kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml
    
    kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml
    
  5. Depending on where you applied the changes, perform a rollout restart of the gateway deployment or the collector daemon set to force your configmap changes to be applied to the telemetry gateway or collector agent pods.

    kubectl rollout restart -n gloo-mesh deployment/gloo-telemetry-gateway
    
    kubectl rollout restart -n gloo-mesh daemonset/gloo-telemetry-collector-agent
    
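If you prefer not to store the Datadog API key in plain text in your values file, the OTel collector can read it from an environment variable by using its standard `${env:...}` expansion syntax. The following sketch assumes that you expose a Kubernetes secret to the telemetry gateway pod as an environment variable named `DD_API_KEY` (an illustrative name):

```yaml
telemetryGatewayCustomization:
  extraExporters:
    datadog:
      api:
        site: <datadog-site>
        key: ${env:DD_API_KEY}  # DD_API_KEY is an illustrative variable name,
                                # populated from a Kubernetes secret on the pod
```

This keeps the credential out of your Helm values and out of the rendered configmap.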

Exported metrics

The following metrics are exported by default when you apply the Datadog configuration in this guide.

Istio gateway metrics
  • istio_requests_total
  • istio_request_duration_milliseconds
  • istio_request_duration_milliseconds_bucket
  • istio_request_duration_milliseconds_count
  • istio_request_duration_milliseconds_sum
  • istio_tcp_sent_bytes_total
  • istio_tcp_received_bytes_total
  • istio_tcp_connections_opened_total
Istio metrics
  • pilot_proxy_convergence_time
Gloo Platform metrics
  • gloo_mesh_reconciler_time_sec
  • relay_pull_clients_connected
  • relay_push_clients_connected
  • relay_push_clients_warmed
Cilium metrics (if Gloo Network is enabled)
  • hubble_flows_processed_total
  • hubble_drop_total
OTel pipeline metrics
  • otelcol_processor_refused_metric_points
  • otelcol_processor_dropped_metric_points
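If you want to forward only a subset of these metrics to Datadog, the OpenTelemetry filter processor can drop the rest before export. The following is a hedged sketch: it assumes the Helm chart exposes an extraProcessors field alongside extraReceivers and extraExporters, and the include syntax shown applies to older filter processor versions (newer collector versions use OTTL expressions instead), so check your chart and collector version before using it.

```yaml
telemetryGatewayCustomization:
  extraProcessors:            # assumption: chart supports this field
    filter/datadog:
      metrics:
        include:
          match_type: regexp
          metric_names:
          - istio_requests_total
          - gloo_mesh_.*
  extraPipelines:
    metrics/workload-clusters:
      receivers:
      - otlp
      processors:
      - memory_limiter
      - filter/datadog        # drop everything except the matched metrics
      - batch
      exporters:
      - datadog
```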