Forward metrics to Datadog

Set up an extra exporter to forward metrics to your Datadog instance.

If you want to set up an exporter for a different destination, update the values file for your installation Helm chart accordingly, and follow the same steps as outlined in this topic to update your pipeline. For an overview of supported providers, see the OpenTelemetry documentation.
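
For example, to forward metrics to a generic OTLP/HTTP destination instead of Datadog, a values snippet might look like the following sketch. The otlphttp exporter is part of the OpenTelemetry Collector; the endpoint URL and the pipeline name are placeholders to replace with your own values.

    telemetryCollectorCustomization:
      extraExporters:
        otlphttp:
          endpoint: https://<your-otlp-endpoint>:4318  # Placeholder: your destination's OTLP/HTTP endpoint
      extraPipelines:
        metrics/export-to-otlphttp:
          receivers:
          - otlp
          processors:
          - memory_limiter
          - batch
          exporters:
          - otlphttp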

  1. Get the site and the API key for your Datadog instance. For more information about setting up an API key, see the Datadog documentation.
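
     For example, you might store the site and API key in environment variables so that you can substitute them into your Helm values file in a later step. The variable names here are illustrative only.

    export DATADOG_SITE=datadoghq.eu         # Replace with your Datadog site
    export DATADOG_API_KEY=<datadog-api-key> # Replace with your Datadog API key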

  2. Get your current installation Helm values, and save them in a file.

    helm get values gloo-platform -n gloo-mesh -o yaml > gloo-gateway-single.yaml
    open gloo-gateway-single.yaml
    
  3. Set up an extra exporter and add the Datadog site and API key to your Helm values file. With this configuration, the default metrics are forwarded to your Datadog instance. To forward more metrics, you can configure your Gloo telemetry pipeline to collect them, such as by setting up additional receivers, so that the telemetry collector agent can forward these metrics to Datadog as well.

    telemetryCollectorCustomization:
      extraExporters:
        datadog:
          api:
            site: <datadog-site>  # Example: datadoghq.eu
            key: <datadog-api-key>
      extraPipelines:
        metrics/export-to-datadog:
          receivers:
          - otlp # Metrics received by the collector agents
          processors:
          - memory_limiter
          - batch
          exporters:
          - datadog # Exporter specified above as the new destination.
    
  4. Upgrade your installation by using your updated values file.

    helm upgrade gloo-platform gloo-platform/gloo-platform \
      --namespace gloo-mesh \
      -f gloo-gateway-single.yaml \
      --version $UPGRADE_VERSION
    
  5. Verify that your settings were added to the Gloo telemetry collector configmap.

    kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml
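
     Optionally, you can spot-check the output for the new exporter instead of reading the whole configmap. The grep filter is just one way to surface the relevant lines.

    kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml | grep -A 3 'datadog:'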
    
  6. Perform a rollout restart of the telemetry collector daemon set to force your configmap changes to be applied to the telemetry collector agent pod.

    kubectl rollout restart -n gloo-mesh daemonset/gloo-telemetry-collector-agent
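
     Optionally, wait for the restart to finish and verify that the collector agent pods come back up. The second command assumes that the pod names include telemetry-collector.

    kubectl rollout status daemonset/gloo-telemetry-collector-agent -n gloo-mesh
    kubectl get pods -n gloo-mesh | grep telemetry-collector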
    

In a multicluster setup, you can add the Datadog exporter to the Gloo telemetry gateway in the management cluster, or to each Gloo telemetry collector agent in the workload clusters. The option that is right for you depends on the size of your environment, the amount of telemetry data that you want to export, and the compute resources that are available to the Gloo telemetry pipeline components.

Gloo telemetry collector agent

  1. Get your current values for the workload clusters.

    helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $REMOTE_CONTEXT > agent.yaml
    open agent.yaml
    
  2. In the Helm values file, add an extra exporter for Datadog.

    telemetryCollectorCustomization:
      extraExporters:
        datadog:
          api:
            site: <datadog-site>  # Example: datadoghq.eu
            key: <datadog-api-key>
      extraPipelines:
        metrics/workload-clusters:
          receivers:
          - otlp # Metrics received by the collector agents
          processors:
          - memory_limiter
          - batch
          exporters:
          - datadog # Exporter specified above as the new destination.
    
  3. Upgrade the workload cluster.

    helm upgrade gloo-platform gloo-platform/gloo-platform \
      --kube-context $REMOTE_CONTEXT \
      --namespace gloo-mesh \
      -f agent.yaml \
      --version $UPGRADE_VERSION
    
  4. Verify that your settings are applied in the workload cluster.

    1. Verify that your settings were added to the Gloo telemetry collector configmap.

      kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml --context $REMOTE_CONTEXT
      
    2. Perform a rollout restart of the telemetry collector daemon set to force your configmap changes to be applied to the telemetry collector agent pods.

      kubectl rollout restart -n gloo-mesh daemonset/gloo-telemetry-collector-agent --context $REMOTE_CONTEXT
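
       Optionally, confirm that the restart completed in the workload cluster.

      kubectl rollout status daemonset/gloo-telemetry-collector-agent -n gloo-mesh --context $REMOTE_CONTEXT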
      

Gloo telemetry gateway

  1. Get your current values for the management cluster.

    helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $MGMT_CONTEXT > mgmt-server.yaml
    open mgmt-server.yaml
    
  2. In your Helm values file, add an extra exporter for Datadog.

    telemetryGatewayCustomization:
      extraExporters:
        datadog:
          api:
            site: <datadog-site>  # Example: datadoghq.eu
            key: <datadog-api-key>
      extraPipelines:
        metrics/workload-clusters:
          receivers:
          - otlp # Metrics received by the collector agents
          processors:
          - memory_limiter
          - batch
          exporters:
          - datadog # Exporter specified above as the new destination.
    
  3. Upgrade the management cluster.

    helm upgrade gloo-platform gloo-platform/gloo-platform \
      --kube-context $MGMT_CONTEXT \
      --namespace gloo-mesh \
      -f mgmt-server.yaml \
      --version $UPGRADE_VERSION
    
  4. Verify that your settings are applied in the management cluster.

    1. Verify that your settings were added to the Gloo telemetry gateway configmap.

      kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml --context $MGMT_CONTEXT
      
    2. Perform a rollout restart of the telemetry gateway deployment to force your configmap changes to be applied to the telemetry gateway pod.

      kubectl rollout restart -n gloo-mesh deployment/gloo-telemetry-gateway --context $MGMT_CONTEXT
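
       Optionally, check the telemetry gateway logs for export errors, such as authentication failures that point to an invalid API key. The grep filter is only a rough way to surface relevant lines.

      kubectl logs -n gloo-mesh deployment/gloo-telemetry-gateway --context $MGMT_CONTEXT | grep -iE 'error|datadog'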