Overview

If your cluster uses the Cilium CNI image that is provided by Solo, a few Cilium metrics are collected by default and can be accessed by using the expression browser of the built-in Prometheus server.
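
For example, you can open the expression browser by port-forwarding the built-in Prometheus server. This sketch assumes the default deployment name prometheus-server in the gloo-mesh namespace; adjust the names to your setup.

    # Forward the built-in Prometheus server to your local machine.
    kubectl port-forward -n gloo-mesh deploy/prometheus-server 9090
    # Open http://localhost:9090 and run a query such as the following to list all Cilium metrics:
    # {__name__=~"cilium_.*"}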

To collect more metrics, you can enable the filter/cilium processor on the Gloo telemetry collector agent that is built into the Gloo telemetry pipeline. This processor collects the following metrics:

  • Cilium and Hubble: By default, all Cilium and Hubble metrics are collected. For an overview of metrics that are collected, see the Cilium documentation.

Add all Cilium and Hubble metrics

Enable the filter/cilium processor in the Gloo telemetry pipeline to collect Cilium-specific metrics. All metrics are exposed on the Gloo telemetry collector agent where they can be scraped by the built-in Prometheus. You can view these metrics by using the Prometheus expression browser.

Single cluster

  1. Check your Cilium CNI Helm values.

      helm get values cilium -n kube-system -o yaml > cilium.yaml
    open cilium.yaml
      
    • If customCalls.enabled is set to true, continue to the next step.
    • If customCalls.enabled is set to false or is unset, perform a Helm upgrade to set it to true, such as in the following example. This setting is required to collect the additional eBPF metrics.
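
    A minimal sketch of that upgrade, which reuses your existing values and assumes that $SOLO_CILIUM_REPO points to the Helm repository for your Solo Cilium chart and that $CILIUM_VERSION matches your installed version:

      helm upgrade cilium ${SOLO_CILIUM_REPO}/cilium \
        --namespace kube-system \
        --version $CILIUM_VERSION \
        --reuse-values \
        --set customCalls.enabled=true
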
  2. Review the configuration of the filter/cilium processor that is built into the Gloo telemetry pipeline. This configuration includes a regex that matches all metrics whose names start with hubble_ or cilium_.

      
    filter/cilium:
      metrics:
        include:
          match_type: regexp
          metric_names:
            - hubble_.*
            - cilium_.*
      
  3. Get your current installation Helm values, and save them in a file.

      helm get values gloo-platform -n gloo-mesh -o yaml > gloo-single.yaml
    open gloo-single.yaml
      
  4. In your Helm values file, enable the metrics/cilium pipeline, which uses the filter/cilium processor that you reviewed in the previous step. Make sure that glooNetwork.enabled is also set to true in your values.

      
    glooNetwork:
      enabled: true
    telemetryCollector:
      enabled: true
    telemetryCollectorCustomization:
      pipelines:
        metrics/cilium:
          enabled: true
      
  5. Upgrade your installation by using your updated values file.

      
    helm upgrade gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values gloo-single.yaml
      
  6. Verify that your settings were added to the Gloo telemetry collector configmap.

      kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml
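      # Optionally, narrow the output to the Cilium entries (the grep pattern is illustrative):
      kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml | grep -E "filter/cilium|metrics/cilium"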
      
  7. Perform a rollout restart of the telemetry collector daemon set to force your configmap changes to be applied to the telemetry collector agent pod.

      kubectl rollout restart -n gloo-mesh daemonset/gloo-telemetry-collector-agent
      
  8. Optional: To monitor your Cilium CNI, import the Cilium dashboard in Grafana.
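
     For example, you can port-forward your Grafana instance and import the dashboard JSON through its UI. The namespace and deployment name in this sketch are assumptions; adjust them to your Grafana setup.

      kubectl port-forward -n monitoring deploy/grafana 3000
      # Open http://localhost:3000 and go to Dashboards > Import to upload the Cilium dashboard JSON.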

Multicluster

  1. Check the Cilium CNI Helm values on a workload cluster.

      helm get values cilium -n kube-system --kube-context $REMOTE_CONTEXT -o yaml > cilium.yaml
    open cilium.yaml
      
    • If customCalls.enabled is set to true, continue to the next step.
    • If customCalls.enabled is set to false or is unset, perform a Helm upgrade to set it to true, such as in the following example. This setting is required to collect the additional eBPF metrics.
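
    A minimal sketch of that upgrade, which reuses your existing values and assumes that $SOLO_CILIUM_REPO points to the Helm repository for your Solo Cilium chart and that $CILIUM_VERSION matches your installed version:

      helm upgrade cilium ${SOLO_CILIUM_REPO}/cilium \
        --kube-context $REMOTE_CONTEXT \
        --namespace kube-system \
        --version $CILIUM_VERSION \
        --reuse-values \
        --set customCalls.enabled=true
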
  2. Review the configuration of the filter/cilium processor that is built into the Gloo telemetry pipeline. This configuration includes a regex that matches all metrics whose names start with hubble_ or cilium_.

      
    filter/cilium:
      metrics:
        include:
          match_type: regexp
          metric_names:
            - hubble_.*
            - cilium_.*
      
  3. Get your current Helm values for the workload cluster, and save them in a file.

      helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $REMOTE_CONTEXT > data-plane.yaml
    open data-plane.yaml
      
  4. In your Helm values file, enable the metrics/cilium pipeline, which uses the filter/cilium processor that you reviewed in the previous step. Make sure that glooNetwork.enabled is also set to true in your values.

      
    glooNetwork:
      enabled: true
    telemetryCollector:
      enabled: true
    telemetryCollectorCustomization:
      pipelines:
        metrics/cilium:
          enabled: true
      
  5. Upgrade your workload cluster installation by using your updated values file.

      
    helm upgrade gloo-platform gloo-platform/gloo-platform \
      --kube-context $REMOTE_CONTEXT \
      --namespace gloo-mesh \
      -f data-plane.yaml \
      --version $GLOO_VERSION
      
  6. Verify that your settings were added to the Gloo telemetry collector configmap.

      kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml --context $REMOTE_CONTEXT
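      # Optionally, narrow the output to the Cilium entries (the grep pattern is illustrative):
      kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml --context $REMOTE_CONTEXT | grep -E "filter/cilium|metrics/cilium"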
      
  7. Perform a rollout restart of the telemetry collector daemon set to force your configmap changes to be applied to the telemetry collector agent pods.

      kubectl rollout restart -n gloo-mesh daemonset/gloo-telemetry-collector-agent --context $REMOTE_CONTEXT
      
  8. Optional: To monitor your Cilium CNI, import the Cilium dashboard in Grafana.

Customize the Cilium metrics collection

Instead of enabling all Hubble and Cilium metrics that the Cilium agent emits, you can customize the Cilium processor and include only the metrics that you want to collect. All metrics are exposed on the Gloo telemetry collector agent where they can be scraped by the built-in Prometheus. You can view these metrics by using the Prometheus expression browser.

Single cluster

  1. Decide which Cilium and Hubble metrics you want to collect. For an overview of supported metrics, see the Cilium documentation.
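
     For example, you can list the metric names that the Cilium agent currently exposes. This sketch assumes that the Cilium daemon set runs as ds/cilium in kube-system with a container named cilium-agent; in newer Cilium versions, the in-pod CLI is named cilium-dbg instead of cilium.

      kubectl exec -n kube-system ds/cilium -c cilium-agent -- cilium metrics list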

  2. Get your current installation Helm values, and save them in a file.

      helm get values gloo-platform -n gloo-mesh -o yaml > gloo-single.yaml
    open gloo-single.yaml
      
  3. In your Helm values file, disable the default metrics/cilium pipeline, and define both a custom processor that includes only the metrics that you want to collect and a custom pipeline that uses that processor. In the following example, the filter/cilium-custom processor keeps the cilium_node_connectivity_status and cilium_node_connectivity_latency_seconds metrics, and the metrics/cilium-custom pipeline applies it.

      
    telemetryCollector:
      enabled: true
    telemetryCollectorCustomization:
      pipelines:
        metrics/cilium:
          # Disable the default pipeline so that Cilium metrics are not collected twice.
          enabled: false
      extraProcessors:
        filter/cilium-custom:
          metrics:
            include:
              match_type: strict
              metric_names:
                # Cilium and Hubble metrics to include
                - cilium_node_connectivity_status
                - cilium_node_connectivity_latency_seconds
      extraPipelines:
        metrics/cilium-custom:
          receivers:
            - prometheus
          processors:
            - memory_limiter
            - filter/cilium-custom
            - batch
          exporters:
            - otlp
      
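     The custom pipeline mirrors the built-in metrics/cilium pipeline: the prometheus receiver scrapes the metrics, the memory_limiter and batch processors guard the agent's memory use and batch the data, the filter/cilium-custom processor keeps only the metrics that you listed, and the otlp exporter forwards them along the rest of the Gloo telemetry pipeline.
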
  4. Upgrade your installation by using your updated values file.

      
    helm upgrade gloo-platform gloo-platform/gloo-platform \
       --namespace gloo-mesh \
       --version $GLOO_VERSION \
       --values gloo-single.yaml
      
  5. Verify that your custom Cilium settings were added to the Gloo telemetry collector configmap.

      kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml
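      # Optionally, confirm that your custom processor and pipeline were added (the grep pattern is illustrative):
      kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml | grep cilium-custom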
      
  6. Perform a rollout restart of the telemetry collector daemon set to force your configmap changes to be applied to the telemetry collector agent pod.

      kubectl rollout restart -n gloo-mesh daemonset/gloo-telemetry-collector-agent
      
  7. Optional: To monitor your Cilium CNI, import the Cilium dashboard in Grafana.

Multicluster

  1. Decide which Cilium and Hubble metrics you want to collect. For an overview of supported metrics, see the Cilium documentation.
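
     For example, you can list the metric names that the Cilium agent currently exposes. This sketch assumes that the Cilium daemon set runs as ds/cilium in kube-system with a container named cilium-agent; in newer Cilium versions, the in-pod CLI is named cilium-dbg instead of cilium.

      kubectl exec --context $REMOTE_CONTEXT -n kube-system ds/cilium -c cilium-agent -- cilium metrics list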

  2. Get the Helm values files for your workload cluster.

      helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $REMOTE_CONTEXT > data-plane.yaml
    open data-plane.yaml
      
  3. In your Helm values file, disable the default metrics/cilium pipeline, and define both a custom processor that includes only the metrics that you want to collect and a custom pipeline that uses that processor. In the following example, the filter/cilium-custom processor keeps the cilium_node_connectivity_status and cilium_node_connectivity_latency_seconds metrics, and the metrics/cilium-custom pipeline applies it.

      
    telemetryCollector:
      enabled: true
    telemetryCollectorCustomization:
      pipelines:
        metrics/cilium:
          # Disable the default pipeline so that Cilium metrics are not collected twice.
          enabled: false
      extraProcessors:
        filter/cilium-custom:
          metrics:
            include:
              match_type: strict
              metric_names:
                # Cilium and Hubble metrics to include
                - cilium_node_connectivity_status
                - cilium_node_connectivity_latency_seconds
      extraPipelines:
        metrics/cilium-custom:
          receivers:
            - prometheus
          processors:
            - memory_limiter
            - filter/cilium-custom
            - batch
          exporters:
            - otlp
      
  4. Upgrade your workload cluster installation by using your updated values file.

      helm upgrade gloo-platform gloo-platform/gloo-platform \
      --kube-context $REMOTE_CONTEXT \
      --namespace gloo-mesh \
      -f data-plane.yaml \
      --version $GLOO_VERSION 
      
  5. Verify that your settings are applied in the workload cluster.

    1. Verify that your settings were added to the Gloo telemetry collector configmap.

        kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml --context $REMOTE_CONTEXT
        
    2. Perform a rollout restart of the telemetry collector daemon set to force your configmap changes to be applied to the telemetry collector agent pods.

        kubectl rollout restart -n gloo-mesh daemonset/gloo-telemetry-collector-agent --context $REMOTE_CONTEXT
        
  6. Optional: To monitor your Cilium CNI, import the Cilium dashboard in Grafana.