Set up the pipeline

Set up the Gloo OpenTelemetry (OTel) pipeline in a new or an existing Gloo Gateway installation.

Before you begin

  1. Review the default pipelines that are available in the Gloo OTel pipeline and decide which ones you want to enable. Pipelines can be enabled in either the Gloo telemetry collector agent or the telemetry gateway, as shown in the following lists. By default, the Gloo OTel pipeline is set up with the metrics/ui and metrics/prometheus pipelines.

    • Gloo telemetry collector agent pipelines:
      • metrics/ui: Collects the metrics that are required for the Gloo UI graph. This pipeline is enabled by default. To view the metrics that are included with this pipeline, see View default metrics.
      • metrics/cilium: Collects extra Cilium metrics to feed the Cilium dashboard in Grafana.
      • logs/istio_access_logs: Collects Istio access logs from Istio-enabled workloads. For more information, see access logs for the ingress gateway and workloads in a service mesh.
      • logs/cilium_flows: Collects network flows for Cilium-enabled cluster workloads so that you can use the meshctl hubble observe command. For more information, see Network flow logs.
      • traces/istio: A pre-defined pipeline that collects traces to observe and monitor requests, and pushes them to the built-in Jaeger platform or a custom Jaeger instance. For more information, see request tracing for the ingress gateway or workloads in a service mesh.
    • Gloo telemetry gateway pipelines:
      • logs/clickhouse: Forwards the Istio access logs that the collector agents receive to ClickHouse.
      • metrics/prometheus: Collects metrics from various sources, such as the Gloo management server, Gloo Platform, Istio, Cilium, and the Gloo OTel pipeline, and makes this data available to the built-in Prometheus server. This pipeline is enabled by default.
      • traces/jaeger: Receives traces from the Gloo telemetry collector agents, and forwards them to the built-in or custom Jaeger tracing platform.
  2. Choose how to secure the communication between the telemetry gateway in the management cluster and collector agents in the workload clusters.

Set up OTel with the default certificate

Enable the OTel telemetry pipeline by using the default certificate that is automatically created for the telemetry gateway.

  1. Enable the Gloo telemetry gateway. In multicluster setups, enable the telemetry gateway in your management cluster.

    1. Get your current installation Helm values, and save them in a file. Note that if you migrated from the legacy charts, your release might have a different name.

      helm get values gloo-platform -n gloo-mesh > gloo-gateway-single.yaml
      open gloo-gateway-single.yaml
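
      If you are not sure of your Helm release name, for example after migrating from the legacy charts, you can optionally list the releases in the gloo-mesh namespace first.

      helm list -n gloo-mesh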
      
    2. Add or update the following sections in your Helm values file.

      legacyMetricsPipeline:
        enabled: false
      telemetryGateway:
        enabled: true
        resources:
          limits:
            cpu: 600m
            memory: 2Gi
          requests:
            cpu: 300m
            memory: 1Gi
      telemetryCollector:
        config:
          exporters:
            otlp:
              endpoint: gloo-telemetry-gateway.gloo-mesh:4317
        enabled: true
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
      
    3. Optional: Enable additional default pipelines. The following example shows how to enable the traces/istio and traces/jaeger pipelines in the Gloo telemetry gateway and collector agents.

      telemetryCollectorCustomization: 
        pipelines: 
          traces/istio: 
            enabled: true
      telemetryGatewayCustomization: 
        pipelines:
          traces/jaeger: 
            enabled: true
      
    4. Upgrade your installation by using your updated values file.

      helm upgrade gloo-platform gloo-platform/gloo-platform \
         --namespace gloo-mesh \
         --values gloo-gateway-single.yaml
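
      After the upgrade completes, you can optionally confirm which pipelines are enabled by inspecting the generated OTel configuration. The config map names used here, gloo-telemetry-collector-config and gloo-telemetry-gateway-config, are assumptions that might differ in your version.

      kubectl get configmap gloo-telemetry-collector-config -n gloo-mesh -o yaml
      kubectl get configmap gloo-telemetry-gateway-config -n gloo-mesh -o yaml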
      
  2. Verify that all pods in the gloo-mesh namespace are up and running, and that you see a gloo-telemetry-gateway* pod and one or more gloo-telemetry-collector-agent* pods. Because the agents are deployed as a daemon set, the number of telemetry collector agent pods equals the number of worker nodes in your cluster.

    kubectl get pods -n gloo-mesh
    

    Example output:

    ...
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          22s
    gloo-telemetry-collector-agent-dgs87      1/1     Running   0          22s
    gloo-telemetry-collector-agent-nbmr6      1/1     Running   0          22s
    gloo-telemetry-gateway-6547f479d5-rtj7s   1/1     Running   0          107s
    
  3. Verify that the default certificate secret for the telemetry gateway is created.

    kubectl get secrets -n gloo-mesh
    

    Example output:

    NAME                                       TYPE                 DATA   AGE
    dashboard                                  Opaque               0      3d20h
    gloo-telemetry-gateway-tls-secret          kubernetes.io/tls    3      3d20h
    ...
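
    To check the subject and expiry dates of the default certificate, you can optionally decode it from the secret. This sketch assumes that openssl is installed on your local machine.

    kubectl get secret gloo-telemetry-gateway-tls-secret -n gloo-mesh \
      -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -dates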
    
  4. Verify metrics collection.

Set up OTel with a custom certificate

Enable the OTel telemetry pipeline by using a custom certificate to secure the connection between the telemetry gateway and collector agents.

Using a custom certificate for the telemetry gateway is considered an advanced use case. This guide assumes that you set up Gloo Gateway in a multicluster setup.

  1. Decide on the root CA that you want to use to sign the certificate for the telemetry gateway. The recommended approach is to derive the telemetry gateway certificate from the same root CA that you used to sign the server and client TLS certificates for your relay connection. However, you can also use a custom root CA for your telemetry gateway certificate.

  2. Choose the domain name that you want to use for your telemetry gateway. In the following steps, the example domain gloo-telemetry-gateway.apps.cluster1.mydomain.net is used.

  3. Use your preferred certificate issuer to create a server certificate and key for the telemetry gateway's domain, and store that information in a secret named gloo-telemetry-gateway-tls-secret in the gloo-mesh namespace. You might follow steps similar to the management server certificate generation to generate your telemetry gateway certificate.

    For example, you might use the following YAML file with a cert-manager instance to create the certificate and a key for the gloo-telemetry-gateway.apps.cluster1.mydomain.net domain in a Vault instance. This example assumes that the root CA certificate and key are stored and managed in Vault so that Vault can derive the telemetry gateway certificate from the same root. After the telemetry gateway certificate and key are created, the information is stored in the gloo-telemetry-gateway-tls-secret secret in the gloo-mesh namespace. This file is provided only as an example; your certificate and key generation might be different, depending on your certificate setup.

    kind: Certificate
    apiVersion: cert-manager.io/v1
    metadata:
      name: gloo-telemetry-gateway
      namespace: gloo-mesh
    spec:
      secretName: gloo-telemetry-gateway-tls-secret
      duration: 8760h # 365 days
      renewBefore: 360h # 15 days
      # Issuer for certs
      issuerRef:
        kind: ClusterIssuer
        name: vault-issuer-gloo
      commonName: gloo-telemetry-gateway
      dnsNames:
        # Domain for gateway's DNS entry
        - gloo-telemetry-gateway.apps.cluster1.mydomain.net
      usages:
        - server auth
        - client auth
        - digital signature
        - key encipherment
      privateKey:
        algorithm: "RSA"
        size: 2048
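
    If you already have the signed server certificate, private key, and root CA certificate as local files, you can instead create the secret directly with kubectl. The file names server.crt, server.key, and ca.crt are placeholders for your own files.

    kubectl create secret generic gloo-telemetry-gateway-tls-secret -n gloo-mesh \
      --type=kubernetes.io/tls \
      --from-file=tls.crt=server.crt \
      --from-file=tls.key=server.key \
      --from-file=ca.crt=ca.crt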
    
  4. Verify that the gloo-telemetry-gateway-tls-secret secret is created. This secret name is referenced by default in the telemetryGateway.extraVolumes field of your Helm values file, which ensures that the telemetry gateway can access and use the certificate information.

    kubectl get secret gloo-telemetry-gateway-tls-secret -n gloo-mesh -o yaml
    

    Example output:

    apiVersion: v1
    data:
      ca.crt: [ca.crt content]
      tls.crt: [tls.crt content]
      tls.key: [tls.key content]
    kind: Secret
    metadata:
      annotations:
        cert-manager.io/alt-names: gloo-telemetry-gateway.apps.cluster1.mydomain.net
        cert-manager.io/certificate-name: gloo-telemetry-gateway
        cert-manager.io/common-name: gloo-telemetry-gateway
        cert-manager.io/ip-sans: ""
        cert-manager.io/issuer-group: ""
        cert-manager.io/issuer-kind: ClusterIssuer
        cert-manager.io/issuer-name: vault-issuer-gloo
        cert-manager.io/uri-sans: ""
      creationTimestamp: "2023-02-17T00:57:39Z"
      labels:
        controller.cert-manager.io/fao: "true"
      name: gloo-telemetry-gateway-tls-secret
      namespace: gloo-mesh
      resourceVersion: "11625264"
      uid: 31c794da-2359-43e6-ae02-6575968a0814
    type: kubernetes.io/tls
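
    You can also optionally decode the certificate to confirm that its subject alternative names include your telemetry gateway domain. This sketch assumes that openssl is installed on your local machine.

    kubectl get secret gloo-telemetry-gateway-tls-secret -n gloo-mesh \
      -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"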
    
  5. Enable the Gloo telemetry gateway. In multicluster setups, enable the telemetry gateway in your management cluster.

    1. Get your current installation Helm values, and save them in a file. Note that if you migrated from the legacy charts, your release might have a different name.
      helm get values gloo-platform -n gloo-mesh > gloo-gateway-single.yaml
      open gloo-gateway-single.yaml
      
    2. Add or update the following sections in your Helm values file.
      legacyMetricsPipeline:
        enabled: false
      telemetryGateway:
        enabled: true
        resources:
          limits:
            cpu: 600m
            memory: 2Gi
          requests:
            cpu: 300m
            memory: 1Gi
      telemetryGatewayCustomization:
        disableCertGeneration: true
      
    3. Optional: Enable additional default pipelines. The following example shows how to enable the traces/istio and traces/jaeger pipelines in the Gloo telemetry gateway and collector agents.
      telemetryCollectorCustomization: 
        pipelines: 
          traces/istio: 
            enabled: true
      telemetryGatewayCustomization: 
        pipelines:
          traces/jaeger: 
            enabled: true
      
    4. Upgrade your installation by using your updated values file.
      helm upgrade gloo-platform gloo-platform/gloo-platform \
         --namespace gloo-mesh \
         --values gloo-gateway-single.yaml
      
  6. Verify that the gloo-telemetry-gateway deployment in the gloo-mesh namespace is up and running. The collector agents are enabled in a later step.

    kubectl get deployments -n gloo-mesh
    
  7. Get the external IP address of the load balancer service that was created for the Gloo telemetry gateway.

    export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
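
    Note: Some cloud providers, such as AWS, assign a hostname instead of an IP address to the load balancer. In that case, a variation like the following might work; only the jsonpath field changes.

    export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')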
    
  8. Use your cloud or DNS provider to create a DNS entry in your domain for the telemetry gateway's IP address.
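
    After the DNS record propagates, you can optionally confirm that the domain resolves to the load balancer address. The domain shown is the example that is used throughout this guide.

    dig +short gloo-telemetry-gateway.apps.cluster1.mydomain.net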

  9. Prepare the Gloo telemetry collector agent installation. To successfully connect from a collector agent in the workload cluster to the telemetry gateway in the management cluster, the root CA certificate must be stored in a Kubernetes secret on the workload cluster. By default, the collector agents are configured to look up the root CA certificate from the relay-root-tls-secret Kubernetes secret in the gloo-mesh namespace. This secret might already exist in your workload cluster if you implemented Option 2 or Option 3 of the relay certificate setup options. Review the following options to decide if you can use this Kubernetes secret or need to create a new one.

    If you implemented Option 2 or Option 3 of the relay certificate setup options and you used the same root CA certificate to create the certificate for the telemetry gateway, you can use the relay-root-tls-secret Kubernetes secret for the collector agents.

    1. Check whether the relay-root-tls-secret secret exists. In multicluster setups, check for this secret in the workload clusters.
      kubectl get secret relay-root-tls-secret -n gloo-mesh
      
    2. If the secret exists, no further action is required. If the secret does not exist in your multicluster setup, copy the root CA certificate from the management cluster to each workload cluster.
      kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
      kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file ca.crt=ca.crt
      

    If you implemented Option 1 or Option 4 of the relay setup options, or if you decided to use a different root CA certificate for the telemetry gateway certificate, store the root CA certificate in a Kubernetes secret on each workload cluster.

    1. Store the root CA certificate that you want to use for the OTel pipeline in a secret. In multicluster setups, create this secret in each workload cluster.

      kubectl create secret generic telemetry-root-secret -n gloo-mesh --from-file ca.crt=<root_ca_cert>.crt
      
    2. Verify that the secret is created.

      kubectl get secret telemetry-root-secret -n gloo-mesh
      

  10. Enable the Gloo telemetry collector agents. In multicluster setups, enable the collector agents in each workload cluster.

    1. Get your updated installation Helm values again, and save them in a file. Note that if you migrated from the legacy charts, your release might have a different name.
      helm get values gloo-platform -n gloo-mesh > gloo-gateway-single.yaml
      open gloo-gateway-single.yaml
      
    2. Add or update the following sections in your Helm values file. Replace the serverName value with the domain for your telemetry gateway's DNS entry. If you created a custom root CA certificate secret named telemetry-root-secret in the previous step, include that secret name in the extraVolumes section. If you decided to use the root CA certificate in the relay-root-tls-secret Kubernetes secret, you can remove the secretName: telemetry-root-secret line from the Helm values file.
      telemetryCollector:
        config:
          exporters:
            otlp:
              # Domain for gateway's DNS entry
              # The default port is 4317.
              # If you set up an external load balancer between the telemetry gateway and collector agents, and you configured TLS passthrough to forward data to the telemetry gateway on port 4317, use port 443 instead.
              endpoint: [domain]:4317
              tls:
                ca_file: /etc/otel-certs/ca.crt
        enabled: true
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        # Include this section if you created a custom root CA cert secret
        extraVolumes:
          - name: root-ca
            secret:
              defaultMode: 420
              # Add your root CA cert secret name
              secretName: telemetry-root-secret
      telemetryCollectorCustomization:
        # Domain for gateway's DNS entry
        serverName: [domain]
      
    3. Upgrade your installation by using your updated values file. Include the telemetry gateway's address in a --set flag.
      helm upgrade gloo-platform gloo-platform/gloo-platform \
         --namespace gloo-mesh \
         --values gloo-gateway-single.yaml \
         --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
      
  11. Verify that all telemetry pods in the gloo-mesh namespace are up and running. Because the agents are deployed as a daemon set, the number of telemetry collector agent pods equals the number of worker nodes in your cluster.

    kubectl get pods -n gloo-mesh 
    

    Example output:

    ...
    gloo-telemetry-collector-agent-7rzfb      1/1     Running   0          22s
    gloo-telemetry-collector-agent-dgs87      1/1     Running   0          22s
    gloo-telemetry-collector-agent-nbmr6      1/1     Running   0          22s
    gloo-telemetry-gateway-6547f479d5-rtj7s   1/1     Running   0          107s
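
    If any collector agent pods are not ready, you can optionally check the agent logs for TLS or connection errors when the agents try to reach the telemetry gateway. The following command picks one pod from the daemon set.

    kubectl logs daemonset/gloo-telemetry-collector-agent -n gloo-mesh --tail=100 | grep -iE "error|tls"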
    
  12. Verify metrics collection.

Verify metrics collection

  1. Generate traffic for the apps in your cluster. For example, if you set up the Bookinfo app as part of the getting started guide, you can open the product page app in your browser to generate traffic.

    1. Open a port on your local machine for the product page app.
      kubectl port-forward deploy/productpage-v1 -n bookinfo 9080
      
    2. Open the product page in your browser.
      open http://localhost:9080/productpage?u=normal
      
    3. Refresh the page a couple of times to generate traffic.
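
    Alternatively, you can generate traffic from the command line. This sketch assumes that the port-forward from the first sub-step is still running.

      for i in $(seq 1 20); do curl -s -o /dev/null "http://localhost:9080/productpage?u=normal"; done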
  2. Open the Gloo UI.

    meshctl dashboard
    
  3. Verify that metrics were populated for your workloads by looking at the UI Graph.

  4. Optional: Review the raw metrics by opening the Prometheus UI and entering istio_requests_total in the expression search bar.
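
    For example, you might port-forward the built-in Prometheus server and open its UI locally. The deployment name prometheus-server is an assumption and might differ in your installation.

    kubectl port-forward deploy/prometheus-server -n gloo-mesh 9090
    open http://localhost:9090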

Next