Telemetry pipeline certificates

By default, the Gloo telemetry pipeline is set up with self-signed certificates to secure the connection between Gloo telemetry collector agents and the Gloo telemetry gateway. You can choose to deploy the Gloo telemetry collector agents and gateway with your own certificates. For example, you might want to derive the certificate from the same root CA that you use for the Gloo management server and agent relay connection.

  1. Decide on the root CA that you want to use to sign the certificate for the telemetry gateway. The recommended approach is to derive the telemetry gateway certificate from the same root CA that you used to sign the server and client TLS certificates for your relay connection. However, you can also use a custom root CA for your telemetry gateway certificate.

  2. Choose the domain name that you want to use for your telemetry gateway. In the following steps, the example domain gloo-telemetry-gateway.apps.cluster1.mydomain.net is used.

  3. Use your preferred certificate issuer to create a server certificate and key for the telemetry gateway's domain, and store that information in a secret named gloo-telemetry-gateway-tls-secret in the gloo-mesh namespace. You might follow steps similar to the management server certificate generation to generate your telemetry gateway certificate.

    For example, you might use the following YAML file with a cert-manager instance to create the certificate and a key for the gloo-telemetry-gateway.apps.cluster1.mydomain.net domain in a Vault instance. This example assumes that the root CA certificate and key are stored and managed in Vault so that Vault can derive the telemetry gateway certificate from the same root. After the telemetry gateway certificate and key are created, the information is stored in the gloo-telemetry-gateway-tls-secret secret in the gloo-mesh namespace. This file is provided only as an example; your certificate and key generation might be different, depending on your certificate setup.

    kind: Certificate
    apiVersion: cert-manager.io/v1
    metadata:
      name: gloo-telemetry-gateway
      namespace: gloo-mesh
    spec:
      secretName: gloo-telemetry-gateway-tls-secret
      duration: 8760h # 365 days
      renewBefore: 360h # 15 days
      # Issuer for certs
      issuerRef:
        kind: ClusterIssuer
        name: vault-issuer-gloo
      commonName: gloo-telemetry-gateway
      dnsNames:
        # Domain for gateway's DNS entry
        - gloo-telemetry-gateway.apps.cluster1.mydomain.net
      usages:
        - server auth
        - client auth
        - digital signature
        - key encipherment
      privateKey:
        algorithm: "RSA"
        size: 2048
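
    After the certificate is issued, you can optionally decode it and confirm that the subject alternative name (SAN) matches your gateway domain. The following helper is only a sketch, not part of Gloo; it assumes `openssl` 1.1.1 or later for the `-ext` flag.

    ```shell
    # check_gateway_cert: verify that a PEM certificate lists the expected
    # domain in its subjectAltName extension. Hypothetical helper, not part
    # of Gloo; requires openssl 1.1.1+ for the -ext flag.
    check_gateway_cert() {
      cert_file=$1
      expected_domain=$2
      openssl x509 -in "$cert_file" -noout -ext subjectAltName \
        | grep -q "$expected_domain"
    }

    # Example usage: extract the issued certificate from the secret first.
    # kubectl get secret gloo-telemetry-gateway-tls-secret -n gloo-mesh \
    #   --context $MGMT_CONTEXT -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
    # check_gateway_cert tls.crt gloo-telemetry-gateway.apps.cluster1.mydomain.net \
    #   && echo "SAN matches"
    ```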
    
  4. Verify that the gloo-telemetry-gateway-tls-secret secret is created. This secret name is referenced by default in the telemetryGateway.extraVolumes field of your Helm values file, which ensures that the telemetry gateway can access and use the certificate information.

    kubectl get secret gloo-telemetry-gateway-tls-secret -n gloo-mesh -o yaml --context $MGMT_CONTEXT
    

    Example output:

    apiVersion: v1
    data:
      ca.crt: [ca.crt content]
      tls.crt: [tls.crt content]
      tls.key: [tls.key content]
    kind: Secret
    metadata:
      annotations:
        cert-manager.io/alt-names: gloo-telemetry-gateway.apps.cluster1.mydomain.net
        cert-manager.io/certificate-name: gloo-telemetry-gateway
        cert-manager.io/common-name: gloo-telemetry-gateway
        cert-manager.io/ip-sans: ""
        cert-manager.io/issuer-group: ""
        cert-manager.io/issuer-kind: ClusterIssuer
        cert-manager.io/issuer-name: vault-issuer-gloo
        cert-manager.io/uri-sans: ""
      creationTimestamp: "2023-02-17T00:57:39Z"
      labels:
        controller.cert-manager.io/fao: "true"
      name: gloo-telemetry-gateway-tls-secret
      namespace: gloo-mesh
      resourceVersion: "11625264"
      uid: 31c794da-2359-43e6-ae02-6575968a0814
    type: kubernetes.io/tls
    
  5. Enable the Gloo telemetry gateway in your management cluster.

    1. Get your current installation Helm values, and save them in a file. Note that if you migrated from the legacy charts, your release might have a different name.
      helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $MGMT_CONTEXT > mgmt-server.yaml
      open mgmt-server.yaml
      
    2. Add or update the following sections in your Helm values file.
      legacyMetricsPipeline:
        enabled: false
      telemetryGateway:
        enabled: true
        resources:
          limits:
            cpu: 600m
            memory: 2Gi
          requests:
            cpu: 300m
            memory: 1Gi
      telemetryGatewayCustomization:
        disableCertGeneration: true
      
    3. Optional: Enable additional default pipelines. The following example shows how to enable the traces/jaeger pipeline in the Gloo telemetry gateway.
      telemetryGatewayCustomization:
        pipelines:
          traces/jaeger:
            enabled: true
      
    4. Upgrade your installation by using your updated values file.
      helm upgrade gloo-platform gloo-platform/gloo-platform \
         --namespace gloo-mesh \
         --kube-context $MGMT_CONTEXT \
         --values mgmt-server.yaml
      
  6. Verify that all pods in the gloo-mesh namespace are up and running, and that you see a gloo-telemetry-gateway* pod.

    kubectl get deployments -n gloo-mesh --context $MGMT_CONTEXT
    
  7. Get the external IP address of the load balancer service that was created for the Gloo telemetry gateway.

    export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway --context $MGMT_CONTEXT -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
    export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
    echo $TELEMETRY_GATEWAY_ADDRESS
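
    Note that some cloud providers, such as AWS, assign a hostname instead of an IP address to LoadBalancer services, in which case the `ip` jsonpath above returns an empty string. The following fallback is a minimal sketch under that assumption; `gateway_address` is a hypothetical helper, not a Gloo command.

    ```shell
    # gateway_address: prefer the load balancer IP, but fall back to the
    # hostname that some clouds (for example AWS ELBs) report instead.
    # Hypothetical helper, not part of Gloo.
    gateway_address() {
      ip=$1; hostname=$2; port=$3
      if [ -n "$ip" ]; then
        echo "${ip}:${port}"
      else
        echo "${hostname}:${port}"
      fi
    }

    # Example usage:
    # TELEMETRY_GATEWAY_HOSTNAME=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway \
    #   --context $MGMT_CONTEXT -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
    # export TELEMETRY_GATEWAY_ADDRESS=$(gateway_address "$TELEMETRY_GATEWAY_IP" \
    #   "$TELEMETRY_GATEWAY_HOSTNAME" "$TELEMETRY_GATEWAY_PORT")
    ```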
    
  8. Use your cloud or DNS provider to create a DNS entry in your domain for the telemetry gateway's IP address.
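
    Before you point collector agents at the domain, you can confirm that the DNS entry has propagated by resolving it from a workstation. The helper below is a sketch; `getent` ships with glibc-based Linux systems, so adjust for other platforms.

    ```shell
    # dns_entry_ready: succeed if the given domain currently resolves.
    # Hypothetical helper; getent is available on most Linux distributions.
    dns_entry_ready() {
      getent hosts "$1" > /dev/null
    }

    # Example usage:
    # dns_entry_ready gloo-telemetry-gateway.apps.cluster1.mydomain.net \
    #   && echo "DNS entry is live"
    ```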

  9. Prepare the Gloo telemetry collector agent installation. To successfully connect from a collector agent in the workload cluster to the telemetry gateway in the management cluster, the root CA certificate must be stored in a Kubernetes secret on the workload cluster. By default, the collector agents are configured to look up the root CA certificate from the relay-root-tls-secret Kubernetes secret in the gloo-mesh namespace. This secret might already exist in your workload cluster if you implemented Option 2 or Option 3 of the relay certificate setup options. Review the following options to decide if you can use this Kubernetes secret or need to create a new one.

    If you implemented Option 2 or Option 3 of the relay certificate setup options and you used the same root CA certificate to create the certificate for the telemetry gateway, you can use the relay-root-tls-secret Kubernetes secret for the collector agents.

    1. Check whether the relay-root-tls-secret secret exists on workload clusters.
      kubectl get secret relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT
      
    2. If the secret exists, no further action is required. If the secret does not exist, copy the root CA certificate from the management cluster to each workload cluster.
      kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
      kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file ca.crt=ca.crt
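
      To double-check the copy, you can compare certificate fingerprints between the two clusters. The helper below is a sketch that compares two local PEM files, so extract each cluster's ca.crt first as shown in the comments.

      ```shell
      # same_ca: verify that two PEM certificates have an identical SHA-256
      # fingerprint, that is, they are the same root CA. Hypothetical helper.
      same_ca() {
        f1=$(openssl x509 -in "$1" -noout -fingerprint -sha256)
        f2=$(openssl x509 -in "$2" -noout -fingerprint -sha256)
        [ "$f1" = "$f2" ]
      }

      # Example usage: extract ca.crt from each cluster, then compare.
      # kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT \
      #   -o jsonpath='{.data.ca\.crt}' | base64 -d > mgmt-ca.crt
      # kubectl get secret relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT \
      #   -o jsonpath='{.data.ca\.crt}' | base64 -d > remote-ca.crt
      # same_ca mgmt-ca.crt remote-ca.crt && echo "root CAs match"
      ```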
      

    If you implemented Option 1 or Option 4 of the relay setup options, or if you decided to use a different root CA certificate for the telemetry gateway certificate, store the root CA certificate in a Kubernetes secret on each workload cluster.

    1. Store the root CA certificate that you want to use for the OTel pipeline in a secret.

      kubectl create secret generic telemetry-root-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file ca.crt=<root_ca_cert>.crt
      
    2. Verify that the secret is created.

      kubectl get secret telemetry-root-secret -n gloo-mesh --context $REMOTE_CONTEXT
      

  10. Enable the Gloo telemetry collector agents in each workload cluster.

    1. Get the current Helm values of your workload cluster installation, and save them in a file. Note that if you migrated from the legacy charts, your release might have a different name.
      helm get values gloo-platform -n gloo-mesh -o yaml --kube-context $REMOTE_CONTEXT > agent.yaml
      open agent.yaml
      
    2. Add or update the following sections in your Helm values file. Replace the serverName value with the domain for your telemetry gateway's DNS entry. If you created a custom root CA certificate secret named telemetry-root-secret, include that secret name in the extraVolumes section. If you decided to use the root CA certificate in the relay-root-tls-secret Kubernetes secret, you can remove the secretName: telemetry-root-secret line from the Helm values file.
      telemetryCollector:
        config:
          exporters:
            otlp:
              # Domain for gateway's DNS entry
              # The default port is 4317.
              # If you set up an external load balancer between the telemetry gateway and collector agents, and you configured TLS passthrough to forward data to the telemetry gateway on port 4317, use port 443 instead.
              endpoint: [domain]:4317
              tls:
                ca_file: /etc/otel-certs/ca.crt
        enabled: true
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi
        extraVolumes:
        # Include this section if you created a custom root CA cert secret
        - name: root-ca  # customers modify this list entry for BYO SSL certs
          secret:
            # Add your root CA cert secret name
            secretName: telemetry-root-secret
            defaultMode: 420
        - name: telemetry-configmap
          configMap:
            name: gloo-telemetry-collector-config
            items:
              - key: relay
                path: relay.yaml
        - hostPath:
            path: /var/run/cilium
            type: DirectoryOrCreate
          name: cilium-run
        extraVolumeMounts:
          - name: root-ca  # customers modify this list entry for BYO SSL certs
            readOnly: true
            mountPath: /etc/otel-certs
          - name: telemetry-configmap
            mountPath: /conf
          - name: cilium-run
            mountPath: /var/run/cilium
      telemetryCollectorCustomization:
        # Domain for gateway's DNS entry
        serverName: [domain]
      
    3. Optional: Enable additional default pipelines. The following example shows how to enable the traces/istio pipeline in the Gloo telemetry collector agents.
      telemetryCollectorCustomization:
        pipelines:
          traces/istio:
            enabled: true
      
    4. OpenShift only: Elevate the permissions of the service accounts in the gloo-mesh namespace so that the telemetry collector agents can mount volumes on the hosts where they run. In Gloo Mesh Gateway version 2.4, a new cilium-run volume was added to the Gloo telemetry pipeline configuration to collect Cilium flow logs. For more information about this change, see the 2.4 release notes.
      oc adm policy add-scc-to-group hostmount-anyuid system:serviceaccounts:gloo-mesh
      
    5. Upgrade each workload cluster by using your updated values file. Include the telemetry gateway's address in a --set flag.
      helm upgrade gloo-platform gloo-platform/gloo-platform \
         --namespace gloo-mesh \
         --kube-context $REMOTE_CONTEXT \
         --values agent.yaml \
         --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
      
  11. Verify that the Gloo telemetry collector agents are deployed in your workload clusters. Because the agents are deployed as a daemon set, the number of telemetry collector agent pods equals the number of worker nodes in your cluster.

    kubectl get pods -n gloo-mesh --context $REMOTE_CONTEXT
    

    Example output:

    NAME                                       READY   STATUS    RESTARTS      AGE
    gloo-mesh-agent-d89944685-mmgtt            1/1     Running   0             83m
    gloo-telemetry-collector-agent-7rzfb       1/1     Running   0          107s
    gloo-telemetry-collector-agent-dgs87       1/1     Running   0          107s
    gloo-telemetry-collector-agent-nbmr6       1/1     Running   0          107s
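
    If the collector pods are running but no data arrives at the gateway, scanning their logs for TLS failures is a quick first check. The filter below is only a sketch; the daemon set name in the usage comment is an assumption based on the pod names in the example output above.

    ```shell
    # scan_collector_logs: surface log lines that typically indicate TLS
    # problems between the collector agents and the telemetry gateway.
    # Hypothetical helper; pipe `kubectl logs` output into it.
    scan_collector_logs() {
      grep -iE 'x509|certificate|tls handshake' || true
    }

    # Example usage (daemon set name assumed from the example output):
    # kubectl logs ds/gloo-telemetry-collector-agent -n gloo-mesh \
    #   --context $REMOTE_CONTEXT --tail=200 | scan_collector_logs
    ```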