Set up the pipeline
Set up the Gloo OpenTelemetry (OTel) pipeline in a new or an existing Gloo Network installation.
Before you begin
- Review the default pipelines that are available in the Gloo OTel pipeline and decide on the pipelines that you want to enable. Pipelines can be enabled in either the Gloo telemetry collector agent or the telemetry gateway, as shown in the following tables. By default, the Gloo OTel pipeline is set up with the `metrics/ui` and `metrics/prometheus` pipelines. For a sketch of how individual pipelines can be toggled through Helm values, see the example after this list.
  - Gloo telemetry collector agent pipelines:
| Pipeline | Description |
| -------- | ----------- |
| `metrics/ui` | The `metrics/ui` pipeline collects the metrics that are required for the Gloo UI graph. This pipeline is enabled by default. To view the metrics that are included with this pipeline, see View default metrics. |
| `metrics/cilium` | This pipeline collects extra Cilium metrics to feed the Cilium dashboard in Grafana. |
| `logs/istio_access_logs` | This pipeline collects Istio access logs from Istio-enabled workloads. For more information, see access logs for the ingress gateway and workloads in a service mesh. |
| `logs/cilium_flows` | This pipeline collects network flows for Cilium-enabled cluster workloads so that you can use the `meshctl hubble observe` command. For more information, see Network flow logs. |
| `traces/istio` | A pre-defined pipeline that collects traces to observe and monitor requests, and pushes them to the built-in Jaeger platform or a custom Jaeger instance. For more information, see request tracing for the ingress gateway or workloads in a service mesh. |

  - Gloo telemetry gateway pipelines:
| Pipeline | Description |
| -------- | ----------- |
| `logs/clickhouse` | This pipeline forwards the Istio access logs that the collector agents receive to ClickHouse. |
| `metrics/prometheus` | This pipeline collects metrics from various sources, such as the Gloo management server, Gloo Platform, Istio, Cilium, and the Gloo OTel pipeline, and makes this data available to the built-in Prometheus server. This pipeline is enabled by default. |
| `traces/jaeger` | This pipeline receives traces from the Gloo telemetry collector agents and forwards them to the built-in or custom Jaeger tracing platform. |
- Choose how to secure the communication between the telemetry gateway in the management cluster and the collector agents in the workload clusters.
  - Testing or demo setups: To use the default certificate that is automatically created for the telemetry gateway, see Set up OTel with the default certificate.
  - POC or production setups: To bring your own certificate to secure the connection, see Set up OTel with a custom certificate.
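If you prefer to flip individual pipelines on or off from the command line instead of editing your values file, the same Helm keys that appear later in this guide can be passed with `--set` flags. The following sketch assumes a Helm release named `gloo-platform` in the `gloo-mesh` namespace (both are used in the install commands below) and uses the `logs/cilium_flows` pipeline from the table above as the example.

```sh
# Sketch: toggle a default pipeline with --set flags instead of editing the values file.
# Release name, namespace, and chart reference match the helm commands later in this guide.
helm upgrade gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --reuse-values \
  --set "telemetryCollectorCustomization.pipelines.logs/cilium_flows.enabled=true"
```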
Set up OTel with the default certificate
Enable the OTel telemetry pipeline by using the default certificate that the telemetry gateway is automatically created with.
Installing with Argo CD on a GKE Dataplane V2 cluster? Add the following exclusions to your `argocd-cm` ConfigMap to ensure that the `CiliumIdentity` resources for the OTel pipeline are not managed by Argo CD.
```yaml
resource.exclusions: |
  - apiGroups:
    - cilium.io
    kinds:
    - CiliumIdentity
    clusters:
    - "*"
```
- Enable the Gloo telemetry gateway. In multicluster setups, enable the telemetry gateway in your management cluster.
  - Get your current installation Helm values, and save them in a file. Note that if you migrated from the legacy charts, your release might have a different name.

```sh
helm get values gloo-platform -n gloo-mesh > gloo-gateway-single.yaml
open gloo-gateway-single.yaml
```
  - Add or update the following sections in your Helm values file.

```yaml
legacyMetricsPipeline:
  enabled: false
telemetryGateway:
  enabled: true
  resources:
    limits:
      cpu: 600m
      memory: 2Gi
    requests:
      cpu: 300m
      memory: 1Gi
telemetryCollector:
  config:
    exporters:
      otlp:
        endpoint: gloo-telemetry-gateway.gloo-mesh:4317
  enabled: true
  resources:
    limits:
      cpu: 2
      memory: 2Gi
    requests:
      cpu: 500m
      memory: 1Gi
```
  - Optional: Enable additional default pipelines. The following example shows how to enable the `traces/istio` and `traces/jaeger` pipelines in the Gloo telemetry gateway and collector agents.

```yaml
telemetryCollectorCustomization:
  pipelines:
    traces/istio:
      enabled: true
telemetryGatewayCustomization:
  pipelines:
    traces/jaeger:
      enabled: true
```
  - Upgrade your installation by using your updated values file.

```sh
helm upgrade gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --values gloo-gateway-single.yaml
```
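Optionally, you can wait for the rollout to finish before you start verifying. This is a minimal sketch; the Deployment and DaemonSet names are assumed from the pod names shown in the verification output below, so confirm them in your cluster if the commands report that nothing was found.

```sh
# Optional: block until the telemetry components finish rolling out.
# Resource names assumed from the gloo-telemetry-* pod names in the next step.
kubectl rollout status deployment/gloo-telemetry-gateway -n gloo-mesh --timeout=120s
kubectl rollout status daemonset/gloo-telemetry-collector-agent -n gloo-mesh --timeout=120s
```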
- Verify that all pods in the `gloo-mesh` namespace are up and running, and that you see a `gloo-telemetry-gateway*` and one or more `gloo-telemetry-collector-agent*` pods. Because the agents are deployed as a daemon set, the number of telemetry collector agent pods equals the number of worker nodes in your cluster.

```sh
kubectl get pods -n gloo-mesh
```
Example output:
```
...
gloo-telemetry-collector-agent-7rzfb      1/1   Running   0   22s
gloo-telemetry-collector-agent-dgs87      1/1   Running   0   22s
gloo-telemetry-collector-agent-nbmr6      1/1   Running   0   22s
gloo-telemetry-gateway-6547f479d5-rtj7s   1/1   Running   0   107s
```
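As a quick way to confirm that the daemon set is fully scheduled, you can compare the number of nodes with the number of collector agent pods; on most clusters the counts should match. This sketch relies only on the pod name prefix shown above.

```sh
# The two counts are expected to be equal: one collector agent per worker node.
kubectl get nodes --no-headers | wc -l
kubectl get pods -n gloo-mesh --no-headers | grep -c '^gloo-telemetry-collector-agent'
```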
- Verify that the default certificate secret for the telemetry gateway is created.

```sh
kubectl get secrets -n gloo-mesh
```
Example output:
```
NAME                                TYPE                DATA   AGE
dashboard                           Opaque              0      3d20h
gloo-telemetry-gateway-tls-secret   kubernetes.io/tls   3      3d20h
...
```
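If you want to see what the default certificate contains, you can decode it from the secret. This is an optional sketch that uses standard `openssl` tooling; nothing in it is required for the setup.

```sh
# Optional: print the subject and expiry of the default telemetry gateway certificate.
kubectl get secret gloo-telemetry-gateway-tls-secret -n gloo-mesh \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -subject -enddate
```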
Set up OTel with a custom certificate
Enable the OTel telemetry pipeline by using a custom certificate to secure the connection between the telemetry gateway and collector agents.
Using a custom certificate for the telemetry gateway is considered an advanced use case. This guide assumes that you set up Gloo Network in a multicluster setup.
- Decide on the root CA that you want to use to sign the certificate for the telemetry gateway. The recommended approach is to derive the telemetry gateway certificate from the same root CA that you used to sign the server and client TLS certificates for your relay connection. However, you can also use a custom root CA for your telemetry gateway certificate.
- Choose the domain name that you want to use for your telemetry gateway. In the following steps, the example domain `gloo-telemetry-gateway.apps.cluster1.mydomain.net` is used.
- Use your preferred certificate issuer to create a server certificate and key for the telemetry gateway's domain, and store that information in a secret named `gloo-telemetry-gateway-tls-secret` in the `gloo-mesh` namespace. You might follow steps similar to the management server certificate generation in the Gloo Mesh Enterprise documentation to generate your telemetry gateway certificate. For example, you might use the following YAML file with a `cert-manager` instance to create the certificate and a key for the `gloo-telemetry-gateway.apps.cluster1.mydomain.net` domain in a Vault instance. This example assumes that the root CA certificate and key are stored and managed in Vault so that Vault can derive the telemetry gateway certificate from the same root. After the telemetry gateway certificate and key are created, the information is stored in the `gloo-telemetry-gateway-tls-secret` secret in the `gloo-mesh` namespace. This file is provided only as an example; your certificate and key generation might be different, depending on your certificate setup.

```yaml
kind: Certificate
apiVersion: cert-manager.io/v1
metadata:
  name: gloo-telemetry-gateway
  namespace: gloo-mesh
spec:
  secretName: gloo-telemetry-gateway-tls-secret
  duration: 8760h  # 365 days
  renewBefore: 360h  # 15 days
  # Issuer for certs
  issuerRef:
    kind: ClusterIssuer
    name: vault-issuer-gloo
  commonName: gloo-telemetry-gateway
  dnsNames:
    # Domain for gateway's DNS entry
    - gloo-telemetry-gateway.apps.cluster1.mydomain.net
  usages:
    - server auth
    - client auth
    - digital signature
    - key encipherment
  privateKey:
    algorithm: "RSA"
    size: 2048
```
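After you apply a Certificate like this, it can be useful to confirm that cert-manager has issued it before you continue. A minimal sketch, assuming the resource name and namespace from the example above:

```sh
# Optional: confirm that cert-manager reports the Certificate as Ready.
kubectl get certificate gloo-telemetry-gateway -n gloo-mesh
kubectl describe certificate gloo-telemetry-gateway -n gloo-mesh | grep -A5 'Conditions:'
```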
- Verify that the `gloo-telemetry-gateway-tls-secret` secret is created. This secret name is referenced by default in the `telemetryGateway.extraVolumes` field of your Helm values file, which ensures that the telemetry gateway can access and use the certificate information.

```sh
kubectl get secret gloo-telemetry-gateway-tls-secret -n gloo-mesh -o yaml
```
Example output:
```yaml
apiVersion: v1
data:
  ca.crt: [ca.crt content]
  tls.crt: [tls.crt content]
  tls.key: [tls.key content]
kind: Secret
metadata:
  annotations:
    cert-manager.io/alt-names: gloo-telemetry-gateway.apps.cluster1.mydomain.net
    cert-manager.io/certificate-name: gloo-telemetry-gateway
    cert-manager.io/common-name: gloo-telemetry-gateway
    cert-manager.io/ip-sans: ""
    cert-manager.io/issuer-group: ""
    cert-manager.io/issuer-kind: ClusterIssuer
    cert-manager.io/issuer-name: vault-issuer-gloo
    cert-manager.io/uri-sans: ""
  creationTimestamp: "2023-02-17T00:57:39Z"
  labels:
    controller.cert-manager.io/fao: "true"
  name: gloo-telemetry-gateway-tls-secret
  namespace: gloo-mesh
  resourceVersion: "11625264"
  uid: 31c794da-2359-43e6-ae02-6575968a0814
type: kubernetes.io/tls
```
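Optionally, you can also confirm that the certificate's subject alternative name matches the telemetry gateway domain that the collector agents will connect to. A small sketch using standard `openssl` tooling (the `-ext` flag requires OpenSSL 1.1.1 or later):

```sh
# Optional: the SAN should list your telemetry gateway domain,
# for example gloo-telemetry-gateway.apps.cluster1.mydomain.net.
kubectl get secret gloo-telemetry-gateway-tls-secret -n gloo-mesh \
  -o jsonpath='{.data.tls\.crt}' | base64 -d | openssl x509 -noout -ext subjectAltName
```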
- Enable the Gloo telemetry gateway. In multicluster setups, enable the telemetry gateway in your management cluster.
  - Get your current installation Helm values, and save them in a file. Note that if you migrated from the legacy charts, your release might have a different name.

```sh
helm get values gloo-platform -n gloo-mesh > gloo-gateway-single.yaml
open gloo-gateway-single.yaml
```
  - Add or update the following sections in your Helm values file.

```yaml
legacyMetricsPipeline:
  enabled: false
telemetryGateway:
  enabled: true
  resources:
    limits:
      cpu: 600m
      memory: 2Gi
    requests:
      cpu: 300m
      memory: 1Gi
telemetryGatewayCustomization:
  disableCertGeneration: true
```
  - Optional: Enable additional default pipelines. The following example shows how to enable the `traces/istio` and `traces/jaeger` pipelines in the Gloo telemetry gateway and collector agents.

```yaml
telemetryCollectorCustomization:
  pipelines:
    traces/istio:
      enabled: true
telemetryGatewayCustomization:
  pipelines:
    traces/jaeger:
      enabled: true
```
  - Upgrade your installation by using your updated values file.

```sh
helm upgrade gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --values gloo-gateway-single.yaml
```
- Verify that all pods in the `gloo-mesh` namespace are up and running, and that you see a `gloo-telemetry-gateway*` pod.

```sh
kubectl get deployments -n gloo-mesh
```
- Get the external IP address of the load balancer service that was created for the Gloo telemetry gateway.

```sh
export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export TELEMETRY_GATEWAY_PORT=$(kubectl -n gloo-mesh get service gloo-telemetry-gateway -o jsonpath='{.spec.ports[?(@.name=="otlp")].port}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
echo $TELEMETRY_GATEWAY_ADDRESS
```
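On some cloud providers the load balancer is exposed as a DNS name rather than an IP address, so the `ip` field in the service status is empty. In that case, a variation like the following sketch can be used:

```sh
# If TELEMETRY_GATEWAY_IP is empty, the load balancer probably exposes a hostname instead.
export TELEMETRY_GATEWAY_IP=$(kubectl get svc -n gloo-mesh gloo-telemetry-gateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
export TELEMETRY_GATEWAY_ADDRESS=${TELEMETRY_GATEWAY_IP}:${TELEMETRY_GATEWAY_PORT}
echo $TELEMETRY_GATEWAY_ADDRESS
```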
- Use your cloud or DNS provider to create a DNS entry in your domain for the telemetry gateway's IP address.
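Once the record is created, you can confirm that the domain resolves to the load balancer address before you point the collector agents at it. A quick sketch using the example domain from this guide:

```sh
# The output should include the telemetry gateway's load balancer IP address (or hostname).
dig +short gloo-telemetry-gateway.apps.cluster1.mydomain.net
```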
- Prepare the Gloo telemetry collector agent installation. To successfully connect from a collector agent in the workload cluster to the telemetry gateway in the management cluster, the root CA certificate must be stored in a Kubernetes secret on the workload cluster. By default, the collector agents are configured to look up the root CA certificate from the `relay-root-tls-secret` Kubernetes secret in the `gloo-mesh` namespace. This secret might already exist in your workload cluster if you implemented Option 2 or Option 3 of the relay certificate setup options. Review the following options to decide whether you can use this Kubernetes secret or need to create a new one.
  If you implemented Option 2 or Option 3 of the relay certificate setup options and you used the same root CA certificate to create the certificate for the telemetry gateway, you can use the `relay-root-tls-secret` Kubernetes secret for the collector agents.
  - Check whether the `relay-root-tls-secret` secret exists. In multicluster setups, check for this secret in the workload clusters.

```sh
kubectl get secret relay-root-tls-secret -n gloo-mesh
```
  - If the secret exists, no further action is required. If the secret does not exist in your multicluster setup, copy the root CA certificate from the management cluster to each workload cluster.

```sh
kubectl get secret relay-root-tls-secret -n gloo-mesh --context $MGMT_CONTEXT -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
kubectl create secret generic relay-root-tls-secret -n gloo-mesh --context $REMOTE_CONTEXT --from-file ca.crt=ca.crt
```
  If you implemented Option 1 or Option 4 of the relay setup options, or if you decided to use a different root CA certificate for the telemetry gateway certificate, store the root CA certificate in a Kubernetes secret on each workload cluster.
  - Store the root CA certificate that you want to use for the OTel pipeline in a secret. In multicluster setups, create this secret in each workload cluster.

```sh
kubectl create secret generic telemetry-root-secret -n gloo-mesh --from-file ca.crt=<root_ca_cert>.crt
```
  - Verify that the secret is created.

```sh
kubectl get secret telemetry-root-secret -n gloo-mesh
```
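As an optional sanity check in multicluster setups, you can verify that the telemetry gateway certificate in the management cluster actually chains to the root CA that the collector agents trust. This sketch reuses the `$MGMT_CONTEXT` and `$REMOTE_CONTEXT` variables from the commands above and assumes the gateway certificate is signed directly by that root; if an intermediate CA is involved, include it in the CA file, and swap in `relay-root-tls-secret` if you reused the relay root CA.

```sh
# Optional: confirm the gateway certificate chains to the CA stored on the workload cluster.
kubectl get secret telemetry-root-secret -n gloo-mesh --context $REMOTE_CONTEXT \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > /tmp/otel-ca.crt
kubectl get secret gloo-telemetry-gateway-tls-secret -n gloo-mesh --context $MGMT_CONTEXT \
  -o jsonpath='{.data.tls\.crt}' | base64 -d > /tmp/otel-gateway.crt
openssl verify -CAfile /tmp/otel-ca.crt /tmp/otel-gateway.crt
```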
- Enable the Gloo telemetry collector agents. In multicluster setups, enable the collector agents in each workload cluster.
  - Get your updated installation Helm values again, and save them in a file. Note that if you migrated from the legacy charts, your release might have a different name.

```sh
helm get values gloo-platform -n gloo-mesh > gloo-gateway-single.yaml
open gloo-gateway-single.yaml
```
  - Add or update the following sections in your Helm values file. Replace the `serverName` value with the domain for your telemetry gateway's DNS entry. If you created a custom root CA certificate secret named `telemetry-root-secret` in the previous step, include that secret name in the `extraVolumes` section. If you decided to use the root CA certificate in the `relay-root-tls-secret` Kubernetes secret, you can remove the `secretName: telemetry-root-secret` line from the Helm values file.

```yaml
telemetryCollector:
  config:
    exporters:
      otlp:
        # Domain for gateway's DNS entry
        endpoint: [domain]:443
        tls:
          ca_file: /etc/otel-certs/ca.crt
  enabled: true
  resources:
    limits:
      cpu: 2
      memory: 2Gi
    requests:
      cpu: 500m
      memory: 1Gi
  # Include this section if you created a custom root CA cert secret
  extraVolumes:
    - name: root-ca
      secret:
        defaultMode: 420
        # Add your root CA cert secret name
        secretName: telemetry-root-secret
telemetryCollectorCustomization:
  # Domain for gateway's DNS entry
  serverName: [domain]
```
  - Upgrade your installation by using your updated values file. Include the telemetry gateway's address in a `--set` flag.

```sh
helm upgrade gloo-platform gloo-platform/gloo-platform \
  --namespace gloo-mesh \
  --values gloo-gateway-single.yaml \
  --set telemetryCollector.config.exporters.otlp.endpoint=$TELEMETRY_GATEWAY_ADDRESS
```
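After the upgrade, it can be worth checking a collector agent's logs for TLS or export errors before relying on the data. A minimal sketch; the DaemonSet name is assumed from the pod names shown in the next step.

```sh
# Look for certificate or export errors in one of the collector agents.
# DaemonSet name assumed from the gloo-telemetry-collector-agent-* pod names.
kubectl logs -n gloo-mesh daemonset/gloo-telemetry-collector-agent --tail=100 \
  | grep -iE 'error|certificate' || echo "No obvious errors found"
```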
- Verify that all telemetry pods in the `gloo-mesh` namespace are up and running. Because the agents are deployed as a daemon set, the number of telemetry collector agent pods equals the number of worker nodes in your cluster.

```sh
kubectl get pods -n gloo-mesh
```
Example output:
```
...
gloo-telemetry-collector-agent-7rzfb      1/1   Running   0   22s
gloo-telemetry-collector-agent-dgs87      1/1   Running   0   22s
gloo-telemetry-collector-agent-nbmr6      1/1   Running   0   22s
gloo-telemetry-gateway-6547f479d5-rtj7s   1/1   Running   0   107s
```
Verify metrics collection
- Generate traffic for the apps in your cluster. For example, if you set up the Bookinfo app as part of the getting started guide, you can open the product page app in your browser to generate traffic.
  - Open a port on your local machine for the product page app.

```sh
kubectl port-forward deploy/productpage-v1 -n bookinfo 9080
```
  - Open the product page in your browser.

```sh
open http://localhost:9080/productpage?u=normal
```
  - Refresh the page a couple of times to generate traffic.
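If you prefer the command line over refreshing the browser, you can send a burst of requests with `curl` instead; this assumes the port-forward from the first sub-step is still running.

```sh
# Send 20 requests to the product page through the port-forward.
for i in $(seq 1 20); do
  curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:9080/productpage?u=normal"
done
```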
- Open the Gloo UI.

```sh
meshctl dashboard
```
- Verify that metrics were populated for your workloads by looking at the UI Graph.
- You can optionally review the raw metrics by opening the Prometheus UI and entering `istio_requests_total` in the expression search bar.
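As an alternative to the Prometheus UI, you can query the built-in Prometheus server directly over its HTTP API. This is a hedged sketch: the service name `prometheus-server` and port `80` are assumed defaults for the bundled Prometheus chart; check `kubectl get svc -n gloo-mesh` if they differ in your installation.

```sh
# Port-forward the built-in Prometheus server and query the raw Istio request metric.
# Service name and port are assumed defaults; adjust to your installation.
kubectl port-forward -n gloo-mesh svc/prometheus-server 9091:80 &
sleep 2
curl -s 'http://localhost:9091/api/v1/query?query=istio_requests_total' | head -c 500
```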