The Gloo telemetry pipeline integrates with Jaeger as the tracing platform. Jaeger is an open source tool that helps you follow the path of a request as it is forwarded between microservices. The chain of events and interactions is captured by the Gloo telemetry pipeline and visualized in the Jaeger UI that is embedded in the Gloo UI. You can use this data to troubleshoot issues in your microservices and identify bottlenecks. You can also forward the traces from the Gloo telemetry gateway to your own Jaeger tracing platform.

Before you begin

Follow the Get started guide to install Gloo Mesh and deploy the Bookinfo sample app.

Step 1: Enable tracing in Istio

Instrument your Istio workloads to collect traces by updating your Istio installation. The steps vary depending on whether you run Istio in sidecar or ambient mode.
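The helm commands in both modes reuse the $HELM_REPO and $ISTIO_IMAGE environment variables from your Istio installation. If they are no longer set in your shell, set them to match your environment before you continue. The values shown here are examples only.

    # Example values only; use the registry and chart version from your own Istio installation.
    export HELM_REPO=<your Istio Helm chart registry>
    export ISTIO_IMAGE=1.24.2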

Sidecar mode

  1. Get the current values for the istiod Helm release in your cluster.

      helm get values istiod -n istio-system -o yaml > istiod.yaml
      open istiod.yaml
      
  2. Make the following edits to enable tracing and set a sampling rate of 100% of requests. The traces are forwarded to the Gloo telemetry collector agents. For more information about the sampling rate, custom tag, and maximum path length settings, see the Istio tracing configuration docs. After you make the edit, save and close the file.

      ...
    meshConfig:
      # Enable tracing
      enableTracing: true
      # Specify tracing settings
      defaultConfig:
        tracing:
          sampling: 100
          zipkin:
            address: gloo-telemetry-collector.gloo-mesh.svc.cluster.local:9411
      
  3. Upgrade your Helm release with the updated values.

      helm upgrade istiod oci://${HELM_REPO}/istiod \
        -n istio-system \
        --version ${ISTIO_IMAGE} \
        -f istiod.yaml
      
  4. Verify that the istiod pods are successfully restarted. Note that it might take a few seconds for the pods to become available.

      kubectl get pods -n istio-system | grep istiod
      

    Example output:

      istiod-b84c55cff-tllfr   1/1     Running   0          58s
      
  5. Restart the Istio workloads that you want to collect traces for. For example, if you deployed the Bookinfo sample app as part of the Get started guide, you can restart the product page app with the following command.

      kubectl rollout restart deployment productpage-v1 -n bookinfo --context $REMOTE_CONTEXT 
      
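  6. Optional: Spot-check that istiod picked up the new tracing settings. The mesh configuration that istiod distributes to the sidecar proxies is stored in the istio configmap in the istio-system namespace, so the sampling rate and Zipkin address from your Helm values should show up there. The grep filter is only a convenience to narrow the output.

      kubectl get configmap istio -n istio-system -o yaml | grep -B 3 -A 2 "zipkin"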

Ambient mode

  1. Enable tracing for the istiod component.

    1. Get the current values for the istiod Helm release in your cluster.

        helm get values istiod -n istio-system -o yaml > istiod.yaml
        open istiod.yaml
        
    2. Make the following edits to enable tracing and set a sampling rate of 100% of requests. The traces are forwarded to the Gloo telemetry collector agents. For more information about the sampling rate, custom tag, and maximum path length settings, see the Istio tracing configuration docs. After you make the edit, save and close the file.

        ...
      meshConfig:
        # Enable tracing
        enableTracing: true
        # Specify tracing settings
        defaultConfig:
          tracing:
            sampling: 100
            zipkin:
              address: gloo-telemetry-collector.gloo-mesh.svc.cluster.local:9411
        
    3. Upgrade your Helm release with the updated values.

        helm upgrade istiod oci://${HELM_REPO}/istiod \
          -n istio-system \
          --version ${ISTIO_IMAGE} \
          -f istiod.yaml
        
    4. Verify that the istiod pods are successfully restarted. Note that it might take a few seconds for the pods to become available.

        kubectl get pods -n istio-system | grep istiod
        

      Example output:

        istiod-b84c55cff-tllfr   1/1     Running   0          58s
        
  2. Enable tracing for the ztunnel components.

    1. Get the current values for the ztunnel Helm release in your cluster.

        helm get values ztunnel -n istio-system -o yaml > ztunnel.yaml
        open ztunnel.yaml
        
    2. Update the ztunnel Helm values to point to the OTLP endpoint that Gloo uses for trace collection. After you make the edit, save and close the file.

        ...
      env:
        L7_ENABLED: true
      # Add the Gloo OTLP endpoint
      l7Telemetry:
        distributedTracing:
          otlpEndpoint: "http://gloo-telemetry-collector.gloo-mesh:4317"
        
    3. Upgrade your Helm release with the updated values.

        helm upgrade ztunnel oci://${HELM_REPO}/ztunnel -n istio-system --version ${ISTIO_IMAGE} -f ztunnel.yaml
        
    4. Verify that the ztunnel pods are successfully restarted. Note that it might take a few seconds for the pods to become available.

        kubectl get pods -n istio-system | grep ztunnel
        

      Example output:

        ztunnel-tvtzn                      1/1     Running   0             40s
        ztunnel-vtpjm                      1/1     Running   0             40s
        ztunnel-hllxg                      1/1     Running   0             40s
        
  3. Optional: If you use the Istio ingress gateway with the classic Istio networking API, such as by following the community Istio docs, you can enable tracing for the Istio ingress gateway. Note that in ambient mode, ztunnel traces can be generated only for requests that are routed by Envoy through the gateway.

    1. Get the current values for the ingress gateway Helm release in your cluster.

        helm get values istio-ingressgateway -n istio-ingress -o yaml > ingress-gateway.yaml
        open ingress-gateway.yaml
        
    2. Make the following edits to enable tracing and set a sampling rate of 100% of requests. The traces are forwarded to the Gloo telemetry collector agents. For more information about the sampling rate, custom tag, and maximum path length settings, see the Istio tracing configuration docs. After you make the edit, save and close the file.

        ...
      meshConfig:
        # Enable tracing
        enableTracing: true
        # Specify tracing settings
        defaultConfig:
          tracing:
            sampling: 100
            zipkin:
              address: gloo-telemetry-collector.gloo-mesh.svc.cluster.local:9411
        
    3. Upgrade your Helm release with the updated values.

        helm upgrade istio-ingressgateway istio/gateway \
          -n istio-ingress \
          --version ${ISTIO_IMAGE} \
          -f ingress-gateway.yaml
        
    4. Verify that the ingress gateway pods are successfully restarted and the load balancer service is assigned an external address.

        kubectl get pods,svc -n istio-ingress
        

      Example output:

        NAME                                    READY   STATUS    RESTARTS   AGE
        istio-ingressgateway-665d46686f-nhh52   1/1     Running   0          106s
        istio-ingressgateway-665d46686f-tlp5j   1/1     Running   0          2m1s
        NAME                        TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                       AGE
        istio-ingressgateway        LoadBalancer   10.96.252.49    <externalip>  15021:32378/TCP,80:30315/TCP,443:32186/TCP,31400:30313/TCP,15443:31632/TCP   2m2s
        

Step 2: Enable the Jaeger UI and add traces to the pipeline

Now that tracing is enabled for your Istio workloads, you can configure the Gloo telemetry pipeline to collect the traces and either provide them to the built-in Jaeger instance that is embedded into the Gloo UI or forward them to your own Jaeger tracing platform.

Use the built-in Jaeger

During your Gloo Mesh installation, you can enable Jaeger as the tracing platform for your Gloo environment and embed the Jaeger UI into the Gloo UI.
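The exact Helm values depend on your Gloo Mesh version, so confirm the key names against the Helm values reference for your release before you apply them. As an illustrative sketch only, the gloo-platform values that enable the built-in Jaeger and the Istio traces pipeline might look similar to the following, applied with a helm upgrade of your gloo-platform release.

    # Illustrative sketch only; verify the exact keys in the Helm values reference for your Gloo version.
    jaeger:
      enabled: true
    telemetryCollectorCustomization:
      pipelines:
        traces/istio:
          enabled: true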

Bring your own Jaeger

Instead of using the built-in Jaeger instance, you can configure the Gloo UI to point to your own Jaeger instance.
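At a high level, you export the traces from the Gloo telemetry gateway to your Jaeger deployment over OTLP and point the Gloo UI at your Jaeger instance. As an illustrative sketch only, the OpenTelemetry exporter fragment might look like the following, where jaeger-collector.observability.svc.cluster.local is a placeholder for your own Jaeger collector service. How you add this exporter to the telemetry gateway's traces pipeline, and how you point the Gloo UI at your Jaeger query endpoint, depends on your Gloo version and Helm values.

    # Illustrative OpenTelemetry exporter fragment; the Jaeger address is a placeholder for your own service.
    exporters:
      otlp/jaeger:
        # Jaeger collectors accept OTLP over gRPC on port 4317
        endpoint: jaeger-collector.observability.svc.cluster.local:4317
        tls:
          insecure: true   # replace with real TLS settings outside of test environments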

Step 3: Verify tracing

Open the Gloo UI and verify that traces are collected for your Istio workloads.

Sidecar mode

  1. Open the Gloo UI.
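
      meshctl dashboard --kubecontext $MGMT_CONTEXT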

  2. From the menu, select Observability > Tracing and verify that the Jaeger UI opens.

    Figure: Jaeger UI
  3. Send a few sample requests to your Istio workloads. Each request produces Istio traces that are sent to the Jaeger instance that you configured. For example, if you deployed the Bookinfo sample app from the Get started guide, use the following steps to produce traces.

    1. Port-forward the product page app.
        kubectl port-forward deployment/productpage-v1 -n bookinfo 9080
        
    2. Open the product page app. If the port-forward from the previous step is still running, the app is typically available on localhost port 9080, as in the following example.
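        open http://localhost:9080/productpage   # assumes the port-forward to 9080 from the previous step is still running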
    3. Refresh the page multiple times.
  4. Wait a few seconds and verify that traces are displayed in the Gloo UI.

    Figure: Product page traces

Ambient mode

  1. Open the Gloo UI.

      meshctl dashboard --kubecontext $MGMT_CONTEXT
      
  2. From the menu, select Observability > Tracing and verify that the Jaeger UI opens.

    Figure: Jaeger UI
  3. Send a few sample requests to your Istio workloads. Each request produces Istio traces that are sent to the Jaeger instance that you configured. For example, if you deployed the Bookinfo sample app from the ambient mesh getting started guide, use the following steps to produce traces.

    1. If you have not done so already, expose Bookinfo externally.
        kubectl -n bookinfo apply -f https://raw.githubusercontent.com/istio/istio/1.24.2/samples/bookinfo/networking/bookinfo-gateway.yaml --context $REMOTE_CONTEXT
        
    2. Save the external address of your ingress gateway in an environment variable.
        export INGRESS_GW_ADDRESS=$(kubectl get svc -n gloo-mesh-gateways --context $REMOTE_CONTEXT istio-ingressgateway -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
        echo $INGRESS_GW_ADDRESS
        
    3. Open the product page UI and refresh the page multiple times.
        open http://$INGRESS_GW_ADDRESS/productpage
        
  4. Wait a few seconds and verify that traces are displayed in the Gloo UI. For example, you can inspect the traces for individual ztunnel pods to see the requests to the Bookinfo services.

    Figure: Product page traces