Use a custom ConfigMap to configure tracing for your Solo Enterprise for agentgateway proxy.

Before you begin

Install the Solo Enterprise for agentgateway control plane.

Set up an OpenTelemetry collector

Install an OpenTelemetry collector that the Solo Enterprise for agentgateway proxy can send traces to. Depending on your environment, you can further configure the collector to export these traces to your preferred tracing platform, such as Jaeger or Grafana Tempo. The following example exports traces to Tempo.

  1. Install the OTel collector.

      helm upgrade --install opentelemetry-collector-traces opentelemetry-collector \
    --repo https://open-telemetry.github.io/opentelemetry-helm-charts \
    --version 0.127.2 \
    --set mode=deployment \
    --set image.repository="otel/opentelemetry-collector-contrib" \
    --set command.name="otelcol-contrib" \
    --namespace=telemetry \
    --create-namespace \
    -f -<<EOF
    config:
      receivers:
        otlp:
          protocols:
            grpc:
              endpoint: 0.0.0.0:4317
            http:
              endpoint: 0.0.0.0:4318
      exporters:
        otlp/tempo:
          endpoint: http://tempo.telemetry.svc.cluster.local:4317
          tls:
            insecure: true
        debug:
          verbosity: detailed
      service:
        pipelines:
          traces:
            receivers: [otlp]
            processors: [batch]
            exporters: [debug, otlp/tempo]
    EOF
      
  2. Verify that the collector is up and running.

      kubectl get pods -n telemetry
      

    Example output:

      NAME                                             READY   STATUS    RESTARTS   AGE
    opentelemetry-collector-traces-8f566f445-l82s6   1/1     Running   0          17m
      
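The collector configuration above exports traces to Grafana Tempo. If you prefer Jaeger, you can swap the exporter in the collector values. The following snippet is a sketch, not a verified configuration for your environment: the service address jaeger-collector.telemetry.svc.cluster.local is an assumption based on a default Jaeger deployment in the telemetry namespace, and it relies on Jaeger v1.35 or later accepting OTLP over gRPC on port 4317.

```yaml
exporters:
  otlp/jaeger:
    # Hypothetical Jaeger service address; adjust to your Jaeger deployment.
    endpoint: http://jaeger-collector.telemetry.svc.cluster.local:4317
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [debug, otlp/jaeger]
```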

Configure your proxy

  1. Create a ConfigMap with your agentgateway tracing configuration. The following example collects additional information about the request to the LLM and adds this information to the trace. The trace is then sent to the collector that you set up earlier. To learn more about the fields that you can configure, see the agentgateway docs.

      kubectl apply -f- <<EOF
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: agent-gateway-config
      namespace: gloo-system
    data:
      config.yaml: |-
        config:
          tracing:
            otlpEndpoint: http://opentelemetry-collector-traces.telemetry.svc.cluster.local:4317
            otlpProtocol: grpc
            randomSampling: true
            fields:
              add:
                gen_ai.operation.name: '"chat"'
                gen_ai.system: "llm.provider"
                gen_ai.request.model: "llm.requestModel"
                gen_ai.response.model: "llm.responseModel"
                gen_ai.usage.completion_tokens: "llm.outputTokens"
                gen_ai.usage.prompt_tokens: "llm.inputTokens"
    EOF
      
  2. Create a GlooGatewayParameters resource that references the ConfigMap that you created.

      kubectl apply -f- <<EOF
    apiVersion: gloo.solo.io/v1alpha1
    kind: GlooGatewayParameters
    metadata:
      name: tracing
      namespace: gloo-system
    spec:
      kube:
        agentgateway:
          customConfigMapName: agent-gateway-config
    EOF
      
  3. Create your Solo Enterprise for agentgateway proxy. Make sure to reference the GlooGatewayParameters resource that you created so that your proxy starts with the custom tracing configuration.

      kubectl apply -f- <<EOF
    kind: Gateway
    apiVersion: gateway.networking.k8s.io/v1
    metadata:
      name: agentgateway
      namespace: gloo-system
      labels:
        app: agentgateway
    spec:
      gatewayClassName: agentgateway-enterprise
      infrastructure:
        parametersRef:
          name: tracing
          group: gloo.solo.io  
          kind: GlooGatewayParameters  
      listeners:
      - protocol: HTTP
        port: 8080
        name: http
        allowedRoutes:
          namespaces:
            from: All
    EOF
      
  4. Verify that your Solo Enterprise for agentgateway proxy is up and running.

      kubectl get pods -n gloo-system
      

    Example output:

      NAME                           READY   STATUS    RESTARTS   AGE
    agentgateway-8b5dc4874-bl79q   1/1     Running   0          12s
      
  5. Get the external address of the gateway and save it in an environment variable.
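A common way to retrieve the address is the following sketch. It assumes that your gateway service is named agentgateway in the gloo-system namespace and is exposed through a LoadBalancer service; the environment variable name INGRESS_GW_ADDRESS is illustrative.

```shell
# Look up the LoadBalancer address of the agentgateway service.
# The jsonpath union returns the hostname or the IP, depending on your cloud provider.
export INGRESS_GW_ADDRESS=$(kubectl get svc agentgateway -n gloo-system -o jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
echo $INGRESS_GW_ADDRESS
```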

Set up access to Gemini

Configure access to an LLM provider such as Gemini and send a sample request. You later use this request to verify your tracing configuration.

  1. Save your Gemini API key as an environment variable. To retrieve your API key, log in to the Google AI Studio and select API Keys.

      export GOOGLE_KEY=<your-api-key>
      
  2. Create a secret to authenticate to Google.

      kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Secret
    metadata:
      name: google-secret
      namespace: gloo-system
    type: Opaque
    stringData:
      Authorization: $GOOGLE_KEY
    EOF
      
  3. Create a Backend resource to define the Gemini destination.

      kubectl apply -f- <<EOF
    apiVersion: gateway.kgateway.dev/v1alpha1
    kind: Backend
    metadata:
      labels:
        app: agentgateway
      name: google
      namespace: gloo-system
    spec:
      ai:
        llm:
          gemini:
            apiVersion: v1beta
            authToken:
              kind: SecretRef
              secretRef:
                name: google-secret
            model: gemini-2.5-flash-lite
      type: AI
    EOF
      

    Review the following table to understand this configuration.

    | Setting | Description |
    | --- | --- |
    | gemini | The Gemini AI provider. |
    | apiVersion | The API version of Gemini that is compatible with the model that you plan to use. In this example, you must use v1beta because the gemini-2.5-flash-lite model is not compatible with the v1 API version. For more information, see the Google AI docs. |
    | authToken | The authentication token to use to authenticate to the LLM provider. The example refers to the secret that you created in the previous step. |
    | model | The model to use to generate responses. In this example, you use the gemini-2.5-flash-lite model. For more models, see the Google AI docs. |
  4. Create an HTTPRoute resource to route requests to the Gemini Backend. Note that Gloo Gateway automatically rewrites the endpoint that you set up (such as /gemini) to the appropriate chat completion endpoint of the LLM provider, based on the provider that you configured in the Backend resource.

      kubectl apply -f- <<EOF
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: google
      namespace: gloo-system
    spec:
      parentRefs:
        - name: agentgateway
      rules:
      - matches:
        - path:
            type: PathPrefix
            value: /gemini
        backendRefs:
        - name: google
          group: gateway.kgateway.dev
          kind: Backend
    EOF
      
  5. Send a request to the LLM provider API. Verify that the request succeeds and that you get back a response from the chat completion API.
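The exact request is not shown here; a sample chat completion request might look like the following sketch. It assumes that the external address of the gateway is stored in $INGRESS_GW_ADDRESS and that the gateway listens on port 8080, matching the HTTP listener that you configured earlier. Leaving the model field empty is an assumption that the model pinned in the Backend resource is used.

```shell
# Send an OpenAI-style chat completion request through the /gemini route.
curl -s "http://$INGRESS_GW_ADDRESS:8080/gemini" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "",
    "messages": [
      {"role": "user", "content": "Explain AI to me like I am five years old."}
    ]
  }'
```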

      Example output:

        {"id":"aGLEaMjbLp6p_uMPopeAoAc",
      "choices":
        [{"index":0,"message":{
            "content":"Imagine teaching a dog a trick.  You show it what to do, reward it when it's right, and correct it when it's wrong.  Eventually, the dog learns.\n\nAI is similar.  We \"teach\" computers by showing them lots of examples.  For example, to recognize cats in pictures, we show it thousands of pictures of cats, labeling each one \"cat.\"  The AI learns patterns in these pictures – things like pointy ears, whiskers, and furry bodies – and eventually, it can identify a cat in a new picture it's never seen before.\n\nThis learning process uses math and algorithms (like a secret code of instructions) to find patterns and make predictions.  Some AI is more like a dog learning tricks (learning from examples), and some is more like following a very detailed recipe (following pre-programmed rules).\n\nSo, in short: AI is about teaching computers to learn from data and make decisions or predictions, just like we teach dogs tricks.\n",
            "role":"assistant"
            },
         "finish_reason":"stop"
         }],
       "created":1757700714,
       "model":"gemini-1.5-flash-latest",
       "object":"chat.completion",
       "usage":{
           "prompt_tokens":8,
           "completion_tokens":205,
           "total_tokens":213
           }
      }
        

Verify tracing

  1. Get the logs of the agentgateway proxy. In the CLI output, find the trace.id.

      kubectl logs deploy/agentgateway -n gloo-system
      

    Example output:

        info	request gateway=gloo-system/agentgateway listener=http 
      route=gloo-system/google endpoint=generativelanguage.googleapis.com:443
      src.addr=127.0.0.1:49576 http.method=POST http.host=localhost 
      http.path=/gemini http.version=HTTP/1.1 http.status=200 
      trace.id=d65e4eeb983e2d964e71e8dc8c405f97 span.id=b836e1b1d51b3e74 
      llm.provider=gemini llm.request.model= llm.request.tokens=8 
      llm.response.model=gemini-1.5-flash-latest llm.response.tokens=313 duration=3165ms
        
  2. Get the logs of the collector and search for the trace ID. Verify that the span includes the additional gen_ai attributes that you configured earlier.

      kubectl logs deploy/opentelemetry-collector-traces -n telemetry
      

    Example output:

        Span #0
       Trace ID       : d65e4eeb983e2d964e71e8dc8c405f97
       Parent ID      : 
       ID             : b836e1b1d51b3e74
       Name           : POST /gemini
       Kind           : Server
       Start time     : 2025-09-24 18:12:58.653868462 +0000 UTC
       End time       : 2025-09-24 18:13:01.821700755 +0000 UTC
       Status code    : Unset
       Status message : 
      Attributes:
        -> gateway: Str(gloo-system/agentgateway)
        -> listener: Str(http)
        -> route: Str(gloo-system/google)
        -> endpoint: Str(generativelanguage.googleapis.com:443)
        -> src.addr: Str(127.0.0.1:49576)
        -> http.method: Str(POST)
        -> http.host: Str(localhost)
        -> http.path: Str(/gemini)
        -> http.version: Str(HTTP/1.1)
        -> http.status: Int(200)
        -> trace.id: Str(d65e4eeb983e2d964e71e8dc8c405f97)
        -> span.id: Str(b836e1b1d51b3e74)
        -> llm.provider: Str(gemini)
        -> llm.request.model: Str()
        -> llm.request.tokens: Int(8)
        -> llm.response.model: Str(gemini-1.5-flash-latest)
        -> llm.response.tokens: Int(313)
        -> duration: Str(3165ms)
        -> url.scheme: Str(http)
        -> network.protocol.version: Str(1.1)
        -> gen_ai.operation.name: Str(chat)
        -> gen_ai.system: Str(gemini)
        

Other tracing configurations

Review common tracing provider configurations that you can use with agentgateway.

Cleanup

You can remove the resources that you created in this guide.

  kubectl delete gateway agentgateway -n gloo-system
  kubectl delete gloogatewayparameters tracing -n gloo-system
  kubectl delete configmap agent-gateway-config -n gloo-system
  helm uninstall opentelemetry-collector-traces -n telemetry
  kubectl delete httproute google -n gloo-system
  kubectl delete backend google -n gloo-system
  kubectl delete secret google-secret -n gloo-system