About prompt enrichment

Prompts are the basic building blocks for guiding LLMs to produce relevant and accurate responses. By effectively managing both system prompts, which set initial guidelines, and user prompts, which provide specific context, you can significantly enhance the quality and coherence of the model’s outputs.

System prompts include initialization instructions, behavior guidelines, and background information. They set the foundation for the model’s behavior. User prompts encompass direct queries, sequential inputs, and task-oriented instructions. They ensure that the model responds accurately to specific user needs.
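In the OpenAI-style chat API that the examples in this guide use, both kinds of prompts travel in the same messages array, distinguished only by their role field. The following sketch builds a minimal request body (with hypothetical content) and lists the roles with jq to show that structure:

```shell
# Minimal chat request body showing both roles (illustrative content only).
# The system prompt sets the behavior; the user prompt carries the task input.
cat <<'EOF' > /tmp/request.json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    {"role": "system", "content": "Parse the unstructured text into CSV format."},
    {"role": "user", "content": "Seattle, Los Angeles, and Chicago are cities in the United States."}
  ]
}
EOF

# List the role of each message in order: system first, then user.
jq -r '.messages[].role' /tmp/request.json
```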

In the following example, you explore how to refactor system and user prompts to parse unstructured text and turn it into valid CSV format.

Before you begin

Complete the Authenticate with API keys tutorial.

Refactor LLM prompts

  1. Send a request to the AI API with the following prompt: Parse the unstructured text into CSV format: Seattle, Los Angeles, and Chicago are cities in the United States. London, Paris, and Berlin are cities in Europe. Note that in this request, the instructions and the unstructured text are combined in a single user prompt, and no system prompt is set.

      curl "$INGRESS_GW_ADDRESS:8080/openai" -H content-type:application/json -d '{
        "model": "gpt-3.5-turbo",
        "messages": [
          {
            "role": "user",
            "content": "Parse the unstructured text into CSV format: Seattle, Los Angeles, and Chicago are cities in the United States. London, Paris, and Berlin are cities in Europe."
          }
       ]
      }' | jq -r '.choices[].message.content'
      

    Verify that the request succeeds and that you get back a structured CSV response.

      City,Country
    Seattle,United States
    Los Angeles,United States
    Chicago,United States
    London,Europe
    Paris,Europe
    Berlin,Europe
      
  2. Refactor the request to improve the readability and manageability of the prompt. In the following example, the instructions are separated from the unstructured text: the instructions are added as a system prompt, and the unstructured text is added as a user prompt.

      curl "$INGRESS_GW_ADDRESS:8080/openai" -H content-type:application/json -d '{
       "model": "gpt-3.5-turbo",
       "messages": [
         {
           "role": "system",
           "content": "Parse the unstructured text into CSV format."
         },
         {
           "role": "user",
           "content": "Seattle, Los Angeles, and Chicago are cities in the United States. London, Paris, and Berlin are cities in Europe."
         }
       ]
     }' | jq -r '.choices[].message.content'
      

    Verify that you get back the same output as in the previous step.

      City, Country  
    Seattle, United States  
    Los Angeles, United States  
    Chicago, United States  
    London, Europe  
    Paris, Europe  
    Berlin, Europe
      

Append or prepend prompts

Use a RouteOption resource to enrich prompts by appending or prepending system and user prompts to each request. This way, you can centrally manage common prompts that you want to add to each request.
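To see what the prepend enrichment effectively does to a request body before it reaches the LLM provider, you can reproduce it locally with jq. The following sketch is an illustration of the transformation, not the gateway's implementation:

```shell
# Original request body with only a user prompt (illustrative content).
cat <<'EOF' > /tmp/original.json
{"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "Seattle and Chicago are cities."}]}
EOF

# Sketch of the prepend enrichment: insert the configured system prompt
# at the front of the messages array, keeping the user prompt intact.
jq '.messages = [{"role": "system", "content": "Parse the unstructured text into CSV format."}] + .messages' \
  /tmp/original.json > /tmp/enriched.json

# The enriched body now has the system message first, then the user message.
jq -r '.messages[].role' /tmp/enriched.json
```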

  1. Create a RouteOption resource to enrich your prompts and configure additional settings. The following example prepends the system prompt Parse the unstructured text into CSV format. to each request that is sent to the openai HTTPRoute. Note that this RouteOption also disables Envoy's default 15-second route timeout. This setting is required to prevent timeout errors when sending requests to an LLM. Alternatively, you can set a timeout that is higher than 15 seconds.

      kubectl apply -f- <<EOF
    apiVersion: gateway.solo.io/v1
    kind: RouteOption
    metadata:
      name: openai-opt
      namespace: gloo-system
    spec:
      targetRefs:
      - group: gateway.networking.k8s.io
        kind: HTTPRoute
        name: openai
      options:
        ai:
          promptEnrichment:
            prepend:
            - role: SYSTEM
              content: "Parse the unstructured text into CSV format."
        timeout: "0"
    EOF
      
  2. Send a request without a system prompt. Although the system prompt instructions are missing in the request, the unstructured text in the user prompt is still transformed into structured CSV format. This is because the system prompt is automatically prepended from the RouteOption resource before it is sent to the LLM provider.

      curl "$INGRESS_GW_ADDRESS:8080/openai" -H content-type:application/json -d '{
       "model": "gpt-3.5-turbo",
       "messages": [
         {
           "role": "user",
           "content": "The recipe called for eggs, flour and sugar. The price was $5, $3, and $2."
         }
       ]
     }' | jq -r '.choices[].message.content'
      

    Example output:

      Item, Price
    Eggs, $5
    Flour, $3
    Sugar, $2
      

Overwrite settings on the route level

To overwrite a setting that you added to a RouteOption resource, include that setting directly in your request.

  1. Send a request to the AI API and include a custom system prompt that instructs the API to transform unstructured text into JSON format.

      curl "$INGRESS_GW_ADDRESS:8080/openai" -H content-type:application/json -d '{
       "model": "gpt-3.5-turbo",
       "messages": [
         {
           "role": "system",
           "content": "Parse the unstructured content and give back a JSON format"
    
         },
         {
           "role": "user",
           "content": "The recipe called for eggs, flour and sugar. The price was $5, $3, and $2."
         }
       ]
     }' | jq -r '.choices[].message.content'
      

    Example output:

      {
      "recipe": [
        {
          "ingredient": "eggs",
          "price": "$5"
        },
        {
          "ingredient": "flour",
          "price": "$3"
        },
        {
          "ingredient": "sugar",
          "price": "$2"
        }
      ]
    }
      
  2. Send another request. This time, you do not include a system prompt. Because the default setting in the RouteOption resource is applied, the unstructured text is returned in CSV format.

      curl "$INGRESS_GW_ADDRESS:8080/openai" -H content-type:application/json -d '{
       "model": "gpt-3.5-turbo",
       "messages": [
         {
           "role": "user",
           "content": "The recipe called for eggs, flour and sugar. The price was $5, $3, and $2."
         }
       ]
     }' | jq -r '.choices[].message.content'
      

    Example output:

      Item, Price
    Eggs, $5
    Flour, $3
    Sugar, $2
      

Next

Explore how to set up prompt guards to block unwanted requests and mask sensitive data.