Cloud LLM providers
Set up cloud LLM providers with AI Gateway.
Before you begin
1. Get the external address of the gateway and save it in an environment variable, as in the sketch after this list.
2. Choose a supported LLM provider.
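For example, if your gateway is exposed through a LoadBalancer service, you might capture its address as follows. The service name and namespace here are assumptions; adjust them to match your installation.

```sh
# Assumed service name and namespace; change these to match your gateway setup
export INGRESS_GW_ADDRESS=$(kubectl get svc -n gloo-system gloo-proxy-ai-gateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $INGRESS_GW_ADDRESS
```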
Supported LLM providers
The examples throughout the Gloo AI Gateway docs use OpenAI as the LLM provider, but you can use any other supported provider.
Gloo Gateway supports many AI providers. For the full list of currently supported providers, see the AI options in the Upstream reference.
OpenAI
OpenAI is the most common LLM provider, and the examples throughout the AI Gateway docs use OpenAI. You can adapt these examples to your own provider, especially providers with OpenAI-compatible APIs, such as DeepSeek and Mistral.
To set up OpenAI, continue with the Authenticate to the LLM guide.
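After that setup, a chat completion request through the gateway might look like the following sketch. The `/openai` route path, port, and model name are assumptions that depend on how you configure the route in that guide.

```sh
# Sketch only: the /openai path, port, and model are assumptions
curl "$INGRESS_GW_ADDRESS:8080/openai" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [ { "role": "user", "content": "Explain how AI works in a few words" } ]
  }'
```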
Google Gemini
1. Save your Gemini API key as an environment variable. To retrieve your API key, log in to Google AI Studio and select API Keys.

   ```sh
   export GOOGLE_KEY=<your-api-key>
   ```

2. Create a secret to authenticate to Google. For other ways to authenticate, see the Auth guide.

   ```sh
   kubectl apply -f - <<EOF
   apiVersion: v1
   kind: Secret
   metadata:
     name: google-secret
     namespace: gloo-system
     labels:
       app: ai-gateway
   type: Opaque
   stringData:
     Authorization: $GOOGLE_KEY
   EOF
   ```

3. Create an Upstream resource to define the Gemini destination.

   ```sh
   kubectl apply -f - <<EOF
   apiVersion: gloo.solo.io/v1
   kind: Upstream
   metadata:
     labels:
       app: ai-gateway
     name: google
     namespace: gloo-system
   spec:
     ai:
       gemini:
         apiVersion: v1beta
         authToken:
           kind: SecretRef
           secretRef:
             name: google-secret
         model: gemini-1.5-flash-latest
   EOF
   ```

   Review the following table to understand this configuration.

   | Setting | Description |
   | --- | --- |
   | `gemini` | The Gemini AI provider. |
   | `apiVersion` | The API version of Gemini that is compatible with the model that you plan to use. In this example, you must use `v1beta` because the `gemini-1.5-flash-latest` model is not compatible with the `v1` API version. For more information, see the Google AI docs. |
   | `authToken` | The authentication token to use to authenticate to the LLM provider. The example refers to the secret that you created in the previous step. |
   | `model` | The model to use to generate responses. In this example, you use the `gemini-1.5-flash-latest` model. For more models, see the Google AI docs. |

4. Create an HTTPRoute resource to route requests to the Gemini upstream. Note that Gloo Gateway automatically rewrites the endpoint that you set up (such as `/gemini`) to the appropriate chat completion endpoint of the LLM provider, based on the LLM provider that you set up in the Upstream resource.

   ```sh
   kubectl apply -f - <<EOF
   apiVersion: gateway.networking.k8s.io/v1
   kind: HTTPRoute
   metadata:
     name: google
     namespace: gloo-system
     labels:
       app: ai-gateway
   spec:
     parentRefs:
     - name: ai-gateway
       namespace: gloo-system
     rules:
     - matches:
       - path:
           type: PathPrefix
           value: /gemini
       backendRefs:
       - name: google
         namespace: gloo-system
         group: gloo.solo.io
         kind: Upstream
   EOF
   ```

5. Send a request to the LLM provider API. Verify that the request succeeds and that you get back a response from the chat completion API.
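For example, a request to the `/gemini` route might look like the following sketch. The port and the prompt are assumptions; the body follows the Gemini generateContent format.

```sh
# Sketch only: the port and prompt are assumptions; adjust to your environment
curl "$INGRESS_GW_ADDRESS:8080/gemini" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      { "parts": [ { "text": "Explain how AI works in a few words" } ] }
    ]
  }'
```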
Example output:
{ "candidates": [ { "content": { "parts": [ { "text": "Learning patterns from data to make predictions.\n" } ], "role": "model" }, "finishReason": "STOP", "avgLogprobs": -0.017732446392377216 } ], "usageMetadata": { "promptTokenCount": 8, "candidatesTokenCount": 9, "totalTokenCount": 17, "promptTokensDetails": [ { "modality": "TEXT", "tokenCount": 8 } ], "candidatesTokensDetails": [ { "modality": "TEXT", "tokenCount": 9 } ] }, "modelVersion": "gemini-1.5-flash-latest", "responseId": "UxQ6aM_sKbjFnvgPocrJaA" }
Differences between LLM providers
Note the following differences in how AI Gateway features function for each provider.
RAG
Retrieval augmented generation (RAG) is currently not supported for the Gemini and Vertex AI providers.
Chat streaming
Gloo AI Gateway supports chat streaming, which allows the LLM to stream out tokens as they are generated. The way that a request indicates streaming varies by AI provider.
- OpenAI and most AI providers: Most providers send the `is-streaming` boolean as part of the request to determine whether or not a request should receive a streamed response.
- Google Gemini and Vertex AI: In contrast, the Gemini and Vertex AI providers change the request path to indicate streaming, such as the `streamGenerateContent` segment of the path in the Vertex AI streaming endpoint `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash-latest:streamGenerateContent?key=<key>`. To prevent the path that you defined in your HTTPRoute from being overwritten by this streaming path, you instead indicate chat streaming for Gemini and Vertex AI by setting `spec.options.ai.routeType=CHAT_STREAMING` in your RouteOptions resource, as in the sketch after this list.
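A minimal RouteOptions sketch for the Gemini route from the example above; the `targetRefs` attachment and the resource name are assumptions, so adjust them to how your routes are wired.

```yaml
# Sketch only: assumes attachment to the HTTPRoute named "google" from the Gemini example
apiVersion: gateway.solo.io/v1
kind: RouteOptions
metadata:
  name: google-streaming
  namespace: gloo-system
spec:
  targetRefs:
  - group: gateway.networking.k8s.io
    kind: HTTPRoute
    name: google
  options:
    ai:
      routeType: CHAT_STREAMING
```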
Next
Now that you can send requests to an LLM provider, explore the other AI Gateway tutorials and guides.