## Documentation Index

Fetch the complete documentation index at https://docs.xenovia.io/llms.txt and use it to discover all available pages before exploring further.
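For example, a quick way to pull the index from a script (a minimal sketch using the standard `fetch` API):

```ts
// Fetch the documentation index and print the page list.
const res = await fetch("https://docs.xenovia.io/llms.txt")
if (!res.ok) throw new Error(`Failed to fetch index: ${res.status}`)
console.log(await res.text())
```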
## Setup

```bash
npm install ai @ai-sdk/openai
```

```ts
import { createOpenAI } from "@ai-sdk/openai"

const xenovia = createOpenAI({
  apiKey: process.env.XENOVIA_API_KEY,
  baseURL: `https://runtime.xenovia.io/a/${process.env.XENOVIA_PROXY_ID}/openai/v1`
})

const model = xenovia("gpt-4o-mini")
```
## Streaming text

```ts
import { streamText } from "ai"

const { textStream } = streamText({
  model,
  messages: [{ role: "user", content: "Explain AI governance in plain English" }]
})

for await (const chunk of textStream) {
  process.stdout.write(chunk)
}
```
Xenovia stamps `X-Xenovia-Session-Id` and `X-Xenovia-Trace-Id` into the response headers before the first token, so streaming traces are always correlated correctly.
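If you want to read those correlation IDs in application code, the `streamText` result exposes response metadata once the stream completes. A sketch, assuming the OpenAI-compatible provider surfaces raw response headers on the result's `response` metadata (note that fetch lowercases header names):

```ts
const result = streamText({
  model,
  messages: [{ role: "user", content: "Explain AI governance in plain English" }]
})

// Consume the stream first; the response metadata promise resolves
// once the response has finished.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk)
}

const response = await result.response
console.log(response.headers?.["x-xenovia-session-id"])
console.log(response.headers?.["x-xenovia-trace-id"])
```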
## Next.js App Router route handler

```ts
// app/api/chat/route.ts
import { streamText } from "ai"
import { createOpenAI } from "@ai-sdk/openai"
import { randomUUID } from "crypto"

export async function POST(req: Request) {
  const { messages, sessionId } = await req.json()

  const xenovia = createOpenAI({
    apiKey: process.env.XENOVIA_API_KEY,
    baseURL: `https://runtime.xenovia.io/a/${process.env.XENOVIA_PROXY_ID}/openai/v1`,
    headers: {
      // Use a stable session ID from the client to group conversation turns
      "X-Xenovia-Session-Id": sessionId ?? randomUUID(),
      // Tag traces with application context
      "X-Xenovia-Property-route": "chat",
    }
  })

  const result = streamText({
    model: xenovia("gpt-4o-mini"),
    messages
  })

  return result.toDataStreamResponse()
}
```
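On the client, the matching `useChat` hook can carry the stable session ID in the request body, so the handler's `sessionId` destructuring picks it up. A sketch, assuming `@ai-sdk/react` and a `conversationId` supplied from your own state:

```tsx
"use client"
import { useChat } from "@ai-sdk/react"

export function Chat({ conversationId }: { conversationId: string }) {
  // Extra `body` fields are merged into the POST payload sent to /api/chat.
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat",
    body: { sessionId: conversationId }
  })

  return (
    <form onSubmit={handleSubmit}>
      {messages.map(m => <p key={m.id}>{m.role}: {m.content}</p>)}
      <input value={input} onChange={handleInputChange} />
    </form>
  )
}
```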
## Tool calling

Tool schemas are forwarded to the upstream LLM, and the Xenovia request-stage policy evaluates `input.tool_names` before the call reaches the model.
```ts
import { streamText, tool } from "ai"
import { z } from "zod"

const result = streamText({
  model,
  messages: [{ role: "user", content: "What is the weather in London?" }],
  tools: {
    getWeather: tool({
      description: "Get the weather for a location",
      parameters: z.object({ location: z.string() }),
      execute: async ({ location }) => ({ temperature: "15°C", location })
    })
  }
})
```
## Session tracking

Pass a stable session ID to group all turns of a chat conversation. The Vercel AI SDK does not add session IDs automatically, so provide one per conversation from your application layer.
```ts
const xenovia = createOpenAI({
  apiKey: process.env.XENOVIA_API_KEY,
  baseURL: `https://runtime.xenovia.io/a/${process.env.XENOVIA_PROXY_ID}/openai/v1`,
  headers: {
    "X-Xenovia-Session-Id": conversationId, // stable ID from your DB or cookie
    "X-Xenovia-Session-Path": `users/${userId}/conversations`
  }
})
```
## Handling policy blocks

When a request is blocked, the runtime returns 403. Note that `streamText` does not throw on HTTP errors; failures surface as error parts inside the stream. Map the block to a readable message when building the stream response:

```ts
import { APICallError } from "ai"

const result = streamText({ model, messages })

return result.toDataStreamResponse({
  getErrorMessage: (error) => {
    if (APICallError.isInstance(error) && error.statusCode === 403) {
      return "Request blocked by policy"
    }
    return "An unexpected error occurred"
  }
})
```
Node's built-in fetch (which the Vercel AI SDK uses) keeps connections alive by default, so subsequent requests within the same server instance reuse the TCP connection to Xenovia and see lower latency.
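To get the most out of connection reuse, create the provider once at module scope rather than per request. A sketch; the `lib/xenovia.ts` path is just a convention:

```ts
// lib/xenovia.ts — one provider instance per server process, so every
// request shares the same keep-alive connection pool to Xenovia.
import { createOpenAI } from "@ai-sdk/openai"

export const xenovia = createOpenAI({
  apiKey: process.env.XENOVIA_API_KEY,
  baseURL: `https://runtime.xenovia.io/a/${process.env.XENOVIA_PROXY_ID}/openai/v1`
})

export const model = xenovia("gpt-4o-mini")
```

When you need per-request headers such as `X-Xenovia-Session-Id`, creating the provider inside the handler is still fine: connection pooling lives in the fetch dispatcher and is keyed by origin, not by provider instance, so connections are reused either way.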