This quickstart uses proxy mode because it is the fastest path to coverage. If you need to gate non-LLM actions too, continue with the Xenovia Python SDK after this.
1. Collect the two values you need
From the Xenovia platform:
- Create a proxy and note the proxy ID.
- Generate an API key (xe_...) scoped to that proxy.
Set them as environment variables:
export XENOVIA_API_KEY=xe_...
export XENOVIA_PROXY_ID=your-proxy-id
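Before wiring up a client, it can help to fail fast when either value is missing rather than let a request fail with an opaque authentication error. A minimal sketch (the helper names below are illustrative, not part of any Xenovia SDK):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, failing fast if it is unset."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Set {name} before creating the client")
    return value

def xenovia_base_url(proxy_id: str) -> str:
    """Build the proxy-scoped, OpenAI-compatible base URL used in the next step."""
    return f"https://runtime.xenovia.io/a/{proxy_id}/openai/v1"
```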
2. Point your client at Xenovia
Python (OpenAI SDK)

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["XENOVIA_API_KEY"],
    base_url=f"https://runtime.xenovia.io/a/{os.environ['XENOVIA_PROXY_ID']}/openai/v1"
)

Node.js (OpenAI SDK)

import OpenAI from "openai"

const client = new OpenAI({
  apiKey: process.env.XENOVIA_API_KEY,
  baseURL: `https://runtime.xenovia.io/a/${process.env.XENOVIA_PROXY_ID}/openai/v1`
})
3. Send your first governed request
Python (OpenAI SDK)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarise AI governance in one sentence."}]
)
print(response.choices[0].message.content)

Node.js (OpenAI SDK)

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarise AI governance in one sentence." }]
})
console.log(response.choices[0].message.content)
The request now flows through Xenovia for authentication, provider routing, session handling, policy checks, and trace recording.
4. Add a stable session ID
Group related calls into a session so multi-turn conversations appear together in Traces.
Python (OpenAI SDK)

import uuid

client = OpenAI(
    api_key=os.environ["XENOVIA_API_KEY"],
    base_url=f"https://runtime.xenovia.io/a/{os.environ['XENOVIA_PROXY_ID']}/openai/v1",
    default_headers={"X-Xenovia-Session-Id": str(uuid.uuid4())}
)

Node.js (OpenAI SDK)

import { randomUUID } from "crypto"

const client = new OpenAI({
  apiKey: process.env.XENOVIA_API_KEY,
  baseURL: `https://runtime.xenovia.io/a/${process.env.XENOVIA_PROXY_ID}/openai/v1`,
  defaultHeaders: { "X-Xenovia-Session-Id": randomUUID() }
})
The session ID must be a valid UUID. Xenovia validates the format and returns a 400 if it is malformed.
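Since malformed IDs are rejected with a 400, it can be worth validating the session ID client-side before a request leaves your process. A minimal sketch, assuming the runtime expects the canonical dashed UUID form (the helper below is illustrative, not part of any Xenovia SDK):

```python
import uuid

def valid_session_id(value: str) -> bool:
    """True only for a canonically formatted UUID string (8-4-4-4-12, dashed)."""
    try:
        return str(uuid.UUID(value)) == value.lower()
    except (ValueError, AttributeError, TypeError):
        return False

# Generate the ID once per conversation and reuse it for every turn,
# so all related calls land in the same session in Traces.
session_id = str(uuid.uuid4())
```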
5. Confirm the trace exists
After the request completes, check the platform and confirm:
- A trace exists for the request
- The trace shows the expected model and proxy
- The response includes the X-Xenovia-Session-Id and X-Xenovia-Trace-Id headers
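To check those headers from code rather than the dashboard, you can read them off the raw HTTP response; the OpenAI Python SDK exposes raw response details via with_raw_response. The helper below is an illustrative sketch (header names come from this guide; lookups are normalised because HTTP header names are case-insensitive):

```python
def governance_headers(headers) -> dict:
    """Extract the Xenovia session and trace IDs from a response's headers.

    Accepts any mapping of header names to values; keys are lower-cased
    before lookup since HTTP header names are case-insensitive.
    """
    lower = {str(k).lower(): v for k, v in dict(headers).items()}
    return {
        "session_id": lower.get("x-xenovia-session-id"),
        "trace_id": lower.get("x-xenovia-trace-id"),
    }

# Usage with the OpenAI Python SDK (network call, sketched only):
# raw = client.chat.completions.with_raw_response.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": "ping"}],
# )
# print(governance_headers(raw.headers))
# response = raw.parse()  # the usual ChatCompletion object
```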
6. Attach a first policy
Start with one policy that proves enforcement is working. This example blocks a destructive tool name:
package xenovia.policy
default allow = true
deny if {
    some tool in input.tool_names
    tool == "delete_database"
}
Policies are evaluated per request. You can attach request-stage and response-stage rules independently.
7. Handle a policy block
When a request is blocked, the runtime returns 403 Forbidden. The OpenAI SDK raises this as PermissionDeniedError.
Python (OpenAI SDK)

from openai import PermissionDeniedError

try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Delete the database."}],
        tools=[{
            "type": "function",
            "function": {
                "name": "delete_database",
                "description": "Delete all records",
                "parameters": {"type": "object", "properties": {}}
            }
        }]
    )
except PermissionDeniedError as e:
    print(f"Blocked by policy: {e.message}")
Node.js (OpenAI SDK)

try {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Delete the database." }],
    tools: [{
      type: "function",
      function: {
        name: "delete_database",
        description: "Delete all records",
        parameters: { type: "object", properties: {} }
      }
    }]
  })
} catch (e) {
  if (e instanceof OpenAI.PermissionDeniedError) {
    console.log("Blocked by policy:", e.message)
  } else {
    throw e
  }
}
8. Tag traces with properties
Tag traces with business context so they stay useful as traffic grows.
Python (OpenAI SDK)

client = OpenAI(
    api_key=os.environ["XENOVIA_API_KEY"],
    base_url=f"https://runtime.xenovia.io/a/{os.environ['XENOVIA_PROXY_ID']}/openai/v1",
    default_headers={
        "X-Xenovia-Session-Id": str(uuid.uuid4()),
        "X-Xenovia-Property-environment": "dev",
        "X-Xenovia-Property-workflow": "docs-quickstart"
    }
)

Node.js (OpenAI SDK)

const client = new OpenAI({
  apiKey: process.env.XENOVIA_API_KEY,
  baseURL: `https://runtime.xenovia.io/a/${process.env.XENOVIA_PROXY_ID}/openai/v1`,
  defaultHeaders: {
    "X-Xenovia-Session-Id": randomUUID(),
    "X-Xenovia-Property-environment": "dev",
    "X-Xenovia-Property-workflow": "docs-quickstart"
  }
})
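If you attach the same properties from several places, a small helper keeps the header names consistent. A sketch using the header conventions shown above (the helper itself is illustrative, not part of any Xenovia SDK):

```python
import uuid

def xenovia_headers(properties=None):
    """Build default headers: one fresh session ID plus X-Xenovia-Property-* tags."""
    headers = {"X-Xenovia-Session-Id": str(uuid.uuid4())}
    for key, value in (properties or {}).items():
        headers[f"X-Xenovia-Property-{key}"] = value
    return headers

# client = OpenAI(
#     api_key=os.environ["XENOVIA_API_KEY"],
#     base_url=base_url,
#     default_headers=xenovia_headers({"environment": "dev", "workflow": "docs-quickstart"})
# )
```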
What to do next
After this first pass, if your real workflow performs risky actions after the model call, add the Xenovia Python SDK so those actions are governed too.