If your stack can speak the OpenAI API, Xenovia usually fits with a
base_url change. Start there unless you need to govern actions that happen outside the model call itself.
Recommended order
- Start with the framework your production workload already uses.
- If you are unsure, use the OpenAI SDK guide because it maps cleanly to most OpenAI-compatible clients.
- Add the Xenovia Python SDK when you need policy checks around downstream actions.
Proxy mode integrations
Xenovia Runtime is an OpenAI-compatible proxy. Any client that accepts a custom base_url works without further changes, and every call passes through Xenovia’s policy and trace pipeline.
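Per-framework guides follow, but the swap itself looks the same everywhere. Here is a minimal sketch with the official OpenAI Python client, assuming the proxy URL pattern shown later on this page; the XENOVIA_API_KEY environment variable name is an illustrative assumption, not a documented requirement.

```python
import os

from openai import OpenAI

# Point the client at the Xenovia proxy instead of api.openai.com.
# Substitute your own proxy_id; the key variable name is an assumption.
client = OpenAI(
    base_url="https://runtime.xenovia.io/a/{proxy_id}/openai/v1",
    api_key=os.environ["XENOVIA_API_KEY"],
)

# Every call made through this client now passes Xenovia's policy
# and trace pipeline; the application code is otherwise unchanged.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```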
- OpenAI SDK
- LangChain
- LlamaIndex
- OpenAI Agents SDK
- Vercel AI SDK
- AutoGen / CrewAI
SDK mode
The Xenovia Python SDK (pip install xenovia-sdk) gates arbitrary agent actions such as tool calls, database writes, API requests, and file operations without proxying an LLM call.
- Xenovia Python SDK: covers execute(), @guard(), session handling, and error behavior.
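As a rough sketch, SDK mode looks something like the following, assembled only from the names this page mentions (execute(), @guard(), capability string plus payload, session_id / auto_session). The import path, signatures, and return behavior are assumptions; confirm them in the Xenovia Python SDK guide.

```python
# Sketch only: import path, signatures, and error behavior are assumptions
# based on the names this page mentions, not a confirmed API.
from xenovia_sdk import Xenovia, guard

xn = Xenovia(auto_session=True)  # or pass an explicit session_id=...

# Gate a one-off action: a capability string plus a payload describing it.
xn.execute("db.write", payload={"table": "orders", "op": "insert"})

# Gate every call to a function with the @guard() decorator.
@guard("fs.delete")
def remove_temp_file(path: str) -> None:
    import os
    os.remove(path)
```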
Choose the right guide
| | Proxy mode | SDK mode |
|---|---|---|
| What is governed | Every LLM call through the proxy | Arbitrary agent actions such as tools, writes, and APIs |
| Code change required | base_url swap only | execute() call or @guard() decorator |
| Policy input | Full LLM request context | Capability string plus payload |
| Trace contains | LLM request and response, tokens, latency, tool calls | Capability, payload, decision, and session |
| Session tracking | Automatic via Xenovia session resolution | Explicit session_id or auto_session=True |
If your framework is not listed
- If it accepts an OpenAI-compatible base_url, start from the OpenAI SDK guide.
- If it only needs HTTP access, point it at https://runtime.xenovia.io/a/{proxy_id}/openai/v1 (see the sketch after this list).
- If the framework makes risky local calls after the model response, add the Xenovia Python SDK alongside it.
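For plain HTTP access, any client that can POST JSON will do. A sketch with the requests library; the bearer-token header and key variable name are assumed conventions, not confirmed by this guide.

```python
import os

import requests

# Substitute your proxy_id; the bearer-token scheme and key variable
# name are assumptions.
url = "https://runtime.xenovia.io/a/{proxy_id}/openai/v1/chat/completions"
resp = requests.post(
    url,
    headers={"Authorization": f"Bearer {os.environ['XENOVIA_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```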
Supported proxy endpoints
| Endpoint | Use case |
|---|---|
| POST /v1/chat/completions | Chat, agents, tool calling |
| POST /v1/responses | OpenAI Agents SDK and multi-step runs |
| POST /v1/embeddings | RAG and vector search |
| POST /v1/completions | Legacy text completions |
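The non-chat endpoints need no separate configuration: the same proxied client covers them. A hedged sketch of an embeddings call for RAG, reusing the base_url swap from the proxy-mode section; the model name is illustrative.

```python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://runtime.xenovia.io/a/{proxy_id}/openai/v1",
    api_key=os.environ["XENOVIA_API_KEY"],  # assumed variable name
)

# POST /v1/embeddings through the proxy: governed and traced like chat.
emb = client.embeddings.create(
    model="text-embedding-3-small",  # illustrative model name
    input=["governed retrieval example"],
)
print(len(emb.data[0].embedding))
```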