If your stack can speak the OpenAI API, Xenovia usually fits with a base_url change. Start there unless you need to govern actions that happen outside the model call itself.
  1. Start with the framework your production workload already uses.
  2. If you are unsure, use the OpenAI SDK guide because it maps cleanly to most OpenAI-compatible clients.
  3. Add the Xenovia Python SDK when you need policy checks around downstream actions.

Proxy mode integrations

Xenovia Runtime is an OpenAI-compatible proxy. Any client that accepts a custom base_url works without further changes, and every call passes through Xenovia’s policy and trace pipeline.
Base URL: https://runtime.xenovia.io/a/{proxy_id}/openai/v1
Provider credentials are configured in the proxy and resolved server-side.
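
For example, the swap with the OpenAI Python SDK looks like the sketch below. The api_key convention is an assumption: provider credentials live in the proxy, so check your proxy's auth settings for what, if anything, goes in the bearer token.

from openai import OpenAI

# Point any OpenAI-compatible client at the Xenovia proxy.
# Assumption: your Xenovia key (xe_...) is sent as the bearer token;
# the {proxy_id} placeholder comes from your proxy configuration.
client = OpenAI(
    api_key="xe_...",
    base_url="https://runtime.xenovia.io/a/{proxy_id}/openai/v1",
)

response = client.chat.completions.create(
    model="gpt-4o",  # resolved by the provider configured in the proxy
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)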

OpenAI SDK

Python and Node.js. Best default starting point for any OpenAI-compatible client.

LangChain

Python and JS chains, agents, and RAG pipelines.
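
A minimal sketch with langchain-openai, reusing the proxy base URL from above (key convention again assumed):

from langchain_openai import ChatOpenAI

# ChatOpenAI accepts a custom base_url, so chains and agents built on it
# route through the Xenovia proxy unchanged.
llm = ChatOpenAI(
    model="gpt-4o",
    api_key="xe_...",  # assumption: Xenovia key as bearer token
    base_url="https://runtime.xenovia.io/a/{proxy_id}/openai/v1",
)
print(llm.invoke("Hello").content)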

LlamaIndex

RAG pipelines and agentic query engines.
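
A comparable sketch for LlamaIndex; note that its OpenAI wrapper names the parameter api_base rather than base_url:

from llama_index.llms.openai import OpenAI

llm = OpenAI(
    model="gpt-4o",
    api_key="xe_...",  # assumption: Xenovia key as bearer token
    api_base="https://runtime.xenovia.io/a/{proxy_id}/openai/v1",
)
print(llm.complete("Hello").text)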

OpenAI Agents SDK

Multi-agent orchestration with the Responses API.
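
A hedged sketch with the openai-agents package. It defaults to the Responses API, so this assumes the proxy's /v1/responses endpoint (listed below) behaves like OpenAI's:

from openai import AsyncOpenAI
from agents import Agent, Runner, set_default_openai_client

# Route the Agents SDK's default client through the Xenovia proxy.
set_default_openai_client(
    AsyncOpenAI(
        api_key="xe_...",  # assumption: Xenovia key as bearer token
        base_url="https://runtime.xenovia.io/a/{proxy_id}/openai/v1",
    )
)

agent = Agent(name="Assistant", instructions="You are a helpful assistant.")
result = Runner.run_sync(agent, "Hello")
print(result.final_output)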

Vercel AI SDK

Next.js and edge streaming.

AutoGen / CrewAI

Multi-agent frameworks that can sit on top of an OpenAI-compatible endpoint.
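
Both frameworks take the endpoint as plain configuration; an AutoGen-style sketch (CrewAI's LLM wrapper accepts a base_url the same way):

# AutoGen (pyautogen) reads OpenAI-compatible endpoints from a config list.
llm_config = {
    "config_list": [
        {
            "model": "gpt-4o",  # resolved by the provider configured in the proxy
            "api_key": "xe_...",  # assumption: Xenovia key as bearer token
            "base_url": "https://runtime.xenovia.io/a/{proxy_id}/openai/v1",
        }
    ]
}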

SDK mode

The Xenovia Python SDK (pip install xenovia-sdk) gates arbitrary agent actions such as tool calls, database writes, API requests, and file operations without proxying an LLM call.
from xenovia_sdk import Xenovia

# One client per agent identity; the identity appears in policy and traces.
xenovia = Xenovia(api_key="xe_...", identity_id="billing-agent")

# The guard runs a policy check for the named capability before the
# function body executes; denied calls are gated rather than run.
@xenovia.guard(capability="payments.transfer")
def transfer_funds(payload: dict) -> dict:
    # run_transfer is your own business logic, not part of the SDK.
    return run_transfer(payload["amount"], payload["to"])

result = transfer_funds({"amount": 500, "to": "acct_123"})
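
The decorator suits functions you own; the SDK reference below also covers execute() for one-off checks. A hedged sketch of what that might look like — the argument names match the comparison table further down, but the decision object's fields are assumptions, not confirmed API:

# Assumed shape only: see the SDK reference for the real signature.
decision = xenovia.execute(
    capability="payments.transfer",
    payload={"amount": 500, "to": "acct_123"},
    session_id="sess_42",  # or auto_session=True, per the table below
)
if decision.allowed:  # hypothetical field name
    run_transfer(500, "acct_123")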

Xenovia Python SDK

Full SDK reference for execute(), @guard(), session handling, and error behavior.

Choose the right guide

|                      | Proxy mode                                            | SDK mode                                                 |
|----------------------|-------------------------------------------------------|----------------------------------------------------------|
| What is governed     | Every LLM call through the proxy                      | Arbitrary agent actions such as tools, writes, and APIs  |
| Code change required | base_url swap only                                    | execute() call or @guard() decorator                     |
| Policy input         | Full LLM request context                              | Capability string plus payload                           |
| Trace contains       | LLM request and response, tokens, latency, tool calls | Capability, payload, decision, and session               |
| Session tracking     | Automatic via Xenovia session resolution              | Explicit session_id or auto_session=True                 |
Use both together: proxy mode for LLM governance, SDK mode for downstream tool governance.
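
Concretely, combining them means the proxy client and the guard from the earlier sketches live side by side (the capability string and function body here are illustrative):

from openai import OpenAI
from xenovia_sdk import Xenovia

client = OpenAI(
    api_key="xe_...",
    base_url="https://runtime.xenovia.io/a/{proxy_id}/openai/v1",
)
xenovia = Xenovia(api_key="xe_...", identity_id="support-agent")

@xenovia.guard(capability="crm.update")  # hypothetical capability
def update_crm(payload: dict) -> dict:
    return payload  # stand-in for your own CRM write

# 1. The LLM call is governed by the proxy.
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Mark cust_7 active."}],
)
# 2. The downstream write is governed by the SDK.
update_crm({"record": "cust_7", "status": "active"})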

If your framework is not listed

  • If it accepts an OpenAI-compatible base_url, start from the OpenAI SDK guide.
  • If it only needs HTTP access, point it at https://runtime.xenovia.io/a/{proxy_id}/openai/v1; a raw HTTP sketch follows this list.
  • If the framework makes risky local calls after the model response, add the Xenovia Python SDK alongside it.
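
For raw HTTP access, a sketch with requests (bearer-token convention assumed, as in the proxy example above):

import requests

resp = requests.post(
    "https://runtime.xenovia.io/a/{proxy_id}/openai/v1/chat/completions",
    headers={
        "Authorization": "Bearer xe_...",  # assumption: Xenovia key as bearer token
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])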

Supported proxy endpoints

| Endpoint                   | Use case                              |
|----------------------------|---------------------------------------|
| POST /v1/chat/completions  | Chat, agents, tool calling            |
| POST /v1/responses         | OpenAI Agents SDK and multi-step runs |
| POST /v1/embeddings        | RAG and vector search                 |
| POST /v1/completions       | Legacy text completions               |
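
Each endpoint takes the standard OpenAI request shape. For instance, embeddings for a RAG pipeline go through the same client as the chat example; the embedding model name depends on the provider configured in your proxy:

from openai import OpenAI

client = OpenAI(
    api_key="xe_...",  # assumption: Xenovia key as bearer token
    base_url="https://runtime.xenovia.io/a/{proxy_id}/openai/v1",
)
emb = client.embeddings.create(
    model="text-embedding-3-small",  # assumption: provider-dependent
    input="How do I integrate Xenovia with LangChain?",
)
print(len(emb.data[0].embedding))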