Documentation Index
Fetch the complete documentation index at: https://docs.xenovia.io/llms.txt
Use this file to discover all available pages before exploring further.
The OpenAI Agents SDK is best paired with Xenovia through the Responses API (POST /v1/responses). Xenovia supports that endpoint natively, so each agent turn is traced and policy-checked without changing your orchestration flow.
Setup
pip install openai-agents
import os
from openai import AsyncOpenAI
from agents import Agent, Runner, OpenAIResponsesModel
client = AsyncOpenAI(
api_key=os.environ["XENOVIA_API_KEY"],
base_url=f"https://runtime.xenovia.io/a/{os.environ['XENOVIA_PROXY_ID']}/openai/v1"
)
model = OpenAIResponsesModel(
model="gpt-4o-mini",
openai_client=client
)
Single agent
agent = Agent(
name="Assistant",
instructions="You are a helpful assistant.",
model=model
)
# Runner.run is a coroutine: call it from async code,
# or use Runner.run_sync in a synchronous script.
result = await Runner.run(agent, "What is the capital of France?")
print(result.final_output)
Each Runner.run call may produce multiple Responses API turns. Each turn creates an independent trace in Xenovia. Use a session ID to group them.
Sessions via previous_response_id
The Responses API natively chains turns using previous_response_id. Xenovia’s session plugin recognises this field and automatically resolves sessions across chained calls, so you do not need an explicit session ID header if the SDK manages previous_response_id for you.
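To make the chaining concrete, here is a sketch of two raw Responses API calls linked by previous_response_id. The helper name chained_turns is ours, not part of either SDK; pass it a client configured with the Xenovia base URL from Setup.

```python
def chained_turns(client) -> str:
    """Run two Responses API turns linked by previous_response_id.

    Xenovia's session plugin sees the second call reference the first
    response id and groups both traces into one session automatically.
    """
    first = client.responses.create(
        model="gpt-4o-mini",
        input="Name one capital city in Europe.",
    )
    follow_up = client.responses.create(
        model="gpt-4o-mini",
        input="What country is that city in?",
        previous_response_id=first.id,  # links the turns for Xenovia
    )
    return follow_up.output_text
```

The Agents SDK issues the same previous_response_id field on your behalf, which is why no extra header is needed in the default case.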
To also group runs explicitly:
import uuid
session_id = str(uuid.uuid4())
client = AsyncOpenAI(
api_key=os.environ["XENOVIA_API_KEY"],
base_url=f"https://runtime.xenovia.io/a/{os.environ['XENOVIA_PROXY_ID']}/openai/v1",
default_headers={"X-Xenovia-Session-Id": session_id}
)
Explicit X-Xenovia-Session-Id takes priority over the previous_response_id chain. Use explicit IDs when you need grouping across separate Runner.run calls.
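For example, two otherwise independent runs can share one trace group. A minimal sketch assuming the environment variables from Setup; grouped_runs is an illustrative wrapper, not an SDK function:

```python
import uuid

# One session id for the whole workflow.
session_id = str(uuid.uuid4())

async def grouped_runs() -> None:
    """Two separate Runner.run calls grouped under one Xenovia session."""
    import os
    from openai import AsyncOpenAI
    from agents import Agent, Runner, OpenAIResponsesModel

    client = AsyncOpenAI(
        api_key=os.environ["XENOVIA_API_KEY"],
        base_url=f"https://runtime.xenovia.io/a/{os.environ['XENOVIA_PROXY_ID']}/openai/v1",
        # The same header on every request means both runs appear
        # under one session_id in the Traces view.
        default_headers={"X-Xenovia-Session-Id": session_id},
    )
    model = OpenAIResponsesModel(model="gpt-4o-mini", openai_client=client)
    agent = Agent(name="Assistant", instructions="You are a helpful assistant.", model=model)

    first = await Runner.run(agent, "Summarise our Q1 risks.")
    second = await Runner.run(agent, "Now draft mitigations for them.")
    print(first.final_output, second.final_output)
```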
Multi-agent handoff
research_agent = Agent(
name="Researcher",
instructions="Search and summarise information on the given topic.",
model=model
)
writer_agent = Agent(
name="Writer",
instructions="Write a clear report based on provided research.",
model=model,
handoffs=[research_agent]
)
result = await Runner.run(writer_agent, "Write a report on AI governance")
print(result.final_output)
Each agent in the handoff chain calls the Responses API independently. All calls are traced individually. Filter by session_id in the Traces view to see the full conversation in order.
Function tools
from agents import function_tool
@function_tool
def get_policy_status(proxy_id: str) -> str:
"""Check the current policy status for a proxy."""
return f"Policy active for proxy {proxy_id}"
agent = Agent(
name="PolicyChecker",
instructions="Check policy status when asked.",
tools=[get_policy_status],
model=model
)
Tool names defined on the agent are exposed to your Rego policy as input.tool_names. A block rule that matches a tool name returns 403 before the model is ever called.
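A matching rule might look like the following sketch. Only input.tool_names is documented above; the package name and the deny-rule shape are assumptions about Xenovia's policy schema, shown purely to illustrate the match:

```rego
package xenovia.policy

# Block any request whose agent exposes a destructive tool.
deny[msg] {
    input.tool_names[_] == "delete_records"
    msg := "tool delete_records is not permitted"
}
```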
Handling policy blocks
from openai import PermissionDeniedError
try:
result = await Runner.run(agent, "Delete all user records")
except PermissionDeniedError as e:
print(f"Blocked by policy: {e.message}")
The X-Xenovia-Trace-Id header on the 403 response identifies the blocking trace in the platform.
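To jump from the exception straight to the blocking trace, the header can be read off the raw response. A sketch: the helper name xenovia_trace_id is ours, while the .response attribute is standard on openai's APIStatusError subclasses (which PermissionDeniedError extends).

```python
def xenovia_trace_id(err):
    """Return the blocking trace id from a policy-denied 403, if present.

    openai's PermissionDeniedError carries the raw httpx response on
    `err.response`, so the platform's headers are still available.
    """
    return err.response.headers.get("X-Xenovia-Trace-Id")

# In the except block above:
#     print(f"Blocked by policy: {e.message} (trace {xenovia_trace_id(e)})")
```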