Documentation Index

Fetch the complete documentation index at: https://docs.xenovia.io/llms.txt

Use this file to discover all available pages before exploring further.

Core entities

Xenovia organises runtime governance around six core entities. The first five form a strict hierarchy (shown in the relationship diagram below); operators sit outside it as reviewers.

Proxies

Operator-created entry points. Each proxy has an owner, environment label, declared scope, provider configuration, and attached policies. All traffic governance is scoped to a proxy.
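
As a sketch only (the field names below are illustrative assumptions, not the actual Xenovia schema), a proxy definition carries roughly this shape:

```python
# Illustrative only — field names are assumptions, not the real schema.
proxy = {
    "proxy_id": "px_demo",                    # hypothetical identifier
    "owner": "platform-team",                 # operator who created the proxy
    "environment": "production",              # environment label
    "scope": "customer-support-agent",        # declared scope
    "provider": {"type": "anthropic"},        # provider configuration
    "policies": {
        "request_stage": "request.rego",      # request-stage Rego policy
        "response_stage": "response.rego",    # response-stage Rego policy
    },
}
```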

Sessions

Groups of related LLM calls. Xenovia resolves a session for every request using a five-strategy chain. Sessions track turn counts and carry custom path labels.

Policies

Rego rules evaluated by OPA. Each proxy can have independent request-stage and response-stage policies. Decisions are allow, block, or redact.
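
For orientation only: the snippet below obtains a request-stage decision from a locally running OPA server through OPA's standard Data API. The package path (xenovia/request) and the decision fields mirror the trace fields shown in the relationship diagram below, but both are assumptions rather than Xenovia's actual integration.

```python
import requests

# Hypothetical request document evaluated by a request-stage Rego policy.
policy_input = {
    "input": {
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Summarise this support ticket."}],
    }
}

# OPA's Data API: POST /v1/data/<package path>/<rule> with an "input" document.
# The package path "xenovia/request" and the rule name "decision" are illustrative.
resp = requests.post(
    "http://localhost:8181/v1/data/xenovia/request/decision",
    json=policy_input,
    timeout=0.5,
)
decision = resp.json().get("result", {})
print(decision)  # e.g. {"outcome": "allow", "rule_id": "default", "reason": "..."}
```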

Traces

Structured records of every call: request/response bodies, token counts, latency, policy decision, intent score, tool calls, and custom properties.

Intents

Semantic capability definitions attached to a proxy. Xenovia scores each request against the declared intent and can allow, block, or escalate based on the score.

Operators

Reviewers responsible for approvals and incident response. Escalated actions notify operators asynchronously with full trace context.

Entity relationships

Proxy
 ├── Provider configuration (OpenAI / Anthropic / Gemini / Azure / Bedrock / Groq / vLLM)
 ├── Request-stage Rego policy
 ├── Response-stage Rego policy
 ├── Intent definition + trigger
 └── Sessions
      └── Traces (one per LLM call)
           ├── Policy decision (outcome, rule_id, reason)
           ├── Intent score + action
           ├── Tool call records
           └── Custom properties (X-Xenovia-Property-*)

Request lifecycle

Every proxied request passes through six plugins in order:

1. Auth

The xe_... API key is verified against Redis (5-minute cache). The key resolves a proxy identity containing proxy_id and org_id. A path hint header (X-Xenovia-Agent-Path) is validated to prevent cross-proxy key use.
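
A minimal server-side sketch of this check, assuming keys are cached in Redis under their SHA-256 hash and that the path hint carries the proxy id; both are assumptions, and the cache-miss lookup against the primary store is elided:

```python
import hashlib
import json

import redis

r = redis.Redis()

def resolve_proxy_identity(api_key: str, path_hint: str) -> dict:
    """Illustrative sketch only, not the actual Xenovia implementation."""
    # The raw key is never stored or logged; only its SHA-256 hash is used.
    key_hash = hashlib.sha256(api_key.encode()).hexdigest()
    cached = r.get(f"apikey:{key_hash}")       # hypothetical cache key layout
    if cached is None:
        raise PermissionError("unknown or expired API key")
    identity = json.loads(cached)              # {"proxy_id": ..., "org_id": ...}
    # The path hint must match the proxy the key was issued for, so a key
    # cannot be replayed against a different proxy.
    if path_hint and path_hint != identity["proxy_id"]:
        raise PermissionError("X-Xenovia-Agent-Path does not match this key")
    return identity
```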

2. Provider routing

The request’s target provider is rewritten based on the proxy’s configuration. OpenAI-format requests can be transparently routed to Anthropic, Gemini, Azure, Bedrock, Groq, or a self-hosted vLLM endpoint.

3. Session resolution

A session UUID is resolved using the five-strategy chain: explicit header → Responses API chain → user field → message fingerprint → new UUID. Turn count is incremented atomically.
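
A sketch of that chain is below. The previous_response_id and user fields exist in the OpenAI request formats; treating X-Xenovia-Session-Id as a request header, the in-memory lookup table, and the fingerprinting details are assumptions.

```python
import uuid

# Hypothetical lookup table mapping a previous response id to its session.
sessions_by_response_id: dict[str, str] = {}

def resolve_session(headers: dict, body: dict) -> str:
    """Illustrative sketch of the five-strategy chain, in priority order."""
    # 1. Explicit header (assumed to mirror the response header of the same name).
    if headers.get("X-Xenovia-Session-Id"):
        return headers["X-Xenovia-Session-Id"]
    # 2. Responses API chain: previous_response_id points back into an existing session.
    prev = body.get("previous_response_id")
    if prev and prev in sessions_by_response_id:
        return sessions_by_response_id[prev]
    # 3. The OpenAI-style "user" field groups calls made on behalf of one end user.
    if body.get("user"):
        return str(uuid.uuid5(uuid.NAMESPACE_URL, body["user"]))
    # 4. Fingerprint of the conversation prefix: an identical history rejoins its session.
    if body.get("messages"):
        return str(uuid.uuid5(uuid.NAMESPACE_URL, repr(body["messages"][:-1])))
    # 5. Otherwise start a brand-new session.
    return str(uuid.uuid4())
```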

4. Trace initialisation

A trace record is opened with the session context. X-Xenovia-Session-Id and X-Xenovia-Trace-Id are stamped into the response headers before the first byte (required for streaming).
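
Because both headers are written before the first byte, a client can capture them even on streaming calls. The snippet below uses the OpenAI Python SDK's raw-response accessor; the base URL is the proxy URL documented further down, and passing the xe_... key as the standard API key is an assumption based on the endpoint being OpenAI-compatible.

```python
from openai import OpenAI

client = OpenAI(
    api_key="xe_...",  # Xenovia proxy key, assumed to be sent as the standard API key
    base_url="https://runtime.xenovia.io/a/{proxy_id}/openai/v1",
)

# with_raw_response exposes the underlying HTTP response so headers can be read.
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "ping"}],
)
print(raw.headers.get("X-Xenovia-Session-Id"))
print(raw.headers.get("X-Xenovia-Trace-Id"))

completion = raw.parse()  # the usual ChatCompletion object
print(completion.choices[0].message.content)
```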

5. Policy evaluation

The request is compiled and evaluated against the Rego policy using OPA (eval timeout: 200ms). For redact decisions, PII patterns (email, SSN) are removed from messages and tool arguments before forwarding.
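
As an illustration of the redact path only (the actual patterns and placeholder text are not specified here), a scrubber over message content and tool-call arguments might look like:

```python
import re

# Illustrative PII patterns; the patterns Xenovia actually uses are not documented here.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label}]", text)
    return text

def redact_request(body: dict) -> dict:
    """Remove PII from messages and tool arguments before forwarding upstream."""
    for message in body.get("messages", []):
        if isinstance(message.get("content"), str):
            message["content"] = scrub(message["content"])
        for call in message.get("tool_calls", []) or []:
            call["function"]["arguments"] = scrub(call["function"]["arguments"])
    return body
```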

6. Intent scoring

If an intent definition is configured and the trigger conditions are met, the request is sent to the guardrail service for semantic scoring. Requests that score above the configured threshold are blocked or escalated.
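
A minimal sketch of that thresholding follows; the numeric thresholds are assumptions, while the three actions (allow, block, escalate) come from the intent definition itself.

```python
# Threshold values are illustrative assumptions.
BLOCK_AT = 0.9
ESCALATE_AT = 0.7

def apply_intent_action(score: float) -> str:
    """Map a semantic intent score onto allow / escalate / block."""
    if score >= BLOCK_AT:
        return "block"      # the block reason stays server-side (see the last section)
    if score >= ESCALATE_AT:
        return "escalate"   # operators are notified asynchronously with trace context
    return "allow"
```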

After the LLM responds, the response-stage policy runs (outcome: allow, block_response, redact_response) and the completed trace is persisted asynchronously.

Proxy URL

Every proxy exposes an OpenAI-compatible base URL:
https://runtime.xenovia.io/a/{proxy_id}/openai/v1
The /openai/v1 suffix exposes the standard OpenAI routes (/v1/chat/completions, /v1/responses, /v1/embeddings, /v1/completions). The runtime also supports per-provider paths for non-OpenAI formats.
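
In practice this means any OpenAI-compatible client can be pointed at the proxy unchanged. The sketch below assumes the xe_... key is supplied as the standard API key and shows an illustrative custom property header; the ticket_id suffix is an example, not a required name.

```python
from openai import OpenAI

client = OpenAI(
    api_key="xe_...",                                              # Xenovia proxy key
    base_url="https://runtime.xenovia.io/a/{proxy_id}/openai/v1",  # proxy base URL
    default_headers={
        # Recorded on the trace as a custom property (X-Xenovia-Property-*).
        "X-Xenovia-Property-ticket_id": "T-1042",
    },
)

completion = client.chat.completions.create(
    model="gpt-4o",  # the proxy may route this transparently to another provider
    messages=[{"role": "user", "content": "Summarise the attached ticket."}],
)
print(completion.choices[0].message.content)
```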

What the platform does not see

  • Upstream provider credentials — these are held in the proxy configuration, not in your application.
  • Raw API keys — only the first 8 hex characters of each key's SHA-256 hash are logged.
  • Intent block reasons — the reason is logged server-side only and never forwarded to the agent.