
Python SDKs & Framework Integrations

Adjudon provides a core Python SDK plus drop-in integrations for six popular frameworks.


Core SDK — adjudon

pip install adjudon

# Async support
pip install 'adjudon[async]'

Constructor parameters

from adjudon import Adjudon

client = Adjudon(
    api_key="adj_agent_abc123...",       # Required. Agent API key.
    agent_id="my-agent",                 # Required. Agent identifier.
    base_url="https://api.adjudon.com",  # Default.
    fail_mode="open",                    # "open" (default) or "closed". Open = never block on errors.
    timeout=10.0,                        # Seconds. Default: 10.0.
    max_retries=3,                       # Default: 3.
    redact_pii=False,                    # Pre-redact PII client-side before sending. Default: False.
)
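The fail_mode and max_retries settings work together: the client retries failed requests, and only once retries are exhausted does fail_mode decide whether the error surfaces. A minimal sketch of that fail-open pattern, using a hypothetical send callable rather than the SDK's internals:

```python
import time

def trace_with_failure_policy(send, payload, fail_mode="open", max_retries=3, backoff=0.0):
    """Sketch: retry send(), then either swallow the error (open) or raise it (closed)."""
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            return send(payload)
        except Exception as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # exponential backoff between attempts

    if fail_mode == "open":
        # Fail open: a tracing outage never blocks the agent.
        return {"status": "passthrough", "error": str(last_error)}
    raise last_error  # fail closed: surface the error to the caller


# Demo: a send() that is always down.
def always_down(payload):
    raise RuntimeError("collector unreachable")

result = trace_with_failure_policy(always_down, {"prompt": "hi"}, fail_mode="open", max_retries=2)
print(result["status"])  # passthrough
```

With fail_mode="closed" the same outage would raise instead, which is the right choice when a missing audit trail must halt the agent.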

client.trace()

result = client.trace(
    input_context={"prompt": "What is the capital of France?"},  # Required.
    output_decision={"action": "llm_response", "text": "Paris is the capital of France."},  # Required.
    metadata={"model": "gpt-4o", "session_id": "abc"},  # Optional.
    idempotency_key="unique-request-id",  # Optional. Prevents duplicate traces.
    wait=True,  # Default: True. If False, fires in background.
)

Returns a TraceResponse:

result.status      # "approved", "flagged", "blocked", or "passthrough"
result.trace_id    # str — Adjudon trace ID
result.confidence  # float — 0.0 to 1.0
result.message     # str — human-readable status message
result.reason      # str | None — policy block reason, if blocked

Shutdown

client.drain()  # Wait for background traces to complete before process exit.
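drain() exists because traces submitted with wait=False run in the background, and a process that exits before they flush loses them. A plausible sketch of that pattern with concurrent.futures (an illustration of the idea, not the SDK's actual internals):

```python
from concurrent.futures import ThreadPoolExecutor, wait

class BackgroundTracer:
    """Fire-and-forget submission with an explicit drain() before process exit."""

    def __init__(self, max_workers=4):
        self._pool = ThreadPoolExecutor(max_workers=max_workers)
        self._pending = []

    def submit(self, fn, *args):
        # Returns immediately; the trace is sent on a worker thread.
        self._pending.append(self._pool.submit(fn, *args))

    def drain(self):
        # Block until every in-flight trace has finished, then shut down.
        wait(self._pending)
        self._pool.shutdown()


sent = []
tracer = BackgroundTracer()
for i in range(5):
    tracer.submit(sent.append, i)
tracer.drain()
print(len(sent))  # 5
```

Without the drain() call, the process could exit while some of the five submissions were still queued.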

Context manager

with Adjudon(api_key="adj_agent_abc123...", agent_id="my-agent") as client:
    result = client.trace(...)
    # drain() is called automatically on __exit__

Async client

from adjudon import AsyncAdjudon

async with AsyncAdjudon(api_key="adj_agent_abc123...", agent_id="my-agent") as client:
    result = await client.atrace(
        input_context={"prompt": "What is the capital of France?"},
        output_decision={"action": "llm_response", "text": "Paris is the capital of France."},
    )
    print(result.status)

PII redaction

With redact_pii=True, the client scrubs the following patterns before sending:

  • Email addresses
  • IBANs
  • Credit card numbers (Luhn-validated)
  • Social Security Numbers

client = Adjudon(api_key="adj_agent_abc123...", agent_id="my-agent", redact_pii=True)
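Luhn validation is what keeps credit card redaction from firing on arbitrary digit runs: a candidate is only scrubbed if its checksum passes. A standalone sketch of that check (the standard Luhn algorithm, not the SDK's code):

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) < 13:  # shorter runs are not plausible card numbers
        return False
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0


print(luhn_valid("4532015112830366"))  # True  (a well-formed test number)
print(luhn_valid("1234567812345678"))  # False (random digits, checksum fails)
```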

LangChain — adjudon-langchain

pip install adjudon-langchain

from langchain_openai import ChatOpenAI
from adjudon_langchain import AdjudonCallbackHandler

handler = AdjudonCallbackHandler(api_key="adj_agent_abc123...", agent_id="my-agent")
llm = ChatOpenAI(callbacks=[handler])
response = llm.invoke("What is the capital of France?")

Constructor parameters:

api_key: str         — Agent API key.
agent_id: str        — Agent identifier.
client: Adjudon      — Pre-built client (alternative to api_key + agent_id).
sample_rate: float   — Fraction of calls to trace. Default: 1.0. Use 0.1 for 10% sampling.
raise_on_block: bool — Raise AdjudonBlockedException on policy block. Default: True.
Traced events: on_llm_start, on_llm_end, on_llm_error, on_tool_start, on_tool_end, on_tool_error, on_agent_action

Policy blocking: With raise_on_block=True (default), a policy block raises AdjudonBlockedException and stops the chain. Set raise_on_block=False to log the block and continue.

Async: Use AsyncAdjudonCallbackHandler — same API, async methods.


LlamaIndex — adjudon-llamaindex

pip install adjudon-llamaindex

from llama_index.core import Settings
from adjudon_llamaindex import AdjudonCallbackHandler

handler = AdjudonCallbackHandler(api_key="adj_agent_abc123...", agent_id="my-agent")
Settings.callback_manager.add_handler(handler)

Traced events: LLM, QUERY, RETRIEVE, FUNCTION_CALL, AGENT_STEP

Ignored events: CHUNKING, NODE_PARSING — these are infrastructure events, not compliance-relevant decisions.


CrewAI — adjudon-crewai

pip install adjudon-crewai

Per-task callback:

from crewai import Task
from adjudon_crewai import AdjudonTaskCallback

callback = AdjudonTaskCallback(
    api_key="adj_agent_abc123...",
    agent_id="my-crew",
    task_name="Analyze report",  # Optional label for the trace.
)
task = Task(description="Analyze report", callback=callback)

Patch an entire crew at once:

from adjudon import Adjudon
from adjudon_crewai import patch_crew

adjudon_client = Adjudon(api_key="adj_agent_abc123...", agent_id="my-crew")
patch_crew(crew, client=adjudon_client)  # crew: your existing Crew instance
crew.kickoff()

patch_crew() attaches AdjudonTaskCallback to every task in the crew.

Constructor parameters: api_key, agent_id, client, task_name, sample_rate, raise_on_block


AutoGen — adjudon-autogen

pip install adjudon-autogen

from autogen import AssistantAgent
from adjudon_autogen import register_adjudon_hook

assistant = AssistantAgent("assistant", llm_config={...})
register_adjudon_hook(assistant, api_key="adj_agent_abc123...", agent_id="my-bot")

Hooks into AutoGen's process_message_before_send — every outgoing message is traced. The message passes through unchanged (fail-open).
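The trace-but-never-alter contract can be sketched as a hook that records the message as a side effect and always returns it unchanged. This is a toy illustration of the pattern, not adjudon-autogen's code, and the four-argument hook signature is an assumption based on AutoGen's hook conventions:

```python
def make_trace_hook(record):
    """Build a process_message_before_send-style hook: trace, never alter."""
    def hook(sender, message, recipient, silent):
        try:
            record(message)  # side effect only
        except Exception:
            pass  # fail open: a tracing error never blocks delivery
        return message  # the message always passes through unchanged
    return hook


seen = []
hook = make_trace_hook(seen.append)
out = hook("assistant", "hello", "user", False)
print(out)  # hello
```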


PydanticAI — adjudon-pydantic-ai

pip install adjudon-pydantic-ai

from pydantic_ai import Agent
from adjudon_pydantic_ai import AdjudonAgent

raw_agent = Agent("openai:gpt-4o")
agent = AdjudonAgent(raw_agent, api_key="adj_agent_abc123...", agent_id="my-agent")

result = agent.run_sync("What is the capital of France?")
print(result.data) # "Paris is the capital of France."

AdjudonAgent wraps run_sync() and async run(). All other Agent attributes are accessible transparently via __getattr__. Token usage is automatically captured from RunResult.usage().
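The transparent delegation described above is the standard Python __getattr__ pattern: wrapped methods are defined on the wrapper, and everything else falls through to the inner object. A minimal sketch with a fake agent standing in for the real one:

```python
class TransparentWrapper:
    """Wrap selected methods; delegate every other attribute to the inner object."""

    def __init__(self, inner):
        self._inner = inner

    def run_sync(self, prompt):
        # Wrapped method: this is where tracing would happen (elided here).
        return self._inner.run_sync(prompt)

    def __getattr__(self, name):
        # Called only when normal lookup fails, so _inner and run_sync are unaffected.
        return getattr(self._inner, name)


class FakeAgent:
    model = "demo-model"
    def run_sync(self, prompt):
        return f"echo: {prompt}"


agent = TransparentWrapper(FakeAgent())
print(agent.run_sync("hi"))  # echo: hi
print(agent.model)           # demo-model  (falls through via __getattr__)
```

Because __getattr__ only fires on failed lookups, the wrapper adds no overhead to the methods it defines explicitly.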


Anthropic Python SDK — adjudon-anthropic-tools

pip install adjudon-anthropic-tools

import anthropic
from adjudon_anthropic_tools import wrap_anthropic

client = wrap_anthropic(
    anthropic.Anthropic(),
    api_key="adj_agent_abc123...",
    agent_id="my-agent",
)

# Every client.messages.create() is now traced automatically.
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=100,
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

Async:

from adjudon_anthropic_tools import wrap_async_anthropic

client = wrap_async_anthropic(anthropic.AsyncAnthropic(), api_key="adj_agent_abc123...", agent_id="my-agent")
response = await client.messages.create(...)

What gets captured: prompt (last user message), system prompt, completion text, tool_use blocks, model name, latency in ms, input/output token counts, Anthropic message ID.

Patches messages.create in-place. The original client is returned — no wrapper object to manage.
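Patching in place means replacing the bound method on the client you were handed and returning that same object, so downstream code that checks identity or type keeps working. A sketch of the technique against stand-in classes (the fake client and the captured fields are illustrative, not the library's):

```python
import functools
import time

class FakeMessages:
    def create(self, **kwargs):
        return {"id": "msg_1", "content": "Paris"}

class FakeClient:
    def __init__(self):
        self.messages = FakeMessages()

def wrap_client(client, on_trace):
    """Patch messages.create in place and return the same client object."""
    original = client.messages.create

    @functools.wraps(original)
    def traced_create(**kwargs):
        start = time.perf_counter()
        response = original(**kwargs)  # delegate to the untouched method
        on_trace({
            "latency_ms": (time.perf_counter() - start) * 1000,
            "response_id": response["id"],
        })
        return response

    client.messages.create = traced_create
    return client  # no wrapper object: identity is preserved


traces = []
client = FakeClient()
assert wrap_client(client, traces.append) is client  # same object back
resp = client.messages.create(model="demo", messages=[])
print(resp["id"], len(traces))  # msg_1 1
```

The trade-off versus a wrapper object is that the patch is invisible: anyone holding a reference to the client gets tracing, whether they expect it or not.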