# Node.js SDKs & Framework Integrations
Adjudon provides a core Node.js SDK plus framework wrappers for OpenAI, Anthropic, and the Vercel AI SDK.
## Core SDK — `@adjudon/node`

```shell
npm install @adjudon/node
```
Zero runtime dependencies. Works in Node.js, Deno, Vercel Edge Functions, and Cloudflare Workers.
### Constructor config
```typescript
import { Adjudon } from '@adjudon/node';

const client = new Adjudon({
  apiKey: 'adj_agent_abc123...',              // Required.
  agentId: 'my-agent',                        // Required.
  baseUrl: 'https://api.adjudon.com/api/v1',  // Default.
  failMode: 'open',                           // 'open' (default) or 'closed'.
  maxRetries: 3,                              // Default: 3.
  timeoutMs: 10000,                           // Default: 10000ms.
  wait: true,                                 // Default: true. If false, fire-and-forget.
  redactPii: false,                           // Default: false.
});
```
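The `failMode` setting decides what happens when the Adjudon API is unreachable. A minimal sketch of the semantics, assuming fail-open degrades to a passthrough result while fail-closed surfaces the error — the `traceWithFailMode` helper and trimmed `TraceResponse` shape below are illustrative, not part of the SDK:

```typescript
type TraceResponse = {
  status: 'approved' | 'flagged' | 'blocked' | 'passthrough';
  traceId: string;
};

// Illustrative: how fail-open vs. fail-closed could behave when the
// underlying trace call errors (network down, timeout, etc.).
async function traceWithFailMode(
  callTrace: () => Promise<TraceResponse>,
  failMode: 'open' | 'closed',
): Promise<TraceResponse> {
  try {
    return await callTrace();
  } catch (err) {
    if (failMode === 'open') {
      // Fail open: let the agent proceed as if the call were untraced.
      return { status: 'passthrough', traceId: '' };
    }
    // Fail closed: propagate the error so the caller can halt the action.
    throw err;
  }
}
```

With `failMode: 'open'` an Adjudon outage never takes your agent down; with `'closed'` no action proceeds without a verdict.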
### `client.trace()`
```typescript
const result = await client.trace({
  inputContext: { prompt: 'What is the capital of France?' },                          // Required.
  outputDecision: { action: 'llm_response', text: 'Paris is the capital of France.' }, // Required.
  metadata: { model: 'gpt-4o', sessionId: 'abc' },  // Optional.
  agentId: 'my-agent',                              // Optional — overrides key scope.
  schemaVersion: '2026-04-11',                      // Optional — defaults to latest.
});
```
Returns a `TraceResponse`:
```typescript
result.status      // "approved" | "flagged" | "blocked" | "passthrough"
result.traceId     // string — Adjudon trace ID
result.confidence  // number — 0.0 to 1.0
result.message     // string — human-readable status message
result.reason      // string | null — block reason, if blocked
```
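One way to act on the four statuses. The `TraceResponse` type here is reconstructed from the fields above; the decision mapping itself (block only on `blocked`, log `flagged` for review) is a suggestion, not SDK behavior:

```typescript
type TraceResponse = {
  status: 'approved' | 'flagged' | 'blocked' | 'passthrough';
  traceId: string;
  confidence: number;
  message: string;
  reason: string | null;
};

// Suggested mapping: only a hard "blocked" stops the agent;
// "flagged" proceeds but is worth surfacing for review.
function shouldProceed(result: TraceResponse): boolean {
  switch (result.status) {
    case 'blocked':
      console.warn(`Trace ${result.traceId} blocked: ${result.reason}`);
      return false;
    case 'flagged':
      console.warn(`Trace ${result.traceId} flagged (confidence ${result.confidence})`);
      return true;
    default: // 'approved' or 'passthrough'
      return true;
  }
}
```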
### Shutdown
```typescript
await client.drain();  // Wait for pending traces before process exit.
client.pendingTraces;  // Number of in-flight traces.
```
## OpenAI Wrapper — `@adjudon/openai`

```shell
npm install @adjudon/openai
```
```typescript
import OpenAI from 'openai';
import { wrapOpenAI } from '@adjudon/openai';

const client = wrapOpenAI(new OpenAI(), {
  adjudon: { apiKey: 'adj_agent_abc123...', agentId: 'my-agent' },
});

// Every client.chat.completions.create() call is now traced.
const completion = await client.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: 'What is the capital of France?' }],
});
```
Fire-and-forget by default — Adjudon tracing never adds latency to the OpenAI call.
Options:

- `enforce: true` — throw `AdjudonBlockedError` on policy blocks
- `sampleRate: 0.1` — trace only 10% of calls
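A sketch of how these two options could compose. The `AdjudonBlockedError` name comes from the option description above; the sampling gate and verdict handling are assumptions about the wrapper's internals, shown here only to make the semantics concrete:

```typescript
// Name taken from the enforce option; the class shape is assumed.
class AdjudonBlockedError extends Error {}

// Assumed sampling gate: trace only a sampleRate fraction of calls.
// The rand parameter is injectable for testing.
function shouldSample(sampleRate: number, rand: () => number = Math.random): boolean {
  return rand() < sampleRate;
}

// Assumed enforce behavior: turn a "blocked" verdict into a thrown error.
function handleVerdict(
  status: 'approved' | 'flagged' | 'blocked',
  opts: { enforce: boolean },
): void {
  if (opts.enforce && status === 'blocked') {
    throw new AdjudonBlockedError('Blocked by Adjudon policy');
  }
}
```

With `enforce: false` (the default shown above), a `blocked` verdict is recorded but the completion is still returned to the caller.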
## Anthropic Wrapper — `@adjudon/anthropic`

```shell
npm install @adjudon/anthropic
```
```typescript
import Anthropic from '@anthropic-ai/sdk';
import { wrapAnthropic } from '@adjudon/anthropic';

const client = wrapAnthropic(new Anthropic(), {
  adjudon: { apiKey: 'adj_agent_abc123...', agentId: 'my-agent' },
});

// Every client.messages.create() call is now traced.
const message = await client.messages.create({
  model: 'claude-3-5-sonnet-20241022',
  max_tokens: 100,
  messages: [{ role: 'user', content: 'What is the capital of France?' }],
});
```
Patches `client.messages.create()` in-place. Captures `text` blocks and `tool_use` blocks, token usage, model, and latency.
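The capture described above roughly amounts to walking the message's content blocks. A sketch with a simplified block type — the real Anthropic SDK types are richer, and how the wrapper actually serializes tool calls into the trace is not specified here:

```typescript
// Simplified stand-in for Anthropic's content block union.
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'tool_use'; id: string; name: string; input: unknown };

// Collect what a trace would capture: the response text plus tool-call names.
function captureBlocks(content: ContentBlock[]): { text: string; toolCalls: string[] } {
  const text = content
    .filter((b): b is Extract<ContentBlock, { type: 'text' }> => b.type === 'text')
    .map((b) => b.text)
    .join('');
  const toolCalls = content
    .filter((b): b is Extract<ContentBlock, { type: 'tool_use' }> => b.type === 'tool_use')
    .map((b) => b.name);
  return { text, toolCalls };
}
```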
## Vercel AI SDK — `@adjudon/vercel-ai`

```shell
npm install @adjudon/vercel-ai
```
```typescript
import { generateText, wrapLanguageModel } from 'ai';
import { openai } from '@ai-sdk/openai';
import { adjudonMiddleware } from '@adjudon/vercel-ai';

const model = wrapLanguageModel({
  model: openai('gpt-4o'),
  middleware: adjudonMiddleware({
    apiKey: 'adj_agent_abc123...',
    agentId: 'my-agent',
  }),
});

const { text } = await generateText({
  model,
  prompt: 'What is the capital of France?',
});
```
**Streaming:** works with `streamText` as well. The middleware accumulates `text-delta` chunks and sends a single trace after the stream completes — no partial traces.
```typescript
import { streamText } from 'ai';

const { textStream } = await streamText({ model, prompt: '...' });
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
// The trace is sent automatically after the stream ends.
```