Integration Guide
Connect your application to the Mind Layer. An end-to-end guide with examples for the REST API and the official SDKs.
Overview
Your backend manages business logic and user sessions. Call the Mind Layer for agent intelligence — it owns memory, personality, mood, relationships, and context assembly.
Integrate via the REST API directly or through the official SDKs for Go, TypeScript, and Python.
Official SDKs & Plugins
Official SDKs for Go, TypeScript, and Python, plus an OpenClaw plugin. Each SDK wraps the full REST API with typed methods, SSE streaming, automatic retries, and error handling.
Go
go get github.com/sonz-ai/sonzai-go
TypeScript / JavaScript
npm install @sonzai-labs/agents
Python
pip install sonzai
OpenClaw Plugin
npm install @sonzai-labs/openclaw-context
REST API
JSON-based endpoints. Chat responses stream via Server-Sent Events (SSE).
Authentication
All REST requests use Bearer authentication with your project API key:
# All REST requests use Bearer auth with your project API key
curl -H "Authorization: Bearer sk_your_api_key" \
https://api.sonz.ai/api/v1/agents/{agentId}/chat
Core Interaction Flow (REST)
# Chat (SSE streaming response)
POST /api/v1/agents/{agentId}/chat
{ "messages": [{"role":"user","content":"Hello!"}], "user_id": "user-123" }
# Response: Server-Sent Events
# data: {"choices":[{"delta":{"content":"Hi"}}]}
# data: [DONE]
SSE Parsing
Each line starts with the prefix "data: ". Strip the prefix and JSON.parse the remainder. The stream ends with data: [DONE].
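As a sketch, parsing a single SSE line can look like the following (the ChatChunk type mirrors the example payload above and is an assumption, not the SDK's exported type):

```typescript
// Minimal parser for one SSE line from the chat stream.
// Assumed chunk shape, matching the example payload above.
type ChatChunk = { choices: { delta: { content?: string } }[] };

function parseSseLine(line: string): ChatChunk | "DONE" | null {
  if (!line.startsWith("data: ")) return null;      // ignore comments / blank lines
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return "DONE";          // end-of-stream sentinel
  return JSON.parse(payload) as ChatChunk;          // one streamed delta
}
```

Accumulate the content of each delta to reconstruct the full reply.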
Available REST Endpoints
POST /api/v1/agents Create agent
GET /api/v1/agents List agents
GET /api/v1/agents/{agentId} Get agent
POST /api/v1/agents/{agentId}/chat Chat (SSE streaming)
GET /api/v1/agents/{agentId}/notifications Pending notifications
POST /api/v1/agents/{agentId}/notifications/{id}/consume Consume notification
GET /api/v1/agents/{agentId}/notifications/history Notification history
SDK Quickstart
All three SDKs wrap the same REST API with identical capabilities. Pick whichever matches your stack; they're all first-class.
import { Sonzai } from "@sonzai-labs/agents";
const client = new Sonzai({ apiKey: "sk_your_api_key" });
// Chat (non-streaming)
const response = await client.agents.chat({
agent: "agent-id",
messages: [{ role: "user", content: "Hello!" }],
userId: "user-123",
});
console.log(response.content);
// Chat (streaming)
for await (const event of client.agents.chatStream({
agent: "agent-id",
messages: [{ role: "user", content: "Tell me a story" }],
userId: "user-123",
language: "en",
timezone: "America/New_York",
})) {
process.stdout.write(event.choices?.[0]?.delta?.content ?? "");
}
// Memory, personality, context engine data
const memory = await client.agents.memory.list("agent-id", { userId: "user-123" });
const personality = await client.agents.personality.get("agent-id");
const mood = await client.agents.getMood("agent-id", { userId: "user-123" });
Server-Side Only
All SDKs are for server-side use. Never expose API keys in browser code. See Browser / Frontend Apps below for the proxy pattern.
Browser / Frontend Apps
Server-Side Proxy Required
The Sonzai API does not accept browser (client-side) requests. API keys must never be exposed in frontend code. This is the same pattern used by OpenAI, Anthropic, and other AI API providers.
For web apps (React, Next.js, Vue, etc.), create a backend API route that proxies to Sonzai. Your frontend calls your server; your server calls Sonzai with the API key.
Next.js API Route
// app/api/chat/route.ts (runs on your server)
import { Sonzai } from "@sonzai-labs/agents";
const client = new Sonzai({ apiKey: process.env.SONZAI_API_KEY! });
export async function POST(req: Request) {
const { agentId, messages, userId } = await req.json();
const stream = client.agents.chatStream({ agent: agentId, messages, userId });
return new Response(
new ReadableStream({
async start(controller) {
for await (const event of stream) {
controller.enqueue(new TextEncoder().encode(
`data: ${JSON.stringify(event)}\n\n`
));
}
controller.enqueue(new TextEncoder().encode("data: [DONE]\n\n"));
controller.close();
},
}),
{ headers: { "Content-Type": "text/event-stream" } }
);
}
Frontend (Any Framework)
// Calls YOUR server, not Sonzai directly
const res = await fetch("/api/chat", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
agentId: "agent-uuid",
messages: [{ role: "user", content: "Hello!" }],
userId: "user-123",
}),
});
const reader = res.body!.getReader();
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;
// Parse SSE chunks from your proxy
console.log(decoder.decode(value));
}
Express / Fastify
// server.ts
import express from "express";
import { Sonzai } from "@sonzai-labs/agents";
const app = express();
const client = new Sonzai({ apiKey: process.env.SONZAI_API_KEY! });
app.post("/api/chat", async (req, res) => {
const { agentId, messages, userId } = req.body;
res.setHeader("Content-Type", "text/event-stream");
for await (const event of client.agents.chatStream({ agent: agentId, messages, userId })) {
res.write(`data: ${JSON.stringify(event)}\n\n`);
}
res.write("data: [DONE]\n\n");
res.end();
});
Connection Setup
Every SDK reads SONZAI_API_KEY from the environment by default. Override the base URL for self-hosted or local development.
import { Sonzai } from "@sonzai-labs/agents";
const client = new Sonzai({
apiKey: "sk_your_api_key", // or SONZAI_API_KEY env var
baseUrl: "https://api.sonz.ai", // optional
timeout: 30_000,
});
Agent Lifecycle
When a user creates a new agent in your app, call agents.create with their personality configuration. Creation is idempotent — repeated calls with the same ID return the existing agent.
const agent = await client.agents.create({
name: "Luna",
gender: "female",
big5: {
openness: 0.75,
conscientiousness: 0.60,
extraversion: 0.80,
agreeableness: 0.70,
neuroticism: 0.30,
},
language: "en",
});
// agent.agentId is the platform-generated UUID — store it in your user record
console.log(agent.agentId);
// Fetch later
const profile = await client.agents.get(agent.agentId);
Streaming Chat
The chat endpoint handles context assembly, AI streaming, and state updates in a single call.
for await (const event of client.agents.chatStream({
agent: "agent-id",
userId: "user-123",
messages: [{ role: "user", content: "I had a great day hiking!" }],
language: "en",
})) {
process.stdout.write(event.choices?.[0]?.delta?.content ?? "");
}
Mood Labels
Labels: Blissful (80-100), Content (60-79), Neutral (40-59), Melancholy (20-39), Troubled (0-19). Mood naturally drifts back toward the agent's personality baseline over time.
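The ranges above can be expressed as a small helper (a sketch; the platform returns the label itself, this is only for local display logic):

```typescript
// Map a 0-100 mood score to its label, per the ranges above.
function moodLabel(score: number): string {
  if (score >= 80) return "Blissful";
  if (score >= 60) return "Content";
  if (score >= 40) return "Neutral";
  if (score >= 20) return "Melancholy";
  return "Troubled";
}
```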
Proactive Notifications
Agents can reach out to users between conversations. When triggered, the platform generates a contextual message using the agent's full state and stores it as "pending". Your app polls and marks notifications consumed after delivery.
REST Polling
# Poll for pending proactive messages
GET /api/v1/agents/{agentId}/notifications?status=pending&user_id=user-123
# Response
{
"notifications": [{
"message_id": "msg-uuid",
"user_id": "user-123",
"check_type": "check_in",
"intent": "Ask about yesterday's hiking trip",
"generated_message": "Hey! How was the hike at Mount Rainier?",
"status": "pending",
"created_at": "2026-03-07T10:00:00Z"
}]
}
# After delivering to user, mark consumed
POST /api/v1/agents/{agentId}/notifications/{messageId}/consume
Delivery Best Practice
Poll every 30-60 seconds. Always mark consumed after delivery to prevent re-delivery.
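A sketch of one poll-then-consume pass against the documented REST endpoints (the deliverToUser callback is a placeholder for your own delivery channel):

```typescript
// Poll for pending proactive messages, deliver each, then mark it consumed.
const BASE = "https://api.sonz.ai/api/v1";

function pendingUrl(agentId: string, userId: string): string {
  return `${BASE}/agents/${agentId}/notifications?status=pending&user_id=${userId}`;
}

function consumeUrl(agentId: string, messageId: string): string {
  return `${BASE}/agents/${agentId}/notifications/${messageId}/consume`;
}

async function pollOnce(
  agentId: string,
  userId: string,
  deliverToUser: (userId: string, message: string) => Promise<void> // placeholder
) {
  const headers = { Authorization: `Bearer ${process.env.SONZAI_API_KEY}` };
  const res = await fetch(pendingUrl(agentId, userId), { headers });
  const { notifications } = await res.json();
  for (const n of notifications) {
    await deliverToUser(userId, n.generated_message);    // deliver first...
    await fetch(consumeUrl(agentId, n.message_id), {     // ...then mark consumed
      method: "POST",
      headers,
    });
  }
}
```

Run pollOnce on a 30-60 second interval, e.g. with setInterval in a worker process.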
Webhook Integration
Register webhooks for event callbacks (wakeups, consolidation, breakthroughs):
// Register webhook endpoints
PUT /api/projects/{projectId}/webhooks/{eventType}
{
"webhook_url": "https://your-server.com/platform/webhooks/wakeup",
"auth_header": "Bearer YOUR_SERVER_KEY"
}
// Event types:
// - "wakeup" : Agent wants to proactively reach out
// - "consolidation" : Memory consolidation completed
// - "breakthrough" : Significant personality evolutionWakeup Webhook Payload
Includes the generated message for direct delivery:
{
"event_type": "on_wakeup_ready",
"agent_id": "agent-uuid",
"user_id": "user-123",
"generated_message": "Hey! How was the hike?",
"wakeup_id": "wakeup-uuid",
"check_type": "check_in"
}
Polling Alternative
Prefer polling? Use the notifications API instead of webhooks.
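If you do take the webhook route, the receiving side can be sketched framework-agnostically like this (the deliver callback and the SERVER_KEY env var are placeholders for your delivery channel and the auth_header value you registered):

```typescript
// Assumed payload shape, matching the wakeup webhook example above.
type WakeupPayload = {
  event_type: string;
  agent_id: string;
  user_id: string;
  generated_message: string;
  wakeup_id: string;
  check_type: string;
};

// Pure handler: returns the HTTP status to send. Verify the auth header you
// registered, deliver the pre-generated message, then ack with 200.
async function handleWakeup(
  authHeader: string | undefined,
  payload: WakeupPayload,
  deliver: (userId: string, message: string) => Promise<void> // placeholder
): Promise<number> {
  if (authHeader !== `Bearer ${process.env.SERVER_KEY}`) return 401;
  await deliver(payload.user_id, payload.generated_message);
  return 200;
}
```

Mount it on the route you registered (e.g. POST /platform/webhooks/wakeup in Express or a Next.js route handler) and ack promptly; do slow delivery work off the request path.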
Example: Backend Integration Flow
Three-service architecture:
Client App               Your Backend                 Mind Layer
    |                         |                           |
    |---- Authenticate ------>|                           |
    |                         |                           |
    |---- Create agent ------>|                           |
    |                         |--- REST: CreateAgent ---->|
    |                         |<-- Agent ID + Profile ----|
    |<--- Agent ready --------|                           |
    |                         |                           |
    |---- Send message ------>|                           |
    |                         |--- REST: Chat (SSE) ----->|
    |<-- Streaming response --|<-- AI chunks + side effects
Your backend translates application events into Mind Layer API calls. You can swap the backend without changing agent behavior, or reuse agents across applications.
Knowledge Base
Push structured data to build a project-scoped knowledge graph. Agents search this graph during conversations. See the Knowledge Base guide for full details.
// Insert entities and relationships
const resp = await client.knowledge.insertFacts(projectId, {
source: "product_catalog",
facts: [
{
entityType: "product",
label: "Widget Pro",
properties: { price: 29.99, category: "tools" },
},
],
relationships: [
{ fromLabel: "Widget Pro", toLabel: "Tools", edgeType: "belongs_to" },
],
});
// Search the graph
const results = await client.knowledge.search(projectId, {
query: "widget price",
limit: 10,
});
User Priming
Pre-load user metadata and content so agents already know a user before their first conversation. Metadata becomes instant facts; content blocks are extracted asynchronously via LLM.
const resp = await client.agents.priming.primeUser(agentId, userId, {
displayName: "Jane Smith",
metadata: {
company: "Acme Corp",
title: "VP Engineering",
email: "[email protected]",
custom: { region: "APAC", tier: "enterprise" },
},
content: [
{ type: "text", body: "Jane led the migration from AWS to GCP..." },
],
source: "crm",
});
console.log(`Job: ${resp.jobId}, Facts created: ${resp.factsCreated}`);
Async Processing
Metadata facts (name, company, title) are created synchronously. Content blocks (text, chat transcripts) are processed in the background via LLM extraction. Poll the job status to track progress.
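A generic polling helper for the extraction job might look like this (a sketch: getStatus stands in for whatever job-status call your SDK exposes, and the status strings are assumptions):

```typescript
// Poll a job-status callback until it reaches a terminal state or times out.
async function waitForJob(
  getStatus: () => Promise<{ status: string }>, // your SDK's job-status call
  { intervalMs = 2_000, timeoutMs = 120_000 } = {}
): Promise<string> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const { status } = await getStatus();
    if (status === "completed" || status === "failed") return status; // terminal
    await new Promise((r) => setTimeout(r, intervalMs)); // wait between polls
  }
  throw new Error("priming job timed out");
}
```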
For AI Agents
Feeding these docs to an AI assistant or coding agent? Every page has a Copy for LLM button, and the bundles below are pre-formatted for ingestion. Append .md to any doc URL (e.g. /docs/en/integration.md) for the raw markdown.
Best Practices
- Use the streaming chat endpoint — it handles context assembly, AI streaming, and state updates in one call.
- Pass per-request application state via compiledSystemPrompt. The platform doesn't cache it across requests.
- Register webhooks for wakeup events so agents can initiate contact.
- Don't duplicate personality, memory, or relationship logic — let the engine own agent data.
- Poll notifications every 30–60 seconds. Consume after delivery to prevent re-delivery.
- All SDKs wrap the same REST API. Pick whichever matches your stack — they're all first-class.
- Browser apps must proxy through your backend — never expose API keys in client-side code. See the Browser / Frontend Apps section above.