Pattern 3: OpenClaw
Drop the @sonzai-labs/openclaw-context plugin into an OpenClaw project and Sonzai becomes the agent's contextEngine — persistent memory, mood, personality, relationships, all under OpenClaw's existing chat loop.
OpenClaw is an open-source framework for building conversational AI agents through a slot-based plugin system. The slot that decides what context goes into the system prompt is called contextEngine. Installing @sonzai-labs/openclaw-context registers the Sonzai context engine under the name "sonzai" — assign it to the slot in openclaw.json and every conversation flows through the Mind Layer with zero additional code.
When to use this
- You're already building on OpenClaw, or your team has standardised on it.
- You want OpenClaw's existing chat loop, telemetry, and tool plugins to keep working — Sonzai only swaps the memory/personality layer.
- You want a <sonzai-context> block injected into the system prompt on every turn, automatically priority-ordered and budget-trimmed.
When to switch
- Not on OpenClaw — switch to Pattern 1: Managed Runtime (we run the chat) or Pattern 4: Standalone Realtime (you run the chat).
- No real-time chat at all — switch to Pattern 5: Standalone Batch.
Architecture
OpenClaw Runtime SonzaiContextEngine Sonzai Mind Layer
| | |
|-- bootstrap(sessionId) ------->| |
| |-- resolve agent + session->|
| |<-- session state ----------|
| | |
|-- assemble(messages, budget) ->| |
| |-- fetch memory, mood, |
| | personality, goals ----->|
| |<-- ranked context blocks --|
|<-- systemPromptAddition -------| priority-ordered, |
| | token-budget-trimmed |
| | |
| [LLM call w/ enriched prompt] | |
| | |
|-- afterTurn(sessionId) ------->| |
| |-- send transcript -------->|
| | Mind Layer extracts |
| | facts, updates mood, |
| | evolves personality |
| | |
|-- compact(sessionId) --------->| |
| |-- merge short → long term->|
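The four hooks in the diagram can be sketched as a toy context engine, with a plain in-memory object standing in for the Mind Layer. This is an illustrative sketch only: the hook names follow the diagram, but the signatures, the 4-chars-per-token estimate, and the fact-extraction rule are all assumptions, not the real plugin's behaviour.

```javascript
// Toy context engine mirroring the diagram's lifecycle. The `mind`
// object stands in for Sonzai's Mind Layer.
const mind = {
  facts: ["prefers email over phone"],
  mood: "upbeat",
};

const engine = {
  // bootstrap: resolve agent + session state before the first turn
  bootstrap(sessionId) {
    return { sessionId, agentId: "a1b2c3d4" };
  },
  // assemble: rank context blocks, then trim to the token budget
  assemble(messages, budget) {
    const blocks = [
      { priority: 1, text: `mood: ${mind.mood}` },
      ...mind.facts.map((f) => ({ priority: 2, text: `fact: ${f}` })),
    ].sort((a, b) => a.priority - b.priority);
    let used = 0;
    const kept = [];
    for (const block of blocks) {
      const cost = Math.ceil(block.text.length / 4); // crude token estimate
      if (used + cost > budget) break;
      used += cost;
      kept.push(block.text);
    }
    return `<sonzai-context>\n${kept.join("\n")}\n</sonzai-context>`;
  },
  // afterTurn: hand the transcript back so facts/mood can be updated
  afterTurn(sessionId, transcript) {
    if (/invoice/i.test(transcript)) mind.facts.push("asked about invoices");
  },
  // compact: merge short-term memory into long-term (deduplicate here)
  compact(sessionId) {
    mind.facts = [...new Set(mind.facts)];
  },
};

engine.bootstrap("s1");
const addition = engine.assemble([], 64);
console.log(addition.startsWith("<sonzai-context>")); // true
engine.afterTurn("s1", "Where is my invoice?");
engine.compact("s1");
console.log(mind.facts.length); // 2
```

The point of the shape is that OpenClaw's runtime calls assemble before every LLM call and afterTurn after it, so the engine never needs its own chat loop.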
End-to-end snippet
The OpenClaw plugin is JavaScript-only (OpenClaw itself is JS). The Python and Go branches show the equivalent B2B provisioning flow: deterministically derive an agent UUID and write the OpenClaw config — the runtime that consumes it stays JS.
// 1. Install:
// openclaw plugins install @sonzai-labs/openclaw-context
// # or: npm install @sonzai-labs/openclaw-context
//
// 2. Run the setup wizard (interactive — asks for API key, agent name):
// npx @sonzai-labs/openclaw-context setup
//
// 3. The wizard writes openclaw.json:
// {
// "plugins": {
// "slots": { "contextEngine": "sonzai" },
// "entries": {
// "sonzai": {
// "enabled": true,
// "apiKey": "sk_your_api_key",
// "agentId": "a1b2c3d4-..."
// }
// }
// }
// }
//
// 4. Start chatting — Sonzai is now the contextEngine:
// openclaw chat
//
// For programmatic / B2B provisioning use the exported setup() helper:
import { setup } from "@sonzai-labs/openclaw-context";
const result = await setup({
apiKey: "sk_your_api_key",
agentName: "customer-support-bot",
configPath: "/path/to/openclaw.json",
});
console.log(result.agentId); // deterministic UUID — safe to re-run
console.log(result.written); // true — config file updated
Idempotent provisioning
Agent IDs are derived from SHA1(tenantID + agentName). Calling setup() (or the Python/Go equivalent) multiple times for the same tenant + name returns the same agent — safe to re-run on every deploy.
Where to next
Pattern 2: MCP
Connect Claude Code, Cursor, Claude Desktop, ChatGPT, or any MCP-compatible client directly to the Mind Layer. Your assistant sees Sonzai as a set of tools and resources it can call by name.
Pattern 4: Standalone Memory (Real-Time)
You own the LLM and the chat loop. Sonzai owns memory, mood, personality, and relationships. Per-turn — sessions.start → loop of session.context() + your LLM + session.turn() → sessions.end.