From Zep (getzep)
Migrate users, sessions, and extracted facts from Zep's memory graph into Sonzai. Preserve either the raw chat history, the already-extracted facts, or both.
What you're migrating
Zep and Sonzai solve the same problem — persistent, structured agent memory — but with different primitives. Here's how they line up:
| Zep | Sonzai |
|---|---|
| User | User (the same `user_id` works fine) |
| Session | Session — Sonzai has first-class sessions (`sessions.start` / `sessions.end`) and preserves the exact `session_id` on every extracted fact. For the bulk migration below we flatten historical sessions into `chat_transcript` blocks (one block per session); live chat post-migration should use real Sonzai sessions — see Sessions. |
| Message (in a session) | `chat_transcript` content block |
| Fact (extracted by Zep) | `text` content block — Sonzai will re-extract and dedupe against existing memory |
| User metadata | Sonzai metadata (email, company, custom) |
You have two viable migration strategies:
- Re-extract from raw transcripts — ship the session messages to Sonzai and let Sonzai's extraction pipeline rebuild facts. Highest fidelity, slowest to process.
- Forward Zep's facts — ship Zep's already-extracted fact strings as `text` blocks. Faster, but depends on Zep's extraction quality.
You can do both in the same import. Sonzai will deduplicate across content blocks.
1. Export from Zep
```typescript
import { ZepClient } from "@getzep/zep-cloud";

const zep = new ZepClient({ apiKey: process.env.ZEP_API_KEY! });

async function exportZepUser(userId: string) {
  const user = await zep.user.get(userId);
  const sessions = await zep.user.getSessions(userId);

  // Flatten each session's messages into a labeled transcript, newest last
  const transcripts: string[] = [];
  for (const s of sessions) {
    // getSessionMessages returns a response object; the messages live on .messages
    const { messages } = await zep.memory.getSessionMessages(s.sessionId!);
    const lines = (messages ?? []).map(m =>
      // roleType is the speaker type ("user", "assistant", ...);
      // role, if set, is just a display name
      `${m.roleType === "user" ? "User" : "Agent"}: ${m.content}`
    );
    transcripts.push(`[session ${s.sessionId}]\n${lines.join("\n")}`);
  }

  // Pull Zep's own extracted facts about the user
  const memory = await zep.user.getFacts(userId).catch(() => ({ facts: [] }));
  return { user, transcripts, facts: memory.facts ?? [] };
}
```
2. Map to Sonzai's import shape
Pack each session as its own `chat_transcript` block (this preserves session boundaries and recency ordering) and the extracted facts as `text` blocks.
```json
{
  "source": "zep",
  "users": [
    {
      "user_id": "user_123",
      "display_name": "Mia Tanaka",
      "metadata": {
        "email": "mia@acme.com",
        "custom": { "zep_user_id": "mia_123" }
      },
      "content": [
        { "type": "chat_transcript", "body": "[session 01H...]\nUser: ...\nAgent: ..." },
        { "type": "chat_transcript", "body": "[session 01J...]\nUser: ...\nAgent: ..." },
        { "type": "text", "body": "Mia is allergic to peanuts." },
        { "type": "text", "body": "Mia works at Acme and leads the platform team." }
      ]
    }
  ]
}
```
The extractor runs on every block; `text` blocks with a single assertion are cheap to process and dedupe quickly against existing memory.
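If you assemble this payload in TypeScript rather than by hand, a minimal set of types can keep the mapping honest. This is a sketch covering only the fields used in this guide, not Sonzai's full import schema:

```typescript
// Sketch: types mirroring the JSON example above -- the fields this guide
// uses, not the full Sonzai import schema.
type ContentBlock =
  | { type: "chat_transcript"; body: string }
  | { type: "text"; body: string };

interface ImportUser {
  user_id: string;
  display_name?: string;
  metadata?: { email?: string; custom?: Record<string, string> };
  content: ContentBlock[];
}

interface ZepImport {
  source: "zep";
  users: ImportUser[];
}
```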
3. Import into Sonzai
```typescript
import { Sonzai } from "@sonzai-labs/agents";

const sonzai = new Sonzai({ apiKey: process.env.SONZAI_API_KEY! });
const AGENT_ID = "agent_abc";

async function migrateZepUsers(zepUserIds: string[]) {
  const users = await Promise.all(zepUserIds.map(async (zepId) => {
    const { user, transcripts, facts } = await exportZepUser(zepId);
    return {
      user_id: user.userId!, // keep stable IDs across systems
      display_name: user.firstName
        ? `${user.firstName} ${user.lastName ?? ""}`.trim()
        : undefined,
      metadata: {
        email: user.email,
        custom: { zep_user_id: zepId },
      },
      content: [
        ...transcripts.map(body => ({ type: "chat_transcript", body })),
        ...facts.map(f => ({ type: "text", body: f.fact ?? String(f) })),
      ],
    };
  }));

  const job = await sonzai.agents.priming.batchImport(AGENT_ID, {
    source: "zep",
    users,
  });
  return job;
}
```
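To drive the migration end to end you still need the list of Zep user IDs. Here's a sketch using the Zep SDK's `user.listOrdered` pagination — verify the method and response shape against your SDK version; the chunk size of 50 is an arbitrary choice, not a documented Sonzai limit:

```typescript
// Enumerate every user in the Zep workspace, then import in chunks so one
// oversized batch can't sink the whole run. Assumptions: user.listOrdered
// exists in your zep-cloud SDK version; 50 users per batch is arbitrary.
async function migrateAllUsers(chunkSize = 50) {
  const userIds: string[] = [];
  let pageNumber = 1;
  for (;;) {
    const page = await zep.user.listOrdered({ pageNumber, pageSize: 100 });
    const users = page.users ?? [];
    for (const u of users) if (u.userId) userIds.push(u.userId);
    if (users.length < 100) break; // last page
    pageNumber++;
  }

  const jobs = [];
  for (let i = 0; i < userIds.length; i += chunkSize) {
    jobs.push(await migrateZepUsers(userIds.slice(i, i + chunkSize)));
  }
  return jobs; // one import job per chunk
}
```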
4. Verify
```bash
# Job status
curl -s https://api.sonz.ai/api/v1/agents/agent_abc/users/import/$JOB_ID \
  -H "Authorization: Bearer $SONZAI_API_KEY" | jq '{status,facts_stored,facts_deduped,errors}'

# Memory for a specific migrated user
curl -s "https://api.sonz.ai/api/v1/agents/agent_abc/memory/facts?user_id=user_123&limit=50" \
  -H "Authorization: Bearer $SONZAI_API_KEY" | jq '.facts[] | {content, source_type}'
```
`facts_deduped` is particularly useful here — if you shipped both raw transcripts and Zep's extracted facts, a high dedup count confirms Sonzai's extractor rediscovered the same assertions.
Tips
- Session IDs. Sonzai has first-class sessions, but the bulk-import path flattens historical conversations into a single `content` array — so preserve each Zep session's ID as a header inside its `chat_transcript` block (e.g. `[session <id>]`). Going forward, wrap live chat with `sessions.start` / `sessions.end` and extracted facts will carry their source `session_id` automatically — no manual bookkeeping (see the sketch after this list). See Sessions for the live-chat flow, or the Sessions API reference for endpoint details.
- Graph edges. Zep Cloud's knowledge-graph edges aren't directly importable. Sonzai rebuilds its own constellation from content. If the relationships are critical, include them as explicit `text` blocks ("Mia manages Ren.", "Ren reports to Mia.") and let the extractor wire them up.
- Summaries. If you've been relying on Zep session summaries, don't import them — Sonzai generates its own.
What's next
- Sessions — Sonzai's session model and when to start/end them explicitly.
- Memory — how Sonzai's constellation dedupes facts.
- Conversations — what happens to migrated users during live chat.