
AI Employees & Personal AI — Quickstart

Build a task-oriented agent that remembers users, uses tools, and draws on a knowledge base. Task agents don't need personality or mood to do their job — this guide shows you what to wire up and what to safely skip.

This quickstart is for building an AI employee or personal AI — a task-oriented agent that helps a user get work done. Think: a support engineer, a sales-development rep, an inbox assistant, an onboarding guide.

What you'll build: a customer-support agent that (1) remembers each user across sessions, (2) can create tickets and look up order status via custom tools, and (3) answers product questions from a knowledge base.

What you can skip: the Emotions system. Mood still runs in the background but won't shape replies unless you opt in. Personality stays minimal — a professional tone profile is enough.

1. Create a project and get an API key

Go to platform.sonz.ai, create a project, and generate an API key. All requests use Bearer auth:

Authorization: Bearer sk_your_api_key

2. Create the agent

Give the agent a minimal professional personality — high conscientiousness, moderate agreeableness, low neuroticism. That's all you need for a task agent.

import { Sonzai } from "@sonzai-labs/agents";

const client = new Sonzai({ apiKey: process.env.SONZAI_API_KEY! });

const agent = await client.agents.create({
  name: "Atlas",
  bio: "Atlas is a calm, precise support engineer who answers product questions and handles tickets.",
  big5: {
    openness: 0.55,
    conscientiousness: 0.85,
    extraversion: 0.5,
    agreeableness: 0.7,
    neuroticism: 0.2,
  },
});

console.log(agent.agent_id);

3. Create a per-user instance

For task agents serving multiple end-users, use instances so each user has their own isolated memory scope under one agent definition.

const instance = await client.agents.instances.create("agent-id", {
  name: "user-42",
  description: "Support context for user 42",
});

Every memory, custom state, and notification scoped to instance_id = "user-42" stays isolated from every other user's context.

4. Seed what the agent knows about this user

Pre-load user facts so the agent's first response already reflects context — no cold start.

await client.agents.memory.seed("agent-id", {
  userId: "user-42",
  memories: [
    { content: "User's name is Priya Kapoor.", fact_type: "fact" },
    { content: "Priya is on the Enterprise plan, renewed 2026-03-15.", fact_type: "fact" },
    { content: "Priya reported a billing issue last week (ticket #4821, resolved).", fact_type: "event" },
  ],
});

5. Register custom tools

Tools let the LLM call your backend during inference. Sonzai doesn't execute them — it returns the tool call, your backend executes, and you pass the result back on the next turn.

await client.agents.sessions.setTools("agent-id", "session-id", [
  {
    name: "create_ticket",
    description: "Create a support ticket for the user.",
    parameters: {
      type: "object",
      properties: {
        subject: { type: "string" },
        priority: { type: "string", enum: ["low", "normal", "high"] },
      },
      required: ["subject"],
    },
  },
  {
    name: "lookup_order",
    description: "Fetch the latest order status by order ID.",
    parameters: {
      type: "object",
      properties: { orderId: { type: "string" } },
      required: ["orderId"],
    },
  },
]);

6. Upload a knowledge base

Point the agent at product docs, internal FAQs, or runbooks. The knowledge base is project-scoped — every agent in the project can search it.

import { readFileSync } from "node:fs";

const buf = readFileSync("./product-manual.pdf");
await client.knowledge.uploadDocument("project-id", "product-manual.pdf", buf, "application/pdf");

Agents automatically search the knowledge base during conversation when their knowledge_search capability is enabled.

7. Chat

Stream a response. The agent uses memory, knowledge, and tools automatically.

for await (const event of client.agents.chatStream({
  agent: "agent-id",
  userId: "user-42",
  instanceId: instance.instance_id,
  messages: [{ role: "user", content: "Hi, did my latest invoice go through?" }],
})) {
  const delta = event.choices?.[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}

After the response, memory extraction runs automatically — the agent will remember what happened without you lifting a finger.

8. Poll for proactive notifications (optional)

The agent can schedule follow-ups — e.g. "check back tomorrow on ticket #4821". Poll the notifications queue periodically, or register a webhook.

const pending = await client.agents.notifications.list("agent-id", { userId: "user-42", status: "pending" });

Next steps

Current SDK versions: TypeScript 1.1.3 · Python 1.1.4 · Go 1.2.0 (as of 2026-04-17)
