

You’re already using another memory layer and you want to evaluate or move to Synap. This guide covers the four migration paths we hear about most often. Each section covers: (1) how the source system stores memory, (2) how to map it onto Synap’s model, (3) an SDK-call mapping table, and (4) a backfill snippet. If your migration source isn’t listed here, the patterns generalize; email [email protected] with what you’re using and we’ll add a section.

Mem0 → Synap

How Mem0 stores memory. Mem0 exposes a single user-scoped memory bag accessed via `m.add()`, `m.search()`, and `m.get_all()`. Memories are untyped strings. Multi-user is handled with `user_id`; multi-tenancy is not first-class.

The mapping

| Mem0 concept | Synap concept |
| --- | --- |
| `user_id` | `user_id` |
| No customer/org concept | Add `customer_id` if you are multi-tenant; otherwise pass a stable sentinel such as your app name |
| `m.add(messages, user_id=...)` | `sdk.memories.create(document=..., document_type="ai-chat-conversation", user_id=..., customer_id=...)` |
| `m.search(query, user_id=...)` | `sdk.user.context.fetch(user_id=..., customer_id=..., search_query=[query])` |
| `m.get_all(user_id=...)` | No exact equivalent. Synap doesn’t paginate raw memories at the SDK layer; use multiple targeted `search_query` calls or read via the REST API directly |
| `m.delete(memory_id)` | REST: `DELETE /v1/memories/{id}` (SDK method coming soon) |
Backfill from Mem0
```python
from mem0 import Memory
from dateutil.parser import parse  # needed for document_created_at below
from maximem_synap import MaximemSynapSDK
from maximem_synap.memories.models import CreateMemoryRequest

old = Memory()   # Mem0
new = MaximemSynapSDK()
await new.initialize()

async def migrate_one_user(user_id: str, customer_id: str = "my_app"):
    mem0_memories = old.get_all(user_id=user_id)["memories"]

    batch = [
        CreateMemoryRequest(
            document=m["memory"],
            document_type="document",      # Mem0 doesn't store conversation framing
            user_id=user_id,
            customer_id=customer_id,
            document_created_at=parse(m["created_at"]) if m.get("created_at") else None,
            mode="long-range",
            metadata={
                "source": "mem0_backfill",
                "mem0_id": m["id"],
            },
        )
        for m in mem0_memories
    ]
    await new.memories.batch_create(documents=batch, fail_fast=False)
```
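`get_all` can return thousands of memories for a heavy user, so rather than sending one giant `batch_create`, it may be safer to send fixed-size chunks. A minimal sketch — the 200-item figure is an assumption, not a documented limit; tune it to whatever `batch_create` actually enforces:

```python
from itertools import islice

def chunked(items, size: int = 200):
    """Yield fixed-size slices of a batch. The size is a guess, not a
    documented Synap limit; adjust after checking the SDK."""
    it = iter(items)
    while chunk := list(islice(it, size)):
        yield chunk
```

Then replace the single call with `for chunk in chunked(batch): await new.memories.batch_create(documents=chunk, fail_fast=False)`.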
What you gain immediately
  • Typed extractions (facts vs preferences vs episodes vs emotions vs temporal events).
  • Customer / client / world scopes — not just user.
  • Entity resolution across conversations.
  • Context compaction.
What you’ll need to adapt

Mem0’s “search returns a flat list of strings” pattern becomes “fetch returns a ContextResponse with typed lists.” Your prompt-construction code needs to iterate over ctx.facts, ctx.preferences, ctx.episodes, etc. instead of one flat list; this is usually a few-line change. See Response shapes.
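That few-line change might look like the following. This is a sketch: `ContextResponse` here is a local stand-in with only the field names described above (`facts`, `preferences`, `episodes`), not the SDK’s actual class.

```python
from dataclasses import dataclass, field

@dataclass
class ContextResponse:
    """Stand-in mirroring the typed lists on Synap's response."""
    facts: list[str] = field(default_factory=list)
    preferences: list[str] = field(default_factory=list)
    episodes: list[str] = field(default_factory=list)

def build_memory_block(ctx: ContextResponse) -> str:
    """Replace the old 'join the flat list' code with typed sections."""
    sections = [
        ("Known facts", ctx.facts),
        ("Preferences", ctx.preferences),
        ("Past episodes", ctx.episodes),
    ]
    lines = []
    for title, items in sections:
        if items:  # skip empty sections entirely
            lines.append(f"## {title}")
            lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)
```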

Zep → Synap

How Zep stores memory. Zep has Sessions (≈ conversations), Users, and automatic fact extraction over sessions. Multi-user is handled with `user_id`; multi-tenancy is loosely “Project.”

The mapping

| Zep concept | Synap concept |
| --- | --- |
| Project | Either one Instance per project, or one Instance plus a `customer_id` per project |
| User (`zep.user.add`) | Implicit: Synap creates user records on first ingestion by `user_id` |
| Session (`zep.memory.add_session`) | `conversation_id` (must be a UUID; use `str(uuid.uuid4())` per session) |
| `zep.memory.add` | `sdk.conversation.record_message` (incremental) plus periodic `sdk.conversation.context.compact` |
| `zep.memory.get` | `sdk.conversation.context.fetch` or `get_context_for_prompt` |
| `zep.memory.search_sessions` | `sdk.user.context.fetch(user_id=..., search_query=[...])` |
| Automatic facts | Native: `ContextResponse.facts` is the equivalent of Zep’s automatic facts, with the addition of preferences/episodes/emotions/temporal types |
Backfill from Zep
```python
from zep_python.client import AsyncZep
from maximem_synap import MaximemSynapSDK
import uuid

zep = AsyncZep(api_key="...")
new = MaximemSynapSDK()
await new.initialize()

async def migrate_session(zep_user_id: str, zep_session_id: str, customer_id: str = "default"):
    messages = await zep.memory.get_session_messages(session_id=zep_session_id)
    synap_conv = str(uuid.uuid4())

    # Record each message individually so Synap can compact later
    for m in messages.messages:
        await new.conversation.record_message(
            conversation_id=synap_conv,
            role=m.role_type,         # "user" or "assistant"
            content=m.content,
            user_id=zep_user_id,
            customer_id=customer_id,
            metadata={"zep_session_id": zep_session_id, "zep_uuid": m.uuid},
        )

    # Trigger one compaction so the conversation arrives compacted
    await new.conversation.context.compact(
        conversation_id=synap_conv,
        strategy="adaptive",
    )
```
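One refinement worth considering: `uuid.uuid4()` mints a fresh `conversation_id` on every run, so restarting a partially failed backfill creates a second Synap conversation for the same Zep session. Deriving the UUID deterministically from the Zep session id keeps the mapping stable across reruns (a sketch; the `zep-session://` namespace string is arbitrary, it just must never change):

```python
import uuid

def synap_conversation_id(zep_session_id: str) -> str:
    """Map a Zep session to the same Synap conversation_id on every run.
    uuid5 is deterministic: same namespace + name always yields the same UUID,
    which still satisfies the 'conversation_id must be a UUID' requirement."""
    return str(uuid.uuid5(uuid.NAMESPACE_URL, f"zep-session://{zep_session_id}"))
```

Note this only stabilizes the conversation id; whether `record_message` itself deduplicates replayed messages is a separate question to verify.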
Differences worth noting
  • Zep stores the literal message log forever; Synap compacts it. If you rely on retrieving raw messages by ID, plan to keep your Zep log around for a transition period.
  • Zep builds fact graphs automatically; Synap’s equivalent is the entity graph, which is exposed through the accurate retrieval mode rather than as a separate API.

Letta (formerly MemGPT) → Synap

How Letta stores memory. Letta tightly couples memory and agent runtime: agents have core_memory (small, always in the prompt), archival_memory (large, retrieval-only), and recall memory. It’s a single-process agent-state model, not a multi-tenant memory service.

The mapping

| Letta concept | Synap concept |
| --- | --- |
| `agent.core_memory` | Build it from `sdk.user.context.fetch` results at every turn; it is assembled at retrieval time, not stored in Synap. Use `get_context_for_prompt` for the cached version |
| `agent.archival_memory.insert(text)` | `sdk.memories.create(document=text, document_type="document", user_id=..., customer_id=...)` |
| `agent.archival_memory.search(query)` | `sdk.user.context.fetch(user_id=..., search_query=[query], types=[ContextType.FACTS])` |
| `agent.recall_memory` (last N messages) | `sdk.conversation.record_message` + `get_context_for_prompt` (returns recent + compacted) |
| Per-agent state | Per-user plus per-conversation state; Letta’s “one agent” maps to Synap’s `(user_id, conversation_id)` pair |
The bigger shift

Letta wants you to think in terms of one persistent agent per user. Synap is just memory and is provider-agnostic: it doesn’t care which LLM runs the agent. Migrating means:
  1. Splitting agent state from memory: keep agent logic in your app code; move memory to Synap.
  2. Replacing Letta’s runtime with whichever LLM SDK you actually want (OpenAI, Anthropic, etc.).
  3. Re-modeling core_memory as a prompt template populated from ContextResponse at every turn.
This is a re-architecture, not a drop-in. The win is that your agent loop becomes portable across LLM providers.
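The three steps above can be sketched as one turn of the portable agent loop. This is an illustration, not the SDK’s API: the three callables are hypothetical injection points you would wire to `get_context_for_prompt`, your chosen LLM SDK, and `sdk.conversation.record_message` respectively.

```python
from typing import Awaitable, Callable

async def run_turn(
    user_msg: str,
    user_id: str,
    conversation_id: str,
    fetch_context: Callable[..., Awaitable[str]],    # wraps get_context_for_prompt
    call_llm: Callable[[str, str], Awaitable[str]],  # any provider SDK
    record: Callable[..., Awaitable[None]],          # wraps record_message
) -> str:
    # Step 3: core_memory is rebuilt from Synap at retrieval time,
    # not stored in the agent process.
    core_memory = await fetch_context(user_id=user_id, conversation_id=conversation_id)
    system_prompt = f"You are the assistant.\n\n# Memory\n{core_memory}"

    # Step 2: the LLM call is whatever provider you injected.
    reply = await call_llm(system_prompt, user_msg)

    # Step 1: agent logic stays here; memory writes go to Synap.
    await record(conversation_id=conversation_id, role="user",
                 content=user_msg, user_id=user_id)
    await record(conversation_id=conversation_id, role="assistant",
                 content=reply, user_id=user_id)
    return reply
```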

SuperMemory → Synap

How SuperMemory stores memory. SuperMemory is consumer-oriented and single-user, focused on personal notes and web clips. The model is “ingest a URL or text, search semantically later.”

The mapping

| SuperMemory concept | Synap concept |
| --- | --- |
| `addMemory(text)` | `sdk.memories.create(document=text, document_type="document", user_id=..., customer_id=...)` |
| `search(query)` | `sdk.user.context.fetch(user_id=..., search_query=[query])` |
| space (rough grouping) | `customer_id` (B2B) or a `metadata.space` tag (single-user) |
| Per-user isolation | Native via `user_id` |
The migration is mechanical — bulk-export from SuperMemory, ingest with mode="long-range". The interesting part is what to do with the extra structure Synap gives you (typed memories, entity graph), which SuperMemory doesn’t expose.
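A sketch of the mechanical part, mapping one exported record onto `sdk.memories.create` keyword arguments. The export shape (`text` and `space` keys) is an assumption about SuperMemory’s export format; adjust the key names to whatever your export actually contains.

```python
def to_synap_kwargs(record: dict, user_id: str) -> dict:
    """Translate one exported SuperMemory record (assumed shape:
    {'text': ..., 'space': ...}) into sdk.memories.create kwargs,
    keeping the space as a metadata tag per the mapping above."""
    return {
        "document": record["text"],
        "document_type": "document",
        "user_id": user_id,
        "mode": "long-range",
        "metadata": {
            "source": "supermemory_backfill",
            "space": record.get("space"),  # None if the record had no space
        },
    }
```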

General migration checklist

For any migration, work through this list:
  • Identify your identity model. What’s a user? A customer? A tenant? Map to user_id / customer_id.
  • Pick one Instance vs many. One per environment is the usual answer. One per customer if residency or MACA differs.
  • Author a Use-Case Markdown. Synap generates the MACA from this. A 200-word file from you = significantly better extraction quality from day one. See Use-Case Markdown.
  • Decide on conversation_id strategy. Stable UUID per chat thread is the right answer for most apps.
  • Run the backfill at mode="long-range" with BOOTSTRAP priority via batch_create. Use document_id for idempotency.
  • Replace your retrieval call sites. Map old query → search_query=[…]. Iterate over typed lists in the response instead of one flat list.
  • Test scope isolation by querying as user A and confirming no user B memories appear.
  • Add graceful degradation. Synap should make your agent better, not block its hot path. See Graceful Degradation recipe.
Once you’re cut over, kill the old service. Don’t try to dual-write — diverging memory state is a much harder problem than a clean cutover.
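The graceful-degradation item in the checklist can be as small as one wrapper. A sketch, assuming an async fetch callable and treating `None` as “no memory this turn”; the 0.8 s default is an arbitrary budget, not a Synap recommendation:

```python
import asyncio

async def fetch_context_or_none(fetch, timeout_s: float = 0.8, **kwargs):
    """Never let memory retrieval block the agent's hot path: on timeout
    or any error, fall back to no context and continue the turn."""
    try:
        return await asyncio.wait_for(fetch(**kwargs), timeout=timeout_s)
    except Exception:  # includes asyncio.TimeoutError and network errors
        return None
```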