Think of memories as the knowledge in a library’s catalog, and context as the specific books and notes a researcher pulls from the shelves for a particular question. Synap manages both the catalog and the act of pulling the right materials.

What is a Memory?

A Memory is a unit of structured knowledge that Synap extracts from raw documents. When you ingest content — whether it is an AI chat conversation, a PDF, a knowledge base article, or a plain text document — Synap’s extraction pipeline breaks it down into discrete, typed memory units. Each memory has:
  • A type: one of five structured categories (fact, preference, episode, emotion, temporal event)
  • A confidence or strength score: a 0.0 to 1.0 value indicating extraction certainty
  • Source references: links back to the original document and extraction context
  • Entity links: connections to resolved entities (people, organizations, concepts)
  • Scope: the visibility boundary (user, customer, client, or world)
  • Timestamps: creation time, last accessed time, and optional temporal anchors
Memories are not raw text. They are structured, typed, and enriched representations of knowledge. This structure is what enables Synap to retrieve precisely relevant information rather than returning large blocks of unprocessed text.
A single ingested document can produce dozens or hundreds of individual memories. A five-minute conversation might yield facts about the user’s preferences, an episode describing what they discussed, temporal events about upcoming deadlines, and emotional context about how they felt.
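The attribute list above can be sketched as a data structure. This is an illustrative model only; the field names are hypothetical and not Synap's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Memory:
    """Hypothetical sketch of a single memory unit's fields."""
    type: str                    # "fact" | "preference" | "episode" | "emotion" | "temporal_event"
    content: str                 # the structured statement extracted from the source
    score: float                 # confidence or strength, 0.0-1.0
    source_document_id: str      # reference back to the ingested document
    entity_ids: list = field(default_factory=list)  # resolved entity links
    scope: str = "user"          # "user" | "customer" | "client" | "world"
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    last_accessed_at: Optional[datetime] = None
    temporal_anchor: Optional[datetime] = None      # optional time anchor

m = Memory(
    type="fact",
    content="The API rate limit is 1000 req/min",
    score=0.92,
    source_document_id="doc_123",
    scope="customer",
)
```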

What is Context?

Context is the assembled output that Synap delivers to your AI agent when it requests information. A context response contains:
  • Retrieved memories: the most relevant memories from long-term storage, ranked by relevance, recency, and confidence
  • Conversation history: the accumulated short-term context from the current session (if applicable)
  • Scope metadata: information about which scope levels contributed to the response
Context is what your agent consumes. When your agent needs to generate a response, it fetches context from Synap and uses the returned memories and history to inform its output. The quality of your agent’s responses depends directly on the quality and relevance of the context Synap provides.
# Fetching context for an agent response
context = await sdk.user.context.fetch(
    user_id="user_alice",
    customer_id="acme_corp",
    search_query=["project timeline", "upcoming deadlines"]
)

# Use the context to inform your agent's response
# context.facts -> relevant factual memories
# context.preferences -> user preferences
# context.episodes -> relevant past interactions
# context.temporal_events -> time-anchored information
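As a rough illustration of how an agent might consume those fields, the sketch below folds them into a system prompt. The `context` stub and the `build_system_prompt` helper are hypothetical; the real object is whatever `sdk.user.context.fetch` returns:

```python
from types import SimpleNamespace

# Stand-in for the fetched context object, stubbed with plain data
context = SimpleNamespace(
    facts=["Acme Corp's fiscal year ends in March"],
    preferences=["User prefers concise responses"],
    episodes=["Discussed migration plan in standup"],
    temporal_events=["Board meeting scheduled for March 15"],
)

def build_system_prompt(ctx) -> str:
    """Fold retrieved memories into a system prompt for the agent."""
    sections = [
        ("Known facts", ctx.facts),
        ("User preferences", ctx.preferences),
        ("Relevant past interactions", ctx.episodes),
        ("Upcoming events", ctx.temporal_events),
    ]
    lines = []
    for title, items in sections:
        if items:
            lines.append(f"{title}:")
            lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

prompt = build_system_prompt(context)
```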

Short-term vs Long-term Memory

Synap distinguishes between two categories of memory based on their lifespan and purpose.

Short-term Context

Short-term context is the conversational state accumulated during the current session. It builds turn by turn as the user and agent exchange messages, lives only for the duration of the conversation, and is managed through context compaction to stay within token limits.

Characteristics:
  • Ephemeral — exists only during the active session
  • Grows with each conversational turn
  • Subject to compaction when it exceeds configured thresholds
  • Contains the immediate conversational state, recent decisions, and in-progress topics
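A toy model of turn-by-turn accumulation with compaction might look like the following. The token heuristic and the drop-oldest policy are simplifications; Synap's actual compaction and its thresholds are configured in the platform, and a real system would summarize old turns rather than discard them:

```python
def approx_tokens(text: str) -> int:
    """Rough 4-characters-per-token heuristic."""
    return max(1, len(text) // 4)

class ShortTermContext:
    def __init__(self, max_tokens: int = 50):
        self.turns: list[str] = []
        self.max_tokens = max_tokens

    def add_turn(self, message: str) -> None:
        self.turns.append(message)
        self._compact()

    def _compact(self) -> None:
        # Drop the oldest turns once the budget is exceeded
        # (a real system would summarize them instead)
        while (sum(approx_tokens(t) for t in self.turns) > self.max_tokens
               and len(self.turns) > 1):
            self.turns.pop(0)

ctx = ShortTermContext(max_tokens=20)
for msg in ["user: hi",
            "agent: hello, how can I help?",
            "user: let's review the Q2 budget line by line today"]:
    ctx.add_turn(msg)
```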

Long-term Context

Long-term context is extracted, structured knowledge that persists across conversations and sessions. Long-term memories are produced by the ingestion pipeline and stored in vector and graph engines. They survive indefinitely (subject to retention policies) and are retrieved based on relevance to the current query.

Characteristics:
  • Persistent — survives across sessions, days, months
  • Built from ingested documents via the extraction pipeline
  • Stored in vector and graph storage engines
  • Retrieved based on semantic relevance, recency, and confidence scoring
The interplay between short-term and long-term memory is central to Synap’s value. During a conversation, your agent draws on both: short-term context provides immediate conversational continuity (“we were just talking about the Q2 budget”), while long-term context provides deep knowledge (“Alice prefers executive summaries” and “Acme Corp’s fiscal year ends in March”).

The five memory types

Synap’s extraction pipeline produces five distinct types of structured memory. Each type captures a different dimension of knowledge:
  • Facts: factual statements and knowledge. Key metric: confidence (0.0-1.0). Example: “The API rate limit is 1000 req/min”
  • Preferences: likes, dislikes, and behavioral choices. Key metric: strength (0.0-1.0) plus direction. Example: “User prefers concise responses”
  • Episodes: event narratives and interactions. Key metric: significance (0.0-1.0). Example: “Discussed migration plan in standup”
  • Emotions: detected emotional states. Key metric: intensity (0.0-1.0). Example: “User expressed frustration with onboarding”
  • Temporal Events: time-anchored information. Key metric: event type (point, recurring, or deadline). Example: “Board meeting scheduled for March 15”
Each type serves a different purpose in building rich, contextual responses. Facts provide grounding, preferences enable personalization, episodes give narrative continuity, emotions support empathy, and temporal events enable time-awareness.
You can configure which memory types are extracted via the Memory Architecture Configuration. Not every application needs all five types — a simple FAQ bot might only need facts, while a personal assistant benefits from all five.
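As a hypothetical illustration, a configuration limiting a FAQ bot to fact extraction might look like this. The field names are invented; consult the Memory Architecture Configuration documentation for the real schema:

```python
# Hypothetical shape of a memory-architecture configuration that
# enables only fact extraction (illustrative field names, not
# Synap's actual schema)
faq_bot_config = {
    "memory_types": {
        "facts": True,
        "preferences": False,
        "episodes": False,
        "emotions": False,
        "temporal_events": False,
    }
}
```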
For a detailed breakdown of each memory type, see Memory Types.

How memories become context

The journey from raw data to delivered context follows a well-defined pipeline:
Step 1: Ingestion

You submit raw content to Synap via the SDK or API. This can be an AI chat conversation, a document, a knowledge base article, or any text content. You specify the document_type, and optionally the user_id and customer_id to set the scope.
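A minimal sketch of an ingestion call, assuming a hypothetical `sdk.ingest` method (the real method name may differ); `document_type`, `user_id`, and `customer_id` are the parameters described above:

```python
async def ingest_conversation(sdk, transcript: str):
    """Submit a chat transcript for extraction (sketch only)."""
    return await sdk.ingest(                   # assumed method name
        content=transcript,
        document_type="ai_chat_conversation",  # assumed type label
        user_id="user_alice",                  # sets user scope
        customer_id="acme_corp",               # sets customer scope
    )
```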
Step 2: Extraction

Synap’s pipeline processes the raw content through multiple extraction stages. It identifies entities, resolves them against known entities, and extracts structured memories (facts, preferences, episodes, emotions, temporal events) with confidence scores and source references.
Step 3: Storage

Extracted memories are stored in both vector storage (for semantic similarity search) and graph storage (for entity relationships and structured queries). Memories are indexed by scope, type, entity, and embedding vector.
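The dual indexing can be pictured with a toy in-memory version; real vector and graph engines replace these plain Python structures:

```python
vector_index = []   # list of (embedding, memory_id) for similarity search
graph_index = {}    # entity_id -> set of memory_ids for graph queries

def store(memory_id, embedding, entity_ids):
    """Index one memory in both the vector and graph structures."""
    vector_index.append((embedding, memory_id))
    for entity in entity_ids:
        graph_index.setdefault(entity, set()).add(memory_id)

store("mem_1", [0.1, 0.9], ["alice", "acme"])
store("mem_2", [0.8, 0.2], ["acme"])
```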
Step 4: Retrieval

When your agent needs context, it sends a retrieval request with search queries and scope identifiers. Synap searches across applicable scope levels, finding memories that are semantically relevant to the query.
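A toy version of scope-filtered semantic retrieval, using cosine similarity over made-up embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

memories = [
    {"id": "mem_1", "scope": "user", "embedding": [0.1, 0.9]},
    {"id": "mem_2", "scope": "customer", "embedding": [0.8, 0.2]},
    {"id": "mem_3", "scope": "world", "embedding": [0.5, 0.5]},
]

def retrieve(query_embedding, scopes, top_k=2):
    # Filter to the applicable scope levels, then rank by similarity
    candidates = [m for m in memories if m["scope"] in scopes]
    candidates.sort(key=lambda m: cosine(query_embedding, m["embedding"]),
                    reverse=True)
    return candidates[:top_k]

hits = retrieve([0.9, 0.1], scopes={"user", "customer"})
```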
Step 5: Context Assembly

Retrieved memories are merged across scopes, deduplicated, ranked by relevance/recency/confidence, and assembled into a structured context response. This response is delivered to your agent, ready to inform its next output.
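The merge, dedupe, and rank steps can be sketched as follows; the scoring weights are invented for illustration and are not Synap's actual formula:

```python
def assemble_context(memories_by_scope: dict, limit: int = 3) -> list:
    # Merge memories across all contributing scopes
    merged = [m for scope in memories_by_scope.values() for m in scope]

    # Deduplicate on content, keeping the highest-confidence copy
    best = {}
    for m in merged:
        key = m["content"]
        if key not in best or m["confidence"] > best[key]["confidence"]:
            best[key] = m

    # Rank by a weighted blend of relevance, recency, and confidence
    def score(m):
        return 0.5 * m["relevance"] + 0.2 * m["recency"] + 0.3 * m["confidence"]

    return sorted(best.values(), key=score, reverse=True)[:limit]

ranked = assemble_context({
    "user": [
        {"content": "prefers summaries", "relevance": 0.9, "recency": 0.8, "confidence": 0.7},
        {"content": "fiscal year ends in March", "relevance": 0.6, "recency": 0.3, "confidence": 0.95},
    ],
    "customer": [
        {"content": "prefers summaries", "relevance": 0.9, "recency": 0.2, "confidence": 0.9},
    ],
})
```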
Raw Document → [ Ingestion ] → [ Extraction ] → [ Storage: Vector + Graph ]

Query → [ Retrieval ] → [ Context Assembly ] → Structured Context → Your AI Agent

Next steps

Short-term Context

How conversational context accumulates and is managed within a single session.

Long-term Context

Persistent knowledge that survives across conversations and sessions.

Memory Types

Deep dive into the five structured memory types extracted by Synap’s pipeline.

Memory Scopes

How memory isolation works across users, customers, and organizations.