Not every application needs all five memory types. A knowledge base bot might only need Facts. A personal assistant benefits from all five. You can configure which types are extracted via the Memory Architecture Configuration.

Overview

| Type | What it captures | Key metric | Direction | Example |
|---|---|---|---|---|
| Facts | Verifiable statements | Confidence (0.0-1.0) | n/a | "The API supports pagination via cursor tokens" |
| Preferences | Likes, dislikes, choices | Strength (0.0-1.0) | Positive / Negative | "User prefers dark mode" |
| Episodes | Event narratives | Significance (0.0-1.0) | n/a | "Discussed Q2 roadmap in Monday standup" |
| Emotions | Emotional states | Intensity (0.0-1.0) | n/a | "User expressed frustration with onboarding" |
| Temporal Events | Time-anchored info | Event type | n/a | "Board meeting every first Monday" |

Facts

Facts are the backbone of long-term memory. A fact is a verifiable, declarative statement extracted from ingested content. Facts represent knowledge about people, organizations, products, processes, and the world.

Structure

| Field | Type | Description |
|---|---|---|
| content | string | The factual statement itself |
| confidence | float (0.0-1.0) | How certain the extraction pipeline is about this fact |
| source_ref | string | Reference to the original document and location |
| entities | list | Linked entities (people, orgs, concepts) |
| scope | enum | USER, CUSTOMER, CLIENT, or WORLD |

Confidence scoring

The confidence score reflects how certain the pipeline is that the extracted fact is accurate and well-formed:
  • 0.9 - 1.0: Explicitly stated, unambiguous facts (“The company was founded in 2019”)
  • 0.7 - 0.9: Strongly implied or clearly inferable facts (“The team uses agile methodology” — inferred from sprint references)
  • 0.5 - 0.7: Moderately confident extractions, may need verification (“The project is approximately 60% complete”)
  • Below 0.5: Low confidence, typically filtered out during retrieval ranking
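Applied at retrieval time, the cutoff below 0.5 amounts to a simple filter. A minimal sketch over hypothetical fact records (the dict shape here is illustrative, not the SDK's own type):

```python
# Hypothetical fact records; real facts come from the extraction pipeline.
facts = [
    {"content": "The company was founded in 2019", "confidence": 0.97},
    {"content": "The team uses agile methodology", "confidence": 0.82},
    {"content": "The project is approximately 60% complete", "confidence": 0.61},
    {"content": "The office might move next year", "confidence": 0.35},
]

def filter_by_confidence(facts, min_confidence=0.5):
    """Drop low-confidence facts, mirroring the 'below 0.5' filtering rule."""
    return [f for f in facts if f["confidence"] >= min_confidence]

retained = filter_by_confidence(facts)
# The 0.35-confidence fact is dropped; the other three survive.
```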

What makes a good fact

The extraction pipeline looks for statements that are:
  • Specific: “API rate limit is 1,000 req/min” rather than “There are rate limits”
  • Verifiable: statements that can be confirmed or denied
  • Self-contained: understandable without needing the full surrounding context
  • Attributable: linked to specific entities or scopes

How facts are used in retrieval

Facts are the most commonly retrieved memory type. During retrieval:
  • Facts are ranked by a combination of relevance (semantic similarity to the query), recency (when the fact was created or last confirmed), and confidence (extraction certainty)
  • Higher-confidence facts surface before lower-confidence ones at the same relevance level
  • Conflicting facts at different scope levels are resolved by scope priority (user > customer > client > world)
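One way to picture this ranking is a weighted blend of the three signals. A minimal sketch, assuming illustrative weights and an exponential recency decay (the actual formula is internal to the retrieval service):

```python
from datetime import datetime, timezone

# Scope priority for conflict resolution: user > customer > client > world.
SCOPE_PRIORITY = {"USER": 3, "CUSTOMER": 2, "CLIENT": 1, "WORLD": 0}

def rank_score(relevance, confidence, created_at,
               now=None, half_life_days=30.0,
               w_rel=0.6, w_rec=0.2, w_conf=0.2):
    """Blend relevance, recency, and confidence into one ranking score.

    The weights and half-life are assumptions for illustration only.
    """
    now = now or datetime.now(timezone.utc)
    age_days = (now - created_at).total_seconds() / 86400
    recency = 0.5 ** (age_days / half_life_days)  # halves every half_life_days
    return w_rel * relevance + w_rec * recency + w_conf * confidence
```

At equal relevance and recency, the confidence term is what lets the higher-confidence fact surface first, matching the second bullet above.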

Examples

Fact: "Acme Corp's engineering team has 25 members across 4 squads"
Confidence: 0.95
Scope: CUSTOMER
Entities: [Acme Corp, engineering team]

Fact: "Alice joined the company in March 2023"
Confidence: 0.91
Scope: USER
Entities: [Alice Chen, Acme Corp]

Fact: "The platform supports webhook notifications for pipeline events"
Confidence: 0.98
Scope: CLIENT
Entities: [webhooks, pipeline]

Preferences

Preferences capture likes, dislikes, behavioral tendencies, and personal choices. They are what enable your AI agent to personalize its responses — communicating in the style the user prefers, prioritizing the topics they care about, and avoiding the things they dislike.

Structure

| Field | Type | Description |
|---|---|---|
| content | string | The preference statement |
| strength | float (0.0-1.0) | How strong the preference is |
| direction | enum | positive (likes/wants) or negative (dislikes/avoids) |
| source_ref | string | Reference to the original document |
| entities | list | Linked entities |
| scope | enum | USER, CUSTOMER, CLIENT, or WORLD |

Strength and direction

Preferences have two dimensions:
  • Direction indicates whether this is something the user likes (positive) or dislikes (negative)
  • Strength indicates how strongly they feel about it (0.0 = mild, 1.0 = very strong)

| Strength range | Interpretation | Example |
|---|---|---|
| 0.8 - 1.0 | Strong preference, always respect | "I absolutely need bullet points, not paragraphs" |
| 0.5 - 0.8 | Moderate preference, usually respect | "I generally prefer concise answers" |
| 0.2 - 0.5 | Mild preference, consider when relevant | "I slightly prefer morning meetings" |
| 0.0 - 0.2 | Very weak signal, informational only | "I sometimes like to see code examples" |

How preferences personalize responses

When your agent retrieves context, preferences inform how it should communicate:
  • A positive preference for “concise responses” with strength 0.9 tells the agent to keep answers short
  • A negative preference for “jargon” with strength 0.7 tells the agent to use plain language
  • A positive preference for “code examples” with strength 0.85 tells the agent to include code snippets
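One way to act on retrieved preferences is to render them into system-prompt guidance, strongest first. A sketch assuming an illustrative record shape and a 0.5 strength cutoff (neither is prescribed by the SDK):

```python
def preferences_to_instructions(preferences, min_strength=0.5):
    """Turn preference records into natural-language guidance for the agent.

    Only moderate-to-strong preferences (strength >= min_strength)
    become instructions; weaker signals are left out of the prompt.
    """
    lines = []
    for p in sorted(preferences, key=lambda p: p["strength"], reverse=True):
        if p["strength"] < min_strength:
            continue
        verb = "Do" if p["direction"] == "positive" else "Avoid"
        lines.append(f"- {verb}: {p['content']} (strength {p['strength']:.2f})")
    return "\n".join(lines)

prefs = [
    {"content": "concise responses", "strength": 0.9, "direction": "positive"},
    {"content": "jargon", "strength": 0.7, "direction": "negative"},
    {"content": "morning meetings", "strength": 0.3, "direction": "positive"},
]
guidance = preferences_to_instructions(prefs)
```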

Examples

Preference: "Prefers executive summaries before detailed explanations"
Strength: 0.88
Direction: positive
Scope: USER
Entities: [Alice Chen]

Preference: "Dislikes overly formal language"
Strength: 0.72
Direction: negative
Scope: USER
Entities: [Bob Martinez]

Preference: "Prefers all technical documentation in Markdown format"
Strength: 0.65
Direction: positive
Scope: CUSTOMER
Entities: [Acme Corp]

Episodes

Episodes capture event narratives — things that happened, interactions that occurred, activities that took place. They provide your agent with a sense of history and narrative continuity. Episodes answer the question: “What happened?”

Structure

| Field | Type | Description |
|---|---|---|
| content | string | Description of the event or episode |
| significance | float (0.0-1.0) | How important this episode is |
| source_ref | string | Reference to the original document |
| entities | list | People, places, and things involved |
| scope | enum | USER, CUSTOMER, CLIENT, or WORLD |

Significance scoring

Significance indicates how important or impactful the episode is:
  • 0.8 - 1.0: Major events — decisions made, milestones reached, problems resolved
  • 0.5 - 0.8: Notable events — discussions held, progress reported, plans shared
  • 0.2 - 0.5: Minor events — casual mentions, routine activities
  • Below 0.2: Trivial events, typically filtered out

How episodes provide narrative context

Episodes give your agent a timeline of events to reference:
  • “Last week we discussed migrating the auth service” — the agent knows the conversation happened
  • “In our previous meeting, you decided to use PostgreSQL” — the agent can reference past decisions
  • “You mentioned being behind on the Q2 deliverables” — the agent understands the current situation
Episodes are particularly valuable in ongoing relationships where the agent needs to demonstrate continuity and awareness of past interactions.
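One way to use episodes at prompt-build time is to select the notable ones and format them as a chronological recap. A sketch assuming illustrative episode dicts with a `date` field (real records also carry source_ref and entity links):

```python
def build_recap(episodes, min_significance=0.5):
    """Select notable episodes and format them as a brief history."""
    notable = [e for e in episodes if e["significance"] >= min_significance]
    notable.sort(key=lambda e: e["date"])
    return "\n".join(f"[{e['date']}] {e['content']}" for e in notable)

episodes = [
    {"date": "2026-02-10", "content": "Discussed migrating the auth service", "significance": 0.70},
    {"date": "2026-02-17", "content": "Decided to use PostgreSQL", "significance": 0.85},
    {"date": "2026-02-18", "content": "Casual chat about weekend plans", "significance": 0.15},
]
recap = build_recap(episodes)
# The trivial 0.15-significance episode is omitted from the recap.
```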

Examples

Episode: "Discussed the Q2 roadmap in the Monday standup; decided to prioritize API v3"
Significance: 0.82
Scope: CUSTOMER
Entities: [Q2 roadmap, API v3, Monday standup, engineering team]

Episode: "Alice completed the security audit and found 3 critical vulnerabilities"
Significance: 0.91
Scope: USER
Entities: [Alice Chen, security audit]

Episode: "Onboarding call with Acme Corp; walked through SDK integration and auth setup"
Significance: 0.75
Scope: CUSTOMER
Entities: [Acme Corp, SDK, onboarding]

Emotions

Emotions capture detected emotional states, sentiment, and affective signals from conversations. They enable your AI agent to respond with empathy and emotional awareness — recognizing when a user is frustrated, excited, anxious, or satisfied.

Structure

| Field | Type | Description |
|---|---|---|
| content | string | Description of the emotional state and context |
| intensity | float (0.0-1.0) | How strong the emotion is |
| source_ref | string | Reference to the original document |
| entities | list | People and topics associated with the emotion |
| scope | enum | USER, CUSTOMER, CLIENT, or WORLD |

Intensity scoring

Intensity reflects how strongly the emotion was expressed:
  • 0.8 - 1.0: Very strong emotion — explicit expressions of frustration, excitement, anger, or joy
  • 0.5 - 0.8: Moderate emotion — clear but controlled emotional signals
  • 0.2 - 0.5: Mild emotion — subtle hints or implied sentiment
  • Below 0.2: Barely perceptible emotional signals

How emotions enable empathetic responses

Emotional memories allow your agent to:
  • Acknowledge feelings: “I understand you were frustrated with the setup process last time — let me walk you through this step by step.”
  • Adjust tone: If the user was previously frustrated, the agent can be more patient and thorough
  • Celebrate wins: “Great news — I remember how excited you were about the v3 launch. How did it go?”
  • Avoid triggers: If a user expressed strong negative emotions about a topic, the agent can approach it more carefully
Emotion extraction is optional and can be disabled via the Memory Architecture Configuration if your use case does not require emotional awareness. Some applications (e.g., technical Q&A bots) may not benefit from emotion tracking.
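The tone adjustments above can be sketched as a lookup from the strongest recent emotion to a tone hint. The labels and hint strings here are illustrative assumptions, not part of the extraction schema:

```python
def tone_guidance(emotions, min_intensity=0.5):
    """Derive a tone hint from the strongest recent emotion.

    Emotions below min_intensity are treated as too weak to act on.
    """
    strong = [e for e in emotions if e["intensity"] >= min_intensity]
    if not strong:
        return "neutral"
    top = max(strong, key=lambda e: e["intensity"])
    hints = {
        "frustration": "be patient and thorough; acknowledge prior difficulty",
        "excitement": "match the user's enthusiasm; reference their win",
        "anxiety": "be reassuring; surface deadlines and next steps early",
    }
    return hints.get(top["label"], "neutral")

emotions = [
    {"label": "frustration", "intensity": 0.78},
    {"label": "excitement", "intensity": 0.40},
]
hint = tone_guidance(emotions)
```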

Examples

Emotion: "User expressed frustration with the onboarding documentation being outdated"
Intensity: 0.78
Scope: USER
Entities: [Alice Chen, onboarding, documentation]

Emotion: "User was enthusiastic about the new bulk import feature"
Intensity: 0.85
Scope: USER
Entities: [Bob Martinez, bulk import]

Emotion: "User expressed anxiety about the upcoming production migration deadline"
Intensity: 0.65
Scope: USER
Entities: [Carol Davis, production migration]

Temporal Events

Temporal events capture time-anchored information — dates, deadlines, recurring schedules, and time-sensitive facts. They enable your AI agent to be time-aware, understanding when things happened, when they will happen, and what recurs on a schedule.

Structure

| Field | Type | Description |
|---|---|---|
| content | string | Description of the temporal event |
| event_type | enum | point_in_time, recurring, or deadline |
| timestamp | datetime \| null | The specific time (for point-in-time events) |
| recurrence | string \| null | Recurrence pattern (for recurring events) |
| source_ref | string | Reference to the original document |
| entities | list | People, places, and things involved |
| scope | enum | USER, CUSTOMER, CLIENT, or WORLD |

Event types

point_in_time

A specific, one-time event anchored to a particular date or time. Examples:
  • "Board meeting on March 15, 2026"
  • "Alice started at the company on March 1, 2023"
  • "API v3 launched on January 20, 2026"

recurring

An event that repeats on a schedule. Examples:
  • "Sprint reviews every other Friday at 2pm"
  • "Monthly all-hands on the first Monday of each month"
  • "Daily standups at 9:15am PT"

deadline

A time-bound constraint or due date. Examples:
  • "Q2 OKRs due by June 30"
  • "Security audit must be completed before March 1"
  • "Contract renewal deadline: April 15"
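The structure table above can be modeled with a small dataclass. This is an illustrative client-side model, not the SDK's own type:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from typing import Optional

class EventType(str, Enum):
    POINT_IN_TIME = "point_in_time"
    RECURRING = "recurring"
    DEADLINE = "deadline"

@dataclass
class TemporalEvent:
    content: str
    event_type: EventType
    timestamp: Optional[datetime] = None  # set for point_in_time / deadline
    recurrence: Optional[str] = None      # set for recurring events

board_meeting = TemporalEvent(
    content="Board meeting on March 15, 2026",
    event_type=EventType.POINT_IN_TIME,
    timestamp=datetime(2026, 3, 15),
)
sprint_review = TemporalEvent(
    content="Sprint reviews every other Friday at 2pm",
    event_type=EventType.RECURRING,
    recurrence="every 2 weeks on Friday at 14:00",
)
```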

How temporal events enable time-aware responses

Temporal events allow your agent to:
  • Provide timely reminders: “Your Q2 OKRs are due in 2 weeks”
  • Reference schedules: “Your next sprint review is this Friday at 2pm”
  • Understand urgency: “The security audit deadline is in 3 days — should we prioritize that discussion?”
  • Contextualize timing: “Since Alice joined 3 years ago, she has deep institutional knowledge”
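Reminders like "due in 2 weeks" reduce to date arithmetic against a deadline event's timestamp. A minimal sketch (the function names are illustrative, not SDK helpers):

```python
from datetime import date

def days_until(deadline, today=None):
    """Days remaining before a deadline; negative means it has passed."""
    today = today or date.today()
    return (deadline - today).days

def deadline_reminder(content, deadline, today=None):
    """Render a human-readable reminder for a deadline-type temporal event."""
    remaining = days_until(deadline, today)
    if remaining < 0:
        return f"{content} is {-remaining} day(s) overdue"
    return f"{content} is due in {remaining} day(s)"

msg = deadline_reminder("Q2 OKRs", date(2026, 6, 30), today=date(2026, 6, 16))
# "Q2 OKRs is due in 14 day(s)"
```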

Examples

Temporal Event: "Board meeting scheduled for March 15, 2026"
Event type: point_in_time
Scope: CUSTOMER
Entities: [board meeting, Acme Corp]

Temporal Event: "Sprint reviews every other Friday at 2pm PT"
Event type: recurring
Scope: CUSTOMER
Entities: [sprint review, engineering team]

Temporal Event: "API v3 migration must be completed by June 30, 2026"
Event type: deadline
Scope: CUSTOMER
Entities: [API v3, migration]

Configuring memory types

You can control which memory types are extracted through the Memory Architecture Configuration. This is done via the ingestion.categories setting in your MACA YAML:
```yaml
ingestion:
  categories:
    - facts           # Always recommended
    - preferences     # Recommended for personalization
    - episodes        # Recommended for continuity
    - emotions        # Optional -- enable for empathetic agents
    - temporal_events # Optional -- enable for time-aware agents
```
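Before shipping a config, a client-side sanity check against the five known category names can catch typos early. A sketch; the service performs its own validation, and this helper is purely illustrative:

```python
VALID_CATEGORIES = {"facts", "preferences", "episodes", "emotions", "temporal_events"}

def validate_categories(categories):
    """Reject unknown category names before applying the config."""
    unknown = set(categories) - VALID_CATEGORIES
    if unknown:
        raise ValueError(f"Unknown memory categories: {sorted(unknown)}")
    return list(categories)

# Mirrors the scenario table below: a Q&A bot vs. a personal assistant.
qa_bot = validate_categories(["facts"])
assistant = validate_categories(["facts", "preferences", "episodes", "temporal_events"])
```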

When to disable types

| Scenario | Recommended types | Disabled types |
|---|---|---|
| Technical Q&A bot | facts | preferences, episodes, emotions, temporal_events |
| Personal assistant | facts, preferences, episodes, temporal_events | emotions (optional) |
| Support agent | facts, preferences, episodes, emotions | temporal_events (optional) |
| Full-featured agent | All five | None |

Disabling unnecessary memory types reduces extraction processing time and storage costs. Only enable the types that your application will actually use during retrieval.

Filtering by type during retrieval

When fetching context, you can filter results to include only specific memory types using the types parameter:
```python
# Retrieve only facts and preferences
context = await sdk.user.context.fetch(
    user_id="user_alice",
    customer_id="acme_corp",
    search_query=["project status"],
    types=["facts", "preferences"]
)

# Retrieve only temporal events (for a scheduling-focused query)
context = await sdk.user.context.fetch(
    user_id="user_alice",
    customer_id="acme_corp",
    search_query=["upcoming deadlines"],
    types=["temporal_events"]
)

# Retrieve all types (default behavior)
context = await sdk.user.context.fetch(
    user_id="user_alice",
    customer_id="acme_corp",
    search_query=["general update"]
)
```

Comparison table

| Aspect | Facts | Preferences | Episodes | Emotions | Temporal Events |
|---|---|---|---|---|---|
| What it captures | Verifiable statements | Likes/dislikes/choices | Event narratives | Emotional states | Time-anchored info |
| Key metric | Confidence (0-1) | Strength (0-1) + direction | Significance (0-1) | Intensity (0-1) | Event type |
| Metric direction | Higher = more certain | Higher = stronger pref | Higher = more important | Higher = stronger emotion | n/a |
| Primary use | Grounding responses in fact | Personalizing style/content | Narrative continuity | Empathetic responses | Time-aware responses |
| Example | "API limit: 1K req/min" | "Prefers concise answers" | "Discussed migration plan" | "Frustrated with docs" | "Deadline: June 30" |
| Typical volume | High | Medium | Medium | Low-Medium | Low |
| Recommended for | All applications | Personalized agents | Conversational agents | Empathetic agents | Scheduling-aware agents |

Next steps

Memories & Context

Return to the overview of how memories and context work together.

Memory Architecture

Configure which memory types are extracted and how they are stored.

SDK: Fetching Context

Learn how to retrieve and filter memories by type in your application.

Memory Scopes

Understand how memory types interact with scope boundaries.