Overview
| Type | What it captures | Key metric | Direction | Example |
|---|---|---|---|---|
| Facts | Verifiable statements | Confidence (0.0-1.0) | — | “The API supports pagination via cursor tokens” |
| Preferences | Likes, dislikes, choices | Strength (0.0-1.0) | Positive / Negative | “User prefers dark mode” |
| Episodes | Event narratives | Significance (0.0-1.0) | — | “Discussed Q2 roadmap in Monday standup” |
| Emotions | Emotional states | Intensity (0.0-1.0) | — | “User expressed frustration with onboarding” |
| Temporal Events | Time-anchored info | Event type | — | “Board meeting every first Monday” |
Facts
Facts are the backbone of long-term memory. A fact is a verifiable, declarative statement extracted from ingested content. Facts represent knowledge about people, organizations, products, processes, and the world.

Structure
| Field | Type | Description |
|---|---|---|
| content | string | The factual statement itself |
| confidence | float (0.0-1.0) | How certain the extraction pipeline is about this fact |
| source_ref | string | Reference to the original document and location |
| entities | list | Linked entities (people, orgs, concepts) |
| scope | enum | USER, CUSTOMER, CLIENT, or WORLD |
Confidence scoring
The confidence score reflects how certain the pipeline is that the extracted fact is accurate and well-formed:

- 0.9 - 1.0: Explicitly stated, unambiguous facts (“The company was founded in 2019”)
- 0.7 - 0.9: Strongly implied or clearly inferable facts (“The team uses agile methodology” — inferred from sprint references)
- 0.5 - 0.7: Moderately confident extractions, may need verification (“The project is approximately 60% complete”)
- Below 0.5: Low confidence, typically filtered out during retrieval ranking
What makes a good fact
The extraction pipeline looks for statements that are:

- Specific: “API rate limit is 1,000 req/min” rather than “There are rate limits”
- Verifiable: statements that can be confirmed or denied
- Self-contained: understandable without needing the full surrounding context
- Attributable: linked to specific entities or scopes
How facts are used in retrieval
Facts are the most commonly retrieved memory type. During retrieval:

- Facts are ranked by a combination of relevance (semantic similarity to the query), recency (when the fact was created or last confirmed), and confidence (extraction certainty)
- Higher-confidence facts surface before lower-confidence ones at the same relevance level
- Conflicting facts at different scope levels are resolved by scope priority (user > customer > client > world)
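The ranking described above can be sketched in a few lines. This is an illustrative model only: the weights, the `0.0-1.0` relevance/recency signals, and the record shape are assumptions, not the pipeline's actual internals. Only the scope priority order (user > customer > client > world) comes from this page.

```python
# Illustrative sketch of fact ranking; weights are hypothetical.
SCOPE_PRIORITY = {"USER": 3, "CUSTOMER": 2, "CLIENT": 1, "WORLD": 0}

def rank_facts(facts):
    """Sort facts so more relevant, recent, and confident ones come first."""
    def score(fact):
        # Hypothetical weighting of the three signals described above.
        return (0.5 * fact["relevance"]
                + 0.3 * fact["recency"]
                + 0.2 * fact["confidence"])
    return sorted(facts, key=score, reverse=True)

def resolve_conflict(candidates):
    """Resolve conflicting facts by scope priority (user > customer > client > world)."""
    return max(candidates, key=lambda f: SCOPE_PRIORITY[f["scope"]])
```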
Examples
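A few example fact records, shown here as Python dicts following the field table above. The dict shape and `source_ref` values are illustrative assumptions; the exact serialized form your SDK returns may differ.

```python
# Illustrative fact records; shape and source_ref values are assumptions.
facts = [
    {
        "content": "The API supports pagination via cursor tokens",
        "confidence": 0.95,  # explicitly stated, unambiguous
        "source_ref": "doc:api-guide#pagination",
        "entities": ["API"],
        "scope": "WORLD",
    },
    {
        "content": "The team uses agile methodology",
        "confidence": 0.8,   # inferred from repeated sprint references
        "source_ref": "doc:standup-notes",
        "entities": ["team"],
        "scope": "CUSTOMER",
    },
    {
        "content": "The project may be behind schedule",
        "confidence": 0.4,   # weak extraction, likely filtered out
        "source_ref": "doc:chat-log",
        "entities": ["project"],
        "scope": "USER",
    },
]

# Facts below 0.5 confidence are typically filtered out during retrieval ranking.
retrievable = [f for f in facts if f["confidence"] >= 0.5]
```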
Preferences
Preferences capture likes, dislikes, behavioral tendencies, and personal choices. They are what enable your AI agent to personalize its responses — communicating in the style the user prefers, prioritizing the topics they care about, and avoiding the things they dislike.

Structure
| Field | Type | Description |
|---|---|---|
| content | string | The preference statement |
| strength | float (0.0-1.0) | How strong the preference is |
| direction | enum | positive (likes/wants) or negative (dislikes/avoids) |
| source_ref | string | Reference to the original document |
| entities | list | Linked entities |
| scope | enum | USER, CUSTOMER, CLIENT, or WORLD |
Strength and direction
Preferences have two dimensions:

- Direction indicates whether this is something the user likes (`positive`) or dislikes (`negative`)
- Strength indicates how strongly they feel about it (0.0 = mild, 1.0 = very strong)
| Strength range | Interpretation | Example |
|---|---|---|
| 0.8 - 1.0 | Strong preference, always respect | “I absolutely need bullet points, not paragraphs” |
| 0.5 - 0.8 | Moderate preference, usually respect | “I generally prefer concise answers” |
| 0.2 - 0.5 | Mild preference, consider when relevant | “I slightly prefer morning meetings” |
| 0.0 - 0.2 | Very weak signal, informational only | “I sometimes like to see code examples” |
How preferences personalize responses
When your agent retrieves context, preferences inform how it should communicate:

- A `positive` preference for “concise responses” with strength 0.9 tells the agent to keep answers short
- A `negative` preference for “jargon” with strength 0.7 tells the agent to use plain language
- A `positive` preference for “code examples” with strength 0.85 tells the agent to include code snippets
Examples
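Example preference records, with a small helper that maps strength onto the interpretation bands from the table above. The record shape is an assumption; the band boundaries come from this page.

```python
# Illustrative preference records; the record shape is an assumption.
preferences = [
    {"content": "Prefers concise answers", "strength": 0.9,
     "direction": "positive", "scope": "USER"},
    {"content": "Dislikes jargon", "strength": 0.7,
     "direction": "negative", "scope": "USER"},
    {"content": "Slightly prefers morning meetings", "strength": 0.3,
     "direction": "positive", "scope": "USER"},
]

def interpretation(strength):
    """Map a strength score onto the bands in the table above."""
    if strength >= 0.8:
        return "always respect"
    if strength >= 0.5:
        return "usually respect"
    if strength >= 0.2:
        return "consider when relevant"
    return "informational only"
```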
Episodes
Episodes capture event narratives — things that happened, interactions that occurred, activities that took place. They provide your agent with a sense of history and narrative continuity. Episodes answer the question: “What happened?”

Structure
| Field | Type | Description |
|---|---|---|
| content | string | Description of the event or episode |
| significance | float (0.0-1.0) | How important this episode is |
| source_ref | string | Reference to the original document |
| entities | list | People, places, and things involved |
| scope | enum | USER, CUSTOMER, CLIENT, or WORLD |
Significance scoring
Significance indicates how important or impactful the episode is:

- 0.8 - 1.0: Major events — decisions made, milestones reached, problems resolved
- 0.5 - 0.8: Notable events — discussions held, progress reported, plans shared
- 0.2 - 0.5: Minor events — casual mentions, routine activities
- Below 0.2: Trivial events, typically filtered out
How episodes provide narrative context
Episodes give your agent a timeline of events to reference:

- “Last week we discussed migrating the auth service” — the agent knows the conversation happened
- “In our previous meeting, you decided to use PostgreSQL” — the agent can reference past decisions
- “You mentioned being behind on the Q2 deliverables” — the agent understands the current situation
Examples
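Example episode records spanning the significance bands above, plus the trivial-event filtering this page describes. The record shape is an assumption.

```python
# Illustrative episode records; the record shape is an assumption.
episodes = [
    {"content": "Decided to use PostgreSQL for the new service",
     "significance": 0.9, "entities": ["PostgreSQL"], "scope": "CUSTOMER"},
    {"content": "Discussed Q2 roadmap in Monday standup",
     "significance": 0.6, "entities": ["Q2 roadmap"], "scope": "USER"},
    {"content": "Mentioned grabbing coffee before the call",
     "significance": 0.1, "entities": [], "scope": "USER"},
]

# Trivial episodes (significance below 0.2) are typically filtered out;
# the rest form a timeline the agent can reference, most significant first.
timeline = sorted(
    (e for e in episodes if e["significance"] >= 0.2),
    key=lambda e: e["significance"],
    reverse=True,
)
```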
Emotions
Emotions capture detected emotional states, sentiment, and affective signals from conversations. They enable your AI agent to respond with empathy and emotional awareness — recognizing when a user is frustrated, excited, anxious, or satisfied.

Structure
| Field | Type | Description |
|---|---|---|
| content | string | Description of the emotional state and context |
| intensity | float (0.0-1.0) | How strong the emotion is |
| source_ref | string | Reference to the original document |
| entities | list | People and topics associated with the emotion |
| scope | enum | USER, CUSTOMER, CLIENT, or WORLD |
Intensity scoring
Intensity reflects how strongly the emotion was expressed:

- 0.8 - 1.0: Very strong emotion — explicit expressions of frustration, excitement, anger, or joy
- 0.5 - 0.8: Moderate emotion — clear but controlled emotional signals
- 0.2 - 0.5: Mild emotion — subtle hints or implied sentiment
- Below 0.2: Barely perceptible emotional signals
How emotions enable empathetic responses
Emotional memories allow your agent to:

- Acknowledge feelings: “I understand you were frustrated with the setup process last time — let me walk you through this step by step.”
- Adjust tone: If the user was previously frustrated, the agent can be more patient and thorough
- Celebrate wins: “Great news — I remember how excited you were about the v3 launch. How did it go?”
- Avoid triggers: If a user expressed strong negative emotions about a topic, the agent can approach it more carefully
Emotion extraction is optional and can be disabled via the Memory Architecture Configuration if your use case does not require emotional awareness. Some applications (e.g., technical Q&A bots) may not benefit from emotion tracking.
Examples
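Example emotion records, with a helper mapping intensity onto the bands above. The record shape is an assumption; the band boundaries come from this page.

```python
# Illustrative emotion records; the record shape is an assumption.
emotions = [
    {"content": "User expressed frustration with onboarding",
     "intensity": 0.85, "entities": ["onboarding"], "scope": "USER"},
    {"content": "User seemed excited about the v3 launch",
     "intensity": 0.6, "entities": ["v3 launch"], "scope": "USER"},
]

def intensity_band(intensity):
    """Map an intensity score onto the bands described above."""
    if intensity >= 0.8:
        return "very strong"
    if intensity >= 0.5:
        return "moderate"
    if intensity >= 0.2:
        return "mild"
    return "barely perceptible"
```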
Temporal Events
Temporal events capture time-anchored information — dates, deadlines, recurring schedules, and time-sensitive facts. They enable your AI agent to be time-aware, understanding when things happened, when they will happen, and what recurs on a schedule.

Structure
| Field | Type | Description |
|---|---|---|
| content | string | Description of the temporal event |
| event_type | enum | point_in_time, recurring, or deadline |
| timestamp | datetime or null | The specific time (for point-in-time events) |
| recurrence | string or null | Recurrence pattern (for recurring events) |
| source_ref | string | Reference to the original document |
| entities | list | People, places, and things involved |
| scope | enum | USER, CUSTOMER, CLIENT, or WORLD |
Event types
Point-in-time
A specific, one-time event anchored to a particular date or time.

Examples:
- “Board meeting on March 15, 2026”
- “Alice started at the company on March 1, 2023”
- “API v3 launched on January 20, 2026”
Recurring
An event that repeats on a schedule.

Examples:
- “Sprint reviews every other Friday at 2pm”
- “Monthly all-hands on the first Monday of each month”
- “Daily standups at 9:15am PT”
Deadline
A time-bound constraint or due date.

Examples:
- “Q2 OKRs due by June 30”
- “Security audit must be completed before March 1”
- “Contract renewal deadline: April 15”
How temporal events enable time-aware responses
Temporal events allow your agent to:

- Provide timely reminders: “Your Q2 OKRs are due in 2 weeks”
- Reference schedules: “Your next sprint review is this Friday at 2pm”
- Understand urgency: “The security audit deadline is in 3 days — should we prioritize that discussion?”
- Contextualize timing: “Since Alice joined 3 years ago, she has deep institutional knowledge”
Examples
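Example records covering all three event types. The record shape, timestamp format, and recurrence string format are assumptions; only the field names and event types come from the table above.

```python
# Illustrative temporal event records; shape and formats are assumptions.
events = [
    {"content": "Board meeting on March 15, 2026",
     "event_type": "point_in_time",
     "timestamp": "2026-03-15T09:00:00Z", "recurrence": None},
    {"content": "Sprint reviews every other Friday at 2pm",
     "event_type": "recurring",
     "timestamp": None, "recurrence": "every other Friday 14:00"},
    {"content": "Q2 OKRs due by June 30",
     "event_type": "deadline",
     "timestamp": "2026-06-30T23:59:59Z", "recurrence": None},
]

# An agent checking for upcoming due dates would look at deadline events.
deadlines = [e for e in events if e["event_type"] == "deadline"]
```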
Configuring memory types
You can control which memory types are extracted through the Memory Architecture Configuration. This is done via the `ingestion.categories` setting in your MACA YAML:
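A minimal sketch of what that setting might look like. Only the `ingestion.categories` key is documented on this page; the surrounding YAML structure and exact category spellings are assumptions.

```yaml
# Hypothetical MACA YAML sketch; only ingestion.categories is documented here.
ingestion:
  categories:
    - facts
    - preferences
    - episodes
    - temporal_events
    # emotions omitted: disabled for a use case without emotional awareness
```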
When to disable types
| Scenario | Recommended types | Disabled types |
|---|---|---|
| Technical Q&A bot | facts | preferences, episodes, emotions, temporal_events |
| Personal assistant | facts, preferences, episodes, temporal_events | emotions (optional) |
| Support agent | facts, preferences, episodes, emotions | temporal_events (optional) |
| Full-featured agent | All five | None |
Disabling unnecessary memory types reduces extraction processing time and storage costs. Only enable the types that your application will actually use during retrieval.
Filtering by type during retrieval
When fetching context, you can filter results to include only specific memory types using the `types` parameter:
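A runnable sketch of the effect of that filter. The real SDK applies the `types` parameter server-side, so this client-side filter and the record shape are illustrative assumptions only.

```python
# Illustrative client-side model of the `types` filter; the real SDK
# applies this filtering server-side during retrieval.
def filter_by_type(memories, types):
    """Keep only memories whose type is in the requested set."""
    allowed = set(types)
    return [m for m in memories if m["type"] in allowed]

memories = [
    {"type": "facts", "content": "API rate limit is 1,000 req/min"},
    {"type": "emotions", "content": "User expressed frustration with onboarding"},
    {"type": "episodes", "content": "Discussed migration plan"},
]

# Request only facts and episodes; emotions are excluded from the result.
context = filter_by_type(memories, types=["facts", "episodes"])
```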
Comparison table
| Aspect | Facts | Preferences | Episodes | Emotions | Temporal Events |
|---|---|---|---|---|---|
| What it captures | Verifiable statements | Likes/dislikes/choices | Event narratives | Emotional states | Time-anchored info |
| Key metric | Confidence (0-1) | Strength (0-1) + direction | Significance (0-1) | Intensity (0-1) | Event type |
| Metric direction | Higher = more certain | Higher = stronger pref | Higher = more important | Higher = stronger emotion | — |
| Primary use | Grounding responses in fact | Personalizing style/content | Narrative continuity | Empathetic responses | Time-aware responses |
| Example | “API limit: 1K req/min” | “Prefers concise answers” | “Discussed migration plan” | “Frustrated with docs” | “Deadline: June 30” |
| Typical volume | High | Medium | Medium | Low-Medium | Low |
| Recommended for | All applications | Personalized agents | Conversational agents | Empathetic agents | Scheduling-aware agents |
Next steps
Memories & Context
Return to the overview of how memories and context work together.
Memory Architecture
Configure which memory types are extracted and how they are stored.
SDK: Fetching Context
Learn how to retrieve and filter memories by type in your application.
Memory Scopes
Understand how memory types interact with scope boundaries.