Every SDK method returns a Pydantic model. This page lists them all in one place so you can grep for field names without hunting through individual method docs.
All types live in `maximem_synap` and are importable from the top-level package:

```python
from maximem_synap import (
    Fact, Preference, Episode, Emotion, TemporalEvent,
    ContextResponse, ResponseMetadata,
    CreateMemoryResponse, IngestStatus, MemoryStatusResponse,
    CompactionResponse, CompactionStatusResponse, ContextForPromptResponse,
    CompactionLevel,
)
```
## Memory item types

These are the atomic units of structured memory. A `ContextResponse` is a bag of these.
### Fact

```python
class Fact(BaseModel):
    id: str                                  # opaque identifier
    content: str                             # natural-language fact
    confidence: float                        # 0.0 – 1.0
    source: str                              # memory ID this fact was extracted from
    extracted_at: datetime
    metadata: Dict[str, Any] = {}
    event_date: Optional[datetime] = None    # when the fact became true, if known
    valid_until: Optional[datetime] = None   # when the fact stopped being true, if known
    temporal_category: Optional[str] = None  # "perpetual" | "temporal_fact" | "episode"
    temporal_confidence: float = 0.0
```
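One practical use of `valid_until` is filtering out facts that are no longer true. A minimal sketch, using a plain dataclass as a stand-in for the SDK's `Fact` (only the fields used here) and an assumed interpretation that "no `valid_until`" means "still current":

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Illustrative stand-in for maximem_synap.Fact, not the real model.
@dataclass
class Fact:
    content: str
    confidence: float
    valid_until: Optional[datetime] = None

def is_current(fact: Fact, now: Optional[datetime] = None) -> bool:
    """Treat a fact as current if valid_until is unset or in the future.
    (This interpretation is an assumption, not SDK-defined behavior.)"""
    now = now or datetime.now(timezone.utc)
    return fact.valid_until is None or fact.valid_until > now

stale = Fact("User lives in Berlin", 0.9,
             valid_until=datetime(2020, 1, 1, tzinfo=timezone.utc))
print(is_current(stale))  # False
```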
### Preference

```python
class Preference(BaseModel):
    id: str
    category: str      # e.g., "communication", "dietary", "ui"
    content: str
    strength: float    # 0.0 – 1.0 — NOT named `confidence`
    source: str = ""
    extracted_at: datetime
    metadata: Dict[str, Any] = {}
    event_date: Optional[datetime] = None
    valid_until: Optional[datetime] = None
    temporal_category: Optional[str] = None
```
### Episode

```python
class Episode(BaseModel):
    id: str
    summary: str                   # narrative description — NOT `content`
    occurred_at: datetime
    significance: float            # 0.0 – 1.0
    participants: List[str] = []   # entity IDs involved
    metadata: Dict[str, Any] = {}
    event_date: Optional[datetime] = None
    valid_until: Optional[datetime] = None
    temporal_category: Optional[str] = None
    temporal_confidence: float = 0.0
    source_evidence: Optional[List[str]] = None
```
### Emotion

```python
class Emotion(BaseModel):
    id: str
    emotion_type: str   # "frustrated" | "satisfied" | "confused" | …
    intensity: float    # 0.0 – 1.0
    detected_at: datetime
    context: str        # what triggered the emotion
    metadata: Dict[str, Any] = {}
    event_date: Optional[datetime] = None
    valid_until: Optional[datetime] = None
    temporal_category: Optional[str] = None
    temporal_confidence: float = 0.0
    source_evidence: Optional[List[str]] = None
```
### TemporalEvent

```python
class TemporalEvent(BaseModel):
    id: str
    content: str
    confidence: float
    source: str
    extracted_at: datetime
    metadata: Dict[str, Any] = {}
    event_date: Optional[datetime] = None
    valid_until: Optional[datetime] = None
    temporal_category: str       # "perpetual" | "temporal_fact" | "episode"
    temporal_confidence: float   # 0.0 – 1.0
```
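The three `temporal_category` values suggest different handling downstream. A sketch of one possible routing, using a stand-in dataclass (the routing policy is illustrative, not SDK-defined):

```python
from dataclasses import dataclass

# Minimal stand-in for TemporalEvent, with only the fields used here.
@dataclass
class TemporalEvent:
    content: str
    temporal_category: str  # "perpetual" | "temporal_fact" | "episode"

events = [
    TemporalEvent("User's name is Ada", "perpetual"),
    TemporalEvent("Contract runs through Q4", "temporal_fact"),
    TemporalEvent("Called support on Tuesday", "episode"),
]

# Perpetual facts can feed a long-lived profile; dated items are better
# suited to recency-weighted context.
profile = [e.content for e in events if e.temporal_category == "perpetual"]
timeline = [e.content for e in events if e.temporal_category != "perpetual"]

print(profile)  # ["User's name is Ada"]
```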
## Context responses

### ContextResponse

Returned by `conversation.context.fetch`, `user.context.fetch`, `customer.context.fetch`, and `client.context.fetch`.

```python
class ContextResponse(BaseModel):
    facts: List[Fact] = []
    preferences: List[Preference] = []
    episodes: List[Episode] = []
    emotions: List[Emotion] = []
    temporal_events: List[TemporalEvent] = []
    metadata: ResponseMetadata
    # Optional cross-scope diagnostics
    earliest_event_date: Optional[datetime] = None
    latest_event_date: Optional[datetime] = None
```
Iterate over all items in priority order:

```python
for item in ctx.facts + ctx.preferences + ctx.episodes + ctx.emotions + ctx.temporal_events:
    print(item)
```
### ResponseMetadata

```python
class ResponseMetadata(BaseModel):
    correlation_id: str   # log this on errors
    ttl_seconds: int      # local-cache validity
    source: str           # "cache" or "cloud"
    compaction_applied: Optional[CompactionLevel] = None  # enum if compaction ran, else None
    retrieved_at: datetime
```

`compaction_applied` is not a bool: it is `None` when no compaction ran, and a `CompactionLevel` enum value when one did. Test it with `if meta.compaction_applied is not None`.
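The `None` check above can be sketched with stdlib stand-ins (a dataclass and a trimmed-down enum, not the real SDK types):

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class CompactionLevel(str, Enum):  # trimmed stand-in for the SDK enum
    ADAPTIVE = "adaptive"
    BALANCED = "balanced"

@dataclass
class ResponseMetadata:  # minimal stand-in, only the fields used here
    correlation_id: str
    compaction_applied: Optional[CompactionLevel] = None

meta = ResponseMetadata("abc-123", compaction_applied=CompactionLevel.BALANCED)

# The documented contract is "None means no compaction ran", so test
# identity against None rather than truthiness.
if meta.compaction_applied is not None:
    print(f"compacted at level {meta.compaction_applied.value}")
```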
## Ingestion responses

### CreateMemoryResponse

Returned by `memories.create`. Ingestion is async — this comes back immediately with an `ingestion_id` you can poll.

```python
class CreateMemoryResponse(BaseModel):
    ingestion_id: UUID    # poll status with sdk.memories.status(ingestion_id)
    document_id: str
    status: IngestStatus  # see enum below
    queued_at: datetime
    error_message: Optional[str] = None  # populated when status == FAILED
    memory_ids: List[UUID] = []          # IDs of memories that will be created
```
### IngestStatus enum

```python
class IngestStatus(str, Enum):
    QUEUED = "queued"
    PROCESSING = "processing"
    COMPLETED = "completed"
    FAILED = "failed"
```
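Since ingestion is async, a common pattern is a poll loop that waits for a terminal state. A hedged sketch: the helper takes an injected `fetch_status` callable (in real code that would wrap something like `sdk.memories.status(ingestion_id).status`; the wrapper shape here is an assumption), so it can be exercised without the SDK:

```python
import time
from enum import Enum
from typing import Callable

class IngestStatus(str, Enum):  # mirrors the enum documented above
    QUEUED = "queued"
    PROCESSING = "processing"
    COMPLETED = "completed"
    FAILED = "failed"

def wait_for_ingestion(fetch_status: Callable[[], IngestStatus],
                       poll_interval: float = 1.0,
                       timeout: float = 60.0) -> IngestStatus:
    """Poll until the ingestion reaches a terminal state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in (IngestStatus.COMPLETED, IngestStatus.FAILED):
            return status
        time.sleep(poll_interval)
    raise TimeoutError("ingestion did not finish in time")

# Simulated status sequence standing in for real SDK calls.
states = iter([IngestStatus.QUEUED, IngestStatus.PROCESSING, IngestStatus.COMPLETED])
result = wait_for_ingestion(lambda: next(states), poll_interval=0.0)
print(result.value)  # completed
```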
### MemoryStatusResponse

Returned by `memories.status(ingestion_id)`.

```python
class MemoryStatusResponse(BaseModel):
    ingestion_id: UUID
    status: IngestStatus
    progress: Optional[float] = None   # 0.0 – 1.0
    memories_extracted: Optional[int] = None
    error_message: Optional[str] = None
    completed_at: Optional[datetime] = None
```
## Compaction responses

### CompactionResponse

Returned by `conversation.context.compact` and `get_compacted`.

```python
class CompactionResponse(BaseModel):
    compacted_context: str   # the actual compacted text
    original_token_count: int
    compacted_token_count: int
    compression_ratio: float
    level_applied: CompactionLevel
    metadata: ResponseMetadata
    compaction_id: Optional[str] = None
    strategy_used: Optional[str] = None
    validation_score: Optional[float] = None
    validation_passed: Optional[bool] = None
    quality_warning: Optional[bool] = None   # True if quality below threshold
    # Typed extractions surfaced from the underlying conversation
    facts: List[Dict[str, Any]] = []
    decisions: List[Dict[str, Any]] = []
    preferences: List[Dict[str, Any]] = []
    current_state: Optional[Dict[str, Any]] = None
```
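For reporting, the two token counts are enough to compute savings directly. A small sketch with hypothetical numbers (the direction of `compression_ratio` isn't pinned down by the field list above, so deriving savings from the counts avoids guessing it):

```python
# Hypothetical counts standing in for a CompactionResponse's fields.
original_token_count = 12_000
compacted_token_count = 3_000

# Savings as a percentage of the original size.
savings_pct = 100 * (1 - compacted_token_count / original_token_count)
print(f"{savings_pct:.0f}% fewer tokens")  # 75% fewer tokens
```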
### CompactionStatusResponse

Returned by `get_compaction_status`. This is a Pydantic model — access fields as attributes, not dict keys.

```python
class CompactionStatusResponse(BaseModel):
    conversation_id: str
    status: str   # "completed" | "in_progress" | "failed" | "none"
    compaction_id: Optional[str] = None
    completed_at: Optional[datetime] = None
    compression_ratio: Optional[float] = None
    validation_score: Optional[float] = None
    estimated_completion_seconds: Optional[int] = None
    error_message: Optional[str] = None
    latest_version: Optional[int] = None
    latest_created_at: Optional[datetime] = None
```
### ContextForPromptResponse

Returned by `get_context_for_prompt`. Optimized for direct injection into an LLM system prompt.

```python
class ContextForPromptResponse(BaseModel):
    formatted_context: Optional[str] = None  # ready to splice into a system prompt
    available: bool = False                  # is there compacted context yet?
    is_stale: bool = False                   # new messages since the last compaction?
    compression_ratio: Optional[float] = None
    validation_score: Optional[float] = None
    compaction_age_seconds: Optional[int] = None
    quality_warning: bool = False            # default False, never None
    recent_messages: List[RecentMessage] = []
    recent_message_count: int = 0
    compacted_message_count: int = 0
    total_message_count: int = 0
```
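A sketch of splicing `formatted_context` into a system prompt, gated on `available`. The stand-in dataclass, base prompt, and heading text are all illustrative, not SDK-defined:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContextForPromptResponse:  # minimal stand-in, only the fields used here
    formatted_context: Optional[str] = None
    available: bool = False
    is_stale: bool = False

BASE_PROMPT = "You are a helpful assistant."

def build_system_prompt(ctx: ContextForPromptResponse) -> str:
    # Splice context in only when the SDK says it exists. A stale block is
    # still usable; callers may additionally want to trigger a fresh compaction.
    if ctx.available and ctx.formatted_context:
        return f"{BASE_PROMPT}\n\n# Known context\n{ctx.formatted_context}"
    return BASE_PROMPT

print(build_system_prompt(ContextForPromptResponse()))  # no context yet, base prompt only
```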
### CompactionLevel enum

```python
class CompactionLevel(str, Enum):
    ADAPTIVE = "adaptive"
    AGGRESSIVE = "aggressive"
    BALANCED = "balanced"
    CONSERVATIVE = "conservative"
```

(The SDK also exposes `LOW`, `MEDIUM`, and `HIGH` for historical reasons — prefer the four canonical values above.)
## Type-checking tips

- Every response model has `model_config = {"extra": "allow"}`. That means if the cloud adds a new field, the SDK won't error — but you also won't see the new field as a typed attribute. Access unknown fields via `response.model_extra`.
- Datetime fields are timezone-aware (UTC). When comparing, use `datetime.now(timezone.utc)`, not `datetime.utcnow()`.
- For runtime validation (e.g., at your application boundary), call `.model_validate(...)` rather than constructing manually — Pydantic enforces all constraints.
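The aware-vs-naive pitfall from the second tip, in miniature (the `extracted_at` value is a made-up example of what the SDK would return):

```python
from datetime import datetime, timezone

# extracted_at as the SDK would return it: timezone-aware, UTC.
extracted_at = datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc)

age = datetime.now(timezone.utc) - extracted_at   # aware - aware: fine

try:
    datetime.utcnow() - extracted_at              # naive - aware: TypeError
except TypeError as exc:
    print("naive vs aware:", exc)
```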