This guide covers the full spectrum of Synap migration scenarios: upgrading the SDK, migrating MACA configurations, changing scope strategies, switching embedding models, and moving from development to production. Each section includes step-by-step procedures and rollback strategies.
For production deployments, pin to a specific version rather than using --upgrade. This prevents unexpected upgrades when rebuilding containers or refreshing dependencies.
```txt
# requirements.txt
maximem-synap==1.3.0
```
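A startup check can catch version drift early when a container is rebuilt against an unpinned dependency. The helper below is a sketch using standard package metadata; the package name follows the pin above.

```python
from importlib.metadata import PackageNotFoundError, version


def assert_pinned(package: str, expected: str) -> str:
    """Raise if the installed version of `package` differs from the pinned one."""
    try:
        installed = version(package)
    except PackageNotFoundError as exc:
        raise RuntimeError(f"{package} is not installed") from exc
    if installed != expected:
        raise RuntimeError(
            f"{package} {installed} is installed, but {expected} is pinned"
        )
    return installed


# At application startup:
# assert_pinned("maximem-synap", "1.3.0")
```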
Step 3: Test in Staging
Deploy the upgraded SDK to your staging environment first. Run your full test suite and verify ingestion, retrieval, and any type-specific handling before promoting the upgrade to production.
Follow the standard config migration workflow (dry run, staging, production).
Step 3: Understand the Impact
After applying:
- New ingestions will extract temporal events in addition to facts and preferences
- Existing memories are not re-processed; only newly ingested content will have temporal events extracted
- Retrieval will start returning temporal events alongside other memory types
If you need temporal events extracted from content that was ingested before the category was enabled, you must re-ingest that content. There is no automatic re-processing mechanism.
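One way to scope the re-ingestion is to compare each document's ingestion time against the moment you enabled the category. This is an illustrative helper, not part of the Synap API; the cutoff timestamp is an assumption you would record yourself.

```python
from datetime import datetime, timezone

# Illustrative: record when you enabled the temporal_events category.
CATEGORY_ENABLED_AT = datetime(2025, 1, 15, tzinfo=timezone.utc)


def needs_reingest(ingested_at: datetime,
                   enabled_at: datetime = CATEGORY_ENABLED_AT) -> bool:
    """True if the content predates the category change and must be
    re-ingested to get temporal events extracted."""
    return ingested_at < enabled_at
```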
Step 4: Update Your Application Code
If your application code filters by memory type, update it to handle the new category:
```python
context = await sdk.conversation.context.fetch(
    conversation_id=conv_id,
    search_query=[query],
    types=["facts", "preferences", "temporal_events"],  # Add new type
    max_results=10,
    mode="fast",
)

# Handle the new type in your prompt builder
for event in context.temporal_events:
    memory_lines.append(
        f"- Upcoming: {event.content} ({event.timestamp})"
    )
```
Changing the primary_scope in your MACA config affects how new memories are indexed and how retrieval optimizations are applied. This is a significant change that requires careful planning.
Changing primary_scope does not retroactively re-scope existing memories. Memories ingested with user_id remain at user scope regardless of the primary_scope setting. The primary_scope controls indexing optimization and default behavior for new memories, not the scope assignment of existing data.
A common migration path for applications that start as single-user and evolve to multi-user.
Step 1: Audit Existing Memories
Understand what you have today. If your instance has been running with primary_scope: "instance", most memories are likely stored without user_id or customer_id:
- These memories sit at client scope
- They will remain visible to all users after the migration
- They will NOT become user-scoped retroactively
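The audit can be as simple as counting which exported memories carry a user_id. A minimal sketch, assuming memories are exported as dicts (field names illustrative):

```python
def audit_scope(memories):
    """Count memories by effective scope: entries without a user_id were
    ingested at instance scope and will stay shared after the migration."""
    unscoped = sum(1 for m in memories if not m.get("user_id"))
    return {
        "instance_scoped": unscoped,
        "user_scoped": len(memories) - unscoped,
    }
```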
Step 2: Update the Config
```yaml
storage:
  scoping:
    primary_scope: "user"  # Changed from "instance"
```
Step 3: Update Your Ingestion Code
Ensure all ingestion calls now include user_id:
```python
# Before (instance scope -- no user_id)
await sdk.memories.create(
    document=conversation,
    document_type="ai-chat-conversation",
    mode="fast",
)

# After (user scope -- always pass user_id)
await sdk.memories.create(
    document=conversation,
    document_type="ai-chat-conversation",
    user_id=current_user.id,       # Now required
    customer_id=current_user.org,  # Include if applicable
    mode="fast",
)
```
Step 4: Update Your Retrieval Code
Ensure all retrieval calls include the appropriate scope identifiers:
```python
# Before (instance scope)
context = await sdk.conversation.context.fetch(
    conversation_id=conv_id,
    search_query=[query],
    mode="fast",
)

# After (user scope)
context = await sdk.conversation.context.fetch(
    conversation_id=conv_id,
    search_query=[query],
    mode="fast",
)

# OR use scoped retrieval for user-specific context
context = await sdk.user.context.fetch(
    user_id=current_user.id,
    customer_id=current_user.org,
)
```
Step 5: Handle Legacy Memories
Decide what to do with memories ingested before the scope change:
Option A: Leave them at client scope. All users see them as shared context. This is the simplest option and often acceptable.
Option B: Re-ingest with user_id. If you have the original source content and know which user it belongs to, re-ingest with proper scoping. Older memories will eventually be superseded.
Option C: Accept dual behavior. Old memories are shared, new memories are scoped. Over time, the user-scoped memories will dominate as new content is ingested.
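If you choose Option B, you only re-ingest content whose owner is known. A minimal sketch, assuming legacy source docs are dicts with an optional user_id:

```python
def partition_for_reingest(docs):
    """Split legacy source docs into those re-ingestable with proper
    scoping (known owner) and those left as shared instance-level context."""
    reingest, shared = [], []
    for doc in docs:
        (reingest if doc.get("user_id") else shared).append(doc)
    return reingest, shared
```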
If you need to switch embedding models (e.g., from OpenAI’s text-embedding-ada-002 with 1536 dimensions to text-embedding-3-small with a different dimension), this requires careful handling because embeddings with different dimensions are incompatible.
Changing embedding_dimension in your MACA config does not re-embed existing memories. Old memories retain their original embeddings and will not match queries using the new model. This can cause retrieval degradation during the transition period.
Option A: Gradual Migration (Recommended)
Let old embeddings age out naturally while new content uses the new model. Best when:
- You have a retention policy that will eventually purge old embeddings
- Old content is less critical than new content
- You can tolerate a transition period of reduced retrieval quality for older memories
Steps:
1. Update embedding_dimension in your MACA config
2. Apply the config change
3. All new ingestions use the new embedding model
4. Old memories remain searchable (with reduced relevance scoring) until retention cleanup removes them
5. Monitor retrieval quality during the transition period
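To monitor the transition, you can track what fraction of retrieved results were embedded with the new model. The embedding_version field below is hypothetical: tag it yourself in metadata at ingestion time if the platform does not expose one.

```python
def new_model_ratio(results, new_version="v2"):
    """Fraction of retrieved memories embedded with the new model.
    A ratio climbing toward 1.0 means the transition is progressing."""
    if not results:
        return 0.0
    hits = sum(1 for r in results if r.get("embedding_version") == new_version)
    return hits / len(results)
```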
Option B: Full Re-Index
Re-ingest all source content to generate new embeddings. Best when:
- All historical content is equally important
- You have access to all original source documents
- You can afford the ingestion costs and processing time
Steps:
1. Update embedding_dimension in your MACA config
2. Apply the config change
3. Re-ingest all source documents (use batch_create() for efficiency)
4. Old memories with stale embeddings will be superseded by the new versions
5. Optionally trigger a retention cleanup to remove the old versions
```python
# Re-ingest in batches
for batch in chunked(source_documents, size=50):
    await sdk.memories.batch_create(
        documents=[
            {
                "document": doc.content,
                "document_type": doc.type,
                "user_id": doc.user_id,
                "customer_id": doc.customer_id,
                "metadata": doc.metadata,
            }
            for doc in batch
        ],
        fail_fast=False,
    )
```
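The chunked() helper used in the re-ingestion loop is not part of the SDK; a minimal stdlib implementation might look like this:

```python
from itertools import islice


def chunked(items, size):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(items)
    while batch := list(islice(it, size)):
        yield batch
```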
Option C: New Instance
Create a fresh instance with the new embedding model and migrate traffic. Best when:
- You want a clean break with no legacy data
- The embedding dimension change is part of a larger architecture change
- You are also changing other fundamental settings (scope, retention, etc.)
Steps:
1. Create a new instance in the Dashboard
2. Apply your updated MACA config to the new instance
3. Deploy your application against the new instance (update instance_id)
4. Optionally re-ingest critical historical content
5. Decommission the old instance once migration is complete
Moving from a development instance to production is not a version upgrade — it requires a fresh production instance with its own credentials, configuration, and monitoring.
Do not reuse your development instance for production. Development instances often contain test data, experimental configurations, and overly permissive settings that are inappropriate for production use.
Step 1: Create a Production Instance
In the Synap Dashboard:
1. Navigate to Instances and click Create Instance
2. Name it clearly (e.g., “MyApp Production”)
3. Select the production region closest to your deployment
4. Note the new instance_id (e.g., inst_prod_abc123)
Step 2: Generate a Fresh API Key
Generate a new API key for the production instance. Store it in your production secrets manager — not the same location as your dev key.
Step 3: Apply Production MACA Config
Apply a production-tuned MACA config to the new instance. Key differences from your dev config:
| Setting | Dev | Production |
| --- | --- | --- |
| extraction.mode | "enhanced" (for debugging) | "standard" or "enhanced" (based on your needs) |
| confidence_threshold | 0.5 (permissive for testing) | 0.7+ (quality-focused) |
| retention.max_memory_age_days | 0 (unlimited for dev) | Set based on use case |
| pii.handling | "passthrough" (dev convenience) | "redact" or "mask" |
with open("maca-config-production.yaml") as f: prod_config = f.read()# Validate firstresult = await admin_client.config.validate( instance_id="inst_prod_abc123", config_yaml=prod_config, dry_run=True)assert result.valid# Applyawait admin_client.config.apply( instance_id="inst_prod_abc123", config_yaml=prod_config)
Step 4: Update Application Configuration
Update your production deployment to use the new instance:
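A common pattern is to source the instance id and API key from the environment, so dev and production deployments differ only in configuration. A sketch, where the variable names SYNAP_INSTANCE_ID and SYNAP_API_KEY are illustrative:

```python
import os


def load_synap_settings(env=os.environ):
    """Read instance credentials from the environment; raise on missing
    values so a misconfigured production deploy fails fast."""
    try:
        return {
            "instance_id": env["SYNAP_INSTANCE_ID"],  # e.g. inst_prod_abc123
            "api_key": env["SYNAP_API_KEY"],
        }
    except KeyError as exc:
        raise RuntimeError(f"Missing required setting: {exc.args[0]}") from exc
```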
Existing memories are not re-processed. If old memories contain PII, you must do one of the following:
- Re-ingest the source content (PII will be redacted on the new pass)
- Manually delete specific memories via the Admin API
- Wait for retention cleanup to purge old memories naturally
Rolling back a failed config change
If a config change causes issues in production:
```python
# List available versions
history = await admin_client.config.list_versions(
    instance_id="inst_a1b2c3d4e5f67890",
)

# Find the last stable version
stable_version = next(
    v for v in history
    if v.status == "active" and v.version != current_broken_version
)

# Roll back
result = await admin_client.config.rollback(
    instance_id="inst_a1b2c3d4e5f67890",
    target_version=stable_version.version,
)
print(f"Rolled back to v{result.version}")
```
Rollback takes effect immediately for new requests. In-flight requests continue with the old config until they complete.