This reference covers the full spectrum of Synap migration scenarios: upgrading the SDK, migrating MACA configurations, changing scope strategies, switching embedding models, moving from development to production, importing data from other memory systems, and performing bulk data imports. Each section includes step-by-step procedures and rollback strategies.
For production deployments, pin to a specific version rather than using --upgrade. This prevents unexpected upgrades when rebuilding containers or refreshing dependencies.
In requirements.txt:

```
maximem-synap==1.3.0
```
3. Test in Staging
Deploy the upgraded SDK to your staging environment first, then run your full test suite to confirm ingestion and retrieval behave as expected before promoting the change to production.
Follow the standard config migration workflow (dry run, staging, production).
3. Understand the Impact
After applying:
New ingestions will extract temporal events in addition to facts and preferences
Existing memories are not re-processed. Only newly ingested content will have temporal events extracted
Retrieval will start returning temporal events alongside other memory types
If you need temporal events extracted from content that was ingested before the category was enabled, you must re-ingest that content. There is no automatic re-processing mechanism.
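There is no built-in helper for selecting what to re-ingest, but a small client-side pass works if you track when each source document was first ingested. A minimal sketch (the records and the `ingested_at` field are illustrative, not part of the SDK):

```python
from datetime import datetime, timezone

def needs_reingest(doc, category_enabled_at):
    """True if the document was ingested before the category was enabled."""
    return doc["ingested_at"] < category_enabled_at

# Hypothetical source documents with their original ingestion times
docs = [
    {"id": "doc_1", "ingested_at": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"id": "doc_2", "ingested_at": datetime(2024, 3, 9, tzinfo=timezone.utc)},
]
enabled_at = datetime(2024, 2, 1, tzinfo=timezone.utc)

to_reingest = [d["id"] for d in docs if needs_reingest(d, enabled_at)]
# Only doc_1 predates the category change
```

Each selected document would then go back through the normal ingestion call so temporal events are extracted on the new pass.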
4. Update Your Application Code
If your application code filters by memory type, update it to handle the new category:
```python
context = await sdk.conversation.context.fetch(
    conversation_id=conv_id,
    search_query=[query],
    types=["facts", "preferences", "temporal_events"],  # Add new type
    max_results=10,
    mode="fast",
)

# Handle the new type in your prompt builder
for event in context.temporal_events:
    memory_lines.append(
        f"- Upcoming: {event.content} ({event.timestamp})"
    )
```
Changing the primary_scope in your MACA config affects how new memories are indexed and how retrieval optimizations are applied. This is a significant change that requires careful planning. For background on scoping concepts, see Memory Scopes.
Changing primary_scope does not retroactively re-scope existing memories. Memories ingested with user_id remain at user scope regardless of the primary_scope setting. The primary_scope controls indexing optimization and default behavior for new memories, not the scope assignment of existing data.
This is a common migration path for applications that start out single-user and evolve into multi-user.
1. Audit Existing Memories
Understand what you have today. If your instance has been running with primary_scope: "instance", most memories are likely stored without user_id or customer_id:
These memories sit at client scope
They will remain visible to all users after the migration
They will NOT become user-scoped retroactively
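To make the audit concrete, here is a sketch that partitions exported memory records by their effective scope (the record shape is illustrative; the `user_id` and `customer_id` field names mirror the ingestion parameters used in this guide):

```python
def classify_scope(memory):
    """Infer a memory's effective scope from the identifiers it was stored with."""
    if memory.get("user_id"):
        return "user"
    if memory.get("customer_id"):
        return "customer"
    return "client"  # instance-scoped: visible to all users

# Hypothetical audit over exported memory records
memories = [
    {"id": "m1"},                        # ingested with no identifiers
    {"id": "m2", "user_id": "u_42"},
    {"id": "m3", "customer_id": "org_7"},
]
by_scope = {}
for m in memories:
    by_scope.setdefault(classify_scope(m), []).append(m["id"])
# {'client': ['m1'], 'user': ['m2'], 'customer': ['m3']}
```

The size of the "client" bucket tells you how much legacy shared context you will need a strategy for in step 5.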
2. Update the Config
```yaml
storage:
  scoping:
    primary_scope: "user"  # Changed from "instance"
```
3. Update Your Ingestion Code
Ensure all ingestion calls now include user_id:
```python
# Before (instance scope -- no user_id)
await sdk.memories.create(
    document=conversation,
    document_type="ai-chat-conversation",
    mode="fast",
)

# After (user scope -- always pass user_id)
await sdk.memories.create(
    document=conversation,
    document_type="ai-chat-conversation",
    user_id=current_user.id,       # Now required
    customer_id=current_user.org,  # Include if applicable
    mode="fast",
)
```
4. Update Your Retrieval Code
Ensure all retrieval calls include the appropriate scope identifiers:
```python
# Before (instance scope)
context = await sdk.conversation.context.fetch(
    conversation_id=conv_id,
    search_query=[query],
    mode="fast",
)

# After (user scope)
context = await sdk.conversation.context.fetch(
    conversation_id=conv_id,
    search_query=[query],
    mode="fast",
)

# OR use scoped retrieval for user-specific context
context = await sdk.user.context.fetch(
    user_id=current_user.id,
    customer_id=current_user.org,
)
```
5. Handle Legacy Memories
Decide what to do with memories ingested before the scope change:
Option A: Leave them at client scope. All users see them as shared context. This is the simplest option and often acceptable.
Option B: Re-ingest with user_id. If you have the original source content and know which user it belongs to, re-ingest with proper scoping. Older memories will eventually be superseded.
Option C: Accept dual behavior. Old memories are shared, new memories are scoped. Over time, the user-scoped memories will dominate as new content is ingested.
If you need to switch embedding models (e.g., from OpenAI’s text-embedding-ada-002 with 1536 dimensions to text-embedding-3-small with a different dimension), this requires careful handling because embeddings with different dimensions are incompatible.
Changing embedding_dimension in your MACA config does not re-embed existing memories. Old memories retain their original embeddings and will not match queries using the new model. This can cause retrieval degradation during the transition period.
Moving from a development instance to production is not a version upgrade — it requires a fresh production instance with its own credentials, configuration, and monitoring.
Do not reuse your development instance for production. Development instances often contain test data, experimental configurations, and overly permissive settings that are inappropriate for production use.
1. Create a Production Instance
In the Synap Dashboard:
Navigate to Instances and click Create Instance
Name it clearly (e.g., “MyApp Production”)
Select the production region closest to your deployment
Note the new instance_id (e.g., inst_prod_abc123)
2. Generate a Fresh API Key
Generate a new API key for the production instance. Store it in your production secrets manager — not the same location as your dev key.
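One common pattern is to read the key from an environment variable that the secrets manager injects at deploy time, and fail fast if it is missing. A sketch (the `SYNAP_API_KEY` variable name is illustrative, not an SDK convention):

```python
import os

def load_synap_api_key(env_var="SYNAP_API_KEY"):
    """Read the production API key injected by the secrets manager.

    Failing fast at startup beats a confusing auth error at first request.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; check your secrets configuration")
    return key
```

Call this once at application startup and pass the result to your SDK client constructor.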
3. Apply Production MACA Config
Apply a production-tuned MACA config to the new instance. Key differences from your dev config:
| Setting | Dev | Production |
| --- | --- | --- |
| extraction.mode | "enhanced" (for debugging) | "standard" or "enhanced" (based on your needs) |
| confidence_threshold | 0.5 (permissive for testing) | 0.7+ (quality-focused) |
| retention.max_memory_age_days | 0 (unlimited for dev) | Set based on use case |
| pii.handling | "passthrough" (dev convenience) | "redact" or "mask" |
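Pulled together, a production-leaning fragment might look like the following sketch. The nesting follows the dotted setting names in the table, and the retention value is purely illustrative; confirm both against your actual MACA schema and use case:

```yaml
extraction:
  mode: "standard"           # or "enhanced" if you need richer extraction
confidence_threshold: 0.7    # quality-focused; dev used 0.5
retention:
  max_memory_age_days: 180   # illustrative -- set based on use case
pii:
  handling: "redact"         # or "mask"; never "passthrough" in production
```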
```python
with open("maca-config-production.yaml") as f:
    prod_config = f.read()

# Validate first
result = await admin_client.config.validate(
    instance_id="inst_prod_abc123",
    config_yaml=prod_config,
    dry_run=True,
)
assert result.valid

# Apply
await admin_client.config.apply(
    instance_id="inst_prod_abc123",
    config_yaml=prod_config,
)
```
4. Update Application Configuration
Update your production deployment to use the new instance. See Installation for full SDK configuration options.
If you are moving to Synap from another memory or knowledge management system, the general approach is to export your existing data and import it into Synap using the memory creation APIs.
Export your documents and re-ingest them via POST /v1/memories or POST /v1/memories/batch; Synap will generate its own embeddings.
Conversation logs / chat history: format as the ai-chat-conversation document type and ingest. Synap extracts facts, preferences, and other categories automatically.
Structured knowledge bases: format as a document type with appropriate metadata. Use hints in your MACA config to guide extraction.
Other memory-as-a-service platforms: export raw source documents (not embeddings) and re-ingest. Embeddings are model-specific and not portable.
Always import raw source documents rather than pre-computed embeddings. Synap generates its own embeddings using the model specified in your MACA config. Importing embeddings from another system will result in incompatible vector representations.
1. Inventory your data. Catalog the types, volumes, and scoping of your existing data. Identify which data is user-scoped versus shared.
2. Map to Synap concepts. Determine how your existing data maps to Synap’s memory categories (facts, preferences, episodes, etc.) and scoping model (user, customer, client). See Memory Scopes for details.
3. Configure your Instance. Set up your MACA config before importing. The ingestion pipeline uses the active config to determine extraction behavior, chunking, and PII handling.
4. Import in batches. Use the batch API for efficiency. Start with a small test batch to verify extraction quality before importing everything.
5. Validate results. After import, query the system to verify that memories are correctly extracted, scoped, and retrievable.
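The batching step can be sketched with a plain helper; the batch size of 100 is illustrative, so check it against your Instance's limits:

```python
def batches(items, size):
    """Split the document list into fixed-size chunks for the batch import API."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

docs = [{"id": f"doc_{n}"} for n in range(250)]
chunks = list(batches(docs, 100))
# 250 documents -> batches of 100, 100, and 50
```

Each chunk would then be submitted to POST /v1/memories/batch, starting with a single small chunk to spot-check extraction quality.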
Priority: Use bootstrap priority for initial data loads. This signals to the ingestion pipeline that these are seed documents and may receive different queuing treatment than real-time ingestion.
Idempotency: Re-importing the same document creates a new memory version. Synap deduplicates at the extraction level, but repeated imports will consume processing resources.
Rate limits: Batch imports are subject to your Instance’s rate limits. For very large imports (100k+ documents), contact support to arrange temporary limit increases.
Ordering: Documents within a batch are processed concurrently. If temporal ordering matters (e.g., conversation history), include timestamps in your metadata.
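For the ordering point, one approach is to carry an ISO-8601 timestamp in each document's metadata so that order can be reconstructed even though batches are processed concurrently. A sketch (the metadata layout is illustrative, not a required schema):

```python
from datetime import datetime

def order_key(doc):
    """Parse the ISO-8601 timestamp stored in the document's metadata."""
    return datetime.fromisoformat(doc["metadata"]["timestamp"])

docs = [
    {"id": "turn_2", "metadata": {"timestamp": "2024-05-01T10:05:00+00:00"}},
    {"id": "turn_1", "metadata": {"timestamp": "2024-05-01T10:00:00+00:00"}},
]
ordered = sorted(docs, key=order_key)
# turn_1 sorts first regardless of batch processing order
```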
After a data import or configuration change, run targeted queries to verify retrieval quality:
```python
async def verify_retrieval(sdk, test_queries):
    """Run test queries and verify expected results are returned."""
    for query_text, expected_keywords in test_queries:
        context = await sdk.conversation.context.fetch(
            conversation_id="migration_verification",
            search_query=[query_text],
            max_results=5,
            mode="accurate",
        )
        results_text = " ".join(fact.content for fact in context.facts)
        missing = [
            kw for kw in expected_keywords
            if kw.lower() not in results_text.lower()
        ]
        status = "PASS" if not missing else "WARN"
        print(f"[{status}] Query: '{query_text}'")
        if missing:
            print(f"  Missing expected keywords: {missing}")
```
Existing memories are not re-processed. If old memories contain PII, you must do one of the following:
Re-ingest the source content (PII will be redacted on the new pass)
Manually delete specific memories via the Admin API
Wait for retention cleanup to purge old memories naturally
Rolling back a failed config change
If a config change causes issues in production:
```python
# List available versions
history = await admin_client.config.list_versions(
    instance_id="inst_a1b2c3d4e5f67890"
)

# Find the last stable version
stable_version = next(
    v for v in history
    if v.status == "active" and v.version != current_broken_version
)

# Roll back
result = await admin_client.config.rollback(
    instance_id="inst_a1b2c3d4e5f67890",
    target_version=stable_version.version,
)
print(f"Rolled back to v{result.version}")
```
Rollback takes effect immediately for new requests. In-flight requests continue with the old config until they complete.