
SDK Version Upgrades

SDK upgrades are the most common migration task. Synap follows semantic versioning, so the version number tells you what to expect:
  • Patch (e.g., 1.2.3 to 1.2.4): Bug fixes only. Safe to upgrade without code changes.
  • Minor (e.g., 1.2.x to 1.3.0): New features, no breaking changes. May introduce new parameters with sensible defaults.
  • Major (e.g., 1.x to 2.0.0): Breaking changes. Requires code updates. Always accompanied by a migration guide in the changelog.
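For scripted upgrade gating (e.g., in CI), the semver rules above can be encoded in a small helper. This is an illustrative stand-alone function, not part of the SDK:

```python
def upgrade_type(current: str, target: str) -> str:
    """Classify an upgrade as 'major', 'minor', or 'patch' from two semver strings."""
    cur = [int(p) for p in current.split(".")[:3]]
    tgt = [int(p) for p in target.split(".")[:3]]
    if tgt[0] != cur[0]:
        return "major"   # breaking changes: read the migration guide first
    if tgt[1] != cur[1]:
        return "minor"   # new features: check deprecation notices
    return "patch"       # bug fixes only: safe to upgrade

print(upgrade_type("1.2.3", "1.2.4"))  # patch
print(upgrade_type("1.2.3", "2.0.0"))  # major
```

A CI job could use this to require manual sign-off for major upgrades while auto-merging patch bumps.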
1

Check the Changelog

Before upgrading, review the changelog for every version between your current version and the target version. Pay special attention to:
  • Breaking changes (major versions): API signature changes, removed parameters, changed defaults
  • Deprecation notices (minor versions): Features that will be removed in the next major version
  • Behavioral changes: Subtle changes in how existing features work
# Check your current version
python -c "import maximem_synap; print(maximem_synap.__version__)"
The changelog is available at github.com/synap-dev/synap-sdk/blob/main/CHANGELOG.md and in the Resources section of these docs.
2

Update the Package

Upgrade the SDK package:
pip install --upgrade maximem-synap
For production deployments, pin to a specific version rather than using --upgrade. This prevents unexpected upgrades when rebuilding containers or refreshing dependencies.
requirements.txt
maximem-synap==1.3.0
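A CI check can enforce that production requirements use exact pins. A minimal sketch (the regex is simplified and ignores extras, environment markers, and hashes):

```python
import re

def is_exact_pin(requirement: str) -> bool:
    """True only for exact '==' pins such as 'maximem-synap==1.3.0'."""
    return bool(re.fullmatch(r"[A-Za-z0-9._-]+==\d+(\.\d+)*", requirement.strip()))

print(is_exact_pin("maximem-synap==1.3.0"))  # True
print(is_exact_pin("maximem-synap>=1.0"))    # False
```

Running this over each line of requirements.txt catches unpinned dependencies before they reach a container build.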
3

Test in Staging

Deploy the upgraded SDK to your staging environment first. Run your full test suite and verify:
  • SDK initializes successfully
  • Ingestion operations complete without errors
  • Retrieval returns expected results
  • Error handling still works correctly
  • Cache functions normally
# Quick smoke test after upgrade
async def smoke_test(sdk):
    # Verify connection
    stats = await sdk.cache.stats()
    print(f"Cache stats: {stats}")

    # Verify ingestion
    response = await sdk.memories.create(
        document="Smoke test: SDK version upgrade verification.",
        document_type="document",
        user_id="user_smoke_test",
        mode="fast"
    )
    print(f"Ingestion: {response.status}")

    # Verify retrieval
    context = await sdk.conversation.context.fetch(
        conversation_id="conv_smoke_test",
        search_query=["smoke test"],
        max_results=1,
        mode="fast"
    )
    print(f"Retrieval: {len(context.facts)} facts returned")

    print("Smoke test passed.")
4

Deploy to Production

After staging validation, deploy to production. Monitor the Dashboard closely for the first few hours:
  • API error rates
  • Latency changes (P50, P95, P99)
  • Ingestion success rate
  • Any new warning/error logs
SDK version verified in staging before production deployment
5

Verify Post-Deployment

After the production deployment stabilizes, confirm everything is working:
import maximem_synap

stats = await sdk.cache.stats()
print(f"SDK version: {maximem_synap.__version__}")
print(f"Cache entries: {stats.entry_count}")
print(f"Cache hit rate: {stats.hit_rate:.1%}")
If issues arise, roll back to the previous SDK version by pinning the old version in your requirements and redeploying.

Configuration Migration

MACA configuration changes require careful planning because they affect how all new memories are processed and retrieved.

Migrating Between Config Versions

When updating your MACA config, follow this workflow:
1

Export the Current Config

Before making any changes, export and save your current configuration as a reference:
current_config = await admin_client.config.get_active(
    instance_id="inst_a1b2c3d4e5f67890"
)
print(f"Current version: {current_config.version}")

# Save locally as backup
with open("maca-config-backup.yaml", "w") as f:
    f.write(current_config.yaml_content)
2

Write the New Config

Make your changes to the YAML file. Increment the version field:
# Changed from "1.0.0" to "1.1.0"
version: "1.1.0"

# ... your changes ...
3

Validate with Dry Run

Submit the new config with dry_run=True to validate and preview changes:
with open("maca-config-v1.1.yaml") as f:
    new_config_yaml = f.read()

result = await admin_client.config.validate(
    instance_id="inst_a1b2c3d4e5f67890",
    config_yaml=new_config_yaml,
    dry_run=True
)

if not result.valid:
    print(f"Validation errors: {result.errors}")
else:
    print("Validation passed. Changes:")
    for change in result.diff:
        print(f"  {change.field}: {change.old_value} -> {change.new_value}")
    print(f"Warnings: {result.warnings}")
4

Review the Diff

Carefully examine the diff output. High-impact changes to watch for:
Change | Impact | Action Required
Category disabled | New memories of that type will not be extracted | Ensure downstream code does not depend on the disabled category
Confidence threshold increased | Fewer memories will be stored | Monitor memory creation rate after applying
Embedding dimension changed | New embeddings incompatible with old ones | Requires re-indexing (see below)
Primary scope changed | New memories indexed differently | Existing memories retain old scope
Retention reduced | Old memories may be purged at next cleanup | Verify no important data will be lost
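The manual review can be backed by an automated guard that fails the pipeline whenever the dry-run diff touches a high-impact field. A sketch with the diff represented as plain dicts (field names are assumed for illustration; the real result.diff entries expose field/old_value/new_value as shown above):

```python
# Fields treated as high-impact, per the table above (names assumed)
HIGH_IMPACT_FIELDS = {"embedding_dimension", "primary_scope", "confidence_threshold"}

def high_impact_changes(diff):
    """Return the subset of diff entries that touch a high-impact field."""
    return [change for change in diff if change["field"] in HIGH_IMPACT_FIELDS]

diff = [
    {"field": "confidence_threshold", "old_value": 0.5, "new_value": 0.7},
    {"field": "retention.max_memory_age_days", "old_value": 365, "new_value": 180},
]
flagged = high_impact_changes(diff)
print([c["field"] for c in flagged])  # ['confidence_threshold']
```

If the flagged list is non-empty, the pipeline can require an explicit approval step before applying the config.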
5

Apply in Staging First

Apply the config to your staging instance and verify behavior:
result = await admin_client.config.apply(
    instance_id="inst_staging_xxx",
    config_yaml=new_config_yaml
)
print(f"Applied version: {result.version}")
Ingest test data and verify that extraction and retrieval match expectations.
6

Apply to Production

Once staging validation is complete:
result = await admin_client.config.apply(
    instance_id="inst_a1b2c3d4e5f67890",
    config_yaml=new_config_yaml
)
print(f"Applied version: {result.version}, status: {result.status}")

Using Version History

The Dashboard maintains a complete history of all config versions applied to an instance. Use this for comparison and rollback:
# List all config versions
history = await admin_client.config.list_versions(
    instance_id="inst_a1b2c3d4e5f67890"
)

for version in history:
    print(
        f"v{version.version} | Applied: {version.applied_at} | "
        f"Status: {version.status} | By: {version.applied_by}"
    )
Dashboard configuration version history showing applied versions

Adding New Memory Categories

Enabling a new category (e.g., adding temporal_events to a config that previously only had facts and preferences) is a non-destructive change.
1

Update the Config

Enable the new category in your MACA YAML:
ingestion:
  categories:
    facts: true
    preferences: true
    temporal_events: true    # Newly enabled
    relationships: false
    procedures: false
    emotions: false
2

Apply the Change

Follow the standard config migration workflow (dry run, staging, production).
3

Understand the Impact

After applying:
  • New ingestions will extract temporal events in addition to facts and preferences
  • Existing memories are not re-processed. Only newly ingested content will have temporal events extracted
  • Retrieval will start returning temporal events alongside other memory types
If you need temporal events extracted from content that was ingested before the category was enabled, you must re-ingest that content. There is no automatic re-processing mechanism.
4

Update Your Application Code

If your application code filters by memory type, update it to handle the new category:
context = await sdk.conversation.context.fetch(
    conversation_id=conv_id,
    search_query=[query],
    types=["facts", "preferences", "temporal_events"],  # Add new type
    max_results=10,
    mode="fast"
)

# Handle the new type in your prompt builder
for event in context.temporal_events:
    memory_lines.append(
        f"- Upcoming: {event.content} ({event.timestamp})"
    )

Changing Scope Strategy

Changing the primary_scope in your MACA config affects how new memories are indexed and how retrieval optimizations are applied. This is a significant change that requires careful planning.
Changing primary_scope does not retroactively re-scope existing memories. Memories ingested with user_id remain at user scope regardless of the primary_scope setting. The primary_scope controls indexing optimization and default behavior for new memories, not the scope assignment of existing data.

Example: Instance Scope to User Scope

A common migration path for applications that start as single-user and evolve to multi-user.
1

Audit Existing Memories

Understand what you have today. If your instance has been running with primary_scope: "instance", most memories are likely stored without user_id or customer_id:
  • These memories sit at client scope
  • They will remain visible to all users after the migration
  • They will NOT become user-scoped retroactively
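If you can export or page through existing memories, the audit can be as simple as counting records that lack a user_id. A sketch over a hypothetical exported record shape (the real export format may differ):

```python
# Hypothetical record shape for illustration only
memories = [
    {"id": "mem_001", "user_id": None},
    {"id": "mem_002", "user_id": "user_42"},
    {"id": "mem_003", "user_id": None},
]

# Records without a user_id sit at client scope and stay shared after migration
unscoped = [m["id"] for m in memories if not m.get("user_id")]
print(f"{len(unscoped)} of {len(memories)} memories have no user_id")
```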
2

Update the Config

storage:
  scoping:
    primary_scope: "user"    # Changed from "instance"
3

Update Your Ingestion Code

Ensure all ingestion calls now include user_id:
# Before (instance scope -- no user_id)
await sdk.memories.create(
    document=conversation,
    document_type="ai-chat-conversation",
    mode="fast"
)

# After (user scope -- always pass user_id)
await sdk.memories.create(
    document=conversation,
    document_type="ai-chat-conversation",
    user_id=current_user.id,       # Now required
    customer_id=current_user.org,   # Include if applicable
    mode="fast"
)
4

Update Your Retrieval Code

Ensure all retrieval calls include the appropriate scope identifiers:
# Before (instance scope)
context = await sdk.conversation.context.fetch(
    conversation_id=conv_id,
    search_query=[query],
    mode="fast"
)

# After (user scope) -- conversation-context calls are unchanged
context = await sdk.conversation.context.fetch(
    conversation_id=conv_id,
    search_query=[query],
    mode="fast"
)
# OR use scoped retrieval for user-specific context
context = await sdk.user.context.fetch(
    user_id=current_user.id,
    customer_id=current_user.org
)
5

Handle Legacy Memories

Decide what to do with memories ingested before the scope change:
  • Option A: Leave them at client scope. All users see them as shared context. This is the simplest option and often acceptable.
  • Option B: Re-ingest with user_id. If you have the original source content and know which user it belongs to, re-ingest with proper scoping. Older memories will eventually be superseded.
  • Option C: Accept dual behavior. Old memories are shared, new memories are scoped. Over time, the user-scoped memories will dominate as new content is ingested.

Changing Embedding Models

If you need to switch embedding models (e.g., from OpenAI’s text-embedding-ada-002 with 1536 dimensions to text-embedding-3-small with a different dimension), this requires careful handling because embeddings with different dimensions are incompatible.
Changing embedding_dimension in your MACA config does not re-embed existing memories. Old memories retain their original embeddings and will not match queries using the new model. This can cause retrieval degradation during the transition period.
Option 1: Re-ingest all source content to generate new embeddings. Best when:
  • All historical content is equally important
  • You have access to all original source documents
  • You can afford the ingestion costs and processing time
Steps:
  1. Update embedding_dimension in your MACA config
  2. Apply the config change
  3. Re-ingest all source documents (use batch_create() for efficiency)
  4. Old memories with stale embeddings will be superseded by the new versions
  5. Optionally trigger a retention cleanup to remove the old versions
# Re-ingest in batches
for batch in chunked(source_documents, size=50):
    await sdk.memories.batch_create(
        documents=[
            {
                "document": doc.content,
                "document_type": doc.type,
                "user_id": doc.user_id,
                "customer_id": doc.customer_id,
                "metadata": doc.metadata
            }
            for doc in batch
        ],
        fail_fast=False
    )
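The chunked() helper used above is not part of the SDK; a minimal stdlib implementation could look like:

```python
from itertools import islice

def chunked(items, size):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(items)
    while batch := list(islice(it, size)):
        yield batch

print(list(chunked(range(5), size=2)))  # [[0, 1], [2, 3], [4]]
```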
Option 2: Create a fresh instance with the new embedding model and migrate traffic. Best when:
  • You want a clean break with no legacy data
  • The embedding dimension change is part of a larger architecture change
  • You are also changing other fundamental settings (scope, retention, etc.)
Steps:
  1. Create a new instance in the Dashboard
  2. Apply your updated MACA config to the new instance
  3. Deploy your application against the new instance (update instance_id)
  4. Optionally re-ingest critical historical content
  5. Decommission the old instance once migration is complete

Migrating from Development to Production

Moving from a development instance to production is not a version upgrade — it requires a fresh production instance with its own credentials, configuration, and monitoring.
Do not reuse your development instance for production. Development instances often contain test data, experimental configurations, and overly permissive settings that are inappropriate for production use.
1

Create a Production Instance

In the Synap Dashboard:
  1. Navigate to Instances and click Create Instance
  2. Name it clearly (e.g., “MyApp Production”)
  3. Select the production region closest to your deployment
  4. Note the new instance_id (e.g., inst_prod_abc123)
2

Generate a Fresh API Key

Generate a new API key for the production instance. Store it in your production secrets manager — not the same location as your dev key.
Production API key is stored in a production secrets manager
3

Apply Production MACA Config

Apply a production-tuned MACA config to the new instance. Key differences from your dev config:
Setting | Dev | Production
extraction.mode | "enhanced" (for debugging) | "standard" or "enhanced" (based on your needs)
confidence_threshold | 0.5 (permissive for testing) | 0.7+ (quality-focused)
retention.max_memory_age_days | 0 (unlimited for dev) | Set based on use case
pii.handling | "passthrough" (dev convenience) | "redact" or "mask"
with open("maca-config-production.yaml") as f:
    prod_config = f.read()

# Validate first
result = await admin_client.config.validate(
    instance_id="inst_prod_abc123",
    config_yaml=prod_config,
    dry_run=True
)
assert result.valid

# Apply
await admin_client.config.apply(
    instance_id="inst_prod_abc123",
    config_yaml=prod_config
)
4

Update Application Configuration

Update your production deployment to use the new instance:
# Production configuration
sdk = MaximemSynapSDK(
    instance_id=os.environ["SYNAP_INSTANCE_ID"],   # inst_prod_abc123
    api_key=get_production_secret("synap-api-key"),
    config=SDKConfig(
        storage_path="/var/lib/myapp/synap",
        credentials_source="file",
        cache_backend="sqlite",
        session_timeout_minutes=60,
        log_level="WARNING",
        timeouts=TimeoutConfig(connect=10.0, read=30.0),
        retry_policy=RetryPolicy(max_attempts=3)
    )
)
5

Set Up Error Handling and Monitoring

Production requires proper error handling and monitoring that you may have skipped in development. Review the Production Checklist to ensure you have:
  • Graceful degradation when Synap is unavailable
  • All error types handled with correlation_id logging
  • Dashboard webhooks configured for critical events
  • Latency and error rate alerts in your monitoring platform
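Graceful degradation can be as simple as a wrapper that returns an empty result instead of propagating an outage to users. An illustrative sketch; in real code you would catch the SDK's SynapError rather than bare Exception:

```python
import asyncio

async def fetch_context_or_none(fetch, **kwargs):
    """Call a context fetch; return None instead of raising if Synap is unavailable."""
    try:
        return await fetch(**kwargs)
    except Exception as exc:  # real code: except SynapError
        print(f"context unavailable, degrading gracefully: {exc}")
        return None

# Demo with a stub that always fails
async def broken_fetch(**kwargs):
    raise ConnectionError("Synap unreachable")

result = asyncio.run(fetch_context_or_none(broken_fetch))
print(result)  # None
```

The caller then builds its prompt without memory context when None comes back, so a Synap outage degrades quality rather than availability.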
Production monitoring and alerting is configured before accepting traffic
6

Seed Production Data (Optional)

If your production instance needs initial knowledge (e.g., product documentation, FAQ content), ingest it before accepting user traffic:
# Seed client-scoped knowledge
documents = load_product_documentation()

await sdk.memories.batch_create(
    documents=[
        {
            "document": doc.content,
            "document_type": "document",
            "metadata": {"source": "product-docs", "version": doc.version}
        }
        for doc in documents
    ],
    fail_fast=False
)
7

Validate End-to-End

Run your full smoke test suite against the production instance before opening it to real users:
# End-to-end validation
import asyncio

async def validate_production(sdk):
    # Test ingestion
    resp = await sdk.memories.create(
        document="Production validation test message.",
        document_type="document",
        user_id="user_validation",
        mode="fast"
    )
    assert resp.status == "accepted"

    # Wait for processing
    await asyncio.sleep(5)

    # Test retrieval
    ctx = await sdk.user.context.fetch(user_id="user_validation")
    assert len(ctx.facts) > 0

    # Test cache
    stats = await sdk.cache.stats()
    assert stats is not None

    print("Production validation passed.")

Common Migration Scenarios

Upgrading from 0.x to 1.0

The 1.0 release introduced several breaking changes from the 0.x beta series:
  • Import path changed: from synap import ... became from maximem_synap import ...
  • memories.ingest() renamed: Use memories.create() for single documents and memories.batch_create() for batches
  • Config constructor changed: SynapConfig became SDKConfig with new field names
  • Error hierarchy restructured: All errors now extend SynapError with correlation_id
Search your codebase for the old patterns and update them:
# Old (0.x)
from synap import SynapSDK, SynapConfig
sdk = SynapSDK(instance="inst_xxx", token="xxx", config=SynapConfig(...))
await sdk.ingest(text="...", type="conversation")

# New (1.0)
from maximem_synap import MaximemSynapSDK, SDKConfig
sdk = MaximemSynapSDK(instance_id="inst_xxx", api_key="synap_xxx", config=SDKConfig(...))
await sdk.memories.create(document="...", document_type="ai-chat-conversation")
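A quick codebase scan for the old 0.x patterns can be scripted. A simplified sketch (the regexes are approximate, not exhaustive):

```python
import re

# Patterns from the 0.x API that must not survive the 1.0 migration
LEGACY_PATTERNS = {
    "old import": r"\bfrom synap import\b",
    "old config class": r"\bSynapConfig\b",
    "old ingest call": r"\bsdk\.ingest\(",
}

def find_legacy_usage(source: str):
    """Return the names of legacy patterns found in a source string."""
    return [name for name, pattern in LEGACY_PATTERNS.items() if re.search(pattern, source)]

sample = "from synap import SynapSDK\nawait sdk.ingest(text='hi')"
print(find_legacy_usage(sample))  # ['old import', 'old ingest call']
```

Run this over each file in the repository (or just use grep) to produce a migration worklist.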

Moving to Multiple Instances

If your application grows to need separate instances for different environments or tenants:
  1. Create additional instances in the Dashboard
  2. Use environment-specific configuration to select the correct instance_id
  3. Each instance has independent credentials, config, and memory stores
  4. No data is shared between instances (unless you use world scope)
import os

env = os.environ.get("ENVIRONMENT", "development")

INSTANCE_MAP = {
    "development": "inst_dev_xxx",
    "staging": "inst_staging_xxx",
    "production": "inst_prod_xxx",
}

sdk = MaximemSynapSDK(
    instance_id=INSTANCE_MAP[env],
    api_key=get_secret(f"synap-{env}-api-key")
)

Adding gRPC Streaming

If you are adding real-time streaming to an application that currently uses REST only:
  1. Install the gRPC extra: pip install 'maximem-synap[grpc]'
  2. No SDK configuration changes needed — the SDK auto-detects gRPC availability
  3. Add streaming listeners in your application:
# Start listening for real-time updates
await sdk.instance.listen(
    on_reconnect=lambda: print("Stream reconnected"),
    on_disconnect=lambda: print("Stream disconnected")
)

# Stop listening when done
await sdk.instance.stop()
  4. Ensure your stream_idle timeout is appropriate for your traffic patterns

Tightening PII Handling

When tightening PII handling (common when moving from dev to production):
  1. Update the MACA config:
ingestion:
  pii:
    handling: "redact"           # Changed from "passthrough"
    categories: ["email", "phone", "ssn", "credit_card"]
  2. Apply the config change
  3. New ingestions will have PII redacted
  4. Existing memories are not re-processed. If old memories contain PII, you must either:
    • Re-ingest the source content (PII will be redacted on the new pass)
    • Manually delete specific memories via the Admin API
    • Wait for retention cleanup to purge old memories naturally
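To make the effect of "redact" concrete, here is a toy illustration for a single PII category (email). This is not the service's actual redaction logic, just a sketch of the behavior:

```python
import re

def redact_emails(text: str) -> str:
    """Toy illustration of 'redact' handling for one PII category (email)."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[REDACTED_EMAIL]", text)

print(redact_emails("Contact ada@example.com for access."))
# Contact [REDACTED_EMAIL] for access.
```

After the config change, newly ingested content is stored with replacements like this already applied, so the original PII never lands in the memory store.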

Rolling Back a Config Change

If a config change causes issues in production:
# List available versions
history = await admin_client.config.list_versions(
    instance_id="inst_a1b2c3d4e5f67890"
)
# Find the last stable version
stable_version = next(
    v for v in history
    if v.status == "active" and v.version != current_broken_version
)

# Roll back
result = await admin_client.config.rollback(
    instance_id="inst_a1b2c3d4e5f67890",
    target_version=stable_version.version
)
print(f"Rolled back to v{result.version}")
Rollback takes effect immediately for new requests. In-flight requests continue with the old config until they complete.

Pre-Migration Checklist

Before any migration, verify these items:
Item | Why
Current config backed up | Enables rollback if issues arise
Staging environment available | Test changes before production
Monitoring in place | Detect issues quickly after migration
Team notified | Everyone knows a change is happening
Rollback plan documented | Know exactly what to do if something goes wrong
Maintenance window scheduled (for major changes) | Reduce user impact during migration

Next Steps

Production Checklist

Full checklist for production readiness after migration.

Changelog

Detailed changelog for all SDK versions.

Configuring Memory

Deep dive into MACA configuration options.

Support

Get help with complex migration scenarios.