This guide walks you through integrating Synap into your application. You will learn how to initialize the SDK in popular Python web frameworks, bridge async/sync execution models, and wire Synap into your LLM provider’s generation pipeline. By the end of this page, your application will have a working memory-augmented agent pattern: retrieve context from Synap, inject it into the LLM prompt, generate a response, and ingest the conversation back into Synap.
FastAPI is the most common framework for Synap integrations. Use the lifespan event for initialization and shutdown, and access the SDK instance from your route handlers.
Use FastAPI’s dependency injection (Depends(get_sdk)) to keep your route handlers clean and testable. In tests, you can override the dependency with a mock SDK.
Flask is synchronous by default, so you need to bridge to Synap’s async SDK. Use the app factory pattern and initialize the SDK at startup.
Using loop.run_until_complete() in Flask blocks the worker thread during async operations. For production Flask deployments with high concurrency, consider migrating to FastAPI or running the async SDK calls in a thread pool.
In Django, initialize the SDK in your AppConfig.ready() method. Use asgiref.sync_to_async for bridging in async views, or asyncio.run() for synchronous views.
The Synap SDK is async-native. If your application uses a synchronous framework, you need to bridge the async calls. Here are the recommended patterns:
asyncio.run() — Simplest approach
Use for scripts, CLI tools, and simple synchronous applications. Creates a new event loop for each call.
asyncio.run() creates a new event loop each time. This is fine for low-frequency calls but adds overhead for high-throughput applications.
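A small self-contained sketch of this pattern, using a stand-in coroutine in place of a real Synap SDK call:

```python
import asyncio


def call_sync(coro_fn, *args, **kwargs):
    """Run one async SDK call from synchronous code.

    asyncio.run() spins up a fresh event loop per call and tears it
    down afterward, which is why this suits low-frequency usage.
    """
    return asyncio.run(coro_fn(*args, **kwargs))


# Stand-in for an async SDK method (assumption for illustration)
async def some_async_method(x):
    await asyncio.sleep(0)  # simulate async I/O
    return x * 2


result = call_sync(some_async_method, 21)  # returns 42
```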
Dedicated event loop — Best for web frameworks
Create a single event loop at startup and reuse it for all SDK calls. This is the pattern shown in the Flask and Django examples above.
```python
import asyncio

# At startup
loop = asyncio.new_event_loop()

# For each call
result = loop.run_until_complete(sdk.some_async_method())
```
asgiref (async_to_sync / sync_to_async) — Django-specific
If you are using Django’s async views alongside sync code, asgiref provides utilities for bridging:
```python
from asgiref.sync import async_to_sync

# Wrap an async SDK method for use in sync code
fetch_context = async_to_sync(sdk.conversation.context.fetch)
context = fetch_context(conversation_id="conv_123", ...)
```
Call your LLM provider with the enriched prompt. The model generates a response informed by the user’s history, preferences, and organizational context.
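Prompt enrichment can be as simple as folding retrieved memories into the system prompt. The sketch below assumes `memories` is a list of short strings returned from a Synap context fetch; the base prompt text is an arbitrary example.

```python
def build_system_prompt(memories: list[str]) -> str:
    """Fold retrieved Synap memories into the system prompt.

    Returns the plain base prompt when no memories are available,
    so the pipeline degrades gracefully.
    """
    base = "You are a helpful assistant."
    if not memories:
        return base
    context = "\n".join(f"- {m}" for m in memories)
    return f"{base}\n\nRelevant context about this user:\n{context}"


prompt = build_system_prompt(["Prefers concise answers", "Time zone: UTC+2"])
```

The resulting `prompt` is then passed to your provider as the system message, alongside the user's latest message.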
Synap should enhance your application, not be a single point of failure. Design your integration to degrade gracefully when Synap is unavailable.
```python
async def chat_with_graceful_degradation(user_message: str, **kwargs) -> str:
    memories = []

    # Attempt to retrieve context, but don't fail if Synap is down
    try:
        context = await synap_sdk.conversation.context.fetch(
            conversation_id=kwargs["conversation_id"],
            user_id=kwargs["user_id"],
            customer_id=kwargs["customer_id"],
            messages=[{"role": "user", "content": user_message}]
        )
        memories = context.memories
    except Exception as e:
        logger.warning(f"Synap retrieval failed, proceeding without context: {e}")

    # Generate response (with or without memories)
    system_prompt = build_system_prompt(memories)  # handles empty list gracefully
    response = await generate_response(system_prompt, user_message)

    # Attempt to ingest, but don't fail if Synap is down
    try:
        await synap_sdk.memories.create(
            content=f"User: {user_message}\nAssistant: {response}",
            user_id=kwargs["user_id"],
            customer_id=kwargs["customer_id"],
            metadata={"conversation_id": kwargs["conversation_id"]}
        )
    except Exception as e:
        logger.warning(f"Synap ingestion failed: {e}")

    return response
```
Always wrap Synap SDK calls in try/except blocks in production. Network issues, rate limits, or service disruptions should not prevent your application from responding to users. The LLM can still generate useful responses without memory context — it just won’t be personalized.