
Overview

This guide walks you through integrating Synap into your application. You will learn how to initialize the SDK in popular Python web frameworks, bridge async/sync execution models, and wire Synap into your LLM provider’s generation pipeline. By the end of this page, your application will have a working memory-augmented agent pattern: retrieve context from Synap, inject it into the LLM prompt, generate a response, and ingest the conversation back into Synap.

Framework Integration

FastAPI is the most common framework for Synap integrations. Use the lifespan event for initialization and shutdown, and access the SDK instance from your route handlers.
from contextlib import asynccontextmanager
from fastapi import FastAPI, Depends
from maximem_synap import MaximemSynapSDK

sdk = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global sdk
    sdk = MaximemSynapSDK(
        instance_id="inst_xxx",
        api_key="synap_xxx"
    )
    await sdk.initialize()
    yield
    await sdk.shutdown()

app = FastAPI(lifespan=lifespan)

def get_sdk() -> MaximemSynapSDK:
    """Dependency that provides the initialized SDK."""
    if sdk is None:
        raise RuntimeError("SDK not initialized")
    return sdk

@app.post("/chat")
async def chat(
    message: str,
    user_id: str,
    customer_id: str,
    conversation_id: str,
    synap: MaximemSynapSDK = Depends(get_sdk)
):
    # 1. Retrieve relevant context
    context = await synap.conversation.context.fetch(
        conversation_id=conversation_id,
        user_id=user_id,
        customer_id=customer_id,
        messages=[{"role": "user", "content": message}]
    )

    # 2. Build prompt with retrieved memories
    system_prompt = build_system_prompt(context.memories)

    # 3. Call your LLM (see LLM integration below)
    response = await generate_response(system_prompt, message)

    # 4. Ingest the turn for long-term memory
    await synap.memories.create(
        content=f"User: {message}\nAssistant: {response}",
        user_id=user_id,
        customer_id=customer_id,
        metadata={"conversation_id": conversation_id}
    )

    return {"response": response}
Use FastAPI’s dependency injection (Depends(get_sdk)) to keep your route handlers clean and testable. In tests, you can override the dependency with a mock SDK.

Async/Sync Bridging

The Synap SDK is async-native. If your application uses a synchronous framework, you need to bridge the async calls. Here are the recommended patterns:
Per-call event loop with asyncio.run(): use this for scripts, CLI tools, and simple synchronous applications. It creates a new event loop for each call.
import asyncio
from maximem_synap import MaximemSynapSDK

sdk = MaximemSynapSDK(instance_id="inst_xxx", api_key="synap_xxx")
asyncio.run(sdk.initialize())

# Later, in synchronous code:
context = asyncio.run(sdk.conversation.context.fetch(...))
asyncio.run() creates a new event loop each time. This is fine for low-frequency calls but adds overhead for high-throughput applications.
Persistent event loop: create a single event loop at startup and reuse it for all SDK calls, which avoids the per-call overhead of asyncio.run().
import asyncio

# At startup
loop = asyncio.new_event_loop()

# For each call
result = loop.run_until_complete(sdk.some_async_method())
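A persistent loop driven from the calling thread is not safe to share across threads. For multi-threaded sync servers, one option is to run the loop in a dedicated background thread and submit coroutines with asyncio.run_coroutine_threadsafe. A sketch (the demo coroutine stands in for an SDK call such as sdk.conversation.context.fetch):

```python
import asyncio
import threading

# One long-lived event loop running in a daemon background thread.
_loop = asyncio.new_event_loop()
threading.Thread(target=_loop.run_forever, daemon=True).start()

def call_sync(coro, timeout=30):
    """Run an async coroutine from synchronous code, thread-safely."""
    future = asyncio.run_coroutine_threadsafe(coro, _loop)
    return future.result(timeout=timeout)

# Stand-in coroutine; in practice you would pass an SDK call instead.
async def demo():
    await asyncio.sleep(0)
    return "ok"

print(call_sync(demo()))  # prints "ok"
```

Because all coroutines execute on the single background loop, call_sync can be invoked concurrently from any number of worker threads.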
If you are using Django’s async views alongside sync code, asgiref provides utilities for bridging:
from asgiref.sync import async_to_sync

# Wrap an async SDK method for use in sync code
fetch_context = async_to_sync(sdk.conversation.context.fetch)
context = fetch_context(conversation_id="conv_123", ...)

LLM Provider Integration

Here is a full example of a memory-augmented agent using OpenAI’s gpt-4o:
from openai import AsyncOpenAI
from maximem_synap import MaximemSynapSDK

openai_client = AsyncOpenAI(api_key="sk-...")
synap_sdk = MaximemSynapSDK(instance_id="inst_xxx", api_key="synap_xxx")

async def memory_augmented_chat(
    user_message: str,
    user_id: str,
    customer_id: str,
    conversation_id: str
) -> str:
    # Step 1: Retrieve relevant context from Synap
    context = await synap_sdk.conversation.context.fetch(
        conversation_id=conversation_id,
        user_id=user_id,
        customer_id=customer_id,
        messages=[{"role": "user", "content": user_message}]
    )

    # Step 2: Build the system prompt with retrieved memories
    memory_block = "\n".join([
        f"- {memory.content}" for memory in context.memories
    ])

    system_prompt = f"""You are a helpful assistant with access to the following
information about the user and their organization:

{memory_block}

Use this information to provide personalized, contextual responses.
If the user's question relates to something in the memories above,
reference it naturally in your response."""

    # Step 3: Call OpenAI with the enriched prompt
    response = await openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message}
        ],
        temperature=0.7
    )
    assistant_message = response.choices[0].message.content

    # Step 4: Ingest the conversation turn for long-term memory
    await synap_sdk.memories.create(
        content=f"User: {user_message}\nAssistant: {assistant_message}",
        user_id=user_id,
        customer_id=customer_id,
        metadata={
            "conversation_id": conversation_id,
            "model": "gpt-4o",
            "source": "chat"
        }
    )

    return assistant_message

The Memory-Augmented Agent Pattern

Every Synap integration follows the same fundamental pattern:
1. Retrieve: fetch relevant memories from Synap based on the user’s message, identity, and conversation history.

   context = await sdk.conversation.context.fetch(...)

2. Build Prompt: inject retrieved memories into the system prompt or message history. This gives the LLM access to personalized, contextual information.

   system_prompt = build_prompt_with_memories(context.memories)

3. Generate: call your LLM provider with the enriched prompt. The model generates a response informed by the user’s history, preferences, and organizational context.

   response = await llm_client.generate(system_prompt, user_message)

4. Ingest: send the conversation turn back to Synap for processing and long-term storage. This closes the loop: today’s conversation becomes tomorrow’s context.

   await sdk.memories.create(content=turn_content, user_id=..., customer_id=...)
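The ingest step need not block the response path. One option is to schedule it as a background task; a sketch, with a fake ingest coroutine standing in for sdk.memories.create:

```python
import asyncio

_pending: set = set()

def ingest_in_background(coro) -> None:
    """Schedule ingestion without awaiting it in the request path."""
    task = asyncio.create_task(coro)
    _pending.add(task)                        # strong ref: avoid premature GC
    task.add_done_callback(_pending.discard)

async def demo() -> list:
    ingested = []

    async def fake_ingest():                  # stands in for sdk.memories.create
        ingested.append("turn")

    ingest_in_background(fake_ingest())
    await asyncio.sleep(0)                    # yield so the task can run
    return ingested

print(asyncio.run(demo()))  # ['turn']
```

The _pending set matters: asyncio holds only a weak reference to running tasks, so fire-and-forget tasks without a strong reference can be garbage-collected mid-flight.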

Error Handling and Graceful Degradation

Synap should enhance your application, not be a single point of failure. Design your integration to degrade gracefully when Synap is unavailable.
async def chat_with_graceful_degradation(user_message: str, **kwargs) -> str:
    memories = []

    # Attempt to retrieve context, but don't fail if Synap is down
    try:
        context = await synap_sdk.conversation.context.fetch(
            conversation_id=kwargs["conversation_id"],
            user_id=kwargs["user_id"],
            customer_id=kwargs["customer_id"],
            messages=[{"role": "user", "content": user_message}]
        )
        memories = context.memories
    except Exception as e:
        logger.warning(f"Synap retrieval failed, proceeding without context: {e}")

    # Generate response (with or without memories)
    system_prompt = build_system_prompt(memories)  # handles empty list gracefully
    response = await generate_response(system_prompt, user_message)

    # Attempt to ingest, but don't fail if Synap is down
    try:
        await synap_sdk.memories.create(
            content=f"User: {user_message}\nAssistant: {response}",
            user_id=kwargs["user_id"],
            customer_id=kwargs["customer_id"],
            metadata={"conversation_id": kwargs["conversation_id"]}
        )
    except Exception as e:
        logger.warning(f"Synap ingestion failed: {e}")

    return response
Always wrap Synap SDK calls in try/except blocks in production. Network issues, rate limits, or service disruptions should not prevent your application from responding to users. The LLM can still generate useful responses without memory context — it just won’t be personalized.
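The same defensive stance applies to latency: a slow retrieval can be capped with asyncio.wait_for and a fallback value. A sketch (safe_call is not part of the SDK; slow_fetch stands in for a real call):

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

async def safe_call(coro, fallback, timeout: float = 2.0):
    """Await a Synap call; return a fallback on error or timeout."""
    try:
        return await asyncio.wait_for(coro, timeout=timeout)
    except Exception as e:
        logger.warning(f"Synap call failed or timed out: {e}")
        return fallback

async def demo():
    async def slow_fetch():                   # stands in for an SDK call
        await asyncio.sleep(10)

    # e.g. context = await safe_call(
    #     synap_sdk.conversation.context.fetch(...), fallback=None)
    return await safe_call(slow_fetch(), fallback=[], timeout=0.01)

print(asyncio.run(demo()))  # []
```

Choose the timeout so that a degraded (memory-free) response still arrives within your application’s latency budget.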

Next Steps

Authentication: configure API keys for secure communication.

SDK Initialization: detailed SDK configuration options.

First Integration Guide: step-by-step tutorial for your first Synap integration.