Maps Slack identifiers (team_id, channel_id, user_id) to Synap scopes so the bot remembers context per workspace, per channel, and per Slack user.
# pip install slack-bolt openai maximem-synap
import os
import uuid
import asyncio

from slack_bolt.async_app import AsyncApp
from slack_bolt.adapter.socket_mode.aiohttp import AsyncSocketModeHandler
from openai import AsyncOpenAI
from maximem_synap import MaximemSynapSDK

app = AsyncApp(token=os.environ["SLACK_BOT_TOKEN"])
sdk = MaximemSynapSDK()
openai = AsyncOpenAI()

# One conversation_id per Slack channel — channel-level threading
_channel_convs: dict[str, str] = {}

# Strong references to fire-and-forget tasks, so they aren't
# garbage-collected before they finish
_background_tasks: set[asyncio.Task] = set()


def conv_for(channel_id: str) -> str:
    return _channel_convs.setdefault(channel_id, str(uuid.uuid4()))


@app.event("app_mention")
async def on_mention(event, say):
    text = event["text"]
    slack_user_id = event["user"]  # e.g., "U01ABC"
    team_id = event["team"]        # e.g., "T01XYZ" — Slack workspace
    channel_id = event["channel"]

    # Map Slack identifiers to Synap scopes:
    #   customer_id     ← team_id (one Slack workspace = one customer)
    #   user_id         ← slack_user_id
    #   conversation_id ← per-channel UUID
    ctx = await sdk.conversation.context.fetch(
        conversation_id=conv_for(channel_id),
        search_query=[text],
        max_results=6,
    )
    memory_block = "\n".join(f"- {f.content}" for f in ctx.facts[:5])
    prefs_block = "\n".join(f"- {p.content}" for p in ctx.preferences[:3])

    completion = await openai.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "You are a helpful Slack bot.\n"
                f"Known facts:\n{memory_block or '(none)'}\n"
                f"User preferences:\n{prefs_block or '(none)'}"
            )},
            {"role": "user", "content": text},
        ],
    )
    reply = completion.choices[0].message.content

    # Reply in the existing thread if the mention came from one;
    # otherwise start a thread on the mention itself
    await say(text=reply, thread_ts=event.get("thread_ts", event["ts"]))

    # Fire-and-forget ingestion so it doesn't add latency to the response
    task = asyncio.create_task(sdk.memories.create(
        document=f"<@{slack_user_id}>: {text}\nBot: {reply}",
        document_type="ai-chat-conversation",
        user_id=slack_user_id,
        customer_id=team_id,
        metadata={
            "conversation_id": conv_for(channel_id),
            "channel": channel_id,
            "source": "slack",
        },
    ))
    _background_tasks.add(task)
    task.add_done_callback(_background_tasks.discard)


async def main():
    await sdk.initialize()
    handler = AsyncSocketModeHandler(app, os.environ["SLACK_APP_TOKEN"])
    await handler.start_async()


if __name__ == "__main__":
    asyncio.run(main())
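The system-prompt assembly in the handler can be exercised in isolation. A minimal sketch, where the `Fact` dataclass and `build_system_prompt` helper are illustrative stand-ins (the real `ctx.facts` / `ctx.preferences` items come from the Synap SDK):

```python
from dataclasses import dataclass


@dataclass
class Fact:
    content: str


def build_system_prompt(facts: list[Fact], preferences: list[Fact]) -> str:
    # Cap at 5 facts / 3 preferences, same as the handler above
    memory_block = "\n".join(f"- {f.content}" for f in facts[:5])
    prefs_block = "\n".join(f"- {p.content}" for p in preferences[:3])
    return (
        "You are a helpful Slack bot.\n"
        f"Known facts:\n{memory_block or '(none)'}\n"
        f"User preferences:\n{prefs_block or '(none)'}"
    )
```

When both lists are empty, the prompt falls back to `(none)` so the model never sees a dangling empty section.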
Why these mappings?
- team_id → customer_id: each Slack workspace is one tenant. Two different workspaces never share memories.
- slack_user_id → user_id: a stable Slack identifier that survives display-name changes.
- One conversation_id per channel (kept in a process-local dict here; use Redis in production): the bot remembers cross-message context within a channel but doesn't bleed across channels.
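Swapping the process-local dict for Redis could look like the sketch below. Everything here is illustrative, not part of the Synap SDK: `ConversationStore` wraps any client exposing Redis-style `set(key, value, nx=True)` / `get(key)` (e.g. `redis.Redis(decode_responses=True)`), and `DictClient` is an in-memory stub for local testing:

```python
import uuid


class ConversationStore:
    """Maps channel_id -> conversation_id, safely across workers."""

    def __init__(self, client, prefix: str = "synap:conv:"):
        self.client = client
        self.prefix = prefix

    def conv_for(self, channel_id: str) -> str:
        key = self.prefix + channel_id
        # SET with nx=True only writes if the key is absent, so the
        # first worker wins and everyone reads the same conversation_id
        self.client.set(key, str(uuid.uuid4()), nx=True)
        return self.client.get(key)


class DictClient:
    """In-memory stand-in for a Redis client, for local testing."""

    def __init__(self):
        self.data = {}

    def set(self, key, value, nx=False):
        if nx and key in self.data:
            return None
        self.data[key] = value
        return True

    def get(self, key):
        return self.data.get(key)
```

The `nx=True` write is what makes this safe with multiple bot processes: without it, two concurrent mentions in a fresh channel could each mint their own conversation_id.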
Channel-level vs thread-level memory
If you want each Slack thread to be its own conversation, key _channel_convs on event.get("thread_ts", event["ts"]) instead of channel_id — thread_ts is present only on replies inside a thread, while ts is the message's own timestamp and becomes the thread root if anyone replies to it. That gives more isolated context per thread but loses cross-thread memory in the same channel.
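A thread-keyed variant of the lookup could be sketched like this (`_thread_convs` and `conv_for_thread` are hypothetical names, a per-thread counterpart to the per-channel dict above):

```python
import uuid

_thread_convs: dict[str, str] = {}


def conv_for_thread(event: dict) -> str:
    # A threaded reply carries "thread_ts" (the parent message's ts);
    # a top-level message only has "ts", which becomes the thread root
    # once someone replies to it — so both resolve to the same key.
    thread_key = event.get("thread_ts", event["ts"])
    return _thread_convs.setdefault(thread_key, str(uuid.uuid4()))
```

With this keying, a mention at the top of a channel and every reply underneath it share one conversation_id, while a new top-level mention starts a fresh one.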