Install

```bash
pip install synap-llamaindex
```

What’s included

| Class | Purpose |
| --- | --- |
| `SynapChatMemory` | `BaseMemory` implementation for chat engines |
| `SynapRetriever` | Returns `NodeWithScore` objects for RAG pipelines |

SynapChatMemory

Drop into any LlamaIndex chat engine to give it persistent cross-session memory.
```python
from llama_index.core.chat_engine import CondensePlusContextChatEngine
from synap_llamaindex import SynapChatMemory

memory = SynapChatMemory(
    sdk=sdk,
    conversation_id="conv-001",
    user_id="alice",
    customer_id="acme",   # optional
)

chat_engine = CondensePlusContextChatEngine.from_defaults(
    retriever=your_retriever,
    memory=memory,
)

response = await chat_engine.achat("What were my action items from last week?")
```

`SynapChatMemory` loads prior messages on `get()` and writes new turns back to Synap on `put()`. Failed writes raise, so callers know persistence failed; failed reads return an empty buffer.
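These error semantics (fail loud on write, fail soft on read) can be illustrated with a small self-contained sketch. The classes below are hypothetical stand-ins, not part of `synap_llamaindex`:

```python
# Illustrative stub of the documented error semantics: a failed write
# propagates to the caller, a failed read degrades to an empty buffer.
class StubChatMemory:
    def __init__(self, backend):
        self.backend = backend

    def put(self, message):
        # No try/except: a write failure raises so the caller knows
        # the turn was not persisted.
        self.backend.write(message)

    def get(self):
        # A read failure is swallowed; the chat engine just sees an
        # empty message history.
        try:
            return self.backend.read()
        except Exception:
            return []

class FlakyBackend:
    """Backend whose reads and writes always fail."""
    def write(self, message):
        raise ConnectionError("write failed")

    def read(self):
        raise ConnectionError("read failed")

memory = StubChatMemory(FlakyBackend())
print(memory.get())  # reads fail soft: []
try:
    memory.put({"role": "user", "content": "hi"})
except ConnectionError as exc:
    print(f"write surfaced: {exc}")
```

In practice this means you may want a try/except around calls that trigger persistence if dropped turns must be handled explicitly.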

SynapRetriever

Use in any RAG pipeline that accepts a LlamaIndex retriever.
```python
from synap_llamaindex import SynapRetriever

retriever = SynapRetriever(
    sdk=sdk,
    user_id="alice",
    customer_id="acme",
    max_results=6,
    mode="accurate",   # "fast" or "accurate"
)

nodes = await retriever.aretrieve("What are the user's project preferences?")
# Each NodeWithScore: node.text = memory text, node.score = relevance score
```

Compose with LlamaIndex's `RouterRetriever` or `QueryFusionRetriever` to blend Synap memories with document retrieval.
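The fusion idea can be shown in a self-contained sketch: run several retrievers over the same query, then merge candidates by score. The classes below are illustrative stand-ins, not the LlamaIndex or Synap APIs (in a real pipeline you would pass `SynapRetriever` and your document retriever to `QueryFusionRetriever`):

```python
# Minimal score-based fusion over two stub retrievers.
from dataclasses import dataclass

@dataclass
class ScoredNode:
    text: str
    score: float

class MemoryRetriever:
    """Stand-in for a memory-backed retriever."""
    def retrieve(self, query):
        return [ScoredNode("User prefers dark mode", 0.9)]

class DocumentRetriever:
    """Stand-in for a document index retriever."""
    def retrieve(self, query):
        return [ScoredNode("Settings doc: themes", 0.7),
                ScoredNode("Changelog", 0.2)]

def fuse(retrievers, query, top_k=2):
    # Gather candidates from every retriever, then keep the
    # highest-scoring top_k overall.
    nodes = [n for r in retrievers for n in r.retrieve(query)]
    return sorted(nodes, key=lambda n: n.score, reverse=True)[:top_k]

results = fuse([MemoryRetriever(), DocumentRetriever()], "theme preferences")
print([n.text for n in results])
# → ['User prefers dark mode', 'Settings doc: themes']
```

Because both sources return scored nodes, the downstream synthesizer treats memories and documents uniformly.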

Next steps

- LangChain: Memory and callbacks for LangChain chains.
- SDK Context Fetch: Raw context fetch API for custom retrieval logic.