Install

npm install @maximem/synap-vercel-adk

What’s included

Export         Purpose
createSynap    Async factory that initializes the Synap provider
SynapProvider  Provider class with wrap and listen methods

Quick start

import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";
import { createSynap } from "@maximem/synap-vercel-adk";

const synap = await createSynap({
  apiKey: process.env.SYNAP_API_KEY!,
  instanceId: process.env.SYNAP_INSTANCE_ID!,
});

const model = synap.wrap(anthropic("claude-sonnet-4-6"), {
  userId: "alice",
  customerId: "acme",   // optional
});

const { text } = await generateText({
  model,
  messages: [{ role: "user", content: "What do you remember about my account?" }],
});
synap.wrap returns a standard Vercel AI SDK LanguageModel — pass it anywhere you’d use a plain model.

How it works

On every generateText / streamText / generateObject call:
  1. Before — fetches the user’s Synap context and injects it as a system message
  2. Generates — proxies the request to the wrapped model unchanged
  3. After — ingests the completed user + assistant turn into Synap asynchronously
your code → synap.wrap(model) → [fetch context] → wrapped model → [ingest turn] → your code
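The three steps above can be sketched as a generic middleware around any generate function. This is an illustration of the pattern, not the adapter's real internals: `fetchContext` and `ingestTurn` are hypothetical stand-ins for Synap's API calls.

```typescript
// Hypothetical stand-ins for Synap's context fetch and turn ingestion.
async function fetchContext(userId: string): Promise<string> {
  return `Known facts about ${userId}: prefers concise answers.`;
}
async function ingestTurn(userId: string, user: string, assistant: string): Promise<void> {
  // In the real adapter this persists the turn to Synap asynchronously.
}

type Message = { role: "system" | "user" | "assistant"; content: string };
type Generate = (messages: Message[]) => Promise<string>;

// Wrap any generate function with the fetch-context / generate / ingest-turn flow.
function withMemory(generate: Generate, userId: string): Generate {
  return async (messages) => {
    // 1. Before: fetch the user's context and inject it as a system message.
    const context = await fetchContext(userId);
    const augmented: Message[] = [{ role: "system", content: context }, ...messages];

    // 2. Generate: proxy the request to the wrapped model unchanged.
    const reply = await generate(augmented);

    // 3. After: ingest the completed turn without blocking the response.
    const lastUser = messages.filter((m) => m.role === "user").at(-1)?.content ?? "";
    void ingestTurn(userId, lastUser, reply);

    return reply;
  };
}
```

In the real adapter this logic lives inside the `LanguageModel` returned by `synap.wrap`, which is why existing `generateText` / `streamText` call sites need no changes.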

Works with any model

Wrap any Vercel AI SDK-compatible model:
import { openai } from "@ai-sdk/openai";
import { google } from "@ai-sdk/google";
import { anthropic } from "@ai-sdk/anthropic";

const gptWithMemory    = synap.wrap(openai("gpt-4o"),                 { userId: "alice" });
const geminiWithMemory = synap.wrap(google("gemini-2.0-flash"),        { userId: "alice" });
const claudeWithMemory = synap.wrap(anthropic("claude-sonnet-4-6"),    { userId: "alice" });

Streaming

Works with streamText and streamObject without any changes:
const { textStream } = await streamText({
  model: synap.wrap(openai("gpt-4o"), { userId: "alice" }),
  messages: [{ role: "user", content: "Summarize my recent priorities." }],
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}

Per-request scoping

Override the user scope per call by wrapping per-request:
async function handleChat(userId: string, message: string) {
  const model = synap.wrap(openai("gpt-4o"), { userId });
  const { text } = await generateText({
    model,
    messages: [{ role: "user", content: message }],
  });
  return text;
}

Anticipation stream (advanced)

synap.listen() opens a gRPC stream that pre-fetches context speculatively before the user’s request arrives, reducing perceived latency:
const stop = synap.listen({ userId: "alice" });

// Later, when done:
stop();
Use listen in long-lived server processes where you can predict who will send the next message.
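The speculative pre-fetch idea behind listen can be illustrated with a small warm-context cache. This is a sketch of the pattern only, not the adapter's gRPC implementation; `prefetchContext` is a hypothetical stand-in.

```typescript
// Hypothetical context loader; the real adapter streams updates over gRPC.
async function prefetchContext(userId: string): Promise<string> {
  return `context for ${userId}`;
}

// Keep a warm copy of each listened-to user's context so the next
// request can resolve immediately instead of waiting on a fresh fetch.
class ContextCache {
  private warm = new Map<string, Promise<string>>();

  // Start fetching before any request arrives; returns a stop() function
  // that releases the cached entry, mirroring synap.listen's return value.
  listen(userId: string): () => void {
    this.warm.set(userId, prefetchContext(userId));
    return () => this.warm.delete(userId);
  }

  // On request: use the warm copy if present, otherwise fetch on demand.
  get(userId: string): Promise<string> {
    return this.warm.get(userId) ?? prefetchContext(userId);
  }
}
```

The cache holds promises rather than resolved strings, so a request arriving mid-fetch simply awaits the in-flight prefetch instead of issuing a second one.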

Next steps

Mastra

SynapMemory and tools for the Mastra ADK.

Claude Agent SDK

Hooks and MCP server for the Claude Agent SDK.