
Overview

Organizational context follows a specialized path through Synap. Unlike user or customer memories that arise from conversations, org-level context is typically ingested proactively — product documentation, company policies, FAQs, knowledge base articles — and made available to all users interacting with your application. Org context is stored at the CLIENT scope, cached with a longer TTL because it changes infrequently, and merged into every retrieval query at the lowest priority (after user and customer memories). This page covers the full lifecycle of organizational knowledge in Synap.

Lifecycle Stages

1. Ingestion — Loading Organizational Knowledge

Org context enters Synap through two primary methods:

Bootstrap API (recommended for bulk loads)

Use POST /v1/memories/batch with BOOTSTRAP priority for initial loads and large updates. Bootstrap priority ensures these documents are processed with higher throughput and don’t compete with real-time user memory ingestion.
import httpx

async with httpx.AsyncClient() as client:
    response = await client.post(
        "https://api.synap.maximem.ai/v1/memories/batch",
        headers={"Authorization": "Bearer <api_key>"},
        json={
            "memories": [
                {
                    "content": "Our refund policy allows returns within 30 days...",
                    "document_id": "doc_refund_policy_v3",
                    "metadata": {"source": "knowledge_base", "category": "policy"}
                },
                {
                    "content": "Product X supports integrations with Slack, Teams...",
                    "document_id": "doc_product_x_integrations",
                    "metadata": {"source": "product_docs", "category": "features"}
                }
            ],
            "priority": "BOOTSTRAP"
        }
    )
    response.raise_for_status()  # fail fast on 4xx/5xx from the batch endpoint
SDK (for programmatic ingestion)

When you call sdk.memories.create() without specifying user_id or customer_id, the memory is automatically scoped to the CLIENT level — making it organizational context.
# No user_id or customer_id = CLIENT scope = org context
await sdk.memories.create(
    content="Our standard SLA guarantees 99.9% uptime...",
    metadata={"source": "sla_document", "document_id": "doc_sla_v2"}
)
2. Processing — Same Pipeline, Client Scope

Org context goes through the same multi-stage processing pipeline as user memories:
  1. Categorization — Org docs are typically classified as factual or procedural content.
  2. Extraction — Key facts, procedures, and entities are extracted.
  3. Chunking — Documents are split into semantically coherent chunks for embedding.
  4. Entity Resolution — Entities are resolved at the CLIENT scope. Product names, team names, and internal terminology are registered as client-level entities that all users can reference.
  5. Organization — Processed memories are organized and stored at CLIENT scope.
The key difference from user memory processing is scope: all entity resolution and storage operations happen at the CLIENT level, making the knowledge available across all users and customers.
Entity resolution at CLIENT scope means that when a user mentions “Product X” in a conversation, the system can resolve it against the entity registered during org context processing — connecting the user’s question to the relevant product documentation.
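The scope-aware lookup described above can be sketched in a few lines. This is an illustrative model only — EntityRegistry and the entity IDs are invented for the example; Synap's real resolver runs server-side during processing:

```python
from typing import Optional

# Illustrative sketch: entities registered during org-context processing
# live at CLIENT scope, so any user's mention can resolve against them.
class EntityRegistry:
    def __init__(self) -> None:
        # scope -> {lowercased entity name -> entity id}
        self._by_scope: dict[str, dict[str, str]] = {
            "USER": {}, "CUSTOMER": {}, "CLIENT": {},
        }

    def register(self, scope: str, name: str, entity_id: str) -> None:
        self._by_scope[scope][name.lower()] = entity_id

    def resolve(self, mention: str) -> Optional[str]:
        # Walk the scope chain from most to least specific, so a
        # user-level entity shadows a client-level one with the same name.
        for scope in ("USER", "CUSTOMER", "CLIENT"):
            entity_id = self._by_scope[scope].get(mention.lower())
            if entity_id is not None:
                return entity_id
        return None

registry = EntityRegistry()
# Registered during org-context processing, at CLIENT scope:
registry.register("CLIENT", "Product X", "ent_product_x")
# A user's later mention resolves against the client-level entity:
print(registry.resolve("product x"))  # → ent_product_x
```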
3. Storage — Client-Scoped Persistence

Processed org context is stored in both the vector store and graph store at CLIENT scope. This means:
  • Org memories are isolated from user and customer memories in storage — they won’t leak between tenants.
  • Org memories are accessible during retrieval because the scope chain includes CLIENT for all queries.
  • Entity relationships created from org docs (e.g., “Product X” → “Slack integration”) are stored in the graph and available for relationship-based queries.
┌─────────────────────────────────────────┐
│              Storage Layer               │
├──────────────────┬──────────────────────┤
│   Vector Store   │    Graph Store       │
├──────────────────┼──────────────────────┤
│ USER memories    │ USER entities        │
│ CUSTOMER memories│ CUSTOMER entities    │
│ CLIENT memories ◄┼─ CLIENT entities ◄───┼── Org context lives here
│ WORLD memories   │ WORLD entities       │
└──────────────────┴──────────────────────┘
4. Caching — 30-Minute TTL for Performance

Because organizational context changes infrequently (policy documents, product specs, and FAQs are updated occasionally, not every minute), Synap caches org context retrieval results with a 30-minute TTL.

The POST /v1/context/client/fetch endpoint returns cached results within the TTL window. This provides two benefits:
  1. Lower latency — cached results are returned without hitting the vector/graph stores.
  2. Reduced load — the storage layer isn’t queried for the same org context on every user request.
The 30-minute TTL means that after you update org context, it may take up to 30 minutes for the changes to propagate to all retrieval queries. For time-sensitive updates, you can explicitly invalidate the cache via the API.
5. Retrieval — Scope Chain Merge

When any user queries for context, org-level memories are included in the retrieval results through the scope chain merge. The retrieval system queries all applicable scopes and merges the results with a priority ordering:
  1. USER memories (highest priority) — specific to the individual user
  2. CUSTOMER memories — shared across users in the same customer organization
  3. CLIENT memories (lowest priority) — organizational knowledge
If a user memory contradicts an org memory (e.g., user has a special refund arrangement that differs from the standard policy), the user memory takes precedence in the ranked results.
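The priority merge can be sketched as follows. Synap performs this ranking server-side, so the function and result shapes here are illustrative, not the API's actual response format:

```python
# Lower number = higher priority in the merged ranking.
SCOPE_PRIORITY = {"USER": 0, "CUSTOMER": 1, "CLIENT": 2}

def merge_scopes(results_by_scope: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Flatten per-scope results into one ranked list: within a scope,
    results keep their relevance order; across scopes, USER outranks
    CUSTOMER, which outranks CLIENT."""
    merged = []
    for scope, results in results_by_scope.items():
        for rank, memory in enumerate(results):
            merged.append((SCOPE_PRIORITY[scope], rank, scope, memory))
    merged.sort(key=lambda item: (item[0], item[1]))
    return [(scope, memory) for _, _, scope, memory in merged]

ranked = merge_scopes({
    "CLIENT": ["Standard refund policy: 30 days"],
    "USER": ["Alice has a VIP 60-day return window"],
    "CUSTOMER": ["Acme Corp negotiated 45-day returns"],
})
print(ranked[0])  # → ('USER', 'Alice has a VIP 60-day return window')
```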
6. Updates — Idempotent Re-ingestion

When organizational documents change, re-ingest them with the same document_id. Synap uses document_id for idempotent updates:
  • If a memory with the same document_id already exists, the old version is replaced with the new content.
  • The new content goes through the full processing pipeline again.
  • Entity connections are updated to reflect the new content.
  • The cache is invalidated for affected entries (within the next TTL window).
# Update an existing org doc by re-ingesting with the same document_id
await sdk.memories.create(
    content="Our refund policy now allows returns within 45 days...",  # updated
    metadata={"source": "knowledge_base", "document_id": "doc_refund_policy_v3"}
)
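The replace-on-matching-document_id semantics amount to a keyed upsert. A minimal local sketch (the real replacement happens inside Synap when it sees a matching document_id):

```python
# Sketch of an idempotent upsert keyed on document_id.
store: dict[str, str] = {}  # document_id -> latest processed content

def upsert(document_id: str, content: str) -> str:
    action = "replaced" if document_id in store else "created"
    store[document_id] = content  # any previous version is superseded
    return action

print(upsert("doc_refund_policy_v3", "returns within 30 days..."))  # → created
print(upsert("doc_refund_policy_v3", "returns within 45 days..."))  # → replaced
```

Re-ingesting with the same document_id is therefore safe to retry: submitting the same content twice leaves one memory, not two.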

How Org Context Surfaces in Retrieval

The following diagram shows how org context is merged with user and customer memories during retrieval:
User Query: "What is the refund policy?"


┌─────────────────────────────────────────────────┐
│              Retrieval System                    │
│                                                  │
│  1. Search USER scope (user_abc)                │
│     → "Alice has a VIP 60-day return window"    │  ← Highest priority
│                                                  │
│  2. Search CUSTOMER scope (cust_xyz)            │
│     → "Acme Corp negotiated 45-day returns"     │  ← Medium priority
│                                                  │
│  3. Search CLIENT scope (org context)           │
│     → "Standard refund policy: 30 days"         │  ← Lowest priority
│                                                  │
│  Merge & Rank:                                   │
│     Result 1: Alice's VIP policy (user, high)   │
│     Result 2: Acme Corp policy (customer, med)  │
│     Result 3: Standard policy (org, low)        │
└─────────────────────────────────────────────────┘


   Agent receives all three, with ranking
   that reflects the priority ordering.
The agent sees all relevant information but can prioritize the most specific answer. A well-designed prompt will instruct the agent to prefer user-specific information over general org knowledge.

TTL Management

By default, org context is cached for 30 minutes. No configuration is needed — this is handled automatically by the Synap Cloud.
Timeline:
T+0:00  — Org doc ingested and processed
T+0:05  — First retrieval: fetched from store, cached
T+0:10  — Second retrieval: served from cache (fast)
T+0:35  — Cache expired, next retrieval fetches from store

Best Practices

Always assign a stable document_id to organizational documents. This ensures that re-ingestion updates the existing memory rather than creating duplicates. Use a naming convention like doc_<category>_<name>_v<version> for clarity.
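A small local helper can enforce that convention consistently. This is purely illustrative — Synap treats document_id as an opaque, stable key, so any scheme works as long as it is deterministic:

```python
import re

def make_document_id(category: str, name: str, version: int) -> str:
    """Build a document_id following the doc_<category>_<name>_v<version> convention."""
    def slug(text: str) -> str:
        # Lowercase and collapse runs of non-alphanumerics to single underscores.
        return re.sub(r"[^a-z0-9]+", "_", text.lower()).strip("_")
    return f"doc_{slug(category)}_{slug(name)}_v{version}"

print(make_document_id("policy", "Refund Policy", 3))  # → doc_policy_refund_policy_v3
```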
For frequently changing documents (pricing pages, feature lists, policy updates), set up a scheduled job to re-ingest from the source of truth. A daily or hourly cron job that pulls from your CMS or knowledge base and submits to Synap keeps org context fresh.
# Example: scheduled re-ingestion job
async def refresh_org_context():
    docs = await fetch_from_knowledge_base()
    for doc in docs:
        await sdk.memories.create(
            content=doc.content,
            metadata={
                "source": "knowledge_base",
                "document_id": doc.id,
                "updated_at": doc.last_modified.isoformat()
            }
        )
Do not include user_id or customer_id when ingesting organizational context. Including these fields would scope the memory to a specific user or customer, preventing it from being available to all users.
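A cheap guard against this mistake is to validate ingestion payloads before submitting them. This helper is hypothetical — it is not part of the Synap SDK — but it catches an accidentally scoped org document early:

```python
def assert_org_scope(payload: dict) -> dict:
    """Reject payloads that would narrow an org document's scope.
    Local helper only; not part of the Synap SDK."""
    forbidden = {"user_id", "customer_id"} & payload.keys()
    if forbidden:
        raise ValueError(f"org context must not set {sorted(forbidden)}")
    return payload

# Clean payloads pass through unchanged:
assert_org_scope({"content": "Our standard SLA guarantees 99.9% uptime..."})
```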
When loading many documents (initial setup, knowledge base migration), use BOOTSTRAP priority via the batch API. This ensures high-throughput processing without impacting real-time user memory ingestion.
Do not store sensitive internal documents (HR policies, financial data, executive communications) as org context unless your application is designed for internal use only. Org context at CLIENT scope is accessible to all users of your application.

Next Steps

Organizational Context

Conceptual overview of how organizational knowledge fits into Synap.

Long-term Context Lifecycle

How individual memories are managed over time.

Context API Reference

API endpoints for fetching and managing context.