Think of organizational context as the “product brain” — the universal knowledge base that every user benefits from, regardless of which customer organization they belong to. It is the shared floor of knowledge beneath all customer-specific and user-specific memories.

When to use organizational context

Organizational context is the right choice when the knowledge applies universally across your application:

Product documentation

Feature descriptions, API references, usage guides, and how-to articles. Ensures your agent can answer product questions accurately for any user.

Changelog and announcements

Release notes, new features, deprecations, and migration guides. Keeps your agent up to date with product changes.

Global policies

Terms of service, privacy policies, SLA definitions, and compliance requirements. Ensures consistent, accurate policy answers.

Domain knowledge

Industry terminology, best practices, and reference material specific to your product’s domain.

How to ingest organizational context

Organizational context is created by ingesting documents without specifying user_id or customer_id. This causes Synap to store the resulting memories at the Client scope.

Via the SDK

from synap import Synap

sdk = Synap(api_key="your_api_key")

# Ingest a single product document (call from within an async function)
await sdk.memories.create(
    document="""
    # Synap API Rate Limits

    All API endpoints are subject to rate limiting to ensure fair usage:

    - Free tier: 100 requests per minute
    - Pro tier: 1,000 requests per minute
    - Enterprise tier: 10,000 requests per minute

    Rate limit headers are included in every response:
    - X-RateLimit-Limit: maximum requests per window
    - X-RateLimit-Remaining: requests remaining in current window
    - X-RateLimit-Reset: seconds until the window resets

    When rate limited, the API returns HTTP 429 with a Retry-After header.
    """,
    document_type="document"
    # No user_id or customer_id -- stored at CLIENT scope
)

Via the API

# Ingest a product document via REST API
curl -X POST https://api.synap.maximem.ai/v1/memories \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "document": "Our platform supports SSO with SAML 2.0 and OpenID Connect...",
    "document_type": "document"
  }'
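If you are calling the REST endpoint from Python without the SDK, the same request can be built with the standard library. This is a minimal sketch that mirrors the curl example above and only constructs the request object without sending it; the endpoint URL and payload shape are taken from that example, so verify them against the API reference before use:

```python
import json
import urllib.request

# Payload matching the curl example above
payload = {
    "document": "Our platform supports SSO with SAML 2.0 and OpenID Connect...",
    "document_type": "document",
}

# Build (but do not send) the POST request
req = urllib.request.Request(
    "https://api.synap.maximem.ai/v1/memories",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req) would perform the call.
```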

Bulk loading with the batch API

For initial product knowledge bootstrap or large documentation sets, use the batch API to ingest multiple documents in a single request:
# Bulk load product documentation at startup
documents = [
    {
        "document": open("docs/api-reference.md").read(),
        "document_type": "document"
    },
    {
        "document": open("docs/changelog-v3.md").read(),
        "document_type": "document"
    },
    {
        "document": open("docs/pricing.md").read(),
        "document_type": "document"
    },
    {
        "document": open("docs/security-whitepaper.md").read(),
        "document_type": "document"
    }
]

# Batch ingest -- all stored at CLIENT scope
await sdk.memories.batch_create(documents=documents)

# Batch ingest via REST API
curl -X POST https://api.synap.maximem.ai/v1/memories/batch \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "documents": [
      {"document": "Document 1 content...", "document_type": "document"},
      {"document": "Document 2 content...", "document_type": "document"}
    ]
  }'
Batch ingestion is the recommended approach for loading product documentation. It is more efficient than individual calls and ensures all documents are processed as a cohesive set for better entity resolution across documents.
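Batch endpoints commonly cap the number of documents per request. If your documentation set is large, one approach is to split it into fixed-size batches before calling `batch_create`. The per-request limit below is an assumption for illustration, not a documented Synap limit:

```python
def chunked(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

# Hypothetical per-request cap -- check the API reference for the real limit.
BATCH_LIMIT = 50

# for batch in chunked(documents, BATCH_LIMIT):
#     await sdk.memories.batch_create(documents=batch)
```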

How to retrieve organizational context

Organizational context can be retrieved directly or as part of the scope chain during user conversations.

Direct retrieval (client scope only)

Use direct retrieval when you need only organizational context — for example, to pre-populate an FAQ or verify product information:
# Retrieve organizational context directly
context = await sdk.client.context.fetch(
    search_query=["rate limits", "pricing tiers"]
)

# context.facts might include:
# - "Free tier: 100 requests per minute"
# - "Pro tier: 1,000 requests per minute"
# - "Enterprise tier: 10,000 requests per minute"

As part of the scope chain

More commonly, organizational context surfaces automatically during user conversations. When you retrieve context for a specific user and customer, Synap searches the full scope chain and includes relevant Client-scoped memories alongside user-specific and customer-specific ones:
# User asks about rate limits during a conversation
context = await sdk.user.context.fetch(
    user_id="user_alice",
    customer_id="acme_corp",
    search_query=["what are the API rate limits"]
)

# Results may include memories from all scopes:
# USER scope:    "Alice has exceeded rate limits twice this month" (highest priority)
# CUSTOMER scope: "Acme Corp is on the Enterprise tier" (high priority)
# CLIENT scope:  "Enterprise tier: 10,000 requests per minute" (medium priority)
In this example, the agent can combine all three scopes to give a complete answer: “Your Enterprise plan allows 10,000 requests per minute. I notice you have hit rate limits twice this month — would you like to review your usage patterns?”

TTL and caching

Organizational context is cached with a 30-minute TTL (time to live). Organizational knowledge changes infrequently (product documentation is updated periodically, not on every request), so the 30-minute TTL balances freshness against retrieval performance.
  • Cache TTL: 30 minutes
  • Cache scope: per instance
  • Cache invalidation: automatic on TTL expiry; manual via API if needed
  • Why cache: organizational context is read-heavy, write-infrequent
If you update product documentation and need the changes to be reflected immediately, you can invalidate the cache manually. For most use cases, the 30-minute TTL is sufficient — new ingestions will be visible within 30 minutes of processing completion.
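The expiry behavior described above can be pictured as a small time-keyed cache. This is a minimal sketch of the general TTL pattern, not Synap's actual implementation; the class and method names are illustrative:

```python
import time

class TTLCache:
    """Sketch of TTL caching: entries expire automatically after `ttl_seconds`,
    and can also be invalidated manually (analogous to the manual API)."""

    def __init__(self, ttl_seconds=1800):  # 1800 s = 30 minutes
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def set(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self._store[key] = (value, now)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if now - stored_at >= self.ttl:  # expired: drop and miss
            del self._store[key]
            return None
        return value

    def invalidate(self, key):
        """Manual invalidation: the next get() is a miss, forcing a fresh fetch."""
        self._store.pop(key, None)
```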

How organizational context merges with other scopes

During retrieval, organizational context is merged with customer-scoped and user-scoped memories following the scope chain priority rules. The merge process handles conflicts and redundancy:
  • Same fact at multiple scopes: the narrower scope wins (user > customer > client)
  • Complementary facts: all are included, ranked by relevance
  • Contradicting facts: the narrower scope takes priority; the broader scope may be included with a lower rank
  • Budget exceeded: narrower-scope memories are preserved first; organizational context is trimmed if necessary
This means that organizational context serves as a knowledge baseline that can be overridden by more specific customer or user context. For example:
  • Client scope: “Default support hours are 9am-5pm EST”
  • Customer scope: “Acme Corp has 24/7 premium support”
When Alice at Acme Corp asks about support hours, the customer-scoped memory takes priority, and the agent correctly responds with “24/7 premium support.”
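The "narrower scope wins" rule can be sketched in a few lines. This is an illustrative model of the priority resolution, not Synap's internal merge logic; the `topic` key used to detect conflicting facts is a hypothetical field:

```python
# Lower number = narrower scope = higher priority
SCOPE_PRIORITY = {"user": 0, "customer": 1, "client": 2}

def merge_scopes(memories):
    """Keep one memory per topic, preferring user > customer > client."""
    merged = {}
    # Visit narrower scopes first; setdefault keeps the first (winning) entry
    for mem in sorted(memories, key=lambda m: SCOPE_PRIORITY[m["scope"]]):
        merged.setdefault(mem["topic"], mem)
    return list(merged.values())
```

For the support-hours example above, the customer-scoped fact displaces the client-scoped default.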

Example: loading product docs at startup

A common pattern is to load your product documentation into organizational context when your application starts, and then update it whenever documentation changes:
from synap import Synap
import glob

sdk = Synap(api_key="your_api_key")

async def load_product_docs():
    """Load all product documentation into organizational context."""
    doc_files = glob.glob("docs/**/*.md", recursive=True)

    documents = []
    for filepath in doc_files:
        with open(filepath, "r") as f:
            documents.append({
                "document": f.read(),
                "document_type": "document",
            })

    # Batch ingest at CLIENT scope
    result = await sdk.memories.batch_create(documents=documents)
    print(f"Loaded {len(documents)} documents, created {result.memory_count} memories")

async def handle_user_question(user_id: str, customer_id: str, question: str):
    """Handle a user question, drawing on organizational + personal context."""
    context = await sdk.user.context.fetch(
        user_id=user_id,
        customer_id=customer_id,
        search_query=[question]
    )

    # Build prompt with retrieved context
    prompt = f"""You are a helpful support agent.

Context from memory:
- Facts: {context.facts}
- Preferences: {context.preferences}

User question: {question}
"""
    # Pass prompt to your LLM...

Next steps

Customer Context

Learn about customer-scoped knowledge shared within an organization.

Memory Scopes

Understand the full scope chain and priority resolution rules.

API: Context

API reference for context retrieval endpoints.

Memory Architecture

Configure how organizational context is stored, extracted, and retrieved.