Overview

Connectors are pre-built integrations that pull data from external systems into Synap’s ingestion pipeline. Instead of writing custom code to extract data from your CRM, helpdesk, document store, or communication platform, you configure a connector and let it handle the data extraction, transformation, and submission.

How Connectors Work

A connector follows a simple three-step pattern:
1. Read from Source

The connector authenticates with the external system and reads data — conversations, tickets, documents, messages, or records — using the source’s API.
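The read step usually means walking a paginated listing endpoint until the source is exhausted. A minimal sketch of that loop is below, assuming a cursor-paginated source API; the fetch_page callable and its records / next_cursor fields are hypothetical stand-ins for whatever the real source API returns:

```python
from typing import Any, AsyncIterator, Awaitable, Callable, Optional


async def read_all_pages(
    fetch_page: Callable[[Optional[str]], Awaitable[dict[str, Any]]],
) -> AsyncIterator[dict[str, Any]]:
    """Yield every record from a cursor-paginated source API.

    `fetch_page(cursor)` is assumed to return one page of results as
    {"records": [...], "next_cursor": "..." or None}.
    """
    cursor: Optional[str] = None
    while True:
        page = await fetch_page(cursor)       # one API call per page
        for record in page["records"]:
            yield record                      # stream records to the caller
        cursor = page.get("next_cursor")      # None once the source is exhausted
        if cursor is None:
            break
```

Yielding records as they arrive (rather than collecting every page first) keeps memory usage flat and feeds naturally into the batched submit step later.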
2. Transform to Synap Format

Raw data from the external system is transformed into Synap’s document format. This includes:
  • Extracting text content from the source’s data model
  • Mapping source identifiers to Synap’s user_id, customer_id, and document_id fields
  • Attaching relevant metadata (source system, timestamps, categories)
  • Handling pagination, rate limits, and incremental sync
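For a support ticket, the transform step above might look like the sketch below. The raw ticket shape (subject, body, requester_email, and so on) is a hypothetical source data model; only the content, user_id, customer_id, and metadata fields come from Synap's document format:

```python
from typing import Any


def ticket_to_memory(
    ticket: dict[str, Any],
    user_map: dict[str, str],       # source email -> Synap user_id
    customer_map: dict[str, str],   # source org -> Synap customer_id
) -> dict[str, Any]:
    """Transform one raw source ticket into Synap's document format."""
    return {
        # Extract text content from the source's data model
        "content": f"{ticket['subject']}\n\n{ticket['body']}",
        # Map source identifiers to Synap identifiers
        "user_id": user_map.get(ticket["requester_email"]),
        "customer_id": customer_map.get(ticket["org_name"]),
        # Attach source metadata, including a stable document_id
        "metadata": {
            "document_id": f"src_ticket_{ticket['id']}",
            "source": "ticket",
            "source_created_at": ticket["created_at"],
        },
    }
```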
3. Submit to Ingestion Pipeline

The transformed documents are submitted to Synap via the batch ingestion API (POST /v1/memories/batch). From there, they enter the standard processing pipeline — categorization, extraction, chunking, entity resolution, and storage.
┌───────────────┐     ┌───────────────┐     ┌───────────────────────┐
│  External     │     │   Connector   │     │   Synap Ingestion     │
│  System       │────►│   Transform   │────►│   Pipeline            │
│  (CRM, etc.)  │     │   & Submit    │     │   (Process & Store)   │
└───────────────┘     └───────────────┘     └───────────────────────┘
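The batching logic in the submit step can be sketched independently of any HTTP client. In the sketch below, submit_in_batches and its submit callable are illustrative names: in a real connector, submit would wrap the POST /v1/memories/batch call (or the SDK's batch_create), and the 50-document batch size is an assumption, not a documented limit:

```python
from typing import Any, Callable, Iterable


def submit_in_batches(
    memories: Iterable[dict[str, Any]],
    submit: Callable[[list[dict[str, Any]]], None],
    batch_size: int = 50,
) -> int:
    """Group transformed documents and hand each group to `submit`.

    `submit` is assumed to perform the actual POST /v1/memories/batch
    request. Returns the number of batches sent.
    """
    batch: list[dict[str, Any]] = []
    sent = 0
    for memory in memories:
        batch.append(memory)
        if len(batch) >= batch_size:
            submit(batch)
            sent += 1
            batch = []
    if batch:                 # flush the final partial batch
        submit(batch)
        sent += 1
    return sent
```

Injecting the submit callable keeps the batching logic trivially testable and makes it easy to add retries or rate-limit delays around the network call without touching the loop.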

Availability

Connectors are currently under development. Check back for updates or contact us at [email protected] if you have a specific integration need.

Planned Connectors

The following connectors are on the roadmap:

Zendesk

Sync support tickets, conversations, and knowledge base articles from Zendesk. Map ticket requesters to Synap users and organizations to customers.

Intercom

Pull conversation history, user profiles, and help center articles from Intercom. Maintain conversation threading and user identity mapping.

Slack

Import channel messages, threads, and direct messages from Slack. Filter by channel, date range, and message type.

Notion

Sync pages, databases, and wiki content from Notion workspaces. Preserve document structure and metadata.

Confluence

Pull pages, blog posts, and space content from Confluence. Maintain page hierarchy and author information.

Google Drive

Import documents, spreadsheets, and presentations from Google Drive. Extract text content and preserve folder structure metadata.

Salesforce

Sync account records, contact notes, opportunity history, and case data from Salesforce. Map Salesforce accounts to Synap customers.

Have a connector need not listed above? Contact us — we prioritize connector development based on customer demand.

Building Custom Connectors

While official connectors are under development, you can build custom connectors using the SDK or API. The pattern is straightforward: read from your source, transform, and submit. Here is a reference implementation for a generic connector:
from maximem_synap import MaximemSynapSDK
from typing import AsyncIterator
from dataclasses import dataclass

@dataclass
class SourceDocument:
    """A document from your external source system."""
    id: str
    content: str
    user_email: str
    org_name: str
    created_at: str
    source_type: str

async def build_custom_connector(
    sdk: MaximemSynapSDK,
    source_documents: AsyncIterator[SourceDocument],
    user_map: dict[str, str],      # source email → synap user_id
    customer_map: dict[str, str],   # source org → synap customer_id
):
    """
    Generic connector pattern: read from source, transform, submit to Synap.
    """
    batch = []
    batch_size = 50

    async for doc in source_documents:
        # Transform: map source identifiers to Synap identifiers
        memory = {
            "content": doc.content,
            "user_id": user_map.get(doc.user_email),
            "customer_id": customer_map.get(doc.org_name),
            "metadata": {
                "document_id": f"src_{doc.source_type}_{doc.id}",
                "source": doc.source_type,
                "source_created_at": doc.created_at,
            }
        }
        batch.append(memory)

        # Submit in batches
        if len(batch) >= batch_size:
            await sdk.memories.batch_create(memories=batch)
            batch = []

    # Submit remaining documents
    if batch:
        await sdk.memories.batch_create(memories=batch)

Key considerations for custom connectors:
  • Incremental sync: Track the last sync timestamp and only fetch new or updated records from the source system on each run. This reduces processing time and avoids duplicate ingestion.
  • Identity mapping: Map source system identifiers (emails, account IDs) to Synap’s user_id and customer_id. Maintain a mapping table or use a consistent hashing scheme.
  • Stable document IDs: Always set document_id on ingested memories. This ensures that re-running the connector updates existing memories rather than creating duplicates. Use a stable identifier from the source system.
  • Rate limiting: Respect both the source system’s rate limits and Synap’s ingestion rate limits. Use batching (batch_create) and add appropriate delays between batches.
  • Error handling: Implement retry logic for transient failures. Log failed documents for manual review. Consider a dead-letter queue for documents that consistently fail processing.
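The incremental-sync and stable-ID considerations can be sketched as two small helpers. This is a minimal illustration, assuming the source exposes an ISO 8601 updated_at field; the field name and the checkpoint storage are up to your connector:

```python
from datetime import datetime
from typing import Any


def stable_document_id(source_type: str, source_id: str) -> str:
    """Deterministic document_id, so re-running the connector updates
    existing memories instead of creating duplicates."""
    return f"src_{source_type}_{source_id}"


def records_since(
    records: list[dict[str, Any]],
    last_sync: datetime,
) -> list[dict[str, Any]]:
    """Keep only records created or updated after the last sync checkpoint.

    Assumes each record carries an ISO 8601 `updated_at` timestamp.
    """
    return [
        r for r in records
        if datetime.fromisoformat(r["updated_at"]) > last_sync
    ]
```

After a successful run, persist the newest updated_at you saw as the next checkpoint; combined with stable document IDs, a crashed or re-run sync is then safe to repeat.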

Next Steps

Bootstrap Ingestion

Learn about bulk data loading patterns.

SDK Ingestion

Programmatic memory ingestion via the SDK.

Memory API

REST API for memory creation and batch operations.