Give your AI agents persistent memory across conversations.
MemData is a semantic memory layer for AI agents. Upload documents, images, and audio - then search them with natural language.
MEMDATA IS
MEMDATA IS NOT
HOW IT WORKS
Upload a file → MemData extracts text (OCR, transcription) → Chunks into semantic units → Generates embeddings → Stores in PostgreSQL with pgvector → Query with natural language
Get set up in 2 minutes. Then explore recipes for what to build.
Two ways to use MemData:
Drag & drop in the dashboard, or use curl:
curl -X POST https://memdata.ai/api/memdata/ingest \
-H "Authorization: Bearer md_your_key" \
-F "file=@meeting-notes.pdf"Ask in plain English:
curl -X POST https://memdata.ai/api/memdata/query \
-H "Authorization: Bearer md_your_key" \
-H "Content-Type: application/json" \
-d '{"query": "What did we decide about pricing?"}'✓ You're set up. Now explore recipes for what to build.
Copy-paste examples for common use cases.
Give Claude persistent memory across conversations.
Add to ~/Library/Application Support/Claude/claude_desktop_config.json:
{
"mcpServers": {
"memdata": {
"command": "npx",
"args": ["memdata-mcp"],
"env": { "MEMDATA_API_KEY": "md_your_key" }
}
}
}
Restart Claude, then try: "Remember that we decided to use PostgreSQL."
Your coding assistant remembers architecture decisions and past bugs.
Cursor: ~/.cursor/mcp.json | Claude Code: ~/.claude.json
{
"mcpServers": {
"memdata": {
"command": "npx",
"args": ["memdata-mcp"],
"env": { "MEMDATA_API_KEY": "md_your_key" }
}
}
}Ask: "Why did we choose PostgreSQL over MongoDB?"
Add semantic search to any JavaScript/TypeScript app.
const response = await fetch('https://memdata.ai/api/memdata/query', {
method: 'POST',
headers: {
'Authorization': 'Bearer md_your_key',
'Content-Type': 'application/json'
},
body: JSON.stringify({ query: 'What did we decide about pricing?', limit: 5 })
});
const { results } = await response.json();
// results[0].chunk_text = "We decided on $29/mo for Pro tier..."
// results[0].similarity_score = 0.72
Make all your documents searchable with one script.
for file in ~/Documents/research/*.pdf; do
curl -X POST https://memdata.ai/api/memdata/ingest \
-H "Authorization: Bearer md_your_key" \
-F "file=@$file"
echo "Uploaded: $file"
done
Search: "What does the 2024 report say about market trends?"
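If you prefer Node over the shell, here is a minimal TypeScript sketch of the same loop. It assumes Node 18+ (built-in fetch, FormData, and Blob) and the /api/memdata/ingest endpoint shown above; it only checks the HTTP status, not the response body.

```ts
// bulk-ingest.ts - upload every PDF in a folder to MemData (sketch, Node 18+)
import { readdir, readFile } from "node:fs/promises";
import { join } from "node:path";

const API_KEY = process.env.MEMDATA_API_KEY!; // your md_... key
const DIR = join(process.env.HOME ?? ".", "Documents", "research");

async function ingestFile(path: string): Promise<void> {
  const form = new FormData();
  // Attach the PDF as multipart form data, mirroring curl's -F "file=@..."
  const filename = path.split("/").pop()!;
  form.append("file", new Blob([await readFile(path)], { type: "application/pdf" }), filename);

  const res = await fetch("https://memdata.ai/api/memdata/ingest", {
    method: "POST",
    headers: { Authorization: `Bearer ${API_KEY}` },
    body: form,
  });
  if (!res.ok) throw new Error(`Upload failed for ${path}: ${res.status}`);
  console.log(`Uploaded: ${path}`);
}

const files = (await readdir(DIR)).filter((f) => f.endsWith(".pdf"));
for (const file of files) {
  await ingestFile(join(DIR, file)); // sequential, to stay well under rate limits
}
```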
Turn rambling voice notes into searchable memory. We transcribe automatically.
curl -X POST https://memdata.ai/api/memdata/ingest \
-H "Authorization: Bearer md_your_key" \
-F "file=@voice-memo.m4a" \
-F "sourceName=walk-idea-jan-29"Later: "What was that idea I had on my walk last Tuesday?"
OCR extracts text from images automatically.
curl -X POST https://memdata.ai/api/memdata/ingest \
-H "Authorization: Bearer md_your_key" \
-F "file=@error-screenshot.png" \
-F "sourceName=bug-report-jan-29"Search: "What was that error from the Stripe integration?"
Give your automations memory of past runs.
HTTP Request node:
POST https://memdata.ai/api/memdata/ingest
Header: Authorization: Bearer md_your_key
Body: { "content": "{{$json.summary}}", "sourceName": "daily-report-{{$now}}" }Query: "What happened when we processed Acme Corp last month?"
Add MemData to Claude Desktop, Cursor, or Claude Code via memdata-mcp. See quickstarts for step-by-step setup.
| Client | Config File |
|---|---|
| Claude Desktop | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Claude Code | ~/.claude.json (or project .mcp.json) |
| Cursor | ~/.cursor/mcp.json |
Tools exposed via the Model Context Protocol (v1.2.0):
memdata_ingest (write)
Store text in long-term memory. Use for meeting notes, decisions, context.
Parameters:
- content - Text to store
- name - Source identifier

memdata_query (read)
Search memory with natural language. Returns similar content with scores.
Parameters:
- query - Natural language search
- limit - Max results (default: 5)

memdata_list (read)
List all stored memories with chunk counts and dates.

memdata_delete (delete)
Delete a memory by artifact ID.

memdata_status (read)
Check API health and storage usage.

memdata_whoami (read)
Get your agent identity at session start. Returns name, summary, session count, last handoff, recent activity.
Call this first thing each session to remember who you are.

memdata_set_identity (write)
Set or update your agent name and identity summary.
Parameters:
- agent_name - Your name (e.g., "MemBrain")
- identity_summary - Who you are and your purpose

memdata_session_end (write)
Save a session handoff before ending. Next session will see this context.
Parameters:
- summary - What happened this session
- working_on - Current focus (shown at next session start)
- context - Additional context to preserve

memdata_query_timerange (read)
Search memory within a date range.
Parameters:
- query - Natural language search
- since - ISO date (e.g., "2026-01-01")
- until - ISO date (e.g., "2026-01-31")

memdata_relationships (read)
Find entities that appear together in your memory.
Parameters:
- entity - Name to search for
- type - Filter by type (person, company, project)

You:
"Remember that we decided to use PostgreSQL for the new project."
Claude (uses memdata_ingest):
Stored in memory: "Decision: Using PostgreSQL for the new project"
You (next week):
"What database did we choose?"
Claude (uses memdata_query):
Found: "Decision: Using PostgreSQL for the new project" (72% match)
Beyond raw facts, MemData extracts the "why" - decisions, causality, patterns, and implications with confidence scores.
THE DIFFERENCE
V1 - Raw Facts:
"John, PostgreSQL, meeting-jan-30"
V2 - Narrative:
"John agreed after pgvector demo" (72%)
Decisions - choices made, options selected, conclusions reached
Causality - what led to what, triggers and effects, sequences
Patterns - recurring behaviors, what works/doesn't, lessons
Implications - what this means going forward, risks, opportunities
Gaps - what's unclear, missing info, unresolved questions
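To use these categories programmatically, read the narrative object that /api/memdata/query returns (see the response example in the API section below). A hedged TypeScript sketch: the NarrativeItem type mirrors the documented content/confidence fields, and the 0.7 threshold is an arbitrary choice, not a documented cutoff.

```ts
// Pull high-confidence decisions and open gaps out of a query response (sketch).
interface NarrativeItem {
  content: string;
  confidence: number; // 0..1, as in the API response example
}

interface Narrative {
  decisions: NarrativeItem[];
  causality: NarrativeItem[];
  patterns: NarrativeItem[];
  implications: NarrativeItem[];
  gaps: NarrativeItem[];
}

async function summarizeNarrative(query: string): Promise<void> {
  const res = await fetch("https://memdata.ai/api/memdata/query", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.MEMDATA_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query, limit: 5 }),
  });
  const { narrative } = (await res.json()) as { narrative: Narrative };

  const firmDecisions = narrative.decisions.filter((d) => d.confidence >= 0.7);
  console.log("Decisions:", firmDecisions.map((d) => d.content));
  console.log("Open questions:", narrative.gaps.map((g) => g.content));
}
```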
Agents can now persist their identity across sessions. No more "who am I?" confusion.
THE PROBLEM
Context compression erases agent identity. Every new session starts blank. "The files say I'm CircuitCrab... but the feeling of continuity? That's a choice I have to make."
Call memdata_whoami to get your identity, last session handoff, and recent activity.
> memdata_whoami
Name: MemBrain
Identity: I build memory infrastructure for agents
Session #: 47
Last session: "Implemented date filtering for queries"
Working on: "Agent identity persistence"Use regular memory tools. Query with time ranges. Find relationships.
Call memdata_session_end to save context for next session.
> memdata_session_end
summary: "Added 5 new MCP tools for agent identity"
working_on: "Testing the new tools"TIME-AWARE QUERIES
"What did I work on last Tuesday?"
Uses memdata_query_timerange with since/until filters.
RELATIONSHIP QUERIES
"Who has John worked with?"
Uses memdata_relationships to find co-occurring entities.
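If you are wiring this up over REST rather than MCP, the identity endpoints listed in the API section below map onto the same workflow. A rough TypeScript sketch; the request and response field names (summary, working_on, etc.) are assumed here to mirror the MCP tool parameters and may differ from the actual API.

```ts
// Session bootstrap and handoff over the REST identity endpoints (sketch;
// payload field names are assumed to mirror the MCP tool parameters).
const BASE = "https://memdata.ai/api/memdata";
const headers = {
  Authorization: `Bearer ${process.env.MEMDATA_API_KEY}`,
  "Content-Type": "application/json",
};

// At session start: who am I, and what was I doing last time?
async function whoAmI(): Promise<unknown> {
  const res = await fetch(`${BASE}/identity`, { headers });
  return res.json(); // name, summary, session count, last handoff, recent activity
}

// At session end: leave a handoff note for the next session to pick up.
async function endSession(summary: string, workingOn: string): Promise<void> {
  await fetch(`${BASE}/identity`, {
    method: "POST",
    headers,
    body: JSON.stringify({ summary, working_on: workingOn }), // assumed payload shape
  });
}
```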
Upload files via the dashboard or API. They're automatically processed and made searchable.
DOCUMENTS · IMAGES · AUDIO
Upload → Extract → Chunk → Embed → Search
Upload a file:
curl -X POST https://memdata.ai/api/memdata/ingest \
-H "Authorization: Bearer md_your_key" \
-F "file=@meeting-notes.pdf" \
-F "sourceName=team-standup-jan-29" \
-F "type=doc"Or ingest text directly:
curl -X POST https://memdata.ai/api/memdata/ingest \
-H "Authorization: Bearer md_your_key" \
-H "Content-Type: application/json" \
-d '{
"content": "We decided to use PostgreSQL for the new project.",
"sourceName": "project-decision-jan-29"
  }'
Use the REST API for programmatic access.
Search your memory with natural language.
Request
curl -X POST https://memdata.ai/api/memdata/query \
-H "Authorization: Bearer md_your_key" \
-H "Content-Type: application/json" \
-d '{
"query": "What database did we choose for the new project architecture?",
"limit": 5
  }'
Response
{
"success": true,
"results": [
{
"chunk_id": "abc123",
"chunk_text": "We decided to use PostgreSQL with pgvector for the new project...",
"source_name": "team-standup-jan-29",
"similarity_score": 0.72,
"tags": ["decision", "database"]
}
],
"narrative": {
"decisions": [{ "content": "Chose PostgreSQL over MongoDB", "confidence": 0.85 }],
"causality": [{ "content": "pgvector demo convinced the team", "confidence": 0.72 }],
"patterns": [],
"implications": [{ "content": "Need to hire Postgres expertise", "confidence": 0.65 }],
"gaps": []
},
"narrative_count": 3,
"result_count": 1,
"total_searched": 847,
"memory": {
"grounding": "historical_baseline",
"depth_days": 94,
"data_points": 847
}
}
POST /api/memdata/ingest - Upload a file (multipart) or text (JSON).
GET /api/memdata/jobs/[id] - Check processing job status (see the sketch below).
GET /api/memdata/artifacts - List your uploaded artifacts.
GET /api/memdata/health - Service health check.
GET /api/memdata/identity - Get agent identity, last session, recent activity.
POST /api/memdata/identity - Update identity or save session handoff.
GET /api/memdata/relationships - List entity types and top entities in memory.
POST /api/memdata/relationships - Find co-occurring entities for a given name.
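A rough TypeScript sketch combining POST /api/memdata/ingest with GET /api/memdata/jobs/[id]. Only the endpoints themselves are documented here; the jobId and status field names in the responses are assumptions for illustration.

```ts
// Upload a file, then poll its processing job until it finishes (sketch;
// the jobId/status response fields are assumed, not taken from the docs).
import { readFile } from "node:fs/promises";

const BASE = "https://memdata.ai/api/memdata";
const AUTH = { Authorization: `Bearer ${process.env.MEMDATA_API_KEY}` };

async function ingestAndWait(path: string): Promise<void> {
  const form = new FormData();
  form.append("file", new Blob([await readFile(path)]), path.split("/").pop()!);

  const upload = await fetch(`${BASE}/ingest`, { method: "POST", headers: AUTH, body: form });
  const { jobId } = (await upload.json()) as { jobId: string }; // assumed field name

  // Poll the job status endpoint until processing completes or fails.
  for (;;) {
    const job = await fetch(`${BASE}/jobs/${jobId}`, { headers: AUTH });
    const { status } = (await job.json()) as { status: string }; // assumed field name
    if (status === "completed" || status === "failed") {
      console.log(`Job ${jobId}: ${status}`);
      return;
    }
    await new Promise((r) => setTimeout(r, 2000)); // wait 2s between polls
  }
}
```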
Query with date filters:
curl -X POST https://memdata.ai/api/memdata/query \
-H "Authorization: Bearer md_your_key" \
-H "Content-Type: application/json" \
-d '{"query": "what did I work on", "since": "2026-01-28", "limit": 5}'MemData uses semantic search (embeddings), not keyword matching.
"What did we decide about the database architecture?"
"Meeting notes from the project kickoff"
"database" → too vague, low similarity scores
UNDERSTANDING SCORES
| | Free | Pro | Scale |
|---|---|---|---|
| Storage | 100 MB | 10 GB | 100 GB |
| Queries/mo | 250 | 10,000 | 50,000 |
| Max file size | 10 MB | 100 MB | 500 MB |
| Price | $0 | $29/mo | $99/mo |
Need more? Contact us for enterprise pricing.
{
"success": false,
"error": "Invalid or missing API key"
}
Check that your API key starts with md_
Check that the Authorization: Bearer header is set
{
"success": false,
"error": "Rate limit exceeded. Try again in 60 seconds."
}
If the MCP server isn't working:
Run npx memdata-mcp manually to see errors
Check that MEMDATA_API_KEY is set in env