Frequently Asked Questions
Everything you need to know about brain-mcp. Can't find your answer? Open an issue.
TL;DR
Cost: Free & open source (MIT)
Privacy: 100% local; anonymous telemetry (opt-out)
Setup: 5 minutes, Python 3.11+
General
What is brain-mcp?
An open-source MCP server that indexes your AI conversation history and makes it queryable. It gives your AI persistent memory across sessions: decisions you made, questions still open, domains you were working in.
Who is it for?
Anyone who has conversations with AI and wants persistent memory across sessions. Especially useful for people with ADHD or anyone who loses context when switching between projects. If you've ever spent 30 minutes re-explaining context to your AI, brain-mcp is for you.
Is it free?
Yes, fully free and MIT-licensed. Core operations (search, embeddings, state recovery) run 100% locally. The only optional cost is LLM API calls for generating conversation summaries, typically ~$0.05/day for active use.
It's the trigger phrase. Say it in any conversation with your MCP client (Claude, Cursor, Windsurf) and the brain-mcp tools become available. Your AI can then query your conversation history, recover context, check open threads, and more.
How do I install it?
Two commands: pipx install brain-mcp && brain-mcp setup. The setup wizard discovers your conversations, imports them, creates embeddings, and configures your MCP clients, all automatically. Takes about 30 seconds.
How do I update?
Run pipx upgrade brain-mcp. Your data and configuration are preserved. Restart your MCP client after updating.
Privacy & Security
Where is my data stored?
100% on your machine. Parquet files for conversations, LanceDB for vector embeddings, JSON for structured summaries: all local files. No cloud database, no account required. brain-mcp includes anonymous usage telemetry (tool names and latencies only; no conversation content, no personal data). Opt out anytime with brain-mcp telemetry off.
Is everything fully local?
Core operations (search, embeddings, state recovery) are fully local. Summary generation optionally calls an LLM API to generate structured summaries of your conversations. You can skip this step entirely and still use all the search and prosthetic tools.
Do I need an API key?
Only for summary generation (optional). Embeddings use the local nomic-embed-text model: no API key, no account, no cost. If you want summaries, you'll need an API key for your preferred LLM provider.
Technical
Which MCP clients are supported?
Claude Desktop, Cursor, Windsurf, and any MCP-compatible client. brain-mcp implements the standard Model Context Protocol, so any client that speaks MCP can connect to it.
Does it work with ChatGPT?
Not directly, since ChatGPT doesn't support MCP (yet). brain-mcp works with any MCP-compatible client: Claude Desktop, Claude Code, Cursor, and Windsurf. You can still import your ChatGPT conversation history as a data source.
Do I need a Mac?
No. But Apple Silicon (M1 or newer) is recommended for fast local embeddings via MPS acceleration. On Intel Macs, Linux, or Windows without a GPU, embeddings run on CPU: slower but fully functional. CUDA GPUs are also supported if available. Works on Windows too (v0.3.1+).
Does it run on Linux and Windows?
Yes: brain-mcp supports macOS, Linux, and Windows (as of v0.3.1). Python 3.11+ is the only hard requirement. Works with Docker or a local Python setup. Embedding performance depends on your hardware (Apple Silicon MPS is fastest, then CUDA GPUs on Linux/Windows, with CPU as a fallback).
How much disk space does it use?
That depends on your conversation volume. Typical usage: 500MB–2GB total for Parquet files plus vector embeddings. The nomic-embed-text model itself is ~250MB (downloaded once).
Which Python version do I need?
Python 3.11 or newer, required for some of the type annotations and performance features used in the codebase.
Data
Which data sources can it import?
Auto-detected: Claude Code, Claude Desktop, Cursor, Windsurf, and Gemini CLI. Manual import: ChatGPT (via JSON export), Clawdbot, or any custom source via generic JSONL. The system is designed to be extensible.
Can I import data from a custom source?
Yes. Any Parquet file with these columns works: message_id, conversation_id, role, content, created, source. Place the file in your data directory and run brain-mcp sync.
How do I export my ChatGPT history?
Go to Settings → Data Controls → Export Data in ChatGPT. OpenAI will email you a download link. Place the conversations.json in your brain-mcp config directory and run brain-mcp sync. Note: the ChatGPT importer is still being refined; check the GitHub repo for the latest status.
How do I move to a new machine?
Copy your ~/.config/brain-mcp/ directory to the new machine. It contains your config, Parquet data, and vector embeddings. Then run pipx install brain-mcp && brain-mcp setup on the new machine; it will detect the existing data and configure your MCP clients.
Comparison
How is it different from Mem0?
Mem0 stores extracted facts as key-value pairs. brain-mcp reconstructs cognitive state: where you were in a problem, what you'd decided, what questions are still open, and what it would cost to switch to a different domain. Different philosophy: facts vs. state.
How does it compare to Khoj?
Khoj is a document-based "second brain" that indexes your files, notes, and documents. brain-mcp specifically indexes AI conversations and models attention patterns, domain switching, and cognitive state. They solve different problems and can complement each other.
Why build on MCP?
MCP (Model Context Protocol) means any compatible client (Claude Desktop, Cursor, Windsurf) can use the tools natively, without writing any integration code. You configure it once and every conversation has access. No middleware, no webhooks, no custom code.
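For reference, MCP client configuration is typically a short JSON entry per server. A hypothetical Claude Desktop claude_desktop_config.json entry might look like the sketch below (the command and args are assumptions for illustration; brain-mcp setup writes the real entry for you):

```json
{
  "mcpServers": {
    "brain-mcp": {
      "command": "brain-mcp",
      "args": ["serve"]
    }
  }
}
```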
ADHD / Cognitive
What prosthetic tools are included?
Eight tools designed for how attention actually works (especially for monotropic/ADHD minds): tunnel_state, context_recovery, switching_cost, open_threads, dormant_contexts, trust_dashboard, cognitive_patterns, and tunnel_history. They model domain-specific state, track open questions across domains, and quantify the cost of switching attention.
How does context recovery help?
It reconstructs your "save game" for any domain: where you were, what's open, what you decided, what stage you're at. Instead of spending 30 minutes fumbling through old conversations to reconstruct context, you get instant recall. The difference is measured in minutes saved per context switch.
Do I need ADHD to benefit?
No. Anyone who context-switches between projects benefits from persistent memory and instant state recovery. ADHD minds benefit most because context loss is more severe and more frequent, but the tools are useful for any knowledge worker who juggles multiple domains.