Frequently Asked Questions
Common questions about brain-mcp, organized by topic.
General
What is brain-mcp?
An open-source MCP server that indexes your AI conversation history and makes it queryable. It gives your AI persistent memory across sessions — decisions you made, questions still open, domains you were working in.
Who is it for?
Anyone who has conversations with AI and wants persistent memory across sessions. Especially useful for people with ADHD or anyone who loses context when switching between projects. If you've ever spent 30 minutes re-explaining context to your AI, brain-mcp is for you.
Is it free?
Yes, fully free and MIT licensed. Core operations (search, embeddings, state recovery) run 100% locally. The only optional cost is LLM API calls for generating conversation summaries — typically ~$0.05/day for active use.
What does "use brain" mean?
It's the trigger phrase. Say it in any conversation with your MCP client (Claude, Cursor, Windsurf) and the brain-mcp tools become available. Your AI can then query your conversation history, recover context, check open threads, and more.
Privacy & Security
Where is my data stored?
100% on your machine. Parquet files for conversations, LanceDB for vector embeddings, JSON for structured summaries — everything is local files. No cloud database, no account required, no telemetry.
Does anything leave my computer?
Core operations (search, embeddings, state recovery) are fully local. Summary generation optionally calls an LLM API to generate structured summaries of your conversations. You can skip this step entirely and still use all the search and prosthetic tools.
What about API keys?
Only needed for summary generation (optional). Embeddings use the local nomic-embed-text model — no API key, no account, no cost. If you want summaries, you'll need an API key for your preferred LLM provider.
Technical
What MCP clients are supported?
Claude Desktop, Cursor, Windsurf, and any MCP-compatible client. brain-mcp implements the standard Model Context Protocol, so any client that speaks MCP can connect to it.
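As a sketch of what connecting a client looks like, here is a Claude Desktop-style configuration entry. The `mcpServers` key is Claude Desktop's convention for registering MCP servers; the `command` and `args` values below are placeholders for however you have installed brain-mcp, not a confirmed invocation.

```json
{
  "mcpServers": {
    "brain-mcp": {
      "command": "uvx",
      "args": ["brain-mcp"]
    }
  }
}
```

Other MCP clients (Cursor, Windsurf) use their own config files, but the shape — a server name mapped to a launch command — is the same idea.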
Do I need a GPU?
No. But Apple Silicon (M1 or newer) is recommended for fast local embeddings via MPS acceleration. On Intel Macs or Linux without a GPU, embeddings run on CPU — slower but fully functional. CUDA GPUs are also supported if available.
Does it work on Linux?
Yes. Python 3.11+ is the only hard requirement. Works with Docker or a local Python setup. Embedding performance depends on your hardware (CUDA GPU is fastest on Linux, CPU works as fallback).
How much disk space does it need?
Depends on your conversation volume. Typical usage: 500MB–2GB total for Parquet files plus vector embeddings. The nomic-embed-text model itself is ~250MB (downloaded once).
What Python version?
Python 3.11 or newer. Required for some of the type annotations and performance features used in the codebase.
Data
What conversation sources are supported?
ChatGPT (via JSON export), Claude Desktop (direct log reading), Claude Code (session sync), Clawdbot, or any custom source that produces Parquet files with the right schema. The system is designed to be extensible.
Can I bring my own data?
Yes. Any Parquet file with these columns works: message_id, conversation_id, role, content, created, source.
How do I export from ChatGPT?
Go to Settings → Data Controls → Export Data. OpenAI will email you a download link. The export contains a conversations.json file that brain-mcp can import directly.
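If you want to sanity-check the export before importing, here is a small sketch. The structure assumed below (a top-level list of conversations, each with a "title" and a "mapping" of message nodes) reflects past ChatGPT exports and may change; treat the field names as assumptions.

```python
# Peek at a ChatGPT conversations.json export: one summary dict per conversation.
# Field names ("title", "mapping") are assumptions about the export format.
import json

def summarize_export(path):
    with open(path, encoding="utf-8") as f:
        conversations = json.load(f)
    return [
        {"title": c.get("title", "(untitled)"), "nodes": len(c.get("mapping", {}))}
        for c in conversations
    ]
```

Running it over your export gives a quick count of conversations and message nodes, which should roughly match what you see in the ChatGPT UI.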
Comparison
How is this different from Mem0?
Mem0 stores extracted facts as key-value pairs. brain-mcp reconstructs cognitive state — where you were in a problem, what you'd decided, what questions are still open, and what it would cost to switch to a different domain. Different philosophy: facts vs. state.
How is this different from Khoj?
Khoj is a document-based “second brain” that indexes your files, notes, and documents. brain-mcp specifically indexes AI conversations and models attention patterns, domain switching, and cognitive state. They solve different problems and can complement each other.
Why MCP instead of a REST API?
MCP (Model Context Protocol) means any compatible client — Claude Desktop, Cursor, Windsurf — can use the tools natively, without writing any integration code. You configure it once and every conversation has access. No middleware, no webhooks, no custom code.
ADHD / Cognitive
What are the prosthetic tools?
8 tools designed for how attention actually works (especially monotropic/ADHD minds): tunnel_state, context_recovery, switching_cost, open_threads, dormant_contexts, trust_dashboard, cognitive_patterns, tunnel_history. They model domain-specific state, track open questions across domains, and quantify the cost of switching attention.
How does tunnel_state help?
It reconstructs your “save game” for any domain — where you were, what's open, what you decided, what stage you're at. Instead of spending 30 minutes fumbling through old conversations to reconstruct context, you get instant recall. The difference is measured in minutes saved per context switch.
Do I need ADHD to benefit?
No. Anyone who context-switches between projects benefits from persistent memory and instant state recovery. ADHD minds benefit most because context loss is more severe and more frequent — but the tools are useful for any knowledge worker who juggles multiple domains.