# Quickstart

From zero to “use brain” in under 5 minutes.

## Prerequisites

You need Python 3.11+ and an MCP client (Claude Desktop, Cursor, or Windsurf). Apple Silicon recommended for fast local embeddings.

## 1. Install brain-mcp

Install the package from PyPI:

```shell
pip install brain-mcp
```

Or install from source:

```shell
git clone https://github.com/mordechaipotash/brain-mcp.git
cd brain-mcp
pip install -e .
```
## 2. Initialize brain-mcp

Run the init command to discover data sources and create directories:

```shell
brain-mcp init
```

This scans for ChatGPT exports, Claude Desktop logs, and Claude Code sessions.
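Conceptually, this discovery step is a filesystem scan over a few well-known locations. The sketch below illustrates the pattern; the candidate paths are assumptions for illustration, not the actual locations brain-mcp scans:

```python
from pathlib import Path

# Hypothetical default locations (illustrative only)
CANDIDATE_SOURCES = {
    "chatgpt_export": Path.home() / "Downloads" / "conversations.json",
    "claude_desktop": Path.home() / "Library" / "Logs" / "Claude",
    "claude_code": Path.home() / ".claude" / "projects",
}

def discover_sources(candidates):
    """Return the names of candidate sources that exist on disk."""
    return [name for name, path in candidates.items() if path.exists()]
```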

## 3. Run the health check

Verify everything is configured correctly:

```shell
brain-mcp doctor
```
Expected output:

```text
✅ Python 3.12.4
✅ Dependencies installed
✅ Data directory exists
✅ Conversation sources found: 3
✅ Embedding model available (nomic-embed-text-v1.5)
✅ LanceDB initialized
✅ Ready to go!
```
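Conceptually, `doctor` runs a series of named checks and reports each one. A minimal sketch of that pattern (the check names and logic here are assumptions, not brain-mcp's internals):

```python
def run_checks(checks):
    """Run (label, check_fn) pairs; return overall status and report lines."""
    lines, all_ok = [], True
    for label, check_fn in checks:
        passed = bool(check_fn())
        all_ok = all_ok and passed
        lines.append(f"{'✅' if passed else '❌'} {label}")
    return all_ok, lines

ok, report = run_checks([
    ("Python 3.11+", lambda: __import__("sys").version_info >= (3, 11)),
    ("Data directory exists", lambda: True),  # placeholder check
])
print("\n".join(report))
```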
## 4. Configure your MCP client

The easiest way is to auto-configure Claude Desktop:

```shell
python -m cli setup claude
```

### Other clients

For Cursor or Windsurf, see the Cursor guide or add the server manually to your MCP config:

`claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "brain": {
      "command": "python",
      "args": ["mcp_brain_server.py"],
      "cwd": "/path/to/brain-mcp"
    }
  }
}
```
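If you edit the config by hand, a quick sanity check that the JSON parses and defines the `brain` server can save a restart cycle. This is a generic shape check written for this guide, not the official MCP config schema:

```python
import json

def looks_like_brain_config(text):
    """Return True if the JSON defines a 'brain' server with command and args."""
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError:
        return False
    server = cfg.get("mcpServers", {}).get("brain", {})
    return isinstance(server.get("command"), str) and isinstance(server.get("args"), list)
```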
## 5. Your first query

Open your MCP client and type:

“use brain”

Then try your first real query:

```text
> tunnel_state("my-project")
```
Example response:

```text
🧠 Tunnel State: my-project
Stage: executing | Tone: focused
Open questions: 3 | Decisions: 5

❓ Top open questions:
  1. Which chunking strategy for embeddings?
  2. Should we add streaming support?
  3. Rate limiting approach for API?

✅ Recent decisions:
  • Use semantic chunking over fixed-size
  • Target 768-dim embeddings (nomic)
  • Keep everything local-first

📊 Last active: 2 hours ago
   Conversations: 12 | Messages: 847
```

## Next Steps