Quickstart

From zero to “use brain” in under 5 minutes.

Prerequisites

You need Python 3.11+ and an MCP client (Claude Desktop, Cursor, or Windsurf). Apple Silicon recommended for fast local embeddings.
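You can confirm your interpreter meets the requirement before installing. A minimal sketch (the `meets_requirement` helper is ours for illustration, not part of brain-mcp):

```python
import sys

def meets_requirement(version_info, minimum=(3, 11)):
    """Return True when the interpreter version is at least the required minimum."""
    return tuple(version_info[:2]) >= minimum

if __name__ == "__main__":
    label = ".".join(map(str, sys.version_info[:3]))
    if meets_requirement(sys.version_info):
        print(f"Python {label}: OK")
    else:
        print(f"Python {label}: too old, 3.11+ required")
```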

1. Install brain-mcp

Install via pipx (recommended) or pip:

pipx install brain-mcp

Or with pip:

pip install brain-mcp

2. Run setup

The setup wizard discovers sources, imports conversations, creates embeddings, and configures your MCP clients — all in one command:

brain-mcp setup

This discovers conversations from Claude Code, Claude Desktop, Cursor, Windsurf, and Gemini CLI.

3. Run the health check

Verify everything is configured correctly:

brain-mcp doctor

Expected output:
✅ Python 3.12.4
✅ Dependencies installed
✅ Data directory exists
✅ Conversation sources found: 3
✅ Embedding model available (nomic-embed-text-v1.5)
✅ LanceDB initialized
✅ Ready to go!
4. Configure your MCP client

brain-mcp setup already configures your MCP clients automatically. To configure a specific client later:

brain-mcp setup claude    # Claude Desktop + Code
brain-mcp setup cursor    # Cursor
brain-mcp setup windsurf  # Windsurf

Manual configuration

If you prefer to configure manually, add this to your MCP config:

{
  "mcpServers": {
    "my-brain": {
      "command": "brain-mcp",
      "args": ["serve"]
    }
  }
}
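If you script the manual setup, the entry above should be merged into the client's existing config rather than overwriting it, so other MCP servers survive. A minimal sketch (the config path is an assumption for Claude Desktop on macOS; other clients store their config elsewhere):

```python
import json
from pathlib import Path

# Assumed location for Claude Desktop on macOS; adjust for your client.
CONFIG_PATH = Path.home() / "Library/Application Support/Claude/claude_desktop_config.json"

def add_brain_server(config: dict) -> dict:
    """Merge the my-brain entry into mcpServers without clobbering other servers."""
    config.setdefault("mcpServers", {})["my-brain"] = {
        "command": "brain-mcp",
        "args": ["serve"],
    }
    return config

if __name__ == "__main__":
    existing = json.loads(CONFIG_PATH.read_text()) if CONFIG_PATH.exists() else {}
    # Print rather than write, so you can inspect the result first.
    print(json.dumps(add_brain_server(existing), indent=2))
```

Redirect the output to a scratch file and diff it against your real config before replacing it.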
5. Your first query

Open your MCP client and type:

“use brain”

Then try your first real query:

> tunnel_state("my-project")

Example response:
🧠 Tunnel State: my-project
Stage: executing | Tone: focused
Open questions: 3 | Decisions: 5

❓ Top open questions:
  1. Which chunking strategy for embeddings?
  2. Should we add streaming support?
  3. Rate limiting approach for API?

✅ Recent decisions:
  • Use semantic chunking over fixed-size
  • Target 768-dim embeddings (nomic)
  • Keep everything local-first

📊 Last active: 2 hours ago
   Conversations: 12 | Messages: 847

Next Steps