Basic Memory lets you build persistent knowledge through natural conversations with Large Language Models (LLMs) like Claude, while keeping everything in simple Markdown files on your computer. It uses the Model Context Protocol (MCP) to enable any compatible LLM to read and write to your local knowledge base.
- Website: http://basicmachines.co
- Documentation: http://memory.basicmachines.co
Basic Memory provides persistent contextual awareness across sessions through a structured knowledge graph. The system enables LLMs to access and reference prior conversations, track semantic relationships between concepts, and incorporate human edits made directly to knowledge files.
# Install with uv (recommended)
uv tool install basic-memory
# Configure Claude Desktop (edit ~/Library/Application Support/Claude/claude_desktop_config.json)
# Add this to your config:
{
"mcpServers": {
"basic-memory": {
"command": "uvx",
"args": [
"basic-memory",
"mcp"
]
}
}
}
# Now in Claude Desktop, you can:
# - Write notes with "Create a note about coffee brewing methods"
# - Read notes with "What do I know about pour over coffee?"
# - Search with "Find information about Ethiopian beans"
You can view shared context via files in ~/basic-memory (default directory location).
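For example, assuming the default ~/basic-memory location, you can browse the knowledge base with ordinary shell tools (the note filename below is illustrative):
# List all notes in the default knowledge directory
ls ~/basic-memory
# View a note's raw Markdown
cat ~/basic-memory/coffee-brewing-methods.md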
You can use Smithery to automatically configure Basic Memory for Claude Desktop:
npx -y @smithery/cli install @basicmachines-co/basic-memory --client claude
This installs and configures Basic Memory without requiring manual edits to the Claude Desktop configuration file. The Smithery server hosts the MCP server component, while your data remains stored locally as Markdown files.
You can also install the CLI tools to sync files or manage projects.
uv tool install basic-memory
# create a new project in a different directory
basic-memory project add coffee ./examples/coffee
# you can set the project to the default
basic-memory project default coffee
# view available projects
basic-memory project list
Basic Memory Projects
┏━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━┓
┃ Name ┃ Path ┃ Default ┃ Active ┃
┡━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━┩
│ main │ ~/basic-memory │ ✓ │ ✓ │
│ coffee │ ~/dev/basicmachines/basic-memory/examples/coffee │ │ │
└────────┴──────────────────────────────────────────────────┴─────────┴────────┘
Basic Memory will write notes in Markdown format. Open your project directory in your text editor to view project files while you have conversations with an LLM.
Most LLM interactions are ephemeral - you ask a question, get an answer, and everything is forgotten. Each conversation starts fresh, without the context or knowledge from previous ones. Current workarounds have limitations:
- Chat histories capture conversations but aren't structured knowledge
- RAG systems can query documents but don't let LLMs write back
- Vector databases require complex setups and often live in the cloud
- Knowledge graphs typically need specialized tools to maintain
Basic Memory addresses these problems with a simple approach: structured Markdown files that both humans and LLMs can read and write to. The key advantages:
- Local-first: All knowledge stays in files you control
- Bi-directional: Both you and the LLM read and write to the same files
- Structured yet simple: Uses familiar Markdown with semantic patterns
- Traversable knowledge graph: LLMs can follow links between topics
- Standard formats: Works with existing editors like Obsidian
- Lightweight infrastructure: Just local files indexed in a local SQLite database
With Basic Memory, you can:
- Have conversations that build on previous knowledge
- Create structured notes during natural conversations
- Have conversations with LLMs that remember what you've discussed before
- Navigate your knowledge graph semantically
- Keep everything local and under your control
- Use familiar tools like Obsidian to view and edit notes
- Build a personal knowledge base that grows over time
Let's say you're exploring coffee brewing methods and want to capture your knowledge. Here's how it works:
- Start by chatting normally:
I've been experimenting with different coffee brewing methods. Key things I've learned:
- Pour over gives more clarity in flavor than French press
- Water temperature is critical - around 205°F seems best
- Freshly ground beans make a huge difference
... continue conversation.
- Ask the LLM to help structure this knowledge:
"Let's write a note about coffee brewing methods."
LLM creates a new Markdown file on your system (which you can see instantly in Obsidian or your editor):
---
title: Coffee Brewing Methods
permalink: coffee-brewing-methods
tags:
- coffee
- brewing
---
# Coffee Brewing Methods
## Observations
- [method] Pour over provides more clarity and highlights subtle flavors
- [technique] Water temperature at 205°F (96°C) extracts optimal compounds
- [principle] Freshly ground beans preserve aromatics and flavor
## Relations
- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
- affects [[Flavor Extraction]]
The note embeds semantic content and links to other topics via simple Markdown formatting.
- You see this file on your computer in real time in the ~/basic-memory directory (the default location):
---
title: Coffee Brewing Methods
permalink: coffee-brewing-methods
type: note
---
# Coffee Brewing Methods
## Observations
- [method] Pour over provides more clarity and highlights subtle flavors
- [technique] Water temperature at 205°F (96°C) extracts optimal compounds
- [principle] Freshly ground beans preserve aromatics and flavor
- [preference] Medium-light roasts work best for pour over # Added by you
## Relations
- relates_to [[Coffee Bean Origins]]
- requires [[Proper Grinding Technique]]
- affects [[Flavor Extraction]]
- pairs_with [[Breakfast Pastries]] # Added by you
- In a new chat with the LLM, you can reference this knowledge:
Look at `coffee-brewing-methods` for context about pour over coffee
The LLM can now build rich context from the knowledge graph. For example:
Following relation 'relates_to [[Coffee Bean Origins]]':
- Found information about Ethiopian Yirgacheffe
- Notes on Colombian beans' nutty profile
- Altitude effects on bean characteristics
Following relation 'requires [[Proper Grinding Technique]]':
- Burr vs. blade grinder comparisons
- Grind size recommendations for different methods
- Impact of consistent particle size on extraction
Each related document can lead to more context, building a rich semantic understanding of your knowledge base. All of this context comes from standard Markdown files that both humans and LLMs can read and write.
Every time the LLM writes notes, they are saved in local Markdown files that you can:
- Edit in any text editor
- Version via git
- Back up normally
- Share when you want to
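For example, a minimal sketch of putting the knowledge directory under version control (the path and commit message are illustrative):
cd ~/basic-memory
git init
git add .
git commit -m "snapshot of my knowledge base"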
Under the hood, Basic Memory:
- Stores everything in Markdown files
- Uses a SQLite database for searching and indexing
- Extracts semantic meaning from simple Markdown patterns
  - Files become `Entity` objects
  - Each `Entity` can have `Observations`, or facts associated with it
  - `Relations` connect entities together to form the knowledge graph
- Maintains the local knowledge graph derived from the files
- Provides bidirectional synchronization between files and the knowledge graph
- Implements the Model Context Protocol (MCP) for AI integration
- Exposes tools that let AI assistants traverse and manipulate the knowledge graph
- Uses memory:// URLs to reference entities across tools and conversations
The file format is just Markdown with some simple markup:
Each Markdown file has:
title: <Entity title>
type: <The type of Entity> (e.g. note)
permalink: <a uri slug>
- <optional metadata> (such as tags)
Observations are facts about a topic.
They can be added by creating a Markdown list with a special format that can reference a category, tags using a
"#" character, and an optional context.
Observation Markdown format:
- [category] content #tag (optional context)
Examples of observations:
- [method] Pour over extracts more floral notes than French press
- [tip] Grind size should be medium-fine for pour over #brewing
- [preference] Ethiopian beans have bright, fruity flavors (especially from Yirgacheffe)
- [fact] Lighter roasts generally contain more caffeine than dark roasts
- [experiment] Tried 1:15 coffee-to-water ratio with good results
- [resource] James Hoffman's V60 technique on YouTube is excellent
- [question] Does water temperature affect extraction of different compounds differently?
- [note] My favorite local shop uses a 30-second bloom time
Relations are links to other topics. They define how entities connect in the knowledge graph.
Markdown format:
- relation_type [[WikiLink]] (optional context)
Examples of relations:
- pairs_well_with [[Chocolate Desserts]]
- grown_in [[Ethiopia]]
- contrasts_with [[Tea Brewing Methods]]
- requires [[Burr Grinder]]
- improves_with [[Fresh Beans]]
- relates_to [[Morning Routine]]
- inspired_by [[Japanese Coffee Culture]]
- documented_in [[Coffee Journal]]
Here's a complete example of a note with frontmatter, observations, and relations:
---
title: Pour Over Coffee Method
type: note
permalink: pour-over-coffee-method
tags:
- brewing
- coffee
- techniques
---
# Pour Over Coffee Method
This note documents the pour over brewing method and my experiences with it.
## Overview
The pour over method involves pouring hot water through coffee grounds in a filter. The water drains through the coffee
and filter into a carafe or cup.
## Observations
- [equipment] Hario V60 dripper produces clean, bright cup #gear
- [technique] Pour in concentric circles to ensure even extraction
- [ratio] 1:16 coffee-to-water ratio works best for balanced flavor
- [timing] Total brew time should be 2:30-3:00 minutes for medium roast
- [temperature] Water at 205°F (96°C) extracts optimal flavor compounds
- [grind] Medium-fine grind similar to table salt texture
- [tip] 30-45 second bloom with double the coffee weight in water
- [result] Produces a cleaner cup with more distinct flavor notes than immersion methods
## Relations
- complements [[Light Roast Beans]]
- requires [[Gooseneck Kettle]]
- contrasts_with [[French Press Method]]
- pairs_with [[Breakfast Pastries]]
- documented_in [[Brewing Journal]]
- inspired_by [[Japanese Brewing Techniques]]
- affects [[Flavor Extraction]]
- part_of [[Morning Ritual]]
Basic Memory will parse the Markdown and derive the semantic relationships in the content. When you run `basic-memory sync`:
- New and changed files are detected
- Markdown patterns become semantic knowledge:
  - `[tech]` becomes a categorized observation
  - `[[WikiLink]]` creates a relation in the knowledge graph
  - Tags and metadata are indexed for search
- A SQLite database maintains these relationships for fast querying
- MCP-compatible LLMs can access this knowledge via memory:// URLs
This creates a two-way flow where:
- Humans write and edit Markdown files
- LLMs read and write through the MCP protocol
- Sync keeps everything consistent
- All knowledge stays in local files.
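As an illustrative sketch of that flow, a note written by hand in the default directory becomes part of the same knowledge graph after a sync (the filename and content below are made up):
# Create a note by hand using the same Markdown patterns...
cat > ~/basic-memory/espresso-notes.md << 'EOF'
---
title: Espresso Notes
type: note
permalink: espresso-notes
---
# Espresso Notes
## Observations
- [technique] 25-30 second shots taste most balanced
## Relations
- relates_to [[Coffee Brewing Methods]]
EOF
# ...then re-index so MCP-connected LLMs can see it
basic-memory sync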
Basic Memory is built using the MCP (Model Context Protocol) and works with the Claude desktop app (https://claude.ai/):
- Configure Claude Desktop to use Basic Memory:
Edit your MCP configuration file (on macOS, usually located at ~/Library/Application Support/Claude/claude_desktop_config.json):
{
"mcpServers": {
"basic-memory": {
"command": "uvx",
"args": [
"basic-memory",
"mcp"
]
}
}
}
If you want to use a specific project (see Multiple Projects below), update your Claude Desktop config:
{
"mcpServers": {
"basic-memory": {
"command": "uvx",
"args": [
"basic-memory",
"mcp",
"--project",
"your-project-name"
]
}
}
}
- Sync your knowledge:
# One-time sync of local knowledge updates
basic-memory sync
# Run realtime sync process (recommended)
basic-memory sync --watch
- In Claude Desktop, the LLM can now use these tools:
write_note(title, content, folder, tags) - Create or update notes
read_note(identifier, page, page_size) - Read notes by title or permalink
build_context(url, depth, timeframe) - Navigate knowledge graph via memory:// URLs
search(query, page, page_size) - Search across your knowledge base
recent_activity(type, depth, timeframe) - Find recently updated information
canvas(nodes, edges, title, folder) - Generate knowledge visualizations
- Example prompts to try:
"Create a note about our project architecture decisions"
"Find information about JWT authentication in my notes"
"Create a canvas visualization of my project components"
"Read my notes on the authentication system"
"What have I been working on in the past week?"
Basic Memory supports managing multiple separate knowledge bases through projects. This feature allows you to maintain separate knowledge graphs for different purposes (e.g., personal notes, work projects, research topics).
# List all configured projects
basic-memory project list
# Add a new project
basic-memory project add work ~/work-basic-memory
# Set the default project
basic-memory project default work
# Remove a project (doesn't delete files)
basic-memory project remove personal
# Show current project
basic-memory project current
All commands support the --project flag to specify which project to use:
# Sync a specific project
basic-memory --project=work sync
# Run MCP server for a specific project
basic-memory --project=personal mcp
You can also set the BASIC_MEMORY_PROJECT environment variable:
BASIC_MEMORY_PROJECT=work basic-memory sync
Each project maintains:
- Its own collection of markdown files in the specified directory
- A separate SQLite database for that project
- Complete knowledge graph isolation from other projects
Basic Memory is built on some key ideas:
- Your knowledge should stay in files you control
- Both humans and AI should use natural formats
- Simple text patterns can capture rich meaning
- Local-first doesn't mean feature-poor
- Knowledge should persist across conversations
- AI assistants should build on past context
- File formats should be human-readable and editable
- Semantic structure should emerge from natural patterns
- Knowledge graphs should be both AI and human navigable
- Systems should augment human memory, not replace it
Basic Memory provides CLI commands to import data from various sources, converting them into the structured Markdown format:
First, request an export of your data from your Claude account. The data will be emailed to you in several files, including conversations.json and projects.json.
Import Claude.ai conversation data
basic-memory import claude conversations
The conversations will be turned into Markdown files and placed in the "conversations" folder by default (this can be changed with the --folder arg).
Example:
Importing chats from conversations.json...writing to .../basic-memory
Reading chat data... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
╭────────────────────────────╮
│ Import complete! │
│ │
│ Imported 307 conversations │
│ Containing 7769 messages │
╰────────────────────────────╯
Next, you can run the sync command to import the data into basic-memory:
basic-memory sync
You can also import project data from Claude.ai:
➜ basic-memory import claude projects
Importing projects from projects.json...writing to .../basic-memory/projects
Reading project data... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
╭────────────────────────────────╮
│ Import complete! │
│ │
│ Imported 101 project documents │
│ Imported 32 prompt templates │
╰────────────────────────────────╯
Run 'basic-memory sync' to index the new files.
You can also import conversations exported from ChatGPT:
➜ basic-memory import chatgpt
Importing chats from conversations.json...writing to .../basic-memory/conversations
Reading chat data... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
╭────────────────────────────╮
│ Import complete! │
│ │
│ Imported 198 conversations │
│ Containing 11777 messages │
╰────────────────────────────╯
You can also import data from the MCP memory server (https://github.com/modelcontextprotocol/servers/tree/main/src/memory):
➜ basic-memory import memory-json
Importing from memory.json...writing to .../basic-memory
Reading memory.json... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
Creating entities... ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100%
╭──────────────────────╮
│ Import complete! │
│ │
│ Created 126 entities │
│ Added 252 relations │
╰──────────────────────╯
Once you've built up a knowledge base, you can interact with it in several ways:
Basic Memory provides a powerful CLI for managing your knowledge:
# See all available commands
basic-memory --help
# Check the status of your knowledge sync
basic-memory status
# Access specific tool functionality directly
basic-memory tools
# Start a continuous sync process
basic-memory sync --watch
Basic Memory works seamlessly with Obsidian, a popular knowledge management app:
- Point Obsidian to your Basic Memory directory
- Use standard Obsidian features like backlinks and graph view
- See your knowledge graph visually
- Use the canvas visualization generated by Basic Memory
Basic Memory is flexible about how you organize your files:
- Group by topic in folders
- Use a flat structure with descriptive filenames
- Add custom metadata in frontmatter
- Tag files for better searchability
The system will build the semantic knowledge graph regardless of your file organization preference.
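For instance, a minimal sketch of regrouping existing notes into a topic folder (paths are illustrative, and this assumes a moved file is picked up by the next sync like any other change):
mkdir -p ~/basic-memory/coffee
mv ~/basic-memory/coffee-brewing-methods.md ~/basic-memory/coffee/
basic-memory sync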
The write-note tool supports reading content from standard input (stdin), allowing for more flexible workflows when
creating or updating notes in your Basic Memory knowledge base.
This feature is particularly useful for:
- Piping output from other commands directly into Basic Memory notes
- Creating notes with multi-line content without having to escape quotes or special characters
- Integrating with AI assistants like Claude Code that can generate content and pipe it to Basic Memory
- Processing text data from files or other sources
You can pipe content from another command into write_note:
# Pipe output of a command into a new note
echo "# My Note\n\nThis is a test note" | basic-memory tools write-note --title "Test Note" --folder "notes"
# Pipe output of a file into a new note
cat README.md | basic-memory tools write-note --title "Project README" --folder "documentation"
# Process text through other tools before saving as a note
cat data.txt | grep "important" | basic-memory tools write-note --title "Important Data" --folder "data"
For multi-line content, you can use heredoc syntax:
# Create a note with heredoc
cat << EOF | basic-memory tools write-note --title "Project Ideas" --folder "projects"
# Project Ideas for Q2
## AI Integration
- Improve recommendation engine
- Add semantic search to product catalog
## Infrastructure
- Migrate to Kubernetes
- Implement CI/CD pipeline
EOF
You can redirect input from a file:
# Create a note from file content
basic-memory tools write-note --title "Meeting Notes" --folder "meetings" < meeting_notes.md
License: AGPL-3.0
Built with
