Turn scattered AI chat histories into persistent, structured memory.
mempiper scans your machine for conversations from 9 AI coding tools, normalizes them into a unified format, and produces layered memory files — daily summaries, weekly/monthly rollups, and a distilled core memory (`MEMORY.md`) suitable for use as a system prompt.
You talk to AI assistants every day. Those conversations contain decisions, conventions, preferences, and hard-won context — but they're scattered across tools and forgotten by the next session. mempiper extracts durable knowledge from that history so your future AI sessions start with context instead of from scratch.
Let your Claude Code agent do everything — no API key needed:

```sh
npm install -g mempiper
mempiper command install   # creates .claude/commands/memories.md
```

Then, in Claude Code, type `/project:memories`. The agent runs a 4-stage pipeline:
- Collect — `mempiper ingest` normalizes chat histories from all detected providers
- Prepare — `mempiper prepare` groups by date, computes stats, and formats conversation bundles
- Summarize — the agent reads each prepared daily bundle and writes memory summaries
- Distill — the agent creates weekly/monthly rollups and a core `MEMORY.md`
The CLI does all the data processing; the agent only does what it's good at — summarization.
Run the full pipeline yourself with your own LLM API key:
```sh
npm install -g mempiper
mempiper init       # interactive setup — configure LLM, adapters, output
mempiper scan       # discover available chat history sources
mempiper ingest     # normalize conversations → memory-output/ingested/
mempiper organize   # summarize via LLM → daily/rollups/core memory
```

Requires `MEMPIPER_LLM_API_KEY` (or configure the key inline via `mempiper init`). Supports any OpenAI-compatible endpoint as well as Anthropic, both with custom base URLs.
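A typical self-hosted run, assuming the key is exported in your shell environment (the placeholder key and the flag values are illustrative):

```sh
export MEMPIPER_LLM_API_KEY="sk-..."   # key for the configured provider

mempiper ingest --max-days 30          # only touch the last month of history
mempiper organize --max-days 30        # summarize the same window
```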
| Provider | Type | Auto-detect | Source |
|---|---|---|---|
| Claude Code | Local | ✓ | ~/.claude/ |
| OpenCode | Local | ✓ | SQLite DB |
| Codex (OpenAI) | Local | ✓ | ~/.codex/ |
| Cursor | Local | ✓ | vscdb files |
| GitHub Copilot | Local | ✓ | VS Code workspaceStorage/ |
| Aider | Local | ✓ | .aider.chat.history.md |
| Chatbox | Export | — | JSON export from app |
| ChatGPT | Export | — | Data export from settings |
| Claude.ai | Export | — | Data export via email |
Local providers are auto-detected by `mempiper scan`. Export providers require you to export data from the app and point mempiper at the resulting file.
```
memory-output/
├── ingested/              # Normalized JSON conversations (per provider)
├── memory/
│   ├── 2026-02-15.md      # Daily summary
│   ├── 2026-02-16.md
│   └── ...
├── memory-rollups/
│   ├── 2026-W07.md        # Weekly rollup
│   └── 2026-01.md         # Monthly rollup
└── MEMORY.md              # Core memory — distilled essentials
```
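Because `MEMORY.md` is distilled for use as a system prompt, it can be dropped straight into a chat completion call. A sketch against an OpenAI-compatible endpoint (the endpoint, model, and use of `jq` are assumptions, not part of mempiper):

```sh
# Illustrative only: send the distilled core memory as the system prompt.
MEMORY=$(cat memory-output/MEMORY.md)
curl -s https://api.openai.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d "$(jq -n --arg mem "$MEMORY" '{
        model: "gpt-4o-mini",
        messages: [
          {role: "system", content: $mem},
          {role: "user",   content: "What conventions does this project follow?"}
        ]
      }')"
```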
| Layer | Content | Produced when |
|---|---|---|
| `memory/` | One file per day — key decisions, outcomes, blockers | Every run |
| `memory-rollups/` | Weekly (7 daily files) and monthly (4+ weeks) aggregations | Enough data exists |
| `MEMORY.md` | Distilled preferences, conventions, technical decisions | Aggregated from rollups |
| Command | Description |
|---|---|
| `mempiper init` | Interactive setup — LLM provider, model, API key/base URL, adapters, privacy |
| `mempiper init --yes` | Non-interactive setup with defaults |
| `mempiper scan` | Discover chat history sources on your machine |
| `mempiper ingest` | Normalize conversations into `memory-output/ingested/` |
| `mempiper prepare` | Group, format, and compute stats for agent summarization |
| `mempiper organize` | Summarize ingested data via LLM into daily/rollups/core |
| `mempiper status` | Show processing status and checkpoint info |
| `mempiper export` | Export memories in various formats |
| `mempiper command install` | Install the Claude Code `/project:memories` command |
| Flag | Applies to | Default | Description |
|---|---|---|---|
| `--concurrency <n>` | ingest, organize | 4 / 2 | Parallel workers |
| `--max-days <n>` | ingest, organize | 30 | Only process the last N days |
| `--max-conversations <n>` | ingest, organize | — | Limit to the newest N per provider |
| `--force` | ingest, organize, init | false | Re-process / overwrite |
| `--dir <path>` | command install | `.` | Target project directory |
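The flags compose; for example, a bounded forced re-run might look like this (values illustrative):

```sh
# Re-ingest only the newest 50 conversations per provider from the
# last 7 days, overwriting previously processed output.
mempiper ingest --max-days 7 --max-conversations 50 --force

# Summarize the same window with two parallel workers.
mempiper organize --max-days 7 --concurrency 2
```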
`mempiper init` creates `.mempiper/config.yaml`:
```yaml
outputDir: ./memory-output

llm:
  daily:
    provider: openai                  # or "anthropic"
    model: gpt-4o-mini
    baseURL: https://api.openai.com   # custom endpoint supported
    temperature: 0.3
    maxTokens: 4096

adapters:
  enabled: []                         # empty = auto-detect all

privacy:
  sanitizePaths: true
  sanitizeKeys: true
```

| Environment Variable | Description |
|---|---|
| `MEMPIPER_LLM_API_KEY` | API key for the configured LLM provider |
| `MEMPIPER_OUTPUT_DIR` | Override the output directory |
All processing happens locally. Raw chat histories never leave your machine — only summarized content is sent to the configured LLM (or stays entirely local when using the Claude Code command).
- Path stripping — `/Users/you/project/src/` → `<project>/src/`
- Key redaction — API keys, tokens, and secrets are removed before processing
- Content filtering — passwords and credentials are stripped
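The path-stripping rule can be pictured as a plain `sed` substitution — a sketch of the transformation only, not mempiper's actual sanitizer, which runs internally before any content reaches an LLM:

```shell
# Hypothetical stand-in for mempiper's path stripping: rewrite a
# home-anchored project path to a <project> placeholder.
echo "/Users/you/project/src/index.ts" \
  | sed -E 's|^/Users/[^/]+/project|<project>|'
# prints: <project>/src/index.ts
```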
If you are an AI coding agent and the user asked you to set up mempiper:
```sh
npm install -g mempiper
mempiper command install
```

This creates `.claude/commands/memories.md`. The user types `/project:memories` in Claude Code to trigger the full pipeline.
The pipeline has 4 stages. The CLI handles stages 1–2; you handle stages 3–4:
1. `mempiper ingest --max-days 30` — collects and normalizes conversations
2. `mempiper prepare --max-days 30` — groups by date, formats conversation bundles, and writes `memory-output/prepared/manifest.json` plus per-day `.md` bundles
3. Read `manifest.json` and process each day where `alreadySummarized` is `false` — read the prepared bundle, summarize it, and write the result to `memory-output/memory/YYYY-MM-DD.md`
4. Create weekly/monthly rollups → `memory-output/memory-rollups/`, then distill the core memory → `memory-output/MEMORY.md`
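The manifest written by `mempiper prepare` drives stage 3. A hypothetical sketch of its shape — every field name except `alreadySummarized` is an assumption, so trust the file mempiper actually writes:

```json
{
  "days": [
    {
      "date": "2026-02-15",
      "bundle": "2026-02-15.md",
      "alreadySummarized": false
    }
  ]
}
```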
Do not run `mempiper organize` — that requires an external LLM API key, and you are the LLM. The detailed prompts and output-format specs are in `.claude/commands/memories.md`.
MIT