Consolidate agent chat histories into OpenClaw-compatible memory files.

themez/mempiper
mempiper

Turn scattered AI chat histories into persistent, structured memory.

mempiper scans your machine for conversations from 9 AI coding tools, normalizes them into a unified format, and produces layered memory files — daily summaries, weekly/monthly rollups, and a distilled core memory (MEMORY.md) suitable for use as a system prompt.

Why

You talk to AI assistants every day. Those conversations contain decisions, conventions, preferences, and hard-won context — but they're scattered across tools and forgotten by next session. mempiper extracts durable knowledge from that history so your future AI sessions start with context instead of from scratch.

Two Ways to Use

1. Claude Code Command (Recommended)

Let your Claude Code agent do everything — no API key needed:

npm install -g mempiper
mempiper command install     # creates .claude/commands/memories.md

Then in Claude Code, type /project:memories. The agent runs a 4-stage pipeline:

  1. Collect — mempiper ingest normalizes chat histories from all detected providers
  2. Prepare — mempiper prepare groups by date, computes stats, and formats conversation bundles
  3. Summarize — the agent reads each prepared daily bundle and writes memory summaries
  4. Distill — the agent creates weekly/monthly rollups and a core MEMORY.md

The CLI does all the data processing; the agent only does what it's good at — summarization.

2. Standalone CLI

Run the full pipeline yourself with your own LLM API key:

npm install -g mempiper
mempiper init               # interactive setup — configure LLM, adapters, output
mempiper scan               # discover available chat history sources
mempiper ingest             # normalize conversations → memory-output/ingested/
mempiper organize           # summarize via LLM → daily/rollups/core memory

Requires MEMPIPER_LLM_API_KEY (or configure inline via mempiper init). Supports any OpenAI-compatible endpoint and Anthropic, both with custom base URLs.
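For instance, switching the daily summarizer to Anthropic with a custom base URL could look like this in .mempiper/config.yaml (the model name and base URL below are illustrative, not defaults):

```yaml
llm:
  daily:
    provider: anthropic
    model: claude-sonnet-4-20250514   # illustrative; use any model available to you
    baseURL: https://api.anthropic.com
    temperature: 0.3
    maxTokens: 4096
```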

Supported Providers

| Provider | Type | Auto-detect | Source |
|---|---|---|---|
| Claude Code | Local | Yes | ~/.claude/ |
| OpenCode | Local | Yes | SQLite DB |
| Codex (OpenAI) | Local | Yes | ~/.codex/ |
| Cursor | Local | Yes | vscdb files |
| GitHub Copilot | Local | Yes | VS Code workspaceStorage/ |
| Aider | Local | Yes | .aider.chat.history.md |
| Chatbox | Export | No | JSON export from app |
| ChatGPT | Export | No | Data export from settings |
| Claude.ai | Export | No | Data export via email |

Local providers are auto-detected by mempiper scan. Export providers require you to export data from the app and point mempiper at the file.

Output

memory-output/
├── ingested/               # Normalized JSON conversations (per provider)
├── memory/
│   ├── 2026-02-15.md       # Daily summary
│   ├── 2026-02-16.md
│   └── ...
├── memory-rollups/
│   ├── 2026-W07.md         # Weekly rollup
│   └── 2026-01.md          # Monthly rollup
└── MEMORY.md               # Core memory — distilled essentials
| Layer | Content | Produced when |
|---|---|---|
| memory/ | One file per day — key decisions, outcomes, blockers | Every run |
| memory-rollups/ | Weekly (7 daily files) and monthly (4+ weeks) aggregations | When enough data exists |
| MEMORY.md | Distilled preferences, conventions, technical decisions | Aggregated from rollups |
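The rollup filenames follow ISO week and calendar month keys. A minimal sketch of deriving both keys from a daily summary date (a hypothetical helper, not part of mempiper's API):

```python
from datetime import date

def rollup_keys(d: date) -> tuple[str, str]:
    """Return the (weekly, monthly) rollup keys for a daily summary date."""
    iso_year, iso_week, _ = d.isocalendar()
    weekly = f"{iso_year}-W{iso_week:02d}"   # e.g. 2026-W07.md
    monthly = f"{d.year}-{d.month:02d}"      # e.g. 2026-02.md
    return weekly, monthly

print(rollup_keys(date(2026, 2, 15)))  # ('2026-W07', '2026-02')
```

Note that the weekly key uses the ISO year, which can differ from the calendar year around January 1.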

Commands

| Command | Description |
|---|---|
| mempiper init | Interactive setup — LLM provider, model, API key/base URL, adapters, privacy |
| mempiper init --yes | Non-interactive setup with defaults |
| mempiper scan | Discover chat history sources on your machine |
| mempiper ingest | Normalize conversations into memory-output/ingested/ |
| mempiper prepare | Group, format, and compute stats for agent summarization |
| mempiper organize | Summarize ingested data via LLM into daily/rollups/core |
| mempiper status | Show processing status and checkpoint info |
| mempiper export | Export memories in various formats |
| mempiper command install | Install the Claude Code /project:memories command |

Flags

| Flag | Applies to | Default | Description |
|---|---|---|---|
| --concurrency <n> | ingest, organize | 4 / 2 | Parallel workers |
| --max-days <n> | ingest, organize | 30 | Only process the last N days |
| --max-conversations <n> | ingest, organize | | Limit to the newest N per provider |
| --force | ingest, organize, init | false | Re-process / overwrite existing output |
| --dir <path> | command install | . | Target project directory |

Configuration

mempiper init creates .mempiper/config.yaml:

outputDir: ./memory-output
llm:
  daily:
    provider: openai        # or "anthropic"
    model: gpt-4o-mini
    baseURL: https://api.openai.com   # custom endpoint supported
    temperature: 0.3
    maxTokens: 4096
adapters:
  enabled: []               # empty = auto-detect all
privacy:
  sanitizePaths: true
  sanitizeKeys: true
| Environment Variable | Description |
|---|---|
| MEMPIPER_LLM_API_KEY | API key for the configured LLM provider |
| MEMPIPER_OUTPUT_DIR | Override the output directory |
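A sketch of the precedence this implies, assuming the environment variable wins over the config file (the resolution logic here is an assumption, not mempiper's actual source):

```python
import os

def resolve_output_dir(config: dict) -> str:
    # MEMPIPER_OUTPUT_DIR, when set, overrides the config file value
    return os.environ.get("MEMPIPER_OUTPUT_DIR") or config.get("outputDir", "./memory-output")

cfg = {"outputDir": "./memory-output"}
os.environ.pop("MEMPIPER_OUTPUT_DIR", None)
print(resolve_output_dir(cfg))                  # ./memory-output
os.environ["MEMPIPER_OUTPUT_DIR"] = "/tmp/mem"
print(resolve_output_dir(cfg))                  # /tmp/mem
```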

Privacy

All processing happens locally. Raw chat histories never leave your machine — only summarized content is sent to the configured LLM (or stays entirely local when using the Claude Code command).

  • Path stripping — /Users/you/project/src/ becomes <project>/src/
  • Key redaction — API keys, tokens, secrets are removed before processing
  • Content filtering — Passwords and credentials are stripped
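As an illustration of what such sanitization could look like (the regexes and replacement format below are assumptions; mempiper's actual rules may differ):

```python
import re

# Hypothetical patterns: macOS-style home paths and common API-key prefixes
HOME_PATH = re.compile(r"/Users/[^/\s]+/(?P<project>[^/\s]+)")
SECRET = re.compile(r"\b(sk-[A-Za-z0-9]{8,}|ghp_[A-Za-z0-9]{8,})\b")

def sanitize(text: str) -> str:
    text = HOME_PATH.sub(lambda m: f"<{m.group('project')}>", text)  # strip user paths
    return SECRET.sub("[REDACTED]", text)                            # redact key-like tokens

print(sanitize("See /Users/you/project/src/app.ts, key sk-abcdef123456"))
```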

For AI Agents

If you are an AI coding agent and the user asked you to set up mempiper:

npm install -g mempiper
mempiper command install

This creates .claude/commands/memories.md. The user types /project:memories in Claude Code to trigger the full pipeline.

The pipeline has 4 stages. The CLI handles stages 1–2; you handle stages 3–4:

  1. mempiper ingest --max-days 30 — collects and normalizes conversations
  2. mempiper prepare --max-days 30 — groups by date, formats conversation bundles, writes memory-output/prepared/manifest.json and per-day .md bundles
  3. Read manifest.json, process each day where alreadySummarized is false — read the prepared bundle, summarize, write to memory-output/memory/YYYY-MM-DD.md
  4. Create weekly/monthly rollups → memory-output/memory-rollups/, distill core memory → memory-output/MEMORY.md

Do not run mempiper organize — that requires an external LLM API key. You are the LLM. The detailed prompts and output format specs are in .claude/commands/memories.md.
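Stage 3 selection can be sketched as follows; the manifest's shape beyond the alreadySummarized flag (a days array with date fields) is an assumption:

```python
import json
import tempfile
from pathlib import Path

def days_to_summarize(manifest_path: Path) -> list[str]:
    """Return dates whose prepared bundles still need a summary."""
    manifest = json.loads(manifest_path.read_text())
    return [day["date"] for day in manifest["days"] if not day["alreadySummarized"]]

# Synthetic manifest; the real one lives at memory-output/prepared/manifest.json
sample = {"days": [
    {"date": "2026-02-15", "alreadySummarized": True},
    {"date": "2026-02-16", "alreadySummarized": False},
]}
path = Path(tempfile.gettempdir()) / "manifest.json"
path.write_text(json.dumps(sample))
print(days_to_summarize(path))  # ['2026-02-16']
```

Each returned date maps to a prepared bundle to read and a memory-output/memory/YYYY-MM-DD.md file to write.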

License

MIT
