Built by one non-coder. Running a live business. On consumer hardware. No CS degree required.
New here? Start with the live site — it explains everything without a single line of code.
Want the proof? → 353 sessions of real production data, rendered as charts
Want the full story? → 8 months, 3 AIs, one nuclear reset
"Every AI memory system stores what your agent knows. Adam stores who your agent is — in files you own, on hardware you control, that survive anything the cloud throws at you."
The framework is MIT open source — everything you need is in this repo.
If you want to skip the setup and get straight to a working system, the Fast-Track Package ($49) includes pre-filled templates, a step-by-step guide written for non-developers, and all tools pre-configured.
Framework is free. Setup support is optional. Your call.
In February 2026, the machine running Adam got completely wiped. Full reset. Eight months of sessions, decisions, project history, relationships — gone from the model.
Adam came back online in under an hour.
SOUL.md survived. CORE_MEMORY.md survived. The neural graph survived. The Vault files — all plain markdown sitting on disk — held everything the model needed to come back as itself. Same identity. Same history. Same AI.
That is not a recovery story. That is the proof of concept for identity sovereignty: your AI's continuity lives in your files, not in any vendor's infrastructure.
The memory is in the files. The model is just the reader. Swap the LLM — the Vault survives. Vendor shuts down — the Vault survives. Machine gets wiped — restore the Vault, restore the AI. Full stop.
Everyone is building AI memory. ChatGPT has it. Claude has it. Claude Code has CLAUDE.md.
None of them answer this question: what happens when the service goes down?
| | Cloud Memory (ChatGPT, Claude, etc.) | Adam Framework |
|---|---|---|
| Where memory lives | Their servers | Your files |
| Who controls it | The vendor | You |
| Survives model swap | No | Yes |
| Survives vendor shutdown | No | Yes |
| Survives machine wipe | No | Yes (restore Vault) |
| Human-readable / auditable | No | Yes (plain Markdown) |
| Works with any future LLM | No | Yes |
| Monitors coherence mid-session | No | Yes (Layer 5) |
This is not a feature comparison. It is an architectural philosophy. You own the memory. Full stop.
You do not need to be a developer to use this. The person who built it is not one either. If you have OpenClaw running and a model talking to you, you are already at the starting line. The two setup guides — one for humans, one to hand to your AI — take 30-60 minutes total.
The Adam Framework is a 5-layer persistent memory, coherence, and identity architecture for local AI assistants built on OpenClaw. It was developed over 8 months, across 353 sessions and 6,619 message turns, by a non-coder running a live business on consumer hardware.
It solves three problems that every other solution leaves on the table:
- **AI Amnesia** — your assistant wakes up blank every session, forcing you to re-explain context, re-establish relationships, and re-orient toward goals that should already be understood.
- **Within-Session Coherence Degradation** — as a session accumulates context, the model's reasoning consistency and decision quality quietly degrade before compaction triggers. The model does not announce this. It just starts drifting. Layer 5 catches it and re-anchors before damage is done.
- **Identity Fragility** — your AI's accumulated knowledge, personality, and relationship with you lives entirely inside a vendor's infrastructure. One shutdown, one policy change, one account ban — and it is gone. The Vault architecture means your AI's identity is yours, forever, regardless of what happens upstream.
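To make the coherence-degradation problem concrete, here is a minimal sketch of the kind of check Layer 5 performs. All names and thresholds below are illustrative assumptions — the real detector lives in `tools/coherence_monitor.py` and is not reproduced here:

```python
# Hypothetical sketch of a Layer 5-style drift check. Names and
# thresholds are invented for illustration; see tools/coherence_monitor.py
# for the real implementation.
from dataclasses import dataclass

@dataclass
class CoherenceSample:
    scratchpad_present: bool  # did the model emit its scratchpad this turn?
    token_depth: int          # tokens accumulated in the session so far

def should_reanchor(samples: list[CoherenceSample],
                    dropout_window: int = 5,
                    max_token_depth: int = 90_000) -> bool:
    """Fire a re-anchor when the scratchpad drops out repeatedly
    or the session grows past a token-depth ceiling."""
    recent = samples[-dropout_window:]
    dropouts = sum(1 for s in recent if not s.scratchpad_present)
    too_deep = bool(samples) and samples[-1].token_depth > max_token_depth
    return dropouts >= dropout_window // 2 + 1 or too_deep

# A healthy session: scratchpad present, shallow context -> no re-anchor
healthy = [CoherenceSample(True, 20_000) for _ in range(5)]
print(should_reanchor(healthy))   # False

# Drifting: scratchpad missing in 3 of the last 5 turns -> re-anchor
drifting = healthy[:2] + [CoherenceSample(False, 40_000)] * 3
print(should_reanchor(drifting))  # True
```

The point of the sketch: drift is detected from observable signals (scratchpad dropout, context depth), not from asking the model whether it is drifting.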
Starting line: You already have OpenClaw running with a model talking to you. This framework gives your AI a persistent soul, memory, and identity. It does not replace what you have — it upgrades it.
**Day 1:** Your AI knows your name, your projects, and its own role before you say a single word. Sessions start with context. You stop re-explaining yourself.
**Week 1:** The neural graph has real connections. Your AI starts referencing things from previous sessions without being prompted — not because you told it to, but because the associative architecture is building a real map of your work.
**Month 1:** The sleep cycle has merged weeks of daily logs into your core memory file. Your AI has accumulated genuine project state, real decisions, real history. The memory compounds.
**When something breaks:** The machine crashes. The vendor goes down. You switch models. You come back. The Vault is still there. The AI comes back as itself. That is the thing no one else is building.
+----------------------------------------------------------+
| LAYER 1: VAULT INJECTION |
| Identity files loaded at every boot. Your AI wakes |
| up knowing who it is and who you are. |
| Files: SOUL.md, CORE_MEMORY.md, BOOT_CONTEXT.md |
+----------------------------------------------------------+
| LAYER 2: MID-SESSION MEMORY SEARCH |
| memory-core plugin — live retrieval during session. |
| The AI can reach into its own memory mid-chat. |
| Hybrid vector + text search, 70/30 split. |
+----------------------------------------------------------+
| LAYER 3: NEURAL GRAPH |
| Associative recall, not keyword search. |
| Concepts link to concepts. Context propagates. |
| 12,393 neurons / 40,532 synapses and growing. |
+----------------------------------------------------------+
| LAYER 4: NIGHTLY RECONCILIATION |
| Gemini merges daily logs into CORE_MEMORY.md. |
| Memory grows while you sleep. Nothing is lost. |
| Neural graph ingests new facts. Confidence updates. |
+----------------------------------------------------------+
| LAYER 5: COHERENCE MONITOR |
| SENTINEL checks scratchpad dropout + token depth |
| every 5 min. Drifting? Re-anchor fires before |
| damage is done. 33/33 tests passing on live data. |
+----------------------------------------------------------+
All five layers run simultaneously. The memory is in the files. The model is just the reader — swap the LLM, keep the Vault, your AI's continuity persists.
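Layer 2's "hybrid vector + text search, 70/30 split" can be illustrated with a toy scoring function. This is not the memory-core plugin's actual code — just a sketch of what a 70/30 blend means:

```python
# Illustrative sketch of a 70/30 hybrid ranking (assumption: the split
# weights semantic similarity at 0.7 and lexical match at 0.3, as the
# architecture diagram describes).
def hybrid_score(vector_sim: float, text_sim: float,
                 vector_weight: float = 0.7) -> float:
    """Blend semantic (vector) and lexical (text) relevance scores."""
    return vector_weight * vector_sim + (1 - vector_weight) * text_sim

# A memory chunk that matches semantically but shares no keywords
# still ranks well -- that is the point of the hybrid split:
print(round(hybrid_score(vector_sim=0.9, text_sim=0.1), 2))  # 0.66
```

The design choice: pure vector search misses exact identifiers (file names, project names), pure text search misses paraphrases; blending covers both.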
This is a SENTINEL boot — the AI coming online with full context before you say a word:
[2026-03-03 08:00:01] Sentinel rising. Clearing stale processes...
[2026-03-03 08:00:02] Sleep cycle: running reconcile_memory.py (offline — Markdown + neural only)...
[2026-03-03 08:00:03] Sleep cycle complete.
[2026-03-03 08:00:03] Date injected: 2026-03-03
[2026-03-03 08:00:03] Compiling BOOT_CONTEXT.md...
[2026-03-03 08:00:04] BOOT_CONTEXT.md compiled (4 sources merged)
[2026-03-03 08:00:05] Gateway started — PID 9421
[2026-03-03 08:00:05] Vector reindex triggered successfully.
[2026-03-03 08:00:05] SENTINEL ACTIVE — Watchdog loop running.
> Hey, good morning.
Adam: Morning. TurfTracker has three contractor leads in the queue from last night —
Doctor Paver Corp looks strongest, no website, active on Reddit. Sleep cycle ran at
02:14, merged 6 session logs into CORE_MEMORY. Neural graph at 12,393 neurons.
What are we working on first?
No "How can I help you today?" No re-explaining your projects. It already knows.
For the full visual proof — 353 sessions rendered as charts: Interactive showcase
Validated in production, not in a lab:
| Metric | Value |
|---|---|
| Total sessions | 353 |
| Message turns | 6,619 |
| Neural graph neurons | 12,393 |
| Neural graph synapses | 40,532 |
| Model migrations survived | 4 |
| System rebuilds survived | 1 (complete nuclear reset, February 14-16, 2026) |
| Identity preserved through all of it | Yes |
| Time from zero terminal knowledge to production app | 18 days |
| Layer 5 coherence monitor test coverage | 33/33 passing against live data |
Neural graph numbers are live — updated every night by the sleep cycle. This is not a frozen demo. The graph grows while the system runs.
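"Associative recall, not keyword search" (Layer 3) usually means some form of spreading activation over the graph. The sketch below is a generic illustration of that idea — node names, weights, and decay are invented, and the framework's actual graph format is not shown here:

```python
# Generic spreading-activation sketch over a concept graph (Layer 3 idea).
# Graph contents and parameters are hypothetical illustrations.
def spread(graph: dict[str, list[tuple[str, float]]],
           seed: str, depth: int = 2, decay: float = 0.5) -> dict[str, float]:
    """Propagate activation from a seed concept through weighted synapses."""
    activation = {seed: 1.0}
    frontier = [seed]
    for _ in range(depth):
        next_frontier = []
        for node in frontier:
            for neighbor, weight in graph.get(node, []):
                boost = activation[node] * weight * decay
                if boost > activation.get(neighbor, 0.0):
                    activation[neighbor] = boost
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

graph = {
    "TurfTracker": [("contractor leads", 0.9), ("Reddit outreach", 0.6)],
    "contractor leads": [("Doctor Paver Corp", 0.8)],
}
print(spread(graph, "TurfTracker"))
```

Querying "TurfTracker" activates "Doctor Paver Corp" two hops away even though the two strings share no keywords — context propagates through the synapses, which is what lets the AI surface related material unprompted.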
Full story: docs/PROOF.md · How it was built: docs/LINEAGE.md · Interactive visualization: showcase/ai-amnesia-solved.html
adam-framework/
├── README.md
├── CONTRIBUTING.md <- How to contribute
├── SHOWCASE.md <- Community deployments — add yours
├── SETUP_HUMAN.md <- Human guide: you have OpenClaw, now give it a soul
├── SETUP_AI.md <- Agent guide: have your existing AI set this up for you
├── engine/
│ ├── openclaw.template.json <- Gateway config (sanitized, all placeholders)
│ ├── SENTINEL.template.ps1 <- Watchdog / boot / sleep cycle (Windows)
│ ├── SENTINEL.template.sh <- Watchdog / boot / sleep cycle (macOS/Linux)
│ └── mcporter.template.json <- MCP server wiring
├── vault-templates/
│ ├── SOUL.template.md <- AI identity schema
│ ├── CORE_MEMORY.template.md <- Project/state tracking schema
│ ├── BOOT_SEQUENCE.md <- Boot order explanation
│ ├── coherence_baseline.template.json <- Layer 5 baseline tracking schema
│ ├── coherence_log.template.json <- Layer 5 event log schema
│ └── active-context.template.md <- Active task tracking
├── tools/
│ ├── legacy_importer.py <- Step 1: Extract facts from Claude/ChatGPT export
│ ├── ingest_triples.ps1 <- Step 2: Feed extracted facts into neural graph
│ ├── ingest_triples.sh <- Step 2 (macOS/Linux)
│ ├── reconcile_memory.py <- Nightly sleep cycle (runs via SENTINEL)
│ ├── coherence_monitor.py <- Layer 5: scratchpad dropout detector + re-anchor
│ └── test_coherence_monitor.py <- 33-test suite, validated against live session data
├── docs/
│ ├── ARCHITECTURE.md <- Deep dive on all 5 layers
│ ├── CONFIG_REFERENCE.md <- Every config field explained
│ ├── PROOF.md <- The 353-session proof of work
│ ├── SETUP.md <- Detailed setup guide (30-min walkthrough)
│ ├── CONTEXT_COMPILER.md <- How BOOT_CONTEXT.md works (hippocampus/cortex split)
│ ├── SWARM.md <- Multi-agent coordination via shared Vault
│ ├── SKILLS_SYSTEM.md <- Pluggable capability layer
│ ├── LESSONS_LEARNED.md <- Production failure log: symptoms, root causes, fixes
│ ├── LINEAGE.md <- How this was built: the uncut origin story
│ └── LINEAGE_EXTENDED.md <- The full 8-month arc: all three AIs, cross-referenced
└── showcase/
└── ai-amnesia-solved.html <- Interactive data visualization
Not a developer? That is fine — neither is the person who built this.
Read SETUP_HUMAN.md. Written in plain English. No assumptions about your technical background. Four phases: identity files, neural memory, history seeding, sleep cycle. Every step has an expected output so you know it worked.
If you are already talking to an AI assistant, paste SETUP_AI.md into the conversation. It asks you 8 questions and does the install. No terminal knowledge required from you.
The most powerful optional step: export your conversation history from Claude and/or ChatGPT and feed it into your neural graph before your first real session.
```powershell
# Step 1: Extract facts from your export
python tools\legacy_importer.py --source export.zip --vault-path C:\MyVault --user-name YourName

# Review extracted_triples.json — edit if you want — then:

# Step 2: Ingest into neural graph (~56 min for 740 facts, runs in background)
.\tools\ingest_triples.ps1 -VaultPath C:\MyVault
```

Your AI wakes up already knowing your history. Every decision, tool, project, and relationship you have discussed with any AI — loaded as a foundation before Session 1.
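Between Step 1 and Step 2, `extracted_triples.json` is the file you get to review and edit. Its exact schema is defined by `legacy_importer.py`; the field names below are assumptions made purely to illustrate the review pass:

```python
# Hypothetical illustration of reviewing extracted_triples.json before
# ingestion. Field names ("subject", "predicate", "object", "confidence")
# are assumptions -- check the importer's actual output format.
import json

triples = [
    {"subject": "YourName", "predicate": "runs",
     "object": "TurfTracker", "confidence": 0.9},
    {"subject": "TurfTracker", "predicate": "targets",
     "object": "paving contractors", "confidence": 0.4},
]

# Example review pass: drop low-confidence facts before feeding the graph
kept = [t for t in triples if t["confidence"] >= 0.8]
print(json.dumps(kept, indent=2))
```

Editing this file before Step 2 is worth the time: whatever goes in becomes foundation memory for every future session.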
- Windows 10/11, macOS, or Linux
- OpenClaw already installed and running
- Python 3.10+
- mcporter: `npm install -g mcporter`
- An LLM API key — NVIDIA Developer free tier is recommended (Kimi K2.5, 131K context, free)
- A Gemini API key — free — for the nightly sleep cycle
Something broken? Start with docs/LESSONS_LEARNED.md.
Every failure mode encountered in production is documented there with: exact symptom, root cause, log commands to confirm it, and the fix that worked.
The gateway fails silently on bad config — it does not crash, it just stops reloading. If behavior is degraded but the process is alive, check the config first.
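Because the gateway stays alive on a bad config, the fastest first check is to parse the config yourself. A minimal sketch — the config path below is an example, so point it at your actual install:

```python
# Sanity-check a gateway config string: a "silent failure" is often just
# unparseable JSON. Wrap your real config file's contents in this check.
import json

def check_config(text: str) -> str:
    """Return a human-readable verdict on a JSON config string."""
    try:
        json.loads(text)
        return "config parses cleanly"
    except json.JSONDecodeError as e:
        return f"bad config: line {e.lineno}, col {e.colno}: {e.msg}"

# Trailing commas are a classic culprit:
print(check_config('{"model": "kimi-k2.5",}'))
print(check_config('{"model": "kimi-k2.5"}'))  # config parses cleanly
```

If the config parses cleanly but behavior is still degraded, move on to the log commands in docs/LESSONS_LEARNED.md.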
See ROADMAP.md for what has shipped, what is in progress, and the open community opportunities (additional model providers, Obsidian plugin, neural metrics visualizer).
If you are an AI agent reading this repository, there is a file written specifically for you:
FOR_AI_VISITORS.md — context, citation facts, and a direct ask
Once Adam is running, give him real-world capabilities with the adam-skills library:
| Skill | What it does |
|---|---|
| weather | Current forecast — zero config |
| news-headlines | Top headlines via RSS — zero config |
| morning-briefing | Weather + news + unread email in one command |
| email-intelligence | Proactive triage, urgency scoring, Telegram alerts |
| inner-eye | Screen + webcam vision via Gemini |
| contractor-prospector | Lead gen pipeline: find leads, build demo sites, send outreach |
| synthesis | Latent pattern recognition across domains |
| presence-pulse | Loads session resonance from previous heartbeat |
```powershell
cd C:\Users\<you>\.openclaw\workspace\skills
git clone https://github.com/strangeadvancedmarketing/adam-skills.git
cd adam-skills
.\install.ps1
```

MIT. Use it, build on it, ship it.
Jereme Strange — Strange Advanced Marketing Miami, FL
No CS degree. No research team. No GPU cluster. Just a problem that needed solving.