Universal Handoff Protocol (UHP)

A zero-dependency context transfer system for seamlessly switching between AI coding assistants without losing project context.

License: MIT · PRs Welcome


The Problem

When you switch between AI coding assistants — Claude, GPT, Codex, Gemini, Cursor, Copilot — each new AI starts with zero memory of your project. You lose:

  • What was just built
  • What's broken
  • What to do next
  • WHY things were done a certain way
  • Design system consistency

You end up re-explaining everything. Context is lost. Momentum dies.

The Solution

UHP maintains structured context files inside your project that any AI can read and write. When you switch:

  1. Tell the old AI: "rate limits dying, prepare for handoff".
  2. Tell the new AI: "Read the .handoff/ directory..." (see Quick Start).
  3. The new AI picks up exactly where the old one left off.

No dependencies. No installation. No configuration. Just files.


Architecture

```
your-project/
└── .handoff/
    ├── README.md           ← The 13-rule protocol governing AI behavior
    ├── PROJECT_BIBLE.md    ← Project encyclopedia (auto-generated)
    ├── ACTIVE_SESSION.md   ← Live work state (auto-updated)
    ├── SESSION_LOG.md      ← Historical audit trail
    └── PROMPTS.md          ← User quick-reference card
```
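UHP is normally bootstrapped by prompting an AI (see Quick Start below), but because the system is just files, the skeleton can also be created by hand. A minimal sketch:

```shell
# Create the .handoff/ skeleton in the project root.
# The AI fills in the contents on first setup.
mkdir -p .handoff
touch .handoff/README.md .handoff/PROJECT_BIBLE.md \
      .handoff/ACTIVE_SESSION.md .handoff/SESSION_LOG.md .handoff/PROMPTS.md
```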

Defense-in-Depth System (8 Layers)

```
Layer 1: Arrival prompt         ← Critical rules in primary instruction
Layer 2: README.md              ← Full 13-rule protocol
Layer 3: Embedded file headers  ← Instructions inside each file
Layer 4: CRITICAL summary       ← Recency bias exploit at end of README
Layer 5: Mid-session correction ← User can snap AI back on track
Layer 6: Self-healing           ← Missing files auto-regenerated
Layer 7: Freshness check        ← Stale context detected and flagged
Layer 8: Validation proof       ← AI proves it read everything
```

Quick Start

Setting Up UHP on Any Project

Tell any AI assistant:

```
Set up the Universal Handoff Protocol in this project. Fetch the exact
template files from https://github.com/shreybansal365/universal-handoff-protocol/tree/main/template
— download README.md and PROMPTS.md exactly as they are, do not modify
or reinterpret them. Place them in a .handoff/ directory in the project
root. Then scan this project's codebase and generate a fresh
PROJECT_BIBLE.md, ACTIVE_SESSION.md, and SESSION_LOG.md. Add !.handoff/
to .gitignore if needed. Run the setup verification checklist. IMPORTANT:
Throughout this session, silently update .handoff/ACTIVE_SESSION.md after
every significant change.
```
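The `!.handoff/` entry is a negation pattern: it re-includes the directory when a broader ignore rule would otherwise hide it, so the context files get committed and survive a clone. A hypothetical `.gitignore` illustrating the interaction:

```gitignore
# A broad rule like this would silently exclude the handoff directory...
.*
# ...so re-include it explicitly to keep context in version control.
!.handoff/
```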

Resuming on an Existing Project

Tell any AI assistant:

```
Read the .handoff/ directory in my project root. Start with README.md,
then PROJECT_BIBLE.md, ACTIVE_SESSION.md, and SESSION_LOG.md. After
reading, reply with a "UHP loaded" validation proof showing: project
name, current state, next task, session count, and last AI model.
Then continue from where the last AI left off. IMPORTANT: Throughout
this session, silently update .handoff/ACTIVE_SESSION.md after every
significant change you make.
```
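A validation proof covering those five items might look like the following (the exact layout is defined by the template README.md; the project details here are hypothetical):

```
UHP loaded ✓
Project:       acme-dashboard
Current state: Auth flow complete; settings page in progress
Next task:     Wire the settings form to the API
Sessions:      7
Last AI:       Claude
```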

User Keywords

| Say this | What happens |
| --- | --- |
| `rate limits dying, prepare for handoff` | Full context save: ready to switch AIs |
| `checkpoint` or `save` | Quick mid-session save: you're staying |
| `follow the protocol` | Snap a misbehaving AI back on track |

Key Features

| Feature | Description |
| --- | --- |
| Continuous Auto-Sync | AI silently updates handoff files as it works |
| Checkpoint Saves | User can force a save at any time with "checkpoint" |
| Rolling Log Compression | Keeps the last 5 sessions; compresses older ones into a summary |
| Integrity Checking | New AI validates project context against the actual codebase |
| Freshness Detection | Warns if handoff data is more than 48 hours stale |
| Decision Documentation | Captures WHY things were built a certain way |
| Self-Healing | Missing or corrupted files are auto-regenerated |
| Git Protection | `.handoff/` survives `git clone` via `.gitignore` rules |
| Security | Protocol forbids plaintext secrets in handoff files |
| Offline Fallback | Works with web-only AIs via copy-paste |
| Validation Proof | AI must prove it read and understood all files |
| Mid-Session Correction | User can re-engage a misbehaving AI instantly |
| Setup Verification | Bootstrapping runs a 6-point checklist to confirm setup |
| Embedded Instructions | Each file has point-of-action rules inside it |
| Cross-File Reminders | Updating one file reminds the AI about related files |
| Language Agnostic | Works with JS, Python, Rust, Go, or any language |
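In the protocol, the freshness check is performed by the AI reading timestamps recorded inside the handoff files. The same 48-hour rule can be sketched mechanically; this is a hypothetical helper for illustration, not part of the zero-dependency protocol, and it assumes the file's modification time approximates the last handoff update:

```python
import time
from pathlib import Path

STALE_AFTER_SECONDS = 48 * 3600  # the protocol's 48-hour freshness window

def is_stale(handoff_dir: str = ".handoff") -> bool:
    """Return True if the live session file hasn't been touched in 48 hours."""
    session_file = Path(handoff_dir) / "ACTIVE_SESSION.md"
    if not session_file.exists():
        return True  # missing context counts as stale (self-healing kicks in)
    age_seconds = time.time() - session_file.stat().st_mtime
    return age_seconds > STALE_AFTER_SECONDS
```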

Design Principles

  1. Zero Dependencies — No scripts, no tools, no extensions. Pure markdown files.
  2. Self-Contained — Everything lives in .handoff/. Nothing installed on the host.
  3. Self-Healing — The system repairs itself when files are lost or corrupted.
  4. Self-Bootstrapping — One prompt creates the entire system from scratch.
  5. Defense in Depth — 8 independent layers ensure protocol compliance.
  6. Graceful Degradation — Even if a weak AI follows 50% of rules, the system works. The next smart AI cleans up.

Template Files

The template/ directory contains the exact protocol files. When setting up UHP on a new project, AIs should download these verbatim — not recreate them from a description.

| File | Lines | Purpose |
| --- | --- | --- |
| README.md | ~340 | Complete 13-rule protocol with embedded templates |
| PROMPTS.md | ~70 | User quick-reference card with all keywords |
| PROJECT_BIBLE.md | template | Auto-generated by the AI on first setup |
| ACTIVE_SESSION.md | template | Auto-populated by the AI during work |
| SESSION_LOG.md | template | Auto-populated on each handoff |

FAQ

Q: Does this work with ALL AI coding assistants?
A: Yes. If the AI can read local files, it works natively. If it can't (e.g., web-only ChatGPT), there's an offline fallback where you paste file contents.

Q: What happens if I forget to say "prepare for handoff"?
A: The continuous auto-sync rule ensures files are updated throughout the session. Even without an explicit handoff, the files are ~95% current.

Q: What if a weak AI doesn't follow the rules?
A: The embedded instructions inside each file catch most weak models. For stubborn cases, use the correction keyword. And the next strong model will self-heal any gaps.

Q: Can I use this for non-coding projects?
A: Yes. The protocol is generic enough for any project where AI assistants need context — writing, research, design, etc.

Q: What if I lose the prompt?
A: Every project with UHP has PROMPTS.md inside it. Open any existing project and the prompt is right there.


License

MIT — Use it anywhere, share it with anyone, modify it for your needs.
