A five-minute, dependency-light demo of a complete JEP accountability stack:
```
Human → Agent → Tool → JEP Event → HJS Receipt → JAC Lineage → Archive → Replay → Verification
```
The demo is intentionally not production-grade security. It uses a mock agent, a mock MCP-style tool, deterministic timestamps, and SHA-256 over canonical JSON, so the chain is easy to inspect and replay.
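The hashing style is easy to reproduce with the standard library. A minimal sketch, assuming "canonical JSON" means sorted keys and compact separators (the demo's exact canonicalization may differ):

```python
import hashlib
import json

def canonical_hash(obj: dict) -> str:
    # Canonical JSON: sorted keys, compact separators, UTF-8 bytes.
    payload = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Key order does not change the hash, which is what makes replay deterministic.
a = canonical_hash({"type": "Judgment", "event_id": "evt-001-judgment"})
b = canonical_hash({"event_id": "evt-001-judgment", "type": "Judgment"})
print(a == b)  # True
```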
Run the full demo with one command:
```
python demo.py
```

That command generates `archive.jsonl` and immediately replays it for verification.
To replay an existing archive:
```
./jep replay archive.jsonl
```

If the package is installed in editable mode, the command is also available as:

```
jep replay archive.jsonl
```

- Human submits a request about invoice INV-042.
- Mock Agent judges that structured invoice evidence is needed before answering.
- Mock MCP Tool returns deterministic invoice validation evidence without calling any external system.
- JEP Events capture the accountable decision path: Judgment → Delegation → Termination → Verification.
- HJS-style Receipt binds each event to a deterministic `subject_hash` and mock signature.
- JAC-style Lineage links each receipt-bearing event to the previous archive line hash.
- Archive writes one JSON envelope per line to `archive.jsonl`.
- Replay recomputes hashes from the archive.
- Verification checks receipt integrity, lineage continuity, line hashes, and required event coverage.
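The archive and lineage steps above can be sketched as a simple hash chain. The helper names below are hypothetical, and the chain rule (each line hash commits to the previous hash plus the event payload) is an assumption about how the demo links lines:

```python
import hashlib
import json

GENESIS = "GENESIS"

def sha256_canonical(obj) -> str:
    data = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

def build_archive(events):
    # Each line hash commits to the previous line hash plus the event,
    # so any reorder, deletion, or insertion breaks the chain on replay.
    lines, prev = [], GENESIS
    for i, event in enumerate(events, start=1):
        line_hash = sha256_canonical({"previous": prev, "jep_event": event})
        lines.append({
            "sequence": i,
            "jep_event": event,
            "jac_lineage": {"previous_line_hash": prev, "line_hash": line_hash},
        })
        prev = line_hash
    return lines

archive = build_archive([{"type": "Judgment"}, {"type": "Delegation"}])
print(archive[1]["jac_lineage"]["previous_line_hash"]
      == archive[0]["jac_lineage"]["line_hash"])  # True
```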
`python demo.py` creates:

| Artifact | Purpose |
|---|---|
| `archive.jsonl` | Append-friendly JSONL archive containing JEP events, HJS-style receipts, and JAC-style lineage links. |
| `jep_event` | The accountable event payload: type, actor, timestamp, judgment/delegation/termination/verification details. |
| `hjs_receipt` | A receipt with issuer, subject event ID, canonical JSON hash, and mock signature. |
| `jac_lineage` | A hash-chain link containing the previous line hash and current line hash. |
| Replay report | Human-readable verification output from `python demo.py` or `jep replay archive.jsonl`. |
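Because the archive is one JSON envelope per line, it can be inspected with nothing but the standard library. A sketch (`load_archive` is a hypothetical helper, not part of the demo's API):

```python
import json

def load_archive(path: str = "archive.jsonl"):
    # One self-contained JSON envelope per line; blank lines are skipped.
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Example inspection loop:
# for line in load_archive():
#     print(line["sequence"], line["jep_event"]["type"])
```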
A single archive line has this shape:

```json
{
  "sequence": 1,
  "jep_event": { "type": "Judgment", "event_id": "evt-001-judgment" },
  "hjs_receipt": { "receipt_type": "HJS-style-receipt", "subject_hash": "..." },
  "jac_lineage": { "lineage_type": "JAC-style-lineage-link", "previous_line_hash": "GENESIS", "line_hash": "..." }
}
```

A successful run prints a replay report like:

```
Replay verification for archive.jsonl
Events replayed: 4
- PASS evt-001-judgment: Judgment receipt and lineage verified
- PASS evt-002-delegation: Delegation receipt and lineage verified
- PASS evt-003-termination: Termination receipt and lineage verified
- PASS evt-004-verification: Verification receipt and lineage verified
- PASS archive: required event types present
Verdict: PASS
```
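The per-event checks boil down to recomputing both hashes on every line. A condensed sketch, assuming the envelope field names shown above and a receipt hash over the canonical event JSON (`jep_core.py` is the authoritative implementation):

```python
import hashlib
import json

def sha256_canonical(obj) -> str:
    data = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

def verify(lines) -> bool:
    # Recompute subject hashes and walk the lineage chain from GENESIS.
    prev = "GENESIS"
    for line in lines:
        receipt, lineage = line["hjs_receipt"], line["jac_lineage"]
        if receipt["subject_hash"] != sha256_canonical(line["jep_event"]):
            return False  # event content mutated after the receipt was issued
        if lineage["previous_line_hash"] != prev:
            return False  # reorder, deletion, or insertion detected
        prev = lineage["line_hash"]
    return True
```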
Plain logs usually answer, "what text did the system print?" This demo shows a stronger accountability pattern:
- Typed judgment events explain why the agent made a decision.
- Delegation events identify the tool boundary and the exact mock tool input/output.
- Receipts bind event content to hashes, so replay can detect event mutation.
- Lineage chains each archive line to the previous one, so replay can detect reorder, deletion, or insertion.
- Replay verification recomputes the archive rather than trusting runtime output.
- Termination and verification events close the loop with final outcome and audit verdict.
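The reorder-detection property is easy to see concretely. A toy illustration (not the demo's code) where each line hash commits to its predecessor:

```python
import hashlib

def link(prev_hash: str, payload: str) -> str:
    # Each line hash commits to the previous hash plus the line payload.
    return hashlib.sha256((prev_hash + payload).encode("utf-8")).hexdigest()

h1 = link("GENESIS", "judgment")
h2 = link(h1, "delegation")

# Swapping the two lines changes every downstream hash, so replay fails fast.
h1_swapped = link("GENESIS", "delegation")
print(link(h1_swapped, "judgment") == h2)  # False
```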
This is still a toy implementation, but it demonstrates the core JEP stack as a verifiable chain instead of an unstructured stream of logs.
```
.
├── demo.py          # one-command full demo runner
├── jep              # executable wrapper for replay CLI
├── jep_core.py      # mock agent, mock tool, archive, replay, verification logic
├── pyproject.toml   # optional console-script entry point
└── README.md
```