Open-source tools for teams that ship work through AI coding agents.
Agents are productive, but they break trust when they change the wrong files, skip prerequisites, or trigger side effects without proof. These tools add deterministic guardrails — no dashboards, no hosted services, just local-first CLI tools that run in your existing workflow.
| Tool | One-liner | Repo |
|---|---|---|
| Accord | Define what "done" means and prove it deterministically | accord |
| ScopeFence | Did this diff stay inside the fence? | scopefence |
| ReplayKit | Turn failed agent traces into replayable regression cases | replaykit |
| ApprovalPack | Turn agent run artifacts into decision-ready approval packets | approvalpack |
| ToolPact | Stop unsafe tool calls before they execute | toolpact |
- Local-first. Everything runs on your machine or in CI. No hosted services.
- Deterministic-first. Checks are pass/fail, not probabilistic. No LLM grading in the critical path.
- Small surface area. Each tool does one thing. A YAML config, a CLI command, a clear result.
- Pre-execution enforcement. The valuable moment is before the side effect, not after.
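The shared pattern behind these principles can be sketched in a few lines. This is a hypothetical illustration, not any tool's real API: the policy dict stands in for a parsed YAML config, and `check_tool_call` is an invented name. The point is the shape — a deterministic pass/fail check that runs *before* the side effect.

```python
# Hypothetical sketch of the shared pattern: a deterministic pre-execution
# guard. The policy (an inline dict standing in for a parsed YAML config)
# is checked before any side effect, and the result is strictly pass/fail.

POLICY = {
    "allowed_tools": ["read_file", "run_tests"],
    "denied_paths": ["secrets/"],
}

def check_tool_call(tool: str, path: str, policy: dict = POLICY) -> bool:
    """Return True only if the call passes every rule; no scoring, no LLM."""
    if tool not in policy["allowed_tools"]:
        return False
    return not any(path.startswith(p) for p in policy["denied_paths"])

# An agent runner would call this *before* executing the tool:
assert check_tool_call("read_file", "src/app.py") is True
assert check_tool_call("write_file", "src/app.py") is False    # tool not allowed
assert check_tool_call("read_file", "secrets/key.pem") is False  # path denied
```

Because the check is a pure function over the call and the config, the same inputs always produce the same verdict, locally and in CI.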
Every tool follows the same pattern:

```shell
pip install -e ".[dev]"
<tool> --help
```

Pick the one that matches your pain:
- Agent changed files outside its scope? → ScopeFence
- Agent called a tool it shouldn't have? → ToolPact
- Agent says it's done but isn't? → Accord
- Need to prove what happened for review? → ApprovalPack
- Same failure keeps recurring? → ReplayKit
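To make the first pain point concrete, here is a minimal sketch of a scope check in the spirit of ScopeFence. The function name, arguments, and glob-based fence format are assumptions for illustration, not the tool's actual interface:

```python
# Hypothetical scope check: given the files a diff touched and the allowed
# glob patterns (the "fence"), report pass/fail plus the offending paths.
from fnmatch import fnmatch

def check_scope(changed: list[str], fence: list[str]) -> tuple[bool, list[str]]:
    """Deterministic pass/fail: every changed file must match some fence glob."""
    out_of_scope = [f for f in changed
                    if not any(fnmatch(f, pat) for pat in fence)]
    return (not out_of_scope, out_of_scope)

ok, violations = check_scope(
    changed=["src/api/routes.py", "migrations/0042_init.sql"],
    fence=["src/api/*", "tests/*"],
)
# ok is False; violations == ["migrations/0042_init.sql"]
```

The fail case carries the exact paths that escaped the fence, so the result is actionable in CI without any interpretation step.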
All tools are Python 3.12+, MIT licensed, and designed to be understood in one sitting.