Systematic codebase investigation and fix execution for Claude Code. Designed for production hardening, security audits, technical debt cleanup, and multi-issue fix campaigns where 5+ issues span multiple files.
It complements autopilot, serving as the dedicated multi-issue fix methodology that autopilot relies on internally.
```sh
# Project-level (recommended)
cp skill.md /path/to/your/project/.claude/skills/team-fix/skill.md

# Global (available in all projects)
cp skill.md ~/.claude/skills/team-fix/skill.md
```

Requires the Claude Code CLI or desktop app.
```sh
# After a security audit, code review, or production incident
/team-fix harden this codebase before launch

# With a specific list of items
/team-fix implement these 12 items from the audit

# For cleanup campaigns
/team-fix sweep the codebase for reliability issues
```
The skill activates automatically on triggers like "harden", "production ready", "fix all issues", "audit and fix", or whenever the user provides 5+ items to implement.
| Use case | Fit |
|---|---|
| Production hardening / pre-launch security audit | Yes |
| Technical debt cleanup campaigns | Yes |
| Implementing code review feedback with many items | Yes |
| Any task requiring investigation of 5+ issues across multiple files | Yes |
| Single-file fix or fewer than 5 items | No — use direct edits instead |
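As an illustrative sketch only (the function name and shape are hypothetical, not part of the skill), the fit table above reduces to a simple rule:

```python
# Hypothetical helper encoding the fit table above -- team-fix itself
# activates on trigger phrases or an explicit list of 5+ items.
def team_fix_is_a_fit(num_items: int, num_files: int) -> bool:
    """True when the multi-issue coordination overhead pays off."""
    return num_items >= 5 and num_files >= 2
```

A single-file fix or a three-item list falls through to direct edits.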
```
Phase 1: Investigate
  → Parallel Explore agents (one per domain: security, reliability,
    performance, observability, backend logic, frontend)
  → Each returns a compact findings table (max 15 items)

Phase 2: Consolidate
  → Deduplicate into a single prioritized table
  → Group into 4-5 item batches by theme and file ownership

Phase 3: Context Gate
  → Read actual source for every item (exact line numbers, current code,
    fix patterns from already-fixed siblings)
  → Blocks Phase 4 until ALL items have precise context

Phase 4: Skeptical Verification
  → 1 agent per item (no batching)
  → Independent CONFIRM / SKIP / MODIFY verdict
  → Typical rate: ~30-50% of items adjusted or dropped
  → Blocks Phase 5 until every item has a verdict

Phase 5: Sequential Team Execution
  → Teams execute one at a time (not parallel — isolation)
  → File ownership enforced (no two teammates touch the same file)
  → Max 2 fix rounds per team before escalation
  → Commit after each team, clean git before the next

Phase 6: Deep Post-Implementation Audit
  → Independent agents audit committed code line by line
  → Verify old pattern is gone, error paths handled, no collateral damage
  → Required for critical/high severity items; optional for trivial ones
```
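The two hard gates in the middle of the pipeline can be sketched as simple predicates. This is a minimal illustration; the `Item` fields and function names are assumptions, not the skill's internal API:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Item:
    id: str
    severity: str                   # e.g. "critical", "high", "trivial"
    context: Optional[dict] = None  # exact lines, current code, fix pattern
    verdict: Optional[str] = None   # "CONFIRM" | "SKIP" | "MODIFY"

def context_gate_open(items: list[Item]) -> bool:
    """Phase 3 gate: Phase 4 may not start until every item has context."""
    return all(i.context is not None for i in items)

def verification_gate_open(items: list[Item]) -> bool:
    """Phase 4 gate: Phase 5 may not start until every item has a verdict."""
    return all(i.verdict is not None for i in items)
```

Both gates are all-or-nothing: a single item without context (or without a verdict) holds back the entire next phase.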
Investigation agents return file names and approximate line numbers, but that's not enough. Teammates given imprecise descriptions waste turns searching instead of fixing. Phase 3 forces exact line numbers, current code, and fix patterns from already-fixed siblings BEFORE any team is created.
Every item gets an independent agent review before implementation. The agent's only job is to disprove the issue. Even items that "look obviously real" turn out to be false positives 30-50% of the time: they are already handled by upstream guards, they contradict intentional design decisions, or their line numbers are stale after recent code changes.
No exceptions. Batching defeats the purpose — each item gets its own dedicated verification agent.
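One way to picture the verdict step (illustrative only; the verdict strings come from the phase description above, everything else is assumed):

```python
def apply_verdicts(items: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split verified items: CONFIRM and MODIFY go forward to execution,
    SKIP (a disproved or already-handled issue) is dropped."""
    kept = [i for i in items if i["verdict"] in ("CONFIRM", "MODIFY")]
    dropped = [i for i in items if i["verdict"] == "SKIP"]
    return kept, dropped
```

With the typical 30-50% adjustment rate, a 12-item batch routinely shrinks to 6-8 items before any team is created.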
Teams execute one at a time. This isolates failures (a broken team doesn't derail the others), lets the next team build on the previous team's work, and keeps git history clean with one commit per team. File ownership within a team is exclusive to prevent merge conflicts.
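Exclusive file ownership is cheap to check up front. A sketch, assuming a simple teammate-to-files mapping (the function and data shapes are hypothetical):

```python
def ownership_conflicts(assignments: dict[str, set[str]]) -> set[str]:
    """Given {teammate: files they own}, return files claimed by more
    than one teammate. team-fix requires this set to be empty."""
    owner: dict[str, str] = {}
    conflicts: set[str] = set()
    for teammate, files in assignments.items():
        for path in files:
            if path in owner:
                conflicts.add(path)
            else:
                owner[path] = teammate
    return conflicts
```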
Tests passing is necessary but not sufficient. Code can pass all tests and still be wrong. Phase 6 sends independent agents to read the committed code line by line, verifying the old pattern is fully eliminated, error paths are handled, and no collateral damage was introduced.
Critical/high severity items always get Phase 6. Trivial one-line changes with good test coverage can skip it.
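The audit policy reduces to a short predicate (hypothetical names; the severity labels are the ones used in the phase description):

```python
def needs_deep_audit(severity: str,
                     trivial_one_liner_with_tests: bool = False) -> bool:
    """Critical/high items are always audited; only a trivial one-line
    change with good test coverage may skip Phase 6."""
    if severity in ("critical", "high"):
        return True
    return not trivial_one_liner_with_tests
```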
Per round, team-fix typically coordinates:
- 3-6 investigation agents (Phase 1)
- Context gate agents (Phase 3, one per team group)
- 1 verification agent per item (Phase 4)
- 2-3 builder agents per team (Phase 5)
- Deep audit agents (Phase 6)
All agents use Opus. The orchestrator delegates all code — it makes decisions, not edits.
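As a back-of-the-envelope tally (the function is illustrative; the per-phase counts mirror the ranges listed above, and one context-gate agent plus one deep-audit agent per team is an assumption for concreteness):

```python
def agents_per_round(num_items: int, num_teams: int,
                     investigators: int = 4,
                     builders_per_team: int = 2) -> int:
    """Rough agent count for one team-fix round."""
    return (investigators                    # Phase 1 investigation
            + num_teams                      # Phase 3 context gates
            + num_items                      # Phase 4 verifiers (1 per item)
            + builders_per_team * num_teams  # Phase 5 builders
            + num_teams)                     # Phase 6 auditors

# e.g. 12 items in 3 teams: 4 + 3 + 12 + 6 + 3 = 28 agents
```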
Parallel teams sound faster but create problems at scale:
- Merge conflicts when teams touch related files
- Harder to diagnose which team introduced a regression
- Git history with concurrent commits is hard to bisect
- One broken team blocks the others in review
Sequential execution with per-team commits is slower wall-clock but dramatically faster to debug, review, and roll back. In practice, a well-ordered sequential run finishes faster than a tangled parallel one.
- Not for single-file fixes or lists of fewer than 5 items; the coordination overhead isn't worth it
- Investigation agents work best on codebases with existing structure — greenfield code is harder to audit
- Phase 6 is required for anything security-critical — skipping it defeats the point
- Verify before implementing — 30-50% of items are false positives or need adjustment; independent agents catch what self-review misses
- Context before action — fix patterns from already-fixed siblings beat generic descriptions every time
- Sequential over parallel — isolation and commit discipline over throughput
- Audit after implementing — tests passing doesn't mean the fix is right
MIT