Why
Wave-1 (invest) demonstrates a single agent acting on its own preapproval. Wave-2 (deploy-pipeline, see #3) demonstrates one agent + one human reviewer. Wave-3 should demonstrate the next problem layer: multiple agents coordinating in the same domain without stepping on each other.
This is the structural gap that's eating CrewAI / AutoGen / LangGraph use-cases right now (see research findings in ~/Desktop/IDF/2026-05-03-ai-angles-research-spec-runtime-mcp.html): multi-agent frameworks orchestrate agent calls, but they don't manage shared state between agents. The community currently solves this with custom state-stores, vector-databases, or framework-specific scratchpads. None of those carry domain semantics.
Fold already has the primitive: Φ as event log + co-selection cross-projection state-sharing. Co-selection landed in idf-sdk PRs #300/#303/#308/#311/#313 in April 2026. We just haven't shipped a domain that uses it as the demo's core narrative.
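A minimal sketch of what "Φ as event log" means in practice — field names (actor, intent, status) are illustrative, not the real idf-sdk schema:

```javascript
// Hypothetical sketch of Φ as an append-only event log.
const phi = [];

// An agent proposes a mutation; nothing is applied yet.
function propose(actor, intent, payload) {
  const event = { id: phi.length + 1, actor, intent, payload, status: "proposed" };
  phi.push(event);
  return event;
}

// The runtime later resolves the proposal against domain invariants.
function resolve(event, ok, reason) {
  event.status = ok ? "confirmed" : "rejected";
  if (!ok) event.reason = reason;
  return event;
}

const e = propose("triage-agent", "set_severity", { incidentId: "inc-1", severity: "sev2" });
resolve(e, true);
// e.status → "confirmed"; rejected proposals stay in the log with a reason,
// so every agent can see why a mutation did not land.
```

The point is that rejected proposals are first-class log entries, not errors that vanish — that is what lets agents share one world instead of one scratchpad.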
Domain candidates
Pick one from below. Lean: incident response — recognizable HN audience, natural multi-agent role split, real-world consequences.
Option A — incident response (recommended)
Three agents + one on-call human in the same Incident lifecycle:
- Triage agent — classifies severity, finds related runbooks
- Investigator agent — pulls logs, traces, correlates with recent deploys (via cross-domain link to wave-2 deploy-pipeline)
- Mitigator agent — proposes rollback / config-change / scale-up
- On-call human — approves mitigation; only the human can close_incident
Shared state: Incident.timeline[], Incident.status, Incident.assignedTo. Co-selection lets all three agents see what the others are looking at without polling.
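The shared state above can be sketched as plain data — field names follow the issue text, the rest (status values, focus shape) is assumed:

```javascript
// Illustrative shape of the shared Incident state the three agents coordinate on.
const incident = {
  id: "inc-1",
  status: "open",           // e.g. open → mitigating → closed (closed only by the human)
  assignedTo: null,         // max 1 active assignee (cardinality invariant)
  timeline: [],             // append-only list, one entry per agent action
  deployTriggeredBy: null,  // FK into the wave-2 Deployment domain
};

// Co-selection: each agent publishes what it is currently looking at,
// so the others read focus rows instead of polling the incident itself.
const currentFocus = {
  "investigator-agent": { incidentId: "inc-1", fieldPath: "timeline" },
  "mitigator-agent": { incidentId: "inc-1", fieldPath: "assignedTo" },
};
```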
Option B — sales pipeline
CRM-like domain. Agents specialize: enrichment, follow-up, scheduling, contract-drafting. Shared Deal state. Less HN-resonant (audience is dev, not sales), but closest to existing CrewAI demos — direct apples-to-apples comparison signal.
Option C — code review
Three agents: linter, security-scanner, architect-reviewer. Single PR, shared verdict. Closest to GitHub-native HN audience, but the «multiple perspectives on one PR» model is well-trodden — competitors include Greptile, CodeRabbit, qodo.
Three rejection types this domain showcases
- Co-selection conflict — two agents try to mutate the same Incident.assignedTo simultaneously. The runtime intercepts the second mutation with a structured rejection citing the first agent's lock. Cardinality invariant: max 1 active assignee.
- Cross-agent SoD — the Investigator agent cannot also mitigate the same incident (separation-of-duties expression invariant: mitigator !== investigator). Mirrors the wave-2 approve_deployment SoD pattern.
- Dependency-graph constraint — the Mitigator cannot propose_rollback until the Investigator has logged at least one correlation_finding. Cardinality invariant with where: { type: "correlation" }, min: 1.
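One possible declarative encoding of these three invariants, plus a toy checker for the dependency-graph one — the invariant DSL here is an assumption, not the idf-sdk shape:

```javascript
// Assumed declarative form of the three rejection-producing invariants.
const invariants = [
  { // co-selection conflict: at most one active assignee
    kind: "cardinality",
    target: "Incident.assignedTo",
    max: 1,
  },
  { // cross-agent SoD: the investigator may not also mitigate
    kind: "expression",
    expr: (incident) => incident.mitigator !== incident.investigator,
  },
  { // dependency graph: rollback needs at least one correlation finding first
    kind: "cardinality",
    target: "Incident.timeline",
    where: { type: "correlation" },
    min: 1,
    gates: "propose_rollback",
  },
];

// A rejection cites the violated invariant in structured form.
function check(incident, action) {
  if (action === "propose_rollback") {
    const findings = incident.timeline.filter((e) => e.type === "correlation");
    if (findings.length < 1) {
      return { status: "rejected", invariant: invariants[2], reason: "no correlation_finding yet" };
    }
  }
  return { status: "confirmed" };
}
```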
What's new vs Wave-1/2
- Co-selection demo — first quickstart that exercises <CoSelectionProvider> + useCoSelection. Not just runtime guards, but shared cursor / focus between agents-as-pixels (when wired to a UI). For the npm/MCP demo, this manifests as agents reading each other's currentFocus.{incidentId, fieldPath} from /world.
- Cross-domain reference — Incident.deployTriggeredBy → Deployment.id (FK to the wave-2 domain). Demonstrates that IDF artifacts compose across domains without runtime coupling.
- First multi-agent preapproval — each agent has its own AgentPreapproval row, with an allowedAgentRoles: ["triage"] field. The runtime's preapproval guard now needs to check role-scoped action sets, not just user-scoped ones.
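The role-scoped guard check can be sketched in a few lines — AgentPreapproval rows are modeled as plain objects here, and the action names are illustrative:

```javascript
// Assumed shape of role-scoped AgentPreapproval rows.
const preapprovals = [
  { agent: "triage-agent", allowedAgentRoles: ["triage"], actions: ["set_severity", "attach_runbook"] },
  { agent: "mitigator-agent", allowedAgentRoles: ["mitigator"], actions: ["propose_rollback"] },
];

// The guard now matches on role + action, not just on the acting user.
function isPreapproved(agent, role, action) {
  return preapprovals.some(
    (p) => p.agent === agent && p.allowedAgentRoles.includes(role) && p.actions.includes(action)
  );
}
```

Note the double scoping: an agent acting under the wrong role is rejected even for an action its own row lists.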
Implementation paths
Path A — new preset in the existing Docker quickstart (~2 hours):
- New idf repo branch with domains/incident-response/{ontology,intents}.js
- BOOTSTRAP_DOMAIN=incident docker compose up
- Three agent-personas as separate test users (entrypoint.sh seeds them)
- New scripts: demo:co-selection-conflict, demo:sod-cross-agent, demo:dependency-graph
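What demo:co-selection-conflict would show, reduced to a self-contained sketch — the lock mechanics are an assumption about how co-selection resolves the race:

```javascript
// Two agents mutate Incident.assignedTo; the second hits the first's lock.
const locks = new Map(); // fieldPath → agent currently holding co-selection

function assign(agent, incident, assignee) {
  const key = `${incident.id}.assignedTo`;
  const holder = locks.get(key);
  if (holder && holder !== agent) {
    // Structured rejection citing the first agent's lock.
    return { status: "rejected", reason: `co-selection held by ${holder}` };
  }
  locks.set(key, agent);
  incident.assignedTo = assignee;
  return { status: "confirmed" };
}

const incident = { id: "inc-1", assignedTo: null };
const first = assign("triage-agent", incident, "oncall-1");
const second = assign("mitigator-agent", incident, "oncall-2");
// first.status === "confirmed", second.status === "rejected",
// and incident.assignedTo stays "oncall-1".
```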
Path B — full multi-agent simulator (~1-2 days):
- Background process in the container that simulates Triage / Investigator / Mitigator running concurrently via setInterval calls. A real LLM-driven multi-agent demo without requiring viewers to set up CrewAI / LangGraph themselves.
- Demo shows agents competing for state; the runtime resolves conflicts via Φ's proposed → confirmed | rejected semantics + co-selection.
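The Path B loop can be sketched as a tick function (the container would drive it with setInterval; here it is stepped manually, and the agent behaviors are illustrative):

```javascript
// Minimal Path B sketch: agents act each tick against shared state.
const incident = { id: "inc-1", timeline: [], status: "open" };

const agents = {
  // Investigator logs a correlation finding.
  investigator: () =>
    incident.timeline.push({ type: "correlation", note: "deploy d-42 correlates" }),
  // Mitigator may only act once a correlation finding exists (dependency gate).
  mitigator: () => {
    const findings = incident.timeline.filter((e) => e.type === "correlation");
    if (findings.length >= 1) {
      incident.timeline.push({ type: "mitigation", note: "rollback proposed" });
    }
  },
};

// In the container: setInterval(tick, 1000). Here: one manual step.
function tick() {
  agents.investigator();
  agents.mitigator();
}
tick();
```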
Lean: A first, B as wave-3.5 if A gets traction.
Open questions
- Co-selection requires <CoSelectionProvider> mounted in a UI client. For the npm/MCP demo, what's the equivalent? Maybe a GET /api/agent/:domain/focus endpoint that returns the last N agents' current-focus rows. Schema TBD.
- Should the human in this demo be an oncall role or an incident-commander? Tradeoff: oncall is recognizable, incident-commander is more accurate for governance. Lean: oncall for demo clarity.
- Cross-domain link to wave-2 — only works if wave-2 is shipped first. Hard dependency: wave-3 cannot demo before wave-2.
- Is it 3 agents + 1 human, or N agents? Tradeoff: 3 is concrete, N is impressive but harder to demo in 75 sec. Lean: 3.
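On the first open question, one way the /api/agent/:domain/focus endpoint could resolve — a read-only projection over focus rows; the row shape and query parameter are assumptions:

```javascript
// Assumed focus-row log; one append per agent focus change.
const focusLog = [
  { agent: "triage-agent", incidentId: "inc-1", fieldPath: "status", at: 1 },
  { agent: "investigator-agent", incidentId: "inc-1", fieldPath: "timeline", at: 2 },
  { agent: "mitigator-agent", incidentId: "inc-1", fieldPath: "assignedTo", at: 3 },
];

// GET /api/agent/:domain/focus?limit=N → most recent focus per agent.
function recentFocus(log, limit = 10) {
  const latest = new Map();
  for (const row of log) latest.set(row.agent, row); // later rows win per agent
  return [...latest.values()].sort((a, b) => b.at - a.at).slice(0, limit);
}
```

This keeps the npm/MCP demo polling-free in spirit: agents read a bounded projection instead of subscribing through a UI provider.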
Decision frame
Open this only if:
Don't open this: prefer #5 (image: line — compose doesn't try to pull a nonexistent image) or decision lineage (docs: add Loom walkthrough to README, #6) instead.
Source narrative
Wave-3 closes the gap CrewAI / AutoGen / LangGraph leave open: agent orchestration vs agent substrate. Frameworks tell agents what to do next. Fold tells agents what world they share.
«Multi-agent isn't orchestration. It's a shared substrate.»
This issue records the design before it's needed. Decisions made today (3 agents > N, incident-response > sales, Path A > B) are revocable when the issue is reopened.