Define AI workflows as code. Execute with deterministic reliability.
Factorial is a DOT-based workflow orchestrator for multi-stage AI pipelines. Write your workflow as a Graphviz graph and run it with built-in quality gates, human approvals, and parallel execution.
Reference Implementation: Based on the StrongDM AI Attractor with enhancements for self-hosting maturity, DTU validation, and deterministic governance.
```sh
npm install @mhingston5/factorial
```

Requirements: Node.js >= 20

For API backend (default), also install your provider:

```sh
npm install @ai-sdk/openai   # or @ai-sdk/anthropic, @ai-sdk/google
```

```dot
# workflow.dot
digraph MyWorkflow {
  graph [goal="Generate and review code"]
  rankdir=LR
  start [shape=Mdiamond]
  exit [shape=Msquare]
  generate [prompt="Write a Fibonacci function"]
  review [shape=hexagon, type="wait.human", label="Review"]
  start -> generate -> review -> exit
}
```

```sh
# .env
OPENAI_API_KEY=sk-...
```

```sh
npx factorial run --graph workflow.dot --logs-root ./logs
```

Replay later with identical config:

```sh
npx factorial replay --manifest ./logs/run_manifest.json
```

```ts
import { Attractor } from '@mhingston5/factorial';

const attractor = new Attractor({
  dotFile: './workflow.dot',
  logsRoot: './logs',
});

const result = await attractor.run();
console.log(`Status: ${result.status}`);
```

- Why Factorial?
- Features
- Examples
- CLI Commands
- Configuration
- Workflow Builder Skill
- Node Types
- Development
- Documentation
| Feature | Benefit |
|---|---|
| Deterministic Runs | Same inputs produce identical outputs and artifacts |
| Governance-Ready | Quality gates, human escalation, and audit trails built-in |
| Production-Grade | Retry, checkpoints, resume, and replay out of the box |
| Parallelizable | Fan-out/fan-in with git worktree isolation |
| Multi-Provider | Optimized tooling for OpenAI, Anthropic, and Gemini |
Define complex AI pipelines as simple DOT graphs:
```dot
digraph CodeReview {
  start [shape=Mdiamond]
  exit [shape=Msquare]
  review [label="Review", prompt="Review this code change"]
  gate [type="confidence.gate", escalation_threshold=0.8]
  human [type="wait.human", label="Needs Review"]
  start -> review -> gate
  gate -> exit [label="Auto-approved"]
  gate -> human [label="Escalate"]
  human -> exit
}
```

Process images, PDFs, and audio files:
```dot
digraph MultiModal {
  analyze_image [prompt="Describe this UI", image_input="./ui.png"]
  read_pdf [prompt="Summarize findings", document_input="./paper.pdf"]
  transcribe [prompt="Transcribe meeting", audio_input="./meeting.m4a", llm_provider="gemini"]
}
```

Supported formats:
- Images: PNG, JPEG, GIF, WEBP (all providers)
- Documents: PDF, TXT, MD (Anthropic + Gemini)
- Audio: WAV, MP3, M4A (Gemini only)
Optimized tool formats for each provider:
- OpenAI: `apply_patch` v4a format for edits
- Anthropic: `edit_file` exact-match editing
- Gemini: `edit_file` exact-match editing
Reduce API costs by 50-90%:
```dot
node [llm_provider="anthropic", enable_caching="true", cache_strategy="system-plus-early"]
```

Spawn parallel subagents for independent tasks:
```dot
spawn [type="tool", tool_name="spawn_agent", task="Research topic"]
wait [type="tool", tool_name="wait"]
```

More Features
Quality Gates
- `quality.gate` - Run lint, tests, typecheck, security scans
- `judge.rubric` - AI-powered evaluation with scoring
- `confidence.gate` - Auto-approve high confidence, escalate low confidence
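A minimal sketch of how these gates might compose into one graph, using only node types and attributes shown elsewhere in this README (any additional attribute names would be assumptions; see the workflow-builder skill for the full attribute reference):

```dot
digraph GatedPipeline {
  start [shape=Mdiamond]
  exit  [shape=Msquare]

  implement [prompt="Implement the requested change"]
  # judge.rubric: AI-powered evaluation with scoring
  score [type="judge.rubric", label="Score change"]
  # confidence.gate: auto-approve high confidence, escalate low
  gate  [type="confidence.gate", escalation_threshold=0.8]
  human [shape=hexagon, type="wait.human", label="Manual review"]

  start -> implement -> score -> gate
  gate -> exit  [label="Auto-approved"]
  gate -> human [label="Escalate"]
  human -> exit
}
```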
Digital Twin Universe (DTU): Test against deterministic fixtures with reference twins for external dependencies.
Governance & Audit: Built-in automation for repository health, documentation freshness, and release hardening.
Queue Interviewer (Deterministic Testing)
Test workflows with `wait.human` nodes by pre-recording answers for CI/CD.
See 20+ complete examples in examples/:
| Category | Examples |
|---|---|
| Starter | simple.dot, branching.dot, human-gate.dot, parallel.dot |
| Quality | confidence-escalation.dot, quality-pipeline.dot, retry-loop.dot, code-review-complete.dot |
| Multi-modal | image-analysis.dot, document-qa.dot, audio-transcription.dot |
| Subagents | lightweight-subagent.dot, parallel-research.dot, manager-loop.dot |
| Automation | pr-automation.dot, engineering-loop-parent.dot, engineering-loop-child.dot |
```sh
# Run and inspect
npx factorial run --graph workflow.dot
npx factorial validate --graph workflow.dot
npx factorial visualize --graph workflow.dot
npx factorial replay --manifest ./logs/run_manifest.json

# Quality and governance
npx factorial confidence-tune --logs-root ./logs
npx factorial check:freshness --artifact ./docs
npx factorial compound-weekly

# DTU and scenarios
npx factorial dtu-run --fixtures ./fixtures
npx factorial dtu-curate
npx factorial metrics:satisfaction

# Reliability and autonomy
npx factorial telemetry:aggregate
npx factorial workflow:self-modify
npx factorial cross-repo:validate
```

See CLI Reference for complete documentation.
Create `config.json` for defaults:
```json
{
  "logs_root": "./logs",
  "llm_backend": "api",
  "default_provider": "openai",
  "providers": {
    "openai": {
      "api_key_env": "OPENAI_API_KEY",
      "default_model": "gpt-4o-mini"
    }
  },
  "checkpoint_interval": 1
}
```

Backends:

- `api` (default) - Vercel AI SDK with provider libraries
- `cli` - Execute external commands directly
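As a sketch, a multi-provider setup might extend the `providers` map using the same per-provider schema shown above (`api_key_env`, `default_model`); the Anthropic model name here is a placeholder, not a documented default:

```json
{
  "logs_root": "./logs",
  "llm_backend": "api",
  "default_provider": "openai",
  "providers": {
    "openai": {
      "api_key_env": "OPENAI_API_KEY",
      "default_model": "gpt-4o-mini"
    },
    "anthropic": {
      "api_key_env": "ANTHROPIC_API_KEY",
      "default_model": "claude-3-5-sonnet-latest"
    }
  },
  "checkpoint_interval": 1
}
```

Individual nodes can then select a provider with `llm_provider="anthropic"`, as in the caching example above.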
Factorial includes a comprehensive AI skill for building DOT workflows:
```sh
# Available at:
skills/factorial-workflow-builder/

# Provides:
# - Complete node type reference
# - All node and graph attributes with examples
# - Common workflow patterns and best practices
```

See skills/factorial-workflow-builder/ for full documentation.
| Shape | Type | Purpose |
|---|---|---|
| `Mdiamond` | `start` | Entry point |
| `box` | `codergen` (default) | LLM task |
| `hexagon` | `wait.human` | Human approval |
| `diamond` | `conditional` / `confidence.gate` | Branch routing |
| `component` | `parallel` | Fan-out |
| `tripleoctagon` | `parallel.fan_in` | Fan-in |
| `Msquare` | `exit` | Exit point |
Quality & Governance:
- `quality.gate` - Run commands (lint, test, typecheck)
- `judge.rubric` - AI evaluation with scoring
- `failure.analyze` - Classify failures for targeted retry
See Node Types Reference for full details.
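A sketch of a fan-out/fan-in graph built from the shape-to-type mapping above. Shapes and the `prompt` attribute come from this README; the exact routing semantics of `component`/`tripleoctagon` nodes are simplified here, so treat this as illustrative rather than canonical:

```dot
digraph FanOutFanIn {
  start  [shape=Mdiamond]
  exit   [shape=Msquare]

  # component shape -> parallel (fan-out)
  split  [shape=component, label="Fan out"]
  task_a [prompt="Summarize module A"]
  task_b [prompt="Summarize module B"]
  # tripleoctagon shape -> parallel.fan_in (fan-in)
  join   [shape=tripleoctagon, label="Fan in"]

  start -> split
  split -> task_a -> join
  split -> task_b -> join
  join -> exit
}
```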
```sh
# Install
npm install

# Build
npm run build

# Test
npm run test:run       # Unit tests
npm run test:golden    # Regression suite
npm run test:worktree  # Git worktree parity

# Quality
npm run lint
npm run typecheck

# Audit
npm run agent:audit
npm run docs:freshness
npm run claims:audit
```

- Roadmap - Current status and direction
- Self-hosting Maturity - Level definitions and gates
- Spec Conformance - Attractor spec alignment
- Companion Spec Scope - Implemented features
- Execution Event Stream - Event schema for UI/telemetry consumers
- DTU Satisfaction Report - Scenario satisfaction metrics
- Active Handoff - Current execution context
- Reasoning Token Coverage
- Anthropic Caching Effectiveness
- Subagent Performance
- Reliability SLO
- Full Autonomy Readiness
```
DOT File → Parser → Graph AST → Execution Engine → Handlers → AI Backend
                                                       ↓
                                    (api: Vercel SDK | cli: Commands)
```
Core preserves the Attractor execution model with production enhancements:
- Self-hosting maturity ladder with objective gates
- DTU validation platform
- Deterministic replay and flake detection
- Multi-provider parity evidence
MIT
Companion specs adopted: coding-agent-loop, unified-llm (bounded scope)
Current maturity level: full-autonomy with FA-001βFA-009 evidence published
See maturity ladder for ongoing maintenance criteria

