Factorial

Define AI workflows as code. Execute with deterministic reliability.


Factorial is a DOT-based workflow orchestrator for multi-stage AI pipelines. Write your workflow as a Graphviz DOT graph and run it with built-in quality gates, human approvals, and parallel execution.

Reference Implementation: Based on the StrongDM AI Attractor with enhancements for self-hosting maturity, DTU validation, and deterministic governance.


Quick Start

npm install @mhingston5/factorial

Requirements: Node.js >= 20

For the API backend (the default), also install your provider's SDK:

npm install @ai-sdk/openai  # or @ai-sdk/anthropic, @ai-sdk/google

1. Create a Workflow

# workflow.dot
digraph MyWorkflow {
    graph [goal="Generate and review code"]
    rankdir=LR

    start [shape=Mdiamond]
    exit  [shape=Msquare]

    generate [prompt="Write a Fibonacci function"]
    review [shape=hexagon, type="wait.human", label="Review"]
    
    start -> generate -> review -> exit
}

2. Configure Environment

# .env
OPENAI_API_KEY=sk-...

3. Run It

npx factorial run --graph workflow.dot --logs-root ./logs

Replay later with identical config:

npx factorial replay --manifest ./logs/run_manifest.json

4. Programmatic Usage

import { Attractor } from '@mhingston5/factorial';

const attractor = new Attractor({
  dotFile: './workflow.dot',
  logsRoot: './logs',
});

const result = await attractor.run();
console.log(`Status: ${result.status}`);

Why Factorial?

Feature | Benefit
🎯 Deterministic Runs | Same inputs produce identical outputs and artifacts
🔒 Governance-Ready | Quality gates, human escalation, and audit trails built-in
⚡ Production-Grade | Retry, checkpoints, resume, and replay out of the box
🚀 Parallelizable | Fan-out/fan-in with git worktree isolation
🔌 Multi-Provider | Optimized tooling for OpenAI, Anthropic, and Gemini

Features

Visual Workflow Definition

Define complex AI pipelines as simple DOT graphs:

digraph CodeReview {
  start [shape=Mdiamond]
  exit  [shape=Msquare]
  
  review [label="Review", prompt="Review this code change"]
  gate [type="confidence.gate", escalation_threshold=0.8]
  human [type="wait.human", label="Needs Review"]
  
  start -> review -> gate
  gate -> exit [label="Auto-approved"]
  gate -> human [label="Escalate"]
  human -> exit
}

Multi-Modal Support

Process images, PDFs, and audio files:

digraph MultiModal {
  analyze_image [prompt="Describe this UI", image_input="./ui.png"]
  read_pdf [prompt="Summarize findings", document_input="./paper.pdf"]
  transcribe [prompt="Transcribe meeting", audio_input="./meeting.m4a", llm_provider="gemini"]
}

Supported formats:

  • Images: PNG, JPEG, GIF, WEBP (all providers)
  • Documents: PDF, TXT, MD (Anthropic + Gemini)
  • Audio: WAV, MP3, M4A (Gemini only)

Provider-Native Tool Profiles

Optimized tool formats for each provider:

  • OpenAI: apply_patch v4a format for edits
  • Anthropic: edit_file exact-match editing
  • Gemini: edit_file exact-match editing
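
Provider selection is per node via the llm_provider attribute (as in the multi-modal example above), so each step picks up its provider's native tool profile. A minimal sketch; the node names and prompts are illustrative:

digraph ToolProfiles {
  start [shape=Mdiamond]
  exit  [shape=Msquare]

  // Each node uses the edit-tool profile of its chosen provider.
  patch_step [prompt="Apply the fix", llm_provider="openai"]          // apply_patch v4a
  edit_step  [prompt="Refactor the module", llm_provider="anthropic"] // edit_file exact-match

  start -> patch_step -> edit_step -> exit
}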

Anthropic Prompt Caching

Reduce API costs by 50-90%:

node [llm_provider="anthropic", enable_caching="true", cache_strategy="system-plus-early"]
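
For example, these attributes can be set once as a graph-wide node default instead of per node; a minimal sketch, where the surrounding nodes and prompts are illustrative:

digraph CachedPipeline {
  // Node defaults: LLM nodes declared below inherit these attributes.
  node [llm_provider="anthropic", enable_caching="true", cache_strategy="system-plus-early"]

  start [shape=Mdiamond]
  exit  [shape=Msquare]

  draft  [prompt="Draft the release notes"]
  polish [prompt="Tighten the wording"]

  start -> draft -> polish -> exit
}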

Lightweight Subagent Tools

Spawn parallel subagents for independent tasks:

spawn [type="tool", tool_name="spawn_agent", task="Research topic"]
wait [type="tool", tool_name="wait"]
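
A minimal sketch of how these nodes might be wired, assuming spawn_agent starts each subagent in the background and wait blocks until they finish; the task text and the summarizing step are illustrative:

digraph Subagents {
  start [shape=Mdiamond]
  exit  [shape=Msquare]

  // Assumption: spawn_agent runs its task asynchronously; wait blocks until spawned agents complete.
  spawn_a [type="tool", tool_name="spawn_agent", task="Research approach A"]
  spawn_b [type="tool", tool_name="spawn_agent", task="Research approach B"]
  wait_all [type="tool", tool_name="wait"]
  summarize [prompt="Summarize the subagent findings"]

  start -> spawn_a -> spawn_b -> wait_all -> summarize -> exit
}
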
More Features

Quality Gates

  • quality.gate - Run lint, tests, typecheck, security scans
  • judge.rubric - AI-powered evaluation with scoring
  • confidence.gate - Auto-approve high confidence, escalate low confidence
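
For example, a gated pipeline might chain these before a confidence decision. This is a sketch: the checks and rubric attribute names are assumptions, while confidence.gate and escalation_threshold are documented above; see the Node Types Reference for the exact schema.

digraph GatedPipeline {
  start [shape=Mdiamond]
  exit  [shape=Msquare]

  implement [prompt="Implement the requested change"]
  // The "checks" and "rubric" attribute names below are illustrative assumptions.
  checks [type="quality.gate", checks="lint,test,typecheck"]
  judge  [type="judge.rubric", rubric="Correctness, style, test coverage"]
  gate   [type="confidence.gate", escalation_threshold=0.8]
  human  [shape=hexagon, type="wait.human", label="Manual review"]

  start -> implement -> checks -> judge -> gate
  gate -> exit  [label="Auto-approved"]
  gate -> human [label="Escalate"]
  human -> exit
}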

Digital Twin Universe (DTU): Test against deterministic fixtures with reference twins for external dependencies.

Governance & Audit: Built-in automation for repository health, documentation freshness, and release hardening.

Queue Interviewer (Deterministic Testing): Test workflows with wait.human nodes by pre-recording answers for CI/CD.


Examples

See 20+ complete examples in examples/:

Category | Examples
Starter | simple.dot, branching.dot, human-gate.dot, parallel.dot
Quality | confidence-escalation.dot, quality-pipeline.dot, retry-loop.dot, code-review-complete.dot
Multi-modal | image-analysis.dot, document-qa.dot, audio-transcription.dot
Subagents | lightweight-subagent.dot, parallel-research.dot, manager-loop.dot
Automation | pr-automation.dot, engineering-loop-parent.dot, engineering-loop-child.dot

CLI Commands

# Run and inspect
npx factorial run --graph workflow.dot
npx factorial validate --graph workflow.dot
npx factorial visualize --graph workflow.dot
npx factorial replay --manifest ./logs/run_manifest.json

# Quality and governance
npx factorial confidence-tune --logs-root ./logs
npx factorial check:freshness --artifact ./docs
npx factorial compound-weekly

# DTU and scenarios
npx factorial dtu-run --fixtures ./fixtures
npx factorial dtu-curate
npx factorial metrics:satisfaction

# Reliability and autonomy
npx factorial telemetry:aggregate
npx factorial workflow:self-modify
npx factorial cross-repo:validate

See CLI Reference for complete documentation.


Configuration

Create config.json for defaults:

{
  "logs_root": "./logs",
  "llm_backend": "api",
  "default_provider": "openai",
  "providers": {
    "openai": {
      "api_key_env": "OPENAI_API_KEY",
      "default_model": "gpt-4o-mini"
    }
  },
  "checkpoint_interval": 1
}

Backends:

  • api (default) - Vercel AI SDK with provider libraries
  • cli - Execute external commands directly

Workflow Builder Skill

Factorial includes a comprehensive AI skill for building DOT workflows:

# Available at:
skills/factorial-workflow-builder/

# Provides:
# - Complete node type reference
# - All node and graph attributes with examples
# - Common workflow patterns and best practices

See skills/factorial-workflow-builder/ for full documentation.


Node Types

Shape | Type | Purpose
Mdiamond | start | Entry point
box | codergen (default) | LLM task
hexagon | wait.human | Human approval
diamond | conditional / confidence.gate | Branch routing
component | parallel | Fan-out
tripleoctagon | parallel.fan_in | Fan-in
Msquare | exit | Exit point
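
For example, a fan-out/fan-in stage combines the component and tripleoctagon shapes from the table above; the branch tasks and prompts in this sketch are illustrative:

digraph FanOutFanIn {
  start [shape=Mdiamond]
  exit  [shape=Msquare]

  // Fan out to two parallel branches, then collect results at the fan-in node.
  split  [shape=component, label="Fan out"]
  task_a [prompt="Summarize module A"]
  task_b [prompt="Summarize module B"]
  join   [shape=tripleoctagon, label="Fan in"]

  start -> split
  split -> task_a
  split -> task_b
  task_a -> join
  task_b -> join
  join -> exit
}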

Quality & Governance:

  • quality.gate - Run commands (lint, test, typecheck)
  • judge.rubric - AI evaluation with scoring
  • failure.analyze - Classify failures for targeted retry

See Node Types Reference for full details.


Development

# Install
npm install

# Build
npm run build

# Test
npm run test:run        # Unit tests
npm run test:golden     # Regression suite
npm run test:worktree   # Git worktree parity

# Quality
npm run lint
npm run typecheck

# Audit
npm run agent:audit
npm run docs:freshness
npm run claims:audit

Documentation

Evidence Reports


Architecture

DOT File → Parser → Graph AST → Execution Engine → Handlers → AI Backend
                                                                  ↓
                                                  (api: Vercel SDK | cli: Commands)

Core preserves the Attractor execution model with production enhancements:

  • Self-hosting maturity ladder with objective gates
  • DTU validation platform
  • Deterministic replay and flake detection
  • Multi-provider parity evidence

License

MIT


Companion specs adopted: coding-agent-loop, unified-llm (bounded scope)

Current maturity level: full-autonomy with FA-001–FA-009 evidence published

See maturity ladder for ongoing maintenance criteria
