ao-fleet is the company layer above AO teams. It ships as a CLI and MCP server and incorporates the fleet-agent workflow patterns from ao-fleet-pack.
It is the control plane for:
- team and project inventory
- schedule policy and daemon intent
- fleet audit history
- knowledge capture and retrieval
- fleet-native MCP tools
- company-wide AO workflows
Think of the system like this:
- `ao-fleet` is the company
- an AO instance is a team
- a project is a repo or workspace owned by a team
- the knowledge base is company memory
That means a marketing team can own marketing repos, while an app team can own one product repo or several repos. ao-fleet coordinates the teams; AO still executes the work inside each repo.
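The stack notes further down name declarative YAML or TOML fleet config as a design target. As a purely illustrative sketch of the company → team → project model (no field name here is a committed schema), that config could read:

```yaml
# Illustrative only — not a committed ao-fleet schema.
company: acme
teams:
  - slug: marketing
    projects:
      - slug: marketing-site    # a repo owned by the marketing team
  - slug: app
    projects:
      - slug: product-app       # an app team may own one repo or several
```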
AO already solves per-project orchestration well:
- each repo has its own `.ao/config`
- each repo runs its own daemon
- each repo exposes its own AO MCP surface
What is still missing is a true fleet-level control plane:
- one place to register and classify managed repos
- one place to define when teams and projects should be active
- one place to start, pause, stop, and rebalance AO daemons
- one place to store company knowledge and operational history
- one MCP surface for the whole fleet
ao-fleet is that layer.
The repository exposes 54 CLI commands and a full MCP server. The surface covers:
- Database: `db-init`
- Audit: `audit-list` — searchable append-only log of all fleet mutations
- Config snapshots: `config-snapshot-export`, `config-snapshot-import`
- Fleet overview: `fleet-overview`, `founder-overview`
- Hosts: `host-create`, `host-get`, `host-import`, `host-logs`, `host-log-stream`, `host-list`, `host-sync`, `host-sync-all`, `host-update`, `host-delete`
- Teams: `team-create`, `team-get`, `team-list`, `team-update`, `team-delete`
- Projects: `project-create`, `project-ao-json`, `project-config-get`, `project-discover`, `project-events`, `project-get`, `project-host-assign`, `project-host-clear`, `project-host-list`, `project-list`, `project-status`, `project-update`, `project-delete`
- Schedules: `schedule-create`, `schedule-get`, `schedule-list`, `schedule-update`, `schedule-delete`
- Knowledge: `knowledge-source-upsert`, `knowledge-source-list`, `knowledge-document-create`, `knowledge-document-list`, `knowledge-fact-create`, `knowledge-fact-list`, `knowledge-search`
- Daemon: `daemon-override-upsert`, `daemon-override-list`, `daemon-override-clear`, `daemon-status`, `daemon-health-rollup`, `daemon-reconcile`
- MCP: `mcp-list`, `mcp-serve`
SQLite-backed fleet state. Stdio MCP transport. Persisted daemon status and config snapshot import/export for founder bootstrap.
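Because the transport is stdio, any MCP client that spawns servers as subprocesses can attach to the fleet surface. A hedged sketch using the common `mcpServers` configuration shape (the exact config format depends on your MCP client; the db path mirrors the examples below):

```json
{
  "mcpServers": {
    "ao-fleet": {
      "command": "cargo",
      "args": ["run", "-q", "-p", "ao-fleet-cli", "--", "--db-path", "/tmp/ao-fleet.db", "mcp-serve"]
    }
  }
}
```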
Every create, update, and delete operation writes an audit entry. Use `audit-list` to replay the history of any change:

```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db audit-list
```

The audit log is the primary source of truth for "who changed what and when" across the fleet. It is append-only and scoped to fleet mutations — not workflow execution history, which lives inside each AO daemon.
The fleet knowledge base stores company memory as three layered types:
- Sources — pointers to external artifacts (runbooks, documents, URLs, manual notes)
- Documents — rich records with title, summary, body, kind, and tags
- Facts — lightweight assertions attached to a team or company scope
All three types support full-text search across scope, kind, and tag filters:

```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db knowledge-search \
  --scope company \
  --text "launch checklist" \
  --tag operations
```

Knowledge is the long-term memory that survives daemon restarts, repo migrations, and team changes. Sources link out; documents and facts are stored directly in the fleet database.
Create a local database:

```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db db-init
```

Create a company team:

```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db team-create \
  --slug marketing \
  --name Marketing \
  --mission "owns launch campaigns and growth ops" \
  --ownership company \
  --business-priority 80
```

Create a repo/project under that team:

```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db project-create \
  --team-id <TEAM_ID> \
  --slug marketing-site \
  --root-path /Users/me/marketing-site \
  --ao-project-root /Users/me/marketing-site \
  --default-branch main \
  --enabled
```

Discover existing repos and `.ao` projects under one or more workspace roots:
```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db project-discover \
  --search-root /Users/me/repos \
  --search-root /Users/me/workspaces
```

By default, discovery skips AO-only shell directories whose only entry is `.ao`. Include those explicitly if you want them:

```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db project-discover \
  --search-root /Users/me/repos \
  --include-ao-shells
```

Register any discovered projects that are not already in the fleet:

```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db project-discover \
  --search-root /Users/me/repos \
  --register \
  --team-id <TEAM_ID>
```

Register a founder-managed host and assign a project to it:
```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db host-create \
  --slug founder-mac \
  --name "Founder Mac" \
  --address founder.local \
  --platform macos \
  --status healthy \
  --capacity-slots 6

cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db project-host-assign \
  --project-id <PROJECT_ID> \
  --host-id <HOST_ID> \
  --assignment-source founder
```

Create a business-hours schedule. Weekday numbers use 0 = Monday and 6 = Sunday:
```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db schedule-create \
  --team-id <TEAM_ID> \
  --timezone America/Mexico_City \
  --policy-kind business_hours \
  --window 0,9,17 \
  --window 1,9,17 \
  --window 2,9,17 \
  --window 3,9,17 \
  --window 4,9,17 \
  --enabled
```

Write company knowledge:
```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db knowledge-source-upsert \
  --scope team \
  --scope-ref <TEAM_ID> \
  --kind manual_note \
  --label launch-notes \
  --uri file:///ops/launch.md \
  --sync-state ready

cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db knowledge-document-create \
  --scope team \
  --scope-ref <TEAM_ID> \
  --kind runbook \
  --source-kind manual_note \
  --source-id <SOURCE_ID> \
  --title "Campaign launch checklist" \
  --summary "Operational checklist for launches" \
  --body "Verify the launch checklist before enabling the campaign." \
  --tag marketing

cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db knowledge-search \
  --scope team \
  --scope-ref <TEAM_ID> \
  --text launch \
  --tag marketing \
  --document-kind runbook
```

Reconcile daemon intent with observed state:
```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db daemon-reconcile --apply
```

Retrieve structured logs from a hostd node:
```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db host-logs \
  --base-url http://127.0.0.1:7444 \
  --auth-token dev-token \
  --project-id ao-fleet \
  --cat daemon \
  --level error \
  --limit 50
```

Tail the live websocket stream from a hostd node:
```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db host-log-stream \
  --base-url http://127.0.0.1:7444 \
  --auth-token dev-token \
  --project-id ao-fleet \
  --cat daemon \
  --level error \
  --tail 20
```

Start the MCP server:

```shell
cargo run -q -p ao-fleet-cli -- --db-path /tmp/ao-fleet.db mcp-serve
```

Phase 1 is meant for a single founder or a very small founding team running a company, not a large ops org.
The practical bootstrap loop is:
- Initialize the fleet database.
- Create the company teams.
- Attach the repos or workspaces each team owns.
- Add schedules and company knowledge.
- Export a snapshot for backup or seed replication.
- Run `mcp-serve` as the long-lived control-plane process.
- Use `daemon-status --refresh` and `daemon-reconcile --apply` on a timer.

Local projects use direct AO CLI control; placed remote projects should point `ao-fleet` at the host-scoped AO web API or MCP endpoint for that project.
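The reconcile step in that loop amounts to diffing desired daemon intent against observed daemon state and emitting converging actions. A minimal Rust sketch under invented names — this is not the actual ao-fleet data model:

```rust
use std::collections::HashMap;

// Minimal sketch of intent-vs-observed daemon reconciliation.
// The states and action names are illustrative, not ao-fleet's own.

#[derive(Clone, Copy, PartialEq, Debug)]
enum State {
    Running,
    Stopped,
}

#[derive(Debug, PartialEq)]
enum Action {
    Start(String),
    Stop(String),
}

/// Compare desired intent with observed state and emit one action per
/// drifted project; projects absent from `observed` count as stopped.
fn reconcile(desired: &HashMap<String, State>, observed: &HashMap<String, State>) -> Vec<Action> {
    let mut actions = Vec::new();
    for (project, want) in desired {
        let have = observed.get(project).copied().unwrap_or(State::Stopped);
        match (have, *want) {
            (State::Stopped, State::Running) => actions.push(Action::Start(project.clone())),
            (State::Running, State::Stopped) => actions.push(Action::Stop(project.clone())),
            _ => {} // already converged
        }
    }
    actions
}

fn main() {
    let desired = HashMap::from([
        ("marketing-site".to_string(), State::Running),
        ("product-app".to_string(), State::Stopped),
    ]);
    let observed = HashMap::from([("product-app".to_string(), State::Running)]);
    for action in reconcile(&desired, &observed) {
        println!("{:?}", action);
    }
}
```

Running this on a timer, with the actions applied through the AO CLI or host-scoped API, is the shape of `daemon-reconcile --apply`.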
For a founder-run deployment, treat ao-fleet as a service with one persistent database path and one long-lived MCP endpoint. Use a service manager such as systemd or launchd to keep mcp-serve running, then schedule reconciliation separately. Local projects can keep using direct AO CLI control; placed remote projects should use the host-scoped AO API or MCP transport.
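For example, a systemd user unit for the MCP process might look like the following sketch. The unit name, binary path, and database path are placeholders, and this assumes an installed `ao-fleet-cli` binary rather than `cargo run`:

```ini
# ~/.config/systemd/user/ao-fleet-mcp.service — illustrative sketch
[Unit]
Description=ao-fleet MCP control plane

[Service]
ExecStart=/usr/local/bin/ao-fleet-cli --db-path /var/lib/ao-fleet/fleet.db mcp-serve
Restart=on-failure

[Install]
WantedBy=default.target
```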
Documentation:

- `README.md`: product overview and operator entry point
- `docs/architecture.md`: system model and implementation shape
- `docs/operator-guide.md`: concrete CLI workflows and examples
This repo is intended to become a standalone open-source service and CLI that:
- manages many AO projects from one fleet registry
- schedules project activity windows and operational policies
- supervises AO daemons across the fleet
- stores company knowledge and makes it searchable
- exposes a fleet-native MCP server
- runs its own AO instance for smart workflow automation across repos
- ships fleet-agent workflow patterns (conductor-style reconciliation and fleet-wide review, migrated from `ao-fleet-pack`)
The design target is "Brain as a product" rather than "a few shell scripts".
ao-fleet is explicitly not:

- replacing AO inside individual repositories
- moving workflow execution logic into a fleet daemon
- duplicating AO's task, requirement, queue, or workflow runtime internals
- becoming a desktop dashboard
ao-fleet should orchestrate AO, not absorb it.
Related repositories:

- `ao`: execution kernel and per-project daemon
- `ao-dashboard`: visual client that should eventually consume `ao-fleet`
- `ao-fleet-tools`: early scripts and MCP experiments to fold into this repo
- `brain`: private operator workspace that proved the operating model
- Language: Rust
- Current CLI binary: `ao-fleet-cli`
- Persistence: SQLite for fleet state and history
- Config: YAML or TOML for declarative fleet config
- MCP transport: stdio first, optional HTTP later
- AO integration: spawn AO CLI and consume AO MCP rather than vendoring AO internals
- Multi-host control: uses local AO CLI for colocated projects and host-scoped AO web API or MCP for remote placements; AO already has the control surface, and the remaining work lives in `ao-fleet` transport, auth, and placement policy
Rust is the right default because AO is already Rust and the operational parts here are process supervision, scheduling, IO, and durable state.
The repo has a working core surface: 54 CLI commands covering registry, scheduling, daemon reconciliation, MCP, knowledge operations, fleet audit, daemon health rollup, and config snapshots. Fleet-agent workflow patterns from `ao-fleet-pack` are incorporated. The next operator-facing layers live in `docs/operator-guide.md` and `docs/architecture.md`.