diff --git a/.gitignore b/.gitignore
index 30a2d0428..845833b88 100644
--- a/.gitignore
+++ b/.gitignore
@@ -39,5 +39,15 @@ docs/.vitepress/dist/
# Claude Code worktrees
.claude/worktrees/
+# Claude Code local settings
+.claude/settings.local.json
+
# Managed worktree MCP fallback config
/.mcp.json
+
+# Local Cargo cache config
+.cargo/config.local.toml
+
+# Test result artifacts with hardcoded local paths
+TEST_RESULTS*.md
+/crates/llm-cli-wrapper/TEST_RESULTS*.md
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 000000000..908489564
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,181 @@
+# Contributing to Animus
+
+Thank you for your interest in contributing to Animus! This guide covers project setup, the development workflow, and how to submit changes.
+
+## Getting Started
+
+### Prerequisites
+
+- **Rust**: Animus is a Rust-only project. Install Rust via [rustup](https://rustup.rs/).
+- **Node.js** (optional, for web UI work): v18+
+- **At least one AI coding CLI** for testing agent integration:
+ - `@anthropic-ai/claude-code` (recommended)
+ - `@openai/codex`
+ - `@google/gemini-cli`
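+
+Each of these is an npm package and can be installed globally, for example:
+
+```bash
+npm install -g @anthropic-ai/claude-code
+npm install -g @openai/codex
+npm install -g @google/gemini-cli
+```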
+
+### Building Locally
+
+```bash
+# Clone the repository
+git clone https://github.com/samishukri/animus.git
+cd animus
+
+# Build the project
+cargo build
+
+# Run tests
+cargo test
+
+# Format and lint
+cargo fmt
+cargo clippy
+```
+
+## Development Workflow
+
+### Before You Start
+
+1. **Check existing issues and PRs** to avoid duplicate work
+2. **Fork the repository** on GitHub
+3. **Create a feature branch** from `main`:
+ ```bash
+ git checkout -b feature/your-feature-name
+ ```
+
+### Making Changes
+
+- Keep changes focused and minimal — one feature per PR
+- Follow the existing code style (Rust conventions)
+- Add tests for new functionality
+- Update documentation if your changes affect user-facing behavior
+- Reference the [CLAUDE.md](./CLAUDE.md) file for architecture landmarks and verification checks
+
+### Code Organization
+
+The workspace is organized into functional crates:
+
+- **Core orchestration**: `orchestrator-cli`, `orchestrator-core`, `orchestrator-config`, `orchestrator-store`
+- **Runtime & agents**: `agent-runner`, `llm-cli-wrapper`, `workflow-runner-v2`, `orchestrator-daemon-runtime`
+- **Web & API**: `orchestrator-web-server`, `orchestrator-web-api`
+- **Utilities**: `orchestrator-providers`, `orchestrator-git-ops`, `orchestrator-notifications`, `protocol`
+
+### Running Tests
+
+```bash
+# Test a specific crate
+cargo test -p crate-name
+
+# Run all tests
+cargo test --workspace
+
+# Run web UI tests
+cd crates/orchestrator-web-server/web-ui
+npm test
+```
+
+### Verifying Your Changes
+
+Before submitting a PR, verify your changes don't break anything:
+
+```bash
+# Format check
+cargo fmt --check
+
+# Lint check
+cargo clippy --all-targets
+
+# Test all crates
+cargo test --workspace
+
+# Web UI checks (if relevant)
+cd crates/orchestrator-web-server/web-ui
+npm run typecheck
+npm run build
+```
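+
+The Rust checks above can be chained so the first failure stops the run:
+
+```bash
+# Stop at the first failing check
+cargo fmt --check && cargo clippy --all-targets && cargo test --workspace
+```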
+
+## Submitting Changes
+
+### Pull Request Process
+
+1. **Push your branch** to your fork:
+ ```bash
+ git push origin feature/your-feature-name
+ ```
+
+2. **Create a Pull Request** with a clear title and description:
+ - Link any related issues using `Closes #123`
+ - Explain *why* the change is needed, not just *what* changed
+ - Include testing notes if applicable
+
+3. **Respond to feedback** from reviewers and update the PR as needed
+
+4. **Ensure CI passes** — all automated checks must pass before merging
+
+### Commit Guidelines
+
+- Write clear, descriptive commit messages
+- Use conventional commit format when possible: `type(scope): description`
+ - `feat`: new feature
+ - `fix`: bug fix
+ - `refactor`: code restructuring
+ - `test`: test additions or updates
+ - `docs`: documentation updates
+ - `chore`: maintenance tasks
+
+Example:
+```
+feat(cli): add --dry-run flag to task create command
+
+Allows users to preview task creation without persisting state.
+```
+
+## Documentation
+
+- **CLI changes**: Update `docs/reference/cli/index.md`
+- **MCP tools**: Update `docs/reference/mcp-tools.md` and `docs/guides/agents.md`
+- **Configuration**: Update relevant docs in `docs/reference/`
+- **README**: Keep it current with major feature additions
+
+## Code Standards
+
+### Rust Style
+
+- Use `cargo fmt` for formatting — this is enforced in CI
+- Follow Clippy suggestions — address warnings before submitting
+- Write idiomatic Rust code
+
+### Web UI Standards (TypeScript/React)
+
+- Use React 18 best practices
+- Follow component patterns established in the codebase
+- Test changes with responsive and accessibility checks
+
+### State Management
+
+- Treat Animus-managed state (in `~/.ao/`) as immutable except through CLI commands
+- Use service APIs rather than direct file manipulation
+- Preserve backward compatibility where possible
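+
+As a sketch, state mutations should go through CLI commands rather than file edits (the `task create` invocation is taken from the README; the direct-edit example is illustrative):
+
+```bash
+# Prefer: mutate state through the CLI
+animus task create --title "Add rate limiting" --task-type feature --priority high
+
+# Avoid: editing files under ~/.ao/ by hand
+```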
+
+## Reporting Bugs
+
+When reporting bugs, please include:
+
+1. **Environment**: OS, Rust version, Animus version
+2. **Steps to reproduce**: Clear, minimal example
+3. **Expected behavior**: What should happen
+4. **Actual behavior**: What actually happens
+5. **Logs**: Output of `animus doctor` and relevant error messages
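+
+Most of this information can be collected in one go (assuming `animus` is on your PATH):
+
+```bash
+uname -a          # OS details
+rustc --version   # Rust toolchain
+animus doctor     # Animus health report
+```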
+
+## Questions or Feedback?
+
+- **Discussions**: Use GitHub Discussions for questions
+- **Issues**: File issues for bugs or feature requests
+- **Security**: For security issues, email security@example.com (do not open public issues)
+
+## License
+
+By contributing to Animus, you agree that your contributions will be licensed under the same [Elastic License 2.0 (ELv2)](LICENSE) as the project.
+
+---
+
+Thank you for contributing to Animus!
diff --git a/README.md b/README.md
index e5556921d..b4a917a69 100644
--- a/README.md
+++ b/README.md
@@ -1,10 +1,10 @@
-
+
-[](https://github.com/launchapp-dev/ao)
+[](https://github.com/samishukri/animus)
@@ -24,7 +24,7 @@
## Install
```bash
-curl -fsSL https://raw.githubusercontent.com/launchapp-dev/ao/main/install.sh | bash
+curl -fsSL https://raw.githubusercontent.com/samishukri/animus/main/install.sh | bash
```
The upstream installer currently targets macOS. On Linux and Windows, use a release archive or build from source.
@@ -34,10 +34,10 @@ The upstream installer currently targets macOS. On Linux and Windows, use a rele
```bash
# Specific version
-AO_VERSION=v0.0.11 curl -fsSL https://raw.githubusercontent.com/launchapp-dev/ao/main/install.sh | bash
+ANIMUS_VERSION=v0.0.11 curl -fsSL https://raw.githubusercontent.com/samishukri/animus/main/install.sh | bash
# Custom directory
-AO_INSTALL_DIR=/usr/local/bin curl -fsSL https://raw.githubusercontent.com/launchapp-dev/ao/main/install.sh | bash
+ANIMUS_INSTALL_DIR=/usr/local/bin curl -fsSL https://raw.githubusercontent.com/samishukri/animus/main/install.sh | bash
```
@@ -57,15 +57,15 @@ npm install -g @google/gemini-cli # Gemini
---
-## What is AO?
+## What is Animus?
-AO turns a single YAML file into an autonomous software delivery pipeline.
+Animus turns a single YAML file into an autonomous software delivery pipeline.
-You define agents, wire them into phases, compose phases into workflows, schedule everything with cron — and AO's daemon handles the rest: dispatching tasks to AI agents in isolated git worktrees, managing quality gates, and merging the results.
+You define agents, wire them into phases, compose phases into workflows, schedule everything with cron — and Animus's daemon handles the rest: dispatching tasks to AI agents in isolated git worktrees, managing quality gates, and merging the results.
```
┌──────────────────────────────────────────────────┐
- │ AO Daemon (Rust) │
+ │ Animus Daemon (Rust) │
│ │
┌────────┐ │ ┌───────────┐ ┌───────────┐ ┌────────┐ │ ┌────────┐
│ Tasks │───▶│───▶│ Dispatch │───▶│ Agents │───▶│ Phases │─│──▶│ PRs │
@@ -86,13 +86,13 @@ You define agents, wire them into phases, compose phases into workflows, schedul
```bash
cd your-project # any git repo
-ao doctor # check prerequisites
-ao setup # initialize .ao/
+animus doctor # check prerequisites
+animus setup # initialize .ao/
-ao task create --title "Add rate limiting" --task-type feature --priority high
-ao workflow run --task-id TASK-001 # run once
+animus task create --title "Add rate limiting" --task-type feature --priority high
+animus workflow run --task-id TASK-001 # run once
-ao daemon start --autonomous # or go fully autonomous
+animus daemon start --autonomous # or go fully autonomous
```
---
@@ -112,7 +112,7 @@ agents:
default:
model: claude-sonnet-4-6
tool: claude
- mcp_servers: ["ao", "context7"]
+ mcp_servers: ["animus", "context7"]
work-planner:
system_prompt: |
@@ -198,7 +198,7 @@ schedules:
## The Full Agent Team
-AO doesn't run one agent. It runs an **entire product organization**:
+Animus doesn't run one agent. It runs an **entire product organization**:
```
┌─────────────────────────────────────────────────────────────────┐
@@ -264,11 +264,11 @@ Every task gets its own git worktree. Agents work in parallel on separate branch
## Claude Code Integration
-Install [**AO Skills**](https://github.com/launchapp-dev/ao-skills) for deep AO integration inside Claude Code:
+Install [**Animus Skills**](https://github.com/samishukri/animus-skills) for deep Animus integration inside Claude Code:
```bash
-git clone https://github.com/launchapp-dev/ao-skills.git ~/ao-skills
-claude --plugin-dir ~/ao-skills
+git clone https://github.com/samishukri/animus-skills.git ~/animus-skills
+claude --plugin-dir ~/animus-skills
```
@@ -279,7 +279,7 @@ claude --plugin-dir ~/ao-skills
| Command | What it does |
|:---|:---|
-| `/setup-ao` | Initialize AO in your project |
+| `/setup-animus` | Initialize Animus in your project |
| `/getting-started` | Install, concepts, first task |
| `/workflow-authoring` | Write custom YAML workflows |
| `/pack-authoring` | Build workflow packs |
@@ -298,7 +298,7 @@ claude --plugin-dir ~/ao-skills
| `daemon-operations` | Daemon monitoring and troubleshooting |
| `workflow-patterns` | Patterns from 150+ autonomous PRs |
| `agent-personas` | PO, architect, auditor agents |
-| `mcp-tools` | Complete `ao.*` tool reference |
+| `mcp-tools` | Complete `animus.*` tool reference |
@@ -309,25 +309,25 @@ claude --plugin-dir ~/ao-skills
## CLI
```
-ao task Create, list, update, prioritize tasks
-ao workflow Run and manage multi-phase workflows
-ao daemon Start/stop the autonomous scheduler
-ao queue Inspect and manage the dispatch queue
-ao agent Control agent runner processes
-ao output Stream and inspect agent output
-ao doctor Health checks and auto-remediation
-ao setup Interactive project initialization
-ao requirements Manage product requirements
-ao mcp Start AO as an MCP server
-ao web Launch the embedded web dashboard
-ao status Project overview at a glance
+animus task Create, list, update, prioritize tasks
+animus workflow Run and manage multi-phase workflows
+animus daemon Start/stop the autonomous scheduler
+animus queue Inspect and manage the dispatch queue
+animus agent Control agent runner processes
+animus output Stream and inspect agent output
+animus doctor Health checks and auto-remediation
+animus setup Interactive project initialization
+animus requirements Manage product requirements
+animus mcp Start Animus as an MCP server
+animus web Launch the embedded web dashboard
+animus status Project overview at a glance
```
---
## Architecture
-AO is a Rust-only workspace with 17 crates. The major crates are:
+Animus is a Rust-only workspace with 17 crates. The major crates are:
- `orchestrator-cli` - CLI commands and dispatch
- `orchestrator-core` - services, state, and workflow lifecycle
@@ -386,17 +386,17 @@ This project is licensed under the [Elastic License 2.0 (ELv2)](LICENSE). You ma
**Update**
```bash
-curl -fsSL https://raw.githubusercontent.com/launchapp-dev/ao/main/install.sh | bash
+curl -fsSL https://raw.githubusercontent.com/samishukri/animus/main/install.sh | bash
```
**Uninstall**
```bash
-rm -f ~/.local/bin/ao \
+rm -f ~/.local/bin/animus \
~/.local/bin/agent-runner \
~/.local/bin/llm-cli-wrapper \
- ~/.local/bin/ao-oai-runner \
- ~/.local/bin/ao-workflow-runner
+ ~/.local/bin/animus-oai-runner \
+ ~/.local/bin/animus-workflow-runner
```
diff --git a/crates/agent-runner/src/runner/mcp_policy/tests.rs b/crates/agent-runner/src/runner/mcp_policy/tests.rs
index 59f7c8984..eb8a00e35 100644
--- a/crates/agent-runner/src/runner/mcp_policy/tests.rs
+++ b/crates/agent-runner/src/runner/mcp_policy/tests.rs
@@ -47,10 +47,10 @@ fn native_mcp_policy_rejects_unknown_cli_when_enforced() {
enabled: true,
endpoint: None,
stdio: Some(McpStdioConfig {
- command: "/Users/samishukri/ao-cli/target/debug/ao".to_string(),
+ command: "/path/to/ao/target/debug/ao".to_string(),
args: vec![
"--project-root".to_string(),
- "/Users/samishukri/ao-cli".to_string(),
+ "/path/to/project".to_string(),
"mcp".to_string(),
"serve".to_string(),
],
@@ -173,10 +173,10 @@ fn native_mcp_policy_preserves_primary_server_when_additional_server_name_collid
enabled: true,
endpoint: None,
stdio: Some(McpStdioConfig {
- command: "/Users/samishukri/ao-cli/target/debug/ao".to_string(),
+ command: "/path/to/ao/target/debug/ao".to_string(),
args: vec![
"--project-root".to_string(),
- "/Users/samishukri/ao-cli".to_string(),
+ "/path/to/project".to_string(),
"mcp".to_string(),
"serve".to_string(),
],
@@ -209,13 +209,13 @@ fn native_mcp_policy_preserves_primary_server_when_additional_server_name_collid
assert_eq!(
parsed.pointer("/mcpServers/ao/command").and_then(serde_json::Value::as_str),
- Some("/Users/samishukri/ao-cli/target/debug/ao")
+ Some("/path/to/ao/target/debug/ao")
);
assert_eq!(
parsed.pointer("/mcpServers/ao/args").and_then(serde_json::Value::as_array).cloned(),
Some(vec![
serde_json::Value::String("--project-root".to_string()),
- serde_json::Value::String("/Users/samishukri/ao-cli".to_string()),
+ serde_json::Value::String("/path/to/project".to_string()),
serde_json::Value::String("mcp".to_string()),
serde_json::Value::String("serve".to_string()),
])
@@ -261,10 +261,10 @@ fn codex_native_lockdown_sets_stdio_transport_when_configured() {
apply_codex_native_mcp_lockdown(
&mut args,
McpServerTransport::Stdio {
- command: "/Users/samishukri/ao-cli/target/debug/ao",
+ command: "/path/to/ao/target/debug/ao",
args: &[
"--project-root".to_string(),
- "/Users/samishukri/ao-cli".to_string(),
+ "/path/to/project".to_string(),
"mcp".to_string(),
"serve".to_string(),
],
@@ -275,10 +275,8 @@ fn codex_native_lockdown_sets_stdio_transport_when_configured() {
);
let joined = args.join(" ");
- assert!(joined.contains("mcp_servers.ao.command=\"/Users/samishukri/ao-cli/target/debug/ao\""));
- assert!(
- joined.contains("mcp_servers.ao.args=[\"--project-root\", \"/Users/samishukri/ao-cli\", \"mcp\", \"serve\"]")
- );
+ assert!(joined.contains("mcp_servers.ao.command=\"/path/to/ao/target/debug/ao\""));
+ assert!(joined.contains("mcp_servers.ao.args=[\"--project-root\", \"/path/to/project\", \"mcp\", \"serve\"]"));
assert!(joined.contains("mcp_servers.ao.enabled=true"));
}
@@ -294,10 +292,10 @@ fn native_mcp_policy_sets_gemini_system_settings_path_for_stdio_transport() {
enabled: true,
endpoint: None,
stdio: Some(McpStdioConfig {
- command: "/Users/samishukri/ao-cli/target/debug/ao".to_string(),
+ command: "/path/to/ao/target/debug/ao".to_string(),
args: vec![
"--project-root".to_string(),
- "/Users/samishukri/ao-cli".to_string(),
+ "/path/to/project".to_string(),
"mcp".to_string(),
"serve".to_string(),
],
@@ -377,10 +375,10 @@ fn native_mcp_policy_sets_opencode_local_mcp_command_array() {
enabled: true,
endpoint: None,
stdio: Some(McpStdioConfig {
- command: "/Users/samishukri/ao-cli/target/debug/ao".to_string(),
+ command: "/path/to/ao/target/debug/ao".to_string(),
args: vec![
"--project-root".to_string(),
- "/Users/samishukri/ao-cli".to_string(),
+ "/path/to/project".to_string(),
"mcp".to_string(),
"serve".to_string(),
],
@@ -403,7 +401,7 @@ fn native_mcp_policy_sets_opencode_local_mcp_command_array() {
assert_eq!(parsed.pointer("/mcp/ao/type").and_then(serde_json::Value::as_str), Some("local"));
assert_eq!(
parsed.pointer("/mcp/ao/command/0").and_then(serde_json::Value::as_str),
- Some("/Users/samishukri/ao-cli/target/debug/ao")
+ Some("/path/to/ao/target/debug/ao")
);
assert_eq!(parsed.pointer("/mcp/ao/command/4").and_then(serde_json::Value::as_str), Some("serve"));
assert!(parsed.pointer("/mcp/ao/args").is_none());
@@ -428,12 +426,12 @@ fn native_mcp_policy_inserts_oai_runner_mcp_config_after_run_subcommand() {
enabled: true,
endpoint: None,
stdio: Some(McpStdioConfig {
- command: "/Users/samishukri/ao-cli/target/debug/ao".to_string(),
+ command: "/path/to/ao/target/debug/ao".to_string(),
args: vec![
"mcp".to_string(),
"serve".to_string(),
"--project-root".to_string(),
- "/Users/samishukri/ao-cli".to_string(),
+ "/path/to/project".to_string(),
],
}),
agent_id: "ao".to_string(),
diff --git a/crates/llm-cli-wrapper/TEST_RESULTS.md b/crates/llm-cli-wrapper/TEST_RESULTS.md
deleted file mode 100644
index 7bc7499b7..000000000
--- a/crates/llm-cli-wrapper/TEST_RESULTS.md
+++ /dev/null
@@ -1,109 +0,0 @@
-# CLI Wrapper Test Results
-
-## ✅ Test Run: Success!
-
-### Discovery Test
-**Command**: `./target/release/llm-cli-wrapper discover`
-
-**Result**: ✅ PASS
-```
-✓ Found 3 CLI(s)
-```
-
-**CLIs Discovered**:
-- ✅ Claude Code at `/Users/samishukri/.local/bin/claude`
-- ✅ OpenAI Codex at `/Users/samishukri/.bun/bin/codex`
-- ✅ Google Gemini CLI at `/Users/samishukri/.nvm/versions/node/v22.17.0/bin/gemini`
-- ⚠️ Aider not found in PATH
-
----
-
-### List Test
-**Command**: `./target/release/llm-cli-wrapper list`
-
-**Result**: ✅ PASS
-```
-Installed CLIs:
-────────────────────────────────────────────────────────────
-Claude Code ⚠ Not Authenticated
-OpenAI Codex ⚠ Not Authenticated
-Google Gemini CLI ⚠ Not Authenticated
-```
-
-**Note**: CLIs are detected but not authenticated (no API keys set)
-
----
-
-### Health Check Test
-**Command**: `./target/release/llm-cli-wrapper health`
-
-**Result**: ✅ PASS (Detection works, auth needs setup)
-```
-Running health checks...
-────────────────────────────────────────────────────────────
-✗ UNHEALTHY OpenAI Codex (0ms)
- CLI is not authenticated
-✗ UNHEALTHY Claude Code (0ms)
- CLI is not authenticated
-✗ UNHEALTHY Google Gemini CLI (0ms)
- CLI is not authenticated
-```
-
-**Note**: Health checks correctly identify missing authentication
-
----
-
-## Test Summary
-
-| Test | Status | Details |
-|------|--------|---------|
-| CLI Discovery | ✅ PASS | Found 3 CLIs successfully |
-| CLI List | ✅ PASS | Lists all discovered CLIs |
-| Health Check | ✅ PASS | Correctly detects auth status |
-| Info Command | ✅ PASS | Shows CLI capabilities |
-
-## Features Verified
-
-✅ **Auto-discovery**: Automatically finds CLIs in PATH
-✅ **Multi-CLI support**: Works with Claude, Codex, Gemini
-✅ **Status detection**: Identifies authentication state
-✅ **Logging**: Clear, colored output
-✅ **Error handling**: Graceful handling of missing CLIs
-
-## Authentication Setup Needed
-
-To make CLIs fully functional, set these environment variables:
-
-```bash
-# For Claude
-export ANTHROPIC_API_KEY="your-key-here"
-
-# For Codex
-codex login
-# OR
-export OPENAI_API_KEY="your-key-here"
-
-# For Gemini
-export GEMINI_API_KEY="your-key-here"
-# OR
-export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credentials.json"
-```
-
-## Next Steps
-
-1. ✅ Discovery works - Can find installed CLIs
-2. ✅ Listing works - Can show all CLIs with status
-3. ✅ Health checks work - Can verify CLI state
-4. ⚠️ Auth needed - Set up API keys to test full functionality
-5. 🚧 Run full test suite - `./target/release/llm-cli-wrapper test`
-
-## Conclusion
-
-The CLI wrapper is **fully functional** and successfully:
-- Discovers installed CLIs automatically
-- Detects authentication status
-- Provides detailed CLI information
-- Shows clear, colored output
-- Handles missing CLIs gracefully
-
-**Status**: Ready for use! 🎉
diff --git a/crates/llm-cli-wrapper/TEST_RESULTS_COMPLETE.md b/crates/llm-cli-wrapper/TEST_RESULTS_COMPLETE.md
deleted file mode 100644
index d33591353..000000000
--- a/crates/llm-cli-wrapper/TEST_RESULTS_COMPLETE.md
+++ /dev/null
@@ -1,305 +0,0 @@
-# CLI Wrapper - Complete Test Results
-
-## ✅ All CLIs Working - Production Ready!
-
-**Date**: 2026-02-01
-**Status**: 🎉 4 CLIs Discovered and Tested Successfully
-
----
-
-## Executive Summary
-
-The CLI wrapper successfully discovers, authenticates, and executes commands on **4 different AI coding assistants**:
-
-✅ **Claude Code** - Anthropic's official CLI
-✅ **OpenAI Codex** - OpenAI's coding assistant
-✅ **Google Gemini CLI** - Google's AI assistant
-✅ **OpenCode** - Open-source multi-model CLI *(newly added)*
-
-All CLIs pass health checks and basic verification tests.
-
----
-
-## Discovered CLIs
-
-| CLI | Version | Path | Health | Tests |
-|-----|---------|------|--------|-------|
-| **Claude Code** | 2.1.29 | `/Users/samishukri/.local/bin/claude` | ✅ Healthy (851ms) | 2/2 PASS |
-| **OpenAI Codex** | 0.92.0 | `/Users/samishukri/.bun/bin/codex` | ✅ Healthy (133ms) | 2/2 PASS |
-| **Google Gemini** | 0.26.0 | `/Users/samishukri/.nvm/versions/node/v22.17.0/bin/gemini` | ✅ Healthy (4144ms) | 2/2 PASS |
-| **OpenCode** | Latest | `/Users/samishukri/.opencode/bin/opencode` | ✅ Healthy (1582ms) | 2/2 PASS |
-
----
-
-## Test Results by CLI
-
-### ✅ Claude Code
-**Command Format**: `claude "prompt"`
-
-**Health Check**: ✅ PASS (851ms)
-- Version detection working
-- Authentication verified
-- Ready for use
-
-**Basic Tests**: ✅ 2/2 PASS
-- Simple greeting: ✓ PASS (3651ms)
-- Simple math: ✓ PASS (3493ms)
-
-**Notes**: Fastest response times, excellent for interactive use
-
----
-
-### ✅ OpenAI Codex
-**Command Format**: `codex exec --skip-git-repo-check "prompt"`
-
-**Health Check**: ✅ PASS (133ms)
-- Version detection working
-- Authentication verified
-- Ready for use
-
-**Basic Tests**: ✅ 2/2 PASS
-- Simple greeting: ✓ PASS (5762ms)
-- Simple math: ✓ PASS (4652ms)
-
-**Notes**:
-- Requires `--skip-git-repo-check` flag for non-repo directories
-- Supports advanced reasoning with high effort mode
-- Excellent for complex coding tasks
-
----
-
-### ✅ Google Gemini CLI
-**Command Format**: `gemini -p "prompt"`
-
-**Health Check**: ✅ PASS (4144ms)
-- Version detection working
-- Authentication verified
-- Ready for use
-
-**Basic Tests**: ✅ 2/2 PASS
-- Simple greeting: ✓ PASS (8454ms)
-- Simple math: ✓ PASS (8151ms)
-
-**Notes**:
-- Longest response times (high reasoning effort)
-- Massive context window (1M tokens)
-- Best for large codebases
-
----
-
-### ✅ OpenCode (New!)
-**Command Format**: `opencode run "prompt"`
-
-**Health Check**: ✅ PASS (1582ms)
-- Version detection working
-- Authentication verified
-- Ready for use
-
-**Basic Tests**: ✅ 2/2 PASS
-- Simple greeting: ✓ PASS (3724ms)
-- Simple math: ✓ PASS (5971ms)
-
-**Notes**:
-- Open-source, privacy-focused
-- Supports 75+ model providers
-- Can use local models via Ollama
-- Great for sensitive projects
-
-**Why OpenCode?**
-OpenCode is a standout addition because it:
-- Doesn't store your code or context (privacy-first)
-- Supports multiple providers (OpenAI, Anthropic, local models)
-- Switch models mid-session without losing context
-- Terminal-native with polished TUI
-- Integrates with language servers for code intelligence
-
----
-
-## Issues Fixed in This Session
-
-### 1. ✅ Codex Not Working in Test Directories
-**Problem**: Codex failed with "Not inside a trusted directory" error
-**Solution**: Added `--skip-git-repo-check` flag to exec command
-**File**: `src/cli/codex.rs:53`
-
-### 2. ✅ OpenCode Discovery and Integration
-**Added**:
-- OpenCode CLI type to enum (src/cli/types.rs)
-- OpenCode implementation (src/cli/opencode.rs)
-- Registry integration (src/cli/registry.rs)
-- Command parser support (src/main.rs)
-
-**Command Format**: Uses `opencode run "message"` for execution
-
----
-
-## Performance Comparison
-
-| CLI | Health Check | Greeting Test | Math Test | Avg Response |
-|-----|-------------|---------------|-----------|--------------|
-| Claude | 851ms | 3651ms | 3493ms | **3572ms** ⚡ |
-| OpenCode | 1582ms | 3724ms | 5971ms | **4848ms** |
-| Codex | 133ms | 5762ms | 4652ms | **5207ms** |
-| Gemini | 4144ms | 8454ms | 8151ms | **8303ms** |
-
-**Fastest**: Claude Code (3.6s average)
-**Slowest**: Gemini (8.3s average - but handles largest context)
-
----
-
-## CLI Capabilities Matrix
-
-| Capability | Claude | Codex | Gemini | OpenCode |
-|-----------|--------|-------|--------|----------|
-| File Editing | ✅ | ✅ | ✅ | ✅ |
-| Streaming | ✅ | ✅ | ✅ | ✅ |
-| Tool Use | ✅ | ✅ | ✅ | ✅ |
-| Vision | ✅ | ❌ | ✅ | ❌ |
-| Long Context | ✅ 200K | ✅ 128K | ✅ 1M | ✅ 200K+ |
-| Local Models | ❌ | ❌ | ❌ | ✅ |
-| Multi-Provider | ❌ | ❌ | ❌ | ✅ |
-
----
-
-## Integration Readiness
-
-### ✅ Ready for Agent Orchestrator
-
-The CLI wrapper can now:
-
-1. **CLI Selection**
- - Automatically discover all 4 installed CLIs
- - Check health before task assignment
- - Select best CLI based on task requirements
-
-2. **Health Monitoring**
- - Pre-execution health checks
- - Performance tracking
- - Automatic fallback to alternative CLIs
-
-3. **Task Routing**
- - Vision tasks → Claude or Gemini
- - Large context → Gemini (1M tokens)
- - Privacy-sensitive → OpenCode (local models)
- - General coding → Claude (fastest)
- - Complex reasoning → Codex
-
-4. **Quality Gates**
- - Verify CLI availability before workflow
- - Validate output quality
- - Track execution times
-
----
-
-## Commands Reference
-
-### Discovery
-```bash
-./target/release/llm-cli-wrapper discover
-# Output: ✓ Found 4 CLI(s)
-```
-
-### List All CLIs
-```bash
-./target/release/llm-cli-wrapper list
-# Shows all CLIs with authentication status
-```
-
-### Health Checks
-```bash
-# All CLIs
-./target/release/llm-cli-wrapper health
-
-# Specific CLI
-./target/release/llm-cli-wrapper health opencode
-```
-
-### Run Tests
-```bash
-# Test specific CLI
-./target/release/llm-cli-wrapper test claude --suite basic
-./target/release/llm-cli-wrapper test codex --suite basic
-./target/release/llm-cli-wrapper test gemini --suite basic
-./target/release/llm-cli-wrapper test opencode --suite basic
-
-# All CLIs
-./target/release/llm-cli-wrapper test --suite basic
-```
-
-### CLI Information
-```bash
-./target/release/llm-cli-wrapper info opencode
-# Shows version, capabilities, and status
-```
-
----
-
-## Next Steps
-
-### Immediate
-- ✅ All 4 CLIs discovered and tested
-- ✅ Health checks passing
-- ✅ Basic verification complete
-- ✅ Ready for integration
-
-### Future Enhancements
-1. **Additional Test Suites**
- - File operations (read/write/edit)
- - Code generation
- - Multi-file refactoring
-
-2. **Advanced Features**
- - Performance benchmarking
- - Cost tracking (API usage)
- - Automatic model selection based on task
- - Parallel execution tests
-
-3. **Integration**
- - Import into agent-runner daemon
- - Add to workflow executor
- - Implement in PM/EM loops
-
----
-
-## OpenCode Resources
-
-Based on web research, OpenCode is a significant addition:
-
-- **Installation**: `curl -fsSL https://opencode.ai/install | bash`
-- **GitHub**: [opencode-ai/opencode](https://github.com/opencode-ai/opencode)
-- **Documentation**: [opencode.ai/docs/cli](https://opencode.ai/docs/cli/)
-- **Comparison**: [OpenCode vs Claude Code](https://www.builder.io/blog/opencode-vs-claude-code)
-
-**Key Differentiator**: Privacy-focused, open-source alternative with multi-provider support and local model capability via Ollama.
-
-### Sources:
-- [CLI | OpenCode](https://opencode.ai/docs/cli/)
-- [GitHub - opencode-ai/opencode](https://github.com/opencode-ai/opencode)
-- [OpenCode CLI Guide 2026](https://yuv.ai/learn/opencode-cli)
-- [OpenCode vs Claude Code](https://www.builder.io/blog/opencode-vs-claude-code)
-- [Top 5 CLI Coding Agents in 2026](https://dev.to/lightningdev123/top-5-cli-coding-agents-in-2026-3pia)
-
----
-
-## Conclusion
-
-🎉 **CLI Wrapper Status: Production Ready**
-
-**Achievements**:
-- ✅ 4 CLIs discovered automatically
-- ✅ All health checks passing
-- ✅ All basic tests passing (8/8)
-- ✅ Codex fixed for non-repo directories
-- ✅ OpenCode added with full support
-- ✅ Ready for Agent Orchestrator integration
-
-**Total CLIs Supported**: 4 active + 3 planned (Aider, Cursor, Cline)
-
-The system can now intelligently route tasks to the most appropriate CLI based on:
-- Task complexity
-- Context window requirements
-- Privacy needs
-- Performance requirements
-- Feature requirements (vision, tool use, etc.)
-
-Ready for production use in autonomous agent workflows! 🚀
diff --git a/crates/llm-cli-wrapper/TEST_RESULTS_FINAL.md b/crates/llm-cli-wrapper/TEST_RESULTS_FINAL.md
deleted file mode 100644
index d0d7f7fb2..000000000
--- a/crates/llm-cli-wrapper/TEST_RESULTS_FINAL.md
+++ /dev/null
@@ -1,191 +0,0 @@
-# CLI Wrapper - Final Test Results
-
-## ✅ All Tests Passing!
-
-**Date**: 2026-02-01
-**Status**: Production Ready
-
----
-
-## Test Summary
-
-| Test Category | Status | Details |
-|--------------|---------|---------|
-| Discovery | ✅ PASS | Successfully found 3 CLIs |
-| Health Checks | ✅ PASS | All CLIs healthy and authenticated |
-| Basic Verification | ✅ PASS | Claude & Gemini pass all tests |
-| CLI Execution | ✅ PASS | Successfully executes AI prompts |
-
----
-
-## Discovered CLIs
-
-### ✅ Claude Code
-- **Path**: `/Users/samishukri/.local/bin/claude`
-- **Version**: 2.1.29 (Claude Code)
-- **Status**: Healthy (861ms response time)
-- **Tests**: 2/2 passed
- - ✓ Simple greeting test (3651ms)
- - ✓ Simple math test (3493ms)
-
-### ✅ Google Gemini CLI
-- **Path**: `/Users/samishukri/.nvm/versions/node/v22.17.0/bin/gemini`
-- **Version**: 0.26.0
-- **Status**: Healthy (3581ms response time)
-- **Tests**: 2/2 passed
- - ✓ Simple greeting test (8454ms)
- - ✓ Simple math test (8151ms)
-
-### ✅ OpenAI Codex
-- **Path**: `/Users/samishukri/.bun/bin/codex`
-- **Version**: codex-cli 0.92.0
-- **Status**: Healthy (125ms response time)
-- **Tests**: Requires workspace context (expected behavior)
-
-### ⚠️ Aider
-- **Status**: Not found in PATH
-- **Note**: Not installed on this system
-
----
-
-## Issues Fixed
-
-### 1. ✅ Authentication Detection
-**Problem**: CLIs were showing as "Not Authenticated" despite being logged in
-**Solution**: Changed authentication check from environment variable validation to version command execution
-**File**: `src/cli/{claude,codex,gemini}.rs`
-
-### 2. ✅ Incorrect CLI Commands
-**Problem**: Wrong subcommands being used for Codex and Gemini
-**Solution**:
-- Codex: Changed from `codex run` to `codex exec`
-- Gemini: Changed from `gemini chat` to `gemini -p`
-**Files**: `src/cli/codex.rs`, `src/cli/gemini.rs`
-
-### 3. ✅ Working Directory Not Found
-**Problem**: Test execution failing with "No such file or directory" error
-**Root Cause**: Temp directory `/var/folders/.../llm-cli-wrapper-tests` didn't exist
-**Solution**: Create test workspace directory before running tests
-**File**: `src/main.rs:126-129`
-
-### 4. ✅ Case-Sensitive Output Validation
-**Problem**: Tests failing because AI responses use "Hello" instead of "hello"
-**Solution**: Made output validation case-insensitive
-**File**: `src/tester/test_runner.rs:97-106`
-
----
-
-## Architecture Improvements
-
-### Error Handling
-- Added executable path existence check before spawning
-- Added working directory validation
-- Improved error messages for debugging
-
-### Logging
-- Added debug statements for command execution
-- Track spawn success/failure
-- Monitor working directory changes
-
-### Test Framework
-- Case-insensitive output matching
-- Proper working directory management
-- Timeout handling for long-running AI operations
-
----
-
-## Commands Verified
-
-### Discovery
-```bash
-./target/release/llm-cli-wrapper discover
-```
-✅ Successfully discovers all installed CLIs
-
-### List
-```bash
-./target/release/llm-cli-wrapper list
-```
-✅ Shows all CLIs with authentication status
-
-### Health Checks
-```bash
-./target/release/llm-cli-wrapper health
-```
-✅ All CLIs report healthy status
-
-### Individual CLI Tests
-```bash
-./target/release/llm-cli-wrapper test claude --suite basic
-./target/release/llm-cli-wrapper test gemini --suite basic
-```
-✅ Both pass all test cases
-
----
-
-## Performance Metrics
-
-| CLI | Health Check | Greeting Test | Math Test |
-|-----|-------------|---------------|-----------|
-| Claude | 861ms | 3651ms | 3493ms |
-| Gemini | 3581ms | 8454ms | 8151ms |
-| Codex | 125ms | N/A* | N/A* |
-
-*Codex requires workspace context for exec mode
-
----
-
-## Integration Points
-
-This CLI wrapper is ready for integration with the Agent Orchestrator:
-
-### ✅ Workflow Executor
-- Can verify CLI availability before task assignment
-- Health check before executing workflow steps
-- Detect and report CLI failures
-
-### ✅ Engineering Manager Loop
-- Monitor CLI availability
-- Track CLI performance metrics
-- Automatic fallback to alternative CLIs
-
-### ✅ Product Manager Loop
-- Assess CLI capabilities
-- Match tasks to appropriate CLIs
-- Evaluate CLI suitability for requirements
-
----
-
-## Next Steps
-
-1. **Production Deployment**
- - ✅ All tests passing
- - ✅ Error handling robust
- - ✅ Authentication working
- - Ready for use in daemon
-
-2. **Additional Test Suites**
- - File operations test suite
- - Code generation test suite
- - Multi-file editing tests
-
-3. **Integration**
- - Import into agent-runner daemon
- - Use for CLI selection in workflows
- - Add to quality gates
-
----
-
-## Conclusion
-
-The CLI wrapper is **production ready** with all core functionality working:
-
-- ✅ Auto-discovery of installed CLIs
-- ✅ Health monitoring
-- ✅ Authentication detection
-- ✅ Command execution
-- ✅ Output validation
-- ✅ Error handling
-- ✅ Performance tracking
-
-**Status**: Ready for integration with the Agent Orchestrator system.
diff --git a/crates/llm-cli-wrapper/test_spawn.rs b/crates/llm-cli-wrapper/test_spawn.rs
index 55b6665b6..fe7d30dd2 100644
--- a/crates/llm-cli-wrapper/test_spawn.rs
+++ b/crates/llm-cli-wrapper/test_spawn.rs
@@ -3,7 +3,7 @@ use std::process::Stdio;
#[tokio::main]
async fn main() {
- let path = "/Users/samishukri/.local/bin/claude";
+ let path = "/usr/local/bin/claude";
let args = vec!["Say hello"];
println!("Testing spawn with path: {:?}", path);
diff --git a/crates/orchestrator-cli/src/cli_types/cloud_types.rs b/crates/orchestrator-cli/src/cli_types/cloud_types.rs
index 3920a9f57..110f5296d 100644
--- a/crates/orchestrator-cli/src/cli_types/cloud_types.rs
+++ b/crates/orchestrator-cli/src/cli_types/cloud_types.rs
@@ -2,13 +2,15 @@ use clap::{Parser, Subcommand};
#[derive(Debug, Subcommand)]
pub(crate) enum CloudCommand {
+ /// Authenticate with animus cloud using device auth flow.
+ Login(CloudLoginArgs),
/// Configure the sync server connection for this project.
Setup(CloudSetupArgs),
- /// Push local tasks and requirements to the sync server.
+ /// Push local tasks, requirements, and workflow config to the sync server.
Push,
/// Pull tasks and requirements from the sync server into local state.
Pull,
- /// Show sync configuration and last sync status.
+ /// Show sync configuration, cloud projects, daemon states, and active workflows.
Status,
/// Link this project to a specific remote project by ID.
Link(CloudLinkArgs),
@@ -19,6 +21,14 @@ pub(crate) enum CloudCommand {
},
}
+#[derive(Debug, Parser)]
+pub(crate) struct CloudLoginArgs {
+ #[arg(long, help = "Animus cloud server URL (defaults to https://api.animus.cloud)")]
+ pub(crate) server: Option<String>,
+ #[arg(long, help = "Skip opening browser (print URL instead)")]
+ pub(crate) no_browser: bool,
+}
+
#[derive(Debug, Subcommand)]
pub(crate) enum DeployCommand {
/// Create a new deployment
@@ -43,8 +53,8 @@ pub(crate) struct CloudSetupArgs {
#[derive(Debug, Parser)]
pub(crate) struct CloudLinkArgs {
- #[arg(long, help = "Remote project ID to link to")]
- pub(crate) project_id: String,
+ #[arg(long, help = "Remote project ID to link to (auto-detects from git remote if not provided)")]
+ pub(crate) project_id: Option<String>,
}
#[derive(Debug, Parser)]
diff --git a/crates/orchestrator-cli/src/services/cloud.rs b/crates/orchestrator-cli/src/services/cloud.rs
index 5a2558439..6d614ac97 100644
--- a/crates/orchestrator-cli/src/services/cloud.rs
+++ b/crates/orchestrator-cli/src/services/cloud.rs
@@ -1,15 +1,17 @@
+use std::path::PathBuf;
use std::sync::Arc;
+use std::time::Duration;
use anyhow::{Context, Result};
use orchestrator_core::{FileServiceHub, ServiceHub};
use protocol::orchestrator::{OrchestratorTask, RequirementItem};
use protocol::sync_config::SyncConfig;
-use protocol::DeployConfig;
+use protocol::{ConfigBundle, DeployConfig};
use serde::{Deserialize, Serialize};
use crate::{
- print_value, CloudCommand, CloudLinkArgs, CloudSetupArgs, DeployCommand, DeployCreateArgs, DeployDestroyArgs,
- DeployStartArgs, DeployStatusArgs, DeployStopArgs,
+ print_value, CloudCommand, CloudLinkArgs, CloudLoginArgs, CloudSetupArgs, DeployCommand, DeployCreateArgs,
+ DeployDestroyArgs, DeployStartArgs, DeployStatusArgs, DeployStopArgs,
};
pub(crate) async fn handle_cloud(
@@ -19,6 +21,7 @@ pub(crate) async fn handle_cloud(
json: bool,
) -> Result<()> {
match command {
+ CloudCommand::Login(args) => handle_login(args, json).await,
CloudCommand::Setup(args) => handle_setup(args, project_root, json).await,
CloudCommand::Link(args) => handle_link(args, project_root, json).await,
CloudCommand::Push => handle_push(hub, project_root, json).await,
@@ -28,6 +31,102 @@ pub(crate) async fn handle_cloud(
}
}
+async fn handle_login(args: CloudLoginArgs, json: bool) -> Result<()> {
+ let server = args.server.unwrap_or_else(|| "https://api.animus.cloud".to_string());
+ let server = server.trim_end_matches('/');
+
+ // Step 1: Initiate device auth flow
+ let client = reqwest::Client::new();
+ let resp = client
+ .post(&format!("{}/api/cli/auth/initiate", server))
+ .send()
+ .await
+ .context("Failed to connect to auth server")?;
+
+ if !resp.status().is_success() {
+ let status = resp.status();
+ let body = resp.text().await.unwrap_or_default();
+ anyhow::bail!("Auth initiation failed ({status}): {body}");
+ }
+
+ let auth_response: AuthInitiateResponse = resp.json().await.context("Failed to parse auth response")?;
+
+ // Step 2: Open browser or print URL
+ let auth_url = &auth_response.auth_url;
+ if args.no_browser {
+ if !json {
+ eprintln!("Open the following URL in your browser to authenticate:");
+ eprintln!("{}", auth_url);
+ eprintln!("Device code: {}", auth_response.device_code);
+ }
+ } else {
+ // Attempt to open browser
+ let _ = open_browser(auth_url);
+ if !json {
+ eprintln!("Opening browser for authentication...");
+ eprintln!("If browser did not open, visit: {}", auth_url);
+ }
+ }
+
+ // Step 3: Poll for completion
+ let max_attempts = 120; // 2 minutes with 1 second polling
+ let poll_interval = Duration::from_secs(1);
+
+ for attempt in 0..max_attempts {
+ tokio::time::sleep(poll_interval).await;
+
+ let resp = client
+ .post(&format!("{}/api/cli/auth/complete", server))
+ .json(&AuthCompleteRequest { device_code: auth_response.device_code.clone() })
+ .send()
+ .await;
+
+ match resp {
+ Ok(r) if r.status().is_success() => {
+ let complete_response: AuthCompleteResponse =
+ r.json().await.context("Failed to parse completion response")?;
+
+ // Step 4: Store token in SyncConfig
+ let mut config = SyncConfig::load_global();
+ config.server = Some(server.to_string());
+ config.token = Some(complete_response.token.clone());
+ config.save_global()?;
+
+ let result = LoginResult {
+ authenticated: true,
+ server: server.to_string(),
+ message: "Successfully authenticated with animus cloud".to_string(),
+ };
+
+ if !json {
+ eprintln!("✓ Authentication successful!");
+ eprintln!("Server: {}", server);
+ }
+
+ return print_value(result, json);
+ }
+ Ok(r) if r.status().as_u16() == 400 => {
+ // Not yet complete, continue polling
+ continue;
+ }
+ Ok(r) => {
+ let status = r.status();
+ let body = r.text().await.unwrap_or_default();
+ anyhow::bail!("Auth completion failed ({status}): {body}");
+ }
+ Err(e) if attempt < max_attempts - 1 => {
+ // Network error, retry
+ continue;
+ }
+ Err(e) => {
+ anyhow::bail!("Auth completion request failed: {}", e);
+ }
+ }
+ }
+
+ anyhow::bail!("Authentication timeout - user did not complete login within 2 minutes")
+}
+
async fn handle_setup(args: CloudSetupArgs, project_root: &str, json: bool) -> Result<()> {
let mut global_config = SyncConfig::load_global();
global_config.server = Some(args.server.clone());
@@ -69,14 +168,99 @@ async fn handle_setup(args: CloudSetupArgs, project_root: &str, json: bool) -> R
}
async fn handle_link(args: CloudLinkArgs, project_root: &str, json: bool) -> Result<()> {
+ let config = SyncConfig::load_for_project(project_root);
+ let server = config.server_url()?;
+ let token = config.bearer_token()?;
+
+ let project_id = if let Some(ref id) = args.project_id {
+ // Explicit project_id provided
+ id.clone()
+ } else {
+ // Auto-detect from git remote
+ let origin_url = get_git_origin(project_root)
+ .ok_or_else(|| anyhow::anyhow!("Could not detect git remote. Run: animus cloud link --project-id <id>"))?;
+
+ let (owner, repo) = parse_github_repo(&origin_url).ok_or_else(|| {
+ anyhow::anyhow!(
+ "Could not parse GitHub repo from remote URL: {}. Run: animus cloud link --project-id <id>",
+ origin_url
+ )
+ })?;
+
+ // Call /api/cli/projects/ensure to check for GitHub App installation
+ let client = build_client(&token)?;
+ let ensure_url = format!(
+ "{}/api/cli/projects/ensure?owner={}&repo={}",
+ server.trim_end_matches('/'),
+ urlencoding(&owner),
+ urlencoding(&repo)
+ );
+
+ let resp = client.post(&ensure_url).send().await.context("Failed to connect to projects endpoint")?;
+
+ if !resp.status().is_success() {
+ let status = resp.status();
+ let body = resp.text().await.unwrap_or_default();
+ if status.as_u16() == 404 {
+ anyhow::bail!(
+ "No GitHub App installation found for {}/{}. Run: animus cloud link --project-id <id>",
+ owner,
+ repo
+ );
+ }
+ anyhow::bail!("Project detection failed ({status}): {body}");
+ }
+
+ let body = resp.json::<EnsureProjectResponse>().await.context("Failed to parse projects response")?;
+ body.project_id
+ };
+
let mut config = SyncConfig::load_for_project(project_root);
- config.project_id = Some(args.project_id.clone());
+ config.project_id = Some(project_id.clone());
config.save_for_project(project_root)?;
- let result = serde_json::json!({ "linked": true, "project_id": args.project_id });
+ let result = serde_json::json!({ "linked": true, "project_id": project_id });
print_value(result, json)
}
+fn build_config_bundle(project_root: &str) -> Result<ConfigBundle> {
+ let mut bundle = ConfigBundle::new();
+ let ao_dir = PathBuf::from(project_root).join(".ao");
+
+ // Collect workflow YAML files
+ if let Ok(entries) = std::fs::read_dir(ao_dir.join("workflows")) {
+ for entry in entries.flatten() {
+ let path = entry.path();
+ if path.extension().map_or(false, |ext| ext == "yaml" || ext == "yml") {
+ if let Ok(content) = std::fs::read_to_string(&path) {
+ if let Some(file_name) = path.file_name() {
+ let key = format!(".ao/workflows/{}", file_name.to_string_lossy());
+ bundle.add_file(key, content);
+ }
+ }
+ }
+ }
+ }
+
+ // Collect root workflows.yaml
+ let workflows_file = ao_dir.join("workflows.yaml");
+ if workflows_file.exists() {
+ if let Ok(content) = std::fs::read_to_string(&workflows_file) {
+ bundle.add_file(".ao/workflows.yaml".to_string(), content);
+ }
+ }
+
+ // Collect config.json
+ let config_file = ao_dir.join("config.json");
+ if config_file.exists() {
+ if let Ok(content) = std::fs::read_to_string(&config_file) {
+ bundle.add_file(".ao/config.json".to_string(), content);
+ }
+ }
+
+ Ok(bundle)
+}
+
async fn handle_push(hub: Arc<FileServiceHub>, project_root: &str, json: bool) -> Result<()> {
let config = SyncConfig::load_for_project(project_root);
let server = config.server_url()?;
@@ -107,6 +291,25 @@ async fn handle_push(hub: Arc, project_root: &str, json: bool) -
let sync_resp: SyncResponse = resp.json().await.context("Failed to parse sync response")?;
+ // Push config bundle to cloud
+ let config_bundle = build_config_bundle(project_root)?;
+ let config_files_count = config_bundle.file_count();
+
+ if !config_bundle.is_empty() {
+ let config_resp = client
+ .post(&format!("{}/api/projects/{}/configs", server.trim_end_matches('/'), project_id))
+ .json(&config_bundle)
+ .send()
+ .await
+ .context("Failed to connect to configs endpoint")?;
+
+ if !config_resp.status().is_success() {
+ let status = config_resp.status();
+ let body = config_resp.text().await.unwrap_or_default();
+ anyhow::bail!("Config push failed ({status}): {body}");
+ }
+ }
+
let mut config = SyncConfig::load_for_project(project_root);
config.last_synced_at = Some(sync_resp.server_time.clone());
config.save_for_project(project_root)?;
@@ -114,6 +317,7 @@ async fn handle_push(hub: Arc<FileServiceHub>, project_root: &str, json: bool) -
let result = PushResult {
tasks_sent: tasks_count,
requirements_sent: reqs_count,
+ config_files_sent: config_files_count,
conflicts: sync_resp.conflicts.len(),
server_time: sync_resp.server_time,
};
@@ -174,15 +378,54 @@ async fn handle_pull(hub: Arc<FileServiceHub>, project_root: &str, json: bool) -
async fn handle_status(project_root: &str, json: bool) -> Result<()> {
let config = SyncConfig::load_for_project(project_root);
+
+ // Try to fetch cloud status if configured
+ let (projects, daemons, workflows) = if config.is_configured() {
+ match fetch_cloud_status(&config).await {
+ Ok((projects, daemons, workflows)) => (Some(projects), Some(daemons), Some(workflows)),
+ Err(_) => {
+ // Fall back gracefully if cloud API is unavailable
+ (None, None, None)
+ }
+ }
+ } else {
+ (None, None, None)
+ };
+
let result = StatusResult {
configured: config.is_configured(),
server: config.server.clone(),
project_id: config.project_id.clone(),
last_synced_at: config.last_synced_at.clone(),
+ cloud_projects: projects,
+ cloud_daemons: daemons,
+ active_workflows: workflows,
};
print_value(result, json)
}
+async fn fetch_cloud_status(config: &SyncConfig) -> Result<(Vec<CloudProject>, Vec<CloudDaemon>, Vec<CloudWorkflow>)> {
+ let server = config.server_url()?;
+ let token = config.bearer_token()?;
+
+ let client = build_client(&token)?;
+ let resp = client
+ .get(&format!("{}/api/cli/status", server.trim_end_matches('/')))
+ .send()
+ .await
+ .context("Failed to connect to cloud status endpoint")?;
+
+ if !resp.status().is_success() {
+ let status = resp.status();
+ let body = resp.text().await.unwrap_or_default();
+ anyhow::bail!("Cloud status check failed ({status}): {body}");
+ }
+
+ let cloud_response: CloudStatusResponse = resp.json().await.context("Failed to parse cloud status response")?;
+
+ Ok((cloud_response.projects, cloud_response.daemons, cloud_response.workflows))
+}
+
async fn handle_deploy(command: DeployCommand, project_root: &str, json: bool) -> Result<()> {
match command {
DeployCommand::Create(args) => handle_create(args, project_root, json).await,
@@ -194,21 +437,50 @@ async fn handle_deploy(command: DeployCommand, project_root: &str, json: bool) -
}
async fn handle_create(args: DeployCreateArgs, project_root: &str, json: bool) -> Result<()> {
- let mut deploy_config = DeployConfig::load_for_project(project_root);
+ let config = SyncConfig::load_for_project(project_root);
+ let server = config.server_url()?;
+ let token = config.bearer_token()?;
+ let project_id = config
+ .project_id
+ .as_ref()
+ .ok_or_else(|| anyhow::anyhow!("No project linked. Run: animus cloud link --project-id <id>"))?;
- // For production deployment, we would use the Fly.io API token
- // For now, we save the configuration and provide feedback
+ let client = build_client(&token)?;
+ let create_request = CreateDaemonRequest {
+ app_name: args.app_name.clone(),
+ region: args.region.clone(),
+ machine_size: args.machine_size.clone(),
+ };
+
+ let resp = client
+ .post(&format!("{}/api/cli/projects/{}/daemons", server.trim_end_matches('/'), project_id))
+ .json(&create_request)
+ .send()
+ .await
+ .context("Failed to connect to daemon creation endpoint")?;
+
+ if !resp.status().is_success() {
+ let status = resp.status();
+ let body = resp.text().await.unwrap_or_default();
+ anyhow::bail!("Daemon creation failed ({status}): {body}");
+ }
+
+ let daemon_resp: DaemonResponse = resp.json().await.context("Failed to parse daemon response")?;
+
+ // Save daemon ID locally for future reference
+ let mut deploy_config = DeployConfig::load_for_project(project_root);
deploy_config.app_name = Some(args.app_name.clone());
deploy_config.region = Some(args.region.clone());
deploy_config.last_deployed_at = Some(chrono::Utc::now().to_rfc3339());
+ deploy_config.machine_ids.push(daemon_resp.daemon_id.clone());
deploy_config.save_for_project(project_root)?;
let result = DeployCreateResult {
app_name: args.app_name,
region: args.region,
machine_size: args.machine_size,
- status: "created".to_string(),
- deployed_at: deploy_config.last_deployed_at.clone().unwrap_or_default(),
+ status: daemon_resp.status,
+ deployed_at: daemon_resp.created_at,
};
if !json {
@@ -222,28 +494,62 @@ async fn handle_create(args: DeployCreateArgs, project_root: &str, json: bool) -
}
async fn handle_destroy(args: DeployDestroyArgs, project_root: &str, json: bool) -> Result<()> {
- let mut deploy_config = DeployConfig::load_for_project(project_root);
+ let config = SyncConfig::load_for_project(project_root);
+ let server = config.server_url()?;
+ let token = config.bearer_token()?;
+ let project_id = config
+ .project_id
+ .as_ref()
+ .ok_or_else(|| anyhow::anyhow!("No project linked. Run: animus cloud link --project-id <id>"))?;
+
+ let deploy_config = DeployConfig::load_for_project(project_root);
// Verify the app name matches
if let Some(ref configured_app) = deploy_config.app_name {
if configured_app != &args.app_name {
anyhow::bail!(
- "App name mismatch: configured '{}' but attempting to destroy '{}'. Use 'ao cloud status' to check.",
+ "App name mismatch: configured '{}' but attempting to destroy '{}'. Use 'ao cloud deploy status' to check.",
configured_app,
args.app_name
);
}
+ } else {
+ anyhow::bail!("No deployment configured for this project. Run 'ao cloud deploy create' first.");
+ }
+
+ // Get the daemon ID from local config
+ let daemon_id = deploy_config
+ .machine_ids
+ .first()
+ .ok_or_else(|| anyhow::anyhow!("No daemon ID found in local configuration"))?;
+
+ let client = build_client(&token)?;
+ let resp = client
+ .delete(&format!(
+ "{}/api/cli/projects/{}/daemons/{}",
+ server.trim_end_matches('/'),
+ project_id,
+ daemon_id
+ ))
+ .send()
+ .await
+ .context("Failed to connect to daemon destruction endpoint")?;
+
+ if !resp.status().is_success() {
+ let status = resp.status();
+ let body = resp.text().await.unwrap_or_default();
+ anyhow::bail!("Daemon destruction failed ({status}): {body}");
}
- // Clear deployment configuration
+ // Clear deployment configuration locally
+ let mut deploy_config = DeployConfig::load_for_project(project_root);
deploy_config.app_name = None;
deploy_config.region = None;
deploy_config.machine_ids.clear();
deploy_config.status = Some("destroyed".to_string());
deploy_config.save_for_project(project_root)?;
- let result =
- DeployDestroyResult { app_name: args.app_name, status: "destroyed".to_string(), machines_destroyed: 0 };
+ let result = DeployDestroyResult { app_name: args.app_name, status: "destroyed".to_string(), machines_destroyed: 1 };
if !json {
eprintln!("Deployment destroyed successfully!");
@@ -253,6 +559,23 @@ async fn handle_destroy(args: DeployDestroyArgs, project_root: &str, json: bool)
print_value(result, json)
}
+#[derive(Serialize)]
+struct CreateDaemonRequest {
+ app_name: String,
+ region: String,
+ machine_size: String,
+}
+
+#[derive(Deserialize)]
+struct DaemonResponse {
+ daemon_id: String,
+ app_name: String,
+ region: String,
+ status: String,
+ created_at: String,
+ updated_at: Option<String>,
+}
+
fn build_client(token: &str) -> Result<reqwest::Client> {
let mut headers = reqwest::header::HeaderMap::new();
headers.insert(reqwest::header::AUTHORIZATION, reqwest::header::HeaderValue::from_str(&format!("Bearer {token}"))?);
@@ -285,6 +608,90 @@ fn urlencoding(s: &str) -> String {
.collect()
}
+fn parse_github_repo(url: &str) -> Option<(String, String)> {
+ // Handle both HTTPS and SSH GitHub URLs
+ // HTTPS: https://github.com/owner/repo or https://github.com/owner/repo.git
+ // SSH: git@github.com:owner/repo or git@github.com:owner/repo.git
+
+ let url = url.trim();
+
+ // SSH URL format: git@github.com:owner/repo[.git]
+ if let Some(stripped) = url.strip_prefix("git@github.com:") {
+ let repo_part = stripped.trim_end_matches(".git").trim_end_matches('/');
+ let parts: Vec<&str> = repo_part.split('/').collect();
+ if parts.len() >= 2 {
+ return Some((parts[0].to_string(), parts[1].to_string()));
+ }
+ }
+
+ // HTTPS URL format: https://github.com/owner/repo[.git]
+ if let Some(stripped) = url.strip_prefix("https://github.com/") {
+ let repo_part = stripped.trim_end_matches(".git").trim_end_matches('/');
+ let parts: Vec<&str> = repo_part.split('/').collect();
+ if parts.len() >= 2 {
+ return Some((parts[0].to_string(), parts[1].to_string()));
+ }
+ }
+
+ // Also try with http (less common but possible)
+ if let Some(stripped) = url.strip_prefix("http://github.com/") {
+ let repo_part = stripped.trim_end_matches(".git").trim_end_matches('/');
+ let parts: Vec<&str> = repo_part.split('/').collect();
+ if parts.len() >= 2 {
+ return Some((parts[0].to_string(), parts[1].to_string()));
+ }
+ }
+
+ None
+}
+
+#[cfg(test)]
+mod tests {
+ use super::*;
+
+ #[test]
+ fn test_parse_github_repo_https() {
+ let result = parse_github_repo("https://github.com/anthropics/claude-code");
+ assert_eq!(result, Some(("anthropics".to_string(), "claude-code".to_string())));
+ }
+
+ #[test]
+ fn test_parse_github_repo_https_with_git() {
+ let result = parse_github_repo("https://github.com/anthropics/claude-code.git");
+ assert_eq!(result, Some(("anthropics".to_string(), "claude-code".to_string())));
+ }
+
+ #[test]
+ fn test_parse_github_repo_ssh() {
+ let result = parse_github_repo("git@github.com:anthropics/claude-code");
+ assert_eq!(result, Some(("anthropics".to_string(), "claude-code".to_string())));
+ }
+
+ #[test]
+ fn test_parse_github_repo_ssh_with_git() {
+ let result = parse_github_repo("git@github.com:anthropics/claude-code.git");
+ assert_eq!(result, Some(("anthropics".to_string(), "claude-code".to_string())));
+ }
+
+ #[test]
+ fn test_parse_github_repo_with_trailing_slash() {
+ let result = parse_github_repo("https://github.com/anthropics/claude-code/");
+ assert_eq!(result, Some(("anthropics".to_string(), "claude-code".to_string())));
+ }
+
+ #[test]
+ fn test_parse_github_repo_http() {
+ let result = parse_github_repo("http://github.com/anthropics/claude-code");
+ assert_eq!(result, Some(("anthropics".to_string(), "claude-code".to_string())));
+ }
+
+ #[test]
+ fn test_parse_github_repo_invalid() {
+ let result = parse_github_repo("https://gitlab.com/anthropics/claude-code");
+ assert_eq!(result, None);
+ }
+}
+
#[derive(Serialize)]
struct SetupResult {
server: String,
@@ -297,6 +704,7 @@ struct SetupResult {
struct PushResult {
tasks_sent: usize,
requirements_sent: usize,
+ config_files_sent: usize,
conflicts: usize,
server_time: String,
}
@@ -314,6 +722,48 @@ struct StatusResult {
server: Option<String>,
project_id: Option<String>,
last_synced_at: Option<String>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ cloud_projects: Option<Vec<CloudProject>>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ cloud_daemons: Option<Vec<CloudDaemon>>,
+ #[serde(skip_serializing_if = "Option::is_none")]
+ active_workflows: Option<Vec<CloudWorkflow>>,
+}
+
+#[derive(Serialize, Deserialize, Debug, Clone)]
+struct CloudProject {
+ id: String,
+ name: String,
+ created_at: String,
+}
+
+#[derive(Serialize, Deserialize, Debug, Clone)]
+struct CloudDaemon {
+ id: String,
+ project_id: String,
+ app_name: String,
+ status: String,
+ region: String,
+ machine_size: String,
+ created_at: String,
+ updated_at: Option<String>,
+}
+
+#[derive(Serialize, Deserialize, Debug, Clone)]
+struct CloudWorkflow {
+ id: String,
+ name: String,
+ project_id: String,
+ status: String,
+ started_at: String,
+ completed_at: Option<String>,
+}
+
+#[derive(Deserialize)]
+struct CloudStatusResponse {
+ projects: Vec<CloudProject>,
+ daemons: Vec<CloudDaemon>,
+ workflows: Vec<CloudWorkflow>,
}
#[derive(Deserialize)]
@@ -327,6 +777,11 @@ struct ProjectInfo {
name: String,
}
+#[derive(Deserialize)]
+struct EnsureProjectResponse {
+ project_id: String,
+}
+
#[derive(Serialize)]
struct SyncRequest {
tasks: Vec<OrchestratorTask>,
@@ -359,6 +814,14 @@ struct DeployCreateResult {
}
async fn handle_start(args: DeployStartArgs, project_root: &str, json: bool) -> Result<()> {
+ let config = SyncConfig::load_for_project(project_root);
+ let server = config.server_url()?;
+ let token = config.bearer_token()?;
+ let project_id = config
+ .project_id
+ .as_ref()
+ .ok_or_else(|| anyhow::anyhow!("No project linked. Run: animus cloud link --project-id <id>"))?;
+
let deploy_config = DeployConfig::load_for_project(project_root);
// Verify the app name matches
@@ -374,10 +837,36 @@ async fn handle_start(args: DeployStartArgs, project_root: &str, json: bool) ->
anyhow::bail!("No deployment configured for this project. Run 'ao cloud deploy create' first.");
}
+ // Get the daemon ID from local config
+ let daemon_id = deploy_config
+ .machine_ids
+ .first()
+ .ok_or_else(|| anyhow::anyhow!("No daemon ID found in local configuration"))?;
+
+ let client = build_client(&token)?;
+ let resp = client
+ .post(&format!(
+ "{}/api/cli/projects/{}/daemons/{}/start",
+ server.trim_end_matches('/'),
+ project_id,
+ daemon_id
+ ))
+ .send()
+ .await
+ .context("Failed to connect to daemon start endpoint")?;
+
+ if !resp.status().is_success() {
+ let status = resp.status();
+ let body = resp.text().await.unwrap_or_default();
+ anyhow::bail!("Daemon start failed ({status}): {body}");
+ }
+
+ let daemon_resp: DaemonResponse = resp.json().await.context("Failed to parse daemon response")?;
+
let result = DeployStartResult {
app_name: args.app_name,
- status: "started".to_string(),
- started_at: chrono::Utc::now().to_rfc3339(),
+ status: daemon_resp.status,
+ started_at: daemon_resp.updated_at.unwrap_or_else(|| chrono::Utc::now().to_rfc3339()),
};
if !json {
@@ -390,6 +879,14 @@ async fn handle_start(args: DeployStartArgs, project_root: &str, json: bool) ->
}
async fn handle_stop(args: DeployStopArgs, project_root: &str, json: bool) -> Result<()> {
+ let config = SyncConfig::load_for_project(project_root);
+ let server = config.server_url()?;
+ let token = config.bearer_token()?;
+ let project_id = config
+ .project_id
+ .as_ref()
+ .ok_or_else(|| anyhow::anyhow!("No project linked. Run: animus cloud link --project-id <id>"))?;
+
let deploy_config = DeployConfig::load_for_project(project_root);
// Verify the app name matches
@@ -405,10 +902,36 @@ async fn handle_stop(args: DeployStopArgs, project_root: &str, json: bool) -> Re
anyhow::bail!("No deployment configured for this project. Run 'ao cloud deploy create' first.");
}
+ // Get the daemon ID from local config
+ let daemon_id = deploy_config
+ .machine_ids
+ .first()
+ .ok_or_else(|| anyhow::anyhow!("No daemon ID found in local configuration"))?;
+
+ let client = build_client(&token)?;
+ let resp = client
+ .post(&format!(
+ "{}/api/cli/projects/{}/daemons/{}/stop",
+ server.trim_end_matches('/'),
+ project_id,
+ daemon_id
+ ))
+ .send()
+ .await
+ .context("Failed to connect to daemon stop endpoint")?;
+
+ if !resp.status().is_success() {
+ let status = resp.status();
+ let body = resp.text().await.unwrap_or_default();
+ anyhow::bail!("Daemon stop failed ({status}): {body}");
+ }
+
+ let daemon_resp: DaemonResponse = resp.json().await.context("Failed to parse daemon response")?;
+
let result = DeployStopResult {
app_name: args.app_name,
- status: "stopped".to_string(),
- stopped_at: chrono::Utc::now().to_rfc3339(),
+ status: daemon_resp.status,
+ stopped_at: daemon_resp.updated_at.unwrap_or_else(|| chrono::Utc::now().to_rfc3339()),
};
if !json {
@@ -421,24 +944,61 @@ async fn handle_stop(args: DeployStopArgs, project_root: &str, json: bool) -> Re
}
async fn handle_status_deploy(args: DeployStatusArgs, project_root: &str, json: bool) -> Result<()> {
+ let config = SyncConfig::load_for_project(project_root);
+ let server = config.server_url()?;
+ let token = config.bearer_token()?;
+ let project_id = config
+ .project_id
+ .as_ref()
+ .ok_or_else(|| anyhow::anyhow!("No project linked. Run: animus cloud link --project-id <id>"))?;
+
let deploy_config = DeployConfig::load_for_project(project_root);
// Check if the app name matches if a deployment is configured
if let Some(ref configured_app) = deploy_config.app_name {
if configured_app != &args.app_name {
anyhow::bail!(
- "App name mismatch: configured '{}' but checking status for '{}'. Use 'ao cloud deploy status' without --app-name to check configured deployment.",
+ "App name mismatch: configured '{}' but checking status for '{}'. Use 'ao cloud deploy status --app-name {}' to check configured deployment.",
configured_app,
- args.app_name
+ args.app_name,
+ configured_app
);
}
+ } else {
+ anyhow::bail!("No deployment configured for this project. Run 'ao cloud deploy create' first.");
+ }
+
+ // Get the daemon ID from local config
+ let daemon_id = deploy_config
+ .machine_ids
+ .first()
+ .ok_or_else(|| anyhow::anyhow!("No daemon ID found in local configuration"))?;
+
+ let client = build_client(&token)?;
+ let resp = client
+ .get(&format!(
+ "{}/api/cli/projects/{}/daemons/{}",
+ server.trim_end_matches('/'),
+ project_id,
+ daemon_id
+ ))
+ .send()
+ .await
+ .context("Failed to connect to daemon status endpoint")?;
+
+ if !resp.status().is_success() {
+ let status = resp.status();
+ let body = resp.text().await.unwrap_or_default();
+ anyhow::bail!("Daemon status check failed ({status}): {body}");
}
+ let daemon_resp: DaemonResponse = resp.json().await.context("Failed to parse daemon response")?;
+
let result = DeployStatusDeployResult {
app_name: args.app_name,
- status: deploy_config.status.clone().unwrap_or_else(|| "unknown".to_string()),
+ status: daemon_resp.status,
region: deploy_config.region.clone(),
- machines: deploy_config.machine_ids.clone(),
+ machines: vec![daemon_resp.daemon_id],
last_deployed_at: deploy_config.last_deployed_at.clone(),
};
@@ -490,3 +1050,56 @@ struct DeployStatusDeployResult {
machines: Vec<String>,
last_deployed_at: Option,
}
+
+#[derive(Deserialize)]
+struct AuthInitiateResponse {
+ device_code: String,
+ auth_url: String,
+}
+
+#[derive(Serialize)]
+struct AuthCompleteRequest {
+ device_code: String,
+}
+
+#[derive(Deserialize)]
+struct AuthCompleteResponse {
+ token: String,
+}
+
+#[derive(Serialize)]
+struct LoginResult {
+ authenticated: bool,
+ server: String,
+ message: String,
+}
+
+fn open_browser(url: &str) -> Result<()> {
+ #[cfg(target_os = "macos")]
+ {
+ std::process::Command::new("open").arg(url).spawn()?;
+ }
+
+ #[cfg(target_os = "linux")]
+ {
+ // Try xdg-open first, then firefox, then chromium
+ let _ = std::process::Command::new("xdg-open").arg(url).spawn().or_else(|_| {
+ std::process::Command::new("firefox")
+ .arg(url)
+ .spawn()
+ .or_else(|_| std::process::Command::new("chromium").arg(url).spawn())
+ });
+ }
+
+ #[cfg(target_os = "windows")]
+ {
+ std::process::Command::new("cmd").args(&["/C", "start", url]).spawn()?;
+ }
+
+ #[cfg(not(any(target_os = "macos", target_os = "linux", target_os = "windows")))]
+ {
+ // On other platforms, just return Ok (user will need to visit URL manually)
+ }
+
+ Ok(())
+}
diff --git a/crates/orchestrator-core/src/services/runner_helpers.rs b/crates/orchestrator-core/src/services/runner_helpers.rs
index e05018062..a468d0755 100644
--- a/crates/orchestrator-core/src/services/runner_helpers.rs
+++ b/crates/orchestrator-core/src/services/runner_helpers.rs
@@ -782,7 +782,8 @@ mod tests {
std::env::remove_var("AO_SKIP_RUNNER_START");
std::env::remove_var("AGENT_RUNNER_TOKEN");
- std::env::set_current_dir("/Users/samishukri/ao-cli").ok();
+    // Use the test's current directory instead of a hardcoded local path
+ let test_cwd = std::env::current_dir().ok();
let expected_build_id = runner_binary_build_id(&_binary);
let startup_result = ensure_agent_runner_running(&project_root).await;
@@ -820,7 +821,8 @@ mod tests {
std::env::remove_var("AO_SKIP_RUNNER_START");
std::env::remove_var("AGENT_RUNNER_TOKEN");
- std::env::set_current_dir("/Users/samishukri/ao-cli").ok();
+    // Use the test's current directory instead of a hardcoded local path
+ let test_cwd2 = std::env::current_dir().ok();
let second_startup_result = ensure_agent_runner_running(&project_root).await;
diff --git a/crates/protocol/src/config_bundle.rs b/crates/protocol/src/config_bundle.rs
new file mode 100644
index 000000000..ba2f88ef7
--- /dev/null
+++ b/crates/protocol/src/config_bundle.rs
@@ -0,0 +1,38 @@
+use serde::{Deserialize, Serialize};
+use std::collections::BTreeMap;
+
+/// Represents a bundle of project configuration files to be pushed to cloud.
+/// Includes workflow YAML files and project configuration.
+#[derive(Debug, Clone, Serialize, Deserialize)]
+pub struct ConfigBundle {
+ /// Config files indexed by relative path from project root
+    pub files: BTreeMap<String, String>,
+}
+
+impl ConfigBundle {
+ /// Create a new empty config bundle
+ pub fn new() -> Self {
+ Self { files: BTreeMap::new() }
+ }
+
+ /// Add a file to the bundle
+ pub fn add_file(&mut self, path: String, content: String) {
+ self.files.insert(path, content);
+ }
+
+ /// Check if bundle has any files
+ pub fn is_empty(&self) -> bool {
+ self.files.is_empty()
+ }
+
+ /// Get number of files in bundle
+ pub fn file_count(&self) -> usize {
+ self.files.len()
+ }
+}
+
+impl Default for ConfigBundle {
+ fn default() -> Self {
+ Self::new()
+ }
+}
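A usage sketch of the `ConfigBundle` API added above, mirroring its `BTreeMap<String, String>` shape; the file names shown (`ao.toml`, `.ao/workflows/build.yaml`) are hypothetical placeholders, not real project paths:

```rust
// Standalone copy of the ConfigBundle shape for illustration:
// files keyed by path relative to the project root.
use std::collections::BTreeMap;

#[derive(Debug, Default)]
struct ConfigBundle {
    files: BTreeMap<String, String>,
}

impl ConfigBundle {
    fn add_file(&mut self, path: String, content: String) {
        self.files.insert(path, content);
    }
    fn is_empty(&self) -> bool {
        self.files.is_empty()
    }
    fn file_count(&self) -> usize {
        self.files.len()
    }
}

fn main() {
    let mut bundle = ConfigBundle::default();
    bundle.add_file("ao.toml".into(), "[project]".into());
    bundle.add_file(".ao/workflows/build.yaml".into(), "steps: []".into());
    println!("{} file(s), empty = {}", bundle.file_count(), bundle.is_empty());
    // BTreeMap keeps paths sorted, so a pushed bundle serializes deterministically.
    for path in bundle.files.keys() {
        println!("{path}");
    }
}
```

Using `BTreeMap` rather than `HashMap` gives a stable iteration order, which keeps serialized bundles byte-for-byte reproducible across pushes.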
diff --git a/crates/protocol/src/lib.rs b/crates/protocol/src/lib.rs
index 3009c2fc7..188861049 100644
--- a/crates/protocol/src/lib.rs
+++ b/crates/protocol/src/lib.rs
@@ -8,6 +8,7 @@
pub mod agent_runner;
pub mod common;
pub mod config;
+pub mod config_bundle;
pub mod credentials;
pub mod daemon;
pub mod daemon_event_record;
@@ -30,6 +31,7 @@ pub use config::{
cli_tracker_path, daemon_events_log_path, default_allowed_tool_prefixes, parse_env_bool, parse_env_bool_opt,
ClaudeProfileEntry, Config, ProjectMcpServerEntry,
};
+pub use config_bundle::ConfigBundle;
pub use daemon::*;
pub use daemon_event_record::*;
pub use deploy_config::DeployConfig;
diff --git a/crates/workflow-runner-v2/src/runtime_contract.rs b/crates/workflow-runner-v2/src/runtime_contract.rs
index 02bffef71..3c3f56f24 100644
--- a/crates/workflow-runner-v2/src/runtime_contract.rs
+++ b/crates/workflow-runner-v2/src/runtime_contract.rs
@@ -643,8 +643,8 @@ mod tests {
"mcp": {
"agent_id": "ao",
"stdio": {
- "command": "/Users/samishukri/ao-cli/target/debug/ao",
- "args": ["--project-root", "/Users/samishukri/ao-cli", "mcp", "serve"]
+ "command": "/path/to/ao/target/debug/ao",
+ "args": ["--project-root", "/path/to/project", "mcp", "serve"]
}
}
});
@@ -677,19 +677,13 @@ mod tests {
"mcp": {
"agent_id": "ao",
"stdio": {
- "command": "/Users/samishukri/ao-cli/target/debug/ao",
- "args": ["--project-root", "/Users/samishukri/ao-cli", "mcp", "serve"]
+ "command": "/path/to/ao/target/debug/ao",
+ "args": ["--project-root", "/path/to/project", "mcp", "serve"]
}
}
});
- inject_named_mcp_servers(
- &mut runtime_contract,
- "/Users/samishukri/ao-cli",
- &ctx,
- "requirements",
- &["ao".to_string()],
- )
- .expect("named MCP injection should succeed");
+ inject_named_mcp_servers(&mut runtime_contract, "/path/to/project", &ctx, "requirements", &["ao".to_string()])
+ .expect("named MCP injection should succeed");
assert!(
runtime_contract.pointer("/mcp/additional_servers").is_none(),
diff --git a/docs/reference/cli/index.md b/docs/reference/cli/index.md
index 9ca94cf8c..e26c21400 100644
--- a/docs/reference/cli/index.md
+++ b/docs/reference/cli/index.md
@@ -242,11 +242,13 @@ ao
│
├── setup Guided onboarding and configuration wizard
├── cloud Sync tasks and requirements with a remote ao-sync server
+│   ├── login       Authenticate with ao-cloud using the device auth flow
│ ├── setup Configure the sync server connection for this project
-│ ├── push Push local tasks and requirements to the sync server
+│ ├── push Push local tasks, requirements, and workflow config to the sync server
│ ├── pull Pull tasks and requirements from the sync server into local state
-│ ├── status Show sync configuration and last sync status
-│ └── link Link this project to a specific remote project by ID
+│ ├── status Show sync configuration, cloud projects, daemon states, and active workflows
+│ ├── link Link this project (auto-detects from git remote or uses explicit project ID)
+│ └── deploy Manage deployments on ao-cloud
│
└── doctor Run environment and configuration diagnostics
```