diff --git a/.agents/skills/developing-vm-cli/references/vm-cli.md b/.agents/skills/developing-vm-cli/references/vm-cli.md
index 4bc8f1e..8b20fa7 120000
--- a/.agents/skills/developing-vm-cli/references/vm-cli.md
+++ b/.agents/skills/developing-vm-cli/references/vm-cli.md
@@ -1 +1 @@
-../../../docs/vm-cli.md
\ No newline at end of file
+../../../../docs/vm-cli.md
\ No newline at end of file
diff --git a/.agents/skills/extending-capabilities/references/capabilities.md b/.agents/skills/extending-capabilities/references/capabilities.md
index 904084a..6da4fff 120000
--- a/.agents/skills/extending-capabilities/references/capabilities.md
+++ b/.agents/skills/extending-capabilities/references/capabilities.md
@@ -1 +1 @@
-../../../docs/capabilities.md
\ No newline at end of file
+../../../../docs/capabilities.md
\ No newline at end of file
diff --git a/.agents/skills/managing-provisioning/references/vm-provisioning.md b/.agents/skills/managing-provisioning/references/vm-provisioning.md
index 6e99d72..3c3d855 120000
--- a/.agents/skills/managing-provisioning/references/vm-provisioning.md
+++ b/.agents/skills/managing-provisioning/references/vm-provisioning.md
@@ -1 +1 @@
-../../../docs/vm-provisioning.md
\ No newline at end of file
+../../../../docs/vm-provisioning.md
\ No newline at end of file
diff --git a/.agents/skills/understanding-architecture/references/architecture.md b/.agents/skills/understanding-architecture/references/architecture.md
index 12790e3..13fb7c8 120000
--- a/.agents/skills/understanding-architecture/references/architecture.md
+++ b/.agents/skills/understanding-architecture/references/architecture.md
@@ -1 +1 @@
-../../../docs/architecture.md
\ No newline at end of file
+../../../../docs/architecture.md
\ No newline at end of file
diff --git a/bun.lock b/bun.lock
index b5aa1ff..a18e7e9 100644
--- a/bun.lock
+++ b/bun.lock
@@ -75,6 +75,7 @@
         "@clawctl/types": "workspace:*",
         "execa": "^9.0.0",
         "semver": "^7.7.4",
+        "yaml": "^2.8.2",
      },
    },
    "packages/templates": {
diff --git a/docs/TODO.md b/docs/TODO.md
index 8d336a8..eb60a98 100644
--- a/docs/TODO.md
+++ b/docs/TODO.md
@@ -7,14 +7,12 @@ so they don't get lost.
 
 - [x] **Remove home directory mount as default** _(done: e770d69)_
 
-- [ ] **Auto-commit mechanism for inner openclaw changes**
-  The agent inside the VM writes to `data/` (configs, workspace files,
-  etc.) but can't commit — git runs on the host. We need a way for the
-  inner openclaw to request a commit. Rough idea: the agent writes a
-  file like `data/.git-request` containing the commit message; a `fs`
-  watcher on the host picks it up, stages `data/`, commits with that
-  message, and removes the request file. Could be a long-running
-  background process or integrated into the CLI's manage mode.
+- [x] **Auto-commit mechanism for inner openclaw changes** _(done: v0.9.0)_
+  Implemented as the checkpoint system: `claw checkpoint --message "reason"`
+  writes a signal file to `data/.checkpoint-request`; the host daemon's
+  `checkpoint-watch` task detects it, runs `git add data/ && git commit`,
+  and removes the file. The checkpoint capability also installs a skill
+  into the workspace so the agent knows how to use it.
 
 - [x] **Headless / preconfigured provisioning (skip the wizard)** _(done: 14b93d1)_
 
@@ -39,22 +37,19 @@ so they don't get lost.
   wizard or via config. Playwright + Chromium is the highest value add —
   web access is a core capability gap right now.
 
-- [ ] **Automate manual post-setup steps**
-  Things currently done by hand after onboarding that should be part of
-  the provisioning flow. Based on real usage:
-  - **Docker permissions** — add the openclaw user to the docker group
-    so the agent can run containers without sudo
-  - **Sandbox disabled** — for trusted single-user setups, disable the
-    openclaw sandbox. Needs a config set during post-onboard setup.
-    Should probably be a wizard option ("trusted environment?") since
-    it's a security trade-off.
-  - **Workspace on shared mount** — already done (we set
-    `agents.defaults.workspace` to `/mnt/project/data/workspace`)
-  - **Headless Chromium** — install and configure so the agent can
-    browse. Overlaps with the pre-installed tooling item above.
-  - **Heartbeat security reviews** — configure periodic security
-    review tasks. Needs investigation into how openclaw schedules
-    these (cron? built-in scheduler?).
+- [ ] **Automate manual post-setup steps** _(partially done)_
+  Some items are now handled by the bootstrap flow; others remain.
+  - [x] **Sandbox disabled** — wizard option, bootstrap sets
+    `agents.defaults.sandbox.mode off` when configured
+  - [x] **Workspace on shared mount** — bootstrap sets
+    `agents.defaults.workspace /mnt/project/data/workspace`
+  - [ ] **Docker permissions** — add the openclaw user to the docker group
+    so the agent can run containers without sudo
+  - [ ] **Headless Chromium** — install and configure so the agent can
+    browse. Overlaps with the pre-installed tooling item above.
+  - [ ] **Heartbeat security reviews** — configure periodic security
+    review tasks. Needs investigation into how openclaw schedules
+    these (cron? built-in scheduler?).
 
 - [ ] **Post-provision setup commands for optional services**
   Allow configuring 1Password, Tailscale, etc. on an already-running VM.
@@ -68,13 +63,25 @@ so they don't get lost.
   These reuse the same provisioning logic from the wizard steps but can
   target an existing instance.
 
+- [ ] **Manage mount points after VM creation**
+  Currently mounts are only configurable at create time via `config.mounts`.
+  After that, the only way to add or remove a mount is editing
+  `~/.lima/<name>/lima.yaml` directly and restarting. clawctl should own this:
+  - `clawctl mount add <host-path> --mount-point <guest-path> [--writable]`
+  - `clawctl mount remove <guest-path>`
+  - `clawctl mount list <name>`
+  Under the hood: edit the Lima yaml and restart the VM.
Lima doesn't
+  support hot-adding mounts, so a restart is required — the command should
+  warn and confirm. Also update `clawctl.json` so the mount survives a
+  future rebuild.
+
 - [x] **`clawctl restart` with health verification** _(done: v0.4.0)_
 
-- [ ] **In-place upgrades**
-  When openclaw ships a new version, update the VM without rebuilding.
-  Re-run the idempotent provisioning scripts, restart the daemon. State
-  survives because it lives in `data/`. A simple
-  `clawctl upgrade <name>` command.
+- [x] **In-place upgrades** _(done: v0.16.0)_
+  Implemented as `clawctl update`: checks for new releases, downloads
+  and self-replaces the host binary, then pushes the new `claw` binary
+  to all running VMs and runs `claw migrate` for capability migrations.
+  Stopped VMs get a `pendingClawUpdate` flag and are updated on next start.
 
 - [ ] **Skill portability — make clawctl aware of the skills convention**
   Openclaw already has a natural convention: each skill lives in
@@ -101,38 +108,28 @@ so they don't get lost.
 
   Longer term: skill sharing between instances, maybe a registry.
 
-- [ ] **VM-side CLI (`claw`) — agent tooling inside the VM**
-  Split the tooling into two CLIs: `clawctl` (host, VM lifecycle) and
-  `claw` (VM-side, agent tooling). Separate packages in a monorepo,
-  sharing code but independently built.
-
-  `claw` owns everything that happens _inside_ the VM:
-  - `claw bootstrap` — post-onboarding setup (daemon install, config set,
-    workspace init). Replaces the imperative shell commands in `bin/cli.tsx`.
-  - `claw doctor` — health checks beyond `openclaw doctor` (mount
-    verification, env vars, PATH, service status).
-  - `claw create skill` — scaffold a new skill directory with SKILL.md,
-    scripts/, package.json in the right structure.
-  - `claw update` — self-update the VM-side CLI (pulled from host mount
-    or downloaded).
-  - Future: any agent-facing commands (skill management, config, etc.)
-  - **How it gets there:** Built at provisioning time (`bun build --compile`),
-    copied into the VM during provisioning. `clawctl upgrade` rebuilds and
-    pushes the new binary.
-  - **Host→VM interface:** `clawctl` calls `claw` commands inside the VM
-    instead of raw `bash -lc` strings. `claw` returns structured output
-    (JSON or exit codes) so the host can parse reliably. This replaces the
-    current pattern of regex-parsing shell output.
-  - **What this subsumes:** The host-side CLI proxy (`clawctl openclaw` / `oc`)
-    already exists. With `claw` on the VM side, it would become
-    `clawctl oc <args>` → `limactl shell ... claw <args>`. The proxy
-    logic is just dispatch, `claw` does the real work.
-  - **Naming rationale:** `clawctl` = control plane (host), `claw` = the
-    tool itself (VM). Short and natural for interactive use inside the VM.
+- [x] **VM-side CLI (`claw`) — agent tooling inside the VM** _(done: v0.8.0)_
+  Implemented as `@clawctl/vm-cli`. Commands: `claw provision <subcommand>`,
+  `claw doctor`, `claw checkpoint`, `claw migrate`. Built with
+  `bun build --compile` for linux-arm64, deployed into VM at
+  `/usr/local/bin/claw`. All commands return structured JSON. Host calls
+  claw via `limactl shell` instead of raw bash strings.
+  - [ ] **`claw create skill`** — scaffold a new skill directory (not yet implemented)
+
+- [ ] **Adopt native OpenClaw installations into a clawctl VM**
+  Many users have OpenClaw running natively on their machine.
`clawctl adopt`
+  would create a VM that takes over an existing native installation:
+  - Detect the native OpenClaw data dirs (state, config, workspace)
+  - Create a new VM with mounts pointing at the existing data
+  - Provision the VM (idempotent — packages already installed natively
+    get skipped)
+  - Stop the native daemon, start the VM-based one
+  - Optionally move data into the clawctl project directory layout
+
+  This is the general-purpose version of the one-off migration done for
+  the original Klaus VM (which was adopted from a pre-clawctl Lima setup).
+  The native case is harder because data paths vary and the native daemon
+  must be stopped cleanly.
 
 ---
diff --git a/docs/architecture.md b/docs/architecture.md
index a2cc158..9794046 100644
--- a/docs/architecture.md
+++ b/docs/architecture.md
@@ -239,6 +239,38 @@ Trade-offs:
 
 The eventual goal is to keep Ink active during onboarding by embedding the subprocess output in a virtual terminal surface. This would allow a contextual guidance sidebar with tips based on what the onboarding wizard is currently showing. Requires `node-pty` for PTY management and `xterm-headless` for ANSI parsing into a renderable screen buffer.
 
+## CLI Command Conventions
+
+### Instance resolution
+
+Every command that targets an instance uses `requireInstance(opts)` from
+host-core. It resolves the instance in this order:
+
+1. Explicit `-i <name>` / `--instance <name>` flag
+2. Local `.clawctl` context file (set by `clawctl use`)
+3. Global context (`~/.config/clawctl/context.json`)
+4.
Error if none found
+
+### Positional `[name]` argument
+
+Commands that **only** target an instance (no other positional args)
+offer `[name]` as a convenience positional:
+
+```
+clawctl status [name]       # OK — no other positionals
+clawctl start [name]        # OK
+clawctl mount list [name]   # OK
+```
+
+Commands that have **other required positional arguments** must NOT use
+`[name]` — Commander consumes the first positional as the optional name,
+swallowing the real argument. Use `-i` or context resolution instead:
+
+```
+clawctl mount add <host-path> <mount-point>   # No [name] — would eat <host-path>
+clawctl mount remove <mount-point>            # No [name] — would eat <mount-point>
+```
+
 ## Error Handling
 
 - Each step handles its own errors and displays them inline
diff --git a/packages/cli/bin/cli.tsx b/packages/cli/bin/cli.tsx
index a02e5bf..1ecc8ce 100755
--- a/packages/cli/bin/cli.tsx
+++ b/packages/cli/bin/cli.tsx
@@ -26,6 +26,9 @@ import {
   runDaemonLogs,
   runDaemonRun,
   runUpdate,
+  runMountList,
+  runMountAdd,
+  runMountRemove,
 } from "../src/commands/index.js";
 import { ensureDaemon } from "@clawctl/daemon";
 import { checkAndPromptUpdate } from "../src/update-hook.js";
@@ -170,6 +173,51 @@ program
     await runUse(name, opts);
   });
 
+const mountCmd = program
+  .command("mount")
+  .description("Manage VM mount points")
+  .action(() => {
+    mountCmd.help();
+  });
+
+mountCmd
+  .command("list [name]")
+  .description("List all mounts for an instance")
+  .option("-i, --instance <name>", "Instance to target")
+  .action(async (name: string | undefined, opts: { instance?: string }) => {
+    await runMountList(driver, { instance: opts.instance ??
name });
+  });
+
+mountCmd
+  .command("add")
+  .description("Add a host directory mount to the VM")
+  .argument("<host-path>", "Host directory to mount")
+  .argument("<mount-point>", "Mount point inside the VM")
+  .option("-i, --instance <name>", "Instance to target")
+  .option("--writable", "Mount as read-write (default: read-only)")
+  .option("--no-restart", "Update config but don't restart the VM")
+  .showHelpAfterError(true)
+  .action(
+    async (
+      hostPath: string,
+      guestPath: string,
+      opts: { instance?: string; writable?: boolean; restart?: boolean },
+    ) => {
+      // Commander exposes the negatable --no-restart flag as `opts.restart === false`
+      await runMountAdd(
+        driver,
+        { instance: opts.instance, writable: opts.writable, noRestart: opts.restart === false },
+        hostPath,
+        guestPath,
+      );
+    },
+  );
+
+mountCmd
+  .command("remove")
+  .description("Remove a mount from the VM")
+  .argument("<mount-point>", "Mount point to remove")
+  .option("-i, --instance <name>", "Instance to target")
+  .option("--no-restart", "Update config but don't restart the VM")
+  .showHelpAfterError(true)
+  .action(async (guestPath: string, opts: { instance?: string; restart?: boolean }) => {
+    await runMountRemove(driver, { instance: opts.instance, noRestart: opts.restart === false }, guestPath);
+  });
+
 const daemonCmd = program.command("daemon").description("Manage the background daemon");
 
 daemonCmd
diff --git a/packages/cli/src/commands/index.ts b/packages/cli/src/commands/index.ts
index e0b9f1d..c456d13 100644
--- a/packages/cli/src/commands/index.ts
+++ b/packages/cli/src/commands/index.ts
@@ -19,3 +19,4 @@ export {
   runDaemonRun,
 } from "./daemon.js";
 export { runUpdate } from "./update.js";
+export { runMountList, runMountAdd, runMountRemove } from "./mount.js";
diff --git a/packages/cli/src/commands/mount.ts b/packages/cli/src/commands/mount.ts
new file mode 100644
index 0000000..2632c14
--- /dev/null
+++ b/packages/cli/src/commands/mount.ts
@@ -0,0 +1,163 @@
+import { readFile, writeFile, access } from "fs/promises";
+import { constants } from "fs";
+import { join, resolve } from "path";
+import type { VMDriver } from "@clawctl/host-core";
+import { requireInstance } from "@clawctl/host-core";
+import { PROJECT_MOUNT_POINT } from "@clawctl/types";
+import type { MountSpec } from "@clawctl/types";
+
+const BUILTIN_MOUNT_POINTS = new Set([PROJECT_MOUNT_POINT, `${PROJECT_MOUNT_POINT}/data`]);
+
+export async function runMountList(driver: VMDriver, opts: { instance?: string }): Promise<void> {
+  const entry = await requireInstance(opts);
+  const mounts = await driver.readMounts(entry.vmName);
+
+  if (mounts.length === 0) {
+    console.log("No mounts configured.");
+    return;
+  }
+
+  // Column widths
+  const locW = Math.max(8, ...mounts.map((m) => m.location.length));
+  const mpW = Math.max(11, ...mounts.map((m) => m.mountPoint.length));
+
+  console.log(
+    `${"LOCATION".padEnd(locW)} ${"MOUNT POINT".padEnd(mpW)} ${"MODE".padEnd(5)} TYPE`,
+  );
+  for (const m of mounts) {
+    const mode = m.writable ? "rw" : "ro";
+    const type = BUILTIN_MOUNT_POINTS.has(m.mountPoint) ? "built-in" : "user";
+    console.log(
+      `${m.location.padEnd(locW)} ${m.mountPoint.padEnd(mpW)} ${mode.padEnd(5)} ${type}`,
+    );
+  }
+}
+
+export async function runMountAdd(
+  driver: VMDriver,
+  opts: { instance?: string; writable?: boolean; noRestart?: boolean },
+  hostPath: string,
+  guestPath: string,
+): Promise<void> {
+  const entry = await requireInstance(opts);
+  const resolvedHost = resolve(hostPath.replace(/^~/, process.env.HOME ?? "~"));
+
+  // Validate host path exists
+  try {
+    await access(resolvedHost, constants.R_OK);
+  } catch {
+    console.warn(`Warning: host path "${resolvedHost}" does not exist yet.`);
+  }
+
+  // Read current mounts, check for duplicates
+  const mounts = await driver.readMounts(entry.vmName);
+  const existing = mounts.find((m) => m.mountPoint === guestPath);
+  if (existing) {
+    console.error(`Mount point "${guestPath}" is already in use (→ ${existing.location}).`);
+    process.exit(1);
+  }
+
+  // Add the new mount
+  const newMount: MountSpec = {
+    location: hostPath,
+    mountPoint: guestPath,
+    writable: opts.writable ??
false,
+  };
+  mounts.push(newMount);
+
+  // Apply
+  await applyMountChange(driver, entry, mounts, opts.noRestart);
+  await syncClawctlJson(entry.projectDir, mounts);
+
+  console.log(
+    `Added mount: ${hostPath} → ${guestPath} (${newMount.writable ? "read-write" : "read-only"})`,
+  );
+}
+
+export async function runMountRemove(
+  driver: VMDriver,
+  opts: { instance?: string; noRestart?: boolean },
+  guestPath: string,
+): Promise<void> {
+  const entry = await requireInstance(opts);
+
+  // Prevent removing built-in mounts
+  if (BUILTIN_MOUNT_POINTS.has(guestPath)) {
+    console.error(`Cannot remove built-in mount "${guestPath}".`);
+    process.exit(1);
+  }
+
+  // Read current mounts
+  const mounts = await driver.readMounts(entry.vmName);
+  const idx = mounts.findIndex((m) => m.mountPoint === guestPath);
+  if (idx === -1) {
+    console.error(`No mount found at "${guestPath}".`);
+    process.exit(1);
+  }
+
+  const removed = mounts[idx];
+  mounts.splice(idx, 1);
+
+  // Apply
+  await applyMountChange(driver, entry, mounts, opts.noRestart);
+  await syncClawctlJson(entry.projectDir, mounts);
+
+  console.log(`Removed mount: ${removed.location} → ${guestPath}`);
+}
+
+// ---------------------------------------------------------------------------
+// Helpers
+// ---------------------------------------------------------------------------
+
+async function applyMountChange(
+  driver: VMDriver,
+  entry: { vmName: string; name: string },
+  mounts: MountSpec[],
+  noRestart?: boolean,
+): Promise<void> {
+  const status = await driver.status(entry.vmName);
+
+  if (status === "Running") {
+    if (noRestart) {
+      await driver.stop(entry.vmName);
+      await driver.writeMounts(entry.vmName, mounts);
+      console.log("Mount config updated.
VM stopped — start it manually to apply.");
+      return;
+    }
+    console.log(`Restarting "${entry.name}" to apply mount change...`);
+    await driver.stop(entry.vmName);
+    await driver.writeMounts(entry.vmName, mounts);
+    await driver.start(entry.vmName);
+  } else {
+    await driver.writeMounts(entry.vmName, mounts);
+    console.log("Mount config updated. Start the VM to apply.");
+  }
+}
+
+/**
+ * Sync the user-added mounts (excluding built-ins) to clawctl.json
+ * so they survive a full VM rebuild.
+ */
+async function syncClawctlJson(projectDir: string, allMounts: MountSpec[]): Promise<void> {
+  const configPath = join(projectDir, "clawctl.json");
+  let config: Record<string, unknown>;
+  try {
+    config = JSON.parse(await readFile(configPath, "utf-8"));
+  } catch {
+    // No clawctl.json — nothing to sync
+    return;
+  }
+
+  const userMounts = allMounts.filter((m) => !BUILTIN_MOUNT_POINTS.has(m.mountPoint));
+  if (userMounts.length > 0) {
+    config.mounts = userMounts.map((m) => ({
+      location: m.location,
+      mountPoint: m.mountPoint,
+      ...(m.writable ?
{ writable: true } : {}),
+    }));
+  } else {
+    delete config.mounts;
+  }
+
+  await writeFile(configPath, JSON.stringify(config, null, 2) + "\n");
+}
diff --git a/packages/host-core/package.json b/packages/host-core/package.json
index 57f029c..e88a852 100644
--- a/packages/host-core/package.json
+++ b/packages/host-core/package.json
@@ -11,6 +11,7 @@
     "@clawctl/types": "workspace:*",
     "@clawctl/templates": "workspace:*",
     "execa": "^9.0.0",
-    "semver": "^7.7.4"
+    "semver": "^7.7.4",
+    "yaml": "^2.8.2"
   }
 }
diff --git a/packages/host-core/src/drivers/lima.ts b/packages/host-core/src/drivers/lima.ts
index 3e9f2cf..4564349 100644
--- a/packages/host-core/src/drivers/lima.ts
+++ b/packages/host-core/src/drivers/lima.ts
@@ -1,12 +1,13 @@
-import { writeFile, unlink } from "fs/promises";
+import { readFile, writeFile, unlink } from "fs/promises";
 import { join } from "path";
 import { tmpdir } from "os";
 import { execa } from "execa";
+import YAML from "yaml";
 import { exec, execWithLogs } from "../exec.js";
 import { installFormula, isFormulaInstalled } from "../homebrew.js";
 import { parseLimaVersion } from "../parse.js";
 import { generateLimaYaml } from "@clawctl/templates";
-import type { VMConfig } from "@clawctl/types";
+import type { VMConfig, MountSpec } from "@clawctl/types";
 import type { VMDriver, VMCreateOptions, ExecResult, OnLine } from "./types.js";
 
 export class LimaDriver implements VMDriver {
@@ -177,4 +178,42 @@ export class LimaDriver implements VMDriver {
   shellCommand(name: string): string {
     return `limactl shell ${name}`;
   }
+
+  /** Get the Lima instance directory from `limactl list --json`.
*/
+  private async instanceDir(name: string): Promise<string> {
+    const result = await exec("limactl", ["list", "--json"]);
+    if (result.exitCode !== 0) {
+      throw new Error(`Failed to list Lima instances: ${result.stderr}`);
+    }
+    for (const line of result.stdout.trim().split("\n")) {
+      if (!line) continue; // no instances → empty stdout, skip blank line
+      const vm = JSON.parse(line);
+      if (vm.name === name) return vm.dir as string;
+    }
+    throw new Error(`Lima instance "${name}" not found`);
+  }
+
+  async readMounts(name: string): Promise<MountSpec[]> {
+    const dir = await this.instanceDir(name);
+    const raw = await readFile(join(dir, "lima.yaml"), "utf-8");
+    const doc = YAML.parse(raw);
+    if (!Array.isArray(doc.mounts)) return [];
+    return doc.mounts.map((m: Record<string, unknown>) => ({
+      location: m.location as string,
+      mountPoint: m.mountPoint as string,
+      writable: (m.writable as boolean) ?? false,
+    }));
+  }
+
+  async writeMounts(name: string, mounts: MountSpec[]): Promise<void> {
+    const dir = await this.instanceDir(name);
+    const yamlPath = join(dir, "lima.yaml");
+    const raw = await readFile(yamlPath, "utf-8");
+    const doc = YAML.parse(raw);
+    doc.mounts = mounts.map((m) => ({
+      location: m.location,
+      mountPoint: m.mountPoint,
+      writable: m.writable ?? false,
+    }));
+    await writeFile(yamlPath, YAML.stringify(doc));
+  }
 }
diff --git a/packages/host-core/src/drivers/types.ts b/packages/host-core/src/drivers/types.ts
index d6556fd..afb901e 100644
--- a/packages/host-core/src/drivers/types.ts
+++ b/packages/host-core/src/drivers/types.ts
@@ -39,6 +39,12 @@ export interface VMDriver {
   ): Promise<ExecResult>;
   copy(name: string, localPath: string, remotePath: string): Promise<void>;
 
+  // Mounts
+  /** Read current mounts from the VM backend config. */
+  readMounts(name: string): Promise<MountSpec[]>;
+  /** Write updated mounts to the VM backend config. VM must be stopped.
*/
+  writeMounts(name: string, mounts: MountSpec[]): Promise<void>;
+
   // Host prerequisites
   isInstalled(): Promise<boolean>;
   install(onLine?: OnLine): Promise<string>; // returns version
diff --git a/packages/host-core/src/index.ts b/packages/host-core/src/index.ts
index 5d1a59b..e37d0f8 100644
--- a/packages/host-core/src/index.ts
+++ b/packages/host-core/src/index.ts
@@ -32,7 +32,7 @@ export {
 export type { SecretRef, ResolvedSecretRef } from "./secrets.js";
 
 // Provision
-export { provisionVM, deployClaw } from "./provision.js";
+export { provisionVM, deployClaw, runClawProvision } from "./provision.js";
 export type { ProvisionCallbacks } from "./provision.js";
 
 // Claw binary (embedded asset in compiled mode, direct path in dev mode)
diff --git a/packages/host-core/src/provision.ts b/packages/host-core/src/provision.ts
index 2cf8392..f595e8c 100644
--- a/packages/host-core/src/provision.ts
+++ b/packages/host-core/src/provision.ts
@@ -44,7 +44,7 @@ export async function deployClaw(
 /**
  * Run a claw provision subcommand and parse its JSON output.
  */
-async function runClawProvision(
+export async function runClawProvision(
   driver: VMDriver,
   vmName: string,
   subcommand: string,
diff --git a/tasks/2026-04-01_1058_clawctl-mount-command/TASK.md b/tasks/2026-04-01_1058_clawctl-mount-command/TASK.md
new file mode 100644
index 0000000..d4f1c5d
--- /dev/null
+++ b/tasks/2026-04-01_1058_clawctl-mount-command/TASK.md
@@ -0,0 +1,72 @@
+# `clawctl mount` — Manage VM mount points after creation
+
+## Status: Resolved
+
+## Scope
+
+Add a `clawctl mount` command with `list`, `add`, and `remove` subcommands to manage host→guest mounts on existing instances. Currently mounts can only be set at VM creation time.
+
+Does NOT cover:
+
+- Hot-adding mounts without restart (Lima limitation)
+- Changing built-in mounts (project, data)
+
+## Context
+
+Mounts are configured at create time via `config.mounts` and baked into lima.yaml.
After creation there's no clawctl-level way to add or remove mounts — the only option is manually editing `~/.lima/<name>/lima.yaml` and restarting. This gap became apparent during a VM migration where mount paths needed updating.
+
+Mount management belongs on the abstract `VMDriver` interface (not Lima-specific) since any future backend would need the same operations.
+
+## Plan
+
+### Approach
+
+Extend `VMDriver` with `readMounts()` and `writeMounts()` methods. Implement for Lima by parsing/writing the lima.yaml at `~/.lima/<name>/lima.yaml` (path discovered via `limactl list --json`). Build a CLI command that uses these methods and handles the restart cycle.
+
+### Why this approach
+
+- Keeps mount management backend-agnostic via the driver interface
+- Reuses existing `MountSpec` type and `yaml` package
+- Lima doesn't support hot-mount, so stop→edit→start is the only option
+- Syncing `clawctl.json` ensures mounts survive full VM rebuilds
+
+### Files to create/modify
+
+| File                                      | Action                                        |
+| ----------------------------------------- | --------------------------------------------- |
+| `packages/host-core/src/drivers/types.ts` | Add `readMounts`, `writeMounts` to `VMDriver` |
+| `packages/host-core/src/drivers/lima.ts`  | Implement mount methods                       |
+| `packages/host-core/package.json`         | Add `yaml` dependency                         |
+| `packages/cli/src/commands/mount.ts`      | **Create** — mount list/add/remove            |
+| `packages/cli/src/commands/index.ts`      | Add mount exports                             |
+| `packages/cli/bin/cli.tsx`                | Wire mount subcommands                        |
+
+## Steps
+
+- [x] Delete one-off adopt script
+- [x] Add `readMounts`/`writeMounts` to `VMDriver` interface, implement for Lima
+- [x] Create `mount.ts` command (list/add/remove with restart flow + clawctl.json sync)
+- [x] Wire into CLI (commander definitions + exports)
+- [x] Lint, format, test
+- [x] Commit
+
+## Notes
+
+- Built-in mounts (`/mnt/project`, `/mnt/project/data`) are protected from removal
+- Lima handles `~` expansion natively in mount locations —
pass through as-is
+- `clawctl.json` only stores user-added extra mounts, not the built-in ones
+- The `yaml` package is already used in `@clawctl/templates`; added to `@clawctl/host-core` too
+- The `runClawProvision` export added to host-core during the adoption work is kept — useful for future commands
+- Commander's optional `[name]` positional conflicts with required positional args — dropped `[name]` from `mount add` and `mount remove`, documented the convention in `docs/architecture.md`
+- Fixed broken symlinks in all 4 agent skills (needed 4 levels of `../` not 3)
+
+## Outcome
+
+Delivered `clawctl mount list/add/remove` with:
+
+- `VMDriver` interface extended with `readMounts()`/`writeMounts()`
+- Lima implementation that parses/writes `~/.lima/<name>/lima.yaml`
+- Automatic restart on mount changes (with `--no-restart` escape hatch)
+- `clawctl.json` sync so mounts survive VM rebuilds
+- Built-in mount protection, duplicate detection, help on missing args
+- CLI convention documented, broken skill symlinks fixed along the way
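
For orientation, a typical session with the delivered commands might look like the transcript below. The instance name (`klaus`), host paths, and `mount list` output are illustrative; the messages mirror the `console.log` calls in `mount.ts`, and the instance is assumed to resolve from context (no `-i` flag).

```console
$ clawctl mount add ~/datasets /mnt/datasets --writable
Restarting "klaus" to apply mount change...
Added mount: ~/datasets → /mnt/datasets (read-write)

$ clawctl mount list
LOCATION   MOUNT POINT   MODE  TYPE
~/project  /mnt/project  rw    built-in
~/datasets /mnt/datasets rw    user

$ clawctl mount remove /mnt/datasets --no-restart
Mount config updated. VM stopped — start it manually to apply.
Removed mount: ~/datasets → /mnt/datasets
```

Note that `--no-restart` on a running VM still stops it — `writeMounts` requires a stopped VM — it just skips the automatic start.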