
Add support for podman as container runtime #26

Open
nezhar wants to merge 7 commits into main from podman

Conversation

@nezhar
Collaborator

@nezhar nezhar commented Mar 4, 2026

Adds first-class support for Podman as an alternative container runtime to Docker throughout VibePod. It updates documentation, configuration, and CLI commands to allow users to select between Docker and Podman, including auto-detection and explicit selection methods. Additionally, it introduces fixes for rootless Podman compatibility, such as permission handling for bind mounts.

Documentation updates:

  • Added a dedicated Podman setup and usage guide (docs/podman.md), linked from navigation and other docs. Updated Quickstart and main documentation to reflect Podman support and new configuration options.

Config and environment variable documentation:

  • Documented the new container_runtime config key and VP_CONTAINER_RUNTIME environment variable in docs/configuration.md, including example usage.

Summary by CodeRabbit

  • New Features

    • Podman (rootless) support with a new global runtime preference and CLI/runtime options (--runtime, --userns); env overrides VP_CONTAINER_RUNTIME and VP_CONTAINER_USERNS_MODE; proxy address may use the proxy container IP.
  • Documentation

    • New "Using Podman" guide; quickstart, configuration, and proxy docs updated; wording generalized to “container”.
  • Tests

    • Added and expanded unit and integration tests for runtime detection, selection, proxy behavior, and userns handling.
  • Chores

    • CI workflow and helper scripts to exercise runtime scenarios.

@coderabbitai
Contributor

coderabbitai bot commented Mar 4, 2026

📝 Walkthrough

Walkthrough

Adds Podman support and runtime-agnostic container management: runtime detection/selection and persistence, a get_manager factory, runtime-aware DockerManager changes (sockets, volume/userns handling), new CLI --runtime/--userns options and config/env keys, docs, scripts, and tests exercising runtime and userns behavior.

Changes

Cohort / File(s) Summary
Documentation
docs/configuration.md, docs/index.md, docs/quickstart.md, docs/podman.md, mkdocs.yml, README.md
Add Podman docs and nav entry; document container_runtime and container_userns_mode config keys and env overrides (VP_CONTAINER_RUNTIME, VP_CONTAINER_USERNS_MODE); add vp config runtime docs; update built-in proxy description.
Constants & Config
src/vibepod/constants.py, src/vibepod/core/config.py
Add runtime and socket constants; add default config keys container_runtime and container_userns_mode; env parsing for VP_CONTAINER_RUNTIME/VP_CONTAINER_USERNS_MODE; expose get_container_userns_mode().
Runtime Detection & Persistence
src/vibepod/core/runtime.py
New runtime detection/resolution module: socket candidate probing with timeout, detect available runtimes, interactive selection, and save/load runtime preference APIs.
Docker / Manager Implementation
src/vibepod/core/docker.py
Add get_manager() factory; extend DockerManager to accept base_url and runtime; add Podman-aware helpers for volume prep and userns resolution; propagate userns_mode into run/ensure APIs and adapt behavior per runtime.
CLI Commands
src/vibepod/commands/list_cmd.py, .../logs.py, .../proxy.py, .../run.py, .../stop.py, .../update.py, .../config.py
Thread --runtime (and --userns where applicable) through commands; replace direct DockerManager() with get_manager(...); add vp config runtime command; make version output runtime-aware; forward userns into proxy/run flows.
CLI Alias
src/vibepod/cli.py
Extend hidden alias_ui to accept and forward port, no_open, runtime, and userns to logs_start.
Tests
tests/...
tests/test_list.py, tests/test_proxy_permissions.py, tests/test_run.py, tests/test_config.py, tests/test_runtime.py, tests/test_version_integration.py
Switch tests to patch get_manager; add runtime attribute to stubs; add runtime probe/selection tests, userns forwarding assertions, proxy-IP proxy-env tests, and end-to-end runtime scenario integration tests.
CI & Scripts
.github/workflows/runtime-integration.yml, scripts/prepare_runtime_scenario.sh, scripts/cleanup_runtime_scenario.sh
Add CI workflow and helper scripts to prepare/cleanup Docker/Podman runtime scenarios for integration tests.

Sequence Diagram(s)

sequenceDiagram
    participant User as User/CLI
    participant Cmd as Command Handler
    participant Runtime as runtime.resolve_runtime
    participant Detector as detect_available_runtimes
    participant Sock as Socket Probe
    participant Factory as get_manager
    participant Manager as DockerManager

    User->>Cmd: invoke command (with optional --runtime / --userns)
    Cmd->>Runtime: resolve_runtime(override, config)
    alt override or saved preference
        Runtime-->>Factory: return (runtime, socket_url)
    else auto-detect
        Runtime->>Detector: detect_available_runtimes()
        Detector->>Sock: probe candidate sockets (docker/podman)
        Sock-->>Detector: reachable / unreachable
        Detector-->>Runtime: available runtimes
        alt multiple & TTY
            Runtime->>User: prompt to choose runtime
            User-->>Runtime: selection
            Runtime->>Runtime: save_runtime_preference(selection)
        end
        Runtime-->>Factory: return (chosen_runtime, socket_url)
    end
    Factory->>Manager: create DockerManager(base_url=socket_url, runtime)
    Manager-->>Cmd: manager instance
    Cmd->>Manager: run/stop/proxy/logs (userns_mode forwarded when applicable)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Poem

🐰 I hopped through sockets, sniffed each trail,
Podman and Docker both set sail.
I picked a runtime, tucked it away,
Mended mounts and userns today.
Two runtimes now — what a play!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 31.55%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title check ✅ Passed: the title 'Add support for podman as container runtime' clearly and concisely describes the main objective of the PR: implementing Podman as a supported container runtime alternative to Docker.




Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
src/vibepod/commands/run.py (1)

269-292: Podman rootless permission fixup logic is sound.

The approach of injecting chmod -R a+rwX before init commands correctly addresses the subordinate UID permission problem in rootless Podman. The use of dict.fromkeys() for deduplication while preserving order is idiomatic.

One minor edge case: paths containing single quotes would break the shell command. This is unlikely for typical config paths but worth noting.

🛡️ Optional: More robust path quoting

If you want to handle paths with special characters more defensively:

-        quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths_to_fix))
+        # Use printf %q style quoting for safety with special chars
+        quoted = " ".join(shlex.quote(p) for p in dict.fromkeys(paths_to_fix))

This would require adding import shlex at the top.
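As a quick sanity check of the suggested fix, this illustrative snippet (not code from the PR) shows how shlex.quote behaves on a safe path versus one with an embedded single quote, combined with the PR's dict.fromkeys() deduplication:

```python
import shlex

# Hypothetical example paths, including one with an embedded single quote.
paths = ["/root/.config", "/data/it's here", "/root/.config"]

# dict.fromkeys() deduplicates while preserving order, as in the PR;
# shlex.quote() leaves safe paths untouched and safely wraps the rest.
quoted = " ".join(shlex.quote(p) for p in dict.fromkeys(paths))
print(quoted)
```

Safe paths come through unchanged, so the common case produces the same command string as the original single-quote wrapping.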

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/run.py` around lines 269 - 292, The current podman_fixup
builds a shell command by joining paths into quoted using simple single-quote
wrapping, which breaks if any path contains a single quote; update the logic in
the block that builds podman_fixup (references: podman_fixup, paths_to_fix,
extra_volumes_pre, quoted) to use robust shell-quoting (e.g., shlex.quote) for
each path and import shlex at the top of the file so paths with special
characters or embedded quotes are safely quoted before constructing the chmod
command.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/podman.md`:
- Around line 86-90: Replace the overly permissive workspace permission
suggestion "chmod -R a+rwX ~/my-project" with owner/group-scoped guidance:
recommend "chmod -R u+rwX ~/my-project" and mention an optional "chmod -R g+rwX
~/my-project" line for collaborative setups; update the docs/podman.md snippet
accordingly and remove or discourage the world-writable (a+rwX) example.

In `@src/vibepod/commands/update.py`:
- Around line 34-39: The change to the info dict in update.py replaced the
single "docker" key with "runtime" and "runtime_version", which breaks consumers
of the --json output; either add a compatibility field "docker" (e.g., derive
from runtime or runtime_version and set info["docker"] =
"<runtime>/<runtime_version>" or original docker string) so both schemas are
present, or update release notes/changelog to document the breaking change;
locate the info dict in update.py (keys: "vibepod", "python", "runtime",
"runtime_version") and implement the chosen fix so existing tooling continues to
work or users are explicitly informed.

In `@src/vibepod/core/config.py`:
- Line 139: The VP_CONTAINER_RUNTIME value should be normalized (trimmed and
lowercased) before storing into container_runtime to avoid mismatches like "
Podman " or "PODMAN"; update the tuple for "VP_CONTAINER_RUNTIME" in the mapping
in src/vibepod/core/config.py to use a parser that performs v.strip().lower()
(or replace str with a named helper parser function) and optionally validate
against allowed runtimes (e.g., "docker","podman") so the parsed value assigned
to container_runtime is normalized and predictable.
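A named helper parser along the lines suggested above might look like this. This is a sketch, not the PR's code; parse_runtime and ALLOWED_RUNTIMES are hypothetical names:

```python
# Hypothetical parser for VP_CONTAINER_RUNTIME, per the suggestion above.
ALLOWED_RUNTIMES = {"docker", "podman"}


def parse_runtime(value: str) -> str:
    """Trim and lowercase the env value, validating against known runtimes."""
    normalized = value.strip().lower()
    if normalized not in ALLOWED_RUNTIMES:
        raise ValueError(f"unsupported container runtime: {value!r}")
    return normalized
```

Plugging this in place of str in the env-var mapping makes values like " Podman " and "PODMAN" resolve to the same normalized key.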

In `@src/vibepod/core/runtime.py`:
- Around line 57-62: The _probe_socket function currently creates a Docker
client (client = _docker.DockerClient(...)) and may return False from the except
branch without closing it; update the function to ensure client.close() is
always called by moving the cleanup into a finally block (or by tracking the
client and closing it in both success and failure paths). Specifically, wrap the
client creation, client.ping(), and return logic in try/except/finally,
reference the existing client variable and ensure client.close() is invoked in
the finally so no DockerClient connection leaks occur.
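The probe-with-guaranteed-cleanup pattern being asked for can be sketched like this. To stay self-contained it probes a raw unix socket with the stdlib instead of creating a docker.DockerClient, but the try/except/finally shape is the same one the review wants around client creation, client.ping(), and client.close():

```python
import socket


def probe_unix_socket(path: str, timeout: float = 3.0) -> bool:
    """Return True if a unix socket at *path* accepts a connection.

    Sketch of the cleanup pattern: the resource is always released in
    ``finally``, whether the connection attempt (the stand-in for
    ``client.ping()``) succeeds or raises.
    """
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        sock.connect(path)
        return True
    except OSError:
        return False
    finally:
        sock.close()  # never leaks, even on the failure path
```

In the real _probe_socket the same structure ensures client.close() runs on both the success and the failure path, so no DockerClient connection leaks.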

---

Nitpick comments:
In `@src/vibepod/commands/run.py`:
- Around line 269-292: The current podman_fixup builds a shell command by
joining paths into quoted using simple single-quote wrapping, which breaks if
any path contains a single quote; update the logic in the block that builds
podman_fixup (references: podman_fixup, paths_to_fix, extra_volumes_pre, quoted)
to use robust shell-quoting (e.g., shlex.quote) for each path and import shlex
at the top of the file so paths with special characters or embedded quotes are
safely quoted before constructing the chmod command.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 661f0184-e4fd-49f5-965b-613a0a9d7d83

📥 Commits

Reviewing files that changed from the base of the PR and between 944b5dd and e3638fe.

📒 Files selected for processing (18)
  • docs/configuration.md
  • docs/index.md
  • docs/podman.md
  • docs/quickstart.md
  • mkdocs.yml
  • src/vibepod/commands/list_cmd.py
  • src/vibepod/commands/logs.py
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/run.py
  • src/vibepod/commands/stop.py
  • src/vibepod/commands/update.py
  • src/vibepod/constants.py
  • src/vibepod/core/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_list.py
  • tests/test_proxy_permissions.py
  • tests/test_run.py

@bobek

bobek commented Mar 6, 2026

This is awesome. It doesn't work for me on Debian sid, though. strace revealed that requests are timing out, and indeed they take more than the current 3s timeout (src/vibepod/core/runtime.py:30). Using curl I can confirm that even a simple /version request takes almost 5s. Stracing podman shows that it does a lot of work on each incoming request (e.g. running dpkg-query).

Anyway, setting timeout=10 "solved" the issue...

~/Sources/moistureguard/global_senzomatic main ❯ curl -w "\nDNS:%{time_namelookup}s Connect:%{time_connect}s TLS:%{time_appconnect}s StartTransfer:%{time_starttransfer}s Total:%{time_total}s Speed:%{speed_download}B/s Code:%{http_code}\n" --unix-socket /run/user/1000/podman/podman.sock -v -X GET http://d/v5.7.0/libpod/version
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying /run/user/1000/podman/podman.sock:0...
* Established connection to d (/run/user/1000/podman/podman.sock port 0) from  port 0 
* using HTTP/1.x
> GET /v5.7.0/libpod/version HTTP/1.1
> Host: d
> User-Agent: curl/8.19.0-rc3
> Accept: */*
> 
* Request completely sent off
< HTTP/1.1 200 OK
< Api-Version: 1.41
< Content-Type: application/json
< Libpod-Api-Version: 5.7.0
< Server: Libpod/5.7.0 (linux)
< X-Reference-Id: 0xc000518008
< Date: Fri, 06 Mar 2026 03:46:02 GMT
< Content-Length: 912
< 
{"Platform":{"Name":"linux/amd64/debian-unknown"},"Components":[{"Name":"Podman Engine","Version":"5.7.0","Details":{"APIVersion":"5.7.0","Arch":"amd64","BuildTime":"2025-12-03T02:36:32+01:00","Experimental":"false","GitCommit":"","GoVersion":"go1.24.9","KernelVersion":"6.18.15+deb14-amd64","MinAPIVersion":"4.0.0","Os":"linux"}},{"Name":"Conmon","Version":"conmon version 2.1.13, commit: unknown","Details":{"Package":"conmon_2.1.13+ds1-2_amd64"}},{"Name":"OCI Runtime (crun)","Version":"crun version 1.26\ncommit: 3241e671f92c33b0c003cd7de319e4f32add6231\nrundir: /run/user/1000/crun\nspec: 1.0.0\n+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL","Details":{"Package":"crun_1.26-1_amd64"}}],"Version":"5.7.0","ApiVersion":"1.41","MinAPIVersion":"1.24","GitCommit":"","GoVersion":"go1.24.9","Os":"linux","Arch":"amd64","KernelVersion":"6.18.15+deb14-amd64","BuildTime":"2025-12-03T02:36:32+01:00"}
* Connection #0 to host d:80 left intact

DNS:0.000022s Connect:0.000058s TLS:0.000000s StartTransfer:4.819876s Total:4.819928s Speed:189B/s Code:200
~/Sources/moistureguard/global_senzomatic main ❯ curl -w "\nDNS:%{time_namelookup}s Connect:%{time_connect}s TLS:%{time_appconnect}s StartTransfer:%{time_starttransfer}s Total:%{time_total}s Speed:%{speed_download}B/s Code:%{http_code}\n" --unix-socket /run/user/1000/podman/podman.sock -v -X GET http://d/version              
Note: Unnecessary use of -X or --request, GET is already inferred.
*   Trying /run/user/1000/podman/podman.sock:0...
* Established connection to d (/run/user/1000/podman/podman.sock port 0) from  port 0 
* using HTTP/1.x
> GET /version HTTP/1.1
> Host: d
> User-Agent: curl/8.19.0-rc3
> Accept: */*
> 
* Request completely sent off
< HTTP/1.1 200 OK
< Api-Version: 1.41
< Content-Type: application/json
< Libpod-Api-Version: 5.7.0
< Server: Libpod/5.7.0 (linux)
< X-Reference-Id: 0xc000682000
< Date: Fri, 06 Mar 2026 03:46:26 GMT
< Content-Length: 912
< 
{"Platform":{"Name":"linux/amd64/debian-unknown"},"Components":[{"Name":"Podman Engine","Version":"5.7.0","Details":{"APIVersion":"5.7.0","Arch":"amd64","BuildTime":"2025-12-03T02:36:32+01:00","Experimental":"false","GitCommit":"","GoVersion":"go1.24.9","KernelVersion":"6.18.15+deb14-amd64","MinAPIVersion":"4.0.0","Os":"linux"}},{"Name":"Conmon","Version":"conmon version 2.1.13, commit: unknown","Details":{"Package":"conmon_2.1.13+ds1-2_amd64"}},{"Name":"OCI Runtime (crun)","Version":"crun version 1.26\ncommit: 3241e671f92c33b0c003cd7de319e4f32add6231\nrundir: /run/user/1000/crun\nspec: 1.0.0\n+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL","Details":{"Package":"crun_1.26-1_amd64"}}],"Version":"5.7.0","ApiVersion":"1.41","MinAPIVersion":"1.24","GitCommit":"","GoVersion":"go1.24.9","Os":"linux","Arch":"amd64","KernelVersion":"6.18.15+deb14-amd64","BuildTime":"2025-12-03T02:36:32+01:00"}
* Connection #0 to host d:80 left intact

DNS:0.000025s Connect:0.000063s TLS:0.000000s StartTransfer:4.676405s Total:4.676445s Speed:195B/s Code:200

@bobek

bobek commented Mar 6, 2026

One more thing: mapping UIDs/GIDs. It is quite typical with Podman to have ID mapping configured, e.g.

$ cat /etc/subuid
bobek:100000:65536
$ id
uid=1000(bobek) gid=1000(bobek) groups=1000(bobek)

Now, let me run container in /tmp/xx as my user 1000(bobek):

/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek bobek  40 Mar  6 05:10 .
drwxrwxrwt 24 root  root  700 Mar  6 05:03 ..
/tmp/xx ❯ podman run -it --rm -v .:/workspace debian:bookworm-slim /bin/bash
root@422b730a12eb:/# cd /workspace/
root@422b730a12eb:/workspace# id
uid=0(root) gid=0(root) groups=0(root)
root@422b730a12eb:/workspace# touch first
root@422b730a12eb:/workspace# useradd testuser
root@422b730a12eb:/workspace# su - testuser
su: warning: cannot change directory to /home/testuser: No such file or directory
$ bash
testuser@422b730a12eb:/workspace$ id
uid=1000(testuser) gid=1000(testuser) groups=1000(testuser)
testuser@422b730a12eb:/workspace$ touch second-testuser
testuser@422b730a12eb:/workspace$ 
exit
$ 
root@422b730a12eb:/workspace# 
exit
/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek  bobek   80 Mar  6 05:11 .
drwxrwxrwt 24 root   root   700 Mar  6 05:03 ..
-rw-r--r--  1 bobek  bobek    0 Mar  6 05:11 first
-rw-r--r--  1 100999 100999   0 Mar  6 05:11 second-testuser
/tmp/xx ❯ 

As you can see, the container's root user was mapped to my normal UID (1000), while the container's user 1000 was shifted into the subordinate range (100999 on the host). I can trivially solve this ad hoc with --userns keep-id:

/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek bobek  40 Mar  6 05:15 .
drwxrwxrwt 24 root  root  700 Mar  6 05:03 ..
/tmp/xx ❯ podman run -it --userns keep-id --rm -v .:/workspace debian:bookworm-slim /bin/bash
bobek@3295edc546bf:/$ id 
uid=1000(bobek) gid=1000(bobek) groups=1000(bobek)
bobek@3295edc546bf:/$ cd /workspace/
bobek@3295edc546bf:/workspace$ touch keep-id
bobek@3295edc546bf:/workspace$ 
exit
/tmp/xx ❯ ll
total 0
drwxrwxrwx  2 bobek bobek  60 Mar  6 05:15 .
drwxrwxrwt 24 root  root  700 Mar  6 05:03 ..
-rw-r--r--  1 bobek bobek   0 Mar  6 05:15 keep-id
/tmp/xx ❯

So it would be really great if we could get control over setting userns for containers.
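One way to thread such a userns option into container creation, sketched here as a hypothetical helper (build_run_kwargs is illustrative, not code from this PR; the docker SDK's containers.run does accept a userns_mode keyword, which Podman's Docker-compatible API honors):

```python
from __future__ import annotations


def build_run_kwargs(image: str, userns_mode: str | None = None) -> dict:
    """Build kwargs for client.containers.run(**kwargs).

    Only pass userns_mode when explicitly set, so Docker (which does not
    understand Podman-style values like "keep-id") keeps its default.
    """
    kwargs: dict = {"image": image, "detach": True}
    if userns_mode:
        kwargs["userns_mode"] = userns_mode  # e.g. "keep-id" for rootless Podman
    return kwargs
```

With a scheme like this, `--userns keep-id` (or VP_CONTAINER_USERNS_MODE) would flow through to the container unchanged, while Docker users are unaffected.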

@nezhar nezhar force-pushed the podman branch 2 times, most recently from f811416 to 019cb23 on March 12, 2026 at 19:58
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tests/test_run.py (1)

271-272: ⚠️ Potential issue | 🟠 Major

The auto-pull tests are still stubbing the old construction path.

run() now reads get_manager(...).runtime, but these cases still patch run_cmd.DockerManager, so the stub is bypassed entirely. That matches the CI failures at Lines 328, 352, 380, and 394. After switching the monkeypatch to run_cmd.get_manager, _StubDockerManager also needs a runtime attribute or these tests will fail on manager.runtime.

🧪 Minimal fix
 class _StubDockerManager:
     """Minimal DockerManager stub that records pull_image calls."""

     def __init__(self) -> None:
+        self.runtime = "docker"
         self.pulled: list[str] = []
 ...
-    monkeypatch.setattr(run_cmd, "DockerManager", lambda: stub)
+    monkeypatch.setattr(run_cmd, "get_manager", lambda **kwargs: stub)

Apply the same monkeypatch replacement in each auto-pull test.

Also applies to: 321-394

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_run.py` around lines 271 - 272, Tests are patching
run_cmd.DockerManager but run() now uses get_manager(...).runtime, so update the
auto-pull tests to monkeypatch run_cmd.get_manager instead of
run_cmd.DockerManager and have the replacement return the existing
_StubDockerManager instance; additionally add a runtime attribute to
_StubDockerManager (e.g., set manager.runtime = <expected runtime stub> or
implement a runtime property on _StubDockerManager) so manager.runtime access in
run() does not fail; update all affected test cases around the auto-pull blocks
(those failing at lines ~328, 352, 380, 394) to use get_manager monkeypatch and
ensure the stub exposes runtime.
src/vibepod/commands/list_cmd.py (1)

59-66: ⚠️ Potential issue | 🟠 Major

Don't suppress explicit runtime-resolution failures here.

This fallback now also swallows bad or unavailable explicit runtime selections and returns running: [] / "No running agents" instead of surfacing the error. That is misleading for --runtime ..., VP_CONTAINER_RUNTIME, or a saved container_runtime preference. Please only keep the silent fallback for the pure auto-detect case when --running is not set.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/list_cmd.py` around lines 59 - 66, The try/except around
get_manager/list_managed is currently swallowing DockerClientError even when an
explicit runtime was requested; change it so DockerClientError is only silently
handled for the pure auto-detect case (no explicit runtime selection) and only
when not in the `running` path. Specifically, in the except DockerClientError as
exc block, check the `runtime` (the runtime_override passed to get_manager) and
`running` flags: if runtime is None/empty (auto-detect) and not running, set
containers = [] as the fallback; otherwise log the error (using error(str(exc))
or re-raise) and exit with typer.Exit(EXIT_DOCKER_NOT_RUNNING) so explicit
runtime failures surface. Ensure you reference the existing symbols get_manager,
runtime_override/runtime, DockerClientError, running, list_managed, typer.Exit,
and EXIT_DOCKER_NOT_RUNNING when making the change.
♻️ Duplicate comments (1)
src/vibepod/commands/update.py (1)

33-39: ⚠️ Potential issue | 🟡 Minor

version --json still has a breaking schema change.

Replacing the historical docker field with runtime / runtime_version will break existing parsers. Please either keep a compatibility key for one release cycle or call this out explicitly in release notes/changelog.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/update.py` around lines 33 - 39, The JSON schema change
removed the historical "docker" field causing breakage; in the info dict in
update.py (where _runtime_version(runtime_override=runtime) is used to set
"runtime" and "runtime_version"), add back the legacy keys "docker" and
"docker_version" populated from the same rt_name and rt_version values (or map
appropriately) so existing parsers keep working for one release cycle, and mark
them as deprecated in a comment near the info construction.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/vibepod/commands/run.py`:
- Around line 274-296: The Podman fixup list currently only adds
spec.config_mount_path and extra agent volumes but misses the user workspace
bind (mounted as /workspace), so add the workspace mount path to paths_to_fix
when manager.runtime == RUNTIME_PODMAN; update the block that builds
podman_fixup (references: podman_fixup, paths_to_fix, spec.config_mount_path,
_agent_extra_volumes, agent_config_dir) to also include the workspace mount
(e.g. '/workspace' or spec.workspace_mount if available) before quoting and
creating podman_fixup, so that the chmod command covers the project workspace
the DockerManager.run_agent binds to /workspace and prevents rootless Podman
permission issues; keep the rest of the flow (manager.resolve_launch_command,
_init_entrypoint) unchanged.

In `@src/vibepod/core/docker.py`:
- Around line 93-116: The current _prepare_volume_dir makes VibePod-managed dirs
world-readable/writable (0o777/0o666) which exposes sensitive state; instead
when runtime == RUNTIME_PODMAN change ownership and restrictive perms rather
than broad chmod: determine the container-side UID/GID mapping for rootless
Podman (query Podman or derive subordinate uid range), chown the path and its
children to that mapped UID:GID and set restrictive perms (e.g. 0o700 for dirs,
0o600 for files); if you cannot resolve a mapped UID:GID, fall back to applying
POSIX ACLs (setfacl) granting the mapped container UID access while keeping
owner-only perms for others; update _prepare_volume_dir to perform chown +
0o700/0o600 (or setfacl) and remove any 0o777/0o666 usage so agent config dirs,
log/proxy state, and proxy CA remain private.
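The owner-only fallback part of that suggestion can be sketched as follows. This is illustrative only: it covers just the restrictive-permissions branch, since the chown-to-mapped-subuid and setfacl branches need elevated privileges or the acl tooling and are omitted here; restrict_tree is a hypothetical name:

```python
from pathlib import Path


def restrict_tree(path: Path) -> None:
    """Apply owner-only permissions instead of world-writable 0o777/0o666.

    Directories get 0o700 and files 0o600, so other local users cannot
    read or tamper with agent config, proxy state, or CA material.
    """
    path.chmod(0o700)
    for child in path.rglob("*"):
        child.chmod(0o700 if child.is_dir() else 0o600)
```

In the full fix this would run after chowning the tree to the UID the rootless Podman container maps to, so the container process retains access while everyone else loses it.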

In `@src/vibepod/core/runtime.py`:
- Around line 52-62: The _probe_socket function currently uses a 3-second
DockerClient timeout which can produce false negatives for slower Podman
responses; update the timeout passed to _docker.DockerClient in _probe_socket
(and any related client creation there) to a higher value such as 6 seconds (or
8s) so Podman's ~4.7–4.8s startup ping/version latency is accommodated, keeping
the rest of the try/except behavior the same.

---

Outside diff comments:
In `@src/vibepod/commands/list_cmd.py`:
- Around line 59-66: The try/except around get_manager/list_managed is currently
swallowing DockerClientError even when an explicit runtime was requested; change
it so DockerClientError is only silently handled for the pure auto-detect case
(no explicit runtime selection) and only when not in the `running` path.
Specifically, in the except DockerClientError as exc block, check the `runtime`
(the runtime_override passed to get_manager) and `running` flags: if runtime is
None/empty (auto-detect) and not running, set containers = [] as the fallback;
otherwise log the error (using error(str(exc)) or re-raise) and exit with
typer.Exit(EXIT_DOCKER_NOT_RUNNING) so explicit runtime failures surface. Ensure
you reference the existing symbols get_manager, runtime_override/runtime,
DockerClientError, running, list_managed, typer.Exit, and
EXIT_DOCKER_NOT_RUNNING when making the change.

In `@tests/test_run.py`:
- Around line 271-272: Tests are patching run_cmd.DockerManager but run() now
uses get_manager(...).runtime, so update the auto-pull tests to monkeypatch
run_cmd.get_manager instead of run_cmd.DockerManager and have the replacement
return the existing _StubDockerManager instance; additionally add a runtime
attribute to _StubDockerManager (e.g., set manager.runtime = <expected runtime
stub> or implement a runtime property on _StubDockerManager) so manager.runtime
access in run() does not fail; update all affected test cases around the
auto-pull blocks (those failing at lines ~328, 352, 380, 394) to use get_manager
monkeypatch and ensure the stub exposes runtime.

---

Duplicate comments:
In `@src/vibepod/commands/update.py`:
- Around line 33-39: The JSON schema change removed the historical "docker"
field causing breakage; in the info dict in update.py (where
_runtime_version(runtime_override=runtime) is used to set "runtime" and
"runtime_version"), add back the legacy keys "docker" and "docker_version"
populated from the same rt_name and rt_version values (or map appropriately) so
existing parsers keep working for one release cycle, and mark them as deprecated
in a comment near the info construction.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 6c6e202f-a858-456e-8570-c4fe9db2d44d

📥 Commits

Reviewing files that changed from the base of the PR and between e3638fe and 019cb23.

📒 Files selected for processing (18)
  • docs/configuration.md
  • docs/index.md
  • docs/podman.md
  • docs/quickstart.md
  • mkdocs.yml
  • src/vibepod/commands/list_cmd.py
  • src/vibepod/commands/logs.py
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/run.py
  • src/vibepod/commands/stop.py
  • src/vibepod/commands/update.py
  • src/vibepod/constants.py
  • src/vibepod/core/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_list.py
  • tests/test_proxy_permissions.py
  • tests/test_run.py
🚧 Files skipped from review as they are similar to previous changes (6)
  • docs/configuration.md
  • docs/quickstart.md
  • tests/test_proxy_permissions.py
  • src/vibepod/constants.py
  • src/vibepod/core/config.py
  • docs/podman.md

Comment thread src/vibepod/commands/run.py
Comment on lines +93 to +116
def _prepare_volume_dir(self, path: Path) -> None:
    """Ensure *path* is accessible inside a rootless Podman container.

    Rootless Podman maps the host UID to root inside the container, but
    a non-root process started via ``su`` / ``gosu`` in the entrypoint
    gets a subordinate UID that cannot read host files with standard
    permissions. Making VibePod-managed directories world-accessible
    (``0o777``) avoids "Permission denied" errors without needing
    ``userns_mode=keep-id`` (which breaks entrypoints that call ``su``).

    This is only applied to directories VibePod itself creates — never
    to the user's workspace.
    """
    if self.runtime != RUNTIME_PODMAN:
        return
    try:
        path.chmod(0o777)
        for child in path.rglob("*"):
            try:
                child.chmod(0o777 if child.is_dir() else 0o666)
            except OSError:
                pass
    except OSError:
        pass
Contributor


⚠️ Potential issue | 🟠 Major

Avoid making VibePod state world-readable and world-writable.

_prepare_volume_dir() recursively flips managed trees to 0777/0666 for Podman. In this PR that helper is used for agent config directories, log/proxy state, and the proxy CA directory, so on a multi-user machine other local users can read or tamper with credentials, session data, and CA material. Please use a runtime-specific ownership/ACL strategy instead of relaxing host permissions this broadly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/docker.py` around lines 93 - 116, The current
_prepare_volume_dir makes VibePod-managed dirs world-readable/writable
(0o777/0o666) which exposes sensitive state; instead when runtime ==
RUNTIME_PODMAN change ownership and restrictive perms rather than broad chmod:
determine the container-side UID/GID mapping for rootless Podman (query Podman
or derive subordinate uid range), chown the path and its children to that mapped
UID:GID and set restrictive perms (e.g. 0o700 for dirs, 0o600 for files); if you
cannot resolve a mapped UID:GID, fall back to applying POSIX ACLs (setfacl)
granting the mapped container UID access while keeping owner-only perms for
others; update _prepare_volume_dir to perform chown + 0o700/0o600 (or setfacl)
and remove any 0o777/0o666 usage so agent config dirs, log/proxy state, and
proxy CA remain private.

Comment thread src/vibepod/core/runtime.py
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/vibepod/commands/run.py (1)

285-291: ⚠️ Potential issue | 🟠 Major

Podman permission fix still misses the /workspace bind mount.

Line 286 builds paths_to_fix without the main workspace mount, so rootless Podman can still fail on project files written/read under /workspace.

🔧 Suggested patch
     if manager.runtime == RUNTIME_PODMAN:
-        paths_to_fix = [spec.config_mount_path]
+        paths_to_fix = [spec.config_mount_path, "/workspace"]
         extra_volumes_pre = _agent_extra_volumes(selected_agent, agent_config_dir(selected_agent))
         if extra_volumes_pre:
             paths_to_fix.extend(cp for _, cp, _ in extra_volumes_pre)
         quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths_to_fix))
         podman_fixup = [f"chmod -R a+rwX {quoted} 2>/dev/null || true"]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/run.py` around lines 285 - 291, The podman permission
fix builds paths_to_fix only from spec.config_mount_path and
_agent_extra_volumes, missing the main workspace bind mount (/workspace); update
the block where manager.runtime == RUNTIME_PODMAN to also include the primary
workspace mount in paths_to_fix (e.g. append spec.workspace_mount_path if that
attribute exists, or fall back to adding the literal '/workspace' before
deduping), so that quoted and podman_fixup cover the workspace as well as the
config and extra volumes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/vibepod/core/runtime.py`:
- Around line 172-179: The prompt currently uses SUPPORTED_RUNTIMES as choices
which can cause a KeyError when accessing available[choice]; change Prompt.ask
to present only actual detected runtimes (use available.keys() or
list(available)) so users can only pick an entry present in the available
mapping, and after selection use available.get(choice) (or validate existence)
before calling save_runtime_preference and returning to avoid KeyError; update
the block around sys.stdin.isatty(), Prompt.ask, save_runtime_preference, and
the return to use these safeguards.

---

Duplicate comments:
In `@src/vibepod/commands/run.py`:
- Around line 285-291: The podman permission fix builds paths_to_fix only from
spec.config_mount_path and _agent_extra_volumes, missing the main workspace bind
mount (/workspace); update the block where manager.runtime == RUNTIME_PODMAN to
also include the primary workspace mount in paths_to_fix (e.g. append
spec.workspace_mount_path if that attribute exists, or fall back to adding the
literal '/workspace' before deduping), so that quoted and podman_fixup cover the
workspace as well as the config and extra volumes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b8af64c2-7ead-4138-b984-ad82e6eabf78

📥 Commits

Reviewing files that changed from the base of the PR and between 019cb23 and 7f040ac.

📒 Files selected for processing (13)
  • docs/configuration.md
  • docs/podman.md
  • src/vibepod/cli.py
  • src/vibepod/commands/logs.py
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/run.py
  • src/vibepod/core/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_config.py
  • tests/test_proxy_permissions.py
  • tests/test_run.py
  • tests/test_runtime.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • docs/configuration.md
  • docs/podman.md

Comment thread src/vibepod/core/runtime.py Outdated
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/vibepod/core/docker.py (1)

287-294: ⚠️ Potential issue | 🟠 Major

Recreate reused service containers when launch options change.

Both methods return any running service container before comparing it to the requested userns_mode. After a user updates config or reruns vp proxy start --userns ... / vp ui --userns ..., the old container keeps running with the previous namespace setting until it is manually stopped or removed.

Also applies to: 350-358

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/docker.py` around lines 287 - 294, The current logic in
find_datasette() handling existing containers returns a running container
without checking if its launch options (notably userns_mode) match the requested
setting; update the check after existing.reload() to read the container's
current user namespace from existing.attrs (e.g., HostConfig.UsernsMode) and
compare it to the requested userns_mode parameter (and keep the existing
PROXY_DB_PATH Env check), and if they differ remove the container
(existing.remove(force=True)) so it will be recreated with the correct
userns_mode; apply the same comparison/fix in the other similar block around the
code referenced at 350-358.
♻️ Duplicate comments (3)
src/vibepod/core/runtime.py (2)

66-81: ⚠️ Potential issue | 🟡 Minor

Always close probe clients on failure paths.

If client.ping() throws, the function returns False without closing the client first. Put cleanup in a finally block so repeated runtime detection doesn't leak open connections or file descriptors.

🔧 Proposed fix
 def _probe_socket(base_url: str) -> bool:
     """Return True if a container engine responds on *base_url*."""
+    client = None
     try:
         import docker as _docker
 
         # Podman's Docker-compatible API can take several seconds to answer
         # even simple probes on some hosts, so use a wider timeout here.
-        client = _docker.DockerClient(
+        client = _docker.DockerClient(
             base_url=base_url,
             timeout=_runtime_probe_timeout(),
         )
         client.ping()
-        client.close()
         return True
     except Exception:
         return False
+    finally:
+        if client is not None:
+            try:
+                client.close()
+            except Exception:
+                pass
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/runtime.py` around lines 66 - 81, The _probe_socket function
may leak the Docker client if client.ping() raises; ensure the Docker client
created in _probe_socket is always closed by moving client.close() into a
finally block (or use a context manager) and only call client.ping() between
creation and cleanup; update the code around the DockerClient creation and the
client.ping()/client.close() calls in _probe_socket so any exception path still
closes the client before returning False.

137-149: ⚠️ Potential issue | 🟡 Minor

Trim explicit runtime values before validating them.

lower() alone makes values like " Podman " become " podman ", which still falls into the unknown-runtime branch for CLI, env, or saved-config inputs.

🔧 Proposed fix
     if explicit is not None:
-        explicit = explicit.lower()
+        explicit = explicit.strip().lower()
         if explicit not in SUPPORTED_RUNTIMES:
             raise RuntimeError(
                 f"Unknown container runtime '{explicit}'. "
                 f"Supported: {', '.join(SUPPORTED_RUNTIMES)}"
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/runtime.py` around lines 137 - 149, The explicit runtime
string (variable explicit set from override,
os.environ.get("VP_CONTAINER_RUNTIME"), or cfg.get("container_runtime")) must be
trimmed of surrounding whitespace before validation; modify the runtime
resolution code so that after assigning explicit from any source you call
explicit = explicit.strip() (or explicit = explicit.strip() if not None) and
then explicit = explicit.lower() before checking membership in
SUPPORTED_RUNTIMES or comparing to RUNTIME_AUTO; update the block around
explicit, detect_available_runtimes(), and the explicit not in
SUPPORTED_RUNTIMES check to use the stripped+lowercased value.
src/vibepod/core/docker.py (1)

93-117: ⚠️ Potential issue | 🟠 Major

Don't recursively make mounted paths world-writable.

For Podman this turns VibePod-managed config, logs, proxy state, and CA material into 0777/0666, and run_agent() now applies the same helper to every extra_volumes host path. On a multi-user machine this exposes sensitive local state to reading or tampering by other users. Use a runtime-specific ownership/ACL strategy instead of broad chmod.

Also applies to: 214-216, 300-301, 364-365

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/core/docker.py` around lines 93 - 117, The current
_prepare_volume_dir implementation recursively chmods paths (and similar helper
uses in run_agent/extra_volumes) which makes mounted host paths world-writable;
change it to avoid touching bind-mounted or user-provided volumes and to set
safe ownership/permissions only on directories VibePod created: detect mounts
(use os.path.ismount or compare st_dev/st_ino with parent) and skip them, only
operate on paths that we created (or are under our managed data directories),
and instead of 0o777/0666 use restrictive perms (e.g., 0o750 for dirs, 0o640 for
files) and prefer chown/chacl to set appropriate owner/ACL for the runtime UID
where available; update all call sites (e.g., run_agent handling of
extra_volumes) to not call the helper for external mounts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/vibepod/core/docker.py`:
- Around line 244-247: The code unconditionally forwards userns_mode into
run_kwargs causing Docker client failures; update the places where run_kwargs is
built (the blocks that set run_kwargs["userns_mode"] alongside
run_kwargs["user"]) to validate/translate userns_mode against self.runtime: if
self.runtime indicates Docker only allow or translate to 'host' (or omit it) and
for Podman allow keep-id/nomap/auto, and apply the same validation at all
occurrences where userns_mode is assigned (the three run_kwargs construction
sites in this module) so that Docker never receives unsupported Podman modes.
Ensure checks reference self.runtime and run_kwargs (and the local userns_mode
variable) and skip or map the value before adding run_kwargs["userns_mode"].

---

Outside diff comments:
In `@src/vibepod/core/docker.py`:
- Around line 287-294: The current logic in find_datasette() handling existing
containers returns a running container without checking if its launch options
(notably userns_mode) match the requested setting; update the check after
existing.reload() to read the container's current user namespace from
existing.attrs (e.g., HostConfig.UsernsMode) and compare it to the requested
userns_mode parameter (and keep the existing PROXY_DB_PATH Env check), and if
they differ remove the container (existing.remove(force=True)) so it will be
recreated with the correct userns_mode; apply the same comparison/fix in the
other similar block around the code referenced at 350-358.

---

Duplicate comments:
In `@src/vibepod/core/docker.py`:
- Around line 93-117: The current _prepare_volume_dir implementation recursively
chmods paths (and similar helper uses in run_agent/extra_volumes) which makes
mounted host paths world-writable; change it to avoid touching bind-mounted or
user-provided volumes and to set safe ownership/permissions only on directories
VibePod created: detect mounts (use os.path.ismount or compare st_dev/st_ino
with parent) and skip them, only operate on paths that we created (or are under
our managed data directories), and instead of 0o777/0666 use restrictive perms
(e.g., 0o750 for dirs, 0o640 for files) and prefer chown/chacl to set
appropriate owner/ACL for the runtime UID where available; update all call sites
(e.g., run_agent handling of extra_volumes) to not call the helper for external
mounts.

In `@src/vibepod/core/runtime.py`:
- Around line 66-81: The _probe_socket function may leak the Docker client if
client.ping() raises; ensure the Docker client created in _probe_socket is
always closed by moving client.close() into a finally block (or use a context
manager) and only call client.ping() between creation and cleanup; update the
code around the DockerClient creation and the client.ping()/client.close() calls
in _probe_socket so any exception path still closes the client before returning
False.
- Around line 137-149: The explicit runtime string (variable explicit set from
override, os.environ.get("VP_CONTAINER_RUNTIME"), or
cfg.get("container_runtime")) must be trimmed of surrounding whitespace before
validation; modify the runtime resolution code so that after assigning explicit
from any source you call explicit = explicit.strip() (or explicit =
explicit.strip() if not None) and then explicit = explicit.lower() before
checking membership in SUPPORTED_RUNTIMES or comparing to RUNTIME_AUTO; update
the block around explicit, detect_available_runtimes(), and the explicit not in
SUPPORTED_RUNTIMES check to use the stripped+lowercased value.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: de41294f-e499-4085-80f7-2cef77783a79

📥 Commits

Reviewing files that changed from the base of the PR and between 7f040ac and ace32f6.

📒 Files selected for processing (13)
  • docs/configuration.md
  • docs/podman.md
  • src/vibepod/cli.py
  • src/vibepod/commands/logs.py
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/run.py
  • src/vibepod/core/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_config.py
  • tests/test_proxy_permissions.py
  • tests/test_run.py
  • tests/test_runtime.py
🚧 Files skipped from review as they are similar to previous changes (5)
  • docs/configuration.md
  • tests/test_runtime.py
  • src/vibepod/commands/run.py
  • src/vibepod/commands/logs.py
  • docs/podman.md

Comment thread src/vibepod/core/docker.py Outdated
Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
tests/test_runtime.py (1)

110-122: Consider adding a test for _probe_socket failure cleanup.

The current tests verify successful probing, but there's no test verifying that the Docker client is properly closed when ping() fails. This would help catch the resource leak identified in _probe_socket.

Example test
def test_probe_socket_closes_client_on_failure(monkeypatch) -> None:
    recorded: dict[str, bool] = {}

    class _FakeClient:
        def __init__(self, *, base_url: str, timeout: float) -> None:
            pass

        def ping(self) -> None:
            raise ConnectionError("Daemon not responding")

        def close(self) -> None:
            recorded["closed"] = True

    monkeypatch.delenv("VP_RUNTIME_PROBE_TIMEOUT", raising=False)
    monkeypatch.setitem(sys.modules, "docker", SimpleNamespace(DockerClient=_FakeClient))

    assert runtime._probe_socket("unix:///tmp/unavailable.sock") is False
    assert recorded.get("closed") is True, "Client should be closed even on failure"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/test_runtime.py` around lines 110 - 122, Add a test that asserts Docker
client is closed when _probe_socket fails: mock a fake DockerClient class with
ping() raising (e.g., ConnectionError) and close() recording a flag, monkeypatch
docker.DockerClient to that fake, call
runtime._probe_socket("unix:///tmp/unavailable.sock") and assert it returns
False and that the recorded flag shows close() was called; ensure
VP_RUNTIME_PROBE_TIMEOUT env var is unset in the test (monkeypatch.delenv(...,
raising=False)) so probe uses the default timeout and the test references
runtime._probe_socket, docker.DockerClient, ping, and close to locate relevant
code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@tests/test_runtime.py`:
- Around line 110-122: Add a test that asserts Docker client is closed when
_probe_socket fails: mock a fake DockerClient class with ping() raising (e.g.,
ConnectionError) and close() recording a flag, monkeypatch docker.DockerClient
to that fake, call runtime._probe_socket("unix:///tmp/unavailable.sock") and
assert it returns False and that the recorded flag shows close() was called;
ensure VP_RUNTIME_PROBE_TIMEOUT env var is unset in the test
(monkeypatch.delenv(..., raising=False)) so probe uses the default timeout and
the test references runtime._probe_socket, docker.DockerClient, ping, and close
to locate relevant code.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a4c23591-6a3e-4d48-a14f-db09f906be91

📥 Commits

Reviewing files that changed from the base of the PR and between ace32f6 and 7613ef1.

📒 Files selected for processing (8)
  • README.md
  • docs/configuration.md
  • docs/podman.md
  • src/vibepod/commands/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_config.py
  • tests/test_runtime.py
✅ Files skipped from review due to trivial changes (2)
  • README.md
  • docs/podman.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • tests/test_config.py

@nezhar
Collaborator Author

nezhar commented Mar 13, 2026

@coderabbitai review

@coderabbitai
Contributor

coderabbitai bot commented Mar 13, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@greptile-apps

greptile-apps bot commented Mar 14, 2026

Greptile Summary

This PR introduces first-class Podman support as an alternative to Docker across the entire VibePod CLI: auto-detection of available runtimes, a new vp config runtime command, --runtime/--userns flags on every subcommand, rootless Podman permission fixes, and updated documentation. The architecture is sound and the integration test matrix is comprehensive.

Key issues found:

  • _resolved_userns_mode silently drops valid parametric Podman modes (src/vibepod/core/docker.py:138–139). The strict set check normalized in {"auto", "keep-id", "nomap"} discards any extended form such as keep-id:uid=1000 or auto:uidmapping=0:1000:1 without raising an error or warning, so users who configure these modes get silent no-ops.
  • OSError from save_runtime_preference is uncaught in get_manager (src/vibepod/core/docker.py:60–63). When stdin is a TTY and both runtimes are available, resolve_runtime calls save_runtime_preference, which can raise OSError (e.g., read-only config directory). get_manager only catches RuntimeError and ValueError, so the OSError propagates to the command handler as an unhandled exception instead of a friendly DockerClientError.
  • In-container chmod -R a+rwX Podman fixup (src/vibepod/commands/run.py:330–331). The injected entrypoint chmod runs inside the container on bind-mounted directories, propagating world-writable permissions back to the host. Combined with _prepare_volume_dir's 0o777 host-side chmod, API keys and credentials in managed directories become world-readable on the host. See also the related _prepare_volume_dir thread.
  • Runtime detection runs eagerly before checking explicit overrides (src/vibepod/core/runtime.py:185, noted in a previous thread). detect_available_runtimes() probes all sockets before the explicit-override branch is evaluated, adding unnecessary latency on every command when a runtime is already pinned.
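For the first bullet, a fix could validate only the base keyword and preserve any options after the colon, rejecting truly unknown modes loudly instead of silently dropping them. A hedged sketch (the helper name and the exact mode set are illustrative, not VibePod's actual `_resolved_userns_mode`):

```python
_BASE_MODES = {"auto", "keep-id", "nomap"}


def validate_userns_mode(value: str) -> str:
    """Accept parametric Podman modes such as 'keep-id:uid=1000' by
    checking only the part before the first ':'; raise on unknown
    bases rather than returning a silent no-op."""
    base = value.split(":", 1)[0].strip().lower()
    if base not in _BASE_MODES:
        raise ValueError(f"unsupported userns mode: {value!r}")
    return value
```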

Confidence Score: 2/5

  • Not safe to merge as-is: credential directories can be made world-readable on the host, a valid Podman userns mode config is silently ignored, and an uncaught OSError produces a raw traceback in an interactive flow.
  • Three actionable issues of varying severity: the chmod a+rwX host permission bleed-through on credential dirs is a real security concern, _resolved_userns_mode silently drops valid parametric Podman modes (user misconfiguration is invisible), and the uncaught OSError from save_runtime_preference causes a raw Python traceback instead of a graceful error. The runtime detection latency issue (noted in a previous thread) is a performance concern. The overall architecture and test coverage are otherwise solid.
  • src/vibepod/core/docker.py (userns mode filtering and OSError handling) and src/vibepod/commands/run.py (in-container chmod entrypoint fixup) need attention before merging.

Important Files Changed

Filename Overview
src/vibepod/core/runtime.py New module implementing runtime detection, selection, and preference persistence; resolve_runtime always calls detect_available_runtimes() before checking explicit overrides (noted in previous threads), and the non-TTY both-runtimes path silently assumes Docker is always present in available.
src/vibepod/core/docker.py Core Docker/Podman management; _resolved_userns_mode silently drops valid parametric Podman modes (e.g. keep-id:uid=1000); get_manager does not catch OSError from save_runtime_preference; _prepare_volume_dir sets 0o777 world-writable on credential dirs (noted in previous threads).
src/vibepod/commands/run.py Run command updated to support --runtime and --userns flags; Podman fixup injects chmod -R a+rwX into the container entrypoint which propagates world-writable permissions back to host bind mounts containing API keys; proxy URL is now resolved from container IP with hostname fallback.
src/vibepod/commands/config.py New vp config runtime subcommand added to view/set the global container runtime preference; input validation is thorough and errors are handled cleanly.
src/vibepod/core/config.py New container_runtime and container_userns_mode config keys added with env-var overrides; get_container_userns_mode helper added; normalisation is correct.
tests/test_runtime.py New unit tests for runtime detection and selection; covers probe timeout, interactive prompting, env-var normalisation, and error wrapping; good coverage.
tests/test_proxy_permissions.py Extended proxy permissions tests; covers userns mode forwarding for Podman, omission for Docker, and network reconnection of existing proxy containers.
.github/workflows/runtime-integration.yml New CI workflow running integration tests against 7 runtime scenarios; matrix is well-structured with fail-fast disabled; probe timeout set to 20s for CI.
docs/podman.md New comprehensive Podman setup guide; covers prerequisites, runtime selection methods, known limitations (interactive attach, userns mapping, volume permissions, rootful mode).

Sequence Diagram

sequenceDiagram
    participant CLI as CLI Command
    participant GM as get_manager()
    participant RR as resolve_runtime()
    participant DAR as detect_available_runtimes()
    participant PS as _probe_socket()
    participant SRP as save_runtime_preference()
    participant DM as DockerManager

    CLI->>GM: get_manager(runtime_override, config)
    GM->>RR: resolve_runtime(override, config)
    Note over RR: Check override → env → config<br/>for explicit runtime
    RR->>DAR: detect_available_runtimes()
    DAR->>PS: probe docker socket
    PS-->>DAR: True/False
    DAR->>PS: probe podman socket(s)
    PS-->>DAR: True/False
    DAR-->>RR: {docker: url, podman: url}

    alt explicit runtime set
        RR-->>GM: (runtime_name, socket_url)
    else single runtime found
        RR-->>GM: (runtime_name, socket_url)
    else both runtimes, TTY
        RR->>RR: Prompt.ask()
        RR->>SRP: save_runtime_preference(choice)
        Note over SRP: OSError not caught by get_manager!
        RR-->>GM: (choice, socket_url)
    else both runtimes, non-TTY
        RR-->>GM: (RUNTIME_DOCKER, docker_url)
    end

    GM->>DM: DockerManager(base_url, runtime)
    DM-->>GM: manager
    GM-->>CLI: manager

Last reviewed commit: "Add integration test..."

Comment thread src/vibepod/core/docker.py
Comment thread src/vibepod/core/runtime.py
Comment thread src/vibepod/core/docker.py
Comment thread src/vibepod/core/docker.py
@nezhar nezhar force-pushed the podman branch 2 times, most recently from 5763d07 to 7a77a60 on April 9, 2026 at 19:59
@VibePod VibePod deleted a comment from greptile-apps bot Apr 9, 2026
@VibePod VibePod deleted a comment from greptile-apps bot Apr 9, 2026
@VibePod VibePod deleted a comment from greptile-apps bot Apr 9, 2026
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/vibepod/commands/update.py (1)

34-39: ⚠️ Potential issue | 🟡 Minor

Preserve legacy JSON field for compatibility.

Line 34-Line 39 removes the previous docker key from --json output, which can break existing scripts that parse the old schema. Consider keeping a compatibility field during a transition period.

Suggested compatibility patch
     info = {
         "vibepod": __version__,
         "python": platform.python_version(),
         "runtime": rt_name,
         "runtime_version": rt_version,
+        # Backward-compatibility for existing automation consuming older schema.
+        "docker": rt_version,
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/update.py` around lines 34 - 39, The JSON output removed
the legacy "docker" key and will break downstream parsers; modify the info dict
construction (the info variable in update.py where keys
"vibepod","python","runtime","runtime_version" are set) to retain a
backwards-compatible "docker" field that maps to the previous value (for example
set "docker": rt_name or derive from "runtime"/"runtime_version") so both the
new "runtime"/"runtime_version" fields and the legacy "docker" key are present
during the transition; ensure the value reflects the same runtime identifier
previously exposed and add a short comment indicating it is kept for
compatibility.
🧹 Nitpick comments (1)
src/vibepod/commands/run.py (1)

354-377: Podman permission fixup excludes the user workspace intentionally.

The chmod -R a+rwX fixup targets only VibePod-managed directories (config mount and extra agent volumes) but not /workspace. This appears intentional since:

  1. The workspace is user-controlled content that shouldn't have its permissions modified
  2. Users can resolve workspace permission issues via --userns keep-id for compatible images

However, per the PR discussion (bobek's comments), rootless Podman UID mapping can still cause ownership mismatches on the workspace bind mount. Consider documenting this limitation or adding a CLI flag to opt into workspace permission fixup for users who encounter issues.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/run.py` around lines 354 - 377, The current Podman chmod
fixup only includes spec.config_mount_path and extra volumes (built in
podman_fixup via paths_to_fix and _agent_extra_volumes) and deliberately
excludes /workspace; add an opt-in CLI flag (e.g., --fix-workspace or
fix_workspace: bool on the command handler) that, when true and manager.runtime
== RUNTIME_PODMAN, appends "/workspace" to paths_to_fix before building quoted
and podman_fixup so the chmod line (podman_fixup = [f"chmod -R a+rwX {quoted}
2>/dev/null || true"]) will include /workspace; ensure the flag default is false
and update the command help/usage text accordingly and propagate the flag into
the same branch that computes podman_fixup and entrypoint (used by
_init_entrypoint).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/prepare_runtime_scenario.sh`:
- Around line 58-63: The script uses the environment variable USER directly when
writing /etc/subuid and /etc/subgid, which can produce invalid entries if USER
is unset; update the top of the relevant block to resolve a safe username into a
local variable (e.g., username="$(printf '%s' "${USER:-}" ; if [ -z "${USER:-}"
]; then id -un; fi)" or equivalent) and then use that variable instead of $USER
in the grep and tee lines (references: the grep checks and the echo/tee
invocations that currently use ${USER}). Ensure the resolved username is quoted
when used and that behavior is unchanged when id -un fails (fail early with a
clear error).

---

Duplicate comments:
In `@src/vibepod/commands/update.py`:
- Around line 34-39: The JSON output removed the legacy "docker" key and will
break downstream parsers; modify the info dict construction (the info variable
in update.py where keys "vibepod","python","runtime","runtime_version" are set)
to retain a backwards-compatible "docker" field that maps to the previous value
(for example set "docker": rt_name or derive from "runtime"/"runtime_version")
so both the new "runtime"/"runtime_version" fields and the legacy "docker" key
are present during the transition; ensure the value reflects the same runtime
identifier previously exposed and add a short comment indicating it is kept for
compatibility.

---

Nitpick comments:
In `@src/vibepod/commands/run.py`:
- Around line 354-377: The current Podman chmod fixup only includes
spec.config_mount_path and extra volumes (built in podman_fixup via paths_to_fix
and _agent_extra_volumes) and deliberately excludes /workspace; add an opt-in
CLI flag (e.g., --fix-workspace or fix_workspace: bool on the command handler)
that, when true and manager.runtime == RUNTIME_PODMAN, appends "/workspace" to
paths_to_fix before building quoted and podman_fixup so the chmod line
(podman_fixup = [f"chmod -R a+rwX {quoted} 2>/dev/null || true"]) will include
/workspace; ensure the flag default is false and update the command help/usage
text accordingly and propagate the flag into the same branch that computes
podman_fixup and entrypoint (used by _init_entrypoint).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d9ae0a0a-5f32-4f0a-bb8a-d2e73d731332

📥 Commits

Reviewing files that changed from the base of the PR and between 7613ef1 and 7a77a60.

📒 Files selected for processing (27)
  • .github/workflows/runtime-integration.yml
  • README.md
  • docs/configuration.md
  • docs/index.md
  • docs/podman.md
  • docs/quickstart.md
  • mkdocs.yml
  • scripts/cleanup_runtime_scenario.sh
  • scripts/prepare_runtime_scenario.sh
  • src/vibepod/cli.py
  • src/vibepod/commands/config.py
  • src/vibepod/commands/list_cmd.py
  • src/vibepod/commands/logs.py
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/run.py
  • src/vibepod/commands/stop.py
  • src/vibepod/commands/update.py
  • src/vibepod/constants.py
  • src/vibepod/core/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_config.py
  • tests/test_list.py
  • tests/test_proxy_permissions.py
  • tests/test_run.py
  • tests/test_runtime.py
  • tests/test_version_integration.py
✅ Files skipped from review due to trivial changes (8)
  • docs/quickstart.md
  • README.md
  • tests/test_list.py
  • mkdocs.yml
  • scripts/cleanup_runtime_scenario.sh
  • .github/workflows/runtime-integration.yml
  • docs/podman.md
  • src/vibepod/constants.py
🚧 Files skipped from review as they are similar to previous changes (6)
  • docs/index.md
  • src/vibepod/cli.py
  • tests/test_runtime.py
  • src/vibepod/commands/list_cmd.py
  • src/vibepod/core/runtime.py
  • tests/test_proxy_permissions.py

Comment on lines +58 to +63

```sh
if ! grep -q "^${USER}:" /etc/subuid; then
  echo "${USER}:100000:65536" | sudo tee -a /etc/subuid >/dev/null
fi
if ! grep -q "^${USER}:" /etc/subgid; then
  echo "${USER}:100000:65536" | sudo tee -a /etc/subgid >/dev/null
fi
```

⚠️ Potential issue | 🟡 Minor

Avoid relying on $USER when writing subuid/subgid entries.

Lines 58-63 can write invalid entries if USER is unset. Resolve the username with an `id -un` fallback before writing to the files.

Suggested fix

```diff
 install_podman() {
   sudo touch /etc/subuid /etc/subgid
+  local username="${USER:-$(id -un)}"

-  if ! grep -q "^${USER}:" /etc/subuid; then
-    echo "${USER}:100000:65536" | sudo tee -a /etc/subuid >/dev/null
+  if ! grep -q "^${username}:" /etc/subuid; then
+    echo "${username}:100000:65536" | sudo tee -a /etc/subuid >/dev/null
   fi
-  if ! grep -q "^${USER}:" /etc/subgid; then
-    echo "${USER}:100000:65536" | sudo tee -a /etc/subgid >/dev/null
+  if ! grep -q "^${username}:" /etc/subgid; then
+    echo "${username}:100000:65536" | sudo tee -a /etc/subgid >/dev/null
   fi
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```diff
-if ! grep -q "^${USER}:" /etc/subuid; then
-  echo "${USER}:100000:65536" | sudo tee -a /etc/subuid >/dev/null
-fi
-if ! grep -q "^${USER}:" /etc/subgid; then
-  echo "${USER}:100000:65536" | sudo tee -a /etc/subgid >/dev/null
-fi
+install_podman() {
+  sudo touch /etc/subuid /etc/subgid
+  local username="${USER:-$(id -un)}"
+  if ! grep -q "^${username}:" /etc/subuid; then
+    echo "${username}:100000:65536" | sudo tee -a /etc/subuid >/dev/null
+  fi
+  if ! grep -q "^${username}:" /etc/subgid; then
+    echo "${username}:100000:65536" | sudo tee -a /etc/subgid >/dev/null
+  fi
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/prepare_runtime_scenario.sh` around lines 58 - 63, The script uses
the environment variable USER directly when writing /etc/subuid and /etc/subgid,
which can produce invalid entries if USER is unset; update the top of the
relevant block to resolve a safe username into a local variable (e.g.,
`username="${USER:-$(id -un)}"`) and then use that variable instead of $USER in
the grep and tee lines (references: the grep checks and the echo/tee invocations
that currently use ${USER}). Ensure the resolved username is quoted when used,
and fail early with a clear error if id -un also fails.
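The `${USER:-$(id -un)}` fallback the prompt asks for can be exercised in isolation; a minimal sketch (the 100000:65536 range is taken from the script above, and `id -un` is assumed available):

```shell
# Resolve a safe username even when USER is unset, then build the
# subuid/subgid-style entry the script would append.
unset USER
username="${USER:-$(id -un)}"
if [ -z "${username}" ]; then
  echo "error: could not resolve username" >&2
  exit 1
fi
entry="${username}:100000:65536"
echo "${entry}"
```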

@augmentcode

augmentcode bot commented Apr 10, 2026

augment review

@augmentcode

augmentcode bot commented Apr 10, 2026

🤖 Augment PR Summary

Summary: This PR adds first-class Podman support alongside Docker, including runtime discovery/selection and runtime-aware container operations.

Changes:

  • Introduces container runtime detection and selection logic (new src/vibepod/core/runtime.py) with saved global preference support.
  • Adds a new vp config runtime command plus docs for container_runtime and container_userns_mode config/env keys.
  • Extends most CLI commands (run, logs, proxy, list, stop, version) with --runtime (and where relevant --userns) flags.
  • Updates the container manager to be runtime-aware, including socket selection and userns forwarding behavior.
  • Adds Podman-specific handling for managed bind-mount permissions and minor runtime-specific behaviors.
  • Updates proxy wiring so agents can use the proxy via its container IP on the selected network.
  • Expands unit tests for runtime selection/config and userns forwarding, and adds dedicated CI integration scenarios for real Docker/Podman environments.
  • Adds new documentation (docs/podman.md) and links it from the docs navigation and quickstart.

Technical Notes: Runtime selection priority is CLI flag → env var → config preference → auto-detect (prompt if interactive); a new workflow and scripts validate behavior across multiple host runtime scenarios.
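The stated priority chain can be sketched as follows; `VP_CONTAINER_RUNTIME` and `container_runtime` are the documented keys, while the function name and the non-interactive auto-detect fallback are illustrative (the interactive prompt step is omitted):

```python
# Sketch of the selection priority from the Technical Notes:
# CLI flag -> env var -> saved config preference -> auto-detect.
# resolve_runtime_name is an illustrative name, not VibePod's real API.
import os
import shutil

def resolve_runtime_name(cli_flag=None, config=None, env=None):
    env = os.environ if env is None else env
    if cli_flag:
        return cli_flag
    if env.get("VP_CONTAINER_RUNTIME"):
        return env["VP_CONTAINER_RUNTIME"]
    if config and config.get("container_runtime"):
        return config["container_runtime"]
    for candidate in ("docker", "podman"):  # auto-detect from PATH
        if shutil.which(candidate):
            return candidate
    raise RuntimeError("no container runtime found on PATH")

print(resolve_runtime_name(cli_flag="podman", env={}))
print(resolve_runtime_name(env={"VP_CONTAINER_RUNTIME": "docker"}))
print(resolve_runtime_name(env={}, config={"container_runtime": "podman"}))
```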

🤖 Was this summary useful? React with 👍 or 👎


@augmentcode augmentcode bot left a comment


Review completed. 2 suggestions posted.


Comment augment review to trigger a new review at any time.


```python
    try:
        runtime_name, socket_url = resolve_runtime(override=runtime_override, config=config)
    except (RuntimeError, ValueError) as exc:
```

get_manager() only wraps RuntimeError/ValueError, but resolve_runtime() can write the saved preference and may raise OSError/yaml.YAMLError (e.g., config dir not writable), which would currently bubble out as an unhandled exception. Consider catching those here too so callers consistently see a DockerClientError with a clean message.
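One way to apply this suggestion, sketched with stand-in names (`DockerClientError` and `resolve_runtime` here are minimal stubs, not the project's real implementations):

```python
# Broadened exception handling: also wrap OSError (and yaml.YAMLError,
# when PyYAML is in use) so callers consistently see a DockerClientError.
class DockerClientError(Exception):
    pass

def resolve_runtime(override=None, config=None):
    # Stub that fails the way a read-only config dir would.
    raise OSError("config dir not writable")

def get_manager(runtime_override=None, config=None):
    try:
        return resolve_runtime(override=runtime_override, config=config)
    except (RuntimeError, ValueError, OSError) as exc:
        raise DockerClientError(
            f"could not resolve container runtime: {exc}") from exc

try:
    get_manager()
except DockerClientError as exc:
    print(exc)
```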

Severity: medium


🤖 Was this useful? React with 👍 or 👎, or 🚀 if it prevented an incident/outage.

```python
        # Ensure VibePod-managed dirs are accessible inside rootless Podman
        self._prepare_volume_dir(config_dir)
        if extra_volumes:
            for host_path, _, _ in extra_volumes:
```

run_agent() calls _prepare_volume_dir() on every extra_volumes host path, even though _prepare_volume_dir()’s docstring says it’s only for VibePod-managed directories. If extra_volumes includes user-provided mounts (e.g., X11 sockets or other host paths), this can unexpectedly chmod/chown arbitrary host directories and broaden permissions.
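A hedged sketch of the guard this finding implies: only prepare directories under VibePod's own config root and leave user-provided mounts untouched (`MANAGED_ROOT` and `is_managed` are illustrative names, not the project's API):

```python
# Restrict permission fixups to VibePod-managed directories so arbitrary
# user-provided host paths (e.g. X11 sockets) are never chmod/chowned.
from pathlib import Path

MANAGED_ROOT = Path("/home/user/.vibepod")  # illustrative managed root

def is_managed(host_path):
    """True only for paths that resolve under the managed root."""
    try:
        Path(host_path).resolve().relative_to(MANAGED_ROOT)
        return True
    except ValueError:
        return False

for p in ("/home/user/.vibepod/agents/claude", "/tmp/.X11-unix"):
    print(p, is_managed(p))
```

`run_agent()` could then call `_prepare_volume_dir()` only for paths where `is_managed()` is true, matching the docstring's stated scope.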

Severity: high


🤖 Was this useful? React with 👍 or 👎, or 🚀 if it prevented an incident/outage.


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (2)
src/vibepod/commands/run.py (1)

361-367: ⚠️ Potential issue | 🟠 Major

Podman fixup still skips the /workspace bind mount.

At Line 362, paths_to_fix omits /workspace, so the main project bind mount can still hit rootless Podman UID/GID permission mismatches.

Targeted fix

```diff
     if manager.runtime == RUNTIME_PODMAN:
-        paths_to_fix = [spec.config_mount_path]
+        paths_to_fix = ["/workspace", spec.config_mount_path]
         extra_volumes_pre = _agent_extra_volumes(selected_agent, agent_config_dir(selected_agent))
         if extra_volumes_pre:
             paths_to_fix.extend(cp for _, cp, _ in extra_volumes_pre)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/run.py` around lines 361 - 367, The Podman
permission-fix list currently builds paths_to_fix from spec.config_mount_path
but omits the project/workspace bind mount; update the code that constructs
paths_to_fix (used when manager.runtime == RUNTIME_PODMAN) to also include the
workspace/project mount (e.g., spec.workspace_mount_path or the attribute that
holds "/workspace") before extending with extra_volumes_pre, so the quoted/chmod
command includes the workspace path as well and fixes rootless Podman UID/GID
issues.
src/vibepod/commands/update.py (1)

34-39: ⚠️ Potential issue | 🟡 Minor

JSON schema change still breaks existing --json consumers.

On Line 37 and Line 38, the payload no longer includes the legacy docker key. That’s a compatibility break for scripts parsing prior output.

Compatibility bridge (transition period)

```diff
     info = {
         "vibepod": __version__,
         "python": platform.python_version(),
         "runtime": rt_name,
         "runtime_version": rt_version,
     }
+    # Backward compatibility for older automation expecting `docker`
+    info["docker"] = rt_version if rt_name == "docker" else f"{rt_name} {rt_version}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/vibepod/commands/update.py` around lines 34 - 39, The JSON payload
removed the legacy "docker" key causing backward-compatibility breaks; restore
the legacy fields by adding info["docker"] (and info["docker_version"] if your
previous schema had it) and populate them from rt_name and rt_version (the same
values used for "runtime" and "runtime_version") so existing --json consumers
keep receiving the expected keys; update the info dict near the existing
definition that uses __version__, platform.python_version(), rt_name and
rt_version.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7e65a8dc-6e0e-465a-9988-6447b6dddf70

📥 Commits

Reviewing files that changed from the base of the PR and between 7a77a60 and b653e17.

📒 Files selected for processing (27)
  • .github/workflows/runtime-integration.yml
  • README.md
  • docs/configuration.md
  • docs/index.md
  • docs/podman.md
  • docs/quickstart.md
  • mkdocs.yml
  • scripts/cleanup_runtime_scenario.sh
  • scripts/prepare_runtime_scenario.sh
  • src/vibepod/cli.py
  • src/vibepod/commands/config.py
  • src/vibepod/commands/list_cmd.py
  • src/vibepod/commands/logs.py
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/run.py
  • src/vibepod/commands/stop.py
  • src/vibepod/commands/update.py
  • src/vibepod/constants.py
  • src/vibepod/core/config.py
  • src/vibepod/core/docker.py
  • src/vibepod/core/runtime.py
  • tests/test_config.py
  • tests/test_list.py
  • tests/test_proxy_permissions.py
  • tests/test_run.py
  • tests/test_runtime.py
  • tests/test_version_integration.py
✅ Files skipped from review due to trivial changes (10)
  • docs/quickstart.md
  • docs/index.md
  • README.md
  • scripts/cleanup_runtime_scenario.sh
  • mkdocs.yml
  • .github/workflows/runtime-integration.yml
  • src/vibepod/constants.py
  • tests/test_runtime.py
  • docs/configuration.md
  • tests/test_version_integration.py
🚧 Files skipped from review as they are similar to previous changes (12)
  • tests/test_list.py
  • src/vibepod/cli.py
  • src/vibepod/commands/stop.py
  • src/vibepod/core/config.py
  • tests/test_config.py
  • scripts/prepare_runtime_scenario.sh
  • src/vibepod/commands/proxy.py
  • src/vibepod/commands/logs.py
  • src/vibepod/core/runtime.py
  • src/vibepod/core/docker.py
  • docs/podman.md
  • src/vibepod/commands/list_cmd.py

