Conversation
📝 Walkthrough

Adds Podman support and runtime-agnostic container management: runtime detection/selection and persistence, a `get_manager` factory, runtime-aware `DockerManager` changes (sockets, volume/userns handling), new CLI `--runtime`/`--userns` options and config/env keys, docs, scripts, and tests exercising runtime and userns behavior.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User/CLI
    participant Cmd as Command Handler
    participant Runtime as runtime.resolve_runtime
    participant Detector as detect_available_runtimes
    participant Sock as Socket Probe
    participant Factory as get_manager
    participant Manager as DockerManager
    User->>Cmd: invoke command (with optional --runtime / --userns)
    Cmd->>Runtime: resolve_runtime(override, config)
    alt override or saved preference
        Runtime-->>Factory: return (runtime, socket_url)
    else auto-detect
        Runtime->>Detector: detect_available_runtimes()
        Detector->>Sock: probe candidate sockets (docker/podman)
        Sock-->>Detector: reachable / unreachable
        Detector-->>Runtime: available runtimes
        alt multiple & TTY
            Runtime->>User: prompt to choose runtime
            User-->>Runtime: selection
            Runtime->>Runtime: save_runtime_preference(selection)
        end
        Runtime-->>Factory: return (chosen_runtime, socket_url)
    end
    Factory->>Manager: create DockerManager(base_url=socket_url, runtime)
    Manager-->>Cmd: manager instance
    Cmd->>Manager: run/stop/proxy/logs (userns_mode forwarded when applicable)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
🧹 Nitpick comments (1)
src/vibepod/commands/run.py (1)
269-292: Podman rootless permission fixup logic is sound.

The approach of injecting `chmod -R a+rwX` before init commands correctly addresses the subordinate UID permission problem in rootless Podman. The use of `dict.fromkeys()` for deduplication while preserving order is idiomatic.

One minor edge case: paths containing single quotes would break the shell command. This is unlikely for typical config paths but worth noting.
🛡️ Optional: More robust path quoting
If you want to handle paths with special characters more defensively:
```diff
- quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths_to_fix))
+ # Use printf %q style quoting for safety with special chars
+ quoted = " ".join(shlex.quote(p) for p in dict.fromkeys(paths_to_fix))
```

This would require adding `import shlex` at the top.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/run.py` around lines 269 - 292, The current podman_fixup builds a shell command by joining paths into quoted using simple single-quote wrapping, which breaks if any path contains a single quote; update the logic in the block that builds podman_fixup (references: podman_fixup, paths_to_fix, extra_volumes_pre, quoted) to use robust shell-quoting (e.g., shlex.quote) for each path and import shlex at the top of the file so paths with special characters or embedded quotes are safely quoted before constructing the chmod command.
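The quoting difference is easy to demonstrate in a standalone sketch (the paths and the `command` variable here are illustrative, not the actual `run.py` code):

```python
import shlex

# Hypothetical paths; the embedded single quote in the second is the problem case.
paths = ["/workspace", "/home/user/it's-a-dir", "/cfg with spaces"]

# Naive single-quote wrapping produces a broken shell word for the second path:
naive = " ".join(f"'{p}'" for p in dict.fromkeys(paths))

# shlex.quote escapes each path safely for POSIX shells:
quoted = " ".join(shlex.quote(p) for p in dict.fromkeys(paths))
command = f"chmod -R a+rwX {quoted} 2>/dev/null || true"
print(quoted)
```

`shlex.quote` leaves safe strings untouched and wraps anything else, rewriting embedded single quotes as `'"'"'`, so the resulting command survives arbitrary path characters.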
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/podman.md`:
- Around line 86-90: Replace the overly permissive workspace permission
suggestion "chmod -R a+rwX ~/my-project" with owner/group-scoped guidance:
recommend "chmod -R u+rwX ~/my-project" and mention an optional "chmod -R g+rwX
~/my-project" line for collaborative setups; update the docs/podman.md snippet
accordingly and remove or discourage the world-writable (a+rwX) example.
In `@src/vibepod/commands/update.py`:
- Around line 34-39: The change to the info dict in update.py replaced the
single "docker" key with "runtime" and "runtime_version", which breaks consumers
of the --json output; either add a compatibility field "docker" (e.g., derive
from runtime or runtime_version and set info["docker"] =
"<runtime>/<runtime_version>" or original docker string) so both schemas are
present, or update release notes/changelog to document the breaking change;
locate the info dict in update.py (keys: "vibepod", "python", "runtime",
"runtime_version") and implement the chosen fix so existing tooling continues to
work or users are explicitly informed.
In `@src/vibepod/core/config.py`:
- Line 139: The VP_CONTAINER_RUNTIME value should be normalized (trimmed and
lowercased) before storing into container_runtime to avoid mismatches like "
Podman " or "PODMAN"; update the tuple for "VP_CONTAINER_RUNTIME" in the mapping
in src/vibepod/core/config.py to use a parser that performs v.strip().lower()
(or replace str with a named helper parser function) and optionally validate
against allowed runtimes (e.g., "docker","podman") so the parsed value assigned
to container_runtime is normalized and predictable.
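The suggested normalization can be sketched as a small parser helper (the function name and allowed set here are assumptions, not the actual config.py code):

```python
ALLOWED_RUNTIMES = ("docker", "podman")  # assumed set of supported values

def parse_runtime(value: str) -> str:
    """Normalize a VP_CONTAINER_RUNTIME-style value (hypothetical helper).

    Trims surrounding whitespace and lowercases before validating, so
    inputs like " Podman " or "PODMAN" resolve predictably.
    """
    normalized = value.strip().lower()
    if normalized not in ALLOWED_RUNTIMES:
        raise ValueError(f"unsupported container runtime: {normalized!r}")
    return normalized

print(parse_runtime("  Podman "))  # → podman
```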
In `@src/vibepod/core/runtime.py`:
- Around line 57-62: The _probe_socket function currently creates a Docker
client (client = _docker.DockerClient(...)) and may return False from the except
branch without closing it; update the function to ensure client.close() is
always called by moving the cleanup into a finally block (or by tracking the
client and closing it in both success and failure paths). Specifically, wrap the
client creation, client.ping(), and return logic in try/except/finally,
reference the existing client variable and ensure client.close() is invoked in
the finally so no DockerClient connection leaks occur.
---
Nitpick comments:
In `@src/vibepod/commands/run.py`:
- Around line 269-292: The current podman_fixup builds a shell command by
joining paths into quoted using simple single-quote wrapping, which breaks if
any path contains a single quote; update the logic in the block that builds
podman_fixup (references: podman_fixup, paths_to_fix, extra_volumes_pre, quoted)
to use robust shell-quoting (e.g., shlex.quote) for each path and import shlex
at the top of the file so paths with special characters or embedded quotes are
safely quoted before constructing the chmod command.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 661f0184-e4fd-49f5-965b-613a0a9d7d83
📒 Files selected for processing (18)
- docs/configuration.md
- docs/index.md
- docs/podman.md
- docs/quickstart.md
- mkdocs.yml
- src/vibepod/commands/list_cmd.py
- src/vibepod/commands/logs.py
- src/vibepod/commands/proxy.py
- src/vibepod/commands/run.py
- src/vibepod/commands/stop.py
- src/vibepod/commands/update.py
- src/vibepod/constants.py
- src/vibepod/core/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_list.py
- tests/test_proxy_permissions.py
- tests/test_run.py
This is awesome. It doesn't work for me on Debian sid. Anyway, setting

~/Sources/moistureguard/global_senzomatic main ❯ curl -w "\nDNS:%{time_namelookup}s Connect:%{time_connect}s TLS:%{time_appconnect}s StartTransfer:%{time_starttransfer}s Total:%{time_total}s Speed:%{speed_download}B/s Code:%{http_code}\n" --unix-socket /run/user/1000/podman/podman.sock -v -X GET http://d/v5.7.0/libpod/version
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying /run/user/1000/podman/podman.sock:0...
* Established connection to d (/run/user/1000/podman/podman.sock port 0) from port 0
* using HTTP/1.x
> GET /v5.7.0/libpod/version HTTP/1.1
> Host: d
> User-Agent: curl/8.19.0-rc3
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< Api-Version: 1.41
< Content-Type: application/json
< Libpod-Api-Version: 5.7.0
< Server: Libpod/5.7.0 (linux)
< X-Reference-Id: 0xc000518008
< Date: Fri, 06 Mar 2026 03:46:02 GMT
< Content-Length: 912
<
{"Platform":{"Name":"linux/amd64/debian-unknown"},"Components":[{"Name":"Podman Engine","Version":"5.7.0","Details":{"APIVersion":"5.7.0","Arch":"amd64","BuildTime":"2025-12-03T02:36:32+01:00","Experimental":"false","GitCommit":"","GoVersion":"go1.24.9","KernelVersion":"6.18.15+deb14-amd64","MinAPIVersion":"4.0.0","Os":"linux"}},{"Name":"Conmon","Version":"conmon version 2.1.13, commit: unknown","Details":{"Package":"conmon_2.1.13+ds1-2_amd64"}},{"Name":"OCI Runtime (crun)","Version":"crun version 1.26\ncommit: 3241e671f92c33b0c003cd7de319e4f32add6231\nrundir: /run/user/1000/crun\nspec: 1.0.0\n+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL","Details":{"Package":"crun_1.26-1_amd64"}}],"Version":"5.7.0","ApiVersion":"1.41","MinAPIVersion":"1.24","GitCommit":"","GoVersion":"go1.24.9","Os":"linux","Arch":"amd64","KernelVersion":"6.18.15+deb14-amd64","BuildTime":"2025-12-03T02:36:32+01:00"}
* Connection #0 to host d:80 left intact
DNS:0.000022s Connect:0.000058s TLS:0.000000s StartTransfer:4.819876s Total:4.819928s Speed:189B/s Code:200
~/Sources/moistureguard/global_senzomatic main ❯ curl -w "\nDNS:%{time_namelookup}s Connect:%{time_connect}s TLS:%{time_appconnect}s StartTransfer:%{time_starttransfer}s Total:%{time_total}s Speed:%{speed_download}B/s Code:%{http_code}\n" --unix-socket /run/user/1000/podman/podman.sock -v -X GET http://d/version
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying /run/user/1000/podman/podman.sock:0...
* Established connection to d (/run/user/1000/podman/podman.sock port 0) from port 0
* using HTTP/1.x
> GET /version HTTP/1.1
> Host: d
> User-Agent: curl/8.19.0-rc3
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< Api-Version: 1.41
< Content-Type: application/json
< Libpod-Api-Version: 5.7.0
< Server: Libpod/5.7.0 (linux)
< X-Reference-Id: 0xc000682000
< Date: Fri, 06 Mar 2026 03:46:26 GMT
< Content-Length: 912
<
{"Platform":{"Name":"linux/amd64/debian-unknown"},"Components":[{"Name":"Podman Engine","Version":"5.7.0","Details":{"APIVersion":"5.7.0","Arch":"amd64","BuildTime":"2025-12-03T02:36:32+01:00","Experimental":"false","GitCommit":"","GoVersion":"go1.24.9","KernelVersion":"6.18.15+deb14-amd64","MinAPIVersion":"4.0.0","Os":"linux"}},{"Name":"Conmon","Version":"conmon version 2.1.13, commit: unknown","Details":{"Package":"conmon_2.1.13+ds1-2_amd64"}},{"Name":"OCI Runtime (crun)","Version":"crun version 1.26\ncommit: 3241e671f92c33b0c003cd7de319e4f32add6231\nrundir: /run/user/1000/crun\nspec: 1.0.0\n+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL","Details":{"Package":"crun_1.26-1_amd64"}}],"Version":"5.7.0","ApiVersion":"1.41","MinAPIVersion":"1.24","GitCommit":"","GoVersion":"go1.24.9","Os":"linux","Arch":"amd64","KernelVersion":"6.18.15+deb14-amd64","BuildTime":"2025-12-03T02:36:32+01:00"}
* Connection #0 to host d:80 left intact
DNS:0.000025s Connect:0.000063s TLS:0.000000s StartTransfer:4.676405s Total:4.676445s Speed:195B/s Code:200
One more thing -- mapping UIDs/GIDs. It is quite typical with:

$ cat /etc/subuid
bobek:100000:65536
$ id
uid=1000(bobek) gid=1000(bobek) groups=1000(bobek)

Now, let me run a container in /tmp/xx:

/tmp/xx ❯ ll
total 0
drwxrwxrwx 2 bobek bobek 40 Mar 6 05:10 .
drwxrwxrwt 24 root root 700 Mar 6 05:03 ..
/tmp/xx ❯ podman run -it --rm -v .:/workspace debian:bookworm-slim /bin/bash
root@422b730a12eb:/# cd /workspace/
root@422b730a12eb:/workspace# id
uid=0(root) gid=0(root) groups=0(root)
root@422b730a12eb:/workspace# touch first
root@422b730a12eb:/workspace# useradd testuser
root@422b730a12eb:/workspace# su - testuser
su: warning: cannot change directory to /home/testuser: No such file or directory
$ bash
testuser@422b730a12eb:/workspace$ id
uid=1000(testuser) gid=1000(testuser) groups=1000(testuser)
testuser@422b730a12eb:/workspace$ touch second-testuser
testuser@422b730a12eb:/workspace$
exit
$
root@422b730a12eb:/workspace#
exit
/tmp/xx ❯ ll
total 0
drwxrwxrwx 2 bobek bobek 80 Mar 6 05:11 .
drwxrwxrwt 24 root root 700 Mar 6 05:03 ..
-rw-r--r-- 1 bobek bobek 0 Mar 6 05:11 first
-rw-r--r-- 1 100999 100999 0 Mar 6 05:11 second-testuser
/tmp/xx ❯

As you see, the container's

/tmp/xx ❯ ll
total 0
drwxrwxrwx 2 bobek bobek 40 Mar 6 05:15 .
drwxrwxrwt 24 root root 700 Mar 6 05:03 ..
/tmp/xx ❯ podman run -it --userns keep-id --rm -v .:/workspace debian:bookworm-slim /bin/bash
bobek@3295edc546bf:/$ id
uid=1000(bobek) gid=1000(bobek) groups=1000(bobek)
bobek@3295edc546bf:/$ cd /workspace/
bobek@3295edc546bf:/workspace$ touch keep-id
bobek@3295edc546bf:/workspace$
exit
/tmp/xx ❯ ll
total 0
drwxrwxrwx 2 bobek bobek 60 Mar 6 05:15 .
drwxrwxrwt 24 root root 700 Mar 6 05:03 ..
-rw-r--r-- 1 bobek bobek 0 Mar 6 05:15 keep-id
/tmp/xx ❯

So it would be really great if we can get control over setting
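The ownership seen in the transcripts follows the default rootless Podman ID mapping, which can be sketched as a small function (the defaults below match this host's `/etc/subuid` entry `bobek:100000:65536` and uid 1000; this is an illustration, not Podman code):

```python
def container_to_host_uid(container_uid: int, host_uid: int = 1000,
                          subuid_start: int = 100000) -> int:
    """Map a container UID to its host UID under default rootless Podman.

    Container root (UID 0) maps to the invoking user's own UID; container
    UIDs 1..N map into the subordinate range from /etc/subuid, offset by
    one (container UID 1 is the first subordinate UID).
    """
    if container_uid == 0:
        return host_uid
    return subuid_start + container_uid - 1

# root-created "first" shows up as bobek (1000); testuser's file as 100999:
print(container_to_host_uid(0))     # → 1000
print(container_to_host_uid(1000))  # → 100999
```

With `--userns keep-id` Podman instead shifts the mapping so the container UID equals the host UID, which is why `keep-id` lands on disk owned by bobek directly.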
f811416 to 019cb23
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/test_run.py (1)
271-272: ⚠️ Potential issue | 🟠 Major

The auto-pull tests are still stubbing the old construction path.

`run()` now reads `get_manager(...).runtime`, but these cases still patch `run_cmd.DockerManager`, so the stub is bypassed entirely. That matches the CI failures at lines 328, 352, 380, and 394. After switching the monkeypatch to `run_cmd.get_manager`, `_StubDockerManager` also needs a `runtime` attribute or these tests will fail on `manager.runtime`.

🧪 Minimal fix

```diff
 class _StubDockerManager:
     """Minimal DockerManager stub that records pull_image calls."""

     def __init__(self) -> None:
+        self.runtime = "docker"
         self.pulled: list[str] = []
 ...
-monkeypatch.setattr(run_cmd, "DockerManager", lambda: stub)
+monkeypatch.setattr(run_cmd, "get_manager", lambda **kwargs: stub)
```

Apply the same monkeypatch replacement in each auto-pull test.
Also applies to: 321-394
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_run.py` around lines 271 - 272, Tests are patching run_cmd.DockerManager but run() now uses get_manager(...).runtime, so update the auto-pull tests to monkeypatch run_cmd.get_manager instead of run_cmd.DockerManager and have the replacement return the existing _StubDockerManager instance; additionally add a runtime attribute to _StubDockerManager (e.g., set manager.runtime = <expected runtime stub> or implement a runtime property on _StubDockerManager) so manager.runtime access in run() does not fail; update all affected test cases around the auto-pull blocks (those failing at lines ~328, 352, 380, 394) to use get_manager monkeypatch and ensure the stub exposes runtime.

src/vibepod/commands/list_cmd.py (1)
59-66: ⚠️ Potential issue | 🟠 Major

Don't suppress explicit runtime-resolution failures here.

This fallback now also swallows bad or unavailable explicit runtime selections and returns `running: []` / "No running agents" instead of surfacing the error. That is misleading for `--runtime ...`, `VP_CONTAINER_RUNTIME`, or a saved `container_runtime` preference. Please only keep the silent fallback for the pure auto-detect, non-`--running` case.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/list_cmd.py` around lines 59 - 66, The try/except around get_manager/list_managed is currently swallowing DockerClientError even when an explicit runtime was requested; change it so DockerClientError is only silently handled for the pure auto-detect case (no explicit runtime selection) and only when not in the `running` path. Specifically, in the except DockerClientError as exc block, check the `runtime` (the runtime_override passed to get_manager) and `running` flags: if runtime is None/empty (auto-detect) and not running, set containers = [] as the fallback; otherwise log the error (using error(str(exc)) or re-raise) and exit with typer.Exit(EXIT_DOCKER_NOT_RUNNING) so explicit runtime failures surface. Ensure you reference the existing symbols get_manager, runtime_override/runtime, DockerClientError, running, list_managed, typer.Exit, and EXIT_DOCKER_NOT_RUNNING when making the change.
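The suggested control flow boils down to a single predicate, sketched here as a hypothetical helper (the function name and signature are illustrative, not the actual list_cmd.py code):

```python
def should_suppress_engine_error(explicit_runtime, running_only: bool) -> bool:
    """Return True only for the pure auto-detect, non-`--running` case.

    An explicitly requested runtime (CLI flag, env var, or saved
    preference) or a `--running` listing should surface the connection
    error instead of silently showing an empty list.
    """
    return explicit_runtime is None and not running_only
```

In the except branch, a True result would mean "fall back to `containers = []`"; a False result would mean "log the error and exit with `EXIT_DOCKER_NOT_RUNNING`".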
♻️ Duplicate comments (1)
src/vibepod/commands/update.py (1)
33-39: ⚠️ Potential issue | 🟡 Minor

`version --json` still has a breaking schema change.

Replacing the historical `docker` field with `runtime`/`runtime_version` will break existing parsers. Please either keep a compatibility key for one release cycle or call this out explicitly in release notes/changelog.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/update.py` around lines 33 - 39, The JSON schema change removed the historical "docker" field causing breakage; in the info dict in update.py (where _runtime_version(runtime_override=runtime) is used to set "runtime" and "runtime_version"), add back the legacy keys "docker" and "docker_version" populated from the same rt_name and rt_version values (or map appropriately) so existing parsers keep working for one release cycle, and mark them as deprecated in a comment near the info construction.
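A compatibility shim along the lines the prompt describes could look like this (the function name and the `<runtime>/<runtime_version>` format of the legacy key are assumptions; the real update.py builds the dict inline):

```python
def build_version_info(rt_name: str, rt_version: str) -> dict:
    """Sketch of a version payload keeping the legacy schema for one cycle."""
    return {
        "runtime": rt_name,
        "runtime_version": rt_version,
        # Deprecated: kept so existing parsers of `version --json` keep
        # working; remove after one release cycle.
        "docker": f"{rt_name}/{rt_version}",
    }
```

Both the new and the old schema are then present in the same output, so downstream tooling can migrate at its own pace.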
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/vibepod/commands/run.py`:
- Around line 274-296: The Podman fixup list currently only adds
spec.config_mount_path and extra agent volumes but misses the user workspace
bind (mounted as /workspace), so add the workspace mount path to paths_to_fix
when manager.runtime == RUNTIME_PODMAN; update the block that builds
podman_fixup (references: podman_fixup, paths_to_fix, spec.config_mount_path,
_agent_extra_volumes, agent_config_dir) to also include the workspace mount
(e.g. '/workspace' or spec.workspace_mount if available) before quoting and
creating podman_fixup, so that the chmod command covers the project workspace
the DockerManager.run_agent binds to /workspace and prevents rootless Podman
permission issues; keep the rest of the flow (manager.resolve_launch_command,
_init_entrypoint) unchanged.
In `@src/vibepod/core/docker.py`:
- Around line 93-116: The current _prepare_volume_dir makes VibePod-managed dirs
world-readable/writable (0o777/0o666) which exposes sensitive state; instead
when runtime == RUNTIME_PODMAN change ownership and restrictive perms rather
than broad chmod: determine the container-side UID/GID mapping for rootless
Podman (query Podman or derive subordinate uid range), chown the path and its
children to that mapped UID:GID and set restrictive perms (e.g. 0o700 for dirs,
0o600 for files); if you cannot resolve a mapped UID:GID, fall back to applying
POSIX ACLs (setfacl) granting the mapped container UID access while keeping
owner-only perms for others; update _prepare_volume_dir to perform chown +
0o700/0o600 (or setfacl) and remove any 0o777/0o666 usage so agent config dirs,
log/proxy state, and proxy CA remain private.
In `@src/vibepod/core/runtime.py`:
- Around line 52-62: The _probe_socket function currently uses a 3-second
DockerClient timeout which can produce false negatives for slower Podman
responses; update the timeout passed to _docker.DockerClient in _probe_socket
(and any related client creation there) to a higher value such as 6 seconds (or
8s) so Podman's ~4.7–4.8s startup ping/version latency is accommodated, keeping
the rest of the try/except behavior the same.
---
Outside diff comments:
In `@src/vibepod/commands/list_cmd.py`:
- Around line 59-66: The try/except around get_manager/list_managed is currently
swallowing DockerClientError even when an explicit runtime was requested; change
it so DockerClientError is only silently handled for the pure auto-detect case
(no explicit runtime selection) and only when not in the `running` path.
Specifically, in the except DockerClientError as exc block, check the `runtime`
(the runtime_override passed to get_manager) and `running` flags: if runtime is
None/empty (auto-detect) and not running, set containers = [] as the fallback;
otherwise log the error (using error(str(exc)) or re-raise) and exit with
typer.Exit(EXIT_DOCKER_NOT_RUNNING) so explicit runtime failures surface. Ensure
you reference the existing symbols get_manager, runtime_override/runtime,
DockerClientError, running, list_managed, typer.Exit, and
EXIT_DOCKER_NOT_RUNNING when making the change.
In `@tests/test_run.py`:
- Around line 271-272: Tests are patching run_cmd.DockerManager but run() now
uses get_manager(...).runtime, so update the auto-pull tests to monkeypatch
run_cmd.get_manager instead of run_cmd.DockerManager and have the replacement
return the existing _StubDockerManager instance; additionally add a runtime
attribute to _StubDockerManager (e.g., set manager.runtime = <expected runtime
stub> or implement a runtime property on _StubDockerManager) so manager.runtime
access in run() does not fail; update all affected test cases around the
auto-pull blocks (those failing at lines ~328, 352, 380, 394) to use get_manager
monkeypatch and ensure the stub exposes runtime.
---
Duplicate comments:
In `@src/vibepod/commands/update.py`:
- Around line 33-39: The JSON schema change removed the historical "docker"
field causing breakage; in the info dict in update.py (where
_runtime_version(runtime_override=runtime) is used to set "runtime" and
"runtime_version"), add back the legacy keys "docker" and "docker_version"
populated from the same rt_name and rt_version values (or map appropriately) so
existing parsers keep working for one release cycle, and mark them as deprecated
in a comment near the info construction.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6c6e202f-a858-456e-8570-c4fe9db2d44d
📒 Files selected for processing (18)
- docs/configuration.md
- docs/index.md
- docs/podman.md
- docs/quickstart.md
- mkdocs.yml
- src/vibepod/commands/list_cmd.py
- src/vibepod/commands/logs.py
- src/vibepod/commands/proxy.py
- src/vibepod/commands/run.py
- src/vibepod/commands/stop.py
- src/vibepod/commands/update.py
- src/vibepod/constants.py
- src/vibepod/core/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_list.py
- tests/test_proxy_permissions.py
- tests/test_run.py
🚧 Files skipped from review as they are similar to previous changes (6)
- docs/configuration.md
- docs/quickstart.md
- tests/test_proxy_permissions.py
- src/vibepod/constants.py
- src/vibepod/core/config.py
- docs/podman.md
```python
def _prepare_volume_dir(self, path: Path) -> None:
    """Ensure *path* is accessible inside a rootless Podman container.

    Rootless Podman maps the host UID to root inside the container, but
    a non-root process started via ``su`` / ``gosu`` in the entrypoint
    gets a subordinate UID that cannot read host files with standard
    permissions. Making VibePod-managed directories world-accessible
    (``0o777``) avoids "Permission denied" errors without needing
    ``userns_mode=keep-id`` (which breaks entrypoints that call ``su``).

    This is only applied to directories VibePod itself creates — never
    to the user's workspace.
    """
    if self.runtime != RUNTIME_PODMAN:
        return
    try:
        path.chmod(0o777)
        for child in path.rglob("*"):
            try:
                child.chmod(0o777 if child.is_dir() else 0o666)
            except OSError:
                pass
    except OSError:
        pass
```
Avoid making VibePod state world-readable and world-writable.
_prepare_volume_dir() recursively flips managed trees to 0777/0666 for Podman. In this PR that helper is used for agent config directories, log/proxy state, and the proxy CA directory, so on a multi-user machine other local users can read or tamper with credentials, session data, and CA material. Please use a runtime-specific ownership/ACL strategy instead of relaxing host permissions this broadly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/vibepod/core/docker.py` around lines 93 - 116, The current
_prepare_volume_dir makes VibePod-managed dirs world-readable/writable
(0o777/0o666) which exposes sensitive state; instead when runtime ==
RUNTIME_PODMAN change ownership and restrictive perms rather than broad chmod:
determine the container-side UID/GID mapping for rootless Podman (query Podman
or derive subordinate uid range), chown the path and its children to that mapped
UID:GID and set restrictive perms (e.g. 0o700 for dirs, 0o600 for files); if you
cannot resolve a mapped UID:GID, fall back to applying POSIX ACLs (setfacl)
granting the mapped container UID access while keeping owner-only perms for
others; update _prepare_volume_dir to perform chown + 0o700/0o600 (or setfacl)
and remove any 0o777/0o666 usage so agent config dirs, log/proxy state, and
proxy CA remain private.
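A minimal sketch of the owner-only alternative the prompt suggests (this runs without privileges, so it shows only the restrictive-chmod half; the chown into the mapped subordinate UID range or the setfacl fallback would need root or `podman unshare` and is omitted):

```python
import tempfile
from pathlib import Path

def restrict_tree(path: Path) -> None:
    """Apply owner-only permissions instead of 0o777/0o666 (sketch).

    A real fix would additionally chown into the container's mapped
    subordinate UID range, or grant that UID access via POSIX ACLs,
    neither of which this unprivileged sketch assumes.
    """
    path.chmod(0o700)
    for child in path.rglob("*"):
        child.chmod(0o700 if child.is_dir() else 0o600)

root = Path(tempfile.mkdtemp())
(root / "sub").mkdir()
(root / "sub" / "secret").write_text("token")
restrict_tree(root)
print(oct((root / "sub" / "secret").stat().st_mode & 0o777))  # → 0o600
```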
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/vibepod/commands/run.py (1)
285-291: ⚠️ Potential issue | 🟠 Major

Podman permission fix still misses the `/workspace` bind mount.

Line 286 builds `paths_to_fix` without the main workspace mount, so rootless Podman can still fail on project files written/read under `/workspace`.

🔧 Suggested patch

```diff
 if manager.runtime == RUNTIME_PODMAN:
-    paths_to_fix = [spec.config_mount_path]
+    paths_to_fix = [spec.config_mount_path, "/workspace"]
     extra_volumes_pre = _agent_extra_volumes(selected_agent, agent_config_dir(selected_agent))
     if extra_volumes_pre:
         paths_to_fix.extend(cp for _, cp, _ in extra_volumes_pre)
     quoted = " ".join(f"'{p}'" for p in dict.fromkeys(paths_to_fix))
     podman_fixup = [f"chmod -R a+rwX {quoted} 2>/dev/null || true"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/run.py` around lines 285 - 291, The podman permission fix builds paths_to_fix only from spec.config_mount_path and _agent_extra_volumes, missing the main workspace bind mount (/workspace); update the block where manager.runtime == RUNTIME_PODMAN to also include the primary workspace mount in paths_to_fix (e.g. append spec.workspace_mount_path if that attribute exists, or fall back to adding the literal '/workspace' before deduping), so that quoted and podman_fixup cover the workspace as well as the config and extra volumes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/vibepod/core/runtime.py`:
- Around line 172-179: The prompt currently uses SUPPORTED_RUNTIMES as choices
which can cause a KeyError when accessing available[choice]; change Prompt.ask
to present only actual detected runtimes (use available.keys() or
list(available)) so users can only pick an entry present in the available
mapping, and after selection use available.get(choice) (or validate existence)
before calling save_runtime_preference and returning to avoid KeyError; update
the block around sys.stdin.isatty(), Prompt.ask, save_runtime_preference, and
the return to use these safeguards.
---
Duplicate comments:
In `@src/vibepod/commands/run.py`:
- Around line 285-291: The podman permission fix builds paths_to_fix only from
spec.config_mount_path and _agent_extra_volumes, missing the main workspace bind
mount (/workspace); update the block where manager.runtime == RUNTIME_PODMAN to
also include the primary workspace mount in paths_to_fix (e.g. append
spec.workspace_mount_path if that attribute exists, or fall back to adding the
literal '/workspace' before deduping), so that quoted and podman_fixup cover the
workspace as well as the config and extra volumes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b8af64c2-7ead-4138-b984-ad82e6eabf78
📒 Files selected for processing (13)
- docs/configuration.md
- docs/podman.md
- src/vibepod/cli.py
- src/vibepod/commands/logs.py
- src/vibepod/commands/proxy.py
- src/vibepod/commands/run.py
- src/vibepod/core/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_config.py
- tests/test_proxy_permissions.py
- tests/test_run.py
- tests/test_runtime.py
🚧 Files skipped from review as they are similar to previous changes (2)
- docs/configuration.md
- docs/podman.md
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/vibepod/core/docker.py (1)
287-294: ⚠️ Potential issue | 🟠 Major

Recreate reused service containers when launch options change.

Both methods return any running service container before comparing it to the requested `userns_mode`. After a user updates config or reruns `vp proxy start --userns ...` / `vp ui --userns ...`, the old container keeps running with the previous namespace setting until it is manually stopped or removed.

Also applies to: 350-358
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/core/docker.py` around lines 287 - 294, The current logic in find_datasette() handling existing containers returns a running container without checking if its launch options (notably userns_mode) match the requested setting; update the check after existing.reload() to read the container's current user namespace from existing.attrs (e.g., HostConfig.UsernsMode) and compare it to the requested userns_mode parameter (and keep the existing PROXY_DB_PATH Env check), and if they differ remove the container (existing.remove(force=True)) so it will be recreated with the correct userns_mode; apply the same comparison/fix in the other similar block around the code referenced at 350-358.
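The comparison the prompt asks for can be sketched as a pure function over the inspect payload (the `HostConfig.UsernsMode` key is part of the Docker inspect schema; the function name and the recreate-decision wrapper are illustrative, not the actual docker.py code):

```python
def needs_recreate(attrs: dict, requested_userns) -> bool:
    """Compare a container's recorded userns mode with the requested one.

    `attrs` is the dict exposed as Container.attrs by the docker SDK;
    HostConfig.UsernsMode is an empty string when no userns was set, so
    both "" and None are treated as "unset" here.
    """
    current = attrs.get("HostConfig", {}).get("UsernsMode", "") or None
    return current != (requested_userns or None)
```

When this returns True, the existing container would be removed (`existing.remove(force=True)`) so it is recreated with the requested `userns_mode`.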
♻️ Duplicate comments (3)
src/vibepod/core/runtime.py (2)
66-81: ⚠️ Potential issue | 🟡 Minor

Always close probe clients on failure paths.

If `client.ping()` throws, the function returns `False` without closing the client first. Put cleanup in a `finally` block so repeated runtime detection doesn't leak open connections or file descriptors.

🔧 Proposed fix

```diff
 def _probe_socket(base_url: str) -> bool:
     """Return True if a container engine responds on *base_url*."""
+    client = None
     try:
         import docker as _docker

         # Podman's Docker-compatible API can take several seconds to answer
         # even simple probes on some hosts, so use a wider timeout here.
         client = _docker.DockerClient(
             base_url=base_url,
             timeout=_runtime_probe_timeout(),
         )
         client.ping()
-        client.close()
         return True
     except Exception:
         return False
+    finally:
+        if client is not None:
+            try:
+                client.close()
+            except Exception:
+                pass
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/core/runtime.py` around lines 66 - 81, The _probe_socket function may leak the Docker client if client.ping() raises; ensure the Docker client created in _probe_socket is always closed by moving client.close() into a finally block (or use a context manager) and only call client.ping() between creation and cleanup; update the code around the DockerClient creation and the client.ping()/client.close() calls in _probe_socket so any exception path still closes the client before returning False.
137-149: ⚠️ Potential issue | 🟡 Minor

Trim explicit runtime values before validating them.

`lower()` alone makes values like `" Podman "` become `" podman "`, which still falls into the unknown-runtime branch for CLI, env, or saved-config inputs.

🔧 Proposed fix

```diff
 if explicit is not None:
-    explicit = explicit.lower()
+    explicit = explicit.strip().lower()
     if explicit not in SUPPORTED_RUNTIMES:
         raise RuntimeError(
             f"Unknown container runtime '{explicit}'. "
             f"Supported: {', '.join(SUPPORTED_RUNTIMES)}"
         )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/core/runtime.py` around lines 137 - 149, The explicit runtime string (variable explicit set from override, os.environ.get("VP_CONTAINER_RUNTIME"), or cfg.get("container_runtime")) must be trimmed of surrounding whitespace before validation; modify the runtime resolution code so that after assigning explicit from any source you call explicit = explicit.strip() (or explicit = explicit.strip() if not None) and then explicit = explicit.lower() before checking membership in SUPPORTED_RUNTIMES or comparing to RUNTIME_AUTO; update the block around explicit, detect_available_runtimes(), and the explicit not in SUPPORTED_RUNTIMES check to use the stripped+lowercased value.

src/vibepod/core/docker.py (1)
93-117: ⚠️ Potential issue | 🟠 Major — Don't recursively make mounted paths world-writable.

For Podman this turns VibePod-managed config, logs, proxy state, and CA material into 0777/0666, and `run_agent()` now applies the same helper to every `extra_volumes` host path. On a multi-user machine that exposes sensitive local state to read/tamper by other users. Use a runtime-specific ownership/ACL strategy instead of broad `chmod`.

Also applies to: 214-216, 300-301, 364-365
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/core/docker.py` around lines 93 - 117, The current _prepare_volume_dir implementation recursively chmods paths (and similar helper uses in run_agent/extra_volumes) which makes mounted host paths world-writable; change it to avoid touching bind-mounted or user-provided volumes and to set safe ownership/permissions only on directories VibePod created: detect mounts (use os.path.ismount or compare st_dev/st_ino with parent) and skip them, only operate on paths that we created (or are under our managed data directories), and instead of 0o777/0666 use restrictive perms (e.g., 0o750 for dirs, 0o640 for files) and prefer chown/chacl to set appropriate owner/ACL for the runtime UID where available; update all call sites (e.g., run_agent handling of extra_volumes) to not call the helper for external mounts.
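A possible shape for the safer helper the prompt describes — a hypothetical sketch (`prepare_managed_dir` and the 0o750/0o640 policy are illustrative, not VibePod's actual API):

```python
import os
from pathlib import Path


def prepare_managed_dir(path: Path, managed_root: Path) -> None:
    """Tighten permissions, but only under directories VibePod itself owns."""
    path = path.resolve()
    managed_root = managed_root.resolve()
    # Skip bind mounts and anything outside our managed data directory.
    if os.path.ismount(path) or managed_root not in path.parents:
        return
    for entry in [path, *path.rglob("*")]:
        # 0o750 for directories, 0o640 for files: group-readable,
        # never world-writable.
        entry.chmod(0o750 if entry.is_dir() else 0o640)
```

User-supplied `extra_volumes` paths would simply never be passed to this helper, so external mounts are left untouched.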
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/vibepod/core/docker.py`:
- Around line 244-247: The code unconditionally forwards userns_mode into
run_kwargs causing Docker client failures; update the places where run_kwargs is
built (the blocks that set run_kwargs["userns_mode"] alongside
run_kwargs["user"]) to validate/translate userns_mode against self.runtime: if
self.runtime indicates Docker only allow or translate to 'host' (or omit it) and
for Podman allow keep-id/nomap/auto, and apply the same validation at all
occurrences where userns_mode is assigned (the three run_kwargs construction
sites in this module) so that Docker never receives unsupported Podman modes.
Ensure checks reference self.runtime and run_kwargs (and the local userns_mode
variable) and skip or map the value before adding run_kwargs["userns_mode"].
---
Outside diff comments:
In `@src/vibepod/core/docker.py`:
- Around line 287-294: The current logic in find_datasette() handling existing
containers returns a running container without checking if its launch options
(notably userns_mode) match the requested setting; update the check after
existing.reload() to read the container's current user namespace from
existing.attrs (e.g., HostConfig.UsernsMode) and compare it to the requested
userns_mode parameter (and keep the existing PROXY_DB_PATH Env check), and if
they differ remove the container (existing.remove(force=True)) so it will be
recreated with the correct userns_mode; apply the same comparison/fix in the
other similar block around the code referenced at 350-358.
---
Duplicate comments:
In `@src/vibepod/core/docker.py`:
- Around line 93-117: The current _prepare_volume_dir implementation recursively
chmods paths (and similar helper uses in run_agent/extra_volumes) which makes
mounted host paths world-writable; change it to avoid touching bind-mounted or
user-provided volumes and to set safe ownership/permissions only on directories
VibePod created: detect mounts (use os.path.ismount or compare st_dev/st_ino
with parent) and skip them, only operate on paths that we created (or are under
our managed data directories), and instead of 0o777/0666 use restrictive perms
(e.g., 0o750 for dirs, 0o640 for files) and prefer chown/chacl to set
appropriate owner/ACL for the runtime UID where available; update all call sites
(e.g., run_agent handling of extra_volumes) to not call the helper for external
mounts.
In `@src/vibepod/core/runtime.py`:
- Around line 66-81: The _probe_socket function may leak the Docker client if
client.ping() raises; ensure the Docker client created in _probe_socket is
always closed by moving client.close() into a finally block (or use a context
manager) and only call client.ping() between creation and cleanup; update the
code around the DockerClient creation and the client.ping()/client.close() calls
in _probe_socket so any exception path still closes the client before returning
False.
- Around line 137-149: The explicit runtime string (variable explicit set from
override, os.environ.get("VP_CONTAINER_RUNTIME"), or
cfg.get("container_runtime")) must be trimmed of surrounding whitespace before
validation; modify the runtime resolution code so that after assigning explicit
from any source you call explicit = explicit.strip() (or explicit =
explicit.strip() if not None) and then explicit = explicit.lower() before
checking membership in SUPPORTED_RUNTIMES or comparing to RUNTIME_AUTO; update
the block around explicit, detect_available_runtimes(), and the explicit not in
SUPPORTED_RUNTIMES check to use the stripped+lowercased value.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: de41294f-e499-4085-80f7-2cef77783a79
📒 Files selected for processing (13)
- docs/configuration.md
- docs/podman.md
- src/vibepod/cli.py
- src/vibepod/commands/logs.py
- src/vibepod/commands/proxy.py
- src/vibepod/commands/run.py
- src/vibepod/core/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_config.py
- tests/test_proxy_permissions.py
- tests/test_run.py
- tests/test_runtime.py
🚧 Files skipped from review as they are similar to previous changes (5)
- docs/configuration.md
- tests/test_runtime.py
- src/vibepod/commands/run.py
- src/vibepod/commands/logs.py
- docs/podman.md
🧹 Nitpick comments (1)
tests/test_runtime.py (1)
110-122: Consider adding a test for `_probe_socket` failure cleanup.

The current tests verify successful probing, but there's no test verifying that the Docker client is properly closed when `ping()` fails. This would help catch the resource leak identified in `_probe_socket`.

Example test
```python
def test_probe_socket_closes_client_on_failure(monkeypatch) -> None:
    recorded: dict[str, bool] = {}

    class _FakeClient:
        def __init__(self, *, base_url: str, timeout: float) -> None:
            pass

        def ping(self) -> None:
            raise ConnectionError("Daemon not responding")

        def close(self) -> None:
            recorded["closed"] = True

    monkeypatch.delenv("VP_RUNTIME_PROBE_TIMEOUT", raising=False)
    monkeypatch.setitem(sys.modules, "docker", SimpleNamespace(DockerClient=_FakeClient))
    assert runtime._probe_socket("unix:///tmp/unavailable.sock") is False
    assert recorded.get("closed") is True, "Client should be closed even on failure"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/test_runtime.py` around lines 110 - 122, Add a test that asserts Docker client is closed when _probe_socket fails: mock a fake DockerClient class with ping() raising (e.g., ConnectionError) and close() recording a flag, monkeypatch docker.DockerClient to that fake, call runtime._probe_socket("unix:///tmp/unavailable.sock") and assert it returns False and that the recorded flag shows close() was called; ensure VP_RUNTIME_PROBE_TIMEOUT env var is unset in the test (monkeypatch.delenv(..., raising=False)) so probe uses the default timeout and the test references runtime._probe_socket, docker.DockerClient, ping, and close to locate relevant code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/test_runtime.py`:
- Around line 110-122: Add a test that asserts Docker client is closed when
_probe_socket fails: mock a fake DockerClient class with ping() raising (e.g.,
ConnectionError) and close() recording a flag, monkeypatch docker.DockerClient
to that fake, call runtime._probe_socket("unix:///tmp/unavailable.sock") and
assert it returns False and that the recorded flag shows close() was called;
ensure VP_RUNTIME_PROBE_TIMEOUT env var is unset in the test
(monkeypatch.delenv(..., raising=False)) so probe uses the default timeout and
the test references runtime._probe_socket, docker.DockerClient, ping, and close
to locate relevant code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a4c23591-6a3e-4d48-a14f-db09f906be91
📒 Files selected for processing (8)
- README.md
- docs/configuration.md
- docs/podman.md
- src/vibepod/commands/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_config.py
- tests/test_runtime.py
✅ Files skipped from review due to trivial changes (2)
- README.md
- docs/podman.md
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/test_config.py
@coderabbitai review

✅ Actions performed

Review triggered.
Greptile Summary

This PR introduces first-class Podman support as an alternative to Docker across the entire VibePod CLI: auto-detection of available runtimes, a new

Key issues found:
Confidence Score: 2/5
Important Files Changed
Sequence Diagram

sequenceDiagram
participant CLI as CLI Command
participant GM as get_manager()
participant RR as resolve_runtime()
participant DAR as detect_available_runtimes()
participant PS as _probe_socket()
participant SRP as save_runtime_preference()
participant DM as DockerManager
CLI->>GM: get_manager(runtime_override, config)
GM->>RR: resolve_runtime(override, config)
Note over RR: Check override → env → config<br/>for explicit runtime
RR->>DAR: detect_available_runtimes()
DAR->>PS: probe docker socket
PS-->>DAR: True/False
DAR->>PS: probe podman socket(s)
PS-->>DAR: True/False
DAR-->>RR: {docker: url, podman: url}
alt explicit runtime set
RR-->>GM: (runtime_name, socket_url)
else single runtime found
RR-->>GM: (runtime_name, socket_url)
else both runtimes, TTY
RR->>RR: Prompt.ask()
RR->>SRP: save_runtime_preference(choice)
Note over SRP: OSError not caught by get_manager!
RR-->>GM: (choice, socket_url)
else both runtimes, non-TTY
RR-->>GM: (RUNTIME_DOCKER, docker_url)
end
GM->>DM: DockerManager(base_url, runtime)
DM-->>GM: manager
GM-->>CLI: manager
Last reviewed commit: "Add integration test..."
5763d07 to 7a77a60
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/vibepod/commands/update.py (1)
34-39: ⚠️ Potential issue | 🟡 Minor — Preserve legacy JSON field for compatibility.

Line 34-Line 39 removes the previous `docker` key from `--json` output, which can break existing scripts that parse the old schema. Consider keeping a compatibility field during a transition period.

Suggested compatibility patch
```diff
 info = {
     "vibepod": __version__,
     "python": platform.python_version(),
     "runtime": rt_name,
     "runtime_version": rt_version,
+    # Backward-compatibility for existing automation consuming older schema.
+    "docker": rt_version,
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/update.py` around lines 34 - 39, The JSON output removed the legacy "docker" key and will break downstream parsers; modify the info dict construction (the info variable in update.py where keys "vibepod","python","runtime","runtime_version" are set) to retain a backwards-compatible "docker" field that maps to the previous value (for example set "docker": rt_name or derive from "runtime"/"runtime_version") so both the new "runtime"/"runtime_version" fields and the legacy "docker" key are present during the transition; ensure the value reflects the same runtime identifier previously exposed and add a short comment indicating it is kept for compatibility.
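A consumer-side view of why the compatibility field matters — a sketch with placeholder values (`__version__`, `rt_name`, and `rt_version` here stand in for VibePod's real values):

```python
import json
import platform

__version__ = "1.2.3"                     # placeholder for vibepod.__version__
rt_name, rt_version = "podman", "4.9.3"   # placeholders for the detected runtime

info = {
    "vibepod": __version__,
    "python": platform.python_version(),
    "runtime": rt_name,
    "runtime_version": rt_version,
    # Legacy key retained for a transition period so older scripts keep working.
    "docker": rt_version,
}
payload = json.dumps(info)
```

Scripts that still read the old `docker` key parse this output unchanged, while new consumers can switch to `runtime`/`runtime_version`.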
🧹 Nitpick comments (1)
src/vibepod/commands/run.py (1)
354-377: Podman permission fixup excludes the user workspace intentionally.

The `chmod -R a+rwX` fixup targets only VibePod-managed directories (config mount and extra agent volumes) but not `/workspace`. This appears intentional since:
- The workspace is user-controlled content that shouldn't have its permissions modified
- Users can resolve workspace permission issues via `--userns keep-id` for compatible images

However, per the PR discussion (bobek's comments), rootless Podman UID mapping can still cause ownership mismatches on the workspace bind mount. Consider documenting this limitation or adding a CLI flag to opt into workspace permission fixup for users who encounter issues.
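The runtime-aware `--userns` handling raised elsewhere in this review could look like this — a hypothetical sketch (`validated_userns` is illustrative; `keep-id`, `nomap`, and `auto` are Podman-only user-namespace modes):

```python
from typing import Optional

RUNTIME_DOCKER = "docker"
RUNTIME_PODMAN = "podman"
PODMAN_ONLY_MODES = {"keep-id", "nomap", "auto"}


def validated_userns(runtime: str, userns_mode: Optional[str]) -> Optional[str]:
    """Return a userns_mode safe to forward to the runtime, or None to omit it."""
    if userns_mode is None or runtime != RUNTIME_DOCKER:
        return userns_mode
    # Docker rejects Podman-only modes; drop them rather than fail the run.
    return None if userns_mode in PODMAN_ONLY_MODES else userns_mode
```

Applying this at every `run_kwargs["userns_mode"]` assignment site keeps Docker from ever receiving an unsupported Podman mode.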
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/run.py` around lines 354 - 377, The current Podman chmod fixup only includes spec.config_mount_path and extra volumes (built in podman_fixup via paths_to_fix and _agent_extra_volumes) and deliberately excludes /workspace; add an opt-in CLI flag (e.g., --fix-workspace or fix_workspace: bool on the command handler) that, when true and manager.runtime == RUNTIME_PODMAN, appends "/workspace" to paths_to_fix before building quoted and podman_fixup so the chmod line (podman_fixup = [f"chmod -R a+rwX {quoted} 2>/dev/null || true"]) will include /workspace; ensure the flag default is false and update the command help/usage text accordingly and propagate the flag into the same branch that computes podman_fixup and entrypoint (used by _init_entrypoint).
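The fixup line quoted in the prompt above is straightforward to build safely with `shlex.quote` — a sketch (the function name is illustrative; the chmod string matches the one in the comment):

```python
import shlex


def build_podman_fixup(paths: list[str]) -> str:
    """Best-effort permission fixup prepended to the container entrypoint."""
    quoted = " ".join(shlex.quote(p) for p in paths)
    # Errors are silenced: read-only mounts must not abort container start.
    return f"chmod -R a+rwX {quoted} 2>/dev/null || true"
```

Quoting each path guards against spaces or shell metacharacters in mount paths.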
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/prepare_runtime_scenario.sh`:
- Around line 58-63: The script uses the environment variable USER directly when
writing /etc/subuid and /etc/subgid, which can produce invalid entries if USER
is unset; update the top of the relevant block to resolve a safe username into a
local variable (e.g., username="$(printf '%s' "${USER:-}" ; if [ -z "${USER:-}"
]; then id -un; fi)" or equivalent) and then use that variable instead of $USER
in the grep and tee lines (references: the grep checks and the echo/tee
invocations that currently use ${USER}). Ensure the resolved username is quoted
when used and that behavior is unchanged when id -un fails (fail early with a
clear error).
---
Duplicate comments:
In `@src/vibepod/commands/update.py`:
- Around line 34-39: The JSON output removed the legacy "docker" key and will
break downstream parsers; modify the info dict construction (the info variable
in update.py where keys "vibepod","python","runtime","runtime_version" are set)
to retain a backwards-compatible "docker" field that maps to the previous value
(for example set "docker": rt_name or derive from "runtime"/"runtime_version")
so both the new "runtime"/"runtime_version" fields and the legacy "docker" key
are present during the transition; ensure the value reflects the same runtime
identifier previously exposed and add a short comment indicating it is kept for
compatibility.
---
Nitpick comments:
In `@src/vibepod/commands/run.py`:
- Around line 354-377: The current Podman chmod fixup only includes
spec.config_mount_path and extra volumes (built in podman_fixup via paths_to_fix
and _agent_extra_volumes) and deliberately excludes /workspace; add an opt-in
CLI flag (e.g., --fix-workspace or fix_workspace: bool on the command handler)
that, when true and manager.runtime == RUNTIME_PODMAN, appends "/workspace" to
paths_to_fix before building quoted and podman_fixup so the chmod line
(podman_fixup = [f"chmod -R a+rwX {quoted} 2>/dev/null || true"]) will include
/workspace; ensure the flag default is false and update the command help/usage
text accordingly and propagate the flag into the same branch that computes
podman_fixup and entrypoint (used by _init_entrypoint).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d9ae0a0a-5f32-4f0a-bb8a-d2e73d731332
📒 Files selected for processing (27)
- .github/workflows/runtime-integration.yml
- README.md
- docs/configuration.md
- docs/index.md
- docs/podman.md
- docs/quickstart.md
- mkdocs.yml
- scripts/cleanup_runtime_scenario.sh
- scripts/prepare_runtime_scenario.sh
- src/vibepod/cli.py
- src/vibepod/commands/config.py
- src/vibepod/commands/list_cmd.py
- src/vibepod/commands/logs.py
- src/vibepod/commands/proxy.py
- src/vibepod/commands/run.py
- src/vibepod/commands/stop.py
- src/vibepod/commands/update.py
- src/vibepod/constants.py
- src/vibepod/core/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_config.py
- tests/test_list.py
- tests/test_proxy_permissions.py
- tests/test_run.py
- tests/test_runtime.py
- tests/test_version_integration.py
✅ Files skipped from review due to trivial changes (8)
- docs/quickstart.md
- README.md
- tests/test_list.py
- mkdocs.yml
- scripts/cleanup_runtime_scenario.sh
- .github/workflows/runtime-integration.yml
- docs/podman.md
- src/vibepod/constants.py
🚧 Files skipped from review as they are similar to previous changes (6)
- docs/index.md
- src/vibepod/cli.py
- tests/test_runtime.py
- src/vibepod/commands/list_cmd.py
- src/vibepod/core/runtime.py
- tests/test_proxy_permissions.py
```sh
if ! grep -q "^${USER}:" /etc/subuid; then
  echo "${USER}:100000:65536" | sudo tee -a /etc/subuid >/dev/null
fi
if ! grep -q "^${USER}:" /etc/subgid; then
  echo "${USER}:100000:65536" | sudo tee -a /etc/subgid >/dev/null
fi
```
Avoid relying on `$USER` when writing subuid/subgid entries.

Line 58-Line 63 can write invalid entries if `USER` is unset. Resolve the username with an `id -un` fallback before the file writes.
Suggested fix
```diff
 install_podman() {
   sudo touch /etc/subuid /etc/subgid
+  local username="${USER:-$(id -un)}"
-  if ! grep -q "^${USER}:" /etc/subuid; then
-    echo "${USER}:100000:65536" | sudo tee -a /etc/subuid >/dev/null
+  if ! grep -q "^${username}:" /etc/subuid; then
+    echo "${username}:100000:65536" | sudo tee -a /etc/subuid >/dev/null
   fi
-  if ! grep -q "^${USER}:" /etc/subgid; then
-    echo "${USER}:100000:65536" | sudo tee -a /etc/subgid >/dev/null
+  if ! grep -q "^${username}:" /etc/subgid; then
+    echo "${username}:100000:65536" | sudo tee -a /etc/subgid >/dev/null
   fi
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-if ! grep -q "^${USER}:" /etc/subuid; then
-  echo "${USER}:100000:65536" | sudo tee -a /etc/subuid >/dev/null
-fi
-if ! grep -q "^${USER}:" /etc/subgid; then
-  echo "${USER}:100000:65536" | sudo tee -a /etc/subgid >/dev/null
-fi
+install_podman() {
+  sudo touch /etc/subuid /etc/subgid
+  local username="${USER:-$(id -un)}"
+  if ! grep -q "^${username}:" /etc/subuid; then
+    echo "${username}:100000:65536" | sudo tee -a /etc/subuid >/dev/null
+  fi
+  if ! grep -q "^${username}:" /etc/subgid; then
+    echo "${username}:100000:65536" | sudo tee -a /etc/subgid >/dev/null
+  fi
+}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/prepare_runtime_scenario.sh` around lines 58 - 63, The script uses
the environment variable USER directly when writing /etc/subuid and /etc/subgid,
which can produce invalid entries if USER is unset; update the top of the
relevant block to resolve a safe username into a local variable (e.g.,
username="$(printf '%s' "${USER:-}" ; if [ -z "${USER:-}" ]; then id -un; fi)"
or equivalent) and then use that variable instead of $USER in the grep and tee
lines (references: the grep checks and the echo/tee invocations that currently
use ${USER}). Ensure the resolved username is quoted when used and that behavior
is unchanged when id -un fails (fail early with a clear error).
augment review

🤖 Augment PR Summary

Summary: This PR adds first-class Podman support alongside Docker, including runtime discovery/selection and runtime-aware container operations.

Changes:

Technical Notes: Runtime selection priority is CLI flag → env var → config preference → auto-detect (prompt if interactive); a new workflow and scripts validate behavior across multiple host runtime scenarios.
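That priority chain can be sketched as a small resolver — hypothetical code (the names mirror the documented `VP_CONTAINER_RUNTIME` env var and `container_runtime` config key; the real `resolve_runtime` also validates and auto-detects):

```python
import os

RUNTIME_AUTO = "auto"


def pick_runtime(cli_flag=None, env=None, config=None) -> str:
    """Resolve the runtime preference: CLI flag > env var > saved config > auto."""
    env = os.environ if env is None else env
    config = config or {}
    candidates = (cli_flag, env.get("VP_CONTAINER_RUNTIME"), config.get("container_runtime"))
    for candidate in candidates:
        if candidate and candidate.strip():
            return candidate.strip().lower()
    return RUNTIME_AUTO  # fall through to socket auto-detection
```

The first non-empty source wins, so a CLI flag overrides both the environment and any saved preference.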
```python
try:
    runtime_name, socket_url = resolve_runtime(override=runtime_override, config=config)
except (RuntimeError, ValueError) as exc:
```
get_manager() only wraps RuntimeError/ValueError, but resolve_runtime() can write the saved preference and may raise OSError/yaml.YAMLError (e.g., config dir not writable), which would currently bubble out as an unhandled exception. Consider catching those here too so callers consistently see a DockerClientError with a clean message.
Severity: medium
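A sketch of the broader exception handling this comment suggests — `DockerClientError` and `resolve_runtime` here are simplified stand-ins for VibePod's real definitions (`yaml.YAMLError` would join the tuple in the real code):

```python
class DockerClientError(Exception):
    """Uniform error surfaced to CLI callers."""


def resolve_runtime(override=None, config=None):
    # Stand-in: saving the runtime preference can raise OSError when
    # the config directory is not writable.
    raise OSError("config dir not writable")


def get_manager(runtime_override=None, config=None):
    try:
        runtime_name, socket_url = resolve_runtime(override=runtime_override, config=config)
    except (RuntimeError, ValueError, OSError) as exc:
        raise DockerClientError(f"Could not resolve container runtime: {exc}") from exc
    return runtime_name, socket_url
```

Callers then always see one exception type with a clean message instead of a raw traceback from a failed preference write.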
```python
# Ensure VibePod-managed dirs are accessible inside rootless Podman
self._prepare_volume_dir(config_dir)
if extra_volumes:
    for host_path, _, _ in extra_volumes:
```
run_agent() calls _prepare_volume_dir() on every extra_volumes host path, even though _prepare_volume_dir()’s docstring says it’s only for VibePod-managed directories. If extra_volumes includes user-provided mounts (e.g., X11 sockets or other host paths), this can unexpectedly chmod/chown arbitrary host directories and broaden permissions.
Severity: high
♻️ Duplicate comments (2)
src/vibepod/commands/run.py (1)
361-367: ⚠️ Potential issue | 🟠 Major — Podman fixup still skips the `/workspace` bind mount.

At Line 362, `paths_to_fix` omits `/workspace`, so the main project bind mount can still hit rootless Podman UID/GID permission mismatches.

Targeted fix
```diff
 if manager.runtime == RUNTIME_PODMAN:
-    paths_to_fix = [spec.config_mount_path]
+    paths_to_fix = ["/workspace", spec.config_mount_path]
     extra_volumes_pre = _agent_extra_volumes(selected_agent, agent_config_dir(selected_agent))
     if extra_volumes_pre:
         paths_to_fix.extend(cp for _, cp, _ in extra_volumes_pre)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/run.py` around lines 361 - 367, The Podman permission-fix list currently builds paths_to_fix from spec.config_mount_path but omits the project/workspace bind mount; update the code that constructs paths_to_fix (used when manager.runtime == RUNTIME_PODMAN) to also include the workspace/project mount (e.g., spec.workspace_mount_path or the attribute that holds "/workspace") before extending with extra_volumes_pre, so the quoted/chmod command includes the workspace path as well and fixes rootless Podman UID/GID issues.

src/vibepod/commands/update.py (1)
34-39: ⚠️ Potential issue | 🟡 Minor — JSON schema change still breaks existing `--json` consumers.

On Line 37 and Line 38, the payload no longer includes the legacy `docker` key. That's a compatibility break for scripts parsing prior output.

Compatibility bridge (transition period)
```diff
 info = {
     "vibepod": __version__,
     "python": platform.python_version(),
     "runtime": rt_name,
     "runtime_version": rt_version,
 }
+# Backward compatibility for older automation expecting `docker`
+info["docker"] = rt_version if rt_name == "docker" else f"{rt_name} {rt_version}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/vibepod/commands/update.py` around lines 34 - 39, The JSON payload removed the legacy "docker" key causing backward-compatibility breaks; restore the legacy fields by adding info["docker"] (and info["docker_version"] if your previous schema had it) and populate them from rt_name and rt_version (the same values used for "runtime" and "runtime_version") so existing --json consumers keep receiving the expected keys; update the info dict near the existing definition that uses __version__, platform.python_version(), rt_name and rt_version.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/vibepod/commands/run.py`:
- Around line 361-367: The Podman permission-fix list currently builds
paths_to_fix from spec.config_mount_path but omits the project/workspace bind
mount; update the code that constructs paths_to_fix (used when manager.runtime
== RUNTIME_PODMAN) to also include the workspace/project mount (e.g.,
spec.workspace_mount_path or the attribute that holds "/workspace") before
extending with extra_volumes_pre, so the quoted/chmod command includes the
workspace path as well and fixes rootless Podman UID/GID issues.
In `@src/vibepod/commands/update.py`:
- Around line 34-39: The JSON payload removed the legacy "docker" key causing
backward-compatibility breaks; restore the legacy fields by adding
info["docker"] (and info["docker_version"] if your previous schema had it) and
populate them from rt_name and rt_version (the same values used for "runtime"
and "runtime_version") so existing --json consumers keep receiving the expected
keys; update the info dict near the existing definition that uses __version__,
platform.python_version(), rt_name and rt_version.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7e65a8dc-6e0e-465a-9988-6447b6dddf70
📒 Files selected for processing (27)
- .github/workflows/runtime-integration.yml
- README.md
- docs/configuration.md
- docs/index.md
- docs/podman.md
- docs/quickstart.md
- mkdocs.yml
- scripts/cleanup_runtime_scenario.sh
- scripts/prepare_runtime_scenario.sh
- src/vibepod/cli.py
- src/vibepod/commands/config.py
- src/vibepod/commands/list_cmd.py
- src/vibepod/commands/logs.py
- src/vibepod/commands/proxy.py
- src/vibepod/commands/run.py
- src/vibepod/commands/stop.py
- src/vibepod/commands/update.py
- src/vibepod/constants.py
- src/vibepod/core/config.py
- src/vibepod/core/docker.py
- src/vibepod/core/runtime.py
- tests/test_config.py
- tests/test_list.py
- tests/test_proxy_permissions.py
- tests/test_run.py
- tests/test_runtime.py
- tests/test_version_integration.py
✅ Files skipped from review due to trivial changes (10)
- docs/quickstart.md
- docs/index.md
- README.md
- scripts/cleanup_runtime_scenario.sh
- mkdocs.yml
- .github/workflows/runtime-integration.yml
- src/vibepod/constants.py
- tests/test_runtime.py
- docs/configuration.md
- tests/test_version_integration.py
🚧 Files skipped from review as they are similar to previous changes (12)
- tests/test_list.py
- src/vibepod/cli.py
- src/vibepod/commands/stop.py
- src/vibepod/core/config.py
- tests/test_config.py
- scripts/prepare_runtime_scenario.sh
- src/vibepod/commands/proxy.py
- src/vibepod/commands/logs.py
- src/vibepod/core/runtime.py
- src/vibepod/core/docker.py
- docs/podman.md
- src/vibepod/commands/list_cmd.py
Adds first-class support for Podman as an alternative container runtime to Docker throughout VibePod. It updates documentation, configuration, and CLI commands to allow users to select between Docker and Podman, including auto-detection and explicit selection methods. Additionally, it introduces fixes for rootless Podman compatibility, such as permission handling for bind mounts.
Documentation updates:
- A new Podman guide (`docs/podman.md`), linked from navigation and other docs. Updated Quickstart and main documentation to reflect Podman support and new configuration options.
- `container_runtime` config key and `VP_CONTAINER_RUNTIME` environment variable in `docs/configuration.md`, including example usage.

Summary by CodeRabbit
New Features
Documentation
Tests
Chores