Conversation
📝 Walkthrough

Adds a "cancel run" feature across the CLI, API helper, MCP endpoint, tests, and mock server; increments project versions from 0.3.51 to 0.3.52; and tweaks error-output behavior and a local runtime environment variable.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User/CLI
    participant CLI as tower-cmd CLI
    participant API as API helper
    participant Backend as Mock/Backend
    User->>CLI: tower apps cancel <app_name> <run_number>
    CLI->>CLI: parse args
    CLI->>API: cancel_run(config, name, seq)
    API->>Backend: POST /v1/apps/{name}/runs/{seq}
    Backend->>Backend: mark run "cancelled" and set cancelled_at
    Backend-->>API: CancelRunResponse
    API-->>CLI: CancelRunResponse
    CLI-->>User: display success message with status
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Expose the existing cancel_run API endpoint through the CLI and MCP server. Adds `tower apps cancel <app_name> <run_number>` as a subcommand under apps (consistent with the existing `apps show`, `apps logs`, etc.) and a corresponding `tower_apps_cancel` MCP tool.

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* Add some tests for existing info commands, and a new is_local function
* Apply suggestions from code review
* Fix formatting

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/tower-cmd/src/run.rs`:
- Line 187: When preparing the subprocess env in run.rs (where
env_vars.insert("TOWER__RUNTIME__IS_LOCAL".to_string(), "true".to_string()) is
set), also explicitly clear the cloud-managed flag by inserting
env_vars.insert("TOWER__RUNTIME__IS_TOWER_MANAGED".to_string(),
"false".to_string()); ensure this happens in the local-run branch (the code path
that sets IS_LOCAL when protected_mode is false / cache_dir != None) so the
child process cannot inherit TOWER__RUNTIME__IS_TOWER_MANAGED from the parent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5c046f09-6b2b-483d-87a9-f86e478b2c36
📒 Files selected for processing (3)
- crates/tower-cmd/src/run.rs
- src/tower/info/__init__.py
- tests/tower/test_info.py
```rust
let session = config.session.as_ref().ok_or(Error::NoSession)?;

env_vars.insert("TOWER_JWT".to_string(), session.token.jwt.to_string());
env_vars.insert("TOWER__RUNTIME__IS_LOCAL".to_string(), "true".to_string());
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
echo "Inspect subprocess environment construction:"
rg -n -C4 'env_clear|\.env\(|\.envs\(|Command::new|env_vars' --type rust
echo
echo "Inspect the local-run path and related runtime flags:"
rg -n -C4 'do_run_local_impl|build_cli_execution_spec|TOWER__RUNTIME__IS_LOCAL|IS_TOWER_MANAGED' --type rust
```

Repository: tower/tower-cli
Length of output: 50371
🏁 Script executed:
```shell
# Check how cache_dir is configured in do_run_local_impl and local execution path
rg -n "cache_dir|StartOptions" crates/tower-runtime/src/local.rs | head -30
```
Length of output: 348
🏁 Script executed:
```shell
# Check SubprocessBackend execution to see if cache_dir is passed
rg -n -A5 "SubprocessBackend::new|cache_dir.*Some\|cache_dir.*None" crates/tower-runtime/src/subprocess.rs | head -40
```
Length of output: 41
🏁 Script executed:
```shell
# Look for where do_run_local_impl gets cache_dir info or uses LocalApp
rg -n -B5 -A10 "fn do_run_local_impl" crates/tower-cmd/src/run.rs | head -50
```
Length of output: 558
🏁 Script executed:
```shell
# Get full context of do_run_local_impl to see how backend/StartOptions is created
sed -n '145,230p' crates/tower-cmd/src/run.rs
```
Length of output: 2799
🏁 Script executed:
```shell
# Look at how SubprocessBackend.create is called and what options are passed
rg -n -B3 -A8 "backend.create\|SubprocessBackend::create" crates/tower-cmd/src/run.rs
```
Length of output: 41
🏁 Script executed:
```shell
# Check if cache_dir is provided in build_cli_execution_spec or StartOptions for local runs
rg -n -B5 -A10 "cache_dir:" crates/tower-cmd/src/run.rs
```
Length of output: 702
🏁 Script executed:
```shell
# Check how Config is initialized and what cache_dir defaults to
rg -n "struct Config|cache_dir:" crates/tower-cmd/src/config.rs | head -20
```
Length of output: 129
🏁 Script executed:
```shell
# Check the default_cache_dir function that was referenced earlier
rg -n "default_cache_dir" crates/tower-cmd/src/ -A5 | head -30
```
Length of output: 41
🏁 Script executed:
```shell
# Look for config initialization in the main run flow
rg -n "Config::|cache_dir" crates/tower-cmd/src/run.rs
```
Length of output: 269
🏁 Script executed:
```shell
# Find the Config struct definition
fd -t f "\.rs$" | xargs grep -l "struct Config" | head -5
```
Length of output: 126
🏁 Script executed:
```shell
# Look for cache_dir initialization in main or initialization code
rg -n "cache_dir\s*[:=]" crates/tower-cmd/src/*.rs | grep -v "cache_dir:" | head -20
```
Length of output: 41
🏁 Script executed:
```shell
# Check if there's a config module or initialization
ls -la crates/tower-cmd/src/ | grep -E "config|init"
```
Length of output: 41
🏁 Script executed:
```shell
# Look at the config crate to understand cache_dir
rg -n "struct Config|cache_dir" crates/config/src/lib.rs | head -30
```
Length of output: 496
🏁 Script executed:
```shell
# Check if cache_dir has a default value
rg -n "cache_dir" crates/config/src/lib.rs -B2 -A2
```
Length of output: 984
Explicitly set TOWER__RUNTIME__IS_TOWER_MANAGED=false in local runs to prevent flag bleed.
Local runs inherit the parent environment because protected_mode is false (cache_dir defaults to Some(default_cache_dir()), not None). This means if the parent tower-cli process has TOWER__RUNTIME__IS_TOWER_MANAGED=true from a prior cloud execution, it will leak into the subprocess. The current code only adds TOWER__RUNTIME__IS_LOCAL=true but does not explicitly clear the cloud flag, making it possible for both tower.info.is_cloud_run() and tower.info.is_local() to return True.
Required hardening change:

```diff
  env_vars.insert("TOWER_JWT".to_string(), session.token.jwt.to_string());
+ env_vars.insert(
+     "TOWER__RUNTIME__IS_TOWER_MANAGED".to_string(),
+     "false".to_string(),
+ );
  env_vars.insert("TOWER__RUNTIME__IS_LOCAL".to_string(), "true".to_string());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/tower-cmd/src/run.rs` at line 187, When preparing the subprocess env
in run.rs (where env_vars.insert("TOWER__RUNTIME__IS_LOCAL".to_string(),
"true".to_string()) is set), also explicitly clear the cloud-managed flag by
inserting env_vars.insert("TOWER__RUNTIME__IS_TOWER_MANAGED".to_string(),
"false".to_string()); ensure this happens in the local-run branch (the code path
that sets IS_LOCAL when protected_mode is false / cache_dir != None) so the
child process cannot inherit TOWER__RUNTIME__IS_TOWER_MANAGED from the parent.
Summary by CodeRabbit

New Features
- Cancel a run via `tower apps cancel <app_name> <run_number>` and the `tower_apps_cancel` MCP tool

Improvements
- Tweaked error-output behavior and a local runtime environment variable

Tests
- Tests for existing info commands, the new `is_local` function, and cancel-run support in the mock server

Chores
- Version bump from 0.3.51 to 0.3.52