Fix continue for sphere #513
Co-authored-by: Copilot <copilot@github.com>
Walkthrough

Refactors continue-run to return aggregated structured command tuples (command list + working folder) and to use mock inputs to avoid reading template folders; JadeApp now extends/aggregates those commands and, in GLOBAL_JOB mode, launches global jobs.
Sequence Diagram

```mermaid
sequenceDiagram
    participant App as JadeApp
    participant BRun as BenchmarkRun
    participant Factory as SingleRunFactory
    participant SRun as SingleRun
    participant MockInp as MockInput
    participant JobMgr as JobManager
    App->>BRun: continue_run(testing)
    activate BRun
    BRun->>BRun: _get_continue_run_command(..., testing)
    BRun->>Factory: create(..., mock_input=True)
    activate Factory
    Factory->>MockInp: instantiate (no file I/O)
    Factory-->>BRun: SingleRun
    deactivate Factory
    BRun->>SRun: run(env_vars, folder, continue_run=True, test=testing)
    activate SRun
    SRun-->>BRun: list[tuple[list[str], Path]]
    deactivate SRun
    BRun-->>App: aggregated commands (list of tuples)
    deactivate BRun
    App->>App: commands.extend(each_result)
    alt RunMode == GLOBAL_JOB
        App->>JobMgr: launch_global_jobs(commands, env_vars, test=testing)
        activate JobMgr
        JobMgr-->>App: submission result
        deactivate JobMgr
    else
        App-->>App: return aggregated commands
    end
```
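The aggregation step in the diagram can be sketched as follows. This is a minimal illustration only: the `continue_run` body and the `mcnp6 i=<name>.i` command form are hypothetical stand-ins, while the `list[tuple[list[str], Path]]` return shape and the `extend()` aggregation come from the diagram above.

```python
from pathlib import Path


def continue_run(run_folders: list[Path]) -> list[tuple[list[str], Path]]:
    commands: list[tuple[list[str], Path]] = []
    for folder in run_folders:
        # One (command, working folder) tuple per existing single-run
        # folder; the "mcnp6 i=<name>.i" form is illustrative only.
        commands.append((["mcnp6", f"i={folder.name}.i"], folder))
    return commands


# JadeApp-style aggregation: flatten each benchmark's result with extend().
all_commands: list[tuple[list[str], Path]] = []
for result in (continue_run([Path("Sphere_m101")]),
               continue_run([Path("Sphere_dummy1")])):
    all_commands.extend(result)

print(len(all_commands))  # 2
```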
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed
❌ Failed checks: 1 warning, 1 inconclusive
✅ Passed checks: 3 passed
Codecov Report: ❌ Patch coverage is
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/jade/run/benchmark.py (1)
482-483: ⚠️ Potential issue | 🟡 Minor — Stabilize continue-run ordering.

`os.listdir()` is filesystem-order dependent, so the returned command list and aggregated global-job script order are nondeterministic. That makes the new Sphere tests flaky and the generated submission scripts harder to reason about.

🛠️ Suggested fix

```diff
- for single_run_folder in os.listdir(benchmark_root):
+ for single_run_folder in sorted(os.listdir(benchmark_root)):
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/jade/run/benchmark.py` around lines 482-483, the loop over os.listdir(benchmark_root) yields non-deterministic ordering; iterate over sorted(os.listdir(benchmark_root)) (or otherwise sort the entries) so that single_run_folder and the derived single_run_root are processed consistently and the aggregated global-job script order is stable.
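The determinism point behind the suggested fix can be verified directly: `sorted()` yields the same lexicographic order regardless of the order in which the filesystem happens to return entries. The folder names below are illustrative, borrowed from the Sphere fixtures in this PR.

```python
import os
import tempfile

# Demonstrate the fix: sorting directory entries makes iteration
# deterministic across filesystems and platforms.
with tempfile.TemporaryDirectory() as benchmark_root:
    for name in ("Sphere_m101", "Sphere_dummy1", "Sphere_1001"):
        os.mkdir(os.path.join(benchmark_root, name))

    ordered = sorted(os.listdir(benchmark_root))
    print(ordered)  # ['Sphere_1001', 'Sphere_dummy1', 'Sphere_m101']
```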
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/jade/run/benchmark.py`:
- Around line 181-199: The JOB_SUBMISSION branch is incorrectly guarded by
continue_run so normal BenchmarkRun runs never call _submit_job; remove the
continue_run check so that when env_vars.run_mode == RunMode.JOB_SUBMISSION you
always call self._submit_job(...) for the regular path, and preserve the
existing test behavior (when test is True call _submit_job(..., test=test) and
return the command). Update the conditional logic around env_vars.run_mode,
_submit_job, continue_run, and test (used by SingleRun.run) to ensure job
submission occurs for non-test runs as well.
- Around line 415-418: The constructor currently sets self.name from the
template folder basename which is wrong for MCNP continues; instead locate the
actual MCNP input file in template_folder (e.g., search for a file with
extension ".i" or case-insensitive variants), set self.inp to that input Path,
and derive self.name from self.inp.stem so generated continue commands target
the real input file; update the __init__ logic that uses self.template_folder,
self.inp, and self.name accordingly.
In `@tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_m101/metadata.json`:
- Around line 2-3: The metadata field benchmark_name in the Sphere fixture is
incorrect (currently "Oktavian"); update the benchmark_name value in the
Sphere_m101 fixture's metadata.json so it correctly identifies the Sphere
benchmark (replace "Oktavian" with the expected "Sphere" or the project's
canonical benchmark name), leaving other fields like benchmark_version untouched
to ensure the fixture matches its directory/intent.
ℹ️ Review info
⚙️ Run configuration: Organization UI | Review profile: CHILL | Plan: Pro | Run ID: 661af19f-e152-43a8-90c1-fb3d5070dbeb
📒 Files selected for processing (8):
- src/jade/app/app.py
- src/jade/run/benchmark.py
- tests/app/test_app.py
- tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_dummy1/Sphere_dummy1.i
- tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_dummy1/metadata.json
- tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_m101/Sphere_M101_.i
- tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_m101/metadata.json
- tests/run/test_benchmark.py
♻️ Duplicate comments (1)
src/jade/run/benchmark.py (1)
412-416: ⚠️ Potential issue | 🟠 Major — Derive `MockInput.name` from the real input file, not the folder.

This still assumes the run directory name matches the MCNP/D1S input stem. For Sphere-style generated cases that is not guaranteed, so continue can still emit `i=<folder>.i` for a file that does not exist.

🛠️ Suggested fix

```diff
 class MockInput:
 @@
     def __init__(self, template_folder: PathLike, lib: Library):
-        self.template_folder = template_folder
-        self.inp = None
-        self.name = os.path.basename(template_folder)
+        folder = Path(template_folder)
+        self.template_folder = folder
+        input_files = sorted([*folder.glob("*.i"), *folder.glob("*.I")])
+        self.inp = input_files[0] if input_files else None
+        self.name = self.inp.stem if self.inp is not None else folder.name
         self.lib = lib
```

🤖 Prompt for AI Agents

Verify the finding against the current code and only fix it if needed. In `@src/jade/run/benchmark.py` around lines 412-416, the MockInput constructor sets self.name to the template folder basename, which can be wrong for Sphere-style cases; derive self.name from the actual input file stem (locate the MCNP/D1S input file inside template_folder and use Path(...).stem) instead of os.path.basename(template_folder), and ensure self.inp is set and checked before deriving the name so continue emits the correct i=<file>.i value.
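The suggested fix can be exercised in isolation. The sketch below is a standalone reduction of the reviewed `MockInput` (the `lib`/`PathLike` parameters are dropped, and the fixture file name is borrowed from this PR's test data) showing that the name now comes from the input file stem rather than the folder name.

```python
import tempfile
from pathlib import Path


class MockInput:
    """Reduced sketch of the suggested fix: derive the run name from the
    actual MCNP input file in the folder, falling back to the folder name."""

    def __init__(self, template_folder):
        folder = Path(template_folder)
        self.template_folder = folder
        # Pick up .i / .I inputs deterministically; None if the folder
        # contains no input file.
        input_files = sorted([*folder.glob("*.i"), *folder.glob("*.I")])
        self.inp = input_files[0] if input_files else None
        self.name = self.inp.stem if self.inp is not None else folder.name


with tempfile.TemporaryDirectory() as tmp:
    run_dir = Path(tmp) / "Sphere_m101"
    run_dir.mkdir()
    (run_dir / "Sphere_M101_.i").touch()  # input stem != folder name
    mock = MockInput(run_dir)
    print(mock.name)  # Sphere_M101_
```

Note how the folder `Sphere_m101` would previously have produced `i=Sphere_m101.i`, pointing at a nonexistent file.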
ℹ️ Review info
⚙️ Run configuration: Organization UI | Review profile: CHILL | Plan: Pro | Run ID: 4d331242-a678-4c8d-a930-77c1b794d8f3
📒 Files selected for processing (1):
- src/jade/run/benchmark.py
The missing coverage is related to edge cases that are not worrisome.
Fix #507
The continue option for JADE now also works for the Sphere-like benchmarks. Moreover, the capability to run the continue step with a global job (previously bugged for all types of benchmarks) has been added.
Suitable CI tests have been added, and a manual test using the GLOBAL_JOB option has also been performed.