
Fix continue for sphere#513

Open
dodu94 wants to merge 4 commits into developing from fix-continue-for-Sphere

Conversation

@dodu94 (Member) commented Apr 29, 2026

Fix #507

The continue option for jade now also works for Sphere-like benchmarks. In addition, the capability to run continue with a global job (previously broken for all benchmark types) has been added.

Suitable CI tests have been added, and a manual test using the GLOBAL_JOB option has also been performed.

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced continuation run support with improved command aggregation for complex benchmarks
    • Better handling of global job submission mode for continued benchmark runs
  • Improvements

    • Refactored continuation command processing for improved reliability
    • Added mock input framework enabling comprehensive testing without physical input files
    • Extended test coverage for Sphere benchmark continuation scenarios

Davide Laghi and others added 3 commits April 29, 2026 10:02
Co-authored-by: Copilot <copilot@github.com>
Co-authored-by: Copilot <copilot@github.com>
Co-authored-by: Copilot <copilot@github.com>
@coderabbitai bot (Contributor) commented Apr 29, 2026

Walkthrough

Refactors continue-run to return aggregated, structured command tuples (command list plus working folder) and to use mock inputs so template folders need not be read; JadeApp now aggregates those commands with extend() and, in GLOBAL_JOB mode, launches global jobs via launch_global_jobs, passing the testing flag through.
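The new return shape described above (one (command, working-folder) tuple per single run) can be sketched as follows. This is an illustration only: `fake_continue_run` is a hypothetical stand-in, not the actual jade API; only the `list[tuple[list[str], Path]]` shape and the `extend()` aggregation come from the walkthrough.

```python
from pathlib import Path

def fake_continue_run(folder: Path) -> list[tuple[list[str], Path]]:
    # Hypothetical benchmark hook: one (command, working-folder)
    # tuple per single run that needs to be continued.
    return [(["mcnp6", "c", "i=Sphere_m101.i"], folder / "Sphere_m101")]

commands: list[tuple[list[str], Path]] = []
for bench_folder in [Path("Sphere")]:
    # App-side aggregation uses extend(), since each benchmark
    # returns an iterable of command tuples.
    commands.extend(fake_continue_run(bench_folder))

print(commands[0][0][0])  # → mcnp6 (first token of the first command)
```

This is also why the updated tests access the command text at `command[0][0]`: the first index selects the tuple, the second selects the command token list.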

Changes

Cohort / File(s) / Summary

  • App Continue-Run Handler (src/jade/app/app.py): continue_run now treats each benchmark.continue_run(...) result as an iterable, uses commands.extend(...), and conditionally calls launch_global_jobs(commands, env_vars, test=testing) for RunMode.GLOBAL_JOB, returning submission results.
  • Benchmark Core Refactor (src/jade/run/benchmark.py): refactored continue-run to return list[tuple[list[str], Path]]; added MockInput and a mock_input flag in SingleRunFactory.create to avoid real template I/O; adjusted SingleRun.run/submission behavior to pass testing through and produce structured commands.
  • Tests — App (tests/app/test_app.py): updated test_continue_run to expect successful execution with testing=True and to verify job-script headers; added test_continue_run_sphere covering the MCNP Sphere scenario and GLOBAL_JOB submission behavior.
  • Tests — Benchmark (tests/run/test_benchmark.py): updated assertions to match the new return shape (accessing command text at command[0][0]) and to check mpirun and inclusion/exclusion conditions in structured tuples.
  • Test Fixtures, dummy simulations (tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/...): added a new MCNP input file and metadata.json entries to support Sphere continue-run tests (benchmark metadata and input deck files).
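Based on the change summary above, the app-side aggregation plus the GLOBAL_JOB branch can be sketched roughly as below. The names launch_global_jobs and RunMode.GLOBAL_JOB come from the summary; everything else (the enum values, the launcher body, the benchmark callable) is an assumption for illustration, not jade's actual implementation.

```python
from enum import Enum, auto
from pathlib import Path

class RunMode(Enum):
    SERIAL = auto()
    JOB_SUBMISSION = auto()
    GLOBAL_JOB = auto()

def launch_global_jobs(commands, env_vars, test=False):
    # Stand-in for jade's launcher: in test mode, just report how many
    # commands would be bundled into the global job script.
    return f"submitted {len(commands)} commands (test={test})"

def continue_run(benchmarks, env_vars, run_mode, testing=False):
    commands = []
    for bench in benchmarks:
        # Each benchmark yields an iterable of (command, folder) tuples,
        # so extend() flattens them into one aggregated list.
        commands.extend(bench(testing))
    if run_mode is RunMode.GLOBAL_JOB:
        # In GLOBAL_JOB mode the aggregated commands are submitted
        # together, with the testing flag passed through.
        return launch_global_jobs(commands, env_vars, test=testing)
    return commands

bench = lambda testing: [(["mcnp6", "c"], Path("Sphere_m101"))]
print(continue_run([bench], {}, RunMode.GLOBAL_JOB, testing=True))
```

The key fix sketched here is that aggregation happens before the run-mode branch, so a single global job can cover commands from every continued benchmark.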

Sequence Diagram

sequenceDiagram
    participant App as JadeApp
    participant BRun as BenchmarkRun
    participant Factory as SingleRunFactory
    participant SRun as SingleRun
    participant MockInp as MockInput
    participant JobMgr as JobManager

    App->>BRun: continue_run(testing)
    activate BRun
    BRun->>BRun: _get_continue_run_command(..., testing)
    BRun->>Factory: create(..., mock_input=True)
    activate Factory
    Factory->>MockInp: instantiate (no file I/O)
    Factory-->>BRun: SingleRun
    deactivate Factory
    BRun->>SRun: run(env_vars, folder, continue_run=True, test=testing)
    activate SRun
    SRun-->>BRun: list[tuple[list[str], Path]]
    deactivate SRun
    BRun-->>App: aggregated commands (list of tuples)
    deactivate BRun

    App->>App: commands.extend(each_result)
    alt RunMode == GLOBAL_JOB
        App->>JobMgr: launch_global_jobs(commands, env_vars, test=testing)
        activate JobMgr
        JobMgr-->>App: submission result
        deactivate JobMgr
    else
        App-->>App: return aggregated commands
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Possibly related PRs

Suggested reviewers

  • alexvalentine94

Poem

🐰 I hop through mocks and stubbed delight,
No template folders in my sight.
Commands now gather, neat and bright,
GLOBAL jobs launched into the night. 🚀

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 46.67%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check (❓ Inconclusive): the title 'Fix continue for sphere' is vague and does not convey the specific nature of the fix. Resolution: consider a more descriptive title, such as 'Fix FileNotFoundError in continue run for Sphere benchmark'.
✅ Passed checks (3 passed)
  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Linked Issues check (✅ Passed): the pull request addresses issue #507 by refactoring SingleRunFactory and BenchmarkRun to use mock inputs during continue operations, eliminating the need to access benchmark_templates.
  • Out of Scope Changes check (✅ Passed): all changes are directly related to fixing continue-run for Sphere benchmarks; no out-of-scope modifications detected.


@codecov bot commented Apr 29, 2026

Codecov Report

❌ Patch coverage is 80.55556% with 7 lines in your changes missing coverage. Please review.

Files with missing lines (Patch %, Lines):
  • src/jade/run/benchmark.py: 78.78%, 7 Missing ⚠️

Files with missing lines (Coverage Δ):
  • src/jade/app/app.py: 82.69% <100.00%> (+0.16%) ⬆️
  • src/jade/run/benchmark.py: 86.30% <78.78%> (-0.70%) ⬇️

@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/jade/run/benchmark.py (1)

482-483: ⚠️ Potential issue | 🟡 Minor

Stabilize continue-run ordering.

os.listdir() is filesystem-order dependent, so the returned command list and aggregated global-job script order are nondeterministic. That makes the new Sphere tests flaky and the generated submission scripts harder to reason about.

🛠️ Suggested fix
-        for single_run_folder in os.listdir(benchmark_root):
+        for single_run_folder in sorted(os.listdir(benchmark_root)):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/jade/run/benchmark.py` around lines 482 - 483, The loop over
os.listdir(benchmark_root) yields non-deterministic ordering; change the
iteration to use a stable, deterministic order (e.g., iterate over
sorted(os.listdir(benchmark_root)) or otherwise sort the entries) so that
single_run_folder and derived single_run_root are processed consistently and the
aggregated global-job script order is stable; update the for loop that currently
reads "for single_run_folder in os.listdir(benchmark_root):" to iterate over a
sorted list instead.
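The nondeterminism the reviewer flags is easy to demonstrate in isolation. The snippet below is not jade code, just a minimal, self-contained illustration of why wrapping os.listdir() in sorted() gives a stable traversal order regardless of filesystem behavior.

```python
import os
import tempfile

# os.listdir() returns entries in arbitrary, filesystem-dependent order;
# sorting makes the traversal deterministic across platforms and runs.
with tempfile.TemporaryDirectory() as root:
    for name in ["Sphere_m101", "Sphere_dummy1", "Sphere_aaa"]:
        os.mkdir(os.path.join(root, name))
    entries = sorted(os.listdir(root))
    print(entries)  # → ['Sphere_aaa', 'Sphere_dummy1', 'Sphere_m101']
```

With the sorted traversal, the aggregated global-job script lists its commands in the same order on every run, which also keeps the Sphere continue-run tests stable.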
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/jade/run/benchmark.py`:
- Around line 181-199: The JOB_SUBMISSION branch is incorrectly guarded by
continue_run so normal BenchmarkRun runs never call _submit_job; remove the
continue_run check so that when env_vars.run_mode == RunMode.JOB_SUBMISSION you
always call self._submit_job(...) for the regular path, and preserve the
existing test behavior (when test is True call _submit_job(..., test=test) and
return the command). Update the conditional logic around env_vars.run_mode,
_submit_job, continue_run, and test (used by SingleRun.run) to ensure job
submission occurs for non-test runs as well.
- Around line 415-418: The constructor currently sets self.name from the
template folder basename which is wrong for MCNP continues; instead locate the
actual MCNP input file in template_folder (e.g., search for a file with
extension ".i" or case-insensitive variants), set self.inp to that input Path,
and derive self.name from self.inp.stem so generated continue commands target
the real input file; update the __init__ logic that uses self.template_folder,
self.inp, and self.name accordingly.

In `@tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_m101/metadata.json`:
- Around line 2-3: The metadata field benchmark_name in the Sphere fixture is
incorrect (currently "Oktavian"); update the benchmark_name value in the
Sphere_m101 fixture's metadata.json so it correctly identifies the Sphere
benchmark (replace "Oktavian" with the expected "Sphere" or the project's
canonical benchmark name), leaving other fields like benchmark_version untouched
to ensure the fixture matches its directory/intent.

---

Outside diff comments:
In `@src/jade/run/benchmark.py`:
- Around line 482-483: The loop over os.listdir(benchmark_root) yields
non-deterministic ordering; change the iteration to use a stable, deterministic
order (e.g., iterate over sorted(os.listdir(benchmark_root)) or otherwise sort
the entries) so that single_run_folder and derived single_run_root are processed
consistently and the aggregated global-job script order is stable; update the
for loop that currently reads "for single_run_folder in
os.listdir(benchmark_root):" to iterate over a sorted list instead.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 661af19f-e152-43a8-90c1-fb3d5070dbeb

📥 Commits

Reviewing files that changed from the base of the PR and between ba664bd and 4960d76.

📒 Files selected for processing (8)
  • src/jade/app/app.py
  • src/jade/run/benchmark.py
  • tests/app/test_app.py
  • tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_dummy1/Sphere_dummy1.i
  • tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_dummy1/metadata.json
  • tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_m101/Sphere_M101_.i
  • tests/dummy_structure/simulations/_mcnp_-_continue sphere_/Sphere/Sphere_m101/metadata.json
  • tests/run/test_benchmark.py

Comment thread src/jade/run/benchmark.py
Comment thread src/jade/run/benchmark.py
@coderabbitai bot (Contributor) left a comment

♻️ Duplicate comments (1)
src/jade/run/benchmark.py (1)

412-416: ⚠️ Potential issue | 🟠 Major

Derive MockInput.name from the real input file, not the folder.

This still assumes the run directory name matches the MCNP/D1S input stem. For Sphere-style generated cases that is not guaranteed, so continue can still emit i=<folder>.i for a file that does not exist.

🛠️ Suggested fix
 class MockInput:
@@
     def __init__(self, template_folder: PathLike, lib: Library):
-        self.template_folder = template_folder
-        self.inp = None
-        self.name = os.path.basename(template_folder)
+        folder = Path(template_folder)
+        self.template_folder = folder
+        input_files = sorted([*folder.glob("*.i"), *folder.glob("*.I")])
+        self.inp = input_files[0] if input_files else None
+        self.name = self.inp.stem if self.inp is not None else folder.name
         self.lib = lib
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/jade/run/benchmark.py` around lines 412 - 416, The constructor for
MockInput sets self.name to the template folder basename which can be wrong for
Sphere-style cases; change MockInput.__init__ to derive self.name from the
actual input file stem (use Path(self.inp).stem or locate the MCNP/D1S input
file inside template_folder and use its stem) instead of
os.path.basename(template_folder), and ensure self.inp is set/checked before
deriving the name so continue emits the correct i=<file>.i value.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/jade/run/benchmark.py`:
- Around line 412-416: The constructor for MockInput sets self.name to the
template folder basename which can be wrong for Sphere-style cases; change
MockInput.__init__ to derive self.name from the actual input file stem (use
Path(self.inp).stem or locate the MCNP/D1S input file inside template_folder and
use its stem) instead of os.path.basename(template_folder), and ensure self.inp
is set/checked before deriving the name so continue emits the correct i=<file>.i
value.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 4d331242-a678-4c8d-a930-77c1b794d8f3

📥 Commits

Reviewing files that changed from the base of the PR and between 4960d76 and 1f5be40.

📒 Files selected for processing (1)
  • src/jade/run/benchmark.py

@dodu94 dodu94 requested a review from alexvalentine94 April 29, 2026 15:05
@dodu94 (Member, Author) commented Apr 29, 2026

Missing coverage is related to edge cases that are not worrisome.


Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[BUG] - OpenMC Sphere continue run issue

1 participant