From abbfe4458e527daacf44f9c84020eadde9a0cc52 Mon Sep 17 00:00:00 2001
From: Kyle Mandli
Date: Mon, 16 Mar 2026 11:48:02 -0400
Subject: [PATCH 1/8] Add new testing stub

---
 doc/testing.rst | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/doc/testing.rst b/doc/testing.rst
index 3f847e1..55fdce3 100644
--- a/doc/testing.rst
+++ b/doc/testing.rst
@@ -5,6 +5,33 @@
 Testing your installation
 ===================================================================
 
+Fortran Regression Tests
+-------------------------
+The Fortran code in Clawpack has a suite of regression tests that can be run to
+check that the code is working properly. In each of the Fortran packages there
+are a series of regression tests alongside some of the examples as well as some
+tests for Python functionality. All these tests can be run by going to the base
+directory of the corresponding package and running::
+
+    pytest
+
+The most useful option for debugging a failing test is to use::
+
+    pytest --basetemp=./test_output
+
+which will save the output from the test into the directory `test_output`. The
+package `pytest` also has a number of additional debugging options that you can
+use. See the `pytest documentation `_ for more
+details.
+
+Adding Regression Tests
+-----------------------
+
+:TODO: add instructions for adding regression tests here.
+
+Old Testing
+===========
+
 PyClaw
 ------
 If you downloaded Clawpack manually, you can test your :ref:`pyclaw`
@@ -46,4 +73,3 @@ There are similar `tests` subdirectories of `$CLAW/amrclaw` and
 More extensive tests can be performed by running all of the examples in
 the `examples` directory and comparing the resulting plots against those
 archived in the :ref:`galleries`. See also :ref:`regression`.
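The save-then-compare workflow this patch documents can be sketched in plain Python. The helper names below (`load_frame`, `check_frame`) are hypothetical stand-ins for the real Clawpack test runners, shown only to illustrate what a frame-based regression check does:

```python
from pathlib import Path
import tempfile


def load_frame(path: Path) -> list:
    """Read one float per whitespace-separated token from a saved frame file."""
    return [float(tok) for tok in path.read_text().split()]


def check_frame(computed, expected, rtol=1e-12):
    """Compare a computed frame against stored regression data."""
    if len(computed) != len(expected):
        return False
    return all(abs(c - e) <= rtol * max(abs(e), 1.0)
               for c, e in zip(computed, expected))


def test_frame_matches(tmp_path: Path):
    # In a real run the Fortran executable writes this file; we fake it here.
    frame = tmp_path / "frame0001.txt"
    frame.write_text("0.5\n1.0\n")
    assert check_frame(load_frame(frame), [0.5, 1.0])


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        test_frame_matches(Path(d))
```

Under pytest, `tmp_path` is supplied automatically as a fresh per-test directory, which is the same mechanism the `--basetemp` option redirects.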
-
From 3d638d4a02d971c4cb0c61031ea249f557c525f1 Mon Sep 17 00:00:00 2001
From: Kyle Mandli
Date: Mon, 16 Mar 2026 12:10:55 -0400
Subject: [PATCH 2/8] Flesh out new testing with pytest and add a new test example

---
 doc/testing.rst | 114 ++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 111 insertions(+), 3 deletions(-)

diff --git a/doc/testing.rst b/doc/testing.rst
index 55fdce3..bdbffe3 100644
--- a/doc/testing.rst
+++ b/doc/testing.rst
@@ -5,8 +5,21 @@
 Testing your installation
 ===================================================================
 
+PyClaw Tests
+------------
+
+You can exercise all the tests in PyClaw by running the following commands from
+the base of the `pyclaw` directory:
+
+.. code-block:: console
+
+    cd $CLAW/pyclaw
+    pytest
+
+
 Fortran Regression Tests
 -------------------------
+
 The Fortran code in Clawpack has a suite of regression tests that can be run to
 check that the code is working properly. In each of the Fortran packages there
 are a series of regression tests alongside some of the examples as well as some
 tests for Python functionality. All these tests can be run by going to the base
 directory of the corresponding package and running::
@@ -27,10 +40,105 @@
 details.
 
 Adding Regression Tests
 -----------------------
 
-:TODO: add instructions for adding regression tests here.
+If you want to add a new regression test using the new `pytest` framework, you can follow along with this example for the acoustics_1d_example1 test. If something more complicated is needed, take a look at the other tests available in the packages, or reach out to the developers for help.
+
+Adding a Test for `acoustics_1d_example1`
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+1. Create a new file in the `examples/acoustics_1d_example1` directory called `test_acoustics_1d_example1.py` by:
+
+.. code-block:: console
+
+    touch examples/acoustics_1d_example1/test_acoustics_1d_example1.py
+
+and place the following content in it:
+
+.. code-block:: python
+    :linenos:
+
+    #!/usr/bin/env python
+
+    from pathlib import Path
+    import pytest
+
+    import clawpack.classic.test as test
+
+
+    def test_acoustics_1d_example1(tmp_path: Path, save: bool):
+        runner = test.ClassicTestRunner(
+            tmp_path=tmp_path,
+            test_path=Path(__file__).parent,
+        )
+
+        # Set data using default setrun.py file in local directory. If you want
+        # to override this then hand it another setrun.py
+        runner.set_data()
+
+        runner.rundata.clawdata.num_output_times = 2
+        runner.rundata.clawdata.tfinal = 1.0
+        runner.rundata.clawdata.output_t0 = False
+
+        runner.write_data()
+
+        # Build xclaw and execute code
+        runner.executable_name = "xclaw"
+        runner.build_executable()
+        runner.run_code()
+
+        # Check t=0.5 and t=1.0, we are looking at both the pressure and velocity
+        # in this test so need to specify those indices
+        runner.check_frame(1, indices=(0, 1), save=save)
+        runner.check_frame(2, indices=(0, 1), save=save)
+
+
+    if __name__ == "__main__":
+        pytest.main([__file__])
+
+This file can be executed directly from the command line. The middle section modifies what is in the local `setrun.py` file to make the test small and deterministic, and the final section runs the test when the file is executed directly. You can run this test with:
+
+.. code-block:: console
+
+    python test_acoustics_1d_example1.py
+
+or with:
+
+.. code-block:: console
+
+    pytest test_acoustics_1d_example1.py
+
+
+2. We now need to generate the expected results for this test. To do this, run the test with the `--save` option:
+
+.. code-block:: console
+
+    pytest test_acoustics_1d_example1.py --save
+
+This will run the test and save the results in a directory called `regression_data` in the same directory as the test. This directory contains the expected results for the test, which will be used to compare against future runs of the test. Note that if you would like to see the full output of the test, you can add `--basetemp=./test_output` to the command above, which will save the output from the test into the directory `test_output`.
+
+
+3. Now you can run the test without the `--save` option to check that it is working properly. If the test passes, you should see output similar to this:
+
+.. code-block:: console
+
+    ============================= test session starts ==============================
+    platform darwin -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
+    rootdir: /path/to/clawpack/classic/examples/acoustics_1d_example1
+    collected 1 item
+
+    test_acoustics_1d_example1.py .                                          [100%]
+
+    ============================== 1 passed in 5.00s ===============================
+
+To complete the test you will want to add the test script `test_acoustics_1d_example1.py` and the regression data to the repository.
+
+==============
+Legacy Testing
+==============
+
+Tests via `nose` are no longer supported, but if you have an older version of
+Clawpack installed and `nosetests` available, you can still run the old tests.
+These are not as comprehensive as the new `pytest` tests, but they can be useful
+for checking that your installation is working properly.
-Old Testing
-===========
 
 PyClaw
 ------

From 3f2a0273a723f39ad949e65e4fedd9b746ab40c1 Mon Sep 17 00:00:00 2001
From: Kyle Mandli
Date: Mon, 16 Mar 2026 12:26:11 -0400
Subject: [PATCH 3/8] Add testing refactor doc

---
 doc/testing_refactor.rst | 98 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 98 insertions(+)
 create mode 100644 doc/testing_refactor.rst

diff --git a/doc/testing_refactor.rst b/doc/testing_refactor.rst
new file mode 100644
index 0000000..a159399
--- /dev/null
+++ b/doc/testing_refactor.rst
@@ -0,0 +1,98 @@
+=========================
+Clawpack Testing Refactor
+=========================
+
+Overview
+--------
+
+Clawpack is moving to a pytest-based testing model built around example-local
+regression tests and shared test infrastructure in clawutil.
+
+This refactor is motivated by the need to:
+
+  - simplify test authoring
+  - reduce custom test scaffolding
+  - better match pytest conventions
+  - improve CI integration
+  - support incremental migration from the legacy regression framework
+
+Current reference implementations include:
+
+  - https://github.com/clawpack/clawutil/issues/187
+  - https://github.com/clawpack/classic/issues/96
+  - https://github.com/clawpack/amrclaw/issues/310
+
+Design decisions
+----------------
+
+1. **Pytest is the system-wide test runner** - All new tests should be written
+   for pytest.
+2. **Example-based regression tests are the primary solver test model** - For
+   solver-heavy code, the canonical test is a small example that:
+
+   - writes input data
+   - builds using the example Makefile
+   - runs in a temporary directory
+   - compares output to saved regression data
+
+3. **Shared testing infrastructure lives in clawutil** - Common runner logic and
+   helpers should be centralized rather than duplicated across repositories.
+4. **Tests should use the real build workflow** - Tests should exercise the same
+   example Makefile workflow that users rely on.
+5. **Fresh builds should be explicit** - Tests should request a fresh build
+   through the runner or build target, rather than relying on import-time
+   cleanup or hidden state mutation.
+6. **Legacy test infrastructure is transitional** - Existing legacy tests may
+   remain temporarily, but new tests should follow the pytest model and old
+   tests should be migrated over time.
+
+Test layout
+-----------
+
+A typical migrated example should contain::
+
+    example_name/
+        Makefile
+        setrun.py
+        test_example_name.py
+        regression_data/
+            frame0001.txt
+            frame0002.txt
+
+Typical test workflow
+---------------------
+
+A typical example test:
+
+1. creates or modifies rundata
+2. writes data files
+3. builds the executable
+4. runs in tmp_path
+5. compares selected frames or diagnostics
+
+Regression data policy
+----------------------
+
+Regression data should be:
+
+  - small
+  - reviewable in a PR
+  - deterministic
+  - specific to the example
+
+Use `--save` to regenerate baselines intentionally.
+
+CI policy
+---------
+
+CI should:
+
+  - run pytest directly
+  - store test artifacts in a predictable directory
+  - prefer fast, stable examples in PR checks
+  - allow broader coverage in scheduled or extended workflows
+
+Migration guidance
+------------------
+
+When migrating an old test:
+
+  - prefer example-local placement
+  - move shared behavior into clawutil
+  - remove hidden setup side effects
+  - keep the test close to the user-facing workflow
+
+Reference example
+-----------------
+`$CLAW/classic/examples/acoustics_1d_heterogeneous/test_acoustics_1d_heterogeneous.py` is intended to serve as an example.
From 48fcdcec34ae8be8a0386f3e156951b908a7eb76 Mon Sep 17 00:00:00 2001
From: Kyle Mandli
Date: Mon, 16 Mar 2026 12:27:08 -0400
Subject: [PATCH 4/8] Bump CC-BY date

---
 doc/conf.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/doc/conf.py b/doc/conf.py
index 4629497..ba3a364 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -75,7 +75,7 @@
 
 # General information about the project.
 project = u'Clawpack'
-copyright = u'CC-BY 2024, The Clawpack Development Team'
+copyright = u'CC-BY 2026, The Clawpack Development Team'
 
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the

From e302938080eead643438382f27446158cf32e45d Mon Sep 17 00:00:00 2001
From: Kyle Mandli
Date: Mon, 16 Mar 2026 13:54:16 -0400
Subject: [PATCH 5/8] Correct argument naming error

---
 doc/testing.rst          | 19 ++++++++-----------
 doc/testing_refactor.rst |  3 ++-
 2 files changed, 10 insertions(+), 12 deletions(-)

diff --git a/doc/testing.rst b/doc/testing.rst
index bdbffe3..5593e9d 100644
--- a/doc/testing.rst
+++ b/doc/testing.rst
@@ -24,11 +24,15 @@
 The Fortran code in Clawpack has a suite of regression tests that can be run to
 check that the code is working properly. In each of the Fortran packages there
 are a series of regression tests alongside some of the examples as well as some
 tests for Python functionality. All these tests can be run by going to the base
-directory of the corresponding package and running::
+directory of the corresponding package and running:
+
+.. code-block:: console
 
     pytest
 
-The most useful option for debugging a failing test is to use::
+The most useful option for debugging a failing test is to use:
+
+.. code-block:: console
 
     pytest --basetemp=./test_output
 
@@ -65,13 +69,9 @@
 
     def test_acoustics_1d_example1(tmp_path: Path, save: bool):
-        runner = test.ClassicTestRunner(
-            tmp_path=tmp_path,
-            test_path=Path(__file__).parent,
-        )
+        runner = test.ClassicTestRunner(tmp_path,
+                                        test_path=Path(__file__).parent)
 
-        # Set data using default setrun.py file in local directory. If you want
-        # to override this then hand it another setrun.py
         runner.set_data()
 
         runner.rundata.clawdata.num_output_times = 2
@@ -80,13 +80,10 @@
 
         runner.write_data()
 
-        # Build xclaw and execute code
         runner.executable_name = "xclaw"
         runner.build_executable()
         runner.run_code()
 
-        # Check t=0.5 and t=1.0, we are looking at both the pressure and velocity
-        # in this test so need to specify those indices
        runner.check_frame(1, indices=(0, 1), save=save)
        runner.check_frame(2, indices=(0, 1), save=save)

diff --git a/doc/testing_refactor.rst b/doc/testing_refactor.rst
index a159399..486690f 100644
--- a/doc/testing_refactor.rst
+++ b/doc/testing_refactor.rst
@@ -95,4 +95,5 @@
 Reference example
 -----------------
-`$CLAW/classic/examples/acoustics_1d_heterogeneous/test_acoustics_1d_heterogeneous.py` is intended to serve as an example.
+`$CLAW/classic/examples/acoustics_1d_heterogeneous/test_acoustics_1d_heterogeneous.py`
+is intended to serve as an example setup.
From 34f618287569172de61c2e7a8075ce9b1eda14fa Mon Sep 17 00:00:00 2001
From: Kyle Mandli
Date: Tue, 17 Mar 2026 11:33:32 -0400
Subject: [PATCH 6/8] Add mention of compiler flag issues

---
 doc/testing_refactor.rst | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/doc/testing_refactor.rst b/doc/testing_refactor.rst
index 486690f..06220a6 100644
--- a/doc/testing_refactor.rst
+++ b/doc/testing_refactor.rst
@@ -84,6 +84,16 @@
   - prefer fast, stable examples in PR checks
   - allow broader coverage in scheduled or extended workflows
 
+Compiler Flags and Numerical Reproducibility
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Regression tests are sensitive to floating-point roundoff and compiler
+optimizations. To ensure stable and reproducible results across platforms,
+CI uses conservative optimization flags (e.g., `-O1`).
+
+Higher optimization levels may produce small numerical differences and are
+not currently used for regression validation.
+
 Migration guidance
 ------------------

From 0232cd95123cb428f429c968ab6da38b0cbd51b6 Mon Sep 17 00:00:00 2001
From: Kyle Mandli
Date: Tue, 17 Mar 2026 12:32:48 -0400
Subject: [PATCH 7/8] Add instructions for using alternative setrun and plotting test output

---
 doc/testing.rst | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/doc/testing.rst b/doc/testing.rst
index 5593e9d..25d7ee8 100644
--- a/doc/testing.rst
+++ b/doc/testing.rst
@@ -41,10 +41,31 @@
 package `pytest` also has a number of additional debugging options that you can
 use. See the `pytest documentation `_ for more
 details.
 
+If you would like to use a different default `setrun.py` file for testing you
+can modify the test script to use a different `setrun.py` file.
+
+If you would like to plot the output of a test, you can use the same plotting
+tools that are used for the examples. You can find the output of the test in
+the `test_output` directory if you used the `--basetemp` option above. You can
+then use the plotting tools to plot the output from the test. For example:
+
+.. code-block:: console
+
+    cd $CLAW/classic/examples/acoustics_1d_example1
+    pytest --basetemp=./test_output .
+    python plotclaw.py test_output/test_acoustics_1d_example1/ ./_plots ./setplot.py
+
+which will run the test and save the output into a subdirectory of
+`test_output`. The plotting command will then plot the output from the
+appropriate subdirectory specified.
+
 Adding Regression Tests
 -----------------------
 
-If you want to add a new regression test using the new `pytest` framework, you can follow along with this example for the acoustics_1d_example1 test. If something more complicated is needed, take a look at the other tests available in the packages, or reach out to the developers for help.
+If you want to add a new regression test using the new `pytest` framework, you
+can follow along with this example for the acoustics_1d_example1 test. If
+something more complicated is needed, take a look at the other tests available
+in the packages, or reach out to the developers for help.
 
 Adding a Test for `acoustics_1d_example1`
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

From 511c069a3a11998cc8f20088ce8be6c619aa5d68 Mon Sep 17 00:00:00 2001
From: Kyle Mandli
Date: Tue, 17 Mar 2026 12:49:51 -0400
Subject: [PATCH 8/8] Add some hints for testing

---
 doc/testing.rst | 26 +++++++++++++++++---------
 1 file changed, 17 insertions(+), 9 deletions(-)

diff --git a/doc/testing.rst b/doc/testing.rst
index 25d7ee8..c634896 100644
--- a/doc/testing.rst
+++ b/doc/testing.rst
@@ -41,13 +41,23 @@
 use. See the `pytest documentation `_ for more
 details.
 
-If you would like to use a different default `setrun.py` file for testing you
-can modify the test script to use a different `setrun.py` file.
+Hints
+^^^^^
+- Often the output from a failing test will overwhelm the console.
+  In this case, you can use the following to pipe the output into the file
+  `log.txt` and look at it directly:
 
-If you would like to plot the output of a test, you can use the same plotting
-tools that are used for the examples. You can find the output of the test in
-the `test_output` directory if you used the `--basetemp` option above. You can
-then use the plotting tools to plot the output from the test. For example:
+.. code-block:: console
+
+    pytest --basetemp=./test_output > log.txt 2>&1
+
+- If you would like to use a different default `setrun.py` file for testing you
+  can modify the test script to use a different `setrun.py` file.
+- If you would like to plot the output of a test, you can use the same plotting
+  tools that are used for the examples. You can find the output of the test in
+  the `test_output` directory if you used the `--basetemp` option above. You
+  can then use the plotting tools to plot the output from the test. For
+  example, this code will run the test and save the output into a subdirectory
+  of `test_output`. The plotting command will then plot the output from the
+  appropriate subdirectory specified:
 
 .. code-block:: console
 
@@ -55,9 +65,7 @@
     cd $CLAW/classic/examples/acoustics_1d_example1
     pytest --basetemp=./test_output .
     python plotclaw.py test_output/test_acoustics_1d_example1/ ./_plots ./setplot.py
 
-which will run the test and save the output into a subdirectory of
-`test_output`. The plotting command will then plot the output from the
-appropriate subdirectory specified.
+