2 changes: 1 addition & 1 deletion doc/conf.py
@@ -75,7 +75,7 @@

# General information about the project.
project = u'Clawpack'
copyright = u'CC-BY 2024, The Clawpack Development Team'
copyright = u'CC-BY 2026, The Clawpack Development Team'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
162 changes: 161 additions & 1 deletion doc/testing.rst
@@ -5,6 +5,167 @@
Testing your installation
===================================================================

PyClaw Tests
------------

You can run all of the tests in PyClaw by executing the following commands from
the base of the `pyclaw` directory:

.. code-block:: console

cd $CLAW/pyclaw
pytest


Fortran Regression Tests
-------------------------

The Fortran code in Clawpack has a suite of regression tests that can be run to
check that the code is working properly. In each of the Fortran packages there
is a series of regression tests alongside some of the examples, as well as some
tests of Python functionality. All of these tests can be run by going to the
base directory of the corresponding package and running:

.. code-block:: console

pytest

The most useful option for debugging a failing test is to use:

.. code-block:: console

pytest --basetemp=./test_output

which will save the output from the test into the directory `test_output`. The
package `pytest` also has a number of additional debugging options that you can
use. See the `pytest documentation <https://docs.pytest.org/>`_ for more
details.
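
Beyond `--basetemp`, a few general-purpose `pytest` options (standard pytest flags, not specific to Clawpack) are often helpful when isolating a failure:

.. code-block:: console

    pytest -x               # stop at the first failing test
    pytest -k acoustics     # run only tests whose names match "acoustics"
    pytest --lf             # re-run only the tests that failed last time
    pytest -s               # show print output as the tests run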

Hints
^^^^^
- Often the output from a failing test will overwhelm the console. In this case, you can use the following to pipe the output into the file `log.txt` and inspect it directly:

.. code-block:: console

pytest --basetemp=./test_output > log.txt 2>&1

- If you would like to test with a different default `setrun.py` file, you can
  modify the test script to point to it.
- If you would like to plot the output of a test, you can use the same plotting
  tools that are used for the examples. If you used the `--basetemp` option
  above, the output of the test is in the `test_output` directory. For example,
  the following commands run the test, save the output into a subdirectory of
  `test_output`, and then plot the output from that subdirectory:

.. code-block:: console

cd $CLAW/classic/examples/acoustics_1d_example1
pytest --basetemp=./test_output .
python plotclaw.py test_output/test_acoustics_1d_example1/ ./_plots ./setplot.py



Adding Regression Tests
-----------------------

If you want to add a new regression test using the new `pytest` framework, you
can follow along with this example for the acoustics_1d_example1 test. If
something more complicated is needed, take a look at the other tests available
in the packages, or reach out to the developers for help.

Adding a Test for `acoustics_1d_example1`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

1. Create a new file in the `examples/acoustics_1d_example1` directory called `test_acoustics_1d_example1.py` by:

.. code-block:: console

touch examples/acoustics_1d_example1/test_acoustics_1d_example1.py

and place the following content in it:

.. code-block:: python
:linenos:

#!/usr/bin/env python

from pathlib import Path
import pytest

import clawpack.classic.test as test


def test_acoustics_1d_example1(tmp_path: Path, save: bool):
runner = test.ClassicTestRunner(tmp_path,
test_path=Path(__file__).parent)

runner.set_data()

runner.rundata.clawdata.num_output_times = 2
runner.rundata.clawdata.tfinal = 1.0
runner.rundata.clawdata.output_t0 = False

runner.write_data()

runner.executable_name = "xclaw"
runner.build_executable()
runner.run_code()

runner.check_frame(1, indices=(0, 1), save=save)
runner.check_frame(2, indices=(0, 1), save=save)

    if __name__ == "__main__":
pytest.main([__file__])

The middle section of this file overrides values in the local `setrun.py` file to make the test small and deterministic, and the final section makes the file executable directly from the command line. You can run this test with:

.. code-block:: console

python test_acoustics_1d_example1.py

or with:

.. code-block:: console

pytest test_acoustics_1d_example1.py


2. We now need to generate the expected results for this test. To do this, run the test with the `--save` option:

.. code-block:: console

pytest test_acoustics_1d_example1.py --save

This will run the test and save the results in a directory called `regression_data` in the same directory as the test. These files contain the expected results for the test and will be used for comparison against future runs. Note that if you would like to see the full output of the test, you can add `--basetemp=./test_output` to the command above, which will save the output from the test into the directory `test_output`.
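
The save/compare cycle behind the `--save` option can be illustrated with a small, self-contained sketch. This is a hypothetical helper for illustration only, not the actual `ClassicTestRunner` implementation:

.. code-block:: python

    import json
    import math
    from pathlib import Path

    def check_values(values, baseline_file, save=False, rel_tol=1e-8):
        """Compare a list of numbers against a saved baseline file.

        With save=True, (re)write the baseline instead of comparing,
        mirroring what the --save option does for a real test.
        """
        path = Path(baseline_file)
        if save:
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_text(json.dumps(values))
            return True
        expected = json.loads(path.read_text())
        return len(values) == len(expected) and all(
            math.isclose(a, b, rel_tol=rel_tol)
            for a, b in zip(values, expected))

A first run with ``save=True`` writes the baseline; later runs compare against it and fail if any value drifts beyond the tolerance.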


3. Now you can run the test without the `--save` option to check that it is working properly. If the test passes, you should see output similar to this:

.. code-block:: console

============================= test session starts ==============================
platform darwin -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0
rootdir: /path/to/clawpack/classic/examples/acoustics_1d_example1
collected 1 item

test_acoustics_1d_example1.py . [100%]

============================== 1 passed in 5.00s ===============================

To complete the test, add both the test script `test_acoustics_1d_example1.py` and the regression data to the repository.

Legacy Testing
--------------

Tests via `nose` are no longer supported, but if you have an older version of
Clawpack installed and `nosetests` available, you can still run the old tests.
These are not as comprehensive as the new `pytest` tests, but they can be useful
for checking that your installation is working properly.


PyClaw
^^^^^^
If you downloaded Clawpack manually, you can test your :ref:`pyclaw`
@@ -46,4 +207,3 @@ There are similar `tests` subdirectories of `$CLAW/amrclaw` and
More extensive tests can be performed by running all of the examples in the
`examples` directory and comparing the resulting plots against those
archived in the :ref:`galleries`. See also :ref:`regression`.

109 changes: 109 additions & 0 deletions doc/testing_refactor.rst
@@ -0,0 +1,109 @@
=========================
Clawpack Testing Refactor
=========================

Overview
--------

Clawpack is moving to a pytest-based testing model built around example-local regression tests and shared test infrastructure in clawutil.

This refactor is motivated by the need to:

- simplify test authoring
- reduce custom test scaffolding
- better match pytest conventions
- improve CI integration
- support incremental migration from the legacy regression framework

Current reference implementations include:
- https://github.com/clawpack/clawutil/issues/187
- https://github.com/clawpack/classic/issues/96
- https://github.com/clawpack/amrclaw/issues/310

Design decisions
----------------

1. **Pytest is the system-wide test runner** - All new tests should be written
for pytest.
2. **Example-based regression tests are the primary solver test model** - For
   solver-heavy code, the canonical test is a small example that:

   - writes input data
   - builds using the example Makefile
   - runs in a temporary directory
   - compares output to saved regression data

3. **Shared testing infrastructure lives in clawutil** - Common runner logic and
helpers should be centralized rather than duplicated across repositories.
4. **Tests should use the real build workflow** - Tests should exercise the same
example Makefile workflow that users rely on.
5. **Fresh builds should be explicit** - Tests should request a fresh build
through the runner or build target, rather than relying on import-time
cleanup or hidden state mutation.
6. **Legacy test infrastructure is transitional** - Existing legacy tests may
remain temporarily, but new tests should follow the pytest model and old
tests should be migrated over time.

Test layout
-----------

A typical migrated example should contain::

example_name/
Makefile
setrun.py
test_example_name.py
regression_data/
frame0001.txt
frame0002.txt

Typical test workflow
---------------------

A typical example test:

1. creates or modifies rundata
2. writes data files
3. builds the executable
4. runs in tmp_path
5. compares selected frames or diagnostics
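
The numbered steps above can also be carried out by hand from an example directory. The following sketch uses the standard Clawpack Makefile targets and assumes the example's `Makefile` and `setrun.py` are present:

.. code-block:: console

    cd $CLAW/classic/examples/acoustics_1d_example1
    python setrun.py    # steps 1-2: create rundata and write the *.data files
    make .exe           # step 3: build the example executable
    make output         # step 4: run the code (in place, rather than in tmp_path)
    # step 5: compare the frames in _output against regression_data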

Regression data policy
----------------------

Regression data should be:

- small
- reviewable in a PR
- deterministic
- specific to the example

Use `--save` to regenerate baselines intentionally.

CI policy
---------

CI should:

- run pytest directly
- store test artifacts in a predictable directory
- prefer fast, stable examples in PR checks
- allow broader coverage in scheduled or extended workflows

Compiler Flags and Numerical Reproducibility
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Regression tests are sensitive to floating-point roundoff and compiler
optimizations. To ensure stable and reproducible results across platforms,
CI uses conservative optimization flags (e.g., `-O1`).

Higher optimization levels may produce small numerical differences and are
not currently used for regression validation.
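
For example, a CI job might pin conservative flags when building a test executable. `FFLAGS` is the variable the Clawpack Makefiles consult, though the exact flags shown here are illustrative:

.. code-block:: console

    FFLAGS="-O1" make .exe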

Migration guidance
------------------

When migrating an old test:

- prefer example-local placement
- move shared behavior into clawutil
- remove hidden setup side effects
- keep the test close to the user-facing workflow

Reference example
-----------------
`$CLAW/classic/examples/acoustics_1d_heterogeneous/test_acoustics_1d_heterogeneous.py`
is intended to serve as an example setup.