add known issues in statistics guide #2383

Open
loretet wants to merge 5 commits into develop from doc/add-stats-issues

Conversation


@loretet loretet commented Mar 16, 2026

Added a warning to the "Statistics guide" section of the documentation to include two known issues, #2278 and #2378.

Neither issue is a proper bug, and both can easily be worked around, but I think the user should still be made aware of their existence (otherwise either issue can silently skew the user's analysis, more or less significantly).

Collaborator

@adperezm adperezm left a comment

Thanks for putting this up!

Isn't it best to simply recommend always using weighted averages when processing any of the statistics from Neko? Is there any case where that would fail?

I would say that when post-processing the data, it is important to inspect the results. However, I do not see the harm in giving this information.

On the other hand, I am still not sure that these things are actually read.

Collaborator

@vbaconnet vbaconnet left a comment


Thanks for opening the PR!

I agree with Adalberto's comments.

The two "issues" you have raised only appear if you use an arithmetic mean for the final averaging. Therefore I would suggest that we:

  • recommend the user use weighted averages exclusively, as is done in the provided post-processing script average_fields_in_time
  • inform the user that a zero-filled stats file will always be dumped at (or slightly after) start_time so they know what to expect.
  • Maybe refer to the documentation for the case file to remind them about the behavior of output_at_end (also not an issue if using weighted averaging).
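The weighted-averaging recommendation above can be sketched as follows (a minimal illustration, assuming each statistics sample is the mean over its own averaging interval and is weighted by that interval's length; the function name and data layout are hypothetical, not Neko's actual API or the `average_fields_in_time` script itself):

```python
def weighted_time_average(samples, interval_lengths):
    """Combine per-interval mean values into one time-averaged value.

    Zero-length intervals (e.g. the zero-filled sample written at
    start_time, or a duplicate sample dumped because of output_at_end)
    carry zero weight, so they cannot skew the result the way a plain
    arithmetic mean over all files would.
    """
    total_time = sum(interval_lengths)
    if total_time == 0.0:
        raise ValueError("all averaging intervals have zero length")
    return sum(s * w for s, w in zip(samples, interval_lengths)) / total_time


# Example: the zero-filled first sample (weight 0) is harmlessly ignored,
# whereas an arithmetic mean over the three samples would give 2.0.
samples = [0.0, 2.0, 4.0]   # per-interval mean of some statistic
weights = [0.0, 1.0, 1.0]   # lengths of the averaging intervals
print(weighted_time_average(samples, weights))  # -> 3.0
```

This is also why the two reported "issues" disappear under weighted averaging: the spurious samples simply receive zero weight.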

@vbaconnet
Collaborator

Hi, I pushed some commits just to reformat a bit to the 80-column limit.

the following two characteristics of the `stats0` files:

1. If one computes statistics from time 0 (`"start_time": 0.0`), the first
entry in the `stats0` file will be zero-filled (both for scalar and fluid
Collaborator


To make it more general: the name "stats0" is specific to the CSV format.

Suggested change
entry in the `stats0` file will be zero-filled (both for scalar and fluid
sample will be zero-filled (both for scalar and fluid

@vbaconnet
Collaborator

Thanks for making all the changes! I added two comments, but they are quite minor; up to you whether you want to address them or not.

vbaconnet
vbaconnet approved these changes Mar 23, 2026
Co-authored-by: Victor Baconnet <baconnet@kth.se>

Labels

documentation Improvements or additions to documentation

Projects

None yet

Development

Successfully merging this pull request may close these issues.

In fluid/scalar statistics time 0 is always filled with zeros
Fluid/scalar statistics output filled with zeros at end_time

3 participants