This project provides an implementation of the Noise2Inverse (N2I)
framework for self-supervised denoising of CT reconstructions — no
clean ground-truth images required. Two convolution modes are
available, selected with --mode:
| Mode | Flag | Network | Best for |
|---|---|---|---|
| 2.5D (default) | `--mode 2.5d` | 2D U-Net, stacks N adjacent slices as channels | General synchrotron CT; fast, memory-efficient |
| 3D | `--mode 3d` | Full 3D U-Net with skip connections | Coherent X-ray / XNH data; removes structured 3D noise and ring artifacts |
**2.5D mode.** Uses a lightweight 2D U-Net (no skip connections, GroupNorm, LeakyReLU) that takes a stack of adjacent axial slices as input channels. This suppresses ring and streak artifacts while remaining fast and memory-efficient, making it suitable for large synchrotron CT datasets.

**3D mode.** Uses a full 3D U-Net with skip connections (Laugros et al., bioRxiv 2025) that operates on cubic sub-volumes. By processing all three spatial dimensions simultaneously, it can remove structured 3D noise, such as probe-object mixing artifacts in X-ray holographic nanotomography (XNH), that slice-by-slice processing cannot reach. 3D mode requires more GPU memory and is best suited for coherent X-ray microscopy data.
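The 2.5D input construction can be sketched in a few lines. This is a hypothetical illustration (the project's actual dataset classes live in `data.py`) of how N adjacent axial slices become the channel dimension of a 2D network input, with edge slices clamped:

```python
import numpy as np

def make_25d_input(volume: np.ndarray, z: int, n_slices: int = 5) -> np.ndarray:
    """Stack n_slices axial slices centered on z as channels (C, H, W).

    Indices past the volume edges are clamped so the stack always has
    exactly n_slices channels.
    """
    half = n_slices // 2
    idx = np.clip(np.arange(z - half, z + half + 1), 0, volume.shape[0] - 1)
    return volume[idx]  # shape: (n_slices, H, W), fed to a 2D U-Net

vol = np.random.rand(100, 64, 64).astype(np.float32)
stack = make_25d_input(vol, z=0)  # near the edge, slice 0 is repeated
print(stack.shape)                # (5, 64, 64)
```

Slice clamping at the volume boundaries is one common convention; reflection padding would work equally well here.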
Create the conda environment:

```bash
git clone https://github.com/AISDC/Noise2Inverse360 denoise
cd denoise
conda env create -f envs/denoise_environment.yml
conda activate denoise
pip install .
```

Dependencies include:
- albumentations (data augmentation)
- pytorch >= 2.0 (with CUDA support)
- tifffile
- tqdm
- matplotlib
- scikit-image
- scipy
- pyyaml
All output (training results, inference results, trained models) is saved inside the reconstruction directory. Example layout for user John Smith:

- Sample 1 Directory/
  - Full Reconstruction/ (provided by the user)
  - Sub-Reconstruction 0/ (created by `tomocupy recon_steps` in the tomocupy env)
  - Sub-Reconstruction 1/ (created by `tomocupy recon_steps` in the tomocupy env)
  - config.yaml (created by `denoise prepare` in the denoise env)
  - TrainOutput/ (created by `denoise train`)
  - `<sample>_denoised_slices/`, `<sample>_denoised_volume_2.5d/`, or `<sample>_denoised_volume_3d/` (created by inference)
- Data is saved as TIFF files (`.tif` or `.tiff`).
- Model type/size is consistent across datasets.
- A U-Net without skip connections + LeakyReLU + GroupNorm has proven robust.
- Inference can run while training is still in progress.
- Automatic batch size optimization for A100/V100 GPUs
- Accounts for image size, GPU memory, and model size to reduce OOM errors.
- Support for 2.5D inference with PyTorch
- Flexible plug-and-play workflow across different samples/users
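The batch-size heuristic can be illustrated with a rough sketch. The numbers and the `model_mult` factor below are assumptions for illustration, not the project's actual constants: the idea is to estimate per-sample memory from image size, channel count, and a model-dependent multiplier, then divide the available GPU memory by it.

```python
def estimate_batch_size(free_mem_gb: float, h: int, w: int,
                        channels: int = 5, model_mult: float = 40.0) -> int:
    """Rough per-sample memory: input bytes times a model-dependent
    multiplier covering activations and gradients (assumed factor)."""
    bytes_per_sample = h * w * channels * 4 * model_mult  # float32 input
    batch = int(free_mem_gb * 1e9 * 0.9 // bytes_per_sample)  # 10% headroom
    return max(1, batch)  # never return zero, even on tiny GPUs

print(estimate_batch_size(40.0, 2048, 2048, channels=5))  # e.g. a 40 GB A100
```

The real implementation can additionally query free memory at runtime (e.g. via `torch.cuda.mem_get_info`) instead of taking it as an argument.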
```
denoise/
├── denoise/
│   ├── __init__.py
│   ├── __main__.py   # CLI entry point (prepare / train / slice / volume / register / search)
│   ├── registry.py   # local model registry (~/.denoise/registry/)
│   ├── log.py        # colored logging module
│   ├── train.py      # DDP training loop (2.5D and 3D)
│   ├── slice.py      # single-slice inference (2.5D only)
│   ├── volume.py     # full-volume inference (2.5D and 3D)
│   ├── data.py       # 2.5D dataset classes
│   ├── data3d.py     # 3D dataset classes (cubic patches, 3D stitching)
│   ├── data_utils.py # patch extraction / stitching utilities
│   ├── model.py      # 2.5D U-Net (no skip connections)
│   ├── model3d.py    # 3D U-Net with skip connections (Laugros et al. 2025)
│   ├── loss.py       # LCL loss
│   ├── eval.py       # evaluation metrics
│   ├── tiffs.py      # TIFF I/O utilities
│   └── utils.py      # image utilities
├── docs/             # Sphinx documentation
│   └── source/img/   # workflow and example figures
├── envs/
│   ├── denoise_environment.yml
│   └── requirements.txt
├── baseline_config.yaml
├── LICENSE
├── setup.py
└── VERSION
```
Step 1 — write the config YAML (run in the denoise environment):

```bash
(denoise) $ denoise prepare --file-name /data/sample.h5
```

This writes `sample_rec_config.yaml` (with instrument metadata read from
the HDF5) and prints the two tomocupy recon_steps commands you need to
run next. The generated YAML includes parameters for both 2.5D and 3D
modes — the mode is chosen later at denoise train time with --mode.
> **Note:** `denoise prepare` does not create the sub-reconstruction directories. Due to a NumPy compatibility issue between the `denoise` and `tomocupy` environments, the sub-reconstructions must be created manually by running the printed commands in the `tomocupy` environment.
Step 2 — create the sub-reconstructions (run in the tomocupy environment):

```bash
# even-indexed projections (0, 2, 4, ...)
(tomocupy) $ tomocupy recon_steps \
    --file-name /data/sample.h5 \
    --start-proj 0 --proj-step 2 \
    --out-path-name /data/sample_rec_0 \
    [... same options as the full reconstruction ...]

# odd-indexed projections (1, 3, 5, ...)
(tomocupy) $ tomocupy recon_steps \
    --file-name /data/sample.h5 \
    --start-proj 1 --proj-step 2 \
    --out-path-name /data/sample_rec_1 \
    [... same options as the full reconstruction ...]
```

`denoise prepare` prints the exact paths for `--out-path-name` derived from `--file-name`, so you can copy-paste them directly.
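Why the even/odd split? It is the core idea of Noise2Inverse (Hendriksen et al., 2020): the two sub-reconstructions share the same underlying signal but carry statistically independent noise, so each can serve as a training target for the other. A minimal toy sketch (synthetic data, ignoring patching and augmentation):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.random((64, 64)).astype(np.float32)

# Two reconstructions from disjoint projection subsets: same underlying
# signal, statistically independent noise realizations.
rec_even = signal + 0.1 * rng.standard_normal((64, 64)).astype(np.float32)
rec_odd  = signal + 0.1 * rng.standard_normal((64, 64)).astype(np.float32)

# Noise2Inverse trains f(rec_even) -> rec_odd (and the swapped pair).
# The network cannot predict the target's noise from the input's, so in
# expectation the loss is minimized by predicting the shared clean signal.
inp, target = rec_even, rec_odd
print(inp.shape, target.shape)
```

This is why no clean ground truth is needed: the second noisy half plays the role the clean image would in supervised training.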
Before launching a new training run, denoise train automatically
searches the local model registry (~/.denoise/registry/) for a model
trained under the same instrument conditions. If a match is found, it
is listed and you are asked whether to proceed:
```bash
# 2.5D mode (default — used when no --mode flag or mode stored in YAML)
(denoise) $ denoise train --config /data/sample_rec_config.yaml --gpus 0,1

# 3D mode
(denoise) $ denoise train --config /data/sample_rec_config.yaml --gpus 0,1 --mode 3d
```

```
Registry search found 1 matching model(s):
  [1] 2BM_pink_30keV_FLIROryx_20260219_143000 (9/9 criteria match — 100%)
      beamline: 2-BM | mode: pink | energy: 30.0 keV | ...
      registry path: /home/user/.denoise/registry/2BM_pink_30keV_FLIROryx_...
Train a new model anyway? [y/N]
```

Enter `N` to skip training and use the existing model, or `y` to train anyway. To bypass the search entirely, add `--no-search`.
To stop automatically when the validation loss plateaus, add `patience` to the `train` section of the config YAML (default 0 = disabled):

```yaml
train:
  patience: 200   # stop if val loss does not improve for 200 epochs
```

Resume interrupted training with `--resume`:
```bash
(denoise) $ denoise train --config /data/sample_rec_config.yaml --gpus 0,1 --resume
```

To run two training jobs in parallel on the same node (e.g. two datasets on a 4-GPU machine), assign a different `--master-port` to each job to avoid a port conflict:
```bash
# job 1 — GPUs 0,1, default port
(denoise) $ denoise train --config /data/delta_config.yaml --gpus 0,1 --no-search

# job 2 — GPUs 2,3, different port
(denoise) $ denoise train --config /data/beta_config.yaml --gpus 2,3 --no-search --master-port 29501
```

After training, register the model so it can be found automatically in future sessions:
```bash
(denoise) $ denoise register \
    --config /data/sample_rec_config.yaml \
    --model-dir /data/sample_rec/TrainOutput
```

Models are stored in `~/.denoise/registry/` (never committed to git).
On APS machines where tocai and tomo4 share a GPFS home directory, a
model registered on tocai is immediately visible on tomo4.
```bash
(denoise) $ denoise search --config /data/new_sample_rec_config.yaml
```

Prints all registry entries that match the noise fingerprint of the given config, ranked by score (fraction of criteria matched).
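The reported score is simply the fraction of criteria that agree between the config's fingerprint and a registry entry. A hypothetical sketch of such a comparison (the field names below are illustrative, not the registry's actual schema):

```python
def match_score(query: dict, entry: dict) -> float:
    """Fraction of the query's criteria that the registry entry matches."""
    if not query:
        return 0.0
    hits = sum(1 for k, v in query.items() if entry.get(k) == v)
    return hits / len(query)

query = {"beamline": "2-BM", "mode": "pink", "energy_keV": 30.0}
entry = {"beamline": "2-BM", "mode": "pink", "energy_keV": 30.0,
         "detector": "FLIR Oryx"}
print(f"{match_score(query, entry):.0%}")  # 3/3 criteria -> 100%
```

Ranking entries by this score lets a near-match (e.g. same beamline and detector, slightly different energy) surface even when no perfect match exists.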
```bash
denoise slice --config /data/sample_rec_config.yaml --slice-number 500
```

- Loads the pretrained model and fetches the slice plus its neighboring slices (the 2.5D stack)
- Applies sliding-window patching, normalizes, and saves a `.tiff` to `<sample>_denoised_slices/`
- Not available in 3D mode — use `denoise volume` instead
```bash
# 2.5D (default)
denoise volume --config /data/sample_rec_config.yaml
denoise volume --config /data/sample_rec_config.yaml --start-slice 500 --end-slice 600

# 3D
denoise volume --config /data/sample_rec_config.yaml --mode 3d
```

- Processes the full volume or a sub-volume (both modes)
- Automatic batch size calculation, sliding-window patching with Hann blending
- 3D mode uses cubic patches and 3D overlap-add stitching
- Saves the output `.tiff` stack to `<sample>_denoised_volume_2.5d/` or `<sample>_denoised_volume_3d/` (mode-suffixed so both results coexist)
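Hann blending avoids visible seams because each patch is weighted by a window that tapers to zero at its edges; overlapping contributions are accumulated and then normalized by the accumulated weights. A 1-D sketch of the overlap-add idea (the real code applies it over 2D/3D patches):

```python
import numpy as np

def overlap_add_1d(patches, starts, length, patch_len):
    """Stitch 1-D patches with Hann weighting and weight normalization."""
    win = np.hanning(patch_len) + 1e-8   # small offset avoids zero weights
    out = np.zeros(length)
    weight = np.zeros(length)
    for p, s in zip(patches, starts):
        out[s:s + patch_len] += p * win      # accumulate weighted patches
        weight[s:s + patch_len] += win       # accumulate the weights
    return out / np.maximum(weight, 1e-8)    # normalize

signal = np.linspace(0.0, 1.0, 64)
patch_len = 32
starts = [0, 16, 32]                         # 50% overlap covers all 64 samples
patches = [signal[s:s + patch_len] for s in starts]
recon = overlap_add_1d(patches, starts, len(signal), patch_len)
print(np.allclose(recon, signal))            # True: patches stitch seamlessly
```

Because the window downweights patch borders, where a CNN's predictions are least reliable, the blended result favors each patch's center.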
Left: denoised | Right: noisy reconstruction (brain CT, APS 2-BM)
Areas for improvement:
- Fine-tuning from previously trained models (would reduce training time from 8–12 hours to ~30–60 minutes)
- Exploring alternative architectures beyond U-Net
Relevant Citations:
```bibtex
@article{laugros2025selfsupervised,
  author    = {Laugros, Alfred and Cloetens, Peter and Bosch, Carles and Schoonhoven, Richard and Pavlovic, Liam and Kuan, Aaron T. and Livingstone, Jayde and Zhang, Yuxin and Kim, Minsu and Hendriksen, Allard and Holler, Mirko and Wanner, Adrian A. and Azevedo, Anthony and Batenburg, K. Joost and Tuthill, John C. and Lee, Wei-Chung Allen and Schaefer, Andreas T. and Vigano, Nicola and Pacureanu, Alexandra},
  title     = {Self-supervised image restoration in coherent X-ray neuronal microscopy},
  journal   = {bioRxiv},
  year      = {2025},
  elocation-id = {2025.02.10.633538},
  doi       = {10.1101/2025.02.10.633538},
  publisher = {Cold Spring Harbor Laboratory},
  url       = {https://www.biorxiv.org/content/early/2025/02/10/2025.02.10.633538}
}

@article{hendriksen2020noise2inverse,
  title     = {Noise2Inverse: Self-supervised deep convolutional denoising for tomography},
  author    = {Hendriksen, Allard Adriaan and Pelt, Dani{\"e}l Maria and Batenburg, K. Joost},
  journal   = {IEEE Transactions on Computational Imaging},
  volume    = {6},
  pages     = {1320--1335},
  year      = {2020},
  doi       = {10.1109/TCI.2020.3019647},
  publisher = {IEEE}
}

@article{hendriksen2021deep,
  title     = {Deep denoising for multi-dimensional synchrotron X-ray tomography without high-quality reference data},
  author    = {Hendriksen, Allard A. and B{\"u}hrer, Minna and Leone, Laura and Merlini, Marco and Vigano, Nicola and Pelt, Dani{\"e}l M. and Marone, Federica and Di Michiel, Marco and Batenburg, K. Joost},
  journal   = {Scientific Reports},
  volume    = {11},
  number    = {1},
  pages     = {11895},
  year      = {2021},
  doi       = {10.1038/s41598-021-91084-8},
  publisher = {Nature Publishing Group}
}

@article{yunker2025boosting,
  title     = {Boosting Noise2Inverse via enhanced model selection for denoising computed tomography data},
  author    = {Yunker, Austin and Kenesei, Peter and Sharma, Hemant and Park, Jun-Sang and Miceli, Antonino and Kettimuthu, Rajkumar},
  journal   = {Tomography of Materials and Structures},
  pages     = {100075},
  year      = {2025},
  doi       = {10.1016/j.tmater.2025.100075},
  publisher = {Elsevier}
}
```