# Diffusion-Classifier Synergy: Reward-Aligned Learning via Mutual Boosting Loop for FSCIL

Official release code for the NeurIPS 2025 paper.
This release keeps the core components needed for the paper workflow:

- FSCIL classifier training based on the ADBS baseline
- Stable Diffusion 3.5 Medium + DAS-based image generation
- DCS reward wiring for `R_PAMMD`, `R_VM`, `R_RC`, and `R_CSCA`
- minimal scripts for generation, classifier training, and end-to-end orchestration
The reward/session combinations implemented in the release follow the paper:

- base session generation: `R_PAMMD + R_VM + R_CSCA`
- incremental new-class generation: `R_PAMMD + R_VM + R_RC`
- incremental old-class generation: `R_PAMMD + R_VM + R_CSCA`
## Repository layout

- `fscil/` — FSCIL training code
- `generation/` — DAS/SD3 generation and DCS reward code
- `scripts/` — entry points
- `requirements.txt` — Python dependencies
## Setup

Create an environment and install the dependencies:

```bash
pip install -r requirements.txt
```

You will also need:

- the benchmark datasets prepared in an FSCIL-compatible layout
- a local Stable Diffusion 3.5 Medium checkpoint
- a classifier checkpoint for reward-guided generation
## Datasets

Supported datasets:

- `cifar100`
- `mini_imagenet`
- `cub200`

The classifier code expects the original FSCIL dataset layouts:

- CUB-200 under `CUB_200_2011/`
- miniImageNet under `miniimagenet/images` and `miniimagenet/split`
- CIFAR-100 downloaded automatically through torchvision

Index files used for FSCIL sessions are stored in `fscil/data/index_list`.
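For reference, the expected layout under the dataset root looks roughly like the sketch below (the `cifar-100-python/` folder is created automatically by torchvision; everything else must be prepared manually):

```
datasets/
├── CUB_200_2011/        # CUB-200, original layout
├── miniimagenet/
│   ├── images/
│   └── split/
└── cifar-100-python/    # created by the torchvision download
```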
## Generation

Modes:

- `base` → uses `R_PAMMD + R_VM + R_CSCA`
- `new` → uses `R_PAMMD + R_VM + R_RC`
- `old` → uses `R_PAMMD + R_VM + R_CSCA`

Outputs are written under:

- `generated/<dataset>/base/`
- `generated/<dataset>/current/`
- `generated/<dataset>/previous/`
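The mode-to-reward wiring above can be summarized in a small sketch. The dictionary and helper below are purely illustrative (the actual selection logic lives in `generation/` and may be structured differently):

```python
# Illustrative sketch of the mode -> reward-combination wiring described
# above; names mirror the reward terms from the paper, not a real API.
REWARDS_BY_MODE = {
    "base": ("R_PAMMD", "R_VM", "R_CSCA"),  # base session generation
    "new":  ("R_PAMMD", "R_VM", "R_RC"),    # incremental new-class generation
    "old":  ("R_PAMMD", "R_VM", "R_CSCA"),  # incremental old-class generation
}

def rewards_for_mode(mode: str) -> tuple:
    """Return the reward terms active for a given generation mode."""
    try:
        return REWARDS_BY_MODE[mode]
    except KeyError:
        raise ValueError(
            f"unknown mode {mode!r}; expected one of {sorted(REWARDS_BY_MODE)}"
        )
```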
CUB-200:

```bash
python scripts/generate_data.py \
    --dataset cub200 \
    --model-path /path/to/stable-diffusion-3.5-medium \
    --classifier-checkpoint /path/to/session0_max_acc.pth \
    --dataset-root /path/to/datasets \
    --output-root generated \
    --session 0 \
    --mode base \
    --guidance-scale 2.0 \
    --num-inference-steps 10 \
    --num-particles 16 \
    --batch-particles 1 \
    --tempering-gamma 0.008 \
    --kl-coeff 0.001
```

miniImageNet:

```bash
python scripts/generate_data.py \
    --dataset mini_imagenet \
    --model-path /path/to/stable-diffusion-3.5-medium \
    --classifier-checkpoint /path/to/session0_max_acc.pth \
    --dataset-root /path/to/datasets \
    --output-root generated \
    --session 0 \
    --mode base \
    --guidance-scale 2.0 \
    --num-inference-steps 10 \
    --num-particles 16 \
    --batch-particles 1 \
    --tempering-gamma 0.008 \
    --kl-coeff 0.001
```

CIFAR-100:

```bash
python scripts/generate_data.py \
    --dataset cifar100 \
    --model-path /path/to/stable-diffusion-3.5-medium \
    --classifier-checkpoint /path/to/session0_max_acc.pth \
    --dataset-root /path/to/datasets \
    --output-root generated \
    --session 0 \
    --mode base \
    --guidance-scale 2.0 \
    --num-inference-steps 10 \
    --num-particles 16 \
    --batch-particles 1 \
    --tempering-gamma 0.008 \
    --kl-coeff 0.001
```

## Classifier training

Outputs are written under `outputs/`.
CUB-200:

```bash
python scripts/train_fscil.py \
    -dataset cub200 \
    -dataroot /path/to/datasets \
    -generated_root generated \
    -output_root outputs \
    -start_session 0 \
    -base_mode ft_cos \
    -new_mode avg_cos \
    -epochs_base 120 \
    -lr_base 0.002 \
    -schedule Cosine \
    -epochs_new_train 10 \
    -lr_new 0.0005 \
    -momentum 0.9 \
    -decay 0.0005 \
    -reg_alpha 0.01 \
    -margin
```

miniImageNet:

```bash
python scripts/train_fscil.py \
    -dataset mini_imagenet \
    -dataroot /path/to/datasets \
    -generated_root generated \
    -output_root outputs \
    -start_session 0 \
    -base_mode ft_cos \
    -new_mode avg_cos \
    -epochs_base 120 \
    -lr_base 0.1 \
    -schedule Cosine \
    -epochs_new_train 30 \
    -lr_new 0.05 \
    -momentum 0.9 \
    -decay 0.0005 \
    -reg_alpha 0.01 \
    -margin
```

CIFAR-100:

```bash
python scripts/train_fscil.py \
    -dataset cifar100 \
    -dataroot /path/to/datasets \
    -generated_root generated \
    -output_root outputs \
    -start_session 0 \
    -base_mode ft_cos \
    -new_mode avg_cos \
    -epochs_base 50 \
    -lr_base 0.1 \
    -schedule Cosine \
    -epochs_new_train 5 \
    -lr_new 0.01 \
    -momentum 0.9 \
    -decay 0.0005 \
    -reg_alpha 0.01 \
    -margin
```

## End-to-end pipeline

Example:
```bash
python scripts/run_pipeline.py \
    --dataset cub200 \
    --model-path /path/to/stable-diffusion-3.5-medium \
    --classifier-checkpoint /path/to/session0_max_acc.pth \
    --data-root /path/to/datasets \
    --generated-root generated \
    --output-root outputs \
    --session 0
```

## Citation

If you use this code, please cite the NeurIPS paper:
```bibtex
@inproceedings{wu2025diffusion,
  title={Diffusion-Classifier Synergy: Reward-Aligned Learning via Mutual Boosting Loop for {FSCIL}},
  author={Wu, Ruitao and Zhao, Yifan and Chen, Guangyao and Li, Jia},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2025},
}
```

## Acknowledgements

This release builds on codebases that informed the original research workflow:
- the ADBS FSCIL baseline used as the classifier-side foundation
- the DAS repository used as the diffusion-alignment foundation
- the open-source `diffusers` ecosystem