iCVTEAM/DCS


Diffusion-Classifier Synergy (DCS)

Official release code for the NeurIPS 2025 paper:

Diffusion-Classifier Synergy: Reward-Aligned Learning via Mutual Boosting Loop for FSCIL

What is included

This release keeps the core components needed for the paper workflow:

  • FSCIL classifier training based on the ADBS baseline
  • Stable Diffusion 3.5 Medium + DAS-based image generation
  • DCS reward wiring for:
    • R_PAMMD
    • R_VM
    • R_RC
    • R_CSCA
  • Minimal scripts for generation, classifier training, and end-to-end orchestration

The reward/session combinations implemented in the release follow the paper:

  • base session generation: R_PAMMD + R_VM + R_CSCA
  • incremental new-class generation: R_PAMMD + R_VM + R_RC
  • incremental old-class generation: R_PAMMD + R_VM + R_CSCA
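These reward/session combinations can be captured in a small lookup table. A sketch (the constant and function names here are illustrative, not the release's actual API; the release wires these rewards internally):

```python
# Reward components active in each generation mode, as listed above.
REWARDS_BY_MODE = {
    "base": ("R_PAMMD", "R_VM", "R_CSCA"),  # base-session generation
    "new":  ("R_PAMMD", "R_VM", "R_RC"),    # incremental new-class generation
    "old":  ("R_PAMMD", "R_VM", "R_CSCA"),  # incremental old-class generation
}

def rewards_for_mode(mode: str) -> tuple:
    """Return the reward components active for a generation mode."""
    if mode not in REWARDS_BY_MODE:
        raise ValueError(f"unknown mode {mode!r}; expected one of {sorted(REWARDS_BY_MODE)}")
    return REWARDS_BY_MODE[mode]
```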

Directory structure

Installation

Create an environment and install the dependencies:

pip install -r requirements.txt

You will also need:

  • the benchmark datasets prepared in FSCIL-compatible layout
  • a local Stable Diffusion 3.5 Medium checkpoint
  • a classifier checkpoint for reward-guided generation
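A quick preflight check for these three prerequisites can save a failed run. This is a sketch, not part of the release; all paths are placeholders for your local copies:

```python
from pathlib import Path

def check_prerequisites(dataset_root: str, sd_checkpoint: str, classifier_ckpt: str) -> list:
    """Return a list of human-readable problems; an empty list means ready to run."""
    problems = []
    if not Path(dataset_root).is_dir():
        problems.append(f"dataset root not found: {dataset_root}")
    if not Path(sd_checkpoint).exists():
        problems.append(f"Stable Diffusion 3.5 Medium checkpoint not found: {sd_checkpoint}")
    if not Path(classifier_ckpt).is_file():
        problems.append(f"classifier checkpoint not found: {classifier_ckpt}")
    return problems
```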

Datasets

Supported datasets:

  • cifar100
  • mini_imagenet
  • cub200

The classifier code expects the original FSCIL dataset layouts:

  • CUB-200 under CUB_200_2011/
  • miniImageNet under miniimagenet/images and miniimagenet/split
  • CIFAR-100, downloaded automatically through torchvision

Index files used for FSCIL sessions are stored in fscil/data/index_list.
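The expected layout can be verified before training. A sketch, assuming only the directories named above are required (CIFAR-100 needs none up front, since torchvision downloads it):

```python
from pathlib import Path

# Expected FSCIL dataset subdirectories, relative to the dataset root.
EXPECTED_LAYOUT = {
    "cub200": ["CUB_200_2011"],
    "mini_imagenet": ["miniimagenet/images", "miniimagenet/split"],
    "cifar100": [],  # fetched via torchvision at first use
}

def missing_dataset_dirs(root: str, dataset: str) -> list:
    """Return the expected subdirectories that are absent under `root`."""
    return [rel for rel in EXPECTED_LAYOUT[dataset] if not (Path(root) / rel).is_dir()]
```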

Stage 1: generate images with DCS rewards

Modes:

  • base → uses R_PAMMD + R_VM + R_CSCA
  • new → uses R_PAMMD + R_VM + R_RC
  • old → uses R_PAMMD + R_VM + R_CSCA

Outputs are written under:

  • generated/<dataset>/base/
  • generated/<dataset>/current/
  • generated/<dataset>/previous/
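Resolving the output directory for a run can be sketched as follows. The mode-to-subdirectory mapping (base → base/, new → current/, old → previous/) is an assumption inferred from the two lists above, not confirmed by the release code:

```python
from pathlib import Path

# Assumed mapping from generation mode to output subdirectory.
MODE_TO_SUBDIR = {"base": "base", "new": "current", "old": "previous"}

def output_dir(output_root: str, dataset: str, mode: str) -> Path:
    """Directory where stage-1 images for `dataset`/`mode` are written."""
    return Path(output_root) / dataset / MODE_TO_SUBDIR[mode]
```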

Example commands

CUB-200 base session

python scripts/generate_data.py \
  --dataset cub200 \
  --model-path /path/to/stable-diffusion-3.5-medium \
  --classifier-checkpoint /path/to/session0_max_acc.pth \
  --dataset-root /path/to/datasets \
  --output-root generated \
  --session 0 \
  --mode base \
  --guidance-scale 2.0 \
  --num-inference-steps 10 \
  --num-particles 16 \
  --batch-particles 1 \
  --tempering-gamma 0.008 \
  --kl-coeff 0.001

miniImageNet base session

python scripts/generate_data.py \
  --dataset mini_imagenet \
  --model-path /path/to/stable-diffusion-3.5-medium \
  --classifier-checkpoint /path/to/session0_max_acc.pth \
  --dataset-root /path/to/datasets \
  --output-root generated \
  --session 0 \
  --mode base \
  --guidance-scale 2.0 \
  --num-inference-steps 10 \
  --num-particles 16 \
  --batch-particles 1 \
  --tempering-gamma 0.008 \
  --kl-coeff 0.001

CIFAR-100 base session

python scripts/generate_data.py \
  --dataset cifar100 \
  --model-path /path/to/stable-diffusion-3.5-medium \
  --classifier-checkpoint /path/to/session0_max_acc.pth \
  --dataset-root /path/to/datasets \
  --output-root generated \
  --session 0 \
  --mode base \
  --guidance-scale 2.0 \
  --num-inference-steps 10 \
  --num-particles 16 \
  --batch-particles 1 \
  --tempering-gamma 0.008 \
  --kl-coeff 0.001
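The three base-session commands differ only in --dataset, so they can be produced by one helper and passed to subprocess.run. A sketch; the path arguments are placeholders and the flag set simply mirrors the examples above:

```python
def generation_command(dataset: str, model_path: str, classifier_ckpt: str,
                       dataset_root: str, session: int = 0, mode: str = "base") -> list:
    """Argument list for scripts/generate_data.py, matching the example invocations."""
    return [
        "python", "scripts/generate_data.py",
        "--dataset", dataset,
        "--model-path", model_path,
        "--classifier-checkpoint", classifier_ckpt,
        "--dataset-root", dataset_root,
        "--output-root", "generated",
        "--session", str(session),
        "--mode", mode,
        "--guidance-scale", "2.0",
        "--num-inference-steps", "10",
        "--num-particles", "16",
        "--batch-particles", "1",
        "--tempering-gamma", "0.008",
        "--kl-coeff", "0.001",
    ]
```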

Stage 2: train the FSCIL classifier

Outputs are written under outputs/.

Example commands

CUB-200

python scripts/train_fscil.py \
  -dataset cub200 \
  -dataroot /path/to/datasets \
  -generated_root generated \
  -output_root outputs \
  -start_session 0 \
  -base_mode ft_cos \
  -new_mode avg_cos \
  -epochs_base 120 \
  -lr_base 0.002 \
  -schedule Cosine \
  -epochs_new_train 10 \
  -lr_new 0.0005 \
  -momentum 0.9 \
  -decay 0.0005 \
  -reg_alpha 0.01 \
  -margin

miniImageNet

python scripts/train_fscil.py \
  -dataset mini_imagenet \
  -dataroot /path/to/datasets \
  -generated_root generated \
  -output_root outputs \
  -start_session 0 \
  -base_mode ft_cos \
  -new_mode avg_cos \
  -epochs_base 120 \
  -lr_base 0.1 \
  -schedule Cosine \
  -epochs_new_train 30 \
  -lr_new 0.05 \
  -momentum 0.9 \
  -decay 0.0005 \
  -reg_alpha 0.01 \
  -margin

CIFAR-100

python scripts/train_fscil.py \
  -dataset cifar100 \
  -dataroot /path/to/datasets \
  -generated_root generated \
  -output_root outputs \
  -start_session 0 \
  -base_mode ft_cos \
  -new_mode avg_cos \
  -epochs_base 50 \
  -lr_base 0.1 \
  -schedule Cosine \
  -epochs_new_train 5 \
  -lr_new 0.01 \
  -momentum 0.9 \
  -decay 0.0005 \
  -reg_alpha 0.01 \
  -margin
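The three training commands share most flags (-base_mode ft_cos, -new_mode avg_cos, -schedule Cosine, -momentum 0.9, -decay 0.0005, -reg_alpha 0.01, -margin) and differ only in schedule lengths and learning rates, collected here for reference:

```python
# Per-dataset hyperparameters, transcribed from the example commands above.
TRAIN_HPARAMS = {
    "cub200":        {"epochs_base": 120, "lr_base": 0.002, "epochs_new_train": 10, "lr_new": 0.0005},
    "mini_imagenet": {"epochs_base": 120, "lr_base": 0.1,   "epochs_new_train": 30, "lr_new": 0.05},
    "cifar100":      {"epochs_base": 50,  "lr_base": 0.1,   "epochs_new_train": 5,  "lr_new": 0.01},
}
```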

End-to-end run

Example:

python scripts/run_pipeline.py \
  --dataset cub200 \
  --model-path /path/to/stable-diffusion-3.5-medium \
  --classifier-checkpoint /path/to/session0_max_acc.pth \
  --data-root /path/to/datasets \
  --generated-root generated \
  --output-root outputs \
  --session 0
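The two stages can also be chained manually. This sketch only illustrates the ordering; run_pipeline.py's internals may differ, and the required model/checkpoint flags from the examples above are omitted for brevity:

```python
import subprocess

def run_two_stage(dataset: str, dry_run: bool = True) -> list:
    """Build (and optionally execute) stage-1 generation followed by stage-2 training."""
    stage1 = ["python", "scripts/generate_data.py", "--dataset", dataset,
              "--session", "0", "--mode", "base", "--output-root", "generated"]
    stage2 = ["python", "scripts/train_fscil.py", "-dataset", dataset,
              "-generated_root", "generated", "-output_root", "outputs",
              "-start_session", "0"]
    commands = [stage1, stage2]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)  # abort the chain on the first failure
    return commands
```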

Citation

If you use this code, please cite the NeurIPS paper.

@inproceedings{wu2025diffusion,
  title={Diffusion-Classifier Synergy: Reward-Aligned Learning via Mutual Boosting Loop for {FSCIL}},
  author={Wu, Ruitao and Zhao, Yifan and Chen, Guangyao and Li, Jia},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2025},
}

Acknowledgments

This release builds on codebases that informed the original research workflow, notably the ADBS baseline and DAS.
