
# LBD: Language-Inspired Bootstrapped Disentanglement for Class-Incremental Semantic Segmentation

Official PyTorch implementation of:

**Learning Yourself: Class-Incremental Semantic Segmentation with Language-Inspired Bootstrapped Disentanglement**, ICCV 2025

## Requirements

Install the dependencies:

```shell
pip install -r requirements.txt
```

Tested with Python 3.8+ and PyTorch 1.12+.

## Before you start

Prepare the OpenCLIP ViT-B/16 checkpoint and set its path in the config file:

```yaml
train:
  pretrained: /path/to/open_clip_model.safetensors
```
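A mistyped checkpoint path only surfaces once training starts, so it can save a failed launch to verify it up front. A minimal sketch (the helper name is ours, not part of the repo):

```python
from pathlib import Path

def check_checkpoint(path: str) -> Path:
    """Fail fast if the configured OpenCLIP checkpoint file is missing."""
    p = Path(path).expanduser()
    if not p.is_file():
        raise FileNotFoundError(f"OpenCLIP checkpoint not found: {p}")
    return p
```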

## Dataset setup

### Pascal VOC 2012

Prepare Pascal VOC 2012 with augmented masks (`SegmentationClassAug/`), then set:

```yaml
dataset:
  data_root: /path/to/VOCdevkit/VOC2012
```

### ADE20K

Prepare ADE20K, then set:

```yaml
dataset:
  data_root: /path/to/ADEChallengeData2016
```

Protocol split files are already included under `datasets/data`.
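The protocol names used throughout (e.g. `15-5`, `100-10`) encode how foreground classes are split across incremental steps: the first number is the base-step class count, and each later step adds the second number of classes. A small illustration of this naming scheme (our own sketch, not code from the repo):

```python
def task_class_splits(task, num_classes):
    """Partition foreground class ids 1..num_classes for a protocol
    string like '15-5' (base step size, then increment size)."""
    base, inc = (int(x) for x in task.split("-"))
    splits = [list(range(1, base + 1))]
    start = base + 1
    while start <= num_classes:
        splits.append(list(range(start, min(start + inc, num_classes + 1))))
        start += inc
    return splits
```

For VOC (20 foreground classes), `15-5` yields one base step of 15 classes plus one incremental step of 5, while `15-1` yields the base step followed by five single-class steps.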

## Training

Run the default VOC training script:

```shell
bash train.sh
```

Or launch training manually:

```shell
torchrun \
    --nproc_per_node=2 \
    --master_port=29500 \
    main.py \
    --config ./configs/voc.yaml \
    --log voc_15-5
```

To train on ADE20K, use:

```shell
torchrun \
    --nproc_per_node=2 \
    --master_port=29500 \
    main.py \
    --config ./configs/ade20k.yaml \
    --log ade_100-10
```

## Main configuration options

| Option | Description |
| --- | --- |
| `task` | Incremental protocol, such as `15-5`, `15-1`, `10-10` for VOC or `100-10`, `100-50`, `50-50` for ADE20K |
| `overlap` | `True` for the overlapped setting, `False` for the disjoint setting |
| `curr_step` | Starting incremental step |
| `train.context_k` | Number of background prompts |
| `train.distill_args` | Weight of the output distillation |
| `train.pseudo_thresh` | Confidence threshold for pseudo-labeling |
| `optimizer.inc_lr` | Learning-rate multiplier for incremental steps |
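`train.distill_args` weights an output-distillation term that keeps the new model's predictions close to the previous step's model on old classes. A generic KL-based sketch of such a loss (the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def output_distill_loss(new_logits, old_logits, T=1.0):
    """KL divergence between softened old and new class distributions,
    scaled by T^2 as in standard knowledge distillation."""
    log_p_new = F.log_softmax(new_logits / T, dim=1)
    p_old = F.softmax(old_logits / T, dim=1)
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)
```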

Edit `configs/voc.yaml` or `configs/ade20k.yaml` as needed.
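`train.pseudo_thresh` governs pseudo-labeling: pixels marked background in the current step may actually belong to old classes, so the previous model's confident predictions can be adopted as training targets. A minimal sketch of this general technique (the helper name and exact rule are ours, not the repo's):

```python
import torch

def pseudo_label(old_logits, targets, bg_index=0, thresh=0.7):
    """Replace background pixels with the old model's predicted class
    wherever its softmax confidence exceeds the threshold."""
    probs = old_logits.softmax(dim=1)   # (B, C_old, H, W)
    conf, pred = probs.max(dim=1)       # both (B, H, W)
    out = targets.clone()
    take = (targets == bg_index) & (conf >= thresh)
    out[take] = pred[take]
    return out
```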

## Repository structure

```
LBD/
├── configs/          # Configuration files
├── core/             # Segmenter and decoder
├── datasets/         # Dataset loaders and split files
├── denseCLIP/        # CLIP-based model code
├── metrics/          # Evaluation metrics
├── utils/            # Losses, transforms, tasks, and helpers
├── main.py           # Training entry point
├── train.sh          # Example launch script
└── requirements.txt  # Python dependencies
```

## Citation

If you find this work useful, please cite:

```bibtex
@inproceedings{wu2025lbd,
  title={Learning Yourself: Class-Incremental Semantic Segmentation with Language-Inspired Bootstrapped Disentanglement},
  author={Wu, Ruitao and Zhao, Yifan and Li, Jia},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}
```

## Acknowledgements

This codebase builds on several excellent prior works.
