A Closer Look at Skeleton-based Continuous Sign Language Recognition

๐Ÿ† Official repository for A Closer Look at Skeleton-based Continuous Sign Language Recognition, the winner (1st place) in both the Signer-Independent and Unseen Sentences tasks of the ICCV 2025 SignEval 2025: The First Multimodal Sign Language Recognition Challenge. This implementation is largely built upon VAC and CoSign frameworks.

Prerequisites

  • This project is implemented in PyTorch (preferably version 2.0.0 for compatibility with ctcdecode; other versions may raise errors). Please install PyTorch first.
  • ctcdecode==0.4 [parlance/ctcdecode], for beam search decoding; an install sketch is given after this list.
  • sclite [kaldi-asr/kaldi]: install the Kaldi toolkit to obtain sclite for evaluation. After installation, create a soft link to sclite:
mkdir ./software
ln -s PATH_TO_KALDI/tools/sctk-2.4.10/bin/sclite ./software/sclite
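
ctcdecode is usually built from source rather than installed from PyPI; the commands below are only a minimal sketch (assuming a working compiler and the PyTorch environment above), not the only way to install it:

git clone --recursive https://github.com/parlance/ctcdecode.git
cd ctcdecode && pip install .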

Setup Instructions

  1. Download the dataset [download link] and place it in the ./datasets folder.

  2. Download the annotations [download link] and place them in the ./preprocess/mslr2025 folder.

  3. Preprocess the dataset. Run the following commands to generate the gloss dictionary, dataset info, and ground truth for evaluation (a conceptual sketch of the gloss-dictionary step follows the commands).

cd ./preprocess/mslr2025
python mslr_process.py
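
The gloss dictionary itself is produced by the repository's mslr_process.py; the snippet below is only a generic illustration of the idea (mapping every gloss in the annotations to an integer index, with 0 reserved for the CTC blank), not the script's actual code:

# Hypothetical illustration; the real mslr_process.py may use a different
# annotation format and output layout.
def build_gloss_dict(annotations):
    """annotations: iterable of gloss sequences, e.g. [["HELLO", "WORLD"], ...]."""
    gloss_dict = {}
    for sentence in annotations:
        for gloss in sentence:
            if gloss not in gloss_dict:
                # Index 0 is reserved for the CTC blank label, so glosses start at 1.
                gloss_dict[gloss] = len(gloss_dict) + 1
    return gloss_dict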

Running the Model

We provide pretrained models for inference; you can download them from the links below:

Task               | Baseline Test (WER) | Weight
Signer Independent | 7.44%               | GoogleDrive
Unseen Sentences   | 28.20%              | GoogleDrive

Task               | Baseline Dev (WER)  | Weight
Signer Independent | 2.2%                | GoogleDrive
Unseen Sentences   | 35.6%               | GoogleDrive
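
WER (word error rate) counts the substitutions, deletions, and insertions needed to turn the predicted gloss sequence into the reference, divided by the reference length. The function below is a minimal, self-contained sketch of that metric; the official numbers above come from sclite, not from this code:

def wer(reference, hypothesis):
    """Word error rate between two gloss sequences (lists of tokens)."""
    # d[i][j] = edit distance between reference[:i] and hypothesis[:j].
    d = [[0] * (len(hypothesis) + 1) for _ in range(len(reference) + 1)]
    for i in range(len(reference) + 1):
        d[i][0] = i
    for j in range(len(hypothesis) + 1):
        d[0][j] = j
    for i in range(1, len(reference) + 1):
        for j in range(1, len(hypothesis) + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + cost)    # substitution
    return d[-1][-1] / max(len(reference), 1)

# Example: one substitution over a two-gloss reference gives 0.5 (50% WER).
# wer(["HELLO", "WORLD"], ["HELLO", "THERE"]) == 0.5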

Note: different tasks call for different data augmentation strategies during training. Change the strategy in ./datasets/skeleton_feeder.py at line 194; a generic example of a skeleton-level augmentation is sketched below.
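
The actual strategies are defined in skeleton_feeder.py; the snippet below is only an illustrative example of what such an augmentation can look like (random in-plane rotation and scaling of 2D keypoints), not the repository's code:

import numpy as np

def random_rotate_scale(keypoints, max_angle=15.0, scale_range=(0.9, 1.1)):
    """Toy skeleton augmentation; keypoints has shape (T, J, 2): T frames, J joints."""
    angle = np.deg2rad(np.random.uniform(-max_angle, max_angle))
    scale = np.random.uniform(*scale_range)
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle),  np.cos(angle)]])
    # Rotate and scale around the sequence's mean keypoint so the pose stays centered.
    center = keypoints.mean(axis=(0, 1), keepdims=True)
    return (keypoints - center) @ rot.T * scale + center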

Signer Independent

  • Train: run the command
python main.py --config ./configs/Double_Cosign_si.yaml
  • Test: run the command
python main.py --config ./configs/Double_Cosign_si.yaml --phase test --load-weights PATH_TO_PRETRAINED_MODEL

Unseen Sentences

  • Train: download the pretrained weights from here, place them in the ./ folder, and run the command below (the --ignore-weights flag is illustrated after this list)
python main.py --config ./configs/Double_Cosign_us.yaml --load-weights PATH_TO_PRETRAINED_MODEL --ignore-weights classifier_static.weight classifier_motion.weight classifier_fusion.weight
  • Test: run the command
python main.py --config ./configs/Double_Cosign_us.yaml --phase test --load-weights PATH_TO_PRETRAINED_MODEL
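
--ignore-weights skips the listed classifier weights when initializing from the pretrained checkpoint, so those layers start fresh while the rest of the model is transferred. Conceptually this is partial state-dict loading; the sketch below assumes the checkpoint is a plain PyTorch state_dict and is not the repository's actual loader:

import torch

def load_partial_weights(model, checkpoint_path, ignore_keys):
    """Load a checkpoint while skipping the parameters named in ignore_keys."""
    state_dict = torch.load(checkpoint_path, map_location="cpu")
    filtered = {k: v for k, v in state_dict.items() if k not in ignore_keys}
    # strict=False lets the skipped classifier layers keep their fresh initialization.
    model.load_state_dict(filtered, strict=False)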

Citation

If you find this repo useful in your research, please consider citing:

@inproceedings{min2025closer,
  title={A Closer Look at Skeleton-based Continuous Sign Language Recognition},
  author={Min, Yuecong and Yang, Yifan and Jiao, Peiqi and Nan, Zixi and Chen, Xilin},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops},
  year={2025}
}

@inproceedings{jiao2023cosign,
  title={Cosign: Exploring co-occurrence signals in skeleton-based continuous sign language recognition},
  author={Jiao, Peiqi and Min, Yuecong and Li, Yanan and Wang, Xiaotao and Lei, Lei and Chen, Xilin},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={20676--20686},
  year={2023}
}
