This repository contains the official implementation of LRDNet, a lightweight deep learning architecture for free road space detection in autonomous driving. The design targets the computational constraints of real-time embedded deployment while remaining competitive in segmentation accuracy on established benchmarks.
With only 19.5M parameters, LRDNet reaches processing speeds suitable for resource-constrained environments. Its cascaded feature pooling and compact architectural choices give it a strong position on the trade-off between computational efficiency and segmentation quality for road scene analysis.
- Parameter-Efficient Architecture: Lightweight design achieving competitive accuracy with only 19.5M parameters
- Embedded System Optimization: Framework and architectural choices tailored for deployment on resource-constrained hardware
- Real-Time Processing Capability: Inference speeds of up to 300 FPS on optimized hardware configurations
- Comprehensive Benchmark Evaluation: Validated on the KITTI, Cityscapes, and R2D benchmarks
Representative segmentation outputs demonstrating LRDNet's capability across diverse road scenarios on the KITTI benchmark
Comprehensive performance analysis comparing parameter count, processing speed, and accuracy metrics against state-of-the-art methods
Experimental Configuration: Evaluation conducted on an NVIDIA GeForce RTX 2080 Ti GPU with 188 GB of system memory and a 48-core Intel Xeon CPU.
LRDNet/
├── train.py                 # Primary training pipeline with configurable hyperparameters
├── trainc.py                # Continue training from pre-trained checkpoints
├── test.py                  # Inference and evaluation framework
├── AUG/                     # Static data augmentation generation utilities (MATLAB)
├── ADI/                     # Modified Adaptive Data Integration implementation
├── data/                    # Dataset organization directory
│   ├── training/            # Training dataset placement
│   ├── testing/             # Evaluation dataset storage
│   └── data_road_aug/       # Augmented dataset repository
│       ├── train/           # Augmented training samples
│       └── val/             # Augmented validation samples
└── seg_results_images/      # Generated segmentation outputs
Core Requirements:
tensorflow-gpu==1.14.0
keras==2.2.4
tqdm
pillow
numpy

Optional Performance Analysis Tools:
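The pinned versions above target the TF1-era environment used by the authors; writing them to a `requirements.txt` keeps the install reproducible:

```shell
# Write the pinned dependency list for a reproducible install.
cat > requirements.txt <<'EOF'
tensorflow-gpu==1.14.0
keras==2.2.4
tqdm
pillow
numpy
EOF
# Then install with: pip install -r requirements.txt
```

Note that TensorFlow 1.14 requires Python 3.7 or earlier, so an isolated environment (conda or venv) is recommended.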
FLOPS Computation: Download net_flops.py from the Keras FLOPS repository and place it in the repository root for computational complexity analysis.
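Independent of net_flops.py, convolutional complexity can be sanity-checked by hand: a conv layer costs roughly 2 × H × W × C_out × (C_in × K × K) FLOPs, counting one multiply and one add per multiply-accumulate. A minimal sketch (the layer shapes below are illustrative, not LRDNet's actual configuration):

```python
def conv2d_flops(h, w, c_in, c_out, k, stride=1):
    """Approximate FLOPs of a conv layer: one multiply + one add per MAC."""
    out_h, out_w = h // stride, w // stride
    macs = out_h * out_w * c_out * (c_in * k * k)
    return 2 * macs

# Example: a 3x3 conv on a 160x576 feature map, 64 -> 128 channels.
flops = conv2d_flops(160, 576, 64, 128, 3)
print(f"{flops / 1e9:.2f} GFLOPs")  # -> 13.59 GFLOPs
```

Summing this over all layers gives a rough cross-check against the net_flops.py output.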
Backbone Network Integration: Install segmentation models library following documentation from qubvel/segmentation_models.
Advanced Data Augmentation: Implement Albumentations library as detailed in official documentation for sophisticated augmentation strategies.
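The key constraint in segmentation augmentation is that geometric transforms must be applied identically to the image and its mask. Albumentations handles this (and far richer transforms) automatically; for intuition, here is a minimal numpy sketch of a joint horizontal flip:

```python
import numpy as np

def random_hflip(image, mask, p=0.5, rng=None):
    """Flip image and mask together with probability p, keeping them aligned."""
    rng = rng or np.random.default_rng()
    if rng.random() < p:
        # Flip along the width axis; the mask must follow the image exactly.
        image = image[:, ::-1].copy()
        mask = mask[:, ::-1].copy()
    return image, mask

img = np.arange(12).reshape(2, 3, 2)   # toy HxWxC image
msk = np.arange(6).reshape(2, 3)       # matching HxW road mask
aug_img, aug_msk = random_hflip(img, msk, p=1.0)
```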
Primary Evaluation Datasets:
- KITTI Road Benchmark: Comprehensive autonomous driving dataset for road segmentation evaluation
- Cityscapes Dataset: Large-scale urban scene understanding benchmark
- R2D Dataset: Specialized road segmentation evaluation framework
Establish dataset organization following the prescribed directory hierarchy to ensure compatibility with training and evaluation pipelines.
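The expected skeleton can be created in one step before copying the datasets in:

```shell
# Create the directory layout expected by train.py / test.py.
mkdir -p data/training data/testing \
         data/data_road_aug/train data/data_road_aug/val \
         seg_results_images
```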
python train.py

Configure model hyperparameters and architectural variants in the designated model variable section.
python trainc.py

Specify the pre-trained weight file path to continue optimization from an existing checkpoint.
python test.py

Set the trained model path to evaluate the network on the test datasets.
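The scripts above expect image/mask pairs on disk. As a sketch of the loading step they rely on, the pair can be read with Pillow and numpy (the file names and the 576×160 input size here are illustrative; match them to your KITTI layout and model configuration):

```python
import numpy as np
from PIL import Image

def load_pair(image_path, mask_path, size=(576, 160)):
    """Load an RGB image and its road mask, resized to the network input size."""
    image = Image.open(image_path).convert("RGB").resize(size, Image.BILINEAR)
    # Nearest-neighbour keeps mask labels discrete (no interpolated classes).
    mask = Image.open(mask_path).convert("L").resize(size, Image.NEAREST)
    x = np.asarray(image, dtype=np.float32) / 255.0   # normalise to [0, 1]
    y = (np.asarray(mask) > 127).astype(np.float32)   # binary road / non-road
    return x, y
```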
Generated segmentation outputs are stored in the seg_results_images/ directory. For compatibility with the KITTI evaluation protocol, apply the BEV coordinate transformation following the guidelines in the KITTI Road Development Kit.
- Create an evaluation account on the KITTI server using an institutional email address
- Follow the evaluation protocol described in the KITTI Road Kit documentation
- Submit the BEV-transformed segmentation results for official benchmark evaluation
We provide comprehensive pre-trained network weights representing different architectural configurations evaluated on the KITTI benchmark server:
- LRDNet+: Enhanced architectural variant incorporating advanced feature extraction mechanisms
- LRDNet(S): Standard configuration optimized for balanced performance and efficiency
- LRDNet(L): Large-scale variant designed for maximum accuracy scenarios
Resource Access: Complete model weights, BEV submission files, modified ADI implementations, and KITTI submission documentation available through institutional cloud storage.
Framework Selection Rationale: Although PyTorch often delivers better performance in conventional GPU environments, this implementation uses Keras/TensorFlow for its compatibility with embedded deployment toolchains. This choice eases integration with the resource-constrained hardware platforms on which the 300 FPS target is achievable.
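When reproducing throughput figures such as 300 FPS, measure steady-state latency after a warm-up phase so one-time costs (graph compilation, memory allocation) are excluded. A framework-agnostic sketch, where `infer` stands in for the actual model's forward pass:

```python
import time

def measure_fps(infer, n_warmup=10, n_runs=100):
    """Time repeated inference calls and report frames per second."""
    for _ in range(n_warmup):
        infer()                          # warm-up: exclude setup costs
    start = time.perf_counter()
    for _ in range(n_runs):
        infer()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Example with a cheap stand-in for the model's forward pass:
fps = measure_fps(lambda: sum(range(1000)))
```

On a GPU, remember to synchronize the device before reading the timer, or the measured latency reflects only kernel launch time.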
Code Verification Status: The repository contains cleaned implementation code that has not been fully re-tested after refactoring. If you encounter problems, please report them through the repository's issue tracker.
Please reference our work using the following citation format:
@article{DBLP:journals/tmm/KhanSRSS25,
  author  = {Abdullah Aman Khan and
             Jie Shao and
             Yunbo Rao and
             Lei She and
             Heng Tao Shen},
  title   = {LRDNet: Lightweight LiDAR Aided Cascaded Feature Pools for Free Road Space Detection},
  journal = {{IEEE} Trans. Multim.},
  volume  = {27},
  pages   = {652--664},
  year    = {2025}
}

For technical inquiries, implementation questions, or collaboration opportunities, please use the repository's issue tracker or contact the corresponding author through institutional channels.
Disclaimer: This implementation represents ongoing research in autonomous driving perception systems. Users are advised to conduct thorough validation before deployment in safety-critical applications.
