LB-DPCT

Dynamic 3D point clouds are a key enabler for immersive communication, telepresence, and robotics, but wireless streaming remains difficult due to high data rates and codec complexity. Conventional digital pipelines combining compression and channel coding perform well at long block lengths but suffer from the cliff effect and poor adaptability in short block length settings. This work investigates deep joint source–channel coding for dynamic point cloud transmission. A Transformer-based autoencoder is introduced together with two synchronization strategies: a pseudorandom-code matched filter for reliable frame boundary recovery and a phase-invariant decoder. Experimental results demonstrate that the proposed approach delivers superior reconstruction quality and exhibits more graceful degradation under deteriorating channel conditions compared to baseline methods, all while requiring significantly fewer channel symbols per frame.
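The pseudorandom-code matched filter for frame-boundary recovery can be illustrated with a toy NumPy sketch. Everything here (PN-code length, noise level, frame offset, function names) is an illustrative assumption, not the paper's actual parameters:

```python
import numpy as np

def pn_sequence(length, seed=0):
    # Illustrative ±1 pseudorandom preamble (a stand-in for the paper's PN code)
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=length)

def find_frame_start(rx, preamble):
    # Matched filter: correlate the received stream with the known preamble
    # and pick the lag with the strongest correlation magnitude.
    corr = np.correlate(rx, preamble, mode="valid")
    return int(np.argmax(np.abs(corr)))

# Toy received stream: 40 samples of silence, the preamble, then a payload,
# all corrupted with additive noise.
preamble = pn_sequence(63, seed=1)
payload = np.random.default_rng(2).normal(size=200)
rx = np.concatenate([np.zeros(40), preamble, payload])
rx = rx + 0.3 * np.random.default_rng(3).normal(size=rx.size)
print(find_frame_start(rx, preamble))  # correlation peak lands at the true frame offset
```

The correlation peak is sharp because the PN preamble is nearly orthogonal to shifted copies of itself, which is what makes frame-boundary recovery reliable at low SNR.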

The project is built on top of smol runtime configs and focuses on experiments varying latent size and SNR/channel perturbations, and on measuring reconstruction quality.

Requirements

  • Python: 3.10
  • CUDA: 12.1 (for the provided GPU-enabled PyTorch install path)
  • smol: use commit 214a773 (see the smol submodule or install from the provided subfolder). Reference: https://github.com/Teleinfrastructure-Research-Lab/smol/tree/214a773b7a953972cc39d0c7c9edc31f6add2808
  • Dataset inputs:
    • Main training path expects an .h5 dataset file (H5_PATH) configured in config/out_of_project_paths.yaml
    • Optional raw ShapeNet-Part usage is available via the dedicated dataloader config (data.shapenet-part-root)

Install

git clone --recurse-submodules https://github.com/Teleinfrastructure-Research-Lab/lb-dpct.git
cd lb-dpct
python install_dependencies.py
pip install -e .
pip install submodules/smol

mv config/out_of_project_paths.yaml.example config/out_of_project_paths.yaml
mv config/checkpoints.yaml.example config/checkpoints.yaml
# Fill both files with your local paths/checkpoints

mkdir -p runs/sept_foldingnet_awgn runs/dpct_awgn runs/dpct_fft data

Codebase

  • architecture/: autoencoder architectures and channel modules (folding, sept, dpct, graph/transformer layers).
  • dataloaders/: HDF5 and ShapeNet-Part data loaders, preprocessing, normalization, batching.
  • loss/: training losses (Chamfer-based point-cloud loss).
  • pipeline/training/: training scripts.
  • experiment_definitions/: ready experiment YAMLs grouped by architecture/setting.
  • results/: scripts for SNR/synchronization sweeps and PSNR reporting.
  • scripts/visualize/: dataset and reconstruction visualization helpers.
  • scripts/model_stats/: architecture parameter counting.
  • utils/: geometry helpers, common utilities, visualization helpers.
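The Chamfer-based loss in loss/ is the standard symmetric point-set distance. A minimal NumPy sketch of the idea (not the repository's implementation, which presumably runs batched on the GPU):

```python
import numpy as np

def chamfer_distance(p, q):
    # Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3):
    # mean nearest-neighbour squared distance in both directions.
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = a + np.array([0.1, 0.0, 0.0])
print(chamfer_distance(a, a))  # identical clouds -> 0.0
print(chamfer_distance(a, b))  # small positive value for a shifted copy
```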

Configuration

  1. config/out_of_project_paths.yaml

    • Set H5_PATH to your prepared dataset file.
  2. config/data.yaml

    • Controls data roots, cache behavior, and NUM_POINTS.
  3. config/default-params.yaml

    • Global training/model defaults (epochs, LR, latent dim, decoder options, etc.).
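For reference, an illustrative config/out_of_project_paths.yaml might look as follows. H5_PATH is the only key this README names for that file; the path is a placeholder you must replace:

```yaml
# Illustrative only: H5_PATH is the one key named in this README;
# the path below is a placeholder for your local dataset file.
H5_PATH: /path/to/your/dataset.h5
```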

Experiment definitions

The experiment_definitions/ directory contains ready-to-run YAMLs:

  • sept_foldingnet_awgn/: FoldingNet and SEPT experiments under AWGN settings.
  • exp_dpct/: DPCT AWGN experiments.
  • dpct_fft/: DPCT experiments with FFT path enabled.

Each folder includes latent sizes 128/256/512 and SNR-conditioned training variants.

Common edits before a run:

  • Batch size: experiment_definitions/*/batch_config/batch_size_conf.yaml
  • Latent size: select corresponding YAML or change num_features
  • Optimizer/LR/scheduler/epochs/output path: edit the selected YAML
  • Channel settings: awgn_snr, temporal/sync params (for DPCT variants)
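The awgn_snr setting maps an SNR in dB to a noise power in the usual way. A hedged NumPy sketch of an AWGN channel as a generic illustration (not the repo's channel module):

```python
import numpy as np

def awgn(x, snr_db, rng=None):
    # Additive white Gaussian noise at a target SNR in dB, measured against
    # the empirical power of x. Generic sketch, not the repo's channel module.
    rng = rng or np.random.default_rng()
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return x + rng.normal(scale=np.sqrt(noise_power), size=x.shape)

x = np.ones(100_000)
y = awgn(x, snr_db=10.0, rng=np.random.default_rng(0))
print(np.mean((y - x) ** 2))  # ≈ 0.1 (10 dB below unit signal power)
```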

Pipeline

Training

Run training by pointing to one experiment-definition directory:

python pipeline/training/train_from_config.py experiment_definitions/sept_foldingnet_awgn
python pipeline/training/train_from_config.py experiment_definitions/exp_dpct
python pipeline/training/train_from_config.py experiment_definitions/dpct_fft

Evaluation / sweeps

The results/ scripts run structured evaluation sweeps and export CSVs:

  • sync_snr.py: PSNR vs SNR sweep
  • sync_rk.py: temporal offset / synchronization parameter sweep
  • bw_snr.py: checkpoint- and SNR-based PSNR aggregation
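Point cloud PSNR is typically a point-to-point (D1) metric. An illustrative NumPy sketch under that assumption (the definition used by the results/ scripts may differ, e.g. in the choice of peak value):

```python
import numpy as np

def d1_psnr(ref, rec, peak):
    # Point-to-point (D1) PSNR: symmetric nearest-neighbour MSE between the
    # clouds, normalised by a peak value (e.g. the bounding-box diagonal).
    d2 = np.sum((ref[:, None, :] - rec[None, :, :]) ** 2, axis=-1)
    mse = max(d2.min(axis=1).mean(), d2.min(axis=0).mean())
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
rec = ref + np.array([0.1, 0.0, 0.0])
print(d1_psnr(ref, rec, peak=1.0))  # ≈ 20 dB for a 0.1 shift with unit peak
```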

Acknowledgements

The authors acknowledge the support of the "Teleinfrastructure" Research and Development Laboratory at the Technical University of Sofia and the "Intelligent Communication Infrastructures" Research and Development Laboratory at Sofia Tech Park, Sofia, Bulgaria.

Citation

@INPROCEEDINGS{11351024,
  author={Bozhilov, Ivaylo and Petkova, Radostina and Tonchev, Krasimir and Manolova, Agata},
  booktitle={2025 28th International Symposium on Wireless Personal Multimedia Communications (WPMC)}, 
  title={Learning-Based Dynamic Point Cloud Transmission}, 
  year={2025},
  volume={},
  number={},
  pages={1-6},
  keywords={Point cloud compression;Wireless communication;Three-dimensional displays;Telepresence;Autoencoders;Symbols;Transformers;Synchronization;Reliability;Robots;Autoencoders;Deep Joint Source-Channel Coding;Deep Learning;Point Clouds;Wireless Transmission},
  doi={10.1109/WPMC67460.2025.11351024}}
