Dynamic 3D point clouds are a key enabler for immersive communication, telepresence, and robotics, but wireless streaming remains difficult due to high data rates and codec complexity. Conventional digital pipelines combining compression and channel coding perform well at long block lengths but suffer from the cliff effect and poor adaptability in short block length settings. This work investigates deep joint source–channel coding for dynamic point cloud transmission. A Transformer-based autoencoder is introduced together with two synchronization strategies: a pseudorandom-code matched filter for reliable frame boundary recovery and a phase-invariant decoder. Experimental results demonstrate that the proposed approach delivers superior reconstruction quality and exhibits more graceful degradation under deteriorating channel conditions compared to baseline methods, all while requiring significantly fewer channel symbols per frame.
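The pseudorandom-code matched filter mentioned above recovers frame boundaries by correlating the received stream against a known PN preamble. A minimal NumPy sketch of the idea (the sequence length, noise level, and threshold here are illustrative, not the repo's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Known pseudorandom (PN) preamble of +/-1 chips (illustrative length).
pn = rng.choice([-1.0, 1.0], size=64)

# Received stream: noise everywhere, preamble embedded at a known offset.
offset = 100
rx = 0.3 * rng.standard_normal(300)
rx[offset:offset + len(pn)] += pn

# Matched filtering = cross-correlation with the known preamble;
# the correlation peak marks the recovered frame start.
corr = np.correlate(rx, pn, mode="valid")
frame_start = int(np.argmax(corr))
print(frame_start)  # → 100, the preamble offset
```

The correlation peak at the true offset has magnitude ~`len(pn)`, while noise-only lags stay near `O(sqrt(len(pn)))`, which is why the detection is reliable at moderate SNR.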
The project is built on top of smol runtime configs and focuses on experiments with latent size, SNR/channel perturbations, and reconstruction quality.
- Python: 3.10
- CUDA: 12.1 (for the provided GPU-enabled PyTorch install path)
- smol: use commit `214a773` (see the `smol` submodule or install from the provided subfolder). Reference: https://github.com/Teleinfrastructure-Research-Lab/smol/tree/214a773b7a953972cc39d0c7c9edc31f6add2808
- Dataset inputs:
  - The main training path expects an `.h5` dataset file (`H5_PATH`) configured in `config/out_of_project_paths.yaml`
  - Optional raw ShapeNet-Part usage is available via the dedicated dataloader config (`data.shapenet-part-root`)
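The internal layout of the expected `.h5` file is not documented here, so it helps to inspect a candidate file with `h5py` before pointing `H5_PATH` at it. The snippet below builds a toy stand-in file (the `points` dataset name and shape are assumptions, not the repo's actual schema) and walks its contents:

```python
import h5py
import numpy as np

# Toy HDF5 file standing in for the real dataset; the actual group and
# dataset names expected by the loaders are assumptions here.
with h5py.File("toy.h5", "w") as f:
    f.create_dataset("points", data=np.zeros((4, 1024, 3), dtype=np.float32))

# Print the path, shape, and dtype of every dataset in the file.
def describe(path, obj):
    if isinstance(obj, h5py.Dataset):
        print(path, obj.shape, obj.dtype)

with h5py.File("toy.h5", "r") as f:
    f.visititems(describe)
```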
git clone --recurse-submodules https://github.com/Teleinfrastructure-Research-Lab/lb-dpct.git
cd lb-dpct
python install_dependencies.py
pip install -e .
pip install submodules/smol
mv config/out_of_project_paths.yaml.example config/out_of_project_paths.yaml
mv config/checkpoints.yaml.example config/checkpoints.yaml
# Fill both files with your local paths/checkpoints
mkdir -p runs
mkdir -p runs/sept_foldingnet_awgn
mkdir -p runs/dpct_awgn
mkdir -p runs/dpct_fft
mkdir -p data

- `architecture/`: autoencoder architectures and channel modules (`folding`, `sept`, `dpct`, graph/transformer layers).
- `dataloaders/`: HDF5 and ShapeNet-Part data loaders, preprocessing, normalization, batching.
- `loss/`: training losses (Chamfer-based point-cloud loss).
- `pipeline/training/`: training scripts.
- `experiment_definitions/`: ready experiment YAMLs grouped by architecture/setting.
- `results/`: scripts for SNR/synchronization sweeps and PSNR reporting.
- `scripts/visualize/`: dataset and reconstruction visualization helpers.
- `scripts/model_stats/`: architecture parameter counting.
- `utils/`: geometry helpers, common utilities, visualization helpers.
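`loss/` implements a Chamfer-based point-cloud loss. As a reference, the symmetric Chamfer distance between two point sets can be sketched in NumPy as follows (the repo's loss may use a different reduction or weighting):

```python
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3):
    mean nearest-neighbor squared distance in both directions."""
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, b))  # identical sets → 0.0
```

The O(N·M) pairwise-distance matrix is fine for small clouds; training-scale implementations typically use a batched GPU kernel instead.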
- `config/out_of_project_paths.yaml`
  - Set `H5_PATH` to your prepared dataset file.
- `config/data.yaml`
  - Controls data roots, cache behavior, and `NUM_POINTS`.
- `config/default-params.yaml`
  - Global training/model defaults (epochs, LR, latent dim, decoder options, etc.).
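A quick sanity check after filling in the copied `.example` files is to load them with PyYAML and verify the keys you set (the example path below is a placeholder, not a real dataset location):

```python
import yaml

# Stand-in config; in the real repo you edit config/out_of_project_paths.yaml
# after copying the .example file. The path value here is a placeholder.
with open("out_of_project_paths.yaml", "w") as f:
    f.write("H5_PATH: /data/point_clouds/train.h5\n")

with open("out_of_project_paths.yaml") as f:
    cfg = yaml.safe_load(f)

# Fail early if the dataset path was not set to an .h5 file.
assert "H5_PATH" in cfg and cfg["H5_PATH"].endswith(".h5")
print(cfg["H5_PATH"])
```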
The `experiment_definitions/` directory contains ready-to-run YAMLs:

- `sept_foldingnet_awgn/`: FoldingNet and SEPT experiments under AWGN settings.
- `exp_dpct/`: DPCT AWGN experiments.
- `dpct_fft/`: DPCT experiments with the FFT path enabled.
Each folder includes latent sizes 128/256/512 and SNR-conditioned training variants.
Common edits before a run:
- Batch size: `experiment_definitions/*/batch_config/batch_size_conf.yaml`
- Latent size: select the corresponding YAML or change `num_features`
- Optimizer/LR/scheduler/epochs/output path: edit the selected YAML
- Channel settings: `awgn_snr`, temporal/sync params (for DPCT variants)
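`awgn_snr` sets the channel SNR the model is trained under. For reference, adding AWGN to channel symbols at a target SNR in dB can be sketched as follows (an illustrative stand-in, not the repo's channel module):

```python
import numpy as np

def awgn(x: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add white Gaussian noise at the given SNR (dB), measuring the
    signal power empirically from x itself."""
    rng = rng or np.random.default_rng()
    sig_power = np.mean(x ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(noise_power), size=x.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # unit-power stand-in for channel symbols
y = awgn(x, snr_db=10.0, rng=rng)
# Empirical noise power should sit ~10 dB below the signal power (~0.1).
print(np.mean((y - x) ** 2))
```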
Run training by pointing to one experiment-definition directory:
python pipeline/training/train_from_config.py experiment_definitions/sept_foldingnet_awgn
python pipeline/training/train_from_config.py experiment_definitions/exp_dpct
python pipeline/training/train_from_config.py experiment_definitions/dpct_fft

The `results/` scripts run structured evaluation sweeps and export CSVs:
- `sync_snr.py`: PSNR vs SNR sweep
- `sync_rk.py`: temporal offset / synchronization parameter sweep
- `bw_snr.py`: checkpoint- and SNR-based PSNR aggregation
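For point clouds, PSNR is commonly computed over the symmetric point-to-point squared error with the reference bounding-box diagonal as the peak value. The sketch below follows that convention; the repo's sweep scripts may use a different peak or error definition:

```python
import numpy as np

def pc_psnr(ref: np.ndarray, rec: np.ndarray) -> float:
    """Point-to-point PSNR in dB between reference and reconstructed
    (N, 3) clouds; peak = reference bounding-box diagonal (assumed)."""
    # Symmetric nearest-neighbor squared error (Chamfer-style).
    d2 = np.sum((ref[:, None, :] - rec[None, :, :]) ** 2, axis=-1)
    mse = 0.5 * (d2.min(axis=1).mean() + d2.min(axis=0).mean())
    peak = np.linalg.norm(ref.max(axis=0) - ref.min(axis=0))
    return float(10 * np.log10(peak ** 2 / mse)) if mse > 0 else float("inf")

ref = np.random.default_rng(0).random((256, 3))
rec = ref + 0.01  # small uniform offset as a stand-in reconstruction
print(pc_psnr(ref, rec))
```

Sweeping this metric over channel SNR and writing each `(snr_db, psnr)` pair to a CSV row is the pattern the sweep scripts follow.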
The authors acknowledge the support of the "Teleinfrastructure" Research and Development Laboratory at the Technical University of Sofia and the "Intelligent Communication Infrastructures" Research and Development Laboratory at Sofia Tech Park, Sofia, Bulgaria.
@INPROCEEDINGS{11351024,
author={Bozhilov, Ivaylo and Petkova, Radostina and Tonchev, Krasimir and Manolova, Agata},
booktitle={2025 28th International Symposium on Wireless Personal Multimedia Communications (WPMC)},
title={Learning-Based Dynamic Point Cloud Transmission},
year={2025},
volume={},
number={},
pages={1-6},
keywords={Point cloud compression;Wireless communication;Three-dimensional displays;Telepresence;Autoencoders;Symbols;Transformers;Synchronization;Reliability;Robots;Autoencoders;Deep Joint Source-Channel Coding;Deep Learning;Point Clouds;Wireless Transmission},
doi={10.1109/WPMC67460.2025.11351024}}