
BEV-Patch-PF

Project Website: https://bev-patch-pf.github.io/

Installation

conda create -y -n bev-patch-pf python=3.12
conda activate bev-patch-pf
conda install -y -c conda-forge manifpy
pip install -e .

Training BEV-Patch-PF model

Train with a single GPU

python src/train.py

Train with multiple GPUs

accelerate launch --multi_gpu --num_processes=<num_of_GPUs> --mixed_precision=fp16 src/train_ddp.py

Run Particle Filter

python src/run_pf.py sequence=<dataset> ckpt_path=<path/to/ckpt>
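As background, the predict–update–resample loop at the core of a particle filter (which src/run_pf.py drives with the learned BEV-patch measurement model) can be sketched in a simplified 2D form. Everything below is illustrative — the function names, noise parameters, and Gaussian position likelihood are stand-ins, not the repository's actual API:

```python
import math
import random

def run_particle_filter(controls, measurements, n_particles=100, seed=0):
    """Minimal 2D particle filter sketch: predict with an odometry
    increment, weight by a Gaussian position likelihood, resample."""
    rng = random.Random(seed)
    # Each particle is an (x, y) hypothesis; all start at the origin.
    particles = [(0.0, 0.0) for _ in range(n_particles)]
    for (dx, dy), (zx, zy) in zip(controls, measurements):
        # Predict: apply the odometry increment plus motion noise.
        moved = [(x + dx + rng.gauss(0, 0.05),
                  y + dy + rng.gauss(0, 0.05)) for x, y in particles]
        # Update: weight each particle by the measurement likelihood
        # (here a Gaussian on position with std 0.1).
        weights = [math.exp(-((x - zx) ** 2 + (y - zy) ** 2) / (2 * 0.1 ** 2))
                   for x, y in moved]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # Resample proportionally to weight (multinomial resampling).
        idx = rng.choices(range(n_particles), weights=weights, k=n_particles)
        particles = [moved[i] for i in idx]
    # Return the mean pose of the resampled (equally weighted) set.
    mx = sum(x for x, _ in particles) / n_particles
    my = sum(y for _, y in particles) / n_particles
    return mx, my

# Drive straight for five unit steps with matching measurements;
# the estimate should converge near (5, 0).
controls = [(1.0, 0.0)] * 5
measurements = [(i + 1.0, 0.0) for i in range(5)]
est = run_particle_filter(controls, measurements)
```

In the actual system the Gaussian likelihood above is replaced by the learned BEV-patch matching score against the aerial map.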

Real-time experiment

  1. Export ONNX model
python scripts/export_to_onnx.py --ckpt_path=<ckpt_path> --out_dir=<outdir>
  2. Build TensorRT model and run ROS2 node
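Step 2 ships no command here; a typical way to build the engine is NVIDIA's trtexec tool. The file names below are illustrative — substitute whatever export_to_onnx.py actually writes into <outdir>:

```shell
# Build a TensorRT engine from the exported ONNX model (illustrative paths;
# --fp16 matches the mixed-precision training setting above).
trtexec --onnx=<outdir>/model.onnx --saveEngine=<outdir>/model.engine --fp16
```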

Generate Training Dataset

A) Generate Dataset from bagfiles

1. Extract sensor data from bagfiles (RGB, depth, IMU, etc.)

2. Generate a SLAM trajectory

3. Export a GeoTIFF map (QGIS)

  • Important: ensure the GeoTIFF is in a correct UTM zone so distances are in meters.

4. Align the SLAM trajectory into the GeoTIFF/map coordinate frame:

python preprocessing/align_trajectory_geotiff.py \
  --geotiff=<path/to/geotiff.tiff> \
  --traj=<path/to/trajectory.csv>

5. Perform dataset-specific preprocessing (e.g., time synchronization, rectification, filtering).

  • Example: python preprocessing/preprocess_arl_jackal.py
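The alignment in step 4 is, at heart, a least-squares fit of a transform between trajectory points and their geo-referenced counterparts. A stripped-down rigid (rotation + translation) 2D version is sketched below; a full alignment like align_trajectory_geotiff.py's would typically also estimate a scale term (Umeyama), which this sketch omits:

```python
import math

def align_2d(src, dst):
    """Least-squares rotation theta and translation (tx, ty) mapping
    src points onto dst points (2D Kabsch, no scale)."""
    n = len(src)
    # Centroids of both point sets.
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    # Accumulate dot and cross terms of the centered correspondences;
    # the optimal rotation is atan2(sum_cross, sum_dot).
    s_dot = s_cross = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        axs, ays = xs - cxs, ys - cys
        axd, ayd = xd - cxd, yd - cyd
        s_dot += axs * axd + ays * ayd
        s_cross += axs * ayd - ays * axd
    theta = math.atan2(s_cross, s_dot)
    c, s = math.cos(theta), math.sin(theta)
    # Translation carries the rotated source centroid onto the target's.
    tx = cxd - (c * cxs - s * cys)
    ty = cyd - (s * cxs + c * cys)
    return theta, tx, ty

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(2.0, 3.0), (2.0, 4.0), (1.0, 3.0)]  # src rotated 90 deg, shifted by (2, 3)
theta, tx, ty = align_2d(src, dst)
```

Because the GeoTIFF is in a UTM zone (see the note in step 3), the recovered translation is in meters.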

B) Generate Dataset from TartanDrive2.0 dataset

  1. Download bagfiles
  2. Extract images and ground-truth odometry
  3. Preprocess the extracted data
  • python preprocessing/preprocess_tartandrive.py
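The time synchronization mentioned in the preprocessing steps above usually reduces to nearest-timestamp matching between sensor streams. A minimal sketch with made-up timestamps (the function name and tolerance are illustrative, not the preprocessing scripts' API):

```python
import bisect

def nearest_matches(image_stamps, odom_stamps, max_dt=0.05):
    """Pair each image timestamp with the closest odometry timestamp,
    dropping pairs more than max_dt seconds apart. Both input lists
    must be sorted ascending."""
    pairs = []
    for t in image_stamps:
        i = bisect.bisect_left(odom_stamps, t)
        # The closest stamp is either the one just before or just at/after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(odom_stamps)]
        j = min(candidates, key=lambda k: abs(odom_stamps[k] - t))
        if abs(odom_stamps[j] - t) <= max_dt:
            pairs.append((t, odom_stamps[j]))
    return pairs

# 10 Hz images against irregular odometry stamps (seconds).
images = [0.00, 0.10, 0.20, 0.30]
odom = [0.001, 0.052, 0.099, 0.151, 0.201, 0.34]
pairs = nearest_matches(images, odom)
```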
