A real-time object detection system for drones using YOLOv8, developed as part of the Drohnen mit Künstlicher Intelligenz module (Master-Projekt - Studienfeld Intelligente Systeme, WS2526) at Frankfurt University of Applied Sciences.
- Dominik Bartsch
- Gajus Petrauskas
- Nhat Khanh Hoang
This project implements a real-time trash detection system for drones using YOLOv8. The system processes video streams from drone cameras and communicates with the drone's flight controller via a Raspberry Pi bridge.
- `inference.py` - Main script for running YOLOv8 model inference on video streams
- `real_mission.py` - Handles communication between the local computer and the drone through the Raspberry Pi (UDP-based)
- `setup_test.py` - Checks whether the local computer is correctly set up for WasteWing
- `yolov8n_waste.pt` - Pre-trained YOLOv8 model weights file
The project poster and the 3D-printable drop mechanism can be found in the assets folder.
- Ensure your computer and Raspberry Pi are connected to the same network
- Install dependencies:

  ```
  pip install -r requirements.txt
  ```

  Make sure `torch` and `torchvision` are installed with GPU support to ensure maximum inference performance.
Run the inference script with the model weights:

```
python inference.py -w yolov8n_waste.pt -s 1 --show
```

Parameters:

- `-w`/`--weights`: Path to the model weights file (e.g., `yolov8n_waste.pt`)
- `-s`/`--source`: Video source (`0` for webcam, `1` for capture card, or a path to a video file)
- `--show`: Display the detection results in real time
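The command-line interface above can be sketched with `argparse`. This is an illustrative mirror of the flags described in this README, not the actual code in `inference.py`; the defaults and help strings are assumptions.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the inference.py CLI; flag names follow the
    # README, defaults and help text are assumed for illustration.
    parser = argparse.ArgumentParser(description="YOLOv8 waste detection")
    parser.add_argument("-w", "--weights", required=True,
                        help="path to the model weights file, e.g. yolov8n_waste.pt")
    parser.add_argument("-s", "--source", default="0",
                        help="0 for webcam, 1 for capture card, or a video file path")
    parser.add_argument("--show", action="store_true",
                        help="display detection results in real time")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args(["-w", "yolov8n_waste.pt", "-s", "1", "--show"])
    print(args.weights, args.source, args.show)
```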
For more advanced usage, see the help menu:

```
python inference.py --help
```

This project can also be run using uv:

```
uv sync
uv run inference -w yolov8n_waste.pt -s 1
```

See requirements.txt for the complete list of dependencies. Main requirements include:
- Ultralytics YOLOv8
- PyTorch
- OpenCV
- NumPy
The system consists of:
- Local Computer: Runs YOLOv8 inference on video streams
- Raspberry Pi: Acts as a bridge for UDP communication with the drone's flight controller
- Drone: Receives commands based on detection results
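The link between the local computer and the Raspberry Pi bridge can be sketched as a small UDP exchange. This is a minimal illustration of the UDP-based communication described above, demonstrated on localhost; the JSON payload shape and port are assumptions for this sketch, not the actual message format used by `real_mission.py`.

```python
import json
import socket

def send_detection(sock: socket.socket, addr, label: str, confidence: float) -> None:
    """Send one detection result toward the Raspberry Pi bridge as UDP/JSON."""
    payload = json.dumps({"label": label, "confidence": confidence}).encode()
    sock.sendto(payload, addr)

def receive_detection(sock: socket.socket) -> dict:
    """Receive and decode one detection message (as the Pi side would)."""
    data, _ = sock.recvfrom(1024)
    return json.loads(data.decode())

if __name__ == "__main__":
    # Demonstrate the round trip on localhost instead of a real Pi.
    pi = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    pi.bind(("127.0.0.1", 0))        # stand-in for the Raspberry Pi listener
    computer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_detection(computer, pi.getsockname(), "trash", 0.87)
    print(receive_detection(pi))     # {'label': 'trash', 'confidence': 0.87}
```

Because UDP is connectionless, the sender does not know whether the Pi received the message; a production bridge would typically add acknowledgements or rely on the drone's flight controller to confirm commands.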
See the LICENSE file for details.