Micnasr/Cyclops
Cyclops

Real-time multi-camera 360-degree obstacle awareness system for cyclists, built on NVIDIA Jetson Orin Nano.

Cyclops mounted on a bicycle
3D visualization frontend on a phone

What It Does

Urban cycling is dangerous because riders have almost no visibility behind or beside them. Cyclops addresses this by mounting four cameras on a bicycle and running all processing on-device at the edge. The system detects vehicles and pedestrians in every direction, estimates their distance and closing speed, and determines vehicle orientation. Results are streamed over WebSocket to a web app on a handlebar-mounted phone, giving the rider continuous spatial awareness of their surroundings.

Key Features

  • 4-camera surround vision
  • YOLOv11s object detection with TensorRT FP16 inference
  • Kalman-filtered multi-object tracking across all camera feeds
  • Monocular depth estimation via per-camera calibrated look-up tables
  • Vehicle orientation detection (front/rear-facing vs side-on)
  • Radial speed and time-to-collision estimation
  • Real-time WebSocket streaming to a 3D visualization frontend
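The radial-speed and time-to-collision feature can be sketched from tracked distance estimates. This is a minimal illustration of the idea, not the project's actual C++ implementation; the function names are hypothetical:

```python
def radial_speed(prev_dist_m, curr_dist_m, dt_s):
    """Closing speed in m/s; positive when the object is approaching."""
    return (prev_dist_m - curr_dist_m) / dt_s

def time_to_collision(curr_dist_m, speed_mps):
    """Seconds until contact at the current closing rate; None if receding."""
    if speed_mps <= 0:
        return None
    return curr_dist_m / speed_mps

# A car that was 10 m away and is 8.8 m away half a second later:
v = radial_speed(10.0, 8.8, 0.5)   # ~2.4 m/s closing
ttc = time_to_collision(8.8, v)    # ~3.7 s
```

In practice the distance estimates come from the Kalman-filtered tracks, which smooths out per-frame depth noise before the speed is differenced.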

System Overview

System architecture and processing pipeline

Hardware

Rear node -- Jetson Orin Nano

  • 2x USB cameras (left and right)
  • 1x CSI camera (rear, wide FOV)
  • Handles all processing and streams results to the phone

Rear node enclosure (Jetson Orin Nano)

Front node -- Raspberry Pi 4

  • 1x CSI camera (forward-facing)
  • Streams H.264 video over WiFi (UDP RTP) to the Jetson for processing

Front node enclosure (Raspberry Pi 4)
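On the Jetson side, an H.264/RTP stream like the front node's can be consumed through OpenCV's GStreamer backend. The sketch below is an assumption, not the project's actual pipeline: the UDP port (5000), RTP caps, and software decoder (`avdec_h264`; a Jetson build might use a hardware decoder instead) are all illustrative:

```python
def rtp_h264_pipeline(port=5000):
    """GStreamer pipeline string for receiving H.264 over UDP RTP.

    Port and caps are assumptions; adjust to match the sender.
    """
    return (
        f"udpsrc port={port} caps=\"application/x-rtp,media=video,"
        "clock-rate=90000,encoding-name=H264,payload=96\" ! "
        "rtph264depay ! h264parse ! avdec_h264 ! "
        "videoconvert ! appsink drop=true max-buffers=1"
    )

def open_front_camera(port=5000):
    """Open the front node's stream; requires OpenCV built with GStreamer."""
    import cv2
    cap = cv2.VideoCapture(rtp_h264_pipeline(port), cv2.CAP_GSTREAMER)
    if not cap.isOpened():
        raise RuntimeError("failed to open RTP pipeline")
    return cap
```

`drop=true max-buffers=1` keeps the appsink from queueing stale frames, so detection always runs on the most recent image.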

Project Structure

Capstone/
  software/
    core/                   Main C++ application
      include/              Header files
      src/                  Implementation files
      camera_calibration/   Intrinsic calibration YAMLs and depth LUTs
      CMakeLists.txt
      build.sh              Build script
      main.cpp              Entry point
    models/                 TensorRT engine files (not tracked in git)
    scripts/
      service/              systemd user service for auto-start at boot
      object_detection.py   Python detection scripts (prototyping)
    calibration_tools/      Camera intrinsic calibration utilities
    tools/                  Debugging utilities
    config/                 Runtime config (auto-detected camera devices)
  enclosures/               CAD models for both enclosures
  docs/images/              Project images and diagrams

Prerequisites

  • NVIDIA Jetson Orin Nano with JetPack 6
  • CUDA 12
  • TensorRT 10+
  • OpenCV 4.x (with GStreamer support)
  • jetson-utils
  • IXWebSocket (optional, for WebSocket streaming)

Building

cd software/core
./build.sh

This cleans any previous build, runs CMake, compiles with all available cores, and places the cyclops binary in software/.

To enable the debug bounding-box overlay during development, edit build.sh and uncomment the DEBUG_DRAW CMake flag.

Running

Via systemd service (recommended)

The service auto-detects connected cameras at startup and launches Cyclops with the correct device paths.

cd software/scripts/service
./install_service.sh

systemctl --user start   cyclops    # start now
systemctl --user stop    cyclops    # stop
systemctl --user restart cyclops    # restart
systemctl --user status  cyclops    # check status
journalctl --user -u cyclops -f     # live logs

Manual (development)

cd software
./cyclops --usb-left /dev/video1 --usb-right /dev/video3
Flag               Description
--usb-left <dev>   USB left camera device (default: /dev/video1)
--usb-right <dev>  USB right camera device (default: /dev/video3)
--ws-port <port>   WebSocket server port (default: 8765)
--calib            Enter depth calibration mode
--cam <id>         Camera ID for calibration (0-3)
--class <name>     Object class for calibration (pedestrian, car, truck)
--dist <meters>    Known distance for calibration
--samples <count>  Number of calibration samples (default: 30)

Depth Calibration

Each camera needs a distance look-up table (LUT) that maps bounding box height in pixels to real-world distance in meters. To calibrate:

  1. Place a known object (car, pedestrian, or truck) at a measured distance from the camera.
  2. Run calibration mode:
    cd software
    ./cyclops --calib --cam 0 --class car --dist 5.0 --samples 30
  3. The system collects bounding box samples, fits a curve, and saves the LUT to core/camera_calibration/.
  4. Repeat for each camera and object class.
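Under a pinhole camera model, distance is roughly inversely proportional to bounding-box height in pixels, so a fit along the lines of d = k / h is one way such an LUT could be built. This is a sketch under that assumption, not the project's actual curve-fitting code:

```python
def fit_inverse_height(samples):
    """Least-squares fit of d = k / h over (bbox_height_px, distance_m) pairs.

    Minimizing sum((k/h - d)^2) over k gives k = sum(d/h) / sum(1/h^2).
    """
    num = sum(d / h for h, d in samples)
    den = sum(1.0 / (h * h) for h, _ in samples)
    return num / den

def build_lut(k, heights_px):
    """Distance look-up table: bounding-box height in pixels -> meters."""
    return {h: k / h for h in heights_px}

# A car measured at three known distances (height_px, distance_m):
k = fit_inverse_height([(200, 5.0), (100, 10.0), (50, 20.0)])
lut = build_lut(k, range(40, 241, 20))
```

Per-class tables are needed because a pedestrian and a truck of the same pixel height are at very different distances.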

WebSocket API

When built with WebSocket support, Cyclops streams detection data as a JSON array once per inference frame:

[
  {
    "id": "cam0_obj12",
    "type": "car",
    "speed": 2.35,
    "direction": "left",
    "distance": 8.40,
    "x": -1.20,
    "y": 0,
    "isFrontFacing": true
  }
]
Field          Description
id             Unique object identifier (camera + track ID)
type           Object class (car, truck, pedestrian, bicycle)
speed          Radial speed in m/s (positive = approaching)
direction      Camera direction (left, back, right, front)
distance       Estimated distance in meters
x              Lateral offset in meters
isFrontFacing  true if the vehicle is front/rear-facing, false if side-on
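A minimal consumer of this stream might look like the sketch below. The `websockets` package, the host name `jetson.local`, and the `nearby` helper are assumptions for illustration; only the port matches the documented `--ws-port` default:

```python
import json

def nearby(objects, max_dist_m=10.0):
    """Filter a decoded detection array to objects within max_dist_m."""
    return [o for o in objects if o["distance"] < max_dist_m]

async def listen(uri="ws://jetson.local:8765"):
    """Print close approaching objects from the Cyclops WebSocket feed."""
    import websockets  # pip install websockets
    async with websockets.connect(uri) as ws:
        async for message in ws:
            for obj in nearby(json.loads(message)):
                print(f"{obj['type']} ({obj['direction']}) at "
                      f"{obj['distance']:.1f} m, {obj['speed']:.1f} m/s")
```

Run it with `asyncio.run(listen())` from a machine on the same network as the Jetson, substituting the Jetson's actual address.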

Poster

Cyclops project poster
