RoboBank: AI-powered Trajectory Memory and Reflex Suggestion System

RoboBank is an interactive, web-based demonstration of a robotic trajectory memory and reflex suggestion system. It combines vector-based embeddings, trajectory simulation, and real-time retrieval to provide robots with safe fallback actions based on past experiences. While this demo operates entirely in a simulated 2D environment, the architecture is designed to transfer to real-world robotic platforms.


🚀 Motivation

Modern robots often encounter situations where pre-programmed controllers fail due to unexpected obstacles or dynamic environments. Human operators frequently intervene to correct mistakes, but fully autonomous systems need memory and intuition to handle novel scenarios safely.

RoboBank addresses this by:

  • Storing short trajectories and their sensor-action sequences as vector embeddings.
  • Labeling each trajectory with safety outcomes (safe, near-miss, unsafe).
  • Retrieving the most relevant past experiences in real-time to suggest safe reflex actions.

This system can serve as a fallback mechanism for real robots, reducing collisions and improving adaptability without explicit reprogramming.


🧠 How It Works: Mathematical Overview

  1. Trajectory Representation. Each trajectory is a sequence of:

    • Sensor readings over T timesteps (e.g., LIDAR distances) → S ∈ ℝ^{n_rays × T}
    • Corresponding actions over the same timesteps (velocity, angular velocity) → A ∈ ℝ^{2 × T}

    These are flattened and normalized into a single vector:

    $$v = \frac{[\,\mathrm{flatten}(S);\ \mathrm{flatten}(A)\,]}{\big\|[\,\mathrm{flatten}(S);\ \mathrm{flatten}(A)\,]\big\|}$$

    This vector is stored in Qdrant.

  2. Safety Labeling

    • collision: Boolean if the robot intersects an obstacle.
    • min_dist: Minimum distance to any obstacle during the trajectory.
    • near_miss: True if collision == False but min_dist < threshold.
  3. Vector Retrieval. Given the robot's current state $v_{\text{current}}$, we query Qdrant for the $k$ nearest trajectories $v_i$ using cosine similarity:

    $$\mathrm{similarity}(v_i, v_{\text{current}}) = \frac{v_i \cdot v_{\text{current}}}{\|v_i\|\,\|v_{\text{current}}\|}$$
  4. Reflex Suggestion. The system picks the retrieved trajectory that maximizes the product of safety score and similarity, and suggests its first action as the next step:

    $$a_{\text{suggested}} = \operatorname*{argmax}_{a_i \in \text{neighbors}} \big(\mathrm{safety\_score}_i \cdot \mathrm{similarity}_i\big)$$
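The four steps above can be sketched in a few lines of Python (a minimal NumPy sketch; function names and the 0.5 m near-miss threshold are illustrative choices, not taken from the actual backend):

```python
import numpy as np

def embed_trajectory(sensors: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Flatten S (n_rays x T) and A (2 x T), then L2-normalize."""
    v = np.concatenate([sensors.flatten(), actions.flatten()])
    return v / np.linalg.norm(v)

def label_outcome(collision: bool, min_dist: float, threshold: float = 0.5) -> str:
    """Safety label per the rules above; the threshold value is an assumption."""
    if collision:
        return "unsafe"
    return "near-miss" if min_dist < threshold else "safe"

def suggest_reflex(v_current, neighbors):
    """neighbors: (embedding, safety_score, first_action) tuples.
    Embeddings are unit vectors, so the dot product equals cosine similarity.
    Returns the first action of the best-scoring trajectory."""
    best = max(neighbors, key=lambda n: n[1] * float(np.dot(n[0], v_current)))
    return best[2]
```

Because every stored vector is L2-normalized at embedding time, retrieval scoring reduces to a plain dot product.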

🎨 Website Features

1. Interactive 2D Robot Simulator

  • Grid-based 10×10 room on a dark grey/black background.
  • Robot represented by a triangle with orientation arrow.
  • Obstacles drawn as grey squares; cursor effects highlight user clicks.

2. Manual Control

  • Arrow buttons for movement (velocity & angular velocity).
  • Real-time trajectory recording as the robot moves.

3. Trajectory Playback

  • Play back manually driven or retrieved trajectories.
  • Color-coded paths indicate safe (green), near-miss (orange), unsafe (red) outcomes.

4. Memory Panel

  • Displays past trajectories stored in Qdrant.
  • Clickable cards show trajectory path and outcome.

5. Reflex Suggestion

  • Query button retrieves nearest safe trajectories from Qdrant.
  • Suggested action displayed as a glowing arrow on the canvas.

6. Trajectory Planning

  • Combine multiple retrieved sequences to plan short paths.
  • Projected path is color-coded by predicted safety.

🌐 Real-world Applicability

  • Can replace simulation sensors with LIDAR, IMU, or cameras.
  • Actions can be mapped to motor commands on physical robots.
  • Enables autonomous fallback and adaptive behavior in dynamic environments.

Potential applications:

  • Warehouse robots avoiding collisions.
  • Delivery drones navigating cluttered environments.
  • Autonomous vehicles handling near-miss scenarios.
  • Assistive robots in hospitals or homes.

🔮 Future Directions

  • Extend to 3D environments and multi-robot coordination.
  • Continuous learning prioritizing novel and safe trajectories.
  • Multi-robot memory sharing.
  • Multi-modal embeddings (sensor + camera + semantic map).

🛠️ Setup Instructions

Prerequisites

  • Python 3.8+
  • Node.js 16+
  • Qdrant cloud account (or local instance)

1. Clone and Setup Environment

git clone https://github.com/sanyakapoor27/RoboBank
cd RoboBank

# Copy environment template
cp .env.example .env
# Edit .env with your Qdrant API credentials

2. Backend Setup

# Create virtual environment
cd backend
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Start FastAPI server
python main.py
# Backend will run on http://localhost:8000

3. Frontend Setup

cd frontend
# Install Node.js dependencies
npm install

# Start React development server
npm start
# Frontend will run on http://localhost:3000

4. Quick Start (One Command)

chmod +x start.sh
./start.sh
# Automatically starts both backend and frontend servers

🎮 How to Use

  1. Control the Robot

    • Arrow buttons to queue movement commands

      • ↑ = Forward, ↓ = Backward, ←/→ = Turn left/right
    • Click Play Trajectory to simulate the path

  2. Learn from Experience

    • Each trajectory is stored in Qdrant

    • Outcomes:

      • 🟢 Safe: No collisions
      • 🟠 Near-miss: Close to obstacles but safe
      • 🔴 Unsafe: Collision occurred
  3. Get AI Suggestions

    • Click Query Memory to find similar past situations
    • AI suggests a safe reflex action (glowing blue arrow)
    • Click Execute Suggestion to perform the move
  4. Browse Robot Memory

    • View all stored trajectories in the Memory Panel
    • Click any card to replay that trajectory
    • Color indicates outcome

🔧 Technical Details

Data Flow

Commands → /simulate → Robot physics simulation
Trajectory + Sensors → Vector embedding → Stored in Qdrant
Current situation → /query → Find similar past experiences
Neighbors → Safety analysis → Suggested reflex action
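Qdrant handles the vector search in the real backend; the store-then-query flow above can be illustrated with a brute-force in-memory stand-in (all names here are illustrative):

```python
import numpy as np

class TrajectoryMemory:
    """Brute-force stand-in for the Qdrant collection: stores unit-norm
    embeddings with payloads and returns the k most similar by dot product."""

    def __init__(self):
        self.vectors, self.payloads = [], []

    def upsert(self, vector, payload):
        self.vectors.append(np.asarray(vector, dtype=float))
        self.payloads.append(payload)

    def query(self, v_current, k=3):
        # Cosine similarity reduces to a dot product for unit vectors.
        sims = [float(np.dot(v, v_current)) for v in self.vectors]
        order = np.argsort(sims)[::-1][:k]
        return [(self.payloads[i], sims[i]) for i in order]
```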

Vector Embedding

  • Sensor data: 16-ray LIDAR readings × 5 timesteps = 80 dims
  • Action data: 2 actions × 5 timesteps = 10 dims
  • Total: 90-dimensional L2-normalized vectors
  • Distance metric: Cosine similarity
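The 90-dimensional figure follows directly from the sensor and action shapes listed above:

```python
N_RAYS, T, N_ACTIONS = 16, 5, 2             # values from the list above
sensor_dims = N_RAYS * T                    # 16 rays x 5 timesteps = 80
action_dims = N_ACTIONS * T                 # 2 actions x 5 timesteps = 10
embedding_dims = sensor_dims + action_dims  # 80 + 10 = 90
```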

Robot Physics

  • Environment: 10m × 10m box with obstacles
  • Robot model: Point robot with position (x, y) and orientation θ
  • Sensors: 16-ray LIDAR, 5 m range, π rad (180°) field of view
  • Dynamics: Simple kinematic model
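A simple kinematic model of this kind updates the pose as follows (a sketch; the demo's actual timestep and integration scheme are not documented here, so `dt = 0.1` is an assumption):

```python
import math

def step(x, y, theta, v, omega, dt=0.1):
    """Advance the point robot one timestep: translate v*dt along the
    current heading theta, then rotate the heading by omega*dt."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```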

📁 Project Structure

robotics-intuition-bank/
├── main.py                 # FastAPI backend
├── requirements.txt        # Python dependencies
├── .env.example            # Environment template
├── start.sh                # Startup script
├── src/
│   ├── App.js              # Main React app
│   ├── index.js            # React entry point
│   ├── index.css           # Global styles
│   └── components/
│       └── RoboticsIntuitionBank.jsx  # Main component
├── public/
│   └── index.html          # HTML template
├── package.json            # Node.js dependencies
└── tailwind.config.js      # Tailwind CSS config

