RoboBank is an interactive, web-based demonstration of a robotic trajectory memory and reflex suggestion system. It combines vector-based embeddings, trajectory simulation, and real-time retrieval to provide robots with safe fallback actions based on past experiences. While this demo operates entirely in a simulated 2D environment, the system architecture is fully compatible with real-world robotic platforms.
Modern robots often encounter situations where pre-programmed controllers fail due to unexpected obstacles or dynamic environments. Human operators frequently intervene to correct mistakes, but fully autonomous systems need memory and intuition to handle novel scenarios safely.
RoboBank addresses this by:
- Storing short trajectories and their sensor-action sequences as vector embeddings.
- Labeling each trajectory with safety outcomes (safe, near-miss, unsafe).
- Retrieving the most relevant past experiences in real-time to suggest safe reflex actions.
This system can serve as a fallback mechanism for real robots, reducing collisions and improving adaptability without explicit reprogramming.
Trajectory Representation

Each trajectory is a sequence of:
- Sensor readings over T timesteps (e.g., LIDAR distances) → S ∈ ℝ^{n_rays × T}
- Corresponding actions over the same timesteps (velocity, angular velocity) → A ∈ ℝ^{2 × T}

These are flattened, concatenated, and normalized into a single vector:

$$v = \frac{[S.\text{flatten}();\ A.\text{flatten}()]}{\|[S.\text{flatten}();\ A.\text{flatten}()]\|}$$

This vector is stored in Qdrant.

Safety Labeling

Each trajectory is labeled with:
- `collision`: True if the robot intersects an obstacle.
- `min_dist`: Minimum distance to any obstacle during the trajectory.
- `near_miss`: True if `collision == False` but `min_dist < threshold`.

Vector Retrieval

Given the robot's current state v_current, we query Qdrant for the k nearest trajectories v_i using cosine similarity:

$$\text{similarity}(v_i, v_{\text{current}}) = \frac{v_i \cdot v_{\text{current}}}{\|v_i\|\,\|v_{\text{current}}\|}$$

Reflex Suggestion

The system selects the next action as the first step of the safest retrieved trajectory:

$$a_{\text{suggested}} = \operatorname{argmax}_{i \in \text{neighbors}} \left(\text{safety\_score}_i \cdot \text{similarity}_i\right)$$
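The embedding and safety-weighted selection above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the actual backend code: the function names, the tuple layout of `neighbors`, and the numeric safety scores are assumptions.

```python
import numpy as np

def embed(sensors: np.ndarray, actions: np.ndarray) -> np.ndarray:
    """Flatten a sensor history (n_rays x T) and action history (2 x T)
    into one L2-normalized vector, as in the embedding formula above."""
    v = np.concatenate([sensors.flatten(), actions.flatten()])
    return v / np.linalg.norm(v)

def suggest_reflex(v_current, neighbors):
    """Pick the first action of the neighbor that maximizes
    safety_score * cosine similarity.

    `neighbors` is assumed to be a list of
    (embedding, first_action, safety_label) tuples returned by retrieval.
    """
    safety_score = {"safe": 1.0, "near_miss": 0.5, "unsafe": 0.0}  # assumed weights
    best, best_action = -np.inf, None
    for v_i, first_action, label in neighbors:
        sim = float(np.dot(v_i, v_current))  # cosine: embeddings are unit-norm
        score = safety_score[label] * sim
        if score > best:
            best, best_action = score, first_action
    return best_action
```

Because every stored vector is L2-normalized, the dot product alone gives the cosine similarity, which is why the demo can use Qdrant's cosine distance metric directly.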
- Grid-based 10×10 room on a dark grey/black background.
- Robot represented by a triangle with orientation arrow.
- Obstacles drawn as grey squares; cursor effects highlight user clicks.
- Arrow buttons for movement (velocity & angular velocity).
- Real-time trajectory recording as the robot moves.
- Play back manually driven or retrieved trajectories.
- Color-coded paths indicate safe (green), near-miss (orange), unsafe (red) outcomes.
- Displays past trajectories stored in Qdrant.
- Clickable cards show trajectory path and outcome.
- Query button retrieves nearest safe trajectories from Qdrant.
- Suggested action displayed as a glowing arrow on the canvas.
- Combine multiple retrieved sequences to plan short paths.
- Projected path is color-coded by predicted safety.
- Can replace simulation sensors with LIDAR, IMU, or cameras.
- Actions can be mapped to motor commands on physical robots.
- Enables autonomous fallback and adaptive behavior in dynamic environments.
Potential applications:
- Warehouse robots avoiding collisions.
- Delivery drones navigating cluttered environments.
- Autonomous vehicles handling near-miss scenarios.
- Assistive robots in hospitals or homes.
- Extend to 3D environments and multi-robot coordination.
- Continuous learning prioritizing novel and safe trajectories.
- Multi-robot memory sharing.
- Multi-modal embeddings (sensor + camera + semantic map).
- Python 3.8+
- Node.js 16+
- Qdrant cloud account (or local instance)
```bash
git clone https://github.com/sanyakapoor27/RoboBank

# Copy environment template
cp .env.example .env
# Edit .env with your Qdrant API credentials
```

```bash
# Create virtual environment
cd backend
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Start FastAPI server
python main.py
# Backend will run on http://localhost:8000
```

```bash
cd frontend
# Install Node.js dependencies
npm install
# Start React development server
npm start
# Frontend will run on http://localhost:3000
```

```bash
chmod +x start.sh
./start.sh
# Automatically starts both backend and frontend servers
```
1. Control the Robot
   - Use the arrow buttons to queue movement commands
   - ↑ = Forward, ↓ = Backward, ←/→ = Turn left/right
   - Click Play Trajectory to simulate the path

2. Learn from Experience
   - Each trajectory is stored in Qdrant
   - Outcomes:
     - 🟢 Safe: no collisions
     - 🟠 Near-miss: close to obstacles but no collision
     - 🔴 Unsafe: collision occurred

3. Get AI Suggestions
   - Click Query Memory to find similar past situations
   - The system suggests a safe reflex action (glowing blue arrow)
   - Click Execute Suggestion to perform the move

4. Browse Robot Memory
   - View all stored trajectories in the Memory Panel
   - Click any card to replay that trajectory
   - Color indicates the outcome
Data Flow
Commands → /simulate → Robot physics simulation
Trajectory + Sensors → Vector embedding → Stored in Qdrant
Current situation → /query → Find similar past experiences
Neighbors → Safety analysis → Suggested reflex action
Vector Embedding
- Sensor data: 16-ray LIDAR readings × 5 timesteps = 80 dims
- Action data: 2 actions × 5 timesteps = 10 dims
- Total: 90-dimensional L2-normalized vectors
- Distance metric: Cosine similarity
Robot Physics
- Environment: 10m × 10m box with obstacles
- Robot model: Point robot with position (x, y) and orientation θ
- Sensors: 16-ray LIDAR, 5m range, π FOV
- Dynamics: Simple kinematic model
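A simple kinematic model of this kind can be sketched as follows. The timestep `dt`, the update order (turn, then translate), and the wall-clamping behavior are illustrative assumptions, not necessarily what `main.py` implements.

```python
import math

def step(x, y, theta, v, omega, dt=0.1):
    """Advance the point robot one timestep.

    v is forward velocity and omega is angular velocity -- the two
    action dimensions stored in each trajectory.
    """
    # Turn first, then translate along the new heading (assumed order)
    theta = (theta + omega * dt) % (2 * math.pi)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    # Keep the robot inside the 10m x 10m room
    x = min(max(x, 0.0), 10.0)
    y = min(max(y, 0.0), 10.0)
    return x, y, theta
```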
robotics-intuition-bank/
├── main.py # FastAPI backend
├── requirements.txt # Python dependencies
├── .env.example # Environment template
├── start.sh # Startup script
├── src/
│ ├── App.js # Main React app
│ ├── index.js # React entry point
│ ├── index.css # Global styles
│ └── components/
│ └── RoboticsIntuitionBank.jsx # Main component
├── public/
│ └── index.html # HTML template
├── package.json # Node.js dependencies
└── tailwind.config.js # Tailwind CSS config