
πŸ† SafeOrbit

AI-Powered Safety Equipment Detection System for Space Stations

A complete full-stack solution combining YOLOv8-based object detection with a cross-platform mobile application for real-time safety equipment monitoring in space station environments.

Python React Native TypeScript YOLOv8 Expo FastAPI

Demo video: ok2.mp4

📋 Table of Contents

  • Overview
  • Features
  • Architecture
  • Quick Start
  • AI Engine Setup
  • Mobile App Setup
  • API Documentation
  • Model Performance
  • Project Structure
  • Deployment
  • Contributing
  • License
  • Acknowledgments
  • Contact
🎯 Overview

SafeOrbit is an intelligent safety monitoring system designed for space station environments. It combines advanced computer vision with a modern mobile interface to detect and track critical safety equipment in real-time.

Key Components

  • AI Engine: YOLOv8m-based object detection model achieving 86.7% mAP@0.5
  • Mobile App: Cross-platform React Native application with on-device ML inference
  • REST API: FastAPI backend for model serving and real-time predictions
  • ONNX Runtime: On-device inference for iOS, Android, and web platforms

Detected Safety Equipment

The system identifies 7 critical safety equipment classes:

Class ID | Equipment           | Purpose
---------|---------------------|----------------------------
0        | Oxygen Tank         | Emergency oxygen supply
1        | Nitrogen Tank       | Nitrogen storage
2        | First Aid Box       | Medical emergency kit
3        | Fire Alarm          | Fire detection system
4        | Safety Switch Panel | Emergency control panel
5        | Emergency Phone     | Emergency communication
6        | Fire Extinguisher   | Fire suppression equipment
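For reference, the table above can be expressed as a small lookup, handy when decoding raw class IDs from model output. The label strings here are transcribed from the table; the exact identifiers used by the model may differ (the sample API response later in this README uses "FireExtinguisher", without a space).

```python
# Class-ID to equipment-name mapping, transcribed from the table above.
CLASS_NAMES = {
    0: "Oxygen Tank",
    1: "Nitrogen Tank",
    2: "First Aid Box",
    3: "Fire Alarm",
    4: "Safety Switch Panel",
    5: "Emergency Phone",
    6: "Fire Extinguisher",
}

def class_name(class_id: int) -> str:
    """Resolve a raw class ID from model output to a readable name."""
    return CLASS_NAMES.get(class_id, "Unknown")
```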

✨ Features

🤖 AI-Powered Detection

  • High Accuracy: 86.7% mAP@0.5, 95.52% precision
  • Real-time Processing: Fast inference with ONNX optimization
  • Multi-platform: CPU and GPU support with automatic fallback
  • Domain Adaptation: Optimized for real-world space station imagery

📱 Mobile Application

  • Cross-platform: iOS, Android, and web support via React Native
  • Live Scanning: Real-time camera-based object detection
  • On-device ML: ONNX Runtime for privacy and speed
  • Secure Authentication: Clerk-based authentication system
  • Analytics Dashboard: Visual insights and scanning history
  • Offline Capable: Core detection works without internet

🔧 Developer Experience

  • TypeScript: 100% type-safe codebase
  • Modern UI: Tailwind CSS via NativeWind
  • Production Ready: Comprehensive logging and error handling
  • Well Documented: Extensive documentation and code comments

πŸ—οΈ Architecture

┌──────────────────────────────────────────────────────────────┐
│                    Mobile App (React Native)                 │
├──────────────────────────────────────────────────────────────┤
│  • Expo Router navigation                                    │
│  • On-device ONNX inference                                  │
│  • Real-time camera scanning                                 │
│  • Analytics dashboard                                       │
│  • Clerk authentication                                      │
└──────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌──────────────────────────────────────────────────────────────┐
│                    REST API (FastAPI)                        │
├──────────────────────────────────────────────────────────────┤
│  • /predict endpoint                                         │
│  • Base64 image processing                                   │
│  • CORS handling                                             │
│  • Health checks                                             │
└──────────────────────────────────────────────────────────────┘
                            │
                            ▼
┌──────────────────────────────────────────────────────────────┐
│                    AI Engine (YOLOv8m)                       │
├──────────────────────────────────────────────────────────────┤
│  • YOLOv8m architecture                                      │
│  • Custom trained weights (204 epochs)                       │
│  • ONNX export support                                       │
│  • CPU/GPU inference                                         │
│  • Advanced augmentation pipeline                            │
└──────────────────────────────────────────────────────────────┘

🚀 Quick Start

Prerequisites

  • Python: 3.8 or higher
  • Node.js: 18.x or higher
  • Git: Latest version
  • Expo CLI: For mobile development
  • CUDA (optional): For GPU acceleration

1. Clone the Repository

git clone https://github.com/Namann-14/safeorbit.git
cd safeorbit

2. AI Engine Setup

cd ai-engine
pip install -r requirements.txt

Create a .env file (optional):

MODEL_PATH=results/improved_model/train/weights/best.pt
DEVICE=cpu
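A minimal sketch of how these variables might be read at startup. This assumes api.py uses plain environment variables; the actual loading code may differ (e.g. python-dotenv).

```python
import os

# Defaults mirror the sample .env above; treat this as a sketch, not the
# authoritative api.py behavior.
MODEL_PATH = os.getenv("MODEL_PATH", "results/improved_model/train/weights/best.pt")
DEVICE = os.getenv("DEVICE", "cpu")  # "cpu", or e.g. "cuda:0" with a GPU
```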

Start the API server:

python api.py

The API will be available at http://localhost:8000

3. Mobile App Setup

cd expo
npm install

Configure environment variables in .env.local:

EXPO_PUBLIC_CLERK_PUBLISHABLE_KEY=your_clerk_key_here
EXPO_PUBLIC_API_BASE_URL=http://localhost:8000

Start the development server:

npm run dev

4. Access the Application

Once both servers are running, the API is reachable at http://localhost:8000 (interactive docs at http://localhost:8000/docs), and the mobile app is served by the Expo development server: open it in a simulator, in the browser, or by scanning the QR code with the Expo Go app on a physical device.

🤖 AI Engine Setup

Training the Model

The AI engine includes comprehensive training scripts and configurations.

Basic Training:

cd ai-engine
python train.py

Advanced Training with Optimization:

python scripts/train.py --config configs/train_config.yaml

Master Pipeline (Complete Automation):

python scripts/master_pipeline.py

Configuration Files

  • configs/dataset.yaml: Dataset paths and class definitions
  • configs/train_config.yaml: Training hyperparameters
  • configs/augmentation_config.yaml: Data augmentation settings

Model Inference

Python API:

from ultralytics import YOLO

model = YOLO('results/improved_model/train/weights/best.pt')
results = model.predict('image.jpg', conf=0.25)

Command Line:

python predict.py --source image.jpg --conf 0.25

FastAPI Endpoint:

curl -X POST "http://localhost:8000/predict" \
  -H "Content-Type: application/json" \
  -d '{"image": "base64_encoded_image", "confidence": 0.25}'
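The same request can be issued from Python using only the standard library; a hedged sketch, assuming the server from the Quick Start is running on localhost:8000:

```python
import base64
import json
from urllib import request

PREDICT_URL = "http://localhost:8000/predict"  # assumed local dev server

def encode_image(path: str) -> str:
    """Read an image file and return its base64-encoded contents."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def detect(path: str, confidence: float = 0.25, url: str = PREDICT_URL) -> dict:
    """POST an image to /predict and return the parsed JSON response."""
    payload = json.dumps({"image": encode_image(path), "confidence": confidence})
    req = request.Request(
        url,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```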

Export to ONNX

python -c "from ultralytics import YOLO; YOLO('results/improved_model/train/weights/best.pt').export(format='onnx')"
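The exported best.onnx takes a fixed 640×640 input (matching the training image size noted under Model Performance). A common preprocessing step is letterbox resizing; here is a dependency-light sketch for an H×W×3 uint8 image. Real pipelines usually resize bilinearly via OpenCV or Pillow; nearest-neighbour sampling is used here only to keep the example NumPy-only.

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 640) -> np.ndarray:
    """Fit an HxWx3 uint8 image onto a grey `size` x `size` canvas,
    preserving aspect ratio, and return a 1x3xSxS float32 tensor in [0, 1]."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via index sampling (sketch only).
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    canvas = np.full((size, size, 3), 114, dtype=np.uint8)  # grey padding
    top, left = (size - nh) // 2, (size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    x = canvas.astype(np.float32) / 255.0
    return x.transpose(2, 0, 1)[None]  # NCHW batch of one
```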

📱 Mobile App Setup

Running on Different Platforms

iOS Simulator (macOS only):

npm run ios

Android Emulator:

npm run android

Web Browser:

npm run web

Physical Device:

npm run dev
# Scan QR code with Expo Go app

Key Features

  • Authentication: Sign up, sign in, password reset, email verification
  • Live Scanning: Real-time camera-based object detection
  • Dashboard: Analytics with charts and statistics
  • Scan History: Review past detections with details
  • Settings: User preferences and account management

Project Structure

expo/
├── app/                    # Application screens
│   ├── (auth)/            # Authentication flows
│   ├── (tabs)/            # Main tabbed interface
│   └── index.tsx          # Entry point
├── components/            # Reusable UI components
│   ├── ui/               # Base UI primitives
│   ├── dashboard/        # Dashboard components
│   └── settings/         # Settings components
├── lib/                   # Utilities and configuration
│   ├── api-config.ts     # API client setup
│   ├── storage.ts        # AsyncStorage helpers
│   └── utils.ts          # Utility functions
└── assets/
    └── models/
        └── best.onnx     # ONNX model for on-device inference

📚 API Documentation

Endpoints

POST /predict

Detect objects in an image.

Request:

{
  "image": "base64_encoded_image",
  "confidence": 0.25,
  "use_tta": true
}

Response:

{
  "objects": [
    {
      "name": "FireExtinguisher",
      "confidence": 0.95,
      "bbox": {
        "x": 100,
        "y": 150,
        "width": 200,
        "height": 300
      }
    }
  ],
  "inference_time": 0.245,
  "image_size": [640, 480]
}
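Client code can thin this response down to high-confidence hits; a small helper, with field names taken from the sample response above:

```python
def filter_detections(response: dict, min_conf: float = 0.5) -> list:
    """Return (name, confidence, bbox) tuples for detections at or above
    `min_conf`, most confident first. Field names follow the sample
    /predict response in this README."""
    hits = [
        (obj["name"], obj["confidence"], obj["bbox"])
        for obj in response.get("objects", [])
        if obj["confidence"] >= min_conf
    ]
    return sorted(hits, key=lambda t: t[1], reverse=True)
```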

GET /health

Check API health status.

Response:

{
  "status": "healthy",
  "model_loaded": true,
  "model_path": "results/improved_model/train/weights/best.pt"
}

Interactive Documentation

Visit http://localhost:8000/docs for Swagger UI with interactive API testing.


📊 Model Performance

Validation Metrics (Epoch 204)

Metric       | Value
-------------|-------
mAP@0.5      | 86.7%
mAP@0.5:0.95 | 76.3%
Precision    | 95.52%
Recall       | 74.31%

Per-Class Performance

Detection quality holds up across all 7 safety equipment classes, with high precision and competitive recall in each.

Training Configuration

  • Model: YOLOv8m (medium variant)
  • Epochs: 204 (training stopped early on a CUDA out-of-memory error)
  • Image Size: 640x640
  • Batch Size: Adaptive based on GPU memory
  • Optimizer: AdamW with cosine learning rate schedule
  • Augmentations: Advanced pipeline including mixup, mosaic, and spatial transforms
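These settings correspond roughly to the following ultralytics call. This is a sketch only: the authoritative values live in configs/train_config.yaml, and the augmentation strengths and epoch budget below are assumptions.

```python
# Hyperparameters mirroring the bullets above. Exact values live in
# configs/train_config.yaml; mixup/mosaic strengths and the epoch budget
# (the documented run stopped early at epoch 204) are assumptions.
TRAIN_ARGS = dict(
    data="configs/dataset.yaml",  # dataset paths and the 7 class names
    imgsz=640,                    # 640x640 training resolution
    optimizer="AdamW",
    cos_lr=True,                  # cosine learning-rate schedule
    epochs=300,                   # assumed budget; run halted at 204
    mixup=0.1,                    # assumed augmentation strengths
    mosaic=1.0,
)

def launch_training(args: dict = TRAIN_ARGS):
    """Kick off a YOLOv8m run; requires the `ultralytics` package."""
    from ultralytics import YOLO  # deferred: heavy optional dependency
    return YOLO("yolov8m.pt").train(**args)  # medium variant, as above
```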

πŸ“ Project Structure

safeorbit/
├── ai-engine/                      # AI/ML Backend
│   ├── api.py                     # FastAPI server
│   ├── predict.py                 # Inference script
│   ├── train.py                   # Training script
│   ├── requirements.txt           # Python dependencies
│   ├── configs/                   # Configuration files
│   ├── scripts/                   # Advanced training scripts
│   ├── utils/                     # Utility modules
│   └── results/                   # Training outputs
│       └── improved_model/
│           └── train/
│               └── weights/
│                   └── best.pt    # Best model checkpoint
│
└── expo/                          # Mobile Application
    ├── app/                       # App screens and routes
    ├── components/                # UI components
    ├── lib/                       # Utilities and config
    ├── assets/                    # Static assets
    │   └── models/
    │       └── best.onnx         # ONNX model
    ├── package.json               # Node dependencies
    └── tsconfig.json              # TypeScript config

🚀 Deployment

AI Engine Deployment

Docker:

FROM python:3.8-slim
WORKDIR /app
COPY ai-engine/ .
RUN pip install -r requirements.txt
CMD ["python", "api.py"]

Production Server:

uvicorn api:app --host 0.0.0.0 --port 8000 --workers 4

Mobile App Deployment

iOS:

eas build --platform ios

Android:

eas build --platform android

Web:

npm run build:web

🤝 Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License.


πŸ™ Acknowledgments

  • YOLOv8: Ultralytics for the excellent object detection framework
  • Expo: For simplifying cross-platform mobile development
  • React Native Community: For comprehensive ecosystem and support

📞 Contact

Project Maintainer: Namann-14
Repository: github.com/Namann-14/safeorbit


Built with ❀️ for safer space exploration
