Autonomous Multi-Domain Coordination Platform
Vigilance is an AI-powered, real-time command dashboard designed for high-stakes operational environments such as border security and multi-domain surveillance. It fuses high-volume sensor telemetry, predictive threat modeling, and generative AI into a single, comprehensive "Ontology" of the battlespace.
The Vigilance platform is built on a microservices architecture, emphasizing low-latency data fusion, high-availability storage, and advanced machine learning models.
```mermaid
graph TD;
    Client[React Frontend Dashboard] <--> |REST / WebSockets| API(Node.js Gateway API);
    API <--> Cache[(Redis - Pub/Sub & Cache)];
    API <--> Telemetry[(TimescaleDB - Time-Series)];
    API <--> Graph[(Neo4j - Entity Link Analysis)];
    API <--> ML[Python FastAPI - ML Service];
    ML --> CV[YOLOv8 / RT-DETR Vision Model];
    ML --> NLP[LangChain / Llama 3 Vanguard AI];
    ML --> Predict[Transformer Trajectory Prediction];
    ML --> Swarm[Heuristic Drone Auto-Tasking];
    Sensors[Drones / Radar / Thermal] --> |Kafka/MQTT| API;
```
- Framework: React 18 + Vite + TypeScript
- Styling: Tailwind CSS (Dark Tactical Theme)
- Features:
  - Nexus Graph: Force-directed entity link analysis (`react-force-graph-2d`).
  - Chronos Interface: Temporal scrubbing for historical tracking and future trajectory projection.
  - Vanguard AI: Conversational operational copilot overlay.
  - Multi-Spectral Video: Real-time optical, thermal, night-vision, and SAR feeds with biometric anomaly overlays.
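The Chronos interface's forward projection comes from the ML service's trajectory model, but the idea of a projected "ghost-track" can be sketched with a plain constant-velocity extrapolation. This is an illustrative stand-in, not the production Transformer, and the `(t, x, y)` track format is an assumption:

```python
def ghost_track(track, horizon_s: float, step_s: float = 1.0):
    """Project future positions from the last two fixes.

    Constant-velocity extrapolation only -- the real service uses a learned
    Transformer model; this is a minimal illustration of the output shape.
    `track` is a list of (t, x, y) tuples, oldest first (hypothetical format).
    """
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    dt = t1 - t0
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    points = []
    t = step_s
    while t <= horizon_s:
        points.append((t1 + t, x1 + vx * t, y1 + vy * t))
        t += step_s
    return points

# Two fixes one second apart, projected three seconds ahead:
print(ghost_track([(0.0, 0.0, 0.0), (1.0, 2.0, 1.0)], horizon_s=3.0))
# [(2.0, 4.0, 2.0), (3.0, 6.0, 3.0), (4.0, 8.0, 4.0)]
```

The dashboard can render these points as a faded path ahead of the live track.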
- Framework: Node.js + Express + TypeScript
- Real-Time: Socket.io for sub-second telemetry broadcasting.
- Role: Handles client authentication, API routing, and acts as the orchestrator between databases and the ML microservice.
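The exact shape of the telemetry frames broadcast over Socket.io is not documented here; the sketch below assumes a minimal JSON envelope (`entity_id`, `ts`, `lat`, `lon` plus optional kinetic fields) and shows the kind of validation a consumer might apply before rendering:

```python
import json

# Hypothetical minimal telemetry schema -- field names are assumptions.
REQUIRED = {"entity_id", "ts", "lat", "lon"}

def parse_telemetry(frame: str) -> dict:
    """Decode one telemetry frame and reject it if required fields are missing."""
    msg = json.loads(frame)
    missing = REQUIRED - msg.keys()
    if missing:
        raise ValueError(f"telemetry frame missing fields: {sorted(missing)}")
    return msg

frame = json.dumps({"entity_id": "VEHICLE-1", "ts": 1700000000.0,
                    "lat": 31.77, "lon": -106.48, "speed_mps": 12.4})
print(parse_telemetry(frame)["entity_id"])  # VEHICLE-1
```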
- Framework: Python 3.10 + FastAPI
- Models:
- Vision: Object detection and multi-spectral anomaly scoring.
- Prediction: Time-series forecasting for threat paths (ghost-tracks).
- LLM / RAG: Generative AI for operational queries against the Knowledge Graph.
- Swarm Logic: Dynamic multi-agent routing for drone interception.
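The swarm logic's routing heuristic is not specified in this README; a minimal greedy sketch conveys the idea, assuming each threat carries a priority and each drone a 2-D position (both assumptions for illustration):

```python
import math

def assign_drones(drones: dict, threats: list) -> dict:
    """Greedy auto-tasking heuristic (illustrative, not the shipped logic):
    threats are handled in descending priority, each claiming the nearest
    still-unassigned drone."""
    available = dict(drones)  # drone_id -> (x, y)
    tasking = {}
    for threat in sorted(threats, key=lambda t: -t["priority"]):
        if not available:
            break  # more threats than drones; lowest priorities go unserved
        # Pick the drone with minimum Euclidean distance to the threat.
        drone_id = min(available, key=lambda d: math.dist(available[d], threat["pos"]))
        tasking[threat["id"]] = drone_id
        del available[drone_id]
    return tasking

drones = {"D1": (0.0, 0.0), "D2": (10.0, 10.0)}
threats = [
    {"id": "T1", "pos": (9.0, 9.0), "priority": 5},
    {"id": "T2", "pos": (1.0, 1.0), "priority": 2},
]
print(assign_drones(drones, threats))  # {'T1': 'D2', 'T2': 'D1'}
```

A production router would also weigh fuel, airspace constraints, and re-tasking cost, but the greedy form shows the input/output contract.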
- Graph Database (Neo4j): Stores the relationships between entities (e.g., `VEHICLE-1 -[:PROXIMATE_TO]-> PERSONNEL-A`).
- Time-Series (TimescaleDB / PostgreSQL): Stores high-frequency sensor readings, bounding box coordinates, and kinetic vectors.
- Cache (Redis): Manages ephemeral state, active session data, and WebSocket Pub/Sub.
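As a sketch of how an entity link like `PROXIMATE_TO` might be upserted into Neo4j, the helper below builds a parameterized Cypher statement. The node label, relationship properties, and function name are assumptions, not the platform's actual schema:

```python
def proximity_link_query(entity_a: str, entity_b: str, distance_m: float):
    """Build a Cypher upsert for a proximity link between two entities.
    Schema (Entity label, distance_m property) is hypothetical."""
    cypher = (
        "MERGE (a:Entity {id: $a}) "
        "MERGE (b:Entity {id: $b}) "
        "MERGE (a)-[r:PROXIMATE_TO]->(b) "
        "SET r.distance_m = $dist, r.updated_at = timestamp()"
    )
    return cypher, {"a": entity_a, "b": entity_b, "dist": distance_m}

query, params = proximity_link_query("VEHICLE-1", "PERSONNEL-A", 12.5)
# With the official neo4j driver: session.run(query, params)
```

Keeping values in `$`-parameters (rather than string interpolation) lets the driver handle escaping and lets Neo4j cache the query plan.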
- Docker & Docker Compose (Recommended)
- Node.js 20+
- Python 3.10+ (for local ML dev)
The fastest way to spin up the entire Vigilance stack (Frontend, Backend Gateway, ML Service, and Databases) is via Docker Compose.
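The compose file itself is not reproduced in this README; the fragment below is a trimmed-down sketch of the topology it implies. Service names, images, and build paths are assumptions based on the architecture described above, with ports matching the access points listed below:

```yaml
# Hypothetical, abbreviated docker-compose.yml topology (not the shipped file)
services:
  frontend:
    build: ./frontend
    ports: ["3000:3000"]
  backend:
    build: ./backend
    ports: ["3001:3001"]
    depends_on: [redis, neo4j, postgres, ml]
  ml:
    build: ./ml
    ports: ["8000:8000"]
  redis:
    image: redis:7
  neo4j:
    image: neo4j:5
  postgres:
    image: timescale/timescaledb:latest-pg15
```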
```bash
git clone <repository-url>
cd vigilance-dashboard

# Build and start all services in detached mode
docker-compose up --build -d
```

Access Points:
- Dashboard UI: http://localhost:3000
- Backend API: http://localhost:3001
- ML Service Docs: http://localhost:8000/docs
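To confirm the stack came up, a small probe over the access points above can save digging through container logs. The script is a convenience sketch (not part of the repository); the URLs are the defaults listed above:

```python
import urllib.request

# Default local access points from the quick start above.
SERVICES = {
    "Dashboard UI": "http://localhost:3000",
    "Backend API": "http://localhost:3001",
    "ML Service": "http://localhost:8000/docs",
}

def check(url: str, probe=urllib.request.urlopen) -> bool:
    """Return True if the endpoint answers with an HTTP status below 500."""
    try:
        with probe(url, timeout=2) as resp:
            return resp.status < 500
    except OSError:
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(f"{name:14s} {'UP' if check(url) else 'DOWN'}")
```

The `probe` parameter exists only so the check can be exercised without a live stack.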
If you prefer to run services individually for active development:
1. Infrastructure (Databases)

```bash
docker-compose up redis neo4j postgres -d
```

2. ML Service (Python)

```bash
cd ml
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
uvicorn main:app --reload --port 8000
```

3. Backend Gateway (Node.js)

```bash
cd backend
npm install
npm run dev
```

4. Frontend Dashboard (React)

```bash
cd frontend
npm install
npm run dev
```

The platform is fully configured for production deployment using Render (for the backend and ML services) and Vercel (for the frontend dashboard).
Before deploying the services, you must provision managed databases (e.g., Supabase/Neon for Postgres, AuraDB for Neo4j, Upstash/Render for Redis). Keep their connection URIs handy.
The repository includes a `render.yaml` Blueprint that automatically configures both the Node.js API and the Python ML service.
- Create a Render account and connect your GitHub repository.
- Click New + > Blueprint.
- Select this repository. Render will automatically detect the vigilance-backend and vigilance-ml-service services from the render.yaml file.
- Fill in the required environment variables in the Render dashboard for the Node.js Backend:
  - CORS_ORIGIN: Your soon-to-be Vercel frontend URL (e.g., https://your-app.vercel.app)
  - ML_SERVICE_URL: The internal Render URL of your ML Service (e.g., http://vigilance-ml-service:8000)
  - DATABASE_URL: Your production PostgreSQL URI
  - NEO4J_URI, NEO4J_USER, NEO4J_PASSWORD: Your AuraDB credentials
  - REDIS_URL: Your Redis instance URI
  - JWT_SECRET: A secure random string for authentication
- Fill in the required environment variables for the Python ML Service:
  - CORS_ORIGIN: Your soon-to-be Vercel frontend URL (e.g., https://your-app.vercel.app)
  - OPENAI_API_KEY: (Optional) Only needed if using the external LLM copilot.
The React dashboard is optimized for Vercel. SPA routing is automatically handled via the included vercel.json.
- Create a Vercel account and select Add New... > Project.
- Import this repository.
- In the "Configure Project" step, update the following:
- Framework Preset: Vite
- Root Directory: frontend
- Add the following Environment Variables:
  - VITE_API_URL: The public URL of your deployed Node.js backend on Render (e.g., https://vigilance-backend.onrender.com/api).
- Click Deploy.
Environment variables map core service connections. Copy the example file in each directory:
Backend (backend/.env):
```env
PORT=3001
NODE_ENV=development
ML_SERVICE_URL=http://localhost:8000
REDIS_URL=redis://localhost:6379
NEO4J_URI=bolt://localhost:7687
NEO4J_USER=neo4j
NEO4J_PASSWORD=secret
DATABASE_URL=postgres://user:pass@localhost:5432/vigilance
```

ML Service (ml/.env):

```env
MODEL_CACHE_DIR=./models
OPENAI_API_KEY=your-key-here  # For Vanguard AI (if using external LLM)
```

Proprietary Software. All rights reserved. Not for public distribution.