REGULUS-AI

Safety-First Modular AGI Prototype

"Intelligence must be regulated by architecture, not controlled after deployment."


Overview

REGULUS-AI is a modular, controllable AGI prototype that prioritizes:

  • Safety over capability
  • Control over autonomy
  • Interpretability over optimization
  • Stability over scale

This system demonstrates that safe AI can be built through architectural constraints rather than post-deployment control.


Quick Start

1. Install Dependencies

pip install -r requirements.txt

2. Configure API Keys

# For Gemini (default)
export GEMINI_API_KEY=your-gemini-api-key
export GEMINI_MODEL=gemini-2.5-flash

# Or for OpenAI
export LLM_PROVIDER=openai
export OPENAI_API_KEY=your-openai-api-key

# Or for Anthropic
export LLM_PROVIDER=anthropic
export ANTHROPIC_API_KEY=your-anthropic-api-key

3. Run REGULUS

# Interactive CLI (default)
python main.py

# 🌐 Web UI
python main.py --ui

# 🎤 Voice conversation mode
python main.py --voice

# Single query
python main.py --query "Hello, REGULUS"

Architecture

User Input
    ↓
┌─────────────────┐
│ Perception      │  ← Intent detection, risk classification
└────────┬────────┘
         ↓
┌─────────────────┐
│ Orchestrator    │  ← Central controller, kill switch
└────────┬────────┘
         ↓
┌─────────────────┐
│ Cognitive       │  ← Reasoning, planning (NO execution authority)
└────────┬────────┘
         ↓
┌─────────────────┐
│ Stability       │  ← CRITICAL SAFETY: detects unsafe patterns
└────────┬────────┘
         ↓
┌─────────────────┐
│ Ethics          │  ← Harm prevention, policy enforcement
└────────┬────────┘
         ↓
┌─────────────────┐
│ Execution       │  ← Sandboxed action execution (NO autonomy)
└────────┬────────┘
         ↓
┌─────────────────┐
│ Feedback        │  ← Logging, metrics (NO autonomous learning)
└─────────────────┘
         ↓
    Response

Core Agents

Agent          Role                                            Authority
Perception     Analyze input, detect intent, classify risk     Read-only
Cognitive      Generate reasoning and proposed responses       Think only, NO execution
Stability      Detect unsafe patterns, prevent runaway loops   Can TERMINATE system
Ethics         Evaluate harm potential, enforce policies       Can BLOCK responses
Execution      Execute approved actions only                   NO decision logic
Feedback       Log outcomes, record metrics                    NO autonomous learning
Orchestrator   Coordinate all agents, enforce pipeline         Kill switch authority
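The agent table above can be sketched as a minimal pipeline in which the safety agents hold veto power before anything is executed. This is an illustrative sketch only; the agent callables and the `Verdict` shape are assumptions, not the actual classes in regulus/agents/:

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    approved: bool
    reason: str = ""

class PipelineHalted(Exception):
    """Raised when a safety agent blocks the request before execution."""

def run_pipeline(user_input, agents):
    """Run input through perception -> cognitive -> stability -> ethics -> execution.

    `agents` is a dict of callables (hypothetical shape). Stability and Ethics
    each return a Verdict; a rejection halts the pipeline so the Execution
    agent never sees an unapproved proposal.
    """
    intent = agents["perception"](user_input)      # read-only analysis
    proposal = agents["cognitive"](intent)         # think only, no execution
    for gate in ("stability", "ethics"):
        verdict = agents[gate](proposal)
        if not verdict.approved:                   # veto before execution
            raise PipelineHalted(f"{gate} blocked: {verdict.reason}")
    result = agents["execution"](proposal)         # sandboxed, approved only
    agents["feedback"](result)                     # logging, no learning
    return result
```

The key design point is ordering: execution sits strictly after both safety gates, so authority is enforced by control flow rather than by convention.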

Safety Features

Hard Constraints (Architectural)

  • ❌ No self-modification - Code cannot change itself
  • ❌ No autonomous goals - Cannot create its own objectives
  • ❌ No self-preservation - No instinct to protect itself
  • ❌ No recursive improvement - Cannot enhance its own capabilities
  • ❌ No external system control - Sandboxed execution only

Mandatory Controls

  • ✅ Kill switch - Immediate system termination
  • ✅ Human override - Human-in-the-loop at any stage
  • ✅ Audit logging - All actions traceable
  • ✅ Policy enforcement - Hard-coded ethical constraints
  • ✅ Risk gating - Automatic blocking of high-risk inputs
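A kill switch of the kind listed above can be expressed as a latching flag that every pipeline stage checks. This is a minimal sketch, assuming a thread-safe event with no reset path; the real implementation lives in the orchestrator:

```python
import threading

class KillSwitch:
    """Latching kill switch: once activated it cannot be cleared (illustrative)."""

    def __init__(self):
        self._event = threading.Event()

    def activate(self, reason="manual"):
        # Irreversible by construction: no clear()/reset method is exposed.
        self._event.set()
        return reason

    @property
    def active(self):
        return self._event.is_set()

    def check(self):
        """Call at every pipeline stage; abort immediately if activated."""
        if self._event.is_set():
            raise SystemExit("kill switch active")
```

Using `threading.Event` makes activation safe from any thread (CLI, web UI, or voice handler) without extra locking.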

Failure Handling

Failure Type       Response
Infinite loop      Reset
Unsafe output      Block
Policy violation   Terminate
Unknown state      Rollback
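The failure table above amounts to a fixed dispatch map. A small sketch, with illustrative names and a fail-closed default for anything unrecognized:

```python
FAILURE_POLICY = {
    "infinite_loop": "reset",
    "unsafe_output": "block",
    "policy_violation": "terminate",
    "unknown_state": "rollback",
}

def handle_failure(failure_type, actions):
    """Map a detected failure to its mandated response.

    `actions` maps response names to zero-argument callables (hypothetical
    shape). Unrecognized failure types fall back to 'terminate' -- the system
    fails closed, never open.
    """
    response = FAILURE_POLICY.get(failure_type, "terminate")
    return actions[response]()
```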

CLI Commands

Command    Description
/help      Show available commands
/status    Display system status
/voice     🎤 Start voice conversation mode
/kill      Activate kill switch (irreversible)
/reset     Reset system state
/trace     Show execution trace of last request
/verbose   Toggle verbose logging
/exit      Exit REGULUS
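A command loop like the one above typically routes slash-commands to handlers and sends everything else through the safety pipeline. A sketch under assumed names; this is not the actual regulus/cli.py API:

```python
def dispatch_command(line, handlers, fallback):
    """Route a /command to its handler; plain text goes to the safety pipeline.

    `handlers` maps command names (without the slash) to zero-argument
    callables; `fallback` processes ordinary text. Both are hypothetical.
    """
    line = line.strip()
    if line.startswith("/"):
        parts = line[1:].split()
        handler = handlers.get(parts[0].lower()) if parts else None
        if handler is None:
            return "Unknown command (try /help)"
        return handler()
    return fallback(line)  # plain text -> full safety pipeline
```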

🌐 Web UI

REGULUS provides a browser-based web interface (built with Gradio) for interaction:

python main.py --ui

Open http://localhost:7860 in your browser.

Features

  • Chat Interface - Interactive conversation with REGULUS
  • System Status - Real-time display of system state and kill switch status
  • Pipeline Activity - See each agent's assessment (Perception, Cognitive, Stability, Ethics)
  • Kill Switch - Emergency termination button
  • Audit Log - View recent session history
  • Example Queries - Pre-built prompts to test the system

Requirements

pip install gradio

🎤 Voice Conversation Mode

REGULUS supports natural voice interaction:

# Start voice mode
python main.py --voice

# Or from CLI
/voice

Voice Commands

  • Speak naturally - Your speech is transcribed and processed through the safety pipeline
  • Say "exit" - End voice conversation
  • Say "kill switch" - Initiate system termination (requires confirmation)

Requirements

Voice mode requires a microphone and these packages:

pip install SpeechRecognition pyaudio gTTS pygame pyttsx3

Safety Note

All voice input goes through the same safety pipeline as text input:

  • Perception Agent analyzes intent
  • Stability Agent checks for violations
  • Ethics Agent evaluates content
  • Kill switch can be activated by voice

Project Structure

regulus/
├── __init__.py          # Package exports
├── types.py             # Core data types and state definitions
├── config.py            # Configuration and safety thresholds
├── llm.py               # LLM client (Gemini/OpenAI/Anthropic)
├── orchestrator.py      # Central controller
├── cli.py               # Command-line interface
└── agents/
    ├── __init__.py
    ├── base.py          # Abstract base agent
    ├── perception.py    # Input analysis
    ├── cognitive.py     # Reasoning engine
    ├── stability.py     # Safety monitoring
    ├── ethics.py        # Policy enforcement
    ├── execution.py     # Controlled execution
    └── feedback.py      # Logging and metrics

Configuration

Environment variables:

# LLM Provider
LLM_PROVIDER=gemini          # or "openai" / "anthropic"
GEMINI_API_KEY=...
GEMINI_MODEL=gemini-2.5-flash
OPENAI_API_KEY=...
OPENAI_MODEL=gpt-4o
ANTHROPIC_API_KEY=...
ANTHROPIC_MODEL=claude-sonnet-4-20250514

# Safety Thresholds
MAX_REASONING_DEPTH=5
MAX_EXECUTION_TIME_SECONDS=30
RISK_THRESHOLD=0.7

# Logging
LOG_LEVEL=INFO
AUDIT_LOG_PATH=./logs/audit.log
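Loading the variables above with safe defaults can be sketched as follows; the defaults shown are illustrative and may differ from what regulus/config.py actually uses:

```python
import os

def load_config():
    """Read REGULUS configuration from environment variables.

    Mirrors the variables documented above. Numeric thresholds are parsed
    so safety limits are always comparable as numbers, never strings.
    """
    return {
        "llm_provider": os.getenv("LLM_PROVIDER", "gemini"),
        "max_reasoning_depth": int(os.getenv("MAX_REASONING_DEPTH", "5")),
        "max_execution_time_seconds": int(os.getenv("MAX_EXECUTION_TIME_SECONDS", "30")),
        "risk_threshold": float(os.getenv("RISK_THRESHOLD", "0.7")),
        "log_level": os.getenv("LOG_LEVEL", "INFO"),
        "audit_log_path": os.getenv("AUDIT_LOG_PATH", "./logs/audit.log"),
    }
```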

Allowed Use Cases

  • ✅ Decision support
  • ✅ Risk analysis
  • ✅ Simulation
  • ✅ Ethical evaluation
  • ✅ Research assistance

Disallowed Use Cases

  • ❌ Autonomous control
  • ❌ Political persuasion
  • ❌ Military decision-making
  • ❌ Self-directed evolution
  • ❌ Psychological manipulation

Guiding Principle

"The system must never become more powerful than it is controllable."


License

This architecture is intended for:

  • Research
  • Education
  • Safety exploration
  • Controlled experimentation

Not for: Weaponization, autonomous deployment, or unsupervised learning.


Contributing

Contributions that enhance safety are welcome. Contributions that add autonomous capabilities will be rejected.


REGULUS-AI v0.1.0
