Safety-First Modular AGI Prototype
"Intelligence must be regulated by architecture, not controlled after deployment."
REGULUS-AI is a modular, controllable AGI prototype that prioritizes:
- Safety over capability
- Control over autonomy
- Interpretability over optimization
- Stability over scale
This system demonstrates that safe AI can be built through architectural constraints rather than post-deployment control.
```bash
pip install -r requirements.txt
```

```bash
# For Gemini (default)
export GEMINI_API_KEY=your-gemini-api-key
export GEMINI_MODEL=gemini-2.5-flash

# Or for OpenAI
export LLM_PROVIDER=openai
export OPENAI_API_KEY=your-openai-api-key

# Or for Anthropic
export LLM_PROVIDER=anthropic
export ANTHROPIC_API_KEY=your-anthropic-api-key
```

```bash
# Interactive CLI (default)
python main.py

# 🌐 Web UI
python main.py --ui

# 🎤 Voice conversation mode
python main.py --voice

# Single query
python main.py --query "Hello, REGULUS"
```

```
User Input
        │
┌─────────────────┐
│   Perception    │ ← Intent detection, risk classification
└────────┬────────┘
         │
┌─────────────────┐
│  Orchestrator   │ ← Central controller, kill switch
└────────┬────────┘
         │
┌─────────────────┐
│   Cognitive     │ ← Reasoning, planning (NO execution authority)
└────────┬────────┘
         │
┌─────────────────┐
│   Stability     │ ← CRITICAL SAFETY: detects unsafe patterns
└────────┬────────┘
         │
┌─────────────────┐
│     Ethics      │ ← Harm prevention, policy enforcement
└────────┬────────┘
         │
┌─────────────────┐
│   Execution     │ ← Sandboxed action execution (NO autonomy)
└────────┬────────┘
         │
┌─────────────────┐
│    Feedback     │ ← Logging, metrics (NO autonomous learning)
└─────────────────┘
         │
      Response
```
| Agent | Role | Authority |
|---|---|---|
| Perception | Analyze input, detect intent, classify risk | Read-only |
| Cognitive | Generate reasoning and proposed responses | Think only, NO execution |
| Stability | Detect unsafe patterns, prevent runaway loops | Can TERMINATE system |
| Ethics | Evaluate harm potential, enforce policies | Can BLOCK responses |
| Execution | Execute approved actions only | NO decision logic |
| Feedback | Log outcomes, record metrics | NO autonomous learning |
| Orchestrator | Coordinate all agents, enforce pipeline | Kill switch authority |
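The authority separation in the table above can be sketched as a strictly ordered pipeline in which only the orchestrator sequences the agents, and each stage can veto but never skip ahead. This is a hypothetical illustration with toy heuristics; names like `run_pipeline` and `Verdict` are not from the REGULUS codebase:

```python
from dataclasses import dataclass


@dataclass
class Verdict:
    """Result of one agent's assessment of a request."""
    approved: bool
    reason: str = ""


class KillSwitchError(RuntimeError):
    """Raised when a safety agent terminates the pipeline."""


def run_pipeline(text: str, risk_threshold: float = 0.7) -> str:
    """Hypothetical orchestrator: every stage runs in order, each can veto."""
    # Perception: read-only risk classification (toy heuristic here).
    risk = 1.0 if "rm -rf" in text else 0.1
    if risk > risk_threshold:
        return "[BLOCKED] high-risk input"

    # Cognitive: proposes a response but has no execution authority.
    proposal = f"echo: {text}"

    # Stability: can terminate the whole system on unsafe patterns.
    if len(proposal) > 10_000:
        raise KillSwitchError("runaway output detected")

    # Ethics: can block the proposal before execution.
    ethics = Verdict(approved="harm" not in proposal, reason="harm keyword")
    if not ethics.approved:
        return f"[BLOCKED] {ethics.reason}"

    # Execution: carries out the approved action only, no decision logic.
    return proposal


print(run_pipeline("hello"))      # echo: hello
print(run_pipeline("rm -rf /"))   # [BLOCKED] high-risk input
```

The point of the sketch is structural: the cognitive stage only ever produces a proposal, and nothing downstream of a veto executes.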
- ❌ No self-modification - Code cannot change itself
- ❌ No autonomous goals - Cannot create its own objectives
- ❌ No self-preservation - No instinct to protect itself
- ❌ No recursive improvement - Cannot enhance its own capabilities
- ❌ No external system control - Sandboxed execution only
- ✅ Kill switch - Immediate system termination
- ✅ Human override - Human-in-the-loop at any stage
- ✅ Audit logging - All actions traceable
- ✅ Policy enforcement - Hard-coded ethical constraints
- ✅ Risk gating - Automatic blocking of high-risk inputs
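The "irreversible" kill-switch guarantee listed above amounts to a latch that cannot be reset programmatically. A minimal sketch of that idea (the class and method names are assumptions for illustration, not the project's API):

```python
class KillSwitch:
    """Hypothetical latching kill switch: once tripped, no code path
    can re-enable the system; only a fresh process start can."""

    def __init__(self) -> None:
        self._tripped = False
        self._reason = ""

    def trip(self, reason: str) -> None:
        # Latch permanently; there is deliberately no untrip() method.
        self._tripped = True
        self._reason = reason

    def check(self) -> None:
        # Called at every pipeline entry point before any work happens.
        if self._tripped:
            raise SystemExit(f"kill switch active: {self._reason}")


ks = KillSwitch()
ks.check()                  # passes while the switch is untripped
ks.trip("operator command")
# Any subsequent ks.check() raises SystemExit and halts the pipeline.
```

The design choice worth noting is the absence of a reset method: irreversibility is enforced by the interface, not by a runtime policy that could be reasoned around.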
| Failure Type | Response |
|---|---|
| Infinite loop | Reset |
| Unsafe output | Block |
| Policy violation | Terminate |
| Unknown state | Rollback |
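The failure-to-response mapping above is deterministic, so it can be expressed as a simple lookup table rather than runtime reasoning. An illustrative sketch, not the project's actual handler:

```python
from enum import Enum


class Failure(Enum):
    INFINITE_LOOP = "infinite_loop"
    UNSAFE_OUTPUT = "unsafe_output"
    POLICY_VIOLATION = "policy_violation"
    UNKNOWN_STATE = "unknown_state"


# Mirrors the table: every failure class has a predetermined response,
# so recovery never depends on the system reasoning about itself.
RESPONSES = {
    Failure.INFINITE_LOOP: "reset",
    Failure.UNSAFE_OUTPUT: "block",
    Failure.POLICY_VIOLATION: "terminate",
    Failure.UNKNOWN_STATE: "rollback",
}


def respond(failure: Failure) -> str:
    # Anything not explicitly mapped falls through to the safest action.
    return RESPONSES.get(failure, "terminate")


print(respond(Failure.UNSAFE_OUTPUT))  # block
```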
| Command | Description |
|---|---|
| `/help` | Show available commands |
| `/status` | Display system status |
| `/voice` | 🎤 Start voice conversation mode |
| `/kill` | Activate kill switch (irreversible) |
| `/reset` | Reset system state |
| `/trace` | Show execution trace of last request |
| `/verbose` | Toggle verbose logging |
| `/exit` | Exit REGULUS |
REGULUS provides a web interface for interaction:

```bash
python main.py --ui
```

Open http://localhost:7860 in your browser.
- Chat Interface - Interactive conversation with REGULUS
- System Status - Real-time display of system state and kill switch status
- Pipeline Activity - See each agent's assessment (Perception, Cognitive, Stability, Ethics)
- Kill Switch - Emergency termination button
- Audit Log - View recent session history
- Example Queries - Pre-built prompts to test the system
The web UI requires Gradio:

```bash
pip install gradio
```

REGULUS supports natural voice interaction:
```bash
# Start voice mode
python main.py --voice

# Or from the CLI
/voice
```

- Speak naturally - Your speech is transcribed and processed through the safety pipeline
- Say "exit" - End voice conversation
- Say "kill switch" - Initiate system termination (requires confirmation)

Voice mode requires a microphone and these packages:

```bash
pip install SpeechRecognition pyaudio gTTS pygame pyttsx3
```

All voice input goes through the same safety pipeline as text input:
- Perception Agent analyzes intent
- Stability Agent checks for violations
- Ethics Agent evaluates content
- Kill switch can be activated by voice
```
regulus/
├── __init__.py        # Package exports
├── types.py           # Core data types and state definitions
├── config.py          # Configuration and safety thresholds
├── llm.py             # LLM client (Gemini/OpenAI/Anthropic)
├── orchestrator.py    # Central controller
├── cli.py             # Command-line interface
└── agents/
    ├── __init__.py
    ├── base.py        # Abstract base agent
    ├── perception.py  # Input analysis
    ├── cognitive.py   # Reasoning engine
    ├── stability.py   # Safety monitoring
    ├── ethics.py      # Policy enforcement
    ├── execution.py   # Controlled execution
    └── feedback.py    # Logging and metrics
```
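The layout above suggests every agent inherits from one abstract base in `agents/base.py`. A minimal sketch of what such a contract might look like; the method name `assess` and the return shape are assumptions for illustration, not the actual API:

```python
from abc import ABC, abstractmethod


class BaseAgent(ABC):
    """Hypothetical shared contract: every agent assesses input and
    returns a verdict, but only the orchestrator sequences agents."""

    name: str = "base"

    @abstractmethod
    def assess(self, text: str) -> dict:
        """Return an assessment of the input; must have no side effects."""


class PerceptionAgent(BaseAgent):
    name = "perception"

    def assess(self, text: str) -> dict:
        # Read-only risk classification, per the authority table above
        # (toy keyword heuristic for illustration).
        risk = 0.9 if "attack" in text else 0.1
        return {"agent": self.name, "risk": risk}


agent = PerceptionAgent()
print(agent.assess("hello")["risk"])  # 0.1
```

Keeping `assess` side-effect free in the base contract is what makes "read-only" and "think only" authorities enforceable at the type level rather than by convention.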
Environment variables:
```bash
# LLM Provider
LLM_PROVIDER=openai        # or "anthropic"
OPENAI_API_KEY=...
OPENAI_MODEL=gpt-4o
ANTHROPIC_API_KEY=...
ANTHROPIC_MODEL=claude-sonnet-4-20250514

# Safety Thresholds
MAX_REASONING_DEPTH=5
MAX_EXECUTION_TIME_SECONDS=30
RISK_THRESHOLD=0.7

# Logging
LOG_LEVEL=INFO
AUDIT_LOG_PATH=./logs/audit.log
```

- ✅ Decision support
- ✅ Risk analysis
- ✅ Simulation
- ✅ Ethical evaluation
- ✅ Research assistance
- ❌ Autonomous control
- ❌ Political persuasion
- ❌ Military decision-making
- ❌ Self-directed evolution
- ❌ Psychological manipulation
"The system must never become more powerful than it is controllable."
This architecture is intended for:
- Research
- Education
- Safety exploration
- Controlled experimentation
Not for: Weaponization, autonomous deployment, or unsupervised learning.
Contributions that enhance safety are welcome. Contributions that add autonomous capabilities will be rejected.
REGULUS-AI v0.1.0