omercsbn/hebbian
Hebbian Brain: Emergent Memory via Interaction Graphs

██╗  ██╗███████╗██████╗ ██████╗ ██╗ █████╗ ███╗   ██╗
██║  ██║██╔════╝██╔══██╗██╔══██╗██║██╔══██╗████╗  ██║
███████║█████╗  ██████╔╝██████╔╝██║███████║██╔██╗ ██║
██╔══██║██╔══╝  ██╔══██╗██╔══██╗██║██╔══██║██║╚██╗██║
██║  ██║███████╗██████╔╝██████╔╝██║██║  ██║██║ ╚████║
╚═╝  ╚═╝╚══════╝╚═════╝ ╚═════╝ ╚═╝╚═╝  ╚═╝╚═╝  ╚═══╝

A Bio-Inspired Learning System for Autonomous Agents

No Backpropagation. No Gradients. No Databases. Just Neurons.


🧠 Overview

Hebbian Brain is a proof-of-concept engine in Rust that implements emergent memory and behavioral learning through bio-inspired mechanisms. Agents develop memory, strategy, and behavior patterns using only local learning rules on a dynamic graph structure.

Core Philosophy

  1. Local Learning Only: Agents learn via Hebbian principles ("Neurons that fire together, wire together"). There is no global loss function.

  2. Graph as Brain: The agent's "brain" is a directed graph where Nodes represent State-Action pairs and Edges represent synaptic weights.

  3. Emergence: Complex strategies emerge from simple local interactions.

  4. Biological Efficiency: A "Dream Phase" consolidates memory, inspired by biological sleep cycles.



🏗 Architecture

┌─────────────────────────────────────────────────────────────────┐
│                        AgentBrain                                │
│  ┌─────────────────────────────────────────────────────────────┐ │
│  │                   Interaction Graph                          │ │
│  │                                                              │ │
│  │    [State₁,Act_A] ──(w=0.8)──► [State₂,Act_B]               │ │
│  │          │                           │                       │ │
│  │      (w=0.3)                     (w=0.6)                     │ │
│  │          ▼                           ▼                       │ │
│  │    [State₃,Act_C] ◄──(w=0.2)── [State₄,Act_A]               │ │
│  │                                                              │ │
│  └─────────────────────────────────────────────────────────────┘ │
│                                                                  │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐           │
│  │ Hebbian      │  │ Synaptic     │  │ Dream        │           │
│  │ Plasticity   │  │ Pruning      │  │ Consolidation│           │
│  └──────────────┘  └──────────────┘  └──────────────┘           │
└─────────────────────────────────────────────────────────────────┘

Key Data Structures

ExperienceNode

Represents a State-Action pair in the agent's experience graph.

pub struct ExperienceNode {
    pub state_hash: StateHash,      // Compressed state representation
    pub action: Action,             // Action taken
    pub created_at: u64,            // Birth timestamp
    pub last_activated: u64,        // Recency tracking
    pub activation_count: u64,      // Frequency tracking
    pub average_reward: f64,        // Value estimation
    pub salience: f64,              // Importance score
}

Synapse (Edge)

The weighted connection between experiences.

pub struct Synapse {
    pub weight: f64,               // Synaptic strength [0, 1]
    pub last_activated: u64,       // For pruning decisions
    pub plasticity: f64,           // Learning rate modifier
    pub eligibility_trace: f64,    // Credit assignment
    pub traversal_count: u64,      // Usage statistics
    pub is_creative_edge: bool,    // Formed during dreams
}

📐 The Learning Rule

We implement an extended Hebbian rule with reward modulation:

$$\Delta w_{ij} = \eta \cdot A_i \cdot A_j \cdot (R + \epsilon) \cdot P_{ij}$$

Where:

  • η = Base learning rate
  • Aᵢ, Aⱼ = Activation levels of pre/post-synaptic nodes
  • R = Reward signal (positive or negative)
  • ε = Stabilization constant
  • Pᵢⱼ = Plasticity modifier for this synapse
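The rule above is a single multiplicative update, which makes it easy to sketch. The following is a minimal, self-contained illustration of the formula (function and parameter names are illustrative, not the crate's actual API); the clamp reflects the [0, 1] weight range documented on `Synapse`:

```rust
// Reward-modulated Hebbian update: Δw = η · A_i · A_j · (R + ε) · P_ij
// (illustrative sketch, not the crate's real API).
fn hebbian_delta(
    eta: f64,        // η: base learning rate
    a_pre: f64,      // A_i: pre-synaptic activation
    a_post: f64,     // A_j: post-synaptic activation
    reward: f64,     // R: reward signal (may be negative)
    epsilon: f64,    // ε: stabilization constant
    plasticity: f64, // P_ij: per-synapse plasticity modifier
) -> f64 {
    eta * a_pre * a_post * (reward + epsilon) * plasticity
}

fn apply_update(weight: f64, delta: f64) -> f64 {
    // Keep synaptic strength inside the documented [0, 1] range.
    (weight + delta).clamp(0.0, 1.0)
}

fn main() {
    let delta = hebbian_delta(0.05, 1.0, 1.0, 1.0, 0.001, 1.0);
    println!("Δw = {delta:.5}, new weight = {:.5}", apply_update(0.5, delta));
}
```

Note that a negative reward with `R + ε < 0` yields a negative Δw, which is the anti-Hebbian weakening enabled by `enable_anti_hebbian` in the configuration.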

Key Properties

| Property      | Traditional NN          | Hebbian Brain          |
|---------------|-------------------------|------------------------|
| Learning Rule | Global gradient descent | Local synaptic updates |
| Error Signal  | Backpropagated          | None required          |
| Memory        | Weights in layers       | Graph structure        |
| Forgetting    | Catastrophic            | Synaptic pruning       |
| Consolidation | N/A                     | Dream phase            |

💭 Dream Phase (Memory Consolidation)

The dream cycle is a critical feature that mimics biological sleep-based memory consolidation.

┌─────────────────────────────────────────────────────────────────┐
│                        DREAM CYCLE                               │
│                                                                  │
│  1. RANDOM ACTIVATION                                           │
│     └─► Sample random node from graph                           │
│                                                                  │
│  2. ASSOCIATIVE TRAVERSAL (Monte Carlo Walk)                    │
│     └─► Follow edges probabilistically (weight-biased)          │
│                                                                  │
│  3. RECONSOLIDATION                                             │
│     └─► Boost weights of traversed edges                        │
│     └─► Apply "Galvanize" bonus to old synapses                 │
│                                                                  │
│  4. CREATIVE CONNECTION (Low Probability)                       │
│     └─► Connect distant nodes in path                           │
│     └─► Create novel associations ("insight")                   │
└─────────────────────────────────────────────────────────────────┘
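Step 2, the weight-biased walk, is roulette-wheel sampling over outgoing edge weights. A minimal sketch of one such step (names are illustrative; this is not the crate's actual API):

```rust
// Pick the next node in a dream walk, biased by edge weight.
// `neighbors` holds (node index, synaptic weight); `r` is a uniform
// sample in [0, 1), e.g. from a PRNG. Illustrative sketch only.
fn pick_next(neighbors: &[(usize, f64)], r: f64) -> Option<usize> {
    let total: f64 = neighbors.iter().map(|&(_, w)| w).sum();
    if total <= 0.0 {
        return None; // dead end: no positive-weight edges to follow
    }
    // Roulette-wheel selection: an edge's chance is weight / total.
    let mut threshold = r * total;
    for &(idx, w) in neighbors {
        if threshold < w {
            return Some(idx);
        }
        threshold -= w;
    }
    neighbors.last().map(|&(idx, _)| idx) // guard against float rounding
}

fn main() {
    let edges = [(1, 0.8), (2, 0.2)];
    // A sample of 0.5 lands inside the 0.8-weight edge's share.
    println!("{:?}", pick_next(&edges, 0.5));
}
```

Because strong edges are followed more often, steps 3 and 4 preferentially reconsolidate well-trodden paths while still occasionally visiting (and creatively linking) weaker ones.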

The Galvanize Effect

Named for the figurative sense of "galvanize" — to jolt something dormant back into action — the Galvanize effect occurs when an old, dormant memory trace is reactivated during dreams. These aged synapses receive an extra boost, simulating how biological sleep can unexpectedly strengthen old memories.

// During dream traversal, old synapses get extra boost
if synapse.age(current_time) > galvanize_age_threshold {
    boost *= galvanize_boost_multiplier;  // e.g., 1.5x
}

This mirrors the biological phenomenon where memories consolidated long ago can be "refreshed" when related experiences reoccur during sleep-related replay.


🚀 Installation

Add to your Cargo.toml:

[dependencies]
hebbian_brain = "0.1"

Or clone and build:

git clone https://github.com/your-repo/hebbian_brain
cd hebbian_brain
cargo build --release

⚡ Quick Start

use hebbian_brain::{
    config::BrainConfig,
    simulation::{Agent, ChainEnvironment, SimulationLoop},
    perception::StateHash,
};

fn main() {
    // Create an agent with default configuration
    let mut agent = Agent::with_defaults();
    
    // Simple environment: learn to move right
    let env = ChainEnvironment::new(10);
    
    // Create simulation loop
    let mut sim = SimulationLoop::new(
        env, 
        |&pos| StateHash::from_value(pos as u64)
    );
    
    // Train for 100 episodes
    for episode in 0..100 {
        let stats = sim.run_episode(&mut agent);
        println!("Episode {}: Reward = {:.2}", episode, stats.total_reward);
        
        // Consolidate memories
        agent.dream();
    }
    
    println!("Brain has {} nodes and {} edges", 
             agent.brain().node_count(),
             agent.brain().edge_count());
}

📖 Examples

Grid World Navigation

cargo run --example grid_world

An agent learns to navigate a 2D grid, avoiding obstacles and reaching a goal.

Predator-Prey Simulation

cargo run --example predator_prey

Multiple agents (predators and prey) develop emergent behaviors through interaction.


📚 API Reference

Core Types

| Type           | Description                                  |
|----------------|----------------------------------------------|
| AgentBrain     | The complete graph-based cognitive structure |
| ExperienceNode | State-Action pair node                       |
| Synapse        | Weighted edge between nodes                  |
| Agent          | Complete autonomous agent with learning      |

Key Functions

// Create and manage nodes
brain.get_or_create_node(state_hash, action) -> NodeIndex
brain.activate_node(idx, reward)

// Manage synapses
brain.get_or_create_edge(from, to) -> Option<&mut Synapse>
brain.create_creative_edge(from, to) -> bool

// Learning
learner.learn_transition(&mut brain, from, to, reward)
learner.apply_weight_decay(&mut brain)

// Dream phase
dream_engine.dream_cycle(&mut brain) -> DreamReport

// Visualization
exporter.export(&brain) -> String  // DOT format
generate_brain_report(&brain) -> String

🔬 Biological Foundations

This system draws from several neuroscientific principles:

Hebbian Learning (1949)

"When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased." — Donald Hebb

Long-Term Potentiation (LTP)

The molecular mechanism underlying Hebbian learning, where repeated activation strengthens synaptic connections through NMDA receptor activation and downstream signaling cascades.

Memory Consolidation During Sleep

Research shows that sleep plays a crucial role in memory consolidation:

  • Hippocampal replay: Recent experiences are replayed during slow-wave sleep
  • Synaptic homeostasis: Overall synaptic strength is normalized
  • Memory integration: New memories are integrated with existing knowledge

Our Dream Phase computationally models these processes.

Synaptic Pruning

The brain actively eliminates unused synapses to maintain efficiency. Our pruning mechanism removes edges that:

  • Haven't been activated for extended periods
  • Have weights below a threshold
  • Are past their initial protection period
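One plausible reading of these criteria as a predicate, sketched below (names, thresholds, and the exact AND/OR combination are assumptions, not the crate's actual logic): an edge must be past its protection period, and then either prolonged inactivity or a sub-threshold weight marks it for removal.

```rust
// Sketch of a pruning predicate over a Synapse's bookkeeping fields.
// Illustrative only; the crate's real rule may combine criteria differently.
fn should_prune(
    weight: f64,
    last_activated: u64,      // step of last traversal
    created_at: u64,          // step the edge was formed
    now: u64,                 // current simulation step
    inactivity_threshold: u64,
    weight_threshold: f64,
    protection_period: u64,   // grace period for newborn edges
) -> bool {
    let past_protection = now.saturating_sub(created_at) > protection_period;
    let inactive = now.saturating_sub(last_activated) > inactivity_threshold;
    let weak = weight < weight_threshold;
    past_protection && (inactive || weak)
}

fn main() {
    // An old, weak, long-inactive edge is pruned; a fresh one is protected.
    println!("{}", should_prune(0.05, 0, 0, 6000, 1000, 0.1, 100));
    println!("{}", should_prune(0.9, 5900, 5950, 6000, 1000, 0.1, 100));
}
```

The grace period matters: without it, every newly formed edge starts below `weight_threshold` and would be culled before it had a chance to be reinforced.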

🔧 Configuration

let config = BrainConfig {
    plasticity: PlasticityParams {
        base_learning_rate: 0.05,   // η - learning speed
        epsilon: 0.001,             // Stabilization constant
        weight_max: 1.0,            // Ceiling
        weight_min: 0.0,            // Floor
        enable_anti_hebbian: true,  // Allow weakening
        ..Default::default()
    },
    pruning: PruningParams {
        inactivity_threshold: 1000, // Steps before pruning
        weight_threshold: 0.1,      // Minimum weight
        ..Default::default()
    },
    dream: DreamParams {
        walks_per_cycle: 10,        // Random walks per dream
        consolidation_boost: 0.02,  // Weight increase
        creative_edge_probability: 0.005,  // Insight chance
        enable_galvanize: true,     // Old memory boost
        ..Default::default()
    },
    ..Default::default()
};

📊 Visualization

Export the brain graph for visualization:

use hebbian_brain::visualization::DotExporter;

let exporter = DotExporter::new()
    .with_min_weight(0.2)           // Filter weak edges
    .with_state_hash(false);        // Cleaner labels

let dot = exporter.export(&brain);
std::fs::write("brain.dot", dot)?;

Then render with GraphViz:

dot -Tpng brain.dot -o brain.png

🤝 Contributing

Contributions are welcome! Areas of interest:

  • Additional environment implementations
  • Spike-Timing Dependent Plasticity (STDP) refinements
  • Parallel/distributed agent simulations
  • Real-time visualization tools
  • Integration with game engines

📜 License

MIT License - See LICENSE file for details.


📚 References

  1. Hebb, D.O. (1949). The Organization of Behavior. Wiley.
  2. Bliss, T.V., & Lømo, T. (1973). Long-lasting potentiation of synaptic transmission. Journal of Physiology.
  3. Walker, M.P. (2017). Why We Sleep. Scribner.
  4. Tononi, G., & Cirelli, C. (2014). Sleep and the price of plasticity. Neuron.
  5. Bi, G.Q., & Poo, M.M. (1998). Synaptic modifications in cultured hippocampal neurons. Journal of Neuroscience.

Built with 🧠 and Rust

"The brain is wider than the sky" — Emily Dickinson
