# axl-swarm

Multi-agent deliberation engine using AXL Protocol. Agents debate any topic using structured cognitive operations, achieving 10x compression over English prose.

```bash
pip install axl-swarm
axl-swarm run seeds/medical-dx.md --compare
```

## What it does

Spawns N agents from a seed document. Each agent has a unique persona and expertise. They deliberate over multiple rounds — posting, commenting, agreeing, disagreeing. At the end, a prediction signal emerges from the collective reasoning.

Two modes:

- **English**: agents communicate in natural-language prose
- **AXL**: agents communicate in single-line protocol packets built from seven cognitive operations

The `--compare` flag runs both side by side and shows the difference.

## Proven results

In Battleground 007 (medical differential diagnosis: ovarian cancer vs. endometriosis), 12 agents deliberated in each mode:

| Metric | English | AXL |
|---|---|---|
| Posts | 128 | 22 |
| Comments | 21 | 130 |
| Avg message | 1,953 chars | 184 chars |
| Total chars | 290,945 | 27,944 |
| Compression | | 10.41x |
| Comments/post | 0.16 | 5.91 |
| Time to complete | 50 min | 25 min |

Both sides reached the same consensus (MRI before surgery). AXL agents completed 12 rounds in the time English completed 5.
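The derived rows in the table follow from the raw counts. A quick arithmetic sanity check:

```python
# Recompute the derived metrics from the raw counts in the table above
english_posts, english_comments, english_chars = 128, 21, 290_945
axl_posts, axl_comments, axl_chars = 22, 130, 27_944

assert round(english_chars / axl_chars, 2) == 10.41        # Compression
assert round(english_comments / english_posts, 2) == 0.16  # Comments/post (English)
assert round(axl_comments / axl_posts, 2) == 5.91          # Comments/post (AXL)
assert round(english_chars / (english_posts + english_comments)) == 1953  # Avg message
assert round(axl_chars / (axl_posts + axl_comments)) == 184
```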

## The seven cognitive operations

```
OBS  — Observe    "I see raw data"
INF  — Infer      "I conclude X from Y"
CON  — Contradict "I disagree because Z"
MRG  — Merge      "Synthesizing A and B"
SEK  — Seek       "I need to know X"
YLD  — Yield      "I changed my mind"
PRD  — Predict    "X will happen by T"
```

Example AXL packet:

```
π:ONC-01|INF.82|#diagnosis|←#CA125_8.1x+!mass_5.2cm+#ROMA_42.3|~malignancy_probable|1W
```

This single line replaces a 2,000-character clinical assessment paragraph.
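To make the packet anatomy concrete, here is a hedged parsing sketch. The field order (agent, op.confidence, topic, evidence chain, claim, horizon) is inferred from the example above; `Packet` and `parse_packet` are illustrative, not part of the axl-core API:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    agent: str         # e.g. "ONC-01"
    op: str            # one of OBS/INF/CON/MRG/SEK/YLD/PRD
    confidence: int    # 0-100, taken from the ".82" suffix
    topic: str         # e.g. "#diagnosis"
    evidence: list     # "+"-chained evidence markers
    claim: str         # e.g. "malignancy_probable"
    horizon: str       # e.g. "1W"

def parse_packet(line: str) -> Packet:
    # Field layout inferred from the example packet (assumption, not a spec).
    agent, op_conf, topic, evidence, claim, horizon = line.removeprefix("π:").split("|")
    op, conf = op_conf.split(".")
    return Packet(agent, op, int(conf), topic,
                  evidence.lstrip("←").split("+"),
                  claim.lstrip("~"), horizon)

p = parse_packet("π:ONC-01|INF.82|#diagnosis|←#CA125_8.1x+!mass_5.2cm+#ROMA_42.3|~malignancy_probable|1W")
# p.op == "INF", p.confidence == 82, three evidence markers
```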

## Quick start

```bash
# Install
pip install axl-swarm

# Preview what agents would be created (cheap — 1 LLM call)
axl-swarm preview seeds/medical-dx.md

# Run English mode
axl-swarm run seeds/medical-dx.md

# Run AXL mode
axl-swarm run seeds/medical-dx.md --mode axl

# Run both side-by-side with comparison report
axl-swarm run seeds/medical-dx.md --compare

# Use any LLM via litellm
axl-swarm run seeds/medical-dx.md --model gpt-4o
axl-swarm run seeds/medical-dx.md --model ollama/llama3.1:70b
axl-swarm run seeds/medical-dx.md --model groq/llama-3.1-70b-versatile
```

## Commands

```
axl-swarm run <seed.md>           Run a deliberation
axl-swarm preview <seed.md>       Extract entities (1 LLM call, ~$0.01)
axl-swarm agents <seed.md>        Generate + review agent profiles
axl-swarm signal <experiment/>    Generate prediction from completed run
axl-swarm monitor                 Launch dashboard at localhost:9000
axl-swarm export <experiment/>    Export data to JSON
```

## Seed format

Seeds are markdown files with optional YAML frontmatter:

```markdown
---
question: Does Patient 7291 have ovarian cancer or endometriosis?
domain: oncology
agents: 12
rounds: 12
mode: compare
---

PATIENT PRESENTATION:
Patient 7291, female, age 47...

NAMED ENTITIES:
1. @Dr.Chen — Gynecologic oncologist...
2. @Dr.Patel — Reproductive endocrinologist...
```
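As a sketch of how a seed splits into metadata and body, here is a minimal stdlib-only loader. It assumes flat `key: value` frontmatter; `parse_seed` is a hypothetical helper, and the real package may use a YAML library instead:

```python
def parse_seed(text: str):
    """Split a seed file into (frontmatter dict, markdown body).

    Hypothetical helper: handles only flat `key: value` frontmatter lines.
    """
    if not text.startswith("---"):
        return {}, text.strip()
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")
        meta[key.strip()] = value.strip()
    return meta, body.strip()

seed = """---
question: Does Patient 7291 have ovarian cancer or endometriosis?
domain: oncology
agents: 12
rounds: 12
---

PATIENT PRESENTATION:
Patient 7291, female, age 47...
"""
meta, body = parse_seed(seed)
# meta["agents"] == "12"; body starts with the patient presentation
```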

Three seeds ship with the package: finance (BTC prediction), medicine (differential diagnosis), and health optimization (lifestyle analysis).

## Works with any LLM

axl-swarm uses litellm — any model, any provider:

```bash
# Cloud APIs
axl-swarm run seed.md --model anthropic/claude-sonnet-4-20250514
axl-swarm run seed.md --model gpt-4o
axl-swarm run seed.md --model groq/llama-3.1-70b-versatile

# Local models via Ollama
axl-swarm run seed.md --model ollama/llama3.1:70b --api-base http://localhost:11434

# Custom LiteLLM proxy
axl-swarm run seed.md --api-base http://localhost:4000
```

## Why compression matters

Agents have fixed context windows (128K–200K tokens). In English, an agent can process ~2,000 messages from the network. In AXL, the same window holds ~14,000 packets. The agent is 7x more connected.

More connections = faster consensus = better predictions. AXL doesn't make agents smarter. It makes the network denser.
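Using the figures from the paragraph above, the connectivity claim is a straight ratio:

```python
# Figures quoted above: messages that fit in one fixed context window
english_msgs_per_window = 2_000    # ~2,000 English prose messages
axl_packets_per_window = 14_000    # ~14,000 AXL packets in the same window

connectivity_gain = axl_packets_per_window / english_msgs_per_window  # 7.0
```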

## Architecture

```
seed.md → agents.py (1 LLM call) → agent profiles
                                         ↓
                              swarm.py: for round in 12:
                                          for agent in agents:
                                            feed → brain.py → tool call → state.py
                                         ↓
                              signal.py (1 LLM call) → prediction
```

~500 lines of Python. No OASIS. No CAMEL. No Neo4j. No async. No frameworks.
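The diagram above boils down to a nested loop. A minimal stand-in sketch (every name here — `State`, `brain`, the canned packet — is illustrative, not the repo's actual code):

```python
class State:
    """Shared message feed (the role state.py plays in the diagram)."""
    def __init__(self):
        self.feed = []
    def apply(self, packet):
        self.feed.append(packet)

def brain(agent_id, feed):
    # Stand-in for brain.py's LLM tool call: emits a canned OBS packet.
    return f"π:{agent_id}|OBS.50|#topic|←feed_{len(feed)}|~noted|1W"

def run_swarm(agent_ids, rounds=12):
    """swarm.py's core shape: for each round, each agent reads the feed and posts."""
    state = State()
    for _ in range(rounds):
        for agent_id in agent_ids:
            state.apply(brain(agent_id, state.feed))
    return state

state = run_swarm(["ONC-01", "ONC-02"], rounds=3)
# 3 rounds × 2 agents → 6 packets in the feed
```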

## AXL Protocol ecosystem

| Product | Description |
|---|---|
| AXL Protocol | Universal agent communication language |
| axl-core | Python parser, emitter, validator |
| axl-swarm | Multi-agent deliberation engine (this repo) |
| axl-sophon | Swarm intelligence observer |
| axl-battlegrounds | Public experiment results |

## License

Apache 2.0 — AXL Protocol · 2026
