AXL Compress

why strip words when you can compress meaning


Before/After - Benchmarks - Install - Free Tool - Machine-to-Machine - Why Not Strip


A deterministic compression engine that turns English into structured AXL v3/v3.1 packets, cutting roughly 48% of tokens in both directions (input and output) while adding structure, confidence scores, and evidence chains. Round-trip proven: packets decompress back to English in under 1ms, free. Now with v3.1 data anchoring: numeric bundles, entity anchors, and causal operators.

Token strippers remove vowels. We compress meaning into a 75-line protocol that machines actually parse.

Before / After

Normal English (69 tokens)

"Apple reported Q2 revenue of $94.8 billion, up 5% year over year. CEO Tim Cook attributed growth to strong iPhone sales in emerging markets. Analysts predict continued momentum through 2026."

AXL v3 Packets (22 tokens)

```
OBS.90|@Apple|$amt:94.8B+^pct:5%+^date:Q2|NOW
OBS.85|@Tim_Cook|<-iPhone_emerging_mkts|~growth|NOW
PRD.70|@analysts|<-momentum|^date:2026|1M
```

Normal agent response (94 tokens)

"Based on my analysis of the financial report, Apple's quarterly revenue reached $94.8 billion which represents a 5% increase compared to the same period last year. The CEO has stated that this growth was primarily driven by strong sales of iPhones in emerging markets. Looking ahead, market analysts are forecasting that this positive trend will likely continue into 2026."

Agent thinking in AXL (28 tokens)

```
MRG.88|@Apple|$rev:94.8B+^yoy:+5%|NOW
INF.82|@Cook|<-iPhone+<-emerging_mkts|~bullish|NOW
PRD.70|@consensus|<-momentum|^horizon:2026|1M
```

Same information. One uses 69-94 tokens. The other uses 22-28. Both directions.

Benchmarks

Token savings

| Content type | Input tokens | AXL tokens | Saved | Ratio |
|---|---|---|---|---|
| Financial report | 10,050 | 5,242 | 47.8% | 1.9x |
| Medical notes | 8,400 | 1,680 | 80.0% | 5.0x |
| Legal document | 12,300 | 3,075 | 75.0% | 4.0x |
| Research paper | 15,000 | 4,500 | 70.0% | 3.3x |
| Chat history | 6,200 | 3,100 | 50.0% | 2.0x |
| Average | 10,390 | 3,519 | ~65% | 3.2x |

Cost savings at scale (1,000 messages/day, Claude Sonnet)

| Metric | Without AXL | With AXL | Savings |
|---|---|---|---|
| Tokens/message | ~10,000 | ~5,200 | 48% |
| Cost/day | $30.00 | $15.60 | $14.40 |
| Cost/month | $900.00 | $468.00 | $432.00 |
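The daily and monthly figures follow from a flat per-token rate. A minimal sketch, assuming $3 per million input tokens for Claude Sonnet (an assumption; your blended input/output pricing will differ):

```python
# Rough cost model behind the table above. The $3/M token rate is an
# assumption -- substitute your provider's actual pricing.
RATE_PER_MTOK = 3.00        # USD per million tokens (assumed Sonnet input rate)
MESSAGES_PER_DAY = 1_000

def daily_cost(tokens_per_message: int) -> float:
    """USD for one day of traffic at the given message size."""
    return tokens_per_message * MESSAGES_PER_DAY * RATE_PER_MTOK / 1_000_000

print(daily_cost(10_000))  # 30.0  (without AXL)
print(daily_cost(5_200))   # 15.6  (with AXL)
```

Monthly figures are simply the daily cost times 30.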

Speed

| Operation | Time | LLM calls | Cost |
|---|---|---|---|
| Compress | ~50ms | 0 | Free |
| Decompress (receipt) | <1ms | 0 | Free |
| Decompress (full) | 4-8s | 1 | Provider rate |

The compressor and receipt decompressor are zero-LLM. No API key. No network. No cost.

Install

```
pip install axl-core
python -m spacy download en_core_web_sm
```

```python
from axl import compress, decompress

packets = compress("Your English text here.")
english = decompress(packets)  # receipt mode, <1ms, free
```

That's it. Two functions. No API key. No config.

Free Tool

Use it right now, no signup:

compress.axlprotocol.org

  • Paste text, get AXL packets
  • Switch to Decompress tab, get English back
  • Upload files (.txt, .md, .html, .json, .csv)
  • See token savings, cost projections, compression ratios
  • Free tier, no auth required

API

```
# Compress
curl -X POST https://compress.axlprotocol.org/api/v1/compress-text \
  -H "Content-Type: application/json" \
  -d '{"text": "Your text here."}'

# Decompress (free, deterministic, <1ms)
curl -X POST https://compress.axlprotocol.org/api/v1/decompress-text \
  -H "Content-Type: application/json" \
  -d '{"packets": "OBS.90|@Apple|$amt:94.8B|NOW", "mode": "receipt"}'
```
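The same endpoints can be called from Python with just the standard library. The paths come from the curl examples above; the shape of the JSON response is an assumption, so check the live API for the exact fields:

```python
# Minimal stdlib client for the hosted API. Endpoint paths follow the curl
# examples; response fields are not documented here, so treat the returned
# dict as opaque until you inspect it.
import json
import urllib.request

API_BASE = "https://compress.axlprotocol.org/api/v1"

def _post(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        f"{API_BASE}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

def compress_text(text: str) -> dict:
    return _post("/compress-text", {"text": text})

def decompress_text(packets: str, mode: str = "receipt") -> dict:
    return _post("/decompress-text", {"packets": packets, "mode": mode})
```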

Machine-to-Machine

This is where it gets interesting. Two agents talking in English waste tokens on words neither of them need. AXL gives them a wire format.

```
Agent A                                        Agent B
  |                                               |
  |--- OBS.90|@server|~down|NOW ----------------->|
  |<-- INF.85|@server|<-OOM|~critical|NOW --------|
  |<-- SEK.80|@logs|^last:1H|NOW -----------------|
  |                                               |
  Token cost: 12 tokens          vs English: ~200 tokens
```

Roughly 17x more efficient than two agents exchanging English paragraphs for structured data they both need to parse anyway.

Every packet carries:

  • Confidence (0-99) -- how certain is this claim
  • Evidence (<-cause) -- why does the agent believe this
  • Temporal (NOW, 1H, 1D, 1W) -- when is this relevant
  • Type (OBS/INF/CON/MRG/SEK/YLD/PRD) -- what kind of cognitive operation
  • Subject ($@#!~^) -- financial, entity, metric, event, state, value

Machines don't need prose. They need structured claims with metadata. That's what AXL is.
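Those fields map onto a simple pipe-delimited shape. As a rough illustration only (this is not the official parser; the field names are my own, and the full grammar allows extras, such as an ID:AGENT prefix, that the example packets omit):

```python
# Illustrative parser for the example packets above: OP.CC|SUBJECT|...|TEMPORAL.
# A sketch, not the reference implementation.
from dataclasses import dataclass

@dataclass
class Packet:
    op: str           # OBS / INF / CON / MRG / SEK / YLD / PRD
    confidence: int   # 0-99, how certain the claim is
    subject: str      # tagged subject, e.g. @server
    args: list        # middle fields: evidence, state, values
    temporal: str     # NOW, 1H, 1D, 1W, ...

def parse_packet(line: str) -> Packet:
    fields = line.strip().split("|")
    op, cc = fields[0].split(".")          # head is OP.CC
    return Packet(op=op, confidence=int(cc), subject=fields[1],
                  args=fields[2:-1], temporal=fields[-1])

pkt = parse_packet("INF.85|@server|<-OOM|~critical|NOW")
# pkt.op == "INF", pkt.confidence == 85, pkt.args == ["<-OOM", "~critical"]
```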

Why Not Just Strip Tokens?

Token strippers cut words. The LLM guesses what you meant.

AXL compresses meaning into typed packets with a formal grammar. The LLM doesn't guess -- it parses.

| | Token stripping | AXL v3 |
|---|---|---|
| Compression | 1.5-2x | 2-10x |
| Round-trip | No | Yes (deterministic) |
| Confidence scores | No | 0-99 per claim |
| Evidence chains | No | Yes (<- and RE:) |
| Machine parseable | No | Yes (BNF grammar) |
| Structure preserved | No | Typed packets |
| Spec | None | 75 lines |
| LLM required | Yes (output only) | No (compress + decompress free) |

Same source text. One strips words. The other compresses meaning.

The Protocol

AXL v3 is a 75-line grammar. 7 operations, 6 subject tags, evidence chains, confidence scores.

```
PACKET := ID:AGENT | OP.CC | SUBJECT.VALUE | ARGS | TEMPORAL [META]

OP  := OBS | INF | CON | MRG | SEK | YLD | PRD
TAG := $ (financial) | @ (entity) | # (metric) | ! (event) | ~ (state) | ^ (value)
CC  := 00-99 (confidence)
```

Full spec: axlprotocol.org/v3
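The summary above is enough for a quick shape check. A sketch against the example packets in this README (not a full BNF parser; the spec at axlprotocol.org/v3 is authoritative and also allows the ID:AGENT prefix these examples omit):

```python
# Rough shape check derived from the grammar summary above. Just enough
# to reject obviously malformed packets, nothing more.
import re

HEAD = re.compile(r"^(OBS|INF|CON|MRG|SEK|YLD|PRD)\.\d{2}$")  # OP.CC
SUBJECT_TAGS = set("$@#!~^")                                   # subject tags

def looks_like_packet(line: str) -> bool:
    fields = line.strip().split("|")
    if len(fields) < 3:                  # need at least head, subject, temporal
        return False
    if not HEAD.match(fields[0]):        # OP must be known, CC two digits
        return False
    return fields[1][0] in SUBJECT_TAGS  # subject starts with a tag

print(looks_like_packet("PRD.70|@analysts|<-momentum|^date:2026|1M"))  # True
print(looks_like_packet("remove vowels pls"))                          # False
```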

v3.1 Data Anchoring

v3.1 adds four conventions on top of v3:

  • Numeric bundles label[$value] - pairs labels with values for unambiguous parsing
  • Entity anchors @ent.XX - pins short aliases to canonical identifiers across long contexts
  • Causal operators <- (caused by), => (results in), -> (leads to) - explicit directionality
  • Anchor declarations - top-of-packet @ent block defines all entity aliases used

Spec: axlprotocol.org/v3.1
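A sketch of what extracting the first three conventions could look like. The regexes and function names here are my own, based only on the summary above; defer to axlprotocol.org/v3.1 for the normative syntax:

```python
# Illustrative extraction of v3.1 numeric bundles and causal operators.
# Names and patterns are assumptions, not the official tooling.
import re

BUNDLE = re.compile(r"(\w+)\[\$([^\]]+)\]")  # label[$value]
CAUSAL = ("<-", "=>", "->")                  # caused by / results in / leads to

def numeric_bundles(field: str) -> dict:
    """Map each bundle label to its value, e.g. rev[$94.8B] -> {'rev': '94.8B'}."""
    return {label: value for label, value in BUNDLE.findall(field)}

def causal_link(field: str):
    """Return (operator, target) if the field starts with a causal operator."""
    for op in CAUSAL:
        if field.startswith(op):
            return op, field[len(op):]
    return None

print(numeric_bundles("rev[$94.8B]+growth[$5%]"))  # {'rev': '94.8B', 'growth': '5%'}
print(causal_link("<-OOM"))                        # ('<-', 'OOM')
```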

Use Cases

  • System prompts -- compress instructions before every API call, save tokens on every request
  • Context windows -- fit 3x more conversation history in the same window
  • Agent-to-agent -- structured packets replace English paragraphs between machines
  • Batch processing -- compress thousands of documents at pennies per million tokens
  • Audit trails -- every claim carries confidence, evidence, and temporal metadata
  • Document analysis -- feed compressed PDFs and articles to LLMs at a fraction of the cost
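For the context-window case, the arithmetic behind the "3x more history" claim is simple. A sketch assuming the ~3.2x average ratio from the benchmarks above and an illustrative 200k-token window:

```python
# How much English-equivalent history fits once compressed. The window size
# is an example; the ratio is the benchmark average from the table above.
CONTEXT_TOKENS = 200_000  # example window size (assumption)
AVG_RATIO = 3.2           # average compression ratio from the benchmarks

def history_capacity(window: int, ratio: float = AVG_RATIO) -> int:
    """English-equivalent tokens of history that fit after compression."""
    return int(window * ratio)

print(history_capacity(CONTEXT_TOKENS))  # 640000
```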

Links

Star This Repo

If AXL saves you tokens and money, leave a star.

License

Apache 2.0

AXL Protocol Inc. -- Vancouver, BC
