why strip words when you can compress meaning
Before/After - Benchmarks - Install - Free Tool - Machine-to-Machine - Why Not Strip
A deterministic compression engine that turns English into structured AXL v3/v3.1 packets -- cutting ~48% of input tokens and ~48% of output tokens while adding structure, confidence scores, and evidence chains. Round-trip proven. Decompresses back to English in under 1ms. Free. Now with v3.1 data anchoring: numeric bundles, entity anchors, and causal operators.
Token strippers remove vowels. We compress meaning into a 75-line protocol that machines actually parse.
Same information. One uses 69-94 tokens. The other uses 22-28. Both directions.
| Content type | Input tokens | AXL tokens | Saved | Ratio |
|---|---|---|---|---|
| Financial report | 10,050 | 5,242 | 47.8% | 1.9x |
| Medical notes | 8,400 | 1,680 | 80.0% | 5.0x |
| Legal document | 12,300 | 3,075 | 75.0% | 4.0x |
| Research paper | 15,000 | 4,500 | 70.0% | 3.3x |
| Chat history | 6,200 | 3,100 | 50.0% | 2.0x |
| Average | 10,390 | 3,519 | ~65% | 3.2x |
| Metric | Without AXL | With AXL | Savings |
|---|---|---|---|
| Tokens/message | ~10,000 | ~5,200 | 48% |
| Cost/day | $30.00 | $15.58 | $14.42 |
| Cost/month | $900.00 | $467.28 | $432.72 |
| Operation | Time | LLM calls | Cost |
|---|---|---|---|
| Compress | ~50ms | 0 | Free |
| Decompress (receipt) | <1ms | 0 | Free |
| Decompress (full) | 4-8s | 1 | Provider rate |
The compressor and receipt decompressor are zero-LLM. No API key. No network. No cost.
```shell
pip install axl-core
python -m spacy download en_core_web_sm
```

```python
from axl import compress, decompress

packets = compress("Your English text here.")
english = decompress(packets)  # receipt mode, <1ms, free
```

That's it. Two functions. No API key. No config.
Use it right now, no signup:
- Paste text, get AXL packets
- Switch to Decompress tab, get English back
- Upload files (.txt, .md, .html, .json, .csv)
- See token savings, cost projections, compression ratios
- Free tier, no auth required
```shell
# Compress
curl -X POST https://compress.axlprotocol.org/api/v1/compress-text \
  -H "Content-Type: application/json" \
  -d '{"text": "Your text here."}'

# Decompress (free, deterministic, <1ms)
curl -X POST https://compress.axlprotocol.org/api/v1/decompress-text \
  -H "Content-Type: application/json" \
  -d '{"packets": "OBS.90|@Apple|$amt:94.8B|NOW", "mode": "receipt"}'
```

This is where it gets interesting. Two agents talking in English waste tokens on words neither of them needs. AXL gives them a wire format.
```
Agent A                            Agent B
  |                                   |
  |-- OBS.90|@server|~down|NOW -->    |
  |                                   |-- INF.85|@server|<-OOM|~critical|NOW -->
  |<- SEK.80|@logs|^last:1H|NOW --    |
  |                                   |
```
Token cost: 12 tokens vs English: ~200 tokens
Roughly 16x more efficient than two agents exchanging English paragraphs for structured data they both need to parse anyway.
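The gap above can be sketched with a naive tokenizer. This is an illustration, not a benchmark: real LLM tokenizers (BPE) produce different absolute counts, and the English paraphrase here is an invented example, but the structural gap between a packet and prose holds either way.

```python
# Sketch: compare the wire cost of one AXL packet vs. an (invented)
# English equivalent, using a very rough word/punctuation tokenizer.
import re

def rough_tokens(text: str) -> int:
    """Rough token estimate: runs of word characters, or single symbols."""
    return len(re.findall(r"[A-Za-z0-9.]+|[^\sA-Za-z0-9]", text))

axl = "OBS.90|@server|~down|NOW"
english = ("I have observed, with roughly 90% confidence, that the "
           "server is currently down as of right now.")

print(rough_tokens(axl), rough_tokens(english))
```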
Every packet carries:
- Confidence (0-99) -- how certain is this claim
- Evidence (`<-cause`) -- why does the agent believe this
- Temporal (NOW, 1H, 1D, 1W) -- when is this relevant
- Type (OBS/INF/CON/MRG/SEK/YLD/PRD) -- what kind of cognitive operation
- Subject (`$@#!~^`) -- financial, entity, metric, event, state, value
Machines don't need prose. They need structured claims with metadata. That's what AXL is.
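The metadata fields above can be pulled out of a packet with plain string operations. A minimal sketch, assuming the `OP.CC|SUBJECT|...|TEMPORAL` field order shown in the grammar excerpt below; treat this as an illustration, not the reference parser.

```python
# Sketch: extract per-packet metadata from an AXL v3 packet string.
# Field layout inferred from the published grammar excerpt
# (OP.CC | SUBJECT.VALUE | ... | TEMPORAL); not the normative parser.
def packet_meta(packet: str) -> dict:
    fields = packet.split("|")
    op, _, cc = fields[0].partition(".")
    subject = fields[1]
    return {
        "op": op,                  # OBS/INF/CON/MRG/SEK/YLD/PRD
        "confidence": int(cc),     # 0-99
        "subject_tag": subject[0], # one of $ @ # ! ~ ^
        "subject": subject[1:],
        "temporal": fields[-1],    # NOW, 1H, 1D, 1W
    }

meta = packet_meta("OBS.90|@server|~down|NOW")
```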
Token strippers cut words. The LLM guesses what you meant.
AXL compresses meaning into typed packets with a formal grammar. The LLM doesn't guess -- it parses.
| | Token stripping | AXL v3 |
|---|---|---|
| Compression | 1.5-2x | 2-10x |
| Round-trip | No | Yes (deterministic) |
| Confidence scores | No | 0-99 per claim |
| Evidence chains | No | Yes (<- and RE:) |
| Machine parseable | No | Yes (BNF grammar) |
| Structure preserved | No | Typed packets |
| Spec | None | 75 lines |
| LLM required | Yes (output only) | No (compress + decompress free) |
Same source text. One strips words. The other compresses meaning.
AXL v3 is a 75-line grammar. 7 operations, 6 subject tags, evidence chains, confidence scores.
```
PACKET := ID:AGENT | OP.CC | SUBJECT.VALUE | ARGS | TEMPORAL [META]
OP     := OBS | INF | CON | MRG | SEK | YLD | PRD
TAG    := $ (financial) | @ (entity) | # (metric) | ! (event) | ~ (state) | ^ (value)
CC     := 00-99 (confidence)
```
Full spec: axlprotocol.org/v3
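A minimal validity check can be built directly from the grammar excerpt. This sketch covers only the `OP.CC|field...|TEMPORAL` shape shown here; the full 75-line spec allows more (agent IDs, ARGS, META), so a real validator would be broader.

```python
# Sketch: minimal packet validation against the 4-line grammar excerpt.
# Not the normative validator -- the full spec permits more forms.
import re

OPS = "OBS|INF|CON|MRG|SEK|YLD|PRD"
PACKET_RE = re.compile(
    rf"^({OPS})\.\d{{2}}"        # operation + 2-digit confidence
    r"(\|[$@#!~^<>=\-][^|]*)+"   # one or more tagged fields
    r"\|(NOW|\d+[HDW])$"         # temporal marker
)

def is_valid(packet: str) -> bool:
    return PACKET_RE.match(packet) is not None
```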
v3.1 adds four conventions on top of v3:
- Numeric bundles `label[$value]` -- pairs labels with values for unambiguous parsing
- Entity anchors `@ent.XX` -- pins short aliases to canonical identifiers across long contexts
- Causal operators `<-` (caused by), `=>` (results in), `->` (leads to) -- explicit directionality
- Anchor declarations -- a top-of-packet `@ent` block defines all entity aliases used
Spec: axlprotocol.org/v3.1
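The v3.1 conventions can be sketched in a few lines. The exact anchor-declaration syntax lives in the v3.1 spec; this sketch assumes a simple alias table and the `label[$value]` bundle shape named above, as an illustration of the idea rather than the normative format.

```python
# Sketch: handling v3.1 numeric bundles and entity anchors.
# Assumes label[$value] bundles and a {alias: canonical} anchor table;
# the normative syntax is defined by the v3.1 spec.
import re

BUNDLE_RE = re.compile(r"(\w+)\[\$([^\]]+)\]")

def parse_bundles(field: str) -> dict:
    """Extract label -> value pairs from numeric bundles."""
    return dict(BUNDLE_RE.findall(field))

def resolve_aliases(packet: str, anchors: dict) -> str:
    """Replace short entity aliases with canonical identifiers."""
    for alias, canonical in anchors.items():
        packet = packet.replace(alias, canonical)
    return packet

bundles = parse_bundles("rev[$94.8B] margin[$26.3%]")
resolved = resolve_aliases("OBS.90|@e1|rev[$94.8B]|NOW", {"@e1": "@Apple"})
```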
- System prompts -- compress instructions before every API call, save tokens on every request
- Context windows -- fit 3x more conversation history in the same window
- Agent-to-agent -- structured packets replace English paragraphs between machines
- Batch processing -- compress thousands of documents at pennies per million tokens
- Audit trails -- every claim carries confidence, evidence, and temporal metadata
- Document analysis -- feed compressed PDFs and articles to LLMs at a fraction of the cost
- Free tool: compress.axlprotocol.org
- Protocol: axlprotocol.org
- Spec: axlprotocol.org/v3 | v3.1
- Docs: docs.axlprotocol.org
- PyPI: pypi.org/project/axl-core
If AXL saves you tokens and money, leave it a star.
Apache 2.0
AXL Protocol Inc. -- Vancouver, BC
