# 🦅 Context Compressor

Pure context compression engine: not a memory manager, a lifesaver.

When your AI conversation context is about to explode, Context Compressor compresses it instantly. 180k → 35k. Continue chatting as if nothing happened.

License: MIT | OpenClaw Compatible

English | 中文 | 繁體中文 | 日本語 | 한국어 | Français | Español | Deutsch | Italiano | Русский | Português (Brasil)


## What is this vs. context-hawk?

| Tool | What it does | When to use |
|------|--------------|-------------|
| Context Compressor | Compresses the current conversation | Right now, when context is full |
| context-hawk | Manages persistent memory across sessions | Daily, between conversations |

Context Compressor = emergency rescue. context-hawk = daily maintenance.


## How it works

**Before compression (180k tokens, exploding):**

```
[Full conversation history: 180k tokens, at 88%]
System: You are a helpful assistant...
User: First question...
Assistant: First answer...
User: Second question...
... (longer and longer, costlier and costlier, slower and slower)
```

**After compression (35k tokens, clean):**

```json
{
  "compressed_prompt": [
    {"role": "system", "content": "[system prompt, preserved permanently]", "status": "preserved"},
    {"role": "user", "content": "[latest question, full original text]", "status": "preserved"},
    {"role": "assistant", "content": "[latest answer, full original text]", "status": "preserved"},
    {"role": "summary", "content": "[summary of the 45 earliest messages]", "status": "summarized"}
  ],
  "stats": {
    "original_tokens": 180000,
    "compressed_tokens": 35000,
    "ratio": "5.1x",
    "kept_messages": 5,
    "summarized_count": 87,
    "level": "normal"
  }
}
```
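The keep-recent + summarize strategy above can be sketched in a few lines. This is an illustrative simplification, not the project's actual implementation: the function name, the `keep_recent` default, and the placeholder summary text are all assumptions.

```python
# Minimal sketch of keep-recent + summarize compression:
# preserve the system prompt and the newest messages verbatim,
# collapse everything older into a single summary entry.

def compress(messages, keep_recent=2):
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    kept, old = rest[-keep_recent:], rest[:-keep_recent]

    summary = {
        "role": "summary",
        "content": f"[summary of {len(old)} earlier messages]",
        "status": "summarized",
    }
    compressed = (
        [dict(m, status="preserved") for m in system]
        + [dict(m, status="preserved") for m in kept]
        + ([summary] if old else [])
    )
    return {
        "compressed_prompt": compressed,
        "stats": {
            "kept_messages": len(system) + len(kept),
            "summarized_count": len(old),
        },
    }

history = [{"role": "system", "content": "You are a helpful assistant."}] + [
    {"role": "user", "content": f"question {i}"} for i in range(10)
]
result = compress(history, keep_recent=2)
```

A real implementation would also need a tokenizer to report `original_tokens` and `compressed_tokens`, and an actual summarizer instead of a placeholder string.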

## ✨ Core Features

| Feature | Description |
|---------|-------------|
| Auto-trigger | Compresses automatically at the 70% context threshold |
| 4 compression levels | light / normal / heavy / emergency |
| Structured JSON output | Full stats: tokens, ratio, counts |
| System prompt preserved | Role definitions are never compressed |
| Importance filtering | Discards noise; keeps decisions, rules, and code |
| Message deduplication | Merges repeated confirmations |
| Code collapsing | Long code blocks are folded to metadata |
| Pure Python | No database, no dependencies |
| Writes to memory | Compression history is saved to memory/today.md |
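As one example of the features above, message deduplication can be sketched as merging consecutive messages whose normalized content is identical. The normalization rule (case-insensitive, whitespace-stripped) is an assumption for illustration; the tool's actual matching logic may differ.

```python
# Illustrative sketch of message deduplication: drop a message when it
# repeats the previous one from the same role (e.g. "OK" / "ok").

def dedupe(messages):
    out = []
    for m in messages:
        key = (m["role"], m["content"].strip().lower())
        if out and key == (out[-1]["role"], out[-1]["content"].strip().lower()):
            continue  # repeated confirmation, skip it
        out.append(m)
    return out

msgs = [
    {"role": "user", "content": "OK"},
    {"role": "user", "content": "ok"},
    {"role": "assistant", "content": "Done."},
]
```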

## 🚀 Quick Start

```bash
# Install
chmod +x scripts/hawk-compress
ln -s scripts/hawk-compress /usr/local/bin/hawk-compress

# Compress the current conversation (auto-detect level)
hawk-compress

# Compress at a specific level
hawk-compress --level heavy

# Preview without writing
hawk-compress --dry-run

# Python API
python3 -c "
from context_compressor import ContextCompressor
c = ContextCompressor(keep_recent=5)
result = c.compress(your_chat_history)
print(result['stats']['ratio'])
"
```

## Compression Levels

| Level | When | Effect |
|-------|------|--------|
| light | 60-70% | Summarize messages > 30 days old |
| normal | 70-85% | Summarize + keep the 10 most recent (default) |
| heavy | 85-95% | Keep only the 5 most recent |
| emergency | > 95% | Keep only the 3 most recent |
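The level selection in the table maps directly to a threshold check on context usage. A minimal sketch, with an illustrative function name and the exact boundary behavior assumed (the table does not specify which level wins at exactly 70%, 85%, or 95%):

```python
# Pick a compression level from context usage (percent of window full),
# following the thresholds in the table above.

def pick_level(usage_pct):
    if usage_pct > 95:
        return "emergency"  # keep the 3 most recent only
    if usage_pct >= 85:
        return "heavy"      # keep the 5 most recent only
    if usage_pct >= 70:
        return "normal"     # summarize + keep the 10 most recent (default)
    if usage_pct >= 60:
        return "light"      # summarize messages > 30 days old
    return None             # no compression needed
```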

## Auto-Trigger

When context usage reaches 70%, every answer includes:

```
[🦅 Context: 72%] Compress recommended: /hawk-compress
  148k → ~35k | Save 113k tokens
```

At 85%+, the tool forces confirmation before continuing.
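Producing that banner is a matter of computing the usage percentage and the projected savings. A sketch under stated assumptions: the 205k context window is chosen here only so that 148k lands at 72% as in the example, and the fixed 35k target mirrors the README's figures; neither is confirmed by the tool.

```python
# Sketch of the auto-trigger banner: below the 70% threshold, stay silent;
# at or above it, report usage and the estimated savings from compressing.

def trigger_banner(used_tokens, window=205_000, target=35_000):
    pct = round(used_tokens / window * 100)
    if pct < 70:
        return None  # no banner needed yet
    saved = used_tokens - target
    return (
        f"[Context: {pct}%] Compress recommended: /hawk-compress\n"
        f"  {used_tokens // 1000}k -> ~{target // 1000}k | "
        f"Save {saved // 1000}k tokens"
    )

banner = trigger_banner(148_000)
```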


## File Structure

```
context-compressor/
├── SKILL.md
├── README.md
├── LICENSE
├── scripts/
│   └── hawk-compress           # Python CLI tool
└── references/
    ├── compression-logic.md    # Compression algorithm
    ├── auto-trigger.md         # Auto-trigger system
    ├── structured-output.md    # JSON output format
    └── cli.md                  # CLI reference
```

## Requirements

- Python 3.8+
- No external dependencies
- No database required

## License

MIT. Free to use, modify, and distribute.
