35 changes: 35 additions & 0 deletions docs/en/Components/Config.md
@@ -106,6 +106,41 @@ tools:

For the complete list of supported tools and custom tools, please refer to [here](./Tools.md)

## Memory Compression Configuration

> Optional, for context management in long conversations

```yaml
memory:
# Context Compressor: token detection + tool output pruning + LLM summary
context_compressor:
context_limit: 128000 # Model context window size
prune_protect: 40000 # Token threshold to protect recent tool outputs
prune_minimum: 20000 # Minimum pruning amount
reserved_buffer: 20000 # Reserved buffer
enable_summary: true # Enable LLM summary
summary_prompt: | # Custom summary prompt (optional)
Summarize this conversation...

# Refine Condenser: structured compression preserving execution trace
refine_condenser:
threshold: 60000 # Character threshold to trigger compression
system: ... # Custom compression prompt (optional)

# Code Condenser: generate code index files
code_condenser:
system: ... # Custom index generation prompt (optional)
code_wrapper: ['```', '```'] # Code block markers
```
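
How these thresholds interact is not spelled out above; one plausible reading is that compression triggers once the conversation exceeds `context_limit` minus `reserved_buffer`, and that pruning walks tool outputs from newest to oldest, protecting the most recent `prune_protect` tokens. A minimal Python sketch under those assumptions (`count_tokens`, the message shape, and both function names are hypothetical, not the actual implementation):

```python
def count_tokens(text: str) -> int:
    # Rough stand-in for a real tokenizer: ~4 characters per token.
    return max(1, len(text) // 4)

def should_compress(messages, context_limit=128000, reserved_buffer=20000):
    """Trigger compression once the usable context budget is exhausted."""
    used = sum(count_tokens(m["content"]) for m in messages)
    return used > context_limit - reserved_buffer

def prune_tool_outputs(messages, prune_protect=40000, prune_minimum=20000):
    """Prune old tool outputs, keeping the most recent prune_protect tokens."""
    pruned, protected = 0, 0
    # Walk newest-to-oldest: recent tool outputs are protected, older
    # ones are replaced with a placeholder.
    for m in reversed(messages):
        if m.get("role") != "tool":
            continue
        tokens = count_tokens(m["content"])
        if protected + tokens <= prune_protect:
            protected += tokens
        else:
            m["content"] = "[pruned]"
            pruned += tokens
    # Report whether pruning reclaimed at least prune_minimum tokens;
    # if not, the LLM summary step would presumably take over.
    return pruned >= prune_minimum
```

If pruning alone cannot reclaim enough budget, `enable_summary` suggests an LLM summary pass (driven by `summary_prompt`) is the fallback.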

Supported compressor types:

| Type | Use Case | Compression Method |
|------|----------|-------------------|
| `context_compressor` | General long conversations | Token detection + Tool pruning + LLM summary |
| `refine_condenser` | Preserve execution trace | Structured message compression (1:6 ratio) |
| `code_condenser` | Code generation tasks | Generate code index JSON |
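
The `code_wrapper` pair suggests the code condenser locates generated code by its fence markers before building the index. A sketch of such extraction, assuming the default triple-backtick markers; the actual index JSON format is not specified here, so only block collection is shown:

```python
def extract_code_blocks(text, code_wrapper=("```", "```")):
    """Collect the bodies of fenced code blocks delimited by code_wrapper."""
    open_mark, close_mark = code_wrapper
    blocks, inside, current = [], False, []
    for line in text.splitlines():
        stripped = line.strip()
        if not inside and stripped.startswith(open_mark):
            inside = True          # opening fence (a language tag may follow)
            current = []
        elif inside and stripped == close_mark:
            inside = False         # closing fence ends the current block
            blocks.append("\n".join(current))
        elif inside:
            current.append(line)
    return blocks
```

A non-default `code_wrapper` would presumably be used the same way when the model emits code with custom delimiters.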

## Others

> Optional, configure as needed
35 changes: 35 additions & 0 deletions docs/zh/Components/config.md
@@ -106,6 +106,41 @@ tools:

For the complete list of supported tools, and for custom tools, see [here](./tools)

## Memory Compression Configuration

> Optional, for context management in long conversations

```yaml
memory:
  # Context Compressor: token detection + tool output pruning + LLM summary
  context_compressor:
    context_limit: 128000    # Model context window size
    prune_protect: 40000     # Token threshold protecting recent tool outputs
    prune_minimum: 20000     # Minimum amount to prune
    reserved_buffer: 20000   # Reserved buffer
    enable_summary: true     # Whether to enable LLM summary
    summary_prompt: |        # Custom summary prompt (optional)
      Summarize this conversation...

  # Refine Condenser: structured compression preserving the execution trace
  refine_condenser:
    threshold: 60000         # Character threshold that triggers compression
    system: ...              # Custom compression prompt (optional)

  # Code Condenser: generate code index files
  code_condenser:
    system: ...              # Custom index generation prompt (optional)
    code_wrapper: ['```', '```'] # Code block markers
```

Supported compressor types:

| Type | Use Case | Compression Method |
|------|----------|--------------------|
| `context_compressor` | General long conversations | Token detection + tool pruning + LLM summary |
| `refine_condenser` | Preserving execution traces | Structured message compression (1:6 ratio) |
| `code_condenser` | Code generation tasks | Generates a code index JSON |

## Others

> Optional, configure as needed