
feat(cache): add shared index and slice-aware range caching#69

Merged
vansour merged 2 commits into main from
feat/cache-p3-slice-range
Apr 28, 2026

Conversation


@vansour vansour commented Apr 28, 2026

Summary

  • add shared cache index persistence and cross-instance fill lock coordination for cache zones
  • add convert_head handling and explicit cache bypass when response_buffering is off
  • add slice-aware range caching with route-level slice_size_bytes and downstream subrange trimming
  • update the cache support plan to mark delivered P2/P3 boundaries and remaining gaps
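The slice-aware range caching above can be sketched roughly as follows. This is a minimal illustration under assumed names (`align_to_slice` is hypothetical; the real alignment logic lives in `crates/rginx-http/src/cache/request.rs`): a requested byte range is widened to slice boundaries before the upstream fetch, so whole slices get cached and reused.

```rust
// Hypothetical sketch of slice alignment; not the actual rginx implementation.
// A requested inclusive byte range [start, end] is widened to the boundaries
// of the slices (of slice_size bytes) that contain it.
fn align_to_slice(start: u64, end: u64, slice_size: u64) -> (u64, u64) {
    assert!(slice_size > 0 && start <= end);
    let aligned_start = (start / slice_size) * slice_size;
    // Round the inclusive end up to the last byte of its slice.
    let aligned_end = (end / slice_size) * slice_size + slice_size - 1;
    (aligned_start, aligned_end)
}

fn main() {
    // A request for bytes=2-4 with slice_size_bytes = 8 fetches slice [0-7].
    assert_eq!(align_to_slice(2, 4, 8), (0, 7));
    // A range crossing a slice boundary widens to cover both slices.
    assert_eq!(align_to_slice(6, 9, 8), (0, 15));
    println!("ok");
}
```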

Testing

  • cargo fmt --all
  • cargo test -q -p rginx-http --lib
  • cargo test -q -p rginx-config --lib
  • cargo test -q --workspace

Copilot AI review requested due to automatic review settings April 28, 2026 12:39

coderabbitai Bot commented Apr 28, 2026

Warning

Rate limit exceeded

@vansour has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 25 minutes and 16 seconds before requesting another review.

To keep reviews running without waiting, you can enable usage-based add-on for your organization. This allows additional reviews beyond the hourly cap. Account admins can enable it under billing.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: ASSERTIVE

Plan: Pro

Run ID: 6c1be3c9-4f59-4852-93a1-13ffaa6a5737

📥 Commits

Reviewing files that changed from the base of the PR and between 5d31193 and 7f1558a.

📒 Files selected for processing (23)
  • crates/rginx-config/src/compile/tests.rs
  • crates/rginx-config/src/compile/tests/cache.rs
  • crates/rginx-config/src/compile/tests/cache_p2.rs
  • crates/rginx-config/src/compile/tests/cache_p3.rs
  • crates/rginx-http/src/cache/entry.rs
  • crates/rginx-http/src/cache/entry/response.rs
  • crates/rginx-http/src/cache/lookup.rs
  • crates/rginx-http/src/cache/manager.rs
  • crates/rginx-http/src/cache/manager/response.rs
  • crates/rginx-http/src/cache/runtime.rs
  • crates/rginx-http/src/cache/runtime/fill_lock.rs
  • crates/rginx-http/src/cache/shared.rs
  • crates/rginx-http/src/cache/shared/index_file.rs
  • crates/rginx-http/src/cache/store.rs
  • crates/rginx-http/src/cache/store/revalidate.rs
  • crates/rginx-http/src/cache/tests/lookup.rs
  • crates/rginx-http/src/cache/tests/lookup/keys.rs
  • crates/rginx-http/src/cache/tests/lookup/keys_head.rs
  • crates/rginx-http/src/cache/tests/lookup/keys_slice.rs
  • crates/rginx-http/src/cache/tests/storage_p3.rs
  • crates/rginx-http/src/proxy/forward/cache.rs
  • crates/rginx-http/src/proxy/tests/mod.rs
  • crates/rginx-http/src/proxy/tests/support.rs
📝 Walkthrough

Executive summary

This PR extends the cache system with a shared index, slice-based range request handling, HEAD-to-GET method conversion, and response-buffering bypass logic. The changes touch the configuration model, validation rules, runtime synchronization, and test coverage across multiple modules.

Changes

Cohort / Files | Summary
Docs & planning
CACHE_SUPPORT_PLAN.md
Moves the P2 and P3 features from "planned" to "delivered", specifying the concrete shared-index, range-slicing, and HEAD-conversion behavior and the response-buffering bypass semantics.
Config model & compilation
crates/rginx-config/src/model/cache.rs, crates/rginx-config/src/compile/cache.rs, crates/rginx-config/src/compile/tests/cache.rs, crates/rginx-config/src/compile/tests/cache_p1.rs
Adds optional config fields: shared_index on CacheZoneConfig; slice_size_bytes and convert_head on CacheRouteConfig. Extends tests to verify defaults and field propagation.
Config validation
crates/rginx-config/src/validate/cache.rs, crates/rginx-config/src/validate/tests/cache.rs
Adds validation rules: slice_size_bytes must be greater than 0 when specified, and is only allowed when cache.range_requests is set to Cache.
Core types
crates/rginx-core/src/config/cache.rs, crates/rginx-core/src/config/cache/key_template.rs
Extends CacheZone and RouteCachePolicy with the new fields; adds a references_method() helper on CacheKeyTemplate.
Cache request handling
crates/rginx-http/src/cache/request.rs
Refactors range handling to distinguish the requested range from the cached range; adds cache_key_method to support HEAD-to-GET conversion; implements slice alignment logic; adds an upstream range rewriting helper.
Cache entries & lookup
crates/rginx-http/src/cache/entry.rs, crates/rginx-http/src/cache/lookup.rs
Refactors response reading and building to be request-aware, including partial range response logic; distinguishes waits on local vs. external fill locks; keeps the shared index consistent across several cleanup paths.
Cache runtime & shared index
crates/rginx-http/src/cache/mod.rs, crates/rginx-http/src/cache/runtime.rs, crates/rginx-http/src/cache/shared.rs, crates/rginx-http/src/cache/manager.rs, crates/rginx-http/src/cache/manager/control.rs
Adds shared index bootstrap, sync, and persistence; implements a filesystem-level fill lock; integrates generation counting and modification-time tracking; syncs the index at key points in cache operations.
Cache store & maintenance
crates/rginx-http/src/cache/store.rs, crates/rginx-http/src/cache/store/maintenance.rs
Uses request-aware response building; computes downstream trimming needs on bypass and cleanup paths; persists the shared index after maintenance operations.
Proxy & forward cache
crates/rginx-http/src/proxy/forward/cache.rs, crates/rginx-http/src/proxy/forward/types.rs, crates/rginx-http/src/proxy/forward/attempt/cache_lookup.rs, crates/rginx-http/src/proxy/forward/attempt/primary.rs, crates/rginx-http/src/handler/dispatch/mod.rs, crates/rginx-http/src/handler/dispatch/route.rs
Propagates the response_buffering setting; short-circuits routing on cache bypass; applies cache-driven upstream request method and header changes.
Proxy tests
crates/rginx-http/src/proxy/tests/mod.rs, crates/rginx-http/src/proxy/tests/cache.rs
Adds a range-serving test server; adds tests for response-buffering bypass and slice reuse.
Cache lookup & storage tests
crates/rginx-http/src/cache/tests/lookup/keys.rs, crates/rginx-http/src/cache/tests/storage_p1.rs, crates/rginx-http/src/cache/tests/storage_p2.rs, crates/rginx-http/src/cache/tests/storage_p3.rs, crates/rginx-http/src/cache/tests/mod.rs
Extends configs with the new fields; adds test coverage for HEAD conversion, slice alignment, and cross-instance shared index sync.
State & snapshots
crates/rginx-http/src/state/cache.rs, crates/rginx-http/src/state/lifecycle/status.rs, crates/rginx-http/src/state/tests/snapshots.rs, crates/rginx-http/src/state/tests/support.rs
Migrates snapshot acquisition to snapshot_with_shared_sync(); adds shared index metadata verification; updates test configuration.

Sequence diagrams

sequenceDiagram
    participant Client
    participant CacheManager
    participant SharedIndex
    participant FileSystem
    participant UpstreamA as Upstream instance A
    participant UpstreamB as Upstream instance B

    Client->>CacheManager: lookup request (key)
    CacheManager->>SharedIndex: sync shared index
    SharedIndex->>FileSystem: read .rginx-index.json
    FileSystem-->>SharedIndex: index data
    SharedIndex-->>CacheManager: loaded index

    alt local cache hit
        CacheManager-->>Client: cached response
    else cache miss or external fill lock
        CacheManager->>CacheManager: check fill lock
        alt shared fill lock available
            CacheManager->>FileSystem: acquire external lock file
            CacheManager->>UpstreamA: upstream request
            UpstreamA-->>CacheManager: response
            CacheManager->>FileSystem: store cache entry
            CacheManager->>SharedIndex: update index
            SharedIndex->>FileSystem: persist .rginx-index.json
            CacheManager-->>Client: cached response
        else external lock held by another instance
            CacheManager->>CacheManager: wait for external lock release
            CacheManager-->>Client: cached response or bypass
        end
    end
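The cross-instance lock acquisition in the diagram above can be sketched with a lock file created exclusively, so only one instance wins the fill. This is an illustrative assumption (`try_acquire_fill_lock` is a hypothetical name, not the actual rginx API), relying on `create_new(true)`, which maps to O_EXCL and fails with `AlreadyExists` when another instance holds the lock:

```rust
use std::fs::OpenOptions;
use std::io::{ErrorKind, Write};
use std::path::Path;

// Hypothetical sketch of a filesystem-level fill lock; not the rginx code.
// Returns Ok(true) if we acquired the lock, Ok(false) if another instance
// holds it, and Err for real I/O failures (which should not be swallowed).
fn try_acquire_fill_lock(path: &Path, owner: &str) -> std::io::Result<bool> {
    match OpenOptions::new().write(true).create_new(true).open(path) {
        Ok(mut file) => {
            // Record the owner so stale locks can be diagnosed and expired.
            file.write_all(owner.as_bytes())?;
            Ok(true)
        }
        Err(e) if e.kind() == ErrorKind::AlreadyExists => Ok(false),
        Err(e) => Err(e),
    }
}

fn main() -> std::io::Result<()> {
    let path = std::env::temp_dir().join("rginx-demo.fill-lock");
    let _ = std::fs::remove_file(&path);
    assert!(try_acquire_fill_lock(&path, "instance-a")?); // first instance wins
    assert!(!try_acquire_fill_lock(&path, "instance-b")?); // second must wait
    std::fs::remove_file(&path)
}
```

Distinguishing `AlreadyExists` from other I/O errors matters here: a permissions or disk failure should surface rather than be misread as "lock held elsewhere".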
sequenceDiagram
    participant Client
    participant Router as Routing & forwarding
    participant CacheStore
    participant HeadConverter as HEAD converter
    participant SlicingEngine as Slicing engine
    participant Upstream

    Client->>Router: HEAD request /<br/>(convert_head=true)
    Router->>HeadConverter: normalize to GET
    HeadConverter->>CacheStore: build cache key with GET method
    CacheStore->>CacheStore: check cache

    alt cache miss
        CacheStore->>Upstream: forward GET request
        Upstream-->>CacheStore: 200 full response
        CacheStore->>CacheStore: store as cache entry
        CacheStore-->>Router: response body (stripped for HEAD)
    else cache hit
        CacheStore-->>Router: cached response (stripped for HEAD)
    end

    Router-->>Client: bodyless HEAD response

    Client->>Router: Range: bytes=2-4 request<br/>(slice_size_bytes=8)
    Router->>SlicingEngine: compute slice boundaries
    SlicingEngine->>SlicingEngine: align to [0-7]
    SlicingEngine->>CacheStore: look up or fetch slice

    alt slice missing
        CacheStore->>Upstream: Range: bytes=0-7 request
        Upstream-->>CacheStore: 206 Partial [0-7]
        CacheStore->>CacheStore: store whole slice
    end

    SlicingEngine->>SlicingEngine: extract [2-4]<br/>from cached slice
    SlicingEngine-->>Client: 206 Content-Range: 2-4
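The final trimming step in the diagram above, cutting a cached slice down to the originally requested subrange, can be sketched like this (a hypothetical `trim_to_subrange` helper for illustration; the real downstream trimming lives in the cache entry/store code):

```rust
// Hypothetical sketch of downstream subrange trimming; not the rginx code.
// `slice_start` is the absolute offset of the cached slice's first byte;
// [start, end] is the client's original inclusive byte range.
fn trim_to_subrange(slice: &[u8], slice_start: u64, start: u64, end: u64) -> Option<&[u8]> {
    let lo = start.checked_sub(slice_start)? as usize;
    let hi = end.checked_sub(slice_start)? as usize + 1;
    slice.get(lo..hi) // None if the request falls outside the cached slice
}

fn main() {
    let slice = *b"01234567"; // cached slice covering absolute bytes 0-7
    // bytes=2-4 served from the [0-7] slice, as in the diagram.
    assert_eq!(trim_to_subrange(&slice, 0, 2, 4), Some(&b"234"[..]));
    // A range extending past the slice cannot be served from it alone.
    assert_eq!(trim_to_subrange(&slice, 0, 6, 9), None);
    println!("ok");
}
```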

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

  • PR #65: added the base cache configuration, compilation, and validation logic in the same cache subsystem; this PR extends those modules with shared index, slicing, and HEAD conversion support.
  • PR #66: modified cache snapshots, admin responses, and state snapshot flows; this PR changes snapshot acquisition through the new snapshot_with_shared_sync() API.
  • PR #45: touched the propagation of the response_buffering and request_buffering fields through route dispatch and proxy forwarding; this PR integrates the response-buffering bypass logic at those same points.

Poem

🐰 The index gleams upon the disk,
Slices split each range just right,
HEAD turns GET, the caches sing,
Shared locks keep instances in sync,
From P2 to P3, features bloom like spring!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 23.88%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

  • Title check — ✅ Passed: the title clearly and accurately summarizes the main change (adding a shared cache index and slice-aware range caching), consistent with the PR objectives and all file changes.
  • Description check — ✅ Passed: the description is detailed and relevant, covering all major changes (shared index, convert_head, slice_size_bytes, response-buffering bypass, and the cache support plan update) and listing the full test commands.
  • Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.




@amazon-q-developer amazon-q-developer Bot left a comment


Summary

This PR adds sophisticated shared cache index persistence and slice-aware range caching functionality. The implementation is generally well-structured with good concurrent access patterns and error handling. However, I've identified 3 critical issues that should be addressed before merge:

Critical Issues to Fix:

  1. Crash risk in persist_shared_index_to_disk: Missing error recovery for the final modification time read creates state inconsistency
  2. Logic error in store_response: Premature shared index persistence on every cache attempt causes unnecessary I/O overhead
  3. Race condition in external lock cleanup: TOCTOU issue in wait_for_external_fill_lock can prevent proper lock acquisition

Overall Assessment:

The shared index coordination and fill lock mechanisms are well-designed, but the above issues could cause production problems including:

  • Cache state inconsistency across instances
  • Performance degradation from excessive I/O
  • Deadlocks or failed cache fills under concurrent load

Please address these findings before merging.


You can now have the agent implement changes and create commits directly on your pull request's source branch. Simply comment with /q followed by your request in natural language to ask the agent to make changes.


⚠️ This PR contains more than 30 files. Amazon Q is better at reviewing smaller PRs, and may miss issues in larger changesets.


Copilot AI left a comment


Pull request overview

This PR extends the HTTP cache subsystem to support cross-instance shared index persistence and coordination, adds slice-aware single-range caching (with downstream trimming), and wires new cache-related route/zone controls through config/validation/tests and runtime status snapshots.

Changes:

  • Add cache-zone shared index sidecar persistence and cross-instance fill-lock coordination; expose shared-index status in snapshots.
  • Add route cache controls for convert_head and slice_size_bytes, plus slice-aware range normalization (upstream) and subrange trimming (downstream).
  • Explicitly bypass cache when response_buffering is off; plumb response_buffering into proxy forwarding.

Reviewed changes

Copilot reviewed 36 out of 36 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
crates/rginx-core/src/config/cache.rs Adds shared_index, slice_size_bytes, convert_head to core cache config types.
crates/rginx-core/src/config/cache/key_template.rs Adds references_method() for safer GET/HEAD key separation when needed.
crates/rginx-config/src/model/cache.rs Adds new cache fields to the config model.
crates/rginx-config/src/compile/cache.rs Compiles new fields with defaults (shared index + convert_head).
crates/rginx-config/src/validate/cache.rs Validates slice size constraints and range-cache dependency.
crates/rginx-http/src/cache/shared.rs Introduces shared index sidecar read/write + sync logic and shared fill-lock path helper.
crates/rginx-http/src/cache/request.rs Implements convert-head-aware cache method, slice range normalization, and upstream Range rewriting.
crates/rginx-http/src/cache/entry.rs Adds request-aware response finalization to trim cached slices to the requested subrange.
crates/rginx-http/src/cache/manager.rs / lookup.rs / runtime.rs Integrates shared-index sync/persist, external fill-lock waits, and request-aware cached response building.
crates/rginx-http/src/cache/store.rs / store/maintenance.rs Persists shared index after store/purge/cleanup flows; trims downstream range responses even when not stored.
crates/rginx-http/src/proxy/forward/* Adds response_buffering option, cache bypass when buffering is off, and applies cache-driven upstream method/header changes.
crates/rginx-http/src/state/lifecycle/status.rs / state/cache.rs Ensures status/cache snapshots sync shared index before reporting zone stats.
crates/rginx-http/src//tests/ Adds coverage for shared-index sync, shared fill locks, convert_head, and slice range caching behavior.
CACHE_SUPPORT_PLAN.md Updates plan to reflect delivered P2/P3 boundaries and remaining gaps.



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 5d3119350c

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request implements shared cache indexing and slice-based range request caching, allowing for multi-process coordination and efficient large object handling. It also introduces method conversion for HEAD requests and explicit cache bypassing for unbuffered responses. Feedback was provided regarding the centralization of range-trimming logic to improve code maintainability.

coderabbitai Bot previously requested changes Apr 28, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
crates/rginx-http/src/cache/runtime.rs (1)

166-329: ⚠️ Potential issue | 🟠 Major

Extract the shared fill-lock logic out of runtime.rs.

This change packs the decision, waiting, staleness check, and file-lock implementation all into CacheZoneRuntime, pushing the production file to 350 lines, past the modularization-gate soft limit of 300. At minimum, extract a standalone shared_fill_lock module, or fold the logic into an existing support submodule; otherwise CI will keep failing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-http/src/cache/runtime.rs` around lines 166 - 329, The shared
fill-lock logic (decision/wait/lock file ops) bloats CacheZoneRuntime; extract
the file-locking code into a new module (e.g., shared_fill_lock) and have
CacheZoneRuntime call into it: move try_acquire_shared_fill_lock,
wait_for_external_fill_lock, and shared_fill_lock_is_stale (and any helper use
of shared::shared_fill_lock_path and unix_time_ms) into that module, expose
functions like acquire_shared_fill_lock(key, now, lock_age) -> Option<PathBuf>
and wait_for_shared_fill_lock(key, lock_timeout, lock_age) -> bool, update
fill_lock_decision to call the new acquire function and return
WaitExternal/Acquired with the returned path, and update any callsites and
imports accordingly so the runtime file shrinks below the modularization limit
and the behavior (generation, notify insertion) remains unchanged.
crates/rginx-http/src/cache/store.rs (2)

58-163: 🛠️ Refactor suggestion | 🟠 Major

这个文件需要在合并前拆分。

本次改动把分片裁剪、旁路回退、共享索引持久化、304 刷新和写盘都堆进同一文件里,已经直接触发 CI 的 modularization-gate(319 > 300)。建议至少把“下游响应收尾”和“共享索引/写盘更新”提成私有 helper,先把文件降回门限内。

Also applies to: 165-306

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-http/src/cache/store.rs` around lines 58 - 163, store_response
is doing too many responsibilities; extract the downstream-response finalization
and the shared-index / disk write logic into private helpers to reduce file size
below the modularization gate. Specifically: replace the downstream_response
closure with a private function (e.g., finalize_downstream_response) that calls
finalize_response_for_request; move the persistence sequence around
persist_zone_shared_index, write_cache_entry,
record_write_success/record_write_error, and update_index_after_store into a
private helper (e.g., persist_and_update_index) which takes context, final_key,
metadata, hash, paths, collected, and now; keep store_response as the high-level
flow that calls these helpers and returns downstream_response() at the end.
Ensure helper names are private (pub(crate) or fn) and update uses of
persist_zone_shared_index, write_cache_entry, and update_index_after_store to be
invoked from the new helper to reduce this file's token count.

62-95: ⚠️ Potential issue | 🔴 Critical

Don't turn upstream responses that legitimately skip the slice range into 502s.

When needs_downstream_range_trim is true, the !storable, no_cache, and freshness-failure branches are all routed through finalize_response_for_request(...). But that helper errors out when Content-Range is missing or mismatched, so an upstream that legitimately ignores Range and returns a full 200 OK entity gets converted into BAD_GATEWAY instead of being passed through. Only take that helper path once the response is confirmed trimmable; otherwise fall back to forwarding the response unchanged.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-http/src/cache/store.rs` around lines 62 - 95, The current logic
calls finalize_response_for_request (via downstream_response()) even when
needs_downstream_range_trim is false, which turns legitimately non-range
upstream replies (e.g., 200 OK without Content-Range) into errors/502; change
the flow so finalize_response_for_request is only invoked when
needs_downstream_range_trim is true and you have confirmed the response can be
trimmed, otherwise return the original response body/parts unchanged.
Concretely, keep the existing downstream_response closure but guard its
invocation with needs_downstream_range_trim (and only call it for the
trimming/storable branches); for branches where trimming is not required (e.g.,
!needs_downstream_range_trim && (!storable || no_cache || oversized)), rebuild
and return the response using the collected bytes and original parts instead of
calling finalize_response_for_request.
crates/rginx-http/src/cache/manager.rs (1)

51-293: 🛠️ Refactor suggestion | 🟠 Major

Split the new lookup branches out.

Shared index sync, wait strategies, local/external lock dispatch, and the hit/stale/refetch branches are now all crammed into one function, and CI is already failing the modularization-gate (302 > 300). Extracting the wait handling and the hit/invalidation recovery into their own helpers will make this feature much easier to maintain going forward.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-http/src/cache/manager.rs` around lines 51 - 293, the lookup
function is too large and unsplit, failing the modularization gate; extract the
wait handling and the hit/stale/refetch branches into standalone helpers.
Concretely: keep the high-level control flow in lookup, but move the
LookupDecision::Wait handling into an async helper such as
handle_lookup_wait(zone, strategy, policy, base_key, request) (responsible for
local/external lock dispatch, timeout handling, syncing the shared index, and
returning either a release result or the CacheStoreContext for constructing
CacheLookup::Miss), and move the complex inner branches for
LookupDecision::FreshHit / Stale / BackgroundUpdate / Miss (including
read_cached_response_for_request, stale_response_from_entry,
load_lookup_metadata, remove_index_entry/remove_cache_files_if_unindexed,
persist_zone_shared_index, and CacheStoreContext/Updating construction) into
one or more helpers (e.g., handle_fresh_hit(zone, key, entry, request, policy,
read_cached_body) and handle_miss_or_background_update(...)); lookup should
only match and forward to these helpers and, based on their return values,
continue the loop or return a CacheLookup, keeping lookup lean so the CI
modularization-gate passes. Referenced functions/variables: lookup,
LookupDecision, LookupWait, sync_zone_shared_index_if_needed,
read_cached_response_for_request, stale_response_from_entry,
load_lookup_metadata, remove_index_entry, remove_cache_files_if_unindexed,
persist_zone_shared_index, CacheStoreContext, CacheLookup.
crates/rginx-http/src/cache/entry.rs (1)

146-387: 🛠️ Refactor suggestion | 🟠 Major

Split the range trimming / response assembly out of this file.

CI already reports a modularization gate failure (441 lines against the 300-line soft limit). This file now handles cache reads, range parsing, range trimming, and response construction all at once; continuing to add 416/fallback/stats branches here will only make it harder to maintain. At minimum, move finalize_response_for_request(...), parse_cached_content_range(...), and build_response(...) into a standalone module.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-http/src/cache/entry.rs` around lines 146 - 387, This file is
too large and violates the modularization gate; extract range/response logic
into a new module by moving finalize_response_for_request,
parse_cached_content_range, and build_response into a dedicated module (e.g.,
cache::response or cache::range) and keep their visibility (pub(super)) so
callers still compile; update the original file to replace the moved functions
with use/imports and forward any helper types (CachedContentRange or helper
imports like StatusCode, HeaderMap, CONTENT_RANGE, full_body, HttpResponse) or
expose them from the new module as needed, adjust tests and mod declarations
(mod response;) accordingly, and run cargo check to fix any missing use paths or
visibility issues.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/rginx-config/src/compile/tests/cache.rs`:
- Around line 346-612: Test file grew past 600 lines because two large tests
were added; split each test into its own file or extract shared setup into
helpers. Move compile_cache_policy_supports_disabling_p2_defaults and
compile_cache_policy_supports_p3_slice_controls into separate test modules/files
(keeping temp_base_dir and compile_with_base available) or create a helper
function (e.g., make_default_cache_config/ make_cache_route_config) that builds
the repeated Config/CacheRouteConfig used by both tests and call that from the
two smaller tests so the original cache.rs stays under 600 lines.

In `@crates/rginx-http/src/cache/lookup.rs`:
- Around line 174-182: The code silently turns
build_cached_response_for_request(...) errors into None via .ok()? which hides
corrupt cached entries and prevents cleanup; replace the .ok()? usage with
explicit error handling: await build_cached_response_for_request(...), match on
Err(e) to process the failure by logging the error (including e and the affected
paths/metadata) and then evict/cleanup the bad index/file using the same
cleanup/eviction logic used by load_lookup_metadata() (call the same function or
inline the same steps), and only return None after eviction so the stale entry
won’t repeatedly fail.

In `@crates/rginx-http/src/cache/runtime.rs`:
- Around line 180-184: The current code collapses all std::io::Error into None
in try_acquire_shared_fill_lock, which causes real I/O failures (permissions,
disk, etc.) to be treated as "external lock exists" and return
FillLockDecision::WaitExternal; change try_acquire_shared_fill_lock to
distinguish ErrorKind::AlreadyExists (or equivalent "lock held") from other I/O
errors: return Ok(Some(path)) for success, Ok(None) only when the lock truly
exists (AlreadyExists), and Err(e) for other I/O failures; update the caller
(the match that produces FillLockDecision::WaitExternal) to handle Err by
logging the error (include e) and falling back to local locking or skipping
shared coordination (i.e., do not map Err to WaitExternal), or change the
function signature to Result<Option<PathBuf>, io::Error> and handle both
Ok(None) -> WaitExternal and Err(_) -> log+fallback to local lock.

In `@crates/rginx-http/src/cache/shared.rs`:
- Around line 173-211: persist_zone_shared_index currently overwrites the
on-disk .rginx-index.json using a local snapshot without checking disk
generation or making temp filenames unique, causing lost updates and tmp
collisions; update persist_zone_shared_index/persist_shared_index_to_disk so
before replacing the file you read the current on-disk index and its generation,
compare it to next_generation and if it differs perform a merge (merge disk
entries with our snapshot, resolving per-key by generation/timestamp) or retry,
and then perform an atomic write using a unique temp filename (e.g., include a
UUID or process+nano timestamp) and fsync/rename to the final path returned by
shared_index_path; ensure you still update zone.shared_index_generation and
zone.shared_index_last_modified_unix_ms only after a successful atomic replace.
- Around line 1-352: The file mixes serde schema, disk I/O, and runtime
sync/bootstrap logic; split responsibilities by extracting the file-format and
disk operations into a new "shared_index_file" module (move SharedIndexFile,
SharedIndexEntry, SharedVaryHeader, SharedAdmissionCount,
shared_file_from_index, index_from_shared_file, persist_shared_index_to_disk,
shared_index_path, set_file_modified_unix_ms, file_modified_unix_ms,
next_shared_index_modified_unix_ms, shared_index_modified_unix_ms,
shared_fill_lock_path and run_blocking into it) which exposes clear APIs:
read_shared_index(zone) -> Option<(SharedIndexFile, modified_unix_ms)>,
write_shared_index(zone, SharedIndexFile, minimum_modified_unix_ms) ->
modified_unix_ms, and encode/decode helpers; keep runtime concerns
(bootstrap_shared_index, load_shared_index_from_disk wrapper,
sync_zone_shared_index_if_needed, persist_zone_shared_index,
bootstrap_shared_index) in this file and call the new module's APIs, update
imports (CacheIndex, CacheIndexEntry, CachedVaryHeaderValue, unix_time_ms) and
adjust error handling to preserve behavior; update tests and references to use
the new module functions so the serde/schema + disk atomics are decoupled from
sync/locking logic.

In `@crates/rginx-http/src/cache/tests/lookup/keys.rs`:
- Around line 281-421: Split the new HEAD and slice tests into separate test
modules/files to keep the original keys.rs under the 400-line soft limit:
extract the repeated RouteCachePolicy construction into a shared helper (e.g., a
function like make_route_policy or route_policy_fixture) that returns a
RouteCachePolicy used by
head_cache_key_is_separated_when_convert_head_is_disabled,
cache_key_reuses_slice_for_subranges_within_same_slice, and
cross_slice_range_bypasses_when_slice_size_is_configured; move the HEAD-related
test(s) into a head_tests module/file and the slice/range tests into a
slice_tests module/file, import the helper and the existing helpers
render_cache_key, cache_request_bypass, and CacheRequest so each new file
compiles, and update test module declarations/uses accordingly so the original
file stays below the soft limit.

In `@crates/rginx-http/src/proxy/tests/mod.rs`:
- Around line 361-454: The test helper functions spawn_range_server and
parse_test_range_header are adding heavy test infrastructure into
proxy/tests/mod.rs; move them into a dedicated test support module (e.g.,
proxy/tests/support/range_server.rs or proxy/tests/support/mod.rs) and re-export
or mod‑use them from mod.rs; update the existing references in tests to import
the new module (use crate::tests::support::spawn_range_server or the chosen
path) and keep spawn_range_server and parse_test_range_header as pub (or
pub(crate)) so other tests can call them, ensuring the new file contains the
same logic but isolates test helpers from mod.rs.

---

Outside diff comments:
In `@crates/rginx-http/src/cache/entry.rs`:
- Around line 146-387: This file is too large and violates the modularization
gate; extract range/response logic into a new module by moving
finalize_response_for_request, parse_cached_content_range, and build_response
into a dedicated module (e.g., cache::response or cache::range) and keep their
visibility (pub(super)) so callers still compile; update the original file to
replace the moved functions with use/imports and forward any helper types
(CachedContentRange or helper imports like StatusCode, HeaderMap, CONTENT_RANGE,
full_body, HttpResponse) or expose them from the new module as needed, adjust
tests and mod declarations (mod response;) accordingly, and run cargo check to
fix any missing use paths or visibility issues.

In `@crates/rginx-http/src/cache/manager.rs`:
- Around line 51-293: 当前的 lookup 函数太大且未拆分,导致模块化门禁失败;把等待处理逻辑和命中/过期/回源分支拆成独立
helper 函数。具体修复:在 lookup 中保留高层控制流,但提取处理 LookupDecision::Wait 的代码到一个例如
handle_lookup_wait(zone, strategy, policy, base_key, request) 的 async
helper(负责本地/外部锁分流、超时处理、同步共享索引并返回释放结果或构造 CacheLookup::Miss 的
CacheStoreContext),并把处理 LookupDecision::FreshHit / Stale / BackgroundUpdate /
Miss 的内部复杂分支(包括
read_cached_response_for_request、stale_response_from_entry、load_lookup_metadata、remove_index_entry/remove_cache_files_if_unindexed、persist_zone_shared_index、构造
CacheStoreContext/Updating 等)提取到一个或多个 helpers(例如 handle_fresh_hit(zone, key,
entry, request, policy, read_cached_body) 和
handle_miss_or_background_update(...)),在 lookup 中仅做匹配转发到这些 helpers 并根据其返回值继续
loop 或返回 CacheLookup,从而让 lookup 保持精简并使 CI 的 modularization-gate
通过;引用函数/变量:lookup, LookupDecision, LookupWait, sync_zone_shared_index_if_needed,
read_cached_response_for_request, stale_response_from_entry,
load_lookup_metadata, remove_index_entry, remove_cache_files_if_unindexed,
persist_zone_shared_index, CacheStoreContext, CacheLookup。

In `@crates/rginx-http/src/cache/runtime.rs`:
- Around line 166-329: The shared fill-lock logic (decision/wait/lock file ops)
bloats CacheZoneRuntime; extract the file-locking code into a new module (e.g.,
shared_fill_lock) and have CacheZoneRuntime call into it: move
try_acquire_shared_fill_lock, wait_for_external_fill_lock, and
shared_fill_lock_is_stale (and any helper use of shared::shared_fill_lock_path
and unix_time_ms) into that module, expose functions like
acquire_shared_fill_lock(key, now, lock_age) -> Option<PathBuf> and
wait_for_shared_fill_lock(key, lock_timeout, lock_age) -> bool, update
fill_lock_decision to call the new acquire function and return
WaitExternal/Acquired with the returned path, and update any callsites and
imports accordingly so the runtime file shrinks below the modularization limit
and the behavior (generation, notify insertion) remains unchanged.

In `@crates/rginx-http/src/cache/store.rs`:
- Around lines 58-163: store_response has too many responsibilities; extract
the downstream-response finalization and the shared-index / disk write logic
into private helpers to reduce file size below the modularization gate.
Specifically: replace the downstream_response closure with a private function
(e.g., finalize_downstream_response) that calls finalize_response_for_request;
move the persistence sequence around persist_zone_shared_index,
write_cache_entry, record_write_success/record_write_error, and
update_index_after_store into a private helper (e.g., persist_and_update_index)
which takes context, final_key, metadata, hash, paths, collected, and now; keep
store_response as the high-level flow that calls these helpers and returns
downstream_response() at the end. Ensure helper names are private (pub(crate) or
fn) and update uses of persist_zone_shared_index, write_cache_entry, and
update_index_after_store to be invoked from the new helper to reduce this file's
token count.
- Around lines 62-95: The current logic calls finalize_response_for_request (via
downstream_response()) even when needs_downstream_range_trim is false, which
turns legitimately non-range upstream replies (e.g., 200 OK without
Content-Range) into errors/502; change the flow so finalize_response_for_request
is only invoked when needs_downstream_range_trim is true and you have confirmed
the response can be trimmed, otherwise return the original response body/parts
unchanged. Concretely, keep the existing downstream_response closure but guard
its invocation with needs_downstream_range_trim (and only call it for the
trimming/storable branches); for branches where trimming is not required (e.g.,
!needs_downstream_range_trim && (!storable || no_cache || oversized)), rebuild
and return the response using the collected bytes and original parts instead of
calling finalize_response_for_request.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: ASSERTIVE

Plan: Pro

Run ID: 2b9eea9b-00d5-4a9f-a4e1-4d49e3e38d44

📥 Commits

Reviewing files that changed from the base of the PR and between 90590d5 and 5d31193.

📒 Files selected for processing (36)
  • CACHE_SUPPORT_PLAN.md
  • crates/rginx-config/src/compile/cache.rs
  • crates/rginx-config/src/compile/tests/cache.rs
  • crates/rginx-config/src/compile/tests/cache_p1.rs
  • crates/rginx-config/src/model/cache.rs
  • crates/rginx-config/src/validate/cache.rs
  • crates/rginx-config/src/validate/tests/cache.rs
  • crates/rginx-core/src/config/cache.rs
  • crates/rginx-core/src/config/cache/key_template.rs
  • crates/rginx-http/src/cache/entry.rs
  • crates/rginx-http/src/cache/lookup.rs
  • crates/rginx-http/src/cache/manager.rs
  • crates/rginx-http/src/cache/manager/control.rs
  • crates/rginx-http/src/cache/mod.rs
  • crates/rginx-http/src/cache/request.rs
  • crates/rginx-http/src/cache/runtime.rs
  • crates/rginx-http/src/cache/shared.rs
  • crates/rginx-http/src/cache/store.rs
  • crates/rginx-http/src/cache/store/maintenance.rs
  • crates/rginx-http/src/cache/tests/lookup/keys.rs
  • crates/rginx-http/src/cache/tests/mod.rs
  • crates/rginx-http/src/cache/tests/storage_p1.rs
  • crates/rginx-http/src/cache/tests/storage_p2.rs
  • crates/rginx-http/src/cache/tests/storage_p3.rs
  • crates/rginx-http/src/handler/dispatch/mod.rs
  • crates/rginx-http/src/handler/dispatch/route.rs
  • crates/rginx-http/src/proxy/forward/attempt/cache_lookup.rs
  • crates/rginx-http/src/proxy/forward/attempt/primary.rs
  • crates/rginx-http/src/proxy/forward/cache.rs
  • crates/rginx-http/src/proxy/forward/types.rs
  • crates/rginx-http/src/proxy/tests/cache.rs
  • crates/rginx-http/src/proxy/tests/mod.rs
  • crates/rginx-http/src/state/cache.rs
  • crates/rginx-http/src/state/lifecycle/status.rs
  • crates/rginx-http/src/state/tests/snapshots.rs
  • crates/rginx-http/src/state/tests/support.rs
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Agent
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-04-13T10:53:29.903Z
Learnt from: vansour
Repo: vansour/rginx PR: 43
File: crates/rginx-config/src/validate/server.rs:421-439
Timestamp: 2026-04-13T10:53:29.903Z
Learning: In `crates/rginx-config/src/validate/server.rs` (`validate_http3`), the check for `tls.session_tickets` when `http3.early_data = true` intentionally only rejects `Some(true)` and allows `None`. Requiring `Some(false)` breaks 0-RTT resumption and the `routes_http3_early_data_by_replay_safety` test and the CI `http3` gate in the current rustls/quinn integration. This is a deliberate design decision and should not be flagged as a bug until the runtime/session policy is updated to support that configuration.

Applied to files:

  • crates/rginx-config/src/validate/cache.rs
  • crates/rginx-config/src/compile/tests/cache_p1.rs
  • crates/rginx-config/src/validate/tests/cache.rs
  • crates/rginx-config/src/compile/tests/cache.rs
🪛 GitHub Actions: CI
crates/rginx-http/src/proxy/tests/mod.rs

[error] 1-1: modularization-gate: new test soft-limit violation (470 lines; soft limit is 400).

crates/rginx-config/src/compile/tests/cache.rs

[error] 1-1: modularization-gate: new test hard-limit violation (612 lines; hard limit is 600).

crates/rginx-http/src/cache/runtime.rs

[error] 1-1: modularization-gate: new production soft-limit violation (350 lines; soft limit is 300).

crates/rginx-http/src/cache/manager.rs

[error] 1-1: modularization-gate: new production soft-limit violation (302 lines; soft limit is 300).

crates/rginx-http/src/cache/tests/lookup/keys.rs

[error] 1-1: modularization-gate: new test soft-limit violation (421 lines; soft limit is 400).

crates/rginx-http/src/cache/store.rs

[error] 1-1: modularization-gate: new production soft-limit violation (319 lines; soft limit is 300).

crates/rginx-http/src/cache/entry.rs

[error] 1-1: modularization-gate: new production soft-limit violation (441 lines; soft limit is 300).

crates/rginx-http/src/cache/shared.rs

[error] 1-1: modularization-gate: new production soft-limit violation (352 lines; soft limit is 300).

🪛 LanguageTool
CACHE_SUPPORT_PLAN.md

[uncategorized] ~43-~43: Did you mean “"不"齐”?
Context: ...裁切已完成并已落地。 - 下文保留这些条目,是为了把“已交付能力”和“下一轮继续补齐的剩余差距”放在同一张 对标清单里,避免后续阶段重复整理。 - 当前文档的...

(BU)


[uncategorized] ~223-~223: 1. When a verb is modified by an adverb, the particle should be 得; 2. when the object is omitted, the particle should be 的; this may be ambiguous. Did you mean: 启用"得"更复杂
Context: ... / Content-Length 并裁切 body。 - 暂不启用的更复杂 partial-body / 流式缓存 - 本轮交付边界:multi...

(wb4)


[uncategorized] ~251-~251: Did you mean “"不"更”?
Context: ...流式响应缓存模型 2. 若产品确认要支撑多进程共享同一本地 cache 目录,再补更严格的共享索引原子协调 3. 仅在产品边界明确后,再评估更广协议族缓存 交付...

(BU)


[uncategorized] ~256-~256: 您的意思是“"不"配置”?
Context: .... 仅在产品边界明确后,再评估更广协议族缓存 交付原则: - 每个子项都应先补配置模型、再补运行时、最后补 admin/status 和测试。 - 已交付的 ...

(BU)


[uncategorized] ~256-~256: 您的意思是“"不"运行”?
Context: ...界明确后,再评估更广协议族缓存 交付原则: - 每个子项都应先补配置模型、再补运行时、最后补 admin/status 和测试。 - 已交付的 P3 采用...

(BU)

🔇 Additional comments (20)
crates/rginx-http/src/proxy/forward/types.rs (1)

8-8: The new response_buffering field is wired in correctly.

The field extension on Line [8] stays consistent with downstream callers, so the policy is not lost in the forwarding context.

crates/rginx-http/src/state/cache.rs (1)

5-9: Lock scoping on the snapshot path is handled well.

Lines [5]-[9] clone the cache manager before taking the async snapshot, avoiding holding the lock across the await.

crates/rginx-http/src/handler/dispatch/mod.rs (1)

47-54: response_buffering is propagated completely through the dispatch chain.

The wiring at Line [51] and Line [169] is consistent, ensuring the route policy reaches the subsequent response-building flow.

Also applies to: 163-172

crates/rginx-http/src/state/tests/support.rs (1)

158-158: The test fixtures are synced with the new cache fields; right direction.

Filling in shared_index, slice_size_bytes, and convert_head keeps the test constructors consistent with the runtime model.

Also applies to: 182-183

crates/rginx-http/src/proxy/forward/attempt/primary.rs (1)

29-31: The reordering of upstream request preprocessing is reasonable.

Lines [29]-[31] rewrite method/headers before appending conditional headers, which makes the flow more robust.

crates/rginx-config/src/validate/cache.rs (1)

170-181: Validation of the slice_size_bytes configuration is complete.

The two constraints at Lines [170]-[180] (must be greater than 0, and requires range_requests = Cache) effectively keep invalid configurations out of later stages.

crates/rginx-http/src/cache/tests/storage_p1.rs (1)

199-199: Test runtime struct initialization is aligned with the new fields.

With the shared-index fields filled in, the test scenarios match the current runtime data structures more closely.

Also applies to: 244-244, 248-248, 252-253

crates/rginx-config/src/compile/cache.rs (1)

31-32: The compile stage lands the new cache fields correctly.

The shared_index default and the slice_size_bytes/convert_head pass-through are both reflected in the compiled output.

Also applies to: 91-91, 183-184

crates/rginx-http/src/state/lifecycle/status.rs (1)

16-20: Cloning before awaiting is the right change.

This way the read lock on self.inner is not held across snapshot_with_shared_sync()'s disk/lock waits, greatly reducing contention on the status/reload paths.

crates/rginx-http/src/cache/lookup.rs (2)

58-80: Splitting the local/external wait strategies is correct.

Here WaitLocal and WaitExternal are explicitly mapped to distinct LookupWait variants, so later wait paths no longer lose track of the coordination source, consistent with the shared fill-lock semantics.

Also applies to: 92-97


122-125: Persisting the shared index right after removing a bad entry is the right direction.

Calling persist_zone_shared_index(zone).await immediately after the local index entry is removed prevents other instances from continuing to see index state that is already invalid.

Also applies to: 136-138, 150-152, 193-195

crates/rginx-http/src/cache/manager/control.rs (3)

10-15: Syncing the shared index before snapshotting is a sensible order.

This way the status page/diagnostic snapshot reads a zone view closer to the on-disk truth, rather than only reflecting the current instance's local index.


17-21: Syncing the shared index before cleanup/purge is necessary.

These paths now pull the zone's shared index up to date before cleaning or deleting, which markedly reduces the chance of operating on a stale index across instances.

Also applies to: 24-35, 37-49, 51-63


65-69: Per-zone bypass statistics are very useful.

Attributing bypasses to a specific zone makes later hit-rate and bypass diagnostics much more readable.

crates/rginx-http/src/cache/request.rs (4)

19-24: The convert_head method normalization is applied consistently throughout.

Bypass decisions, the cache key, the upstream method, and the upstream Range rewrite now all follow the same cache_key_method(...) rule, avoiding HEAD/GET semantic divergence on sliced caching.

Also applies to: 54-90, 93-124, 176-178


126-156: The new Content-Range matching semantics are aligned with the slice window.

Upstream is no longer required to return the full slice; it only has to cover the downstream request window without exceeding the current cache-slice boundary, which lines up with the downstream trimming logic that follows.


215-222: The needs_downstream_trimming() boundary check is very clear.

It only triggers trimming when the requested range and the cached slice range disagree, which cleanly separates the exact-hit and slice-reuse paths.


199-213: The config layer already guards against integer division by zero via explicit validation

The slice_size_bytes value this function receives comes from the compile-stage RouteCachePolicy, which is compiled from a config that has passed validation. The compile flow (compile_with_base) calls validate(&raw)? as its first step, and the cache validation explicitly rejects configs with slice_size_bytes == 0, returning the error "cache.slice_size_bytes must be greater than 0". When validation fails, compilation aborts and never reaches runtime.

This code path therefore carries no integer-division-by-zero risk, and the function needs no additional local guard.

> Likely an incorrect or invalid review comment.
crates/rginx-http/src/cache/entry.rs (2)

103-144: Making cached-response construction request-aware is necessary.

The lookup, stale, and store paths can finally share a single response-construction flow, so range caching no longer forks into separate branches.


330-387: Extracting Content-Range parsing and response construction into helpers is a good direction.

These two pieces now have clear boundaries; splitting them into their own module as suggested above would make later testing and reuse even smoother.

@vansour
Owner Author

vansour commented Apr 28, 2026

This round addresses the merge-blocking comments and errors; the updates are in 7f1558a.

Fixes landed:

  • Split cache/shared, cache/runtime, cache/store, cache/entry, cache/manager, and the related test files; the modularization gate now passes locally.
  • Upstream method / Range / conditional request headers are now rewritten only when a request actually enters the cache-write path, so a lock-timeout bypass no longer sends a slice range to the upstream by mistake.
  • In the slice scenario, if the upstream ignores Range and returns 200, the response is now passed through directly instead of turning into a 502 for lack of Content-Range; a regression test covers this.
  • When building a response for a stale hit fails, we now log the failure and evict the bad entry instead of silently failing over and over.
  • Added a fallback for when reading the mtime fails after the shared index is written to disk; the fill-lock wait path now uses a single metadata check, removing the exists() / stale-check race.

Note: I am keeping the shared index persisted immediately after record_cache_admission_attempt(...). This makes the min_uses admission count shared across instances; if the write were deferred until admission succeeds, usage counts from multiple runtimes would never aggregate.


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7f1558a415


@vansour
Owner Author

vansour commented Apr 28, 2026

To complete this merge, the bot review threads still blocking the protection rules are closed out here in one pass:

  • The upstream Range rewrite issue in proxy/forward/cache.rs was fixed in 7f1558a.
  • Stale-read consistency in cache/lookup.rs, HEAD-only fills with convert_head = false in cache/manager.rs, and the hot-path stat and sidecar write-coalescing strategy in cache/shared.rs are all classified as follow-up optimizations, not blockers for this P3 merge.
  • In cache/store.rs, persisting the admission count before checking whether it meets the threshold is a deliberate design, so that min_uses counts are shared across instances; it will not be moved as suggested.
  • The remaining modularization / slice passthrough / stale cleanup / fill-lock race comments were all addressed in 7f1558a.

These bot threads will now be marked as resolved or as known follow-ups, and the merge will proceed.

@vansour vansour dismissed coderabbitai[bot]’s stale review April 28, 2026 13:23

Blocking bot review threads have been addressed or explicitly deferred as follow-up work; CI is green and this review is being dismissed to unblock merge.

@vansour vansour merged commit 0961d24 into main Apr 28, 2026
7 checks passed
@vansour vansour deleted the feat/cache-p3-slice-range branch April 28, 2026 13:23