Summary of Changes

Hello @tesgth032, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses the problem that a session could be "revived" through cached content after termination. By introducing a session termination marker and a remapping mechanism, it ensures that once a session is terminated its data is cleaned up and any request attempting to use the old session ID is redirected to a new session, improving the stability and security of session management.
Using Gemini Code AssistThe full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips. Invoking Gemini You can request assistance from Gemini at any point by creating a comment using either
Customization To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a Limitations & Feedback Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here. You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension. Footnotes
|
|
📝 Walkthrough

Introduces a termination marker and `TerminatedSessionError` into the session lifecycle; a positive termination check leads either to remapping or to creating a new session; `terminateSession` writes the termination marker first, then cleans up keys with a SCAN + delete pipeline and returns the result; the proxy layer maps termination errors to HTTP 410, with accompanying unit tests.
Code Review
This PR implements remapping logic after session termination to prevent terminated sessions from being "revived" and cleans up related Redis data. The overall implementation is clear, considers concurrent scenarios, and includes comprehensive test cases. However, a high-severity vulnerability was identified in the session termination logic: the use of unsanitized session IDs in a Redis SCAN pattern could lead to a denial-of-service attack. Additionally, a potential race condition was found. Please sanitize or validate the session ID before using it in Redis patterns and address the identified race condition.
src/lib/session-manager.ts

```ts
const matchPattern = `session:${sessionId}:*`;

do {
  const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
```
The terminateSession method is vulnerable to Redis Key Pattern Injection. The sessionId parameter is concatenated directly into a glob pattern for the SCAN command without any sanitization or validation. An attacker who can control the sessionId (e.g., by providing a wildcard character like *) can cause the SCAN command to match and subsequently delete all session-related keys in Redis. This would result in a system-wide denial of service by terminating all active sessions and deleting their associated data.
Suggested change:

```diff
-const matchPattern = `session:${sessionId}:*`;
+const matchPattern = `session:${sessionId.replace(/[*?[\]]/g, "")}:*`;
```
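Beyond stripping glob characters, the session ID can be rejected up front by validating it against a strict allowlist before it ever reaches a SCAN pattern. A minimal sketch under assumed conventions (`isValidSessionId`, the character set, and the 64-character limit are illustrative, not the project's actual rules):

```typescript
// Hypothetical guard: accept only the characters generated session IDs are assumed to use.
const SESSION_ID_PATTERN = /^[A-Za-z0-9_-]{1,64}$/;

function isValidSessionId(sessionId: string): boolean {
  return SESSION_ID_PATTERN.test(sessionId);
}

// Reject wildcard-injection attempts before building a Redis SCAN pattern.
function buildSessionMatchPattern(sessionId: string): string {
  if (!isValidSessionId(sessionId)) {
    throw new Error(`Invalid session ID: ${JSON.stringify(sessionId)}`);
  }
  return `session:${sessionId}:*`;
}
```

With this guard in place, an attacker-supplied `*` fails validation instead of matching every session key.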
src/lib/session-manager.ts

```ts
const newSessionId = SessionManager.generateSessionId();
const ttlSeconds = SessionManager.TERMINATED_SESSION_TTL;

await redis
  .pipeline()
  .setex(replacementKey, ttlSeconds, newSessionId)
  .expire(terminatedKey, ttlSeconds)
  .exec();

return { sessionId: newSessionId, remapped: true, terminatedSessionId: sessionId };
```
The current implementation has a race condition when creating the replacement session ID. When multiple requests concurrently request a new ID for a session that is terminated but has no replacement yet, each generates its own new session ID and overwrites `replacementKey` with `SETEX`. As a result, only one new ID is persisted as the replacement, and the session IDs created by the other requests are orphaned.

Consider using `SETNX` to set the replacement key atomically, so that only one request succeeds and the others read the already-set value. This avoids orphaned sessions and guarantees that all subsequent requests are remapped to the same new session ID.
```ts
const newSessionId = SessionManager.generateSessionId();
const ttlSeconds = SessionManager.TERMINATED_SESSION_TTL;

// Set the replacement atomically with SET NX to avoid the race condition
const wasSet = await redis.set(replacementKey, newSessionId, "EX", ttlSeconds, "NX");
if (wasSet) {
  // This request successfully set the replacement; refresh the termination marker TTL
  await redis.expire(terminatedKey, ttlSeconds);
  return { sessionId: newSessionId, remapped: true, terminatedSessionId: sessionId };
}

// Another request already set the replacement; read the existing value
const existingReplacement = await redis.get(replacementKey);
// If the key expired between SETNX and GET (extremely rare), fall back to the newSessionId we generated
return {
  sessionId: existingReplacement ?? newSessionId,
  remapped: true,
  terminatedSessionId: sessionId,
};
```
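The first-writer-wins behavior that `SET ... NX` provides can be illustrated without Redis. The sketch below uses an in-memory map as a stand-in for the keyspace; `setNX` and `claimReplacement` are illustrative names, not the project's API:

```typescript
// In-memory stand-in for Redis `SET key value NX`: true only for the first caller.
const store = new Map<string, string>();

function setNX(key: string, value: string): boolean {
  if (store.has(key)) return false;
  store.set(key, value);
  return true;
}

// Two concurrent requests each generate a candidate ID; only one wins the NX write,
// and the loser adopts the winner's value, so no session ID is orphaned.
function claimReplacement(key: string, candidate: string): string {
  if (setNX(key, candidate)) return candidate;
  return store.get(key)!; // another request set it first
}

const a = claimReplacement("replacement:s1", "new-id-A");
const b = claimReplacement("replacement:s1", "new-id-B");
// Both calls converge on the same replacement ID ("new-id-A")
```

The same convergence holds for any number of racing callers: every caller after the first reads the persisted value instead of writing its own.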
src/lib/session-manager.ts

```ts
await redis
  .pipeline()
  .setex(replacementKey, ttlSeconds, newSessionId)
  .expire(terminatedKey, ttlSeconds)
  .exec();
```
race condition: if resolveSessionIdIfTerminated is called concurrently for the same terminated sessionId, both calls might generate different newSessionId values and attempt to write them
the second call would overwrite the first call's replacement, resulting in inconsistent state
```ts
// 0. Write the termination marker first, to keep concurrent requests from reviving the session during the cleanup window
const markerResult = await redis.set(terminatedKey, terminatedAt, "EX", ttlSeconds);
const markerOk = markerResult === "OK";

if (!markerOk) {
  logger.warn("SessionManager: Failed to set termination marker", { sessionId });
}
```
race condition: concurrent request using the sessionId between marker write (line 2029) and key deletion (lines 2084-2114) could create new session keys, which would then be immediately deleted, causing data loss
recommend acquiring a lock before termination or checking the marker before any session write operations
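The second recommendation, checking the marker before any session write, can be sketched as a small guard. The sketch uses an in-memory map as a stand-in for Redis, and `writeSessionKey` is an illustrative name, not the project's actual API:

```typescript
// In-memory stand-in for the Redis keyspace.
const store = new Map<string, string>();

// Guard: refuse any session write once the termination marker exists,
// so writes racing with terminateSession cannot recreate keys mid-cleanup.
function writeSessionKey(sessionId: string, field: string, value: string): boolean {
  if (store.has(`session:${sessionId}:terminated`)) {
    return false; // session terminated; caller should remap instead of writing
  }
  store.set(`session:${sessionId}:${field}`, value);
  return true;
}

// s1 is terminated, so its write is blocked; s2 is untouched, so its write succeeds.
store.set("session:s1:terminated", "2024-01-01T00:00:00Z");
const blocked = writeSessionKey("s1", "info", "{}");
const allowed = writeSessionKey("s2", "info", "{}");
```

In a real deployment the marker check and the write would need to be atomic (e.g. a Lua script or WATCH/MULTI) to fully close the window; the guard above only illustrates the check-before-write ordering.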
src/lib/session-manager.ts

```ts
do {
  const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
    string,
    string[],
  ];
  const nextCursor = scanResult[0];
  const keys = scanResult[1] ?? [];
  cursor = nextCursor;

  if (keys.length === 0) continue;

  const deletePipeline = redis.pipeline();
  for (const key of keys) {
    if (key === terminatedKey) continue;
    deletePipeline.del(key);
  }

  const deleteResults = await deletePipeline.exec();
  if (!deleteResults) continue;

  for (const [err, result] of deleteResults) {
    if (!err && typeof result === "number" && result > 0) {
      deletedKeys += result;
    }
  }
} while (cursor !== "0");
```
Redis SCAN doesn't guarantee atomicity - keys created during the scan loop might be missed
if new session:{sessionId}:* keys are written between SCAN iterations, they won't be deleted, allowing the terminated session to be partially revived
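One pragmatic mitigation (not part of this PR) is to repeat the scan-and-delete pass until a full pass deletes nothing, so keys written mid-scan are caught on a later pass. A minimal sketch with an in-memory set standing in for the keyspace; `deleteMatching` and `purgeSession` are illustrative names:

```typescript
// In-memory stand-in for the Redis keyspace; deleteMatching mimics one SCAN+DEL pass.
const keyspace = new Set<string>(["session:s1:info", "session:s1:messages"]);

function deleteMatching(prefix: string): number {
  let deleted = 0;
  for (const key of [...keyspace]) {
    if (key.startsWith(prefix)) {
      keyspace.delete(key);
      deleted++;
    }
  }
  return deleted;
}

// Repeat until a full pass finds nothing, so keys written between passes
// are still cleaned up eventually instead of surviving a single sweep.
function purgeSession(sessionId: string): number {
  let total = 0;
  let pass: number;
  do {
    pass = deleteMatching(`session:${sessionId}:`);
    total += pass;
  } while (pass > 0);
  return total;
}

const removed = purgeSession("s1");
```

This only narrows the window rather than closing it; combining the repeated sweep with the write-side marker check is what actually prevents revival.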
src/lib/session-manager.ts

```ts
if (key === terminatedKey) continue;
deletePipeline.del(key);
```
skipping `terminatedKey` deletion means the termination marker persists correctly, but the logic comment on line 2084 says "删除 session:* 相关 key" ("delete session:*-related keys"), which could be misleading since one key is intentionally preserved
Additional Comments (1)

src/lib/session-manager.ts (lines 574-594):

`refreshSessionTTL` recreates session keys that were deleted during termination. If a remapped session calls `refreshSessionTTL` (lines 469, 507), it will recreate the `session:{sessionId}:key`, `session:{sessionId}:provider`, and `session:{sessionId}:last_seen` keys that should remain deleted.
Actionable comments posted: 2
🧹 Nitpick comments (5)
tests/unit/lib/session-manager-terminate-session.test.ts (1)

33-36: The `deletePipelineRef.exec` mock return value is imprecise for the "userId unavailable" scenario.

`deletePipelineRef.exec` always returns `[[null, 1]]`, but in the second test (lines 95-113) the scan only returns `terminatedKey`, which is skipped, so no `del` command is actually enqueued. A real Redis pipeline returns `[]` from `exec` when no commands are queued. The current mock over-counts `deletedKeys` by 1; this does not affect the test outcome (since `markerOk` is already `true`), but it lowers test fidelity. Consider using a more precise return value for `deletePipelineRef.exec` in the second test, or have the mock return results dynamically based on the actual number of `del` calls.

Also applies to: 43-48

tests/unit/lib/session-manager-terminated-remap.test.ts (1)

29-31: Missing mocks for modules such as `@/lib/config/env.schema` and `@/app/v1/_lib/codex/session-extractor`.

`session-manager.ts` imports `@/lib/config/env.schema`, `@/app/v1/_lib/codex/session-extractor`, `@/lib/utils/message-redaction`, `@/lib/utils/request-sequence`, and other modules at the top level. The test currently passing in CI means these modules import cleanly in the test environment, but if they ever pick up side-effecting dependencies (such as a database connection), the test could fail unexpectedly. Consider adding explicit mocks for the key dependencies to improve test stability.

tests/unit/repository/provider.test.ts (1)

54-54: The mock return value in the empty-update test case was not updated in sync.

The mocks on lines 72 and 104 were changed from `{ rowCount: N }` to `{ count: N }`, but line 54 still uses `{ rowCount: 0 }`. This test path returns 0 early (empty-array input) before `execute` is called, so test correctness is unaffected for now, but keeping the mocks consistent avoids confusion during future maintenance.

Suggested change:

```diff
-const executeMock = vi.fn(async () => ({ rowCount: 0 }));
+const executeMock = vi.fn(async () => ({ count: 0 }));
```

src/lib/session-manager.ts (1)

108-119: TERMINATED_SESSION_TTL supports 3 different environment variable names.

The code checks `SESSION_TERMINATION_TTL_SECONDS`, `SESSION_TERMINATION_TTL`, and `TERMINATED_SESSION_TTL` in turn. Unless this is a migration shim for legacy configuration, consider converging on a single name to reduce maintenance and documentation cost.

tests/unit/components/form/client-restrictions-editor.test.tsx (1)

87-87: Locating `TagInput` instances by call index is fragile.

`getTagInputOnChange(0)` and `getTagInputOnChange(1)` depend on the render order of `TagInput` inside the component; if the allowed/blocked render positions ever change, both tests will silently exercise the wrong callback without failing. Consider locating the corresponding `TagInput` instance via its `value` prop, which makes the intent explicit and removes the implicit ordering assumption.

Suggested refactor:

```diff
-function getTagInputOnChange(callIndex: number): (values: string[]) => void {
-  const calls = vi.mocked(TagInput).mock.calls;
-  const call = calls[callIndex];
-  if (!call) throw new Error(`TagInput call ${callIndex} not found (got ${calls.length} calls)`);
-  return (call[0] as TagInputProps).onChange;
+function getTagInputOnChangeByValue(currentValue: string[]): (values: string[]) => void {
+  const calls = vi.mocked(TagInput).mock.calls;
+  const call = calls.find((c) => (c[0] as TagInputProps).value === currentValue);
+  if (!call) throw new Error(`TagInput with value ${JSON.stringify(currentValue)} not found`);
+  return (call[0] as TagInputProps).onChange;
 }
```

Then match by the initial `value` in the tests:

```diff
-act(() => getTagInputOnChange(0)(["a", "b", "a", "c"]));
+const onChange = getTagInputOnChangeByValue([]); // allowed initial value is []
+act(() => onChange(["a", "b", "a", "c"]));
```

Also applies to: 97-97
📒 Files selected for processing (5):

- src/lib/session-manager.ts
- tests/unit/components/form/client-restrictions-editor.test.tsx
- tests/unit/lib/session-manager-terminate-session.test.ts
- tests/unit/lib/session-manager-terminated-remap.test.ts
- tests/unit/repository/provider.test.ts
src/lib/session-manager.ts

```ts
private static async resolveSessionIdIfTerminated(sessionId: string): Promise<{
  sessionId: string;
  remapped: boolean;
  terminatedSessionId: string;
}> {
  const redis = getRedisClient();
  if (!redis || redis.status !== "ready") {
    return { sessionId, remapped: false, terminatedSessionId: sessionId };
  }

  const terminatedKey = SessionManager.getTerminationMarkerKey(sessionId);
  const replacementKey = SessionManager.getTerminationReplacementKey(sessionId);

  try {
    const values = await redis.mget(terminatedKey, replacementKey);
    const terminatedAt = values?.[0] ?? null;
    const replacement = values?.[1] ?? null;

    if (!terminatedAt) {
      return { sessionId, remapped: false, terminatedSessionId: sessionId };
    }

    if (replacement && typeof replacement === "string" && replacement.length > 0) {
      return { sessionId: replacement, remapped: true, terminatedSessionId: sessionId };
    }

    const newSessionId = SessionManager.generateSessionId();
    const ttlSeconds = SessionManager.TERMINATED_SESSION_TTL;

    await redis
      .pipeline()
      .setex(replacementKey, ttlSeconds, newSessionId)
      .expire(terminatedKey, ttlSeconds)
      .exec();

    return { sessionId: newSessionId, remapped: true, terminatedSessionId: sessionId };
  } catch (error) {
    logger.error("SessionManager: Failed to resolve terminated session", {
      error,
      sessionId,
    });
    return { sessionId, remapped: false, terminatedSessionId: sessionId };
  }
}
```
`resolveSessionIdIfTerminated` has a TOCTOU race: concurrent requests may generate different replacements.

When a session is terminated and has no replacement yet (`terminatedAt` exists but `replacement` is null), multiple concurrent requests can all reach the branch at lines 155-162, each generating a different `newSessionId` and writing it to `replacementKey`. The later write overwrites the earlier replacement, leaving the earlier request's `newSessionId` as an orphaned session that no subsequent request can find.

Consider writing the replacement atomically with Redis `SET ... NX`, so it is only written when absent; if the write fails, read again to obtain the existing value:
Suggested change:

```diff
 const newSessionId = SessionManager.generateSessionId();
 const ttlSeconds = SessionManager.TERMINATED_SESSION_TTL;

-await redis
-  .pipeline()
-  .setex(replacementKey, ttlSeconds, newSessionId)
-  .expire(terminatedKey, ttlSeconds)
-  .exec();
-
-return { sessionId: newSessionId, remapped: true, terminatedSessionId: sessionId };
+// Use SET NX to avoid concurrent overwrites
+const setResult = await redis.set(replacementKey, newSessionId, "EX", ttlSeconds, "NX");
+await redis.expire(terminatedKey, ttlSeconds);
+
+if (setResult === "OK") {
+  return { sessionId: newSessionId, remapped: true, terminatedSessionId: sessionId };
+}
+
+// Another concurrent request already wrote the replacement; read and use it
+const existing = await redis.get(replacementKey);
+return {
+  sessionId: existing || newSessionId,
+  remapped: true,
+  terminatedSessionId: sessionId,
+};
```
Verify each finding against the current code and only fix it if needed.
In `@src/lib/session-manager.ts` around lines 129 - 172,
resolveSessionIdIfTerminated has a TOCTOU race when multiple requests generate
different replacements; change the write to replacementKey to an atomic SET NX
with TTL and, if the SET returns null (meaning another request set it first),
re-read replacementKey and use that value instead of the locally generated
newSessionId. Concretely, inside resolveSessionIdIfTerminated replace the
pipeline.setex(replacementKey, ttlSeconds, newSessionId).expire(terminatedKey,
ttlSeconds).exec() sequence with an atomic SET replacementKey newSessionId NX EX
ttlSeconds (or client.set with { NX: true, EX: ttlSeconds }) and separately
ensure terminatedKey gets expire(ttlSeconds); if the SET fails, read
replacementKey and return its value as the remapped sessionId; keep
generateSessionId, replacementKey, terminatedKey and TERMINATED_SESSION_TTL
references.
Code Review Summary
This PR implements session termination improvements to prevent terminated sessions from being revived. The implementation uses Redis markers and remapping logic to ensure terminated sessions are properly handled.
PR Size: M
- Lines changed: 433
- Files changed: 5
Issues Found
| Category | Critical | High | Medium | Low |
|---|---|---|---|---|
| Logic/Bugs | 0 | 0 | 0 | 0 |
| Security | 0 | 0 | 0 | 0 |
| Error Handling | 0 | 0 | 0 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
| Tests | 0 | 1 | 0 | 0 |
| Simplification | 0 | 0 | 0 | 0 |
High Priority Issues (Should Fix)
1. Test Mock May Not Match Production Code (Confidence: 85)

Location: tests/unit/repository/provider.test.ts:72 and :101

The test mocks change from `{ rowCount: 2 }` to `{ count: 2 }`, but the production code at src/repository/provider.ts:712 still accesses `(result as any).rowCount`:

```ts
return (result as any).rowCount || 0;
```

postgres.js (used in this project via Drizzle ORM) returns `count` for affected rows, not `rowCount`. This test change is correct for the actual DB behavior, but it exposes that the production code is accessing the wrong property.

Suggested fix:

Update the production code in src/repository/provider.ts:712:

```ts
// Before
return (result as any).rowCount || 0;
// After
return (result as any).count || 0;
```

Alternatively, if this test change is intentional to expose the bug, consider adding a TODO comment or fixing the production code in a follow-up PR.
Review Coverage
- Logic and correctness - Clean (remapping logic is sound)
- Security (OWASP Top 10) - Clean
- Error handling - Clean (all error paths logged)
- Type safety - Clean
- Documentation accuracy - Clean
- Test coverage - Good (new remap tests comprehensive)
- Code clarity - Good
Positive Notes
- Well-structured termination marker system with configurable TTL
- Proper use of SCAN instead of explicit key deletion
- Comprehensive new test file for the remap functionality
- All error paths properly logged with context
Automated review by Claude AI
```diff
 vi.resetModules();

-const executeMock = vi.fn(async () => ({ rowCount: 2 }));
+const executeMock = vi.fn(async () => ({ count: 2 }));
```
[High] [TEST-MISMATCH] Test mock returns count but production code accesses rowCount
Why this is a problem: The test mock changes from { rowCount: 2 } to { count: 2 }, but the production code at src/repository/provider.ts:712 still accesses (result as any).rowCount. postgres.js (used via Drizzle ORM) returns count for affected rows, not rowCount. This change is correct for the actual DB behavior, but it exposes that the production code is accessing the wrong property.
Suggested fix:

Update the production code in src/repository/provider.ts:712:

```ts
// Before
return (result as any).rowCount || 0;
// After
return (result as any).count || 0;
```

```diff
 vi.resetModules();

-const executeMock = vi.fn(async () => ({ rowCount: 1 }));
+const executeMock = vi.fn(async () => ({ count: 1 }));
```
[High] [TEST-MISMATCH] Same issue as above - test mock returns count but production code at src/repository/provider.ts:712 accesses rowCount
This change is correct for postgres.js behavior but exposes that the production code needs to be updated to use count instead of rowCount.
Adjusted the implementation per the "terminating a session should block the old sessionId" semantics (commit 3ea25ec):
```ts
const markerResult = await redis.set(terminatedKey, terminatedAt, "EX", ttlSeconds);
const markerOk = markerResult === "OK";

if (!markerOk) {
  logger.warn("SessionManager: Failed to set termination marker", { sessionId });
}
```
concurrent requests using the terminated sessionId between marker write (line 2025) and ZSET cleanup (line 2063-2078) could write new keys (session:{sessionId}:info, session:{sessionId}:messages, etc.) through storeSessionInfo and other methods in session-guard.ts lines 110-176
these newly written keys would then be deleted by the SCAN loop (lines 2085-2110), causing data loss for concurrent requests that haven't been blocked yet
src/lib/session-manager.ts

```ts
do {
  const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
    string,
    string[],
  ];
  const nextCursor = scanResult[0];
  const keys = scanResult[1] ?? [];
  cursor = nextCursor;

  if (keys.length === 0) continue;

  const deletePipeline = redis.pipeline();
  for (const key of keys) {
    if (key === terminatedKey) continue;
    deletePipeline.del(key);
  }

  const deleteResults = await deletePipeline.exec();
  if (!deleteResults) continue;

  for (const [err, result] of deleteResults) {
    if (!err && typeof result === "number" && result > 0) {
      deletedKeys += result;
    }
  }
} while (cursor !== "0");
```
Redis SCAN doesn't provide snapshot isolation - if concurrent requests write new session:{sessionId}:* keys between SCAN iterations, those keys might be missed
example: SCAN returns cursor "100", then a concurrent request writes session:{sessionId}:req:5:messages, then SCAN continues from cursor "100" - the new key is missed and the terminated session can be partially revived
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 2085-2110
Comment:
Redis SCAN doesn't provide snapshot isolation - if concurrent requests write new `session:{sessionId}:*` keys between SCAN iterations, those keys might be missed
example: SCAN returns cursor "100", then a concurrent request writes `session:{sessionId}:req:5:messages`, then SCAN continues from cursor "100" - the new key is missed and the terminated session can be partially revived
How can I resolve this? If you propose a fix, please make it concise.

```ts
if (redis && redis.status === "ready") {
  const terminatedAt = await SessionManager.readTerminationMarker(redis, clientSessionId);
  if (terminatedAt) {
    logger.info("SessionManager: Rejected terminated client session", {
      keyId,
      sessionId: clientSessionId,
      terminatedAt,
      messagesLength,
    });
    throw new TerminatedSessionError(clientSessionId, terminatedAt);
  }
```
termination check happens AFTER concurrent detection logic (lines 427-440) which calls SessionTracker.getConcurrentCount(effectiveSessionId) - this means concurrent detection still accesses the terminated session before it's blocked
move the termination check to line 406, before the effectiveSessionId assignment and concurrent detection
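The reordering can be sketched as an early guard: read the marker before the tracker is ever consulted. The names below (`SessionDeps`, `resolveSessionId`) are illustrative, not the project's actual signatures:

```typescript
class TerminatedSessionError extends Error {
  constructor(
    public readonly sessionId: string,
    public readonly terminatedAt: string
  ) {
    super(`session ${sessionId} terminated at ${terminatedAt}`);
    this.name = "TerminatedSessionError";
  }
}

interface SessionDeps {
  readTerminationMarker(sessionId: string): Promise<string | null>;
  getConcurrentCount(sessionId: string): Promise<number>;
}

// Termination is checked first, so concurrent detection (and every other
// session operation) never touches a terminated session.
async function resolveSessionId(deps: SessionDeps, clientSessionId: string): Promise<string> {
  const terminatedAt = await deps.readTerminationMarker(clientSessionId);
  if (terminatedAt) {
    throw new TerminatedSessionError(clientSessionId, terminatedAt);
  }
  await deps.getConcurrentCount(clientSessionId); // runs only for live sessions
  return clientSessionId;
}
```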
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 407-417
Comment:
termination check happens AFTER concurrent detection logic (lines 427-440) which calls `SessionTracker.getConcurrentCount(effectiveSessionId)` - this means concurrent detection still accesses the terminated session before it's blocked
move the termination check to line 406, before the `effectiveSessionId` assignment and concurrent detection
How can I resolve this? If you propose a fix, please make it concise.
Actionable comments posted: 2
♻️ Duplicate comments (1)
tests/unit/proxy/session-guard-terminated-session.test.ts (1)
17-27: The inline `TerminatedSessionError` class is an exact duplicate of the one in `session-guard-warmup-intercept.test.ts`. The two implementations are identical (including the constructor logic and the `this.name` assignment); consider extracting it into a shared fixture file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/proxy/session-guard-terminated-session.test.ts` around lines 17 - 27, The TerminatedSessionError class is duplicated across tests; extract the class into a shared test fixture (e.g., a new test helper module) and have both tests import TerminatedSessionError instead of redefining it; remove the inline class from session-guard-terminated-session.test.ts and session-guard-warmup-intercept.test.ts, update their imports to reference the new fixture, and ensure the exported class preserves the constructor, sessionId, terminatedAt, and name behavior.
🧹 Nitpick comments (5)
src/lib/session-manager.ts (3)
420-420: `effectiveSessionId` is a redundant alias.

`effectiveSessionId` always equals `clientSessionId` and is never reassigned; every usage can reference `clientSessionId` directly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` at line 420, Remove the needless alias effectiveSessionId and use clientSessionId directly: delete the line declaring const effectiveSessionId = clientSessionId; replace all usages of effectiveSessionId with clientSessionId (search for the symbol in the file, e.g., within the session manager methods), then run tests/lint to ensure no references remain and there is no shadowing of clientSessionId.
135-153: `redis: any` loses type safety.

The `redis` parameter of `readTerminationMarker` is typed `any`, bypassing TypeScript's type checking. Consider `NonNullable<ReturnType<typeof getRedisClient>>`, consistent with the file's other private methods.

♻️ Suggested change

```diff
+import type { getRedisClient } from "./redis";
+
   private static async readTerminationMarker(
-    redis: any,
+    redis: NonNullable<ReturnType<typeof getRedisClient>>,
     sessionId: string
   ): Promise<string | null> {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` around lines 135 - 153, The readTerminationMarker method currently types its redis parameter as any; change its signature to use a concrete Redis client type by replacing "redis: any" with "redis: NonNullable<ReturnType<typeof getRedisClient>>" so it matches other private methods and restores type safety for SessionManager.readTerminationMarker; locate the method by name (readTerminationMarker) and update the parameter type and any related JSDoc/comments accordingly while keeping the existing logic unchanged and retaining the call to SessionManager.getTerminationMarkerKey.
2019-2130: If SCAN throws, an already-written termination marker is misreported as `false`.

The outer try-catch (lines 2019 / 2123) returns `false` when the SCAN loop (line 2086) throws, even though the termination marker at line 2025 was already written to Redis. A caller relying on the return value to decide whether termination succeeded may trigger unnecessary retries, though retries are idempotent and safe. If callers only care whether the marker was written, consider moving the SCAN cleanup into its own try-catch so the marker result and the cleanup result are returned separately, or re-check `markerOk` in the catch block.

🤖 Prompt for AI Agents
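The suggested separation looks roughly like this - one try/catch per concern, so a cleanup failure cannot overwrite a successful marker write. The `writeMarker` and `cleanup` callbacks stand in for the real marker write and ZSET + SCAN/DEL logic and are assumptions, not the project's API:

```typescript
interface TerminateOutcome {
  markerOk: boolean;
  deletedKeys: number;
}

async function terminateWithSeparateResults(
  writeMarker: () => Promise<void>,
  cleanup: () => Promise<number>
): Promise<TerminateOutcome> {
  let markerOk = false;
  try {
    await writeMarker();
    markerOk = true;
  } catch {
    // marker write failed; markerOk stays false
  }

  let deletedKeys = 0;
  try {
    deletedKeys = await cleanup();
  } catch {
    // cleanup failed, but a successful marker write is still reported
  }

  return { markerOk, deletedKeys };
}
```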
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` around lines 2019 - 2130, The termination marker write (terminatedKey via SessionManager.getTerminationMarkerKey) can succeed but exceptions in the SCAN/delete loop cause the outer catch to return false; change the code so the marker write and its boolean markerOk are treated separately from cleanup: after setting terminatedKey (and computing markerOk), wrap the ZSET removal and the SCAN/delete loop in their own try-catch (or on error catch, perform a quick redis.get(terminatedKey) to re-check marker existence) and ensure the final return uses markerOk || deletedKeys > 0 (or markerOk if cleanup failed but marker exists). Update references: SessionManager.getTerminationMarkerKey, terminatedKey, markerOk, the SCAN loop and pipeline.exec, and the outer catch to avoid overriding a successful marker write.

tests/unit/proxy/session-guard-warmup-intercept.test.ts (1)
17-27: The `TerminatedSessionError` test class is an exact duplicate of the one in `session-guard-terminated-session.test.ts`. Both test files define the same inline `TerminatedSessionError` class (including the constructor and `name` assignment). Extract it into a shared test helper (e.g., `tests/unit/fixtures/session-errors.ts`) and import it in both places to reduce maintenance cost.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/proxy/session-guard-warmup-intercept.test.ts` around lines 17 - 27, The TerminatedSessionError class is duplicated in tests/unit/proxy/session-guard-warmup-intercept.test.ts and tests/unit/proxy/session-guard-terminated-session.test.ts; extract the class into a shared test helper (e.g., export class TerminatedSessionError from tests/unit/fixtures/session-errors.ts) and replace the inline definitions in both test files with an import of TerminatedSessionError, keeping the constructor signature and name assignment identical to preserve behavior.

tests/unit/lib/session-manager-terminated-remap.test.ts (1)
40-40: The test filename does not match the `describe` block title. The filename `session-manager-terminated-remap.test.ts` implies the "remap" scenario is covered, but the `describe` block is titled "terminated blocking" and the tests only verify blocking behavior. Align the naming with what the tests actually cover.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/lib/session-manager-terminated-remap.test.ts` at line 40, The test filename (session-manager-terminated-remap.test.ts) does not match the describe block ("SessionManager.getOrCreateSessionId - terminated blocking"); either rename the describe/title to reflect the "terminated remap" scenario or rename the file to match the "terminated blocking" behavior—update the describe string in the test suite (the describe(...) that starts with "SessionManager.getOrCreateSessionId") and any nested it()/test names so they accurately reflect the remap scenario, or alternatively rename the test file to match the blocking scenario; ensure consistency between the file name and describe/title to avoid confusion in test reporting.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app/v1/_lib/proxy/session-guard.ts`:
- Line 187: The throw in session-guard.ts uses a hardcoded Chinese message in
new ProxyError(...); replace that literal with an i18n lookup (e.g., use the
project's translation API to fetch key like "errors.session_terminated") and
pass the resulting localized string to ProxyError, and add the corresponding
translations for zh-CN, zh-TW, en, ja and ru to the i18n resource bundle; ensure
any necessary import or context (e.g., the i18n instance or translator function
used elsewhere in this module) is used so the error message is localized at
throw time.
In `@tests/unit/lib/session-manager-terminated-remap.test.ts`:
- Around line 32-38: The test helper makePipeline lacks a mock for
pipeline.expire, so SessionManager.refreshSessionTTL's call to
pipeline.expire(...) throws (and is swallowed), making the test pass wrongly;
update makePipeline to add an expire mock (e.g., expire: vi.fn(() => pipeline))
so pipeline.expire returns the pipeline like setex and exec do, ensuring
refreshSessionTTL actually exercises the expire path in the tests.
---
Duplicate comments:
In `@tests/unit/proxy/session-guard-terminated-session.test.ts`:
- Around line 17-27: The TerminatedSessionError class is duplicated across
tests; extract the class into a shared test fixture (e.g., a new test helper
module) and have both tests import TerminatedSessionError instead of redefining
it; remove the inline class from session-guard-terminated-session.test.ts and
session-guard-warmup-intercept.test.ts, update their imports to reference the
new fixture, and ensure the exported class preserves the constructor, sessionId,
terminatedAt, and name behavior.
---
Nitpick comments:
In `@src/lib/session-manager.ts`:
- Line 420: Remove the needless alias effectiveSessionId and use clientSessionId
directly: delete the line declaring const effectiveSessionId = clientSessionId;
replace all usages of effectiveSessionId with clientSessionId (search for the
symbol in the file, e.g., within the session manager methods), then run
tests/lint to ensure no references remain and there is no shadowing of
clientSessionId.
- Around line 135-153: The readTerminationMarker method currently types its
redis parameter as any; change its signature to use a concrete Redis client type
by replacing "redis: any" with "redis: NonNullable<ReturnType<typeof
getRedisClient>>" so it matches other private methods and restores type safety
for SessionManager.readTerminationMarker; locate the method by name
(readTerminationMarker) and update the parameter type and any related
JSDoc/comments accordingly while keeping the existing logic unchanged and
retaining the call to SessionManager.getTerminationMarkerKey.
- Around line 2019-2130: The termination marker write (terminatedKey via
SessionManager.getTerminationMarkerKey) can succeed but exceptions in the
SCAN/delete loop cause the outer catch to return false; change the code so the
marker write and its boolean markerOk are treated separately from cleanup: after
setting terminatedKey (and computing markerOk), wrap the ZSET removal and the
SCAN/delete loop in their own try-catch (or on error catch, perform a quick
redis.get(terminatedKey) to re-check marker existence) and ensure the final
return uses markerOk || deletedKeys > 0 (or markerOk if cleanup failed but
marker exists). Update references: SessionManager.getTerminationMarkerKey,
terminatedKey, markerOk, the SCAN loop and pipeline.exec, and the outer catch to
avoid overriding a successful marker write.
In `@tests/unit/lib/session-manager-terminated-remap.test.ts`:
- Line 40: The test filename (session-manager-terminated-remap.test.ts) does not
match the describe block ("SessionManager.getOrCreateSessionId - terminated
blocking"); either rename the describe/title to reflect the "terminated remap"
scenario or rename the file to match the "terminated blocking" behavior—update
the describe string in the test suite (the describe(...) that starts with
"SessionManager.getOrCreateSessionId") and any nested it()/test names so they
accurately reflect the remap scenario, or alternatively rename the test file to
match the blocking scenario; ensure consistency between the file name and
describe/title to avoid confusion in test reporting.
In `@tests/unit/proxy/session-guard-warmup-intercept.test.ts`:
- Around line 17-27: The TerminatedSessionError class is duplicated in
tests/unit/proxy/session-guard-warmup-intercept.test.ts and tests/unit/
proxy/session-guard-terminated-session.test.ts; extract the class into a shared
test helper (e.g., export class TerminatedSessionError from
tests/unit/fixtures/session-errors.ts) and replace the inline definitions in
both test files with an import of TerminatedSessionError, keeping the
constructor signature and name assignment identical to preserve behavior.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (5)
src/app/v1/_lib/proxy/session-guard.ts
src/lib/session-manager.ts
tests/unit/lib/session-manager-terminated-remap.test.ts
tests/unit/proxy/session-guard-terminated-session.test.ts
tests/unit/proxy/session-guard-warmup-intercept.test.ts
```ts
  sessionId: error.sessionId,
  terminatedAt: error.terminatedAt,
});
throw new ProxyError("Session 已被终止,请创建新的会话后重试", 410);
```
The hardcoded Chinese string violates the i18n guidelines.

"Session 已被终止,请创建新的会话后重试" is hardcoded in Chinese, violating the project's i18n requirement to support 5 languages (zh-CN, zh-TW, en, ja, ru). The error message must be provided through the i18n system.
As per coding guidelines: "All user-facing strings must use i18n (5 languages supported: zh-CN, zh-TW, en, ja, ru). Never hardcode display text."
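A sketch of what that lookup could look like. The `t` helper, the `errors.session_terminated` key, and all five catalog entries below are hypothetical illustrations; the project's real i18n API and reviewed translations should be used instead:

```typescript
type Locale = "zh-CN" | "zh-TW" | "en" | "ja" | "ru";

// Hypothetical catalog entries for the new error key (one per supported locale).
const messages: Record<Locale, Record<string, string>> = {
  "zh-CN": { "errors.session_terminated": "Session 已被终止,请创建新的会话后重试" },
  "zh-TW": { "errors.session_terminated": "Session 已被終止,請建立新的會話後重試" },
  en: { "errors.session_terminated": "Session has been terminated; create a new session and retry" },
  ja: { "errors.session_terminated": "セッションは終了しています。新しいセッションを作成して再試行してください" },
  ru: { "errors.session_terminated": "Сессия завершена; создайте новую сессию и повторите попытку" },
};

// Falls back to English, then to the raw key, so a missing entry never throws.
function t(locale: Locale, key: string): string {
  return messages[locale]?.[key] ?? messages.en[key] ?? key;
}

// At the throw site the hardcoded literal becomes:
//   throw new ProxyError(t(requestLocale, "errors.session_terminated"), 410);
```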
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/app/v1/_lib/proxy/session-guard.ts` at line 187, The throw in
session-guard.ts uses a hardcoded Chinese message in new ProxyError(...);
replace that literal with an i18n lookup (e.g., use the project's translation
API to fetch key like "errors.session_terminated") and pass the resulting
localized string to ProxyError, and add the corresponding translations for
zh-CN, zh-TW, en, ja and ru to the i18n resource bundle; ensure any necessary
import or context (e.g., the i18n instance or translator function used elsewhere
in this module) is used so the error message is localized at throw time.
A follow-up security gap fix (commit c22a530):
src/lib/session-manager.ts
Outdated
```diff
@@ -1971,21 +2074,8 @@ export class SessionManager {
       );
     }

-    // 2. Delete all session-related keys
+    // 2. Remove from the ZSET (always attempt, even if the lookup failed)
     const pipeline = redis.pipeline();

-    // Basic binding info
-    pipeline.del(`session:${sessionId}:provider`);
-    pipeline.del(`session:${sessionId}:key`);
-    pipeline.del(`session:${sessionId}:info`);
-    pipeline.del(`session:${sessionId}:last_seen`);
-    pipeline.del(`session:${sessionId}:concurrent_count`);
-
-    // Optional: messages and response (if storage is enabled)
-    pipeline.del(`session:${sessionId}:messages`);
-    pipeline.del(`session:${sessionId}:response`);
-
-    // 3. Remove from the ZSET (always attempt, even if the lookup failed)
     pipeline.zrem(getGlobalActiveSessionsKey(), sessionId);

     if (providerId) {
@@ -2000,30 +2090,52 @@ export class SessionManager {
       pipeline.zrem(getUserActiveSessionsKey(userId), sessionId);
     }

-    // 4. Delete hash mappings (if any)
-    // Note: the hash cannot be reverse-looked-up directly; only known session keys can be cleaned
-    // The hash expires on its own via TTL and does not affect functionality
-
-    const results = await pipeline.exec();
+    await pipeline.exec();
```
critical race: concurrent requests can write new session keys (via storeSessionInfo, storeSessionMessages, storeSessionRequestBody in session-guard.ts:110-176) between marker write (line 2040) and ZSET cleanup (line 2093), causing data loss when SCAN deletes these newly written keys
recommend checking termination marker in all storeSession* methods before writing any keys
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 2040-2093
Comment:
critical race: concurrent requests can write new session keys (via `storeSessionInfo`, `storeSessionMessages`, `storeSessionRequestBody` in `session-guard.ts:110-176`) between marker write (line 2040) and ZSET cleanup (line 2093), causing data loss when SCAN deletes these newly written keys
recommend checking termination marker in all `storeSession*` methods before writing any keys
How can I resolve this? If you propose a fix, please make it concise.
src/lib/session-manager.ts
Outdated
```ts
do {
  const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
    string,
    string[],
  ];
  const nextCursor = scanResult[0];
  const keys = scanResult[1] ?? [];
  cursor = nextCursor;

  if (keys.length === 0) continue;

  const deletePipeline = redis.pipeline();
  for (const key of keys) {
    if (key === terminatedKey) continue;
    deletePipeline.del(key);
  }

  const deleteResults = await deletePipeline.exec();
  if (!deleteResults) continue;

  for (const [err, result] of deleteResults) {
    if (!err && typeof result === "number" && result > 0) {
      deletedKeys += result;
    }
  }
} while (cursor !== "0");
```
SCAN lacks snapshot isolation - keys written between iterations are missed, allowing partial session resurrection
consider using Lua script to atomically delete all matching keys, or implement distributed lock
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 2101-2126
Comment:
SCAN lacks snapshot isolation - keys written between iterations are missed, allowing partial session resurrection
consider using Lua script to atomically delete all matching keys, or implement distributed lock
How can I resolve this? If you propose a fix, please make it concise.

```ts
  ); // short-context threshold
  private static readonly ENABLE_SHORT_CONTEXT_DETECTION =
    process.env.ENABLE_SHORT_CONTEXT_DETECTION !== "false"; // enabled by default
  private static readonly TERMINATED_SESSION_TTL = (() => {
    const raw =
      process.env.SESSION_TERMINATION_TTL_SECONDS ??
      process.env.SESSION_TERMINATION_TTL ??
      process.env.TERMINATED_SESSION_TTL ??
      "";
    const parsed = Number.parseInt(raw, 10);
    if (Number.isFinite(parsed) && parsed > 0) {
      return parsed;
    }
    return 24 * 60 * 60; // 1 day
```
environment variable precedence uses ?? which treats empty string "" as valid
Before:

```ts
  ); // short-context threshold
  private static readonly ENABLE_SHORT_CONTEXT_DETECTION =
    process.env.ENABLE_SHORT_CONTEXT_DETECTION !== "false"; // enabled by default
  private static readonly TERMINATED_SESSION_TTL = (() => {
    const raw =
      process.env.SESSION_TERMINATION_TTL_SECONDS ??
      process.env.SESSION_TERMINATION_TTL ??
      process.env.TERMINATED_SESSION_TTL ??
      "";
    const parsed = Number.parseInt(raw, 10);
    if (Number.isFinite(parsed) && parsed > 0) {
      return parsed;
    }
    return 24 * 60 * 60; // 1 day
```

Suggested:

```ts
  private static readonly TERMINATED_SESSION_TTL = (() => {
    const raw =
      process.env.SESSION_TERMINATION_TTL_SECONDS ||
      process.env.SESSION_TERMINATION_TTL ||
      process.env.TERMINATED_SESSION_TTL ||
      "";
    const parsed = Number.parseInt(raw, 10);
    if (Number.isFinite(parsed) && parsed > 0) {
      return parsed;
    }
    return 24 * 60 * 60; // 1 day
  })();
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 115-128
Comment:
environment variable precedence uses `??` which treats empty string `""` as valid
```suggestion
private static readonly TERMINATED_SESSION_TTL = (() => {
const raw =
process.env.SESSION_TERMINATION_TTL_SECONDS ||
process.env.SESSION_TERMINATION_TTL ||
process.env.TERMINATED_SESSION_TTL ||
"";
const parsed = Number.parseInt(raw, 10);
if (Number.isFinite(parsed) && parsed > 0) {
return parsed;
}
return 24 * 60 * 60; // 1 day
})();
```
How can I resolve this? If you propose a fix, please make it concise.
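The behavioral difference is small but decisive for env vars, where `FOO=""` exports an empty string rather than `undefined`:

```typescript
// Wrapper only exists to keep the declared type string | undefined.
function env(value: string | undefined): string | undefined {
  return value;
}

// An exported-but-empty env var is "", which is falsy but NOT nullish.
const empty = env("");
const unset = env(undefined);

const nullishKeepsEmpty = empty ?? "86400"; // "" - ?? only skips null/undefined
const logicalSkipsEmpty = empty || "86400"; // "86400" - || skips every falsy value

const nullishOnUnset = unset ?? "86400"; // "86400" - both operators agree when unset
const logicalOnUnset = unset || "86400"; // "86400"

// With ??, an empty SESSION_TERMINATION_TTL_SECONDS shadows the later
// fallbacks and Number.parseInt("") yields NaN, silently landing on the default.
```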
Additional Comments (2)
terminated sessions should be rejected before any session operations - move termination check earlier in the flow

Prompt To Fix With AI

This is a comment left during a code review.
Path: src/app/v1/_lib/proxy/session-guard.ts
Line: 96-105
Comment:
termination check in `getOrCreateSessionId` (line 422) happens AFTER concurrent detection logic calls `SessionTracker.getConcurrentCount(effectiveSessionId)` (line 442)
terminated sessions should be rejected before any session operations - move termination check earlier in the flow
How can I resolve this? If you propose a fix, please make it concise.
Prompt To Fix With AI

This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 2084-2095
Comment:
comment on line 2095 says "删除 session:* 相关 key" ("delete session:*-related keys") but `terminatedKey` is intentionally preserved - clarify this in the comment to prevent confusion
How can I resolve this? If you propose a fix, please make it concise.
Actionable comments posted: 1
🧹 Nitpick comments (8)
src/lib/session-manager.ts (5)
135-153: The `redis` parameter is typed `any`; consider narrowing it.

`readTerminationMarker` and its call sites all pass the Redis client as `any`, losing type protection. Annotate it with the return type of `getRedisClient` (or ioredis's `Redis` type) so a misspelled method name is caught at compile time.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` around lines 135 - 153, The redis parameter of readTerminationMarker (and other SessionManager methods) is typed as any; change it to the specific Redis client type returned by getRedisClient (or ioredis.Redis) to regain type safety and catch method-name/spelling errors at compile time. Update the signature of readTerminationMarker to use that concrete type, and update all call sites that pass a redis client to use the same type (or import the ioredis.Redis type and annotate variables returned from getRedisClient), ensuring SessionManager.getTerminationMarkerKey remains unchanged.
434-434: `effectiveSessionId` is a redundant alias; use `clientSessionId` directly.

`const effectiveSessionId = clientSessionId;` performs no transformation and only adds a layer of indirection. If the earlier remap scheme has been removed, the variable can be dropped to reduce cognitive load. Suggested simplification:

```diff
- const effectiveSessionId = clientSessionId;
-
  // 2. Short-context concurrent detection (scheme E)
  if (
    SessionManager.ENABLE_SHORT_CONTEXT_DETECTION &&
    messagesLength <= SessionManager.SHORT_CONTEXT_THRESHOLD
  ) {
-   const concurrentCount = await SessionTracker.getConcurrentCount(effectiveSessionId);
+   const concurrentCount = await SessionTracker.getConcurrentCount(clientSessionId);
```

Replace all remaining `effectiveSessionId` references with `clientSessionId` in the same way.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` at line 434, Remove the redundant alias: delete the declaration `const effectiveSessionId = clientSessionId;` and replace every `effectiveSessionId` reference with `clientSessionId` (e.g., inside the session-management functions/methods); after confirming no other logic depends on the alias, remove the variable declaration and run the relevant unit/type checks to ensure no regression.
118-129: Three fallback environment variable names may cause confusion; consider unifying and documenting them.

The pick-one-of-three logic across `SESSION_TERMINATION_TTL_SECONDS`, `SESSION_TERMINATION_TTL`, and `TERMINATED_SESSION_TTL` makes operational configuration confusing - it is unclear which name is the "official" one. If the fallbacks exist for migrating legacy config, document the deprecation plan in a comment or the docs; otherwise converge on a single variable name.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` around lines 118 - 129, The code in private static readonly TERMINATED_SESSION_TTL currently accepts three different environment variables (SESSION_TERMINATION_TTL_SECONDS, SESSION_TERMINATION_TTL, TERMINATED_SESSION_TTL) which is confusing; pick a single canonical env var (e.g., SESSION_TERMINATION_TTL_SECONDS) as the primary source, keep the others only as explicit backward-compatible fallbacks in the code for now, and add a short comment above TERMINATED_SESSION_TTL documenting the chosen canonical name and the deprecation/migration plan for the legacy names; additionally, when a legacy name (SESSION_TERMINATION_TTL or TERMINATED_SESSION_TTL) is used at runtime, emit a one-time warning via the logger indicating the legacy var is deprecated and advising the canonical name to aid operators during migration.
420-432: The termination check is skipped when Redis is unavailable (fail-open); confirm this is intended.

When `redis` is `null` or its status is not `ready`, the `readTerminationMarker` check is skipped and a terminated session can continue to be used, which conflicts with this PR's goal of preventing post-termination revival. The project uses fail-open elsewhere, but here it directly affects the integrity of the termination semantics, so the trade-off should be recorded explicitly in a comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` around lines 420 - 432, The termination check currently skips when redis is null or not ready (the fail-open case), which can allow terminated sessions to be reused; update the code around the redis check in SessionManager.readTerminationMarker usage to include a clear comment that this fail-open behavior is intentional and describe the tradeoff (i.e., when redis is unavailable we allow sessions rather than rejecting them), referencing the redis variable, SessionManager.readTerminationMarker, and TerminatedSessionError so future readers understand the design choice and why the check is skipped; if this behavior is not intended instead change the logic to treat redis-unavailable as a failure path that throws a TerminatedSessionError or a separate RedisUnavailable error and ensure callers handle it appropriately.
2035-2046: The termination marker is written without `NX`, so a repeated termination silently overwrites the timestamp.

`redis.set(terminatedKey, terminatedAt, "EX", ttlSeconds)` overwrites the original timestamp on concurrent or repeated terminations. If the "first terminated at" time must be preserved for auditing, use `NX`; if overwriting is intentional (to refresh the TTL), add a comment explaining the design intent.

🤖 Prompt for AI Agents
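If the first-termination timestamp should win, the change is a single flag in the same call; ioredis accepts the options positionally, and the stub interface below only imitates that shape for illustration:

```typescript
type SetArgs = [key: string, value: string, ex: "EX", ttl: number, nx: "NX"];

interface RedisSetNx {
  set(...args: SetArgs): Promise<"OK" | null>;
}

// With NX the write succeeds only when the marker is absent, so repeated
// or concurrent terminations keep the original terminatedAt.
async function writeTerminationMarker(
  redis: RedisSetNx,
  sessionId: string,
  ttlSeconds: number,
  now: () => number = Date.now
): Promise<boolean> {
  const result = await redis.set(
    `session:${sessionId}:terminated`,
    String(now()),
    "EX",
    ttlSeconds,
    "NX"
  );
  return result === "OK"; // null means the marker already existed
}
```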
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` around lines 2035 - 2046, The termination marker is currently written with redis.set(terminatedKey, terminatedAt, "EX", ttlSeconds) which overwrites timestamps on repeated terminations; change this to set only-if-not-exists (use the Redis NX option) so the first termination time is preserved (e.g., call set with NX + EX via your redis client API) and update the markerResult/markerOk handling to expect the client's success return value (OK vs null) accordingly; if overwriting was intentional instead, add a clear comment near SessionManager.getTerminationMarkerKey/TERMINATED_SESSION_TTL explaining the design decision to refresh the timestamp and TTL.

tests/unit/lib/session-manager-terminate-session.test.ts (2)
104-122: In test 2, the `deletePipelineRef.exec` mock return value does not match actual behavior.

When SCAN returns only `terminatedKey`, `del` is never actually called, yet `deletePipelineRef.exec` still returns `[[null, 1]]` (set in `beforeEach`). This makes `deletedKeys` in the production code incorrectly increment by 1. The final result is unaffected because `markerOk` is `true`, but the imprecise mock could mislead future assertions on `deletedKeys` or changes to the return logic. Suggested more precise mock for test 2:

```diff
  redisClientRef.hget.mockResolvedValue(null);
  redisClientRef.scan.mockResolvedValueOnce(["0", [terminatedKey]]);
+ // The only key SCAN returns is terminatedKey, which is skipped, so the delete pipeline issues no del calls
+ deletePipelineRef.exec.mockResolvedValueOnce([]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/lib/session-manager-terminate-session.test.ts` around lines 104 - 122, The mock for deletePipelineRef.exec is inaccurate for this test: when SCAN returns only terminatedKey the code does not call DEL, so deletePipelineRef.exec should reflect zero deletions; update the test to override the exec mock for this case (the test that calls SessionManager.terminateSession with sessionId "sess_test") to return [[null, 0]] (or the structure your exec uses to indicate 0 deleted) instead of [[null, 1]] as set in beforeEach, ensuring deletedKeys isn't incorrectly incremented; target the deletePipelineRef.exec mock override within this test so other tests keep the beforeEach default.
33-48: The `pipeline` mock chains `mockImplementationOnce` calls, creating a strong dependency on call order.

If `terminateSession` ever changes the number or order of its `pipeline()` calls, the test will fail silently (returning `undefined`). The current implementation is correct, but a comment noting that the two `pipeline()` calls correspond to ZSET cleanup and key deletion respectively would improve maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/lib/session-manager-terminate-session.test.ts` around lines 33 - 48, The pipeline mock in tests/unit/lib/session-manager-terminate-session.test.ts is brittle because redisClientRef.pipeline uses mockImplementationOnce twice and depends on call order inside terminateSession; update the test to include a concise comment next to the redisClientRef.pipeline mock explaining that the first mockImplementationOnce(()=> pipelineRef) corresponds to the ZSET cleanup pipeline and the second mockImplementationOnce(()=> deletePipelineRef) corresponds to the key-deletion pipeline so future readers know the two separate pipeline returns are intentional (or replace with a less order-sensitive mock if you prefer).tests/unit/lib/session-manager-scan-pattern-escape.test.ts (1)
3-3: The way `redisClientRef` is declared can cause mock-timing issues.

The module-level `let redisClientRef: any` combined with `vi.mock("@/lib/redis", () => ({ getRedisClient: () => redisClientRef }))` relies on a closure reference. Although `vi.mock` is hoisted here and the closure correctly captures the later `beforeEach` assignment, the pattern is fragile - it breaks if someone later destructures inside the mock factory. It works today; this is just a heads-up.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/lib/session-manager-scan-pattern-escape.test.ts` at line 3, The top-level let redisClientRef closure is fragile for mocking; replace it with a stable mock function and set its return in beforeEach: create a const getRedisClientMock = vi.fn() and change vi.mock("@/lib/redis", () => ({ getRedisClient: getRedisClientMock })) so tests call getRedisClientMock.mockReturnValue(redisClient) (or mockImplementation) in beforeEach/each test instead of assigning redisClientRef; update references to redisClientRef to use the mock's behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/lib/session-manager.ts`:
- Line 2138: The current return expression `markerOk || deletedKeys > 0` can
report success when the marker write failed but keys were deleted; modify the
logic in the containing function so that if `markerOk` is false you do not
silently return success: either (preferred) return false immediately when
`markerOk` is false, or at minimum emit a warn-level log (e.g., via the module
logger) when `markerOk` is false and then return false; ensure you reference the
existing variables `markerOk` and `deletedKeys` and update the final return
accordingly so callers cannot assume termination succeeded when the termination
marker wasn't written.
---
Nitpick comments:
In `@src/lib/session-manager.ts`:
- Around line 135-153: The redis parameter of readTerminationMarker (and other
SessionManager methods) is typed as any; change it to the specific Redis client
type returned by getRedisClient (or ioredis.Redis) to regain type safety and
catch method-name/spelling errors at compile time. Update the signature of
readTerminationMarker to use that concrete type, and update all call sites that
pass a redis client to use the same type (or import the ioredis.Redis type and
annotate variables returned from getRedisClient), ensuring
SessionManager.getTerminationMarkerKey remains unchanged.
- Line 434: Remove the redundant alias: delete the declaration `const effectiveSessionId = clientSessionId;` and replace every `effectiveSessionId` reference in the code with `clientSessionId` (e.g., inside the session-management functions/methods); after confirming no other logic depends on the alias, remove the variable declaration and run the relevant unit/type checks to guarantee no regression.
- Around line 118-129: The code in private static readonly
TERMINATED_SESSION_TTL currently accepts three different environment variables
(SESSION_TERMINATION_TTL_SECONDS, SESSION_TERMINATION_TTL,
TERMINATED_SESSION_TTL) which is confusing; pick a single canonical env var
(e.g., SESSION_TERMINATION_TTL_SECONDS) as the primary source, keep the others
only as explicit backward-compatible fallbacks in the code for now, and add a
short comment above TERMINATED_SESSION_TTL documenting the chosen canonical name
and the deprecation/migration plan for the legacy names; additionally, when a
legacy name (SESSION_TERMINATION_TTL or TERMINATED_SESSION_TTL) is used at
runtime, emit a one-time warning via the logger indicating the legacy var is
deprecated and advising the canonical name to aid operators during migration.
- Around line 420-432: The termination check currently skips when redis is null
or not ready (the fail-open case), which can allow terminated sessions to be
reused; update the code around the redis check in
SessionManager.readTerminationMarker usage to include a clear comment that this
fail-open behavior is intentional and describe the tradeoff (i.e., when redis is
unavailable we allow sessions rather than rejecting them), referencing the redis
variable, SessionManager.readTerminationMarker, and TerminatedSessionError so
future readers understand the design choice and why the check is skipped; if
this behavior is not intended instead change the logic to treat
redis-unavailable as a failure path that throws a TerminatedSessionError or a
separate RedisUnavailable error and ensure callers handle it appropriately.
- Around line 2035-2046: The termination marker is currently written with
redis.set(terminatedKey, terminatedAt, "EX", ttlSeconds) which overwrites
timestamps on repeated terminations; change this to set only-if-not-exists (use
the Redis NX option) so the first termination time is preserved (e.g., call set
with NX + EX via your redis client API) and update the markerResult/markerOk
handling to expect the client’s success return value (OK vs null) accordingly;
if overwriting was intentional instead, add a clear comment near
SessionManager.getTerminationMarkerKey/TERMINATED_SESSION_TTL explaining the
design decision to refresh the timestamp and TTL.
In `@tests/unit/lib/session-manager-scan-pattern-escape.test.ts`:
- Line 3: The top-level let redisClientRef closure is fragile for mocking;
replace it with a stable mock function and set its return in beforeEach: create
a const getRedisClientMock = vi.fn() and change vi.mock("@/lib/redis", () => ({
getRedisClient: getRedisClientMock })) so tests call
getRedisClientMock.mockReturnValue(redisClient) (or mockImplementation) in
beforeEach/each test instead of assigning redisClientRef; update references to
redisClientRef to use the mock's behavior.
In `@tests/unit/lib/session-manager-terminate-session.test.ts`:
- Around line 104-122: The mock for deletePipelineRef.exec is inaccurate for
this test: when SCAN returns only terminatedKey the code does not call DEL, so
deletePipelineRef.exec should reflect zero deletions; update the test to
override the exec mock for this case (the test that calls
SessionManager.terminateSession with sessionId "sess_test") to return [[null,
0]] (or the structure your exec uses to indicate 0 deleted) instead of [[null,
1]] as set in beforeEach, ensuring deletedKeys isn't incorrectly incremented;
target the deletePipelineRef.exec mock override within this test so other tests
keep the beforeEach default.
- Around line 33-48: The pipeline mock in
tests/unit/lib/session-manager-terminate-session.test.ts is brittle because
redisClientRef.pipeline uses mockImplementationOnce twice and depends on call
order inside terminateSession; update the test to include a concise comment next
to the redisClientRef.pipeline mock explaining that the first
mockImplementationOnce(()=> pipelineRef) corresponds to the ZSET cleanup
pipeline and the second mockImplementationOnce(()=> deletePipelineRef)
corresponds to the key-deletion pipeline so future readers know the two separate
pipeline returns are intentional (or replace with a less order-sensitive mock if
you prefer).
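The legacy-TTL deprecation warning requested in the prompts above can be sketched roughly as follows. The env var names come from the review itself; the helper name, the `warn` callback, and the module-level one-time flag are illustrative, not the project's actual code:

```ts
// Hypothetical helper: resolve the TTL from the canonical env var first,
// falling back to the legacy names, warning once when a legacy name is used.
const LEGACY_TTL_VARS = ["SESSION_TERMINATION_TTL", "TERMINATED_SESSION_TTL"];
let warnedAboutLegacyTtlVar = false;

function resolveTerminationTtl(
  env: Record<string, string | undefined>,
  defaultTtlSeconds: number,
  warn: (msg: string) => void
): number {
  let raw = env.SESSION_TERMINATION_TTL_SECONDS;
  if (!raw) {
    for (const name of LEGACY_TTL_VARS) {
      if (env[name]) {
        raw = env[name];
        if (!warnedAboutLegacyTtlVar) {
          warnedAboutLegacyTtlVar = true;
          warn(`${name} is deprecated; use SESSION_TERMINATION_TTL_SECONDS`);
        }
        break;
      }
    }
  }
  const parsed = Number.parseInt(raw ?? "", 10);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : defaultTtlSeconds;
}
```

Note the last two lines also cover the empty-string case flagged elsewhere in this review: an unparsable value falls back to the default rather than propagating `NaN`.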
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (3)
- src/lib/session-manager.ts
- tests/unit/lib/session-manager-scan-pattern-escape.test.ts
- tests/unit/lib/session-manager-terminate-session.test.ts
Follow-up fixes have been made based on the CodeRabbit / Greptile feedback.
The full suite has been re-run locally.
src/lib/session-manager.ts
Outdated
```ts
  });
}

const raw = rawPrimary ?? rawLegacyA ?? rawLegacyB ?? "";
```
?? treats empty string as valid, so SESSION_TERMINATION_TTL_SECONDS="" would pass through to parseInt("", 10) (returns NaN) and fall back to default
use || "" or explicit empty check
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 135
Comment:
`??` treats empty string as valid, so `SESSION_TERMINATION_TTL_SECONDS=""` would pass through to `parseInt("", 10)` (returns `NaN`) and fall back to default
use `|| ""` or explicit empty check
How can I resolve this? If you propose a fix, please make it concise.
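A quick repro of the difference the comment points at (plain TypeScript, names illustrative): `??` only skips `null`/`undefined`, so an empty string reaches `parseInt`, while `||` also skips falsy values like `""`:

```ts
// Mirrors the reviewed expression with an empty env value.
function ttlFromRaw(raw: string | undefined): number {
  return Number.parseInt(raw ?? "", 10); // "" survives ?? and parses to NaN
}

function ttlFromRawWithOr(raw: string | undefined, fallback: string): number {
  return Number.parseInt(raw || fallback, 10); // "" falls back cleanly
}

const viaNullish = ttlFromRaw("");           // NaN
const viaOr = ttlFromRawWithOr("", "86400"); // 86400
```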
src/lib/session-manager.ts
Outdated
```diff
@@ -1971,21 +2087,8 @@ export class SessionManager {
   );
 }
 
-// 2. Delete all session-related keys
+// 2. Remove from the ZSET (always attempted, even if the lookup failed)
 const pipeline = redis.pipeline();
 
-// Basic binding info
-pipeline.del(`session:${sessionId}:provider`);
-pipeline.del(`session:${sessionId}:key`);
-pipeline.del(`session:${sessionId}:info`);
-pipeline.del(`session:${sessionId}:last_seen`);
-pipeline.del(`session:${sessionId}:concurrent_count`);
 
-// Optional: messages and response (if storage is enabled)
-pipeline.del(`session:${sessionId}:messages`);
-pipeline.del(`session:${sessionId}:response`);
 
-// 3. Remove from the ZSET (always attempted, even if the lookup failed)
 pipeline.zrem(getGlobalActiveSessionsKey(), sessionId);
 
 if (providerId) {
@@ -2000,30 +2103,52 @@ export class SessionManager {
   pipeline.zrem(getUserActiveSessionsKey(userId), sessionId);
 }
 
-// 4. Delete the hash mapping (if present)
-// Note: the hash cannot be reverse-looked-up; only known session keys are cleaned here
-// The hash expires via its TTL and does not affect functionality
 
-const results = await pipeline.exec();
+await pipeline.exec();
```
concurrent requests between marker write (line 2053) and ZSET cleanup (line 2106) can create new keys via storeSessionInfo, storeSessionMessages, etc., which SCAN will then delete (lines 2114-2139), causing data loss
the termination check in getOrCreateSessionId (line 436) only blocks the session ID assignment, but doesn't prevent writes in session-guard.ts lines 110-176 that happen after session assignment
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 2053-2106
Comment:
concurrent requests between marker write (line 2053) and ZSET cleanup (line 2106) can create new keys via `storeSessionInfo`, `storeSessionMessages`, etc., which SCAN will then delete (lines 2114-2139), causing data loss
the termination check in `getOrCreateSessionId` (line 436) only blocks the session ID assignment, but doesn't prevent writes in `session-guard.ts` lines 110-176 that happen after session assignment
How can I resolve this? If you propose a fix, please make it concise.
src/lib/session-manager.ts
Outdated
```ts
do {
  const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
    string,
    string[],
  ];
  const nextCursor = scanResult[0];
  const keys = scanResult[1] ?? [];
  cursor = nextCursor;

  if (keys.length === 0) continue;

  const deletePipeline = redis.pipeline();
  for (const key of keys) {
    if (key === terminatedKey) continue;
    deletePipeline.del(key);
  }

  const deleteResults = await deletePipeline.exec();
  if (!deleteResults) continue;

  for (const [err, result] of deleteResults) {
    if (!err && typeof result === "number" && result > 0) {
      deletedKeys += result;
    }
  }
} while (cursor !== "0");
```
SCAN lacks snapshot isolation - if concurrent requests write session:{sessionId}:req:5:messages between SCAN iterations, the key might be missed and the terminated session partially revived
SCAN cursor doesn't guarantee all keys created after start will be returned
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 2114-2139
Comment:
SCAN lacks snapshot isolation - if concurrent requests write `session:{sessionId}:req:5:messages` between SCAN iterations, the key might be missed and the terminated session partially revived
SCAN cursor doesn't guarantee all keys created after start will be returned
How can I resolve this? If you propose a fix, please make it concise.
♻️ Duplicate comments (1)
src/lib/session-manager.ts (1)
2056-2058: A warn log has been added for `!markerOk`, but the return-value semantics still carry the same historical problem. This was flagged in a previous review (`markerOk || deletedKeys > 0` still returns `true` when the marker write fails but some keys were deleted), so callers cannot distinguish "actually blocked" from "only partially cleaned up".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` around lines 2056 - 2058, The current return value mixes markerOk and deletedKeys (markerOk || deletedKeys > 0) so callers cannot tell if the termination marker write actually succeeded; change the function to stop treating deletedKeys as success—return an explicit result object (e.g., { markerOk: boolean, deletedKeys: number } or an enum status) instead of a single boolean, update all callers to check result.markerOk and handle deletedKeys separately, and replace the old markerOk || deletedKeys > 0 usage with checks against the new result fields; reference the existing local variables markerOk and deletedKeys and the SessionManager method that performs the marker write.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@src/lib/session-manager.ts`:
- Around line 2056-2058: The current return value mixes markerOk and deletedKeys
(markerOk || deletedKeys > 0) so callers cannot tell if the termination marker
write actually succeeded; change the function to stop treating deletedKeys as
success—return an explicit result object (e.g., { markerOk: boolean,
deletedKeys: number } or an enum status) instead of a single boolean, update all
callers to check result.markerOk and handle deletedKeys separately, and replace
the old markerOk || deletedKeys > 0 usage with checks against the new result
fields; reference the existing local variables markerOk and deletedKeys and the
SessionManager method that performs the marker write.
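One possible shape for the result object the prompt describes (type and helper names are illustrative, not the project's actual code): separating the marker outcome from the cleanup count lets callers stop conflating the two:

```ts
// Hypothetical explicit result: marker write outcome vs. cleanup count.
interface TerminateSessionResult {
  markerOk: boolean;   // true only if the termination marker was written
  deletedKeys: number; // keys removed during cleanup
}

// Example caller logic replacing the old `markerOk || deletedKeys > 0`:
function describeTermination(result: TerminateSessionResult): string {
  if (result.markerOk) return "blocked";                  // session sealed off
  if (result.deletedKeys > 0) return "partially-cleaned"; // data gone, not sealed
  return "failed";
}
```

The "partially-cleaned" state is exactly the intermediate condition the nitpick comments below warn about: data deleted but no blocking marker in place.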
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (3)
- src/lib/session-manager.ts
- tests/unit/lib/session-manager-scan-pattern-escape.test.ts
- tests/unit/lib/session-manager-terminate-session.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- tests/unit/lib/session-manager-scan-pattern-escape.test.ts
- tests/unit/lib/session-manager-terminate-session.test.ts
Addressed the points CodeRabbit is still tracking. The full suite has been re-run locally.
```ts
// 0. Write the termination marker first (so concurrent requests cannot revive
//    the session during the cleanup window).
// Note: overwriting the old value is allowed here, to refresh the TTL
//       (repeated terminations extend the blocking window).
const markerResult = await redis.set(terminatedKey, terminatedAt, "EX", ttlSeconds);
```
race window: concurrent requests can write new session keys between marker write and SCAN cleanup
requests that pass line 441 check before marker is set (line 2058) will continue to lines 110-176 in session-guard.ts, calling storeSessionInfo, storeSessionMessages, storeSessionRequestBody which don't check termination marker before writing
these newly written keys get deleted by SCAN loop (lines 2114-2139), causing data loss for in-flight requests
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 2056-2058
Comment:
race window: concurrent requests can write new session keys between marker write and SCAN cleanup
requests that pass line 441 check before marker is set (line 2058) will continue to lines 110-176 in `session-guard.ts`, calling `storeSessionInfo`, `storeSessionMessages`, `storeSessionRequestBody` which don't check termination marker before writing
these newly written keys get deleted by SCAN loop (lines 2114-2139), causing data loss for in-flight requests
How can I resolve this? If you propose a fix, please make it concise.
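An earlier prompt in this thread suggested an NX-based alternative to the overwrite-on-repeat behavior shown above. For reference, the ioredis call would be `redis.set(terminatedKey, terminatedAt, "EX", ttlSeconds, "NX")`, which resolves to `"OK"` on the first write and `null` afterwards. A tiny in-memory stand-in (purely illustrative, not the project's Redis client) showing the only-if-not-exists semantics:

```ts
// Minimal fake demonstrating SET ... EX <ttl> NX semantics: the first
// termination timestamp wins; later calls leave the stored value untouched.
class FakeRedis {
  private store = new Map<string, string>();

  async set(
    key: string,
    value: string,
    ...opts: Array<string | number>
  ): Promise<"OK" | null> {
    if (opts.includes("NX") && this.store.has(key)) {
      return null; // key exists: do not overwrite, mirroring real Redis NX
    }
    this.store.set(key, value);
    return "OK";
  }

  async get(key: string): Promise<string | null> {
    return this.store.get(key) ?? null;
  }
}
```

With NX in place, `markerOk` would derive from the first call's `"OK"` result, and a repeated termination returning `null` could be treated as "already terminated" rather than a failure.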
src/lib/session-manager.ts
Outdated
```ts
const escapedSessionId = SessionManager.escapeRedisMatchPatternLiteral(sessionId);
const matchPattern = `session:${escapedSessionId}:*`;

do {
```
SCAN lacks snapshot isolation - keys written between iterations are missed
if concurrent requests write session:{sessionId}:req:5:messages after SCAN reads cursor "100" but before it reads cursor "200", the key won't be returned by SCAN
terminated session can be partially revived with orphaned keys
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 2119
Comment:
SCAN lacks snapshot isolation - keys written between iterations are missed
if concurrent requests write `session:{sessionId}:req:5:messages` after SCAN reads cursor "100" but before it reads cursor "200", the key won't be returned by SCAN
terminated session can be partially revived with orphaned keys
How can I resolve this? If you propose a fix, please make it concise.
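`escapeRedisMatchPatternLiteral` itself is not shown in the excerpt; a sketch of what such an escaper typically does (backslash-escaping the glob metacharacters that SCAN's MATCH understands; this is an assumption, not the project's actual implementation):

```ts
// Hypothetical escaper: neutralize Redis glob metacharacters (*, ?, [, ], \)
// so a hostile session ID cannot widen the MATCH pattern.
function escapeRedisMatchPatternLiteral(literal: string): string {
  return literal.replace(/[\\*?[\]]/g, (ch) => `\\${ch}`);
}

const matchPattern = `session:${escapeRedisMatchPatternLiteral("sess_[1]*")}:*`;
```

The trailing `:*` stays unescaped on purpose: only the session ID portion is treated as a literal.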
src/lib/session-manager.ts
Outdated
```ts
  });
}

const raw = rawPrimary ?? rawLegacyA ?? rawLegacyB ?? "";
```
?? treats empty string as valid, so SESSION_TERMINATION_TTL_SECONDS="" passes to parseInt("", 10) which returns NaN
use || "" or explicit empty check instead of ??
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 140
Comment:
`??` treats empty string as valid, so `SESSION_TERMINATION_TTL_SECONDS=""` passes to `parseInt("", 10)` which returns `NaN`
use `|| ""` or explicit empty check instead of `??`
How can I resolve this? If you propose a fix, please make it concise.
Additional Comments (5)
check termination marker before writing to prevent race condition

concurrent requests between marker write (line 2058) and SCAN cleanup can call this method, writing keys that then get deleted by SCAN

add marker check:

Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 949-951
Comment:
check termination marker before writing to prevent race condition
concurrent requests between marker write (line 2058) and SCAN cleanup can call this method, writing keys that then get deleted by SCAN
add marker check:
```suggestion
static async storeSessionInfo(sessionId: string, info: SessionStoreInfo): Promise<void> {
const redis = getRedisClient();
if (!redis || redis.status !== "ready") return;
const terminatedAt = await SessionManager.readTerminationMarker(redis, sessionId);
if (terminatedAt) {
logger.debug("SessionManager: Skip storing info for terminated session", { sessionId });
return;
}
```
How can I resolve this? If you propose a fix, please make it concise.
same race as `storeSessionInfo` - add marker check

Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 1079-1085
Comment:
check termination marker before writing
same race as `storeSessionInfo` - add marker check:
```
const terminatedAt = await SessionManager.readTerminationMarker(redis, sessionId);
if (terminatedAt) return;
```
How can I resolve this? If you propose a fix, please make it concise.
same race as other `storeSession*` methods - add marker check

Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 1547-1553
Comment:
check termination marker before writing
same race as other `storeSession*` methods - add marker check
How can I resolve this? If you propose a fix, please make it concise.
same race window as other `storeSession*` methods

Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 1662-1668
Comment:
check termination marker before writing
same race window as other `storeSession*` methods
How can I resolve this? If you propose a fix, please make it concise.
same race window as other `storeSession*` methods

Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 1610-1620
Comment:
check termination marker before writing
same race window as other `storeSession*` methods
How can I resolve this? If you propose a fix, please make it concise. |
🧹 Nitpick comments (4)
src/lib/session-manager.ts (2)
517-539: The double null check on `existingSessionId` can be simplified

When `terminatedAt` exists, setting `existingSessionId` to `null` and then running a second `if (existingSessionId)` check is logically correct but roundabout; an `else` branch removes the second check and improves readability.

♻️ Suggested simplification

```diff
 if (existingSessionId) {
   const terminatedAt = await SessionManager.readTerminationMarker(redis, existingSessionId);
   if (terminatedAt) {
     logger.info(
       "SessionManager: Hash hit but session was terminated, creating new session",
       {
         existingSessionId,
         terminatedAt,
         hash: contentHash,
       }
     );
-    existingSessionId = null;
-  }
-
-  if (existingSessionId) {
-    // Existing session found; refresh its TTL
+  } else {
+    // Existing session found; refresh its TTL
     await SessionManager.refreshSessionTTL(existingSessionId);
     logger.trace("SessionManager: Reusing session via hash", {
       sessionId: existingSessionId,
       hash: contentHash,
     });
     return existingSessionId;
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` around lines 517 - 539, The code checks existingSessionId twice by setting existingSessionId = null when terminatedAt is truthy then rechecking; simplify by using the result of SessionManager.readTerminationMarker to branch once: call SessionManager.readTerminationMarker(existingSessionId) and if it returns a terminatedAt log and skip reuse (else) refresh TTL via SessionManager.refreshSessionTTL(existingSessionId), log via logger.trace and return existingSessionId; update the block that references existingSessionId, terminatedAt, SessionManager.readTerminationMarker, SessionManager.refreshSessionTTL and logger.trace to use an if/else instead of two separate if checks.
2058-2063: When `markerOk` is `false` the cleanup still runs, which can produce a "data deleted but no marker" intermediate state

When `redis.set` fails to write the termination marker (`markerOk = false`), the code still scans and deletes the `session:{id}:*` keys. The session data is removed, yet `readTerminationMarker` returns `null`, so subsequent requests carrying the same `sessionId` are not intercepted by `TerminatedSessionError`; the session is effectively not blocked.

The current caller (`terminateActiveSession`) reports failure to the user via `!result.markerOk`, which is correct; but the data in Redis may already have been partially modified, and operators should be aware of this intermediate state when troubleshooting. When `markerOk = false`, at least note "cleanup ran but no blocking marker" in the warn log:

📋 Suggested log addition

```diff
 if (!markerOk) {
   logger.warn("SessionManager: Failed to set termination marker", { sessionId });
+  // Note: cleanup still runs even when the marker write fails; session data
+  // will be deleted, but revival will not be blocked.
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` around lines 2058 - 2063, When redis.set(terminatedKey, terminatedAt, "EX", ttlSeconds) returns markerOk === false but the code still proceeds to delete session:{id}:* keys, update logger.warn("SessionManager: Failed to set termination marker", { sessionId }) to include that cleanup was performed despite no termination marker (e.g. mention terminatedAt, terminatedKey, ttlSeconds and that readTerminationMarker will return null) so operators can detect the "data deleted but no marker" intermediate state; keep the existing markerOk flag and preserve the terminateActiveSession behavior that surfaces markerOk to callers but enrich the warn message in the SessionManager where markerOk is checked.tests/unit/lib/session-manager-terminate-session.test.ts (2)
54-126: Missing test coverage for the `markerOk = false` scenario

When `redis.set` returns a non-"OK" value, `terminateSession` should return `{ markerOk: false, ... }`. The current test cases only verify the success path, so this branch is uncovered and may violate the 80% coverage requirement. Suggested additional test:

```ts
it("when redis.set returns non-OK, markerOk should be false but cleanup still runs", async () => {
  const sessionId = "sess_marker_fail";
  redisClientRef.set.mockResolvedValueOnce(null); // simulate marker write failure
  redisClientRef.scan.mockResolvedValueOnce(["0", [`session:${sessionId}:provider`]]);
  // A third pipeline call is needed (deletePipeline)
  redisClientRef.pipeline
    .mockImplementationOnce(() => pipelineRef)
    .mockImplementationOnce(() => deletePipelineRef);

  const { SessionManager } = await import("@/lib/session-manager");
  const result = await SessionManager.terminateSession(sessionId);

  expect(result.markerOk).toBe(false);
  expect(result.deletedKeys).toBeGreaterThanOrEqual(0);
});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/lib/session-manager-terminate-session.test.ts` around lines 54 - 126, Add a unit test covering the branch where SessionManager.terminateSession returns markerOk: false by simulating redisClientRef.set returning a non-"OK" value; specifically, create a test that sets redisClientRef.set.mockResolvedValueOnce(null) (or other non-"OK"), ensures redisClientRef.scan returns appropriate keys, mocks pipeline and deletePipeline invocation like existing tests (pipelineRef and deletePipelineRef via redisClientRef.pipeline), calls SessionManager.terminateSession(sessionId) and asserts result.markerOk === false and that cleanup still ran (e.g., result.deletedKeys >= 0 and expected deletePipelineRef.del/pipelineRef.zrem calls). Use the same symbols SessionManager.terminateSession, redisClientRef.set, redisClientRef.scan, redisClientRef.pipeline, pipelineRef, and deletePipelineRef to locate where to add the test.
43-44: The `mget` mock is never called by the `terminateSession` implementation and is dead

The current `terminateSession` implementation never calls `redis.mget`, so this mock is never triggered; it can simply be removed to avoid the misleading implication that the method is used.

♻️ Suggested removal

```diff
-  mget: vi.fn(async () => [null, null]),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/lib/session-manager-terminate-session.test.ts` around lines 43 - 44, Remove the unused mget mock from the test—terminateSession does not call redis.mget, so delete the "mget: vi.fn(async () => [null, null])" entry (leave the scan mock intact) to avoid misleading readers and false expectations about terminateSession's behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/lib/session-manager.ts`:
- Around line 517-539: The code checks existingSessionId twice by setting
existingSessionId = null when terminatedAt is truthy then rechecking; simplify
by using the result of SessionManager.readTerminationMarker to branch once: call
SessionManager.readTerminationMarker(existingSessionId) and if it returns a
terminatedAt log and skip reuse (else) refresh TTL via
SessionManager.refreshSessionTTL(existingSessionId), log via logger.trace and
return existingSessionId; update the block that references existingSessionId,
terminatedAt, SessionManager.readTerminationMarker,
SessionManager.refreshSessionTTL and logger.trace to use an if/else instead of
two separate if checks.
- Around line 2058-2063: When redis.set(terminatedKey, terminatedAt, "EX",
ttlSeconds) returns markerOk === false but the code still proceeds to delete
session:{id}:* keys, update logger.warn("SessionManager: Failed to set
termination marker", { sessionId }) to include that cleanup was performed
despite no termination marker (e.g. mention terminatedAt, terminatedKey,
ttlSeconds and that readTerminationMarker will return null) so operators can
detect the "data deleted but no marker" intermediate state; keep the existing
markerOk flag and preserve the terminateActiveSession behavior that surfaces
markerOk to callers but enrich the warn message in the SessionManager where
markerOk is checked.
In `@tests/unit/lib/session-manager-terminate-session.test.ts`:
- Around line 54-126: Add a unit test covering the branch where
SessionManager.terminateSession returns markerOk: false by simulating
redisClientRef.set returning a non-"OK" value; specifically, create a test that
sets redisClientRef.set.mockResolvedValueOnce(null) (or other non-"OK"), ensures
redisClientRef.scan returns appropriate keys, mocks pipeline and deletePipeline
invocation like existing tests (pipelineRef and deletePipelineRef via
redisClientRef.pipeline), calls SessionManager.terminateSession(sessionId) and
asserts result.markerOk === false and that cleanup still ran (e.g.,
result.deletedKeys >= 0 and expected deletePipelineRef.del/pipelineRef.zrem
calls). Use the same symbols SessionManager.terminateSession,
redisClientRef.set, redisClientRef.scan, redisClientRef.pipeline, pipelineRef,
and deletePipelineRef to locate where to add the test.
- Around line 43-44: Remove the unused mget mock from the test—terminateSession
does not call redis.mget, so delete the "mget: vi.fn(async () => [null, null])"
entry (leave the scan mock intact) to avoid misleading readers and false
expectations about terminateSession's behavior.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (3)
- src/actions/active-sessions.ts
- src/lib/session-manager.ts
- tests/unit/lib/session-manager-terminate-session.test.ts
```diff
@@ -1971,21 +2092,8 @@ export class SessionManager {
   );
 }
 
-// 2. Delete all session-related keys
+// 2. Remove from the ZSET (always attempted, even if the lookup failed)
 const pipeline = redis.pipeline();
 
-// Basic binding info
-pipeline.del(`session:${sessionId}:provider`);
-pipeline.del(`session:${sessionId}:key`);
-pipeline.del(`session:${sessionId}:info`);
-pipeline.del(`session:${sessionId}:last_seen`);
-pipeline.del(`session:${sessionId}:concurrent_count`);
 
-// Optional: messages and response (if storage is enabled)
-pipeline.del(`session:${sessionId}:messages`);
-pipeline.del(`session:${sessionId}:response`);
 
-// 3. Remove from the ZSET (always attempted, even if the lookup failed)
 pipeline.zrem(getGlobalActiveSessionsKey(), sessionId);
 
 if (providerId) {
@@ -2000,36 +2108,58 @@ export class SessionManager {
   pipeline.zrem(getUserActiveSessionsKey(userId), sessionId);
 }
 
-// 4. Delete the hash mapping (if present)
-// Note: the hash cannot be reverse-looked-up; only known session keys are cleaned here
-// The hash expires via its TTL and does not affect functionality
 
-const results = await pipeline.exec();
+await pipeline.exec();
 
-// 5. Check the results
+// 3. Delete session:* keys (includes the new req:* format; keeps the terminated marker)
 let deletedKeys = 0;
-if (results) {
-  for (const [err, result] of results) {
+let cursor = "0";
+const escapedSessionId = SessionManager.escapeRedisMatchPatternLiteral(sessionId);
+const matchPattern = `session:${escapedSessionId}:*`;
+
+do {
+  const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
+    string,
+    string[],
+  ];
+  const nextCursor = scanResult[0];
+  const keys = scanResult[1] ?? [];
+  cursor = nextCursor;
+
+  if (keys.length === 0) continue;
+
+  const deletePipeline = redis.pipeline();
+  for (const key of keys) {
+    if (key === terminatedKey) continue;
+    deletePipeline.del(key);
+  }
+
+  const deleteResults = await deletePipeline.exec();
+  if (!deleteResults) continue;
+
+  for (const [err, result] of deleteResults) {
     if (!err && typeof result === "number" && result > 0) {
       deletedKeys += result;
     }
   }
-}
+} while (cursor !== "0");
 
 logger.info("SessionManager: Terminated session", {
   sessionId,
   providerId,
   keyId,
   userId,
   deletedKeys,
+  terminatedAt,
+  markerOk,
 });
 
-return deletedKeys > 0;
+return { markerOk, deletedKeys };
 } catch (error) {
   logger.error("SessionManager: Failed to terminate session", {
     error,
     sessionId,
   });
-  return false;
+  return { markerOk: false, deletedKeys: 0 };
 }
}
```
critical race: in-flight requests that pass termination check (line 441) before marker is written (line 2053) will continue executing and call storeSessionInfo, storeSessionMessages, etc., which don't verify termination markers before writing
these newly created keys are then deleted by SCAN loop (lines 2114-2139), causing data loss
previously flagged multiple times - still not addressed
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 2035-2165
Comment:
critical race: in-flight requests that pass termination check (line 441) before marker is written (line 2053) will continue executing and call `storeSessionInfo`, `storeSessionMessages`, etc., which don't verify termination markers before writing
these newly created keys are then deleted by SCAN loop (lines 2114-2139), causing data loss
previously flagged multiple times - still not addressed
How can I resolve this? If you propose a fix, please make it concise.
src/lib/session-manager.ts
Outdated
```diff
 let deletedKeys = 0;
-if (results) {
-  for (const [err, result] of results) {
+let cursor = "0";
+const escapedSessionId = SessionManager.escapeRedisMatchPatternLiteral(sessionId);
+const matchPattern = `session:${escapedSessionId}:*`;
+
+do {
+  const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
+    string,
+    string[],
+  ];
+  const nextCursor = scanResult[0];
+  const keys = scanResult[1] ?? [];
+  cursor = nextCursor;
+
+  if (keys.length === 0) continue;
+
+  const deletePipeline = redis.pipeline();
+  for (const key of keys) {
+    if (key === terminatedKey) continue;
+    deletePipeline.del(key);
+  }
+
+  const deleteResults = await deletePipeline.exec();
+  if (!deleteResults) continue;
+
+  for (const [err, result] of deleteResults) {
```
SCAN lacks snapshot isolation - keys written between iterations won't be captured
if concurrent request writes session:{sessionId}:req:N:messages after SCAN processes cursor "100" but before it reads cursor "200", that key won't be returned
previously flagged multiple times - still not addressed
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 2114-2139
Comment:
SCAN lacks snapshot isolation - keys written between iterations won't be captured
if concurrent request writes `session:{sessionId}:req:N:messages` after SCAN processes cursor "100" but before it reads cursor "200", that key won't be returned
previously flagged multiple times - still not addressed
How can I resolve this? If you propose a fix, please make it concise.
src/lib/session-manager.ts
Outdated
```ts
  });
}

const raw = rawPrimary ?? rawLegacyA ?? rawLegacyB ?? "";
```
?? treats empty string as valid - SESSION_TERMINATION_TTL_SECONDS="" would pass to parseInt("", 10) returning NaN
```diff
-const raw = rawPrimary ?? rawLegacyA ?? rawLegacyB ?? "";
+const raw = rawPrimary || rawLegacyA || rawLegacyB || "";
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 140
Comment:
`??` treats empty string as valid - `SESSION_TERMINATION_TTL_SECONDS=""` would pass to `parseInt("", 10)` returning `NaN`
```suggestion
const raw = rawPrimary || rawLegacyA || rawLegacyB || "";
```
How can I resolve this? If you propose a fix, please make it concise.
src/lib/session-manager.ts
Outdated
```ts
  });
}

const raw = rawPrimary ?? rawLegacyA ?? rawLegacyB ?? "";
```
?? operator treats empty string as valid value, so SESSION_TERMINATION_TTL_SECONDS="" would pass through to parseInt("", 10) which returns NaN
```diff
-const raw = rawPrimary ?? rawLegacyA ?? rawLegacyB ?? "";
+const raw = rawPrimary || rawLegacyA || rawLegacyB || "";
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: src/lib/session-manager.ts
Line: 140
Comment:
`??` operator treats empty string as valid value, so `SESSION_TERMINATION_TTL_SECONDS=""` would pass through to `parseInt("", 10)` which returns `NaN`
```suggestion
const raw = rawPrimary || rawLegacyA || rawLegacyB || "";
```
How can I resolve this? If you propose a fix, please make it concise.
src/lib/session-manager.ts
Outdated
```diff
@@ -1971,21 +2097,8 @@ export class SessionManager {
   );
 }
 
-// 2. Delete all session-related keys
+// 2. Remove from the ZSET (always attempted, even if the lookup failed)
 const pipeline = redis.pipeline();
 
-// Basic binding info
-pipeline.del(`session:${sessionId}:provider`);
-pipeline.del(`session:${sessionId}:key`);
-pipeline.del(`session:${sessionId}:info`);
-pipeline.del(`session:${sessionId}:last_seen`);
-pipeline.del(`session:${sessionId}:concurrent_count`);
 
-// Optional: messages and response (if storage is enabled)
-pipeline.del(`session:${sessionId}:messages`);
-pipeline.del(`session:${sessionId}:response`);
 
-// 3. Remove from the ZSET (always attempted, even if the lookup failed)
 pipeline.zrem(getGlobalActiveSessionsKey(), sessionId);
 
 if (providerId) {
@@ -2000,36 +2113,58 @@ export class SessionManager {
   pipeline.zrem(getUserActiveSessionsKey(userId), sessionId);
 }
 
-// 4. Delete the hash mapping (if present)
-// Note: the hash cannot be reverse-looked-up; only known session keys are cleaned here
-// The hash expires via its TTL and does not affect functionality
 
-const results = await pipeline.exec();
+await pipeline.exec();
```
Race condition: in-flight requests that pass the termination check (line 441) before the marker is written (line 2055) will continue executing through `session-guard.ts` lines 111-176.

These requests call `storeSessionRequestBody`, `storeSessionMessages`, `storeSessionInfo`, etc., which don't check the termination marker before writing new keys.

The newly written keys are then deleted by the SCAN loop (lines 2124-2149), causing data loss for requests that were legitimately in-flight when termination was initiated.
src/lib/session-manager.ts (outdated)
```ts
do {
  const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
    string,
    string[],
  ];
  const nextCursor = scanResult[0];
  const keys = scanResult[1] ?? [];
  cursor = nextCursor;

  if (keys.length === 0) continue;

  const deletePipeline = redis.pipeline();
  for (const key of keys) {
    if (key === terminatedKey) continue;
    deletePipeline.del(key);
  }

  const deleteResults = await deletePipeline.exec();
  if (!deleteResults) continue;

  for (const [err, result] of deleteResults) {
    if (!err && typeof result === "number" && result > 0) {
      deletedKeys += result;
    }
  }
} while (cursor !== "0");
```
SCAN lacks snapshot isolation — if concurrent requests write `session:{sessionId}:req:N:messages` after SCAN processes cursor "100" but before reading cursor "200", those keys won't be returned by SCAN.

A terminated session can be partially revived with orphaned keys that escape deletion.
@coderabbitai Done per the latest nitpick suggestions — please re-review and update the status (CI is all green).

✅ Actions performed: review triggered.
Actionable comments posted: 1
🧹 Nitpick comments (2)

tests/unit/lib/session-manager-terminate-session.test.ts (1)

44-49: The `pipeline` mock only covers two calls; in a multi-page SCAN scenario the third call returns `undefined`.

`terminateSession`'s SCAN loop calls `redis.pipeline()` once for every page that returns non-empty `keys`. When `scan` is mocked to iterate multiple times (cursor != "0"), the third `pipeline()` call returns `undefined`, and `deletePipeline.del` throws a `TypeError`. The current tests' `scan` always returns cursor `"0"` immediately, so this path is uncovered. Consider adding a multi-page SCAN test case, or changing the `pipeline` default implementation to always return `deletePipelineRef`:

Suggested change:

```diff
 redisClientRef = {
   status: "ready",
   set: vi.fn(async () => "OK"),
   get: vi.fn(async () => null),
   hget: vi.fn(async () => null),
   scan: vi.fn(async () => ["0", []]),
   pipeline: vi
     .fn()
-    .mockImplementationOnce(() => pipelineRef)
-    .mockImplementationOnce(() => deletePipelineRef),
+    .mockImplementationOnce(() => pipelineRef)
+    .mockImplementation(() => deletePipelineRef),
 };
```

src/lib/session-manager.ts (1)
2101-2116: The ZSET-cleanup pipeline's `exec()` result is silently discarded, so errors are unobservable.

The return value of `await pipeline.exec()` is never checked; if a `zrem` command fails, no log is left, and operators cannot tell whether the active_sessions ZSET cleanup succeeded.

Suggested improvement (optional):

```diff
-    await pipeline.exec();
+    const zremResults = await pipeline.exec();
+    if (zremResults) {
+      for (const [err] of zremResults) {
+        if (err) {
+          logger.warn("SessionManager: Failed to remove session from active_sessions ZSET", {
+            sessionId,
+            error: err,
+          });
+        }
+      }
+    }
```
Actionable comment — src/lib/session-manager.ts, lines 82-90: `TerminatedSessionError` currently hardcodes an English message in its constructor (`super("Session has been terminated")`), which violates the i18n rules. Change the error to carry a non-user-facing identifier (e.g. an internal code like `"ERR_TERMINATED_SESSION"`, or no human-readable message at all) so that `TerminatedSessionError.sessionId`/`terminatedAt` remain available without embedding a display string, and let ProxySessionGuard or the HTTP layer map that identifier to localized text via the i18n system.
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (2)
- src/lib/session-manager.ts
- tests/unit/lib/session-manager-terminate-session.test.ts
src/lib/session-manager.ts (outdated)
```diff
@@ -1957,6 +2090,14 @@ export class SessionManager {
       keyId = keyIdStr ? parseInt(keyIdStr, 10) : null;
       userId = userIdStr ? parseInt(userIdStr, 10) : null;

+      if (!Number.isFinite(providerId)) {
+        providerId = null;
+      }
+
+      if (!Number.isFinite(keyId)) {
+        keyId = null;
+      }
+
+      if (!Number.isFinite(userId)) {
+        userId = null;
+      }
@@ -1971,65 +2112,108 @@ export class SessionManager {
       );
     }

-    // 2. Delete all session-related keys
+    // 2. Remove from the ZSETs (always attempted, even if the lookup failed)
     const pipeline = redis.pipeline();
+    const zremKeys: string[] = [];

-    // Basic binding info
-    pipeline.del(`session:${sessionId}:provider`);
-    pipeline.del(`session:${sessionId}:key`);
-    pipeline.del(`session:${sessionId}:info`);
-    pipeline.del(`session:${sessionId}:last_seen`);
-    pipeline.del(`session:${sessionId}:concurrent_count`);
-
-    // Optional: messages and response (if storage is enabled)
-    pipeline.del(`session:${sessionId}:messages`);
-    pipeline.del(`session:${sessionId}:response`);
+    const globalKey = getGlobalActiveSessionsKey();
+    pipeline.zrem(globalKey, sessionId);
+    zremKeys.push(globalKey);

-    // 3. Remove from the ZSETs (always attempted, even if the lookup failed)
-    pipeline.zrem(getGlobalActiveSessionsKey(), sessionId);
+    if (providerId !== null) {
+      const key = `provider:${providerId}:active_sessions`;
+      pipeline.zrem(key, sessionId);
+      zremKeys.push(key);
+    }

-    if (providerId) {
-      pipeline.zrem(`provider:${providerId}:active_sessions`, sessionId);
+    if (keyId !== null) {
+      const key = getKeyActiveSessionsKey(keyId);
+      pipeline.zrem(key, sessionId);
+      zremKeys.push(key);
     }

-    if (keyId) {
-      pipeline.zrem(getKeyActiveSessionsKey(keyId), sessionId);
+    if (userId !== null) {
+      const key = getUserActiveSessionsKey(userId);
+      pipeline.zrem(key, sessionId);
+      zremKeys.push(key);
     }

-    if (userId) {
-      pipeline.zrem(getUserActiveSessionsKey(userId), sessionId);
+    try {
+      const results = await pipeline.exec();
+      if (results) {
+        for (let i = 0; i < results.length; i++) {
+          const [err] = results[i];
+          if (!err) continue;
+          logger.warn("SessionManager: Failed to remove session from active_sessions ZSET", {
+            sessionId,
+            zsetKey: zremKeys[i],
+            providerId,
+            keyId,
+            userId,
+            error: err,
+          });
+        }
+      }
+    } catch (zremError) {
+      logger.warn("SessionManager: Failed to cleanup active_sessions ZSET, continuing", {
+        sessionId,
+        providerId,
+        keyId,
+        userId,
+        error: zremError,
+      });
     }

-    // 4. Delete the hash mappings (if any)
-    // Note: hashes cannot be reverse-looked-up directly; only known session keys can be cleaned.
-    // The hash expires via its TTL, which does not affect functionality.
+    // 3. Delete session:* keys (including the new req:* format; keep the terminated marker)
+    let deletedKeys = 0;
+    let cursor = "0";
+    const escapedSessionId = SessionManager.escapeRedisMatchPatternLiteral(sessionId);
+    const matchPattern = `session:${escapedSessionId}:*`;

-    const results = await pipeline.exec();
+    do {
+      const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
+        string,
+        string[],
+      ];
+      const nextCursor = scanResult[0];
+      const keys = scanResult[1] ?? [];
+      cursor = nextCursor;

-    // 5. Check the results
-    let deletedKeys = 0;
-    if (results) {
-      for (const [err, result] of results) {
+      if (keys.length === 0) continue;

+      const deletePipeline = redis.pipeline();
+      for (const key of keys) {
+        if (key === terminatedKey) continue;
+        deletePipeline.del(key);
+      }

+      const deleteResults = await deletePipeline.exec();
+      if (!deleteResults) continue;

+      for (const [err, result] of deleteResults) {
         if (!err && typeof result === "number" && result > 0) {
           deletedKeys += result;
         }
       }
-    }
+    } while (cursor !== "0");
```
Critical race: in-flight requests that pass the termination check (line 448) before the marker write (line 2062) will continue executing through `session-guard.ts` and call `storeSessionInfo` (line 162), `storeSessionMessages` (line 129), `storeSessionRequestBody` (line 111), etc.

None of the `storeSession*` methods check for the termination marker before writing, so newly created keys (`session:{sessionId}:info`, `session:{sessionId}:req:N:messages`, etc.) are then deleted by the SCAN loop (lines 2173-2198), causing data loss for legitimately in-flight requests.

**This has been flagged 10+ times in previous threads but remains unaddressed.** It requires either:

1. adding termination-marker checks to all `storeSession*` methods before writing any keys
2. using a distributed lock during termination
3. accepting the data-loss window as an acceptable trade-off (and documenting this behavior)
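Option 1 from the list above could look roughly like the sketch below. The `RedisLike` interface, `storeGuarded`, and `memoryRedis` names are illustrative assumptions for this thread, not the repo's actual API, and a plain GET-then-SET gate still leaves a tiny race that only a Lua compare-and-set or WATCH/MULTI would close:

```typescript
// Sketch of a termination-marker gate (option 1); names are illustrative,
// not the actual session-manager API.
interface RedisLike {
  get(key: string): Promise<string | null>;
  set(key: string, value: string): Promise<void>;
}

// Refuse to write any session key once the terminated marker exists, closing
// (most of) the window in which in-flight requests recreate deleted keys.
async function storeGuarded(
  redis: RedisLike,
  sessionId: string,
  suffix: string,
  value: string
): Promise<boolean> {
  const marker = await redis.get(`session:${sessionId}:terminated`);
  if (marker !== null) return false; // session terminated: drop the write
  await redis.set(`session:${sessionId}:${suffix}`, value);
  return true;
}

// Minimal in-memory stand-in, in the spirit of the unit-test mocks in this PR.
function memoryRedis(
  initial: Record<string, string> = {}
): RedisLike & { data: Record<string, string> } {
  const data: Record<string, string> = { ...initial };
  return {
    data,
    async get(key: string) {
      return key in data ? data[key] : null;
    },
    async set(key: string, value: string) {
      data[key] = value;
    },
  };
}
```

Each `storeSession*` method would route its writes through such a gate; the extra GET per write is the cost of turning the cleanup from "delete and hope" into "block, then delete".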
src/lib/session-manager.ts (outdated)
```diff
 do {
   const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
     string,
     string[],
   ];
   const nextCursor = scanResult[0];
   const keys = scanResult[1] ?? [];
   cursor = nextCursor;

-  // 5. Check the results
-  let deletedKeys = 0;
-  if (results) {
-    for (const [err, result] of results) {
+  if (keys.length === 0) continue;

+  const deletePipeline = redis.pipeline();
+  for (const key of keys) {
+    if (key === terminatedKey) continue;
+    deletePipeline.del(key);
+  }

+  const deleteResults = await deletePipeline.exec();
+  if (!deleteResults) continue;

+  for (const [err, result] of deleteResults) {
     if (!err && typeof result === "number" && result > 0) {
       deletedKeys += result;
     }
   }
 } while (cursor !== "0");
```
SCAN lacks snapshot isolation - if concurrent requests write session:{sessionId}:req:N:messages after SCAN processes cursor "100" but before reading cursor "200", those keys won't be returned
terminated session can be partially revived with orphaned keys that escape deletion
flagged 10+ times in previous threads but remains unaddressed
consider using Lua script for atomic deletion of all matching keys, or accept this limitation and document it
A few reinforcements have been made based on the AI review feedback (commit: d876014):

Regarding the SCAN/concurrency race Greptile raised: termination cannot cancel requests that are already executing, so cleanup can only be best-effort; the terminated marker, however, guarantees that subsequent requests still carrying the same … @coderabbitai please update the review status (and feel free to keep pointing out anything still missing).
✅ Confirmed improvements:

The trade-off explanation for the concurrency race is also reasonable: the terminated marker provides the blocking guarantee, while cleanup takes a best-effort approach.

✅ Actions performed: review triggered.
♻️ Duplicate comments (1)

src/lib/session-manager.ts (1)

82-91: Resolved: the i18n issue in `TerminatedSessionError`.

`super("ERR_TERMINATED_SESSION")` is now an internal error code, and the comment explicitly states that the user-facing message is mapped uniformly by the upper layer, in line with the i18n rules.
🧹 Nitpick comments (1)

src/lib/session-manager.ts (1)

2182-2197: An empty pipeline is still executed when every scanned key is `terminatedKey`.

If every key in a SCAN page equals `terminatedKey` (they are skipped, so no `del` is added), `deletePipeline` contains no commands but `exec()` is still called. In ioredis an empty pipeline returns an empty array rather than `null`, so this is functionally harmless, but it incurs an unnecessary network round-trip.

Suggested change:

```diff
   const deletePipeline = redis.pipeline();
+  let hasDels = false;
   for (const key of keys) {
     if (key === terminatedKey) continue;
     deletePipeline.del(key);
+    hasDels = true;
   }
-  const deleteResults = await deletePipeline.exec();
-  if (!deleteResults) continue;
+  if (!hasDels) continue;
+  const deleteResults = await deletePipeline.exec();
+  if (!deleteResults) continue;
```
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (2)
- src/lib/session-manager.ts
- tests/unit/lib/session-manager-terminate-session.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/unit/lib/session-manager-terminate-session.test.ts
A follow-up (commit: f1a9ceb) with further fixes for the recent AI review points:

Regarding the SCAN concurrency window Greptile raised: eliminating it completely would require a heavier strategy (e.g. a terminated gate on the write side too, or Lua/locking). The current implementation takes "block first + best-effort cleanup" as the trade-off; even if orphaned keys survive the concurrency window, they do not affect later reuse (requests are blocked with 410).

@coderabbitai review
A few small enhancement suggestions from my side, for reference:

```ts
export function isTerminatedSessionError(err: unknown): err is TerminatedSessionError {
  return (
    err instanceof TerminatedSessionError ||
    (typeof err === "object" && err !== null && (err as any).message === "ERR_TERMINATED_SESSION")
  );
  // If a `code: 'ERR_TERMINATED_SESSION'` field is later added to the error object,
  // checking `code` first is more robust.
}
```

Then branch on this guard in the proxy guard and return 410.

Overall, I agree with the current "block first + best-effort cleanup" strategy. If you want, I can open a follow-up issue to track point 5 (assigned to you by default); let me know whether you need it and whether the title/details should follow the suggestions above. I'm triggering a re-review now and will follow up with any new findings.

✅ Actions performed: review triggered.
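Wiring the guard suggested above through to the 410 response could look roughly like the sketch below. The `TerminatedSessionError` re-declaration and the `toHttpError` helper are illustrative only, written to be self-contained for this thread rather than copied from the repo:

```typescript
// Illustrative sketch of mapping a terminated-session error to HTTP 410;
// the class shape and helper names are assumptions, not the repo's actual code.
class TerminatedSessionError extends Error {
  constructor(
    public readonly sessionId: string,
    public readonly terminatedAt: string
  ) {
    super("ERR_TERMINATED_SESSION"); // internal code; user-facing text is mapped upstream
    this.name = "TerminatedSessionError";
  }
}

function isTerminatedSessionError(err: unknown): err is TerminatedSessionError {
  return (
    err instanceof TerminatedSessionError ||
    (typeof err === "object" &&
      err !== null &&
      (err as { message?: unknown }).message === "ERR_TERMINATED_SESSION")
  );
}

// Proxy-layer mapping: termination -> 410 Gone, everything else -> 500.
function toHttpError(err: unknown): { status: number; code: string } {
  if (isTerminatedSessionError(err)) {
    return { status: 410, code: "ERR_TERMINATED_SESSION" };
  }
  return { status: 500, code: "ERR_INTERNAL" };
}
```

The duck-typed `message` check keeps the guard working across module-instance boundaries (e.g. when two copies of the class exist in a test bundle), which is the point of the suggestion.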
🧹 Nitpick comments (2)

tests/unit/lib/session-manager-terminated-remap.test.ts (1)

66-79: Consider asserting the `TerminatedSessionError` properties.

The test currently only verifies the type of the thrown error; `error.sessionId` and `error.terminatedAt` are not asserted, so there is no guarantee these two fields are passed through correctly.

♻️ Suggested change:

```diff
-      await expect(
-        SessionManager.getOrCreateSessionId(keyId, [], oldSessionId)
-      ).rejects.toBeInstanceOf(TerminatedSessionError);
+      const error = await SessionManager.getOrCreateSessionId(keyId, [], oldSessionId).catch(
+        (e) => e
+      );
+      expect(error).toBeInstanceOf(TerminatedSessionError);
+      expect(error.sessionId).toBe(oldSessionId);
+      expect(error.terminatedAt).toBe("1");
```

src/lib/session-manager.ts (1)
2175-2184: The SCAN COUNT is a magic number and inconsistent with other call sites.

`terminateSession` uses `COUNT 200`, while `hasAnySessionMessages`, `getAllSessionsWithExpiry`, and `getAllSessionIds` all use `COUNT 100`. Consider extracting a named constant (e.g. `private static readonly SCAN_COUNT = 100`) and unifying the value across call sites, or explicitly distinguishing the scenarios where needed.

♻️ Suggested change (example):

```diff
+  private static readonly SCAN_COUNT = 100;
+  private static readonly TERMINATE_SCAN_COUNT = 200;

   // inside terminateSession
-      const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", 200)) as [
+      const scanResult = (await redis.scan(cursor, "MATCH", matchPattern, "COUNT", SessionManager.TERMINATE_SCAN_COUNT)) as [
```
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (3)
- src/lib/session-manager.ts
- tests/unit/lib/session-manager-terminate-session.test.ts
- tests/unit/lib/session-manager-terminated-remap.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/unit/lib/session-manager-terminate-session.test.ts
@coderabbitai Follow-ups added:

Please re-review and update the review status, thanks!
✅ 1. SCAN_COUNT / TERMINATE_SCAN_COUNT extracted as constants
✅ 2. TerminatedSessionError property assertions strengthened
✅ 3. provider.test return-shape consistency

These improvements further raise the code's maintainability and the tests' robustness. I will re-review and update the status.

✅ Actions performed: review triggered.
```ts
// 0. Write the termination marker first (so concurrent requests cannot revive the session during the cleanup window)
// Note: overwriting an existing value is allowed here, to refresh the TTL (repeated terminations extend the blocking window).
const markerResult = await redis.set(terminatedKey, terminatedAt, "EX", ttlSeconds);
```
race window still exists: requests that pass line 451 termination check before marker is written here will execute lines 111-176 in session-guard.ts, calling storeSessionRequestBody, storeSessionMessages, storeSessionInfo which don't check termination marker before writing
newly written keys get deleted by SCAN loop (lines 2173-2222), causing data loss for in-flight requests
flagged 10+ times but remains unaddressed - if this is accepted as a known limitation, document it explicitly in the function comment
```ts
// Note: Redis SCAN provides no snapshot semantics; to reduce misses in the concurrency window,
// at most two full scan-cleanup rounds are performed here.
const MAX_SCAN_ROUNDS = 2;
```
SCAN doesn't provide snapshot isolation - keys written after a cursor position is scanned won't be returned, allowing partial session resurrection with orphaned keys
two-round approach reduces but doesn't eliminate this. if concurrent request writes session:{sessionId}:req:N:messages between rounds, it escapes deletion
flagged 10+ times but remains unaddressed - consider Lua script for atomic pattern-match deletion, or document this limitation
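The Lua route mentioned above closes the gap because a script executes atomically on the server, so no concurrent write can land between scan pages. The script text and the argument layout below are assumptions sketched for this thread (ioredis-style `eval(script, numKeys, ...args)`), not code from the PR, and on Redis Cluster a pattern-scanning script needs extra care:

```typescript
// Sketch of atomic pattern-match deletion via a server-side Lua script.
// The script body and eval wiring are illustrative assumptions, not PR code.
const DELETE_SESSION_KEYS_LUA = `
local cursor = "0"
local deleted = 0
repeat
  local res = redis.call("SCAN", cursor, "MATCH", ARGV[1], "COUNT", 200)
  cursor = res[1]
  for _, key in ipairs(res[2]) do
    if key ~= ARGV[2] then
      deleted = deleted + redis.call("DEL", key)
    end
  end
until cursor == "0"
return deleted
`;

// Builds the argument list for an ioredis-style eval(script, numKeys, ...args) call.
function buildEvalArgs(sessionId: string, terminatedKey: string): (string | number)[] {
  return [DELETE_SESSION_KEYS_LUA, 0, `session:${sessionId}:*`, terminatedKey];
}
```

Invoked as something like `redis.eval(...buildEvalArgs(id, key))`, the whole scan-and-delete runs as one atomic unit, at the cost of blocking the Redis server for the duration of the scan — which is exactly the trade-off behind the "accept and document" alternative.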
```ts
); // short-context threshold
private static readonly ENABLE_SHORT_CONTEXT_DETECTION =
  process.env.ENABLE_SHORT_CONTEXT_DETECTION !== "false"; // enabled by default

// Session-termination marker TTL (seconds)
// Canonical env var: SESSION_TERMINATION_TTL_SECONDS
// Legacy names (to be deprecated): SESSION_TERMINATION_TTL / TERMINATED_SESSION_TTL
private static readonly TERMINATED_SESSION_TTL = (() => {
  const normalize = (value: string | undefined): string | undefined => {
    if (typeof value !== "string") return undefined;
    const trimmed = value.trim();
    return trimmed.length > 0 ? trimmed : undefined;
  };

  const rawPrimary = normalize(process.env.SESSION_TERMINATION_TTL_SECONDS);
  const rawLegacyA = normalize(process.env.SESSION_TERMINATION_TTL);
  const rawLegacyB = normalize(process.env.TERMINATED_SESSION_TTL);

  if (!rawPrimary && (rawLegacyA || rawLegacyB)) {
    logger.warn("SessionManager: Deprecated termination TTL env var detected", {
      SESSION_TERMINATION_TTL: rawLegacyA ? "set" : "unset",
      TERMINATED_SESSION_TTL: rawLegacyB ? "set" : "unset",
      preferred: "SESSION_TERMINATION_TTL_SECONDS",
    });
  }

  const raw = rawPrimary ?? rawLegacyA ?? rawLegacyB;
  const parsed = raw ? Number.parseInt(raw, 10) : Number.NaN;
  if (Number.isFinite(parsed) && parsed > 0) {
    return parsed;
  }
```
Env-var parsing treats the empty string as valid — `SESSION_TERMINATION_TTL_SECONDS=""` passes to `parseInt("", 10)`, returning `NaN`, which then falls back to the default. Consider using `|| undefined` instead of `??` for consistent empty-string handling. (src/lib/session-manager.ts, lines 122-151)
Actionable comments posted: 1
🧹 Nitpick comments (2)
tests/unit/lib/session-manager-terminate-session.test.ts (1)
33-36: Keep the delete pipeline mock consistent with the number of DEL calls

`deletePipelineRef.exec` currently returns a single fixed result, which can mask regressions in the `deletedKeys` count. Returning results dynamically based on the number of `del` calls strengthens the test signal. A possible adjustment:

```diff
- deletePipelineRef = {
-   del: vi.fn(() => deletePipelineRef),
-   exec: vi.fn(async () => [[null, 1]]),
- };
+ deletePipelineRef = {
+   del: vi.fn(() => deletePipelineRef),
+   exec: vi.fn(async () => {
+     const n = deletePipelineRef.del.mock.calls.length;
+     return Array.from({ length: n }, () => [null, 1] as const);
+   }),
+ };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/lib/session-manager-terminate-session.test.ts` around lines 33 - 36: the test's deletePipelineRef mock currently hardcodes exec to return a single deleted entry, which can hide regressions in deletedKeys counting; update the mock for deletePipelineRef so its del implementation tracks how many keys were passed/removed and make exec return a result array whose length matches that del call count (or the number of arguments passed to del), ensuring deletePipelineRef.del and deletePipelineRef.exec stay consistent with each other and the deletedKeys calculation.

src/lib/session-manager.ts (1)
2199-2214: Add per-key logging for DEL failures so cleanup residue can be traced

Only the count of successful deletions is accumulated; the `err` branch does not record which key failed, which makes fault localization costly. Log each failed item (at minimum `sessionId`, `key`, and `error`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/lib/session-manager.ts` around lines 2199 - 2214, The loop currently calls deletePipeline.del(key) for each key and then inspects deleteResults but does not log which deletes failed; capture the ordered list of keys you actually queued (e.g., const keysToDelete = keys.filter(k => k !== terminatedKey) before calling deletePipeline.exec()), then after const deleteResults = await deletePipeline.exec() iterate by index (for (let i=0; i<deleteResults.length; i++)) pairing deleteResults[i] with keysToDelete[i] and log any failure (err truthy or non-success result) including sessionId, the key, and the error/result; keep updating deletedInRound only for successful numeric results as currently done.
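The index-pairing this prompt describes can be sketched as follows, assuming ioredis-style pipeline semantics where `exec()` resolves to `[error, result]` tuples in queue order; `tallyDeletes` and its parameters are illustrative names, not the project's API:

```typescript
// Mirrors ioredis pipeline semantics: exec() resolves to an array of
// [error, result] tuples in the same order the commands were queued.
type ExecResult = [Error | null, unknown][];

// Pair each exec result with the key queued at the same index, report
// failures with the exact key, and count only successful numeric DEL replies.
function tallyDeletes(
  keysToDelete: string[],
  deleteResults: ExecResult,
  logFailure: (key: string, error: unknown) => void
): number {
  let deletedInRound = 0;
  for (let i = 0; i < deleteResults.length; i++) {
    const [err, result] = deleteResults[i];
    const key = keysToDelete[i];
    if (err) {
      logFailure(key, err); // surface the specific key that failed to delete
      continue;
    }
    if (typeof result === "number") {
      deletedInRound += result;
    }
  }
  return deletedInRound;
}
```

Capturing `keysToDelete` before `exec()` is what makes the by-index pairing valid, since pipelines preserve command order.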
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to Reviews > Disable Cache setting
📒 Files selected for processing (4)
- src/lib/session-manager.ts
- tests/unit/lib/session-manager-terminate-session.test.ts
- tests/unit/lib/session-manager-terminated-remap.test.ts
- tests/unit/repository/provider.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- tests/unit/lib/session-manager-terminated-remap.test.ts
```typescript
private static async readTerminationMarker(
  redis: Redis,
  sessionId: string
): Promise<string | null> {
  const terminatedKey = SessionManager.getTerminationMarkerKey(sessionId);
  try {
    const value = await redis.get(terminatedKey);
    if (typeof value !== "string" || value.length === 0) {
      return null;
    }
    return value;
  } catch (error) {
    logger.error("SessionManager: Failed to read termination marker", {
      error,
      sessionId,
    });
    return null;
  }
}
```
A failed termination-marker read is treated as "not terminated", which loosens the blocking semantics

Catching the exception and returning null here makes callers treat a Redis read failure as "no termination marker", so a clientSessionId that should have been blocked may continue to be reused. Consider separating "read failed" from "not terminated" (e.g., throw a dedicated error or return a tri-state result), and have the client-session branch of getOrCreateSessionId treat that case as "termination status unconfirmed".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/lib/session-manager.ts` around lines 162 - 179, readTerminationMarker
currently swallows Redis errors and returns null, which makes callers (notably
getOrCreateSessionId) treat read failures as "no termination" and incorrectly
reuse blocked clientSessionId; change readTerminationMarker to surface read
failures (either by throwing a specific error like TerminationMarkerReadError or
returning a tri-state result such as {status: "error" | "none" | "present",
value?}) instead of returning null on catch, and update getOrCreateSessionId's
client-session branch to handle the "error/unconfirmed" case by refusing to
assume "not terminated" (e.g., abort creation or retry/mark unsafe) using the
new error/type so callers can distinguish real absence of the marker from Redis
read failures; reference SessionManager.readTerminationMarker and
SessionManager.getTerminationMarkerKey when locating the code to change.
Summary
Fixes the behavior where a terminated session could still be "revived":

- Writes a `session:{id}:terminated` marker and cleans up `session:{id}:*` (including the new `req:*` key format)
- Requests reusing a terminated sessionId are blocked directly with 410 (no new sessionId is generated to continue execution)
- `SCAN MATCH` escapes glob literals in the sessionId to avoid mismatches and accidental deletions

Problem
The current implementation has two problems:

- A terminated session can still be "revived" and keep serving requests
- Cleanup misses the new `req:*` key format, so residual data may still be visible on the session detail page after termination

Solution
1. Termination marker + complete cleanup
- `SessionManager.terminateSession` writes the termination marker first (TTL defaults to 24h, configurable via `SESSION_TERMINATION_TTL_SECONDS`)
- Deletes all `session:{id}:*` keys via `SCAN` (preserving the termination marker) and removes the session from the active_sessions ZSET

2. Block terminated session (410)

- `getOrCreateSessionId` throws `TerminatedSessionError` when it detects the termination marker
- `ProxySessionGuard` catches it, converts it to 410, and blocks the request (no new sessionId is generated)

Testing
- `bun run build`
- `bun run lint`
- `bun run typecheck`
- `bun run test`

Greptile Summary
This PR fixes session termination by writing a `session:{id}:terminated` marker that blocks future requests with the same sessionId (returning 410), and implements glob escaping for Redis SCAN to prevent accidental deletion of other sessions.

Key improvements:

- `TerminatedSessionError` + 410 response for terminated sessions
- Escaping glob metacharacters (`*?[]`) prevents glob injection

Critical race conditions remain unresolved (flagged 10+ times in previous threads):

- In-flight requests can still write keys via `storeSessionInfo`/`storeSessionMessages` in `session-guard.ts:111-176`, which are then deleted by the SCAN cleanup, causing data loss

These race windows are inherent to Redis SCAN without distributed locks or Lua scripts. If accepted as known limitations, document explicitly in function comments.
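The glob escaping Greptile credits can be sketched as follows; `escapeRedisGlob` and `sessionKeyPattern` are illustrative names, not necessarily the project's implementation:

```typescript
// Redis MATCH patterns treat *, ?, [, ], and \ specially; escaping them with
// a backslash makes a sessionId match only its literal self.
function escapeRedisGlob(literal: string): string {
  return literal.replace(/[\\*?[\]]/g, (ch) => `\\${ch}`);
}

// Building the cleanup pattern for one session's keys from an untrusted id:
function sessionKeyPattern(sessionId: string): string {
  return `session:${escapeRedisGlob(sessionId)}:*`;
}
```

Without this, a crafted sessionId such as `*` would make `SCAN MATCH session:*:*` sweep up every session's keys.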
Confidence Score: 3/5
- `src/lib/session-manager.ts` - the race conditions between marker write and SCAN cleanup could cause data loss in high-concurrency scenarios

Important Files Changed
- Raises `TerminatedSessionError` and blocks with 410. Prevents fallback session generation for terminated sessions.
- Checks `result.markerOk` instead of a boolean success. Correct integration with the new termination API.

Sequence Diagram
```mermaid
sequenceDiagram
    participant Client
    participant Guard as ProxySessionGuard
    participant SM as SessionManager
    participant Redis
    Note over Client,Redis: Normal Request Flow
    Client->>Guard: Request with sessionId
    Guard->>SM: getOrCreateSessionId(sessionId)
    SM->>Redis: GET session:id:terminated
    Redis-->>SM: null (not terminated)
    SM-->>Guard: sessionId (allowed)
    Guard->>SM: storeSessionInfo/Messages
    SM->>Redis: SET session:id:info/messages
    Note over Client,Redis: Admin Terminates Session
    activate SM
    SM->>Redis: SET session:id:terminated (marker)
    SM->>Redis: ZREM from active_sessions
    SM->>Redis: SCAN session:id:*
    Redis-->>SM: [keys...]
    SM->>Redis: DEL keys (except :terminated)
    deactivate SM
    Note over Client,Redis: Blocked Request (After Termination)
    Client->>Guard: Request with same sessionId
    Guard->>SM: getOrCreateSessionId(sessionId)
    SM->>Redis: GET session:id:terminated
    Redis-->>SM: "timestamp" (terminated!)
    SM-->>Guard: TerminatedSessionError
    Guard-->>Client: 410 Gone
    Note over Client,Redis: Race Condition Window
    Client->>Guard: In-flight request (passed line 451)
    Note over SM,Redis: Termination starts
    SM->>Redis: SET session:id:terminated
    Note over Guard: Request continues in session-guard
    Guard->>SM: storeSessionInfo (no marker check)
    SM->>Redis: SET session:id:info
    Note over SM,Redis: SCAN cleanup begins
    SM->>Redis: SCAN & DEL session:id:*
    Redis->>Redis: Deletes newly written keys
    Note over Client,Redis: Data loss for in-flight request
```

Last reviewed commit: df6d675