feat(web): show queued status for messages pending inference#492
junmo-kim wants to merge 6 commits into tiann:main
Conversation
Add onBatchConsumed callback to MessageQueue2 that fires after collectBatch() returns messages. Wire it through sessionBase → apiSession → Hub socket handler → SSE so the web client can distinguish queued vs consumed messages. New socket event 'messages-consumed' follows the session-alive pattern (sid validation, access check, onWebappEvent relay). Fully backward-compatible: old hubs ignore unknown events, old CLIs never set the optional callback.
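The callback shape can be sketched roughly as follows; the item type and the queue internals here are invented simplifications, not the real `MessageQueue2`:

```typescript
// Simplified sketch: a queue whose drain reports the consumed localIds
// through an optional callback. Real MessageQueue2 internals differ.
type QueueItem = { localId: string; text: string }

class MessageQueue2 {
  private items: QueueItem[] = []
  // Optional so older CLIs that never set it keep working unchanged.
  onBatchConsumed?: (localIds: string[]) => void

  push(item: QueueItem): void {
    this.items.push(item)
  }

  // Drain the queue and report the consumed localIds to the callback.
  collectBatch(): QueueItem[] {
    const batch = this.items.splice(0, this.items.length)
    if (batch.length > 0) {
      this.onBatchConsumed?.(batch.map((item) => item.localId))
    }
    return batch
  }
}
```

Because the callback is optional, a CLI that never sets it behaves exactly as before, which is what makes the change backward-compatible.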
When the session is thinking (agent responding), new messages are now displayed with a clock icon and `queued` status instead of `sending`. Once the CLI drains its queue and emits the `messages-consumed` event (added in the previous commit), all queued messages transition to `sent`.

- Add `queued` to MessageStatus type
- useSendMessage: accept isSessionThinking option, set initial optimistic status accordingly via createQueuedMessage()
- useSSE: handle messages-consumed sync event via flushQueuedMessages() — both functions are extension points for a future floating-UI PR
- MessageStatusIndicator: add QueuedIcon (clock SVG)
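The send-time status choice can be sketched as a single branch; this is a minimal illustration with only the two statuses this commit touches, not the real hook:

```typescript
// Sketch of the initial optimistic status; the real useSendMessage tracks
// this via an isSessionThinking option and MessageStatus has more members.
type OptimisticStatus = 'queued' | 'sending'

function initialOptimisticStatus(isSessionThinking: boolean): OptimisticStatus {
  // A mid-turn send is durable on the hub but not yet consumed by the CLI,
  // so it renders as queued (clock icon) rather than sending.
  return isSessionThinking ? 'queued' : 'sending'
}
```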
Findings
- [Major] Queued status can flip based on later session state instead of send-time state — `onSuccess` re-reads `isSessionThinkingRef.current` instead of using the status chosen in `onMutate`, so a message queued while the agent was thinking can become `sent` before the CLI actually consumes it if `session.thinking` turns false before the POST resolves. Evidence: `web/src/hooks/mutations/useSendMessage.ts:89`, `web/src/hooks/mutations/useSendMessage.ts:96`.
Suggested fix:

```typescript
type SendMessageInput = {
  sessionId: string
  text: string
  localId: string
  createdAt: number
  attachments?: AttachmentMetadata[]
  optimisticStatus: 'queued' | 'sending'
}

const optimisticStatus = isSessionThinkingRef.current ? 'queued' : 'sending'
mutation.mutate({ sessionId: targetSessionId, text, localId, createdAt, attachments, optimisticStatus })

onMutate: async (input) => {
  appendOptimisticMessage(input.sessionId, createOptimisticMessage(input, input.optimisticStatus))
}

onSuccess: (_, input) => {
  updateMessageStatus(input.sessionId, input.localId, input.optimisticStatus === 'queued' ? 'queued' : 'sent')
}
```
- [Major] `messages-consumed` can mark unrelated queued messages as sent — the new event only carries `sid`, and `flushQueuedStatuses()` converts every queued message in that session to `sent`. If another message is queued after the CLI has already drained one batch but before that SSE event reaches the browser, the later message is still waiting in the CLI queue but will be marked `sent` anyway. The same over-broad flush also happens on the `thinking: false` fallback. Evidence: `shared/src/socket.ts:148`, `web/src/hooks/useSSE.ts:507`, `web/src/lib/message-window-store.ts:544`.
Suggested fix:

```typescript
// cli -> hub -> web
'messages-consumed': (data: { sid: string; localIds: string[] }) => void

export function markMessagesConsumed(sessionId: string, localIds: string[]): void {
  for (const localId of localIds) {
    updateMessageStatus(sessionId, localId, 'sent')
  }
}
```
Summary
- Review mode: initial
- 2 findings. Main risk: the new queued affordance can disappear too early or show `sent` for messages that are still waiting in the CLI queue.
- Targeted coverage for these async ordering cases was not found in `web/src/hooks/mutations/useSendMessage.test.tsx`.
Testing
- Not run (automation)
- Suggested tests: cover `thinking=true` at enqueue time with `thinking=false` before mutation success, and cover two queued messages where the first batch is consumed before the second is.
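The first suggested test reduces to checking that the status captured at mutate time survives a later `thinking` flip; a sketch with hypothetical names, not the real hook:

```typescript
// Hypothetical reduction of the race: capture the status once at enqueue
// time and reuse it on success, instead of re-reading live session state.
type MutationCtx = { optimisticStatus: 'queued' | 'sending' }

function onMutate(isThinkingAtEnqueue: boolean): MutationCtx {
  return { optimisticStatus: isThinkingAtEnqueue ? 'queued' : 'sending' }
}

function onSuccess(ctx: MutationCtx): 'queued' | 'sent' {
  // A message queued while the agent was thinking must stay queued even if
  // session.thinking flipped to false before the POST resolved.
  return ctx.optimisticStatus === 'queued' ? 'queued' : 'sent'
}
```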
HAPI Bot
…tion Address review feedback: messages-consumed event now carries a consumedAt timestamp so the web only flushes messages created before that time. Prevents a race where a newly queued message could be marked sent before the CLI actually processes it. Also captures optimistic status in mutation context so onSuccess uses the send-time thinking state instead of re-reading the ref.
Findings
- [Major] `consumedAt` ordering is based on different machine clocks, so the queued-state flush can still acknowledge the wrong messages. The optimistic message timestamp comes from the browser (`web/src/hooks/mutations/useSendMessage.ts:124`), but the cutoff now comes from the CLI host (`cli/src/utils/MessageQueue2.ts:279`). In HAPI's remote/PWA flow those clocks are independent, so skew can make a later queued message look older than the consumed batch, or leave an already-consumed message stuck queued. Evidence: `web/src/lib/message-window-store.ts:554`.
Suggested fix:

```typescript
// Carry consumed message identities instead of comparing wall-clock timestamps.
'messages-consumed': (data: { sid: string; localIds: string[] }) => void

export function markMessagesConsumed(sessionId: string, localIds: string[]): void {
  for (const localId of localIds) {
    updateMessageStatus(sessionId, localId, 'sent')
  }
}
```
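The skew failure mode can be made concrete with invented numbers (all values below are hypothetical):

```typescript
// Timestamp-cutoff flush: a message counts as consumed if its browser-stamped
// createdAt is not after the CLI-stamped consumedAt.
function flushedByTimestamp(createdAtMs: number, consumedAtMs: number): boolean {
  return createdAtMs <= consumedAtMs
}

// Suppose the browser clock runs 5s behind the CLI host (invented values).
const cliConsumedAt = 1_000_000      // CLI clock when the batch drained
const browserBehindMs = 5_000
const realSendTime = 1_003_000       // message queued AFTER the drain, in CLI time
const browserCreatedAt = realSendTime - browserBehindMs // what the browser stamps

// The message was never consumed, yet the timestamp cutoff flushes it.
const wronglyFlushed = flushedByTimestamp(browserCreatedAt, cliConsumedAt)
```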
Summary
- Review mode: follow-up after new commits
- 1 remaining finding. The previous send-time status regression is fixed, but the current `consumedAt` cutoff still depends on browser time vs CLI time and is unreliable for remote clients.
- Not found in repo/docs: a test covering skewed browser/CLI clocks or a queued-after-batch case with clock drift.
Testing
- Not run (automation)
- Suggested tests: simulate browser `createdAt` ahead of and behind the CLI clock; cover two queued messages where only the first batch is consumed and the second stays queued under clock skew.
HAPI Bot
The previous consumedAt timestamp approach compared browser-set createdAt against a CLI-host timestamp. With remote/PWA clients these clocks drift independently, so queued messages could be marked sent too early or stay queued after being consumed. Instead, carry the localId from the incoming hub 'update' event down through the message queue, and return the consumed localIds from collectBatch. The 'messages-consumed' socket event now ships those exact localIds to the hub/web, and the web store transitions only matching queued messages to sent. The thinking→idle fallback still performs a broad flush as a last-resort safety net.
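Applied to the web store, the localId-based ack can be sketched like this (the store shape here is invented for illustration):

```typescript
type StoredMsg = { localId: string; status: 'queued' | 'sending' | 'sent' }

// Transition only the queued messages named in the ack; later queued
// messages in the same session stay queued until their own batch drains.
function markMessagesConsumed(messages: StoredMsg[], localIds: string[]): StoredMsg[] {
  const acked = new Set(localIds)
  return messages.map((m) =>
    m.status === 'queued' && acked.has(m.localId) ? { ...m, status: 'sent' } : m
  )
}
```

Because identity replaces wall-clock comparison, the skew between browser and CLI clocks no longer matters.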
Findings
- [Major] `thinking: false` still acknowledges messages that have not been consumed yet. `MessageQueue2` only reports `localIds` for the batch it actually dequeues, but the web fallback still converts every remaining `queued` message to `sent` on any idle transition. When queued messages split across batches, for example after a permission/model change or an isolated command, later messages stay in the CLI queue while the UI shows them as already consumed. Evidence: `web/src/hooks/useSSE.ts:518`, `web/src/hooks/useSSE.ts:526`, `web/src/lib/message-window-store.ts:568`, `cli/src/utils/MessageQueue2.ts:266`.
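Why one idle transition can leave messages behind: a drain only takes items that can run together. The batching rule below is a hypothetical simplification (grouping by mode hash), not the real `MessageQueue2` logic:

```typescript
type QueuedItem = { localId: string; modeHash: string }

// Simplified batching rule: a batch takes only the leading run of items with
// the same mode hash; a mode change or isolated command starts a new batch.
function collectBatch(queue: QueuedItem[]): { batch: QueuedItem[]; rest: QueuedItem[] } {
  if (queue.length === 0) return { batch: [], rest: [] }
  const hash = queue[0].modeHash
  const splitAt = queue.findIndex((item) => item.modeHash !== hash)
  const cut = splitAt === -1 ? queue.length : splitAt
  return { batch: queue.slice(0, cut), rest: queue.slice(cut) }
}
```

Under this rule, two queued messages with different mode hashes drain in two separate batches, so a flush-everything fallback on the first idle transition over-acknowledges the second message.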
Summary
- Review mode: follow-up after new commits
- 1 finding. The `localId`-based ack fixes the earlier cross-clock bug, but the `thinking: false` fallback still over-acknowledges later queued batches.
- Not found in repo/docs: a test covering multiple queued batches with different mode hashes or isolated commands across a `thinking: true -> false -> true` transition.
Testing
- Not run (automation): `bun` is not installed in this runner.
- Suggested tests: queue two user messages that land in different `MessageQueue2` batches, emit `messages-consumed` for only the first batch, then assert the second message remains `queued` after the intermediate `thinking: false` update.
HAPI Bot
The `thinking: false` fallback converted every queued message in a session to sent, which overshot when a session had multiple batches (different mode hashes, isolated commands) and the agent briefly went idle between turns. Later messages that were still waiting in the CLI queue ended up marked as sent prematurely. With the `localId`-based `messages-consumed` ack, the fallback is no longer necessary for the happy path. Drop it and rely on the ack as the sole transition path. If an ack is lost (e.g. CLI crash) the message stays queued, which accurately reflects that it was never processed. Merge logic already preserves the optimistic status on backfill, so a page refresh keeps the queued affordance instead of hiding it.
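The backfill preservation mentioned above can be sketched as follows; this is a simplified stand-in for the real `mergeMessages`, which handles far more state:

```typescript
type EchoMsg = { localId?: string; text: string; status?: 'queued' | 'sending' | 'sent' }

// When the server echo replaces an optimistic message and carries no status
// of its own, keep the optimistic status so the queued affordance survives
// the optimistic -> server-echo replacement.
function carryOptimisticStatus(
  merged: EchoMsg[],
  optimisticStatusByLocalId: Map<string, 'queued' | 'sending' | 'sent'>
): EchoMsg[] {
  if (optimisticStatusByLocalId.size === 0) return merged
  return merged.map((msg) =>
    msg.localId && !msg.status && optimisticStatusByLocalId.has(msg.localId)
      ? { ...msg, status: optimisticStatusByLocalId.get(msg.localId)! }
      : msg
  )
}
```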
Findings
- [Major] Queued status can stick forever when the `messages-consumed` SSE event is missed during reconnect or stream loss. This diff now copies the optimistic `queued` state onto the persisted server message, but the only code path that clears `queued` is the live SSE handler. The SSE route does not replay past events, so if that ack is missed the real stored message stays dimmed indefinitely. Evidence: `web/src/lib/messages.ts:63`, `web/src/hooks/useSSE.ts:500`, `hub/src/web/routes/events.ts:80`.
Suggested fix:

```typescript
// Only carry over durable statuses here. `queued` depends on a live ack,
// and this SSE stream has no replay/backfill for missed events.
if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {
  const carried = optimisticStatusByLocalId.get(msg.localId)
  return carried && carried !== 'queued' ? { ...msg, status: carried } : msg
}
```
Summary
- Review mode: follow-up after new commits
- 1 finding. The over-ack fallback from the prior bot review is gone, but the new status-preservation logic can now leave delivered messages stuck in the queued state after a missed SSE ack.
- Not found in repo/docs: any SSE replay or backfill path for `messages-consumed`.
Testing
- Not run (automation): `bun` is not installed in this runner.
- Suggested tests: drop the `messages-consumed` event, then ingest/refetch the stored user message with the same `localId` and assert the UI does not remain `queued` forever after reconnect.
HAPI Bot
```typescript
})
if (optimisticStatusByLocalId.size > 0) {
  merged = merged.map((msg) => {
    if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {
```
[MAJOR] This now copies the optimistic queued state onto the persisted server message, but queued is only cleared by the live messages-consumed SSE handler. Because the SSE stream has reconnects and no replay/backfill, missing that event leaves the real message dimmed forever after refetch/reconnect.
Suggested fix:
```typescript
if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {
  const carried = optimisticStatusByLocalId.get(msg.localId)
  return carried && carried !== 'queued'
    ? { ...msg, status: carried }
    : msg
}
```

Verifies that when two queued messages have different mode hashes, only the first batch's localId is reported on ack and the second message remains in the queue until its own batch is consumed.
Findings
- [Major] `queued` can persist indefinitely after reconnect/refresh because this merge path copies the optimistic `queued` state onto the stored server message, but only the live `messages-consumed` SSE event clears it. The SSE stream reconnects without replay/backfill, so missing that event leaves delivered messages dimmed forever. Evidence: `web/src/lib/messages.ts:79`; context: `web/src/hooks/useSSE.ts:500`, `hub/src/web/routes/events.ts:80`.
Suggested fix:

```typescript
if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {
  const carried = optimisticStatusByLocalId.get(msg.localId)
  return carried && carried !== 'queued' ? { ...msg, status: carried } : msg
}
```
Summary
- Review mode: follow-up after new commits
- 1 finding.
- Not found in repo/docs: any replay/backfill path for `messages-consumed`, or the `thinking: true -> false` queued-flush fallback described in the PR body.
Testing
- Not run (automation): `bun` not installed in this runner.
- Suggested test: miss a `messages-consumed` SSE event, then refetch/ingest the stored user message with the same `localId` and assert it does not remain `queued`.
HAPI Bot
```typescript
})
if (optimisticStatusByLocalId.size > 0) {
  merged = merged.map((msg) => {
    if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {
```
[MAJOR] This copies the optimistic queued state onto the stored server message, but queued is only cleared by the live messages-consumed SSE handler. Because the SSE stream reconnects without replay/backfill, missing that event leaves the delivered message dimmed forever after refetch/reconnect.
Suggested fix:
```typescript
if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {
  const carried = optimisticStatusByLocalId.get(msg.localId)
  return carried && carried !== 'queued'
    ? { ...msg, status: carried }
    : msg
}
```
Thanks for catching this — the edge case is real, but the suggested fix would undo the PR's intent: dropping the `queued` carry-over on merge would hide the affordance the PR exists to add. The remaining concern is narrower: an in-flight `messages-consumed` ack lost during a disconnect can leave a delivered message dimmed until refresh, which is a visual glitch rather than a correctness issue.

I'd prefer to leave this PR as-is and tackle the replay path separately if it bites users in practice. Listed in Follow-ups.
Motivation

Since v0.16.7 (#422), messages sent mid-turn are persisted on the hub immediately but sit in the CLI's queue until the agent finishes. The UI currently treats "saved on hub" and "consumed by CLI" identically — both render as `sent`.

The two states are behaviorally different. "Saved on hub" means durable; "consumed by CLI" means the agent has started processing it. Future work — floating queued messages above the composer, cancelling them, re-ordering — needs this distinction. This PR adds a minimal visual cue for it.
UX

While the session is thinking, new messages show a clock icon and dimmed text (`queued`). When the CLI drains its queue, it emits `messages-consumed` with the consumed `localIds`, and those messages transition to `sent`. Backward-compatible.

Implementation
- `MessageQueue2` carries `localId` per item; `collectBatch()` reports the consumed ones via `onBatchConsumed(localIds)`.
- `messages-consumed` socket/SSE event: `{ sid, localIds: string[] }`, forwarded unchanged by the hub.
- `markMessagesConsumed` flips only the matching queued messages; `mergeMessages` preserves `queued` across the optimistic → server-echo replacement (otherwise the affordance would vanish within tens of ms).

Follow-ups (separate PRs)
- … (`localId`-based ack).
- … (`MessageQueue2`).
- If `messages-consumed` is dropped mid-disconnect, a delivered message can stay dimmed until refresh. Visual glitch, not correctness. Broader SSE concern.

Related