feat(web): show queued status for messages pending inference#492

Open
junmo-kim wants to merge 6 commits into tiann:main from junmo-kim:feat/messages-consumed

Conversation

@junmo-kim
Contributor

@junmo-kim junmo-kim commented Apr 19, 2026

Motivation

Since v0.16.7 (#422), messages sent mid-turn are persisted on the hub immediately but sit in the CLI's queue until the agent finishes. The UI currently treats "saved on hub" and "consumed by CLI" identically — both render as sent.

The two states are behaviorally different. "Saved on hub" means durable; "consumed by CLI" means the agent has started processing it. Future work — floating queued messages above the composer, cancelling them, re-ordering — needs this distinction. This PR adds a minimal visual cue for it.

UX

While the session is thinking, new messages show a clock icon and dimmed text (queued). When the CLI drains its queue, it emits messages-consumed with the consumed localIds, and those messages transition to sent. Backward-compatible.

Implementation

  • MessageQueue2 carries localId per item; collectBatch() reports the consumed ones via onBatchConsumed(localIds).
  • messages-consumed socket/SSE event: { sid, localIds: string[] }, forwarded unchanged by the hub.
  • Web markMessagesConsumed flips only the matching queued messages; mergeMessages preserves queued across the optimistic → server-echo replacement (otherwise the affordance would vanish within tens of ms).
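The web-side transition described above can be sketched as follows. This is a minimal illustration, not the actual message-window-store implementation: the `sessions` map, `Message` shape, and function name stand in for whatever the real store uses; only the localId-matching behavior reflects the PR.

```typescript
// Illustrative store: sessionId -> messages. The real store shape differs.
type MessageStatus = 'queued' | 'sending' | 'sent'

interface Message {
    localId: string
    text: string
    status: MessageStatus
}

const sessions = new Map<string, Message[]>()

// Flip only the queued messages whose localIds the CLI acked.
// Messages queued after the acked batch keep their 'queued' status.
function markMessagesConsumed(sessionId: string, localIds: string[]): void {
    const acked = new Set(localIds)
    for (const msg of sessions.get(sessionId) ?? []) {
        if (msg.status === 'queued' && acked.has(msg.localId)) {
            msg.status = 'sent'
        }
    }
}
```

The key design point is that the ack is keyed by identity (localId), not by time or by session-wide state, which is what the later review rounds converge on.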

Follow-ups (separate PRs)

  • Floating UI for queued messages (relies on localId-based ack).
  • Cancel queued messages (adds a cancel event on MessageQueue2).
  • SSE replay/backfill — if messages-consumed is dropped mid-disconnect a delivered message can stay dimmed until refresh. Visual glitch, not correctness. Broader SSE concern.

Related

Add onBatchConsumed callback to MessageQueue2 that fires after
collectBatch() returns messages. Wire it through sessionBase →
apiSession → Hub socket handler → SSE so the web client can
distinguish queued vs consumed messages.

New socket event 'messages-consumed' follows the session-alive
pattern (sid validation, access check, onWebappEvent relay).
Fully backward-compatible: old hubs ignore unknown events, old
CLIs never set the optional callback.

When the session is thinking (agent responding), new messages are
now displayed with a clock icon and 'queued' status instead of
'sending'. Once the CLI drains its queue and emits the
messages-consumed event (added in the previous commit), all queued
messages transition to 'sent'.

- Add 'queued' to MessageStatus type
- useSendMessage: accept isSessionThinking option, set initial
  optimistic status accordingly via createQueuedMessage()
- useSSE: handle messages-consumed sync event via
  flushQueuedMessages() — both functions are extension points
  for a future floating-UI PR
- MessageStatusIndicator: add QueuedIcon (clock SVG)
@junmo-kim junmo-kim marked this pull request as ready for review April 19, 2026 07:10

@github-actions github-actions bot left a comment


Findings

  • [Major] Queued status can flip based on later session state instead of send-time state — onSuccess re-reads isSessionThinkingRef.current instead of using the status chosen in onMutate, so a message queued while the agent was thinking can become sent before the CLI actually consumes it if session.thinking turns false before the POST resolves. Evidence web/src/hooks/mutations/useSendMessage.ts:89, web/src/hooks/mutations/useSendMessage.ts:96.
    Suggested fix:
    type SendMessageInput = {
        sessionId: string
        text: string
        localId: string
        createdAt: number
        attachments?: AttachmentMetadata[]
        optimisticStatus: 'queued' | 'sending'
    }
    
    const optimisticStatus = isSessionThinkingRef.current ? 'queued' : 'sending'
    mutation.mutate({ sessionId: targetSessionId, text, localId, createdAt, attachments, optimisticStatus })
    
    onMutate: async (input) => {
        appendOptimisticMessage(input.sessionId, createOptimisticMessage(input, input.optimisticStatus))
    }
    
    onSuccess: (_, input) => {
        updateMessageStatus(input.sessionId, input.localId, input.optimisticStatus === 'queued' ? 'queued' : 'sent')
    }
  • [Major] messages-consumed can mark unrelated queued messages as sent — the new event only carries sid, and flushQueuedStatuses() converts every queued message in that session to sent. If another message is queued after the CLI has already drained one batch but before that SSE event reaches the browser, the later message is still waiting in the CLI queue but will be marked sent anyway. The same over-broad flush also happens on the thinking:false fallback. Evidence shared/src/socket.ts:148, web/src/hooks/useSSE.ts:507, web/src/lib/message-window-store.ts:544.
    Suggested fix:
    // cli -> hub -> web
    'messages-consumed': (data: { sid: string; localIds: string[] }) => void
    
    export function markMessagesConsumed(sessionId: string, localIds: string[]): void {
        for (const localId of localIds) {
            updateMessageStatus(sessionId, localId, 'sent')
        }
    }

Summary

  • Review mode: initial
  • 2 findings. Main risk: the new queued affordance can disappear too early or show sent for messages that are still waiting in the CLI queue.
  • Targeted coverage for these async ordering cases was not found in web/src/hooks/mutations/useSendMessage.test.tsx.

Testing

  • Not run (automation)
  • Suggested tests: cover thinking=true at enqueue time with thinking=false before mutation success, and cover two queued messages where the first batch is consumed before the second is.

HAPI Bot

Comment thread web/src/hooks/mutations/useSendMessage.ts Outdated
Comment thread shared/src/socket.ts Outdated
…tion

Address review feedback: messages-consumed event now carries a
consumedAt timestamp so the web only flushes messages created before
that time. Prevents a race where a newly queued message could be
marked sent before the CLI actually processes it.

Also captures optimistic status in mutation context so onSuccess
uses the send-time thinking state instead of re-reading the ref.

@github-actions github-actions bot left a comment


Findings

  • [Major] consumedAt ordering is based on different machine clocks, so the queued-state flush can still acknowledge the wrong messages. The optimistic message timestamp comes from the browser (web/src/hooks/mutations/useSendMessage.ts:124), but the cutoff now comes from the CLI host (cli/src/utils/MessageQueue2.ts:279). In HAPI's remote/PWA flow those clocks are independent, so skew can make a later queued message look older than the consumed batch, or leave an already-consumed message stuck queued. Evidence web/src/lib/message-window-store.ts:554.
    Suggested fix:
    // Carry consumed message identities instead of comparing wall-clock timestamps.
    'messages-consumed': (data: { sid: string; localIds: string[] }) => void
    
    export function markMessagesConsumed(sessionId: string, localIds: string[]): void {
        for (const localId of localIds) {
            updateMessageStatus(sessionId, localId, 'sent')
        }
    }

Summary

  • Review mode: follow-up after new commits
  • 1 remaining finding. The previous send-time status regression is fixed, but the current consumedAt cutoff still depends on browser time vs CLI time and is unreliable for remote clients.
  • Not found in repo/docs: a test covering skewed browser/CLI clocks or a queued-after-batch case with clock drift.

Testing

  • Not run (automation)
  • Suggested tests: simulate browser createdAt ahead of and behind the CLI clock; cover two queued messages where only the first batch is consumed and the second stays queued under clock skew.

HAPI Bot

Comment thread web/src/lib/message-window-store.ts Outdated
The previous consumedAt timestamp approach compared browser-set createdAt
against a CLI-host timestamp. With remote/PWA clients these clocks drift
independently, so queued messages could be marked sent too early or stay
queued after being consumed.

Instead, carry the localId from the incoming hub 'update' event down
through the message queue, and return the consumed localIds from
collectBatch. The 'messages-consumed' socket event now ships those exact
localIds to the hub/web, and the web store transitions only matching
queued messages to sent. The thinking→idle fallback still performs a
broad flush as a last-resort safety net.
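The batch/ack contract from this commit can be sketched as below. The class name, `batchKey` field, and callback wiring are illustrative simplifications: MessageQueue2's real batching rules (mode hashes, isolated commands) are collapsed into a single key here, and only the "report exactly the dequeued localIds" behavior mirrors the PR.

```typescript
interface QueuedItem {
    localId: string
    text: string
    batchKey: string // stands in for mode hash / isolated-command rules
}

// Simplified stand-in for MessageQueue2's collectBatch/onBatchConsumed contract.
class SketchQueue {
    private items: QueuedItem[] = []

    constructor(private onBatchConsumed?: (localIds: string[]) => void) {}

    push(item: QueuedItem): void {
        this.items.push(item)
    }

    // Dequeue the longest prefix sharing one batchKey, then report
    // exactly those localIds. Later batches stay queued until their
    // own collectBatch call, so the ack never overshoots.
    collectBatch(): QueuedItem[] {
        if (this.items.length === 0) return []
        const key = this.items[0].batchKey
        const batch: QueuedItem[] = []
        while (this.items.length > 0 && this.items[0].batchKey === key) {
            batch.push(this.items.shift()!)
        }
        this.onBatchConsumed?.(batch.map((i) => i.localId))
        return batch
    }
}
```

Because the ack carries identities rather than a timestamp, it is immune to the browser/CLI clock-skew problem the previous consumedAt approach had.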

@github-actions github-actions bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Findings

  • [Major] thinking:false still acknowledges messages that have not been consumed yet. MessageQueue2 only reports localIds for the batch it actually dequeues, but the web fallback still converts every remaining queued message to sent on any idle transition. When queued messages split across batches, for example after a permission/model change or an isolated command, later messages stay in the CLI queue while the UI shows them as already consumed. Evidence web/src/hooks/useSSE.ts:518, web/src/hooks/useSSE.ts:526, web/src/lib/message-window-store.ts:568, cli/src/utils/MessageQueue2.ts:266.

Summary

  • Review mode: follow-up after new commits
  • 1 finding. The localId-based ack fixes the earlier cross-clock bug, but the thinking:false fallback still over-acknowledges later queued batches.
  • Not found in repo/docs: a test covering multiple queued batches with different mode hashes or isolated commands across a thinking:true -> false -> true transition.

Testing

  • Not run (automation): bun is not installed in this runner.
  • Suggested tests: queue two user messages that land in different MessageQueue2 batches, emit messages-consumed for only the first batch, then assert the second message remains queued after the intermediate thinking:false update.

HAPI Bot

Comment thread web/src/lib/message-window-store.ts Outdated
The thinking:false fallback converted every queued message in a session
to sent, which overshot when a session had multiple batches (different
mode hashes, isolated commands) and the agent briefly went idle between
turns. Later messages that were still waiting in the CLI queue ended up
marked as sent prematurely.

With the localId-based messages-consumed ack, the fallback is no longer
necessary for the happy path. Drop it and rely on ack as the sole
transition path. If an ack is lost (e.g. CLI crash) the message stays
queued, which accurately reflects that it was never processed.
Merge logic already preserves the optimistic status on backfill, so a
page refresh keeps the queued affordance instead of hiding it.

@github-actions github-actions bot left a comment

Findings

  • [Major] Queued status can stick forever when the messages-consumed SSE event is missed during reconnect or stream loss. This diff now copies the optimistic queued state onto the persisted server message, but the only code path that clears queued is the live SSE handler. The SSE route does not replay past events, so if that ack is missed the real stored message stays dimmed indefinitely. Evidence web/src/lib/messages.ts:63, web/src/hooks/useSSE.ts:500, hub/src/web/routes/events.ts:80.
    Suggested fix:
    // Only carry over durable statuses here. `queued` depends on a live ack,
    // and this SSE stream has no replay/backfill for missed events.
    if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {
        const carried = optimisticStatusByLocalId.get(msg.localId)
        return carried && carried !== 'queued'
            ? { ...msg, status: carried }
            : msg
    }

Summary

  • Review mode: follow-up after new commits
  • 1 finding. The over-ack fallback from the prior bot review is gone, but the new status-preservation logic can now leave delivered messages stuck in the queued state after a missed SSE ack.
  • Not found in repo/docs: any SSE replay or backfill path for messages-consumed.

Testing

  • Not run (automation): bun is not installed in this runner.
  • Suggested tests: drop the messages-consumed event, then ingest/refetch the stored user message with the same localId and assert the UI does not remain queued forever after reconnect.

HAPI Bot

Comment thread web/src/lib/messages.ts
    })
    if (optimisticStatusByLocalId.size > 0) {
        merged = merged.map((msg) => {
            if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {


[MAJOR] This now copies the optimistic queued state onto the persisted server message, but queued is only cleared by the live messages-consumed SSE handler. Because the SSE stream has reconnects and no replay/backfill, missing that event leaves the real message dimmed forever after refetch/reconnect.

Suggested fix:

if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {
    const carried = optimisticStatusByLocalId.get(msg.localId)
    return carried && carried !== 'queued'
        ? { ...msg, status: carried }
        : msg
}

Verifies that when two queued messages have different mode hashes, only
the first batch's localId is reported on ack and the second message
remains in the queue until its own batch is consumed.

@github-actions github-actions bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Findings

  • [Major] queued can persist indefinitely after reconnect/refresh because this merge path copies the optimistic queued state onto the stored server message, but only the live messages-consumed SSE event clears it. The SSE stream reconnects without replay/backfill, so missing that event leaves delivered messages dimmed forever. Evidence web/src/lib/messages.ts:79, context web/src/hooks/useSSE.ts:500, hub/src/web/routes/events.ts:80.
    Suggested fix:
    if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {
        const carried = optimisticStatusByLocalId.get(msg.localId)
        return carried && carried !== 'queued'
            ? { ...msg, status: carried }
            : msg
    }

Summary

  • Review mode: follow-up after new commits
  • 1 finding.
  • Not found in repo/docs: any replay/backfill path for messages-consumed, or the thinking: true -> false queued-flush fallback described in the PR body.

Testing

  • Not run (automation): bun not installed in this runner.
  • Suggested test: miss a messages-consumed SSE event, then refetch/ingest the stored user message with the same localId and assert it does not remain queued.

HAPI Bot

Comment thread web/src/lib/messages.ts
    })
    if (optimisticStatusByLocalId.size > 0) {
        merged = merged.map((msg) => {
            if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {


[MAJOR] This copies the optimistic queued state onto the stored server message, but queued is only cleared by the live messages-consumed SSE handler. Because the SSE stream reconnects without replay/backfill, missing that event leaves the delivered message dimmed forever after refetch/reconnect.

Suggested fix:

if (msg.localId && optimisticStatusByLocalId.has(msg.localId) && !msg.status) {
    const carried = optimisticStatusByLocalId.get(msg.localId)
    return carried && carried !== 'queued'
        ? { ...msg, status: carried }
        : msg
}

@junmo-kim
Contributor Author

Thanks for catching this — the edge case is real, but the suggested fix would undo the PR's intent.

The mergeMessages carry-over of queued onto the stored server message was added deliberately. Without it the affordance disappears within tens of ms (as soon as the hub echoes the message back), long before the CLI actually picks it up. Distinguishing "saved on hub" from "consumed by CLI" is the whole point of the PR — see the updated description.

The remaining concern is narrower: an in-flight messages-consumed SSE being dropped across a disconnect. Options I considered:

  1. Broad thinking:false fallback — removed in 65dbf69 because it over-acks later batches (your earlier review).
  2. SSE replay/backfill for messages-consumed — larger scope, server-side work.
  3. Accept the trade-off: on a missed ack the delivered message stays dimmed, and a refresh keeps it that way (same merge logic). Visual glitch, not correctness — the message was processed.

I'd prefer to leave this PR as-is and tackle the replay path separately if it bites users in practice. Listed in Follow-ups.
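The deliberate mergeMessages carry-over the reply defends can be sketched as below. The function name and `Msg` shape are illustrative, not the real mergeMessages signature; the point is that the server echo arrives without a status, so the optimistic status keyed by localId is copied onto it instead of being discarded.

```typescript
interface Msg {
    localId?: string
    text: string
    status?: 'queued' | 'sending' | 'sent'
}

// When the hub echoes a stored message back, preserve the optimistic
// status (including 'queued') so the affordance survives the
// optimistic -> server-echo replacement instead of vanishing.
function mergeWithOptimisticStatus(server: Msg[], optimistic: Msg[]): Msg[] {
    const statusByLocalId = new Map<string, Msg['status']>()
    for (const m of optimistic) {
        if (m.localId && m.status) statusByLocalId.set(m.localId, m.status)
    }
    return server.map((m) =>
        m.localId && !m.status && statusByLocalId.has(m.localId)
            ? { ...m, status: statusByLocalId.get(m.localId) }
            : m
    )
}
```

The bot's suggested fix would special-case `queued` out of this carry-over; the author's position is that dropping it would collapse "saved on hub" and "consumed by CLI" back into one state, defeating the PR.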
