
fix: emit content_part.done and populate output_text.done.text per Responses API spec#49

Open
mikemolinet wants to merge 1 commit into KochC:dev from mikemolinet:fix/responses-sse-spec-compliance

Conversation

@mikemolinet

Summary

The /v1/responses streaming handler violates the OpenAI Responses API SSE lifecycle spec in two ways:

  1. Missing event: response.content_part.done is not emitted between response.output_text.done and response.output_item.done. Per https://platform.openai.com/docs/api-reference/responses-streaming the event sequence for a text content part should be: content_part.added -> output_text.delta* -> output_text.done -> content_part.done -> output_item.done.
  2. Empty text field: response.output_text.done is sent with text: "" instead of the accumulated output text. The spec requires the final accumulated content.
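On the wire, the corrected lifecycle for a single text content part looks roughly like this (event names per the Responses API streaming reference linked above; payloads abbreviated and illustrative, not copied from the actual stream):

```text
event: response.content_part.added
data: {"type":"response.content_part.added","part":{"type":"output_text","text":""}}

event: response.output_text.delta
data: {"type":"response.output_text.delta","delta":"Hello"}

event: response.output_text.done
data: {"type":"response.output_text.done","text":"Hello"}

event: response.content_part.done
data: {"type":"response.content_part.done","part":{"type":"output_text","text":"Hello"}}

event: response.output_item.done
data: {"type":"response.output_item.done"}
```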

Accumulate the delta text at the streaming call site, emit content_part.done with the accumulated text, and populate output_text.done.text correctly.
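A minimal sketch of the completion-time logic described above. Names like `makeCompletionEvents`, `accumulatedText`, and `partIndex` mirror this description, not the actual index.js code; only the event ordering and the `partIndex > 0` gate are taken from the PR:

```javascript
// Sketch (assumed names, not the real handler): build the terminal SSE events
// for one text content part once the upstream stream completes.
function makeCompletionEvents(accumulatedText, partIndex) {
  const events = [];

  // output_text.done now carries the accumulated text instead of "".
  events.push({
    type: 'response.output_text.done',
    text: accumulatedText,
  });

  // content_part.done is only emitted when at least one delta arrived,
  // keeping the content_part.added/done lifecycle symmetric.
  if (partIndex > 0) {
    events.push({
      type: 'response.content_part.done',
      part: { type: 'output_text', text: accumulatedText },
    });
  }

  events.push({ type: 'response.output_item.done' });
  return events;
}
```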

Closes #48

Changes

  • index.js: in the /v1/responses streaming handler, accumulate delta tokens in a local accumulatedText variable via accumulatedText += delta in the onChunk callback. On stream completion: (a) set response.output_text.done.text to accumulatedText (was ""), (b) emit a new response.content_part.done SSE event between output_text.done and output_item.done, with part.text set to the accumulated content. The new event is gated on partIndex > 0 (i.e. at least one delta arrived), keeping the content-part added/done lifecycle symmetric.
  • index.test.js: add a local parseSseStream helper and a regression test asserting the new event, the accumulated-text content on both output_text.done and content_part.done, and the event ordering.
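The test helper could look something like the following sketch (the real parseSseStream in index.test.js may differ; this only assumes the standard `event:`/`data:` line format with blank-line-separated records):

```javascript
// Hypothetical parseSseStream: split a raw text/event-stream body into
// { event, data } records, JSON-parsing each data payload when possible.
function parseSseStream(raw) {
  return raw
    .split('\n\n')
    .filter((block) => block.trim().length > 0)
    .map((block) => {
      const record = {};
      for (const line of block.split('\n')) {
        if (line.startsWith('event: ')) {
          record.event = line.slice('event: '.length);
        } else if (line.startsWith('data: ')) {
          const payload = line.slice('data: '.length);
          try {
            record.data = JSON.parse(payload);
          } catch {
            record.data = payload; // keep non-JSON payloads verbatim
          }
        }
      }
      return record;
    });
}
```

With records in hand, the regression test can assert presence and ordering by comparing the indices of the three terminal events.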

Testing

  • npm test — 113 passed (112 existing + 1 new)
  • npm run lint — clean

Notes

OpenAI Responses API streaming spec reference: https://platform.openai.com/docs/api-reference/responses-streaming.

fix: emit content_part.done and populate output_text.done.text per Responses API spec

The /v1/responses streaming handler violates the OpenAI Responses API
SSE lifecycle spec in two ways:

1. response.content_part.done is never emitted. Per the spec
   (https://platform.openai.com/docs/api-reference/responses-streaming),
   the event sequence for a text content part should be:
     content_part.added -> output_text.delta* -> output_text.done
     -> content_part.done -> output_item.done

2. response.output_text.done is emitted with text: "" instead of the
   accumulated output text. The spec requires the final content.

Accumulate delta tokens in a local variable at the streaming call
site, emit the missing response.content_part.done event with the
accumulated text in part.text, and populate output_text.done.text
with the same accumulated content. Gate the new content_part.done
event on at least one delta having been received, keeping the
content-part added/done lifecycle symmetric.

Adds one regression test in index.test.js that asserts:
- output_text.done.text equals the accumulated deltas
- content_part.done event is present with part.text populated
- correct ordering (output_text.done < content_part.done < output_item.done)

Closes KochC#48
