
Send empty message right after first token generation (continuous batching)#4020

Open
dkalinowski wants to merge 3 commits into main from ttft2

Conversation

@dkalinowski
Collaborator

🛠 Summary

CVS-181341
CVS-177373

Copilot AI review requested due to automatic review settings February 26, 2026 10:35
dkalinowski added the "WIP Do not merge until resolved" label on Feb 26, 2026
Contributor

Copilot AI left a comment

Pull request overview

This PR implements support for sending an empty control message immediately after the first token generation in continuous batching scenarios. This addresses the case where the first token generation iteration produces no visible text output: the client still receives an early signal that generation has started, and the chunk carries the assistant role as required by the OpenAI streaming specification.

Changes:

  • Added loopIteration counter to track streaming iterations in GenAiServableExecutionContext
  • Implemented logic to send a control chunk when the first iteration produces empty text
  • Added serializeStreamingFirstTokenControlChunk() method to create properly formatted first-chunk responses for both chat completions and completions endpoints
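
For illustration, here is a minimal RapidJSON sketch of how such an initial chunk could be assembled. The helper name and field selection are assumptions for this example; the PR's actual implementation is serializeStreamingFirstTokenControlChunk() in openai_completions.cpp, and fields such as id and created are omitted here.

#include <string>
#include <rapidjson/document.h>
#include <rapidjson/stringbuffer.h>
#include <rapidjson/writer.h>

// Hypothetical sketch: build the initial streaming chunk that carries the
// assistant role with null content, as the OpenAI streaming spec expects.
std::string buildFirstChunkSketch(const std::string& model) {
    rapidjson::Document doc;
    doc.SetObject();
    auto& allocator = doc.GetAllocator();

    doc.AddMember("object", "chat.completion.chunk", allocator);
    doc.AddMember("model", rapidjson::Value(model.c_str(), allocator), allocator);

    rapidjson::Value delta(rapidjson::kObjectType);
    delta.AddMember("role", "assistant", allocator);
    delta.AddMember("content", rapidjson::Value(rapidjson::kNullType), allocator);

    rapidjson::Value choice(rapidjson::kObjectType);
    choice.AddMember("index", 0, allocator);
    choice.AddMember("delta", delta, allocator);
    choice.AddMember("finish_reason", rapidjson::Value(rapidjson::kNullType), allocator);

    rapidjson::Value choices(rapidjson::kArrayType);
    choices.PushBack(choice, allocator);
    doc.AddMember("choices", choices, allocator);

    rapidjson::StringBuffer buffer;
    rapidjson::Writer<rapidjson::StringBuffer> writer(buffer);
    doc.Accept(writer);
    return buffer.GetString();
}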

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.

Files reviewed:

  • src/llm/servable.hpp: Adds the loopIteration field to track which streaming iteration is currently being processed
  • src/llm/servable.cpp: Implements the logic to send a control chunk on the first iteration when text is empty, and increments the loop counter
  • src/llm/apis/openai_completions.hpp: Declares the new method for serializing the first token control chunk
  • src/llm/apis/openai_completions.cpp: Implements serialization of the control chunk with a role field and null content for OpenAI spec compliance

dkalinowski removed the "WIP Do not merge until resolved" label on Mar 2, 2026
return buffer.GetString();
}

std::string OpenAIChatCompletionsHandler::serializeStreamingFirstTokenControlChunk() {
Collaborator

I don't understand the context of FirstTokenControlChunk. What is the "control" aspect here?

choice.SetObject();

choice.AddMember("index", 0, allocator);
if (endpoint == Endpoint::CHAT_COMPLETIONS) {
Collaborator

I think we could document this behavior, maybe in the API reference, so it's clear that we send that empty response (and only for CB pipelines, right?)
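
As a sketch of what such documentation could show, the streamed sequence for a continuous-batching chat request might look roughly like this on the wire (illustrative, abbreviated payloads, not actual server output):

// Illustrative SSE stream with the initial empty control chunk sent first:
//
// data: {"object":"chat.completion.chunk","choices":[{"index":0,"delta":{"role":"assistant","content":null},"finish_reason":null}], ...}
// data: {"object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}], ...}
// data: {"object":"chat.completion.chunk","choices":[{"index":0,"delta":{},"finish_reason":"stop"}], ...}
// data: [DONE]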

std::shared_ptr<ov::genai::TextStreamer> textStreamer;
bool sendLoopbackSignal = false;
std::string lastStreamerCallbackOutput;
size_t loopIteration = 0;
Collaborator

This name does not explain the purpose to me. Also, couldn't this be a bool like decodingPhase? Or even an enum like RequestProcessingPhase.prefill / RequestProcessingPhase.decode, starting with prefill and switching to decode after the first read finishes.
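
A minimal sketch of that suggestion (hypothetical names following the comment above; not what the PR implements):

// Hypothetical alternative to a raw counter: model the request lifecycle
// explicitly, starting in prefill and switching to decode after the first read.
enum class RequestProcessingPhase {
    PREFILL,  // first iteration; generated text may still be empty
    DECODE    // subsequent iterations that produce visible text
};

// In the execution context, instead of `size_t loopIteration = 0;`:
// RequestProcessingPhase phase = RequestProcessingPhase::PREFILL;
//
// Streaming loop (sketch):
// if (phase == RequestProcessingPhase::PREFILL) {
//     if (text.empty())
//         sendFirstTokenControlChunk();  // hypothetical helper
//     phase = RequestProcessingPhase::DECODE;
// }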


// Reusable helper: asserts that a streaming chat completion chunk is the
// initial empty message with role:assistant and content:null.
inline void assertInitialStreamChatCompletionChunk(const std::string& response, const std::string& expectedModel) {
Collaborator

How about a test for the completions endpoint?
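
For comparison, a completions-endpoint counterpart could look like the sketch below. It assumes GTest and RapidJSON, and the expected shape of the initial completions chunk (a null or empty "text" field) is an assumption rather than something this PR spells out.

#include <string>
#include <gtest/gtest.h>
#include <rapidjson/document.h>

// Hypothetical helper mirroring assertInitialStreamChatCompletionChunk for the
// completions endpoint; the asserted chunk shape is assumed, not confirmed.
inline void assertInitialStreamCompletionChunk(const std::string& response, const std::string& expectedModel) {
    rapidjson::Document doc;
    ASSERT_FALSE(doc.Parse(response.c_str()).HasParseError());
    ASSERT_TRUE(doc.HasMember("model") && doc["model"].IsString());
    EXPECT_EQ(std::string(doc["model"].GetString()), expectedModel);
    ASSERT_TRUE(doc.HasMember("choices") && doc["choices"].IsArray());
    const rapidjson::Value& choice = doc["choices"][0];
    ASSERT_TRUE(choice.HasMember("index") && choice["index"].IsInt());
    EXPECT_EQ(choice["index"].GetInt(), 0);
    // Assumed: the initial completions chunk carries null or empty "text".
    ASSERT_TRUE(choice.HasMember("text"));
    EXPECT_TRUE(choice["text"].IsNull() ||
                (choice["text"].IsString() && choice["text"].GetStringLength() == 0));
}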

assertInitialStreamChatCompletionChunk(response, params.modelName);
return;
}
replyCounter++;
Collaborator

not needed?

