
Spawn bidi transport send so request initiates before first message()#4

Merged
iainmcgin merged 2 commits into main from fix/bidi-spawn-transport-send
Mar 17, 2026
Conversation

@iainmcgin
Collaborator

Fixes #2.

call_bidi_stream previously stored the transport.send() future unpolled in RecvState::Pending, awaiting it lazily on the first message() call. For SharedHttp2Connection, that future contains the actual poll_ready -> connect -> handshake -> stream-body work, so nothing happened until the caller started reading. The half-duplex pattern (send all, close_send, then read) would buffer into the 32-deep ChannelBody mpsc with nobody draining it and deadlock on the 33rd send.

Changes

  • RecvState::Pending holds JoinHandle<Result<Response<B>, ConnectError>> instead of BoxFuture
  • call_bidi_stream spawns the send future instead of boxing it
  • message() awaits the handle; JoinError (task panic) maps to ConnectError::internal
  • Doc comments updated to explain the spawn and the deadline-drop-detach cleanup path
  • Integration test in tests/streaming: 40 sends half-duplex over SharedHttp2Connection, timeout-wrapped so a regression fails fast rather than hanging the suite

Preserved behaviour

  • The lazy HEADERS await is unchanged - servers that wait for the first request message before sending HEADERS still work (the original reason for not awaiting eagerly)
  • No API surface change; internal plumbing only

Cleanup on drop

Dropping the BidiStream before message() detaches the spawned task. The dropped tx closes the body stream, which ends the request, which completes the task naturally - so no explicit abort is needed. If the task was mid-connect/handshake it completes that work before discovering an empty body; wasteful but not leaky.

call_bidi_stream previously stored the transport.send() future unpolled
in RecvState::Pending, awaiting it lazily on the first message() call.
For transports with a background driver (HttpClient -> hyper pool) this
is fine. For SharedHttp2Connection, the send() future IS the work: it
contains poll_ready -> connect -> handshake -> start streaming the
ChannelBody. Nothing happens until that future is polled.

So the half-duplex pattern (send all, close_send, then read) would
buffer sends into the 32-deep ChannelBody mpsc with nobody draining it.
Send #33 deadlocks. Even under 32 sends the behaviour is wrong: the
server receives nothing until the client starts reading.

Fix: tokio::spawn the send future so the request initiates immediately
and ChannelBody gets polled as sends happen. RecvState::Pending holds
the JoinHandle; message() awaits it as before. Preserves the original
lazy-HEADERS-await behaviour (servers that wait for the first request
before sending HEADERS still work).

Dropping the handle on deadline timeout or BidiStream drop detaches the
task, but the dropped tx closes the body stream, which ends the
request, which completes the task naturally. No explicit abort needed.

Integration test in tests/streaming sends 40 messages half-duplex over
SharedHttp2Connection; timeout-wrapped so a regression fails fast.

Fixes #2
@iainmcgin iainmcgin force-pushed the fix/bidi-spawn-transport-send branch from 46448ba to 286f99f on March 17, 2026 17:11
@iainmcgin iainmcgin requested a review from rpb-ant March 17, 2026 17:16
@iainmcgin iainmcgin marked this pull request as ready for review March 17, 2026 17:16
@iainmcgin iainmcgin merged commit f585a52 into main Mar 17, 2026
9 checks passed
@github-actions github-actions bot locked and limited conversation to collaborators Mar 17, 2026
@iainmcgin iainmcgin deleted the fix/bidi-spawn-transport-send branch March 17, 2026 17:53


Development

Successfully merging this pull request may close these issues.

BidiStream half-duplex pattern deadlocks after 32 sends on SharedHttp2Connection
