subtract out some overheads from the max datagram size#3477

Open
jesup wants to merge 30 commits into users/jesup/WT_DRAIN_SESSION from users/jesup/datagram_max_size

Conversation

Member

@jesup jesup commented Mar 16, 2026

No description provided.

Copilot AI review requested due to automatic review settings March 16, 2026 04:58
Contributor

Copilot AI left a comment


Pull request overview

Adjusts WebTransport’s client-side reported max datagram size to be more conservative by subtracting additional packet-level overhead, and updates WebTransport datagram tests accordingly.

Changes:

  • Subtract a fixed overhead (64 bytes) in Http3Client::webtransport_max_datagram_size() to reserve space for ACK/packet-number growth.
  • Update WebTransport datagram tests to account for the new client-side overhead.
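The subtraction described above can be sketched as a standalone illustration (the constant name, function shape, and `main` driver are assumptions for this sketch, not the actual neqo-http3 code):

```rust
/// Illustrative constant: a worst-case ACK frame is ~50 bytes and the
/// packet-number encoding can grow by up to 3 bytes, giving ~53 bytes of
/// realistic overhead; 64 leaves a small margin.
const DATAGRAM_OVERHEAD: u64 = 64;

/// Sketch of the client-side calculation: subtract the reserved overhead
/// from the transport's reported limit, saturating at zero so a small
/// transport limit cannot underflow.
fn webtransport_max_datagram_size(transport_max: u64) -> u64 {
    transport_max.saturating_sub(DATAGRAM_OVERHEAD)
}

fn main() {
    // A 1200-byte transport limit leaves 1136 bytes for the application.
    assert_eq!(webtransport_max_datagram_size(1200), 1136);
    // A limit below the overhead saturates to 0 instead of wrapping around.
    assert_eq!(webtransport_max_datagram_size(40), 0);
}
```

The saturating form matters because a plain `-` on `u64` would panic (debug) or wrap (release) if the transport limit were ever smaller than the reserved overhead.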

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

File Description
neqo-http3/src/features/extended_connect/tests/webtransport/datagrams.rs Updates expected max datagram sizes to include new client-side overhead.
neqo-http3/src/connection_client.rs Makes webtransport_max_datagram_size() subtract additional overhead using saturating subtraction.

Comment on lines +899 to +903
// Reserve space for ACK frames and packet number growth beyond what is
// seen at session start (when max_datagram_size is sampled). A worst-case
// ACK frame is ~50 bytes (RecvdPackets::USEFUL_ACK_LEN) and packet number
// varints can grow by up to 3 bytes over the life of a connection, giving
// ~53 bytes of realistic overhead; 64 provides a small margin.
Member


I don't understand why ACK size is relevant on the HTTP3 level. Can you expand on that? If a pending ACK plus the current WebTransport datagram don't fit into a single UDP datagram, why don't we send two UDP datagrams?

> what is seen at session start (when max_datagram_size is sampled)

In what sense is the calculation only happening at session start? Below we are calling self.conn.max_datagram_size(), potentially during the session.

Member Author


We sample max_datagram_size() in the DOM WebTransport code at session start and return that to the application when it asks.

ACK size is relevant in that if a datagram doesn't fit because there's an ACK in the buffer, the datagram can end up dropped (though not usually): if it's deferred due to an ACK, it gets pushed back for retry. On retry, if the number of bytes needed for packet-number encoding has changed (say from <128 to >128 unacked packets in flight), the packet will be dropped.

More importantly, if there's significant incoming traffic, there may always be an ACK in the buffer, leading to repeated delays in sending datagrams, perhaps to the point where they end up dropped. Generally we want datagrams to go on the wire as soon as possible if other traffic doesn't have precedence. (People will be using them for audio and video).

We could avoid the first problem with a 3 or 4 byte reduction, but not the second. To reduce the impact of the second, we need to reduce the max size by ~53 bytes or more.
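The packet-number growth discussed above comes from QUIC's variable-length packet-number encoding (RFC 9000, Appendix A): the encoding must span a range of at least twice the number of unacknowledged packets, so crossing 128 unacked packets grows it from 1 to 2 bytes. A standalone sketch of that calculation (not neqo's implementation; the function name is invented for illustration):

```rust
/// Sketch of RFC 9000 Appendix A: how many bytes are needed to encode a
/// packet number when `num_unacked` packets are in flight. The encoding
/// must cover a range of at least 2 * num_unacked, rounded up to whole
/// bytes, within QUIC's 1..=4 byte limit.
fn pn_encoded_len(num_unacked: u64) -> usize {
    let n = num_unacked.max(1);
    // Smallest bit width b with 2^b >= 2 * n.
    let min_bits = 64 - (2 * n - 1).leading_zeros() as usize;
    ((min_bits + 7) / 8).clamp(1, 4)
}

fn main() {
    // Up to 128 unacked packets fit in a 1-byte packet number...
    assert_eq!(pn_encoded_len(128), 1);
    // ...but 129 pushes the encoding to 2 bytes, matching the
    // "<128 to >128 unacked packets in flight" threshold described above.
    assert_eq!(pn_encoded_len(129), 2);
}
```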

Member


I understand the concerns above, though if I understand correctly, they are based on intuition and not real-world experience. Thus I suggest only introducing these optimizations once we have proof that they actually occur.

While this might improve performance by allowing ACKs and datagrams to more likely share the same UDP datagram, it might also reduce performance as it reduces the size of WebTransport datagrams. In addition, it is a layer violation, having neqo-http3 be aware of neqo-transport, i.e. WebTransport aware of QUIC ACKs.

Member Author


Reasonable points. If we have incoming HD video+audio (and maybe multiple incoming streams in a conference), we will have a lot of ACKs to send; this could delay sending datagrams. OTOH, maybe what happens is it doesn't fit, we send the ACK, then we have an empty buffer and send the datagram. We waste some bits by having a higher packet rate (close to 2x at the limit, but probably much less than that), but that may be ok.
IIRC I added this because some tests failed without it; I'll see if I can find them.

Member


When we invest in ACK frequency, the odds that one of those large datagrams will have to share a UDP datagram with an ACK will still be significant, but low enough that we don't need to concern ourselves. As far as I'm aware, we'll still get ample chance to squish smaller-than-MTU datagrams in with ACKs, and even with no packet sharing at all, that overhead is still less than a fixed tax on every datagram frame.

@jesup jesup force-pushed the users/jesup/datagram_max_size branch from 6856f91 to 761c9f7 Compare March 17, 2026 15:08
@jesup jesup force-pushed the users/jesup/datagram_max_size branch from 761c9f7 to d9cc0a9 Compare March 17, 2026 15:17
@larseggert larseggert added the needs-rebase PR needs rebasing before it can be merged. label Apr 8, 2026
@jesup jesup force-pushed the users/jesup/WT_DRAIN_SESSION branch from c7f271e to 68dd8b2 Compare May 1, 2026 21:01
