
feat: default flashblocks.end-buffer-ms to 100 #467

Open

noot wants to merge 2 commits into main from claude-default-end-buffer-100

Conversation

Contributor

@noot noot commented Apr 17, 2026

📝 Summary

The flashblock scheduler generates target_flashblocks = block_time / interval trigger times, with the last one firing at slot_end - end_buffer_ms. With the default end_buffer_ms=0, the final flashblock fires at the slot deadline itself and almost always fails to publish before getPayload — so the scheduler allocates a slot that is silently dropped, leaving N-1 usable flashblocks per block instead of N.
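For context, the trigger-time arithmetic described above can be sketched as follows (the function and parameter names here are illustrative, not the actual op-rbuilder API):

```rust
// Hypothetical sketch of how the scheduler derives flashblock trigger
// offsets within a slot. Triggers are spaced `interval_ms` apart, and
// the final trigger is pulled forward by `end_buffer_ms` so the last
// flashblock can publish before getPayload fires at the slot deadline.
fn trigger_offsets_ms(block_time_ms: u64, interval_ms: u64, end_buffer_ms: u64) -> Vec<u64> {
    let target_flashblocks = block_time_ms / interval_ms;
    (1..=target_flashblocks)
        .map(|i| {
            let t = i * interval_ms;
            if i == target_flashblocks {
                // With end_buffer_ms = 0 this is exactly the slot deadline,
                // which is why the last flashblock was silently dropped.
                t.saturating_sub(end_buffer_ms)
            } else {
                t
            }
        })
        .collect()
}

fn main() {
    // 2s block, 250ms interval, 100ms end buffer: 8 triggers, the
    // last at 1900ms instead of the 2000ms slot deadline.
    let offsets = trigger_offsets_ms(2000, 250, 100);
    assert_eq!(offsets.len(), 8);
    assert_eq!(*offsets.last().unwrap(), 1900);
    println!("{offsets:?}");
}
```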

Benchmarked on a 2s chain block with the default 250ms interval (8 flashblocks/block), 100 agents at 100M gas for 120s:

| end_buffer_ms | p95 backrun_sent→landed | idx=8 utilized | landed/120s |
|---------------|-------------------------|----------------|-------------|
| 0             | 482 ms                  | no             | 34,686      |
| 50            | 482 ms                  | no             | 35,976      |
| 100           | 331 ms (-31%)           | yes            | 40,936      |
| 150           | 381 ms                  | yes            | 39,393      |

50ms is not enough buffer for the build/publish; 100ms is the smallest value in {50,100,150} that lets idx=8 publish before the slot deadline. Larger values (150ms+) don't help because the extra buffer becomes a larger end-of-block dead zone that forces more bundles into the next block.

100ms of buffer costs ~5% of the final slot's bundle-intake window (100ms of 2000ms) in exchange for the entire slot being usable. Net effect: 31% lower p95 landing latency and 18% higher per-block throughput.
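The headline percentages follow directly from the table; a quick arithmetic check:

```rust
fn main() {
    // p95 landing latency: 482 ms at end_buffer_ms=0 vs 331 ms at 100.
    let latency_drop = (482.0_f64 - 331.0) / 482.0 * 100.0;
    assert!((latency_drop - 31.3).abs() < 0.1); // ~31% lower p95

    // landed/120s: 34,686 at end_buffer_ms=0 vs 40,936 at 100.
    let throughput_gain = (40_936.0_f64 - 34_686.0) / 34_686.0 * 100.0;
    assert!((throughput_gain - 18.0).abs() < 0.1); // ~18% higher throughput

    // 100 ms of buffer out of a 2000 ms slot is 5% of the intake window.
    assert!((100.0_f64 / 2000.0 * 100.0 - 5.0).abs() < f64::EPSILON);

    println!("latency drop {latency_drop:.1}%, throughput gain {throughput_gain:.1}%");
}
```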

💡 Motivation and Context

Improves signal-boost backrun tail latencies.

✅ I have completed the following steps:

  • Run make lint
  • Run make test
  • Added tests (if applicable)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@noot noot requested review from SozinM and avalonche as code owners April 17, 2026 22:04
Copilot AI review requested due to automatic review settings April 17, 2026 22:04
Contributor

Copilot AI left a comment


Pull request overview

Updates the default flashblocks “end buffer” so the final scheduled flashblock is built/published slightly before slot end, avoiding the last slot being effectively unusable under typical timing constraints.

Changes:

  • Change FlashblocksConfig default end_buffer_ms from 0 to 100.
  • Change CLI flag --flashblocks.end-buffer-ms default from 0 to 100 and expand its help text to explain the rationale.

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

| File | Description |
|------|-------------|
| crates/op-rbuilder/src/builder/config.rs | Updates the runtime default flashblocks config to use a 100ms end buffer. |
| crates/op-rbuilder/src/args/op.rs | Updates the CLI/env default for the end buffer to 100ms and documents the behavior. |


Comment on lines 207 to 212

```diff
 #[arg(
     long = "flashblocks.end-buffer-ms",
     env = "FLASHBLOCK_END_BUFFER_MS",
-    default_value = "0"
+    default_value = "100"
 )]
 pub flashblocks_end_buffer_ms: u64,
```

Copilot AI Apr 17, 2026


This changes the default behavior of a user-facing CLI/config option (from 0ms to 100ms). Please add a note to the crate changelog/release notes so operators who rely on the prior default know they may need to explicitly set --flashblocks.end-buffer-ms=0 to preserve old behavior.
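Following that suggestion, operators could pin the previous behavior explicitly. This is a sketch, not from the PR: the flag and env var names come from the clap attribute above, but the `op-rbuilder` binary name is an assumption based on the crate path.

```shell
# Restore the pre-change behavior explicitly (binary name assumed):
op-rbuilder --flashblocks.end-buffer-ms=0

# Or via the environment variable from the clap attribute:
FLASHBLOCK_END_BUFFER_MS=0 op-rbuilder
```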

Comment thread crates/op-rbuilder/src/args/op.rs Outdated
Contributor

akundaz commented Apr 17, 2026

This doesn't need to be a default though, right? We don't use 0 either for running the builder, but we don't want to expose what value we're using, since adversaries would then know by how much they'd need to degrade building. Also, with continuous building (#458) I think getPayload will trigger the last flashblock send (as it's already built), so a 0 default is fine then.

Contributor Author

noot commented Apr 19, 2026

@akundaz yeah, if the value used in prod isn't 0 then that should be fine, i'll re-bench with continuous building also then
