
Poor man's work stealing queue (30%-60% improvement depending on number of cores)#5

Open
cmlsharp wants to merge 5 commits into jonhoo:main from cmlsharp:atomic_counter

Conversation

Contributor

@cmlsharp cmlsharp commented Dec 1, 2025

This is adapted from my own solution to this challenge. It builds atop #2. I would have preferred to keep the PRs independent, but the efficiency of this change requires newline searching to be fast.

It makes the following changes:

  • Rather than dividing the file into large chunks and assigning each chunk to a thread, the file is divided into smaller 64KB blocks which worker threads "claim" by incrementing a shared AtomicU64. (I have no theoretical justification for 64KB; it is simply what seemed to maximize speed. I suspect smaller values would increase contention on the counter.) Worker threads read a bit more than 64KB so they can always start from the beginning of a line.
  • It eschews memory mapping in favor of raw 64KB reads. I attempted to retain memory mapping, but it seemed strictly slower. All threads open separate file descriptors. I also tried having them share a single file descriptor and use pread (via read_exact_at), but that was marginally slower.

The speedup here seems to depend heavily on the number of cores. On a very powerful machine with 128 logical cores, this yielded a 60% improvement relative to #2. When limiting to 8 cores, it was closer to a 30% improvement on that same machine.

It would be interesting to fully disambiguate how much of this improvement comes from the "AtomicU64" change and how much from the "fixed-size 64KB reads" vs. memory mapping. As mentioned above, I did try a version of this approach that kept memory mapping and it was slower, but I do not remember by how much.

Replace semi_at and next_newline with a simpler (and seemingly
significantly faster) SIMD search.

I define a `find_byte` function which is generic over the number of SIMD
lanes, then select lane counts for the newline and semicolon searches that
minimize total time (on my machine, anyway); these wound up being
significantly smaller than the previous choice of 64.

I suspect this is because newlines and (especially) semicolons are often
found relatively early and using a smaller number of lanes improves
latency (at the hypothetical cost of throughput).

This also removes the need for `memchr` (calling it on the
slow path yields no discernible difference).
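The shape of such a width-generic `find_byte` can be sketched in stable Rust. This is an illustrative stand-in, not the PR's implementation: the actual version presumably uses nightly `std::simd`, where the inner comparison is a single lane-wise equality test; here the chunk width `N` plays the role of the lane count and the compare is a plain scan.

```rust
/// Find the first occurrence of `needle` in `haystack`, scanning in
/// fixed-width chunks of `N` bytes. In a real SIMD version, each chunk
/// would be loaded into a vector register and compared in one instruction;
/// a smaller `N` then means less wasted work when the match comes early.
fn find_byte<const N: usize>(haystack: &[u8], needle: u8) -> Option<usize> {
    let mut chunks = haystack.chunks_exact(N);
    let mut offset = 0;
    for chunk in &mut chunks {
        // Stand-in for a lane-wise `simd_eq` + first-set-lane extraction.
        if let Some(i) = chunk.iter().position(|&b| b == needle) {
            return Some(offset + i);
        }
        offset += N;
    }
    // Handle the tail that doesn't fill a whole chunk.
    chunks
        .remainder()
        .iter()
        .position(|&b| b == needle)
        .map(|i| offset + i)
}

fn main() {
    let line = b"Hamburg;12.0\n";
    assert_eq!(find_byte::<8>(line, b';'), Some(7));
    assert_eq!(find_byte::<8>(line, b'\n'), Some(12));
    assert_eq!(find_byte::<8>(line, b'x'), None);
    println!("ok");
}
```

The const parameter `N` is what lets the two call sites (semicolon vs. newline search) be tuned independently, matching the latency-vs-throughput trade-off described above.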
