Add optional append_batch/2 callback for opaque batch accumulation #31
Conversation
Pull request overview
Adds an optional append_batch/2 behaviour callback to gen_batch_server so implementations can control how ops are accumulated into an opaque batch term and optionally trigger early batch completion.
Changes:
- Introduces `append_batch/2` as an optional callback and caches its fun in the server config for hot-path batching.
- Updates batching logic to support opaque accumulators, `init_batch` reset semantics, and early `{finish_batch, Batch}` completion.
- Adds Common Test coverage and README documentation for the new callback.
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| src/gen_batch_server.erl | Adds append_batch/2 support, opaque batch handling, and early-finish control flow. |
| test/gen_batch_server_SUITE.erl | Adds new CT cases covering basic append behavior, finish behavior, reversed_batch interaction, and cast_batch interaction. |
| README.md | Documents append_batch/2, opaque batches for handle_batch/2, and finish semantics. |
dumbbell left a comment:

Small comments; I don’t feel strongly about them. Feel free to ignore them :-)
    -callback handle_batch(batch(), State) ->
        {ok, State} |
The behaviour spec for handle_batch/2 now uses batch(), but the new batch() type excludes valid opaque batches (e.g. a list accumulator like [Val] in the new tests) and suggests {batch, term()} may be passed to callbacks, which doesn’t happen. Updating the callback spec to accept [op()] | term() (or a dedicated opaque_batch() alias) would better match the actual API and README semantics.
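One way to express the suggested widening (a sketch only: `opaque_batch()` is a hypothetical alias, and the return clauses other than `{ok, State}` are reproduced from the existing behaviour as I recall it, not from this diff):

```erlang
%% Hypothetical alias making the opaque-accumulator case explicit:
-type opaque_batch() :: term().

%% Widened spec: the batch is either the default list of ops or
%% whatever term an exported append_batch/2 has accumulated.
-callback handle_batch([op()] | opaque_batch(), State :: term()) ->
    {ok, State :: term()} |
    {ok, [action()], State :: term()} |
    {stop, Reason :: term()}.
```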
    after 5000 ->
        exit(timeout)
    end.
    %% Test for leftover ops handling (not yet implemented)
The comment above append_batch_cast_batch_with_leftover/1 says leftover op handling is “not yet implemented”, but this PR introduces leftover_ops + process_leftover_ops/3 to implement it. Please update or remove the comment so the test file doesn’t contradict the current implementation.
Suggested change:

    - %% Test for leftover ops handling (not yet implemented)
    + %% Test for leftover ops handling
Introduce a new optional behaviour callback `append_batch/2` that gives
the implementation module full control over how operations are accumulated
into a batch. When exported, each incoming op is passed through the callback
instead of being prepended to a list, turning the batch into an opaque term.
The callback receives `init_batch` as the accumulator for the first operation
in a new batch, and the value is reset to `init_batch` after each batch
completes. Returning `{finish_batch, OpaqueBatch}` from the callback completes
the batch immediately, even if the batch size limit has not been reached.
This allows implementations to enforce domain-specific termination conditions
(e.g. total byte size, memory limit, entry count).
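As a sketch of such a termination condition (the `init_batch` and `{finish_batch, Batch}` terms are from this PR; the `{cast, Bin}` op shape, the 1 MB limit, and the `op_size/1` helper are illustrative assumptions, not part of the API):

```erlang
%% Accumulate ops into an opaque {TotalBytes, ReversedOps} pair and
%% finish the batch early once roughly 1 MB has been collected.
append_batch(Op, init_batch) ->
    %% First op of a new batch: start a fresh accumulator.
    {op_size(Op), [Op]};
append_batch(Op, {Bytes, Acc}) ->
    Bytes1 = Bytes + op_size(Op),
    case Bytes1 >= 1024 * 1024 of
        true ->
            %% Domain-specific termination: complete the batch now,
            %% even if the server's batch size limit was not reached.
            {finish_batch, {Bytes1, [Op | Acc]}};
        false ->
            {Bytes1, [Op | Acc]}
    end.

%% Hypothetical helper; assumes ops carry binary payloads.
op_size({cast, Bin}) when is_binary(Bin) -> byte_size(Bin);
op_size(_) -> 0.
```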
Implementation details:
- The callback fun is cached in the config record at init time to avoid
module dispatch overhead on the hot path
- On the normal (non-finish) path, `append_msg` returns a bare `#state{}`
with zero extra allocation; the `{finish, State}` wrapper is only
allocated on early batch termination
- When `cast_batch/2` is used with `append_batch/2`, each operation is
passed through the callback individually, allowing any operation to
trigger early completion
- The `reversed_batch` option has no effect when `append_batch/2` is
exported, as batch ordering is entirely the caller's responsibility
- The batch type is now `[op()] | init_batch | term()` to accurately
reflect all three possible states
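Since `handle_batch/2` receives the accumulator term unchanged, the two callbacks are written as a pair. A minimal self-contained sketch, assuming a simple count-limited accumulator and a hypothetical `process/1` consumer (neither is part of the library):

```erlang
%% Count ops in an opaque {Count, ReversedOps} accumulator and finish
%% after 100 entries (callback shapes per this PR; the accumulator
%% layout is an illustrative assumption).
append_batch(Op, init_batch) ->
    {1, [Op]};
append_batch(Op, {N, Ops}) when N + 1 >= 100 ->
    {finish_batch, {N + 1, [Op | Ops]}};
append_batch(Op, {N, Ops}) ->
    {N + 1, [Op | Ops]}.

%% handle_batch/2 receives the opaque term as-is; ordering is the
%% callback module's responsibility (ops were prepended, so reverse).
handle_batch({_N, ReversedOps}, State) ->
    ok = process(lists:reverse(ReversedOps)),
    {ok, State}.
```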