Add optional append_batch/2 callback for opaque batch accumulation#31

Draft
kjnilsson wants to merge 1 commit into master from append-batch-api

Conversation


@kjnilsson kjnilsson commented Mar 31, 2026

Introduce a new optional behaviour callback append_batch/2 that gives
the implementation module full control over how operations are accumulated
into a batch. When exported, each incoming op is passed through the callback
instead of being prepended to a list, turning the batch into an opaque term.

The callback receives init_batch as the accumulator for the first operation
in a new batch, and the value is reset to init_batch after each batch
completes. Returning {finish_batch, OpaqueBatch} from the callback completes
the batch immediately, even if the batch size limit has not been reached.
This allows implementations to enforce domain-specific termination conditions
(e.g. total byte size, memory limit, entry count).
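
As a sketch only, the contract described above might be implemented like this; the module name, byte limit, and the restriction to `{cast, Binary}` ops are all made up for illustration, and only `init_batch`, `{finish_batch, _}`, and the `{cast, _}` op shape come from the PR itself:

```erlang
%% Hypothetical callback module: accumulate cast binaries into a batch
%% and finish early once a byte budget is exhausted.
-module(byte_limited_batcher).
-behaviour(gen_batch_server).

-export([init/1, append_batch/2, handle_batch/2]).

-define(MAX_BATCH_BYTES, 64 * 1024).

init(Config) ->
    {ok, Config}.

%% The first op of a new batch arrives with `init_batch` as the accumulator.
append_batch({cast, Bin}, init_batch) when is_binary(Bin) ->
    {[Bin], byte_size(Bin)};
append_batch({cast, Bin}, {Bins, Bytes}) when is_binary(Bin) ->
    NewBytes = Bytes + byte_size(Bin),
    Acc = {[Bin | Bins], NewBytes},
    case NewBytes >= ?MAX_BATCH_BYTES of
        %% Close the batch early once the byte budget is spent,
        %% even if the configured batch size limit was not reached.
        true  -> {finish_batch, Acc};
        false -> Acc
    end.

%% handle_batch/2 now receives the opaque accumulator instead of [op()].
handle_batch({Bins, _Bytes}, State) ->
    ok = flush(lists:reverse(Bins)),
    {ok, State}.

flush(_Bins) ->
    ok.
```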

Implementation details:

  • The callback fun is cached in the config record at init time to avoid
    module dispatch overhead on the hot path
  • On the normal (non-finish) path, append_msg returns a bare #state{}
    with zero extra allocation; the {finish, State} wrapper is only
    allocated on early batch termination
  • When cast_batch/2 is used with append_batch/2, each operation is
    passed through the callback individually, allowing any operation to
    trigger early completion
  • The reversed_batch option has no effect when append_batch/2 is
    exported, as batch ordering is entirely the caller's responsibility
  • The batch type is now [op()] | init_batch | term() to accurately
    reflect all three possible states
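
A caller-side sketch of the `cast_batch/2` interaction described in the bullets above (server name and message shapes are hypothetical): because each element is fed through `append_batch/2` individually, a single oversized entry can close the batch partway through the list.

```erlang
%% Hypothetical caller: enqueue several binaries in one cast. With
%% append_batch/2 exported, each element is appended one at a time, so the
%% large middle entry may return {finish_batch, _} and split the batch.
ok = gen_batch_server:cast_batch(Batcher,
                                 [<<"small">>,
                                  <<0:(128 * 1024 * 8)>>,
                                  <<"tail">>]).
```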

@kjnilsson kjnilsson force-pushed the append-batch-api branch 3 times, most recently from 9c27843 to 6ea4141 Compare April 1, 2026 08:50
@kjnilsson kjnilsson changed the title Append batch api Add optional append_batch/2 callback for opaque batch accumulation Apr 1, 2026
@kjnilsson kjnilsson requested a review from Copilot April 1, 2026 09:20

Copilot AI left a comment


Pull request overview

Adds an optional append_batch/2 behaviour callback to gen_batch_server so implementations can control how ops are accumulated into an opaque batch term and optionally trigger early batch completion.

Changes:

  • Introduces append_batch/2 as an optional callback and caches its fun in the server config for hot-path batching.
  • Updates batching logic to support opaque accumulators, init_batch reset semantics, and early {finish_batch, Batch} completion.
  • Adds Common Test coverage and README documentation for the new callback.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 8 comments.

  • src/gen_batch_server.erl: Adds append_batch/2 support, opaque batch handling, and early-finish control flow.
  • test/gen_batch_server_SUITE.erl: Adds new CT cases covering basic append behavior, finish behavior, reversed_batch interaction, and cast_batch interaction.
  • README.md: Documents append_batch/2, opaque batches for handle_batch/2, and finish semantics.


@dumbbell dumbbell left a comment


Small comments; I don’t feel strongly about them. Feel free to ignore them :-)


Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.




Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 2 comments.



Comment thread src/gen_batch_server.erl
Comment on lines +105 to 106
-callback handle_batch(batch(), State) ->
{ok, State} |

Copilot AI Apr 1, 2026


The behaviour spec for handle_batch/2 now uses batch(), but the new batch() type excludes valid opaque batches (e.g. a list accumulator like [Val] in the new tests) and suggests {batch, term()} may be passed to callbacks, which doesn’t happen. Updating the callback spec to accept [op()] | term() (or a dedicated opaque_batch() alias) would better match the actual API and README semantics.

Comment thread test/gen_batch_server_SUITE.erl Outdated
after 5000 ->
exit(timeout)
end.
%% Test for leftover ops handling (not yet implemented)

Copilot AI Apr 1, 2026


The comment above append_batch_cast_batch_with_leftover/1 says leftover op handling is “not yet implemented”, but this PR introduces leftover_ops + process_leftover_ops/3 to implement it. Please update or remove the comment so the test file doesn’t contradict the current implementation.

Suggested change
%% Test for leftover ops handling (not yet implemented)
%% Test for leftover ops handling

