
simplify aggregated payload(s) processing from block or from gossip#625

Merged
g11tech merged 10 commits into main from claude/unify-aggregate-storage-d7Iw6 on Mar 4, 2026

Conversation

@zclawz (Contributor) commented Mar 4, 2026

Summary

Refactored the aggregated signature proof storage mechanism to accept multiple validators at once, eliminating redundant per-validator operations and simplifying the API.

Key Changes

  • Unified storeAggregatedPayload API: Changed from storing proofs for a single validator to accepting a slice of validator IDs, reducing code duplication across call sites
  • Dual storage paths: Added is_from_block parameter to distinguish between:
    • Block-sourced attestations → stored in latest_known_aggregated_payloads (immediately available)
    • Gossip attestations → stored in latest_new_aggregated_payloads (promoted via periodic ticks)
  • Removed onGossipAggregatedAttestation method: Consolidated gossip attestation handling into the unified storeAggregatedPayload function
  • Eliminated per-validator proof cloning: Batch operations now clone the proof once and reuse it across multiple validators, improving efficiency
  • Simplified call sites: Updated all callers (chain.zig, fork_choice_runner.zig, stageAggregatedAttestation) to build validator ID arrays and call the unified method
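The unified API described above can be sketched roughly as follows. This is an illustrative Zig fragment, not the repository's actual code: `SignatureProof`, `clone`, and the exact map types are assumed shapes; only `storeAggregatedPayload`, `is_from_block`, and the two payload-map names come from the PR.

```zig
pub fn storeAggregatedPayload(
    self: *Self,
    validator_ids: []const u64,
    proof: SignatureProof,
    is_from_block: bool,
) !void {
    self.signatures_mutex.lock();
    defer self.signatures_mutex.unlock();

    // Pick the destination map once: block-sourced proofs become
    // immediately available, gossip proofs wait for the periodic tick.
    const target = if (is_from_block)
        &self.latest_known_aggregated_payloads
    else
        &self.latest_new_aggregated_payloads;

    // Clone the proof once and reuse it for every validator in the
    // batch, instead of cloning per validator as before.
    const owned = try proof.clone(self.allocator);
    for (validator_ids) |validator_id| {
        try target.put(validator_id, owned);
    }
}
```

Accepting a slice lets one attestation covering many validators be stored with a single call and a single clone, which is where the efficiency gain in the last bullet comes from.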

Implementation Details

  • Attestation tracker updates for gossip attestations are now conditional (!is_from_block) to avoid unnecessary fork choice updates for block-sourced proofs
  • Mutex locking strategy adjusted: signatures_mutex is always held for proof storage, while mutex is only locked for gossip attestations
  • Proof ownership and cleanup semantics remain unchanged; callers are responsible for proof lifecycle management
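The conditional tracker update and the split locking strategy could look roughly like the fragment below. This is a hedged sketch: `updateAttestationTrackerUnlocked` and `attestation` are hypothetical names standing in for whatever the tracker-update path actually is; only the `!is_from_block` guard and the two-mutex split are described in the PR.

```zig
// Inside storeAggregatedPayload, after proofs are stored under
// signatures_mutex (which is always held for proof storage):
if (!is_from_block) {
    // Only gossip attestations drive attestation-tracker updates;
    // block-sourced proofs skip this to avoid redundant fork choice work.
    self.mutex.lock();
    defer self.mutex.unlock();
    for (validator_ids) |validator_id| {
        try self.updateAttestationTrackerUnlocked(validator_id, attestation);
    }
}
```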

https://claude.ai/code/session_01Wc3TmWgYtBTY6wmyLubq6R

claude and others added 6 commits March 4, 2026 12:30
Consolidate the two separate paths for storing aggregated attestation
payloads into a single forkchoice.storeAggregatedPayload function.

Previously, block attestations called storeAggregatedPayload (single
validator_id, stored in latest_known_aggregated_payloads) while gossip
attestations called onGossipAggregatedAttestation (stored in
latest_new_aggregated_payloads with attestation tracker updates).

Now storeAggregatedPayload accepts a validator_ids slice and an
is_from_block bool. When is_from_block is true, payloads go into
latest_known_aggregated_payloads; when false, they go into
latest_new_aggregated_payloads and attestation trackers are updated
(previously done by the removed onGossipAggregatedAttestation).

chain.storeAggregatedPayloads now collects all validator ids per
attestation and calls the unified function with is_from_block=true.
chain.onGossipAggregatedAttestation now extracts validator ids and calls
the unified function with is_from_block=false directly, removing the
delegation to forkchoice.onGossipAggregatedAttestation.

Remove forkchoice.onGossipAggregatedAttestation and its unlocked helper
as they are no longer needed.

https://claude.ai/code/session_01Wc3TmWgYtBTY6wmyLubq6R
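A caller-side sketch of the gossip path this commit describes, under assumed names: `participantIndices` is an illustrative accessor for the validators covered by the proof, not a confirmed helper in the repository.

```zig
pub fn onGossipAggregatedAttestation(
    self: *Self,
    attestation: types.AggregatedAttestation,
) !void {
    // Collect every participant covered by the aggregate.
    var validator_ids = std.ArrayList(u64).init(self.allocator);
    defer validator_ids.deinit();
    for (attestation.signature_proof.participantIndices()) |index| {
        try validator_ids.append(index);
    }
    // Delegate to the unified path; is_from_block = false selects
    // latest_new_aggregated_payloads and the tracker updates.
    try self.forkchoice.storeAggregatedPayload(
        validator_ids.items,
        attestation.signature_proof,
        false,
    );
}
```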
Remove the storeAggregatedPayloads helper and store each attestation's
aggregated payload directly inside the existing aggregated_attestations
loop in onBlock, reusing the already-computed signature_proof and
validator_indices.

https://claude.ai/code/session_01Wc3TmWgYtBTY6wmyLubq6R
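The inlined onBlock loop this commit describes might be sketched as below. Field and accessor names (`signature_proofs`, `validatorIndices`) are assumptions for illustration; the point from the commit is reusing the already-computed proof and indices rather than re-deriving them in a helper.

```zig
// Inside onBlock, iterating the block's aggregated attestations
// alongside the proofs computed during signature verification:
for (block.body.aggregated_attestations, signature_proofs) |attestation, signature_proof| {
    const validator_indices = attestation.validatorIndices();
    try self.forkchoice.storeAggregatedPayload(
        validator_indices,
        signature_proof,
        true, // is_from_block: stored as immediately available
    );
}
```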
…station, always store signature

Rename forkchoice's onGossipAttestation/onGossipAttestationUnlocked to
onSignedAttestation/onSignedAttestationUnlocked and remove the
store_signature bool parameter since signatures are always stored.

Update all call sites in chain.zig, forkchoice.zig tests, and
spectest/fork_choice_runner.zig accordingly.

https://claude.ai/code/session_01Wc3TmWgYtBTY6wmyLubq6R
@g11tech changed the title from "Refactor aggregated payload storage to batch multiple validators" to "simplify aggregated payload(s) processing from block or from gossip" on Mar 4, 2026
&signature_groups[index]
else
null;
// Get participant indices from the signature proof, length already validated
Collaborator

We have validated the length, but we are not rejecting the block on a length mismatch. We should reject blocks whose aggregated_attestations length doesn't match signature_groups.

Member

wouldn't verifySignatures take care of it?


// Internal unlocked version - assumes caller holds lock
fn onGossipAttestationUnlocked(self: *Self, signed_attestation: types.SignedAttestation, store_signature: bool) !void {
fn onSignedAttestationUnlocked(self: *Self, signed_attestation: types.SignedAttestation) !void {
Collaborator

I think we should keep store_signature and set it properly, unlike the hardcoded true we were passing before. If the node is not an aggregator, what purpose would the gossip_signatures map serve? I think it only makes sense for aggregator nodes, since only they can aggregate.

@g11tech (Member) commented Mar 4, 2026
You only subscribe to the subnet if you are an aggregator. Can you verify whether this is the case? If not, create an issue for it.

@g11tech g11tech merged commit a8b71a6 into main Mar 4, 2026
1 check passed
@g11tech g11tech deleted the claude/unify-aggregate-storage-d7Iw6 branch March 4, 2026 17:55
4 participants