
mining: add getMemoryLoad() and track template non-mempool memory footprint#33922

Open
Sjors wants to merge 7 commits into bitcoin:master from Sjors:2025/11/ipc-memusage

Conversation


@Sjors Sjors commented Nov 21, 2025

Implements a way to track the memory footprint of all non-mempool transactions that are still being referenced by block templates, see discussion in #33899. It does not impose a limit.

IPC clients can query this footprint (total, across all clients) using the getMemoryLoad() IPC method. Its client-side usage is demonstrated here:

Additionally, the functional test in interface_ipc.py is expanded to demonstrate how template memory management works: templates are not released until the client drops references to them, or calls the template destroy method, or disconnects. The destroy method is called automatically by clients using libmultiprocess, as sv2-tp does. In the Python tests it also happens when references are destroyed or go out of scope.

The PR starts with preparation refactor commits:

  1. Tweaks interface_ipc.py so destroy() calls happen in an order that's useful to later demonstrate memory management
  2. Change std::unique_ptr<BlockTemplate> block_template from a static defined in rpc/mining.cpp to NodeContext. This prevents a crash when we switch to a non-trivial destructor later (which uses m_node).

Then the main commits:

  1. Add template_tx_refs to NodeContext to track how many templates contain any given transaction. This map is updated by the BlockTemplate constructor and destructor.
  2. Add GetTemplateMemoryUsage() which loops over this map and sums up the memory footprint for transactions outside the mempool
  3. Expose this information to IPC clients via getMemoryLoad() and add test coverage
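The refcounting scheme in steps 1 and 2 can be sketched in isolation. This is a simplified toy model, not the actual Bitcoin Core code: `Tx`, `TxRef`, `NodeCtx`, and `Template` are stand-ins, and the map is keyed on a string id for brevity (the real implementation keys on `CTransactionRef` with a salted hasher):

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Toy transaction: an id plus a memory footprint.
struct Tx {
    std::string id;
    size_t mem_usage;
};
using TxRef = std::shared_ptr<const Tx>;

struct NodeCtx {
    // id -> (transaction, number of live templates referencing it)
    std::map<std::string, std::pair<TxRef, int>> template_tx_refs;
    std::set<std::string> mempool; // ids currently in the mempool
};

class Template {
public:
    Template(NodeCtx& node, std::vector<TxRef> txs) : m_node{node}, m_txs{std::move(txs)}
    {
        // Constructor bumps the refcount for every transaction it holds.
        for (const auto& tx : m_txs) {
            auto it = m_node.template_tx_refs.try_emplace(tx->id, tx, 0).first;
            ++it->second.second;
        }
    }
    ~Template()
    {
        // Destructor decrements, and drops entries nobody references anymore.
        for (const auto& tx : m_txs) {
            auto it = m_node.template_tx_refs.find(tx->id);
            if (--it->second.second == 0) m_node.template_tx_refs.erase(it);
        }
    }
private:
    NodeCtx& m_node;
    std::vector<TxRef> m_txs;
};

// Sum the footprint of referenced transactions that are *not* in the mempool.
size_t GetTemplateMemoryUsage(const NodeCtx& node)
{
    size_t total{0};
    for (const auto& [id, entry] : node.template_tx_refs) {
        if (!node.mempool.count(id)) total += entry.first->mem_usage;
    }
    return total;
}
```

A transaction that is still in the mempool contributes nothing, since its memory is already accounted for there; once every template referencing it is destroyed, the bookkeeping entry disappears.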


DrahtBot commented Nov 21, 2025

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage & Benchmarks

For details see: https://corecheck.dev/bitcoin/bitcoin/pulls/33922.

Reviews

See the guideline for information on the review process.

Type         Reviewers
Concept ACK  ismaelsadeeq, ryanofsky
Stale ACK    enirox001, vasild

If your review is incorrectly listed, please copy-paste <!--meta-tag:bot-skip--> into the comment that the bot should ignore.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #34806 (refactor: logging: Various API improvements by ajtowns)
  • #34644 (mining: add submitBlock to IPC Mining interface by w0xlt)
  • #34020 (mining: add getTransactions(ByWitnessID) IPC methods by Sjors)
  • #33966 (refactor: disentangle miner startup defaults from runtime options by Sjors)
  • #33421 (node: add BlockTemplateCache by ismaelsadeeq)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.


Sjors commented Nov 21, 2025

I haven't benchmarked this yet on mainnet, so I'm not sure whether checking every (unique) transaction for mempool presence is unacceptably expensive.

If people prefer, I could also add a way for the getblocktemplate RPC to opt out of the memory bookkeeping, since it holds at most one template, and for no longer than a minute.

@DrahtBot

🚧 At least one of the CI tasks failed.
Task tidy: https://github.com/bitcoin/bitcoin/actions/runs/19575422916/job/56059300316
LLM reason (✨ experimental): clang-tidy flagged fatal errors (loop variable copied for range-based for causing a warnings-as-errors failure) in interfaces.cpp, breaking the CI run.

Hints

Try to run the tests locally, according to the documentation. However, a CI failure may still
happen due to a number of reasons, for example:

  • Possibly due to a silent merge conflict (the changes in this pull request being
    incompatible with the current code in the target branch). If so, make sure to rebase on the latest
    commit of the target branch.

  • A sanitizer issue, which can only be found by compiling with the sanitizer and running the
    affected test.

  • An intermittent issue.

Leave a comment here, if you need help tracking down a confusing failure.


```cpp
TxTemplateMap& tx_refs{*Assert(m_tx_template_refs)};
// Don't track the dummy coinbase, because it can be modified in-place
// by submitSolution()
```
Member Author

b9306b79b8f5667a2679236af8792bb1c36db817: in addition, we might be wiping the dummy coinbase from the template later: Sjors#106

@Sjors Sjors force-pushed the 2025/11/ipc-memusage branch from f22413f to 3b77529 on November 21, 2025 16:22
@ismaelsadeeq ismaelsadeeq left a comment


Concept ACK

I think it would be better if we have internal memory management for the mining interface IPC, since we hold on to the block templates.

I would suggest the following approach:

  • Add memory budget for the mining interface.
  • Introduce a tracking list of recently built block templates and total memory usage.
  • Add templates to the list and increment the memory usage after every createnewblock or waitnext return.
  • Whenever the memory budget is exhausted, we should release templates in FIFO order.
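A minimal sketch of this budget-and-FIFO suggestion, with hypothetical `TemplateCache` and `Tmpl` types that are not code from the PR:

```cpp
#include <cassert>
#include <deque>
#include <memory>

// Toy template: only its memory footprint matters here.
struct Tmpl {
    size_t mem_usage;
};

class TemplateCache {
public:
    explicit TemplateCache(size_t budget) : m_budget{budget} {}

    // Record a newly built template and evict oldest-first if over budget.
    std::shared_ptr<Tmpl> Add(size_t mem_usage)
    {
        auto tmpl = std::make_shared<Tmpl>(Tmpl{mem_usage});
        m_templates.push_back(tmpl);
        m_total += mem_usage;
        // Release templates in FIFO order until we are back under budget
        // (always keep at least the newest one).
        while (m_total > m_budget && m_templates.size() > 1) {
            m_total -= m_templates.front()->mem_usage;
            m_templates.pop_front();
        }
        return tmpl;
    }

    size_t TotalUsage() const { return m_total; }
    size_t Count() const { return m_templates.size(); }

private:
    size_t m_budget;
    size_t m_total{0};
    std::deque<std::shared_ptr<Tmpl>> m_templates;
};
```

Note that dropping a template from the cache only releases its memory if no one else still holds the shared pointer, which is exactly the complication raised in the replies below about templates still in use by internal code or clients.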

I think this mechanism should be safe: since we create a new template after a time interval elapses (even if fees increase), and that interval is usually enough for the client to receive and distribute the template to miners, by the time the budget is exhausted the miners will long since have switched to the most recent template returned by waitnext.

Mining interface clients should also handle their own memory internally.

Currently, I don’t see much use for the exposed getMemoryLoad method. In my opinion, we should not rely on the IPC client to manage our memory.


Sjors commented Nov 21, 2025

In my opinion, we should not rely on the IPC client to manage our memory.

Whenever the memory budget is exhausted, we should release templates in FIFO order

It seems counterintuitive, but from a memory-management perspective IPC clients are treated no differently than our own code. And if we started FIFO-deleting templates that are used by our own code, we'd crash.

So I think FIFO deletion should be a last resort (not implemented here).

There's another reason why we should give clients an opportunity to gracefully release templates in whatever order they prefer. Maybe there are 100 downstream ASICs, one of which is very slow at loading templates, so it's only given a new template when the tip changes, not when there's a fee change. In that scenario you have a specific template that the client wants to "defend" at all costs.

In practice I'm hoping none of this matters and we can pick and recommend defaults that make it unlikely to get close to a memory limit, other than during some weird token launch.


ismaelsadeeq commented Nov 21, 2025

It seems counterintuitive, but from a memory-management perspective IPC clients are treated no differently than our own code. And if we started FIFO-deleting templates that are used by our own code, we'd crash.

IMHO we should separate that and treat clients differently from our own code, because they are different codebases and separate applications with their own memory.

Maybe there are 100 downstream ASICs, one of which is very slow at loading templates, so it’s only given a new template when the tip changes, not when there’s a fee change. In that scenario you have a specific template that the client wants to “defend” at all costs.

I see your point, but I don't think that's a realistic scenario, and I think we shouldn't design software to be one-size-fits-all.
If you want to use only a single block template, then use createnewblock to create one and mine it continuously until the chain tip changes or you mine a block.

waitNext returning indicates that we assume your miners are switching from the block they are currently mining to the new one they receive.
Depending on the budget (which I assume is large), many templates would need to be returned before we exhaust it.

Delegating template eviction responsibility to the client can put us in a situation where they handle it poorly and cause us to OOM (but I guess your argument is that we rather take that chance than being in a situation where we make miners potentially lose on rewards).
However, I think that if there is a clean separation of concerns between the Bitcoin Core node and its clients, with a clear interface definition and expectations, that should not happen, and I believe the mining interface should not differ in that respect.
Otherwise, if we do want a one-size-fits-all solution capable of handling the scenario you described, we should rethink the design entirely and revert to an approach where we do not retain block templates.


Sjors commented Nov 24, 2025

Delegating template eviction responsibility to the client can put us in a situation where they handle it poorly and cause us to OOM

Note that it's already the client's responsibility; that's inherent to how multiprocess works.

In the scenario where they handle it poorly, we can use FIFO deletion. All getMemoryLoad() does is give clients an opportunity to handle it better. If they're fine with FIFO, then they never have to call this method.

treat clients differently from our own code

We currently don't track whether any given CBlockTemplate is owned by an IPC client or by our internal code. Once we introduce FIFO deletion all call sites will have to check if it's been deleted since, or we need to exempt them from the memory accounting.

an approach where we do not retain block templates.

Afaik that means revalidating the block from scratch, removing one advantage the submitBlock() approach has over the submitblock RPC (I haven't benchmarked this though).


Sjors commented Nov 24, 2025

I tracked the non-mempool transaction memory footprint for half a day on mainnet, using fairly aggressive template update criteria (minimum fee delta 1 sat and no more than once per second). So far the footprint is minuscule, but of course this depends on the mempool weather:

[Figure: getmemoryload scatter plot]

The memory spike after each new block is because sv2-tp holds on to templates from previous blocks for 10 seconds. Those ~3 MB spikes may look impressive, but keep in mind that the default mempool is 300 MB.


Sjors commented Nov 25, 2025

I restructured the implementation and commits a bit.

The TxTemplateMap now lives on the NodeContext rather than MinerImpl (interface). This reflects the fact that we want to track the global memory footprint instead of per client. It's a lightweight member template_tx_refs which should be easy to fold into a block template manager later.

It's also less code churn because I don't have to touch the BlockTemplateImpl constructor.

It also made it easier to move GetTemplateMemoryUsage from interface.cpp to miner.cpp, where it's more reusable.

This in turn let me split out a separate commit that introduces the actual getMemoryLoad() interface method. So even if we decide against including that method, the rest of the PR should be useful. However I do think it's worth keeping, it's already been a helpful debugging and monitoring tool.

I added some comments to point out that we don't hold a mempool.cs lock during the calculation because we don't need an accurate result (mempool drift) and we don't want to bog down transaction relay with a potentially long lock (1-3ms in my testing so far).
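The locking strategy described here could look roughly like this. It is a toy illustration with stand-in `Mempool` and `SumNonMempoolUsage` names, not the PR's code (the real code uses mempool.cs and the mempool's exists() method):

```cpp
#include <cassert>
#include <mutex>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Toy mempool guarded by a mutex, as the real one is guarded by mempool.cs.
struct Mempool {
    mutable std::mutex cs;
    std::set<std::string> txids;

    bool exists(const std::string& txid) const
    {
        // The lock is held only for this single lookup.
        std::lock_guard<std::mutex> lock{cs};
        return txids.count(txid) > 0;
    }
};

size_t SumNonMempoolUsage(const Mempool& pool,
                          const std::vector<std::pair<std::string, size_t>>& template_txs)
{
    size_t total{0};
    for (const auto& [txid, usage] : template_txs) {
        // No lock is held between iterations, so transaction relay is never
        // blocked for the full duration of the loop. The trade-off is that
        // the mempool can change mid-loop, making the result approximate.
        if (!pool.exists(txid)) total += usage;
    }
    return total;
}
```

The per-lookup locking is what makes the result subject to mempool drift, which is acceptable here because the number is only used for monitoring.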

@Sjors Sjors force-pushed the 2025/11/ipc-memusage branch from 24592b7 to 03dcfae on November 25, 2025 17:21

Sjors commented Nov 25, 2025

mining_getblocktemplate_longpoll.py triggered a stack-use-after-return, due to block_template being static (to allow template reuse between RPC calls). I added a commit d752dccaa56b663001d1bb29ab8b9a50628602a9 to move this longpoll template to the node context. This seems more appropriate anyway since BlockTemplate has a m_node member, so it shouldn't be able to outlive the node.

One caveat is that gbt_template has to be cleared before template_tx_refs, so I swapped them and added a comment (cde248a6613b6e37f7f7e35c1aabeb75347ffe95 -> 9c667c362a1639b48113a3657882b751f475082c).
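The ordering constraint follows from C++ destroying non-static data members in reverse declaration order. A toy illustration, where `RefMap`, `GbtTemplate`, and `Context` are stand-ins for the real NodeContext members:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Records the order in which destructors run.
std::vector<std::string> destroyed;

struct RefMap {
    ~RefMap() { destroyed.push_back("template_tx_refs"); }
};

struct GbtTemplate {
    ~GbtTemplate() { destroyed.push_back("gbt_template"); }
};

struct Context {
    // template_tx_refs is declared first, so it is destroyed *last*: the
    // template's destructor can still safely update the refcount map.
    RefMap template_tx_refs;
    GbtTemplate gbt_template;
};
```

Swapping the two declarations would destroy the map first, leaving the template's destructor with a dangling reference.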


Expanded the PR description.


@vasild vasild left a comment


ACK 04553cd

Would be good to figure out if #33922 (comment) needs addressing.

Sjors added a commit to Sjors/bitcoin-capnp-types that referenced this pull request Mar 6, 2026
Sjors added a commit to Sjors/bitcoin-capnp-types that referenced this pull request Mar 11, 2026

@enirox001 enirox001 left a comment


ACK 04553cd

Went through each commit:

  • 6f12988 - forward declaration of BlockTemplate in context.h is necessary, mining.h is not in the include chain
  • b22e535 - destruction ordering of gbt_result before template_tx_refs looks correct. break in ~BlockTemplateImpl makes sense, and no lock inversion between template_state_mutex and cs_main
  • 1d7497f - mempool.cs is intentionally not held for the full loop to avoid blocking relay, which is well documented
  • 9f4b744 - locking looks consistent with commit 3, template_state_mutex guards the map while mempool.cs is acquired per exists() call
  • 04553cd - MAX_MONEY is a good boundary case to have covered

Ran interface_ipc_mining.py and all tests passed successfully.

@Sjors Sjors force-pushed the 2025/11/ipc-memusage branch from 04553cd to 430990c on April 8, 2026 09:28

Sjors commented Apr 8, 2026

@vasild wrote:

Would be good to figure out if #33922 (comment) needs addressing.

Thanks for the reminder. I ended up using the salted hasher in a6d4b88.

I also added 430990c to delete the default std::hash constructor for CTransactionRef, for good measure.

Deleting operator== would make sense for the same reason, but involves a fair amount of test churn, so I didn't do it here.

@Sjors Sjors force-pushed the 2025/11/ipc-memusage branch from 430990c to 10acf81 on April 8, 2026 09:36

@vasild vasild left a comment


ACK 10acf81

```cpp
#include <string>
#include <tuple>
#include <utility>
#include <variant>
```
Contributor


<variant> was added in the last commit 10acf81 ("refactor: disable default std::hash for CTransactionRef"), but I think it is not needed for the newly added code:

```cpp
/** Disable default std::hash for CTransactionRef to prevent accidentally
 *  comparing by pointer. Use CTransactionRefSaltedHash or provide a custom
 *  hasher. */
template<>
struct std::hash<CTransactionRef> {
    size_t operator()(const CTransactionRef&) const = delete;
};
```

Member Author

It was an IWYU false positive, see comments below.

@DrahtBot DrahtBot requested a review from enirox001 April 9, 2026 10:15
@Sjors Sjors force-pushed the 2025/11/ipc-memusage branch from 10acf81 to d1b9df5 on April 14, 2026 14:56

Sjors commented Apr 14, 2026

The IWYU linter insists on #include <variant>: https://github.com/bitcoin/bitcoin/actions/runs/24406181800/job/71290035323?pr=33922

But that seems wrong, so I added a commit to override it.

Sjors and others added 5 commits April 14, 2026 14:22
The getblocktemplate RPC uses a static BlockTemplate, which goes out of scope only after the node has completed its shutdown sequence.

This becomes a problem when a later commit implements a destructor
that uses m_node.
IPC clients can hold on to block templates indefinitely, which has the
same impact as when the node holds a shared pointer to the
CBlockTemplate. Because each template in turn tracks CTransactionRefs,
transactions that are removed from the mempool will not have
their memory cleared.

This commit adds bookkeeping to the block template constructor and
destructor that will let us track the resulting memory footprint.

Co-authored-by: Vasil Dimov <vd@FreeBSD.org>
Calculate the non-mempool memory footprint for template transaction
references.

Add bench logging to collect data on whether caching or simplified
heuristics are needed, such as not checking for mempool presence.
Allow IPC clients to inspect the amount of memory consumed by
non-mempool transactions in blocks.

Returns a MemoryLoad struct which can later be expanded to e.g.
include a limit.

Expand the interface_ipc.py test to demonstrate the behavior and
to illustrate how clients can call destroy() to reduce memory
pressure.
@Sjors Sjors force-pushed the 2025/11/ipc-memusage branch from 77c21a0 to cd1fce1 on April 14, 2026 18:24

Sjors commented Apr 14, 2026

Trying a rebase without cd1fce1 first, to see if #34896 fixed the IWYU false positive.

Sjors added 2 commits April 14, 2026 14:29
The default std::hash for shared_ptr compares by pointer.

CTransactionRefSaltedHash or a custom hasher should be used instead.
IWYU incorrectly suggests <variant> for std::hash.

Sjors commented Apr 14, 2026

The workaround is still needed. We can drop it here if #35073 lands.



7 participants