
Conversation

@0xbigapple (Owner)

What does this PR do?

  • Optimize memory allocation for the JSON-RPC filter interface (`eth_newFilter`, `eth_newBlockFilter`)
  • Add config item `node.jsonrpc.maxBlockFilterNum`

Why are these changes required?

This PR has been tested by:

  • Unit Tests
  • Manual Testing

Follow up

Extra details

@github-actions bot left a comment

## 📋 Review Summary

This pull request introduces optimizations for the JSON-RPC filter interface by adding caching to reduce memory allocation, which is a significant improvement. It also adds a configurable limit for the number of block filters. The changes are well-implemented, but there are a few minor areas for improvement regarding resource management and code consistency.

🔍 General Feedback

  • Good Caching Implementation: The use of Guava's Cache to reduce object creation for LogFilterElement and block hashes is a good performance enhancement.
  • New Exception: A new exception JsonRpcExceedLimitException was added but not used. It should be used to provide more specific error information.
  • Resource Cleanup: Not all new caches are being invalidated in the close() method, which could lead to memory leaks.
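
The caching pattern praised above can be sketched independently of the project code. This stdlib-only toy uses a size-bounded `LinkedHashMap` standing in for Guava's `Cache` (all class and field names here are hypothetical) to show how equal filter elements collapse to one shared instance:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

class InternCacheDemo {
  // Toy element with value-based equality, like LogFilterElement after this PR.
  static final class Element {
    final String blockHash;
    final int logIndex;

    Element(String blockHash, int logIndex) {
      this.blockHash = blockHash;
      this.logIndex = logIndex;
    }

    @Override
    public boolean equals(Object o) {
      if (this == o) {
        return true;
      }
      if (!(o instanceof Element)) {
        return false;
      }
      Element e = (Element) o;
      return logIndex == e.logIndex && Objects.equals(blockHash, e.blockHash);
    }

    @Override
    public int hashCode() {
      return Objects.hash(blockHash, logIndex);
    }
  }

  // Size-bounded map standing in for CacheBuilder.newBuilder().maximumSize(...).
  static final Map<Element, Element> cache =
      new LinkedHashMap<Element, Element>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<Element, Element> eldest) {
          return size() > 10_000;
        }
      };

  // Returns the cached instance for an equal element, so duplicates share memory.
  static Element intern(Element e) {
    Element cached = cache.putIfAbsent(e, e);
    return cached != null ? cached : e;
  }

  public static void main(String[] args) {
    Element a = intern(new Element("0xabc", 0));
    Element b = intern(new Element("0xabc", 0)); // equal element, cached one returned
    System.out.println(a == b); // true: one shared instance
  }
}
```

The real change keeps eviction and expiry with Guava; the point is that value-based `equals`/`hashCode` is what makes the cache lookup hit.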

```java
  return result;
}

@Override
```

🟡 It seems like `blockHashCache` is not invalidated in the `close()` method, which could lead to a memory leak. You should invalidate it along with `logElementCache`.

Suggested change
```java
logElementCache.invalidateAll();
blockHashCache.invalidateAll();
```

```java
} else {
  blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {
```

🟡 You've introduced a new exception `JsonRpcExceedLimitException`, but you're throwing `JsonRpcInvalidParamsException` here. It would be more consistent and specific to use the new exception you've created.

Suggested change
```java
if (blockFilter2Result.size() >= maxBlockFilterNum) {
  throw new JsonRpcExceedLimitException(
      "exceed max block filters: " + maxBlockFilterNum + ", try again later");
}
```

```java
} catch (ExecutionException e) {
  logger.error("Getting/loading blockHash from cache failed", e); //never happen
  cachedBlockHash = originalBlockHash;
}
```

🟢 The comment `//never happen` isn't very descriptive. It would be better to explain *why* it will never happen. For example, you could explain that the callable for the cache does not throw a checked exception.

Suggested change
```java
// The callable cannot throw a checked exception, so this should never happen.
logger.error("Getting/loading blockHash from cache failed", e);
```
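
For context on why the branch is unreachable: Guava's `Cache.get(key, Callable)` wraps a checked exception thrown by the `Callable` in an `ExecutionException`, so a loader that cannot throw one never triggers it. A stdlib analogue of the same load-through shape, using `computeIfAbsent` (cache name and keys here are hypothetical):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class CacheLoadDemo {
  // Stand-in for the PR's blockHashCache.
  static final ConcurrentMap<String, String> blockHashCache = new ConcurrentHashMap<>();

  public static void main(String[] args) {
    // The mapping function cannot throw a checked exception, so callers never
    // see a load failure -- the same reasoning behind the "never happen" note.
    String cached = blockHashCache.computeIfAbsent("block-1", k -> k);
    System.out.println(cached.equals("block-1")); // true

    // A second lookup returns the already-cached value, not the new one.
    System.out.println(blockHashCache.computeIfAbsent("block-1", k -> "other"));
  }
}
```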

Comment on lines +514 to +515
```java
result = result + (removed ? 1 : 0);
return result;
```

🟢 The `hashCode()` implementation can be made more concise and readable by using `java.util.Objects.hash()`.

Suggested change
```java
@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}
```
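
`Objects.hash` delegates to `Arrays.hashCode` over its varargs, so equal field tuples always hash equally and `null` fields need no special casing; a quick check (the values below are arbitrary):

```java
import java.util.Objects;

class ObjectsHashDemo {
  public static void main(String[] args) {
    // Equal field tuples must hash equally for cache lookups to hit.
    int h1 = Objects.hash("0xabc", "0xdef", 1, 2, true);
    int h2 = Objects.hash("0xabc", "0xdef", 1, 2, true);
    System.out.println(h1 == h2); // true

    // null fields are handled without explicit null checks.
    System.out.println(Objects.hash((Object) null, 7) == Objects.hash((Object) null, 7)); // true
  }
}
```

One caveat: `Objects.hash` allocates a varargs array and boxes primitives on every call, which is usually negligible but worth knowing on hot paths.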

@0xbigapple closed this Jan 20, 2026
@0xbigapple reopened this Jan 20, 2026
@github-actions bot left a comment

## 📋 Review Summary

This PR introduces memory optimizations for JSON-RPC filters by implementing caching for LogFilterElement and block hashes, which is a solid improvement. It also adds a limit to the number of block filters to prevent resource exhaustion.

🔍 General Feedback

  • Unused Exception: The new JsonRpcExceedLimitException class is added but not utilized in the implementation.
  • Scope Creep: There are several changes unrelated to the PR's stated objective, such as the P2P service startup order change in TronNetService.java, new unrelated tests in NodePersistServiceTest.java, and ShieldedReceiveTest.java. These should ideally be separated or explicitly justified.
  • Production Code Comments: There are "TODO fix test" comments added to production code (e.g., SnapshotManager.java) which indicate that production code is being modified to workaround test issues.
  • Configuration: Consider exposing the cache sizes as configuration parameters for better tunability.

```java
  blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {
  throw new JsonRpcInvalidParamsException(
```


🟡 The class JsonRpcExceedLimitException was added in this PR but is not used here. You are throwing JsonRpcInvalidParamsException instead.

If you intend to use JsonRpcExceedLimitException, please ensure you:

  1. Update this throw statement.
  2. Update the newBlockFilter method signature in TronJsonRpc.java to throw JsonRpcExceedLimitException.
  3. Add the corresponding @JsonRpcError annotation in TronJsonRpc.java.
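
Step 1 can be sketched with self-contained stand-ins (every name below is a hypothetical stand-in for the PR's real classes; the interface and `@JsonRpcError` wiring of steps 2 and 3 is project-specific and not shown):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the PR's exception class.
class ExceedLimitException extends Exception {
  ExceedLimitException(String message) {
    super(message);
  }
}

class FilterLimitDemo {
  static final int MAX_BLOCK_FILTER_NUM = 2; // stand-in for node.jsonrpc.maxBlockFilterNum
  static final Map<Integer, Object> blockFilter2Result = new HashMap<>();

  // Mirrors the suggested guard: reject new filters once the limit is reached.
  static int newBlockFilter() throws ExceedLimitException {
    if (blockFilter2Result.size() >= MAX_BLOCK_FILTER_NUM) {
      throw new ExceedLimitException(
          "exceed max block filters: " + MAX_BLOCK_FILTER_NUM + ", try again later");
    }
    int id = blockFilter2Result.size();
    blockFilter2Result.put(id, new Object());
    return id;
  }

  public static void main(String[] args) throws Exception {
    newBlockFilter();
    newBlockFilter();
    boolean rejected = false;
    try {
      newBlockFilter(); // third filter exceeds the limit of 2
    } catch (ExceedLimitException e) {
      rejected = e.getMessage().startsWith("exceed max block filters");
    }
    System.out.println(rejected); // true
  }
}
```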

```java
private static final int maxBlockFilterNum = Args.getInstance().getJsonRpcMaxBlockFilterNum();
private static final Cache<LogFilterElement, LogFilterElement> logElementCache =
    CacheBuilder.newBuilder()
        .maximumSize(300_000L) // 300s * tps(1000) * 1 log/tx ≈ 300_000
```


🟢 The cache maximum sizes are hardcoded. Consider making these configurable via Args or CommonParameter, similar to jsonRpcMaxBlockFilterNum. This would provide flexibility for nodes with different memory constraints.
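
One shape the suggestion could take, sketched with a system property standing in for an `Args`/`CommonParameter` entry (the property name and default below are hypothetical):

```java
class CacheSizeConfigDemo {
  // Hypothetical stand-in for a node.jsonrpc.logElementCacheMaxSize config item.
  static long getLogElementCacheMaxSize() {
    long fallback = 300_000L; // the hardcoded default from the PR
    String raw = System.getProperty("node.jsonrpc.logElementCacheMaxSize");
    if (raw == null) {
      return fallback;
    }
    try {
      long parsed = Long.parseLong(raw);
      return parsed > 0 ? parsed : fallback; // ignore non-positive values
    } catch (NumberFormatException e) {
      return fallback;
    }
  }

  public static void main(String[] args) {
    System.out.println(getLogElementCacheMaxSize()); // 300000 (default when unset)
    System.setProperty("node.jsonrpc.logElementCacheMaxSize", "50000");
    System.out.println(getLogElementCacheMaxSize()); // 50000
  }
}
```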


```java
private void deleteCheckpoint() {
  if(checkTmpStore == null) {
    // only occurs in mock test. TODO fix test
```


🟢 Adding comments about mock tests in production code is generally discouraged. If this null check is required for safety in production, the comment might be misleading. If it is only for tests, the tests should ideally be fixed to set up the environment correctly rather than patching production code.

```java
effectiveCheckService.init();

// Move P2P service start to the end
setP2pConfig(getConfig());
```


🟡 This change alters the startup order of services and seems unrelated to the PR's objective of optimizing memory allocation for JSON-RPC. While postponing P2P start might be a valid fix, it should ideally be in a separate PR or clearly documented in the PR description to ensure it gets proper visibility and testing regarding side effects.

```java
}

@Test
public void testIsMining() {
```


🟢 This test case seems unrelated to the PR's scope.

@0xbigapple closed this Jan 20, 2026
@0xbigapple reopened this Jan 20, 2026
@github-actions bot left a comment

## 📋 Review Summary

The PR successfully implements memory optimization for JSON-RPC filters by introducing a caching mechanism for LogFilterElement and block hashes, and it adds a configurable limit to the number of block filters. These are valuable improvements for stability and resource usage.

🔍 General Feedback

  • Scope & Risk: There is a significant change to the service startup sequence in TronNetService that appears unrelated to the PR's objective. This poses a potential regression risk and should be scrutinized or reverted.
  • Unrelated Changes: Several files (NodePersistServiceTest, ShieldedReceiveTest) contain changes that do not align with the PR title. It is best practice to keep PRs focused on a single responsibility.
  • Defensive Coding: Some null checks were added to production code explicitly to fix test failures ("only occurs in mock test"). While safe, it is preferable to fix the tests themselves to avoid polluting production logic.

Comment on lines +102 to +105
```java
// Move P2P service start to the end
setP2pConfig(getConfig());
p2pService.start(p2pConfig);
p2pService.register(p2pEventHandler);
```


🟠 This change modifies the service startup sequence by moving p2pService initialization to the end. This falls outside the scope of the PR's objective ("Optimize memory allocation for JSON-RPC").

Changing the startup order can have significant side effects if other services (like PeerManager, syncService, etc.) rely on the P2P service being active during their initialization.

Please revert this change unless it is strictly necessary for the memory optimization, in which case, please provide a detailed explanation of the dependency.


```java
@Override
public void close() throws IOException {
  logElementCache.invalidateAll();
```

🟢 For consistency and to ensure all resources are released, consider invalidating the blockHashCache here as well.

Suggested change
```java
logElementCache.invalidateAll();
blockHashCache.invalidateAll();
```

```diff
@@ -0,0 +1,40 @@
+package org.tron.core.net.service.nodepersist;
```


🟢 This new test file appears unrelated to the PR's objective of optimizing JSON-RPC memory allocation. It would be better to include this in a separate PR to keep changes focused.


```java
private void deleteCheckpoint() {
  if(checkTmpStore == null) {
    // only occurs in mock test. TODO fix test
```


🟢 While this check prevents the test failure, leaving a TODO in production code regarding a test fix is technical debt. It is recommended to fix the test setup to ensure checkTmpStore is properly initialized rather than patching the production code.

```java
}
int queueSize = 0;
if (eventListeners == null || eventListeners.isEmpty()) {
  //only occurs in mock test. TODO fix test
```


🟢 Similar to the comment in SnapshotManager, try to address the root cause in the test environment instead of modifying production code with test-specific fixes.

@0xbigapple closed this Jan 20, 2026
@0xbigapple reopened this Jan 20, 2026
@github-actions bot left a comment

## 📋 Review Summary

This PR introduces a configurable limit for JSON-RPC block filters to prevent excessive memory usage, which is a good improvement. The use of Guava's Cache for managing filter elements and block hashes is also a solid choice for performance and memory management.

🔍 General Feedback

  • The PR includes some unrelated changes (e.g., in TronNetService, and some test files). It would be best to move these to separate PRs to keep this one focused on the JSON-RPC filter optimization.
  • There are some minor improvements that can be made, like using the newly created exception and simplifying some code.
  • Please address the new TODO comments.

Overall, the core changes are good, but the PR would be stronger if it were more focused.


🟢 The new test file NodePersistServiceTest.java seems unrelated to the PR's goal, which is to optimize memory allocation for the filter interface of JSON-RPC. Please consider moving this to a separate PR to keep this PR focused.

```java
  public JsonRpcExceedLimitException(String message, Throwable cause) {
    super(message, cause);
  }
}
```
(No newline at end of file)


🟢 Nitpick: Please add a newline at the end of the file.


🟢 The changes in this file, moving the P2P service start to the end, seem unrelated to the PR's goal of optimizing memory allocation for the JSON-RPC filter interface. Could you please clarify why these changes are included in this PR or move them to a separate one?

Comment on lines +1446 to 1447
```java
}
```


🟡 You've created a specific exception JsonRpcExceedLimitException, but here you are throwing a generic JsonRpcInvalidParamsException. It would be more consistent to use the new exception.

Suggested change
```java
throw new JsonRpcExceedLimitException(
    "exceed max block filters: " + maxBlockFilterNum + ", try again later");
```


🟢 This new test testIsMining seems unrelated to the PR's goal of optimizing the JSON-RPC filter interface. Please consider moving it to a separate, more relevant PR.

Comment on lines +482 to +517
```java
@Override
public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || this.getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement item = (LogFilterElement) o;
  if (!Objects.equals(blockHash, item.blockHash)) {
    return false;
  }
  if (!Objects.equals(transactionHash, item.transactionHash)) {
    return false;
  }
  if (!Objects.equals(transactionIndex, item.transactionIndex)) {
    return false;
  }
  if (!Objects.equals(logIndex, item.logIndex)) {
    return false;
  }
  return removed == item.removed;
}

@Override
public int hashCode() {
  int result = 0;
  result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
  result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
  result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
  result = 31 * result + (logIndex == null ? 0 : logIndex.hashCode());
  result = result + (removed ? 1 : 0);
  return result;
}
```


🟢 The equals and hashCode implementations can be simplified using java.util.Objects.

Suggested change
```java
@Override
public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement that = (LogFilterElement) o;
  return removed == that.removed && Objects.equals(blockHash, that.blockHash)
      && Objects.equals(transactionHash, that.transactionHash)
      && Objects.equals(transactionIndex, that.transactionIndex)
      && Objects.equals(logIndex, that.logIndex);
}

@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}
```

```java
}

private void deleteCheckpoint() {
  if(checkTmpStore == null) {
```


🟢 It's good practice to address TODO comments. Could you please fix the test as the comment suggests? If not in this PR, please create a ticket to track this.

@0xbigapple closed this Jan 20, 2026
@0xbigapple reopened this Jan 20, 2026
@github-actions bot left a comment

## 📋 Review Summary

This pull request introduces performance optimizations for JSON-RPC filter interfaces by adding caching for log elements and block hashes, and it also imposes a configurable limit on the number of active block filters to prevent resource exhaustion. Additionally, the PR includes some unrelated changes, such as new tests and reordering of service initialization.

🔍 General Feedback

  • The use of Guava caches is a good approach for optimizing memory and performance. The configuration and usage patterns are generally sound.
  • The addition of a limit for block filters is a valuable safeguard for the node's stability.
  • The PR could be improved by separating the unrelated changes (e.g., new tests for different packages, P2P service initialization reordering) into their own pull requests for better review focus and clearer commit history.
  • The newly implemented hashCode method in LogFilterElement is functional but could be simplified and made more robust by using Objects.hash().
  • A new exception JsonRpcExceedLimitException was added but not used. This appears to be dead code.
  • Remember to invalidate all caches in the close() method.

```java
  public JsonRpcExceedLimitException(String message, Throwable cause) {
    super(message, cause);
  }
}
```
(No newline at end of file)


🟡 This new exception class is not used anywhere in the codebase. In TronJsonRpcImpl.java, a JsonRpcInvalidParamsException is thrown when the block filter limit is exceeded.

Consider either using this new exception there (and updating the interface and error mappings accordingly) or removing this class to avoid dead code.

Comment on lines +509 to +516
```java
int result = 0;
result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
result = 31 * result + (logIndex == null ? 0 : logIndex.hashCode());
result = result + (removed ? 1 : 0);
return result;
}
```


🟡 The custom hashCode implementation could be improved. While it is functional, using java.util.Objects.hash() would be more concise, less error-prone, and is generally considered a best practice for generating hash codes from multiple fields.

Suggested change
```java
@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}
```

```java
  return result;
}

@Override
```

🟢 For consistency and to ensure all cache resources are released upon closing, you should also invalidate blockHashCache.

Suggested change
```java
logElementCache.invalidateAll();
blockHashCache.invalidateAll();
```

@0xbigapple closed this Jan 20, 2026
@0xbigapple reopened this Jan 20, 2026
@github-actions bot left a comment

## 📋 Review Summary

This pull request effectively optimizes memory allocation for JSON-RPC filters by introducing a configurable limit and using a cache for filter elements. The core logic is sound and should improve performance and stability under heavy loads.

🔍 General Feedback

  • Good Optimization: The use of Guava's Cache is a good choice for managing the lifecycle of filter elements and block hashes, preventing potential memory leaks.
  • Unrelated Changes: The PR includes some changes that seem unrelated to the main goal, such as modifications to the P2P service startup and the addition of unrelated tests. It's generally better to keep PRs focused on a single concern.
  • Code Clarity: The new code is generally clear and well-written. Using the newly defined JsonRpcExceedLimitException would further improve the clarity of error handling.

```java
} else {
  blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {
```

🟡 The newly created `JsonRpcExceedLimitException` should be used here instead of `JsonRpcInvalidParamsException` to provide a more specific error when the block filter limit is exceeded. This improves error handling clarity.

Suggested change
```java
if (blockFilter2Result.size() >= maxBlockFilterNum) {
  throw new JsonRpcExceedLimitException(
      "exceed max block filters: " + maxBlockFilterNum + ", try again later");
```

Comment on lines +508 to +514
```java
public int hashCode() {
  int result = 0;
  result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
  result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
  result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
  result = 31 * result + (logIndex == null ? 0 : logIndex.hashCode());
  result = result + (removed ? 1 : 0);
```

🟢 For better readability and conciseness, you can use `Objects.hash()` to implement the `hashCode` method.

Suggested change
```java
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}
```

```java
  public JsonRpcExceedLimitException(String message, Throwable cause) {
    super(message, cause);
  }
}
```
(No newline at end of file)


🟢 It's a good practice to add a newline at the end of the file. Most Java style guides recommend it.

Comment on lines 98 to +101
```java
PeerManager.init();
relayService.init();
effectiveCheckService.init();
```


🟡 The changes in this file, particularly moving the P2P service initialization, seem unrelated to the primary goal of this pull request, which is optimizing JSON-RPC filter memory allocation. Could you clarify the reason for this change or consider moving it to a separate PR to maintain focus and clarity?

```diff
@@ -0,0 +1,40 @@
+package org.tron.core.net.service.nodepersist;
```


🟡 This new test file seems unrelated to the JSON-RPC filter optimization. To keep pull requests focused on a single concern, it would be better to submit this in a separate PR.

Comment on lines +231 to +233
```java
@Test
public void testIsMining() {
  Assert.assertTrue(wallet.isMining());
```


🟡 The new test `testIsMining` seems unrelated to the main purpose of this PR. Please consider moving unrelated changes to a separate pull request to ensure each PR addresses a single, specific issue.

@0xbigapple closed this Jan 21, 2026
@0xbigapple reopened this Jan 21, 2026
@github-actions bot left a comment

## 📋 Review Summary

This pull request introduces optimizations and hardening for the JSON-RPC service, primarily by adding caching for log filters and block hashes, and limiting the number of active block filters to prevent resource exhaustion. The changes appear to be safe and improve the robustness of the service.

🔍 General Feedback

  • The introduction of caching with reasonable expiration and size limits is a good performance enhancement.
  • The startup order change in TronNetService improves lifecycle safety by ensuring dependent services are initialized before the P2P service starts.
  • The addition of unit tests for new functionality is appreciated.

Comment on lines +485 to +512
```java
if (this == o) {
  return true;
}
if (o == null || this.getClass() != o.getClass()) {
  return false;
}
LogFilterElement item = (LogFilterElement) o;
if (!Objects.equals(blockHash, item.blockHash)) {
  return false;
}
if (!Objects.equals(transactionHash, item.transactionHash)) {
  return false;
}
if (!Objects.equals(transactionIndex, item.transactionIndex)) {
  return false;
}
if (!Objects.equals(logIndex, item.logIndex)) {
  return false;
}
return removed == item.removed;
}

@Override
public int hashCode() {
  int result = 0;
  result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
  result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
  result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
```


🟢 These equals and hashCode implementations can be simplified using java.util.Objects.

Suggested change
```java
@Override
public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement that = (LogFilterElement) o;
  return removed == that.removed &&
      Objects.equals(blockHash, that.blockHash) &&
      Objects.equals(transactionHash, that.transactionHash) &&
      Objects.equals(transactionIndex, that.transactionIndex) &&
      Objects.equals(logIndex, that.logIndex);
}

@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}
```

```java
  public JsonRpcExceedLimitException(String message, Throwable cause) {
    super(message, cause);
  }
}
```
(No newline at end of file)


🟢 It's a good practice to have a newline at the end of the file.

Suggested change
```java
}
```

@0xbigapple closed this Jan 23, 2026
@0xbigapple reopened this Jan 23, 2026
@github-actions bot left a comment

Creating a pending review for the pull request.

@github-actions bot left a comment

## 📋 Review Summary

This Pull Request primarily focuses on optimizing memory allocation for JSON-RPC filter interfaces and introducing a configurable limit for block filters. The core changes involve implementing Guava Caches for log elements and block hashes, and adding equals() and hashCode() methods to LogFilterElement for effective caching. The introduction of JsonRpcExceedLimitException and the maxBlockFilterNum configuration parameter are well-integrated.

🔍 General Feedback

  • The main functionality related to JSON-RPC filter optimization and the introduction of block filter limits seems well-implemented, with appropriate caching mechanisms and configuration.
  • Two new test additions, NodePersistServiceTest.java and ShieldedReceiveTest.java#testIsMining(), appear to be outside the immediate scope of this PR. It would be beneficial for future PRs to maintain a tighter focus on a single feature or fix.
  • There are a couple of TODO comments regarding mock tests that could be addressed in separate follow-up tasks to improve test stability.
  • The reordering of P2P service startup in TronNetService.java should be carefully verified to ensure no unintended side effects or dependency issues.

```java
  return null;
}

private void deleteCheckpoint() {
```


🟢 This TODO comment indicates a potential issue in mock tests. While not directly related to the current PR's functionality, it's good to keep track of these for future improvements.

```java
  return false;
}
int queueSize = 0;
if (eventListeners == null || eventListeners.isEmpty()) {
```


🟢 Similar to the SnapshotManager, this TODO indicates a potential issue in mock tests that might warrant a separate fix.

```java
fetchBlockService.init();
nodePersistService.init();
tronStatsManager.init();
PeerManager.init();
```


🟡 The reordering of the P2P service start to the end of the start() method is noted. Please ensure this change does not introduce any unexpected dependencies or issues with other services that might rely on the P2P service being initialized earlier.


🟡 This new test file (NodePersistServiceTest.java) appears to be outside the scope of optimizing JSON-RPC filter interfaces. Please consider if this change should be part of a separate PR for better scope management.

totalShieldedPoolValue -= valueBalance;
chainBaseManager.getDynamicPropertiesStore().saveTotalShieldedPoolValue(totalShieldedPoolValue);
}


🟡 The testIsMining() method in ShieldedReceiveTest.java also seems unrelated to the PR's core purpose. It would be beneficial to keep PRs focused on a single concern.


@github-actions github-actions bot left a comment


📋 Review Summary

This Pull Request introduces optimizations for memory allocation within the JSON-RPC filter interface, specifically for eth_newFilter and eth_newBlockFilter. It also adds a configurable limit (node.jsonrpc.maxBlockFilterNum) for the number of block filters to mitigate potential DoS risks. Overall, the changes address an important aspect of system stability and resource management.

🔍 General Feedback

  • The introduction of caching for LogFilterElement and block hashes is a good approach to reduce memory overhead and improve performance. The addition of equals() and hashCode() for LogFilterElement is crucial for correct cache functionality.
  • The new JsonRpcExceedLimitException and its integration for limiting block filters provides a necessary safeguard against resource exhaustion.
  • Configuration changes across various config.conf files ensure that the new maxBlockFilterNum setting is consistently applied and configurable.
  • Comprehensive unit tests for LogFilterElement's equals() and hashCode() methods enhance code reliability.
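The deduplication pattern those bullets describe can be condensed into a small sketch. This is an illustrative stand-in, not the PR's actual classes: `LogElement` is a simplified value object, and a plain `ConcurrentHashMap` stands in for Guava's `Cache`; the mechanism — mapping each element to itself so that `equals()`-equal instances collapse to one canonical object — is the same.

```java
import java.util.Map;
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;

public class InternDemo {
    // Hypothetical value object; correct equals()/hashCode() are what make
    // cache-based deduplication work at all.
    static final class LogElement {
        final String blockHash;
        final long logIndex;

        LogElement(String blockHash, long logIndex) {
            this.blockHash = blockHash;
            this.logIndex = logIndex;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) {
                return true;
            }
            if (!(o instanceof LogElement)) {
                return false;
            }
            LogElement e = (LogElement) o;
            return logIndex == e.logIndex && blockHash.equals(e.blockHash);
        }

        @Override
        public int hashCode() {
            return Objects.hash(blockHash, logIndex);
        }
    }

    // The cache maps each element to itself, so equal instances share one object.
    static final Map<LogElement, LogElement> cache = new ConcurrentHashMap<>();

    static LogElement intern(LogElement e) {
        return cache.computeIfAbsent(e, k -> k);
    }

    public static void main(String[] args) {
        LogElement a = intern(new LogElement("0xabc", 1));
        LogElement b = intern(new LogElement("0xabc", 1));
        System.out.println(a == b); // prints: true — both resolve to one instance
    }
}
```

The same shape is what `Cache.get(key, () -> key)` achieves in the PR, with the added benefit of size bounds and expiry.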

public JsonRpcExceedLimitException(String message, Throwable cause) {
super(message, cause);
}
} No newline at end of file


🟢 Consider adding a newline at the end of the file for better POSIX compatibility and to align with common Java formatting practices.

fetchBlockService.init();
nodePersistService.init();
tronStatsManager.init();
PeerManager.init();


🟠 The P2P service start has been reordered to the end of the start() method. Please provide more context on the rationale behind this reordering. Specifically, ensure that this change does not negatively impact network stability, peer discovery, or other lifecycle aspects, especially in a distributed blockchain environment where timing and initialization order can be critical for consensus and state synchronization.

}

BlockCapsule blockCapsule = new BlockCapsule(block);
long blockNum = blockCapsule.getNum();


🟢 This line contains redundant spaces. Consider removing them for consistency with coding style guidelines.

Suggested change
long blockNum = blockCapsule.getNum();
long energyFee = wallet.getEnergyFee(blockCapsule.getTimeStamp());

PBFT
}

private static final String FILTER_NOT_FOUND = "filter not found";


🟡 The logElementCache and blockHashCache are configured with relatively large maximum sizes (300,000 and 60,000 respectively). While the comments provide a rationale, it's important to monitor the memory footprint of these caches in a production environment, especially under sustained high load. Consider adding metrics to track cache size and hit/miss ratios to ensure efficient memory utilization and to detect potential memory pressure.
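For the visibility this comment asks for, Guava's own `CacheBuilder.recordStats()` / `Cache.stats()` would be the natural choice in the actual code. As a dependency-free sketch of the same idea (all names here are illustrative, not from the PR), hit/miss counters can be kept alongside the map:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;
import java.util.function.Function;

public class CountingCache<K, V> {
    private final Map<K, V> map = new ConcurrentHashMap<>();
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();

    // Look up the key, loading it on a miss; counters track effectiveness.
    public V get(K key, Function<K, V> loader) {
        V existing = map.get(key);
        if (existing != null) {
            hits.increment();
            return existing;
        }
        misses.increment();
        return map.computeIfAbsent(key, loader);
    }

    public long hitCount() { return hits.sum(); }

    public long missCount() { return misses.sum(); }

    public long size() { return map.size(); }

    public static void main(String[] args) {
        CountingCache<String, String> cache = new CountingCache<>();
        cache.get("block", k -> "hash");
        cache.get("block", k -> "hash");
        System.out.println(cache.hitCount() + "/" + cache.missCount()); // prints: 1/1
    }
}
```

Exporting such counters as metrics would make it straightforward to verify that the 300,000 / 60,000 sizing assumptions hold under production load.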

chainBaseManager.getDynamicPropertiesStore().saveTotalShieldedPoolValue(totalShieldedPoolValue);
}

@Test


🟡 The test testIsMining() appears to be unrelated to the scope of this pull request, which focuses on optimizing memory allocation for JSON-RPC filter interfaces and introducing maxBlockFilterNum. Consider moving this test to a more relevant test class or a separate pull request to maintain a clear separation of concerns.

@0xbigapple 0xbigapple closed this Jan 23, 2026
@0xbigapple 0xbigapple reopened this Jan 23, 2026
@0xbigapple 0xbigapple closed this Jan 23, 2026
@0xbigapple 0xbigapple reopened this Jan 23, 2026

@github-actions github-actions bot left a comment


📋 Review Summary

This PR introduces optimizations for the JSON-RPC filter interface by adding caching to reduce memory allocation. It also adds a configurable limit for the number of block filters. The changes are generally good, but there are a few concerns.

🔍 General Feedback

  • The change to the P2P service initialization order in TronNetService.java is a significant change that seems unrelated to the PR's main goal. It should be moved to a separate PR.
  • Resource management can be improved by ensuring all new caches are properly invalidated in the close() method.
  • Exception handling in cache loaders should be improved to avoid swallowing exceptions.
  • The newly added JsonRpcExceedLimitException is not used.

Comment on lines +104 to 107
p2pService.start(p2pConfig);
p2pService.register(p2pEventHandler);

logger.info("Net service start successfully");


🔴 The change in the P2P service initialization order is significant and seems unrelated to the primary goal of this PR, which is to optimize memory for JSON-RPC filters. Altering the startup sequence of core services can introduce subtle bugs, race conditions, or dependency issues that are hard to detect.

Please revert this change and move it to a separate PR where it can be reviewed with the appropriate context and justification. Bundling unrelated changes makes the review process difficult and increases the risk of introducing regressions.


@Override
public void close() throws IOException {
logElementCache.invalidateAll();


🟡 The newly introduced `blockHashCache` is not invalidated in the `close()` method, unlike `logElementCache`. While this might not be a critical issue if `TronJsonRpcImpl` is a singleton, it's a good practice to clean up all resources consistently. Please add `blockHashCache.invalidateAll()` to the `close()` method to prevent potential memory leaks in scenarios where the object might be re-initialized.
Suggested change
logElementCache.invalidateAll();
public void close() throws IOException {
logElementCache.invalidateAll();
blockHashCache.invalidateAll();
ExecutorServiceManager.shutdownAndAwaitTermination(sectionExecutor, esName);
}

try {
// compare with hashcode() first, then with equals(). If not exist, put it.
cachedBlockHash = blockHashCache.get(originalBlockHash, () -> originalBlockHash);
} catch (ExecutionException e) {


🟡 Catching `ExecutionException` and only logging it with a comment "never happen" can hide potential issues. The cause of the `ExecutionException` should be properly handled or re-thrown as a runtime exception to ensure that any unexpected errors during cache loading are propagated.
Suggested change
} catch (ExecutionException e) {
try {
// compare with hashcode() first, then with equals(). If not exist, put it.
cachedBlockHash = blockHashCache.get(originalBlockHash, () -> originalBlockHash);
} catch (ExecutionException e) {
logger.error("Getting/loading blockHash from cache failed", e);
throw new RuntimeException(e);
}

}
if (blockFilter2Result.size() >= maxBlockFilterNum) {
throw new JsonRpcInvalidParamsException(
"exceed max block filters: " + maxBlockFilterNum + ", try again later");


🟢 A new exception `JsonRpcExceedLimitException` was added in this PR, but it is not being used. Instead, a generic `JsonRpcInvalidParamsException` is thrown when the block filter limit is exceeded. To provide a more specific error to the API user, consider using the new exception here.
Suggested change
"exceed max block filters: " + maxBlockFilterNum + ", try again later");
if (blockFilter2Result.size() >= maxBlockFilterNum) {
throw new JsonRpcExceedLimitException(
"exceed max block filters: " + maxBlockFilterNum + ", try again later");
}

return null;
}

private void deleteCheckpoint() {


🟢 The comment `// only occurs in mock test. TODO fix test` suggests that there is an underlying issue with the tests that should be addressed. While this is a pre-existing issue, since this line is being touched in this PR, it's a good opportunity to highlight that the tests should be fixed to avoid having special conditions in the production code just for tests.

@0xbigapple 0xbigapple closed this Jan 23, 2026
@0xbigapple 0xbigapple reopened this Jan 23, 2026

@github-actions github-actions bot left a comment


📋 Review Summary

This Pull Request introduces optimizations for JSON-RPC filter interfaces by adding caching mechanisms and a configurable limit for block filters. It also includes various code modifications and new test cases. The overall quality of the changes is good, with a clear focus on performance and resource management.

🔍 General Feedback

  • The introduction of caching for log filter elements and block hashes is a positive step towards optimizing memory allocation and potentially improving performance for JSON-RPC filter interfaces.
  • The addition of unit tests for LogFilterElement's equals and hashCode methods, as well as new tests for NodePersistService, contributes to the overall test coverage and stability of the project.
  • The new maxBlockFilterNum configuration provides important control over resource usage, preventing potential abuse or excessive memory consumption.
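For reference, the new setting would be declared in `config.conf` along the following lines — the value shown is purely illustrative; consult the PR's configuration changes for the actual default:

```conf
node {
  jsonrpc {
    # Upper bound on concurrently registered eth_newBlockFilter instances;
    # requests beyond this limit are rejected. The value 500 is illustrative.
    maxBlockFilterNum = 500
  }
}
```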

return result;
}

@Override


🟠 High - blockHashCache is not explicitly invalidated during close(). Although expireAfterWrite provides some cleanup, explicit invalidation on shutdown is a safer practice for resource management.

Suggested change
@Override
logElementCache.invalidateAll();
blockHashCache.invalidateAll();


// Move P2P service start to the end
setP2pConfig(getConfig());
p2pService.start(p2pConfig);


🟠 High - The reordering of p2pService.start() to the end of the start() method could introduce race conditions or NullPointerException if services like advService, syncService, peerStatusCheck, etc., implicitly depend on p2pService being fully operational during their init() phase. This change should be carefully reviewed for implicit dependencies and potential startup issues.

eventFilter2Result.put(filterID, logFilterAndResult);
return ByteArray.toJsonHex(filterID);
}


🟡 Medium - The newBlockFilter() method's signature has been updated to throw JsonRpcInvalidParamsException. Clients calling this RPC method will need to handle this new exception, which indicates an intentional behavioral change when the maximum number of block filters is exceeded.


private void deleteCheckpoint() {
if(checkTmpStore == null) {
// only occurs in mock test. TODO fix test


🟡 Medium - The // only occurs in mock test. TODO fix test comment indicates a pre-existing issue within the mock tests. While not introduced by this PR, such comments highlight potential gaps in test coverage or design, which could mask real issues or lead to unexpected behavior in non-mock environments. It is recommended to address these TODOs to improve the robustness of the test suite.

@0xbigapple 0xbigapple closed this Jan 23, 2026
@0xbigapple 0xbigapple reopened this Jan 23, 2026

@github-actions github-actions bot left a comment


📋 Review Summary

This Pull Request introduces optimizations for JSON-RPC filter interfaces, primarily by implementing a configurable limit for block filters (maxBlockFilterNum) and introducing Guava caches to reduce memory allocation for LogFilterElement and block hash strings. It also includes a reordering of the P2P service startup and adds new unit tests. The changes are well-intentioned for performance and memory management.

🔍 General Feedback

  • The introduction of maxBlockFilterNum provides a necessary control for resource usage in the JSON-RPC filter interface, which is a good security and stability enhancement.
  • The use of Guava caches with proper equals and hashCode implementations for LogFilterElement is an effective strategy for memory optimization through object deduplication.
  • The new tests added demonstrate a good effort in ensuring the correctness of the new features and existing components.

PR Change Type Classification: Mixed (Modification of existing behavior and introduction of new logic)

2.1 Lifecycle & Resource Safety (Mandatory)

Resources created: logElementCache (Guava Cache), blockHashCache (Guava Cache).
logElementCache is explicitly invalidated in close(). blockHashCache relies on expireAfterWrite. While the latter is eventually consistent, an explicit invalidateAll() for blockHashCache would ensure immediate resource release during shutdown, which is a best practice for deterministic lifecycle management.

2.2 Executor / Thread Pool Shutdown Semantics (Mandatory)

Checked, no issues found. Guava caches perform maintenance on caller threads rather than spawning their own, and no new explicit ExecutorService instances requiring dedicated shutdown logic were introduced. The existing sectionExecutor is correctly handled.

2.3 Behavioral Compatibility & Implicit Invariants (Mandatory)

  • The reordering of P2P service startup in TronNetService.java might subtly change the timing of network availability. While likely intended, it's a behavioral change that could affect services expecting early P2P readiness.
  • The introduction of maxBlockFilterNum creates a new upper bound on the number of block filters, which will cause JsonRpcInvalidParamsException for clients exceeding this limit. This is a deliberate change for resource control and should be communicated as an API change.
  • The addition of equals and hashCode to LogFilterElement is an internal change that enables correct cache behavior and is compatible with its use as a value object.

2.4 Business Logic Correctness (Mandatory)

The logic for maxBlockFilterNum enforcement and the caching mechanisms using Cache.get(key, () -> key) appear correct and achieve the stated goals of memory optimization and resource control. The new exception JsonRpcExceedLimitException is appropriate.
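The enforcement described above can be sketched as follows. The exception name and the `maxBlockFilterNum` / `blockFilter2Result` fields follow the PR's description; the registry shape and filter-ID scheme are assumptions for illustration, and the limit is best-effort since the size check and the put are not atomic:

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class BlockFilterRegistry {
    // Mirrors the exception type added in the PR.
    public static class JsonRpcExceedLimitException extends Exception {
        public JsonRpcExceedLimitException(String message) {
            super(message);
        }
    }

    private final int maxBlockFilterNum;
    private final ConcurrentHashMap<String, Object> blockFilter2Result =
        new ConcurrentHashMap<>();

    public BlockFilterRegistry(int maxBlockFilterNum) {
        this.maxBlockFilterNum = maxBlockFilterNum;
    }

    // Reject new filters once the configured cap is reached (best-effort:
    // the check and the insert are not performed atomically).
    public String newBlockFilter() throws JsonRpcExceedLimitException {
        if (blockFilter2Result.size() >= maxBlockFilterNum) {
            throw new JsonRpcExceedLimitException(
                "exceed max block filters: " + maxBlockFilterNum + ", try again later");
        }
        String filterId = UUID.randomUUID().toString();
        blockFilter2Result.put(filterId, new Object());
        return filterId;
    }

    public static void main(String[] args) throws Exception {
        BlockFilterRegistry registry = new BlockFilterRegistry(1);
        registry.newBlockFilter();
        try {
            registry.newBlockFilter();
        } catch (JsonRpcExceedLimitException e) {
            System.out.println(e.getMessage());
        }
    }
}
```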

2.5 Concurrency & Execution Interleaving Risks (Mandatory)

The use of ConcurrentHashMap and thread-safe Guava Cache instances correctly handles concurrency for data structures. The change in P2P service startup order does not introduce new inherent race conditions but is a timing change.

2.6 Numeric Precision & Cross-JVM Determinism (Mandatory)

Checked, no issues found. The changes do not involve floating-point arithmetic or complex numeric operations that would introduce precision issues or cross-JVM non-determinism.

System-Level Risk Summary (Mandatory)

  • Consensus risk: Low. The changes are primarily in the JSON-RPC API and node services, not in core consensus logic.
  • State inconsistency risk: Low. The caching mechanisms are designed for efficiency and not for persistent state. The maxBlockFilterNum limit prevents resource exhaustion that could lead to inconsistent state.
  • Lifecycle or resource leak risk: Low. While blockHashCache could benefit from explicit invalidation, the expireAfterWrite mitigates significant leaks. logElementCache is explicitly invalidated.
  • Concurrency or execution-order risk: Low. Thread-safe data structures are used. The P2P service startup order change might have minor timing impacts but doesn't appear to introduce critical race conditions.
  • Cross-JVM non-determinism risk: Low. Numeric operations are deterministic.

Overall, the changes appear safe with respect to consensus, state consistency, lifecycle safety, and cross-JVM determinism. The reordering of P2P service initialization should be carefully considered for any subtle dependencies.

return null;
}

private void deleteCheckpoint() {


🟡 The TODO fix test comment indicates an existing issue in mock tests. While not directly introduced by this PR, addressing these mock test deficiencies is crucial for the reliability and completeness of the testing suite. This could affect lifecycle or state if the mock is not accurately reflecting production behavior.

if (useNativeQueue) {
return false;
}
int queueSize = 0;


🟡 Similar to the SnapshotManager comment, this TODO fix test indicates a potential gap or inaccuracy in the mock test setup for EventPluginLoader. Ensuring mocks accurately represent real-world scenarios is important for robust testing and preventing silent failures in the actual system.

fetchBlockService.init();
nodePersistService.init();
tronStatsManager.init();
PeerManager.init();


🟠 Moving the P2P service start to the end of the start() method is a significant change in the node's initialization sequence. This could introduce subtle behavioral compatibility issues if other services implicitly depend on P2P being available earlier. Ensure thorough testing for any timing-dependent failures or delayed network participation during node startup. Consider adding explicit synchronization mechanisms if such dependencies exist.

PBFT
}

private static final String FILTER_NOT_FOUND = "filter not found";


🟢 Adding a comment like // LRU cache is helpful for readability and maintainability, clarifying the cache eviction policy. Consider adding similar comments for other cache configurations where applicable for consistency.

private static final int maxBlockFilterNum = Args.getInstance().getJsonRpcMaxBlockFilterNum();
private static final Cache<LogFilterElement, LogFilterElement> logElementCache =
CacheBuilder.newBuilder()
.maximumSize(300_000L) // 300s * tps(1000) * 1 log/tx ≈ 300_000


🟢 Adding a comment like // LRU cache is helpful for readability and maintainability, clarifying the cache eviction policy. This improves code clarity.

Map<String, BlockFilterAndResult> blockFilter2Result;
if (getSource() == RequestSource.FULLNODE) {
blockFilter2Result = blockFilter2ResultFull;
} else {


🟠 The introduction of maxBlockFilterNum and the associated check in newBlockFilter() represents a new restriction on the eth_newBlockFilter API. While crucial for resource management, this change can affect existing API clients that might expect an unbounded number of filters. This behavioral change should be clearly documented for API consumers to prevent unexpected JsonRpcInvalidParamsException errors.


return result;
}


🟡 The blockHashCache also holds memory resources. Although expireAfterWrite provides eventual cleanup, explicitly calling blockHashCache.invalidateAll() alongside logElementCache.invalidateAll() in the close() method would ensure immediate and deterministic resource release upon shutdown, improving lifecycle safety.

Suggested change
logElementCache.invalidateAll();
blockHashCache.invalidateAll();
ExecutorServiceManager.shutdownAndAwaitTermination(sectionExecutor, esName);

@0xbigapple 0xbigapple closed this Jan 23, 2026
@0xbigapple 0xbigapple reopened this Jan 23, 2026

@github-actions github-actions bot left a comment


📋 Review Summary

This Pull Request aims to optimize memory allocation for JSON-RPC filter interfaces and introduces a new configuration item for maxBlockFilterNum. It also includes refactoring of P2P service startup and caching mechanisms for block hashes and log filter elements. The changes improve resource management and introduce limits to prevent potential resource exhaustion.

🔍 General Feedback

  • The PR introduces several new features and optimizations, including a new exception type, configuration parameter, and caching. These additions generally enhance the robustness and performance of the JSON-RPC service.
  • The addition of equals() and hashCode() methods for LogFilterElement is crucial for the correct functioning of the caching mechanism.
  • Several test cases have been added or modified, which is a good practice for validating the changes.
  • Careful attention should be paid to the identified critical and high-severity issues, particularly regarding breaking API changes, potential concurrency issues in filter handling, and the reordering of critical service initialization.

String newFilter(FilterRequest fr) throws JsonRpcInvalidParamsException,
JsonRpcMethodNotFoundException;

@JsonRpcMethod("eth_newBlockFilter")


🔴 The eth_newBlockFilter API now throws JsonRpcInvalidParamsException when maxBlockFilterNum is exceeded. This is a breaking behavioral change for existing clients that may not be prepared to handle this new exception, potentially leading to client-side errors or crashes.

Suggested change
@JsonRpcMethod("eth_newBlockFilter")
String newBlockFilter() throws JsonRpcMethodNotFoundException;

fetchBlockService.init();
nodePersistService.init();
tronStatsManager.init();
PeerManager.init();


🔴 The P2P service initialization in TronNetService.java::start() has been moved to the end of the method. This reordering could introduce runtime issues if advService, syncService, peerStatusCheck, PeerManager, relayService, or effectiveCheckService (or any other service initialized before the P2P service) have implicit dependencies on a fully operational P2P network. This could lead to NullPointerException or incorrect network state during startup. Thorough testing of all service interdependencies during startup is critical.

Suggested change
PeerManager.init();
setP2pConfig(getConfig());
p2pService.start(p2pConfig);
p2pService.register(p2pEventHandler);
advService.init();
syncService.init();
peerStatusCheck.init();
...

List<LogFilterElement> elements =
LogMatch.matchBlock(logFilter, logsFilterCapsule.getBlockNumber(),
logsFilterCapsule.getBlockHash(), logsFilterCapsule.getTxInfoList(),
logsFilterCapsule.isRemoved());


🔴 The List returned by BlockFilterAndResult.getResult() and LogFilterAndResult.getResult() is being modified concurrently in handleBLockFilter and handleLogsFilter through .add() operations. If these lists are not thread-safe (e.g., ArrayList), this will lead to ConcurrentModificationException or data corruption under concurrent access. Consider using thread-safe collections like CopyOnWriteArrayList or Collections.synchronizedList().

Suggested change
logsFilterCapsule.isRemoved());
// Change the return type of getResult() to a thread-safe List or
// ensure all modifications happen under a lock.
// For example, if getResult() returns a new CopyOnWriteArrayList:
// logFilterAndResult.getResult().add(cachedElement);
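A self-contained illustration of the `CopyOnWriteArrayList` option from that suggestion — `FilterResult` here is a simplified stand-in for the PR's result-holder types, not the actual class:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class FilterResult<T> {
    // CopyOnWriteArrayList makes add() safe against concurrent iteration,
    // at the cost of copying the backing array on every write.
    private final List<T> result = new CopyOnWriteArrayList<>();

    public void add(T element) {
        result.add(element);
    }

    // Readers iterate a consistent snapshot while writers keep appending.
    public List<T> getResult() {
        return result;
    }

    public static void main(String[] args) {
        FilterResult<String> filterResult = new FilterResult<>();
        filterResult.add("log-1");
        filterResult.add("log-2");
        System.out.println(filterResult.getResult().size()); // prints: 2
    }
}
```

`CopyOnWriteArrayList` favors read-heavy access; if filter results are appended far more often than they are read, `Collections.synchronizedList()` or draining the list under an explicit lock may be the better trade-off.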

import org.tron.core.services.jsonrpc.types.BlockResult;
import org.tron.core.services.jsonrpc.types.BuildArguments;
import org.tron.core.services.jsonrpc.types.CallArguments;
import org.tron.core.services.jsonrpc.types.TransactionReceipt;


🟢 The import import org.tron.core.net.peer.Item; in framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpc.java is unused and should be removed.

Suggested change
import org.tron.core.services.jsonrpc.types.TransactionReceipt;
-import org.tron.core.net.peer.Item;


return result;
}


🟠 blockHashCache is not explicitly invalidated in TronJsonRpcImpl.java::close(). For consistency and to prevent potential subtle resource leaks in scenarios involving classloader resets or dynamic class reloading, all static cache resources should be explicitly invalidated during shutdown.

Suggested change
logElementCache.invalidateAll();
blockHashCache.invalidateAll();
