fix(api): Optimize memory allocation for the filter interface of JSON-RPC #3
base: pr-6459-base
Conversation
…, eth_newFilter, eth_newBlockFilter (tronprotocol#6495)
This pull request introduces optimizations for the JSON-RPC filter interface by adding caching to reduce memory allocation, which is a significant improvement. It also adds a configurable limit for the number of block filters. The changes are well-implemented, but there are a few minor areas for improvement regarding resource management and code consistency.
🔍 General Feedback
- Good Caching Implementation: The use of Guava's `Cache` to reduce object creation for `LogFilterElement` and block hashes is a good performance enhancement.
- New Exception: A new exception `JsonRpcExceedLimitException` was added but not used. It should be used to provide more specific error information.
- Resource Cleanup: Not all new caches are being invalidated in the `close()` method, which could lead to memory leaks.
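The caching described above amounts to interning: before keeping a freshly built element, look up an equal one already in the cache and reuse it. A minimal stdlib sketch of that pattern follows; `FilterElement` and its fields are hypothetical stand-ins, and the PR uses a bounded Guava `Cache` rather than this unbounded `ConcurrentHashMap`.

```java
import java.util.Objects;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of the interning pattern: equal elements are deduplicated through a
// key -> key map, so repeated filter results share one instance in memory.
public class InternDemo {
  static final ConcurrentMap<FilterElement, FilterElement> CACHE = new ConcurrentHashMap<>();

  static FilterElement intern(FilterElement e) {
    // Return the previously cached equal instance, or cache this one.
    FilterElement prior = CACHE.putIfAbsent(e, e);
    return prior != null ? prior : e;
  }

  // Reduced, hypothetical stand-in for LogFilterElement; correct equals and
  // hashCode are what make the cache lookup work.
  static final class FilterElement {
    final String blockHash;
    final int logIndex;

    FilterElement(String blockHash, int logIndex) {
      this.blockHash = blockHash;
      this.logIndex = logIndex;
    }

    @Override
    public boolean equals(Object o) {
      if (this == o) {
        return true;
      }
      if (!(o instanceof FilterElement)) {
        return false;
      }
      FilterElement that = (FilterElement) o;
      return logIndex == that.logIndex && Objects.equals(blockHash, that.blockHash);
    }

    @Override
    public int hashCode() {
      return Objects.hash(blockHash, logIndex);
    }
  }

  public static void main(String[] args) {
    FilterElement a = intern(new FilterElement("0xabc", 0));
    FilterElement b = intern(new FilterElement("0xabc", 0));
    System.out.println(a == b); // true: the second equal element reuses the cached instance
  }
}
```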
```java
    return result;
  }

  @Override
```
Suggested change:
```java
  @Override
  logElementCache.invalidateAll();
  blockHashCache.invalidateAll();
```
```java
} else {
  blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {
```
Suggested change:
```java
if (blockFilter2Result.size() >= maxBlockFilterNum) {
  throw new JsonRpcExceedLimitException(
      "exceed max block filters: " + maxBlockFilterNum + ", try again later");
}
```
```java
} catch (ExecutionException e) {
  logger.error("Getting/loading blockHash from cache failed", e); // never happen
  cachedBlockHash = originalBlockHash;
}
```
Suggested change:
```java
// The callable cannot throw a checked exception, so this should never happen.
logger.error("Getting/loading blockHash from cache failed", e);
```
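For context on why that catch branch is effectively dead: Guava's `Cache.get(key, Callable)` wraps loader failures in a checked `ExecutionException`, so if the callable itself throws no checked exception, the wrapper can never carry one. A stdlib sketch of the same load-through idea, with a hypothetical `toBlockHash` loader standing in for reading the hash from the chain store:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Load-through caching sketch: computeIfAbsent runs the loader at most once
// per missing key, then serves subsequent lookups from the map.
public class BlockHashCacheDemo {
  static final ConcurrentMap<Long, String> BLOCK_HASH_CACHE = new ConcurrentHashMap<>();

  static String getBlockHash(long blockNum) {
    return BLOCK_HASH_CACHE.computeIfAbsent(blockNum, BlockHashCacheDemo::toBlockHash);
  }

  // Hypothetical loader; in the real code this would read from storage.
  static String toBlockHash(long blockNum) {
    return "0x" + Long.toHexString(blockNum);
  }

  public static void main(String[] args) {
    System.out.println(getBlockHash(255L)); // 0xff, computed once, then cached
  }
}
```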
```java
result = result + (removed ? 1 : 0);
return result;
```
Suggested change:
```java
@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}
```
📋 Review Summary
This PR introduces memory optimizations for JSON-RPC filters by implementing caching for LogFilterElement and block hashes, which is a solid improvement. It also adds a limit to the number of block filters to prevent resource exhaustion.
🔍 General Feedback
- Unused Exception: The new `JsonRpcExceedLimitException` class is added but not utilized in the implementation.
- Scope Creep: There are several changes unrelated to the PR's stated objective, such as the P2P service startup order change in `TronNetService.java` and new unrelated tests in `NodePersistServiceTest.java` and `ShieldedReceiveTest.java`. These should ideally be separated or explicitly justified.
- Production Code Comments: There are "TODO fix test" comments added to production code (e.g., `SnapshotManager.java`), which indicates that production code is being modified to work around test issues.
- Configuration: Consider exposing the cache sizes as configuration parameters for better tunability.
```java
  blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {
  throw new JsonRpcInvalidParamsException(
```
🟡 The class JsonRpcExceedLimitException was added in this PR but is not used here. You are throwing JsonRpcInvalidParamsException instead.
If you intend to use JsonRpcExceedLimitException, please ensure you:
- Update this throw statement.
- Update the
newBlockFiltermethod signature inTronJsonRpc.javato throwJsonRpcExceedLimitException. - Add the corresponding
@JsonRpcErrorannotation inTronJsonRpc.java.
```java
private static final int maxBlockFilterNum = Args.getInstance().getJsonRpcMaxBlockFilterNum();
private static final Cache<LogFilterElement, LogFilterElement> logElementCache =
    CacheBuilder.newBuilder()
        .maximumSize(300_000L) // 300s * tps(1000) * 1 log/tx ≈ 300_000
```
🟢 The cache maximum sizes are hardcoded. Consider making these configurable via Args or CommonParameter, similar to jsonRpcMaxBlockFilterNum. This would provide flexibility for nodes with different memory constraints.
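To illustrate what a configurable size bound buys, here is a stdlib sketch of a size-limited LRU map: once the limit is hit, the least-recently-used entry is evicted. The bound is a constructor parameter, mirroring the suggestion to read it from configuration instead of hardcoding it. Guava's `Cache` uses a similar but more sophisticated eviction policy; this is only an illustration, not the PR's implementation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Size-bounded LRU map: removeEldestEntry evicts the least-recently-used
// entry whenever the configured maximum is exceeded.
public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
  private final int maxSize;

  public BoundedCache(int maxSize) {
    super(16, 0.75f, true); // access-order iteration gives LRU behavior
    this.maxSize = maxSize;
  }

  @Override
  protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
    return size() > maxSize;
  }

  public static void main(String[] args) {
    BoundedCache<Integer, String> cache = new BoundedCache<>(2);
    cache.put(1, "a");
    cache.put(2, "b");
    cache.get(1);      // touch 1 so that 2 becomes least recently used
    cache.put(3, "c"); // exceeds the bound, evicting 2
    System.out.println(cache.keySet()); // [1, 3]
  }
}
```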
```java
private void deleteCheckpoint() {
  if (checkTmpStore == null) {
    // only occurs in mock test. TODO fix test
```
🟢 Adding comments about mock tests in production code is generally discouraged. If this null check is required for safety in production, the comment might be misleading. If it is only for tests, the tests should ideally be fixed to set up the environment correctly rather than patching production code.
```java
effectiveCheckService.init();

// Move P2P service start to the end
setP2pConfig(getConfig());
```
🟡 This change alters the startup order of services and seems unrelated to the PR's objective of optimizing memory allocation for JSON-RPC. While postponing P2P start might be a valid fix, it should ideally be in a separate PR or clearly documented in the PR description to ensure it gets proper visibility and testing regarding side effects.
```java
}

@Test
public void testIsMining() {
```
🟢 This test case seems unrelated to the PR's scope.
📋 Review Summary
The PR successfully implements memory optimization for JSON-RPC filters by introducing a caching mechanism for LogFilterElement and block hashes, and it adds a configurable limit to the number of block filters. These are valuable improvements for stability and resource usage.
🔍 General Feedback
- Scope & Risk: There is a significant change to the service startup sequence in `TronNetService` that appears unrelated to the PR's objective. This poses a potential regression risk and should be scrutinized or reverted.
- Unrelated Changes: Several files (`NodePersistServiceTest`, `ShieldedReceiveTest`) contain changes that do not align with the PR title. It is best practice to keep PRs focused on a single responsibility.
- Defensive Coding: Some null checks were added to production code explicitly to fix test failures ("only occurs in mock test"). While safe, it is preferable to fix the tests themselves to avoid polluting production logic.
```java
// Move P2P service start to the end
setP2pConfig(getConfig());
p2pService.start(p2pConfig);
p2pService.register(p2pEventHandler);
```
🟠 This change modifies the service startup sequence by moving p2pService initialization to the end. This falls outside the scope of the PR's objective ("Optimize memory allocation for JSON-RPC").
Changing the startup order can have significant side effects if other services (like PeerManager, syncService, etc.) rely on the P2P service being active during their initialization.
Please revert this change unless it is strictly necessary for the memory optimization, in which case, please provide a detailed explanation of the dependency.
```java
@Override
public void close() throws IOException {
  logElementCache.invalidateAll();
```
🟢 For consistency and to ensure all resources are released, consider invalidating the blockHashCache here as well.
Suggested change:
```java
logElementCache.invalidateAll();
blockHashCache.invalidateAll();
```
```diff
@@ -0,0 +1,40 @@
package org.tron.core.net.service.nodepersist;
```
🟢 This new test file appears unrelated to the PR's objective of optimizing JSON-RPC memory allocation. It would be better to include this in a separate PR to keep changes focused.
```java
private void deleteCheckpoint() {
  if (checkTmpStore == null) {
    // only occurs in mock test. TODO fix test
```
🟢 While this check prevents the test failure, leaving a TODO about a test fix in production code is technical debt. It is recommended to fix the test setup so that checkTmpStore is properly initialized, rather than patching the production code.
```java
}
int queueSize = 0;
if (eventListeners == null || eventListeners.isEmpty()) {
  // only occurs in mock test. TODO fix test
```
🟢 Similar to the comment in SnapshotManager, try to address the root cause in the test environment instead of modifying production code with test-specific fixes.
📋 Review Summary
This PR introduces a configurable limit for JSON-RPC block filters to prevent excessive memory usage, which is a good improvement. The use of Guava's Cache for managing filter elements and block hashes is also a solid choice for performance and memory management.
🔍 General Feedback
- The PR includes some unrelated changes (e.g., in `TronNetService` and some test files). It would be best to move these to separate PRs to keep this one focused on the JSON-RPC filter optimization.
- There are some minor improvements that can be made, like using the newly created exception and simplifying some code.
- Please address the new `TODO` comments.
Overall, the core changes are good, but the PR would be stronger if it was more focused.
🟢 The new test file NodePersistServiceTest.java seems unrelated to the PR's goal, which is to optimize memory allocation for the filter interface of JSON-RPC. Please consider moving this to a separate PR to keep this PR focused.
```java
  public JsonRpcExceedLimitException(String message, Throwable cause) {
    super(message, cause);
  }
}
```
(No newline at end of file.)
🟢 Nitpick: Please add a newline at the end of the file.
🟢 The changes in this file, moving the P2P service start to the end, seem unrelated to the PR's goal of optimizing memory allocation for the JSON-RPC filter interface. Could you please clarify why these changes are included in this PR or move them to a separate one?
🟡 You've created a specific exception JsonRpcExceedLimitException, but here you are throwing a generic JsonRpcInvalidParamsException. It would be more consistent to use the new exception.
Suggested change:
```java
throw new JsonRpcExceedLimitException(
    "exceed max block filters: " + maxBlockFilterNum + ", try again later");
```
🟢 This new test testIsMining seems unrelated to the PR's goal of optimizing the JSON-RPC filter interface. Please consider moving it to a separate, more relevant PR.
```java
@Override
public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || this.getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement item = (LogFilterElement) o;
  if (!Objects.equals(blockHash, item.blockHash)) {
    return false;
  }
  if (!Objects.equals(transactionHash, item.transactionHash)) {
    return false;
  }
  if (!Objects.equals(transactionIndex, item.transactionIndex)) {
    return false;
  }
  if (!Objects.equals(logIndex, item.logIndex)) {
    return false;
  }
  return removed == item.removed;
}

@Override
public int hashCode() {
  int result = 0;
  result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
  result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
  result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
  result = 31 * result + (logIndex == null ? 0 : logIndex.hashCode());
  result = result + (removed ? 1 : 0);
  return result;
}
```
🟢 The equals and hashCode implementations can be simplified using java.util.Objects.
Suggested change:
```java
@Override
public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement that = (LogFilterElement) o;
  return removed == that.removed && Objects.equals(blockHash, that.blockHash)
      && Objects.equals(transactionHash, that.transactionHash)
      && Objects.equals(transactionIndex, that.transactionIndex)
      && Objects.equals(logIndex, that.logIndex);
}

@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}
```
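Why this matters for the caches: a cache keyed on these elements can only deduplicate correctly if equal elements also hash equally. A quick, self-contained check of the `Objects.hash`-based rewrite; the fields here are a reduced, hypothetical subset of `LogFilterElement`, including a null field to show that `Objects.hash` needs no per-field null checks.

```java
import java.util.Objects;

// Verifies the equals/hashCode contract that cache lookups depend on:
// equal elements must produce equal hash codes, even with null fields.
public class HashContractDemo {
  final String blockHash;
  final Integer logIndex;
  final boolean removed;

  HashContractDemo(String blockHash, Integer logIndex, boolean removed) {
    this.blockHash = blockHash;
    this.logIndex = logIndex;
    this.removed = removed;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (o == null || getClass() != o.getClass()) {
      return false;
    }
    HashContractDemo that = (HashContractDemo) o;
    return removed == that.removed && Objects.equals(blockHash, that.blockHash)
        && Objects.equals(logIndex, that.logIndex);
  }

  @Override
  public int hashCode() {
    // Objects.hash handles nulls internally, unlike the manual 31*result form.
    return Objects.hash(blockHash, logIndex, removed);
  }

  public static void main(String[] args) {
    HashContractDemo a = new HashContractDemo("0xabc", null, false);
    HashContractDemo b = new HashContractDemo("0xabc", null, false);
    System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
  }
}
```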
```java
}

private void deleteCheckpoint() {
  if (checkTmpStore == null) {
```
🟢 It's good practice to address TODO comments. Could you please fix the test as the comment suggests? If not in this PR, please create a ticket to track this.
📋 Review Summary
This pull request introduces performance optimizations for JSON-RPC filter interfaces by adding caching for log elements and block hashes, and it also imposes a configurable limit on the number of active block filters to prevent resource exhaustion. Additionally, the PR includes some unrelated changes, such as new tests and reordering of service initialization.
🔍 General Feedback
- The use of Guava caches is a good approach for optimizing memory and performance. The configuration and usage patterns are generally sound.
- The addition of a limit for block filters is a valuable safeguard for the node's stability.
- The PR could be improved by separating the unrelated changes (e.g., new tests for different packages, P2P service initialization reordering) into their own pull requests for better review focus and clearer commit history.
- The newly implemented `hashCode` method in `LogFilterElement` is functional but could be simplified and made more robust by using `Objects.hash()`.
- A new exception `JsonRpcExceedLimitException` was added but not used. This appears to be dead code.
- Remember to invalidate all caches in the `close()` method.
```java
  public JsonRpcExceedLimitException(String message, Throwable cause) {
    super(message, cause);
  }
}
```
(No newline at end of file.)
🟡 This new exception class is not used anywhere in the codebase. In TronJsonRpcImpl.java, a JsonRpcInvalidParamsException is thrown when the block filter limit is exceeded.
Consider either using this new exception there (and updating the interface and error mappings accordingly) or removing this class to avoid dead code.
```java
  int result = 0;
  result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
  result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
  result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
  result = 31 * result + (logIndex == null ? 0 : logIndex.hashCode());
  result = result + (removed ? 1 : 0);
  return result;
}
```
🟡 The custom hashCode implementation could be improved. While it is functional, using java.util.Objects.hash() would be more concise, less error-prone, and is generally considered a best practice for generating hash codes from multiple fields.
Suggested change:
```java
@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}
```
```java
  return result;
}

@Override
```
🟢 For consistency and to ensure all cache resources are released upon closing, you should also invalidate blockHashCache.
Suggested change:
```java
logElementCache.invalidateAll();
blockHashCache.invalidateAll();
```
This pull request effectively optimizes memory allocation for JSON-RPC filters by introducing a configurable limit and using a cache for filter elements. The core logic is sound and should improve performance and stability under heavy loads.
🔍 General Feedback
- Good Optimization: The use of Guava's `Cache` is a good choice for managing the lifecycle of filter elements and block hashes, preventing potential memory leaks.
- Unrelated Changes: The PR includes some changes that seem unrelated to the main goal, such as modifications to the P2P service startup and the addition of unrelated tests. It's generally better to keep PRs focused on a single concern.
- Code Clarity: The new code is generally clear and well-written. Using the newly defined `JsonRpcExceedLimitException` would further improve the clarity of error handling.
```java
} else {
  blockFilter2Result = blockFilter2ResultSolidity;
}
if (blockFilter2Result.size() >= maxBlockFilterNum) {
```
Suggested change:
```java
if (blockFilter2Result.size() >= maxBlockFilterNum) {
  throw new JsonRpcExceedLimitException(
      "exceed max block filters: " + maxBlockFilterNum + ", try again later");
```
```java
public int hashCode() {
  int result = 0;
  result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
  result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
  result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
  result = 31 * result + (logIndex == null ? 0 : logIndex.hashCode());
  result = result + (removed ? 1 : 0);
```
Suggested change:
```java
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}
```
```java
  public JsonRpcExceedLimitException(String message, Throwable cause) {
    super(message, cause);
  }
}
```
(No newline at end of file.)
```java
PeerManager.init();
relayService.init();
effectiveCheckService.init();
```
```diff
@@ -0,0 +1,40 @@
package org.tron.core.net.service.nodepersist;
```
```java
@Test
public void testIsMining() {
  Assert.assertTrue(wallet.isMining());
```
📋 Review Summary
This pull request introduces optimizations and hardening for the JSON-RPC service, primarily by adding caching for log filters and block hashes, and limiting the number of active block filters to prevent resource exhaustion. The changes appear to be safe and improve the robustness of the service.
🔍 General Feedback
- The introduction of caching with reasonable expiration and size limits is a good performance enhancement.
- The startup order change in `TronNetService` improves lifecycle safety by ensuring dependent services are initialized before the P2P service starts.
- The addition of unit tests for new functionality is appreciated.
```java
  if (this == o) {
    return true;
  }
  if (o == null || this.getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement item = (LogFilterElement) o;
  if (!Objects.equals(blockHash, item.blockHash)) {
    return false;
  }
  if (!Objects.equals(transactionHash, item.transactionHash)) {
    return false;
  }
  if (!Objects.equals(transactionIndex, item.transactionIndex)) {
    return false;
  }
  if (!Objects.equals(logIndex, item.logIndex)) {
    return false;
  }
  return removed == item.removed;
}

@Override
public int hashCode() {
  int result = 0;
  result = 31 * result + (blockHash == null ? 0 : blockHash.hashCode());
  result = 31 * result + (transactionHash == null ? 0 : transactionHash.hashCode());
  result = 31 * result + (transactionIndex == null ? 0 : transactionIndex.hashCode());
```
🟢 These equals and hashCode implementations can be simplified using java.util.Objects.
Suggested change:
```java
@Override
public boolean equals(Object o) {
  if (this == o) {
    return true;
  }
  if (o == null || getClass() != o.getClass()) {
    return false;
  }
  LogFilterElement that = (LogFilterElement) o;
  return removed == that.removed &&
      Objects.equals(blockHash, that.blockHash) &&
      Objects.equals(transactionHash, that.transactionHash) &&
      Objects.equals(transactionIndex, that.transactionIndex) &&
      Objects.equals(logIndex, that.logIndex);
}

@Override
public int hashCode() {
  return Objects.hash(blockHash, transactionHash, transactionIndex, logIndex, removed);
}
```
```java
  public JsonRpcExceedLimitException(String message, Throwable cause) {
    super(message, cause);
  }
}
```
(No newline at end of file.)
🟢 It's a good practice to have a newline at the end of the file.
Suggested change: add a trailing newline after the final `}`.
📋 Review Summary
This Pull Request primarily focuses on optimizing memory allocation for JSON-RPC filter interfaces and introducing a configurable limit for block filters. The core changes involve implementing Guava Caches for log elements and block hashes, and adding equals() and hashCode() methods to LogFilterElement for effective caching. The introduction of JsonRpcExceedLimitException and the maxBlockFilterNum configuration parameter are well-integrated.
🔍 General Feedback
- The main functionality related to JSON-RPC filter optimization and the introduction of block filter limits seems well-implemented, with appropriate caching mechanisms and configuration.
- Two new test additions, `NodePersistServiceTest.java` and `ShieldedReceiveTest.java#testIsMining()`, appear to be outside the immediate scope of this PR. It would be beneficial for future PRs to maintain a tighter focus on a single feature or fix.
- There are a couple of `TODO` comments regarding mock tests that could be addressed in separate follow-up tasks to improve test stability.
- The reordering of P2P service startup in `TronNetService.java` should be carefully verified to ensure no unintended side effects or dependency issues.
```java
  return null;
}

private void deleteCheckpoint() {
```
🟢 This TODO comment indicates a potential issue in mock tests. While not directly related to the current PR's functionality, it's good to keep track of these for future improvements.
```java
  return false;
}
int queueSize = 0;
if (eventListeners == null || eventListeners.isEmpty()) {
```
🟢 Similar to the SnapshotManager, this TODO indicates a potential issue in mock tests that might warrant a separate fix.
```java
fetchBlockService.init();
nodePersistService.init();
tronStatsManager.init();
PeerManager.init();
```
🟡 The reordering of the P2P service start to the end of the start() method is noted. Please ensure this change does not introduce any unexpected dependencies or issues with other services that might rely on the P2P service being initialized earlier.
🟡 This new test file (NodePersistServiceTest.java) appears to be outside the scope of optimizing JSON-RPC filter interfaces. Please consider if this change should be part of a separate PR for better scope management.
| totalShieldedPoolValue -= valueBalance; | ||
| chainBaseManager.getDynamicPropertiesStore().saveTotalShieldedPoolValue(totalShieldedPoolValue); | ||
| } | ||
|
|
🟡 The testIsMining() method in ShieldedReceiveTest.java also seems unrelated to the PR's core purpose. It would be beneficial to keep PRs focused on a single concern.
📋 Review Summary
This Pull Request introduces optimizations for memory allocation within the JSON-RPC filter interface, specifically for eth_newFilter and eth_newBlockFilter. It also adds a configurable limit (node.jsonrpc.maxBlockFilterNum) for the number of block filters to mitigate potential DoS risks. Overall, the changes address an important aspect of system stability and resource management.
🔍 General Feedback
- The introduction of caching for `LogFilterElement` and block hashes is a good approach to reduce memory overhead and improve performance. The addition of `equals()` and `hashCode()` for `LogFilterElement` is crucial for correct cache functionality.
- The new `JsonRpcExceedLimitException` and its integration for limiting block filters provide a necessary safeguard against resource exhaustion.
- Configuration changes across the various `config.conf` files ensure that the new `maxBlockFilterNum` setting is consistently applied and configurable.
- Comprehensive unit tests for `LogFilterElement`'s `equals()` and `hashCode()` methods enhance code reliability.
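The value-object contract those tests exercise can be sketched with a simplified stand-in. The class name `LogKey` and its two fields are illustrative, not the real `LogFilterElement` shape; the point is only that any type used as a Guava cache key must make `equals()` and `hashCode()` agree over the same fields, or deduplication silently fails.

```java
import java.util.Objects;

// Hypothetical, simplified stand-in for LogFilterElement. The real class has
// more fields; what matters is the equals()/hashCode() pairing that makes
// cache-based deduplication correct.
final class LogKey {
  private final String blockHash;
  private final long logIndex;

  LogKey(String blockHash, long logIndex) {
    this.blockHash = blockHash;
    this.logIndex = logIndex;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) {
      return true;
    }
    if (!(o instanceof LogKey)) {
      return false;
    }
    LogKey other = (LogKey) o;
    return logIndex == other.logIndex && Objects.equals(blockHash, other.blockHash);
  }

  @Override
  public int hashCode() {
    // Must be computed from exactly the fields compared in equals().
    return Objects.hash(blockHash, logIndex);
  }
}
```

Two logically equal keys then hash identically, so a `Cache<LogKey, LogKey>` lookup finds the previously stored instance instead of allocating a duplicate.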
| public JsonRpcExceedLimitException(String message, Throwable cause) { | ||
| super(message, cause); | ||
| } | ||
| } No newline at end of file |
🟢 Consider adding a newline at the end of the file for better POSIX compatibility and to align with common Java formatting practices.
| fetchBlockService.init(); | ||
| nodePersistService.init(); | ||
| tronStatsManager.init(); | ||
| PeerManager.init(); |
🟠 The P2P service start has been reordered to the end of the start() method. Please provide more context on the rationale behind this reordering. Specifically, ensure that this change does not negatively impact network stability, peer discovery, or other lifecycle aspects, especially in a distributed blockchain environment where timing and initialization order can be critical for consensus and state synchronization.
| } | ||
|
|
||
| BlockCapsule blockCapsule = new BlockCapsule(block); | ||
| long blockNum = blockCapsule.getNum(); |
🟢 This line contains redundant spaces. Consider removing them for consistency with coding style guidelines.
| long blockNum = blockCapsule.getNum(); | |
| long energyFee = wallet.getEnergyFee(blockCapsule.getTimeStamp()); |
| PBFT | ||
| } | ||
|
|
||
| private static final String FILTER_NOT_FOUND = "filter not found"; |
🟡 The logElementCache and blockHashCache are configured with relatively large maximum sizes (300,000 and 60,000 respectively). While the comments provide a rationale, it's important to monitor the memory footprint of these caches in a production environment, especially under sustained high load. Consider adding metrics to track cache size and hit/miss ratios to ensure efficient memory utilization and to detect potential memory pressure.
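One way to get that visibility, sketched here with JDK-only classes rather than the Guava `Cache` the PR actually uses (the class name `LruSketch` and its counter methods are hypothetical): a size-bounded LRU map whose hit/miss counters could be exported as metrics.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical JDK-only analogue of a size-bounded LRU cache with hit/miss
// counters, approximating what maximumSize() gives the Guava caches in this PR.
class LruSketch<K, V> {
  private final int maxSize;
  private long hits;
  private long misses;
  private final LinkedHashMap<K, V> map;

  LruSketch(int maxSize) {
    this.maxSize = maxSize;
    // accessOrder=true keeps the least-recently-used entry first,
    // and removeEldestEntry evicts it once the size bound is exceeded.
    this.map = new LinkedHashMap<K, V>(16, 0.75f, true) {
      @Override
      protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > LruSketch.this.maxSize;
      }
    };
  }

  synchronized V get(K key) {
    V value = map.get(key);
    if (value == null) {
      misses++;
    } else {
      hits++;
    }
    return value;
  }

  synchronized void put(K key, V value) {
    map.put(key, value);
  }

  synchronized int size() {
    return map.size();
  }

  synchronized long hits() {
    return hits;
  }

  synchronized long misses() {
    return misses;
  }
}
```

With Guava itself, `CacheBuilder.recordStats()` plus `cache.stats()` would expose the same hit/miss ratios without hand-rolled counters.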
| chainBaseManager.getDynamicPropertiesStore().saveTotalShieldedPoolValue(totalShieldedPoolValue); | ||
| } | ||
|
|
||
| @Test |
🟡 The test testIsMining() appears to be unrelated to the scope of this pull request, which focuses on optimizing memory allocation for JSON-RPC filter interfaces and introducing maxBlockFilterNum. Consider moving this test to a more relevant test class or a separate pull request to maintain a clear separation of concerns.
This PR introduces optimizations for the JSON-RPC filter interface by adding caching to reduce memory allocation. It also adds a configurable limit for the number of block filters. The changes are generally good, but there are a few concerns.
🔍 General Feedback
- The change to the P2P service initialization order in `TronNetService.java` is significant and seems unrelated to the PR's main goal. It should be moved to a separate PR.
- Resource management can be improved by ensuring all new caches are properly invalidated in the `close()` method.
- Exception handling in cache loaders should be improved to avoid swallowing exceptions.
- The newly added `JsonRpcExceedLimitException` is not used.
| p2pService.start(p2pConfig); | ||
| p2pService.register(p2pEventHandler); | ||
|
|
||
| logger.info("Net service start successfully"); |
Please revert this change and move it to a separate PR where it can be reviewed with the appropriate context and justification. Bundling unrelated changes makes the review process difficult and increases the risk of introducing regressions.
|
|
||
| @Override | ||
| public void close() throws IOException { | ||
| logElementCache.invalidateAll(); |
| logElementCache.invalidateAll(); | |
| public void close() throws IOException { | |
| logElementCache.invalidateAll(); | |
| blockHashCache.invalidateAll(); | |
| ExecutorServiceManager.shutdownAndAwaitTermination(sectionExecutor, esName); | |
| } |
| try { | ||
| // compare with hashcode() first, then with equals(). If not exist, put it. | ||
| cachedBlockHash = blockHashCache.get(originalBlockHash, () -> originalBlockHash); | ||
| } catch (ExecutionException e) { |
| } catch (ExecutionException e) { | |
| try { | |
| // compare with hashcode() first, then with equals(). If not exist, put it. | |
| cachedBlockHash = blockHashCache.get(originalBlockHash, () -> originalBlockHash); | |
| } catch (ExecutionException e) { | |
| logger.error("Getting/loading blockHash from cache failed", e); | |
| throw new RuntimeException(e); | |
| } |
| } | ||
| if (blockFilter2Result.size() >= maxBlockFilterNum) { | ||
| throw new JsonRpcInvalidParamsException( | ||
| "exceed max block filters: " + maxBlockFilterNum + ", try again later"); |
| "exceed max block filters: " + maxBlockFilterNum + ", try again later"); | |
| if (blockFilter2Result.size() >= maxBlockFilterNum) { | |
| throw new JsonRpcExceedLimitException( | |
| "exceed max block filters: " + maxBlockFilterNum + ", try again later"); | |
| } |
| return null; | ||
| } | ||
|
|
||
| private void deleteCheckpoint() { |
📋 Review Summary
This Pull Request introduces optimizations for JSON-RPC filter interfaces by adding caching mechanisms and a configurable limit for block filters. It also includes various code modifications and new test cases. The overall quality of the changes is good, with a clear focus on performance and resource management.
🔍 General Feedback
- The introduction of caching for log filter elements and block hashes is a positive step towards optimizing memory allocation and potentially improving performance for JSON-RPC filter interfaces.
- The addition of unit tests for `LogFilterElement`'s `equals` and `hashCode` methods, as well as new tests for `NodePersistService`, contributes to the overall test coverage and stability of the project.
- The new `maxBlockFilterNum` configuration provides important control over resource usage, preventing potential abuse or excessive memory consumption.
| return result; | ||
| } | ||
|
|
||
| @Override |
🟠 High - blockHashCache is not explicitly invalidated during close(). Although expireAfterWrite provides some cleanup, explicit invalidation on shutdown is a safer practice for resource management.
| @Override | |
| logElementCache.invalidateAll(); | |
| blockHashCache.invalidateAll(); |
|
|
||
| // Move P2P service start to the end | ||
| setP2pConfig(getConfig()); | ||
| p2pService.start(p2pConfig); |
🟠 High - The reordering of p2pService.start() to the end of the start() method could introduce race conditions or NullPointerException if services like advService, syncService, peerStatusCheck, etc., implicitly depend on p2pService being fully operational during their init() phase. This change should be carefully reviewed for implicit dependencies and potential startup issues.
| eventFilter2Result.put(filterID, logFilterAndResult); | ||
| return ByteArray.toJsonHex(filterID); | ||
| } | ||
|
|
🟡 Medium - The newBlockFilter() method's signature has been updated to throw JsonRpcInvalidParamsException. Clients calling this RPC method will need to handle this new exception, which indicates an intentional behavioral change when the maximum number of block filters is exceeded.
|
|
||
| private void deleteCheckpoint() { | ||
| if(checkTmpStore == null) { | ||
| // only occurs in mock test. TODO fix test |
🟡 Medium - The // only occurs in mock test. TODO fix test comment indicates a pre-existing issue within the mock tests. While not introduced by this PR, such comments highlight potential gaps in test coverage or design, which could mask real issues or lead to unexpected behavior in non-mock environments. It is recommended to address these TODOs to improve the robustness of the test suite.
📋 Review Summary
This Pull Request introduces optimizations for JSON-RPC filter interfaces, primarily by implementing a configurable limit for block filters (maxBlockFilterNum) and introducing Guava caches to reduce memory allocation for LogFilterElement and block hash strings. It also includes a reordering of the P2P service startup and adds new unit tests. The changes are well-intentioned for performance and memory management.
🔍 General Feedback
- The introduction of `maxBlockFilterNum` provides necessary control over resource usage in the JSON-RPC filter interface, which is a good security and stability enhancement.
- The use of Guava caches with proper `equals` and `hashCode` implementations for `LogFilterElement` is an effective strategy for memory optimization through object deduplication.
- The new tests demonstrate a good effort to ensure the correctness of the new features and existing components.
PR Change Type Classification: Mixed (Modification of existing behavior and introduction of new logic)
2.1 Lifecycle & Resource Safety (Mandatory)
Resources created: logElementCache (Guava Cache), blockHashCache (Guava Cache).
logElementCache is explicitly invalidated in close(). blockHashCache relies on expireAfterWrite. While the latter is eventually consistent, an explicit invalidateAll() for blockHashCache would ensure immediate resource release during shutdown, which is a best practice for deterministic lifecycle management.
2.2 Executor / Thread Pool Shutdown Semantics (Mandatory)
Checked, no issues found. The Guava caches manage their internal threads, and no new explicit ExecutorService instances requiring dedicated shutdown logic were introduced. The existing sectionExecutor is correctly handled.
2.3 Behavioral Compatibility & Implicit Invariants (Mandatory)
- The reordering of P2P service startup in `TronNetService.java` might subtly change the timing of network availability. While likely intended, it is a behavioral change that could affect services expecting early P2P readiness.
- The introduction of `maxBlockFilterNum` creates a new upper bound on the number of block filters, which will cause `JsonRpcInvalidParamsException` for clients exceeding this limit. This is a deliberate change for resource control and should be communicated as an API change.
- The addition of `equals` and `hashCode` to `LogFilterElement` is an internal change that enables correct cache behavior and is compatible with its use as a value object.
2.4 Business Logic Correctness (Mandatory)
The logic for maxBlockFilterNum enforcement and the caching mechanisms using Cache.get(key, () -> key) appear correct and achieve the stated goals of memory optimization and resource control. The new exception JsonRpcExceedLimitException is appropriate.
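The `Cache.get(key, () -> key)` idiom described above is essentially interning: the first caller stores the key as its own value, and later callers holding an equal-but-distinct key get back the canonical instance. A JDK-only sketch of the same idea (the `Interner` class here is illustrative; Guava also ships a ready-made `Interners` utility):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Hypothetical sketch of the interning pattern behind Cache.get(key, () -> key):
// equal keys collapse to one canonical instance, so repeated filter results
// share a single object instead of allocating duplicates.
class Interner<T> {
  private final ConcurrentMap<T, T> pool = new ConcurrentHashMap<>();

  // Returns the canonical instance for key; correctness depends on the
  // key type implementing equals() and hashCode() properly.
  T intern(T key) {
    T existing = pool.putIfAbsent(key, key);
    return existing == null ? key : existing;
  }

  int size() {
    return pool.size();
  }
}
```

Unlike a plain `ConcurrentHashMap`, the Guava cache version additionally bounds the pool with `maximumSize`/`expireAfterWrite`, so interned objects can still be reclaimed.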
2.5 Concurrency & Execution Interleaving Risks (Mandatory)
The use of ConcurrentHashMap and thread-safe Guava Cache instances correctly handles concurrency for data structures. The change in P2P service startup order does not introduce new inherent race conditions but is a timing change.
2.6 Numeric Precision & Cross-JVM Determinism (Mandatory)
Checked, no issues found. The changes do not involve floating-point arithmetic or complex numeric operations that would introduce precision issues or cross-JVM non-determinism.
System-Level Risk Summary (Mandatory)
- Consensus risk: Low. The changes are primarily in the JSON-RPC API and node services, not in core consensus logic.
- State inconsistency risk: Low. The caching mechanisms are designed for efficiency, not persistent state. The `maxBlockFilterNum` limit prevents resource exhaustion that could lead to inconsistent state.
- Lifecycle or resource leak risk: Low. While `blockHashCache` could benefit from explicit invalidation, `expireAfterWrite` mitigates significant leaks. `logElementCache` is explicitly invalidated.
- Concurrency or execution-order risk: Low. Thread-safe data structures are used. The P2P service startup order change might have minor timing impacts but does not appear to introduce critical race conditions.
- Cross-JVM non-determinism risk: Low. Numeric operations are deterministic.
Overall, the changes appear safe with respect to consensus, state consistency, lifecycle safety, and cross-JVM determinism. The reordering of P2P service initialization should be carefully considered for any subtle dependencies.
| return null; | ||
| } | ||
|
|
||
| private void deleteCheckpoint() { |
🟡 The TODO fix test comment indicates an existing issue in mock tests. While not directly introduced by this PR, addressing these mock test deficiencies is crucial for the reliability and completeness of the testing suite. This could affect lifecycle or state if the mock is not accurately reflecting production behavior.
| if (useNativeQueue) { | ||
| return false; | ||
| } | ||
| int queueSize = 0; |
🟡 Similar to the SnapshotManager comment, this TODO fix test indicates a potential gap or inaccuracy in the mock test setup for EventPluginLoader. Ensuring mocks accurately represent real-world scenarios is important for robust testing and preventing silent failures in the actual system.
| fetchBlockService.init(); | ||
| nodePersistService.init(); | ||
| tronStatsManager.init(); | ||
| PeerManager.init(); |
🟠 Moving the P2P service start to the end of the start() method is a significant change in the node's initialization sequence. This could introduce subtle behavioral compatibility issues if other services implicitly depend on P2P being available earlier. Ensure thorough testing for any timing-dependent failures or delayed network participation during node startup. Consider adding explicit synchronization mechanisms if such dependencies exist.
| PBFT | ||
| } | ||
|
|
||
| private static final String FILTER_NOT_FOUND = "filter not found"; |
🟢 Adding a comment like // LRU cache is helpful for readability and maintainability, clarifying the cache eviction policy. Consider adding similar comments for other cache configurations where applicable for consistency.
| private static final int maxBlockFilterNum = Args.getInstance().getJsonRpcMaxBlockFilterNum(); | ||
| private static final Cache<LogFilterElement, LogFilterElement> logElementCache = | ||
| CacheBuilder.newBuilder() | ||
| .maximumSize(300_000L) // 300s * tps(1000) * 1 log/tx ≈ 300_000 |
🟢 Adding a comment like // LRU cache is helpful for readability and maintainability, clarifying the cache eviction policy. This improves code clarity.
| Map<String, BlockFilterAndResult> blockFilter2Result; | ||
| if (getSource() == RequestSource.FULLNODE) { | ||
| blockFilter2Result = blockFilter2ResultFull; | ||
| } else { |
🟠 The introduction of maxBlockFilterNum and the associated check in newBlockFilter() represents a new restriction on the eth_newBlockFilter API. While crucial for resource management, this change can affect existing API clients that might expect an unbounded number of filters. This behavioral change should be clearly documented for API consumers to prevent unexpected JsonRpcInvalidParamsException errors.
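The server-side guard being described can be sketched as a bounded registry. The class and method names below are illustrative, and a plain `IllegalStateException` stands in for the project's JSON-RPC exception types:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a filter registry that enforces an upper bound,
// mirroring the maxBlockFilterNum check quoted above.
class BlockFilterRegistry {
  private final int maxBlockFilterNum;
  private final Map<String, Long> filters = new ConcurrentHashMap<>();

  BlockFilterRegistry(int maxBlockFilterNum) {
    this.maxBlockFilterNum = maxBlockFilterNum;
  }

  String newBlockFilter() {
    if (filters.size() >= maxBlockFilterNum) {
      // In the real code this would be a JSON-RPC exception type.
      throw new IllegalStateException(
          "exceed max block filters: " + maxBlockFilterNum + ", try again later");
    }
    String id = UUID.randomUUID().toString();
    filters.put(id, System.currentTimeMillis()); // record creation time
    return id;
  }

  boolean uninstallFilter(String id) {
    return filters.remove(id) != null;
  }

  int size() {
    return filters.size();
  }
}
```

Clients that previously assumed an unbounded number of filters would need to handle the rejection, for example by uninstalling stale filters before retrying.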
|
|
||
| return result; | ||
| } | ||
|
|
🟡 The blockHashCache also holds memory resources. Although expireAfterWrite provides eventual cleanup, explicitly calling blockHashCache.invalidateAll() alongside logElementCache.invalidateAll() in the close() method would ensure immediate and deterministic resource release upon shutdown, improving lifecycle safety.
| logElementCache.invalidateAll(); | |
| blockHashCache.invalidateAll(); | |
| ExecutorServiceManager.shutdownAndAwaitTermination(sectionExecutor, esName); |
📋 Review Summary
This Pull Request aims to optimize memory allocation for JSON-RPC filter interfaces and introduces a new configuration item for maxBlockFilterNum. It also includes refactoring of P2P service startup and caching mechanisms for block hashes and log filter elements. The changes improve resource management and introduce limits to prevent potential resource exhaustion.
🔍 General Feedback
- The PR introduces several new features and optimizations, including a new exception type, configuration parameter, and caching. These additions generally enhance the robustness and performance of the JSON-RPC service.
- The addition of `equals()` and `hashCode()` methods for `LogFilterElement` is crucial for the correct functioning of the caching mechanism.
- Several test cases have been added or modified, which is good practice for validating the changes.
- Careful attention should be paid to the identified critical and high-severity issues, particularly regarding breaking API changes, potential concurrency issues in filter handling, and the reordering of critical service initialization.
| String newFilter(FilterRequest fr) throws JsonRpcInvalidParamsException, | ||
| JsonRpcMethodNotFoundException; | ||
|
|
||
| @JsonRpcMethod("eth_newBlockFilter") |
🔴 The eth_newBlockFilter API now throws JsonRpcInvalidParamsException when maxBlockFilterNum is exceeded. This is a breaking behavioral change for existing clients that may not be prepared to handle this new exception, potentially leading to client-side errors or crashes.
| @JsonRpcMethod("eth_newBlockFilter") | |
| String newBlockFilter() throws JsonRpcMethodNotFoundException; |
| fetchBlockService.init(); | ||
| nodePersistService.init(); | ||
| tronStatsManager.init(); | ||
| PeerManager.init(); |
🔴 The P2P service initialization in TronNetService.java::start() has been moved to the end of the method. This reordering could introduce runtime issues if advService, syncService, peerStatusCheck, PeerManager, relayService, or effectiveCheckService (or any other service initialized before the P2P service) have implicit dependencies on a fully operational P2P network. This could lead to NullPointerException or incorrect network state during startup. Thorough testing of all service interdependencies during startup is critical.
| PeerManager.init(); | |
| setP2pConfig(getConfig()); | |
| p2pService.start(p2pConfig); | |
| p2pService.register(p2pEventHandler); | |
| advService.init(); | |
| syncService.init(); | |
| peerStatusCheck.init(); | |
| ... |
| List<LogFilterElement> elements = | ||
| LogMatch.matchBlock(logFilter, logsFilterCapsule.getBlockNumber(), | ||
| logsFilterCapsule.getBlockHash(), logsFilterCapsule.getTxInfoList(), | ||
| logsFilterCapsule.isRemoved()); |
🔴 The List returned by BlockFilterAndResult.getResult() and LogFilterAndResult.getResult() is being modified concurrently in handleBLockFilter and handleLogsFilter through .add() operations. If these lists are not thread-safe (e.g., ArrayList), this will lead to ConcurrentModificationException or data corruption under concurrent access. Consider using thread-safe collections like CopyOnWriteArrayList or Collections.synchronizedList().
| logsFilterCapsule.isRemoved()); | |
| // Change the return type of getResult() to a thread-safe List or | |
| // ensure all modifications happen under a lock. | |
| // For example, if getResult() returns a new CopyOnWriteArrayList: | |
| // logFilterAndResult.getResult().add(cachedElement); |
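A minimal demonstration of the suggested fix, assuming the result list can simply be backed by a `CopyOnWriteArrayList` (the surrounding class and method are hypothetical): concurrent `add()` calls then need no external locking and lose no elements.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: many handler threads appending filter results to a
// shared list. CopyOnWriteArrayList makes each add() atomic, so there is no
// ConcurrentModificationException and no lost update.
class FilterResultSketch {
  static List<String> concurrentAppend(int threads, int perThread) {
    List<String> result = new CopyOnWriteArrayList<>();
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    for (int t = 0; t < threads; t++) {
      final int id = t;
      pool.submit(() -> {
        for (int i = 0; i < perThread; i++) {
          result.add("log-" + id + "-" + i); // safe without external locking
        }
      });
    }
    pool.shutdown();
    try {
      pool.awaitTermination(10, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
    return result;
  }
}
```

The trade-off is that `CopyOnWriteArrayList` copies the backing array on every write, so it suits read-heavy filter polling better than write-heavy workloads; `Collections.synchronizedList()` is the alternative when writes dominate.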
| import org.tron.core.services.jsonrpc.types.BlockResult; | ||
| import org.tron.core.services.jsonrpc.types.BuildArguments; | ||
| import org.tron.core.services.jsonrpc.types.CallArguments; | ||
| import org.tron.core.services.jsonrpc.types.TransactionReceipt; |
🟢 The import import org.tron.core.net.peer.Item; in framework/src/main/java/org/tron/core/services/jsonrpc/TronJsonRpc.java is unused and should be removed.
| import org.tron.core.services.jsonrpc.types.TransactionReceipt; | |
| -import org.tron.core.net.peer.Item; |
|
|
||
| return result; | ||
| } | ||
|
|
🟠 blockHashCache is not explicitly invalidated in TronJsonRpcImpl.java::close(). For consistency and to prevent potential subtle resource leaks in scenarios involving classloader resets or dynamic class reloading, all static cache resources should be explicitly invalidated during shutdown.
| logElementCache.invalidateAll(); | |
| blockHashCache.invalidateAll(); |
What does this PR do?
Why are these changes required?
This PR has been tested by:
Follow up
Extra details