Currently, while we are waiting for compact filters, receiving an inv message about a new block hash discards the current filter request.
When the response to that discarded request arrives, we treat it as unexpected and assume the peer has misbehaved, which results in the peer being banned.
This happened to me on regtest, where the only peer got banned and the node died, but it also happens on live networks, for example during an initial sync from an old block (when we make several batched filter requests). This can ban peers one after another, exhausting the peer list and forcing a fallback to DNS peer discovery.
If I understand it right, this is a high-severity security problem, because it can be exploited by an attacker: we don't really check who sent the inv message, so an attacker can just send a garbage block hash (the attacker will eventually get banned for that too), which results in banning a legitimate peer from which we had requested filters.
More precisely:
When we sync cf headers, we check whether a filter request was made previously; if it has been cleared,
src/chain/chain.rs:182:
```rust
174 pub(crate) fn sync_cf_headers(
...
178 ) -> Result<CFHeaderChanges, CFHeaderSyncError> {
179     let batch: CFHeaderBatch = cf_headers.into();
180     let request = self
181         .request_state
182         .last_filter_header_request
183         .ok_or(CFHeaderSyncError::UnexpectedCFHeaderMessage)?;
```
the call returns an Err, which results in a disconnect and a ban of the peer:
src/node.rs:495:
```rust
474 async fn handle_cf_headers(
...
478 ) -> Option<MainThreadMessage> {
479     self.chain.send_chain_update().await;
480     match self.chain.sync_cf_headers(peer_id, cf_headers) {
481         Ok(potential_message) => match potential_message {
...
490         },
491         Err(e) => {
492             self.dialog.send_warning(Warning::UnexpectedSyncError {
493                 warning: format!("Compact filter header syncing encountered an error: {e}"),
494             });
495             self.peer_map.ban(peer_id).await;
496             Some(MainThreadMessage::Disconnect)
497         }
```
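To make the failure mechanics concrete, here is a minimal, self-contained model of the `ok_or` pattern above (the types are simplified stand-ins, not the real ones from the crate): once the pending request has been cleared, the response to our own discarded request is indistinguishable from an unsolicited message.

```rust
// Hypothetical, simplified model of the pattern in chain.rs:180-183.
#[derive(Debug, PartialEq)]
enum CFHeaderSyncError {
    UnexpectedCFHeaderMessage,
}

struct RequestState {
    // Stand-in for the real request type; None means "cleared".
    last_filter_header_request: Option<u32>,
}

fn sync_cf_headers(state: &RequestState) -> Result<u32, CFHeaderSyncError> {
    // A cleared request turns the (perfectly legitimate) response
    // into an Err that the caller treats as peer misbehavior.
    state
        .last_filter_header_request
        .ok_or(CFHeaderSyncError::UnexpectedCFHeaderMessage)
}

fn main() {
    let cleared = RequestState { last_filter_header_request: None };
    assert_eq!(
        sync_cf_headers(&cleared),
        Err(CFHeaderSyncError::UnexpectedCFHeaderMessage)
    );
    let pending = RequestState { last_filter_header_request: Some(7) };
    assert_eq!(sync_cf_headers(&pending), Ok(7));
}
```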
When do we clear last_filter_header_request?
It happens in src/chain/chain.rs:391, in clear_compact_filter_queue:
```rust
388 // Reset the compact filter queue because we received a new block
389 pub(crate) fn clear_compact_filter_queue(&mut self) {
390     self.request_state.agreement_state.reset_agreements();
391     self.request_state.last_filter_header_request = None;
392     self.request_state.pending_batch = None;
393 }
```
Then clear_compact_filter_queue is called in handle_inventory_blocks, which is invoked when we receive an "inv" gossip message.
src/node.rs:621
```rust
597 async fn handle_inventory_blocks(
...
601 ) -> Option<MainThreadMessage> {
602     for block in blocks.iter() {
...
607     }
608     match self.state {
609         NodeState::Behind => None,
610         _ => {
611             if blocks
612                 .into_iter()
613                 .any(|block| !self.chain.header_chain.contains(block))
614             {
615                 self.state = NodeState::Behind;
...
621                 self.chain.clear_compact_filter_queue();
622                 Some(MainThreadMessage::GetHeaders(next_headers))
623             } else {
624                 None
...
628 }
```
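Putting the pieces together, the full sequence can be sketched as a small state model (again with hypothetical, simplified types; the real node is async and message-driven):

```rust
// Hypothetical end-to-end sketch of the sequence described above.
#[derive(Debug, PartialEq)]
enum Action { Progress, Ban }

struct Node {
    last_filter_header_request: Option<u32>,
}

impl Node {
    // 1. We request filter headers from an honest peer.
    fn request_cf_headers(&mut self) {
        self.last_filter_header_request = Some(1);
    }
    // 2. Anyone gossiping an unknown block hash clears the pending
    //    request, as in clear_compact_filter_queue above.
    fn handle_inventory_blocks(&mut self) {
        self.last_filter_header_request = None;
    }
    // 3. The honest peer's reply matches no pending request -> ban.
    fn handle_cf_headers(&mut self) -> Action {
        match self.last_filter_header_request {
            Some(_) => Action::Progress,
            None => Action::Ban,
        }
    }
}

fn main() {
    let mut node = Node { last_filter_header_request: None };
    node.request_cf_headers();      // honest peer is asked for cf headers
    node.handle_inventory_blocks(); // attacker's inv with a garbage hash arrives first
    assert_eq!(node.handle_cf_headers(), Action::Ban); // honest peer is banned
}
```

Note that the attacker never has to answer the filter request itself; a single unsolicited inv between our request and the honest peer's response is enough.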
Potential mitigations:
- Do not ban the peer on CFHeaderSyncError::UnexpectedCFHeaderMessage. In that case we would just ignore such compact filter messages. We could be spammed with them, but that seems expensive for an attacker to do; moreover, simply ignoring unexpected messages should be completely fine.
- Do not clear the filter queue on "inv" messages at all. The fact that we received a new block doesn't necessarily mean the filters returned are no longer valid. This also speeds up the sync, since we don't discard previous results. In case of a reorganization the queue still gets cleared (chain/chain.rs:137).
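The first mitigation could be sketched roughly as follows (a hypothetical error-classification helper, not the crate's actual API; `HeaderMismatch` is an invented stand-in for a genuinely ban-worthy error variant):

```rust
// Hypothetical sketch of mitigation 1: treat UnexpectedCFHeaderMessage
// as noise rather than misbehavior.
#[derive(Debug, PartialEq)]
enum CFHeaderSyncError {
    UnexpectedCFHeaderMessage,
    HeaderMismatch, // invented stand-in for a genuinely ban-worthy error
}

#[derive(Debug, PartialEq)]
enum Verdict { Ignore, BanAndDisconnect }

fn on_cf_header_error(e: CFHeaderSyncError) -> Verdict {
    match e {
        // The request may have been cleared by our own inv handling,
        // so a "stale" response proves nothing about the peer.
        CFHeaderSyncError::UnexpectedCFHeaderMessage => Verdict::Ignore,
        // Other errors keep the current ban-and-disconnect behavior.
        CFHeaderSyncError::HeaderMismatch => Verdict::BanAndDisconnect,
    }
}

fn main() {
    assert_eq!(
        on_cf_header_error(CFHeaderSyncError::UnexpectedCFHeaderMessage),
        Verdict::Ignore
    );
    assert_eq!(
        on_cf_header_error(CFHeaderSyncError::HeaderMismatch),
        Verdict::BanAndDisconnect
    );
}
```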
Also, is it okay that we set the state to Behind whenever we receive an unknown block hash? An attacker can force us to re-request headers (which seems to be fine).