Summary
IBD currently downloads from a single peer, bottlenecking sync on that peer's latency. This adds parallel block download from multiple peers while keeping validation sequential through `process_block`.
Proof-of-concept: https://github.com/pzafonte/kernel-node/tree/multi-peer-ibd
Design
- PeerManager spawns N threads (configurable `--maxpeers`, default 4) with auto-reconnect
- Shared download queue (`Mutex<VecDeque<BlockHash>>`) populated after header sync
- In-flight set (`Mutex<HashSet<BlockHash>>`) prevents duplicate requests across peers
- Per-peer state machine: `AwaitingHeaders -> AwaitingInv / AwaitingBlock` with 16-block batches
- Single validation thread via `sync_channel(32)`; chainman is not thread-safe for concurrent `process_block`
- First peer syncs headers; late peers skip straight to block download (`AtomicBool` flag)
- Stalled peers (30s timeout) are disconnected; their unreceived blocks are re-enqueued for other peers
- Logarithmic block locators for `getheaders`/`getblocks`
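The shared queue and in-flight set might look roughly like the following minimal Rust sketch. `BlockHash`, `DownloadState`, `next_batch`, and `requeue` are illustrative names, not the PoC's actual API; the real types live in the linked branch.

```rust
use std::collections::{HashSet, VecDeque};
use std::sync::Mutex;

// Placeholder for the node's block-hash type.
type BlockHash = [u8; 32];

const BATCH_SIZE: usize = 16;

struct DownloadState {
    queue: Mutex<VecDeque<BlockHash>>,
    in_flight: Mutex<HashSet<BlockHash>>,
}

impl DownloadState {
    fn new(hashes: Vec<BlockHash>) -> Self {
        Self {
            queue: Mutex::new(hashes.into()),
            in_flight: Mutex::new(HashSet::new()),
        }
    }

    // Pop up to BATCH_SIZE hashes for one peer, skipping any
    // hash another peer has already requested.
    fn next_batch(&self) -> Vec<BlockHash> {
        let mut queue = self.queue.lock().unwrap();
        let mut in_flight = self.in_flight.lock().unwrap();
        let mut batch = Vec::new();
        while batch.len() < BATCH_SIZE {
            match queue.pop_front() {
                Some(h) if in_flight.insert(h) => batch.push(h),
                Some(_) => continue, // duplicate: already in flight
                None => break,       // queue drained
            }
        }
        batch
    }

    // Return a stalled peer's undelivered blocks to the front of the
    // queue so other peers pick them up promptly.
    fn requeue(&self, hashes: Vec<BlockHash>) {
        let mut queue = self.queue.lock().unwrap();
        let mut in_flight = self.in_flight.lock().unwrap();
        for h in hashes {
            in_flight.remove(&h);
            queue.push_front(h);
        }
    }
}
```

Taking both locks in a fixed order (queue, then in-flight) keeps the pop-and-mark step atomic, which is what prevents two peers from racing on the same hash.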
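The single-validator design can be sketched as below: N downloader threads feed one bounded channel, and exactly one thread drains it, since chainman's `process_block` cannot be called concurrently. The function name and the counter standing in for validation are illustrative only.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

// Placeholder for a deserialized block.
type Block = Vec<u8>;

fn run_validation_pipeline(num_peers: usize, blocks_per_peer: usize) -> usize {
    // Bounded at 32: senders block once the queue fills, applying
    // backpressure on downloaders when validation falls behind.
    let (tx, rx) = sync_channel::<Block>(32);

    // The lone consumer; a real node would call process_block here
    // instead of counting.
    let validator = thread::spawn(move || rx.iter().count());

    // Simulated downloader threads, one per peer.
    let downloaders: Vec<_> = (0..num_peers)
        .map(|_| {
            let tx = tx.clone();
            thread::spawn(move || {
                for _ in 0..blocks_per_peer {
                    tx.send(vec![0u8; 80]).unwrap();
                }
            })
        })
        .collect();
    for d in downloaders {
        d.join().unwrap();
    }
    drop(tx); // close the channel so the validator's iterator ends

    validator.join().unwrap()
}
```

Because every downloader shares one `SyncSender`, dropping the last sender is what terminates the validator loop cleanly at the end of sync.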
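A logarithmic locator, in the scheme Bitcoin Core uses, lists the most recent blocks densely and then steps back exponentially to genesis, so its length is O(log n) in chain height. A minimal sketch over heights (the PoC would map these heights to actual block hashes):

```rust
// Build locator heights: the last ~10 blocks one by one, then a stride
// that doubles each step, always ending at genesis (height 0).
fn locator_heights(tip: u64) -> Vec<u64> {
    let mut heights = Vec::new();
    let mut step: u64 = 1;
    let mut height = tip as i128; // signed so the loop can undershoot 0
    while height > 0 {
        heights.push(height as u64);
        if heights.len() >= 10 {
            step *= 2; // start doubling after the first 10 entries
        }
        height -= step as i128;
    }
    heights.push(0); // always anchor the locator at genesis
    heights
}
```

This keeps `getheaders`/`getblocks` requests small even on a long chain: a peer finds the highest locator entry it recognizes and serves headers from there.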
This is a proof-of-concept implementation, so design tradeoffs favor simplicity over production-readiness. Each tradeoff is documented with an inline `// Tradeoff:` comment explaining the cost and what a production implementation would do differently. There are probably other enhancements that could be made too.
Test Coverage
27 tests: 21 unit tests (batch popping, message construction, deduplication, state machine) + 6 integration tests against a regtest `ChainstateManager`.