Monad Hardware Compatibility List

Community-maintained hardware guide for running Monad validators with optimal performance

Last Updated: 2025/08/21 · Testnet Phase · Community Driven

โš™๏ธ Baseline Hardware Requirements

Minimum specifications for running a Monad validator node on testnet and mainnet

🖥️ CPU (processing power)

  • Cores: 16+ physical
  • Clock: 4.5GHz+ base
  • Architecture:
    • AMD Zen 4+ (Ryzen 7000/9000)
    • Intel Raptor Lake+ (13th gen)
  • ⭐ Recommended: AMD Ryzen 9 7950X, 9950X, EPYC 4584PX

💾 Memory (RAM requirements)

  • Minimum: 32 GB DDR4/DDR5
  • Recommended: 64 GB
  • Speed: 3200MHz+
  • 💡 ECC memory for production

💿 Storage (SSD requirements)

  • Capacity:
    • 2TB NVMe (TrieDB)
    • 2TB NVMe (MonadBFT/OS)
  • Speed: PCIe Gen4+
  • IOPS: 1M+ random write
  • ✓ Samsung 990 PRO
  • ✗ Nextorage

🌐 Network (connectivity needs)

  • Bandwidth: 1 Gbps symmetric
  • Latency: Low latency required
  • Uptime: 99.9% required
  • ⚠️ Static IP recommended

🔧 CPU Performance Tuning

Purpose: CPU tuning guide for handling intermittent skipped blocks and transaction load

1. CCD (Core Complex Die) Settings

💡 On systems with a 9950X or 4584PX, the CCD settings below can be skipped.

Purpose

  • AMD groups cores into chiplet units called Core Complex Dies (CCDs) that share an L3 cache; CCDs are connected via Infinity Fabric for scalability and efficiency.
  • CCD placement shapes the cache and latency characteristics of threads and IRQs → it directly impacts performance and jitter.

How it works

  • Same CCD: cores share L3; smaller IPI/scheduler propagation latency (faster wake-ups).
  • Different CCDs: traffic crosses the Infinity Fabric → extra latency and higher variability; wake-ups/IRQs can feel slower with unlucky timing.
  • Deeper C-states (e.g. C2/C3) increase wake latency, making cross-CCD wake-ups more sensitive.

Check & Change

# Check CCDs to see which CPU belongs to which CCD
lscpu -e | awk 'NR==1{print "CPU L3"} NR>1{split($5,a,":"); print $1, a[4]}'

# e.g. AMD Threadripper PRO 7965WX
# CCD 0: 0 ~ 5, CCD 1: 6 ~ 11, CCD 2: 12 ~ 17, CCD 3: 18 ~ 23
CPU L3
0 0
1 0
2 0
3 0
4 0
5 0
6 1
7 1
8 1
...
20 3
21 3
22 3
23 3

# In monad-execution.service, the default value for ro_sq_thread_cpu is 5, and the default value for sq_thread_cpu is 6.
# On the 7965WX, core 5 belongs to CCD 0 while core 6 belongs to CCD 1.
# Therefore, it is recommended to place both values within the same CCD (e.g. CCD 1).
sudo vi /usr/lib/systemd/system/monad-execution.service

--ro_sq_thread_cpu 6 \
--sq_thread_cpu 7 \
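After editing the unit file, reload systemd and restart the service so the new CPU pinning takes effect (standard systemd workflow):

# Apply the updated unit file
sudo systemctl daemon-reload
sudo systemctl restart monad-execution.service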

2. cpupower frequency Settings (P-state)

Purpose

  • Controls the CPU frequency scaling policy (Governor), which determines how the CPU clock speed changes based on load and idle conditions.
  • Balances performance vs. power efficiency.

How it works

The governor is a kernel policy that determines CPU clock behavior.

Common modes:

  • performance - Keeps CPU clock as high as possible (low latency, higher power draw)
  • powersave - Keeps CPU clock as low as possible (energy saving, lower performance)
  • schedutil - Dynamically adjusts frequency based on Linux scheduler load tracking (modern default)

Check & Change

# Check current CPU governor
cpupower frequency-info | grep -A2 "current policy"

# Change to performance mode
sudo cpupower frequency-set -g performance
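Note that the governor resets on reboot. A minimal sketch for persisting it with a oneshot systemd unit (the unit name here is illustrative, and the cpupower path may differ on your distribution):

# Create a oneshot unit that sets the governor at boot
sudo tee /etc/systemd/system/cpupower-performance.service <<'EOF'
[Unit]
Description=Set CPU frequency governor to performance

[Service]
Type=oneshot
ExecStart=/usr/bin/cpupower frequency-set -g performance

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now cpupower-performance.service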

3. cpupower idle Settings (C-state)

Purpose

  • Controls which C-states (CPU idle power states) are available when the CPU is idle.
  • Manages the trade-off between low-latency wake-ups and power savings.

C-State basics

  • C0: Active state - executing instructions.
  • POLL: C0 substate - not executing but spinning/waiting (no deep sleep, highest idle power).
  • C1, C2, C3…: Higher numbers mean deeper sleep, more power savings, but higher wake-up latency.
  • e.g. C1 has very short entry/exit latency; C6 is deep sleep with longer wake-up delay.

BIOS vs OS Control

BIOS

  • Global C-State Control enables/disables CPU power-saving states at hardware level.
  • If disabled, only C0/C1 are available and OS-level cpupower cannot override.
  • For AMD EPYC processors, AMD strongly recommends keeping it enabled to avoid losing all deep sleep states.

OS (cpupower)

  • Can only manage governors and C-state availability within the limits set by BIOS.

Check & Change

# Check current idle states
cpupower idle-info

# Disable C2 (idle state index 2; verify the index-to-state mapping with cpupower idle-info)
sudo cpupower idle-set -d 2

# Disable C1
sudo cpupower idle-set -d 1

# Re-enable C1
sudo cpupower idle-set -e 1
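To confirm which index maps to which C-state and what the wake-up cost is, the kernel exposes both via sysfs (cpu0 shown as a representative core):

# List idle-state names and wake-up latencies (microseconds) for cpu0
grep . /sys/devices/system/cpu/cpu0/cpuidle/state*/name \
       /sys/devices/system/cpu/cpu0/cpuidle/state*/latency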

🔥 Firewall PPS Policy Configuration

Purpose: Configure firewall PPS (Packets Per Second) limits to protect against UDP floods while maintaining MonadBFT performance

โš ๏ธ Note: These values are highly experimental and vary depending on your environment. Adjust based on your network conditions.

1. System Limits & Message Types

Block Limits

  • tx_limit: 5,000 transactions/block
  • proposal_gas_limit: 150,000,000 gas/block
  • proposal_byte_limit: 2,000,000 bytes/block (2MB)
  • Block generation time: ~400ms (2.5 rounds/second)

Network Message Limits

  • MAX_MESSAGE_SIZE: 3,145,728 bytes (3MB)
  • MAX_CONCURRENT_SEND_RAW_TX: 1,000 concurrent requests
  • DEFAULT_SEGMENT_SIZE: 960 bytes (based on MTU 1500)

RaptorCast Message Transmission

  • MTU Size: 1,452 bytes (default)
  • Chunk Structure:
    • Header: 108 bytes (signature 65 + metadata 43)
    • Merkle proof: 100 bytes (depth 6)
    • Actual data: ~1,220 bytes
  • Redundancy Settings:
    • Current: 3x (generates 3x original data chunks)
    • Maximum: 7x (DoS protection limit)
  • Chunk Calculation:
    Chunks = ceil(message_size / 1,220) × 3
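As a sanity check, the chunk count can be estimated in shell using the formula above (the 2MB size and ~1,220-byte payload are the assumptions from this section; exact counts vary slightly with header/proof overhead):

# Estimate chunk count: ceil(message_size / payload) * redundancy
msg_size=2097152   # 2MB proposal
payload=1220       # usable bytes per chunk
redundancy=3
echo $(( (msg_size + payload - 1) / payload * redundancy ))
# prints 5157, in line with the ~5,163 figure used below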

2. RaptorCast Packet Analysis (200 Validators)

Basic Assumptions

  • Validators: 200 nodes with equal stake (voting power)
  • Proposal size: 2MB (maximum block size)
  • Redundancy: 3x
  • MTU: 1,452 bytes (actual payload: ~1,220 bytes)

Packet Calculation

1. Leader sends proposal:

  • Original chunks: 2MB ÷ 1,220 bytes ≈ 1,721 chunks
  • With redundancy: 1,721 × 3 = 5,163 chunks
  • Per validator allocation: 5,163 ÷ 199 ≈ 26 chunks (average)

2. Rebroadcast load:

  • Each node receives ~26 chunks as recipient
  • Rebroadcasts each chunk to remaining 198 nodes
  • Rebroadcast packets: 26 × 198 = 5,148 packets

3. PPS Analysis by Role

When Leader

  • Send: 5,163 chunks (initial transmission)
  • Receive: 199 vote packets
  • Total packets: ~5,362 per round

When Non-Leader

  • Receive:
    • Direct from leader: ~26 chunks
    • Via rebroadcast: ~1,695 chunks (for recovery)
    • Vote packets: 199
    • Total receive: ~1,920 packets
  • Send:
    • Rebroadcast: ~5,148 packets (26 × 198)
    • Vote transmission: 199 packets
    • Total send: ~5,347 packets
  • Total packets: ~7,267 per round
Proposal Size    Leader (PPS)    Non-Leader (PPS)    Network Total
100KB            ~451            ~400-600            ~300KB
2MB              ~5,362          ~7,267              ~6MB

⚠️ With 2.5 rounds/sec: non-leader peak = ~18,665 PPS
🔥 For consensus only: a 20,000 PPS firewall setting is recommended
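The 20,000 PPS figure follows directly from the per-round totals (using the non-leader round total of 7,466 packets from section 6 below):

# Consensus-only peak: round total x rounds/sec, rounded up for headroom
awk 'BEGIN { printf "peak = %.0f PPS -> cap at 20,000\n", 7466 * 2.5 }'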

4. Critical Issues with 2MB Blocks

  • High burst load: 7,000+ packets to process per round
  • Rebroadcast bottleneck: Each node retransmits 5,000+ packets
  • Network latency: Queueing delays from massive packet volume
  • Recovery time: A minimum of 1,721 chunks must be collected

Optimization Considerations

  • Reduce redundancy: 2x redundancy = 33% load reduction
  • Increase chunk size: Adjust MTU for larger payloads
  • Hierarchical propagation: Implement tiered broadcast structure
  • Selective rebroadcast: Stake-based forwarding priority

Bandwidth Comparison

  • Traditional broadcast: 2MB × 199 = 398MB (total network)
  • RaptorCast: 2MB × 3 = 6MB (total network)
  • Savings: ~98.5% bandwidth reduction

5. Comprehensive Network PPS Analysis

1. Consensus Messages (RaptorCast)

Proposal (2MB block):

  • Leader: 5,163 chunks sent
  • Non-leader: 26 received + 5,148 rebroadcast
  • Per round: ~7,267 packets

Vote messages:

  • Send: 199 (to all other nodes)
  • Receive: 199
  • Per round: 398 packets

2. Transaction Traffic (Point-to-Point)

  • Throughput: 10,000 TPS = 2MB/s
  • Batching: 256KB batches × 8/second
  • Packet size: 256KB = ~180 UDP packets
  • Distribution: Send to 3 future leaders
  • Send: 8 batches × 180 packets × 3 leaders = 4,320 packets/sec
  • Receive: Similar volume from other nodes = ~4,320 packets/sec
  • Total: ~8,640 packets/sec

3. Block Synchronization

  • Missing block requests/responses
  • 2MB block: ~1,400 packets
  • Average sync: 1-2 blocks/second
  • Per second: ~2,800 packets

4. State Synchronization

  • Merkle proof included state chunks
  • Variable based on state size
  • Average: 100KB/second sync
  • Per second: ~1,000 packets

5. Peer Discovery & Others

  • Peer information exchange
  • Heartbeat/ping messages
  • Per second: ~200 packets

6. Total PPS Requirements

Per Round Breakdown (400ms assumed)

Component            Leader    Non-Leader
Proposal Send        5,163     5,148
Proposal Receive     199       1,920
Vote Send/Receive    398       398
Round Total          5,760     7,466

Per Second Total (2.5 rounds/second)

Component                            Packets/Second
Consensus (2.5 rounds)               ~18,665
Transactions (10K TPS, 3 leaders)    ~8,640
Block Sync                           ~2,800
State Sync                           ~1,000
Peer Discovery                       ~200
TOTAL                                ~31,305 PPS

Bandwidth Analysis

Component          Send       Receive    Total
Consensus          ~15MB/s    ~15MB/s    ~30MB/s
Transactions       ~4MB/s     ~4MB/s     ~8MB/s
Sync               ~2MB/s     ~2MB/s     ~4MB/s
Total Bandwidth    ~21MB/s    ~21MB/s    ~42MB/s

🔥 Required firewall setting: 50,000 PPS minimum
⚠️ Recommended with margin: 70,000 PPS

7. Monitoring & Alert Thresholds

Key Metrics to Monitor

  • Port-specific PPS utilization
  • Number of dropped packets
  • RaptorCast decoding failure rate
  • Memory pool utilization

Alert Thresholds

Metric              Warning    Critical
PPS Utilization     > 80%      > 95%
Packet Drop Rate    > 1%       > 5%

Monitoring Commands

# Check iptables counters
iptables -L -v -n | grep hashlimit

# Monitor UDP traffic
tcpdump -i eth0 -n udp port 30303 -c 100

# Check dropped packets
netstat -su | grep -i drop
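NIC-level counters catch drops that never reach the kernel UDP stack; ethtool exposes them (replace eth0 with your interface; counter names vary by driver):

# Check NIC hardware/driver drop counters
ethtool -S eth0 | grep -i -E 'drop|discard'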

8. Important Considerations

  • Validator count scaling: Increase Vote PPS proportionally as validators increase
  • Network topology: Full nodes connected to multiple validators need higher PPS
  • Geographic distribution: Adjust burst allowance based on regional latency
  • RaptorCast efficiency: Actual redundancy may be lower than 7x depending on network conditions
  • DDoS defense: Additional rules needed for SYN flood and general DDoS attacks
🔒 These settings are a baseline; adjust based on your specific network conditions

💿 Software Requirements

Operating System

Required: Ubuntu 24.04 LTS or newer

Kernel: Linux kernel ≥ 6.8.0-60-generic

โš ๏ธ Critical Kernel Bug Warning

Linux kernel versions 6.8.0-56 through 6.8.0-59 contain a critical bug that causes Monad clients to hang in an uninterruptible sleep state.

Affected Versions (DO NOT USE):

  • 6.8.0-56-generic
  • 6.8.0-57-generic
  • 6.8.0-58-generic
  • 6.8.0-59-generic

Solution: Immediately upgrade to 6.8.0-60-generic or newer.
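To verify which kernel is running and pull the fixed one on Ubuntu (standard apt workflow; the package name may differ if you use HWE or cloud kernels):

# Check the running kernel
uname -r

# Upgrade to the latest generic kernel and reboot
sudo apt update && sudo apt install --only-upgrade linux-image-generic
sudo reboot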

๐Ÿค Contributing to Monad HCL

This hardware compatibility list is community-maintained. Share your hardware configurations and help other validators optimize their setups.

📦 View on GitHub · 💬 Join Discord

Maintainers: Monad Community

Last Updated: January 2025

Version: 1.0.0