Installing & configuring Reth

Joseph H

14th May, 2024

I've been running various blockchain node software for a while, primarily for ease of data collection and to avoid the heavy price tags that come with remote API service providers. For Ethereum, my client of choice for the longest time had been Erigon, the forerunner in minimising the amount of disk space that archive nodes take up.

In mid-2023, I started using a Rust package called cryo for managing my exports over the ol' reliable Ethereum-ETL. It doesn't provide too many additional benefits, but it is easier to use. Shortly afterwards, I became aware of the Reth client implementation that the same folks were working on. And, after listening to Georgios describe his vision for Reth on a recent ZK Knowledge podcast episode, I became fully convinced that a lot of the excitement around Reth is warranted. The vision, as Georgios describes it in the episode, is a high-performance, fully modularised node that gives operators immense room for customisation: the ability to run L2 chains, ZK co-processors and other off-chain infrastructure that relies on a great deal of data indexing, among other requirements.
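To give a flavour of what that looks like in practice, here's roughly how I pull a small slice of block data with cryo against a local node. The flag names below are from memory of cryo's README, so treat them as an assumption and double-check against cryo --help:

  # Export ~100 blocks of data from a local node into ./data (parquet output by default)
  cryo blocks --blocks 19000000:19000100 --rpc http://localhost:8545 --output-dir ./data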

Just a couple of days ago, Reth Execution Extensions were released as a way to seamlessly integrate post-execution hooks atop Reth. With minimal lines of code, it becomes easy to write software that runs simultaneously alongside the base EVM. I'm currently immersing myself in understanding ZK EVM co-processor designs, and plan on exploring how Reth may be leveraged for this capability.

This post serves as a guide on how to get set up running the Reth and Lighthouse clients on Ubuntu 24.04 LTS, with a few added bonuses for UPS backup and VPN support that pertain to my own setup.

Hardware

My hardware setup:

  • Intel NUC 12 i7-1260P (12 cores, 16 threads)
  • 64GB RAM
  • Samsung 980 Pro 1TB
  • MX500 4TB SATA SSD
  • APC Back-UPS BX750MI-AZ AVR UPS

Setting up drives and OS

Installing & Setting up Ubuntu Server LTS

  1. Download the latest LTS version of Ubuntu Server from https://ubuntu.com/download/server, write it to a USB drive and install it,
    then update/upgrade: sudo apt-get update && sudo apt-get upgrade
  2. After installation, extend the root logical volume to use the full disk space. By default the installer probably only allocates 100GB or 200GB of it:
    1. Resize the logical volume:
      sudo lvresize -vl +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
    2. Resize the filesystem
      sudo resize2fs -p /dev/mapper/ubuntu--vg-ubuntu--lv
    3. Check the size of the logical volume to see if everything went smoothly
      df -hT /dev/mapper/ubuntu--vg-ubuntu--lv
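    If you want to see how much room there was to grow before and after, a quick check (assuming the installer's default ubuntu-vg/ubuntu-lv naming used above) is:
      sudo vgs ubuntu-vg                  # VFree should be close to zero after the resize
      sudo lvs ubuntu-vg/ubuntu-lv        # LSize should now cover the whole drive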
  3. Set up the data SSD
    1. Run lsblk to view a list of available drives
    2. sudo fdisk /dev/sda
    `Welcome to fdisk (util-linux 2.37.2).
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.`
    
    `Device does not contain a recognized partition table.
    The size of this disk is 3.6 TiB (4000787030016 bytes). DOS partition table format cannot be used on drives for volumes larger than 2199023255040 bytes for 512-byte sectors. Use GUID partition table format (GPT).`
    
    `Created a new DOS disklabel with disk identifier 0xca88577d.`
    
    `Command (m for help): g
    Created a new GPT disklabel (GUID: 0E4A4322-AF8E-E94C-8241-27DCF603DFE6).`
    
    `Command (m for help): n
    Partition number (1-128, default 1): 1
    First sector (2048-7814037134, default 2048):
    Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-7814037134, default 7814037134):`
    
    `Created a new partition 1 of type 'Linux filesystem' and of size 3.6 TiB.`
    
    `Command (m for help): w
    The partition table has been altered.
    Calling ioctl() to re-read partition table.
    Syncing disks.`
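    Before creating the filesystem, it's worth confirming that the new partition actually shows up:
      lsblk /dev/sda                      # should now list sda1 spanning the full 3.6 TiB
      sudo fdisk -l /dev/sda              # prints the GPT disklabel and the new partition entry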
    
    3. Create the filesystem on the new partition (sda1, not the raw disk, so the GPT created above is kept): sudo mkfs.ext4 /dev/sda1
    mke2fs 1.46.5 (01-Jan-2024)
    Discarding device blocks: done
    Creating filesystem with 976754646 4k blocks and 244195328 inodes
    Filesystem UUID: c23b048d-448d-4b44-a287-aa57aa68cdf2
    Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
    4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
    102400000, 214990848, 512000000, 550731776, 644972544
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (262144 blocks): done
    Writing superblocks and filesystem accounting information: done
    
    4. Create a mount point and manually mount the partition: sudo mkdir -p /mnt/ssd1 && sudo mount /dev/sda1 /mnt/ssd1
    5. Find the UUID of the partition: blkid /dev/sda1
    6. Add auto-mounting for the partition: nano /etc/fstab
    UUID=______ /mnt/ssd1 ext4 defaults 0 0
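    To make sure the fstab entry is valid before the next reboot, remount via fstab and check the result:
      sudo umount /mnt/ssd1               # only if it's still mounted from the manual step
      sudo mount -a                       # mounts everything listed in /etc/fstab
      findmnt /mnt/ssd1                   # should show the ext4 filesystem on /mnt/ssd1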
    
  4. Set up UPS integration so that, in the case of a power outage, the server will safely shut down:
    1. Install apcupsd
      sudo apt-get -y install apcupsd
    2. Take a backup of the original file:
      sudo cp /etc/apcupsd/apcupsd.conf /etc/apcupsd/apcupsd.conf.bak
    3. Next, edit the configuration files:
      sudo nano /etc/apcupsd/apcupsd.conf
    UPSNAME smartups750
    UPSCABLE usb
    UPSTYPE usb
    DEVICE 
    POLLTIME 60
    

    sudo cp /etc/default/apcupsd /etc/default/apcupsd.bak
    sudo nano /etc/default/apcupsd
    ISCONFIGURED=yes
    
    4. You may check the configuration via
      apcaccess status
    5. Run a test:
      sudo systemctl stop apcupsd
      sudo apctest
    6. Once done, start process:
      sudo systemctl start apcupsd
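    apcupsd will trigger a shutdown when the battery runs low; if you want it to act earlier than the defaults, the relevant directives in /etc/apcupsd/apcupsd.conf are shown below (the values here are just an illustration, not a recommendation):
      BATTERYLEVEL 20    # shut down once charge drops below 20%
      MINUTES 10         # ...or once estimated runtime drops below 10 minutes
      TIMEOUT 0          # 0 = rely on the two thresholds above
    A quick way to keep an eye on the UPS state afterwards:
      apcaccess status | grep -E 'STATUS|BCHARGE|TIMELEFT'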
  5. Optionally set up WireGuard
    1. Change to root user: sudo su - root
      • apt-get install wireguard
      • cd /etc/wireguard/
    2. Generate private/public key pair:
      umask 077; wg genkey | tee privatekey | wg pubkey > publickey
    3. Copy private and public key info alongside wireguard vpn server info into conf file:
      nano vpn.conf
      [Interface]
      PrivateKey = <private_key>
      Address=192.168.69.2/24 # The tunnel address assigned to this peer by the VPN server
      
      [Peer]
      # VPN Server
      PublicKey=<vpn_servers_public_key>
      Endpoint=<server_ip>:<server_port>
      AllowedIPs = 0.0.0.0/0
      
    4. Spin up the service
      wg-quick up vpn
    5. Add the WireGuard service to systemd:
    sudo systemctl enable wg-quick@vpn.service
    sudo systemctl daemon-reload
    
    6. Start the new service (a quick check that the tunnel is up follows below):
    sudo systemctl start wg-quick@vpn
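    To confirm the tunnel is actually up and routing traffic (any what's-my-IP service will do for the second check):
      sudo wg show vpn                    # the peer should show a recent 'latest handshake'
      curl -4 -s https://ifconfig.me      # the reported public IP should now be the VPN server's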
    
    7. To remove the service and clean up the system:
    sudo systemctl stop wg-quick@vpn
    sudo systemctl disable wg-quick@vpn.service
    sudo rm -i /etc/systemd/system/wg-quick@vpn*
    sudo systemctl daemon-reload
    sudo systemctl reset-failed
    
    See more: https://www.ivpn.net/setup/linux-wireguard/

Installing Reth

  1. Install Rust
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
  2. Install pre-requisites for building reth and lighthouse from source:
    sudo apt install -y git gcc g++ make cmake pkg-config llvm-dev libclang-dev clang
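    The rustup script installs the toolchain under ~/.cargo; if cargo isn't on your PATH in the current shell yet, load it and sanity-check the toolchain before building:
      source "$HOME/.cargo/env"
      rustc --version && cargo --version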
  3. Build lighthouse
    1. Fetch git: git clone https://github.com/sigp/lighthouse.git
    2. Make sure you're on the latest stable branch:
      cd lighthouse && git checkout stable
    3. Now, build the lighthouse binary with the maxperf profile:
      PROFILE=maxperf make
  4. Build reth
    1. Fetch git: git clone https://github.com/paradigmxyz/reth
    2. Make sure you're on the latest release:
      cd reth && git checkout v0.2.0-beta.6
    3. Now, build the reth binary:
      RUSTFLAGS="-C target-cpu=native" cargo build --profile maxperf
  5. Running lighthouse & reth
    1. Create a shared secret for the clients to be able to communicate with one another
      sudo mkdir /secrets && openssl rand -hex 32 | tr -d "\n" | sudo tee /secrets/jwt.hex
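    The secret is just 32 random bytes, hex-encoded; optionally tighten its permissions and confirm it looks right (the services below run as root, so they can still read it):
      sudo chmod 600 /secrets/jwt.hex
      sudo wc -c /secrets/jwt.hex         # should report 64 bytes (64 hex characters, no newline)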
  6. Turn 'em into systemd services
    1. Copy the binaries into /usr/local/bin. The reth binary lands in target/maxperf inside the reth repo, while lighthouse's PROFILE=maxperf make installs its binary to ~/.cargo/bin:
    sudo cp ./target/maxperf/reth /usr/local/bin/reth
    sudo cp ~/.cargo/bin/lighthouse /usr/local/bin/lighthouse
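    It's worth confirming the copies run and report the versions you just built:
      /usr/local/bin/reth --version
      /usr/local/bin/lighthouse --version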
    
    2. Reth: Create the file /etc/systemd/system/reth.service:
    [Unit]
    Description=RETH
    After=network.target
    StartLimitIntervalSec=0
    
    [Service]
    Type=simple
    Restart=always
    RestartSec=1
    ExecStart=/usr/local/bin/reth node --datadir /mnt/ssd1/ethereum/ --metrics 0.0.0.0:9002  --authrpc.jwtsecret /secrets/jwt.hex --authrpc.addr 127.0.0.1 --authrpc.port 8551 --http --ws --rpc-max-connections 429496729 --http.api trace,web3,eth,debug --ws.api trace,web3,eth,debug
    
    [Install]
    WantedBy=multi-user.target
    
    3. Lighthouse: Create the file /etc/systemd/system/lighthouse.service:
    [Unit]
    Description=Lighthouse
    After=network.target
    StartLimitIntervalSec=0
    
    [Service]
    Type=simple
    Restart=always
    RestartSec=1
    ExecStart=/usr/local/bin/lighthouse bn --network mainnet --datadir /mnt/ssd1/lighthouse --metrics --execution-endpoint http://localhost:8551 --execution-jwt /secrets/jwt.hex --checkpoint-sync-url https://mainnet.checkpoint.sigp.io --disable-deposit-contract-sync
    
    [Install]
    WantedBy=multi-user.target
    
    4. Start the services:
    sudo systemctl start reth
    sudo systemctl start lighthouse
    
    5. To ensure that the services are running:
    sudo systemctl status reth
    sudo systemctl status lighthouse
    
    6. To check the logs of the running services:
    sudo journalctl -u reth -f # -n 1000 to see more logs
    sudo journalctl -u lighthouse -f # -n 1000 to see more logs
    
    7. To enable autostart:
    sudo systemctl enable reth
    sudo systemctl enable lighthouse
    

See more: https://blog.merkle.io/blog/run-a-reth-node

Hopefully, if you've managed to get to this stage, you've got your node set up and running and are now waiting for the clients to fully sync. I expect this process to take one to two days.
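If you'd rather not just tail the logs while waiting, you can poll the execution client over JSON-RPC; reth serves HTTP on port 8545 by default when started with --http, and the standard eth_syncing method returns false once the node has caught up:

  curl -s -X POST -H 'Content-Type: application/json' \
    --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' \
    http://localhost:8545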