Tadas Vilkeliskis

u6

My home server configuration guide and reference material. Last updated 2026-01-10.

Operating System

Debian 13

Hardware

A list of installed hardware. Configuration on this page is specific to the hardware listed here.

  • Supermicro H12SSL-I Server Motherboard (manual mirror)
  • AMD EPYC 7502 2.5GHz 32-core 200W Processor
  • Silverstone Technology XE04-SP5 CPU fan
  • A-Tech 128GB Kit (2x64GB) DDR4 3200MHz PC4-25600 ECC RDIMM 2Rx4 Dual Rank 1.2V ECC Registered DIMM 288-Pin Server & Workstation RAM
  • Ubit WiFi Card 6E 5400Mbps PCIe WiFi Card. Intel AX210 chipset
  • ASUS TUF Gaming GeForce RTX 4090 OG OC Edition Gaming Graphics Card
  • Samsung 980 PRO SSD 2TB PCIe NVMe Gen 4
  • Two Seagate Exos X24 ST24000NM000H 24TB 7.2K RPM SATA 6Gb/s 512e CMR
  • Noctua NF-A8 PWM (80mm) fan for Exos drives
  • Silverstone RM600 chassis
  • Corsair HX1000i power supply

Storage

u6 is intended for long-term redundant file storage, backed by a ZFS pool. I don't plan to have hundreds of disks, probably 6-10 hard drives at most, which should grow into roughly 100TB of storage over time and be more than sufficient for my needs. I decided on a RAID1 (mirror) layout and build the pool out of two-drive mirrors. Each mirror can tolerate a single drive failure at the cost of reduced total capacity.
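A minimal sketch of how such a pool grows, assuming the pool is named tank (as created below) and using placeholder WWNs for two hypothetical new drives: attaching another two-drive mirror vdev adds capacity, and ZFS stripes new data across all mirrors.

# placeholder WWNs; look up the real ones under /dev/disk/by-id/
zpool add tank mirror \
	/dev/disk/by-id/wwn-0xEXAMPLEDRIVE3 \
	/dev/disk/by-id/wwn-0xEXAMPLEDRIVE4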

Prerequisites

ZFS tools on Linux must be installed and the kernel module loaded.

# install linux headers first, so dkms module can be built
apt install linux-headers-$(uname -r)
apt install zfs-dkms zfsutils-linux
modprobe zfs
# to force kernel module build if modprobe does not work
dkms autoinstall
# persist module after reboot
echo "zfs" | tee /etc/modules-load.d/zfs.conf

Initial ZFS pool setup

lsblk -o name,maj:min,rm,size,ro,type,mountpoint,wwn
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT WWN
sda               8:0    0 21.8T  0 disk            0x5000c500e8b3e457
sdb               8:16   0 21.8T  0 disk            0x5000c500e8a89971
nvme0n1         259:0    0  1.8T  0 disk            eui.002538b331a2f64d
├─nvme0n1p1     259:1    0  976M  0 part /boot/efi  eui.002538b331a2f64d
├─nvme0n1p2     259:2    0  977M  0 part /boot      eui.002538b331a2f64d
└─nvme0n1p3     259:3    0  1.8T  0 part            eui.002538b331a2f64d
  ├─u6--vg-root 254:0    0   64G  0 lvm  /
  └─u6--vg-home 254:1    0  128G  0 lvm  /home

This shows the current block devices on my system. I will use the sda and sdb drives for my initial ZFS mirror. However, it's very important to use WWN identifiers rather than sdX names when building the pool. WWNs are stable, unique identifiers bound to the hardware, so the drives can be rearranged inside the chassis or connected in a different order and the pool will still assemble as expected.
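The sdX-to-WWN mapping can also be read straight from /dev/disk/by-id, which is where the stable device paths used in the next step come from:

# each wwn-* symlink points at whatever sdX node the drive currently has
ls -l /dev/disk/by-id/ | grep wwn-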

Next, create the pool:

zpool create -o ashift=12 tank mirror \
	/dev/disk/by-id/wwn-0x5000c500e8b3e457 \
	/dev/disk/by-id/wwn-0x5000c500e8a89971

zfs set compression=lz4 tank
zfs set atime=off tank

zfs create tank/media
zfs create tank/git

zfs set recordsize=1M tank/media
zfs set compression=off tank/media

zfs set recordsize=16k tank/git

Check status with zpool list and zpool status. At this point we have 24TB of mirrored storage. One important flag is ashift=12: it sets the pool's minimum block size to 2^12 = 4096 bytes, matching the 4K physical sectors of large drives like the Seagate Exos X24 I'm using here, and it cannot be changed after the vdev is created. I also enabled lz4 compression at the pool level (turned off again for tank/media, since media files are already compressed) and disabled access time logging to improve performance.
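To double-check that these settings took effect, the properties can be read back:

zpool get ashift tank
zfs get compression,atime,recordsize tank tank/media tank/git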

Thermal management

In my configuration I am using a consumer-grade Noctua fan for my two Exos drives. It is much quieter and won't make my office sound like the inside of a jet turbine. However, the Supermicro server motherboard expects server-grade fans that run at full speed all the time, so the low-RPM Noctua ends up undetectable by the motherboard.

The fan is connected to the FANA header of the motherboard, making it part of Zone 1, which can be controlled independently of the CPU fans. Below is the script installed on the server and the corresponding systemd service and timer that automatically adjust the fan speed if my drives start getting hot.
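The raw IPMI bytes the script sends (0x30 0x70 0x66 ...) are the commonly documented Supermicro duty-cycle commands rather than anything taken from the board manual, so treat them as an assumption and test them by hand before relying on the script:

# read the current duty cycle of zone 1 (FANA)
ipmitool raw 0x30 0x70 0x66 0x00 0x01
# one-off test: set zone 1 to 50% duty (0x32)
ipmitool raw 0x30 0x70 0x66 0x01 0x01 0x32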


root@u6:~# cat /usr/local/bin/hdd-fan-control.sh
#!/bin/bash

# Configuration
DRIVES=("/dev/sda" "/dev/sdb")
FAN_ZONE="0x01" # Zone 1 for FANA/B
THRESHOLD_LOW=35
THRESHOLD_HIGH=45

# Get highest temperature among all drives
MAX_TEMP=0
for DRIVE in "${DRIVES[@]}"; do
    # column 10 of the Temperature_Celsius attribute is the raw value in degrees C
    TEMP=$(smartctl -A "$DRIVE" | awk '/Temperature_Celsius/ {print $10}')
    if [[ "$TEMP" -gt "$MAX_TEMP" ]]; then MAX_TEMP=$TEMP; fi
done

# Logic to set Duty Cycle (Hex)
if [ "$MAX_TEMP" -le "$THRESHOLD_LOW" ]; then
    SPEED="0x1E" # 30% speed if cool
elif [ "$MAX_TEMP" -ge "$THRESHOLD_HIGH" ]; then
    SPEED="0x64" # 100% speed if hot
else
    SPEED="0x32" # 50% speed for mid-range
fi

# Apply the speed to Zone 1
/usr/bin/ipmitool raw 0x30 0x70 0x66 0x01 $FAN_ZONE $SPEED
root@u6:~# cat /etc/systemd/system/hdd-fan-control.service
[Unit]
Description=Monitor HDD temps and adjust FANA speed
After=network.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/hdd-fan-control.sh

[Install]
WantedBy=multi-user.target
root@u6:~# cat /etc/systemd/system/hdd-fan-control.timer
[Unit]
Description=Run HDD Fan Control every 5 minutes

[Timer]
OnBootSec=2min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
root@u6:~#

Once these files are set up, the timer can be enabled with systemctl daemon-reload && systemctl enable --now hdd-fan-control.timer.
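To confirm the timer is firing and the last run succeeded:

systemctl list-timers hdd-fan-control.timer
journalctl -u hdd-fan-control.service -n 20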

Kernel upgrades

The ZFS kernel module is provided via DKMS. If headers for the new kernel are not available, the zfs module cannot be built and the system won't be able to mount the ZFS pool. A proactive fix is to make sure kernel headers are always available by installing the headers meta-package: apt install linux-headers-amd64.
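A quick way to check whether DKMS has already built the zfs module for a newly installed kernel, using only stock Debian tooling:

# which kernels have a built zfs module
dkms status zfs
# header packages currently installed
dpkg -l 'linux-headers-*' | grep '^ii'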

Installing the headers meta-package ensures the DKMS module is rebuilt whenever a new kernel is installed. If the DKMS hooks fail, you can force a rebuild of the zfs module:


sudo dkms autoinstall
sudo modprobe zfs

# Import the pool (ZFS should remember 'tank')
sudo zpool import -a

# mount all datasets
zfs mount -a
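
And to confirm everything came back after the rebuild:

zpool status tank
zfs list -o name,mountpoint,mounted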