It’s been a few months since my last update. A lot can happen over four months, and a lot did, so I’m going to talk about some of it.
The file server in my living room has been decommissioned. I still have the hardware and intend to use it somewhere because I don’t like the idea of contributing to e-waste, even if it’s Ivy Bridge-EP with DDR3 memory. I’ve also decommissioned the Synology DS418 I’ve been running for several years; however, as I still needed file storage, I built a new-ish file server. My goals with the new device were pretty simple: build a NAS that combines the capacities of the old ones, provides faster write performance than either of them, provides room for expansion, and is efficient enough to run in the basement without tripping a breaker, all while maintaining a price point at or below that of an 8-bay Synology NAS. From a software perspective, I needed an operating system that let me use a mix of hard disk capacities without having to sacrifice the extra capacity provided by the larger disks in the set.
With these goals in mind, I settled on a SuperMicro X11SSH-GF-1585L mainboard. This is alphabet soup: it’s a Micro-ATX board with one PCIe x16 slot, one PCIe x8 slot, and one PCIe x4 slot (in x8 form factor); one PCIe M.2 socket; six SATA3 ports; four DDR4 SODIMM slots; and, crucially, an Intel Xeon E3-1585L v5 CPU. The one I grabbed on eBay had those four SODIMM slots populated with a total of 64GB of RAM, which is both the maximum the board supports and the amount I wanted to use. The CPU is a 4c/8t part, contrasting with the 10c/20t CPU in my old TrueNAS server. Realistically, this is plenty, as the server will primarily serve files and won’t need the extra compute horsepower. Additionally, the E3-1585L v5 is a 45W part, whereas the E5-2690 v2 it replaced was 130W, so the drop in core count (and TDP) contributes heavily to my efficiency goal.
The PCIe x16 slot is occupied by a 16-port SAS HBA, bringing the total number of drives the system can host to 23 (including the M.2 slot). The x8 slot holds a PCIe M.2 carrier card, pushing that count to 24. Finally, the x4 slot has an Intel I226-V NIC, which increases the network speed from the 1Gbps provided by the integrated I350 chip to 2.5Gbps. All of this means that the GeForce GTX 1650 in the TrueNAS box isn’t being reused, further contributing to the power efficiency of the new box.
The case I chose for this application is an odd one: the Jonsbo N5. It’s a big black cube with a pair of backplanes yielding a total of twelve 3.5″ hot-swap drive bays. The bays themselves are trayless: Jonsbo provides a set of HDD screws with rubber grommets and rubber handles that must be attached to each drive in order to slot it in properly. The one ding I had against this case was that the fans didn’t generate enough airflow to cool the CPU effectively, since the CPU came equipped with a 1U passive heatsink. Even attaching a 92mm fan to a 4″ duct didn’t push enough cool air to the CPU. I did manage to find an active copper heatsink for the CPU, which brought its temps into a sane range, so I swapped the passive heatsink for that one and kept the 4″ duct for good measure.
While TrueNAS Scale does support mixed hard disk capacities, it limits the usable capacity of each drive in the array to that of the smallest disk. So, if I throw two 6TB, two 8TB, two 10TB, and two 12TB disks into a single array, the larger disks will all be treated like 6TB drives. I really liked Synology’s Hybrid RAID (SHR) feature, which let me use the full capacity of each drive. For example, RAID 5 (or RAIDZ1) would yield a total of 38.2TB of usable capacity with these drives, but SHR would yield 54.5TB, a huge difference that effectively rules TrueNAS Scale out. There are ways to run Synology’s DSM operating system on non-Synology hardware, but apart from being against their license terms, there’s no guarantee the methods will support future DSM versions, so that’s out as well. Ultimately, I decided on Unraid as the operating system for this build.
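To make that gap concrete, here’s a rough back-of-the-envelope sketch in Python of how both numbers fall out of the same set of disks. It isn’t an exact model of ZFS or SHR overhead, just the basic arithmetic: single-parity ZFS truncates every member to the smallest disk, while an SHR-style layout loses roughly one largest disk to redundancy. The results match the figures above when read as TiB, which is how most storage UIs report capacity.

```python
# Rough capacity arithmetic for the drive mix above (not an exact model of
# ZFS or SHR overhead). Sizes are decimal TB as sold; output is in TiB.
drives_tb = [6, 6, 8, 8, 10, 10, 12, 12]

TB, TIB = 10**12, 2**40

def raidz1_style_tib(drives):
    # Single parity, with every member truncated to the smallest disk.
    return (len(drives) - 1) * min(drives) * TB / TIB

def shr1_style_tib(drives):
    # SHR-style single redundancy: roughly total capacity minus the largest disk.
    return (sum(drives) - max(drives)) * TB / TIB

print(f"RAIDZ1-style: {raidz1_style_tib(drives_tb):.1f} TiB")  # ~38.2
print(f"SHR-style:    {shr1_style_tib(drives_tb):.1f} TiB")    # ~54.6
```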
Let me tell you, it’s a good thing Unraid has a 30-day trial. I had never used it before and didn’t want to pay for a license if it wasn’t going to suit my needs. However, within the first 5 days I knew it was the perfect fit, so I sprang for a Lifetime license. Within the first 10 days I had half the drives fail (the 6TB and 12TB ones), so I replaced them with 8TB ones. I also added a pair of 64GB SATA DOMs (basically tiny SSDs that plug directly into the SATA ports on the mainboard), and a pair of 512GB M.2 drives as a mirrored write cache. Unraid is strange in that you have to boot it off a USB drive, and most consumer USB drives are poor quality, so I put it on the ATP Nanodura, a USB 2.0 thumb drive with SLC memory designed specifically for industrial use. It’s slow, but speed ultimately doesn’t matter because the OS mainly runs out of system memory.
Three months on from that adventure, the system is completely solid. The two 10TB drives act as the parity drives, meaning the array can survive up to two simultaneous disk failures. The ten 8TB drives give me 80TB of usable capacity, and the mirrored 512GB SSDs let me saturate the 2.5Gbps link when taking backups or copying large amounts of data. A fantastic feature is the system’s ability to spin down the mechanical disks when they’re not needed, so the box should draw minimal power when nothing is accessing the array. There are a couple of Docker containers running, adding functionality beyond file serving that I hadn’t originally planned on, but even with everything running the system is using less than 10% of its available RAM and, after all backups have completed (both my main system and Proxmox), only 23% of the available disk space is used.
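For anyone curious where the 80TB comes from, Unraid’s capacity model is refreshingly simple: each parity drive has to be at least as large as the largest data drive and contributes no usable space, while every data drive contributes its full capacity regardless of the mix. A minimal sketch of that arithmetic for my layout:

```python
# Minimal sketch of Unraid's capacity model for the array described above.
# Each parity disk must be >= the largest data disk and adds no usable space;
# every data disk contributes its full capacity, mixed sizes or not.
parity_tb = [10, 10]   # two 10TB parity drives (dual parity tolerates two failed disks)
data_tb = [8] * 10     # ten 8TB data drives

assert min(parity_tb) >= max(data_tb), "parity disks must be at least as large as the biggest data disk"

usable_tb = sum(data_tb)
print(f"Usable capacity: {usable_tb} TB")  # 80TB decimal, roughly 72.8 TiB as most tools report it
```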
Overall, I’m quite happy with this new box. However, I have no idea how much power it’s consuming, so I’ll have to get my hands on something to let me log that information.
But did I meet my pricing requirement? Well, let’s see … the closest thing to what I built is a Synology RackStation RS3618xs, which costs $2799.99 bare (i.e. no drives) at B&H Photo & Video, where I buy most of my stuff now. If I fill all 12 drive bays with new 8TB WD Red disks at $179.99 each, that’s another $2159.88 on top, bringing the total to $4959.87 (which also happens to be the cost of B&H’s RS3618xs + disks bundle). Adding a pair of 512GB NVMe drives ($49.99 each) and a Synology M2D20 dual-M.2 card ($169.99) yields a final price of $5229.84.
In reality, I got most of my stuff on eBay: the mainboard, I/O shield, SATA DOMs, and 8TB disks. The Jonsbo N5, power supply, 92mm-to-4″ fan adapter, 4″ hose coupler, 4″ 90-degree elbow, and Dynatron copper heatsink came from Amazon. The 8GB ATP Nanodura USB drive came from DigiKey. The Unraid license came from Unraid, appropriately enough. All in, that’s $2362.07, less than half the cost of the Synology solution. So, yes, the pricing requirement is more than met.
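If you want to check the math, here’s the tally using only the prices quoted above (the DIY side is just my all-in total, since I haven’t itemized it here):

```python
# Price comparison using only the figures quoted above.
synology_base = 2799.99        # RS3618xs, diskless, at B&H
disks = 12 * 179.99            # twelve 8TB WD Red drives
nvme_cache = 2 * 49.99         # pair of 512GB NVMe drives
m2_card = 169.99               # Synology M2D20 dual-M.2 adapter
synology_total = synology_base + disks + nvme_cache + m2_card

diy_total = 2362.07            # my all-in eBay/Amazon/DigiKey/Unraid total

print(f"Synology build: ${synology_total:.2f}")              # $5229.84
print(f"DIY build:      ${diy_total:.2f}")                   # $2362.07
print(f"DIY / Synology: {diy_total / synology_total:.0%}")   # ~45%
```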
Is this the approach for everyone? No, absolutely not. However, it’s the approach for me, so I took it and am happy with the result.