Disconnect

By now you know that I enjoy tech. One with a knack for self-promotion may say I’m “passionate” about it, but I think it’s more in-character for me to say I like it a lot. Technology is a thread that is woven through every aspect of my day whether my eyes are open or shut. I’ve been online practically non-stop since 1998. My kids have heard this phrase numerous times: I am the internet.

There is no way any of what I just said is healthy over the long term. By “long term” I’m talking years, not just a single year. That’s where the title of this post comes in: disconnect. I can’t be the internet. I can’t constantly be online, and I need to shut my computer off and take my hands off the keyboard every so often.

Somehow, my wife and I stumbled upon our preferred escape route: glamping. Apparently the phrase isn’t as well-known as I once thought it was, so for those who are not familiar, it’s short for “glamorous camping” — camping with modern amenities. We don’t go all the way to the top of the scale with internet, streaming services, or over-the-air TV; instead it’s a solid roof, floor, and walls, a proper mattress to sleep on, a standard bathroom (toilet, basin, and shower with heated water), and air conditioning, all in the middle of nowhere. That’s how we prefer to disconnect. Sure, we’ll take some movies with us in the form of DVDs or files on a laptop we plug into a TV, but if we don’t bring it along, forget about it.

Can we get online? Sure, of course we can. We always have our phones, and there’s “LTE” coverage (3G speed), but while we can tolerate the slower speeds, having grown up on 14.4k, we choose not to. It’s an easy choice to make, especially after using 10Gbps internet. Takes a bit for the kids to get used to it, but they come around after a few hours. The dogs? They couldn’t care less; they’re with us and they get to see more squirrels.

And, after a week of campfires (actual fires, outdoors, with wood … not “glampfires”), grilling, getting eaten alive by mosquitoes, getting crisped by the sun, fishing, walking around with the dogs, swimming in whatever (safe) bodies of water we encounter, we get back behind the wheel and exchange touching grass for #!/bin/bash yet again. Sure, the commands come slowly at first, but the thoughts come more clearly … at least, once the >1500 emails are cleared out of the ol’ inbox.

So, with that, it’s time to disconnect for a bit. But not before a nice round of Satisfactory. Gotta get that Thermal Propulsion Rocket factory up and running, after all.

No.

To answer the question posed by my last post: no. A server cannot be my main workstation. Or, at least, that server can’t be my main workstation. The one I’m still using — 3900X with 64GB of DDR4-3200 — is running perfectly, but most importantly, it’s running fast. From a first-hand usage perspective, the Ryzen wrecks the Xeon up and down the stack … but given that the Xeon was launched almost 10 years ago, and the Ryzen has been around for less than 5, I suppose that’s to be expected. 

With that said, my homelab server’s CPU is identical to the one in my workstation, so … maybe I’m running a server as my primary workstation. Only difference is that the workstation has 64GB of non-ECC RAM. 

The E5-2690 v2 is now running TrueNAS Scale and sits quietly in a corner, absorbing data backups from various other machines around the house. It’s chock-full of Noctua fans that run completely silently in spite of the SuperMicro mainboard’s efforts to ramp them up. Seriously, if I leave the default fan settings in the BMC, the fans all cycle between minimum and maximum speed every few seconds, which is obnoxious. It only happens with Noctua fans, likely because their reasonably low RPMs dip below the BMC’s lower fan thresholds, which it interprets as a failing fan and answers by ramping everything to full. Overall I’m pretty happy with TrueNAS Scale on that box, but I’m pretty sure I don’t use even 25% of its functionality.
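
If you’re fighting the same behavior, the commonly suggested fix is to drop the BMC’s lower fan thresholds below the Noctuas’ idle speed so it stops panicking. A rough sketch with ipmitool (sensor names and RPM values are examples, not my exact settings; check what your fans actually idle at first):

    apt install ipmitool                 # local access needs the ipmi_si/ipmi_devintf modules; remote works with -I lanplus -H <bmc-ip> -U <user> -P <pass>
    ipmitool sensor list | grep -i fan   # find the exact fan sensor names and current readings
    # Lower the "lower non-recoverable / critical / non-critical" thresholds (in RPM)
    # so a slow-spinning fan never trips the BMC into its full-speed ramp.
    ipmitool sensor thresh FAN1 lower 100 200 300
    ipmitool sensor thresh FAN2 lower 100 200 300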

I think I can switch my parents’ file server to TrueNAS Scale, on that note. It’s a server in name only, really: it’s a mini-ITX Sandy Bridge machine with 16GB of DDR3, a 250GB SSD boot volume, and a pair of mirrored 1TB HDDs. The hardware runs Proxmox VE which, in turn, runs Pi-Hole and a Windows 10 VM that hosts a singular file share. Why bother with Windows 10, though, when TrueNAS Scale is purpose-built for storing files? And, with the k3s functionality built into Scale, I can chuck Pi-Hole on there with minimal resource impact. 
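
If that migration happens, getting Pi-Hole onto the built-in k3s should only take a couple of commands. On TrueNAS Scale itself the Apps catalog is the easier path, but on a generic k3s node the community Helm chart looks roughly like this (the chart and repo are the mojo2600 community project; values for DNS exposure and persistence would still need tuning):

    helm repo add mojo2600 https://mojo2600.github.io/pihole-kubernetes/
    helm repo update
    helm install pihole mojo2600/pihole --namespace pihole --create-namespace
    kubectl -n pihole get pods    # wait for the pod to go Ready, then point clients at its DNS service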

Can a server be my main machine?

We usually think of servers as powerful, expensive machines sitting in data centers, and that’s usually accurate enough. Small to medium businesses have tower servers available as an option. Back in the day Intel used to make kits including chassis, mainboards, CPUs, memory, and cooling for DIY servers; SuperMicro has something similar now.

I’ve been running a homelab for a while. I started with an old gaming rig, switched to a proper server, moved to a couple of those SuperMicro kit components, and landed on another DIY option with an ASRock “server” mainboard in it. Now, I have those SuperMicro kit components left over.

The SuperMicro components are not considered that powerful in 2023. The CPU is a Xeon E5-2690 v2, making it a 10-core, 20-thread Ivy Bridge-EP part. Generationally this is on par with my first “old gaming rig” server, an i7-4930K, but with far more grunt. It’s running 128GB of ECC DDR3 RAM. The mainboard has a decent number of SATA ports, though most are SATA2. The PCI Express slots are also jerks — they’re all physically x16 or x8, but electrically there is only one x16 slot, and a number of the x8 slots are PCIe 2.0.

With all of this in mind, what do I actually need to do my work? I’m running a Ryzen 9 3900X for that purpose, which means the Xeon’s core count is pretty close. I’m running 64GB of DDR4-3200, mainly because I run multiple virtual machines; the 128GB of ECC DDR3 would allow me to allocate more memory to those VMs and possibly run more of them in general. I like booting from NVMe drives because of the performance, obviously, which I can’t do using the SuperMicro board. However, I don’t have an issue booting off a SATA drive and using NVMe on PCIe x1 add-in cards as necessary; that’s what I did when this machine ran my homelab, and performance was fine. The GPU I use for work was never considered high-end: it’s a FirePro W4100. PCI Express 3.0 is fine for that, and the SuperMicro board also has a built-in display output, though it’s just a single HD-15 port, so its usefulness is somewhat limited.

One of the VMs I’m running is Windows 11, which requires a recent CPU and a TPM. I’ve virtualized plenty of machines using the E5-2690 v2, and I’ve gotten Windows 11 virtualization working on an Alder Lake-based KVM machine before, but I had to set the CPU model to Skylake. Virtualizing the TPM is trivial, but I’m not sure I’d be able to set Skylake as the CPU model on an Ivy Bridge-EP host.
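
For reference, that combination looks roughly like this on a plain libvirt/KVM host (a from-memory sketch with invented names and paths, not the exact command I ran; swtpm has to be installed for the emulated TPM, and whether QEMU will accept a Skylake model on Ivy Bridge-EP silicon is exactly the open question):

    # Newer virt-install is required for the dotted --tpm options; use --os-variant win10 on an older osinfo-db.
    virt-install \
      --name win11-test \
      --memory 8192 --vcpus 4 \
      --cpu Skylake-Client \
      --boot uefi \
      --tpm model=tpm-crb,backend.type=emulator,backend.version=2.0 \
      --disk size=80 \
      --cdrom /path/to/Win11.iso \
      --os-variant win11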

So … I haven’t answered the question posed in the title of this post, but I think it’s worth trying. It’s an interesting thought experiment, to be sure, and I’m not expecting it to outperform the 3900X. However, as I mentioned in a previous post I’m pretty keen on keeping hardware running as long as possible, and not using a 10c/20t CPU seems like a tremendous waste. Let’s see what happens.

A Second Life

Nothing beats the feeling of opening a box of shiny new tech. Well, peeling the plastic off that piece of shiny new tech beats the pants off opening the box, but given that you have to open the box and witness the peelable plastic first, I’ll defer to the box opening as the best feeling.

Like most things, though, the tech we use has a limited lifespan. Stuff fails. Older hardware seems to grow slower and slower by the day. If we take the proper precautions, though, we can pass our older hardware on to the next user. My kids use my older mainboards, CPUs, and GPUs in their computers, for example, but only after I secure erase the SSD that goes with them.
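
By the way, the “secure erase” step is just the drive’s own sanitize routine, so there’s nothing exotic about it. A rough sketch from a live USB (device names are placeholders, so triple-check them; this is destructive):

    # SATA SSD: ATA Secure Erase via hdparm. The drive must not be "frozen";
    # a suspend/resume cycle usually un-freezes it.
    hdparm -I /dev/sdX | grep -A8 Security
    hdparm --user-master u --security-set-pass p /dev/sdX
    hdparm --user-master u --security-erase p /dev/sdX

    # NVMe SSD: a format with the user-data-erase option via nvme-cli.
    nvme format /dev/nvme0n1 --ses=1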

We should not shy away from being on the receiving end of that, though. My previous laptop was a Lenovo ThinkPad T460 and, given that the previous statement is in the past tense, you can tell it didn’t end well. I didn’t buy that T460 new, and I didn’t buy its replacement — a ThinkPad X395 — new either. I love the feeling of getting a shiny new piece of tech as much as the next computer nerd, but does it have to be new new? I got pretty lucky with this X395 too, as it looked brand-new when I took it out of the box. Keyboard feels like it was never used, monitor has no smudges, and there was not a speck of dust in the cooling fan. Didn’t stop me from putting a larger NVMe drive in and re-pasting the CPU, of course.

The only problem I had was that it came with Windows, but as you are acutely aware, I have a tried and true solution to that problem. ThinkPads, especially the older models, run Linux very well, and some are even certified for certain distributions by their maintainers. Canonical, for example, certified the ThinkPad X395 on Ubuntu 18.04. Now, I’m not going to reach for a 5-year-old OS when I get my hands on a new-to-me laptop, but it’s good to know that someone put effort into making sure the hardware works properly with my operating system (not necessarily distro) of choice.

As a side note, the ease with which I can run Linux on an all-AMD machine is unparalleled. My main machine, a Ryzen 9 5900X on an X570 mainboard with a Radeon RX 6750 XT GPU, runs perfectly with minimal intervention. This laptop has an R5-3500U with built-in Radeon graphics, and it’s running great. The only thing that needed extra attention was the fingerprint scanner, but even that was a single extra package I had to install.
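
I didn’t note the package name at the time, but fingerprint support on Linux generally comes from fprintd, so the setup probably looked something like this (the Arch-style command is an assumption about the distro; adjust for yours):

    sudo pacman -S fprintd        # Debian/Ubuntu: sudo apt install fprintd libpam-fprintd
    fprintd-enroll                # enroll a finger for the current user
    fprintd-verify                # quick sanity check before relying on it at the login screen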

So, the long and the short of it is, don’t discount something just because it’s been used before. There are tons of good machines out there waiting for a new lease on life, and with the way Linux has grown over the past couple years, it’s never been easier to slap an OS on there and get down to business.

IoT is a Gateway Drug

I was fiddling around in my homelab this weekend and finally decided to write this down as the thoughts have been kicking around for a while.

Years ago, when the term “Internet of Things” came to be, I thought it was pretty stupid … and let’s face it, in hindsight, it is a pretty stupid term. The internet is the internet, and while the objects connecting to it are “things” in the strictest sense of the word, it’s computers all the way down. Raspberry Pi? Adafruit Feather? ESP8266? All computers. Everything. It doesn’t have to have a Xeon or a Ryzen to be a computer; its RAM can be measured in megabytes. Let’s just lay that groundwork right now so we can move on to the next paragraph.

“Physical computing” is a bit silly as well, but to an extent it makes a little more sense. We’re grabbing data from the physical world by taking measurements of it. Temperature, humidity, proximity, brightness, presence or absence of a magnetic field, voltage, current, vibration, orientation … we’re taking all of that, putting the numbers somewhere, applying some operations to them, and putting them on a screen so we can make sense of it all. We’re gathering the data and visualizing it in a way that makes more sense to us than numbers on a spreadsheet. Maybe I’m misunderstanding, but that’s how I’ve interpreted this whole “internet of things” … thing. I’ve built some systems to do stuff with it, and based on what I’ve been reading about recently it’s become apparent that the entire economy that’s built up around it is a gateway to what folks typically call “enterprise technology.”
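
In practice, the “putting the numbers somewhere” step is often as unglamorous as publishing a reading over MQTT. A minimal sketch (the broker hostname and topic are made up, and the thermal-zone path assumes a typical Linux SBC):

    # Read the SoC temperature (reported in millidegrees C on most Linux SBCs)
    # and publish it; mosquitto_pub comes from the mosquitto-clients package.
    TEMP=$(cat /sys/class/thermal/thermal_zone0/temp)
    mosquitto_pub -h mqtt.lan -t home/office/temperature -m "$((TEMP / 1000))"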

Now, I work with some enterprise technologies, but this isn’t a post about work. That said, there’s a pretty extensive overlap between the monitoring and reporting we use for those technologies, and what I was describing above. The stuff at the “bottom” of the ladder — lightweight stuff that’s designed to run on single-board computers, for example — implements concepts that translate really, really well to the high end. At some point the only difference becomes syntax, which may take a little practice, but nowhere near as much effort as learning something from scratch. It was the same back in college when we used a different programming language for every other course. The only one that really threw me off was LISP.

Before containerization we’d spin up VMs or, before that, entire servers to handle some of these things. Right now, a tiny VM running a lightweight Linux distro, Docker, and a smattering of containers is enough for a homelab or a small to medium business environment. Once you’ve outgrown the docker-compose file, you can even class it up with a lightweight Kubernetes distribution like MicroK8s, K3s, or RKE2. Let’s Encrypt makes it almost trivially easy to get your hands on a certificate to secure everything properly. A really basic deployment doesn’t require too much in the way of configuration changes, but if your goal is to learn this stuff, editing some text files is going to come with the territory.
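
To put rough numbers on “doesn’t require too much”: the single-node Kubernetes piece and the certificate piece are each basically one command these days (the domain and email below are placeholders, and the certbot run assumes port 80 is reachable from the internet):

    # Single-node K3s via the documented quick-start script
    curl -sfL https://get.k3s.io | sh -
    sudo k3s kubectl get nodes

    # A standalone Let's Encrypt certificate via certbot
    sudo certbot certonly --standalone -d homelab.example.com -m admin@example.com --agree-tos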

So … long story short: learning IoT, regardless of my opinions on the term, is a good way to get your hands dirty with some pretty powerful technologies. Good stuff if you’re looking for an in at some tech shops.

Is it nostalgia after only six years?

I’ve played video games for a long time. Not that I’m setting any records or anything, but the first games I ever played were on Atari 2600 and Intellivision. The Atari was destroyed in late 1986 for certain aquatic reasons, and the Intellivision was my cousin’s while we lived in their basement for a while. After that came the NES, and that’s all I had until I was able to get a job and save up enough for a PlayStation 2.

Despite the console beginnings, though, my true gaming love was the PC. Believe it or not, that started with a TRS-80 Model 4, and Horse Race. Sure, the 486 with Wolfenstein 3D, Doom, and Microsoft Flight Simulator 4 blew that thing out of the water, but that TRS-80 was the beginning. At this point I don’t even know how many PC games I’ve played, and I’ve probably forgotten the titles of more games than I remember. And, while Wolfenstein 3D and Doom still hold a special place in my memory, and the Half-Life games took my enjoyment of PC gaming to the next level, it wasn’t until Mass Effect that I realized how much I loved gaming.

Mass Effect appeared when I was in grad school. It just so happened that I was focusing on 3D graphics and dataset visualization when Mass Effect was ported to PC and released in mid-2008. The graphics were somewhat janky and the controls were a bit of a mess, but it was the story that drew me in. Story and character development were front and center, and I thought Mass Effect delivered that in spades. To date I have spent hundreds of hours playing each game in the Mass Effect trilogy and, after attending the midnight launches of Mass Effect 2 and 3, I can confidently count 2 as my favorite, especially with the improvements introduced in the Legendary Edition. However, the fact remains that Mass Effect 3 is an 11-year-old game, which doesn’t line up with the title of this post.

The year 2017 promised a fourth major entry in the Mass Effect franchise: Andromeda. Reviewers were pretty down on it at the beginning and, like a silly fan, I did the midnight launch on EA Origin for this one as well. The experience was different this time, though. There were no familiar characters, the combat mechanics were different, and the story didn’t quite feel whole. But, despite this, I pushed through it and finished the game. Somehow, I didn’t find any of the characters all that memorable. Who can forget the first time they met Liara T’Soni in that cave? Battling Sovereign’s forces while climbing up the side of the tower in the Citadel? Getting killed ten minutes into Mass Effect 2, only to be resurrected by the organization you’d been fighting in the first game, which happened to be run by Martin Sheen? Encountering the Collectors and raiding their base near the galactic core, only to discover and fight the human Reaper? Seeing the Citadel fill with refugees while the situation at Huerta Memorial Hospital continuously deteriorated? Punching out Khalisah Bint Sinan al-Jilani because you were sick and tired of her snide insinuations/disingenuous assertions/tabloid journalism? Stabbing Kai Leng in the gut with your omni-blade for Thane? What is there to remember from Mass Effect: Andromeda? Quite literally forgettable.

The year 2017 also brought a new intellectual property from Guerilla Games called Horizon Zero Dawn. A former coworker and I were debating which one to get and, because I game on PC, I didn’t have much choice in the matter: Horizon Zero Dawn was a PlayStation 4 exclusive and I didn’t have one of those. We decided that he’d get Horizon Zero Dawn and I’d get Mass Effect: Andromeda. He seemed to enjoy Horizon thoroughly, whereas I was left with a robust “meh” feeling. Along came 2020 and Horizon Zero Dawn was released for PC, so I immediately jumped on it, fired it up, and was absolutely blown away. That feeling I’d gotten with the first Mass Effect had returned, to an extent, but with spectacular graphics and a decent keyboard & mouse control scheme.

Unlike with Mass Effect: Andromeda, I’ve replayed Horizon Zero Dawn four times as of Tuesday of this week. The story and characters are engaging. The New Game+ mode makes you feel utterly invincible. Yes, some of the facial animation is kind of janky, with characters moving their mouths quite a bit more than necessary for certain words, but that’s a small price to pay for the story, cinematics, and gameplay.

Just for kicks, though, I launched Mass Effect: Andromeda last night and my jaw nearly dropped to the floor. It was as if I’d gone from a triple-A game (which, to be fair, HZD actually is) to something I’d developed during my graduate courses back in 2007. ME:A felt lifeless, like going through the motions with muscle memory.

So … where does the nostalgia come in? Maybe it’s the feeling of anticipation before launching ME:A for the first time … or maybe it’s more like regret. Regret that I didn’t snag a PlayStation 4 and a copy of Horizon Zero Dawn instead of loading up Origin. If I could go back and do it again, or if I could give advice to my past self, I’d know exactly what not to choose this time around.

Hunt is over for now

Server’s been up and running happily since the last post. So, it was the enormous VM data volume that was causing the problem.

A different hunt has ended as well (for now, of course), completely unrelated to Proxmox: the hunt for a decent camera. Like most people these days I’ve been recording videos on my phone, which is generally enough. Then, my kids started doing sports and I started sharing the videos with family, which made me remember some feedback I received at a much younger age: my videos were not stable enough. I switched to my previous phone, a OnePlus 7T, which vastly improved video stability compared to my current iPhone 12. The videos produced by the OnePlus had better color saturation as well, but it turned out the microphone was flaky, as the left channel muted and un-muted every few seconds. As a stopgap I recorded with both phones and combined the OnePlus’s video with the iPhone’s audio in kdenlive, but that was a garbage workflow.
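
For the curious, what kdenlive was doing for me there boils down to a remux; something like this ffmpeg one-liner (filenames invented) does the same job without re-encoding, though the audio/video sync still has to be nudged by hand:

    # Take the video stream from the OnePlus clip and the audio stream from the
    # iPhone clip, and copy both streams as-is into a new container.
    ffmpeg -i oneplus.mp4 -i iphone.mov -map 0:v:0 -map 1:a:0 -c copy combined.mp4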

My next upgrade was a gimbal and a small cage for the iPhone. An add-on lens and external mic improved the video and audio quality to the point that I don’t use the OnePlus anymore, but I wasn’t able to use the entire feature set of the gimbal. There are several buttons and a joystick on the grip, and while I could use the joystick to adjust the pan and tilt, that left the buttons and accessory panels unused. There’s a list of DSLRs and mirrorless cameras on the manufacturer’s site, so I figured I’d slap my Nikon D3100 on it to see how it would work. Spoiler alert: it didn’t. At least, not very well. My main goal here is to shoot video, and the D3100 has three things working against it. Maximum video length is 10 minutes, but the things I record can run over an hour; the built-in mic is abysmal and there’s no provision for an external mic; and I can’t use my favorite lens because it’s enormous and heavy, not to mention it has manual zoom. Twisting the lens to zoom while it’s on a gimbal is super awkward, and the only other lens I have for the D3100 is a 35mm prime, which I wouldn’t use for video from the stands at my kids’ sporting events. Therefore, it was time to upgrade.

I started out patiently, perusing eBay and placing bids on Nikon mirrorless bodies, as I’d be able to use my existing lenses along with lenses specifically for the mirrorless body. My patience ran out when I was outbid on 8 or 9 different auctions, so I switched brands. I’d heard good things about Sony mirrorless bodies on a Linux podcast that has a professional photographer as one of its hosts, so I started looking for Sony kits that were on the gimbal’s compatibility list. As I’d run out of patience, I looked for kits with “buy it now” enabled at a maximum price around what I’d been bidding. Within a couple of hours I found an A6400 with the 16-50mm kit lens and two batteries for about $150 less than my maximum bid, so I snapped it up. I slapped it on the gimbal, attached the control cable, and went to town. Picture quality puts both phones to shame, and the audio quality is … exactly the same because I used the same mic. First test was during the family Christmas gathering, and the videos were met with positive feedback.

The 16-50mm lens left me wanting a little more, though, so after a mishap with a used 18-200mm lens (focus didn’t work at all), I settled on an 18-100mm with electronic zoom. Finally sat that on the gimbal and balanced it, and it’s perfect. Zoom works fine using the gimbal’s control knob too, but it’s not going to set any speed records. Not going for full-on NBA “I need to zoom on this guy shooting a 3-pointer yesterday” shots here, of course. First game is tomorrow, and I can’t wait to see how it performs.

Almost had it

Turns out it wasn’t actually some weird LXC-related permissions thing, which is a bit of a bummer, but it was most definitely file system-related. My Plex server’s data volume was a single monolithic 12TB volume, and anything that caused significant I/O on it resulted in high load for some reason. I split my collection apart into multiple smaller volumes, with the largest being 5TB, used rsync to transfer everything over, and deleted the 12TB volume. No issues since the rsync finished. I’m still going to keep an eye on it for a few days, though.
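
The transfer itself was nothing exotic, roughly this once per new volume (mount points are placeholders):

    # Copy one category at a time into its new, smaller volume. -a preserves
    # permissions and timestamps, -H keeps hard links, and --info=progress2
    # prints a single overall progress line instead of one per file.
    rsync -aH --info=progress2 /mnt/media12tb/movies/ /mnt/media-movies/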

Now, on to the next project: improving the quality of my home videos. I started with a OnePlus 7T, which does a decent job in its own right. Its image stabilization is surprisingly good, but my specific OnePlus 7T’s microphone was not behaving properly, forcing me to combine left and right channels to prevent audio drop-outs after every recording. My iPhone 12’s audio recording is great, but the color saturation is awful compared to the OnePlus, and there is no image stabilization, so I snagged a gimbal and some accessories for the iPhone. It’s been alright so far, but without HDR, the color saturation is still a problem. My existing Nikon D3100 is far too heavy to sit on the gimbal, so I’ve upgraded to a Sony A6400. The video quality is above and beyond the iPhone’s, and with the Rode VideoMic GO II attached, the audio quality is right up there with the video. At this time the only drawback is that the kit lens is 16-50mm and I have no zoom controls from the gimbal; an 18-200mm f3.5-6.3 lens is on its way, as well as the gimbal’s zoom control module, and that’ll complete the kit. All this to shoot better video of my kids’ sporting events.

On the software side I do my editing in DaVinci Resolve or kdenlive running on EndeavourOS. This is the only distro where I’ve been able to get the proprietary OpenCL stack working alongside the open source video drivers, as I am running an AMD GPU; without proprietary OpenCL, Resolve won’t run properly. I render the finished videos to disk with x265 encoding to maximize quality and minimize disk usage, then dump them onto the Plex server, which has a Quadro P2200 passed through to it for transcoding purposes.
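
If you’d rather batch the encodes from the command line than use the editors’ export dialogs, the x265 pass is a bog-standard ffmpeg invocation along these lines (the CRF, preset, and filenames are assumptions, not my exact settings):

    # Re-encode an edited master to HEVC; CRF ~22 with the "slow" preset trades
    # encode time for size and quality. Audio goes to AAC for broad Plex client support.
    ffmpeg -i edited_master.mov -c:v libx265 -preset slow -crf 22 -c:a aac -b:a 192k output.mkv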

This is also where the VHS tapes I capture go, though I used h264 for the first 8 tapes; those files aren’t as big anyway, simply because the source medium is such a low resolution. Unfortunately the Blackmagic Intensity Pro 4K can’t capture at the tapes’ native resolution, so I ingest the footage at 720×576 and scale it down in post.
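
The scale-down (plus a deinterlace, assuming the captures come in as interlaced 576i) is another quick ffmpeg job; a sketch with an arbitrary example target height, not my actual numbers:

    # Deinterlace the 720x576 capture with yadif, scale to 480 lines while keeping
    # the aspect ratio (-2 picks an even width), and encode to HEVC.
    ffmpeg -i tape01_capture.mov -vf "yadif,scale=-2:480" -c:v libx265 -crf 20 -c:a aac tape01.mkv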

Minor Stability

My home server has been running steadily for the past 10 days without any issues … is what I’d say if replacing the RAM with ECC modules had fixed 100% of the problems I was encountering. The majority of the problems were indeed fixed, but I encountered some new ones now that I didn’t have to worry about services randomly crashing and preventing access to the web UI.

I started noticing startlingly high load averages on the hardware. I’m used to seeing load averages well under the number of logical cores in the system, but was constantly seeing values over 34 for 1, 5, and 15-minute averages, which is not something I’d want to see for a machine with 12 physical/24 logical cores. It didn’t take long after the system booted for these values to show up either. Proxmox’s host-level Summary page showed a pretty steady CPU utilization of 15% with a constant 6-7% IO delay, which correlates with the OS’s iowait metric. This was all a bit unusual.
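
For anyone wanting to reproduce the observations, the usual suspects are enough (iostat needs the sysstat package installed):

    uptime            # the three load averages: 1, 5, and 15 minutes
    iostat -x 5 3     # per-device utilization and wait times, three samples five seconds apart
    vmstat 5 3        # the "wa" column is the iowait percentage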

First port of call was good ol’ htop, which revealed that udisks2 was eating up a lot of CPU. I didn’t install this explicitly and hadn’t seen it on other Proxmox machines before, and after a little digging it turned out that it is normally installed alongside cockpit. Cockpit is a package I tend to install on most machines because it looks pretty, but I hadn’t installed it on this host, so I had to dig a bit further.
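
If you want to do the same digging, the stock Debian tooling makes it quick (aptitude is optional but gives the most direct answer):

    apt-cache rdepends --installed udisks2   # which installed packages pull it in
    aptitude why udisks2                     # if aptitude is installed: the dependency chain that explains it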

While researching iowait issues I stumbled upon a forum post that said that high iowaits can be a result of issues with the underlying hardware. I hadn’t seen any hardware-related alerts, and the SMART values looked healthy for the SSDs and spinning rust, but the output of zpool status -v revealed that the ZFS pool did indeed have one single, solitary error, and was already performing a scrub operation to clear it. The file containing the error was for an experimental VM that I didn’t need anymore, so I just deleted the VM and restarted the scrub. That cleared the error but didn’t have any effect on the high system load.
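
The ZFS side of that, for posterity (the pool name is a placeholder):

    zpool status -v tank    # error counts, plus the name of any affected file
    zpool scrub tank        # kick off (or re-run) a scrub
    zpool status tank       # watch the scrub's progress
    zpool clear tank        # reset the error counters once the cause is dealt with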

It was at that point that I connected a couple dots. I was running several LXC containers for things like Pi-Hole, Squid, Homebridge, Gitea, Ansible, and a Minecraft server, and all of these kept their files on the host’s file system. That is, unlike VMs, LXCs don’t have their own dedicated virtual hard disks carved out as one large VMDK (or QCOW2 or VHDX) file, with all of the smaller files within. So, if an unprivileged container was having trouble accessing files due to a permissions issue between the container and the host, that could also show up as high iowaits. To test this theory I disabled autostart on all containers and rebooted the host. The VMs came up and were happily running, with host system load averages peaking at 4. The Pi-Hole container was the first one I started back up, and it had no observable impact on the load; however, the squid and gitea containers made the hardware extremely unhappy, driving load averages slightly higher than they were before, to over 36.
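
Disabling autostart and bringing the containers back one at a time is quick from the Proxmox shell (the container IDs below are made up):

    pct list                          # shows container IDs, names, and status
    for id in 101 102 103 104 105 106; do
      pct set "$id" --onboot 0        # keep them from starting at boot
    done
    # after the reboot, bring them back one by one and watch the load
    pct start 101
    watch -n 5 uptime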

I quickly spun up a Debian VM for squid, copied the config from the container, and … no observable system impact. Built an Ubuntu VM for gitea (one-line installer via snap) and, again, no observable system impact. At that point I figured “well, if the VMs run with this little impact on overall system performance, I’ll just move all of my LXC roles into separate VMs,” and that’s exactly what I did.
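
Neither VM took much effort to stand up; roughly this, with the squid config source path being an invented detail and the gitea snap presumably being the one-liner in question:

    # Squid VM (Debian): install, drop in the config copied from the old container, restart
    apt install squid
    cp /root/squid.conf.from-lxc /etc/squid/squid.conf
    systemctl restart squid

    # Gitea VM (Ubuntu)
    snap install gitea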

That brings us to a new count of 11 VMs and 0 LXC containers:

  1. Pi-Hole
  2. Squid
  3. Plex
  4. VM for Docker
  5. Tailscale endpoint
  6. Desktop Environment Playground VM
  7. Certbot
  8. Homebridge
  9. Gitea
  10. Ansible
  11. Minecraft

…and, with these VMs running for the past 6 hours or so, my load averages are currently sitting at 0.36, 0.27, and 0.35, with iowait values of less than 0.1%.

So, the past week was a bit of a mixed bag when it came to the overall stability of the system, which means it’s time to observe for a bit longer. This also raises the question: if I’m only running VMs on this box, why am I running Proxmox? Could I get away with running Rocky Linux 9 on the bare metal, with the VMs running in a standard KVM install? Or would I get fancy with something like TrueNAS Scale, effectively rendering my small Synology NAS redundant? For now I’ll stick with what I know, but there could be some interesting plans for the future.
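
If I ever do go the Rocky-on-bare-metal route, the starting point is small enough to jot down now (package names are the stock EL9 ones; no promises this is everything that would be needed):

    # Rocky Linux 9: install the KVM/libvirt stack and CLI tooling, then start the daemon
    dnf install -y qemu-kvm libvirt virt-install virt-viewer
    systemctl enable --now libvirtd
    virt-host-validate    # sanity-check that virtualization extensions and IOMMU are usable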

The Itch

In one of my previous posts I described my main machine. It’s the best workstation I’ve ever built for myself, providing the best of all possible worlds: it’s quiet under normal utilization, decently quick during video encodes, provides great gaming performance, and runs anything I can throw at it. What’s unique about this machine is that it’s the first airflow-focused build I’ve run in over ten years. The Fractal Design Torrent case has large openings on the front and bottom for air intake, and comes equipped with two 180mm fans in front and three 140mm fans on the bottom. Interestingly, the power supply compartment is at the top of the case, which I hadn’t seen since before 2010; however, it’s a bit more modern than cases of that vintage, in that the power supply is hidden behind a shroud and has ventilation holes, so the power supply has access to the air drawn into the case by the other fans.

The CPU is cooled by a Noctua NH-D15. This heatsink is absolutely enormous, almost comically oversized, but provides a lot of cooling power and stays quiet due to its large fans. In general, large fans are quieter because they have to spin at lower RPMs to move the same amount of air as smaller ones. Back in the day we primarily used 80mm fans, which are still used but very rare; the largest fan I’ve seen used in a case is 200mm, but that was in the Thermaltake Core V1, which is mini-ITX only.

While large diameter fans do tend to be quieter than smaller ones, they still get loud when they’re running at full tilt. Adjusting the fan curves in the BIOS is probably the best way to keep that noise in check; the Silent preset on the Gigabyte X570 Aorus Elite is perfect for this, but the tradeoff is that the CPU runs hotter on average. Given the size of the NH-D15 this isn’t too big a deal.

That brings us to the title of this post. I am happy with the performance of this machine. Upgrading to the Ryzen 9 5950X would reduce performance in gaming but increase it in video encodes, but I play games more than I encode videos. The Ryzen 7 5800X3D is the best gaming performer on the AM4 platform, but the reduction in thread count would decrease media performance. Therefore I don’t believe swapping the CPU would make me any happier. Increasing my machine’s RAM beyond 64GB wouldn’t do much. The GPU is the newest component in the rig and I have no desire to swap it out. I’ve upgraded the audio by using a Schiit Audio Fulla DAC. That leaves one thing: the case.

Every now and then I like swapping my machine’s case to keep it looking fresh, and to scratch that “I want to build a new computer” itch, without spending thousands of dollars on new components. At least, that’s what I’d have to do at this point — the only GPU I’d want to upgrade to is the Radeon RX 7900 XTX, which isn’t even out yet; upgrading the CPU would require going to the AM5 platform, which would require a new motherboard and DDR5 RAM.

Enter the Fractal Design North. That thing was announced four days ago, and the black version with the mesh side panel looks perfect for my workspace. It’s significantly smaller than the Torrent, which is nice because that thing is enormous. However, I’d also like to liquid cool the PC again, as simply lifting and shifting the mainboard with the NH-D15 attached doesn’t seem like it would be satisfying enough. So now we wait — I don’t believe the North is available anywhere at the moment, and I have research to do for the liquid cooling. That research is not as straightforward as I thought, as it could lead me to sticking with the Torrent and building a custom loop instead. Alphacool makes a 2x180mm radiator and a waterblock for the reference RX 6750XT, which makes building a custom loop an appealing challenge as well. Decisions, decisions.