Currently working on an Arch server for my self hosting needs. I love Arch; in my eyes it's the perfect platform for self hosting. There is no bloat, making it lightweight and resource efficient. It's also very stable if you go down the LTS route and have the time and skills to head off problems before they become catastrophic.
The downsides: for someone who is a semi-noob there is a very steep learning curve. Arch is very well documented, but when you hit a problem or a brick wall it's very frustrating. My low tolerance for bullshit means I take hours- or days-long breaks from it. There are also time demands in the real world, so needless to say I've been going at it for a few weeks now.
Unraid is very appealing - nice clean interface, out-of-the-box solutions for whatever you want to do, easy NAS management… What's not to like? If it were fully open source I would've bought into it from the start. At least once a day I think "I'm done. Sign me up, Unraid". It's taking an age to set up the Arch server; if I went for Unraid I could be self hosting in a matter of hours. Unraid is the antithesis of Arch. Arch is for masochists.
My question is this - do you ever look at products like unraid and think “fuck this shit I’m done”?
I've been using Unraid for years.
I am fully capable of running a Docker solution and setting up drives in a raid configuration. It’s more or less one of my job duties so when I get home I’m not in a hurry to do a lot more of that.
But Unraid is not zero maintenance, and when something goes wrong, it’s a bit of a pain in the ass to fix even with significant institutional knowledge.
Running disks in JBOD with parity is wonderful for fault tolerance. But throughput for copying files is very slow.
You could run it with ZFS and get much more performance, but then all your disks need to be the same size, and there's regular disk maintenance that needs to happen.
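By maintenance I mostly mean regular scrubs. A rough sketch of scheduling that yourself, if you're rolling your own ("tank" is just a placeholder pool name, and the zpool path may differ per distro):

```sh
# scrub the pool at 03:00 on the first of every month
echo '0 3 1 * * root /usr/sbin/zpool scrub tank' | sudo tee /etc/cron.d/zfs-scrub
# check progress and results afterwards
zpool status tank
```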
They have this weird dedication to running everything as root. They're not inherently insecure, but it's one of those obvious no-nos that you shouldn't do that they keep holding on to.
If you want to make it a jellyfin/arr server and just store some docs on the side, it’s reasonable and fairly low maintenance.
I'm happy enough with them not to switch away. And if you wait until Black Friday they usually have a pretty good sale.
I'll probably eventually move to Proxmox and a Kubernetes cluster, as I've picked up those skills at work. I kind of want to throw together a 10-inch rack with a cluster of RPis. But that's pretty much the opposite of the direction you're looking to head :)
They have this weird dedication to running everything as root
I didn’t know that. That isn’t fantastic.
Running disks in JBOD with parity is wonderful for fault tolerance. But throughput for copying files is very slow.
Didn’t know this either. It makes sense. Worth considering.
When remodelling my NAS I was tempted to go for unraid as well, but in the end I chose OMV. Aside from some minor problems here and there it has been running great.
How close are you to “fck it, im just gonna pay for unraid”?
Extremely far. Maximum distance. My self-updating Debian box with an SFTPGo container and some RAID HDDs slapped onto it has been rock solid for years.
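There really isn't much to it. A minimal sketch of that kind of setup (the host paths are placeholders; SFTPGo's official container image is drakkan/sftpgo):

```sh
# unattended security updates on Debian
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# SFTPGo in a container, with its data dir bind-mounted from the RAID array
docker run -d --name sftpgo --restart unless-stopped \
  -p 2022:2022 -p 8080:8080 \
  -v /srv/raid/sftpgo:/srv/sftpgo \
  drakkan/sftpgo:latest
```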
I actually bought Unraid before it became a subscription. And I must say, I really like it.
I am currently looking into Proxmox to build a small cluster to prevent critical services from becoming unavailable whenever I do stupid things… but… compared to Unraid, the learning curve is much more brutal and it's nowhere near as easy to use.
I think most self-hosters already know/use Linux, so the management side is familiar territory. As for ease of use, if you manage services with Docker it's really easy to bring them up and down, and if you want a GUI there's Portainer.
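To illustrate, with a compose file per service the day-to-day is just a couple of commands, and Portainer itself is only another container (ports and volume names below are the common defaults, adjust to taste):

```sh
docker compose up -d                           # start the stack in the background
docker compose down                            # tear it down again
docker compose pull && docker compose up -d    # update to newer images

# Portainer as a container, managing the local Docker socket
docker run -d --name portainer --restart unless-stopped \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```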
I've never tried Arch, but my Debian server with KVM/QEMU/Cockpit running mdraid1 and SMB/NFS file sharing works well enough, and I enjoy the tinkering and setting it all up. I'm writing this from a virtual Fedora KDE workstation on which I've set up VFIO and PCIe passthrough of my dGPU and a USB controller (both connected to my monitor, which acts as a USB hub).
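For anyone curious, the mdraid1 part is only a handful of commands. A rough sketch (device names are placeholders, config path is Debian's):

```sh
# create a two-disk RAID1 array and put ext4 on it
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
sudo mkfs.ext4 /dev/md0
# persist the array so it assembles at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
cat /proc/mdstat          # watch the initial sync
```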
A friend runs a Proxmox VE Community Edition with physical disk passthrough to a virtual Nextcloud server and that seems to work well too.
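If I remember right, the physical disk passthrough side of that is a one-liner per disk in Proxmox (the VM id and disk id here are made up):

```sh
# attach a whole physical disk to VM 101 as its scsi1 device
qm set 101 -scsi1 /dev/disk/by-id/ata-EXAMPLE_DISK_SERIAL
```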
I guess my answer is no, I don't look at Unraid and think "fuck this shit I'm done"; I enjoy the tinkering that makes you frustrated. May I ask what kind of brick walls you're hitting and what software you run on Arch that makes it so frustrating?
I actually gave Debian a go and I get the hype. Compared to Arch it's a dream to set up and work with. Somewhere down the road I might go back to it.
Proxmox - it looks great, but I think it's overkill for what I need. I can run most things in Docker - I don't really need virtualization. At some point in the future I'd like to try it and have TrueNAS virtualized on top to manage the NAS side of things.
There's no particular thing (or things) that's insurmountable/unbearable with Arch. It's more the overall experience. But I love it and hate it in equal measure, ha.
What I like about running a hypervisor and true VMs is that I can fool around with some VMs on my server without risk of disrupting the others.
I run most of my Docker containers in one VM, my game servers in another and the Jellyfin instance on a third. That allows me to fool around with my Portainer instance or game servers without disrupting Jellyfin, and so on.
Part of it is that I'm more used to and comfortable with managing VMs and their backup/recovery compared to LXCs and Docker containers.

I'm running a similar setup (ZFS pool, Cockpit, Portainer x2, and a few LXCs for Plex, Frigate, etc.) and it's been great. Before building it early this year, I'd been running everything on Windows for the decade prior because I was unfamiliar with Linux and struggled like OP when problems arose, but after following a guide to get everything set up it's been rock solid, and if I screw anything up I can just load a backup. I'd also looked into TrueNAS and Unraid, but this gives me a more flexible setup without any extra cost and the ability to tinker without affecting anything else, like you said.
@jobbies Eh? Just use Fedora/Alma/Rocky with BTRFS and Samba/NFS. You can even use an immutable flavour and not have to worry about updates breaking anything.
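A bare-bones Samba share on top of that really is just a few lines; something like this, assuming Fedora-family naming (the path and user are placeholders):

```sh
sudo dnf install samba
sudo tee -a /etc/samba/smb.conf >/dev/null <<'EOF'
[share]
   path = /srv/share
   read only = no
EOF
sudo smbpasswd -a youruser        # give an existing user a Samba password
sudo systemctl enable --now smb   # Fedora/Alma/Rocky service name
```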
I don't use Unraid, I use XCP-ng instead.
Never heard of this…
Because it’s a hypervisor, not a NAS.
ZFS has been working nicely for me for many years, across diverse operating systems, including a ZFS all-in-one for internal NFS mounts.
Experimented with various approaches, looking to replace an older Pegasus set of 6 drives and a file server.
Unraid just worked. Lots of support and tutorials to get started. Look up Spaceinvader on YouTube as a starter.
I use Unraid. It’s great and a lot less hassle than back when I used just a regular distro for everything.
Setup was easy, it just works, it's stable, and if you want the regular updates, just get the lifetime model on sale. I bought it because I didn't want to spend time screwing with setup and just wanted to get my data moved and running.
Not close at all.
OK, I'm missing some bells and whistles in my current setup, which is just a poor man's NAS made of ZFS and Samba, plus a Nextcloud for convenience.
But I fell so much in love with ZFS that I would never replace it with Unraid. For my next box I am looking forward to using TrueNAS instead.
ZFS is a bit like Arch - it's wonderful in theory, but in practice it can be a bitch to work with. I've got it working on Arch, but it wasn't easy, let me tell you.
I have given up on ZFS entirely because of how much of a pig it was.
A pig on what?
Looks like I angered people by not loving ZFS. I don’t feel like being bagged on further for using it wrong or whatever.
I didn’t downvote you, I’m genuinely curious what you mean about zfs being a “pig”.
I was trying to use it for a mirrored setup with TrueNAS and found it to be flaky to the point of uselessness. I was essentially told that I was using it wrong because I had USB disks. It allowed me to set it up and gave no warnings, but after losing my test data for the fifth time (brand new disks - that wasn't the issue) I gave up and set up a simple rsync job to mirror data between the two ext4 disks.
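For reference, the rsync fallback is about as simple as it gets; mine is roughly this, with made-up paths:

```sh
# mirror disk1 onto disk2; -a keeps permissions/times, --delete makes it an exact copy
rsync -a --delete /mnt/disk1/ /mnt/disk2/
# crontab entry to run it nightly at 02:00
# 0 2 * * * rsync -a --delete /mnt/disk1/ /mnt/disk2/
```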
If losing power effectively wipes my data then it’s no damn use to me. I’m sure it’s great in a hermetically sealed data centre or something but if I can’t pull one of the mirrored disks and plug it into another machine for data recovery then it’s no damn good to me.
Ah, I hear you, and sorry you had that experience. GUI controls of ZFS aren’t usually very intuitive.
Also, ZFS assumes it has direct access to the block device, and certain USB implementations (the non-UAS ones) handle sync operations somewhere between the HAL and userland. So ZFS likes direct-attached storage; it's a caveat, to be sure.
If you ever change your mind, https://klarasystems.com/zfs/ has a ton of reading and tutorials on ZFS.
Wait, so you built a pool using removable USB media and were surprised it didn't work? Lmao
That’s like being angry that a car wash physically hurt you because you drove in on a bike, then using a hose on your bike and claiming that the hose is better than the car wash.
ZFS is a low-level system meant for PCIe or SATA, not USB, which sits many layers above SATA and PCIe. Rsync was the right choice for this scenario, since it's a higher-level program that doesn't care about anything other than the data and will work over USB, Ethernet, wifi, etc., but you've got to understand why it was the right choice instead of just throwing shade at one of the most robust filesystems out there just because it wasn't designed for your specific use case.
I don’t know what that other dude’s on about.
The general consensus on ZFS is (or at least was) that you need 1 GB of RAM per terabyte of zpool, especially if you want to run deduplication.
If you don't need dedupe, the requirements drop significantly.
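It's easy enough to check, and you can also cap the ARC if RAM is tight. A quick sketch (the pool name "tank" and the 4 GiB figure are just examples):

```sh
zfs get dedup tank        # dedup is per-dataset and off by default
zfs set dedup=off tank    # make sure it stays off
# cap the ARC at 4 GiB at runtime (persist it via an options line in /etc/modprobe.d/zfs.conf)
echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```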
What parts are “a bitch” to work with?
I'm a bit confused about your approach in general:
No ZFS because it "breaks", but you use Arch as a server OS? Sounds like you want to tinker and break things to learn, but virtualization is "overkill"?
I don’t understand what you’re trying to get from your homelab.
What parts are “a bitch” to work with
If you're coming from Windows servers/environments (or Mac, for that matter), configuring ZFS from the CLI (as you do on Arch) is a learning curve and can be tedious.
No ZFS because it "breaks"
It's not baked into the Arch kernels, so unless you've got your wits about you, running updates can fuck everything up.
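The usual workaround I've seen is to hold the kernel back and let the ZFS module catch up, roughly like this (assumes the linux-lts kernel plus zfs-dkms from the AUR/archzfs repo):

```sh
# in /etc/pacman.conf, keep the kernel from jumping ahead of the ZFS module:
#   IgnorePkg = linux-lts linux-lts-headers
sudo pacman -Syu                              # routine updates, kernel stays put
sudo pacman -S linux-lts linux-lts-headers    # bump the kernel deliberately...
dkms status                                   # ...and confirm the zfs module built for it
```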
virtualization is “overkill”
Yes. If all you're looking for is a NAS with some Docker containers and you don't need the segregation, virtualization is overkill.
I don’t understand what you’re trying to get from your homelab.
You could just ask questions? There’s no need to be a dick about it.
There’s no need to be a dick about it.
I meant no disrespect, I suppose I should have been more direct.
I asked about ZFS because it is not really that difficult to set up and there aren’t that many variables involved. You create a pool, then start using it. There isn’t much more to it.
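For example, a mirrored pool from scratch is basically this (device paths and the name "tank" are placeholders):

```sh
zpool create tank mirror \
  /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
zfs create tank/media            # carve out a dataset
zfs set compression=lz4 tank     # cheap win, inherited by child datasets
zpool status tank                # sanity check
```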
It's not baked into the Arch kernels, so unless you've got your wits about you, running updates can fuck everything up.
That is an Arch problem, not a ZFS problem. An update on Debian with ZFS would almost never behave like this.
I asked about virtualization because it would allow you to break things intentionally and flip back to a desired state, which seems to fit with your fondness for fixing broken stuff.
So in the end, you’re obviously free to do what you like, and that’s the great thing about Linux. But you definitely seem to want to do things the hard way.
Have a better one.
Unraid supports ZFS
TrueNAS
Arch server
Here's your issue.
Why not something like NAS4Free or OpenMediaVault, then? You don't have to choose between DIY and paid-for; there is a middle ground.










