Distro/Operating System for New Home Server?

Just got myself a new home server, a tiny Asus PN53 with a Ryzen 7 7735H, 64 GB RAM and 2 x 4 TB NVMe. So I think it’s going to be nice. (Replacing a 2014 HP server with a Xeon E3 and spinning rust RAID.) :sunglasses:

My old server runs Debian and ZFS on Linux. It runs a few KVM and LXC instances and serves CIFS and NFS.

The plan for my new server is to still run a few VMs (KVM at least), some Docker containers, and to serve files (CIFS, maybe AFS, and maybe NFS or something fun).

Wondering if I should just use Debian, or if I should try out Proxmox, or something like CoreOS, SmartOS (illumos based), or some NAS-specialized distro like Free/OpenNAS (FreeBSD based IIRC) or something … Want to help me decide? Give me some input on why you’d pick a particular OS/distro. :nerd_face: :vulcan_salute:

1 Like

I usually go with Proxmox :smiley: If I need VMs, it’s always my go-to. I try to stick with Debian + Docker otherwise

3 Likes

Give Proxmox a shot, I do the exact same thing as @Wolveix. If I don’t need VMs I stick to Debian + Docker, because it’s more convenient for hardware passthrough and I end up with fewer kernels to manage/update.

2 Likes

Thanks @Wolveix & @Solaire ! :slight_smile: I’ll just use ZFS RAID1 (mirror), then, I guess :slight_smile:

… Adding a “Standard PAM User” in the Proxmox UI under Datacenter → Users was interesting. Tried setting a password and got “Non-existent user” … :thinking: (I don’t see the user in getent passwd.) I guess the users there are Proxmox-level accounts; for the PAM realm the underlying Linux user has to exist already, while the other realms use Proxmox’s own auth. Created a local user with adduser instead.
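
For the record, what seems to work (the user name here is just a placeholder) is creating the Linux account first and then registering it in the PAM realm so the UI knows about it:

# Create the system user on the Proxmox host (password is handled by PAM)
adduser alice

# Register the existing Linux user in the PVE PAM realm
pveum user add alice@pam --comment "Local admin"

# Optionally grant a role; check `man pveum` for the exact ACL syntax on your version
pveum acl modify / --users alice@pam --roles Administrator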

For serving SMB etc., do you just do stuff the regular Debian way, with Proxmox only handling the VMs?

2 Likes

If you use Proxmox it’s highly discouraged to do anything at the host level. Things like Samba should go in an LXC container (CT) into which you can bind-mount your ZFS storage.
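
Roughly like this, with made-up IDs, dataset and paths: bind-mount a dataset from the host pool into the CT, then run Samba inside it the normal Debian way.

# On the Proxmox host: bind-mount a ZFS dataset into container 101
# (unprivileged CTs additionally need uid/gid mapping for write access)
pct set 101 -mp0 /tank/media,mp=/srv/media

# Inside the CT, Samba is just regular Debian packaging
pct exec 101 -- apt install -y samba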

To be honest, I run ZFS myself (raidz1), and I chose not to use Proxmox for that box because iGPU passthrough is a pain, and Docker in LXC works but is highly discouraged and officially not supported. So for a storage setup you’d have an LXC container with access to the pool via bind mounts, which in turn exposes it via Samba, NFS or another protocol, which you can then mount inside a VM. To me, that’s just too much overhead and too many potential points of failure. I do understand the pros of this approach, but for me the cons outweigh the pros.

So I’m using Proxmox on another dedicated server (simple hardware with a single SSD, idling at 11 W), and my main server runs everything in Docker because that’s just much easier to maintain. Passing through volumes and hardware is really straightforward and doesn’t add much overhead.
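
For example, something like this (image, paths and device node are placeholders, not a recommendation):

# Bind-mount a host directory read-only and pass the iGPU render node to a container
docker run -d --name mediaserver \
  -v /tank/media:/media:ro \
  --device /dev/dri/renderD128 \
  jellyfin/jellyfin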

So to come back to your original question: while I still recommend Proxmox, I’d personally stick to plain Debian, build a Docker stack, and manage VMs and containers from the CLI, because I’m used to that anyway.
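
For the VM side on plain Debian, libvirt from the CLI is enough for me; a rough sketch (name, sizes, bridge and ISO path are all placeholders):

apt install -y qemu-system-x86 libvirt-daemon-system virtinst

# Pick an --os-variant from `virt-install --osinfo list`
virt-install \
  --name testvm \
  --memory 4096 --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/debian-12.iso \
  --os-variant debian11 \
  --network bridge=br0

virsh list --all   # and manage it with virsh afterwards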

2 Likes

Hm, thanks. HW passthrough is interesting, especially for one of my KVM guests. So … it’s a hassle in Proxmox?

Edit: Reinstalled using Debian 12. Currently stuck at needing physical access to enter the MOK enrollment password at the boot-time EFI prompt. (Left the computer at work, as we have better fiber there.) Quite a hassle with DKMS and Secure Boot signing.
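
For reference, the general shape of it as I understand it (key paths are just examples; newer dkms can also generate a key itself): create a Machine Owner Key, point DKMS at it for signing, and enroll it with mokutil, which is the step that wants the physical console on the next reboot.

# Generate a signing key + certificate (paths are examples)
openssl req -new -x509 -newkey rsa:2048 -nodes -days 36500 \
  -subj "/CN=Local DKMS signing/" \
  -keyout /var/lib/dkms/mok.key -outform DER -out /var/lib/dkms/mok.der

# dkms 3.x can sign modules automatically if /etc/dkms/framework.conf points at them:
#   mok_signing_key=/var/lib/dkms/mok.key
#   mok_certificate=/var/lib/dkms/mok.der

# Queue the certificate for enrollment; the password is confirmed at the
# MOK manager screen on the next reboot (the physical-access part)
mokutil --import /var/lib/dkms/mok.der
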
Proxmox uses an Ubuntu-based kernel or something, which is why ZFS is supported for the root fs as well?

Do you install the regular Docker tools as documented on their site, or do you use podman or some other tools/stack? :nerd_face:

If you need it in both the host and a guest, or in multiple guests, yeah, it’s a hassle.

Proxmox itself is Debian based, though its kernel is derived from Ubuntu’s (which is where the built-in ZFS support comes from). Probably best to use a small dedicated NVMe for the boot disk anyway.

Just Docker + Compose, which is a Docker plugin nowadays (docker compose). Portainer is nice if you can’t be bothered writing Compose files. But I’m a control freak when it comes to tech, so I write my Compose files by hand and rarely leave the CLI.
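
A trivial example of the sort of thing I mean (directory, image and port are arbitrary):

mkdir -p ~/stacks/whoami && cd ~/stacks/whoami

cat > compose.yaml <<'EOF'
services:
  whoami:
    image: traefik/whoami    # tiny test container that echoes request info
    ports:
      - "8080:80"
    restart: unless-stopped
EOF

docker compose up -d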

1 Like

Hm, there are only two NVMe slots, I think. I bought 2 x 4 TB (4096 GB) Kingston KC3000.
For Debian the plan was to partition each drive with an EFI/ESP partition, then 100 GB for an md0/RAID1 ext4 root, and the remaining ~3.9 TB per drive for a ZFS mirror (RAID1).
(And then serve SMB etc. from the host, and do the rest from a few VMs and Docker containers.)
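
Roughly what that plan looks like as commands, with placeholder device names and pool name (in practice done from the installer / a live environment):

# Identical layout on both drives: ESP, 100G for md RAID1 root, rest for ZFS
sgdisk -n1:0:+512M -t1:ef00 -n2:0:+100G -t2:fd00 -n3:0:0 -t3:bf00 /dev/nvme0n1
sgdisk -n1:0:+512M -t1:ef00 -n2:0:+100G -t2:fd00 -n3:0:0 -t3:bf00 /dev/nvme1n1

# ext4 root on md RAID1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
mkfs.ext4 /dev/md0

# ZFS mirror (RAID1) on the remaining space
zpool create -o ashift=12 tank mirror /dev/nvme0n1p3 /dev/nvme1n1p3
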
But something like this surely looks interesting: TrueNAS SCALE - Hyperconverged Storage Scales Up & Out

1 Like

For what it’s worth, I just run Docker alongside Proxmox. While it’s true that it’s best to avoid doing anything on the Proxmox host where you can, keeping Docker off the host (in a VM or CT) is kinda annoying. So I run Docker directly on the Proxmox host :smiley:

1 Like

That’s also tempting. A bit afraid I might ruin something by doing too much locally/manually. (Old habits die hard.) I really would like USB and maybe PCI passthrough for at least one large KVM instance.
Maybe I’ll try that Debian-based TrueNAS (SCALE) as well, just to get a feel for it, and then decide.
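
(The USB part with libvirt looks like it should be roughly this, if I end up on plain Debian; the vendor/product IDs and VM name are placeholders:)

lsusb   # find the device's vendor:product ID

cat > usbdev.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
  </source>
</hostdev>
EOF

virsh attach-device myvm usbdev.xml --persistent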

At least it looks like the NVMes work as expected (they claim 7000 MB/s). And this CPU seems good enough:

fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/md0):
---------------------------------
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 987.91 MB/s (246.9k) | 3.93 GB/s    (61.4k)
Write      | 990.52 MB/s (247.6k) | 3.95 GB/s    (61.7k)
Total      | 1.97 GB/s   (494.6k) | 7.88 GB/s   (123.1k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 4.30 GB/s     (8.4k) | 4.35 GB/s     (4.2k)
Write      | 4.52 GB/s     (8.8k) | 4.64 GB/s     (4.5k)
Total      | 8.83 GB/s    (17.2k) | 9.00 GB/s     (8.7k)
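
(For anyone wanting to run something comparable: a mixed 50/50 random read/write fio run looks roughly like this; not necessarily the exact parameters behind the numbers above, and the test file path is just an example.)

fio --name=mixed --filename=/mnt/test/fio.bin --size=2G \
    --rw=randrw --rwmixread=50 --bs=4k --ioengine=libaio \
    --iodepth=64 --numjobs=2 --direct=1 \
    --runtime=30 --time_based --group_reporting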

Geekbench 5 Benchmark Test:
---------------------------------
Test            | Value
                |
Single Core     | 1736
Multi Core      | 10572
Full Test       | https://browser.geekbench.com/v5/cpu/22110124

Geekbench 6 Benchmark Test:
---------------------------------
Test            | Value
                |
Single Core     | 2227
Multi Core      | 10919
Full Test       | https://browser.geekbench.com/v6/cpu/4275547

2 Likes

For a home server I recommend DietPi. I use it on a Raspberry Pi, a NanoPi Neo, as well as an old computer bought at a fair.

2 Likes

:exploding_head:

Right, that’ll work, but it’s slightly more complicated and I don’t have a lot of experience with that. I have LVM on my NVMe (single disk), and ZFS for the spinning rust.

TrueNAS does look nice, but I’m a big fan of Debian for its stability and everything. There is no right or wrong; what @Wolveix mentioned (running Docker on the Proxmox host) will work too, until you run into some weird incompatibility issue and all you can find is Proxmox people saying “it’s not supported, why’d you use it?”

I like my main home server, at least, to be kinda set-and-forget. I enjoy playing around with things, but since I run a few dozen services that I use every day (password vault, media streaming, VPN, etc.) I don’t want to jump through hoops to get stuff working again if I can prevent that by simply reducing the virtualization complexity. I’m a software developer by profession, and “less is more” really is the best approach for a lot of things, in my opinion.

2 Likes

I personally find TrueNAS far too restrictive as the host OS. I actually run it as a VM under Proxmox :smiley:

1 Like

TrueNAS CORE (BSD), or TrueNAS SCALE (Debian)? :vulcan_salute:
Passthrough of storage to the VM, then? :slight_smile:

1 Like

Yeah, you can pass the storage through, but then you lose the ability to manage the ZFS from the host, and you still need to configure NFS and everything in TrueNAS to mount it back on the host.
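
For example, by handing whole disks to the VM (VM ID and disk IDs are placeholders); TrueNAS then builds and manages the pool itself:

# Attach physical disks to VM 100 as SCSI devices
qm set 100 -scsi1 /dev/disk/by-id/nvme-EXAMPLE-DISK-1
qm set 100 -scsi2 /dev/disk/by-id/nvme-EXAMPLE-DISK-2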

Yes, TrueNAS seemed very limiting in install options; it used both drives for the boot pool, so there was nothing left for fun :wink:

I always go with Arch. Makes a perfect server distro, comes without a DE installed and is pretty lightweight. Plus all the goodies packed into AUR. Can’t go wrong.

For servers I use Ubuntu and get extended support for free; it also has really good ZFS support. I tend to keep them running for as long as it’s safe.

I should maybe consider Arch again. I tried it some years ago, back when it was kinda new, and did have some stability issues. (Got some breakage when upgrading after an extended offline period; it had trouble catching up …)
(Having run Debian since 2.0 or 2.1, it’s easy to choose that.)

I did give the Ubuntu Server installer a try, but it wouldn’t let me create a ZFS root.
The only drawback I see with Debian is the DKMS stuff required for ZFS. (Getting the modules signed wasn’t too hard.)
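
(For anyone following along, on Debian 12 the ZFS packages come from contrib, roughly:)

# Enable the "contrib" component in /etc/apt/sources.list first, then:
apt update
apt install -y linux-headers-amd64 zfs-dkms zfsutils-linux
# (a newer ZFS is available via -t bookworm-backports if needed)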

Right now on Debian (again). Giving netplan a try. (Seemed like a good opportunity to learn something new, instead of just making my own /etc/network/interfaces as I’m used to.)

Trying to figure out how to make enp3s0 a VLAN trunk (pvid 1), and then use VLANs etc. for bridges. Thought I had it correct from the docs, but something is wrong/missing.
(I switched from NetworkManager to systemd-networkd, mostly to have NetworkManager forget old config; it seemed to retain stuff I didn’t add via netplan.)
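
For reference, the shape I’m aiming for is something like this (VLAN ID and addresses are made up); I haven’t figured out yet what’s missing:

cat > /etc/netplan/10-lan.yaml <<'EOF'
network:
  version: 2
  renderer: networkd
  ethernets:
    enp3s0:
      dhcp4: false          # trunk port, no address on the raw NIC
  vlans:
    vlan10:
      id: 10
      link: enp3s0
  bridges:
    br10:
      interfaces: [vlan10]
      addresses: [192.168.10.2/24]
      dhcp4: false
EOF

netplan apply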

And then I will have to make Docker use a different network for its bridge (I use 172.17.x for something else).
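
I think that part at least is just daemon.json (the ranges below are arbitrary examples):

cat > /etc/docker/daemon.json <<'EOF'
{
  "bip": "10.99.0.1/24",
  "default-address-pools": [
    { "base": "10.100.0.0/16", "size": 24 }
  ]
}
EOF

systemctl restart docker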

2 Likes

I keep an eye on several distros, and for Arch, I’ve been very impressed with EndeavourOS.

As for ZFS on root, I’ve done it on Ubuntu (it was an experimental feature). But I usually use ZFS for storage/data and keep root on ext4; it’s simpler, well supported and well understood. Having native ZFS modules on Ubuntu is nice.

Debian is very well supported and understood/documented for servers, but each release isn’t supported for as many years, and major upgrades take some work.