ZFS will by default use quite a bit of RAM for its ARC, up to 50% of host memory if I'm correct.
ZFS is great for certain use cases, but I really wouldn't recommend it on a low-memory server where you need as much of that memory as possible for VM RAM.
You could try to tune your ZFS ARC settings. That will have a negative effect on disk performance, but at least you wouldn't be running the server to the point of nearly getting hit by the OOM killer.
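If you do want to try it, a rough sketch of capping the ARC at runtime looks like this (the 4 GiB value is just an example, not a recommendation for your box, and it doesn't persist across reboots):

    # cap the ARC at 4 GiB right now (value is in bytes)
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max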
If not, and you're in a position to back up and restore, I'd suggest going to a standard mdadm RAID array.
No, not really. I use ZFS on one node, but only for a secondary set of hard drives with an SSD as a caching device. I think that was more of an experiment anyway; so far no troubles with it.
I usually just use plain software RAID 1 or 10 with sparse disk images, not even thin LVM…
As for swap space, swapfiles on ZFS don't play too nicely. What you should do instead is use the advanced options when installing Proxmox: on that tab, the bottom box (I forget what it's called) lets you leave some free space at the end of your drive. Reserve however much space you want for swap, then continue installing the system as normal. Once the machine is up, create swap on that unused free space the way you normally would (gdisk, mkswap, etc.). This gives you the best performance and stability.
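For reference, roughly what that last step looks like, assuming the reserved space ends up as a new partition 4 on /dev/sda (device and partition numbers are just placeholders, adjust for your layout):

    # create a partition in the leftover free space and mark it as Linux swap (type 8200)
    sgdisk -n 4:0:0 -t 4:8200 /dev/sda
    mkswap /dev/sda4
    swapon /dev/sda4
    # make it persistent across reboots
    echo '/dev/sda4 none swap sw 0 0' >> /etc/fstab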
Tune the ZFS ARC. I use 4-6GB on 32GB hosts.
By default, zfsonlinux uses up to 50% of host RAM for the ARC.
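To make a smaller ARC limit persistent, something like this should do it (4 GiB here is just an example; on a ZFS root you also need to rebuild the initramfs so the option is applied at boot):

    # limit the ARC to 4 GiB (value in bytes)
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
    update-initramfs -u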
Next, MOVE/disable all swap sitting on ZFS zvols (which I think is the default installer option).
ANY system (hypervisor) swap should be on a non-ZFS volume for safety reasons (explained later).
The newer Proxmox installers provide an option to leave some unallocated space after the ZFS partition. Make a Linux swap partition in that space and use that instead for the hypervisor's swap.
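If you already have swap on a zvol, a sketch of moving off it would look something like this (I'm assuming the zvol is called rpool/swap, which is what the older installer created, and that you've already set up a real swap partition as above):

    # stop using the zvol-backed swap and drop its fstab entry
    swapoff /dev/zvol/rpool/swap
    sed -i '/rpool\/swap/d' /etc/fstab
    # optionally reclaim the space
    zfs destroy rpool/swap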
Having swap on ZFS during a critical OOM situation can cause catastrophic data loss.
I think this was the issue that alerted me to it https://github.com/openzfs/zfs/issues/7734
It's been quite a while, so there may have been movement on the bug since.
But I doubt it, due to the nature of the ZFS design: almost every operation in ZFS involves some memory allocation, so swapping out to ZFS during an OOM situation is a very bad thing.
None of this applies to Guests. SWAP or no-SWAP is up to you there.
Better to just set a sensible MIN and a generous MAX amount of RAM and enable the balloon driver, like Falzo mentioned.
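For a KVM guest that would look roughly like this (VM ID 100 and the sizes are just placeholders):

    # give VM 100 up to 8 GiB, and let the balloon driver shrink it to 2 GiB under host memory pressure
    qm set 100 --memory 8192 --balloon 2048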
If you have around 1TB of storage, then 1GB of RAM is roughly what you ideally want set as the ARC maximum, which is what that number reflects. And tbh you'll probably find ZFS is really using more than that right now, since it would be taking up to 16GB (its default max) on that system.
The way I understand it, 1GB is the bare minimum ARC size per terabyte of addressable storage.
Additional ARC size functions as a hypervisor-wide disk page cache. I get SSD-cached-like performance in my LXC containers (backed by ZFS datasets by default in Proxmox; KVMs are backed by zvols, which work slightly differently internally).
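If you want to see what the ARC is actually doing on your host, the kernel's arcstats are a quick check (arc_summary gives a prettier report if you have it installed):

    # current ARC size and configured maximum, in bytes
    awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats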
I say 4GB ARC_size and a minimal 256MB swap inside guests.
You want guests to take advantage of ZFS storage magic.
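Inside a KVM guest that small swap can just be a swapfile on the guest's own filesystem (path and size are placeholders; this sits on the guest's virtual disk, not directly on ZFS):

    # inside the guest: create and enable a 256 MiB swapfile
    fallocate -l 256M /swapfile
    chmod 600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    echo '/swapfile none swap sw 0 0' >> /etc/fstab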