Best OS for a NAS (derpzilla.net)
submitted 10 months ago by Kwa@derpzilla.net to c/selfhosted@lemmy.world

Hey again! I’ve progressed in my NAS project and I’ve chosen to go for a DIY NAS. I can’t wait for the parts to arrive!

Now I’m struggling a bit to choose an OS. I am starting with 2x 10 TB HDDs + a 1 TB NVMe SSD. I plan to use 1 HDD for parity and to add more disks later.

I plan to use this server purely as a NAS because I will be getting a second more powerful server some time next year. But in the meantime, this NAS is a big upgrade over my rpi 4, so I will run some containers or VMs.

I don’t want to go with TrueNAS as I don’t want to use ZFS (my RAM is limited and I’m not sure I can add drives with different sizes). I’ve read btrfs is the second-best option for a NAS, so I may use that.

Unraid seemed like the perfect fit. But the more I read about it, the more I wonder if I shouldn’t switch to Proxmox.

What I like about Unraid is the ability to add a disk without worrying about the size. I don’t care much about the applications Unraid provides, and since docker-compose is not fully supported, I’m afraid I won’t be able to do things I could have done easily with a docker-compose.yml. I also like that it’s easy to share a folder. What I don’t like about Unraid is the cache system and the mover. I understand why the system works this way, but I’m not a fan.
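
For reference, the kind of thing I want to keep deploying is just a plain compose file. Something like this (the service is only an illustrative example, not my actual stack):

```
# docker-compose.yml -- illustrative example only
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    ports:
      - "8096:8096"                     # Jellyfin's default web UI port
    volumes:
      - ./config:/config                # app data I would back up
      - /mnt/storage/tv:/media/tv:ro    # bulk media, no parity/backup needed
    restart: unless-stopped
```

One `docker compose up -d` and it’s running, which is the workflow I’d like to keep.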

I’ve asked myself if I needed instant parity for all my data and if I should put everything in the array.

The thing is that for some of my data I don’t care about parity. For instance, I’m fine with only backing up my application data and having parity for the backup. For my TV shows I don’t care about parity or backups, while I want both for my photos.

After some more research, I found mergerfs and SnapRAID. I feel that they are more flexible and fix the cache/mover issue from Unraid, though I’m not sure whether SnapRAID can run with only 2 disks.
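
From what I’ve read, the setup would look roughly like this. The paths, disk names and the one-parity/two-data layout are just placeholders, I haven’t tested any of it:

```
# /etc/snapraid.conf -- one parity disk, data disks listed individually
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
exclude *.unrecoverable
exclude /lost+found/

# /etc/fstab -- mergerfs pools the data disks into a single mount point
/mnt/disk* /mnt/storage fuse.mergerfs defaults,allow_other,cache.files=off,category.create=mfs,dropcacheonclose=true 0 0
```

Parity would only be updated when `snapraid sync` runs from a cron/systemd timer, which matches the “I don’t need instant parity for everything” point above.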

If I go with Proxmox, I think I would use OpenMediaVault to set up shares.

Is anyone using something like this? What are your recommendations?

Thanks!

[-] ShellMonkey@lemmy.socdojo.com 12 points 10 months ago* (last edited 10 months ago)

https://xigmanas.com/xnaswp/download/

For a pure NAS purpose this is my go-to. It serves drives, supports multiple file systems, and has a few extras like a basic web server and rsync built into a nice embedded system. The OS can run on a USB stick and manage the data drives separately.

On the ZFS front, a common misconception is that it eats a ton of RAM. What it actually does is use idle RAM for the 'ARC', which caches the most frequently and/or most recently used files to avoid pulling them from disk. That RAM will be dumped and made available to the system on demand if for whatever reason the OS needs it. Idle RAM is wasted RAM, so it's a nice thing to have available.
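
You can watch that behavior if you're curious. On Linux with OpenZFS it looks something like this (on a BSD-based system like XigmaNAS the same counters live under the kstat.zfs.misc.arcstats sysctls instead):

```
# current ARC size vs. its ceiling, in bytes, straight from the kernel stats
awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats

# or the bundled report tool, if the ZFS userland package is installed
arc_summary
```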

[-] anzo@programming.dev 4 points 10 months ago

Indeed, ZFS uses a percentage of RAM for cache, and that amount is configurable. ZFS has an easier CLI, I'd recommend it for a NAS. And, allow me to say that I'm not sure the comparison is really between TrueNAS Scale and Proxmox. This thread reminds me of the usual distro wars between people who don't know about desktop environments (KDE, GNOME, Xfce, etc.), as in: you can use ZFS in Proxmox ;O
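
For example, on a Debian/Proxmox box the knob is the zfs_arc_max module parameter; the 4 GiB value below is just an example:

```
# persist the cap across reboots
echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf   # 4 GiB in bytes
update-initramfs -u

# apply it right away (ignored if the value is below zfs_arc_min)
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```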

[-] Kwa@derpzilla.net 0 points 10 months ago

I was comparing Proxmox and Unraid. I had ruled out TrueNAS because it only supports ZFS. I was wrong about the RAM for ZFS, but another issue is that it doesn’t support different disk sizes.

[-] Kwa@derpzilla.net 2 points 10 months ago

Indeed, it wasn’t clear to me that that’s how it worked. That seems better than the cache/mover system from Unraid.

Another reason I didn’t consider ZFS is that it only works with disks of the same capacity. As I will be adding disks over time, I think I would be wasting disk space.

[-] ShellMonkey@lemmy.socdojo.com 5 points 10 months ago* (last edited 10 months ago)

The disk sizes also don't have to match. Creating a drive array for ZFS is a 2-phase thing:

First you create a series of 'vdevs', which can be single disks or mirrored pairs.

Then you combine the vdevs into a 'zpool' regardless of their sizes and it all becomes one big pool. It acts somewhere between RAID and disk spanning: it reads and writes to all of them, but once any given vdev is full it just stops going there. I currently have vdevs in 12, 8, 6 and three 4 TB sizes for a total of 38 TB of space minus formatting loss.

That's an example of how I have it laid out; it'd be ideal to have them all the same size to balance it better, but it's not required.
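
In command form the two phases are roughly this (pool name and device names are just placeholders for whatever your disks are):

```
# create the pool with a first mirrored vdev
zpool create tank mirror /dev/sda /dev/sdb

# later, grow the pool by adding another vdev of a different size
zpool add tank mirror /dev/sdc /dev/sdd

# show how much space each vdev contributes
zpool list -v tank
```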

[-] Kwa@derpzilla.net 2 points 10 months ago

Thanks, I’ll check ZFS again!
