this post was submitted on 01 May 2026
45 points (100.0% liked)

Selfhosted

58910 readers
329 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


Like many self-hosters, I've looked upon the recent price hikes for storage in utter disbelief. Faced with paying double what I paid only last year for new hard drives, I dug around my hardware stash and came across about a dozen old 2.5" 320-500 GB drives which I had once saved from the dumpster but never deployed. After all, they were too slow to be used as PC system drives and too small for any meaningful use in a server. Now seemed like a perfect time to look for a way to put them to good use after all. And I found it in mergerFS.

For anyone not familiar with it: in spite of its name, mergerFS is not a filesystem in the traditional sense, so deploying it doesn't require reformatting any drives (not that this would have been a problem for my use case). Instead, you can take a bunch of drives (JBOD) and string them together with no modification to their filesystems, keeping existing data intact. It is agnostic of the filesystems present on the drives, meaning you can even combine volumes formatted with, say, ext4, btrfs, and xfs. All drives will show up in your filesystem as a single volume, and, depending on the policies you configure, some data will be stored on this drive and some on that. Since data isn't striped, each drive remains individually readable, i.e. there's no need to rebuild the whole array after a single drive fails.
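To give a concrete picture of what that looks like, here's a minimal sketch of a mergerFS pool defined in /etc/fstab. The disk mount points and pool name are made up for illustration; adjust them to your own layout:

```
# /etc/fstab: pool three already-formatted drives into one mount point.
# The branches can use different filesystems (ext4, btrfs, xfs, ...).
# category.create=mfs places new files on the branch with the most free space;
# minfreespace keeps a small reserve on each drive.
/mnt/disk1:/mnt/disk2:/mnt/disk3  /mnt/pool  fuse.mergerfs  allow_other,category.create=mfs,minfreespace=10G,fsname=pool  0  0
```

mergerFS also accepts a glob like /mnt/disk* instead of listing each branch, which makes tacking on more drives later even easier.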

Speaking of drive failure: while mergerFS itself does not come with RAID, you can add SnapRAID to the mix for parity-based RAID (although it's not real-time RAID; parity data must be written on schedule, so it's not for mission-critical data that is frequently being updated and rewritten).
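To sketch how the SnapRAID side fits in: a minimal snapraid.conf might look like the following (all paths and drive labels are hypothetical), with the parity update and integrity check scheduled via cron rather than happening in real time:

```
# /etc/snapraid.conf (minimal sketch; paths are hypothetical)
parity  /mnt/parity1/snapraid.parity      # dedicated parity drive
content /var/snapraid/snapraid.content    # content list; keep copies on
content /mnt/disk1/snapraid.content       # several drives for safety
data d1 /mnt/disk1/                       # the same branches mergerFS pools
data d2 /mnt/disk2/
data d3 /mnt/disk3/
exclude *.unrecoverable
exclude /tmp/

# example crontab entries: update parity nightly,
# verify a slice of the array (8%) weekly
# 0 3 * * *  snapraid sync
# 0 5 * * 0  snapraid scrub -p 8
```

Note that the parity drive must be at least as large as the biggest data drive, and since parity is only updated on sync, files changed between runs aren't protected until the next one.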

Combined, these two technologies allow me to have my cake and eat it too:

  • I can put drives to use that would otherwise be rotting in a drawer.
  • I can avoid additional cost - both financial and ecological. (The energy bills won't increase by much, either, because most of the energy comes from solar cells on the roof.)
  • I can always flexibly tack on more drives, regardless of size.
  • I can have the added data security of a RAID, but at the price of very few (if any) of its drawbacks (e.g. no drives of equal size needed).

If this was news to you - maybe you want to give it a shot too. (I don't consider myself a very advanced user and I found it dead simple to deploy.)
If you're already running mergerFS and SnapRAID, feel free to showcase your use case and setup!
If you found any of the above incorrect or misleading, feel free to correct me.

top 18 comments
[–] irmadlad@lemmy.world 3 points 1 day ago

@IratePirate@feddit.org That's pretty resourceful and pretty cool. I'm intrigued. I'm going to have to read up on that. Thanks for posting

[–] adarza@lemmy.ca 2 points 1 day ago (2 children)

i have three snapraids here. one with (what was at the time) new disks, and two made up of old salvaged disks like you've got--pulled from systems and laptops headed for the recycle bin.

[–] yo_scottie_oh@lemmy.ml 1 points 1 day ago* (last edited 21 hours ago)

How do you connect your disks to your host machine? Are they in an external cage w/ SATA-to-USB adaptors or mounted internally to SATA ports?

[–] irmadlad@lemmy.world 1 points 1 day ago (1 children)

Was it hard to set up? Any field expedient modifications, adjustments, or fiddling? I've got a ton of old HDD from desktops, laptops, old servers sitting in one of my closets. Hmmmmmm

[–] adarza@lemmy.ca 2 points 1 day ago

not difficult at all, snapraid's online documentation is very good.

[–] plz1@sh.itjust.works 1 points 1 day ago

This is why I went with Unraid. Being able to slap whatever drives in that I have on hand was the primary driver for getting away from btrfs (Synology). And that build was about 3 months before RAM prices started to explode last year, which I read as "all parts gonna skyrocket", which they have.

[–] eightys3v3n@lemmy.ca 3 points 1 day ago (1 children)

Sounds interesting, thank you!

[–] eightys3v3n@lemmy.ca 4 points 1 day ago (1 children)

It seems SnapRAID occupies an interesting middle ground between the least "proper" and the most "proper" solutions, for when more resources aren't available or justified.

Rather than a single drive, or dozens of drives, with data randomly duplicated around or lost when individual drives die. And rather than a huge ZFS volume, with its large setup cost, lack of expandability (until AnyRaid is done), and potentially unneeded additional functionality.

Then mergerfs is a natural expansion offering a unified way to organize and access the data that SnapRAID is securing (instead of mounting all those drives somewhere).

If someone merged these projects into one solution, and added a couple extra functions (like managing compression or deduplication, caching) it seems like it could be a comparable offer to zfs for different use cases. Imagine a NAS offering with this setup by default. Much more intuitive to users I would argue.

[–] IratePirate@feddit.org 1 points 1 day ago (1 children)

a comparable offer to zfs

Weeell, zfs does bring a lot more to the table than mergerFS + snapRAID, e.g. snapshotting and scrubs/bitrot protection. But then again, it does so at a much higher price.

Imagine a NAS offering with this setup by default. Much more intuitive to users I would argue.

Agreed. unRAID has something very similar and even (slightly) better (their RAID syncs automatically, not on command). But then again, unRAID isn't FOSS.

[–] Andres4NY@social.ridetrans.it 3 points 1 day ago (1 children)

@IratePirate @eightys3v3n Snapraid offers scrub/bitrot protection - check out 'snapraid scrub'.

[–] IratePirate@feddit.org 1 points 1 day ago

I stand corrected - thank you!

[–] Decronym@lemmy.decronym.xyz 1 points 1 day ago* (last edited 1 day ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters More Letters
NAS Network-Attached Storage
RAID Redundant Array of Independent Disks for mass storage
SATA Serial AT Attachment interface for mass storage
ZFS Solaris/Linux filesystem focusing on data integrity

4 acronyms in this thread; the most compressed thread commented on today has 10 acronyms.

[Thread #269 for this comm, first seen 1st May 2026, 23:00] [FAQ] [Full list] [Contact] [Source code]

[–] possiblylinux127@lemmy.zip 1 points 1 day ago (1 children)

Honestly I wouldn't recommend much outside of ZFS for data storage

ZFS is hard to beat

[–] scrubbles@poptalk.scrubbles.tech 6 points 1 day ago (1 children)

ZFS works best for drives of the same size. It is possible to do multiple drive sizes, but it's pretty tedious. Mergerfs is a clear winner when you have many varying sizes of drives and are okay with the speed tradeoff

[–] possiblylinux127@lemmy.zip -3 points 1 day ago (1 children)

It seems worse in many regards

I'd rather do btrfs honestly

Ok great, thanks for sharing.

[–] Andres4NY@social.ridetrans.it 1 points 1 day ago (1 children)

@IratePirate Combine this with restic (or borgbackup, if that's how you swing) for a bombproof selfhosting solution.

[–] IratePirate@feddit.org 1 points 1 day ago

Good call! I'm doing regular borgbackups to an off-site, self-hosted backup server. (I'd still prefer not to be bombed! :D)