this post was submitted on 12 Dec 2024
Technology
That was true a while back, but yes, drives have gotten way better.
That's just the failure rate, though, not data loss. To prevent silent data loss you need a checksumming file system like ZFS, or RAID 1/10/6 where the array can do error checking as well.
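To show what that error checking buys you, here's a toy Python sketch of single-parity reconstruction — the idea behind RAID 5 recovery (RAID 6 adds a second, Reed-Solomon-style parity on top; this is an illustration, not how any real implementation is written):

```python
def parity(blocks):
    """XOR the byte strings in `blocks` together.

    With data blocks D1..Dn and parity P = D1 ^ ... ^ Dn, any single
    missing block equals the XOR of the parity and the surviving blocks.
    """
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Simulate losing the second block and rebuilding it from the rest.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

One lost block per stripe is recoverable this way; lose two (without the second parity RAID 6 adds) and the XOR no longer pins down either one.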
They also need to be powered on. Offline drives will lose data to bit rot over time.
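To make the bit-rot point concrete, here's a minimal Python sketch of a "scrub": record checksums up front, re-verify later. The function names are made up for illustration — ZFS does this per block, automatically, and can repair from redundancy rather than just report:

```python
import hashlib
from pathlib import Path

def checksum(path):
    """SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def scrub(index):
    """Given {path: expected_digest}, return paths whose data no
    longer matches -- i.e. files that silently changed on disk."""
    return [p for p, digest in index.items() if checksum(p) != digest]
```

Without stored checksums, a flipped bit in a cold archive just reads back as valid-looking data; nothing ever notices.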
What about btrfs?
ZFS is better.
What about it is better?
ZFS is unfortunately not in the upstream Linux kernel :/
Btrfs is worse in many respects, but I like the flexibility of adding drives with different capacities over time.
How did we get bcachefs upstreamed but not ZFS?
Edit: Never mind, it's licensing related (ZFS's CDDL license is considered incompatible with the kernel's GPLv2).
OpenZFS works just fine.
Lifetimes have improved, but according to your link, the currently measured average age of a drive at failure is 2 years, 10 months. They expect that to increase as they roll over to newer, more reliable drives. Those drives are under heavy use, unlike drives used for offline storage, but it's still not the kind of lifespan you'd ideally want in an archival medium.