Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
This would be a bitch to rebuild in a RAID array. At some point a drive gets TOO big, and this is looking to cross that line.
It doesn't really matter; the current limitation isn't so much data density at rest as getting the data in and out at a useful speed. We breached the capacity barrier long ago with disk arrays.
SATA will no longer be improved; we now need U.2 designs for data transport that are built for storage. These exist, but need to filter down through industrial applications to reach us plebs.
640K ought to be enough for anybody.
I was thinking the same. I would hate to toast a 140 TB drive. I think I'd just sit right down and cry. I'll stick with my 10 TB drives.
This is not meant for human beings. A creature that needs over 140 TB of storage in a single device can definitely afford to run them in some distributed redundancy scheme with hot swaps and just shred failed units. We know they're not worried about being wasteful.
This is for like, Smaug but if he hoarded classic anime and the entirety of Steam or something. Lol
Rebuild time is the big problem with this in a RAID array. The interface is too slow, and you risk losing more drives in the array before the rebuild completes.
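That "losing more drives before the rebuild completes" risk can be roughly quantified. A minimal sketch, assuming independent failures with a constant annualized failure rate; the AFR, array size, and rebuild-window figures below are illustrative assumptions, not numbers from the thread:

```python
import math

def p_any_failure(n_drives: int, afr: float, rebuild_hours: float) -> float:
    """Probability that at least one of n_drives fails during the rebuild,
    modeling each drive as an independent exponential failure process
    whose annualized failure rate is afr."""
    # Convert the annualized failure rate into a per-hour hazard rate.
    hourly_rate = -math.log(1 - afr) / (365 * 24)
    # Probability a single drive survives the whole rebuild window.
    p_one_survives = math.exp(-hourly_rate * rebuild_hours)
    return 1 - p_one_survives ** n_drives

# Example (assumed numbers): 11 surviving drives, 1% AFR, a ~6-day rebuild.
print(f"{p_any_failure(11, 0.01, 144):.3%}")
```

Even under these mild assumptions the risk is non-zero, which is the whole argument for RAID-6-style double parity: a single-parity array has no protection left during that window.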
Realistically, is that a factor for a Microsoft-sized company, though? I'd be shocked if they only had a single layer of redundancy. Whatever they store is probably replicated between high-availability hosts and datacenters several times, to the point where losing an entire RAID array (or whatever media redundancy scheme they use) is just a small inconvenience.
True, but that's going to really push your network links just to recover. Realistically, something like ZFS or a RAID-6 with extra hot spares would help reduce the risk, but it's still a non-trivial amount of time, not to mention the impact on normal usage during that period.
Network? Nah, the bottleneck is always going to be the drive itself. Storage networks might pass absurd numbers of Gbps, but ideally you'd be resilvering from a drive on the same backplane anyway. SAS-4 tops out at 24 Gbps, and there's no way you're hitting that write speed on a single drive: the fastest retail drives don't do more than ~2 Gbps, and even the Seagate Mach.2 only manages around twice that thanks to its two head actuators.
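Taking that ~2 Gbps figure at face value, the rebuild window is easy to estimate. A back-of-envelope sketch; the sustained write speeds are assumptions carried over from the comment above, not measured numbers:

```python
def rebuild_days(capacity_tb: float, write_gbps: float) -> float:
    """Days needed to stream capacity_tb terabytes onto a drive
    at a sustained write speed of write_gbps gigabits per second."""
    seconds = capacity_tb * 1e12 * 8 / (write_gbps * 1e9)  # TB -> bits -> s
    return seconds / 86400

print(f"{rebuild_days(140, 2):.1f} days")  # single actuator, ~2 Gbps -> 6.5 days
print(f"{rebuild_days(140, 4):.1f} days")  # dual actuator, ~4 Gbps -> 3.2 days
```

Nearly a week of degraded operation to resilver one drive, assuming the rebuild can even sustain sequential speeds, which it usually can't under live load.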
I don't get how a single person would have that much data. I fit my whole life, from the first shot I took on a digital camera in 2001, onto a 4 TB drive.
...and even then, two thirds of it is just pirated movies.
Amateur 😀
But seriously I probably have close to 100 TB of music, TV shows, movies, books, audiobooks, pictures, 3d models, magazines, etc.
I need a home for my orphaned podman containers /s
I think this is better targeted to small and medium businesses.
If you run this as a NAS, you could easily have all your business's files in one place without needing complex networking.