[–] db0@lemmy.dbzer0.com 262 points 3 weeks ago (9 children)

It's wild that these cloud providers were seen as a surefire way to ensure reliability, only for them to become a universal single point of failure.

[–] Nighed@feddit.uk 137 points 3 weeks ago (2 children)

But if everyone else is down too, you don't look so bad 🧠

[–] queerlilhayseed@piefed.blahaj.zone 69 points 3 weeks ago (2 children)

No one ever got fired for buying IBM.

[–] cdzero@lemmy.ml 18 points 3 weeks ago (2 children)

I wouldn't be so sure about that. The state government of Queensland, Australia just lifted a 12-year ban on IBM getting government contracts after a colossal fuck-up.

[–] queerlilhayseed@piefed.blahaj.zone 58 points 3 weeks ago* (last edited 3 weeks ago) (2 children)

It's an old joke from back when IBM was the dominant player in IT infrastructure. The idea was that IBM was such a known quantity that even non-technical executives knew what it was and knew that other companies also used IBM equipment. If you bought from a lesser-known vendor and something broke, you might be blamed for going off the beaten track and fired (regardless of where the fault actually lay), whereas if you bought IBM gear and it broke, it was simply considered the cost of doing business. So buying IBM became a CYA tactic for sysadmins, even when it went against their better technical judgement. AWS is the modern IBM.

AWS is the modern IBM.

That's basically why we use it at work. I hate it, but that's how things are.

[–] NotMyOldRedditName@lemmy.world 2 points 2 weeks ago* (last edited 2 weeks ago)

if you bought IBM gear and it broke, it was simply considered the cost of doing business,

The IBM-produced Canadian Phoenix Pay system has entered the chat, with a record 0 firings.

[–] ByteJunk@lemmy.world 4 points 3 weeks ago

Such a monstrous clusterfuck, and you'll be hard-pressed to find anyone who was sacked, let alone anyone facing actual charges over the whole debacle.

If anything, I'd say that's the single best case for buying IBM: if you're incompetent and/or corrupt, just go with them, and even if shit hits the fan you'll be OK.

[–] Auli@lemmy.ca 5 points 3 weeks ago

Yes, but now it's "nobody ever got fired for buying Cisco."

[–] clif@lemmy.world 13 points 3 weeks ago* (last edited 3 weeks ago)

One of our client support people told an angry client to open a Jira with urgent priority and we'd get right on it.

... the client support person knew full well that Jira was down too : D

At least, I think they knew. Either way, there wasn't shit we could do about it for that particular region until AWS fixed things.

[–] GissaMittJobb@lemmy.ml 60 points 3 weeks ago (5 children)

It's mostly a skill issue for services that go down when us-east-1 has issues in AWS: if you actually know your shit, you don't run into these kinds of problems.

Case in point: Netflix runs on AWS and experienced no issues during this thing.

And yes, it's scary that so many high-profile companies are this bad at the thing they spend all day doing.

[–] village604@adultswim.fan 22 points 3 weeks ago (2 children)

Yeah, if you're a major business and don't have geographic redundancy for your service, you need to rework your BCDR plan.

[–] Bob_Robertson_IX@discuss.tchncs.de 11 points 3 weeks ago (1 children)
[–] village604@adultswim.fan 10 points 3 weeks ago

So does an outage, but I get that the C-suite can only think one quarter at a time.

[–] sugar_in_your_tea@sh.itjust.works 5 points 3 weeks ago* (last edited 3 weeks ago)

Absolutely this. We are based out of one region, but also have a second region as a quick disaster recovery option, and we have people 24/7 who can manage the DR process. We're not big enough to have live redundancy, but big enough that an hour of downtime would be a big deal.
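
As a rough illustration of what that kind of second-region failover can look like, here's a minimal active/passive DNS failover sketch using Route 53 via boto3. The zone ID, hostnames, and health check ID are made-up placeholders, not anything from this thread.

```python
# Hypothetical sketch: active/passive DNS failover between two regions
# using Route 53 failover routing. All identifiers are placeholders.
import boto3

route53 = boto3.client("route53")

def upsert_failover_records(zone_id, name, primary_lb, secondary_lb, health_check_id):
    """Point `name` at the primary region's load balancer, falling back to
    the DR region's load balancer when the health check fails."""
    changes = []
    for role, target, hc in (
        ("PRIMARY", primary_lb, health_check_id),
        ("SECONDARY", secondary_lb, None),
    ):
        record = {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": f"{name}-{role.lower()}",
            "Failover": role,
            "TTL": 60,
            "ResourceRecords": [{"Value": target}],
        }
        if hc:
            record["HealthCheckId"] = hc
        changes.append({"Action": "UPSERT", "ResourceRecordSet": record})

    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": changes},
    )

upsert_failover_records(
    zone_id="Z0000000EXAMPLE",
    name="app.example.com",
    primary_lb="primary-alb.us-east-1.elb.amazonaws.com",
    secondary_lb="dr-alb.us-west-2.elb.amazonaws.com",
    health_check_id="11111111-2222-3333-4444-555555555555",
)
```

The idea is simply that when the health check on the primary record goes unhealthy, Route 53 starts answering with the secondary record, so traffic drains to the DR region without anyone editing DNS by hand.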

[–] tourist@lemmy.world 4 points 3 weeks ago (1 children)

What's the general plan of action when a company's base region shits the bed?

Keep dormant mirrored resources in other regions?

I presumed the draw of us-east-1 was its lower cost, so if any solutions involve spending slightly more money, I'm not surprised high-profile companies put all their eggs in one basket.

[–] corsicanguppy@lemmy.ca 4 points 3 weeks ago

I presumed the draw of us-east-1 was its lower cost

At no time is pub-cloud cheaper than priv-cloud.

The draw is versatility: making changes doesn't require spinning up hardware first. No one knew how much the data costs would kill the budget, but now they do.

[–] B0rax@feddit.org 4 points 3 weeks ago* (last edited 2 weeks ago) (1 children)

Case in point: Netflix runs on AWS and experienced no issues during this thing.

But Netflix did encounter issues. For example, the account cancel page did not work.

[–] princessnorah@lemmy.blahaj.zone 1 points 2 weeks ago (1 children)

I would say that's a pretty minor issue that isn't related to the functioning of the service itself.

[–] kbobabob@lemmy.dbzer0.com 0 points 2 weeks ago (1 children)

It's probably by design that the only thing that didn't work was the cancel page

[–] princessnorah@lemmy.blahaj.zone 2 points 2 weeks ago (1 children)

That's honestly just a tin-foil-hat sort of take; it relies entirely on planning for an unprecedented AWS outage specifically to screw over customers.

[–] kbobabob@lemmy.dbzer0.com 1 points 2 weeks ago

What I meant was that they probably didn't care whether that page had a robust backup solution, the way something like authentication would.

[–] corsicanguppy@lemmy.ca 3 points 3 weeks ago

I love the "git gud" response. Sacred cash cows?

[–] Danquebec@sh.itjust.works 1 points 2 weeks ago

Netflix did encounter issues. I couldn't access it yesterday at noon EST. And I wasn't alone, judging by Downdetector.ca

[–] tburkhol@lemmy.world 32 points 3 weeks ago (1 children)

It is still a logical argument, especially for smaller shops. I mean, you can (as self-hosters know) set up automatic backups, failover systems, and all that, but it takes significant time and resources. Redundant internet connectivity? Redundant power delivery? Spare capacity to handle a 10x demand spike? Those are big expenses for small, even mid-sized businesses. No one really cares if your dentist's office is offline for a day, even if they have to cancel appointments because they can't process payments or records.

Meanwhile, in theory, reliability is such a core function of cloud providers that they should be paying for the experts' experts and platinum-standard infrastructure. That's what makes any problem they do have newsworthy.

I mean, it seems silly for orgs as big and internet-centric as Fortnite, Zoom, or a Fortune 500 bank to outsource their internet, and maybe this will be a lesson for them.

[–] village604@adultswim.fan 5 points 3 weeks ago (1 children)

It's also silly for the orgs to not have geographic redundancy.

[–] killabeezio@lemmy.zip 3 points 3 weeks ago (1 children)

No, it's not. Geographic redundancy is very expensive to run and there are a lot of edge cases. It's much easier to have regional redundancy for a fraction of the cost.
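
For contrast, here's a minimal sketch of what in-region (multi-AZ) redundancy looks like on AWS, assuming boto3 and placeholder names and sizes: standbys live in other availability zones of the same region, not in another region.

```python
# Illustrative only: in-region redundancy by spreading across availability
# zones. Identifiers, instance sizes, and subnets are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# A Multi-AZ database keeps a synchronous standby in a second AZ of the
# same region; failover is automatic and no second region is involved.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="app",
    ManageMasterUserPassword=True,
    MultiAZ=True,
)

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Application instances spread over several AZs (one subnet per AZ).
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="app-asg",
    LaunchTemplate={"LaunchTemplateName": "app-lt", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    VPCZoneIdentifier="subnet-aaa,subnet-bbb,subnet-ccc",
)
```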

[–] village604@adultswim.fan 4 points 2 weeks ago* (last edited 2 weeks ago)

The organizations they were talking about, and that I was referring to, already have a global presence.

Plus, it's not significantly more expensive to have a cold standby in a different geographic location in AWS.
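
A minimal sketch of that kind of cold standby, assuming boto3: nothing runs in the second region day to day, you just keep copies of your machine images and database snapshots there so the stack can be rebuilt if the home region fails. All IDs and names below are invented.

```python
# Hypothetical cold-standby replication: copy the app AMI and the latest
# DB snapshot into a standby region on a schedule. Placeholders throughout.
import boto3

PRIMARY = "us-east-1"
STANDBY = "eu-west-1"

def replicate_to_standby(ami_id: str, db_snapshot_arn: str) -> None:
    # Copy the application image into the standby region.
    ec2_standby = boto3.client("ec2", region_name=STANDBY)
    ec2_standby.copy_image(
        Name="app-ami-standby",
        SourceImageId=ami_id,
        SourceRegion=PRIMARY,
    )

    # Copy the latest database snapshot into the standby region.
    rds_standby = boto3.client("rds", region_name=STANDBY)
    rds_standby.copy_db_snapshot(
        SourceDBSnapshotIdentifier=db_snapshot_arn,  # full ARN for cross-region copies
        TargetDBSnapshotIdentifier="app-db-standby",
        SourceRegion=PRIMARY,
    )

replicate_to_standby(
    ami_id="ami-0123456789abcdef0",
    db_snapshot_arn="arn:aws:rds:us-east-1:123456789012:snapshot:app-db-nightly",
)
```

You only pay for the storage of the copies until the day you actually have to launch from them, which is why the standing cost stays low.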

[–] ms_lane@lemmy.world 11 points 3 weeks ago

They zigged when we all zagged.

Decentralisation has always been the answer.

[–] corsicanguppy@lemmy.ca 10 points 3 weeks ago (1 children)

universal single point of failure.

If it's not a region failure, it's someone pushing untested slop into the devops pipeline and vaping a network config. So very fired.

[–] 4am@lemmy.zip 7 points 3 weeks ago

Apparently it was DNS. It’s always DNS…
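
For anyone curious, a trivial way to see that kind of failure from the outside is to check whether the regional endpoint names still resolve. This is just an illustrative check (the endpoint pattern is AWS's public one), not how the outage was diagnosed.

```python
# Toy illustration of why "it's always DNS": if the regional endpoint name
# stops resolving, every client call fails before a single packet is sent.
import socket

def resolves(hostname: str) -> bool:
    try:
        socket.getaddrinfo(hostname, 443)
        return True
    except socket.gaierror:
        return False

for region in ("us-east-1", "us-west-2"):
    endpoint = f"dynamodb.{region}.amazonaws.com"
    print(endpoint, "resolves" if resolves(endpoint) else "does NOT resolve")
```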

[–] mhzawadi@lemmy.horwood.cloud 8 points 3 weeks ago

Yeah, so many things now use AWS in some way. So when AWS catches a cold, the internet shivers.

[–] joel_feila@lemmy.world 3 points 2 weeks ago (1 children)

Well, companies use it not for reliability but to outsource responsibility. Even medium-sized companies treated Windows like a subscription for many, many years. People have been emailing files to themselves since the start of email.

For companies, moving everything to MSA or AWS was just the next step and didn't change day-to-day operations.

[–] NotMyOldRedditName@lemmy.world 3 points 2 weeks ago

People also tend to forget all the compliance issues that can come with hosting content, and using someone with expertise in that area can remove a very large burden. It's not something that would hit every industry, but it does hit many.

[–] relativestranger@feddit.nl 1 points 3 weeks ago

Sidekicks in '09. Had so many users here affected.

Never again.

[–] wirebeads@lemmy.ca 1 points 3 weeks ago

A single point of failure you pay them for.