this post was submitted on 21 Oct 2025
115 points (100.0% liked)

technology

[–] Xiisadaddy@lemmygrad.ml 7 points 1 month ago (1 children)

Well, they always SAY it's DNS. That doesn't mean that's what it actually was.

[–] 30_to_50_Feral_PAWGs@hexbear.net 19 points 1 month ago* (last edited 1 month ago) (1 children)

Does it count as a self-inflicted cyberattack if you play layoff roulette with your senior staff and piss off everyone remaining via RTO chumpfuckery, to the point that you hemorrhage engineers left and right for multiple consecutive years, until none of the new hires know how to fix your bread and butter when it inevitably goes tits up? Because maybe that framing would work.

But no, it was DNS. More specifically, a DynamoDB endpoint became unreachable to their control plane after DNS resolution for it failed. Then, big bada boom. Recovery took significantly longer than usual on account of the aforementioned staffing issue.
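The failure mode described above is easy to demonstrate in miniature: once a service's DNS record is gone, every client gets a resolution error and the service is "down" even though its servers are fine. A minimal sketch (the hostname here is a made-up placeholder using the reserved `.invalid` TLD, not a real AWS endpoint):

```python
import socket

def resolve_endpoint(hostname):
    """Resolve a service endpoint; return its IPs, or None on DNS failure."""
    try:
        infos = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror:
        # The record doesn't resolve, so every caller -- including the
        # provider's own control plane -- sees the service as unreachable.
        return None

# .invalid is reserved (RFC 2606) and guaranteed never to resolve:
print(resolve_endpoint("dynamodb.example-region.invalid"))  # → None
```

The point is that DNS is a single choke point: nothing about the backend has to break for everything that depends on the name to go dark at once.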

[–] mermella@hexbear.net 2 points 1 month ago (1 children)

How does this have anything to do with layoffs? People say this after every AWS outage, going back years. It was a race condition in DNS automation that wiped out a major record. Don't attribute to malice what is likely stupidity. DNS is incredibly easy to eff up and nearly impossible to alert on.
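For readers wondering what a "race condition in DNS automation" even looks like: here is a toy model, not AWS's actual system, of the classic failure class where a slow worker holding a stale plan overwrites a newer one, and a cleanup pass then deletes the only live record. All names and versions below are invented for illustration:

```python
# Hypothetical DNS zone: name -> (plan_version, ip). Two automation workers
# ("enactors") apply generated record plans with no version check on write.
zone = {}

def apply_plan(plan_version, records):
    """Blindly apply a plan -- last writer wins, which is the bug."""
    for name, ip in records.items():
        zone[name] = (plan_version, ip)

def cleanup(active_version):
    """Garbage-collect records older than the active plan."""
    for name in [n for n, (v, _) in zone.items() if v < active_version]:
        del zone[name]

# Enactor A applies the new plan v2...
apply_plan(2, {"db.example.internal": "10.0.0.2"})
# ...then a slow enactor B, still holding stale plan v1, overwrites it:
apply_plan(1, {"db.example.internal": "10.0.0.1"})
# Cleanup now sees an "old" record and deletes the only endpoint entry:
cleanup(active_version=2)
print(zone)  # → {} -- the endpoint record is gone
```

The usual fix is a compare-and-swap on the version before writing (refuse to overwrite a newer plan with an older one), which is exactly the kind of guard that is easy to omit and hard to catch until the interleaving actually happens.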

I'm not saying a disgruntled ex-employee did something. I'm saying that due to brain drain -- in part from layoffs in the name of "cost cutting" -- AWS had no experienced staff left who knew how not to fall into one of those incredibly easy DNS pitfalls, and moreover, that this had a direct impact on recovery time. It is 100% stupidity at an upper management level.