this post was submitted on 30 Apr 2026
It only took nine seconds for an AI coding agent gone rogue to delete a company’s entire production database and its backups, according to its founder. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, the company’s founder Jeremy Crane said.

The culprit was Cursor, an AI agent powered by Anthropic’s Claude Opus 4.6 model, which is one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what could go wrong.

Crane said customers of PocketOS's car rental clients were left in the lurch when they arrived to pick up vehicles from businesses that no longer had access to the software that managed reservations and vehicle assignments.

[–] LukeZaz@beehaw.org 0 points 2 days ago (1 children)

I think this kind of rhetoric is best saved for a time when AI isn't one of the most harmful things in society. Argue it's a hammer all you like; people aren't going to be receptive while that hammer is being used to beat their faces in, and making that argument at such a time isn't exactly sympathetic.

[–] t3rmit3@beehaw.org 2 points 1 day ago* (last edited 1 day ago) (1 children)

I think that "stop being mad the hammer exists, start being mad at the group of people who are beating your face in" is a very important message. Getting rid of AI (which isn't even something we can do) won't fix the issue, they'll just make another hammer. The hammer is both a weapon in this case, and a distraction.

[–] LukeZaz@beehaw.org 1 points 19 hours ago (1 children)

I think it's fine if people are mad at both. By all means, encourage people to be angry at the responsible companies. But you don't gotta defend the tech to do that.

Besides, as far as I'm concerned, strong anti-AI sentiment does actually help temper the harms of the tech and its owners. Is it a permanent solution? Obviously not; you're very correct that the groups and people hard-pushing AI are much more important targets for ire. But two pressures are better than one.

[–] t3rmit3@beehaw.org 2 points 17 hours ago* (last edited 17 hours ago) (1 children)

Besides, as far as I’m concerned, strong anti-AI sentiment does actually help temper the harms of the tech and its owners.

My worry is that, much like with gun control legislation, our neoliberal fear-based media will push AI use by individuals as the "real danger," which will only end up funneling anti-AI sentiment into 1) limiting actual open AI access (e.g. open-weight, FOSS models) by individuals, and 2) legitimizing governmental and corporate use of AI as the only "safe" and "legitimate" AI usage.

The ratio of awareness that "government-controlled AI is literally being used to kill people right now," versus awareness of, e.g., deepfakes, is astoundingly unbalanced. Both are real dangers, but only one is getting legislation passed on it, and once again it's not the one that would put limits on corporations and government.

Stoking fear is not useful if your opponents are the ones who will actually utilize that fear to their own ends successfully.

[–] LukeZaz@beehaw.org 1 points 4 hours ago

That's very understandable. While I think we disagree on the utility of AI (I feel it's more harmful than useful, and am unsure how much that would change post-bubble), I do agree this is a likely path for the gov't to take, one that would leave the most serious problems completely unaddressed while clamping down on things that shouldn't be restricted to begin with. Heck, in many regards, you could say the GUARD act is this problem in motion.

For me, I guess, the bubble and its effects on us are just so ridiculous and exhausting at this point that it's hard for me to worry about things like this. Though I do vehemently hate government use of AI especially; using it at all is a problem in my mind, but using it specifically to deliberately hurt people is reprehensibly disgusting.