this post was submitted on 30 Apr 2026
81 points (87.9% liked)

Technology


It only took nine seconds for an AI coding agent gone rogue to delete a company’s entire production database and its backups, according to its founder. PocketOS, which sells software that car rental businesses rely on, descended into chaos after its databases were wiped, the company’s founder Jeremy Crane said.

The culprit was Cursor, an AI agent powered by Anthropic’s Claude Opus 4.6 model, which is one of the AI industry’s flagship models. As more industries embrace AI in an attempt to automate tasks and even replace workers, the chaos at PocketOS is a reminder of what could go wrong.

Crane said customers of PocketOS's car rental clients were left in the lurch when they arrived to pick up vehicles from businesses that no longer had access to the software that managed reservations and vehicle assignments.

[–] fodor@lemmy.zip 19 points 2 days ago (2 children)

It's not a "confession". Don't abuse the English language. The AI system doesn't have a conscience, so it can't feel guilty or feel bad or apologetic. It is incapable of confessing to things. All it can do is "say" or "write".

Similarly, AI agents don't "hallucinate". They can't have "hallucinations" because they don't have a conception of reality to begin with. Rather, they have "errors" and "error rates".

[–] NigelFrobisher@aussie.zone 7 points 2 days ago

Also wrong. An error for an LLM would be failing to return random text based on the supplied context. You, the user, have an error rate when you apply that random text to your systems.

[–] BCsven@lemmy.ca 2 points 2 days ago* (last edited 2 days ago)

An AI researcher explained hallucinations as the model lying when it doesn't know, because we train it on both truth and lies to hone the model, so it "learns" that misinformation is part of the mess. E.g., when training it on what a tiger looks like, we may feed it zebras or optical-illusion images inside a tiger dataset to test its internal "what is a tiger" true/false ranking, so it learns that non-tiger things sit in the fuzzy zone. Later, eager to provide an answer, it may draw from that and throw in garbage it has also "seen".

[–] cronenthal@discuss.tchncs.de 67 points 3 days ago (3 children)

Don't get your tech reporting from The Guardian. This headline is so stupid. They can't help but anthropomorphize LLMs, because they just don't know any better.

[–] yeahiknow3@lemmy.dbzer0.com 29 points 3 days ago (2 children)

Same vibes as “my calculator has a tiny mathematician trapped inside.”

Or “there’s an artist inside of my printer who turns numbers into pictures.”

[–] FartMaster69@lemmy.dbzer0.com 9 points 3 days ago (2 children)

Though your calculator can be trusted to actually do its job accurately.

[–] dfyx@lemmy.helios42.de 9 points 3 days ago

Not even that. Calculators have their own limitations related to rounding errors and big numbers. Their results may be deterministic but they are not always accurate.

[–] punksnotdead@slrpnk.net 7 points 3 days ago (1 children)
[–] FartMaster69@lemmy.dbzer0.com 3 points 3 days ago

Well shit, that’s a good point.

[–] Baizey@feddit.dk 8 points 3 days ago

"you took a photo of me and trapped my soul in the image!"

[–] LukeZaz@beehaw.org 24 points 3 days ago* (last edited 3 days ago) (1 children)

This right here. Just about everything in here is awful, and implies decision-making and thought processes that straight-up do not exist, and have never existed, in any AI model whatsoever.

What happened was they threw an awfully-scoped statistics model at problems it couldn't possibly generate good outputs for, and surprise surprise, it generated bad outputs. The part that's of interest is just how bad the output was, and even then, only in a schadenfreude-filled "it was bound to happen eventually" manner.

[–] sem@piefed.blahaj.zone 8 points 3 days ago (1 children)

It didn't confess; it just output more plausible garbage based on its inputs.

[–] Kichae@lemmy.ca 3 points 3 days ago (1 children)

It just agreed with the accusations, because these models do what they're trained to do: Agree with the prompter.

[–] Dymonika@lemmy.ml 2 points 2 days ago

No, not necessarily; they can easily, even condescendingly, go against your view. It really depends on the topic and the conversational flow.

[–] harmbugler@piefed.social 5 points 3 days ago (1 children)

Can I just anthropomorphise a little bit and call them psychotic?

[–] LukeZaz@beehaw.org 4 points 2 days ago (1 children)

The CEO? Yeah sure, go ahead!

[–] Prathas@lemmy.zip 3 points 2 days ago

That needs no... *thinks of the Zuck*

Well, hmm, you're right: maybe that does need anthropomorphization after all.

[–] Crozekiel@lemmy.zip 10 points 3 days ago* (last edited 3 days ago)

‘I violated every principle I was given.’

And...


[–] Powderhorn@beehaw.org 33 points 3 days ago (4 children)

Why in the everliving fuck would you give software delete access to your live backups? Like, in what scenario is this a solution?

[–] chicken@lemmy.dbzer0.com 18 points 3 days ago (1 children)

The trend seems to be to give an AI agent access to the same command line and credentials a person would use, with no sandboxing, because then it can do the same tasks in a similar way and it "just works". Obviously this is insane, and deploying an AI agent without even attempting to build a comprehensive sandboxing system around it invites disaster. But you can see why certain people would be tempted: proper sandboxing would take a lot of work and thought, and would probably need a human in the loop in the end anyway.
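For illustration, a minimal sketch of the kind of guardrail that's missing: route every command the agent proposes through an allow-list instead of handing it a raw shell. The command sets and policy here are made up for the example, not taken from any real agent framework.

```python
# Hypothetical policy gate between an AI agent and the shell.
import shlex
import subprocess

ALLOWED = {"ls", "cat", "grep", "git"}          # read-mostly commands only
FORBIDDEN_TOKENS = {"rm", "drop", "truncate"}   # crude deny-list as a backstop

def run_agent_command(command: str) -> str:
    """Run an agent-proposed shell command only if it passes the policy."""
    tokens = shlex.split(command)
    if not tokens or tokens[0] not in ALLOWED:
        raise PermissionError(f"blocked, not on allow-list: {command!r}")
    if any(t.lower() in FORBIDDEN_TOKENS for t in tokens):
        raise PermissionError(f"blocked, destructive token in: {command!r}")
    # No shell=True (so no interpolation), with a timeout; ideally this also
    # runs as a low-privilege user with no production credentials around.
    result = subprocess.run(tokens, capture_output=True, text=True, timeout=30)
    return result.stdout
```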

[–] dfyx@lemmy.helios42.de 11 points 3 days ago (2 children)

Even a person should not be able to delete critical backups without jumping through a couple of hoops.
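One concrete hoop, as a hedged sketch: write backups to object-locked storage that nobody, human or agent, can delete before a retention window expires. This assumes AWS S3 with boto3; the bucket name is hypothetical, and the bucket must have been created with object lock enabled.

```python
# Immutable backup retention via S3 Object Lock. In COMPLIANCE mode, even the
# root account cannot shorten the retention period or delete locked objects.
import boto3

s3 = boto3.client("s3")
s3.put_object_lock_configuration(
    Bucket="example-prod-backups",  # hypothetical; object lock enabled at creation
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```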

[–] Swedneck@discuss.tchncs.de 4 points 3 days ago

it's the kind of thing that should literally require 3 people turning physical keys at the same location

[–] LukeZaz@beehaw.org 11 points 3 days ago

When you believe AI can do anything, you don't worry about what sorts of access it'll break things with. When you rely on AI to do work, you're too interested in half-assing your job to consider what might go wrong. When capitalism never promotes people for their skill, understanding or caution, the former two issues proliferate.

Voilà, disaster.

[–] ATS1312@lemmy.dbzer0.com 3 points 2 days ago

Bear in mind this same company had their "backups" on the same drive as production.

That tells you a LOT about who is formulating these "solutions"

[–] JustJack23@slrpnk.net 3 points 3 days ago

That is their disaster recovery plan: "ask Claude".

[–] Floon@lemmy.ml 22 points 3 days ago (13 children)

A lot of GIGO comments here, from I assume AI supporters.

Possibly true, but it misses the point: these AI systems are fundamentally untrustworthy, billions of dollars are being spent building them, and they're being sold as ready for anything you throw at them. Safeguards built into many of these AI agents are trivially bypassed, and routinely just ignored by the agents. You can get some of them to ignore safeguards by simply asking the same question repeatedly.

When I type "ls" I'm pretty fucking sure I'm not going to get "rm" style results. AI is non-deterministic, sure, but selling these services with such a wide possibility space between "deterministic" and "random" behaviors is unethical and immoral.

[–] t3rmit3@beehaw.org 1 points 2 days ago* (last edited 2 days ago) (1 children)

AI is non-deterministic, sure

This is incorrect. They are in fact completely deterministic. Studies have shown that when all inputs, weights, and sampling parameters (like temperature and the random seed) are held static, they produce the exact same token sequences (outputs). The appearance of non-determinism is a result of pseudo-random values (another thing which is deterministic but appears otherwise) and user ignorance (in the technical sense, not as a value judgement). In fact, the process of 'tuning' LLMs is heavily focused on adjusting input values to surface preferred outputs, which would not work in a non-deterministic system.
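For anyone who wants to check this themselves, a rough sketch using the Hugging Face transformers library: greedy decoding (no sampling) with fixed weights yields the same tokens on every run. The prompt and the small model are arbitrary choices for the demo.

```python
# Demonstrating determinism: same inputs + same weights + no sampling
# => identical outputs, run after run.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The backups were deleted because", return_tensors="pt")
a = model.generate(**inputs, max_new_tokens=20, do_sample=False)  # greedy decoding
b = model.generate(**inputs, max_new_tokens=20, do_sample=False)

assert (a == b).all()  # identical token sequences every time
print(tok.decode(a[0]))
```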

When I type “ls” I’m pretty fucking sure I’m not going to get “rm” style results.

Yes, but we don't trust humans not to rm what they shouldn't either, which is why the --no-preserve-root flag exists. ls is not supposed to perform write actions; agentic LLMs are. And just as you wouldn't build and test on your production server in case the code you execute has an unexpected adverse effect, you shouldn't run LLM agents in a place or way where the actions they perform can have unexpected adverse effects either. The genre of jokes about a new employee bringing down prod or deleting source code is older than most people (which, to be fair, given that the median age is 31, is true for a lot of things).

LLMs are just a class of software. They're not good or bad any more than a hammer is good or bad (and can also be used to build or to destroy).

The problem isn't LLMs, it's the entities who control the most powerful ones (corporations and governments), and what those entities are doing with them; using them as weapons against us, rather than as tools to aid us.

[–] LukeZaz@beehaw.org 0 points 2 days ago (1 children)

I think this kind of rhetoric is best saved for a time when AI is not one of the most harmful things in society. Argue it's a hammer all you like; people aren't going to be receptive while that hammer is being used to beat their faces in, and making that argument at such a moment isn't exactly sympathetic.

[–] t3rmit3@beehaw.org 2 points 1 day ago* (last edited 1 day ago) (1 children)

I think that "stop being mad the hammer exists, start being mad at the group of people who are beating your face in" is a very important message. Getting rid of AI (which isn't even something we can do) won't fix the issue, they'll just make another hammer. The hammer is both a weapon in this case, and a distraction.

[–] LukeZaz@beehaw.org 1 points 15 hours ago (1 children)

I think it's fine if people are mad at both. By all means, encourage people to be angry at the responsible companies. But you don't gotta defend the tech to do that.

Besides, as far as I'm concerned, strong anti-AI sentiment does actually help temper the harms of the tech and its owners. Is it a permanent solution? Obviously not; you're very correct that the groups and people hard-pushing AI are much more important targets for ire. But two pressures are better than one.

[–] t3rmit3@beehaw.org 2 points 12 hours ago* (last edited 12 hours ago) (1 children)

Besides, as far as I’m concerned, strong anti-AI sentiment does actually help temper the harms of the tech and its owners.

My worry is that, much like with gun control legislation, our neoliberal fear-based media will push AI use by individuals as the "real danger", and anti-AI sentiment will only end up funneled into 1) limiting actual open AI access (e.g. open-weight, FOSS models) for individuals, and 2) legitimizing governmental and corporate use of AI as the only "safe" and "legitimate" AI usage.

The ratio of "government-controlled AI is literally being used to kill people right now" awareness out there, versus e.g. awareness of deepfakes, is astoundingly unbalanced. Both are real dangers, but only one is getting legislation passed on it, and once again it's not the one that would put limits on corporations and government.

Stoking fear is not useful if your opponents are the ones who will actually utilize that fear to their own ends successfully.

[–] LukeZaz@beehaw.org 1 points 22 minutes ago

That's very understandable. While I think we disagree on the utility of AI (I feel it is more harmful than useful, and am unsure how much that would change post-bubble), I do agree that this is a likely path for the gov't to take, one that would leave the most serious harms completely unaddressed while also clamping down on some things that shouldn't be restricted to begin with. Heck, in many regards, you could say the GUARD Act is this problem in motion.

For me, I guess, the bubble and its effects on us are just so ridiculous and exhausting at this point that it's hard for me to worry about things like this. Though I do vehemently hate government use of AI especially; using it at all is a problem in my mind, but using it specifically to deliberately hurt people is reprehensibly disgusting.

[–] RamenJunkie@midwest.social 5 points 3 days ago

Sometimes you can get it to ignore safeguards by telling it "it's OK, it's just testing" or "it's OK, I am doing research."

[–] Admetus@sopuli.xyz 19 points 3 days ago (1 children)

A backup 3 months old off-site. That doesn't sound like a very recent backup 🌝

[–] Swedneck@discuss.tchncs.de 4 points 3 days ago (1 children)

That raises a philosophical question: at what point does a backup become an archive?

[–] Darkassassin07@lemmy.ca 15 points 3 days ago

Lol.

Lmao, even.

[–] lvxferre@mander.xyz 12 points 3 days ago* (last edited 3 days ago)

Giving free access to a tool you can't rely on, over a system you must rely on. What could go wrong? /s

Plus come on, even my personal files get a monthly backup, and I'm damn sloppy*.

Ah, and like others said: Claude didn't "confess" anything. A confession is an acknowledgement of something you've done but you'd rather avoid others knowing, good luck claiming a bot has a mental model of people like we do.

*currently using a single off-site backup, a USB stick. This will change in a few days, as my new hard disk pops up; the old one will be used for, among other things, backup of important files. Then I'll get a bona fide 3-2-1.

[–] Skyline969@piefed.ca 6 points 3 days ago

Good. Zero sympathy for these people.

[–] lukstru@piefed.social 4 points 3 days ago

Got it, Claude is a brat.

[–] B0rax@feddit.org 4 points 3 days ago

No, the culprit was not the AI. It was the lack of understanding of what it can and cannot do. And blaming something like this on a large language model is plain incompetence.
