this post was submitted on 20 Mar 2026
72 points (86.7% liked)

Not The Onion


Side note: Sorry for linking to a site full of clickbait.

[–] MangoCats@feddit.it 25 points 19 hours ago (1 children)

The simple fact is: if the AI agent broke out of its testing environment, somebody left the door open for it. The person who set up the test environment being incompetent doesn't mean the AI is diabolical.

Now, if you had first asked the AI agent to ensure that its test environment was secure, really really secure, and it assured you "yes, there is no way I can get out," and then it turned around and got out anyway, covering its tracks as it went, then I'd ask: what was this LLM trained on? Black hat conference proceedings, or...?

[–] msage@programming.dev 6 points 15 hours ago

To the second paragraph: what?

Agents are neither sentient nor logical; asking one whether it can get out is just dumb.