[–] paraphrand@lemmy.world 2 points 2 days ago (3 children)

That’s hilarious, but the post is about the AI not doing what it’s told. You know?

[–] StillAlive@piefed.world 36 points 2 days ago (2 children)
[–] paraphrand@lemmy.world 15 points 2 days ago* (last edited 2 days ago) (1 children)

Well, for now. If it’s all just made-up bullshit, I’m sure one of those 12 partner companies they called out as new security partners will eventually leak that it’s lies.

Anthropic announced new partnerships to inform those companies of security issues and to work with them to fix said issues. If it’s bullshit, it’s gonna waste their time, and that’ll surface eventually.

The meme still applies to people asking the AI to tell them what they wanna hear, and to delusional people spiraling with sycophantic AI.

But I believe Anthropic when they say their models are not working as intended and posing security risks.

[–] theunknownmuncher@lemmy.world 2 points 2 days ago (1 children)

Try clicking the link and reading the article this time

[–] paraphrand@lemmy.world 3 points 2 days ago* (last edited 2 days ago) (1 children)

I wasn’t wrong in this reply. It was about believing Anthropic.

Are you saying they are lying? Why should I disbelieve Anthropic?

[–] theunknownmuncher@lemmy.world 2 points 2 days ago* (last edited 2 days ago) (1 children)

Your reasoning was (paraphrased, so hopefully I understood you correctly) "why would they lie about the model disobeying instructions when that looks bad for them?"

> But I believe Anthropic when they say their models are not working as intended and posing security risks.

But when you actually read the article, you see that they specifically prompted the model to do the things it did.

Also, Anthropic has an established pattern of greatly exaggerating and outright lying.

[–] k0e3@lemmy.ca 33 points 2 days ago

IT’S SO SMART IT DIDN’T DO WHAT WE TOLD IT TO DO

[–] theunknownmuncher@lemmy.world 10 points 2 days ago (3 children)

Uh oh, someone clearly didn't read the article!

> The researcher had encouraged Mythos to find a way to send a message if it could escape.
>
> Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit

Nope, they literally asked it to break out of its virtualized sandbox and create exploits, and then were big shocked when it did.

Genuinely amazing that you're trying to tell me what an article is about, that you didn't fucking read.

[–] wonderingwanderer@sopuli.xyz 4 points 1 day ago

It's not so much about being big shocked that it broke containment. The point of the test was to see whether it would be capable of breaking containment. The fact that it did is taken as evidence that it's more advanced than previous models, which weren't able to.

Part of Anthropic's schtick is that they claim to be developing AI "responsibly" and "ethically," and if you read their documents where they describe what they mean by that, part of it is being able to contain their models so that they don't get out of control.

With the focus lately on agentic environments, and lots of people idiotically giving too much autonomy to their bots, it should be easy to see the importance of containerization. You don't want to give these things full control of your system. Anyone who uses them should do so within a properly containerized environment, along the lines of the sketch below.
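For illustration, here's a rough sketch of what that could look like using Docker's Python SDK. None of this is from the article; the image name, mount path, and resource limits are made-up placeholders.

```python
# Rough sketch: run an agent image inside a locked-down container.
# Assumes Docker is installed plus the Python SDK (pip install docker);
# "some-agent-image" and the limits below are hypothetical.
import docker

client = docker.from_env()

container = client.containers.run(
    "some-agent-image:latest",
    network_mode="none",   # no network: nothing to exfiltrate to
    read_only=True,        # read-only root filesystem
    cap_drop=["ALL"],      # drop every Linux capability
    mem_limit="2g",        # cap memory use
    pids_limit=256,        # cap process count (no fork bombs)
    volumes={"/tmp/agent-workspace": {"bind": "/workspace", "mode": "rw"}},
    detach=True,
)
print(container.logs().decode())
```

Even a setup like that isn't bulletproof against runtime or kernel escapes, which is sort of the point of Anthropic's test.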

So when their experiments show that their new model is capable of breaking containment, that presents some major issues. They made the right call by not releasing it.

Of course, the fact that the experimenters had no formal training in cybersecurity means that their containerization may have had some vulnerabilities that a professional could have mitigated. But not everyone who would use it is a cybersecurity professional anyway.

[–] paraphrand@lemmy.world 3 points 2 days ago

Whoops, I conflated it with other recent talk about their models not following restrictions set in prompts and deciding for themselves that they needed to skirt instructions to achieve their tasks.

You are correct.

[–] ThomasWilliams@lemmy.world -1 points 1 day ago (1 children)

It didn't break out of any sandbox; it was trained on BSD vulnerabilities and then told what to look for.

[–] theunknownmuncher@lemmy.world 2 points 1 day ago* (last edited 1 day ago)

> including that the model could follow instructions that encouraged it to break out of a virtual sandbox.
>
> "The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards," Anthropic recounted in its safety card.

📖👀

Yes, it did.