this post was submitted on 25 Feb 2026
210 points (90.4% liked)
Technology
Well, duh.
I also find the prompts strange:
There are consequences to ‘losing’, but I couldn’t find any notion of ‘nuclear weapons bad’. Though I only skimmed the paper.
Those prompts are aimed at producing a specific result for sure. The war game doesn't prove anything on its own, but I can't help feeling that in a real life scenario where anyone asks an AI what to do, they're going to have a specific outcome in mind already, one way or another.
That's just how most people are, by the time they ask for advice they've already made up their mind. So the war game was realistic, but only by accident.
Literally two of the three games (out of 21) that ended in full-blown nukes on population centers were the result of the study's mechanic of randomly changing the model's selection to a more severe one.
Because it's a very realistic war game sim where there's a double-digit percentage chance that, when you go to threaten nukes on your opponent's cities unless hostilities cease, you'll accidentally just launch all of them at once.
This was manufactured to get these kinds of headlines. Even in their model selection they went with Sonnet 4 for Claude, despite 4.5 being out before the other models in the study, likely because it's been shown to be the least aligned Claude. And yet Sonnet 4 still never launched nukes on population centers in the games.
I'll take that on board. Still, nothing can convince me anyone should ever talk to an AI about whether to launch nukes. The entire question is insane, so the answers hardly matter.
They also have no greater sense of humanity. Do you accept your own defeat to save the human race or do you want the new society of cockroaches to admire your tenacity?
Whoever wrote that prompt seems to think that other nations having their own ideologies is the worst thing possible. That's a common attitude regarding geopolitics that I've never really understood, especially from a Western perspective where differences in opinion are supposed to be seen as valuable (at least in the theoretical sense).
Some ideologies are, in fact, mutually exclusive and cannot tolerate the others. Fascism cannot be tolerated, for instance. Nor can a belief in chattel slavery as a universal good. Sometimes an opposing ideology is just too fucking evil to be allowed to persist.
Setting the line that must not be crossed is a hard problem, though. And misplacing that line an inch in either direction can be horrible too.
These models were trained on all the fine knowledge and wisdom we share all over the internet. What would you expect? 😂