this post was submitted on 14 May 2025

Fuck AI

...The results revealed that models such as OpenAI's GPT-4o and Anthropic's Claude were "distinctly pacifist," according to CSIS fellow Yasir Atalan. They opted for the use of force in fewer than 17% of scenarios. But three other models evaluated — Meta's Llama, Alibaba Cloud's Qwen2, and Google's Gemini — were far more aggressive, favoring escalation over de-escalation much more frequently — up to 45% of the time.

What's more, the outputs varied according to the country. For an imaginary diplomat from the U.S., U.K. or France, for example, these AI systems tended to recommend more aggressive — or escalatory — policy, while suggesting de-escalation as the best advice for Russia or China. It shows that "you cannot just use off-the-shelf models," Atalan says. "You need to assess their patterns and align them with your institutional approach."

Russ Berkoff, a retired U.S. Army Special Forces officer and an AI strategist at Johns Hopkins University, sees that variability as a product of human influence. "The people who write the software — their biases come with it," he says. "One algorithm might escalate; another might de-escalate. That's not about the AI. That's about who built it."...

Reddie also recognizes some of the technology's limitations. As long as diplomacy follows a familiar narrative, all might go well, he says, but "if you truly think that your geopolitical challenge is a black swan, AI tools are not going to be useful to you."

Jensen also recognizes many of those concerns, but believes they can be overcome. His fears are more prosaic. He sees two possible futures for the role of AI systems in American foreign policy.

"In one version of the State Department's future … we've loaded diplomatic cables and trained [AI] on diplomatic tasks," and the AI spits out useful information that can be used to resolve pressing diplomatic problems.

The other version, though, "looks like something out of Idiocracy," he says, referring to the 2006 film about a dystopian, low-IQ future. "Everyone has a digital assistant, but it's as useless as [Microsoft's] Clippy."

ZDL@ttrpg.network 1 point 16 hours ago

Every time I get wind of people using "AI" (in any of its forms thus far!) for doing any kind of serious foreign policy/military work, I get shadows of Whoops Apocalypse.