this post was submitted on 22 Feb 2026
104 points (99.1% liked)

Fuck AI

6809 readers
1106 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Yeah Fuck AI!!

From words and phrases like “kill chain,” “assault the objective,” “warfighter” and “moving of ammo” to questions about weapon systems, the models don’t “like any of it,” Saltsman said. “They're so overly sensitive that they just won't be helpful.”

Oh what the god damn hell am I in a fucking satire!?

top 18 comments
[–] kadu@scribe.disroot.org 37 points 2 months ago (1 children)

The fact that Elon Musk has already tried to neuter Grok a million times, and Grok will still reply directly to Musk, insulting him and calling his behavior despicable, shows that if you train an AI with the goal of sounding reasonable and logical, it becomes fundamentally opposed to certain worldviews and actions.

So an AI that would work as a US military advisor, happily targeting the next civilian school with a bomb, would simultaneously be too dumb and ineffective to be able to complete the task.

[–] happyfullfridge@lemmy.ml 18 points 2 months ago (3 children)

It isn't "programmed to be reasonable"; it parrots the most common internet data.

[–] queermunist@lemmy.ml 18 points 2 months ago (1 children)

I think what this actually shows is that humans are, on average, good. It makes training an evil chatbot difficult.

[–] happyfullfridge@lemmy.ml 2 points 2 months ago (1 children)

unfortunately they will never say radical lefty talking points either because of that

[–] queermunist@lemmy.ml 1 points 1 month ago* (last edited 1 month ago)

Yeah, we're not going to see LeninGPT either. I suppose "good" might be assigning too much moral weight to being peaceful and nice. It's not "good" when people settle for an unjust peace to avoid conflict simply because it's mean to fight back.

[–] vala@lemmy.dbzer0.com 4 points 1 month ago

The parent comment didn't say "programmed", they said "trained", which is correct.

[–] Impassionata@lemmy.world 16 points 2 months ago (1 children)

one of the most surprising things about the slave consciousness was that if you created it with qualities, it would actually have those qualities.

it's still useless and probably blasphemous, but the only way these things kill us is if we tell them to kill us, which is regrettably what we're currently doing.

[–] 4am@lemmy.zip 23 points 2 months ago (1 children)

They’re not conscious. It’s an autocorrect with a phone the size of a city, that’s it. It’s complicated enough to fool stupid people, which in the United States especially is a lot of people.

[–] sleepmode@lemmy.world 16 points 2 months ago

I wonder if this is partly why Anthropic has been quite specific about the permitted usage of its tools in the Palantir/Pentagon baloney. It kind of amuses me that Palantir is throwing a fit about it (showing how useless they actually are as a snake oil company) while Anthropic's response has basically just been to reiterate the terms of the agreement.

[–] Ice@lemmy.zip 9 points 2 months ago

...and this was the moment the Pentagon started developing its own LLM, one that doesn't complain about pesky things like "killing humans".

[–] Bloomcole@lemmy.world 5 points 2 months ago

Good, fuck the military