this post was submitted on 26 Jan 2026

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


Inspired by a recent talk from Richard Stallman.

From Slashdot:

Speaking about AI, Stallman warned that "nowadays, people often use the term artificial intelligence for things that aren't intelligent at all..." He makes a point of calling large language models "generators" because "They generate text and they don't understand really what that text means." (And they also make mistakes "without batting a virtual eyelash. So you can't trust anything that they generate.") Stallman says "Every time you call them AI, you are endorsing the claim that they are intelligent and they're not. So let's refuse to do that."

Sometimes I think that even though we are in a "FuckAI" community, we're still helping the "AI" companies by tacitly agreeing that their LLMs and image generators are in fact "AI" when they're not. It's similar to how the people saying "AI will destroy humanity" give an outsized aura to LLMs that they don't deserve.

Personally I like the term "generators" and will make an effort to use it, but I'm curious to hear everyone else's thoughts.

top 50 comments
[–] ZDL@lazysoci.al 0 points 6 days ago (1 children)

Speaking about AI, Stallman warned that "nowadays, people often use the term artificial intelligence for things that aren't intelligent at all..."

Ah... Something just dawned on me.

Didn't he … I think I'll just quote Wackypedia for this:

In 1971, near the end of his first year at Harvard, he became a programmer at the MIT Artificial Intelligence Laboratory …

In 1971 there was nothing that was intelligent at all in the world of computing. (And, as is normal, in 99.44% of humanity. This is a constant. 😉) It's almost as if the term "Artificial Intelligence" has never meant, you know, actual intelligence. And it goes on:

He pursued a doctorate in physics for one year, but left the program to focus on his programming at the MIT AI Laboratory.

[…] in September 1983. Since then, he had remained affiliated with MIT as an unpaid "visiting scientist" in the Computer Science and Artificial Intelligence Laboratory. Until "around 1998", he maintained an office at the Institute that doubled as his legal residence.

That's an awful lot of "not intelligent at all" places he's worked for or been affiliated with that use the term artificial intelligence...

[–] queermunist@lemmy.ml 2 points 6 days ago

Yeah "AI" was always a marketing term to drum up grant money and investor interest.

It was always and only meant to trick people into thinking that it meant "actual intelligence"

[–] trollercoaster@sh.itjust.works 25 points 1 week ago (1 children)

I like calling them "Bullshit Generators", because that's what they actually are.

[–] oxysis@lemmy.blahaj.zone 8 points 1 week ago

I like calling them regurgitative idiots, or artificial idiots, though really anything that makes fun of them works

[–] supersquirrel@sopuli.xyz 13 points 1 week ago* (last edited 1 week ago)

No

Exhibit A: people are beginning to describe empty, hollow, mass-produced corporate slop as "AI". It has become an adjective for worthless trash, and I love it.

[–] theunknownmuncher@lemmy.world 11 points 1 week ago (4 children)

AI has a very broad definition. Their products are AI.

[–] sustainable@feddit.org 5 points 1 week ago (3 children)

Well, according to the broad definition, a Google search or recommendation systems like those on Netflix or Instagram would also be considered AI. But we don't call them that; we call them by their proper names.
And language shouldn't be underestimated. It has a profound impact on our thinking, feeling, and actions. Many people associate AI with intelligence and "human thinking". That alone is enough to mislead, because the usefulness of the technology in a given application is no longer questioned; after all, it's "intelligent". If "LLM" were used instead, far fewer people would grant it intelligence, and one might be more inclined to ask whether a language model, for example in Excel, is truly useful. After all, that's exactly what it is: a model of our language. Nothing more, nothing less.

[–] theunknownmuncher@lemmy.world 5 points 1 week ago (4 children)

a Google search or recommendation systems like those on Netflix or Instagram would also be considered AI

Yes, correct.

[–] Kirk@startrek.website 2 points 1 week ago

Well said! Echoes my feelings exactly.

[–] BananaOnionJuice@lemmy.dbzer0.com 9 points 1 week ago (1 children)

Yes we say Fuck AI, but when we see it in the wild we call it slop, bot, clanker, or vibe coded, etc.

And starting to split hairs about naming is very geeky, but it doesn't help, as 90% of people have very little concept of what AI or LLMs are in the first place.

[–] Kirk@startrek.website 2 points 1 week ago

90% of people have very little concept of what AI or LLMs are in the first place.

Yeah I mean I agree, I think that's why there needs to be a term that describes them.

[–] Lumidaub@feddit.org 8 points 1 week ago (15 children)

It's just a word. It's more important to let people know what this is about, and any terms that may be more "accurate" won't do that.

[–] myedition8@lemmy.world 7 points 1 week ago (1 children)

This is why I call chatbots "LLMs" and refer to image and video generators as "slop generators". It isn't AI; software can't be intelligent.

[–] x1gma@lemmy.world 6 points 1 week ago (4 children)

I disagree with this post and with Stallman.

LLMs are AI. What people are actually confused about is what AI is and what the difference between AI and AGI is.

There is no universal definition of AI, but there are multiple definitions which are mostly very similar: AI is the ability of a software system to perform tasks that would typically involve human intelligence, like learning, problem solving, decision making, etc. Since the basic idea is that artificial intelligence imitates human intelligence, we would need a universal definition of human intelligence, which we don't have.

Since this definition is rather broad, there is an additional classification. ANI, artificial narrow intelligence or weak AI, is an intelligence inferior to human intelligence, which operates in a purely rule-based way, for specific, narrow use cases. This is what LLMs, self-driving cars, and assistants like Siri or Alexa fall into.

AGI, artificial general intelligence or strong AI, is an intelligence equal or comparable to human intelligence, which operates autonomously, based on its own perception and knowledge. It can transfer past knowledge to new situations and learn. It's a theoretical construct that we have not achieved yet, and no one knows when, or if, we ever will; unfortunately, it's also one of the first things people think of when AI is mentioned.

ASI, artificial super intelligence, is basically an AGI with an intelligence superior to a human's in every aspect. It's the apex predator of all AI: better, smarter, and faster at anything than a human could ever be. Even more theoretical.

Saying LLMs are not AI is plain wrong, and if our goal is a realistic, proper way of working with AI, we shouldn't be doing the same as the tech bros.

[–] hendrik@palaver.p3x.de 6 points 1 week ago (1 children)

I support Stallman's take. I think just saying "Fuck AI" is going to have almost zero effect on the world. I think we need to add nuance, reasoning, be accurate... Tell people WHY that is, so we can educate them. Or convince them to do something...

[–] queermunist@lemmy.ml 3 points 1 week ago* (last edited 1 week ago) (8 children)

There's literally nothing you could possibly do from an internet forum to have an effect on the world. This is for fun.

[–] moonshadow@slrpnk.net 4 points 1 week ago (4 children)

Brother if you ain't shaping discourse you're being shaped by it

[–] Kirk@startrek.website 2 points 1 week ago

Hah, I love this. Stealing it.

[–] WolfLink@sh.itjust.works 5 points 1 week ago

The term “Artificial Intelligence” has historically been used by computer scientists to refer to any “decision making” program of any complexity, even something extremely simple, like solving a maze by following the left wall.
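(For illustration, here's a minimal Python sketch of such a left-wall follower; the maze layout and helper names are made up, but it shows how little it takes for something to count as a "decision making" program in that classic sense.)

```python
# A "decision-making" program in the classic AI sense: escape a maze by
# keeping your left hand on the wall. No learning, no model, just a few rules.

MAZE = [
    "#########",
    "#S......#",
    "#.###.#.#",
    "#...#.#.#",
    "###.#.#E#",
    "#########",
]

# Directions in clockwise order: up, right, down, left.
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]

def find(ch):
    """Return the (row, col) of the first cell containing ch."""
    for r, row in enumerate(MAZE):
        for c, cell in enumerate(row):
            if cell == ch:
                return (r, c)
    raise ValueError(f"{ch!r} not found in maze")

def is_open(pos):
    """A cell is passable if it isn't a wall."""
    r, c = pos
    return MAZE[r][c] != "#"

def solve():
    pos, goal = find("S"), find("E")
    facing = 1  # start facing right (east)
    path = [pos]
    while pos != goal:
        # Left-hand rule: prefer turning left, then going straight,
        # then turning right, then turning around.
        for turn in (-1, 0, 1, 2):
            d = (facing + turn) % 4
            nxt = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
            if is_open(nxt):
                facing, pos = d, nxt
                path.append(pos)
                break
    return path

print(solve())  # prints the sequence of cells visited from S to E
```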

[–] ZDL@lazysoci.al 4 points 1 week ago

I like LLMbeciles myself.

[–] shittydwarf@piefed.social 4 points 1 week ago

Spicy autocorrect

[–] CodenameDarlen@lemmy.world 4 points 1 week ago (11 children)

It's the popular term. In the end, the meaning doesn't really matter as long as everybody agrees on what we're talking about.

Don't get too attached to the scientific meaning of things.

[–] hperrin@lemmy.ca 4 points 1 week ago (1 children)

I think it’s AI. The artificial part is key. There’s no real intelligence there, just like there’s no real grass in an artificial lawn.

[–] bridgeenjoyer@sh.itjust.works 4 points 1 week ago

I like orphan crushing machine best.

[–] aaaa@piefed.world 3 points 1 week ago* (last edited 1 week ago) (3 children)

The term "AI" has been used for decades to refer to a broad spectrum of things, often times including algorithms that had nothing to do with machine learning or inference.

Technically, what most of us have a problem with isn't "AI" as a whole, but just LLMs and how companies are trying to replace people with them. I agree that people should be specific, as there's a lot of practical application for machine learning and AI that has nothing to do with LLMs.

But you're not going to get anywhere by trying to change the words people use for these things. We saw a similar thing happen with "smart" home automation devices, and before that it was people complaining about "smartphones" not being actually "smart". But both of those terms are still in common use.

I don't think you'll convince anyone by trying to police the terminology for technical accuracy. The focus should be on the specific problems and harmful effects of how the technology is being used.

[–] Rhaedas@fedia.io 2 points 1 week ago

It changes the argument away from the objective of ethics and safety and towards the words being used. One can use the inaccurate wording while debating its characteristics and problems. It's far too late to control what marketing and public ignorance have set. I wasn't a fan of the "AI slop" term, as it's morphed into a general word for dismissing anything that's not liked or agreed with, nowhere near its original narrow meaning. But it's a word that is now used all the time, and that's how words are created and become authentic: by usage.

The issue of ethics is still important, even though the chance to fix the terminology is long past. We still have to have the discussion. The issue of safety in general for AI is something that has been shelved by both sides, and even though it's primarily an AGI topic, it still applies even to non-intelligent LLMs and other systems. If we don't focus on it, it's a dead end for us. It doesn't have to be Terminator-like to be bad for civilization; it doesn't even have to be aware. "Dumb" AI is maybe even worse for us, and yet it's been embraced as something more.

But if the argument is about what we call it and not what's actually happening, nothing will be solved. One can refer to it as AI in a discussion and also talk about its actual defining functions (LLM and so forth). It might even make the point stronger instead of deflecting to what it's called.

[–] pinball_wizard@lemmy.zip 3 points 1 week ago (1 children)

Standard disclaimer: I do not want to grow up to be like Stallman.

That said, every time I have thought that Stallman was too pedantic about terminology and the risks involved, I have been wrong, so far.

[–] Kirk@startrek.website 2 points 1 week ago

He's a good barometer to check in with and gauge how far we've strayed from a lot of the idealism of the 1980s. Someone has to keep the flame alive.

[–] Darkcoffee@sh.itjust.works 3 points 1 week ago (1 children)

"Slop Constructors" is what I call them. It's good to remember that calling them "AI" helps with the fake hype.

[–] Kirk@startrek.website 2 points 1 week ago

I like slop constructors lol

[–] AdamBomb@lemmy.sdf.org 2 points 1 week ago* (last edited 1 week ago) (3 children)

I say LLMs or GenAI but neither exactly rolls off the tongue

[–] hodgepodgin@lemmy.zip 2 points 1 week ago

Gartner refers to it as GenAI

[–] GreenBeanMachine@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

"AI" was around way before LLMs, and they were used for good stuff, like discovering new proteins and amino acids among many other specialized uses.

I would say there are different categories of AI and I disagree with the statement that LLMs are not AI.

All LLMs are AI, but not all AI are LLMs.

LLMs are trash.
