this post was submitted on 31 Oct 2025
129 points (98.5% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
top 32 comments
[–] sexy_peach@feddit.org 31 points 2 weeks ago (4 children)

Someone said that calling their misinformation "hallucinations" is actually genius, because everything they say is a hallucination, even the things we read and think are correct. The whole thing hallucinates away, and then we go and say: ok, some of this makes a lot of sense, but the rest...

So basically that's why it will never be 100% correct: all of the output is just more or less correct hallucination.

[–] roguetrick@lemmy.world 15 points 2 weeks ago

Human pattern recognition making the insane machine seem like it's making sense. Astrology but with venture capital backing. I like it.

[–] Pelicanen@sopuli.xyz 11 points 2 weeks ago

So basically that's why it will never be 100% correct: all of the output is just more or less correct hallucination.

This is completely correct: it does the exact same thing when it works as people expect as it does when it's "hallucinating".

[–] msage@programming.dev 5 points 2 weeks ago (2 children)
[–] sexy_peach@feddit.org 2 points 2 weeks ago

Thanks, that was a cool read. I pretty much fully agree.

[–] AnarchistArtificer@slrpnk.net 1 points 2 weeks ago

That's such a great article. It's been one of the most effective things to share with people who are intrigued by LLMs but are otherwise sensible.

[–] technocrit@lemmy.dbzer0.com 4 points 2 weeks ago

The problem with "hallucinations" is that computers don't hallucinate. It's just more anthropomorphic grifter hype. So, while it sounds like a criticism of "AI", it's just reinforcing false narratives.

[–] cronenthal@discuss.tchncs.de 18 points 2 weeks ago (1 children)

Something that should have been clear for a while now. It won't get better; it can't be solved. LLMs are quite limited in real-life applications, and the financial bubble around them is insane.

[–] brucethemoose@lemmy.world 4 points 2 weeks ago (1 children)

Well, there are some theoretical improvements laid out in papers. Not for hallucinations or the tech-bro-ish AGI dreams, but for adaptation, functional use, things like that.

…But the incredible thing is that the AI houses with the money seem to be ignoring them.

American firms seem to only pay attention to in-house innovations, like they have egos the size of the moon. And I’m only speaking of the ones not peddling the “scale transformers up infinitely” garbage.

Chinese LLMs tend to be open weights and more “functionally” oriented, which is great. But (with a few exceptions) they’re still pretty conservative with architectural experimentation, and increasingly falling into traps of following/copying others now.

Europe started out strong with Mistral (and the first good MoE!) and some other startups/initiatives, yet seems to have just… gone out to lunch? While still taking money.

And countries like South Korea or Saudi Arabia are still operating at a pretty small scale.


What I’m saying is you are right, but it’s largely from an incredible amount of footgunning all the firms are doing. Otherwise models can be quite functional tools in many fields.

[–] technocrit@lemmy.dbzer0.com 1 points 2 weeks ago* (last edited 2 weeks ago)

The point of "AI" is not making useful, functional software. Those technologies have existed for a long time and hopefully will continue to be developed by reasonable people.

The new "AI" is about creating useful, functional rubes to take their money. It's obvious just from the phony name "AI". If these grifters are shooting themselves in the foot, it doesn't seem to stop them from walking to the bank.

[–] ZDL@lazysoci.al 12 points 2 weeks ago

"LLMs Will Always Hallucinate"

That's literally all they do. EVERYTHING that an LLMbecile outputs is hallucinated. It's just that sometimes the hallucinations match reality and sometimes they don't.

[–] WatDabney@sopuli.xyz 12 points 2 weeks ago* (last edited 2 weeks ago)

I'd say that calling what they do "hallucinating" is still falling prey to the most fundamental ongoing misperceptions/misrepresentations of them.

They cannot actually "hallucinate," since they don't actually perceive the data that's poured into and out of them, much less possess any ability to interpret it either correctly or incorrectly.

They're just gigantic databases programmed with a variety of ways in which to collate, order and regurgitate portions of that data. They have no awareness of what it is that they're doing - they're just ordering data based on rules and statistical likelihoods, and that rather obviously means that they can and will end up following language paths that, while likely internally coherent, will have drifted away from reality. That that ends up resembling a "hallucination" is just happenstance, since it doesn't even arise from the same process as actual "hallucinations."

And broadly I grow increasingly confident that virtually all of the current (and coming - I think things are going to get much worse) problems with "AI" in and of itself (as distinct from the ways in which it's employed) are rooted in the fundamental misrepresentations, misinterpretations and misconceptions that are made about them, starting with the foundational one that they are or can be in any sense "intelligence."
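A minimal toy sketch of that "ordering data based on rules and statistical likelihoods" point, in Python. The phrases and probabilities below are invented for illustration, and no real model is a lookup table like this, but it shows the mechanism the comment describes: the procedure that produces a true sentence is exactly the same one that produces a false one.

```python
import random

# Toy "model": hand-made next-token probabilities. A real LLM learns billions
# of weights instead of this lookup table; the only point is that generation
# is the same statistical step whether the result happens to match reality.
NEXT_TOKEN_PROBS = {
    "the capital of": {"france": 0.5, "australia": 0.5},
    "france is": {"paris": 0.9, "lyon": 0.1},
    "australia is": {"sydney": 0.6, "canberra": 0.4},  # usually "wrong", same mechanism
}

def next_token(context: str) -> str:
    """Sample the next token purely from stored likelihoods."""
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

country = next_token("the capital of")
city = next_token(f"{country} is")
# Whether the sentence comes out true (paris, canberra) or false (lyon, sydney),
# nothing different happened inside the "model".
print(f"The capital of {country} is {city}.")
```

Run it a few times: the "correct answers" and the "hallucinations" come out of the identical code path.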

[–] BradleyUffner@lemmy.world 11 points 2 weeks ago

Every single output from an LLM is a hallucination. Some hallucinations are just more accurate than others.

[–] technocrit@lemmy.dbzer0.com 4 points 2 weeks ago* (last edited 2 weeks ago)

LLMs Will Always ~~Hallucinate~~ Make Errors, and We Don't Need to Live With This

These kinds of headlines are just more grifter hype... as usual.

[–] Arghblarg@lemmy.ca 3 points 2 weeks ago
[–] Denjin@feddit.uk 3 points 2 weeks ago (1 children)

Our analysis draws on computational theory and Gödel's First Incompleteness Theorem

You don't need any fancy analysis to understand the basic principles of LLMs as sorting and prediction algorithms that work on large datasets, or why they will always produce some incorrect results to queries.

They are not intelligent; they don't understand or interpret any of the information in their datasets, they just guess what you might want to see based on what appears similar.
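A toy illustration of that "guess what you might want to see based on what appears similar" idea: a bigram counter over a made-up ten-word corpus that only ever returns the most frequent follower. Everything here is invented for the example and is many orders of magnitude simpler than a real LLM, but the guessing is the same kind of frequency-driven guessing.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny made-up corpus, then "predict"
# by always returning the most frequent follower. No understanding involved,
# just frequencies scraped from the data.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict(word: str) -> str:
    """Return the statistically most common next word seen in the data."""
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (seen twice, vs "mat" and "fish" once each)
print(predict("cat"))  # "sat" ("ate" was equally likely; the tie is broken arbitrarily)
```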

[–] technocrit@lemmy.dbzer0.com 1 points 2 weeks ago* (last edited 2 weeks ago)

You don’t need any fancy analysis to understand the basic principals of LLMs

That's true. But the problem is that grifters are pushing the complete opposite of a basic understanding. It's nonstop disinformation and people are literally buying the hype. In these situations I think that formal analysis and proof can be necessary as a solid foundation for critiques of the grifting.

[–] aesthelete@lemmy.world 3 points 2 weeks ago* (last edited 2 weeks ago)

I think these things are maybe more useful (in very narrow cases) when people realize a little about how they work. They're probabilistic, so they're great at bullshit. They're also great at bullshit-adjacent things (quarterly goal documents for your employer, for instance). Knowing they're probabilistic makes me treat them differently. For instance, I wanted to know what questions I should ask about something, so I used Google AI insights or whatever from their search engine to generate a large list by simply resubmitting the same question over and over.

It's great at that kind of (often extremely useless) junk. You can get lots of subtle permutations out of it. It also might be interesting to just continually regen images using the same prompt over and over and look at the slight differences.

Instead of bullshit like Sora, it would be more interesting to me if they made something that just gave you the prompts in a feed and let you sit there and regenerate the junk by hitting a button. People could see the same post and a slightly different video every time. Or image. Still stupid? Yes. Still not worth slurping up our lakes for? Yes. But hey, at least it'd be a little more fun.

The prompts are also, for the most part, the only creative thing involved in this garbage.

Instead of the current knobgobblers that want to take a single permutation and try to make it more than worthless, or want to pretend these systems are anything close to right... or intelligent... or human or whatever, it'd be much better if we started thinking about them for what they are.
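A hedged sketch of the "they're probabilistic" point above, i.e. why resubmitting an identical prompt keeps yielding new permutations: candidate continuations get scores, the scores become probabilities (a softmax with a temperature), and the output is sampled rather than looked up. The candidates and scores below are made up for illustration.

```python
import math
import random

# Made-up scores for a handful of candidate continuations.
candidates = {
    "ask about the budget": 2.0,
    "ask about the timeline": 1.6,
    "ask about the risks": 1.2,
    "ask about the stakeholders": 0.8,
}

def sample(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Softmax the scores and sample one continuation."""
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights)[0]

# "Resubmitting the same question over and over": identical input, varied output.
for _ in range(5):
    print(sample(candidates))
```

Push the temperature toward zero and the same prompt converges on one answer; turn it up and the permutations get wilder, which is roughly the knob behind any regenerate button.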

[–] zeca@lemmy.ml 2 points 2 weeks ago
[–] Kolanaki@pawb.social 1 points 1 week ago* (last edited 1 week ago)

Weights. They need weighted response paths that actually make them try to be more accurate. It's gonna be difficult to figure out every single weighted decision in the tree when you want the algorithm to be able to do anything, but it sure as shit is not impossible.

These statements from AI companies just show they do not want AI to actually be what they claim it to be. They want it to be garbage.