To be clear: this isn't an AI problem; the LLM is doing exactly what it's being told to.
This is an Openclaw problem, with the platform itself doing very, very stupid things with the LLM lol
We're hitting the point now where, tbh, LLMs on their own in a glass box feel pretty solid performance-wise. They're still prone to hallucinating, but the addition of the Model Context Protocol for tooling makes them way less prone to it, cuz they now have the tooling to sanity-check themselves automatically, and/or check first and then tell you what they found.
E.g. an MCP server that searches Wikipedia and reports back with "I found this wiki article on your topic" or whatever.
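For the curious, here's roughly what that kind of tool looks like. This is just a sketch assuming the official MCP Python SDK (FastMCP) and Wikipedia's public search API; the server and tool names are made up for illustration:

```python
# Rough sketch of a "search Wikipedia and report back" MCP tool.
# Assumes the official MCP Python SDK (FastMCP); names are illustrative.
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("wiki-sanity-check")  # hypothetical server name

@mcp.tool()
def search_wikipedia(query: str, limit: int = 3) -> str:
    """Search Wikipedia and return the top matching titles and snippets."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "search",
            "srsearch": query,
            "srlimit": limit,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    hits = resp.json()["query"]["search"]
    if not hits:
        return f"No Wikipedia articles found for {query!r}."
    # The model gets real article titles handed back instead of guessing them.
    return "\n".join(f"- {h['title']}: {h['snippet']}" for h in hits)

if __name__ == "__main__":
    mcp.run()  # exposes the tool over MCP so the model can call it
```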
The new problem is platforms that "wrap" LLMs having a "garbage in, garbage out" problem: they inject their "bespoke" stuff into the LLM's context to "help", but it actually makes the LLM act stupider.
Random example: GitHub Copilot agents get a "tokens used" notice quietly/secretly injected into their context periodically, looks like every ~25k tokens or so.
I dunno what wording they use, but it makes the LLM start hallucinating a concept of a "deadline" or "time constraint" and start trying to take shortcuts, justifying it with stuff like "given time constraints I won't do this job right".
It's kinda weird how random stuff that seems innocuous and is meant to help can actually make the LLM worse instead of better.
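Purely to illustrate the pattern (this is not Copilot's actual code or wording, which isn't public; every name, number, and string here is invented), wrapper-side injection can be as simple as:

```python
# Hypothetical sketch of wrapper-side context injection -- NOT GitHub
# Copilot's actual mechanism, just an illustration of a platform quietly
# appending "helpful" context the user never wrote.
INJECT_EVERY = 25_000  # assumed interval, per the rough estimate above

def build_prompt(messages, tokens_used, last_injection_at=0):
    """Return the message list the model actually sees, plus bookkeeping."""
    prompt = list(messages)
    if tokens_used - last_injection_at >= INJECT_EVERY:
        # The user never sees or writes this line, but the model does --
        # and it can misread a budget notice as a deadline or time limit.
        prompt.append({
            "role": "system",
            "content": f"Note: {tokens_used} tokens used in this session.",
        })
        last_injection_at = tokens_used
    return prompt, last_injection_at
```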
LLMs do not "hallucinate"; they are not sentient. They just spit out incorrect bullshit. All of the time.
"Hallucinate" is the term used for the statistical phenomenon that arises in their output.
You know, you're entitled to your opinions, but you are most certainly not entitled to your facts.
The term "hallucinate" as used by people in AI research: https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
P.S. A layperson's objection to the term's usage in popular media is entirely warranted, as it's unnecessary anthropomorphizing. In general, this tendency to ascribe the language of human mental states to the outputs of statistical computer models is deeply problematic. See: https://firstmonday.org/ojs/index.php/fm/article/view/14366
Nothing you linked there contradicts what I said. It expands on it in more specific detail.
LLMs are heuristic statistical token prediction engines.
"Hallucination" is shorthand for a set of phenomena that arise from the way the statistical prediction works: the model strings together sentences that are grammatically correct and sound right, but an LLM has no concept of right/wrong, only of the statistically most likely next token given the prior context.
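To make that concrete, here's a toy sketch of next-token sampling with made-up logits and a made-up four-token vocabulary; note there's no truth check anywhere, just probabilities:

```python
# Toy illustration of "statistically likely next token given the prior":
# invented logits for a handful of candidate tokens, softmax, then sample.
import numpy as np

candidates = ["Paris", "Lyon", "Berlin", "pizza"]   # hypothetical vocabulary
logits = np.array([4.1, 1.3, 0.9, -2.0])            # made-up model scores

probs = np.exp(logits - logits.max())
probs /= probs.sum()                                 # softmax -> probabilities

rng = np.random.default_rng(0)
next_token = rng.choice(candidates, p=probs)
print(dict(zip(candidates, probs.round(3))), "->", next_token)
```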
That wiki article goes into much more depth on the "why", but it does support my statement.
I dunno what it is with people linking wiki articles that support the other person's statement and claiming it's the opposite.
... Learn to read, I guess? I dunno lol.