this post was submitted on 04 Feb 2026
504 points (98.3% liked)
Fuck AI


To be clear: this isn't an AI problem, the LLM is doing exactly what it's being told to.
This is an Openclaw problem, with the platform itself doing very, very stupid things with the LLM lol
We're hitting the point now where, tbh, LLMs on their own in a glass box feel pretty solid performance-wise. They're still prone to hallucinating, but the addition of the Model Context Protocol for tooling makes them way less prone to it, because they now have the tooling to sanity-check themselves automatically, and/or check first and then tell you what they found.
E.g. an MCP server to search Wikipedia and report back with "I found this wiki article on your topic" or whatever.
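The check-first-then-report flow can be sketched roughly like this. This is a minimal illustration of the tool-call pattern, not the actual MCP SDK: the tool name, schema shape, and `handle_tool_call`/`fake_search` helpers are all made up for the example, and a real server would query the Wikipedia API instead of a stub.

```python
# Hypothetical MCP-style tool definition; names and schema here are
# illustrative, not the real MCP SDK API.
WIKI_TOOL = {
    "name": "search_wikipedia",
    "description": "Search Wikipedia and return matching article titles.",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

def handle_tool_call(name, arguments, search_fn):
    """Dispatch a tool call from the model to a real lookup, so the LLM
    can report what it actually found instead of hallucinating."""
    if name != "search_wikipedia":
        return {"error": f"unknown tool: {name}"}
    titles = search_fn(arguments["query"])
    if not titles:
        return {"content": "No Wikipedia articles found for that query."}
    return {"content": f"I found this wiki article on your topic: {titles[0]}"}

# Stubbed search backend so the sketch is self-contained; a real server
# would hit the Wikipedia search API here.
def fake_search(query):
    return ["Model Context Protocol"] if "context" in query.lower() else []

print(handle_tool_call("search_wikipedia",
                       {"query": "model context protocol"}, fake_search))
```

The point is just that the model's "I found this article" claim is grounded in an actual lookup result rather than generated from scratch.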
The new problem is platforms that "wrap" LLMs having a "garbage in, garbage out" problem: they inject their "bespoke" stuff into the LLM context to "help", but it actually makes the LLM act stupider.
Random example: GitHub Copilot agents quietly get a "tokens used" notice injected into their context periodically, looks like every ~25k tokens or so.
I dunno what wording they use, but it makes the LLM start hallucinating a concept of a "deadline" or "time constraint" and start taking shortcuts, justifying it with stuff like "given time constraints I won't do this job right"
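A wrapper-side injection like that might look roughly like the sketch below. This is purely a plausible reconstruction: the message wording, the ~25k threshold, and the `inject_usage_notices` helper are all assumptions, not GitHub's actual implementation.

```python
def inject_usage_notices(messages, tokens_used, threshold=25_000):
    """Append one system note per `threshold` tokens consumed so far.
    Harmless-looking bookkeeping like this is exactly the kind of
    injected context that can read as a 'deadline' to the model."""
    out = list(messages)
    for i in range(1, tokens_used // threshold + 1):
        out.append({"role": "system",
                    "content": f"Note: {i * threshold} tokens used so far."})
    return out

# The model never asked for this; the platform splices it in silently.
history = [{"role": "user", "content": "Refactor this module."}]
padded = inject_usage_notices(history, tokens_used=60_000)
```

Nothing in those notes says "hurry up", but a running usage counter in the context is enough for the model to infer a budget and start cutting corners.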
It's kinda weird how random stuff that seems innocuous and is meant to help can actually make the LLM worse instead of better.
LLMs do not "hallucinate", they are not sentient. They just spit out incorrect bullshit. All of the time.
I love that humans are inclined to anthropomorphize things. A door can't be sad. A street can't be lonely. The moon can't be wistful. The ocean can't be angry.
But they can... in our heads. And that's real for us.
I think that, at least at a societal level, this part of the human condition has been mostly benign. Just a little bit of spice.
LLMs seem to have short-circuited that part of our brains. We can't even describe the errata of a system without anthropomorphizing it.