This is pretty bonkers. How TF are they fabricating answers?????
I'm no expert and don't care to become one, but I understand they generally trained these models on the entire public internet plus all the literature and research they could pirate.
So I would expect the outputs of those models to not be some kind of magical correct description of the world, but instead to be roughly "this passes for something a person on the internet might write."
It does the thing it was designed to do pretty well. But then the sociopathic grifters tried to sell it to the world as a magic super-intelligence that actually knows things. And of course many small-time wannabe grifters ate it up.
What LLMs do is get you a passable elaborate forum post replying to your question, written by an extremely confident internet rando. But it's done at computer speed and global scale!
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
Aka being wrong, but with a fancy name!
When Cletus is wrong because he mixed up a dog and a cat when describing their behavior, do we call it hallucinating? No.
Accepting concepts like "right" and "wrong" gives those tools way too much credit, basically following the AI narrative of the corporations behind them. Those terms can only be applied to the output, not to the tool itself.
To be precise:
LLMs can't be right or wrong because the way they work has no link to any reality - it's stochastics, not evaluation. I also don't like the term hallucination for the same reason. It's simply a too-high temperature setting jumping into a nearby but unrelated set of vectors.
Why this is an important distinction: arguing that an LLM is wrong is arguing on the terms of ChatGPT and the like. It turns into "oh, but we'll make them better!" and their marketing departments rejoice.
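(If you want to see what temperature actually does, here's a toy Python sketch with made-up logits for a four-token vocabulary; no real model involved. Raising the temperature flattens the next-token distribution, which is the "jumping into nearby but unrelated vectors" effect I mean.)

```python
import numpy as np

def next_token_probs(logits, temperature):
    """Temperature-scaled softmax over a toy next-token distribution."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                     # for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

# Made-up logits for a 4-token toy vocabulary; token 0 is the "related" one.
logits = [5.0, 2.0, 0.5, 0.1]

rng = np.random.default_rng(0)
for t in (0.2, 1.0, 2.0):
    probs = next_token_probs(logits, t)
    sample = rng.choice(len(probs), p=probs)   # generation picks a token by sampling
    print(f"T={t}: probs={probs.round(3)}, sampled token={sample}")
# Low T: nearly all the mass sits on the top token.
# High T: the distribution flattens and less related tokens start getting sampled.
```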
To take your calculator analogy: just as floating-point errors are inherent to those tools, wrong outputs are a core part of LLMs (a generic example is sketched below).
We can minimize that, but then they lose part of their function. This limitation is way stronger on LLMs than limiting a calculator to 16 digits after the decimal point though...
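(For anyone who hasn't run into the calculator analogy before, here's a quick, generic Python illustration of inherent floating-point error; nothing here is specific to any particular calculator or model.)

```python
# 0.1 and 0.2 have no exact binary representation, so their sum isn't exactly 0.3.
a = 0.1 + 0.2
print(a)                     # 0.30000000000000004
print(a == 0.3)              # False
print(round(a, 15))          # 0.3 -- rounding the output hides the error, it doesn't fix the arithmetic
print(abs(a - 0.3) < 1e-12)  # True -- the usual workaround is comparing within a tolerance
```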
What word would you propose to use instead?
Fabrication?
No comment on the rest of the thread, but I always thought "confabulation" was a more accurate word than hallucination for what LLMs tend to do.
The "signs and symptoms" part of the article really seems oddly familiar when compared to interacting with an LLM sometimes haha.
That's my problem: any single word humanizes the tool, in my opinion. Perhaps something like "stochastic debris" comes close, but there's no chance to counter the combined force of pop culture, corp speak, and humanity's talent for seeing humanoid behavior everywhere but in each other. :(
We do enjoy pareidolia, don't we?
Scam. We're being sold an autocomplete tool as a search engine.
Or fraud, since some of the same companies destroyed the functionality of their search engines in order to make the autocomplete look better in comparison.
If you have a lobby you get special names; look at the pharma industry, which coined the term "discontinuation syndrome" for a simple "withdrawal".
Because guessing a correct answer is more successful than saying nothing.