[–] hobovision@mander.xyz 1 points 3 days ago

Humans will anthropomorphize damn near anything. We'll say shit like "hydrogen atoms want to be with oxygen so bad they get super excited and move around a lot when they get to bond". I don't think it's a bad thing to describe the language output of an LLM with terms we use for how people speak.

"Hallucination" on the other hand is not even close to describing the "incorrect" bullshit that comes out of LLMs as opposed to the "correct" bullshit. The source of using "hallucination" to describe the output of deep neural networks kind of started with these early image generators. Everything it output was a hallucination, but eventually these networks got so believable that sometimes they could output realistic, and even sometimes factually accurate, content. So the people who wanted these neural nets to be AI would start to only call the bad and unbelievable and false outputs as hallucinations. It's not just anthropomorphizing it, but implying that it actually does something like thinking and has a state of mind.