this post was submitted on 10 Sep 2025
      938 points (99.1% liked)
      Fuck AI
      you are viewing a single comment's thread
Hallucinations are investor / booster speak for errors.
Yeah, it's a pretty good hand-wavy term for a real issue.
It's a weird case. As the paper says, this is inherent to LLMs. They have no concept of true and false; they just produce probabilistic word streams. So is producing an untrue statement an error? Not really. Given its inputs (training data, model parameters, and the query), the output is correct. But it's also definitely not a "hallucination"; that's a disingenuous, bogus term.
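To make "probabilistic word stream" concrete, here's a minimal toy sketch (all prompts and probabilities are made up, not from any real model): generation is just repeated sampling from a distribution over next tokens, and nothing in the loop ever checks whether the resulting statement is true.

```python
import random

# Toy "language model": a lookup table of next-token probabilities.
# Every number and prompt here is invented purely for illustration.
NEXT_TOKEN = {
    "Paris is the capital of": {"France": 0.87, "Texas": 0.08, "Mars": 0.05},
    "The moon is made of":     {"rock": 0.55, "cheese": 0.30, "basalt": 0.15},
}

def complete(prompt: str) -> str:
    """Sample one continuation for the prompt."""
    probs = NEXT_TOKEN[prompt]
    tokens, weights = zip(*probs.items())
    # random.choices samples according to the weights; a low-probability
    # (and possibly false) continuation is still a perfectly valid outcome,
    # because truth is never part of the computation.
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    for prompt in NEXT_TOKEN:
        print(prompt, complete(prompt))
```

Run it a few times and you'll occasionally get "The moon is made of cheese". From the sampler's point of view that isn't a malfunction; it's just a lower-probability draw.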
The problem, however, is that we pretend these probabilistic language approaches are somehow a general fit for the problems they're put in place to solve.
If the system (regardless of the underlying architecture and technical components) is intended to produce a correct result, and instead produces something that is absurdly incorrect, that is an error.
Our knowledge of how the system works, or of its inherent design flaws, does nothing to alter that basic definition, in my opinion.