this post was submitted on 30 Mar 2026
Lobste.rs
X - Doubt
Some of the most recent evaluations of models, including the paid ones, show hallucination/inaccuracy rates of 60-80%.
That doesn’t get solved overnight, especially while the source of that rate stays unchanged in every model: giving no answer is “punished” just as harshly as giving a wrong one. That incentive persists because of fundamental trust-based biases that all humans have, and because models need to cater to those biases to be accepted as tools.
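The incentive problem above can be sketched with a toy expected-score calculation. This is an illustration, not any published benchmark's actual scoring: assume a grader awards 1 point for a correct answer and 0 points for both a wrong answer and an abstention ("I don't know"). Under that scheme, guessing always has an expected score at least as high as abstaining, so a model optimized against it learns to guess.

```python
# Toy grading scheme: 1 point if correct, 0 points for a wrong answer
# OR for abstaining. Abstention is "punished" exactly like being wrong.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected grade on one question, given the model's probability
    of being right if it guesses. Abstaining scores 0 regardless."""
    return 0.0 if abstain else p_correct

# Even at 10% confidence, guessing beats saying "I don't know":
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0

# A hypothetical alternative scheme that penalizes wrong answers (-1)
# but not abstentions flips the incentive: now it only pays to guess
# when the model is more than 50% sure.
def expected_score_penalized(p_correct: float, abstain: bool) -> float:
    return 0.0 if abstain else p_correct * 1.0 + (1.0 - p_correct) * -1.0

print(expected_score_penalized(0.10, abstain=False))  # negative
print(expected_score_penalized(0.10, abstain=True))   # 0.0
```

As long as the first scheme is what training and benchmarks reward, confident wrong answers are the rational output.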