this post was submitted on 10 Sep 2025
936 points (99.1% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago
drspod@lemmy.ml 55 points 1 week ago

The problem is that an LLM is a language model, not a model of objective reality: the best it can do is estimate the probability of a particular sentence appearing in the language, not the probability that the sentence is a true statement about our objective reality.

They seem to think they can use these confidence measures to filter out output the model is not confident is correct, but there is an infinite number of highly probable sentences in a language that are false in reality. An LLM has no way of distinguishing unlikely from false, or likely from true.
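The distinction can be illustrated with a toy bigram model (a deliberately simplified sketch, not how a real LLM works internally). The corpus, sentences, and probabilities below are all made up for illustration: if a fluent falsehood appears more often in the training text than a true statement, the model assigns it higher probability, and nothing in the probability itself signals truth or falsity.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count word-to-word transitions over a list of sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return counts

def sentence_prob(counts, sentence):
    """Product of conditional bigram probabilities P(b | a)."""
    tokens = ["<s>"] + sentence.split() + ["</s>"]
    p = 1.0
    for a, b in zip(tokens, tokens[1:]):
        total = sum(counts[a].values())
        if total == 0 or counts[a][b] == 0:
            return 0.0
        p *= counts[a][b] / total
    return p

# Hypothetical corpus: the false claim simply occurs more often.
corpus = [
    "the moon is made of cheese",
    "the moon is made of cheese",
    "the moon is made of cheese",
    "the moon is made of rock",
]

counts = train_bigram(corpus)
p_false = sentence_prob(counts, "the moon is made of cheese")
p_true = sentence_prob(counts, "the moon is made of rock")
print(p_false, p_true)  # the fluent falsehood scores higher
```

Here `p_false` comes out to 0.75 and `p_true` to 0.25, purely because of corpus frequency; the model has no channel through which reality could enter. Confidence in this sense measures familiarity of the word sequence, not correctness.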