Rhaedas@fedia.io 2 points 15 hours ago

> selectivity based on probability and not weighing on evidence
>
> I don’t follow this, but an LLM’s whole “world” is basically the prompt it’s fed. It can “weigh” that, but then how does one choose what’s in the prompt?

Some describe LLMs with the analogy of an autocompleter backed by a very big database. LLMs are more complex than just that, but that's the idea: when the model looks at the prompt and the context of the conversation, it chooses the best-matching words to fulfill that prompt. My point was that the best word or phrase completion isn't necessarily the best answer, or even a correct one. It's just the most probable continuation given the huge training data. If that data is crap, the answers are crap.

Having Wikipedia as a source, and presumably the only source, is better than pulling from many other places on the internet, but that doesn't guarantee the answers that pop up will always be correct, or the best among the possible answers. They're just the most likely given the data.
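To make that concrete, here's a minimal sketch: a toy bigram "autocompleter" that always returns the most frequent continuation it saw in training. It's nothing like a real transformer internally (the training text and the `complete` helper are made up for illustration), but it shows the same failure mode: the output is whatever the data makes most likely, not whatever is true.

```python
# Toy sketch, not how real LLMs work internally, but the same failure mode:
# a "next word" picker that always chooses the most frequent continuation
# seen in its training text. Frequency, not truth, decides the answer.
from collections import Counter, defaultdict

# Hypothetical training data: two correct sentences, one wrong one.
training_text = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of france is lyon ."
).split()

# Count how often each word follows each preceding word (a bigram table).
bigrams = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    bigrams[prev][nxt] += 1

def complete(prompt_word: str) -> str:
    """Return the single most probable next word, ignoring correctness."""
    return bigrams[prompt_word].most_common(1)[0][0]

print(complete("is"))  # "paris", only because it outnumbers "lyon" 2:1
```

Flip the counts so "lyon" appears twice and it will confidently answer "lyon" instead: crap data in, crap answer out.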

It would be different if it were AGI, because by definition it would be able to find the best data based on the data itself, not on text probability. It could look at anything connected, including the discussion behind the article, and make a judgement about how solid the information is for the prompt in question. We don't have that yet. Maybe we will; maybe we won't, for any number of reasons.