this post was submitted on 13 Nov 2025
460 points (97.3% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerrilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

[–] givesomefucks@lemmy.world 82 points 1 day ago (9 children)

I use DuckDuckGo, and my eyes were so used to glazing over the AI preview that I don't know when it changed, but it's now a Wikipedia summary, and it's actually useful.

Wikipedia is everything AI chatbots want to be, which is why the right hates it so much.

[–] Rhaedas@fedia.io 8 points 1 day ago (2 children)

An LLM with a curated source is a lot better than the other major ones, but it still has the problem of selecting based on probability rather than weighing evidence (unless it does that, which would be huge). And people are naturally gullible: they believe the first thing they read, especially if it's presented as if "someone" has validated it for them.

But the good part is that both DDG and Firefox made the AI obvious and easy to disable.

[–] brucethemoose@lemmy.world 3 points 16 hours ago* (last edited 16 hours ago) (1 children)

> selectivity based on probability and not weighing on evidence

I don’t follow this, but an LLM’s whole “world” is basically the prompt it’s fed. It can “weigh” that, but then how does one choose what’s in the prompt?

What they need is cheaper long context (being worked on, especially outside the Tech Bro circles) and, above all, much more sophisticated databases to hook up to. Basically they need what WolframAlpha was trying to build a decade ago: a structured, searchable repository of human knowledge they can query that’s better than random Google search results.

It’s honestly insane this isn’t the first concern of all the AI Bros. The focus is on training data and “AGI” scams when they should basically be building a RAG system to end all RAG systems if they want something functional.
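The RAG idea above can be sketched in a few lines. This is a toy, with an invented word-overlap retriever standing in for a real search index, and no actual model call; the corpus, function names, and query are all made up for illustration:

```python
# Minimal retrieval-augmented generation (RAG) sketch in plain Python.
# `retrieve` is a toy stand-in for a real search index; a real system
# would pass the assembled prompt to an LLM instead of printing it.

def retrieve(query, corpus, k=2):
    """Rank documents by naive word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return scored[:k]

def build_prompt(query, corpus):
    """The model's whole 'world' is this prompt: question plus retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Wikipedia is a free online encyclopedia maintained by volunteers.",
    "DuckDuckGo is a search engine focused on privacy.",
    "Bananas are rich in potassium.",
]

print(build_prompt("what is wikipedia", corpus))
```

The point of the sketch: the model never "decides" what evidence to weigh; whoever builds the retriever decides what lands in the prompt, which is why the quality of that structured repository matters so much.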

[–] Rhaedas@fedia.io 2 points 16 hours ago

> > selectivity based on probability and not weighing on evidence
>
> I don’t follow this, but an LLM’s whole “world” is basically the prompt it’s fed. It can “weigh” that, but then how does one choose what’s in the prompt?

A common analogy is an autocompleter with a very big database. LLMs are more complex than that, but that's the idea: when the model looks at the prompt and the context of the conversation, it chooses the best match of words to fulfill that prompt. My point was that the best word or phrase completion isn't necessarily the best answer, or even a right one. It's just the most probable given the huge training data. If that data is crap, the answers are crap.

Having Wikipedia as a source, and presumably the only source, is better than pulling from many other places on the internet, but it doesn't guarantee the answers that pop up will always be correct, or the best of the available answers. They're just the most likely based on the data.
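The "most probable is not the same as correct" point can be shown with a toy next-word picker. The probabilities here are invented for illustration; they mimic how an overrepresented myth can dominate a model's training data:

```python
# Toy greedy decoder: it emits whatever continuation is most *probable*
# in its (here, hand-written) statistics, not what is *correct*.
# All probabilities are invented for illustration.

next_word_probs = {
    "the great wall of china is visible from": {
        "space": 0.55,    # popular myth, overrepresented in text
        "orbit": 0.25,
        "nowhere": 0.20,  # closer to the truth, but rarer phrasing
    }
}

def autocomplete(prompt):
    """Greedy decoding: always pick the highest-probability continuation."""
    probs = next_word_probs[prompt]
    return max(probs, key=probs.get)

print(autocomplete("the great wall of china is visible from"))  # prints "space"
```

Even with a perfectly clean corpus the mechanism is the same: the output is a frequency argument, not an evidence argument.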

It would be different if it were AGI, because by definition it would be able to find the best data based on the data itself, not text probability, and could look at anything connected, including the discussion behind the article, and make a judgement on how solid the information is for the prompt in question. We don't have that yet. Maybe we will, maybe we won't, for any number of reasons.
