this post was submitted on 01 May 2026
207 points (93.3% liked)

Technology

84274 readers
3441 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] davidagain@lemmy.world 2 points 4 hours ago (2 children)

How come it's inaccurate about 40% of the time when I know the answer then? It's a bullshit factory. A chatbot that's fundamentally designed to sound like a person and be able to respond to any prompt. But truth isn't any part of the fundamental architecture of an LLM.

[–] SlimePirate@lemmy.dbzer0.com 1 points 2 hours ago (1 children)

It does lie and hallucinate a lot, especially when the question carries biased context (the bullshit part). The (biased) knowledge is hiding somewhere in its weights; it's just sometimes quite hard to recover.

Your 40% depends a lot on how you ask the questions and the field of these questions. Humanity's Last Exam is a more objective benchmark for measuring the breadth of LLM knowledge.

[–] davidagain@lemmy.world 1 points 52 minutes ago

Your 40% depends a lot on how you ask the questions and the field of these questions.

Dude, they fail that exam with even worse error rates than I see!

When you can verify it, it's OFTEN and REGULARLY wrong. It's stupid to trust it for anything you can't personally verify.

The designed purpose of LLMs is to respond to human interaction, not to be correct. They are the showoff who pretends he can answer every question. They are the confident drunkard at the bar who will tell you anything that pops into their head. Intelligent, knowledgeable people say "I don't know" when they don't know. LLMs don't do that. Ever. Trouble is, they don't "know" anything. They're a chatbot from the bottom up. Chatbot through and through. It's their fundamental nature.

Yes there was knowledge and deep understanding in their training data. Also, I ate chicken curry for tea. However, I am not a chicken, I do not cluck, I haven't started eating worms, I cannot produce any chicken, and my poop is not chicken either. My poop smells faintly of curry. So it is with LLMs and the knowledge and understanding in their training data.

[–] NottaLottaOcelot@lemmy.ca 2 points 3 hours ago

Bullshit factory is very apt. I was using it for an open-book exam and it gave answers entirely skewed to the way the question was asked.

For example, if I asked “is X bacteria a pathogen in Y disease”, it would say yes, it was a very bad pathogen.

If I asked “what effects does X bacteria have in this body system”, it said it was a beneficial bacteria.
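That failure mode can at least be caught mechanically: ask the same underlying question in several framings and flag contradictions. A minimal sketch — the `ask` function below is a stub with invented canned answers standing in for a real model call, purely to illustrate the check:

```python
def ask(prompt: str) -> str:
    """Stand-in for an LLM call; canned answers mimic framing bias.
    (Hypothetical stub -- a real version would call a model API.)"""
    canned = {
        "Is X a pathogen in Y disease?": "Yes, X is a harmful pathogen.",
        "What effects does X have in this body system?": "X is a beneficial bacterium.",
    }
    return canned[prompt]

def consistent(framings: list[str]) -> bool:
    """Ask the same question in several framings and flag contradictions:
    if the answers disagree on harmful vs. beneficial, the output should
    not be trusted without checking the underlying sources."""
    answers = [ask(p).lower() for p in framings]
    harmful = any("pathogen" in a or "harmful" in a for a in answers)
    beneficial = any("beneficial" in a for a in answers)
    return not (harmful and beneficial)

print(consistent([
    "Is X a pathogen in Y disease?",
    "What effects does X have in this body system?",
]))  # → False: the two framings contradict each other
```

A disagreement doesn't tell you which answer (if either) is right — only that at least one is wrong, which is exactly why you still have to read the studies.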

Never trust the AI summary, you have to fully read the studies.