[–] magnetosphere@fedia.io 13 points 15 hours ago (3 children)

Until they solve the AI hallucination problem, I’ll never be able to trust it.

[–] frongt@lemmy.zip 2 points 14 hours ago (2 children)

It's a feature of text prediction, not a bug. They could fix it, but that would mean drastically increasing the amount of context attached to each piece of information (no idea what it's called).
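
A toy sketch of what that looks like under the hood (made-up vocabulary and probabilities, nothing from a real model):

```python
import random

# Toy next-token predictor: it picks a continuation from a probability
# distribution over plausible text. Note there is no "is this true?"
# check anywhere, only "how likely is this continuation?".
continuations = {
    "The capital of Australia is": {
        "Canberra": 0.55,   # correct, and merely the most likely option
        "Sydney": 0.35,     # wrong, but plausible-sounding
        "Melbourne": 0.10,  # also wrong, also plausible
    }
}

def predict(prompt: str) -> str:
    probs = continuations[prompt]
    # Sampling by plausibility means a wrong-but-fluent answer comes
    # out some fraction of the time, by construction.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print(predict("The capital of Australia is"))
```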

[–] Truscape@lemmy.blahaj.zone 4 points 14 hours ago* (last edited 14 hours ago)

I believe it's just complexity and token/compute usage.

You end up chasing diminishing returns as well (100% or even 95% accuracy is just not possible for certain areas of study, especially for niche topics).

It's also 100% unfixable; it's baked into the premise of the technology. I can enjoy an upscaling algorithm that makes my retro games look more detailed at the cost of the odd artifact, but I sure as shit am not taking that risk for information gathering and general study.

[–] magnetosphere@fedia.io 1 points 14 hours ago

I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.

[–] ulterno@programming.dev 1 points 13 hours ago (1 children)

That doesn't seem like a solvable thingy.
People tend to make stuff up, too. The difference is that with people, the bluff gets revealed through non-verbal communication.

[–] magnetosphere@fedia.io -1 points 13 hours ago (1 children)

Yeah, but we’ve known that about people since forever. Computers are expected to be reliable.

If hallucinations aren’t a solvable problem, then either AI is impossible, or we’re going about it the wrong way.

[–] ulterno@programming.dev 1 points 11 hours ago

AI is very much possible; we are just thinking about it the wrong way.

We are expecting AI to combine the three bests of both worlds:

  • High I/O ability: we already have that from computers
  • Determinism and correctness: computers have always had a high level of determinism, but never correctness, because a computer does not know what is correct^[this boils down to the famous question once put to Charles Babbage: "If you put into the machine wrong figures, will the right answers come out?"] (see the sketch after this list)
  • Intelligence and thought: intelligence is a perception. AI will always have a lower depth of thought than us as long as it is dependent upon us
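
A trivial sketch of that second bullet in code (made-up numbers):

```python
def average(values):
    # Perfectly deterministic: the same input always produces the same output.
    return sum(values) / len(values)

# But the machine has no notion of "correct". Enter a typo (1000 where
# 100 was meant) and it returns a precise, repeatable, wrong answer:
# garbage in, garbage out.
print(average([98, 102, 1000]))  # 400.0
```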

So we only get one best of the computer world. In exchange for some of the human world, we have to deal with one worst of the computer world: we lose determinism, because we rely on the model operating at a higher level of fuzziness.

Of course, I don't mean "determinism" in its exact and full sense. The LLM is still built on top of a computer, so for the same internal saved state and the same external input (including any randomising functions that might be used), the output will still be the same. But you can't get the kind of logical determinism you expect from normal computer operations.
A dumbed-down example to get my thoughts across: you can use any of a + b, ADD(A,B), or SUM(A:B) and still get the same result.
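
The same idea in Python rather than spreadsheet notation (my own illustration, not a claim about any particular model):

```python
import operator

a, b = 2, 3

# Normal computing: logically equivalent spellings of "add" are
# guaranteed to agree, every time.
assert a + b == operator.add(a, b) == sum([a, b]) == 5

# An LLM gives no such guarantee. Rephrasings of the same question
# ("what is 2 plus 3?" vs "add two and three") are matched by learned
# statistics, not by logic, so the outputs need not agree.
```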

[–] jbloggs777@discuss.tchncs.de -5 points 14 hours ago

Nobody says to blindly trust it...