this post was submitted on 14 Mar 2026
167 points (97.7% liked)

Technology

[–] frongt@lemmy.zip 2 points 17 hours ago (2 children)

It's a feature of text prediction, not a bug. They could mitigate it, but that would mean drastically increasing how much context gets attached to each piece of information (I don't know the exact term for it).
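To illustrate the "feature, not a bug" point: a text predictor ranks continuations by how plausible they sound given its training data, not by whether they're true. A toy sketch (the model, prompt, and probabilities here are all made up for illustration, not from any real system):

```python
# Toy illustration of next-token prediction (not a real model).
# The predictor just picks the statistically most "plausible" continuation;
# nothing in the mechanism checks factual truth. Probabilities are invented.
next_token_probs = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # sounds most plausible in typical text
        "Canberra": 0.40,  # the factually correct answer
        "Melbourne": 0.05,
    }
}

def predict(prompt, probs=next_token_probs):
    """Return the highest-probability continuation -- plausibility, not truth."""
    dist = probs[prompt]
    return max(dist, key=dist.get)

print(predict("The capital of Australia is"))  # -> Sydney (fluent but wrong)
```

The correct answer is right there in the distribution; the mechanism just has no notion of "correct", only "likely".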

[–] Truscape@lemmy.blahaj.zone 4 points 17 hours ago* (last edited 17 hours ago)

I believe it's just complexity and token/compute usage.

You end up chasing diminishing returns as well (100% or even 95% accuracy is just not possible for certain areas of study, especially for niche topics).

It's also unfixable in principle, given the premise of the technology. I can enjoy an upscaling algorithm making my retro games look more detailed at the cost of the odd artifact, but I sure as shit am not taking that risk for information gathering and general study.

[–] magnetosphere@fedia.io 1 points 17 hours ago

I’m not knowledgeable enough to dispute your point. To the end user, though, the result is equally unreliable.