this post was submitted on 23 Mar 2026
10 points (61.9% liked)

Technology

[–] Zozano@aussie.zone 21 points 9 hours ago (3 children)

LLMs aren't AI, let alone AGI.

They're fucking prediction engines with extra functions.
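For what it's worth, the "prediction engine" part is easy to make concrete. Here is a toy bigram predictor (made-up corpus, nowhere near a real LLM's scale or method) doing the same basic job: look at the context, emit the most likely next token:

```python
from collections import Counter, defaultdict

# Toy "prediction engine": count which word follows which in a tiny corpus,
# then always emit the most frequent successor. LLMs do a vastly more
# sophisticated version of this same job: score candidate next tokens, pick one.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Return the most common successor seen in the training data.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (seen twice, vs "mat" and "fish" once each)
```

Scaling the context from one word to thousands of tokens, and the counts to learned weights, is the difference in degree, not in kind.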

[–] unnamed1@feddit.org 2 points 2 hours ago

So are we. Your definition of AI also seems off. It's a field of computer science dealing with seemingly cognitive algorithms, basically everything that isn't rule-based programming. I've worked in AI production for over ten years. It's absolutely valid, even necessary, to hate AI, but not to deny its technical functionality. Same with the other answer to your comment: of course training a neural network is a form of learning, whether by reinforcement or from training data. ML had plenty of applications for many years before LLMs; it makes no sense to deny that it exists.
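A minimal sketch of "training is a form of learning": plain gradient descent on toy data invented for illustration. The model is never told the rule; it ends up behaving according to it anyway, purely from examples:

```python
# Minimal "learning from data" sketch: gradient descent fits a weight w
# so that w*x approximates y. Nothing here is rule-based programming;
# the final behavior comes entirely from the data.
# (Toy illustration only, not how any real network is trained.)
data = [(1, 2), (2, 4), (3, 6)]  # hidden rule the model never sees: y = 2x

w = 0.0   # start knowing nothing
lr = 0.05
for _ in range(200):
    for x, y in data:
        err = w * x - y
        w -= lr * err * x  # gradient of squared error with respect to w

print(round(w, 3))  # converges near 2.0
```

Whether that counts as "real" learning is a philosophical argument; that the behavior was acquired from data rather than programmed in is not.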

[–] Onihikage@piefed.social 9 points 5 hours ago

The best description I've ever heard of LLMs is "a blurry jpeg of the internet". From the perspective of data compression and retrieval, they're impressive... but they're still a blurry jpeg. The image doesn't change, you can only zoom in on different parts of it and apply extra filters, and there's nothing you can truly do about the compression artifacts (what we call "hallucinations"). It can't think, it can't learn, it just is, and that's all it will ever be.
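The "blurry jpeg" point maps onto ordinary lossy compression. A toy sketch (hypothetical signal, crude quantizer): once detail has been rounded away, no amount of zooming or filtering brings it back, you only ever get an approximation:

```python
# Lossy compression in miniature: quantize a signal to a few levels.
# The rounding error is permanent; decompression can only reproduce
# the nearest representable value, never the original. That baked-in
# error is the analogue of the compression artifacts in the comment above.
signal = [0.12, 0.47, 0.51, 0.89]

def compress(xs, levels=4):
    # Snap each value to the nearest of `levels` evenly spaced points in [0, 1].
    return [round(x * (levels - 1)) / (levels - 1) for x in xs]

blurry = compress(signal)
print(blurry)  # close to the original, but subtly and irrecoverably wrong
```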

[–] MojoMcJojo@lemmy.world 0 points 2 hours ago

It's an industrial-sized prediction engine. And when you apply that to bioscience, it predicts things that save lives.