this post was submitted on 18 Apr 2026
125 points (85.7% liked)

[–] DudeImMacGyver@kbin.earth -1 points 4 weeks ago (1 children)

The irony of your response is strong. Also, you DID say that:

I view AGI as inevitable because it's the natural end goal of us incrementally improving our AI systems over a long enough period of time. As with all human-created technology, we will keep improving it. It doesn't matter how slow the process is - as long as we keep heading in that direction, we will eventually reach the destination. The only things that could stop us, as far as I can see, are either destroying ourselves some other way before we get there or substrate independence - meaning general intelligence simply cannot be created without our biological wetware. I however see no reason to assume that, since human brains are made of matter just like computers are and I don't think there's anything supernatural about intelligence.

It sounds like you've bought into techbro bullshit, but don't realize it.

[–] Iconoclast@feddit.uk 0 points 4 weeks ago (1 children)

Feel free to help me realize it then, because whatever irony or conflict you're seeing there, I don't see.

[–] DudeImMacGyver@kbin.earth -1 points 4 weeks ago (1 children)

Yes, I can see that.

The "AI" that we have now is not actually AI, that's just a marketing term. Actual experts (read: not people like Sam Altman) point out that LLMs are severely flawed and will always return bad information. This problem is baked into the way these models function. Making what we've got into actual AI like you said isn't going to happen, full stop.

Don't believe the horseshit you hear from people trying to sell something.

[–] Iconoclast@feddit.uk 3 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

The “AI” that we have now is not actually AI

This is simply false. We've had AI since 1956.

AI isn’t any one thing. It’s an extremely broad term. It simply refers to any system designed to perform a cognitive task that would normally require human intelligence. The chess opponent on an old Atari console is an AI. It’s an intelligent system - but only narrowly so. Narrow AI can have superhuman cognitive abilities, but only within the specific task it was built for, like playing chess.
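The "chess opponent as narrow AI" point can be made concrete with a toy sketch (mine, not from the thread): a complete game-tree search for a simple Nim variant where players alternately take 1-3 sticks and taking the last stick wins. It's the same brute-force evaluation an old console chess engine performs, just over a far smaller game - superhuman within its one task, useless outside it.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(sticks: int) -> bool:
    """Return True if the player to move can force a win.

    Exhaustive game-tree search: the player to move wins iff some
    legal move (taking 1, 2, or 3 sticks) leaves the opponent in a
    losing position. Taking the last stick wins the game.
    """
    if sticks == 0:
        return False  # no sticks left: the previous player already won
    return any(not can_win(sticks - take) for take in (1, 2, 3) if take <= sticks)

# This "AI" plays its game perfectly - the losing positions turn out
# to be exactly the multiples of 4 - but it can do nothing else.
```

The same search idea, scaled up with pruning and heuristics, is what drove those classic chess opponents.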

A large language model like ChatGPT is also a narrow AI. It’s exceptionally good at what it was designed to do: generate natural-sounding language. It often gets things right - not because it knows anything, but because its training data contains a lot of correct information. That accuracy is an emergent byproduct of how it works, not its intended function.
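The "accuracy as an emergent byproduct" point can be illustrated with a deliberately tiny stand-in for an LLM (a hypothetical example of mine, not anything the commenters wrote): a word-bigram model. It produces fluent-looking continuations, and those continuations are "correct" only to the extent that correct statements dominated its training text - nothing in the mechanism checks truth.

```python
import random
from collections import defaultdict

# A toy training corpus; correct statements happen to dominate it.
corpus = "the sky is blue . the grass is green . the sky is blue .".split()

# Count word-bigram transitions: a crude stand-in for what an LLM
# learns at vastly larger scale.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def continue_text(prompt_word, length=3, rng=random.Random(0)):
    """Extend a prompt by sampling observed followers of the last word."""
    out = [prompt_word]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Prompted with "sky", it continues "sky is ..." and usually lands on "blue" simply because "blue" followed "is" more often in training. Had the corpus claimed the sky was green, the model would say so just as fluently.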

What people expect from it, though, isn’t narrow intelligence - it’s general intelligence: the ability to apply cognitive ability across a wide range of domains, like a human can. That’s something LLMs simply can’t do - at least not yet. Artificial General Intelligence is the end goal for many AI companies, but AGI and LLMs are not the same thing, even though both fall under the umbrella of AI.

Making what we’ve got into actual AI like you said isn’t going to happen, full stop.

I've never claimed LLMs will lead to AGI, as I stated in the comment you quoted above.