this post was submitted on 23 Mar 2026
7 points (62.1% liked)

Technology

top 16 comments
[–] Zozano@aussie.zone 5 points 1 hour ago

LLMs aren't AI, let alone AGI.

They're fucking prediction engines with extra functions.

[–] mrmaplebar@fedia.io 9 points 2 hours ago

I think you're a bullshitting con artist.

[–] GottaHaveFaith@fedia.io 1 point 59 minutes ago

I just dropped an AGI down the toilet AMA

[–] Almacca@aussie.zone 8 points 3 hours ago

Geez. You can almost smell the desperation in this guy.

[–] Dindonmasker@sh.itjust.works 3 points 2 hours ago

Guys i think i just found AGI in my gramp's old stuff.

[–] Peruvian_Skies@sh.itjust.works 28 points 6 hours ago

Sure you do. It's not at all a transparent attempt to prolong the bubble.

[–] Technus@lemmy.zip 21 points 6 hours ago (1 children)

I only have a rather high level understanding of current AI models, but I don't see any way for the current generation of LLMs to actually be intelligent or conscious.

They're entirely stateless, once-through models: any activity in the model that could be remotely considered "thought" is completely lost the moment the model outputs a token. Then it starts over fresh for the next token with nothing but the previous inputs and outputs (the context window) to work with.

That's why it's so stupid to ask an LLM "what were you thinking", because even it doesn't know! All it's going to do is look at what it spat out last and hallucinate a reasonable-sounding answer.
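The stateless, once-through loop described above can be sketched in a few lines of Python. This is a toy illustration, not a real model: `next_token` is a hypothetical stand-in for a full forward pass, and the point is only that every token comes from a fresh pass over the context window, with no internal state carried between steps.

```python
def next_token(context: list[str]) -> str:
    """Stand-in for a forward pass; a real model scores a whole vocabulary."""
    return f"tok{len(context)}"  # dummy prediction based only on the context

def generate(prompt: list[str], n: int) -> list[str]:
    context = list(prompt)
    for _ in range(n):
        tok = next_token(context)  # activations computed during this call...
        context.append(tok)        # ...are discarded; only the token survives
    return context

print(generate(["what", "were", "you"], 2))
```

Ask this loop "what were you thinking?" and there is nothing to answer with: the only record of any "thought" is the tokens already appended to `context`.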

[–] RedFrank24@piefed.social 13 points 6 hours ago (1 children)

So why do we need Jensen Huang?

[–] MrVilliam@sh.itjust.works 12 points 5 hours ago

Exactly. CEO is maybe the easiest job for an AI to take over, so an AGI is the perfect candidate for that role.

Put up or shut up, tech bro CEOs. Replace yourself if it's so fucking amazing.

[–] meme_historian@lemmy.dbzer0.com 13 points 6 hours ago

Fridman, the podcast’s host, defines AGI as an AI system that’s able to “essentially do your job,” as in start, grow, and run a successful tech company worth more than $1 billion. He then asks Huang when he believes AGI will be real — asking if it’s, say, five, 10, 15, or 20 years away — and Huang responds, “I think it’s now. I think we’ve achieved AGI.”

So we've achieved AGI in the sense that it could replace a nonsensical fart-sniffing clown, hyping a horde of morons into valuing a company at orders of magnitude above its actual worth?

[–] acosmichippo@lemmy.world 6 points 6 hours ago

fart sniffer