LLMs aren't AI, let alone AGI.
They're fucking prediction engines with extra functions.
I think you're a bullshitting con artist.
I just dropped an AGI down the toilet AMA
Geez. You can almost smell the desperation in this guy.
Guys, I think I just found AGI in my gramps' old stuff.
Sure you do. It's not at all a transparent attempt to prolong the bubble.
I only have a rather high level understanding of current AI models, but I don't see any way for the current generation of LLMs to actually be intelligent or conscious.
They're entirely stateless, once-through models: any activity in the model that could be remotely considered "thought" is completely lost the moment the model outputs a token. Then it starts over fresh for the next token with nothing but the previous inputs and outputs (the context window) to work with.
That's why it's so stupid to ask an LLM "what were you thinking", because even it doesn't know! All it's going to do is look at what it spat out last and hallucinate a reasonable-sounding answer.
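The once-through behavior described above can be sketched in a few lines. This is a toy illustration, not any real LLM API: `toy_model` is a made-up stand-in for the network, and the point is only that it's a pure function of the token sequence, so nothing resembling "thought" survives from one step to the next.

```python
def toy_model(tokens):
    """Pure, stateless next-token predictor: output depends only on the
    input sequence. Hypothetical stand-in for a real model."""
    # Trivial rule for illustration: continue counting from the last token.
    return (tokens[-1] + 1) if tokens else 0

def generate(prompt_tokens, n_steps):
    tokens = list(prompt_tokens)
    for _ in range(n_steps):
        # Each step starts fresh: the only thing carried forward is the
        # token sequence itself (the context window). Any intermediate
        # activations inside the model are discarded after every token.
        next_token = toy_model(tokens)
        tokens.append(next_token)
    return tokens

print(generate([5], 3))  # -> [5, 6, 7, 8]
```

Asking such a model "what were you thinking" can only ever be answered by re-reading `tokens` and producing a plausible continuation, because the internal state that produced earlier tokens is gone.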
So why do we need Jensen Huang?
Exactly. CEO is maybe the easiest job for an AI to take over, so an AGI would be the perfect candidate for that role.
Put up or shut up, tech bro CEOs. Replace yourself if it's so fucking amazing.
Fridman, the podcast’s host, defines AGI as an AI system that’s able to “essentially do your job,” as in start, grow, and run a successful tech company worth more than $1 billion. He then asks Huang when he believes AGI will be real — asking if it’s, say, five, 10, 15, or 20 years away — and Huang responds, “I think it’s now. I think we’ve achieved AGI.”
So we've achieved AGI in the sense that it could replace a nonsensical fart-sniffing clown, hyping a horde of morons into valuing a company at orders of magnitude above its actual worth?
fart sniffer