this post was submitted on 30 Nov 2025
447 points (98.3% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

Honytawk@feddit.nl 14 points 6 days ago

Large Language Models aren't designed to be intelligent.

They are language models, made to generate text. Using them for anything else is just dumb user error.
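
As a minimal sketch of what "made to generate text" means in practice (assuming the Hugging Face transformers library and its small gpt2 checkpoint, neither of which is mentioned in the thread): the model only ever predicts the next token, over and over, and sampling supplies the variety.

```python
# Minimal sketch: an LLM as a next-token generator, nothing more.
# Assumes the Hugging Face `transformers` library and the small "gpt2" checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("A language model is designed to", return_tensors="pt")

# Generation is repeated next-token prediction; sampling adds the randomness.
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```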

Amanda527@lemmy.world 12 points 6 days ago

I don’t really care whether LLMs are “intelligent” or not. In the real world, tools either work or they don’t. Same as a generator, radio, or water filter. They can extend your capabilities, but they never replace judgment, planning, or responsibility. Over-reliance is the real risk.

jj4211@lemmy.world 2 points 6 days ago

To the extent that LLMs do things, they sit in a middle ground of "they work... maybe". The problem is that, in the ways that matter, the "maybe" is really hard to take out.

Normally, taking the technology as it comes for what it is would be reasonable, but there's a whole investment angle here around expectations. Investment is pouring in as if an emergent AGI is expected to come out of it. The drive to build everything around executing LLMs and only LLMs is driving up the cost of everything in tech. Money that might have been spent on more general-purpose compute is being redirected to Grace Blackwell infrastructure connected by NVL72, which isn't particularly interesting for almost any other application. It's just sucking up RAM and storage and starving everything else.

SlippiHUD@lemmy.world 6 points 6 days ago

The expert is years late to the party. Or the quote is.

khepri@lemmy.world 5 points 6 days ago (last edited 6 days ago)

They aren't supposed to be? An LLM is just a big ol' fabric of matrices with weights based on its training data. It then re-weights and queries this fabric based on your input and its other prompts and parameters, such as temperature and previous context, and returns output. LLMs are, and always will be, exactly as "intelligent" as that process, because that process is what an LLM is. The fact that data comes back from these matrices as English sentences with a bit of randomness (thanks to temperature settings), rather than as a fixed list of results, has greatly confused people.
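
As a rough sketch of the temperature step described above (pure NumPy with made-up numbers, not taken from any actual model): the raw scores are rescaled, softmaxed into probabilities, and sampled, which is where the "bit of randomness" comes from.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0, rng=None) -> int:
    """Pick a next-token id from raw model scores (logits).

    Lower temperature -> sharper distribution (more deterministic output);
    higher temperature -> flatter distribution (more varied output).
    """
    rng = rng or np.random.default_rng()
    scaled = logits / max(temperature, 1e-8)       # temperature re-weights the scores
    scaled -= scaled.max()                         # stabilize the exponentials
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax -> probabilities
    return int(rng.choice(len(probs), p=probs))

# Toy scores over a 5-token vocabulary: same weights, different temperatures,
# different-looking output -- which is all the "creativity" amounts to.
logits = np.array([2.0, 1.0, 0.5, 0.1, -1.0])
print([sample_next_token(logits, temperature=0.2) for _ in range(5)])  # mostly token 0
print([sample_next_token(logits, temperature=1.5) for _ in range(5)])  # more varied
```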

jj4211@lemmy.world 8 points 6 days ago

The problem is that this confusion has caused an unreasonable level of investment, with expectations that show no significant sign of being met. So we need credible experts reiterating what those in the thick of it consider "obvious", to try to fight that irrational behavior.

Investors have watched a rapidly evolving tech industry where lots of things didn't quite work right at first but were quickly iterated into something useful. The dot-com era is an example of generally solid principles attempted too soon, and the tech companies "fixed" a lot of those problems in short order. So they see this thing that almost seems like a conversational human and assume the "almost" will be addressed by the tech geniuses before we know it, and that whatever naysaying those same experts offer is just misplaced humility. The experts may have nerdy reasons for viewing it as limited, but that feedback is inconsistent with investors' "common sense" experience.

Of course there's survivorship bias here; if you go looking, you'll find plenty of "big tech" that retreated after an optimistic push, proof that the industry can fail to close that kind of gap, but none of those examples are anywhere near the scale of the AI push.

khepri@lemmy.world 3 points 6 days ago

Yes, all the same people who thought smartphones were actually "smart" or that social media was actually "social" are the ones thinking artificial intelligence is actually "intelligent". Just because a company calls its product "vitamin water" doesn't make it healthy, and the sooner people learn to see through the BS hype machine that major corporations have us all hypnotized by, the better for everyone.

Treczoks@lemmy.world 5 points 6 days ago

Wow. One needs experts for that? It is a language model. It is a parrot with a dictionary, or in the case of an LLM, a large dictionary.

sp3ctr4l@lemmy.dbzer0.com 4 points 6 days ago

"I already said this years ago," says person with a basic conceptual grasp of the operative concepts at play.
