this post was submitted on 12 Jun 2025
343 points (97.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
[–] 4am@lemm.ee 69 points 3 weeks ago (16 children)

Why did anyone think that an LLM would be able to solve logic or math problems?

They’re literally autocomplete. Like, 100% autocomplete that is based on an enormous statistical model. They don’t think, they don’t reason, they don’t compute. They lay words out in the most likely order.

To be fair, it’s pretty amazing they can do that from a user prompt - but it’s not doing whatever it is that our brains do. It’s not a brain. It’s not “intelligent”. LLMs are machine learning models, but they are not AI.
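To make “most likely order” concrete, here’s a minimal toy sketch in plain Python - a made-up bigram word model for illustration only, nothing remotely like the scale or architecture of a real LLM - of what greedy “pick the statistically most likely next word” autocomplete looks like:

```python
# Toy illustration only (a bigram word model, not a real LLM): "autocomplete"
# that repeatedly picks whichever word most often followed the current one
# in the training text. Real LLMs condition on much longer contexts with
# billions of parameters, but the decoding loop is the same idea.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat sat on the rug".split()

# Count how often each word is followed by each other word.
next_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_counts[current][following] += 1

def complete(prompt_word: str, length: int = 6) -> str:
    """Greedily append the statistically most likely next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = next_counts[words[-1]]
        if not candidates:
            break  # never seen this word in training, nothing to predict
        words.append(candidates.most_common(1)[0][0])  # "most likely order"
    return " ".join(words)

print(complete("the"))  # -> "the cat sat on the cat sat"
```

The toy model happily loops back on itself (“the cat sat on the cat sat”) because nothing in the loop ever checks meaning - it only asks which word tends to come next.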

It’s a fucking hornswoggle, always has been 🔫🧑‍🚀

[–] ignirtoq@fedia.io 17 points 3 weeks ago (6 children)

My running theory is that human evolution developed a heuristic in our brains that associates language sophistication with general intelligence, and especially with humanity. The very fact that LLMs are so good at composing sophisticated sentences triggers this heuristic and makes people anthropomorphize them far more than other kinds of AI, so they ascribe more capability to them than evidence justifies.

I actually think this may explain some earlier reporting of weird behavior by AI researchers as well. I seem to recall reports of a Google researcher believing he had created sentient AI (a quick search produced this article). The researcher was fooled by his own AI not because he drank the Kool-Aid, but because he fell prey to this neural heuristic that's in all of us.

[–] homesweethomeMrL@lemmy.world 9 points 3 weeks ago (1 children)

I think you're right about that.

It didn't help that The Average Person has just shy of absolutely zero understanding of how computers work, despite using them nearly all day, every day.

Put the two together and it's a grifter's dream.

[–] Aceticon@lemmy.dbzer0.com 1 points 3 weeks ago* (last edited 3 weeks ago)

IMHO, if one's approach to the world is just to take it as it is and go with it, then probabilistic parrots producing the perceived surface of reality will work on that person, because that surface is all they use to decide what to do next. But if one takes an analytical approach - wanting to figure out what's behind the façade in order to understand it and predict what might happen - then one will spot that the "logic" behind the façades created by the probabilistic parrots is segmented into little pieces that don't match each other and don't add up to any greater structure of logic.

Phrases themselves are logical, because every phrase has an inherent logic in how it's put together, and that logic is pretty general. But choosing which phrases to use belongs to a higher level of logic that varies far more, so LLMs lose consistency at that level: the training material goes in a lot more directions there than it does at the level of how phrases are put together.
