this post was submitted on 04 Feb 2026
505 points (98.3% liked)
Fuck AI


I don't think we've overcome the half-glass-of-wine issue; rather, we've papier-mâchéd over some fundamental flaws in precisely what is happening when an LLM creates the appearance of reason. In doing so we're baking a certain amount of sawdust into the cake. And given that no substantive advances have really been made since maybe the 4, 4.5 days, with most of the "improvements" coming from basically better engineering, it's clear we've hit an asymptote in what these models are capable of and will be capable of, and they will never manifest into a full reasoning system that can self-correct.
There is no amount of engineering sandblasting that can overcome issues which are fundamental to the model's structure. If the rot is in the bones, it's in the bones.
Nah, there have been huge advancements in the past few months; you are definitely out of touch if you haven't witnessed them.
Recent models have gotten WAY better at "second-guessing" themselves, and not acting nearly so confidently wrong.
That isn't an LLM issue at all; that has nothing to do with LLMs, in fact. That's a problem with Stable Diffusion, which is an entirely different kind of AI, but yeah, that issue is fundamental to what Stable Diffusion is.
I mean, that's not much different from any other tech; a LOT of the advanced tech we have today is dozens and dozens of separate bits of engineering all working in tandem to create something more meaningful.
Your smartphone has countless distinct advancements in different technologies that come together to make a useful device, and if you removed any one of those pieces, it would be substantially less useful as a tool.
So yeah, I personally will very much count the other pieces of the puzzle advancing as the system as a whole advancing.
LLMs today are quite a bit better than the ones from a year ago, and the tooling around them has also improved a lot. The proliferation of Model Context Protocol (MCP) tools is proving to be a massive part of the system as a whole becoming something actually very useful.
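For what it's worth, the "tooling around the model" point boils down to a simple pattern: the model emits a structured tool request, a host program dispatches it to real code, and the result gets fed back into the context. Here's a toy sketch of that dispatch loop; the tool name and request format are hypothetical illustrations, not the actual MCP wire format.

```python
# Minimal sketch of the tool-dispatch pattern that protocols like MCP
# standardize. Tool names and the request shape here are made up for
# illustration -- real MCP uses a JSON-RPC wire format.

# Registry of callable tools the host exposes to the model.
TOOLS = {
    # Hypothetical tool with a stand-in implementation.
    "get_weather": lambda city: f"Sunny in {city}",
}

def dispatch(request: dict) -> str:
    """Route a model-emitted tool request to a registered tool."""
    tool = TOOLS.get(request["name"])
    if tool is None:
        return f"error: unknown tool {request['name']!r}"
    # Unpack the model-provided arguments into the tool call.
    return tool(**request["arguments"])

# The host would append this result back into the model's context.
print(dispatch({"name": "get_weather", "arguments": {"city": "Lisbon"}}))
```

The point is that none of this logic lives inside the model's weights; it's ordinary software wrapped around the model, which is exactly why "the system" can improve even if the model itself doesn't.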
It's built in layers, and the layers that are improving are not the LLMs themselves, it's the layers that interact between the user and the LLM that are improving, which creates the illusion that the LLMs are improving. They're not. TropicalDingdong knows what they're talking about, you should listen to them.
If you continue to improve the layers between the LLM and the user long enough, you'll end up with something that we traditionally used to call a "software program" that is optimized for accomplishing a task, and you won't need an LLM much if at all.
You've gotta be living under a rock if you don't think the models themselves have been improving over the last year, lol.
We are bumping into a log-scale problem, where people aren't fully grasping how big a difference going from an x% error rate to a y% error rate makes in actual practice, where it matters.
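The compounding effect is easy to show with some hypothetical numbers (the 5% and 1% rates below are illustrative, not measured): a seemingly modest drop in per-step error rate translates into a huge change in how often a long multi-step task succeeds end to end, assuming each step fails independently.

```python
# Hypothetical illustration: per-step error rates compound over
# multi-step tasks, so small-looking improvements matter a lot.

def task_success_rate(per_step_error: float, steps: int) -> float:
    """Probability that every step of a multi-step task succeeds,
    assuming independent per-step errors (a simplifying assumption)."""
    return (1 - per_step_error) ** steps

for err in (0.05, 0.01):
    # 5% per-step error over 50 steps -> roughly 8% end-to-end success;
    # 1% per-step error over 50 steps -> roughly 60% end-to-end success.
    print(f"{err:.0%} per-step error over 50 steps -> "
          f"{task_success_rate(err, 50):.1%} task success")
```

Under these assumed numbers, a 5x reduction in per-step error turns a task that almost always fails into one that succeeds more often than not, which is the "log-scale" point.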