Ladies and Gentlemen, this is what slopperations are funneling all their money into in 2026
"We did it, Patrick! We made a technological breakthrough!"
I want to say upfront that I'm not trying to defend AI here. I wouldn't be on Fuck AI if I wanted to do that. I just think it's philosophically interesting despite causing way more problems than it solves.
I copied the message from the image verbatim.
About 50% of the models I tried got it right. (Don't worry, I didn't pay the AI companies for that or give them feedback or anything.)
The question from the image.
My question was: how do you then explain some models getting the question right?
It's usually the more advanced models that get it, so it's possible that a similar enough question appears somewhere in the training data and the only difference is that the advanced models are large enough to encode it. The question in the image has been around since at least 2023.
So let's try making our own question, taking a well-known trick question and subtly inverting it so it becomes a kind of double bluff.
It's hard to google, for obvious reasons, but I couldn't find anyone else trying this question the way I could with the one from the image. Still, I got similar results from the AI models.
They actually did slightly better on this one. About 60-70% got it right.
I've tried a few different types of questions over the last few years to see what AI gets wrong that humans get right. What I've found so far is that AI has been a lot dumber than I expected, but humans have also been a lot dumber than I expected.
To be honest, the gap was far wider for the humans. My theory is that COVID gave us all brain damage.