I just tried it on Brave's AI
The obvious choice, said the motherfucker
and what is going to happen is that some engineer will band-aid the issue, all the AI crazy people will shout "see! it's learnding!", and the AI snake oil salesmen will use that as justification for all the waste and demand more from all systems
just like what they did with the full glass of wine test. and no, the AI fundamentally did not improve. the issue is fundamental to its design, not an issue with the data set
Half the issue is they're calling 10 in a row "good enough" to treat it as solved in the first place.
A sample size of 10 is nothing.
Frankly, I would like to see some error bars on the "human polling". How many of the people rapiddata is polling are just hitting the top or bottom answer?
Yeah, it seems like training on human data makes most AIs answer at least as unreliably as humans. 71% saying "walk" on the human side is crazy.
There are a lot of humans that would fail this as well. Just sayin.
You should consider reading the article before "just sayin."
What is the wrong answer, though? It is a stupid question. I would look at you sideways if you asked me this, because the obvious answer is "walk, silly, the car is already at the car wash". Otherwise, why would you ask it?
Which is telling, because when asked to review the answer, the AIs that I have seen said: you asked me how *you* were going to get to the car wash, so the assumption was that the car was already there.
Those humans used AI to answer the question.
They also polled 10,000 people to compare against a human baseline:
Turns out GPT-5 (7/10) answered about as reliably as the average human (71.5%) in this test. Humans still outperform most AI models on this question, but to be fair I expected a far higher "drive" rate.
That 71.5% is still a higher success rate than 48 out of 53 models tested. Only the five 10/10 models and the two 8/10 models outperform the average human. Everything below GPT-5 performs worse than 10,000 people given two buttons and no time to think.
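On the error bars being asked for upthread: a sample of 10 really is nothing compared to 10,000 respondents, and a confidence interval makes that concrete. A minimal sketch using the Wilson score interval (my choice of method; neither the test nor the poll specifies one, so treat the exact bounds as illustrative):

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# GPT-5: 7 correct answers out of 10 runs
lo, hi = wilson_interval(7, 10)
print(f"7/10:       {lo:.3f} - {hi:.3f}")   # roughly 0.40 - 0.89

# Humans: 71.5% of 10,000 poll respondents
lo, hi = wilson_interval(7150, 10000)
print(f"7150/10000: {lo:.3f} - {hi:.3f}")   # roughly 0.706 - 0.724
```

The 7/10 interval spans nearly half the probability axis, so "about as reliable as the average human" is consistent with anything from clearly worse to clearly better; the human estimate is pinned down to within about a percentage point.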
This here is the point most people fail to grasp. The AI was taught by people, and people are wrong a lot of the time. So the AI is more like us than we think it should be, right down to getting the right answer for all the wrong reasons. We should call it human AI. Lol.
Like I said to the person above, there is no wrong answer. It's all about assumptions. It is a stupid trick question that no one would ask.
I just asked Google Gemini 3 "The car is 50 miles away. Should I walk or drive?"
In its breakdown comparison between walking and driving, under walking the last reason to not walk was labeled "Recovery: 3 days of ice baths and regret."
And under reasons to walk, "You are a character in a post-apocalyptic novel."
Methinks I detect notes of sarcasm...
Gemini 3 pro said that this was a "great logic puzzle" and then said that if my goal is to wash the car, then I need to drive there.
It's trained on Reddit. Sarcasm is its default.
Could end up in a pun chain too
My gods, I love those. We should link to some.
It's so obvious I didn't even need to be British to understand you are being totally serious.
I feel like we're the only ones who expect "all-knowing information sources" to write more seriously than these edgelord-level rizzy chatbots do, and yet here they are, blatantly proving they are chatbots that should not be blindly trusted as authoritative sources of knowledge.
I watched this in a YouTube Shorts format a week ago, where they ask a few models about walking or driving to the car wash.
They have some more funny ask-the-AI shorts.