Every time these get posted, I go and check if it's true. And then I realize I deliberately used Google to read an AI summary, and I feel sick about it. I've been tricked into giving them traffic.
Don't be tricked, read other people's comments.
I get a reasonable result. Maybe they fix these things quickly?
Know how you feel, though.
Randomized result.
Because almost all of these nonsensical summaries are from when the feature was newly implemented, so we're all making fun of something from a couple of years ago.
Not always. I'll screenshot any new ones I see, but I've definitely had some nonsense answers recently.
Right, that's why I said "almost all"
But these posts rarely have a date attached, and the posters are rarely the ones who took the screenshot.
Fair enough, I can definitely agree
No, this is still a common occurrence and pretty well documented. LLMs generate a response by sampling from a probability distribution learned from their training data, with weighting programmed toward certain types of responses. This is why you can give the same prompt to the same LLM repeatedly and get different responses each time, or get different responses by slightly modifying the prompt, even when both prompts say essentially the same thing. There is no comparison or "learning" happening from user input. It doesn't think, rationalize, or memorize. This is just what LLMs are and how they work under the hood.
Anthropomorphizing LLMs is a bad idea, and trusting the output without manual verification is foolish. The LLM does not know or care about misinformation; it is just software that models a dataset and outputs that information with programmed noise for variance, and sometimes extra user ass-kissing added for flair.
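For anyone curious what that "programmed noise" means in practice: generation is typically done by temperature sampling over next-token probabilities. Here's a toy Python sketch (the vocabulary and logit values are invented for illustration, not from any real model) showing why the identical prompt can yield different outputs:

```python
import numpy as np

# Toy next-token sampler: a tiny vocabulary with made-up model scores.
vocab = ["Paris", "London", "Berlin", "Madrid"]
logits = np.array([3.0, 1.5, 1.0, 0.5])  # hypothetical scores for the next token

def sample_token(logits, temperature, rng):
    """Draw one token from the softmax distribution at the given temperature."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return vocab[rng.choice(len(vocab), p=probs)]

rng = np.random.default_rng(seed=0)

# Same "prompt" (identical logits), five draws: the answer varies run to run.
print([sample_token(logits, temperature=1.0, rng=rng) for _ in range(5)])

# As temperature approaches 0 the sampler turns greedy and always picks
# the highest-scoring token, so the output becomes repeatable.
print([sample_token(logits, temperature=0.01, rng=rng) for _ in range(5)])
```

Run it twice without the fixed seed and the first list changes while the second stays the same, which is the whole "same prompt, different answer" effect in miniature.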
I thought my first comment would make it clear that I'm very much not pro-AI so I'm not sure why I'm getting a lecture on anthropomorphizing it.
But there's still a lot wrong with your comment. You're assuming that the dataset never changes and the parameters are never tweaked, which is wildly untrue. Answers like this are not a common occurrence anymore, because when a new one pops up, the companies have a vested interest in updating the system instructions.
I'm not saying the summaries are good now. Just that most of the outrageous answers have long since been fixed.
It wasn't a lecture, nor was it a personal attack on you. Your comment didn't anthropomorphize LLMs, so I'm not sure how you interpreted that as me coming at you. The only place we disagree, based on what we've each commented so far, is whether misinformation is inherently a byproduct of how LLMs currently function.
You may not be as neutral on this topic as you claim if a response like mine felt offensive. It was a fairly predictable counterargument, and I'm not even the only one who made it in the replies.
Well, you did reply to me. I didn't realize you'd turned to the audience for the second paragraph.
Also, it clearly isn't static. I played around with Gemini, and when I got some clearly BS response, just repeating the prompt would often lead to a more correct result.
That's because all these LLMs are inherently random.