I've said it time and time again: AIs aren't trained to produce correct answers, but seemingly correct answers. That's an important distinction and exactly what makes AIs so dangerous to use. You will typically ask the AI about something you yourself are not an expert on, so you can't easily verify the answer. But it seems plausible so you assume it to be correct.
Fuck AI
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Joke's on you, we make our decisions without asking AI for analytics. Because we don't ask for analytics at all
I feel like no analytics is probably better than decisions based on made-up analytics.
I somehow hope this is made up, because doing this without checking and finding the obvious errors is insane.
This is probably real, as it isn't the first time it happened: https://www.theguardian.com/technology/2025/jun/06/high-court-tells-uk-lawyers-to-urgently-stop-misuse-of-ai-in-legal-work
Use of AI in companies would not save any time if you were checking each result.
Yeah.
Kinda surprised there isn't already a term for submitting / presenting AI slop without reviewing and confirming.
Negligence and fraud come to mind
I suspect this will happen all over the place in a few years: the AI was good enough at first, but over time reality and the AI drifted apart.
AI is literally trained to produce the right-looking answer, not to actually perform the steps that get you to the answer. It's like the dogs that were trained to carry explosives and run under tanks: the trainers thought it was working great, until the first battle, when the dogs ran under their own side's tanks instead of the enemy's, because those were the tanks they'd been trained with.
And then, the very same CEOs that demanded the use of AI in decision making will be the ones that blame it for bad decisions.
while also blaming employees
Of course, it is the employees who used it. /s

But don't worry, when it comes to life or death issues, AI is the best way to help
Haha, "chat, how do I stop the patient's nose from bleeding?"
"Cut his leg off."
"Well, you're the medicAI. Nurse, fetch the bonesaw"
"Drain all their blood" would technically stop their nose bleed.
Yeah, and from the AI's point of view you've made a profit of one leg without spending any resources.
AIs don't have POVs.
"Hello doctor."
"Hello doctor."
"Hello doctor."
"I don't believe his head is medically necessary."
"We should remove his head."
"I concur."
"I concur."
"We should then use his head as a soccer ball."
"Yes."
"For medical reasons, of course."
"That sounds fun."
"Off with his head."
If true, they're all idiots, but I don't believe the story anyway. All the data question-answering LLMs I've seen use the LLM to write SQL queries against your databases and then wrap the output in a summary, so the summary is easy to check and very unlikely to be significantly wrong. AI/ML/statistics and code are tools: use them for what they're good at, don't use them for what they're not, and treat hype with skepticism.
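For what it's worth, the pattern described above (LLM writes the SQL, the raw result gets wrapped in a summary) can be sketched in a few lines. `generate_sql` here is a hard-coded stand-in for the actual model call; the point is that both the generated query and the raw rows are inspectable, so the summary is easy to verify:

```python
import sqlite3

def generate_sql(question: str) -> str:
    # A real system would prompt an LLM with the schema and the question.
    # Hard-coded here so the sketch is runnable.
    return "SELECT region, SUM(revenue) FROM sales GROUP BY region ORDER BY region"

def answer(question: str, conn: sqlite3.Connection) -> tuple[str, str]:
    sql = generate_sql(question)
    rows = conn.execute(sql).fetchall()  # raw result you can check by hand
    summary = ", ".join(f"{region}: {total}" for region, total in rows)
    return sql, summary

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, revenue REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("west", 250.0), ("east", 50.0)])
sql, summary = answer("What's our revenue by region?", conn)
print(sql)      # the query itself is auditable
print(summary)  # east: 150.0, west: 250.0
```

The failure mode in the OP's story only really arises when that middle step (the query and its raw output) is hidden from, or ignored by, whoever presents the summary.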
I am reminded of this story:
Heshmati told the student he had used Excel’s autofill function to mend the data. He had marked anywhere from two to four observations before or after the missing values and dragged the selected cells down or up, depending on the case. The program then filled in the blanks. If the new numbers turned negative, Heshmati replaced them with the last positive value Excel had spit out.
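That procedure, assuming Excel's drag-fill extends a simple linear (least-squares) trend from the marked cells, amounts to something like this sketch (all names are illustrative, nothing here is from the actual spreadsheet):

```python
def drag_fill(marked: list[float], n: int) -> list[float]:
    """Extrapolate n values past `marked` with a linear trend, the way
    dragging Excel's fill handle does, then apply the quoted fix-up:
    any value that turns negative is replaced by the last positive one."""
    m = len(marked)
    x_mean = (m - 1) / 2
    y_mean = sum(marked) / m
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(marked)) \
            / sum((x - x_mean) ** 2 for x in range(m))
    last_positive = next((v for v in reversed(marked) if v > 0), 0.0)
    out = []
    for i in range(m, m + n):
        v = y_mean + slope * (i - x_mean)
        if v < 0:
            v = last_positive  # "replaced them with the last positive value"
        elif v > 0:
            last_positive = v
        out.append(v)
    return out

print(drag_fill([4.0, 3.0, 2.0], 4))  # [1.0, 0.0, 1.0, 1.0]
```

Which makes the problem obvious: the filled-in numbers are pure extrapolation from two to four neighbours plus an ad-hoc clamp. They carry no information about the missing observations; it's fabricated data.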
Of course, that guy didn't even need fancy AI-powered autofill to act like an idiot; he just used good old-fashioned autofill.
Honestly, I was leaning toward "funny but probably fake" myself until I checked out OP's post history, which mentions "startups" and namedrops a few SaaS tools used heavily in marketing. If you've worked with marketers (or a fair few startup bros, honestly), you'll know this isn't beyond the bounds of reason for some of them 😂
I did leave myself a “could be idiots” get-out clause.
If you’ve worked with marketers
Oh boy. Yeah. SNAFU City.
Marketing just hallucinates their numbers anyway.
The problem is you've got people using the tools who don't understand the output or the method used to get there.
Take the Excel copilot function. You need to pass in a range of cells for the slop prompt to work on, but it's an optional parameter. If you don't pass that in, it returns results anyway. They're just complete bollocks.
At least it’ll self correct in a couple of years - use a tool, look like an idiot, stop using tool
Bwahahahahahahha 😂