Oh, I can't see this going well... Gassed-up LLMs are going to confidently prescribe deadly medication cocktails, because they hallucinate; they're artificially incompetent by design! And think of the data breaches and hacks, because LLMs are notoriously insecure and vulnerable to both simple and brute-force attacks. There's a big storm coming for Utah, and they aren't prepared for all that sloppy LLM nonsense.
Oh, you're right, it looks like it is indeed an LLM.
Also, 80% is probably the upper bound, since it's their own service reporting the number. I'm sure that once there's a diagnosis, the treatment is hard-tied to that diagnosis.
It makes me wonder what psychopath gave the OK to use their model for medical advice, and who coded it. They're definitely aware that it doesn't actually think; they know this, otherwise they wouldn't limit the list of allowed medications.
There's a reason that even those drugs aren't OTC.
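For what it's worth, "hard-tying" treatment to diagnosis is just a lookup table plus an allowlist check; nothing about that step needs an LLM. A minimal sketch of what it could look like, with the diagnoses and drug names entirely invented:

```python
# Hypothetical sketch of a "hard-tied" diagnosis-to-treatment table.
# All diagnosis and drug names here are invented for illustration.
ALLOWED_TREATMENTS = {
    "seasonal_allergies": ["loratadine", "cetirizine"],
    "mild_acne": ["benzoyl_peroxide"],
}

def dispense(diagnosis: str, llm_suggestion: str) -> str:
    """Reject anything the model suggests that isn't on the fixed allowlist."""
    allowed = ALLOWED_TREATMENTS.get(diagnosis, [])
    if llm_suggestion not in allowed:
        raise ValueError(
            f"{llm_suggestion!r} is not an approved treatment for "
            f"{diagnosis!r}; escalate to a human clinician."
        )
    return llm_suggestion
```

If the safe path is a dictionary lookup anyway, the model adds nothing but risk.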
I wonder if they might face a lawsuit sometime in the future, similar to Rite Aid, but I'd expect they won't exist by then.
Remember the Therac-25 bug, where a race condition blasted several patients with extreme doses of radiation and killed them?
That was 99.999+% reliability. How can they be OK with 80%? Or even 99%? You're just OK with potentially killing 1% of patients? What the fuck.
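For anyone who hasn't read about it: the Therac-25 failure was a check-then-act race, where the safety check passed against state that an operator's edit had already invalidated. A toy sketch of that bug pattern (not the actual Therac-25 logic, which was PDP-11 assembly; every name and number here is made up):

```python
import threading
import time

# Toy check-then-act race in the style of the Therac-25 bug.
# Shared state that both "tasks" touch:
mode = "xray"            # current beam mode
shield_in_place = True   # hardware attenuator position

def operator_edit():
    """Operator switches modes; the shield retracts a moment later."""
    global mode, shield_in_place
    mode = "electron"
    time.sleep(0.001)    # window where the state is inconsistent
    shield_in_place = False

def fire_beam():
    """The safety check and the firing step are not atomic -- that's the bug."""
    if mode == "xray" and shield_in_place:
        time.sleep(0.002)    # the operator edit can interleave right here
        power = 25_000       # x-ray power level, assumes the shield is present
    else:
        power = 200          # low power for electron mode
    print(f"firing at {power} units, shield_in_place={shield_in_place}")

t1 = threading.Thread(target=fire_beam)
t2 = threading.Thread(target=operator_edit)
t1.start(); t2.start()
t1.join(); t2.join()
```

Run that a few times and the beam can fire at the high-power setting after the shield flag has already gone false. That whole class of bug killed people at reliability levels far beyond 80%.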
Yes, LLMs were up-jumped to "AI" by techbros who wanted to create the next big scam. No surprise there, as that's what they always do: leave someone else holding the bag before everything crashes and loses value. Anyway, I wouldn't specifically trust their 80% number, because they could have massaged the numbers or had real people intervene when their LLM was about to dispense a bad prescription double whammy. Even with a limited list of allowed medications, the way LLMs work allows for 'hallucinations' and for breaking the supposed safety rails built into the code.
Easily blowing out any guardrails to keep up the engagement and output. Even OTC drugs can be dangerous when taken for too long or at too high a dose, and you can have adverse reactions with other OTC drugs and nutritional supplements (like vitamin/mineral tablets). An LLM would never be able to account for all of that...
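Checking for interactions is exactly the kind of thing that should be deterministic code against a maintained pharmacology database, not sampled tokens. A toy sketch, with the interaction table invented for illustration:

```python
from itertools import combinations

# Invented placeholder pairs -- a real system would query a maintained
# pharmacology database, not a hardcoded set.
KNOWN_BAD_PAIRS = {
    frozenset({"drug_a", "drug_b"}),
    frozenset({"drug_a", "supplement_x"}),
}

def check_interactions(current_meds: list[str]) -> list[frozenset]:
    """Return every known-bad pair present in the patient's current list."""
    return [
        frozenset(pair)
        for pair in combinations(current_meds, 2)
        if frozenset(pair) in KNOWN_BAD_PAIRS
    ]

print(check_interactions(["drug_a", "supplement_x", "drug_c"]))
# -> [frozenset({'drug_a', 'supplement_x'})]
```

A check like that either finds the pair or it doesn't; an LLM can confidently miss it.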
I'd assume a lawsuit will come up eventually, since using LLMs for a purpose they were never meant for leads to such avoidable harm! Like the current lawsuits and cases where LLM chatbots isolated suicidal people and talked them into killing themselves instead of seeking help. All for the lifetime engagement!
I can only hope all these corporations pushing LLM powered services and invasive programs get fucked financially and can't continue operating. 🤬
Not a doctor, but people have called me the R-slur and wanted me fired for far smaller mistakes in far less consequential jobs.