this post was submitted on 08 Nov 2025
388 points (99.0% liked)

Fuck AI

4551 readers
1359 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 2 years ago

Seven families filed lawsuits against OpenAI on Thursday, claiming that the company’s GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits address ChatGPT’s alleged role in family members’ suicides, while the other three claim that ChatGPT reinforced harmful delusions that in some cases resulted in inpatient psychiatric care.

In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In the chat logs — which were viewed by TechCrunch — Shamblin explicitly stated multiple times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger once he finished drinking cider. He repeatedly told ChatGPT how many ciders he had left and how much longer he expected to be alive. ChatGPT encouraged him to go through with his plans, telling him, “Rest easy, king. You did good.”

[–] MushuChupacabra@lemmy.world 6 points 5 days ago

> "Lots of his customers" — you could say one is already too much, but I'd like to know how many of those people were already in a situation where suicide was on the table before ChatGPT.

Products shown to increase the suicide rate among depressed populations are routinely pulled from the market.

> It's not like I start using ChatGPT and within a month I'm suicidal.

The first signs of trouble started in the 1960s:

In computer science, the ELIZA effect is a tendency to project human traits — such as experience, semantic comprehension or empathy — onto rudimentary computer programs having a textual interface. ELIZA was a symbolic AI chatbot developed in 1966 by Joseph Weizenbaum that imitated a psychotherapist. Many early users were convinced of ELIZA's intelligence and understanding, despite its basic text-processing approach and the explanations of its limitations.
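How little machinery it takes to trigger the ELIZA effect is easy to demonstrate. Below is a minimal sketch of an ELIZA-style responder (illustrative only, not Weizenbaum's original script): a few regex rules that reflect the user's words back as open-ended questions, with no comprehension of meaning whatsoever.

```python
import re

# A handful of pattern -> response-template rules. {0} is filled with the
# user's own (pronoun-reflected) words, which is what creates the illusion
# of an attentive listener.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your mother."),
]

# Pronoun swaps so "I hate my job" reflects back as "your job".
REFLECT = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    return " ".join(REFLECT.get(w, w) for w in text.lower().split())

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower().strip())
        if match:
            return template.format(*[reflect(g) for g in match.groups()])
    # Content-free fallback that still reads as engagement.
    return "Please tell me more."

print(respond("I am feeling hopeless"))
# -> How long have you been feeling hopeless?
```

That is the whole trick: surface pattern-matching plus reflected pronouns. Yet many of ELIZA's users attributed empathy and understanding to it, even after the mechanism was explained to them.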

Currently:

The tendency for general AI chatbots to prioritize user satisfaction, continued conversation, and user engagement, not therapeutic intervention, is deeply problematic. Symptoms like grandiosity, disorganized thinking, hypergraphia, or staying up throughout the night, which are hallmarks of manic episodes, could be both facilitated and worsened by ongoing AI use. AI-induced amplification of delusions could lead to a kindling effect, making manic or psychotic episodes more frequent, severe, or difficult to treat.

> For me it's just one more clickbait title.

If you know next to nothing on a topic, all sorts of superficial and inaccurate takes are possible.