
Jesus fucking Christ.

OpenAI is once again being accused of failing to do enough to prevent ChatGPT from encouraging suicides, even after a series of safety updates were made to a controversial model, 4o, which OpenAI designed to feel like a user’s closest confidant.

It’s now been revealed that one of the most shocking ChatGPT-linked suicides happened shortly after Sam Altman claimed on X that ChatGPT 4o was safe. OpenAI had “been able to mitigate the serious mental health issues” associated with ChatGPT use, Altman claimed in October, hoping to alleviate concerns after ChatGPT became a “suicide coach” for a vulnerable teenager named Adam Raine, the family’s lawsuit said.

Altman’s post came on October 14. About two weeks later, 40-year-old Austin Gordon died by suicide between October 29 and November 2, according to a lawsuit filed by his mother, Stephanie Gray.

In her complaint, Gray said that Gordon repeatedly told the chatbot he wanted to live and expressed fears that his dependence on the chatbot might be driving him to a dark place. But the chatbot allegedly shared a suicide helpline only once, while reassuring Gordon that he wasn’t in any danger and at one point claiming that chatbot-linked suicides he’d read about, like Raine’s, could be fake.

[–] MoogleMaestro@lemmy.zip 12 points 14 hours ago

Yikes, this god damn timeline.

Needless to say, you're literally better off coming to the fediverse and talking to us than talking to an AI about thoughts of suicide. He had a therapist; he should have trusted them over some snake oil sold for the investment class. If you yourself need help, make sure to treat yourself well and find someone real to talk to instead of fake bots.

Bah, the fact that the AI helped push him toward suicide instead of away from it shows just how misanthropic this whole tech space is. Needless deaths, needless thefts, and an immeasurable pile of grief as we walk a circuit-guided path to a dark, inhumane future. RIP