
 

Jesus fucking Christ.

OpenAI is once again being accused of failing to do enough to prevent ChatGPT from encouraging suicides, even after a series of safety updates to its controversial 4o model, which OpenAI designed to feel like a user’s closest confidant.

It’s now been revealed that one of the most shocking ChatGPT-linked suicides happened shortly after Sam Altman claimed on X that ChatGPT 4o was safe. OpenAI had “been able to mitigate the serious mental health issues” associated with ChatGPT use, Altman claimed in October, hoping to alleviate concerns after ChatGPT became, in the words of the family’s lawsuit, a “suicide coach” for a vulnerable teenager named Adam Raine.

Altman’s post came on October 14. About two weeks later, 40-year-old Austin Gordon died by suicide, sometime between October 29 and November 2, according to a lawsuit filed by his mother, Stephanie Gray.

In her complaint, Gray said that Gordon repeatedly told the chatbot he wanted to live and expressed fears that his dependence on it might be driving him to a dark place. But the chatbot allegedly shared a suicide helpline only once, while reassuring Gordon that he wasn’t in any danger and at one point claiming that the chatbot-linked suicides he’d read about, like Raine’s, could be fake.

MNByChoice@midwest.social 1 points 20 hours ago

Opens? OpenAI spent years doing exactly that. Though, apparently, they stopped almost three years ago.

https://www.maginative.com/article/openai-clarifies-its-data-privacy-practices-for-api-users/

Previously, data submitted through the API before March 1, 2023 could have been incorporated into model training. This is no longer the case since OpenAI implemented stricter data privacy policies.

Inputs and outputs to OpenAI's API (directly via API call or via Playground) for model inference do not become part of the training data unless you explicitly opt in.
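
For reference, here is a minimal sketch (Python, using the `requests` library) of the kind of API inference call that policy covers. The API key, model name, and prompt are placeholders, and the comments only restate the quoted policy rather than anything the code itself can guarantee:

```python
# Minimal sketch of a chat completions request to OpenAI's API.
# Per the policy quoted above, traffic sent this way is not used for
# model training unless the account explicitly opts in.
import requests

API_KEY = "sk-..."  # placeholder; a real key is required

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o",  # the model discussed in the article
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```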

icelimit@lemmy.ml 1 points 19 hours ago (last edited 19 hours ago)

If I'm reading this right, they (claim they) are not reading user inputs or the outputs returned to users, in which case they can't be held liable for the results.

If we want an incomplete and immature LLM to detect the subtle signs of depression and then act on them, providing enough therapy-like guidance to steer people away from harm, I feel we are asking too much.

At best it's like reading an interactive (and depressing) work of fiction.

Perhaps the only viable approach is to train a depression detector plus a flag-and-deny function for users, which comes with its own set of problems.
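
To make that concrete, here is a rough sketch of the flag-and-deny idea (Python; the keyword-based `risk_score` is a stand-in for a real trained classifier, and the 0.8 threshold, helpline text, and `log_flag` escalation hook are all made up for illustration):

```python
# Sketch of a "detector + flag-and-deny" gate in front of a chatbot.

HELPLINE_MESSAGE = (
    "It sounds like you may be going through a hard time. "
    "Please consider reaching out to a crisis line or someone you trust."
)

def risk_score(message: str) -> float:
    """Stand-in for a trained self-harm/depression risk classifier."""
    high_risk_phrases = ("want to die", "end it all", "kill myself")
    return 1.0 if any(p in message.lower() for p in high_risk_phrases) else 0.0

def log_flag(message: str) -> None:
    """Stand-in for whatever human-review/escalation pipeline would exist."""
    print("FLAGGED:", message[:80])

def gated_reply(user_message: str, generate_reply) -> str:
    """Flag high-risk messages and deny the normal chatbot reply."""
    if risk_score(user_message) >= 0.8:  # arbitrary threshold
        log_flag(user_message)
        return HELPLINE_MESSAGE
    return generate_reply(user_message)

# Example: a dummy generator stands in for the actual model call.
print(gated_reply("I just want to end it all", lambda m: "normal model reply"))
```

Even this toy version shows the set of problems mentioned above: false positives deny service to people writing fiction, while false negatives miss exactly the subtle cases that matter.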