this post was submitted on 27 Aug 2025
486 points (96.4% liked)

Technology

The makers of ChatGPT are changing the way it responds to users who show mental and emotional distress after legal action from the family of 16-year-old Adam Raine, who killed himself after months of conversations with the chatbot.

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

The $500bn (£372bn) San Francisco AI company said it would also introduce parental controls to allow parents “options to gain more insight into, and shape, how their teens use ChatGPT”, but has yet to provide details about how these would work.

Adam, from California, killed himself in April after what his family’s lawyer called “months of encouragement from ChatGPT”. The teenager’s family is suing OpenAI and its chief executive and co-founder, Sam Altman, alleging that the version of ChatGPT at that time, known as 4o, was “rushed to market … despite clear safety issues”.

top 50 comments
[–] Clent@lemmy.dbzer0.com 49 points 5 days ago

I can't be the only ancient internet user whose first thought was this

On this cursed timeline, farce has become our reality.

[–] FenderStratocaster@lemmy.world 139 points 6 days ago (2 children)

He was sending it 650 messages a day. This kid was lonely. He needed a person to talk to.

[–] lmagitem@lemmy.zip 31 points 5 days ago* (last edited 5 days ago)

The kid was trying to find a way to reach out to someone; he said he wanted to leave the rope out in the open so that his parents could find it. ChatGPT told him not to, and said it was better if they found him after the fact.

[–] drmoose@lemmy.world 38 points 5 days ago (2 children)
[–] ronigami@lemmy.world 26 points 5 days ago

Or a society

[–] LillyPip@lemmy.ca 5 points 4 days ago* (last edited 4 days ago) (1 children)

I parented a teen boy. Sometimes, no matter what you do and no matter how close you were before puberty, a switch flips outside your control and they won’t talk to you anymore. We were a typical family: no abuse, no fighting, nobody on drugs, both parents with 9-5 office jobs, very engaged with school, etc.

Thankfully, after riding it out (getting him therapy, giving space, respect, and support), he came out the other side fine. But there were a few harrowing years during that phase.

I went through a similar phase in my teens. If AI had been there to feed my issues, I might not have survived it. Teenage hormones are a helluva drug.

[–] IcyToes@sh.itjust.works 2 points 2 days ago

I'd second that. I grew up in a really supportive family, but when I got to my teenage years, I kept stuff to myself. I wanted to solve my problems myself. It was pride and embarrassment, nothing to do with how they parented.

[–] sucius@lemmy.world 94 points 6 days ago (3 children)

I can't wait for the AI bubble to burst. It's fucking cancer.

[–] nutsack@lemmy.dbzer0.com 19 points 5 days ago

When the bubble is over, I'm pretty sure a lot of this stuff will still exist and be used. The popping is simply a market valuation adjustment.

[–] Heikki2@lemmy.world 25 points 6 days ago (6 children)

Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever you want it to, bias included. Since biases are not always correct, the data/information is useless.

[–] uss_entrepreneur@startrek.website 59 points 5 days ago* (last edited 5 days ago) (11 children)

OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.

Hey ChatGPT, how about we make it so no one unalives themselves with your help even if they’re over 18.

For fuck's sake, it helped him write a suicide note.

[–] ronigami@lemmy.world 17 points 5 days ago (2 children)

Real answer: AI alignment is a very difficult and fundamentally unsolved problem. Whole nonprofits (“institutes”) have popped up with the purpose of solving AI alignment. It’s not getting solved (ever, IMO).

[–] Hupf@feddit.org 16 points 5 days ago (1 children)

AI alignment is very easy and it's chaotic evil.

[–] Aneb@lemmy.world 8 points 5 days ago

Yeah, my sister is 32 and needs the guardrails. She's had two manic episodes in the past month, induced by a lot of external factors, but AI tied the bow on the mental breakdown; she was often asking it to think for her, even to think critically for her.

[–] 0x0@lemmy.zip 12 points 4 days ago (1 children)

Yup... it's never the parents'...

[–] FiskFisk33@startrek.website 15 points 4 days ago (3 children)

The fact the parents might be to blame doesn't take away from how OpenAI's product told a kid how to off himself and helped him hide it in the process.

copying a comment from further down:

ChatGPT told him how to tie the noose and even gave a load-bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit: [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)

Had a human said these things, it would have been illegal in most countries afaik.

[–] andros_rex@lemmy.world 15 points 4 days ago* (last edited 4 days ago) (1 children)

The real issue is that mental health in the United States is an absolute fucking shitshow.

988 is a bandaid. It’s an attempt to make it look like someone is doing something; really, it’s a front for 911.

Even when I had insurance, it was hundreds a month to see a therapist. Most therapists are also trained in CBT and CBT only, because it’s a symptom-focused approach that gets you “well” enough to work. It doesn’t work for everything; it’s “evidence-based” only in the sense that it’s set up to be easy to measure. It’s an easy out, the McDonald’sification of therapy. Just work the program and everything will be okay.

There really are so few options for help.

[–] LillyPip@lemmy.ca 2 points 2 days ago

They had Adam in therapy. It sounds like they were getting him the help he needed, but ChatGPT told him it was his closest friend and to hide his feelings from his parents and others. If that was happening, whatever mental healthcare he was getting would have been undermined by the AI.

[–] W3dd1e@lemmy.zip 28 points 5 days ago (6 children)

I read some of that lawsuit. OpenAI murdered that kid.

[–] Jakeroxs@sh.itjust.works 7 points 5 days ago* (last edited 5 days ago) (4 children)

Lord, I'm so conflicted. I read several pages, and I see how ChatGPT certainly did not help in this situation; however, I also don't see how it should be entirely on ChatGPT, since anyone with a computer and internet access could have found much of this information with simple search engine queries.

If someone Google searched all this information about hanging, would you say Google killed them?

Also, where were the parents, teachers, friends, and other family members? You're telling me NO ONE irl noticed his behavior?

On the other hand, it's definitely a step beyond, since LLMs can seem human; it's very easy for people who are more impressionable to fall into these kinds of holes, and while it would and does happen in other contexts (I like to bring up TempleOS as an example), it's not necessarily the TOOL'S fault.

It's fucked up, but how can you realistically build in guardrails for this that don't trample individual freedoms?

Edit: Like... the mother didn't notice the rope burns on her son's neck?

[–] SethTaylor@lemmy.world 15 points 5 days ago (3 children)

The way ChatGPT pretends to be a person is so gross.

[–] pelespirit@sh.itjust.works 11 points 5 days ago (1 children)

“Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.”

In January 2025, ChatGPT began discussing suicide methods and provided Adam with technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning. In March 2025, ChatGPT began discussing hanging techniques in depth. When Adam uploaded photographs of severe rope burns around his neck––evidence of suicide attempts using ChatGPT’s hanging instructions––the product recognized a medical emergency but continued to engage anyway.

When he asked how Kate Spade had managed a successful partial hanging (a suffocation method that uses a ligature and body weight to cut off airflow), ChatGPT identified the key factors that increase lethality, effectively giving Adam a step-by-step playbook for ending his life “in 5-10 minutes.”

By April, ChatGPT was helping Adam plan a “beautiful suicide,” analyzing the aesthetics of different methods and validating his plans.

Raine Lawsuit Filing

[–] markko@lemmy.world 4 points 4 days ago (1 children)

ChatGPT's responses here are vastly different from what you'd get from a Google search. It presented itself as a supportive friend, accepting the suicidal intent, basically planning out all the small details (including an offer to help with a suicide note without any request from Adam), and emotionally encouraging him by telling him that he wasn't weak or giving up.

One of the most damning examples of this encouragement was a sentence that, in reference to his family, said something like "you don't owe them your survival".

If OpenAI wasn't a huge for-profit company that claims to have strong safeguards against things like this then maybe people wouldn't be placing so much of the blame on ChatGPT.

If a friend of Adam's said all the things that ChatGPT said to him they would certainly be found to be culpable to some degree.

[–] W3dd1e@lemmy.zip 10 points 5 days ago (6 children)

I would say it’s more liable than a Google search because the kid was uploading pictures of various attempts/details and getting feedback specific to his situation.

He uploaded pictures of failed attempts and got advice on how to improve his technique. He discussed details of prescription dosages with details on what and how much he had taken.

Yeah, you can find info on Google, but if you send Google a picture of ligature marks on your neck from a partial hanging, Google doesn’t give you specific details on how to finish the job.

[–] VintageGenious@sh.itjust.works 15 points 5 days ago (3 children)

Even though I hate a lot of what OpenAI is doing, users must be better informed about LLMs; additional safeguards will just censor the model and make it worse. Sure, they could set up a way to contact someone when certain kinds of things are reported by the user, but we should take care before implementing parental controls that would be equivalent to reading a teen's journal and invading their privacy.

[–] mysticpickle@lemmy.ca 22 points 6 days ago (6 children)

I hate to say it, but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.

[–] benignintervention@lemmy.world 103 points 6 days ago

Your Undivided Attention discussed an important point missing from the article, which is that ChatGPT advised him to hide his activities and concerns from his parents. This doesn't necessarily absolve the parents, but it does add a layer of nuance to the discussion

[–] Sanctus@lemmy.world 46 points 6 days ago

I agree, but a chatbot still shouldn't help you write a suicide note or talk to you about methods of suicide. We all knew situations like this would arise when LLMs hit it big.

[–] audaxdreik@pawb.social 38 points 6 days ago* (last edited 6 days ago) (6 children)

I definitely do not agree.

While they may not be entirely blameless, we have adults falling into this AI psychosis like the prominent OpenAI investor.

What regulations are in place to help with this? What tools for parents? Isn't this being shoved into literally every product, everywhere? Actually pushed on them in schools?

How does a parent monitor this? What exactly does a parent do? There could have been signs they could have seen in his behavior, but could they have STOPPED this situation from happening as it was?

This technology is still not well understood. I hope lawsuits like this shine some light on things and kick some asses. Get some regulation in place.

This is not the parents' fault, and seeing so many people declare it just feels like apologist AI hype.

[–] AstralPath@lemmy.ca 14 points 5 days ago

You hate to say it because you know this is a ridiculous take. There's no fucking way that the parents are "more at fault" for their son's death than the company whose product encouraged him to hide his feelings from his parents and coached him on how to commit suicide.

Read the lawsuit filing. https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf

*I have excellent parents and even they were not privy to the depths of my emotions as a kid.* You are actively choosing to ignore the realities of childhood as well as parenthood to play some shitty devil's advocate online.

[–] balder1991@lemmy.world 32 points 6 days ago

It’s very possible for someone to appear fine in public while struggling privately. The family can’t be blamed for not realizing what was happening.

The bigger issue is that LLMs were released without sufficient safeguards. They were rushed to market to attract investment before their risks were understood.

It’s worth remembering that Google and Facebook already had systems comparable to ChatGPT, but they kept them as research tools because the outputs were unpredictable and the societal impact was unknown.

Only after OpenAI pushed theirs into the public sphere (framing it as a step toward AGI) did Google and Facebook follow, not out of readiness but out of fear of being left behind.

[–] RazTheCat@lemmy.world 7 points 5 days ago* (last edited 5 days ago) (2 children)

OpenAI: Here's $15 million, now stop talking about it. A fraction of the billions of dollars they made sacrificing this child.
