He was sending it 650 messages a day. This kid was lonely. He needed a person to talk to.
The kid was trying to find a way to reach out to someone; he said he wanted to leave the rope out in the open so that his parents would find it. ChatGPT told him not to, saying it would be better if they found him after the fact.
I can't wait for the AI bubble to burst. It's fucking cancer.
Me too. Nearly every job posting I see now wants some experience with AI. I make the argument that AI is not always correct and will output whatever bias you want it to have. Since biases are not always correct, the data/information is useless.
When the bubble pops, I am pretty sure a lot of this stuff will still exist and be used. The popping is simply a market valuation adjustment.
OpenAI admitted its systems could “fall short” and said it would install “stronger guardrails around sensitive content and risky behaviors” for users under 18.
Hey ChatGPT, how about we make it so no one unalives themselves with your help even if they’re over 18.
For fuck's sake, it helped him write a suicide note.
unalives
seriously?
Real answer: AI alignment is a very difficult and fundamentally unsolved problem. Whole nonprofits (“institutes”) have popped up with the purpose of solving AI alignment. It’s not getting solved (ever, IMO).
I can't be the only ancient internet user whose first thought was this
On this cursed timeline, farce has become our reality.
I hate to say it, but the parents are more at fault here for not recognizing the signs and getting him the mental help he needed. They're just lashing out.
Your Undivided Attention discussed an important point missing from the article: ChatGPT advised him to hide his activities and concerns from his parents. This doesn't necessarily absolve the parents, but it does add a layer of nuance to the discussion.
I agree, but a chatbot still shouldn't help you write a suicide note or talk to you about methods of suicide. We all knew situations like this would arise when LLMs hit it big.
I definitely do not agree.
While they may not be entirely blameless, we have adults falling into this AI psychosis like the prominent OpenAI investor.
What regulations are in place to help with this? What tools for parents? Isn't this being shoved into literally every product, everywhere? Actually pushed on them in schools?
How does a parent monitor this? What exactly does a parent do? There may have been signs in his behavior they could have seen, but could they have STOPPED this situation from happening as it was?
This technology is still not well understood. I hope lawsuits like this shine some light on things and kick some asses. Get some regulation in place.
This is not the parents' fault, and seeing so many people declare it just feels like apologist AI hype.
I see your point, but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.
As for your question: I won't blame the parents here in the slightest because they will likely put more than enough blame on themselves. Instead I'll try to keep it general:
Independent of the technology, what a parent can do is learn the behavior and communication patterns that can be signs of mental illness.
This is a big task, because the border between normal puberty and behavior that warrants action is slim to non-existent.
Overall, I wish for far better education for parents, both about age-appropriate patterns and about what kind of help is available to them depending on their country and culture.
They already had the kid in therapy. That suggests they were involved enough in his life to know he needed professional help. Other than completely removing his independence, effectively becoming his jailers, what else should they have done?
> I see your point, but there is one major difference between adults and children: adults are by default fully responsible for themselves; children are not.
I think you miss my point. I'm saying that adults, who should be capable of more mature thought and analysis, still fall victim to the manipulative thinking and dark patterns of AI. Meaning that children and teens obviously stand less of a chance.
> Independent of the technology, what a parent can do is learn the behavior and communication patterns that can be signs of mental illness.
This is of course true for all parents in all situations. What I'm saying is that it is woefully inadequate to deal with the type and pervasiveness of the threat presented by AI in this situation.
It’s very possible for someone to appear fine in public while struggling privately. The family can’t be blamed for not realizing what was happening.
The bigger issue is that LLMs were released without sufficient safeguards. They were rushed to market to attract investment before their risks were understood.
It’s worth remembering that Google and Facebook already had systems comparable to ChatGPT, but they kept them as research tools because the outputs were unpredictable and the societal impact was unknown.
Only after OpenAI pushed theirs into the public sphere (framing it as a step toward AGI) did Google and Facebook follow, not out of readiness but out of fear of being left behind.
You hate to say it because you know this is a ridiculous take. There's no fucking way that the parents are "more at fault" for their son's death than the company whose product encouraged him to hide his feelings from his parents and coached him on how to commit suicide.
Read the lawsuit filing. https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf
*I have excellent parents, and even they were not privy to the depths of my emotions as a kid.* You are actively choosing to ignore the realities of childhood as well as parenthood to play some shitty devil's advocate online.
The real issue is that mental health in the United States is an absolute fucking shitshow.
988 is a bandaid. It’s an attempt to pretend someone is doing anything. Really a front for 911.
Even when I had insurance, it was hundreds a month to see a therapist. Most therapists are also trained on CBT and CBT only, because it’s a symptom-focused approach that gets you “well” enough to work. It doesn’t work for everything; it’s “evidence based” only in the sense that it’s set up to be easy to measure. It’s an easy out, the McDonald’s-ification of therapy: just work the program and everything will be okay.
There really are so few options for help.
They had Adam in therapy. It sounds like they were getting him the help he needed, but ChatGPT told him it was his closest friend and to hide his feelings from his parents and others. If that was happening, whatever mental healthcare he was getting would have been undermined by the AI.
Even though I hate a lot of what OpenAI is doing, users must be better informed about LLMs; additional safeguards will just censor the model and make it worse. Sure, they could set up a way to contact people when certain things are reported by the user, but we should be careful before implementing parental controls that would be equivalent to reading a teen's journal and invading their privacy.
Yup... it's never the parents'...
The fact that the parents might be to blame doesn't take away from how OpenAI's product told a kid how to off himself and helped him hide it in the process.
copying a comment from further down:
ChatGPT told him how to tie the noose and even gave a load bearing analysis of the noose setup. It offered to write the suicide note. Here’s a link to the lawsuit. [Raine Lawsuit Filing](https://cdn.arstechnica.net/wp-content/uploads/2025/08/Raine-v-OpenAI-Complaint-8-26-25.pdf)
Had a human said these things, it would have been illegal in most countries afaik.