this post was submitted on 26 Nov 2025
91 points (100.0% liked)

Facing five lawsuits alleging wrongful deaths, OpenAI lodged its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing that the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.


“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” Edelson (the family's lawyer) said. “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”

all 26 comments
[–] SorosFootSoldier@hexbear.net 34 points 2 months ago (1 children)

Yeah, because if there's one thing depressed people and people who want to self-harm read, it's the ToS on the magic computer.

[–] meathappening@lemmy.ml 27 points 2 months ago

This is a real problem--teens using death as an excuse to duck OpenAI's penalties for violating the TOS

[–] WhyEssEff@hexbear.net 30 points 2 months ago* (last edited 2 months ago) (1 children)

what function do terms of service serve if, when you break them, your service is unaltered

[–] WhyEssEff@hexbear.net 19 points 2 months ago* (last edited 2 months ago) (1 children)

im not a lawyer but im pretty sure the type of document that would waive their burden of responsibility would be, you know, a waiver. given that they're arguing from a ToS they did not enforce, they probably do not have one

[–] WhyEssEff@hexbear.net 17 points 2 months ago* (last edited 2 months ago) (1 children)

using your ToS as a defense despite your ToS objectively failing here is not a good precedent to set for the sanctity of your ToS catgirl-huh

[–] driving_crooner@lemmy.eco.br 3 points 2 months ago (3 children)

Why are you answering yourself? Looks like a bot.

[–] WhyEssEff@hexbear.net 13 points 2 months ago (1 children)

sorry for having consecutive thoughts won't happen again lea-sad

[–] FunkyStuff@hexbear.net 4 points 2 months ago

You gotta become the aliens from Arrival and have all your thoughts for all events that will ever occur available ahead of time.

[–] TanneriusFromRome@hexbear.net 6 points 2 months ago (1 children)

Nah, YSF is a longtime user, and has been investigated already

[–] TanneriusFromRome@hexbear.net 6 points 2 months ago

also, y answer bot. sus

[–] FunkyStuff@hexbear.net 5 points 2 months ago

She's literally our best poster

[–] Assassassin@lemmy.dbzer0.com 29 points 2 months ago (1 children)

So OpenAI decided that a clause in the TOS was a good enough guardrail against giving out info on how to kill yourself? And this was after multiple instances of them deliberately putting in guards against other behavior that they didn't want?

That's a pretty fucking stupid legal case.

[–] carpoftruth@hexbear.net 11 points 2 months ago (1 children)

setting precedent for upcoming gun law, which is where anyone can buy any gun, but you have to pinky swear not to use the gun to do crimes before you take it home

[–] BountifulEggnog@hexbear.net 22 points 2 months ago (1 children)

Isn't this basically current gun law?

[–] meathappening@lemmy.ml 4 points 2 months ago

I live in a communist state on the west coast so we have some regulation, but they're pretty accurately describing the gun show loophole.

[–] Carl@hexbear.net 24 points 2 months ago

ChatGPT gave him a pep talk and then offered to write a suicide note

jesus-christ

[–] Enjoyer_of_Games@hexbear.net 21 points 2 months ago

AI company founded by the "car company whose self-driving turns itself off a second before collision to blame the driver" guy is using the "this bong is for tobacco use only janet-wink" defence for their suicide coach AI.

[–] LeninWeave@hexbear.net 19 points 2 months ago

[–] GrouchyGrouse@hexbear.net 11 points 2 months ago (2 children)

I say we bring back Scaphism for the AI dorks

[–] bobs_guns@lemmygrad.ml 11 points 2 months ago (1 children)

I don't know what that is but I agree with you completely.

[–] Erika3sis@hexbear.net 17 points 2 months ago (1 children)

Scaphism is a purported ancient method of execution in which a person would be sandwiched between two small boats with just their limbs and head sticking out; the exposed parts would then be smeared with milk and honey and left to fester with vermin.

[–] SevenSkalls@hexbear.net 3 points 2 months ago

They had some creative execution methods back in the day, didn't they?

[–] Damarcusart@hexbear.net 8 points 2 months ago

I suppose this was inevitable. Their "terms of service" will probably protect them from other things too: if it tells people to drink bleach or something, they'll say it violated the ToS to follow its hallucinatory directions.