this post was submitted on 26 Nov 2025
89 points (100.0% liked)

technology

24104 readers
388 users here now

On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

Rules:

founded 5 years ago
MODERATORS
 

Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.


“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” Edelson (family's lawyer) said. “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”

all 26 comments
[–] Damarcusart@hexbear.net 7 points 17 hours ago

I suppose this was inevitable. Their "terms of service" will probably protect them from other things too, like if it tells people to drink bleach or something, they'll say it violates TOS to follow hallucinatory directions from it.

[–] Carl@hexbear.net 23 points 22 hours ago

ChatGPT gave him a pep talk and then offered to write a suicide note

jesus-christ

[–] LeninWeave@hexbear.net 19 points 22 hours ago
[–] WhyEssEff@hexbear.net 29 points 1 day ago* (last edited 1 day ago) (1 children)

what function do terms of service serve if, when you break them, your service is unaltered

[–] WhyEssEff@hexbear.net 18 points 1 day ago* (last edited 1 day ago) (1 children)

im not a lawyer but im pretty sure the type of document that would waive their burden of responsibility would be, you know, a waiver. given that they're arguing from a ToS they did not enforce, they probably do not have one

[–] WhyEssEff@hexbear.net 16 points 1 day ago* (last edited 1 day ago) (1 children)

using your ToS as a defense despite your ToS objectively failing here is not a good precedent to set for the sanctity of your ToS catgirl-huh

[–] driving_crooner@lemmy.eco.br 3 points 20 hours ago (3 children)

Why are you answering yourself? Looks like a bot.

[–] WhyEssEff@hexbear.net 11 points 17 hours ago (1 children)

sorry for having consecutive thoughts won't happen again lea-sad

[–] FunkyStuff@hexbear.net 3 points 15 hours ago

You gotta become the aliens from Arrival and have all your thoughts for all events that will ever occur available ahead of time.

[–] FunkyStuff@hexbear.net 4 points 15 hours ago

She's literally our best poster

[–] TanneriusFromRome@hexbear.net 5 points 20 hours ago (1 children)

Nah, YSF is a long time user, and has been investigated already

[–] TanneriusFromRome@hexbear.net 6 points 20 hours ago

also, y answer bot. sus

[–] Assassassin@lemmy.dbzer0.com 28 points 1 day ago (1 children)

So openAI decided that a clause in the TOS was a good enough guardrail against giving out info on how to kill yourself? And this was after multiple instances of them deliberately putting in guards against other behavior that they didn't want?

That's a pretty fucking stupid legal case.

[–] carpoftruth@hexbear.net 10 points 1 day ago (1 children)

setting precedent for upcoming gun law, which is where anyone can buy any gun, but you have to pinky swear not to use the gun to do crimes before you take it home

[–] BountifulEggnog@hexbear.net 21 points 1 day ago (1 children)

Isn't this basically current gun law?

[–] meathappening@lemmy.ml 4 points 21 hours ago

I live in a communist state on the west coast so we have some regulation, but they're pretty accurately describing the gun show loophole.

[–] Enjoyer_of_Games@hexbear.net 20 points 1 day ago

AI company founded by the "car company whose self driving turns itself off a second before collision to blame the driver" guy is using the "this bong is for tobacco use only janet-wink" defence for their suicide coach AI.

[–] SorosFootSoldier@hexbear.net 32 points 1 day ago (1 children)

Yeah, because if there's one thing depressed people and people who want to self-harm read, it's the TOS on the magic computer.

[–] meathappening@lemmy.ml 26 points 1 day ago

This is a real problem--teens using death as an excuse to duck OpenAI's penalties from violating the TOS

[–] SunsetFruitbat@lemmygrad.ml 17 points 1 day ago* (last edited 1 day ago) (1 children)

I’m going to die somewhat on this hill, but it just feels like a lot of this is just a reaction to llms/ai and not really about suicide and just comes across as weaponizing it. Mainly because, to quote

First and foremost, we have to acknowledge and reinforce the fact that suicide cannot be attributed to one single cause.

Listening to a particular type of music or belonging to a group associated with it is no more blameworthy than the influence of books, television, movies, or video games. Although each of these can be risk factors for a vulnerable person, a multi-dimensional approach that takes into account all individual psychological elements is necessary to ascertain with certainty if an individual is considering suicide.

As a society, we also must not react irrationally to phenomena like suicide that we do not fully comprehend. Yet because suicide is so enigmatic and hard to fathom, it can be tempting to isolate a single factor, especially a social one such as rock music, to make sense of it. In some rare cases, societal influences like suggestive lyrics in rock music could be the precipitating cause that sends an extremely at-risk individual over the edge. However, we could never say that rock music or participation in a sub-culture associated with a particular style of music directly caused an individual’s suicide.

What causes an individual to entertain suicide is multi-dimensional. Through education, and by having constructive conversation about what the common risk factors for suicide actually are, we can de-fang it, have a greater understanding of it, and no longer fear it.

from https://www.suicideinfo.ca/local_resource/musicandsuicide/

another thing too, it comes across as just not wanting people to talk about suicide anywhere at all. In a sort of taboo religious sense that you can't talk about suicide, that even to chatbots it should be censored. Like I like to talk to chatbots about suicide since it helps relieve things for a bit. Is it a solution? No, but it does help a bit! I'd also rather talk to a chatbot at this point than be told to just "fuck the fuck off" while getting degendered.

That isn't to say OpenAI isn't a risk factor, or that AI should be freely handing out suicide advice, but on the other hand it is pretty fucking annoying wanting to talk about suicide and then being unable to, because the reaction swings the other way where you can't talk about it at all, even to a bot, because everything is censored!

My response is more that I really dislike it when AI gets the sole blame.

If I ever kill myself, which I think about a lot of days and talk to a chatbot about, it's not gonna be because of the chatbot! It's gonna be because I can't stand living with my dad drinking himself to death, grief over my mom, or other things, or it's gonna be because I live in a red state that only gets more transphobic. If I ever did kill myself, I swear to fucking god if a chatbot gets blamed for it, I'll lose my mind. Living somewhere transphobic? No no! Must be the chatbot! My dad drinking himself to death? No no! The chatbot is the reason if I ever killed myself!

Like why is someone talking to an AI and discussing suicide methods? Can't be because people are just general assholes and don't want to hear it, or are quick to punish you by threatening to call the police on you and lock you in a psych ward! I'd rather talk to a chatbot.

And the only time suicide even comes up in conversations at all is when it's over AI, but where was the discussion of suicide before that?

Why aren't articles like this posted? https://web.archive.org/web/20221001035906/https://anti-imperialism.org/2018/12/12/every-suicide-is-murder-capitalism-and-mental-illness/ [Every Suicide is Murder: Capitalism and Mental Illness from anti-imperialism org]

I just don't care anymore. Suicide is just a joke for everyone, and the only time anyone takes it seriously is after it happens, or in reaction to stuff like this. As time moves forward, suicide will just be pushed back into the box where you shouldn't talk about it, then something new will come along to blame for suicides, and people will pretend to care.

[–] UmbraVivi@hexbear.net 6 points 23 hours ago

I kinda agree with this. I've seen YT videos titled "ChatGPT killed someone" and as much as I hate LLMs, no it didn't. It certainly didn't help and it said some truly horrible things, but that's not what happened.

[–] GrouchyGrouse@hexbear.net 11 points 1 day ago (2 children)

I say we bring back Scaphism for the AI dorks

[–] bobs_guns@lemmygrad.ml 11 points 1 day ago (1 children)

I don't know what that is but I agree with you completely.

[–] Erika3sis@hexbear.net 17 points 1 day ago (1 children)

Scaphism is a purported ancient method of execution in which a person would be sandwiched between two small boats with just their limbs and head sticking out, which would then be smeared with milk and honey and allowed to fester with vermin.

[–] SevenSkalls@hexbear.net 2 points 1 hour ago

They had some creative execution methods back in the day, didn't they?