Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.
“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” Edelson, the family's lawyer, said. “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”
what function do terms of service serve if when you break them your service is unaltered
im not a lawyer but im pretty sure the type of document that would waive their burden of responsibility would be, you know, a waiver. given that they're arguing from a ToS they did not enforce, they probably do not have one
using your ToS as a defense despite your ToS objectively failing here is not a good precedent to set for the sanctity of your ToS
Why are you answering yourself? Looks like a bot.
sorry for having consecutive thoughts won't happen again
You gotta become the aliens from Arrival and have all your thoughts for all events that will ever occur available ahead of time.
She's literally our best poster
Nah, YSF is a long time user, and has been investigated already
also, why answer a bot. sus