I suppose this was inevitable. Their "terms of service" will probably protect them from other things too, like if it tells people to drink bleach or something, they'll say it violates TOS to follow hallucinatory directions from it.
ChatGPT gave him a pep talk and then offered to write a suicide note


what function do terms of service serve if, when you break them, your service is unaltered
im not a lawyer but im pretty sure the type of document that would waive their burden of responsibility would be, you know, a waiver. given that they're arguing from a ToS they did not enforce, they probably don't have one
using your ToS as a defense despite your ToS objectively failing here is not a good precedent to set for the sanctity of your ToS 
Why are you answering yourself? Looks like a bot.
sorry for having consecutive thoughts won't happen again 
You gotta become the aliens from Arrival and have all your thoughts for all events that will ever occur available ahead of time.
She's literally our best poster
So openAI decided that a clause in the TOS was a good enough guardrail against giving out info on how to kill yourself? And this was after multiple instances of them deliberately putting in guards against other behavior that they didn't want?
That's a pretty fucking stupid legal case.
setting precedent for upcoming gun law, which is where anyone can buy any gun, but you have to pinky swear not to use the gun to do crimes before you take it home
Isn't this basically current gun law?
I live in a communist state on the west coast so we have some regulation, but they're pretty accurately describing the gun show loophole.
AI company founded by the "car company whose self-driving turns itself off a second before collision to blame the driver" guy is using the "this bong is for tobacco use only" defence for their suicide coach AI.
Yeah, because if there's one thing depressed people and people who want to self-harm read, it's the ToS on the magic computer.
This is a real problem--teens using death as an excuse to duck OpenAI's penalties from violating the TOS
I’m going to die somewhat on this hill, but it feels like a lot of this is a reaction to LLMs/AI and not really about suicide, and it comes across as weaponizing it. Mainly because, to quote:
First and foremost, we have to acknowledge and reinforce the fact that suicide cannot be attributed to one single cause.
Listening to a particular type of music or belonging to a group associated with it is no more blameworthy than the influence of books, television, movies, or video games. Although each of these can be risk factors for a vulnerable person, a multi-dimensional approach that takes into account all individual psychological elements is necessary to ascertain with certainty if an individual is considering suicide.
As a society, we also must not react irrationally to phenomena like suicide that we do not fully comprehend. Yet because suicide is so enigmatic and hard to fathom, it can be tempting to isolate a single factor, especially a social one such as rock music, to make sense of it. In some rare cases, societal influences like suggestive lyrics in rock music could be the precipitating cause that sends an extremely at-risk individual over the edge. However, we could never say that rock music or participation in a sub-culture associated with a particular style of music directly caused an individual’s suicide.
What causes an individual to entertain suicide is multi-dimensional. Through education, and by having constructive conversation about what the common risk factors for suicide actually are, we can de-fang it, have a greater understanding of it, and no longer fear it.
from https://www.suicideinfo.ca/local_resource/musicandsuicide/
Another thing too, but it comes across as just wanting to punish people for not wanting to talk about suicide anywhere at all. In a sort of taboo religious sense that you can’t talk about suicide, that even to chatbots it should be censored. Like, I like to talk to chatbots about suicide since it helps relieve things for a bit. Is it a solution? No, but it does help a bit! I’d also rather talk to a chatbot at this point than be told to just “fuck the fuck off” while getting degendered.
That isn’t to say OpenAI isn’t a risk factor, or that AI should be freely handing out suicide advice. But on the other hand, it is pretty fucking annoying wanting to talk about suicide and then being unable to, due to the reaction in the other direction where you can’t talk about it at all, even to a bot, because everything gets censored!
My response is more that I really dislike it when AI is blamed as the sole cause.
If I ever kill myself, which I think about a lot of days and talk to a chatbot about, it’s not gonna be because of the chatbot! It’s gonna be because I can’t stand living with my dad drinking himself to death, or grief over my mom, or other things, or because I live in a red state that only gets more transphobic. If I ever did kill myself, I swear to fucking god, if a chatbot gets blamed for it, I’ll lose my mind. Living somewhere transphobic? No no! Must be the chatbot! My dad drinking himself to death? No no! The chatbot must be the reason if I ever killed myself!
Like, why is someone talking to an AI and discussing suicide methods? Can’t be because people are just general assholes and don’t want to hear it, or are quick to punish you by threatening to call the police on you and lock you in a psych ward! I'd rather talk to a chatbot.
And the only time suicide even comes up in conversation at all is when it’s about AI. But where was the discussion of suicide before that?
Why aren't articles like this posted? https://web.archive.org/web/20221001035906/https://anti-imperialism.org/2018/12/12/every-suicide-is-murder-capitalism-and-mental-illness/ [Every Suicide is Murder: Capitalism and Mental Illness from anti-imperialism org]
I just don't care anymore. Suicide is just a joke for everyone, and the only time anyone takes it seriously is after the fact, or as a reaction to stuff like this. But as time moves forward, suicide will just be pushed back into the box where you shouldn't talk about it. Then something new will come along to blame for suicides, and people will pretend to care.
I kinda agree with this. I've seen YT videos titled "ChatGPT killed someone" and as much as I hate LLMs, no it didn't. It certainly didn't help and it said some truly horrible things, but that's not what happened.
I say we bring back Scaphism for the AI dorks
I don't know what that is but I agree with you completely.
Scaphism is a purported ancient method of execution in which a person would be sandwiched between two small boats with just their limbs and head sticking out, which would then be smeared with milk and honey and allowed to fester with vermin.
They had some creative execution methods back in the day, didn't they?