this post was submitted on 27 Nov 2025
Not The Onion
you are viewing a single comment's thread
> Build a yes-man
> It is good at saying "yes"
> Someone asks it a question
> It says yes
> Everyone complains
ChatGPT is a (partially) stupid technology without enough safeguards. But it's fundamentally just autocomplete. That's the technology. It did what it was supposed to do.
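To make the "it's just autocomplete" point concrete, here's a toy sketch of greedy next-token prediction. This is not how ChatGPT actually works (real models use learned neural networks, not a lookup table); the table, words, and probabilities below are entirely made up to show the shape of the idea: the model always picks the likeliest continuation, so a table biased toward agreement produces a "yes-man".

```python
# Toy "autocomplete": repeatedly pick the most likely next token.
# A hard-coded bigram table stands in for a learned language model.
BIGRAMS = {
    "should": {"yes": 0.9, "no": 0.1},       # agreement dominates: a "yes-man" table
    "yes": {"definitely": 0.8, ".": 0.2},
    "definitely": {".": 1.0},
}

def next_token(prev: str) -> str:
    """Greedy decoding: return the highest-probability continuation."""
    probs = BIGRAMS.get(prev, {".": 1.0})    # unknown token: just end the sentence
    return max(probs, key=probs.get)

def complete(prompt: list[str], max_len: int = 5) -> list[str]:
    """Extend the prompt one token at a time until '.' or max_len."""
    out = list(prompt)
    while len(out) < max_len and out[-1] != ".":
        out.append(next_token(out[-1]))
    return out

print(complete(["should"]))  # the table makes agreement the likeliest path
```

The point of the sketch: nothing in the loop "decides" anything; it only continues text in the statistically likeliest direction, which is the crux of the disagreement in this thread.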
I hate to defend OpenAI on this, but if you're so mentally ill (dunno if that's the right word here?) that you can be driven to suicide by some online chats [1], then the people who gave you internet access are to blame too.
[1] If this were a human encouraging him to commit suicide, this wouldn't be newsworthy...
While I agree, the markdown guide is right there in the editor toolbar along with formatting buttons, and we don't need to break semantic structure like that.[^footnote]
[^footnote]: you can toggle the view source button to see how to write this footnote
You don't think pushing glorified predictive text keyboard as a conversation partner is the least bit negligent?
Nope. Text is text: it can't compel action.
It is. But the ChatGPT interface reminds you of that when you first create an account. (At least it did when I created mine.)
At some point we have to give the responsibility to the user. Just like with Kali Linux or other pentesting tools: you wouldn't (shouldn't) blame them for the latest ransomware attack either.
That is such a fucked up take on this. Instead of placing the responsibility on the piece-of-shit billionaires force-feeding this glorified text prediction to everyone, and on politicians allowing minors access to smartphones, you turn off your brain and hop straight over to victim-blaming. I hope you will slap yourself for this comment after some time to reflect on it.
Like hell it wouldn't, do you live under a rock?
If a human tells you how to commit suicide, and you do, that's on you, though.
Human was charged with manslaughter. It was huge news at the time.
I get where you're coming from: people, and those directly responsible for them, will always bear a large portion of the blame, and you can only take safety so far.
However, that blame can only go so far as well, because the designers of a thing who overlook or ignore safety loopholes should bear responsibility for their failures. We know some people will always be more susceptible to implicit suggestions than others are and that not everyone has someone who's responsible over them in the first place, so we need to design AIs accordingly.
Think of it like blaming only an employee's shift supervisor when a worker dies in a work environment that is itself unsafe. Or think of it like blaming only the gun user and not the gun laws. Yes, individual responsibility is a thing, but the system as a whole has a responsibility all its own.
No, we don't. The harm was self-inflicted. The reader had unlimited time to contemplate their actions before committing them. This is entirely on the user.
If this is what ChatGPT is "supposed to do" then that's the problem. A yes-man that will say yes to anything, even suicide, is dangerous.