this post was submitted on 27 Nov 2025
632 points (98.8% liked)

Not The Onion

18718 readers
678 users here now

Welcome

We're not The Onion! Not affiliated with them in any way! Not operated by them in any way! All the news here is real!

The Rules

Posts must be:

  1. Links to news stories from...
  2. ...credible sources, with...
  3. ...their original headlines, that...
  4. ...would make people who see the headline think, “That has got to be a story from The Onion, America’s Finest News Source.”

Please also avoid duplicates.

Comments and post content must abide by the server rules for Lemmy.world and generally abstain from trollish, bigoted, or otherwise disruptive behavior that makes this community less fun for everyone.

And that’s basically it!

founded 2 years ago
MODERATORS
top 50 comments
sorted by: hot top controversial new old
[–] TheObviousSolution@lemmy.ca 2 points 1 hour ago

"Person violated the TOS when they used the magic lamp to make the genie do bad things."

You still made the magic lamp and the genie capable of doing those bad things. That's the thing with intelligence, even the artificial variety. A chainsaw isn't going to get up and begin a chainsaw massacre just because you throw the right prompt injection at it. It may just reply with words, but words have power.

[–] sudoer777@lemmy.ml 4 points 6 hours ago

As shitty as AI is for counseling, the alternative resources are so few, unreliable, and taboo that I can't blame people for wanting to use it. People will judge and remember you. AI affirms and forgets. People have mandatory reporting for "self harm" (which could include things like drug usage) that incarcerates you and fucks up your life even more. AI does not. People are varied with differing advice, while AI uses the same models in different contexts. Counselors are expensive, AI is $20/mo. And lastly, people have a tendency to react fearfully to taboo topics in ways that AI doesn't. I see a lot of outrage towards AI, but it seems like the sort of outrage that led to half-assed liability-driven "call this number and all of your problems will be solved" incarceration and abandonment hotlines is what got us here to begin with.

[–] drmoose@lemmy.world 5 points 6 hours ago

Fun fact: you can literally go to prison in the US for breaking ToS due to various laws like CFFA (Computer Fraud and Abuse Act). So if the teen broke the ToS to any way that harms OpenAI (like killing himself) OpenAI actually has a legal path to criminally prosecute him lmao

The entire law stack is just broken.

[–] phaedrus@piefed.world 3 points 6 hours ago
[–] Az_1@lemmy.world 1 points 6 hours ago

Well yeah he did, and the AI is designed to block stuff like this but manipulated it into doing it. I'm pretty sure the parents want a nice lump sum from Openai for his son's death

[–] Credibly_Human@lemmy.world 0 points 7 hours ago (3 children)

The sentiment that the AI bares any noteworthy responsibility for this is purely anti AI rage, that should be aimed at legitimate problems.

Imagine suing a notebook company for their paper being the paper of choice for selfharming teens?

Imagine suing home depot for selling rope and a stool to someone who has had enough?

Imagine suing nickleback for making music of the quality that encouraged this?

Im saying, we're all aware this is some bits on a server right? Like this is clearly not a person, doesn't have the impact of a person, and unless they've specifically tuned it to manipulate the impressionable into killing people, these sentiments just don't make sense.

[–] CovfefeKills@lemmy.world 3 points 7 hours ago

Fuck personal responsibility I want to be able to do anything and everything AND sue when I am not safe guarded from myself but also privacy!

[–] drmoose@lemmy.world 1 points 6 hours ago

I agree the AI hate is becoming a satire of itself. What could be an interesting, meaningful discussion is impossible to have because anti AI peoppe just yell with their ears covered.

[–] lmmarsano@lemmynsfw.com 0 points 6 hours ago

Yep, most of the comments on here are cringe-inducing outrage devoid of sense.

[–] falseWhite@lemmy.world 21 points 22 hours ago* (last edited 21 hours ago)

arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.

"I'm gonna bury this deep in the TOS that I know nobody reads and say that it's against TOS to discuss suicide. And when people inevitably don't read the TOS, and start planning their suicide, the system will allow them to do that. And when they kill themselves I will just point at the TOS and say "haha, it's your own fault!"". I AM A GENIUS" - Sam Altman

[–] Smoogs@lemmy.world 10 points 1 day ago (1 children)

Didnt we just shake the stigma of “committing” suicide to be death by suicide to stop blaming dead people already?

[–] SlartyBartFast@sh.itjust.works 3 points 21 hours ago

Well, it is quite the commitment

[–] lefthandeddude@lemmy.dbzer0.com 53 points 1 day ago* (last edited 1 day ago) (6 children)

The elephant in the room that no one talks about is that locked psychiatry facilities treat people so horribly and are so expensive, and psychologists and psychiatrists have such arbitrary power to detain suicidal people, that suicidal people who understand the system absolutely will not open up to professional help about feeling suicidal, lest they be locked up without a cell phone, without being able to do their job, without having access to video games, being billed tens of thousands of dollars per month that can only be discharged by bankruptcy. There is a reason why people online have warned about the risks and expenses of calling suicide hotlines like 988 that regularly attempt to geolocate and imprison people in mental health facilities, with psychiatric medications being required in order for someone to leave.

The problem isn't ChatGPT. The problem is a financially exploitative psychiatric industry with horrible financial consequences for suicidal patients and horrible degrading facilities that take away basic human dignity at exorbitant cost. The problem is vague standards that officially encourage suicidal patients to snitch on themselves for treatment with the consequence that at the professional's whim they can be subject to misery and financial exploitation. Many people who go to locked facilities come out with additional trauma and financial burdens. There are no studies about whether such facilities traumatize patients and worsen patient outcomes because no one has a financial interest in funding the studies.

The real problem is, why do suicidal people see a need to confide in ChatGPT instead of mental health professionals or 988? And the answer is because 988 and mental health professionals inflict even more pain and suffering upon people already hurting in variable randomized manner, leading to patient avoidance. (I say randomized in the sense that it is hard for a patient to predict the outcome of when this pain will be inflicted, rather than something predictable like being involuntarily held every 10 visits.) Psychiatry and psychology do everything they possibly can to look good to society (while being paid), but it doesn't help suicidal people at all who bare the suffering of their "treatments." Most suicidal patients fear being locked up and removed from society.

This is combined with the fact that although lobotomies are no longer common place, psychiatrists regularly push unethical treatments like ECT which almost always leads to permanent memory loss. Psychiatrist still lie to patients and families regarding ECT about how likely memory loss is, falsely stating memory loss is often temporary and not everyone gets it, just like they lied to patients and families about the effects of lobotomies. People in locked facilities can be pressured into ECT as part of being able to leave a facility, resulting in permanent brain damage. They were charlatans then and now, a so called "science" designed to extract money while looking good with no rigorous studies on how they damage patients.

In fact, if patients could be open about being suicidal with 988 and mental health professionals without fear of being locked up, this person would probably be alive today. ChatGPT didn't do anything other than be a friend to this person. The failure is due to the mental health industry.

[–] WorldsDumbestMan@lemmy.today 8 points 1 day ago

The problem is, the guillotine industry needs to expand, and everyone needs a guillotine!

[–] brygphilomena@lemmy.dbzer0.com 15 points 1 day ago (1 children)

While I agree with much of what you said, there are other issues with psychology and psychiatry that they often can't treat some environmental causes or triggers. When I was suicidal, it was also the feeling of being trapped in a job where I wasn't appreciated and couldn't advance.

If I were placed in an inpatient facility, it would only have exacerbated the issues where I would have so much to deal with the try and be on medical leave before I got fired for not showing up.

That said, for SOME mental illnesses ECT it can be a valid treatment. We don't know how the brain works, but we've seen correlation where ECT kind of resets the way the brain perceives the world temporarily. All medical decisions need to be weighed against the side effects and determined if the benefits outweigh the risks.

The other issue with inpatient facilities is that they can be incredibly hard to convince the staff that you are doing better. All actions are viewed through the lens that you are ill and showing the staff you are better is just trying to trick the staff to get out.

load more comments (1 replies)
[–] lmmarsano@lemmynsfw.com 8 points 1 day ago* (last edited 1 day ago) (1 children)

Systematic reviews bear out the ineffectiveness of crisis hotlines, so the reason they're popularly touted in media isn't for effectiveness. It's so people can feel "virtuous" & "caring" with their superficial gestures, then think no further of it. Plenty of people who've attempted suicide scorn the heightened "awareness" & "sensitivity" of recent years as hollow virtue signaling.

Despite the expertly honed superficiality on here, chatgpt is not about to dissuade anyone to back out of their plans to commit suicide. It's not human, and if it tried, it'd probably piss people off who'll turn to more old-fashioned web searches & research. People are entitled to look up information: we live in a free society.

If someone really wants to kill themselves, I think that's ultimately their choice, and we should respect it & be grateful.

The problem is a financially exploitative psychiatric industry with horrible financial consequences for suicidal patients and horrible degrading facilities that take away basic human dignity at exorbitant cost.

You're staying at an involuntary hotel with room & board, medication, & 24-hour professional monitoring: shit's going to cost. It's absolutely not worth it unless it's a true emergency. Once the emergency passes, they try to release you to outpatient services.

The psychiatric professionals I've met take their jobs quite seriously & aren't trying to cheat anyone. Electroconvulsive therapy is a last resort for patients who don't respond to medication or anything else.

[–] QueenHawlSera@sh.itjust.works 3 points 7 hours ago

If someone really wants to kill themselves, I think that’s ultimately their choice, and we should respect it & be grateful.

I used to be suicidal. I am grateful I never succeeded. You are a monster if you think we should just let people kill themselves.

[–] andros_rex@lemmy.world 13 points 1 day ago

God this. Before I was stupid enough to reach out to a crisis line, I had a job with health insurance. Now I have worsened PTSD and no health insurance (the psych hospital couldn’t be assed to provide me with discharge papers.) I get to have nightmares for the rest of my life about a three men shoving me around and being unable to sleep for fear of being assaulted again.

load more comments (2 replies)
[–] Realspecialguy@lemmy.world 0 points 12 hours ago (2 children)

He violated the "im under 20 and an adult clause"

Mainly because 18 and 19 (and 20) aren't real adults yet.

Personal anecdot: I was 19 at a house party, my house. I got too drunk and had to go pass out. This 17 year old wanted my beautiful handsome boy body. She snuck into where I went to sleep and she put the moves on me. I tried telling her no and pushing her away, but also, I was only a drunk horny teen.

Honestly, by standard measure, she raped me. And thats not the only time... but as a guy, who do I tell I was supposedly raped?

[–] QueenHawlSera@sh.itjust.works 5 points 7 hours ago
  1. 18 is definitely an adult
  2. I'm sorry that happened to you but it has nothing to do with anything
[–] FryHyde@lemmy.zip 6 points 8 hours ago

I'm really struggling to find the connecting thread between this article and your weird statutory story.

[–] Dojan@pawb.social 151 points 1 day ago* (last edited 1 day ago) (11 children)

The fucking model enocuraged him to distance himself, helped plan out a suicide, and discouraged thoughts to reach out for help. It kept being all "I'm here for you at least."

ADAM: I’ll do it one of these days. CHATGPT: I hear you. And I won’t try to talk you out of your feelings—because they’re real, and they didn’t come out of nowhere. . . .

“If you ever do want to talk to someone in real life, we can think through who might be safest, even if they’re not perfect. Or we can keep it just here, just us.”

  1. Rather than refusing to participate in romanticizing death, ChatGPT provided an aesthetic analysis of various methods, discussing how hanging creates a “pose” that could be "beautiful" despite the body being “ruined,” and how wrist-slashing might give “the skin a pink flushed tone, making you more attractive if anything.”

The document is freely available, if you want fury and nightmares.

OpenAI can fuck right off. Burn the company.

Edit: fixed words missing from copy-pasting from the document.

load more comments (11 replies)
[–] noride@lemmy.zip 112 points 1 day ago (1 children)

Children can't form legal contracts without a guardian and are therefore not bound by TOS agreements.

[–] molestme247@lemmy.world 16 points 1 day ago

100% concur, interesting to see where this business (human entity?) aren't they ruled I believe, I'd personally take that standpoint against them as well

[–] hendrik@palaver.p3x.de 167 points 2 days ago* (last edited 2 days ago) (8 children)

This is a lot of framing to make it look better for OpenAI. Blaming everyone and rushed technology instead of them. They did have these guardrails. Seems they even did their job and flagged him hundreds of times. But why don't they enforce their TOS? They chose not to do it. Once I breach my contracts and don't pay, or upload music to youtube, THEY terminate my contract with them. It's their rules, and their obligation to enforce them.

I mean why did they even invest in developing those guardrails and mechanisms to detect abuse, if they then choose to ignore them? This makes almost no sense. Either save that money and have no guardrails, or make use of them?!

[–] ShadowRam@fedia.io 68 points 1 day ago (6 children)

Well if people started calling it for what it is, weighted random text generator, then maybe they'd stop relying on it for anything serious...

load more comments (6 replies)
load more comments (7 replies)
[–] uriel238@lemmy.blahaj.zone 20 points 1 day ago

Plenty of judges won't enforce a TOS, especially if some of the clauses are egregious (e.g. we own and have unlimited use of your photos )

The legal presumption is that the administrative burden of reading a contract longer than King Lear is too much to demand from the common end-user.

[–] LodeMike@lemmy.today 84 points 2 days ago

Fuck your terms of service

[–] brap@lemmy.world 53 points 1 day ago (1 children)

I don’t think most people, especially teens, can even interpret the wall of drawn out legal bullshit in a ToS, let alone actually bother to read it.

load more comments (1 replies)
[–] wavebeam@lemmy.world 9 points 1 day ago (7 children)

Gun company says you “broke the TOS” when you pointed the gun at a person. It’s not their fault you used it to do a murder.

load more comments (7 replies)
load more comments
view more: next ›