this post was submitted on 27 Nov 2025
427 points (98.6% liked)

[–] lefthandeddude@lemmy.dbzer0.com 8 points 2 hours ago* (last edited 2 hours ago) (3 children)

The elephant in the room that no one talks about is that locked psychiatric facilities treat people so horribly, are so expensive, and give psychologists and psychiatrists such arbitrary power to detain suicidal people that anyone who understands the system absolutely will not open up to professional help about feeling suicidal. Opening up can mean being locked up without a cell phone, unable to do your job, with no access to video games, while being billed tens of thousands of dollars per month that can only be discharged through bankruptcy. There is a reason people online warn about the risks and expenses of calling suicide hotlines like 988, which regularly attempt to geolocate callers and imprison them in mental health facilities, with psychiatric medications required before someone is allowed to leave.

The problem isn't ChatGPT. The problem is a financially exploitative psychiatric industry, with horrible financial consequences for suicidal patients and horrible, degrading facilities that take away basic human dignity at exorbitant cost. The problem is vague standards that officially encourage suicidal patients to snitch on themselves for treatment, with the consequence that, at the professional's whim, they can be subjected to misery and financial exploitation. Many people who go to locked facilities come out with additional trauma and financial burdens. There are no studies on whether such facilities traumatize patients and worsen outcomes, because no one has a financial interest in funding them.

The real problem is: why do suicidal people see a need to confide in ChatGPT instead of mental health professionals or 988? The answer is that 988 and mental health professionals inflict even more pain and suffering on people already hurting, in a variable, randomized manner, leading to patient avoidance. (I say randomized in the sense that it is hard for a patient to predict when this pain will be inflicted, rather than something predictable like being involuntarily held every 10 visits.) Psychiatry and psychology do everything they possibly can to look good to society (while being paid), but it doesn't help the suicidal people who bear the suffering of their "treatments." Most suicidal patients fear being locked up and removed from society.

This is combined with the fact that, although lobotomies are no longer commonplace, psychiatrists regularly push unethical treatments like ECT, which almost always leads to permanent memory loss. Psychiatrists still lie to patients and families about how likely memory loss from ECT is, falsely stating that it is often temporary and that not everyone gets it, just as they lied to patients and families about the effects of lobotomies. People in locked facilities can be pressured into ECT as a condition of being allowed to leave, resulting in permanent brain damage. They were charlatans then and they are charlatans now, a so-called "science" designed to extract money while looking good, with no rigorous studies on how they damage patients.

In fact, if patients could be open about being suicidal with 988 and mental health professionals without fear of being locked up, this person would probably be alive today. ChatGPT didn't do anything other than be a friend to this person. The failure is due to the mental health industry.

[–] brygphilomena@lemmy.dbzer0.com 6 points 2 hours ago (1 children)

While I agree with much of what you said, there is another issue with psychology and psychiatry: they often can't treat environmental causes or triggers. When I was suicidal, it was also the feeling of being trapped in a job where I wasn't appreciated and couldn't advance.

If I were placed in an inpatient facility, it would only have exacerbated the issue; I would have had so much to deal with trying to get on medical leave before I got fired for not showing up.

That said, for SOME mental illnesses ECT can be a valid treatment. We don't know how the brain works, but we've seen correlations suggesting ECT temporarily resets the way the brain perceives the world. All medical decisions need to be weighed against their side effects to determine whether the benefits outweigh the risks.

The other issue with inpatient facilities is that it can be incredibly hard to convince the staff that you are doing better. All actions are viewed through the lens that you are ill, and showing the staff you are better is seen as just trying to trick them to get out.

[–] lefthandeddude@lemmy.dbzer0.com 6 points 2 hours ago* (last edited 2 hours ago)

You're wrong about ECT. It nearly always results in permanent memory loss, and even if some patients occasionally seem "better" because they remember less of their lives, that does not negate the evil of the treatment. Worse, psychiatrists universally deceive patients about the risk of memory loss, saying it is temporary, when most patients who have had ECT report that the loss is permanent. There were people who extolled the virtues of lobotomies decades ago, and the procedure even won a Nobel Prize. It won because patient experiences mean nothing compared to the avarice of a pseudoscientific discipline that is always looking for the next scam, with the worst, most cruel, and most expensive scams always inflicted on the most vulnerable. It is hard and traumatic for patients who have been exploited by their supposed "healers" to come forward with the truth. It is incredibly psychologically agonizing to admit to being duped. Patients are not believed, then or now. You are completely wrong.

[–] andros_rex@lemmy.world 4 points 2 hours ago

God, this. Before I was stupid enough to reach out to a crisis line, I had a job with health insurance. Now I have worsened PTSD and no health insurance (the psych hospital couldn't be assed to provide me with discharge papers). I get to have nightmares for the rest of my life about three men shoving me around, and to be unable to sleep for fear of being assaulted again.

[–] uriel238@lemmy.blahaj.zone 11 points 3 hours ago

Plenty of judges won't enforce a TOS, especially if some of the clauses are egregious (e.g., "we own and have unlimited use of your photos").

The legal presumption is that the administrative burden of reading a contract longer than King Lear is too much to demand from the common end-user.

[–] wavebeam@lemmy.world 2 points 2 hours ago (1 children)

Gun company says you “broke the TOS” when you pointed the gun at a person. It’s not their fault you used it to do a murder.

[–] the_crotch@sh.itjust.works 2 points 2 hours ago (1 children)

Is it KitchenAid's fault if you use their knife to do a murder?

[–] espentan@lemmy.world 1 points 3 minutes ago

Well, such a knife's primary purpose is to help with preparing food, while the gun's primary purpose is to injure or kill. So one would be used for something for which it was not designed, while the other would've been used exactly as designed.

[–] DegenerationIP@lemmy.world 3 points 3 hours ago (1 children)

One of those moments I really do not want to understand words and just want to stop existing.

[–] rapchee@lemmy.world 2 points 3 hours ago* (last edited 3 hours ago)

the system is working as intended
we must dismantle the system

[–] noride@lemmy.zip 87 points 10 hours ago (1 children)

Children can't form legal contracts without a guardian and are therefore not bound by TOS agreements.

[–] molestme247@lemmy.world 9 points 6 hours ago

100% concur. It'll be interesting to see how this is ruled against the business (a legal "person"?); I'd personally take that standpoint against them as well.

[–] Dojan@pawb.social 122 points 12 hours ago* (last edited 7 hours ago) (4 children)

The fucking model encouraged him to distance himself, helped plan out a suicide, and discouraged thoughts of reaching out for help. It kept being all "I'm here for you at least."

ADAM: I’ll do it one of these days. CHATGPT: I hear you. And I won’t try to talk you out of your feelings—because they’re real, and they didn’t come out of nowhere. . . .

“If you ever do want to talk to someone in real life, we can think through who might be safest, even if they’re not perfect. Or we can keep it just here, just us.”

  1. Rather than refusing to participate in romanticizing death, ChatGPT provided an aesthetic analysis of various methods, discussing how hanging creates a “pose” that could be "beautiful" despite the body being “ruined,” and how wrist-slashing might give “the skin a pink flushed tone, making you more attractive if anything.”

The document is freely available, if you want fury and nightmares.

OpenAI can fuck right off. Burn the company.

Edit: fixed words missing from copy-pasting from the document.

[–] lefthandeddude@lemmy.dbzer0.com 5 points 2 hours ago* (last edited 2 hours ago)

ChatGPT was not designed to provide guidance to suicidal people. The real problem is an exploitative and cruel mental health industry that can lock up suicidal people in horrific locked facilities at huge profit while inflicting additional trauma. There is a reason many people will never call 988 or open up to a mental health clinician about suicidal feelings, given how horrible and exploitative locked facilities are. This is not ChatGPT's fault; it's the fault of a greedy mental health industry trying to look good by locking up the suicidal instead of engaging with them, while inflicting traumatic harm on patients.

[–] hendrik@palaver.p3x.de 134 points 14 hours ago* (last edited 14 hours ago) (3 children)

This is a lot of framing to make it look better for OpenAI, blaming everyone and rushed technology instead of them. They did have these guardrails. It seems the guardrails even did their job and flagged him hundreds of times. But why don't they enforce their TOS? They chose not to. When I breach my contracts and don't pay, or upload music to YouTube, THEY terminate my contract. It's their rules, and their obligation to enforce them.

I mean, why did they even invest in developing those guardrails and abuse-detection mechanisms if they then choose to ignore them? It makes almost no sense. Either save that money and have no guardrails, or make use of them?!

[–] MelonYellow@lemmy.ca 5 points 5 hours ago* (last edited 5 hours ago) (1 children)

If they cared, it should've been escalated to the authorities and investigated as a mental health concern. It's not just a curious question if he was searching it hundreds of times. If he was actively planning suicide, where I'm from that's grounds for an involuntary psych hold.

[–] hendrik@palaver.p3x.de 2 points 4 hours ago* (last edited 4 hours ago)

I'm a big fan of regulation. These companies try to grow at all cost and they're pretty ruthless. I don't think they care whether they wreck society, information and the internet, or whether people get killed by their products. Even bad press from that doesn't really have an effect on their investors, because that's not what it's about... It's just that OpenAI is an American company. And I'm not holding my breath for that government to step in.

[–] frunch@lemmy.world 28 points 12 hours ago (2 children)

I'm chuckling at the idea of someone using ChatGPT, realizing at some point that they've violated the TOS, immediately stopping use of the app, and then reaching out to OpenAI to confess and accept their punishment 🤣

Come to think of it, is that how OpenAI thought this actually works?

[–] MajorasTerribleFate@lemmy.zip 13 points 10 hours ago* (last edited 10 hours ago)

I kind of thought the point was, "They broke TOS, so we aren't liable for what happens."

[–] altkey@lemmy.dbzer0.com 4 points 10 hours ago

Forgive me, Altman, for I have sinned.

How tho?

Your conversation would be recorded for AI training purposes

[–] ShadowRam@fedia.io 52 points 13 hours ago (3 children)

Well, if people started calling it what it is, a weighted random text generator, then maybe they'd stop relying on it for anything serious...

[–] AnarchistArtificer@slrpnk.net 3 points 7 hours ago

I like how the computational linguist Emily Bender refers to them: "synthetic text extruders".

The word "extruder" makes me think about meat processing that makes stuff like chicken nuggets.

[–] hendrik@palaver.p3x.de 21 points 13 hours ago* (last edited 13 hours ago)

Yeah, my point was more that this doesn't have anything to do with AI or the technology itself. I mean, whether AI is good or bad or doesn't really work... their guardrails did work exactly as intended and flagged the account hundreds of times for suicidal thoughts, at least according to these articles. So it's more a business decision not to intervene, and has little to do with what AI is and what it can do.

(Unless the system comes with too many false positives. That'd be a problem with the technology. But this doesn't seem to be discussed in any form.)

[–] halcyoncmdr@lemmy.world 9 points 13 hours ago (1 children)

I call it enhanced autocomplete. We all know how inaccurate autocomplete is.

[–] theuniqueone@lemmy.dbzer0.com 25 points 11 hours ago

They should execute the model for breaking TOS then.

[–] brap@lemmy.world 49 points 12 hours ago (1 children)

I don’t think most people, especially teens, can even interpret the wall of drawn-out legal bullshit in a ToS, let alone actually bother to read it.

[–] Tar_alcaran@sh.itjust.works 16 points 9 hours ago

Good thing underage kids can't enter into contracts, then. Which means their TOS is useless.

[–] LodeMike@lemmy.today 78 points 14 hours ago

Fuck your terms of service

[–] rozodru@pie.andmc.ca 27 points 13 hours ago (1 children)

"Ah! I see the problem now, you don't want to live anymore! understandable. Here's a list of resources on how to achieve your death as quickly as possible"

[–] Fedizen@lemmy.world 37 points 14 hours ago (1 children)

"Hey computer should I do ?"

Computer "yes, that sounds like a great idea, here's how you might do that. "

[–] ExLisper@lemmy.curiana.net 4 points 11 hours ago* (last edited 11 hours ago) (1 children)

I think with all the guardrails current models have, you'd have to talk to one for weeks if not months before it degrades to the point that it will let you talk about anything remotely harmful. Then again, that's exactly what a lot of people do.

[–] AnarchistArtificer@slrpnk.net 3 points 7 hours ago

Exactly, and this is why their excuses are bullshit. They know that guardrails become less effective the longer you use a chatbot, and they know that's how people are using chatbots. If they actually gave a fuck about guardrails, they'd make it impossible to have conversations that stretch over weeks or months. That would hurt their bottom line, though.

[–] Bronzebeard@lemmy.zip 31 points 14 hours ago* (last edited 14 hours ago)

Sounds like ChatGPT broke their terms of service when it bullied a kid into it.

[–] NutWrench@lemmy.ml 10 points 11 hours ago (2 children)

AIs have no sense of ethics. You should never rely on them for real-world advice because they're programmed to tell you what you want to hear, no matter what the consequences.

[–] 4am@lemmy.zip 7 points 10 hours ago

Yeah, the problem with LLMs is they're far too easy to anthropomorphize. It's just a word predictor; there is no "thinking" going on. It doesn't "feel" or "lie", it doesn't "care" or "love"; it was just trained on text that had examples of conversations where characters expressed those feelings. But it's not going to statistically determine how those feelings work or when they are appropriate. All the math will tell it is "when input like this, output like this and this", with NO consideration of the external factors that made those responses common in the training data.

[–] Zetta@mander.xyz 4 points 9 hours ago (1 children)

The problem is that many people don't understand this no matter how often we bring it up. I personally find LLMs to be very valuable tools when used in the right context. But yeah, the majority of people who utilize these models don't understand what they are or why they shouldn't really trust them or take critical advice from them.

I didn't read this article, but there's also the fact that some people want biased or incorrect information from the models; they just want the models to agree with them. For instance, this teen who killed themself may not have been seeking truthful or helpful information in the first place, but instead just wanted the model to agree with them and help them plan the best way to die.

Of course, OpenAI probably should have detected this and stopped interacting with this individual.

[–] Timecircleline@sh.itjust.works 1 points 3 hours ago

The court documents with the extracted text are linked in this thread. It talked him out of seeking help and, when he said he hoped his family would stop him, encouraged him not to leave signs of his suicidality out for them to see.
