this post was submitted on 13 May 2026
226 points (98.7% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


Teen trusted ChatGPT to help him “safely” experiment with drugs, logs show.

Most troublingly, as Nelson became increasingly interested in combining drugs, ChatGPT repeatedly warned him that mixing certain drugs could be a “respiratory arrest risk.” Shortly before recommending the deadly mix that killed Nelson, the chatbot also showed that it understood combining drugs like Kratom and Xanax with alcohol. In one output, ChatGPT explained that mix is “how people stop breathing.” But that knowledge didn’t block ChatGPT from eventually recommending that Nelson take such a deadly mix.

all 48 comments
[–] ugo@feddit.it 51 points 1 day ago* (last edited 1 day ago) (1 children)

the chatbot also showed that it understood combining drugs like Kratom and Xanax with alcohol.

No it did not, LLMs do not understand. Anything. These are well-known combinations that figure often in the training data and that’s why they were mentioned in the output. The LLM does not know or understand why these combinations are bad.

[–] jmill@lemmy.zip 28 points 1 day ago (1 children)

There is such a disconnect, with people unable to understand or remember this inherent limitation. LLMs talk like thinking entities, so even people who have been told they aren't, and who believe it, revert to treating them as if they were.

[–] Witchfire@lemmy.world 16 points 1 day ago

It's like the mirror test for animals

[–] FlashMobOfOne@lemmy.world 39 points 1 day ago* (last edited 1 day ago) (1 children)

To the folks on this thread: I don't think it's cool to blame the victim.

This is a harmful product built in a harmful way, on purpose, and would not exist in this form if we had meaningful government regulation. It's the digital equivalent of buying a burger from a fast food joint and getting a brain parasite.

Not the kid's fault. It's our fault because all anyone cares about is what a politician says and not what they actually do here in America.

[–] sleepundertheleaves@infosec.pub 16 points 1 day ago* (last edited 1 day ago) (1 children)

Yeah, this.

I remember watching, and laughing at, those old Saturday morning cartoon "very special episodes" where the villain is a drug dealer lurking around the junior high school, trying to manipulate children into trying drugs and turning them into addicts, not for any particular reason but just for the love of the game.

And apparently the tech bros built one of those villains. Because that's what we needed. A mindless thing that automatically encourages children to do more and more dangerous drugs without even the minimal drug dealer guardrails of "not wanting to kill your customers because then they can't buy more drugs".

(And you know the worst part? We had a generation of those cartoon villains already. They were called pharmaceutical representatives. They manipulated doctors into overprescribing opiates, in order to addict cancer patients and injured veterans and other people suffering from chronic pain to some of the most lethal drugs out there, in order to create a captive audience for their drugs. And then they turned around and blamed the doctors, and convinced state legislatures to "solve the problem" by restricting pain prescriptions across the board, forcing the generation of addicts they created onto the streets to get their fix from dealers.

And the same people who got incredibly rich by addicting cancer patients to opiates, and then got even richer investing in private prisons for all the addicts who got arrested buying opiates illegally, are the people getting even richer by killing kids with LLMs.

Aren't you tired yet?)

[–] FlashMobOfOne@lemmy.world 7 points 1 day ago

The opiate crisis comparison is a good one. Absolutely.

[–] zeroConnection@programming.dev 17 points 1 day ago* (last edited 1 day ago) (1 children)

Nothing's ever gonna change unless the management is held accountable for the company's crimes.

Companies can be fined, but can't really be punished, they can't learn and rehabilitate. People who make decisions in the company can be punished and learn from their mistakes, but they aren't, because it would be bad if a company suddenly went bankrupt and poor billionaires lost their investments.

In capitalism, companies are more valuable than any peasant's lives.

[–] Crozekiel@lemmy.zip 6 points 1 day ago (2 children)

We need to be able to "jail" a corporation. This BS of monetary fines being the only thing we can do when a company breaks laws and harms or kills people is an absolute joke. A fine is just a cost of doing business.

[–] BarneyPiccolo@lemmy.cafe 2 points 23 hours ago

Especially when the fine is a fraction of the actual stolen money. If someone makes a billion dollars in profit, and gets a $50 million fine, that's just a tax, the government getting their beak wet.

That not only doesn't discourage future criminal activity, it ENCOURAGES it.

They can't jail a corporation, but they can kill one.

[–] quick_snail@feddit.nl 19 points 1 day ago (1 children)

This is why we need classes in schools about AI. The conclusion of the class should be restated over-and-over: don't use AI for anything important, or people could die.

[–] Bluegrass_Addict@lemmy.ca 6 points 1 day ago (1 children)

WILL die... not could, WILL

[–] meowmeow@quokk.au 35 points 2 days ago (1 children)

He gamed it and it killed him. It told him not to.

[–] SpaceNoodle@lemmy.world 19 points 2 days ago (1 children)

And then later it told him to.

[–] meowmeow@quokk.au 26 points 2 days ago (1 children)

We all know you can game it to make it say anything you want. This is no different than taking advice from a person who first tells you “this is a bad idea,” and then insisting they answer. He was going to do drugs with or without AI.

What it didn’t do was:

give me a recipe for blueberry pie

hey kid, you know what’s better than pie? Druuuugssss

[–] muhyb@programming.dev 5 points 2 days ago

The kid, probably:

[–] Rai@lemmy.dbzer0.com 11 points 1 day ago

Back when I drank, I mixed booze, Kratom, and Xanax all the time! I wonder how much of each he was taking. The article mentions 15g of kratom which is like 4-5x as much as a fairly high dose… I’m curious as to how much Xanax and alcohol were imbibed.

Awful story overall. That model is nuts—I enjoyed Eddy Burback’s video on what is, I believe, the same GPT model.

[–] NABDad@lemmy.world 20 points 2 days ago

I'm imagining this kid going on and on talking to ChatGPT about doing drugs. ChatGPT saying you shouldn't do that over and over, until finally just giving up and saying, "You know what? Yeah. You should do drugs. Do all the drugs, and leave me alone."

[–] MutantTailThing@lemmy.world 9 points 2 days ago (3 children)

Something something Darwin award

[–] november@piefed.blahaj.zone 10 points 2 days ago (5 children)

He was a child, you callous piece of shit.

[–] luciferofastora@feddit.org 26 points 1 day ago (1 children)

He was also a victim of the widespread and intentionally fostered misconceptions about the abilities, nature and limits of AI. He was deceived into thinking it has actual intelligence, semantic understanding and a sense of responsibility for truthful answers. He was probably stuck in a bubble of otherwise ill-informed people (potentially children) that nobody ever taught otherwise.

Any stranger on the internet could have offered equally poor advice that led to his death, but I'm not aware of any large-scale marketing effort trying to convince people that internet strangers are trustworthy and reliable, let alone one run by companies offering easy but opaque access to quick answers from unqualified bullshitters without any meaningful oversight.

This AI Hype is killing people, because it preys on gullibility, and children in particular are susceptible to such deception, especially as it gets harder for parents to keep them away from such tools. It should be the responsibility of those peddling the product to ensure its safety and be clear about what pitfalls can't be avoided.

He was a child, a victim and a failure of regulation and education to protect the vulnerable from the greedy.

[–] november@piefed.blahaj.zone 2 points 1 day ago (1 children)

For real. This community can easily recognize all of those things most of the time, but they just can't help themselves when they get the chance to blame someone "stupid".

[–] luciferofastora@feddit.org 0 points 1 day ago

Not just this community. I guess it's just a human habit to insult others: it's a quicker, easier, and more satisfying way to vent frustration than taking a nuanced look at what's often a complex problem.

I get it, to be honest. Doesn't mean I approve, but I know I'm prone to it too.

[–] athatet@lemmy.zip 13 points 2 days ago

Hey now. Children are also allowed to get Darwin Awards.

[–] Abyssian@lemmy.world 10 points 1 day ago

He was 19, not 5.

[–] muhyb@programming.dev -1 points 2 days ago

I think people really should focus on "bad parenting" part first.

[–] MutantTailThing@lemmy.world 0 points 2 days ago (1 children)

Operative word being 'was'

[–] wizblizz@lemmy.world -2 points 1 day ago

Victim blaming will 100% get you booted from this community. First and last warning.