this post was submitted on 31 Dec 2025
272 points (98.2% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

top 34 comments
[–] sol6_vi@lemmy.makearmy.io 13 points 1 day ago

One could call it... Cyberpsychosis?

[–] FosterMolasses@leminal.space 25 points 1 day ago (1 children)

One recent peer-reviewed case study focused on a 26-year-old woman who was hospitalized twice after she believed ChatGPT was allowing her to talk with her dead brother

I feel like the bar for the Turing test is lower than ever... You can't tell ChatGPT apart from your own relatives??

[–] ivanafterall@lemmy.world 16 points 1 day ago (1 children)

My cousin lost her young daughter a few years back. At Christmas, she had used AI to put her daughter in her Christmas photo. I didn't have words, because it made her so happy, and I can't fathom her grief, but man. Felt pretty fucked.

[–] TheOakTree@lemmy.zip 3 points 1 day ago

I feel you. I can't deny the comfort it brought her, but I also can't help but feel like it is training her to reject her grief.

Not that I'm in any position to pass judgement. I just hope it doesn't lead to anything more severe.

[–] queermunist@lemmy.ml 58 points 2 days ago

Talking to a hallucination is, in fact, not good for you.

[–] jaredwhite@humansare.social 57 points 2 days ago

Who knew that "simulating" human conversations based on extruded text strings that have no basis in grounded reality or fact could send people into spirals of delusion?

[–] minorkeys@lemmy.world 36 points 2 days ago* (last edited 2 days ago) (1 children)

Are companies that force employees to use LLMs going to be liable for the mental health issues they produce?

[–] underisk@lemmy.ml 24 points 1 day ago

Should they be? Absolutely. Will they be? lol

[–] pyrinix@kbin.melroy.org 33 points 2 days ago (2 children)

Talking to AI chatbots is about as useful as talking to walls, except that we decided to have those walls talk back to us.

And they aren't saying anything insightful or useful.

[–] Gullible@sh.itjust.works 8 points 2 days ago (1 children)

Good small talk tutorial. Terrible at everything else.

[–] pyrinix@kbin.melroy.org 12 points 2 days ago

By all accounts, it is still a tool. But knowing society, people want shortcuts to everything. Like using AI as a therapist. That's a huge no.

[–] flowers_galore2@lemmynsfw.com 7 points 2 days ago (1 children)

Hey now, my walls are perfect companions; they may be silently judging me, but they are always supportive and never sycophantic.

[–] Quetzalcutlass@lemmy.world 5 points 1 day ago

Don't forget to check if the wall is load-bearing before relying on it for support.

[–] Zachariah@lemmy.world 21 points 2 days ago (2 children)

So the developing psychosis could be causing the AI use?

[–] Bonifratz@piefed.zip 28 points 2 days ago (1 children)

That's what the article says, yes:

“The technology might not introduce the delusion, but the person tells the computer it’s their reality and the computer accepts it as truth and reflects it back, so it’s complicit in cycling that delusion,” Sakata told the WSJ.

[–] Jax@sh.itjust.works 7 points 1 day ago

Thing that tells you exactly what you want to hear causes delusions?

Whaaat?

I completely understand why articles like this need to exist. Information about what 'AI' actually is needs to be spread. That being said, I can't shake the impression that this is just incredibly obvious. Like one of those studies that goes as far as MRI-scanning a dog's brain while it looks at its owner, just to confirm the dog actually loves them.

Like, thank you mystery researcher on the internet — but you could have saved the helium by just sticking to Occam's Razor.

[–] NachBarcelona@piefed.social 7 points 2 days ago

"Doctors say"!

[–] Zacryon@feddit.org 6 points 2 days ago (2 children)

I'd say know your tools. People misusing "stuff" and being vulnerable to it is nothing new. Yet in a lot of cases we rely on the independence and maturity of the decisions people make. It's no different with LLMs. That said, meaningful (technological) safeguards should of course be implemented wherever possible.

[–] Amberskin@europe.pub 6 points 1 day ago

By their very nature, there is no way to implement robust safeguards in an LLM. The technology is toxic, and the best that could happen is that something else, hopefully not based on brute-forcing the production of a stream of tokens, is developed and makes it obvious that LLMs are a false path, a road that should not be taken.

If AI is that dangerous, it should need a licence to use, same as a gun or car or heavy machinery.