this post was submitted on 31 Dec 2025
272 points (98.2% liked)
Fuck AI
you are viewing a single comment's thread
view the rest of the comments
"just as safe" is a relational, not absolutist, statement. I'm saying AI is at X level of safety, and more cases emerging does not imply an increasing risk of psychosis. That risk is where it's always been.
You're twisting my words because you're likely one of those brain-dead AI haters.
I don't particularly love or hate AI; the difference is that I look at it critically instead of emotionally. If the population at large has the same propensity for psychosis as the rate seen with AI usage, that just means it's correlation without causation.
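A toy calculation makes that base-rate point concrete. The sketch below is purely illustrative; every figure in it (base_rate, users, cases) is a made-up placeholder rather than data from the thread or the article it discusses. It checks whether a hypothetical psychosis rate among chatbot users is statistically distinguishable from an assumed population base rate.

```python
# Toy sketch of the base-rate argument above. Every number here is made up
# for illustration; neither the thread nor the article supplies real data.
import math

base_rate = 0.015        # assumed psychosis prevalence in the general population (placeholder)
users = 200_000          # hypothetical number of heavy chatbot users observed
cases = 3_050            # hypothetical psychosis cases reported among them

observed_rate = cases / users

# One-sample proportion z-test: is the observed rate distinguishable from the
# assumed base rate? If not, the reported cases are consistent with coincidence
# (correlation without causation) rather than an AI-specific risk.
se = math.sqrt(base_rate * (1 - base_rate) / users)
z = (observed_rate - base_rate) / se
p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"observed rate {observed_rate:.4%} vs. base rate {base_rate:.4%}")
print(f"z = {z:.2f}, two-sided p = {p_two_sided:.3f}")
```

With these placeholder numbers the difference is well within noise (p is roughly 0.36), which is exactly the scenario the comment describes; real data could of course come out either way.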
Says the person calling people "fucks agreeing with this shit take", "brain-dead AI haters", and "less-critical readers", and that's just in this thread alone. Who knows what else I'd find looking through your full posting history.
Not a very convincing act, even for a clank-fucker.
Yes. Because taking a side is a shit take. Defending an article taking a side is a shit take.
Whatever sort of "argument" you think you have by cherry-picking is a shit take.
How do LLM interactions compare to… Kinder eggs or lawn darts in terms of safety?
Kinder eggs are incredibly safe. Lawn darts ... less so.
Alright, but the point is that the "X level of safety" AI is at might be a dangerous level in the first place. I don't think anybody is arguing that AI got more dangerous as a psychosis risk factor over the past year or so; they're arguing that AI was a risk factor to begin with, and that with increased AI use more evidence of this turns up. So your point that the inherent risk of AI hasn't changed is kind of moot, because that's not what the debate is about.
Also notice that I clearly said it's too early to tell one way or the other, so there's no reason to malign me as uncritical.
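For what it's worth, the disagreement above separates cleanly into two quantities: a per-user risk (which neither side claims has been shown to change) and a case count (which grows with adoption even if that risk is constant). A minimal sketch, with a hypothetical risk figure and user counts that are assumptions rather than anything cited in the thread:

```python
# Constant per-user risk still yields more observed cases as adoption grows.
# The risk figure and user counts are hypothetical placeholders.
risk_per_user_per_year = 0.0002   # assumed constant annual risk per user (placeholder)
users_by_year = {2023: 10_000_000, 2024: 50_000_000, 2025: 150_000_000}  # assumed adoption

for year, users in users_by_year.items():
    expected_cases = risk_per_user_per_year * users
    print(f"{year}: ~{expected_cases:,.0f} expected cases "
          f"(per-user risk unchanged at {risk_per_user_per_year:.2%})")
```

Whether that constant per-user figure (whatever it really is) counts as an acceptable level of risk is the separate, unresolved question the rest of the thread is arguing about.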
You ignored my last paragraph. Yes, it's too early to tell, which is exactly why an opinion piece saying "Almost Certainly Linked To" is a distortion of reality. It's laughably biased, and misleading to less-critical readers.
I can agree with that. (As an aside, I think scientific findings are almost always exaggerated like this in popular journalism.)
I'd say the long and short of it is that we simply don't (and can't) know yet. But I think more research on possible links between AI and psychotic delusions is definitely useful, because I find the idea of a connection plausible.