You increase the sample size, you increase the number of hits. Proportionally AI is still just as safe. What a bullshit opinion piece. Inconsequential just like the fucks agreeing with this shit take.
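To spell out what I mean with a quick sketch (Python, every number here is made up): hold the per-user risk fixed, grow the user base, and the case count climbs while the rate never moves.

```python
# Hypothetical illustration: a constant per-user risk with a growing user base
# produces more absolute cases while the underlying rate stays exactly the same.
rate = 0.001  # assumed per-user risk, purely illustrative

for users in (1_000_000, 10_000_000, 100_000_000):
    cases = users * rate
    print(f"{users:>12,} users -> {cases:>8,.0f} cases (rate still {rate:.2%})")
```

More users, more headlines, same risk.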
Do you think statisticians aren't well aware of this?
I am a fucking statistician. And you need a fucking control group to establish causality.
Gtfo if you don't understand this basic principle.
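Since apparently it needs spelling out: here's roughly what comparing against a control group even looks like, as a sketch with completely made-up counts (Python). A two-proportion test compares the rate in an exposed group against a comparable control group, which a pile of raw case counts alone can never do.

```python
# Made-up counts: does the exposed group's rate differ from the control group's?
# Without a control group there is nothing to compare against in the first place.
from math import erf, sqrt

def two_prop_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test for a difference in rates."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# exposed = heavy chatbot users, control = matched non-users (hypothetical numbers)
z, p = two_prop_ztest(x1=30, n1=10_000, x2=12, n2=10_000)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```

And even a significant difference in a comparison like this is only association; causality takes randomization or serious confounder control on top.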
The article and your argument are both entirely devoid of substance.
If the statisticians involved in this case study are anywhere close to as unhinged as you are then it's no wonder they got those results lol
Homie been smokin’ them data science rocks, it seems.
Literally made an account on this instance just to let them know I think they’re fucking dense, but I decided they’re not even worth interacting with personally.
Huh? The whole point of this emerging scientific debate is that AI use might be proportionally unsafe, i.e. it might be a risk factor causing and/or exacerbating psychosis. Now sure, this is still just a hypothesis and it's too early to make definite epidemiological statements, but it's just as wrong to flatly state that AI is "still just as safe".
"just as safe" is a relational, not absolutist, statement. I'm saying AI is at X level of safety, and more cases emerging does not imply an increasing risk of psychosis. That risk is where it's always been.
You're twisting my words because you're likely one of those brain-dead AI haters.
I don't particularly love or hate AI; the difference is I look at it critically instead of emotionally. If the population at large has the same X propensity for psychosis as the rate seen among AI users, then those cases are just correlation without causation.
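Back-of-the-envelope version of that point, again with purely made-up numbers: even with zero causal effect, the background rate times a huge user base already predicts a pile of "AI user develops psychosis" cases.

```python
# Made-up numbers: background psychosis rate times a large user base already
# yields many co-occurring cases, with no causal link assumed at all.
base_rate = 0.003        # assumed background incidence, purely illustrative
ai_users = 200_000_000   # assumed number of regular chatbot users, purely illustrative

print(f"Expected cases with zero causal effect: {ai_users * base_rate:,.0f}")
```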
Says the person calling people "fucks agreeing with this shit take", "brain-dead AI haters", and "less-critical readers", and that's just in this thread alone. Who knows what else I'd find looking through your full posting history.
Not a very convincing act, even for a clank-fucker.
Yes. Because taking a side is a shit take. Defending an article taking a side is a shit take.
Whatever sort of "argument" you think you have by cherry-picking is a shit take.
Alright, but the point is that the "X level of safety" AI is at might be a dangerous level in the first place. I don't think anybody is arguing that AI got more dangerous as a psychosis risk factor over the past year or so, they're arguing that AI was a risk factor to begin with, and with increased AI use more evidence of this turns up. So you saying that the inherent risk of AI hasn't changed is kind of a moot point because that's not what the debate is about.
Also notice that I clearly said it's too early to tell one way or the other, so there's no reason to malign me as uncritical.
You ignored my last paragraph. Yes, it's too early to tell, hence the opinion piece saying "Almost Certainly Linked To" is a distortion of reality. It's laughably biased, and it misleads less-critical readers.
I can agree with that. (As an aside, I think scientific findings are almost always exaggerated like this in popular journalism.)
I'd say the long and short of it is that we simply don't (and can't) know yet. But I think more research on possible links between AI and psychotic delusions is definitely useful, because I find the idea of a connection plausible.
How do LLM interactions compare to… Kinder eggs or lawn darts in terms of safety?
Kinder eggs are incredibly safe. Lawn darts ... less so.