this post was submitted on 28 Nov 2025
206 points (99.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
all 36 comments
[–] Hegar@fedia.io 47 points 4 months ago (1 children)

The fact that this can even be a sentence someone thought to utter is such a triumph of wealth over reality.

When you have a product that you know can and will be used harmfully, you can't just say "but if you use it harmfully, we're not responsible".

OpenAI is undeniably responsible for deaths they facilitated, like this one.

[–] aarch0x40@piefed.social 37 points 4 months ago* (last edited 4 months ago) (1 children)

Of course the company that acknowledges its technology is used for emotional and psychological support is going to blame those who use it for such purposes. Plus, falling back on the ToS means either they don't know how to prevent such outcomes or they don't want to.

[–] Dojan@pawb.social 17 points 4 months ago* (last edited 4 months ago)

Think it’s a little bit of both. They benefit greatly from people being addicted to their product, and “fixing” a neural network is fucking hard.

[–] RizzRustbolt@lemmy.world 25 points 4 months ago* (last edited 4 months ago) (1 children)

I know it's the minority opinion around here, but I think AI companies are maybe not quite so good.

[–] AnUnusualRelic@lemmy.world 7 points 4 months ago

I don't know about that, but if he violated the TOS, OpenAI may be entitled to some form of compensation.

[–] jaredwhite@humansare.social 14 points 4 months ago (2 children)

I've seen this song-and-dance routine before. Big Tobacco. Big Pharma. Big Gun. It's always victim-blaming with these companies. Always.

My opinion of them could not have gotten any lower, yet somehow with these latest developments, it has.

[–] themeatbridge@lemmy.world 6 points 4 months ago (1 children)

Well... it keeps working, so why would they do anything else?

[–] jaredwhite@humansare.social 0 points 4 months ago (1 children)

OK…and whose fault is that? 😂

[–] themeatbridge@lemmy.world 4 points 4 months ago (1 children)

... All of us? That's like a societal problem. In the most abstract sense, bad people do bad things for personal benefit and are rewarded. Are you proposing a solution to it?

[–] jaredwhite@humansare.social 7 points 4 months ago

Well, the first and most obvious answer is that LLMs need to fall under an extensive regulatory framework, one that makes a number of their use cases effectively illegal and subjects the remaining ones to science-backed harm mitigation. There also need to be systemic corrections to the financial markets and business law such that a company like OpenAI in its recent or present form couldn't exist at all.

But unfortunately, that's not the world we live in (at least in America). Future generations will pay for our gross negligence, once again.

[–] HaraldvonBlauzahn@feddit.org 2 points 4 months ago

"I've seen this song-and-dance routine before. Big Tobacco. Big Pharma. Big Gun. It's always victim-blaming with these companies. Always."

"If only individuals would use our climate-damaging cars and planes wisely!!"

[–] panda_abyss@lemmy.ca 8 points 4 months ago

Well, that makes it all better.

[–] Azrael@feddit.org 6 points 4 months ago

They... what?

[–] 30p87@feddit.org 4 points 4 months ago

So I can just sell bombs freely, as long as I state in the ToS that they can't be used for exploding. Got it. You'll get a free sample, Sam.

[–] Quexotic@infosec.pub 2 points 4 months ago

Delay, Deny, Depose.

Sounds familiar.