this post was submitted on 29 Apr 2026
171 points (95.2% liked)

Fuck AI

6935 readers
2113 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Lawsuits: OpenAI didn’t report ChatGPT user to cops to protect Altman, IPO.

all 22 comments
[–] hakase@lemmy.zip 35 points 1 week ago* (last edited 1 week ago) (4 children)

God, Lemmings are such a bunch of fucking hypocrites. Loudly advocating for privacy and huge, faceless, multibillion-dollar corporations not being able to sell personal information to governments out one side of their ass, all while upvoting bullshit like this to high heaven and sucking off government surveillance in the comments out the other.

News flash: giving governments access to megacorps' personal data doesn't mean they'll only use that data in the tiny number of ways that you agree with, or only against the people you don't like!

Edit for context: the other top comment was getting hammered with downvotes when I posted this.

[–] gravitas_deficiency@sh.itjust.works 23 points 1 week ago (1 children)

Many - myself included - will upvote messed up or unpleasant or concerning stuff because we feel it should be discussed, and discussion tends to correlate with visibility.

Upvote != agree in a ton of cases. Reddit worked (works? Haven’t been on in a few years actually) the same way.

[–] hakase@lemmy.zip 5 points 1 week ago

Thanks, this is an excellent point that my comment doesn't account for.

[–] pilferjinx@piefed.social 3 points 1 week ago (1 children)

Yeah, I agree. I don't want corporations giving my info to my government for any reason. Imagine the huge waste of resources and the violations from false positives. And if they did correctly flag someone, are they going to apprehend them for thoughtcrime?

[–] hakase@lemmy.zip 2 points 1 week ago

Respect for unselecting your own upvote - Lemmy hardcore mode.

[–] shani66@ani.social 1 points 1 week ago

Different people, but yeah lemmy is getting kinda authoritarian lately.

[–] okamiueru@lemmy.world 1 points 1 week ago (1 children)

Selection bias is relevant here. Some people care about X, some about Y. The intersection might be hypocrites, but your argument seems to imply they are the same group.

[–] PearOfJudes@lemmy.ml 2 points 1 week ago

Ngl, they probably are though. I think there are better ways to prevent school shootings than making companies hand private information over to the government.

But since OpenAI is already stealing your data, they definitely aren't using it for good.

[–] Skullgrid@lemmy.world 11 points 1 week ago (3 children)

no, not fuck AI. keep the internet private

[–] Bane_Killgrind@lemmy.dbzer0.com 13 points 1 week ago (1 children)

Absolutely not.

Leaders rejected the safety team’s urgings and declined to report the user to law enforcement.

OpenAI will “find ways to prevent tragedies like this in the future” and to continue “working with all levels of government to help ensure something like this never happens again,” Altman said.

They already have a fucking way to prevent this and they opted not to use it, for PR reasons. They are complicit: they provided a service that aided the planning, then chose to keep providing it, allowing further planning.

If you post a message to a website, that message is not private from the website regardless of the method they use to receive it. They have the moral responsibility to respond to threats to life regardless of the legal responsibility they are arguing they don't have.

If I put a cork board up in front of my house and someone pins threats to it, when I notice it it's now my responsibility to act on that.

[–] Skullgrid@lemmy.world -3 points 1 week ago (1 children)

this is more akin to asking a library for information

[–] new_world_odor@lemmy.world 6 points 1 week ago (1 children)

it's really not. more like gathering a crowd of a few billion people, asking them a question, hearing the loudest answer and assuming it's correct

[–] Skullgrid@lemmy.world -5 points 1 week ago (1 children)

as far as I know, Open Ai is not hosting the largest forum in the world

[–] Bane_Killgrind@lemmy.dbzer0.com 2 points 1 week ago (1 children)

No they are just training their model on it?

https://openai.com/index/openai-and-reddit-partnership/

Like isn't this common knowledge?

[–] Skullgrid@lemmy.world 1 points 1 week ago (1 children)

There is a huge difference between hosting an archive of conversations that took place, and providing a place where you can participate in conversations.

This is the equivalent of looking at the archives of debates transcribed in newspapers. When you do that, you are not participating in a debate, you are reading the transcript of a debate.

The model responds based on conversations it's trained on? It's a bespoke response. It's not simply showing a browsable list of responses, it's giving particular ones.

It's literally feeding these mentally ill people responses that a human, with the same context, would be legally culpable for.

[–] osanna@lemmy.vg 1 points 1 week ago

I got some news for you: the internet hasn't been private in a VERY long time.

[–] Tartas1995@discuss.tchncs.de 1 points 1 week ago

They train their ai on your data.

That is not really a question of privacy.

I am all for privacy, but you can't let a company collect all that data in the first place (especially one that states clearly that it will leak your information and has no history of respecting privacy or copyright) and then cry over privacy.

That company wanted to have the data. So even if you don't want it shared with the government, they carry the responsibility of having been able to do something.

A gun manufacturer carries the weight of responsibility for what is done with those weapons. That is just how it is. Even if they follow the rules, and even if you don't want to ban the existence of these weapons, the manufacturer still carries the responsibility.

If you want a private internet, use a private internet. Stop supporting big data while crying about surveillance. Big data is always 1 law away from surveillance state.

[–] corsicanguppy@lemmy.ca 5 points 1 week ago

Reporting a user for risky behaviour relies on an assessment that would violate the EU AI legislation. It seems the reasoning there is that a machine-made assessment of a person is already one rights violation too far.

[–] eestileib@lemmy.blahaj.zone -1 points 1 week ago

Couldn't have said it better myself.