this post was submitted on 03 Aug 2025
304 points (86.2% liked)

Fuck AI

3617 readers
804 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
Source (Bluesky)

[–] Goldmaster@lemmy.ml 1 points 1 hour ago

What bluesky client is that?

[–] TheGuyTM3@lemmy.ml 11 points 7 hours ago* (last edited 7 hours ago)

I'm just sick of all this because we have given too much meaning to "AI".

I don't like generative AI tools like LLMs, image generators, voice, video, etc., because I see no value in them; I think they encourage bad habits, and they are not well understood by their users.

Just yesterday I had to correct my mother because she told me some fun fact she had learnt from ChatGPT (which was wrong), and she refused to listen to me because "ChatGPT does plenty of research on the net, so it should know better than you".

As for the claim that "it will replace artists and destroy the art industry", I don't believe that (even though I have chosen never to use it), because it will forever be a tool. It's practical if you want a cartoony monkey image for your article (you meanie stupid journalist), but you can't say "make me a piece of art" and then put it in a museum.

Making art myself, I hate gen-AI slop from the bottom of my heart, but I'm obliged to admit it's a tool. (Let's not forget how it trains on copyrighted media, uses a shitton of energy, and gives no credit.)

AI in other fields, like medicine, automatic subtitles, and engineering, is fine by me. It won't encourage bad habits, it is well understood by its users, and it is truly beneficial, whether by saving lives more efficiently than humans can or simply by helping disabled people.

TL;DR: AI in general is a tool. Gen AI is bad as a powerful tool for everyone's use, just as it would be bad to give everyone a helicopter (even if it improves mobility). AI is nonetheless a very nice tool that can save lives and help disabled people IF used and understood correctly and fairly.

[–] gmtom@lemmy.world 30 points 10 hours ago (5 children)

I work at a company that uses AI to detect respiratory illnesses in X-rays and MRI scans weeks or months before a human doctor could.

This work has already saved thousands of people's lives.

But good to know you anti-AI people have your one-dimensional, zero-nuance take on the subject and are now running moral purity tests and dick-measuring contests to see who has the loudest, most extreme hatred of AI.

[–] starman2112@sh.itjust.works 13 points 9 hours ago* (last edited 9 hours ago) (2 children)

Nobody has a problem with that; it's generative AI that's demonic.

[–] gmtom@lemmy.world 1 points 2 hours ago (1 children)
  1. Except clearly some people do. This post very specifically says ALL AI is bad, with no exceptions.

  2. Generative AI isn't a well-defined concept, and a lot of the tech we use is indistinguishable on a technical level from "generative AI".

[–] starman2112@sh.itjust.works 3 points 2 hours ago* (last edited 2 hours ago)
  1. sephirAmy explicitly said generative AI

  2. Give me an example, and watch me distinguish it from the kind of generative AI sephirAmy is talking about

[–] brucethemoose@lemmy.world 9 points 9 hours ago* (last edited 9 hours ago) (2 children)

"Generative AI" is a meaningless buzzword for the same underlying technology, as I kinda ranted below.

Corporate enshittification is what's demonic. When you say "fuck AI", you should really mean "fuck Sam Altman".

[–] monotremata@lemmy.ca 11 points 8 hours ago (2 children)

I mean, not really? Maybe they're both deep-learning neural architectures, but one has been trained on an entire internet's worth of stolen creative content and the other on ethically sourced medical data. That's a pretty significant difference.

[–] KeenFlame@feddit.nu 4 points 3 hours ago (1 children)

No, really. Deep learning, transformers, etc. were the discoveries that enabled all of the above; just because corporate VC shitheads drag their musty balls through the latest boom, abusing the piss out of it and making it uncool, does not mean the technology is a useless scam.

[–] ILikeTraaaains@lemmy.world 1 points 1 hour ago

This.

I recently attended a congress about technology applied on healthcare.

There were works that improved diagnosis and interventions with AI; the generative kind was mainly used to create synthetic data for training.

However, there were also works that left a bad aftertaste in my mouth, like replacing human interaction between the patient and a specialist with a chatbot in charge of explaining the procedure and answering the patient's questions. Some saw privacy laws as a hindrance and wanted to use any kind of private data.

Both are GenAI: one improves lives, the other improves profits.

[–] AdrianTheFrog@lemmy.world 2 points 6 hours ago

I think DLSS/FSR/XeSS are a good example of something that is clearly ethical and also clearly generative AI. Can't really think of many others lol

[–] brucethemoose@lemmy.world 13 points 10 hours ago* (last edited 9 hours ago)

All this is being stoked by OpenAI, Anthropic, and the like.

They want the issue polarized, with any nuance removed, so it's simple: use their corporate APIs, or don't. Anything else is "dangerous."

What they're really scared of is awareness of locally runnable, ethical, independent, task-specific tools like yours. Those don't make them any money. Stirring up "fuck AI" does, because that's a battle they know they can win.

[–] OmegaLemmy@discuss.online 5 points 7 hours ago

This is extreme

[–] ruuster13@lemmy.zip 7 points 8 hours ago

AI is a marketing term. Big Tech stole ALL data. All of it. The brazen piracy is a sign they feel untouchable. We should touch them.

[–] axEl7fB5@lemmy.cafe 4 points 7 hours ago (2 children)

Do people who self-host count? Like with Ollama? It's not like my PC is going to drain a lake.

[–] Senal@programming.dev 2 points 1 hour ago

Ethics and morality aside: yes, they count. The process of making and continually updating the underlying LLM is also what drains the lakes, and they are all built on pirated data (all the big ones, for sure; I've not heard of a widely available, usable model trained 100% on legally obtained data, though I suppose one could exist).

[–] Auth@lemmy.world 5 points 6 hours ago

To that person, yeah self hosting still counts.

[–] Atlas_@lemmy.world 19 points 13 hours ago (2 children)

Do y'all hate chess engines?

If yes, cool.

If no, I think you hate tech companies more than you hate AI specifically.

[–] princessnorah@lemmy.blahaj.zone 18 points 12 hours ago* (last edited 12 hours ago) (7 children)

The post is pretty clearly about genAI; I think you're just choosing to ignore that part. There's plenty of really awesome machine-learning technology that helps with disabilities, doesn't rip off artists, and isn't environmentally deleterious.

[–] theunknownmuncher@lemmy.world 15 points 12 hours ago (8 children)

Yup, as always: none of these problems are inherent to AI itself; they're all problems with capitalism.
