this post was submitted on 11 Jan 2026
1059 points (98.8% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Source (Bluesky)

Transcript

recently my friend's comics professor told her that it's acceptable to use gen AI for script-writing but not for art, since a machine can't generate meaningful artistic work. meanwhile, my sister's screenwriting professor said that they can use gen AI for concept art and visualization, but that it won't be able to generate a script that's any good. and at my job, it seems like each department says that AI can be useful in every field except the one that they know best.

It's only ever the jobs we're unfamiliar with that we assume can be replaced with automation. The more attuned we are to certain processes, crafts, and occupations, the more we realize that gen AI will never be able to provide a suitable replacement. The case for its existence relies on our ignorance of the work and skill required to do everything we don't.

top 50 comments
[–] Voroxpete@sh.itjust.works 18 points 2 days ago* (last edited 2 days ago) (1 children)

This actually relates, in a weird but interesting way, to how people get broken out of conspiracy theories.

One very common theme that's reported by people who get themselves out of a conspiracy theory is that their breaking point is when the conspiracy asserts a fact that they know - based on real expertise of their own - to be false. So, like, you get a flat-earther who is a photography expert and their breaking point is when a bunch of the evidence relies on things about photography that they know aren't true. Or you get some MAGA person who hits their breaking point over the tariffs because they work in import/export and they actually know a bunch of stuff about how tariffs work.

Basically, whenever you're trying to disabuse people of false notions, the best way to start is always the same: figure out what they know (in the sense of things that they actually have true, well-founded, factual knowledge of) and work from there. People enjoy misinformation when it affirms their beliefs and builds up their ego. But when misinformation runs counter to their own expertise, they are forced either to accept that they are not actually an expert or to reject the misinformation. Generally they'll reject the misinformation, because accepting they're not an expert means giving up a huge part of their identity and their self-esteem.

It's also not always strictly necessary for the expertise to be well-founded. This is why the Epstein files are such a huge danger to the Trump admin. A huge portion of MAGA spent the last decade basically becoming "experts" in "the evil pedophile conspiracy that has taken over the government", and they cannot figure out how to reconcile their "expertise" with Trump and his admin constantly backpedalling on releasing the files. Basically they've got a tiny piece of the truth - there really is a conspiracy of powerful elite pedophiles out there, they're just not hanging out in non-existent pizza parlour basements and dosing on adrenochrome - and they've built a massive fiction around that, but that piece of the truth is still enough to conflict with the false reality that Trump wants them to buy into.

You get a flat-earther who is a photography expert and their breaking point is when a bunch of the evidence relies on things about photography

Or you get a demolitions expert to watch a video of WTC7

[–] Kolanaki@pawb.social 18 points 2 days ago (2 children)

AI only seems good when you don't know enough about any given topic to notice that it is wrong 70% of the time.

This is concerning when CEOs and other people in charge seem to think it is good at everything, as this means they don't know a god damn thing about fuck all.

[–] AngryCommieKender@lemmy.world 6 points 2 days ago (1 children)

I remember an article back in 2011 that predicted that we would be able to automate all middle and most upper management jobs by 2015. My immediate thought was, "Well, these people must not do much if a glorified script can replace them."

[–] JcbAzPx@lemmy.world 5 points 2 days ago

Yeah, other than CFO and most* CTOs, anyone in the C-suite is easily replaceable by an LLM. Hell, the CEO could be replaced by a robot arm holding a magic 8-ball with no noticeable difference in performance.

* Probably not the majority, but I'll be generous.

[–] matlag@sh.itjust.works 3 points 2 days ago (1 children)

That's the whole point of the bubble: convincing investors and CEOs that AI will replace all workers. You don't need to convince the workers: they don't make decisions and an awful lot of CEOs have such a high opinion of themselves that they assume any feedback from below is worthless.

[–] jj4211@lemmy.world 3 points 2 days ago

Not just a high opinion of themselves; they think everyone is as self-centered as they are, so any claim by human workers that a task needs human workers gets dismissed as self-serving rather than as caring about the work.

[–] Bustedknuckles@lemmy.world 128 points 2 days ago (1 children)

Which explains why C-suites push it so hard for everyone

[–] leftzero@lemmy.dbzer0.com 38 points 2 days ago* (last edited 2 days ago) (1 children)

Well, they do have the one job that actually can be replaced by “AI” (though in most cases it'd be more beneficial to just eliminate it altogether).

[–] fartographer@lemmy.world 7 points 2 days ago

Which is acting like they know everything about everyone else's jobs, while making up wholly inaccurate assumptions

[–] hapablap@lemmy.sdf.org 14 points 2 days ago (1 children)

The breadth of knowledge demonstrated by AI gives a false impression of its depth.

[–] purplemonkeymad@programming.dev 8 points 2 days ago (1 children)

Generalists can be really good at getting stuff done. They can quickly identify the experts needed when it's beyond their scope. Unfortunately, overconfident generalists tend not to get the experts in to help.

[–] 18107@aussie.zone 28 points 2 days ago

AI has been excellent at teaching me to program in new languages. It knows everything about all languages - except the ones I'm already familiar with. It's terrible at those.

[–] GreenKnight23@lemmy.world 18 points 2 days ago (6 children)

let's not confuse LLMs, AI, and automation.

AI flies planes when the pilots are unconscious.

automation does menial repetitive tasks.

LLMs support fascism and destroy economies, ecologies, and societies.

[–] ricecake@sh.itjust.works 6 points 2 days ago (2 children)

I'd even go a step further and say your last point is about generative LLMs, since text classification and sentiment analysis are also pretty benign.

It's tricky because we're having a social conversation about something that's been mislabeled, and the label has been misused dozens of times as well.

It's like trying to talk about knife safety when you only have the word "pointy".
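
To make that distinction concrete, here's a minimal sketch of the benign, non-generative case: a classifier that only labels text and never produces any. It assumes Hugging Face's transformers package and its default sentiment model.

```python
# Non-generative use of a language model: label text, don't produce it.
# Assumes the transformers package; pipeline() downloads a default
# sentiment model on first run.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

for comment in ["This feature is great.", "The update broke everything."]:
    result = classifier(comment)[0]
    # The output is just a label and a confidence score,
    # e.g. {'label': 'NEGATIVE', 'score': 0.999}.
    print(f"{comment!r} -> {result['label']} ({result['score']:.3f})")
```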

[–] boogiebored@lemmy.world 8 points 2 days ago

I just focus on the parts of what I do that AI can actually help me with, rather than trying to say AI can replace other people but not me. That's some dumb shit.

[–] Viking_Hippie@lemmy.dbzer0.com 47 points 2 days ago (3 children)

That's also why the billionaires love it so much:

they very rarely have much if any technical expertise, but imagine that they just have to throw enough money at AI and it'll make them look like the geniuses they already see themselves as.

[–] Clent@lemmy.dbzer0.com 20 points 2 days ago

billionaires love it

They think it knows everything because they know nothing.

[–] Tigeroovy@lemmy.ca 9 points 2 days ago (1 children)

That and it talks to them like every jellyfish yes man that they interact with.

Which subsequently seems to be why so many regular ass people like it, because it talks to them like they’re a billionaire genius who might accidentally drop some money while it’s blowing smoke up their ass.

[–] sp3ctr4l@lemmy.dbzer0.com 6 points 2 days ago* (last edited 2 days ago)

I literally have to give my local LLM a bit of a custom prompt to get it to stop being so overly praising of me and the things that I say.

It's annoying; it reads as patronizing to me.
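
In case anyone wants to do the same, the gist is just a blunt system prompt. A minimal sketch, assuming an OpenAI-compatible local server (Ollama's endpoint here); the model name and prompt wording are my own examples, not a recipe:

```python
# Sketch: suppress sycophancy in a local LLM with a custom system prompt.
# Assumes an OpenAI-compatible server (e.g. Ollama at localhost:11434);
# the model name and the prompt text are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

SYSTEM_PROMPT = (
    "Never praise or compliment the user. Never call an idea brilliant, "
    "insightful, or profound. Point out flaws and weak assumptions "
    "directly, and if a claim is wrong, say so plainly."
)

response = client.chat.completions.create(
    model="llama3.1",  # whatever model the local server is running
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "I had a pretty ordinary idea today."},
    ],
)
print(response.choices[0].message.content)
```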

Sure, every once in a while I feel like I do come up with an actually neat or interesting idea... but if you went by the default behavior of most LLMs, they basically act like they're a teenager in a toxic, codependent relationship with you.

They are insanely sycophantic: they reassure you that all your dumbest ideas and most mundane observations are, like, groundbreaking intellectual achievements, and that all your ridiculous, nonsensical, inconsequential worries and troubles are the most serious and profound experiences that have ever happened in the history of the universe.

Oh, and they're also absurdly suggestible about most things, unless you tell them not to be.

... they're fluffers.

They appeal to anyone's innate narcissism, and amplify it into ego mania.

Ironically, you could maybe say that they're programming people to be NPCs, and the template they're being programmed into is 'Main Character Syndrome'.

[–] sp3ctr4l@lemmy.dbzer0.com 19 points 2 days ago

Which ironically means that they are the easiest things to replace with AI.

... They just... get to own them.

For some reason.

[–] HaraldvonBlauzahn@feddit.org 2 points 1 day ago* (last edited 1 day ago)

This has also been called the Gell-Mann Amnesia effect, and is perhaps a kind of corollary to the Dunning-Kruger effect: incompetent people fail to recognize competence.

Truly intelligent people respect the work of professionals and experts in other fields. Or maybe this is fundamentally a respect problem.

[–] db0@lemmy.dbzer0.com 13 points 2 days ago (3 children)

It's why managers fucking love GenAI.

My personal take is that GenAI is ok for personal entertainment and for things that are ultimately meaningless. Making wallpapers for your phone, maps for your RPG campaign, personal RP, that sort of thing.

[–] pulsewidth@lemmy.world 9 points 2 days ago (2 children)

'I'll just use it for meaningless stuff that nobody was going to get paid for either way' is, at surface level, a reasonable attitude: personal songs generated for friends as in-jokes, artwork for home labels, birthday parties, your examples. All fair because nobody was gonna pay for it anyway, so no harm to makers.

But I don't personally use them for any of those things myself. Some of my reasons: it's investor-subsidized compute cycles burning power somewhere (environmental); that use case will never be a business model that makes money (propping up the bubble); it dulls and bypasses my own art-making skills, which I think everyone should work on (personal development atrophy); and it builds reliance on proprietary platforms. So I'd rather just not, and hopefully see the whole AI techbro bubble crash sooner rather than later.

[–] Ilixtze@lemmy.ml 20 points 2 days ago* (last edited 2 days ago)

Ignorance and lack of respect for other fields of study, I'd say. Generative AI is the perfect tool for narcissists because it has the potential to lock them in a world where only their expertise matters and only their opinion is validated.

[–] matlag@sh.itjust.works 5 points 2 days ago (2 children)

So the only real business model here is letting people produce things they are not qualified to work on, with an acceptable risk of generating crap. I don't see how that won't be a multi-trillion-dollar market.

[–] Jason2357@lemmy.ca 5 points 2 days ago

Investors are rarely experts in the particular niches that the companies they hold shares in are applying AI to.

[–] jj4211@lemmy.world 4 points 2 days ago

produce things they are not qualified to work on, with an acceptable risk of generating crap

You just described the C-suite at most major companies.

[–] Zachariah@lemmy.world 30 points 2 days ago (4 children)

and for all the things we aren't experts in, we're unqualified to evaluate the AI's output

[–] lightnsfw@reddthat.com 14 points 2 days ago

IDK about that. I'm a professional slop maker and I think it could replace me easily.

[–] TheEighthDoctor@lemmy.zip 10 points 2 days ago (1 children)

So Gen AI is like Dan Brown: the more you know about the subject, the more it sucks.

[–] jjjalljs@ttrpg.network 16 points 2 days ago

This is why leadership loves it. They don't know shit about fuck.

[–] AeonFelis@lemmy.world 4 points 2 days ago (1 children)

Hot take: it's reasonable for a comics student to use AI for script-writing and for a screenwriting student to use AI for concept art, not because a machine can generate meaningful artistic work in these fields, but because these are not the fields they are trying to learn.

In a way, this can be used to level the field. The comics professor can use the same LLM to generate scripts for all their students. It'll be a slop script, but the slop will be of uniform quality, so no student will have the advantage of better writing, and it'll be easier to judge their work on the drawing alone.

And even if AI could generate true art in some field - why would it be acceptable for a student to use it in the very field they are studying and need to polish their own skills in?

[–] jj4211@lemmy.world 3 points 2 days ago

Yeah, the comics professor is there to grade the visuals, and the text is filler - it could be lorem ipsum for all they care. Similarly, a screenwriter using AI to storyboard seems fine, as it's not the core product.

The ideal would be cross-discipline projects bringing students together, similar to how they would be expected to work in the real world. But when individual assignments call for 'filler' content to stand in for one of those other disciplines, I think I could accept an LLM as a reasonable compromise. I would expect some assignments to ask students to go beyond their core discipline for perspective, and an LLM would be bad for that, but I can see a place for skipping the irrelevant complementary pieces of a good chunk of assignments.

[–] AnarchoEngineer@lemmy.dbzer0.com 16 points 2 days ago* (last edited 2 days ago) (3 children)

Funnily enough, all my engineering professors seem to encourage the use of genAI for anything as long as it’s “not doing the learning for you”

What’s funny is that there’s basically no practical use for GenAI in engineering in the first place. Images like technical drawings need to be precise, and code written for FDM/FEA (finite-difference / finite-element analysis) etc. needs to be validated against some kind of mathematical model you derived yourself.
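
For example, the usual sanity check is to run the numerical code on a case you can solve by hand and compare. A minimal NumPy sketch (the toy boundary-value problem is my own illustration):

```python
# Validate a finite-difference (FDM) solver against a hand-derived solution.
# Toy problem (illustrative): u''(x) = -1 on [0, 1] with u(0) = u(1) = 0,
# whose exact solution is u(x) = x(1 - x)/2.
import numpy as np

n = 101                       # grid points
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Standard second-order central-difference system on the interior points:
# u[i-1] - 2*u[i] + u[i+1] = -h^2 at each interior node i.
m = n - 2
A = (np.diag(-2.0 * np.ones(m))
     + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1))
b = np.full(m, -h**2)

u = np.zeros(n)
u[1:-1] = np.linalg.solve(A, b)

exact = x * (1.0 - x) / 2.0
# Central differences are exact for quadratics, so the error here is
# ~machine epsilon; for a real problem you'd check that it shrinks like h^2.
print("max abs error:", np.abs(u - exact).max())
```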

They say “it’s a useful new tool” and when I ask “what is it useful for” they typically have no answer besides “writing grant proposals” lol

There are lots of useful applications for machine learning in engineering, but very few if any practical applications for genAI.

[–] wewbull@feddit.uk 16 points 2 days ago

I agree with this. I tell people to ask it questions about things they know about. Then, when they see how many errors it makes, ask them why they assume it's any better on a topic they don't know about.

You see the same effect in journalism. News stories seem pretty authoritative until you read one about a subject you know.
