this post was submitted on 28 Jan 2026
459 points (99.1% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

[–] LordCrom@lemmy.world 12 points 18 hours ago

I was asked to create a simple script... Great, I could have knocked that out in maybe 3 or 4 hours.

Boss insisted I use A.I. ... Fine, whatever.

The code it spat out was OK, but didn't work... So I took it and started recoding and fixing the bugs.

It took over 3 hours to get that sloppy code to a working state.

Boss asked why it took so long; "AI works in seconds." He didn't understand that I had to fix the crap code he forced me to use.

Look, AI does pattern matching like a champ. But it cannot create... It doesn't imagine...

[–] Croquette@sh.itjust.works 17 points 21 hours ago (2 children)

It's a productivity miracle for managers because they can bullshit their way through their jobs faster and easier.

[–] Smaile@lemmy.ca 8 points 17 hours ago* (last edited 17 hours ago)

Yup, they don't realize it will replace them, not their workers. And if you are that manager reading this, remember: their goal is no middle class.

That means you.

Not your grunts, who get paid dogshit and are little more than soulless husks these days.

You.

[–] Kaz@lemmy.org 8 points 21 hours ago

This. Because all management does is communicate, they think it's amazing.

Try to get it to do complicated or edge-case things and it struggles, but management never touches complicated stuff! They offload it.

[–] Formfiller@lemmy.world 5 points 18 hours ago (1 children)

One of my college professors is mandating ChatGPT.

[–] Interstellar_1@lemmy.blahaj.zone 2 points 16 hours ago (1 children)
[–] Nikelui@lemmy.world 3 points 13 hours ago

Maybe because students are using it anyway, so it's better to teach them how to use it responsibly, check sources, and not blindly trust anything it outputs. But I am being optimistic.

[–] Itdidnttrickledown@lemmy.world 10 points 22 hours ago (2 children)

I have a simple answer for why managers think it's smart and workers think it's dumb. The managers see all kinds of documentation from workers, and to them the AI slop looks the same. It looks the same because the managers never take the time to comprehend what they are reading.

[–] bampop@lemmy.world 4 points 13 hours ago* (last edited 13 hours ago) (1 children)

I think it's more that they are faced with a soulless bullshit-generator machine with no imagination and no deep understanding, and are noticing that it can do most of the work they do. There's a lot of skill overlap with management there. So naturally they would be impressed with it.

[–] Itdidnttrickledown@lemmy.world 2 points 9 hours ago

Of course it is, and that's the point of my post.

[–] Taleya@aussie.zone 3 points 21 hours ago (1 children)

LLMs look smart if you have no idea what the fuck they're on about. And management is full of Peters.

[–] Itdidnttrickledown@lemmy.world 3 points 19 hours ago

Without a doubt. The skill set for being in management has nothing to do with intelligence. It has to do with selfish manipulation and a lack of empathy. That way you can be cruel without losing a second of sleep.

[–] RBWells@lemmy.world 4 points 20 hours ago* (last edited 7 hours ago) (1 children)

They are pushing it at my work. I spent half a day trying to train Copilot to build me a report from one PDF and one way-too-formatted Excel sheet. No go: the over-formatted Excel stumped it, and I had to clean it up first. I am booking payroll, and the fucking system we use refuses to generate a report with the whole cost: there is one report for gross-to-net, and a separate one for the employer cost that isn't available in Excel or in any format that can be put into a spreadsheet. I need to split the total into departments & job cost codes. (ETA: the payroll system also doesn't handle the job costing, so even after I get the total cost there's more manual work.)

I worked with the department who sends me this trash and, glory be, there was a CSV for the gross-to-net one. I finally wrestled Copilot into getting this right, asked it "what do I ask next time to get this result the first time", and it now does a reliable job of this, BUT:

All it's doing is making a report that the payroll system really and truly ought to be capable of producing. And, I guess, letting me honestly say, "sure boss, I use the Copilot". It's not adding anything at all, just making up for a glaring defect in the reporting available from the payroll company. Give me access to that system and I could build the report myself; it doesn't need AI at all.
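For anyone stuck with the same payroll trash: once the gross-to-net CSV exists, the whole "report" is a few lines of pandas. Here's a rough sketch; the file name and column names are invented, since your export will look different:

```python
# Minimal sketch of the department / job-cost-code report, assuming a
# gross-to-net CSV export. The file name and column names below are
# hypothetical; substitute whatever your payroll system actually emits.
import pandas as pd

# Hypothetical export: one row per employee per pay item.
df = pd.read_csv("gross_to_net.csv")

# Whole cost = gross pay plus the employer-side items the system
# refuses to put in the same report.
df["total_cost"] = df["gross_pay"] + df["employer_taxes"] + df["employer_benefits"]

# Split the total into departments and job cost codes.
report = (
    df.groupby(["department", "job_cost_code"], as_index=False)["total_cost"]
      .sum()
      .sort_values(["department", "job_cost_code"])
)

report.to_excel("payroll_by_department.xlsx", index=False)
```

Which is exactly the point: there is nothing in there that the payroll system couldn't produce itself.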

[–] echodot@feddit.uk 4 points 19 hours ago

This is my problem with AI where I work. I can use it to get the result I want (eventually) although I have to do some editing.

But I can also use the python script that has been working fine for years, which gets me 99% of the way there in 15 seconds. It would be faster but the script is terribly unoptimized because I'm not a programmer.

[–] Whats_your_reasoning@lemmy.world 20 points 1 day ago (1 children)

I don’t work with computers or coding, yet even in early childhood education/therapy some people are pushing for AI. Someone used it to make “busy scene” pictures for students to find specific things in. I hate using them. Prior to this, we used “busy scene” images that are easy to find online, full of quirky, funny details that the kids enjoy spotting.

But I can barely look at the slop images that were generated. So many of the characters have faces that look like wax figures left in the hot summer sun. The “toys” in the scene are nonsensical shapes somewhere between unusable building blocks and poorly-formed puzzle pieces. Looking at the previous, human-made pictures brought me joy, but this AI garbage is a mess that makes me sad. There’s no direction, no fun details to find, just a chaotic, repetitive scene. I bet the kids I work with could draw something more interesting than this.

[–] Hazor@lemmy.world 10 points 1 day ago

I've never understood these use cases, pushing for generative AI in places where there's already an abundance of human-made resources. Often for free. Is it just laziness? A case of "Why take 2 minutes for a Google search when I could take 1 minute for a generative AI prompt?"

[–] Reygle@lemmy.world 9 points 1 day ago

Most of my conversations with management are forced into talking them out of the heinous baloney they're convinced of because "Gemini says...". No boss, Gemini made some shit up. Scroll past it, or stop wasting my time.

[–] Reygle@lemmy.world 9 points 1 day ago (1 children)

Honestly at this point AI is bad and human critical thinking is the worst I've ever seen in my life.

I know people I expect would collapse inward without AI holding their hands, and here's the surprising part: I can't wait to see it happen. I'm really holding out for the implosions and REALLY hope they happen when I'm nearby.

[–] bridgeenjoyer@sh.itjust.works 3 points 23 hours ago (1 children)

Be the change you want to see in the world. Data centers need to disappear :)

[–] biggerbogboy@sh.itjust.works 1 points 21 hours ago (1 children)

There are incentives too. They have RAM :)

[–] Reygle@lemmy.world 3 points 21 hours ago

All the more reason to gather a gigantic group of people and ransack them.

[–] Sam_Bass@lemmy.world 4 points 1 day ago

All the sweet talk in the world ain't gonna save their jobs when their ai babies take over

[–] DarrinBrunner@lemmy.world 15 points 1 day ago (3 children)

Management never has a clue what their employees actually do day-to-day. We're just another black box to them, tracked on a spreadsheet by accounting. Stuff goes in, stuff comes out, you can't explain that.

[–] ThomasWilliams@lemmy.world 2 points 22 hours ago

It's really the middle management they don't understand, not the floor staff: the people who do all the checking and compliance, which top management now thinks can be replaced by AI.

[–] gravitas_deficiency@sh.itjust.works 22 points 1 day ago (4 children)

We just had an all hands where they were circlejerking about how incredible “AI” is. Then they started talking about OKRs around using that shit on a regular basis.

On the one hand, I’m more than a little peeved that none of the pointed and cogent concerns that I have raised on personal, professional, hobbyist, sustainability, environmental, public infrastructure, psychological, social, or cultural grounds - backed up with multiple articles and scientific studies that I have provided links to in previous all-hands meetings - have been met with anything more than hand-waving before being simply ignored outright.

On the other hand, I'm just going to make a fucking cron job pointed at a script that hits the LLM API they're logging usage on, asking it to summarize the contents, intent, capabilities, advantages, and drawbacks of random GitHub repos over a certain SLOC count. There's a part of me that feels bad for using such a wasteful service in such a wasteful fashion. But there's another part of me that is more than happy to waste their fucking money on LLM tokens if they're gonna try to make me waste my time like that.
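For the curious, the script is roughly this. It's a sketch: the endpoint, key, model name, and repo list are all placeholders, obviously not my employer's actual details:

```python
# Rough sketch of the token-burner. The endpoint, key, model name, and
# repo list are hypothetical placeholders; point it at whatever
# OpenAI-compatible API your employer is logging usage on.
import random
import requests

API_URL = "https://llm.internal.example.com/v1/chat/completions"  # placeholder
API_KEY = "REDACTED"  # placeholder

REPOS = [  # any sufficiently large repos will do
    "https://github.com/torvalds/linux",
    "https://github.com/llvm/llvm-project",
    "https://github.com/chromium/chromium",
]

def burn_tokens() -> None:
    """Ask the logged LLM API for an exhaustive summary of a random repo."""
    repo = random.choice(REPOS)
    prompt = (
        f"Summarize the contents, intent, capabilities, advantages, and "
        f"drawbacks of the repository at {repo}. Be exhaustive."
    )
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "whatever-they-pay-for",  # placeholder
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=300,
    )
    resp.raise_for_status()

if __name__ == "__main__":
    burn_tokens()
```

Drop it in cron with something like `0 * * * * python3 /home/me/burn_tokens.py` and their usage dashboard will show me dutifully hitting my OKRs every hour, on the hour.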

[–] luciferofastora@feddit.org 6 points 22 hours ago

Textbook example of "When a measure becomes a target, it is no longer a good measure" (if it ever was)

[–] acchariya@lemmy.world 13 points 1 day ago (1 children)

If you have to define OKRs to get people to use a tool, perhaps the tool is not a good investment.

[–] Snowclone@lemmy.world 3 points 1 day ago (1 children)

Sounds like starting a parallel business without AI could possibly replace this company you work for...

Not really, actually. We have a shitload of wet labs too, for genomic assays and sequencing, which is honestly probably the most expensive line item on our OpEx by far.

[–] Infrapink@thebrainbin.org 94 points 1 day ago (10 children)

I'm a line worker in a factory, and I recently managed to give a presentation on "AI" to a group of office workers (it went well!). One of the people there is in regular contact with the C_Os but fortunately is pretty reasonable. His attitude is "We have this problem; what tools do we have to fix it?", so he isn't impressed by "AI" yet. The C_Os, alas, insist it's the future. They keep hammering on at him to get everybody to integrate "AI" into their workflows, but they have no idea how to actually do that (let alone what the factory actually does); they just say "We have this tool, use it somehow".

The reasonable manager asked me how I would respond if a C_O said we would get left behind if we don't embrace "AI". I quipped that it's fine to be left behind when everybody else is running towards a cliff. I was pretty proud of that one.

[–] ravelin@lemmy.ml 14 points 1 day ago

As usual, I fear for the reasonable manager's job.

Reasonable managers usually get plowed out of the way by unreasonable C levels who just see their reasonable concerns as obstructions.

Try giving them each an Allen wrench and telling them to apply it to their daily lives to boost productivity.

[–] Strider@lemmy.world 6 points 1 day ago (2 children)

Everyone bought in on it so hard that they need to (make you/us) use it. Otherwise it will be a financial disaster. It's shit leaking down all the way.

(Of course it has uses. But it's not AGI!)

[–] wonderingwanderer@sopuli.xyz 5 points 1 day ago* (last edited 1 day ago)

In the early stages, it had potential to develop into something useful. Legislators had a chance to regulate it so it wouldn't become toxic and destructive of all things good, but they didn't do that because it would "hinder growth," again falling for the fallacy that growth is always good and desirable.

But to be honest, some of the earlier LLMs were much better than the ones now. They could have been forked and developed into specialized models trained exclusively on technical documents relevant to their field.

Instead, AI companies all wanted to have the biggest, most generalized models they could possibly develop, so they scraped as much data as they possibly could and trained their LLMs on enormous amounts of garbage, thinking "oh just a few billion more data points and it will become sentient" or something stupid like that. And now you have Artificial Idiocy that hallucinates nonstop.

Like, an LLM trained exclusively on peer-reviewed journals could make a decent research assistant or expedited search engine. It would help with things like literature reviews, collating data, and meta-analyses, saving time for researchers so they could dedicate more of their effort towards the specifically human activities of critical thinking, abstract analysis, and synthesizing novel ideas.

An ML model trained exclusively on technical diagrams could render more accurate simulations than one trained on a digital fuckton of slop.

[–] SpookyBogMonster@lemmy.ml 4 points 1 day ago

My workplace was holding the yearly meeting where they lay out a bunch of rules that get followed for a month, and then get forgotten about.

And one of the things in question was attendance. The boss smugly says, "We have an AI tracker that can tell us if you've come in late."

I can't think of anything that could give me less faith in the accuracy of such a system.

[–] lechekaflan@lemmy.world 1 points 20 hours ago

Always those fucking suits.

[–] fibojoly@sh.itjust.works 16 points 1 day ago

Our new tech lead loves fucking AI, which lets him refactor our terraform (I was already doing that), write pipelines in GitLab, and lots of other shiny cool things (after many, many, many attempts, if his commit history is any indication).

Funnily, he won't touch our legacy code. Like, he just answers "that's outside my perimeter" when he's clearly the one who should be helping us handle that shit. Also it's for a mission critical part of our company. But no, outside his perimeter. Gee I wonder why.

Because they asked AI to tell them?

[–] primalmotion@lemmy.ml 72 points 2 days ago (2 children)

This tells us one very important thing: the only job that could be replaced by AI is the executive's. Shocking.
