this post was submitted on 26 Apr 2026
844 points (99.6% liked)

Fuck AI

7037 readers
838 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
all 41 comments
[–] unexposedhazard@discuss.tchncs.de 100 points 2 weeks ago (2 children)
[–] diabetic_porcupine@lemmy.world 0 points 2 weeks ago (1 children)

It makes sense to me… I use a very heavy framework that ensures my agent doesn’t lose context about the systems I develop. But that means every change goes through a big long pipeline and if all you want to do is change a few lines of code then maybe a junior dev is the right fit for that specific task?

[–] xErah@anarchist.nexus 2 points 2 weeks ago (1 children)

Agentic coding would still have the context issues of changing code, whether it's AI or a human: somebody changed something, so how do you record that for the next person? You either log it in memory or point it to the git PR; either way it needs to surface the changes.

So yeah, if AI is too expensive to code small problems for a given company, then it's too expensive for them period.

[–] jumperalex@lemmy.world 50 points 2 weeks ago (2 children)

For those asking whether, or pronouncing that, this has to be a joke: perhaps. But not for long. AI is still not making a profit. So whatever it costs today at the growth-at-all-costs subsidized rate, think how much more expensive it will be when investors start insisting on profit after market consolidation*.

Because if you think there is a competitive barrier to entry for smartphones, operating systems, CPUs, and streaming services, you ain't seen NOTHING yet

[–] WanderingThoughts@europe.pub 12 points 2 weeks ago (1 children)

They're spending $3 per $1 of revenue. The price per token will rise dramatically.

[–] very_well_lost@lemmy.world 8 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

I suspect it's even worse than that.

A Claude Max subscription is $200 a month, which is roughly $7 a day. I'm forced to use Claude Code at work, and I frequently run the /usage command out of morbid curiosity to see how many tokens I'm wasting. I'm not exactly a power user, but even with my bare-minimum usage I typically burn about $50 of tokens per day — so roughly 7 times that $7-a-day figure.

And that's just based on Anthropic's official per-token pricing, which itself is almost certainly subsidized... so it's likely I'm costing them something closer to $20-$30 for every $1 of revenue. And again, I'm only using it for the bare minimum. I know people with much higher usage than me.
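The arithmetic in the last two comments can be sanity-checked in a few lines. All figures are the commenters' own estimates, not official numbers: the $200/month subscription, the ~$50/day token burn at list price, and an assumed 3–4x subsidy on that list price (in line with the "$3 per $1 of revenue" figure upthread).

```python
# Back-of-envelope check of the subsidy math above.
# Every number here is a commenter's estimate, not an official figure.
subscription_per_month = 200.0                  # Claude Max list price, USD
revenue_per_day = subscription_per_month / 30   # ~ $6.67/day of revenue
list_price_burn_per_day = 50.0                  # estimated tokens consumed, at list price

# How far daily usage outruns daily revenue, at list price:
ratio_vs_list = list_price_burn_per_day / revenue_per_day

# If the list price itself only covers a fraction of the true compute
# cost (assumed 3x-4x subsidy), the real cost per $1 of revenue is:
true_cost_low = list_price_burn_per_day * 3
true_cost_high = list_price_burn_per_day * 4

print(f"burn/revenue at list price: {ratio_vs_list:.1f}x")  # 7.5x
print(f"implied cost per $1 of revenue: "
      f"${true_cost_low / revenue_per_day:.0f}-${true_cost_high / revenue_per_day:.0f}")
```

Under those assumptions the implied cost lands around $22–$30 per $1 of revenue, which is consistent with the "$20–$30" estimate above.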

[–] jj4211@lemmy.world 2 points 2 weeks ago (1 children)

Similar boat. Lots of people are in the loss-leading phase; "the first hit is free/cheap" is in play, and I know development organizations that are explicitly designing hard dependencies on hosted AI as the "foundation" of their processes. They are going for mainframe-style lock-in, where customers are too scared to change when it gets too expensive.

I know one organization that at least is being more careful, their LLM usage is only based on whatever they can run indefinitely on-premise without sweating future traps around pricing changes.

[–] very_well_lost@lemmy.world 2 points 2 weeks ago

Yeah. I would prefer to work at an organization that uses no AI at all, but at the very least companies should be moving to on-prem local models so that they don't suddenly get rug-pulled the moment the investment cash runs out and AI firms are forced to raise prices or die.

I've been yelling this at my own org for months, but no one cares.

Move fast and break things, even if the thing you're breaking is your own fucking business...

[–] takeda@lemmy.dbzer0.com 6 points 2 weeks ago

We also have that energy shortage problem right now.

[–] davetortoise@reddthat.com 46 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Except now everyone is getting paid less

[–] takeda@lemmy.dbzer0.com 35 points 2 weeks ago (1 children)

This is exactly what it is for.

It is to scare tech workers to accept lesser salary.

LLM is just great at fooling people to think it is greater at something than it actually is.

[–] tmyakal@infosec.pub 2 points 2 weeks ago

LLM is just great at fooling people to think it is greater at something than it actually is.

Oh, so AI is coming for my job specifically.

[–] hayvan@piefed.world 24 points 2 weeks ago

Congrats to LLM agents on the promotion. One day maybe they'll even make it to management.

[–] LinkeSocke@feddit.org 22 points 2 weeks ago (1 children)
[–] deadbeef79000@lemmy.nz 16 points 2 weeks ago (3 children)

It is a joke. But also entirely plausible.

[–] jj4211@lemmy.world 8 points 2 weeks ago

Well, it's plausible but not 'just' for simple code.

Generally, if the operator is dead set on the AI sorting it out and the AI gets into a loop of failure, it burns through tokens and turns what should have been a cheap modification to a codebase into a multi-thousand-dollar failure in a fairly short time. The more extraneous code there is for it to incidentally mess with, the more likely it breaks test cases and goes back to perturb the codebase again, hoping to fix them but just breaking a different set of test cases.
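A toy simulation of that failure loop. All the numbers are purely illustrative assumptions, not measurements from any real agent: a hypothetical $12 of tokens per retry cycle, a 5% chance each retry passes all tests, and an operator who pulls the plug at $3,000.

```python
# Toy model of the retry loop described above: each attempt burns a
# fixed amount of tokens, and only occasionally passes every test.
# COST_PER_ATTEMPT and P_SUCCESS are invented for illustration.
import random

random.seed(0)
COST_PER_ATTEMPT = 12.0   # assumed dollars of tokens per full retry cycle
P_SUCCESS = 0.05          # assumed chance a retry passes all tests
BUDGET_CAP = 3000.0       # point at which the operator gives up

attempts, total_cost = 0, 0.0
while True:
    attempts += 1
    total_cost += COST_PER_ATTEMPT
    if random.random() < P_SUCCESS:
        break              # tests finally pass
    if total_cost > BUDGET_CAP:
        break              # operator pulls the plug

print(f"{attempts} attempts, ${total_cost:.0f} in tokens")
```

The point is only that retry loops make cost multiplicative in the failure rate — a "cheap" change priced per attempt can still total thousands of dollars once the agent starts chasing its own broken tests.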

[–] Diplomjodler3@lemmy.world 3 points 2 weeks ago

Not really. Local models are pretty decent for simple tasks. The hardware to run them costs less than a month's salary.

[–] aarRJaay@lemmy.world 22 points 2 weeks ago (3 children)

Junior Developer doesn't boil oceans to do their job

[–] massive_bereavement@fedia.io 10 points 2 weeks ago

Not with that attitude.

[–] mrgoosmoos@lemmy.ca 6 points 2 weeks ago* (last edited 2 weeks ago)

depends. do they work in corporate or consulting?

[–] Lifter@discuss.tchncs.de 2 points 2 weeks ago

We don't usually take into account what employees do to nature in their spare time.

In a way, we would have to account for all the money spent on employees and what it ends up being used for.

Getting rid of an employee would surely be a net gain for the environment, right? If an LLM could offset 50 employees, perhaps that would be neutral in that sense.

This doesn't make a lot of sense, especially morally. We can't really blame the company for what employees do outside of work, can we?

[–] TheReturnOfPEB@reddthat.com 13 points 2 weeks ago

that is not a morale building tweet

[–] Banana@sh.itjust.works 1 points 2 weeks ago

Capitalism...so innovative

[–] Tollana1234567@lemmy.today 1 points 2 weeks ago (1 children)

no they will just hire a senior engineer, and have a bunch of low-wage workers from eastern europe or south america do the work.

[–] ddplf@szmer.info 1 points 2 weeks ago

You ain't getting no cheap labour from eastern europe these days!