this post was submitted on 19 Feb 2026
290 points (99.7% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.


Similarly, in research, the trajectory points toward systems that can increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the “routine” responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register.

top 33 comments
[–] UnspecificGravity@piefed.social 35 points 1 day ago (4 children)

Honestly, a lot of these predicted problems with AI are actually overly optimistic, because they assume AI can actually DO the work we're talking about, and the current state of AI very much cannot.

I sometimes think that articles like this are plants by the AI industry to create the narrative that their shit is even capable of causing this problem.

[–] floquant@lemmy.dbzer0.com 18 points 1 day ago (1 children)

What people think AI can do is a problem, even (especially) when it cannot.

[–] UnspecificGravity@piefed.social 11 points 1 day ago (1 children)

I would say that THIS is the biggest risk of AI. It's not what it does, it's what people believe it does. Especially people who aren't capable of actually assessing its performance.

[–] regedit@lemmy.zip 2 points 17 hours ago

Or C-level execs who are so out of touch with what their employees do that they're convinced it can and/or should replace one or all of an employee's job duties.

[–] chisel@piefed.social 4 points 1 day ago (3 children)

The problem is AI replacing essentially the busy work that brand-new workers have historically tended to be assigned. AI, in its current state, absolutely can do a lot of this work and replace the need for junior employees in a lot of cases. The problem is, junior employees need this work in order to improve, so if we delegate all of it to AI, there will be no future senior employees to do the more advanced work that AI can't do.

[–] rebelsimile@sh.itjust.works 3 points 1 day ago* (last edited 1 day ago)

I think it's actually just the perception that it's doing that, from people who are so far away from the work that they don't have any clue.

Like 300 years ago, if you wanted to be a sailor, maybe you started by rowing oars, or swabbing decks, or dealing with ropes and rigging or whatever. If you’re a 19-year-old seaman in 2026, you’re probably using a throttle controller that’s 26 different systems disconnected from the actual mechanical work. No one says “oh, that guy’s not a novice seaman” or “the boat literally drives itself hurr”.

If you’re a graphic designer in 1970, you’d cut out and hand-lay a magazine page on a big glowing table, page by page, resetting the type so it fit alongside giant physical images so you could design the page. In 2026, you’d use Illustrator and lay it out on your computer. No one says “oh, the magazines just lay themselves out” or “that’s not how you lay out a magazine”. It’s just done with the tools available.

What’s going on is very few people seem to understand the difference between a tool and a solution. A tool is a thing that does something. A hammer is a tool. A solution is something that solves a problem. A hammer is a tool but it is not a good solution for a crying baby. Applying random tools to non-problems does not generate solutions randomly, it just creates even more intractable problems.

[–] UnspecificGravity@piefed.social 4 points 1 day ago (1 children)

It probably depends on your industry but it absolutely cannot even do entry level work in a lot of fields.

[–] chisel@piefed.social 3 points 1 day ago

I feel like that goes without saying, yeah. AI can do a lot of a junior software engineer's job, but it's going to fail miserably at being a junior house cleaner.

[–] Bustedknuckles@lemmy.world 2 points 1 day ago

I agree, and that's a great way of putting it. We're kneecapping ourselves collectively because enough individual companies are deprecating the junior dev experience. We'll see if it holds up when senior devs are in such short supply that companies have to pay them 4x the margin they saved on junior devs. I think they're hoping that the machine learning gets good enough to do senior dev work before the humans retire. Or else they're just line-go-up types.

[–] Ephera@lemmy.ml 2 points 1 day ago

LLMs do tend to be pretty good at textbook problems, because they've been trained on the textbooks. We have working students at $DAYJOB, who tell us that you can often get a flawless grade by handing in something AI-generated.

But then, yeah, you don't learn anything, and that will become a problem sooner or later, because none of the problems at work are textbook problems.

[–] Virtvirt588@lemmy.world 2 points 1 day ago

I do agree; it may be that the situation is exacerbated to the point where doomerism is essentially encouraged.

The thing is, there was an article where the students themselves didn't want AI on the mandated Chromebooks. It feels force-fed, and the fact is, nobody, no matter their age, wants garbage screwing up their workflow.

Honestly, a lot of these articles are repeating the same story. In the end it all leads to a similar conclusion - and it isn't age related.

[–] bridgeenjoyer@sh.itjust.works 13 points 1 day ago (1 children)

Can you imagine being a kid today? I'd be utterly depressed. Thank goodness I'm old.

[–] supersquirrel@sopuli.xyz 12 points 1 day ago (1 children)

Nah man, it's just like... social media that is making kids depressed, it isn't that everything is metaphorically but also literally on fire or anything.

Ha, true. I guess all I can do is try to teach the youngs to get out from the grip of the corpo net and become tech literate. That alone would solve a massive number of issues and give corps wayyyy less power.

Then also teach them to look inward, have empathy, and think critically.

It doesn't seem like a lot...

[–] Entertainmeonly@lemmy.blahaj.zone 10 points 1 day ago (1 children)

Universities are systems of practice; they are not information factories.

I like that so much.

Yeah, it's a really eloquent way of phrasing it.

It's something I've been thinking about a lot lately, because I have a lot of friends who are doing PhDs at the moment. It's interesting because especially at this stage, their actual research output isn't the point. Like, ofc publishing your research is a key part of the process of getting a PhD, but that's almost like an incidental byproduct of the process — the actual primary product is a person who is knowledgeable and experienced enough in the academic process that they can be trusted to be a part of the system.

(Tangentially, something I've been pondering lately is that I think Wikipedia works similarly, in that the encyclopedia itself isn't the point, but rather the robust systems of editor organising and social infrastructure is the "real" product of value, and the encyclopedia is just a byproduct that exists downstream of the system of practice)

[–] Ludicrous0251@piefed.zip 10 points 1 day ago (3 children)

I mean, that's always been the problem with cheating in higher education.

Nobody is actually harmed because you copied whole sections of your history essay from a friend who took the course 3 years ago or glanced at someone else's math test. Using AI to do these things is no different.

[–] Mac@mander.xyz 6 points 1 day ago (1 children)

Yep. I said as much in a survey by my college.

They had two applicable questions: "How can we best utilize AI?" and "What opportunities does [school] have in regards to AI?" (or smth).
I told them: The best opportunity is to spearhead the effort to maintain human-focused learning. Offloading cognitive work to a machine is going to impair the learning itself and even the ability of the students to learn at all. Isn't learning the point of an educational institution?

The world has gone mad and it's making me mad.

[–] supersquirrel@sopuli.xyz 5 points 1 day ago* (last edited 1 day ago)

I love that you said that. And to bring this back to a deeply political point: the entire framing of AI assumes we have a severe shortage of human minds, when in reality the minds we already have are not being used to anywhere near their full potential.

You can't argue that enhancing two college students with 10 academic AIs is necessary because there aren't enough prospective students while we simultaneously cast immigrants back into the ocean as they try to immigrate to our countries and attend our universities, to start lives they could never have imagined back home (and whatever their reason for coming, does it really matter 99.999% of the time?). You can't argue the use of AI is necessary because we just don't have enough intelligence when there are homeless people everywhere who are given no outlet by society to use their minds productively, despite abundant evidence that structuring society this way only hurts ALL of our potential for intelligence, both individually and collectively.

You can't argue that we need to pursue intelligence as a virtue over all else, whatever the hell that means anyways... and then ignore the incredible dehumanization so many people feel in their workplaces, dehumanization that materially diminishes their potential for applied intelligence in a collective organization.

I am disgusted by this bifurcation of our valuing of human beings and our valuing of intelligence, irrespective of how cool and interesting the idea of a sentient computer is in the abstract. I love Data, but Data would be shitting on all of the computer-science-focused people I see featured prominently in society and who are given billions and billions of dollars to piss away in vain cathedrals of bullshit. Data wouldn't have time for this; he would be out punching AR-15-toting ICE thugs in the face as they attempted to kidnap children, because in the end it is always children who are the source of intelligence. After all, if adults already knew how to do it, then it wouldn't be labelled with the accolade of "intelligent" in the first place, right? The thing with intelligence is it just takes a bit of time and care... something these "masters of intelligence" seem to have a pretty artificial understanding of.

[–] RustyNova@lemmy.world 4 points 1 day ago

Actually, there is someone who is harmed: the one who cheated. You're here to learn this stuff. If you're there just to cheat, just drop out.

[–] Bustedknuckles@lemmy.world 3 points 1 day ago (2 children)

I agree but the scale is different. When 10% of new grads are useless drones, society can bear the burden and shuffle them around. When it's 70%, we have a real problem (or opportunity for fascists).

Also, some cheaters were really creative. One dude wrote a cheat sheet on the inside of a plastic soda bottle label so that he could tilt the bottle and read the notes, untilt it and the soda would hide them. That kind of cheating is real problem solving!

[–] AnarchistArtificer@slrpnk.net 4 points 1 day ago (1 children)

My favourite cheating story was when a friend was permitted to take a couple of revision cards of notes into her final exam (as much as you could fit on the cards — one dude took a microscope into his exam, but that is fairly common, apparently). My friend had a form of synaesthesia that meant that whenever she saw letters, she saw colours. Each letter (and number, I think) had its own distinctive colour.

So what she did was write her notes in colour, allowing her to encode an entire additional layer of information. Let's say the letters in the word "carbon" appeared to her as red, orange, yellow, green, blue and purple; then she could write the word "oxygen" with the "o" in red, the "x" in orange, the "y" in yellow, etc., and end up with something that a normal person would read as "oxygen", but she would be able to read as "oxygen" and "carbon" simultaneously. Apparently it took work to be able to efficiently read two layers of information at once (or to focus on one layer and not be distracted by the other), but she started playing around with this back in high school. She told me that the hardest part of this process was finding some coloured fineliners that were precisely the right colour for each letter.

However, she found that she was unsatisfied with the amount of extra information she was able to encode in this way. So instead, she broke down each letter into multiple chunks. If she wrote the letter "o" in "oxygen" using 3 different colours (red, orange and yellow), and the "x" with green, blue and purple, then she had managed to encode the entire word "carbon" into the space of only two letters. In the end, I think she was able to fit 6-8 times the information density into her permitted notes.

But the funniest thing about this is that producing these notes took so much effort and focus that she accidentally learned the content so well she didn't even need the notes. Task failed successfully, I guess? (If the task was writing some useful notes using this weird brain quirk of hers.) She was salty at first at the wasted effort of making the notes, but I think she was glad to get to have such an absurd project.

I can't imagine what it must be like to perceive the world like that. It really cooks my brain. I remember I once wrote down a word in regular black ink, and asked her what colours it appeared as. Then I wrote down the same word but in red ink, and asked her if she could tell that it was red, and whether she could simultaneously still see the same colours as before. She told me that yes, she could, and honestly, my mind is blown anew every time I think of this.

Gosh, that was longer than I expected. It was fun to write though. I hope at least one person finds it fun to read too.

[–] altasshet@lemmy.ca 1 points 22 hours ago

I don't have synesthesia, but writing your own notes / summarizing the text to help absorb the information is a valid learning technique.

The extra layer of complexity from perceiving colors differently sounds super trippy though!

[–] jaybone@lemmy.zip 2 points 1 day ago

I’ve heard of writing on the inside of a water bottle label. But the soda is genius, as you have to tilt the bottle to reveal it; otherwise it remains hidden.

[–] minorkeys@lemmy.world 9 points 1 day ago (1 children)

The dependency we will develop on AI will enable a kind of leverage over entire populations that borders on a national security risk.

[–] AAA@feddit.org 3 points 1 day ago (1 children)

Which is why some nations will outright ban it. And others will sleep on the risk until it is too late. It's the "should we do anything about this social media stuff"-discussion all over again... but on steroids.

We can see it already: the number of people who rely on it for... literally any information request. And it's freakishly hard to fight against it, mainly because the same companies which push those AI products are actively making traditional information sources worse (e.g. Google Search).

[–] minorkeys@lemmy.world 2 points 1 day ago

I feel the effect when it happens, as I am required to use AI in my work. I have to acknowledge it and take steps not to offload memorizing and analyzing things solely to an AI assistant. Mitigating this impact takes time and effort. The danger is that AI is good enough that the gains in speed may outweigh the risks and the cost of errors. If the efficiency is high enough, meeting the performance output standards required to hold a position at a company will not be possible without using AI, and doing so in a way that makes it impossible to mitigate the formation of dependencies. People will have to use AI in a way that ensures dependency in order to have the job. The costs are borne by the workers, the benefits reaped by the owners.

That's the leverage that will reshape our society: we will be forced to work in ways that make us worse at learning, memorization and analysis. Ask a product owner a question and they have to reach for AI, because the environment makes it impossible to have the answer without it. And if they can get the answer from AI, so can the person asking the question. So with AI adoption, leadership, decision making and expertise are all transferred to ownership, decimating those middle roles between implementation and ownership that the entire office environment is built around. They're also trying to use it to replace implementation as well, as with software engineers.

For business, AI is a massive opportunity to reduce their dependency on human labour, while making the remaining labour dependent on AI. It's a nightmare for a society and for human beings. If robotics manages to accelerate alongside it, then what is a population even capable of doing to protect itself from the harm of this corporate empowerment? No jobs, no money, no legal access to resources, and facing an autonomous robotic security system protecting those resources?

History shows that self-interest and concentrated power lead to mass suffering. I don't see how these new technologies, in the hands of private power, will produce anything different this time.

[–] queermunist@lemmy.ml 9 points 1 day ago

Higher education was already eroded when Reagan ended free tuition at public colleges and forced everyone to pay.

They're just killing it off.

[–] Zink@programming.dev 5 points 1 day ago (1 children)

In modern society it seems like our ideal use of technology is to insulate us from the natural world and get rid of the everyday tasks we have to do. If it adds to comfort and convenience, it's generally successful.

But I think this is bad for our health. When you think about our hunter-gatherer ancestors, those are people who evolved to walk around the forest all day constantly being busy. We don't necessarily need the same sunshine, fresh air, exercise, and full range of sensory inputs in order to have fulfilling lives. But I bet that stuff is a huge help for the vast majority of us. Our privilege as modern humans is that we can pick and choose what we spend our time on, in the form of hobbies.

Removing the process of learning is like the meta, higher-level version of removing the day-to-day work from our lives.

So if AI ever gets so good that we're fine not learning shit and trusting in the quadrillion-dollar black box, I hope that means we end up in the post-scarcity Star Trek future; otherwise I fear it will only get worse from here.

[–] WoodScientist@lemmy.world 4 points 1 day ago (1 children)

Maybe we'll all just go back to hanging out and playing games and music together. Ancestral hunter gatherers worked only a fraction of the hours we do.

[–] Zink@programming.dev 1 points 1 day ago (1 children)

Yeah, that's a good point. They did a lot more work on their own needs, their surroundings, and their family/tribe. They did zero work grinding away at some mind-numbing task to make somebody else rich.

[–] WoodScientist@lemmy.world 1 points 1 day ago

Also, most modern hunter gatherer groups lived lifestyles that a lot of early western anthropologists derided as 'lazy'. But they were actually practicing a highly evolved skill set attuned to their environment.

The thing about gathering is that the land has a fixed sustainable population level. With farming, you can get more food by working a bit harder. But with hunting/gathering, the land supports what the land will support. Overhunting today just leaves less game to hunt tomorrow. And if the plants and animals you're acquiring are spread thin enough, then leaving camp to gather them can actually be net calorie negative. Spend all day hunting a single mouse, and you'll burn more calories doing that than you'll get by eating it.

Over the millennia, by natural selection, early humans evolved cultural practices that forced them to live sustainably. Those groups with cultures that stripped the land bare all died of hunger.

So cultures evolved to have a lot of down time. Sitting around a fire telling jokes and stories isn't "productive," but it also doesn't burn many calories. Humans are highly productive hunters and foragers. If we work too hard at it, we strip the land and die of hunger. It's the same reason lions spend most of their time sleeping. The go-getter lions that want to max the grind all die of hunger.

So instead hunter gatherers would tend to grow their numbers up to near the carrying capacity of the land and then use a lot of downtime to keep from overhunting and overforaging. Western anthropologists saw this as being lazy, but they were applying concepts of labor that only made sense in agrarian and industrial societies.

[–] Blackfeathr@lemmy.world 1 points 1 day ago

Ironically, this article reads like AI wrote it, complete with the overuse of "It's not X, it's Y."