this post was submitted on 12 Apr 2026
162 points (86.2% liked)

Futurology

4181 readers
285 users here now

founded 2 years ago
[–] collapse_already@lemmy.ml 11 points 10 hours ago (1 children)

At my company, the GenX programmers want to force the new hires to learn to code and debug before they're allowed to use AI. The newbies, meanwhile, are clamoring for AI. Management gave them access, so we expect their development to be hindered. At least they can write more bugs per week now. Their SLOC metrics are probably better than those of us experienced folks, because we don't trust the AI at all. Management will probably lay off everyone who knows how to fix bugs soon.

[–] SleeplessCityLights@programming.dev 5 points 8 hours ago (1 children)

You forgot the best part. None of those juniors are going to learn a thing. Their skills are going to regress. A line will be drawn between those who learned to code before AI and those who learned after, and the skill gap between those sides will be incredible. Right now, code bases have people who understand large parts of them and can make design decisions based on that context. When people offload that work to AI, which can only handle a tiny amount of context compared to a human, nobody will understand code bases anymore and it will be chaos.

[–] pinball_wizard@lemmy.zip 2 points 6 hours ago (1 children)

The skill gap between those sides will be incredible.

Yes. And we see this skill gap between folks who learned to code before web frameworks, vs after, as well.

[–] collapse_already@lemmy.ml 2 points 5 hours ago

Yes, and it is so frustrating. Last week I was tearing into a stack dump from a crash and one of the entry level kids was watching me. I immediately identified a bad pointer and walked the stack back to the function where it originated and determined that the pointer array index was out of bounds. I might as well have been practicing witchcraft. He had no sense of what a valid address looks like, nor did he understand why that bad address would lead to a bus fault that would throw an exception. The best thing about this particular kid is that he listens and learns. He still wants to code with AI, but he knows the geezers have skills he needs. Probably my favorite among our current crop.

When I came out of school, I had experience in multiple assembly languages, operating system theory, compilers, and computer architecture. All areas where his knowledge is lacking. I am sure he knows lots of things I don't, but I haven't done a great job of identifying areas where those skills are applicable. I am pleased with his willingness and aptitude to learn. He'll be fine, but I don't have that confidence in a lot of them.

(I should remember this post when I have to write performance feedback for him.)

[–] Jankatarch@lemmy.world 14 points 13 hours ago* (last edited 13 hours ago) (1 children)

Was AI an overhyped application of statistics and not the magical construct that would make all of us billionaires overnight?

Nah, people must be sabotaging it!

[–] unbanshee@lemmy.dbzer0.com 1 points 9 hours ago (1 children)
[–] iltoroargento@startrek.website 2 points 8 hours ago

I mean, tbh, I'm doing everything I can to bring attention to its shortcomings at my workplace. I think there are a lot of us.

[–] sem@piefed.blahaj.zone 17 points 21 hours ago
[–] RedSnt@feddit.dk 14 points 21 hours ago* (last edited 21 hours ago)

I'm impressed they wrote that whole article without going into the story of the Luddites. Or maybe I think about the Luddites too much...

Ah, the report as linked is from "writer.com", aka Writer Inc., a "generative artificial intelligence company based in San Francisco". To nobody's surprise, there are a lot of em dashes in it.

The biggest crime is perhaps that the whole PDF report is just pictures. You can't highlight any text or search in it.

[–] pjwestin@lemmy.world 39 points 1 day ago (2 children)

Boomer and Gen X middle-managers watching their AI rollouts fail because the technology's efficiency and benefits have been vastly oversold.

"Clearly, the Zoomers are sabotaging us."

[–] RamenJunkie@midwest.social 5 points 12 hours ago (2 children)

We have a new AI team at work to find ways we can use AI for our work.

I cannot think of a single thing for what we do (manage data center hardware). We don't configure it in any meaningful way where AI might be useful.

Like, we have these once-a-month logs we process; maybe that, but I already wrote and distributed an app (like ten years ago) that handles it, since it's simple X-to-Y data processing. The whole job already takes just 5 minutes now.

[–] lepinkainen@lemmy.world 1 points 9 hours ago (2 children)

5 minutes to analyse a month of logs?

Either that’s the most efficient log parser I’ve ever seen or you don’t log very much 😅

[–] Arcka@midwest.social 4 points 8 hours ago

If you're just looking for something specific, even command line tools can be hundreds of times faster than general data processing applications.

[–] RamenJunkie@midwest.social 4 points 9 hours ago* (last edited 9 hours ago)

It's very specific data that it's looking for: transmits and receives for EAS, and where they came from.

[–] funkless_eck@sh.itjust.works 1 points 11 hours ago (1 children)

Hyper-personalize what stock can go into DR or off the bleeding edge, based on known EOL dates and maintenance cycles from OEMs?

[–] RamenJunkie@midwest.social 1 points 9 hours ago

EOL dates? LOL, we have servers that are like 15 years old now.

[–] HubertManne@piefed.social 3 points 1 day ago

Pft. I'd push middle management up to millennials. Even young boomers are retired now.

[–] givesomefucks@lemmy.world 79 points 1 day ago (3 children)

People keep spreading this...

Because they're not smart enough to realize it's pro-AI propaganda put out by AI companies...

A new report published Tuesday from enterprise AI agent firm Writer and research firm Workplace Intelligence finds a significant share of employees are actively trying to sabotage their company’s AI rollout. The report—a survey of 2,400 knowledge workers across the U.S., the U.K., and Europe, including 1,200 C-suite executives—found 29% of employees admit to sabotaging their company’s AI strategy. That number jumps to 44% among Gen Z workers

They need an excuse for why it's not working, so they're blaming junior workers, knowing CEOs will come to the conclusion "just fire more people".

Even the way they're phrasing this makes it sound like the only reason an employee doesn't like AI is that they're a "hater" scared of losing their job.

Do people legitimately not understand any of this? It seems incredibly obvious, but this is like the 20th article I've seen, and I don't know why people keep spreading this shit.

[–] supersquirrel@sopuli.xyz 16 points 1 day ago

Yes, this tendency is really dangerous in my opinion.

[–] PixelatedSaturn@lemmy.world 5 points 1 day ago (2 children)

It's not about looking for a scapegoat yet. It's about CEOs actually not understanding why it's not working.

I have such a situation at my work. All the top management know AI only at a level where it seems everything is possible. It's a beautiful level; I remember being at that level, so nice. For a while I tried to explain where the limits are, but I was dismissed as a naysayer every time. So I adapted and decided to get back on that train officially, but route most of my work to where it makes sense.

[–] mrgoosmoos@lemmy.ca 1 points 7 hours ago

currently going through this at my small company. the owners seem to think it's great - one of them has been playing around with it creating various tools for the past couple years. to be fair, the last thing he's been working on has actually been rather impressive. the other guy only just started using it and I think he's in the honeymoon phase. still, it's a bit worrying.

I've asked when I can get access to the same tools, but they haven't been rolled out to the teams yet. From what I've seen of the actual use cases for us (consolidating standards documents, pulling information out of standards documents, creating spec sheets and requirements documents, etc.), it is not really worth it, since everything has to be validated anyway.

from my perspective of not being able to use the same tools myself, it still seems like just a search engine to me. a better ctrl+f. which isn't to say it's a bad tool, though definitely an inefficient one.

[–] givesomefucks@lemmy.world 9 points 1 day ago (1 children)

It's about CEOs actually not understanding why it's not working.

Half the respondents are from the c-suite...

And the question asked wasn't "are you doing this" it's "do you believe people are doing this".

I literally quoted it because I knew people still wouldn't read the source, but here we are.

[–] Diurnambule@jlai.lu 1 points 13 hours ago

I didn't read the article and am grateful for the context :D <3

[–] mineralfellow@lemmy.world 0 points 1 day ago (2 children)

I would be curious how they phrased the questionnaire and how it is being interpreted. Surely they didn't have a question like "Are you trying to sabotage AI?" It must have been something more benign whose meaning was twisted by the marketers.

[–] RamenJunkie@midwest.social 0 points 12 hours ago (1 children)

Probably something like "Have you used AI tools to help develop efficiency at your job?"

And people say no, because they have no use for it, so it gets interpreted as "sabotage".

[–] givesomefucks@lemmy.world -1 points 19 hours ago (1 children)

I would be curious

Really?

Most people would then have followed the breadcrumbs and checked.

Why did you care enough to type that out, when clicking two links to find the answer was so easy and you could have found out immediately?

[–] mineralfellow@lemmy.world 0 points 11 hours ago

When I click the link, I see one sentence, a video that doesn't load, an ad, and a demand to subscribe.

[–] Stefan_S_from_H@piefed.zip 47 points 1 day ago (3 children)

Are they really sabotaging it, or is it just not working as promised?

[–] otter@lemmy.dbzer0.com 2 points 12 hours ago

Shhh, you're gonna ruin the propaganda!

[–] paraphrand@lemmy.world 10 points 1 day ago

Why not both?

[–] AlecSadler@lemmy.dbzer0.com 2 points 1 day ago

Depending on the application and the way it is used, AI absolutely works as promised.

But those instances are few and far between, and the general hype and expectations are far above what it actually yields.

As someone who uses AI daily to produce faster and better, I absolutely hope the bubble pops and shit collapses.

[–] Zorque@lemmy.world 36 points 1 day ago (1 children)

"The AI output is always so shitty... the workers must be the problem, they're clearly sabotaging our obviously perfect way to make perfect profits!"

[–] BeigeAgenda@lemmy.ca 7 points 1 day ago (1 children)

The problem is that LLMs sometimes get the right answer, and then you're like Wow, this is the best! And the next minute you're thinking It must be me not giving enough context? Let me try a different model, which then also fails.

[–] OpenStars@piefed.social 6 points 1 day ago

Intermittent reinforcement is literally the most powerful form of conditioning there is.

People who are not aware of their biases are mastered by them.

Stay curious folks!

[–] pelespirit@sh.itjust.works 22 points 1 day ago (4 children)
[–] Feyd@programming.dev 8 points 1 day ago

Sabotaging AI strategy = getting things done the way that works instead of following the top-down directive that doesn't

A tale as old as time

[–] Lawnman23@piefed.social 11 points 1 day ago

Good for Gen Z.

[–] njm1314@lemmy.world 8 points 1 day ago
[–] Whitebrow@lemmy.world 8 points 1 day ago

Nobody is sabotaging anything that manages to shit itself half a step into any task.

[–] Lugh@futurology.today 7 points 1 day ago

This is entirely expected and a foreshadowing of a world to come. Prediction? When robo-taxis are everywhere they'll be a target, too. Driving jobs are one of the last refuges in our economy to earn money when people are out of other options.

[–] swordgeek@lemmy.ca 5 points 1 day ago
[–] reallykindasorta@slrpnk.net 4 points 1 day ago

The research firm sounds really scientific and reliable on their website!

[–] 0ndead@infosec.pub 2 points 1 day ago

It’s not about fear, Fortune

[–] NigelFrobisher@aussie.zone 0 points 1 day ago

Why’s there a picture from the 70s though?