this post was submitted on 12 Feb 2026
Technology

When people ask me what artificial intelligence is going to do to jobs, they’re usually hoping for a clean answer: catastrophe or overhype, mass unemployment or business as usual. What I found after months of reporting is that the truth is harder to pin down—and that our difficulty predicting it may be the most important part of

https://web.archive.org/web/20260210152051/www.theatlantic.com/magazine/2026/03/ai-economy-labor-market-transformation/685731/

In 1869, a group of Massachusetts reformers persuaded the state to try a simple idea: counting.

The Second Industrial Revolution was belching its way through New England, teaching mill and factory owners a lesson most M.B.A. students now learn in their first semester: that efficiency gains tend to come from somewhere, and that somewhere is usually somebody else. The new machines weren’t just spinning cotton or shaping steel. They were operating at speeds that the human body—an elegant piece of engineering designed over millions of years for entirely different purposes—simply wasn’t built to match. The owners knew this, just as they knew that there’s a limit to how much misery people are willing to tolerate before they start setting fire to things.

Still, the machines pressed on.

...

top 46 comments
[–] LodeMike@lemmy.today 59 points 2 weeks ago (3 children)

The TL;DR of this article is "we can't predict the impact of AI because we can't predict the future." It apparently takes 15,000 words to say that. It just talks about what people are saying about AI, without any purpose, along with random irrelevant tangents. This article is a waste of time.

[–] XLE@piefed.social 31 points 2 weeks ago (1 children)

Based on your description, I expected the article to be worthless (and it definitely was!), but I didn't expect the author to start breathlessly talking about Steve Bannon as if he's some paragon of populist "AI safety" wisdom that transcends the Republican and Democratic parties.

For anybody who's not aware, Steve Bannon was a key architect of the first and second Trump administrations. Bannon being part of the AI-safety grift should be a red flag that it's a bad thing; instead, this author twists it into a green flag that Bannon might be a good guy after all.

[–] onlyhalfminotaur@lemmy.world 4 points 2 weeks ago

Holy shit, The Atlantic out New York Times'd itself.

[–] Lost_My_Mind@lemmy.world 7 points 2 weeks ago (1 children)

I don't feel at all like I'm the smartest person in any given room, but lately I feel like I'm in the movie Idiocracy. I'm just some average guy, and the rest of the world is letting AI do their thinking for them. The end result: crops won't grow, because the lot of you are trying to water them with Gatorade. The top scientists in the country are blinded as to why science keeps failing them, never realizing it's because Gatorade controls the farming industry and helps write the laws to tighten its grip, regardless of results.

And everybody else just goes with it. What will happen in the future? Click this article to read about it! Answer: No one knows what would happen if you water plants with water.

Here is how the AI experiment plays out.....

Corporations cling to this stuff and force it down our throats, despite it not working. They do this for 2-3 generations to normalize it. With time and advancements in the tech, they continue to develop it.

They keep using it where people don't push back. Which for AI, is most things. I don't see a major pushback on google including AI in search results. I don't see a major pushback from MOST people on AI being in every element of Windows 11. I see people here hating on microsoft, but linux users are like 4% of the market.

So they continue using the stuff people don't rock the boat over, while not improving services. Eventually they get more and more of these AI services in every aspect of your life.

The one place they spend all their effort improving is surveillance. Watching you watch yourself, and sending them the data.

Alexa could listen for "Hey Alexa" or it could listen for sneezing. Then send that information to HQ where they can now sell that data, that you sneeze 37 times per day in the spring, or 3 times a day in the winter.

Now your insurance rates go up for allergy medication before you even see your doctor.

Thats just one example. Like one dot of a painting of millions of dots. But it all starts with people who don't have critical thinking skills. They just don't even question why TVs in the 90s were expensive, but by 2020 they were basically free.

So they buy their cheap smart tvs, and smart fridge, and everything else. Happy as can be. Not even realizing that its all just corporations bringing us closer and closer to 1984.

And in 30 years, not having a smartphone will be illegal. Not having a trackable device with you 24/7 will be illegal. They'll justify it by saying "think of the children!". And people will fall for it, yet again. Just as they always do.

[–] LodeMike@lemmy.today 1 points 2 weeks ago (1 children)

Well, the U.K. recently tried to require citizens to own and maintain a proprietary device completely beholden to U.S. companies in order to (effectively) be alive, so.

[–] Lost_My_Mind@lemmy.world 1 points 2 weeks ago (1 children)

.......in the words of Ian Malcom:

"God damn do I hate always being right all the time..."

Also in the words of Ian Malcolm:

sexy growling and laughing noises

[–] LodeMike@lemmy.today 1 points 2 weeks ago
[–] jqubed@lemmy.world 6 points 2 weeks ago (1 children)

I’ve found that to be the case more and more with The Atlantic in recent years: long articles that might sound impressive but don’t actually say much or could’ve said things much more succinctly. I usually don’t read their articles anymore.

[–] onlyhalfminotaur@lemmy.world 3 points 2 weeks ago

You nailed it, exactly why I unsubscribed last year.

[–] Formfiller@lemmy.world 31 points 2 weeks ago (3 children)

The owner of The Atlantic is in the Epstein files. They also wrote an article shaming America's reaction to the Brian Thompson killing with no acknowledgement of the trauma we all experience in this corrupt system. Not going to give them any traffic.

[–] gravitas_deficiency@sh.itjust.works 13 points 2 weeks ago (1 children)

Oh wow, somehow I missed both of those things. Shit.

[–] Formfiller@lemmy.world 10 points 2 weeks ago* (last edited 2 weeks ago)

This is the owner of The Atlantic having a good time with Ghislaine

[–] onlyhalfminotaur@lemmy.world 6 points 2 weeks ago (1 children)

Not just one article, several. That was a particularly disgusting time in their history. There was no nuance.

[–] Formfiller@lemmy.world 2 points 2 weeks ago

I will never read anything that they publish again

[–] Nilay@thelemmy.club 2 points 2 weeks ago
[–] LodeMike@lemmy.today 29 points 2 weeks ago (3 children)

There are gobs of money to be made selling enterprise software, but dulling the impact of AI is also a useful feint. This is a technology that can digest a hundred reports before you’ve finished your coffee, draft and analyze documents faster than teams of paralegals, compose music indistinguishable from the genius of a pop star or a Juilliard grad, code—really code, not just copy-paste from Stack Overflow—with the precision of a top engineer. Tasks that once required skill, judgment, and years of training are now being executed, relentlessly and indifferently, by software that learns as it goes.

Literally not true.

It can't "analyze" documents. There's no thinking involved with these machines. It outputs the statistically most likely thing that looks like analysis.

And it's not even close to as good as a top engineer. If it were, there would be no engineers TODAY.

[–] forrgott@lemmy.sdf.org 12 points 2 weeks ago (2 children)

And let's not forget the asinine claim about music composition. Yeah, this is a bullshit fluff piece to keep attention on AI.

[–] XLE@piefed.social 7 points 2 weeks ago

Could AI blow up the world tomorrow? Who knows! The future is unpredictable, so it's basically a 50-50, right? /s

[–] Holytimes@sh.itjust.works 1 points 2 weeks ago

The only thing I've found "AI music" good for is making unsettling endless droning. It's actually quite good at that.

Perfect for a low-effort creepypasta YouTube-bait game.

[–] criss_cross@lemmy.world 5 points 2 weeks ago

This is why I get so frustrated when people demand I integrate this stuff into every workflow. It’s not thinking at all. It’s just regurgitating text based on input and hoping for the best.

[–] dparticiple@sh.itjust.works -3 points 2 weeks ago (2 children)

LodeMike, I'm curious about something. What's the latest set of AI models and tools you've used personally? Have you used Opus 4.5 or 4.6, for instance?

I'm not disagreeing with the points you've made, but in my experience the increase in capabilities over the last six months has been so rapid that it's hard to realistically evaluate what the current frontier models are capable of unless you've used them meaningfully and with some frequency.

I'd welcome your perspective.

[–] LodeMike@lemmy.today 2 points 2 weeks ago* (last edited 2 weeks ago)

Opus like the audio codec?

I use the GPT mini or similar models

[–] criss_cross@lemmy.world 2 points 2 weeks ago (1 children)

Not OP but I use these on the regular.

I’d still agree with the OP that there are hard limits to what these can do. I’ve gotten Claude stuck in loops before on removing unrelated code, then adding it back, then removing it again hoping it’ll fix something.

And OP is still correct. At the heart of all of this it’s “given input x guess the probability of response Y”. Even frontier models don’t think. They can output tokens to call tools to try and get more input x but it’s still a best guess.

You can also give them too much context and get "context rot," which makes their output absolutely horrible. I think Cursor had a problem with that, where too many Claude skills caused it to hallucinate and go nuts.
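The "given input x, guess the probability of response Y" point above can be sketched with a toy bigram model. This is purely illustrative (real LLMs use learned neural networks over huge vocabularies, not raw counts), but it shows the core mechanic: emit whichever continuation is statistically most likely given what came before.

```python
from collections import Counter, defaultdict

# Toy "language model": count which token follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(token):
    """Greedy decoding: return the statistically most likely continuation."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(most_likely_next("the"))  # -> "cat" ("cat" follows "the" twice, "mat" once)
```

Nothing in there "analyzes" anything; it just reproduces the most frequent pattern in its training data, which is the commenter's point scaled down to nine words.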

[–] dparticiple@sh.itjust.works 1 points 2 weeks ago

All valid points.

However, the actual capabilities of the AIs might not matter with respect to job displacement, since the people making the hiring decisions are absorbing the marketing hype but not using the tools.

Even if folks are still hired, they might experience second order effects like increased job stress and burnout: https://fortune.com/2026/02/10/ai-future-of-work-white-collar-employees-technology-productivity-burnout-research-uc-berkeley/

I'm rather glad that I'm reaching the end of my career and not trying to break into the market as a junior software engineer.

[–] sturmblast@lemmy.world 15 points 2 weeks ago (3 children)

Ai isn't good enough to take many jobs

[–] stormeuh@lemmy.world 5 points 2 weeks ago

But it can be sold as good enough to credulous management, thereby still doing damage by getting people laid off in the short term.

There's this famous quote about investing which goes: "the market can remain irrational longer than you can remain solvent". I think that equally holds for the labor market. Just because you and everyone around you knows your job can't be replaced by AI, doesn't mean there won't be an attempt to replace you which lasts long enough for you to lose your house.

[–] Bullerfar@lemmy.world 4 points 2 weeks ago

You give today's jobs too much credit. We have a Danish book called How We Got Busy Doing Nothing (translated), which describes how more and more workers, who need to fill a 37-hour week, had to come up with fucked-up processes just to make the job a little more complicated, and how meetings have become the new norm instead of actual work. When I read the article, those are the people I'm thinking of: agile coaches, middle managers, schedulers, people who manually put data into Sheets documents. That's a lot of fucking people, at least at my office. I'm not scared (yet), since I still manage all the hardware components in the car-rooms etc. But all the other people? What will they do when an €18/month AI can replace a €4,685/month co-worker?

[–] nutsack@lemmy.dbzer0.com -2 points 2 weeks ago* (last edited 2 weeks ago)

you guys keep saying this but I know a company or two companies or a lot of companies actually

[–] Binturong@lemmy.ca 15 points 2 weeks ago

AI is snake oil and the ones ruining the jobs are the corporations and billionaires. AI will be a net positive for society once we make it a public project and reclaim the stolen wealth of the oligarchy, who use it to maximize their extraction and destroy society. Cool article, or whatever.

[–] 0ndead@infosec.pub 5 points 2 weeks ago

Cool fearbait bruh

[–] ToTheGraveMyLove@sh.itjust.works 4 points 2 weeks ago (2 children)

The world isn't ready for what I want to do to AI

[–] LodeMike@lemmy.today 4 points 2 weeks ago (2 children)
[–] squaresinger@lemmy.world 3 points 2 weeks ago

Can it be necrophilia if it has never lived?

[–] ToTheGraveMyLove@sh.itjust.works 1 points 2 weeks ago (1 children)

That literally doesn't make any sense.

[–] LodeMike@lemmy.today 1 points 2 weeks ago
[–] CADmonkey@lemmy.world 1 points 2 weeks ago (1 children)

Good thing AI is something that doesn't exist in physical space that someone can tamper with...

Servers are.

[–] TropicalDingdong@lemmy.world 1 points 2 weeks ago
[–] ruuster13@lemmy.zip 0 points 2 weeks ago (3 children)

To everyone shitting on the article because of where AI is now: remember how little time passed between Will Smith spaghetti and Sora 2?

[–] XLE@piefed.social 3 points 2 weeks ago

Sora 2, the product that cost $1.6 billion and hasn't recouped even a thousandth of that yet?

Yeah it's as financially unviable as ever

[–] LodeMike@lemmy.today 2 points 2 weeks ago (1 children)

Those gains won't continue into the future. Transformers are a mostly exhausted technology, at least from the strictly tech/math side. New use cases or specialized sandboxes are still new tech (keyboard counts as a sandbox).

[–] ruuster13@lemmy.zip -4 points 2 weeks ago (3 children)

Moore's Law isn't quite dead. And quantum computing is a generation away. Computers will continue getting exponentially faster.

[–] LodeMike@lemmy.today 6 points 2 weeks ago

No.

We know how they work. They're purely statistical models. They don't create, they recreate training data based on how well it was stored in the model.

[–] squaresinger@lemmy.world 1 points 2 weeks ago

The problem is that hardware requirements scale exponentially with AI performance. Just look at how RAM and compute consumption have grown compared to the performance of the models.

Anthropic recently announced that, since the performance of one agent isn't good enough, it will just run teams of agents in parallel on single queries, thereby multiplying the hardware consumption.

Exponential growth can only continue for so long.

[–] bunchberry@lemmy.world 1 points 2 weeks ago

Moore's law died a long time ago. Engineers pretended it was still going for years by abusing the nanometer metric: if they cleverly found a way to use the space more effectively, it was as if they had packed more transistors into the same area, so they would call it a smaller-nanometer process node, even though they quite literally did not shrink the transistor size or increase the number of transistors on the die.

This actually started to happen around 2015. These clever tricks were always exaggerated, because there isn't an objective metric to say that a particular trick on a 20nm node really gets you performance equivalent to a 14nm node, so there was huge leeway for exaggeration. In reality, actual performance gains have slowed down drastically since then, and the cracks really started to show with Nvidia's 5000-series GPUs.

The 5090 is only super powerful because the die is larger, so it fits more transistors, not because they actually fit more per square millimeter. If you account for die size, it's actually less efficient than the 4090 and significantly less efficient than the 3090. To pretend there have been upgrades, Nvidia has been shipping AI frame-generation software for its GPUs and artificially locking it to the newer series. The program Lossless Scaling proves that you can, in principle, run AI frame generation on any GPU, even ones from over a decade ago, and that Nvidia locking it to specific GPUs is not a hardware limitation but an attempt to make up for the lack of actual improvements in the GPU die.
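The die-size claim above can be sanity-checked with transistor-count and die-area figures as commonly reported for these chips (treat them as approximate; exact numbers vary by source). Dividing count by area shows the 5090's density is essentially the same as the 4090's, i.e. its extra transistors come from a bigger die, not a denser process:

```python
# Transistors-per-area comparison, using approximate published figures.
chips = {
    "RTX 3090 (GA102)": (28.3e9, 628.0),   # (transistor count, die area in mm^2)
    "RTX 4090 (AD102)": (76.3e9, 608.5),
    "RTX 5090 (GB202)": (92.2e9, 750.0),
}

for name, (transistors, area_mm2) in chips.items():
    density = transistors / area_mm2 / 1e6  # millions of transistors per mm^2
    print(f"{name}: {density:.0f} MTr/mm^2")
# The 5090's density comes out slightly *below* the 4090's, and the jump
# from the 3090 reflects the node change, not recent generational gains.
```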

Chip improvements have drastically slowed down for over a decade now, and the industry just keeps trying to paper it over.

[–] Tikiporch@lemmy.world 1 points 2 weeks ago

No. How much time?