this post was submitted on 07 May 2026
198 points (86.1% liked)

[–] Buffalox@lemmy.world 94 points 6 days ago* (last edited 6 days ago) (18 children)

According to a new study by researchers at Carnegie Mellon, MIT, Oxford, and UCLA,

Study should be solid I guess.

participants who were given AI assistants (in this case, a chatbot powered by OpenAI’s GPT-5 model) would have the aid pulled from them without warning during the test

Wow, interesting idea. 👍

where they had their assistant removed, the AI group saw the solve rate fall off a cliff. They had a solve rate about 20% lower

And even worse IMO:

They also had nearly double the skip rate, meaning they simply chose not to solve the questions.

This seems very alarming, because it indicates they lost some of their ability to think constructively about how to actually solve a problem!

I know there have always been some who cried wolf every time new technology has become available, like calculators and computers. Even dictionaries were once claimed to be harmful!
But maybe this time there is a real danger, because AI takes away a lot of the need to actually think creatively and constructively. And that's an ability we must not lose.

The last paragraph of the article is even worse, as it mentions two studies showing these effects are also long-term!

[–] Ioughttamow@fedia.io 54 points 6 days ago (1 children)

When driving somewhere, if I set out with the mindset that I can’t rely on gps I can usually wing it and figure out where to go when a hiccup occurs. If I don’t, then I have a lot of trouble getting into that path finding mode when needed… similar to this maybe?

[–] yakko@feddit.uk 20 points 6 days ago

Yeah exactly, because although it's sometimes possible to do more with technology, you're actively de-skilling at the same time. When we invented the written word, yes, it legitimately made everything better, but we also lost oral traditions and the capacity to memorize large volumes of storytelling, songs, and histories. Now you can burn the books, and the knowledge dies. It's a real risk.

Everything is like this. Every technology has a cost beyond its price, and any decision about whether to use one will be made in error unless you think about what you're losing in the process.

[–] scarabic@lemmy.world 27 points 6 days ago (9 children)

Changing the terms of the test in the middle of it, without warning, is disruptive. I’m not convinced it “fried their brains.” The same would happen with a calculator suddenly removed during the middle of an exam.

[–] NeilNuggetstrong@lemmy.world 19 points 6 days ago (3 children)

When I use AI for my personal coding projects, I've found that if the task is unsolvable by the AI model, I'm not able to sit down and do it myself until the next day. It's like I've got to reset my brain.

If I want to save time and use AI for a specific part of the code, it probably saves me five hours of work. But then I spend five hours yelling at the AI to try to get it to actually solve it. The next day I'll just fix it myself in two hours.

[–] sockenklaus@sh.itjust.works 4 points 4 days ago* (last edited 4 days ago)

But what you're describing is not that uncommon, even without AI: oftentimes, when you're trying to solve a complex problem and being unsuccessful, you have to reset your brain by doing something fundamentally different, or get a good night's sleep; after that, you solve the problem easily.

Maybe what you're experiencing is not AI-related at all.

[–] FauxLiving@lemmy.world 9 points 6 days ago (10 children)

This paper shows that a person who has performed a task 12 times performs better than a person who has never performed the same task.

They also do not properly control for context switching, which is a well-known contributor to performance loss.

It's a paper on arXiv; it hasn't been peer-reviewed or published.

[–] carotte@lemmy.blahaj.zone 6 points 6 days ago (2 children)

there have always been some who cried wolf every time new technology has become available, like calculators and computers

and they kinda have a point, really. people got worse at memorizing stuff by heart when writing was invented, and people got worse at mental arithmetic when calculators were invented.

but they allowed many things that were simply not possible. a calculation that takes me 2 minutes in wolfram alpha could take hours if not days to solve by hand!
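to make that concrete, here's a tiny sketch (in Python, with an arbitrary textbook equation, not one from the thread) of the kind of computation a tool grinds through in milliseconds but that would be tedious by hand:

```python
from math import cos

def bisect(f, lo, hi, tol=1e-12):
    """Find a root of f in [lo, hi] by repeated halving.

    Assumes f(lo) and f(hi) have opposite signs.
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # the sign change is in the lower half
        else:
            lo = mid  # the sign change is in the upper half
    return (lo + hi) / 2

# Solve cos(x) = x, a classic fixed-point problem with no closed form.
root = bisect(lambda x: cos(x) - x, 0.0, 1.0)
print(round(root, 6))
```

getting twelve digits of precision here takes around forty halving steps, each with a trig evaluation; doing that by hand is exactly the kind of grind these tools eliminated.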

ai, meanwhile, or at least the ai we’re sold, does not offer significant advantages (at best it saves a few minutes), at the cost of making us worse at thinking, a skill that is absolutely essential to have… and of course, that’s the point. the tech oligarchs want us to be dependent on their extremely expensive products.

[–] Comet79@lemmy.world 17 points 5 days ago* (last edited 5 days ago) (7 children)

1980: TVs will fry your brain

1990: Videogames will fry your brain

2000: Computers will fry your brain

2010: Smartphones will fry your brain

2020: AI will fry your brain

Any takes for the 2030s?

Climate change.

Literally.

[–] EightBitBlood@lemmy.world 3 points 4 days ago

I mean, based fully on our current dystopian reality, I feel you just made a really good point about tech growing to the point where it fully removes you from reality, and indeed fries your brain by convincing you that fantasies are real.

MAGA is a great example of people with brains so fried they think a pedophile ex-conman with 34 felonies who killed millions of Americans through a poor pandemic response is somehow helping them by destroying USAID, DEI, Healthcare, and Social Security.

Their brains are gonzo, all through the constant applied exploitation of all the tech you just mentioned combined.

AI will absolutely make it worse.

[–] Analog@lemmy.ml 5 points 5 days ago

2030: Cyborg w/AI will fry your brain. Literally though.

[–] flying_sheep@lemmy.ml 4 points 5 days ago

And before that books and comics. But LLMs are different: they pretend to be your friend but actually just encourage whatever you come up with. You can easily fry people's brains by being their sycophant, now everyone can subscribe to one.

[–] feinstruktur@lemmy.ml 3 points 4 days ago

Neural implants? Only this time they're really going to fry your brain.

[–] BoosBeau@lemmy.world 3 points 4 days ago

2030: Critical thought will fry your brain

Well looking around at where we are today, maybe TVs did fry our brains.

[–] texture@lemmy.world 24 points 5 days ago* (last edited 5 days ago)

i think reading the title of this post hurt my brain. like what are we doing here? making medical claims using sensationalist and meaningless language... seems unhelpful

[–] RIotingPacifist@lemmy.world 39 points 6 days ago (2 children)

The test seems kind of dogshit; you could make the same argument against any tool. Calculators or even abacuses would have the same effect.

I'm required to use it for work and it does speed up some tasks; however, for some stuff it ends up being like the experiment where not doing the work the first time means the whole process takes longer in the end.

[–] FauxLiving@lemmy.world 11 points 6 days ago* (last edited 6 days ago) (1 children)

To add to this, we already know that context switching causes a loss in performance.

A person who's thinking about how to solve a problem one way and then has to suddenly think about solving it in another way will perform worse.

https://medium.com/@codewithmunyao/the-hidden-cost-of-context-switching-why-your-most-productive-hours-are-disappearing-43c5b501de19

The Neuroscience Behind the Pain

Context switching isn’t just annoying — it’s neurologically expensive. When you shift from debugging a race condition to answering emails, your brain doesn’t simply “change tabs.” It goes through a complex process:

- Memory consolidation: Storing your current mental model

- Attention disengagement: Breaking focus from the current task

- Cognitive reloading: Building a new mental model for the next task

- Re-engagement: Getting back into flow

Research from Carnegie Mellon shows that even brief interruptions can increase task completion time by up to 23%. For complex cognitive work like programming, this cost multiplies dramatically.

Here's another article from CMU discussing the same thing: https://www.sei.cmu.edu/blog/addressing-the-detrimental-effects-of-context-switching-with-devops/

What this study shows is that a person who is faced with an unexpected context switch performs worse on a task than a user who has spent the last 12 questions performing the task the same way.

This exact problem would happen if you replaced AI with a calculator, or made a person swap from using paper to doing mental math. The problem here is context switching, not AI.

The way to ensure that the problem is AI and not the context switch would be to continue the test and see if the first group reverts back to baseline after 12 questions. 12 questions is how long the control group had to become acclimated to the task before their last context swap at the start of the test.

Also, of note, this is a paper on arXiv; it is not published, so it has not gone through a peer-review process, which would certainly catch the failure to set a proper control group.

[–] chunes@lemmy.world 6 points 6 days ago* (last edited 6 days ago) (2 children)

Context switching isn’t just X — it’s Y.

Are we sure this was written by a human?

[–] FauxLiving@lemmy.world 6 points 6 days ago (4 children)

AI being released was basically an apocalypse for people who use EM dash.

Here's the most cited, human created (2001), paper on the topic of context switching performance loss: https://www.apa.org/pubs/journals/releases/xhp274763.pdf

[–] HertzDentalBar@lemmy.blahaj.zone 8 points 5 days ago (8 children)

I fucking hate this AI shit, but I'll admit I end up using Gemini (knowing it's wrong sometimes). It's like how I'd use Google, just with more complex asks instead of simple search queries. I couldn't imagine using it beyond that, other than a follow-up or two.

It's just a chatbot that has access to info; who goes onto their cable company's website and befriends the chatbot?

[–] zebidiah@lemmy.ca 9 points 5 days ago (1 children)

AI is like a dog looking at itself in a mirror.

Some dogs are smart, and understand that this is a tool and that it is there to help you see things better.... Some dogs are fucking morons and think their reflection is another dog, and they wanna fuck and fight....

There are a ton of good use cases for ai, and none of them include coquettish sexbots or drawings of me as a Simpson or a Ghibli sketch.

[–] melsaskca@lemmy.ca 4 points 4 days ago

Should we trust a researcher whose brain got fried? Did they remember to do the old double-blind setup before the frying of the brains occurred?

[–] melfie@lemmy.zip 9 points 5 days ago

I think the key point is that you're not outsourcing critical thinking to LLMs, but instead using them as a tool to do grunt work that you could've done yourself, just faster. This means constantly being critical of everything the LLM does: asking questions, asking for links to credible sources, asking it to provide info to help evaluate the pros and cons of multiple approaches, with you making the decisions and learning along the way. Overall, any work an LLM produces that will have your name on it should be work you entirely understand and agree with. For coding, I find agent markdown files to be especially helpful to make sure the LLM follows my desired practices without me constantly making it refactor.
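For illustration, such a file (the filename and every rule here are hypothetical, not from the article or any particular tool's docs) might look like:

```markdown
# AGENTS.md — project conventions (hypothetical example)

- Prefer small, focused functions; keep files under ~300 lines.
- Add or update tests before refactoring existing behavior.
- Never introduce a new dependency without asking first.
- Match the existing error-handling style; no silent catches.
- Explain any non-obvious change in the commit message.
```

The point is that the practices live in one reviewable place instead of being re-prompted every session.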

Largely, my assumption at this point is that LLMs may not always be around, so I definitely don’t want to be left holding the bag with a bunch of slop I can’t manage on my own. I think I’ll feel better when I can run open weight models on my own hardware that are fully competitive with cloud models. With models like Qwen 3.6 27B, it seems we are getting closer to that.

[–] ElReatonVaquer0@lemmy.world 15 points 6 days ago (1 children)

I think that if you use AI responsibly (as an assisting tool) like mentioned in the article, then you are pretty much on the safe side.

But when you have AI do everything for you, then there's a big problem.

Personally I try not to use it at all, not a fan of all the problems that come with it.

[–] Buffalox@lemmy.world 13 points 6 days ago* (last edited 6 days ago)

You clearly didn't read the article, and you are dead wrong.
Except you are right that if you let the AI do everything, it's worse, and you lose a lot of ability for critical thinking.
The last paragraph of the article even shows that other studies have found that using AI assistance over time has the long-term effect of lowering problem-solving abilities!

Personally I try not to use it at all, not a fan of all the problems that come with it.

This is the way. 😀

[–] mechoman444@lemmy.world 5 points 5 days ago* (last edited 5 days ago)

Studies show that using a bulldozer for plowing a field decreases the farmer's muscle density after just one day of use.

Christ. What a load of shit.

[–] nonentity@sh.itjust.works 11 points 6 days ago (10 children)

I’ll never understand how an explosively imprecise, statistically luke-warm, grey goo extrusion sphincter could ever be mistaken for intelligence.

AI doesn’t exist, it’s a vacuous marketing term.

LLMs have vanishingly narrow legitimate, defensible use cases, but their output is intrinsically inaccurate, and should never be used without supervision from relevant domain experts.

[–] lechekaflan@lemmy.world 8 points 6 days ago

I don't want it; all it does is negate years of learned experience and the ability to organically formulate ideas.

[–] SunshineJogger@feddit.org 7 points 6 days ago* (last edited 6 days ago) (2 children)

I really do see the issue with AI. I see people around me outsource thinking to it too much. Like literally, as if they are happy that a machine can make their life choices for them. This is extremely worrying. It's about how people use it.

[–] iglou@programming.dev 6 points 6 days ago* (last edited 6 days ago)

Those are important studies but nothing shocking. The conclusion to draw from them is the same one we've drawn from all technologies that have improved our lives to some degree: without them, we tend to be either incompetent, since losing access to them isn't worth planning for, or demotivated, because why would we deprive ourselves of technology that makes our work so much less exhausting?

It doesn't necessarily remove our capacity to think (and the article falsely generalises to critical thinking), it shifts what kind of thinking we do.

If AI is as good or better than I am at writing code, then I'll switch my brain to only do the orchestrating and architecture rather than the writing of the code. And yes, if you remove AI, the switch will cause me to perform worse than I did before AI, but not permanently, only until I get used to it again.

If an AI is better than a doctor at finding cancer indicators, then the doctor will focus their mind on finding solutions only rather than splitting it on both the detection and solution.

This is not new, not bad, and I'll even go to the extent of saying it's a great use of AI: humans evolved for specialization. The less varied the tasks are, the better we are at the subset we specialize in. That's what has driven our rapid technological and societal advances over the past millennia.

But, AI has many issues and many detrimental applications as well, so don't see this comment as a full endorsement of AI.

[–] chahn.chris@piefed.social 6 points 6 days ago

My experience with using AI, and at this point I'd say this experience is extensive (daily), is that it gets things wrong A LOT, and with a high degree of confidence in its position.

In the early stages of using it I felt my problem-solving desire start to slip, but after pushing through that and realizing I should not trust it any more than I'd trust human judgment, it's more like having another person to work with. That's helpful, but if I let my own thinking guard down at all, I put myself at a lot of risk.

I hope most people that do use AI regularly eventually push through to this stage and we all will be way better off in the long run for the assistance.

I fear most people won't push through. This study points to the obstacle; I'd love to see what can be done to help people overcome it. There's probably room for AI-usage training that we need to start considering.
