this post was submitted on 30 Jan 2026
42 points (95.7% liked)

Programming

26127 readers
812 users here now

Welcome to the main community in programming.dev! Feel free to post anything relating to programming here!

Cross-posting is strongly encouraged in the instance. If you feel your post or another person's post makes sense in another community, cross-post it there.

Hope you enjoy the instance!

Rules

  • Follow the programming.dev instance rules
  • Keep content related to programming in some way
  • If you're posting long videos try to add in some form of tldr for those who don't want to watch videos

founded 2 years ago

A company not making self-serving predictions & studies.

top 15 comments
[–] Kissaki@programming.dev 15 points 1 month ago

From the paper abstract:

[…] Novice workers who rely heavily on AI to complete unfamiliar tasks may compromise their own skill acquisition in the process. We conduct randomized experiments to study how developers gained mastery of a new asynchronous programming library with and without the assistance of AI.

We find that AI use impairs conceptual understanding, code reading, and debugging abilities, without delivering significant efficiency gains on average. Participants who fully delegated coding tasks showed some productivity improvements, but at the cost of learning the library.

We identify six distinct AI interaction patterns, three of which involve cognitive engagement and preserve learning outcomes even when participants receive AI assistance. Our findings suggest that AI-enhanced productivity is not a shortcut to competence and AI assistance should be carefully adopted into workflows to preserve skill formation – particularly in safety-critical domains.
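The abstract doesn't name the library or show the study's tasks. Purely as an illustration of the flavor of coroutine-style Python the participants had to pick up, here is a minimal sketch using the standard-library `asyncio` (my assumption for illustration, not the study's actual materials):

```python
# Illustrative only: the kind of coroutine-based API a participant
# would need to learn (concepts: event loop, await, concurrency).
import asyncio

async def fetch(name: str, delay: float) -> str:
    # Simulate an I/O-bound operation by yielding to the event loop.
    await asyncio.sleep(delay)
    return f"{name}:done"

async def main() -> list:
    # Run both coroutines concurrently; total time is roughly
    # max(delay), not the sum, because the sleeps overlap.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.02))

results = asyncio.run(main())
print(results)  # ['a:done', 'b:done']
```

A conceptual-understanding quiz of the sort the paper describes would probe ideas like why the two delays overlap rather than add.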

[–] 30p87@feddit.org 10 points 1 month ago (1 children)

The wording is very, very self-serving tho.

[–] idriss@lemmy.ml 2 points 1 month ago (1 children)

yep, they are selling learning models, but they are not pretending medical doctors will be out of work next week like OpenAI is doing

[–] d0ntpan1c@lemmy.blahaj.zone 4 points 1 month ago (1 children)

Anthropic may avoid saying the dumb things OpenAI says, but do not mistake that for being a better company/product. Amodei is still out to eliminate all jobs and has a history of being just as self-serving as Altman.

[–] idriss@lemmy.ml 2 points 1 month ago

I 100% agree with you and would love to see Anthropic burn (same as OpenAI and all other big tech)

[–] eleijeep@piefed.social 6 points 1 month ago

Discussion
Our main finding is that using AI to complete tasks that require a new skill (i.e., knowledge of a new Python library) reduces skill formation.
...
The erosion of conceptual understanding, code reading, and debugging skills that we measured among participants using AI assistance suggests that workers acquiring new skills should be mindful of their reliance on AI during the learning process.
...
Among participants who use AI, we find a stark divide in skill formation outcomes between high-scoring interaction patterns (65%-86% quiz score) vs low-scoring interaction patterns (24%-39% quiz score). The high scorers only asked AI conceptual questions instead of code generation or asked for explanations to accompany generated code; these usage patterns demonstrate a high level of cognitive engagement.
Contrary to our initial hypothesis, we did not observe a significant performance boost in task completion in our main study.
...
Our qualitative analysis reveals that our finding is largely due to the heterogeneity in how participants decide to use AI during the task.
...
These contrasting patterns of AI usage suggest that accomplishing a task with new knowledge or skills does not necessarily lead to the same productive gains as tasks that require only existing knowledge.
Together, our results suggest that the aggressive incorporation of AI into the workplace can have negative impacts on the professional development of workers if they do not remain cognitatively [sic] engaged. Given time constraints and organizational pressures, junior developers or other professionals may rely on AI to complete tasks as fast as possible at the cost of real skill development. Furthermore, we found that the biggest difference in test scores is on the debugging questions. This suggests that as companies transition to more AI code writing with human supervision, humans may not possess the necessary skills to validate and debug AI-written code if their skill formation was inhibited by using AI in the first place.
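The paper's largest score gap was on debugging questions. As a hypothetical illustration of the kind of bug such a quiz item might probe (this example is mine, not taken from the paper), a classic pitfall when learning an async library is a missing `await`:

```python
# Hypothetical debugging-quiz item (illustrative, not from the paper):
# why does total_buggy() hand back a coroutine object instead of a number?
import asyncio

async def cost(x: float) -> float:
    await asyncio.sleep(0)  # yield to the event loop, as real I/O would
    return x * 2

async def total_buggy():
    return cost(3.0)        # BUG: missing `await` -- returns a coroutine object

async def total_fixed() -> float:
    return await cost(3.0)  # fix: await the coroutine to get its result

print(asyncio.run(total_fixed()))  # 6.0
```

Spotting that `total_buggy` never awaits `cost` is exactly the kind of code-reading skill the study measured.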

[–] PolarKraken@programming.dev 4 points 1 month ago (1 children)

Interesting read and feels intuitively plausible. Also matches my growing personal sense that people are using these things wildly differently and having completely different outcomes as a result. Some other random disconnected thoughts:

  1. I'm surprised they're publishing this, it seems to me like a pretty stark condemnation of the technology. Like what are the benefits they anticipate that made them decide this should be published, vs. quietly kept aside "pending further research"? Obviously people knowing how to use the tools better is good for longevity, but that's just not what our idiotic investment cycles prioritize.

  2. I'm no scientist or expert in experimental design, but this seems like way too few people for the level of detail they're bringing to the conclusions they're drawing. That plus the way it all just feels intuitively plausible has a very "just so" feeling to the interpretation rather than true exploration. I mean, cmon - the behavioral buckets they are talking about range from 2-7 people apiece, most commonly just 4 individuals. "Four junior engineers behaved kinda like this and had that average outcome" MIGHT reflect a broader pattern but it sure doesn't feel compelling or scientific.

Nonetheless I selfishly enjoyed having my own vague subconscious observations validated lol, would like to see more of this (and anything else that seems to work against the crazy bubble being inflated).

[–] AbelianGrape@beehaw.org 3 points 1 month ago (1 children)

For 1: as a software company, they have a vested interest in ensuring that software engineers are as capable as possible. I don't know if Anthropic as a company uses this as a guiding principle, but certainly some companies do (e.g. Jane Street). So they might see this as more important than investment cycles.

The quality of software engineers and computer scientists I've seen coming out of undergraduate programs in the last year has been astonishingly poor compared to 2-3 years ago. I think it's almost guaranteed that the larger companies have also noticed this.

[–] PolarKraken@programming.dev 2 points 1 month ago* (last edited 1 month ago)

I completely agree and appreciate sincerely that they released this. It's unfortunate, the way the obviously nonsense claims made by the industry at large - "LLMs are AI and can do everything!" - have polluted a lot of devs' ability or willingness to see the tools for what they are, and maybe official acknowledgements like these can help.

It also seems likely to me that the major players know a lot of negative truths about all this stuff, you're probably right about hiring observations. I don't follow any of their marketing really so I have to admit I'm even just assuming that releasing this is out of character.

If I'm being honest, I'm mostly just on the edge of my seat waiting for the hype bubble to burst, lol, and curious about how it'll unfold. Probably just kind of hoping this marks a step toward that.

[–] troi@techhub.social 3 points 1 month ago (1 children)

@idriss Seems predictable to me. Programmers on the left or middle of some distribution ranking "good" programmers or engineers will use AI and be comfortable having completed some task. Those on the right of the distribution may or may not use AI, but will insist on understanding what has been created.

Now, an interesting question for me, unrelated to the post, is: "what would be a good metric to identify really good programmers?"

[–] idriss@lemmy.ml 1 points 1 month ago

@troi@techhub.social tbh I could see people being considered good programmers in one place but not in another (just prompting to get things done with minimum effort and reserving the effort for something else). It probably comes back to interest and care: how much the person is interested in iterating over their solution and architecture, and in learning things regardless of seniority level, to achieve a higher-level goal (a simpler design, for example, rather than stopping when it works). Maybe that could be an indication of a good programmer?

[–] entwine@programming.dev 3 points 1 month ago (1 children)

In a randomized controlled trial, we examined 1) how quickly software developers picked up a new skill (in this case, a Python library) with and without AI assistance; and 2) whether using AI made them less likely to understand the code they’d just written.

We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.

Who designed this study? I assume it wasn't a software engineer, because this doesn't reflect real world "coding skills". This is just a programming-flavored memory test. Obviously, the people who coded by hand remembered more about the library in the same way students who take notes by hand as opposed to typing tend to remember more.

A proper study would need to evaluate critical thinking and problem solving skills using real world software engineering tasks. Maybe find some already-solved, but obscure bug in an open source project and have them try to solve it in a controlled environment (so they don't just find the existing solution already).

[–] Miaou@jlai.lu 1 points 1 month ago (1 children)

The study is about the impact AI use has on learning. Their experiment seems to test just that, unlike what you're describing.

Besides, remembering what you did an hour ago seems like a real-world problem to me. Unless one manages to switch projects before the bug reports come in.

[–] entwine@programming.dev 1 points 1 month ago

The study is about the impact AI use has on learning. Their experiment seems to test just that, unlike what you’re describing.

The title is literally "How AI assistance impacts the formation of coding skills". Memorizing APIs isn't what most people would consider a "coding skill".

Debugging, systems design, optimization, research and evaluation, etc are what actually make someone a useful engineer, and are the skills a person develops as they go from junior to senior. Even domain knowledge (like knowing a lot about farming if you're working on farming software) is more useful than memorizing the API of any framework. The only thing memorization does is it saves you a few minutes from having to read some docs, but that's minimal impact, and it's something you pick up normally throughout the course of working on a project anyways. When you finish that project, you might never use that API again, or if you do it might have changed completely when a new version is released.

remembering what you did an hour ago seems like a real world problem to me.

Sure, humans have shitty memory, but that has nothing to do with AI code assistance. There are plenty of non-AI coding assistants that help people with this (like Intellisense/LSP auto complete, which has been around for decades)

[–] MxRemy@piefed.social 0 points 1 month ago

Why are like 70% of the posts in this comm about AI lately?? I'm out of here...