this post was submitted on 04 Apr 2026

Who writes the unit tests though? If the AI is eventually writing those too because the devs have gotten too reliant on the AI, then it defeats the whole point. And in the case that the devs are mainly writing unit tests as a spec for the AI, it's a pretty miserable experience compared to how development was before.
Most of the time, the AI will spit out a first draft of unit tests and I’ll go in to clean them up and review them a bit before letting it proceed. It gets them like 80% of the way there and is indeed faster, though not the 10x or 100x that the hypebeasts claim. I’ve seen a study that claims about a 30% increase in initial speed in large codebases, and that about checks out to me. At its worst it writes the boilerplate for me. At its best it one-shots a feature or fix for me.
There’s a lot more spec writing and code review than before, so if you’re not into that I can understand why you wouldn’t like working with AI. But we’ve become a lot more responsive to tickets and have cleared out a huge chunk of our backlog. I’m generally not big on AI, and I’ve been going out of my way to not use it on personal projects because I don’t want my skills to rot. But they do pay for it and require its use at work, so I’ve done my best to make the most of it. I just don’t agree with the people who haven’t used it in a professional context and insist that it has no use and is never advantageous.
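For anyone unfamiliar with the "tests as a spec" workflow mentioned above, here's roughly what it looks like: the developer writes (or reviews) the tests that pin down the behavior, then hands the implementation off to the agent. A minimal sketch — the function name, behavior, and tests here are all invented for illustration, not from anyone's actual codebase:

```python
# Hypothetical spec-style workflow: the tests below are what the
# developer writes or cleans up; the implementation is the part
# you'd let the agent draft and then review.

def slugify(title: str) -> str:
    """Turn a title into a URL slug (implementation under test)."""
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

# Spec-style tests: these pin down the expected behavior up front,
# so a generated implementation either satisfies them or gets sent back.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_collapses():
    assert slugify("Rust, C++ & Go!") == "rust-c-go"

def test_empty():
    assert slugify("") == ""

if __name__ == "__main__":
    test_basic()
    test_punctuation_collapses()
    test_empty()
    print("all tests passed")
```

The review burden the comment describes is exactly the last step: checking that the generated code passes these without gaming them (e.g. hardcoding the expected strings).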
Have you noticed any degradation in your own coding or any resistance to coding without an LLM on personal projects?
My personal code is shitty as ever, in a good way. I still feel pretty sharp, but I’m doing my best to work on a personal project at least an hour a night. Nothing compared to when I was younger, but it’s all I can budget for now. I don’t find myself reaching for an agent for personal code so much as I find myself reaching for a chatbot when Google search fails me. It’s like Stack Overflow if they were nicer and quicker, but also wrong more often. But I try to avoid that too, just so I’m not getting in the habit of dulling my research/debugging skills either.
I noticed my brain going straight to not wanting to code once and wanting to just offload to the LLM and that’s when I started taking my personal project time more seriously. This shit’s not gonna rob me of my enjoyment in my longest standing hobby which is also my living.
Yeah I've had a very similar experience but it sounded like you've had more pressure to adopt AI in your life so I was curious. Thanks for sharing your perspective, it's reassuring.