this post was submitted on 01 May 2026
65 points (98.5% liked)
Technology
My experience is, they're not. Like the article says, they're just focused on MOAR and not on the quality of the output. It may take years for the unmaintainable code to cause problems, and they may have already been laid off by the time that happens anyway.
I don't write much code anymore, but when I did, there was a fair amount of embedded code, where fixing a bug is more costly than just pushing out a build to a production server. I actively sought out automation back then, but the purpose of the automation was to help cover edge cases and better test the embedded code for flaws that traced through multiple layers of code.
Whenever I start a new software project, it usually starts with a short period of experimentation when I try out several things. Then, I coalesce on an architecture in my head (and eventually document it), and once I do that I can add more structure to the code.
Given the state of AI tools today, I can see myself using them to accelerate all the little fiddly parts of this (especially if I can give them a coding standard and have them stick to it). But I wouldn't trust them more than that. I would always keep the architecture separate, because I don't trust the AI tools not to change it on me for no good reason.
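As an aside, a coding standard like that is easiest to hold a tool to when it's machine-readable rather than prose. A minimal sketch of what I mean, using an `.editorconfig` file (the values here are just illustrative examples, not a recommendation):

```ini
# .editorconfig — a tool-readable slice of a coding standard (example values)
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

[*.c]
indent_style = space
indent_size = 4
max_line_length = 100
```

Anything the standard pins down in a file like this can be checked mechanically, instead of arguing with the tool about it every session.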
So did I; it was called C compilers, so I didn't have to write hand-coded assembly. They turned out OK after the first few buggy generations.
Then they're doing it wrong.
Hoooooh boy, that "if" is doing a lot of heavy lifting, in my experience. I'm constantly telling the stupid little stochastic fuck to follow basic coding standards I've given it.
I don't use a lot of AI tooling outside of debugging and a little bit of command discovery, but fuck if the little shit isn't constantly rewriting my code into a shit style that I hate and constantly have to correct.
One of my bosses has been a little AI-pilled recently, and he also contributes code.
I can tell which parts are his AI slop, not from any git blame or anything, but just from how it looks. You can see the stylistic differences from one block of code to the next, and AI also seems to like adding comments to everything, which he just copies and pastes into the file. Those comments often look very different, too. So stylistically, everything is all over the place.
So, just like any team project.
Everyone has their own style, but Bob over here doesn't change his style every day. Before, my boss had their own style, and if I ended up working on their code I'd try to match that just to keep things consistent. But now it's all over the place.
AI slop just flops out whatever it feels like at any given time since it's just cribbing everything from the internet.
Those are all great habits.
But the time spent doing that is time not shipping code. Most companies don't give a flying fuck about quality, they just want to ship as much as possible to make as much money as possible.
When the cost to ship trash code trends toward zero, then there will not be value in shipping trash code. Companies will need to focus on software that is actually competitive (in a qualitative way) because otherwise their customers will just self-vend the slop code.
I think you have something backwards. When the cost to ship trash code trends to zero, the profit trends to infinity.
The cheaper it is to produce slop code, the less the demand there will be to buy it. Companies will self-vend instead of buying the slop being sold. Your profit margins are someone else's inefficiency.
It's hard to determine whether something is slop before you buy it.
Not beating the association between AI and scams with this one.