this post was submitted on 20 Feb 2026
Technology
you are viewing a single comment's thread
Well, AI code should be reviewed before it is merged into master, the same as any other code.
We have git for a reason.
So I would definitely say this was a human fault: either the reviewer's, or that of whoever decided that no review process (or an AI-driven one) was needed.
If I managed DevOps, I would require that AI code be signed off by a human at commit time, with that person taking responsibility and the expectation that they review the AI's changes before pushing.
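A sign-off requirement like that can be enforced mechanically instead of by convention. A minimal sketch of the check a `commit-msg` hook could run (the helper name `has_signoff` is mine, not part of git):

```shell
#!/bin/sh
# Hypothetical commit-msg hook check (sketch): a commit is acceptable
# only if its message carries a Signed-off-by trailer, i.e. a named
# human has taken responsibility for the changes.
# `git commit -s` adds the trailer automatically.
has_signoff() {
  printf '%s\n' "$1" | grep -q '^Signed-off-by: '
}
```

Installed as `.git/hooks/commit-msg`, the hook would run this check against the message file and `exit 1` on failure, rejecting unsigned commits locally; enforcing it for everyone would need the same check server-side in a pre-receive hook.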
@Petter1 @remington at our company every PR needs to be reviewed by at least one lead developer, and the PRs of the lead developers have to be reviewed by architects. We also encourage the other developers to perform reviews. Our company encourages the use of Copilot, but none of our reviewers would pass code that they don't understand.
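A review ladder like that can also be enforced by the hosting platform rather than by habit. A minimal sketch of a `CODEOWNERS` file (org and team names are hypothetical) that, combined with branch protection requiring code-owner review, blocks merging until the right people approve:

```
# Hypothetical CODEOWNERS sketch: every change needs a lead developer's
# approval; architecture-sensitive paths need an architect as well.
*            @example-org/lead-developers
# Later rules take precedence, so core paths list both teams explicitly.
/src/core/   @example-org/lead-developers @example-org/architects
```

Because only the last matching rule applies, paths that need both teams must list both owners on one line.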
🥰nice!
And you would get burned. Today's AI does one thing really, really well: create output that looks correct to humans.
You are correct that mandatory review is our best hope.
Unfortunately, the studies are showing we're fucked anyway.
Whether the AI output is right or wrong, it is highly likely to at least look correct, because producing correct-looking output is exactly where what we call "AI" today shines.
Realistically what happens is the code review is done under time pressure and not very thoroughly.
This is what happens to us. People put out a high volume of AI-generated PRs, nobody has time to review them, and the code becomes an amalgamation of mixed paradigms, dependency spaghetti, and partially tested (and horribly tested) code.
Also, the people putting out the AI-generated PRs are the same people rubber stamping the other PRs, which means PRs merge quickly, but nobody actually does a review.
The code is a mess.
@TehPers @Limerance why didn't you have time to review it? Every minute spent in review pays off, because it saves you hours of debugging and dealing with angry customers.
Sure, that’s the theory. In practice, code review often looks like this:
In other words – people were barely reading merge requests before. Code reviews have limited effects as well: you won’t catch every bug or see whether the code actually works just by looking at it. Code reviews mainly serve to spread knowledge about the code among the team. The more code exists in a project, the harder it is to understand, and you don’t want huge areas of code that only one person has ever seen.
Project managers don’t necessarily talk to angry customers directly. They might also choose to chase more features instead of allocating resources to fixing bugs. It depends on what the bosses prioritize. If they want AI and lots of new features, that’s what they will get. Fixing bugs, improved stability, better performance, etc. are rarely the priority.
Because if I spent my whole day reviewing AI-generated PRs and walking through the codebase with them only for the next PR to be AI-generated unreviewed shit again, I'd never get my job done.
I'd love to help people learn, but nobody will use anything they learn because they're just going to ask an LLM to do their task for them anyway.
This is a people problem, and primarily at a high level. The incentive is to churn out slop rather than do things right, so that's what people do.