this post was submitted on 26 Feb 2026
207 points (92.6% liked)
Open Source
I'm no fan of AI generally, but "AI Vulnerable" as a term just doesn't make much sense to me. Code review should be filtering out bad code whether it originates from an AI or a human.
AI-assisted PR spamming is another problem, one that is very serious and harmful for OSS, but that's not due to some unique danger that AI code has and human contributions don't.
But studies are showing it doesn't work.
A human makes a mental model of the entire system, does some testing, and submits code that works, passes tests, and fits their understanding of what is needed.
A present day AI makes an educated guess which existing source code snippets best match the request, does some testing, and submits code that it judges is most likely to pass code review.
And yes, plenty of human coders fall into the second bracket, as well.
But AI is very good at writing code that looks right.
Here's an example I ran into, since work wants us to use AI to produce work stuff, whatever, they get to deal with the result.
But I had asked it to add some debug code to verify that a process was working, by saving the in-memory result of that process to a file, so I could check whether the next step was even possible given the output of the first step (because the second step was failing). I got the file output and it looked fine, other than some missing whitespace, but that's okay.
And then while debugging, it said the issue was that the data from step 1 wasn't being passed to the next function at all. Wait, how could that be when the file looked fine? It turned out that when it added the debug code, it added a new code path that just calls the step 1 code directly (and correctly). That does verify step 1 on its own, but not the actual code path.
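To make the failure mode concrete, here's a minimal sketch of the pattern (the function names and data are hypothetical, not the actual work code): the request was to dump the exact object the real code path hands to step 2, but the generated debug helper re-runs step 1 fresh on its own, so its output says nothing about the real data flow.

```python
import json

def step1():
    # Hypothetical pipeline stage that builds an in-memory result.
    return {"records": [1, 2, 3]}

def step2(data):
    # The failing downstream stage we were trying to debug.
    return sum(data["records"])

def run_pipeline(debug_file=None):
    # What was asked for: dump the exact object step2 receives,
    # on the real code path.
    data = step1()
    if debug_file:
        with open(debug_file, "w") as f:
            json.dump(data, f)  # verifies what step2 actually gets
    return step2(data)

def debug_step1(debug_file):
    # What the AI produced instead: a separate code path that
    # re-runs step1 on its own and dumps that fresh result.
    data = step1()  # step1 works fine in isolation...
    with open(debug_file, "w") as f:
        json.dump(data, f)
    # ...but this proves nothing about whether run_pipeline()
    # ever passes that data on to step2.
```

Both versions produce a plausible-looking dump file, which is exactly why the bug was hard to spot in review.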
The code for this task is full of examples like that, almost as if it were intelligent but working on the genie model of helpfulness: technically following directions while subverting expectations anywhere they aren't spelled out.
Thinking about my overall task, I'm not sure using AI has saved any time. It produces code that looks more like finished code, but introduces a lot of subtle, unexpected issues along the way.
That's still on the human that opened the PR without doing the slightest effort of testing the AI changes though.
I agree there should be a lot of caution overall; I just think the problem is a bit mischaracterized. The problem is the newfound ability to spam PRs that look legitimate but are actually crap, and the root cause is humans doing this for GitHub rep or whatever, not AI inherently making codebases vulnerable. There need to be ways to detect users who repeatedly make zero-effort contributions like that and ban them.
Yes, it is their fault, and that fault is also a widespread problem.