this post was submitted on 11 Mar 2026
127 points (93.8% liked)

Opensource


The Lutris maintainer has been using AI-generated code for some time now. The maintainer also removed Claude's co-authorship trailers from the commits, so no one knows which code was generated by AI.

Anyway, I was suspecting that this "issue" might come up so I've removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what's generated and what is not.

sauce 1

sauce 2
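For context, the "co-authorship" in question refers to the `Co-Authored-By:` trailer that AI coding tools conventionally append to commit messages. A minimal sketch of how such a trailer shows up in history, and how anyone could have searched for it before it was stripped (the repo, commit message, and author details here are hypothetical, not taken from Lutris):

```shell
# Set up a throwaway repo with one commit carrying the trailer.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty \
  -m 'Add runner support' \
  -m 'Co-Authored-By: Claude <noreply@anthropic.com>'

# Search the history for commits carrying the trailer.
# Once the trailers are removed (e.g. via a history rewrite),
# this search simply returns nothing -- hence "good luck figuring
# out what's generated and what is not."
git log --grep='Co-Authored-By: Claude' --oneline
```

Note that removing the trailers requires rewriting already-published history, which is itself unusual for a project's main branch.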

[–] mrmaplebar@fedia.io 11 points 10 hours ago

I have multiple years of experience maintaining and reviewing code for a medium-sized open source project, and in my experience we have not seen any meaningful increase in good contributions since the AI investment bubble kicked off a couple of years ago.

On the flip side, I know that dealing with a glut of low-quality AI-generated slop merge requests has been a real problem for other large open source projects. https://www.pcgamer.com/software/platforms/open-source-game-engine-godot-is-drowning-in-ai-slop-code-contributions-i-dont-know-how-long-we-can-keep-it-up/

In my personal view, AI is really not suitable for actual programming, just typing. Programming requires thought and logic, something LLMs do not actually possess. Furthermore, without an authentic understanding of the code being generated, the human beings who are ultimately responsible for maintaining it, fixing errors, and making improvements will only be hurting themselves in the long run when they can't follow the "logic" of what was written. You're just creating more problems for yourself in the future.

Personification of probability doesn't do us any good, open source projects require thoughtful contributions from thinking entities.

To make matters worse, I think that AI is also not at all suitable for "open source" development, as it obfuscates authorship and completely undermines the concept of FOSS licensing.

Were AI models trained on FOSS code including GPL-licensed code? Does this make the output of AI models GPL too, or are LLMs magical machines that can launder GPL code into something proprietary? How do you know that the code produced by your LLM is legally safe and not ripped verbatim from someone else's scraped proprietary codebase? Finally, who is the author and copyright holder of AI generated code?

Ultimately, right now in 2026 we are seeing a lot of use of generative AI being forced by the corporate world, but we are not seeing that result in any meaningful improvement to worker productivity or product quality. (Windows 11 has never been in worse shape than it is today, and I can only assume that is because it is being programmed with much less human intelligence behind it.)