this post was submitted on 11 Mar 2026
127 points (93.8% liked)
Opensource
5735 readers
360 users here now
A community for discussion about open source software! Ask questions, share knowledge, share news, or post interesting stuff related to it!
founded 2 years ago
you are viewing a single comment's thread
this is some real 2022 style complaint
most developers are using ai in 2026 in some way, it’s simply too good
"it's simply too good"
Tell that to the code reviews I've been rejecting, because I strongly disagree. People are using it because they swallowed the snake oil; that doesn't mean we can't keep fighting against it.
And/or have developed AI psychosis after one too many erotic role play sessions.
"I get it through my work" yeah, whatever you say Mathieu.
Are developers really using AI because it’s “too good”, or is it because management has made the use of AI mandatory?
Some, maybe many, developers use and want to use AI even without management pushing it.
I'm skeptical and see limited usefulness, but I've also heard seemingly different sentiments from colleagues.
Fast =/= good
I have multiple years of experience maintaining and reviewing code for a medium-sized open source project, and in my experience we have not seen any meaningful increase in good contributions since the AI investment bubble kicked off a couple of years ago.
On the flip side, I know that dealing with a glut of low-quality AI-generated slop merge requests has been a real problem for other large open source projects. https://www.pcgamer.com/software/platforms/open-source-game-engine-godot-is-drowning-in-ai-slop-code-contributions-i-dont-know-how-long-we-can-keep-it-up/
In my personal view, AI is really not suitable for actual programming, just typing. Programming requires thought and logic, something LLMs do not actually possess and are not capable of. Furthermore, without an authentic understanding of the code being generated, the human beings who are ultimately responsible for maintaining the code, fixing errors, and making improvements will only be hurting themselves in the long run when they can't follow the "logic" of what was written. You're just creating more problems for yourself in the future.
Personification of probability doesn't do us any good, open source projects require thoughtful contributions from thinking entities.
To make matters worse, I think that AI is also not at all suitable for "open source" development, as it obfuscates authorship and completely obliterates the concept of FOSS licensing.
Were AI models trained on FOSS code including GPL-licensed code? Does this make the output of AI models GPL too, or are LLMs magical machines that can launder GPL code into something proprietary? How do you know that the code produced by your LLM is legally safe and not ripped verbatim from someone else's scraped proprietary codebase? Finally, who is the author and copyright holder of AI generated code?
Ultimately, right now in 2026 we are seeing a lot of generative-AI use being forced by the corporate world, but we are not seeing it result in any meaningful improvement in worker productivity or product quality. (Windows 11 has never been in worse shape than it is today, and I can only assume that is because it is being programmed with much less human intelligence behind it.)
No we’re not.
People are malding, but it's the truth.
You are living under a rock if you think any major software now doesn't have AI-written pieces to it in some manner.
It's so common now that it's a waste of time to label it; you should just assume AI was involved at this point.
Where I work, the company has a ChatGPT contract that's used as a coding-assistant tool in VS Code, and I imagine also by the admin/contract/legal people doing what they do. Every contracting-company developer I've worked with has some enterprise ChatGPT/Claude/Gemini/etc. through their company. I've talked to software developers at large companies who raved about what they could do with enterprise Claude and enterprise Cisco AI coding tools.
Pretty much everyone I know at the minimum uses the Gemini Google search summary for coding questions/dockerfile/kubernetes/open shift/docker compose/helm/terraform/ansible/bash script/python script/snippets/...
edit: The ineffective activist hive mind here doesn't like hearing about people using AI. The first person I knew who made regular use of ChatGPT, before I had ever opened the webpage, was an electrician. Like 2 years ago. He used it to write up his emails to customers. The second was someone who did marketing for a local restaurant chain. They used ChatGPT to draft marketing text for emails and mailers. They've been doing that for like 2 years now as well.
I remember in the news Level-5 using generative AI to create early ideas. Beloved video games Expedition 33 and Arc Raiders use/used generative AI.
2024 article
https://www.tomshardware.com/tech-industry/artificial-intelligence/over-1000-games-using-generative-ai-content-are-already-available-on-steam-but-are-any-of-them-worth-playing
2025 article
https://www.tomshardware.com/video-games/pc-gaming/1-in-5-steam-games-released-in-2025-use-generative-ai-up-nearly-700-percent-year-on-year-7-818-titles-disclose-genai-asset-usage-7-percent-of-entire-steam-library
If you're fighting against AI usage in the development of anything, the strategies of the last few years need to be revisited to determine where improvements can be had.
I appreciate your comment, even when many others downvote it. Honest experiences like this provide context and should always be upvoted in my eyes.
You didn't even make any claims about effectiveness or usefulness. Downvotes like these make me sad and make me feel like this is an unwelcoming community in general, where you can't have open and honest discussions.
I'm glad I don't work where you do. Unfortunately, though, I've heard plenty of stories from industry colleagues who have been forced to use AI coding assistants, regardless of any actual impact on productivity or reliability. The consequences of this approach are already manifesting in the form of an embarrassing new era of security vulnerabilities. Preaching about the widespread usage of AI is misleading at best if the adoption is mostly based on external marketing pressure. We've had this before with other technologies that get pushed hard by salespeople.
At absolute worst, bare minimum, these tools function as incredibly fast, fuzzy, intent-based search over documentation.
Instead of spending 10 minutes on "where the hell is the (documentation) I'm trying to find", these tools can hunt it down for me in a matter of seconds.
That already makes them useful just for that, let alone all the other crazy shit they help with now.
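To illustrate the "fuzzy intent-based search" idea from the comment above: real AI assistants use learned embeddings, but the shape of the technique can be sketched with a plain bag-of-words cosine similarity. Everything here (the doc snippets, function names) is a hypothetical toy, not any actual tool's implementation.

```python
# Toy sketch: match a query to the most relevant doc snippet by word
# overlap (cosine similarity over bag-of-words vectors). Real assistants
# replace these vectors with learned embeddings that capture intent.
from collections import Counter
import math

DOCS = {  # hypothetical documentation snippets
    "volumes": "Mount a host directory into a container with the -v flag.",
    "ports": "Publish a container port to the host with the -p flag.",
    "env": "Pass environment variables to a container with the -e flag.",
}

def vectorize(text: str) -> Counter:
    # Bag-of-words vector: word -> count
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str) -> str:
    # Return the key of the best-matching snippet
    qv = vectorize(query)
    return max(DOCS, key=lambda k: cosine(qv, vectorize(DOCS[k])))

print(search("how do I mount a directory"))  # → volumes
```

Even this crude version finds "volumes" without the query containing the word "volume", which is the gist of why people find the real tools faster than grepping docs by hand.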
It seems like the only people who actually derive value from it are software developers and middle managers. Every other professional discipline has liability and a need to verify accuracy before acting on something. So beyond reading the AI-generated summary on a search engine for non-critical things, it's basically useless.