this post was submitted on 28 Jan 2026

Open Source

[–] babka420@szmer.info 1 points 1 day ago

I love this approach: clear and concise rules. They're realistic about AI-created code making it into their codebase while still setting some guidelines.

[–] warm@kbin.earth 28 points 1 month ago (4 children)

Fuck AI, but a respectable stance.

And then you have the ShareX developer, using a screenshot of an AI response to respond to a GitHub issue.

[–] marcie@lemmy.ml 6 points 1 month ago

> And then you have the ShareX developer, using a screenshot of an AI response to respond to a GitHub issue.

based sharex dev, every foss dev should be legally allowed to shoot one issue creator each year

[–] Evotech@lemmy.world 5 points 1 month ago* (last edited 1 month ago)

Based. It's entirely opt in.

[–] roofuskit@lemmy.world 23 points 1 month ago (1 children)

Seems like a common sense approach.

[–] FirmDistribution@lemmy.world 15 points 1 month ago

Yeah. It's basically "don't commit poorly written code" (AI or not).

[–] LiveLM@lemmy.zip 8 points 1 month ago

So far the best AI tool use policy I've read, especially this bit:

> Your PR body should be providing context to other developers about why a change was made, and if your name is on it, we want your words and explanations, not an LLM's. If you can't explain what the LLM did, we are not interested in the change.

[–] HiddenLayer555@lemmy.ml 6 points 1 month ago* (last edited 1 month ago) (1 children)

What about the IP issues? Not even talking about the "ethics" of "IP theft via AI" or anything; you just know a company like Microsoft or Apple will eventually try suing an open source project over AI code that's "too similar" to their proprietary code. It doesn't matter that they're doing the same thing to a much greater degree; all that matters is that they have the resources to sue open source projects, and not the other way around. If a tech company can get rid of the competition by abusing the legal system, you just know they will, especially if they can also play the "they're knowingly letting their users use pirated media that we own with their software" card on top of it.

[–] eager_eagle@lemmy.world 7 points 1 month ago (1 children)

> you just know a company like Microsoft or Apple will eventually try suing an open source project over AI code that's "too similar" to their proprietary code.

Doubt it. The incentives don't align: they benefit from open source much more than they're threatened by it. Even the "embrace, extend, extinguish" idea comes from a different era, and it's likely less profitable than the vendor lock-in and other modern practices actually in place today. Even the copyright argument could easily backfire if they just threw it into a case, because of all the questionable AI training.

[–] Ferk@lemmy.ml 7 points 1 month ago* (last edited 1 month ago)

And especially for Microsoft: they would be shooting themselves in the foot if they spread Fear, Uncertainty and Doubt in the development community over the legality of AI tools like Copilot, which they themselves promote and sell.

[–] mrnobody@reddthat.com 3 points 1 month ago

If anyone needs to translate this, try out

https://libretranslate.com/

[–] kalpol@lemmy.ca 1 points 1 month ago

Well I wish the AI would fix the Schedules Direct problem