this post was submitted on 26 Mar 2026

Steam Hardware


A place to discuss and support all Steam Hardware, including Steam Deck, Steam Machine, Steam Frame, and SteamOS in general.


So when the news circulated recently that the Lutris developer was using Claude to help write the code (and the angry posts/articles appeared) I figured I'd reach out to Mathieu to hear his side of things.

I chatted with him a little, asking for his side of the story. He goes into some depth on how he uses it as part of his workflow, transparency in open-source projects in general, licensing and ownership of code that AI writes, safety, and so on. Plenty of answers from Lutris, if you're curious about the topic. As ever, you can find the link here:

https://gardinerbryant.com/mathieu-comandon-explains-his-use-of-ai-in-lutris-development/

[–] Zedstrian@sopuli.xyz 42 points 3 weeks ago* (last edited 3 weeks ago)

Also, there is enough open source code available that I would hope Anthropic doesn’t feel the need to train their models on potentially litigious code base.

The problem with this statement is twofold. Firstly, it is unrealistic to assume that leading AI companies are staying entirely above board on code licensing; with AI this widespread, it becomes all the harder for developers to enforce their licenses when so many users inevitably violate the terms without knowing. Secondly, even if the training code is open source, its licensing terms typically require attribution, which an AI is unlikely to provide for every segment of code it cobbles together.

When the developers who had their code taken and reused have no way of knowing who reused it, it is disingenuous to operate under a 'take first, ask later' mentality.

[–] rozodru@piefed.world 36 points 3 weeks ago (1 children)

After reading the interview (great job, btw), I can see he's using Claude Code the correct way. As someone whose contracting day job is to review code and report on the various fuck-ups companies make with AI, I can say that using it as a sort of rubber duck or pair-programming partner is, like it or not, honestly the correct way to use these tools.

Now, him stating that he hopes Anthropic won't feed on what he's produced... I wouldn't bet on it, bud. Your code base has already been utilized.

[–] Machindo@lemmy.ml 3 points 3 weeks ago* (last edited 3 weeks ago)

I agree.

I think his take is really healthy. If you use Claude to make toilsome work easier, you can stay productive and get over the procrastination hump. He also talks about velocity: you can go fix or explore things you wouldn't normally have time for.

It's also not like he's some know-nothing vibe coder; he's got a long coding career behind him. Claude is basically a really clever but amnesiac junior dev.

I had been feeling like I'd peaked in my career and was just too tired/depressed to work on homelab or personal coding projects. Now, with Claude, work is a lot less draining and I've got energy left over for stuff in the evenings. His experience resonates with mine.

What bothers me a lot is that before using Claude, I was just an AI skeptic/hater. Now that I use it regularly I see all the warts, but the good colossally outweighs the bad. Vibe coding is still a menace, because people who don't know better are inundating open source projects with low-effort slop. So nearly daily I feel like I'm being challenged by the remaining skeptical coworkers with an AI purity test where I have to keep explaining the same shit: "Yes, I vet every line of code before making a PR. Yes, I understand the APIs/documentation from the source material. Yes, I have been extremely vigilant against slop." It's exhausting.

So I very much sympathize with that sensationalized quote, "Good luck telling the difference between Claude commits and mine," because at the end of the day I stake my professional integrity on everything I produce, regardless of whether I wrote the code by hand or dictated it to Claude.

I do leave the "coauthored by Claude" in my commits because I still think it would be disingenuous to do otherwise. But damn if it isn't tempting to remove it.
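For what it's worth, that kind of attribution maps onto Git's standard `Co-authored-by` commit trailer. A minimal sketch (the commit message, name, and email below are illustrative placeholders, not a prescribed format):

```shell
# Record a commit with an AI co-author trailer.
# "Co-authored-by" is the standard Git/GitHub trailer key;
# the name and email shown here are illustrative.
git commit -m "Refactor runner detection" \
           -m "Co-authored-by: Claude <noreply@anthropic.com>"

# Trailers can be inspected later, e.g.:
git log -1 --format='%(trailers:key=Co-authored-by)'
```

Because it's a trailer rather than free-form text, tooling (and sites like GitHub) can parse the co-author out of the commit mechanically.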

[–] CoyoteFacts@piefed.ca 30 points 3 weeks ago (1 children)

I already read a lot of the Lutris devs' honest feelings about AI, and their willingness to obfuscate what they're doing with it, in the initial issues/discussions. No offense, but I'm not all that interested in watching them attempt to whitewash and downplay what happened after they've had time to figure out how to spin it.

[–] Alaknar@sopuli.xyz 1 points 3 weeks ago

The only reason they decided to obfuscate the use of Claude was the community starting flame wars and sending them death threats over it. Nobody is downplaying anything; they literally stated that they did it because managing shit-tier issues that were all basically "why use AI" was becoming too damaging to the project.

[–] memphis@sopuli.xyz 19 points 3 weeks ago

Cancelled my patreon membership over this

[–] Fizz@lemmy.nz 3 points 3 weeks ago

AI is becoming a very good tool in the software industry. I think people are going to have to really consider their AI stance and home in on which parts they actually find unethical, because it will be so widespread that you need to fight against its worst parts instead of against it as a whole.

For me, it's the copyright asymmetry and the hostile integration with existing life. I don't want to live in a world where OpenAI can train a model on everyone's works but I can't do the same. I don't want OpenAI to scrape every website relentlessly while I get blocked from scraping any large website.

For power usage, I don't care; that's a local government issue. If they choose to let an AI data center drive up costs and water usage, then they suck and I'll hate them for approving it. There are plenty of places to put a data center where power isn't an issue.

For art, it's awful: one, because it's trained non-consensually on artists' works, and two, because there is no intention behind its creation. I've come to believe that the reason we appreciate art is the human intention that goes into making it. That's why there is objectively bad art we resonate with more than a perfect still life: the artist has a story alongside the piece that gives it a unique value AI could never truly replicate.

This is why I can accept AI usage in software development and still hate AI. If it's built on an open-source model, it's fine, but I don't want to support development using these closed-source models and end up in a world where American megacorps control the tools to create software.