this post was submitted on 11 Mar 2026
360 points (97.1% liked)
Linux Gaming
24834 readers
1220 users here now
Discussions and news about gaming on the GNU/Linux family of operating systems (including the Steam Deck). Potentially a $HOME away from home for disgruntled /r/linux_gaming denizens of the redditarian demesne.
This page can be subscribed to via RSS.
Original /r/linux_gaming pengwing by uoou.
No memes/shitposts/low-effort posts, please.
Resources
WWW:
- Linux Gaming wiki
- Gaming on Linux
- ProtonDB
- Lutris
- PCGamingWiki
- LibreGameWiki
- Boiling Steam
- Phoronix
- Linux VR Adventures
Discord:
IRC:
Matrix:
Telegram:
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
I don't think people realize how effective current gen AI is, and are instead drawing opinions from years old chatgpt or Google "ai overviews" or whatever they call it. If you know what you're doing, which seems self evident here, AI tools can massively expand your software engineering productivity. AI "coauthoring" I always read as a marketing move, ultimately the submitting human is and should be responsible for the content. You don't and can't know what process they used to make it, evaluate it on its own merits.
There's a massive pile of ethical, moral, and political issues with use of AI, absolutely. But this is "but you participate in capitalism, therefore you're a hypocrite" tier of criticism. If amoral corporations are the only ones using these tools, and open source "stays pure", all we get is even more power concentrating with the corporations. This isn't Batman, “This is the weapon of the enemy. We do not need it. We will not use it.”
This is close to paradox of tolerance territory, wherein if one side uses the best weapons and the other doesn't out of moral restraint, the outcome is the amoral side winning.
Also on a technical note, the public domain/non copyrightable arguments are wrong. The cases that have been decided so far have consistently ruled that there needs to be substantial human authorship true, but that's a pretty low floor. Basically, you can't copyright a work that's the result of a single prompt. Effective use of AI in non trivial code based involves substantial discretion in picking out what to address, the process of addressing it, and rejecting, modifying, and itersting on outputs. Lutris is a large engineering project with a lot of human authorship over time, anything the author does with AI at this point is going to be substantially human authored.
Also, Open Claw isn't the apocalyptic vulnerability like it's reported as being. Any model with search and browser access has a non zero chance of prompt injection compromise, absolutely. But using Open Claw therefore vulnerable isn't a sound jump to make, Open Claw doesn't even necessarily have browser access in the first place. Again, capabilities have improved as well; this isn't the old days when you could message "ignore previous instructions" and have that work. Someone did an experiment lately wherein they set up a Claude Opus 4.6 model in an environment with an email and secrets. I don't recall for sure if it was using Open Claw specifically, but that style harness. They challenged the Internet to email the bot and try to convince it to email back the secrets. Nobody even got it to reply.
Tldr: it's coming for us all, sticking your head in the sand isn't going to save you.
I use AI tools all the time. It works well under supervision for things that should be relatively trivial but not enough for a human to do it quickly. It is also nowhere near good enough for unsupervised programming. A lot of times it can't even get the commit messages right, which misleading commit messages are worse than lazy commit messages. See this official OpenClaw Nix repo, and as you can see it also struggles to do tasks as basic as making a readable README.md file, which the fact that it can't even do that convinced me that the entire OpenClaw project is snakeoil. For prompt injection vulnerabilities, even their own project has that:
LLMs are not a vital resource like food or electricity. Refusing to participate will at worst be an inconvenience.
Software can coexist. One application won't kill another just because it was or wasn't written using an LLM. If it were otherwise, Linux wouldn't exist.
Electricity isn't a vital resource either, humans have lived without it for most of existence
There is no contest going on. No competition. There's no rush for productivity.
You do not NEED to use genAI.
Check out Asahi Linux for a great example of a good AI policy:
https://asahilinux.org/docs/project/policies/slop/