this post was submitted on 11 Mar 2026
207 points (96.4% liked)

Linux Gaming

you are viewing a single comment's thread
[–] hperrin@lemmy.ca 19 points 8 hours ago (2 children)

Here’s my issue with this specifically. It makes Lutris very vulnerable to being considered entirely public domain:

https://github.com/lutris/lutris/issues/6538

[–] stsquad@lemmy.ml 1 points 8 hours ago (1 children)

There is no settled legal status on the output of AI systems, and it's certainly something that needs clarification going forward. The law may treat asking an LLM to regurgitate its training data differently from asking it to follow instructions in a local context. Human engineers are allowed to use "retained knowledge" from their experience even if they can't bring their notebooks from previous careers. LLMs are just better at it.

[–] hperrin@lemmy.ca 9 points 7 hours ago* (last edited 7 hours ago) (3 children)

As of March 2, it has been settled. AI generated works must have substantial human creative input in order to be copyrightable. Prompting the AI does not meet that requirement.

https://www.morganlewis.com/pubs/2026/03/us-supreme-court-declines-to-consider-whether-ai-alone-can-create-copyrighted-works

In other words, if the AI wrote the code, and you didn’t change it since then, it’s not yours at all. It’s public domain, no question.

[–] yucandu@lemmy.world 3 points 7 hours ago (2 children)

Prompting the AI alone does not meet that requirement. I.e., you can't say "draw me a picture of a cat" and then copyright the picture of the cat, claiming you made it.

You can say "help me draw this left ear over here, now make the right ear up here, a little taller, darken the edges a bit", all with prompts, but with sufficient creative input of your own.

[–] hperrin@lemmy.ca 3 points 6 hours ago* (last edited 5 hours ago)

That’s not how the dev said he’s generating code. He said sometimes he does it without any intervention at all.

Also, that’s potentially copyrightable. It hasn’t been settled.

[–] dgdft@lemmy.world 1 points 7 hours ago* (last edited 7 hours ago) (1 children)

Your link doesn't support what you're saying in the slightest. Have whatever opinion you want, but don't shovel up transparent bullshit to push your narrative.

TFA is about a copyright on a work made by a purely autonomous device, and SCOTUS declining to hear a case doesn't "settle" jack-shit.

Quoting further:

Thaler submitted an application to the US Copyright Office to register copyright in “A Recent Entrance to Paradise,” explicitly identifying the AI system as the author and stating the work was created without human intervention.

For now, businesses and creators using AI should continue to rely on the longstanding human authorship requirement. Under current law, works made solely by autonomous AI are not eligible for copyright protection in the United States. Ongoing cases also consider the amount of human input, including prompting or post-generation editing, required to register copyright in an AI-generated work.[12]

Companies should ensure a human contributes creatively and is named as the author in any copyright applications for AI-assisted works. To maximize protection, organizations should review their creative workflows and document human involvement in AI-assisted projects, particularly for commercial content. Organizations should continue to document the timing and scope of the use of AI in copyrightable works, for example by retaining prompts provided by the author. Internal policies should clarify attribution, ownership, the nature of creative input, and documentation requirements to avoid denied copyright applications.

Iteratively working on a codebase by guiding an LLM's design choices and feeding it bug reports is fundamentally different from this case you're citing.

[–] hperrin@lemmy.ca 1 points 6 hours ago* (last edited 6 hours ago)

If all you do is prompt the AI, “hey, fix bugs in this repo,” then you had no creative input into what it produces. So that kind of code would not be copyrightable, 100%. You can fight it in court, but the Supreme Court refusing to hear it means the lower court’s decision is settled law, and your chances of winning are essentially zero.

Whether code where you hold its hand and basically pair program with it is copyrightable hasn’t been settled. Considering the dev said he does it both ways, the point is rather moot, since for sure, he doesn’t own the copyright to at least some of that AI generated code.

OpenClaw is an autonomous system just like the one in that article, and the dev said that’s what he’s using at least some of the time. It generates and commits code without human intervention.

[–] stsquad@lemmy.ml 1 points 7 hours ago

Glad it applies worldwide /s

Slop can't be copyrighted, great. We don't want slop.

[–] db2@lemmy.world 0 points 8 hours ago (1 children)

"AI" has been known to present code from other projects and hence other licenses. It can't become public domain unless all of that code was also public domain.

[–] bss03@infosec.pub 2 points 2 hours ago

I'd imagine there have been legal decisions more nonsensical than "AI output = public domain" that have had the full force of law for decades.

I recently dug around for a while, and if the copyright of works in the training data affects the copyright of outputs, no popular model can output anything that would even be close to acceptable for a contribution to an open-source project. Maybe if you trained a model exclusively on "The Stack" (NOT "The Pile") and then included all the required attributions -- but no ready-made model does that. All of the "open source" model frameworks that I could find included some amount of proprietary "pre-training" data that would also be an issue.

If AI output is NOT affected by the copyright of training data... there might not BE a (legal) person that can hold any copyrights over it, which is pretty close to public domain.