this post was submitted on 20 Mar 2026
46 points (91.1% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Sadly, it seems like Lemmy is going to integrate LLM code going forward: https://github.com/LemmyNet/lemmy/issues/6385 If you comment on the issue, please try to make sure it's a productive and thoughtful comment and not pure hate brigading.

Consider upvoting the issue to show community interest.

Edit: perhaps I should also mention a similar discussion here: https://github.com/sashiko-dev/sashiko/issues/31 This one concerns the Linux kernel. I hope you'll forgive the slight tangent, but this one could benefit from more eyes too.

[–] ell1e@leminal.space 1 points 1 day ago* (last edited 1 day ago) (1 children)

Then the PR can be evaluated, rejected if it’s nonfree or just poor quality

I don't see the difficulty of rejecting on the grounds of "it's nonfree, just poor quality, or known LLM code". That doesn't strike me as a vague criterion.

And in many projects, if you admit a snippet came from a StackOverflow post, they will likewise reject it unless you can show it's not a direct copy. That isn't commonly seen as incentivizing people to lie.

Whether you think LLMs are worth the trouble to use is a different discussion, but the enforcement argument doesn't convince me.

There is also a responsibility and liability question here. If something turns out to be a copyright problem and the contributor skirted a known rule, the moral judgement looks different than if the project knowingly included it anyway. (I can't comment on the legal outcomes since I'm not a lawyer.)

[–] Rentlar@lemmy.ca 0 points 1 day ago (1 children)

To be specific, the jump you are making is likening LLM output to non-free code. While that makes sense on the surface, LLM output is much closer to work derived from copied code. In the US at least, there's clear legal precedent that LLM fabrications are not copyrightable.

Blanket AI bans are enforceable; I'm not arguing against that. I just don't think one is worth instituting here, because it's not a good fit for this project. My argument is that a Lemmy development policy of "please mark which parts of your code are AI-generated and how you used LLMs, and we will evaluate accordingly" is better than "if you indicate anywhere that your code is AI/LLM-generated, we will automatically reject it".

[–] ell1e@leminal.space 2 points 1 day ago* (last edited 1 day ago) (1 children)
[–] Rentlar@lemmy.ca 0 points 1 day ago

I don't mean in any way to imply that your opinion isn't sound, simply that I don't agree with it here, in the context of whether the Lemmy devs should accept PRs with any reported LLM usage.