this post was submitted on 20 Mar 2026
46 points (91.1% liked)

Fuck AI

6441 readers
1066 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Sadly, it seems like Lemmy is going to integrate LLM code going forward: https://github.com/LemmyNet/lemmy/issues/6385 If you comment on the issue, please try to make sure it's a productive and thoughtful comment and not pure hate brigading.

Consider upvoting the issue to show community interest.

Edit: perhaps I should also mention this one here as a similar discussion: https://github.com/sashiko-dev/sashiko/issues/31 This one concerns the Linux kernel. I hope you'll forgive me this slight tangent, but more eyes could benefit this one too.

[–] GreenKnight23@lemmy.world 2 points 14 hours ago

ITT people making excuses to use AI in objectively "acceptable" ways.

I thought this was fuck AI.

guess everyone has their price. hope y'all enjoy the ride down to the enslavement level. I'd say you'll see me there, but I'll be at the top, cutting the cable to your elevator.

[–] BlameTheAntifa@lemmy.world 5 points 18 hours ago* (last edited 18 hours ago) (1 children)

I’m sure many here know how militantly anti-AI I am, but things like “intellisense” have been around for decades. If someone is using an ML model for code prediction and they actively write code with their own fingers, I don’t see it as much different than earlier code hinting systems. That’s far different from allowing an AI to perform a task autonomously, like an Avian Intelligence.

[–] ell1e@leminal.space 2 points 17 hours ago* (last edited 17 hours ago)

The problem is that LLM code prediction will likely plagiarize too. Some argue "it's too short to get sued over", but even if that were universally true (I don't know, IANAL), that still leaves the ethics and morals of seemingly stealing some lines, hook, line, and sinker, down to every punctuation mark and intricacy, from GPL code bases without attribution.

Some simply think that's bad for FOSS, notwithstanding the other ways LLMs seem to harm FOSS.

(And old-school "IntelliSense" is semantics-based and doesn't do that.)
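To illustrate the distinction (a minimal sketch with a made-up helper name, not any real IDE's implementation): semantics-based completion only surfaces identifiers that already exist in scope, so there is nothing to copy from anyone else's code base.

```python
# Sketch of "IntelliSense"-style semantic completion: it filters
# existing attribute names via introspection. Nothing is generated,
# so nothing can be plagiarized from training data.
def semantic_complete(obj, prefix):
    """Return the attributes of obj whose names start with prefix."""
    return sorted(name for name in dir(obj) if name.startswith(prefix))

print(semantic_complete([], "ap"))  # -> ['append']
```

An LLM-based completer, by contrast, generates new tokens sampled from a model trained on other people's code, which is where the plagiarism concern comes in.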

[–] cloudskater@piefed.blahaj.zone 2 points 18 hours ago* (last edited 18 hours ago) (1 children)

I'm glad I moved to PieFed a few months after joining Lemmy. Tankies have no real values and will claim to be radicals while holding no better values than our oppressors.

[–] IndustryStandard@lemmy.world 1 points 15 hours ago

Piefed has the sloppiest codebase imaginable.

[–] Rentlar@lemmy.ca 30 points 1 day ago (3 children)

Code written with the help of an LLM and then reviewed is different from what happened with Lutris, where the developer decided to obfuscate their use of AI-generated code.

The approach you suggest, a total ban, is one I can agree with in principle, and I think it's noble, but it could lead to people accusing each other of using AI code whether or not it actually happened, or to others just hiding it and submitting anyway without the reviewers knowing, which is counter-productive.

I've followed Lemmy development for 3 years now; the devs' approach is slow and steady, to a fault in some people's views. I think it's a better use of open source resources if we encourage candor and honesty. If the repo gets spammed with AI-generated PRs, then it will probably be blanket banned, but contributors accurately documenting and reporting their use of AI will help direct reviewers' attention and ensure the code is not slop or full of hallucinations.

[–] ell1e@leminal.space 18 points 1 day ago* (last edited 1 day ago) (2 children)

In my opinion, this argument is exactly the same as saying "we can't enforce people not stealing GPL-licensed code and copy&pasting it into our project, so we might as well allow it and ask them to disclose it."

You can try to argue AI may actually be useful, which seems to be what they did, and that would more fairly inform a policy in my opinion. I don't think your argument does.

[–] MrLLM@ani.social 8 points 20 hours ago

Yeah, and on top of all that, there are the reasons why we hate AI:

  • It's a plagiarism machine
  • It still hallucinates, which can end up borking projects
  • It has and will continue to fuck up the RAM and storage markets
  • It consumes a shit ton of energy
  • It's ruining everything with poor-quality products
[–] Rentlar@lemmy.ca 4 points 23 hours ago (1 children)

My argument is that a total ban on AI use is more comparable to saying "code from any other coding project is not allowed". It will start unproductive arguments over boilerplate, struct definitions, and other commonly used code.

The broadness and vagueness of "no AI whatsoever" or "no code from any other projects whatsoever" will be more confusing than saying, "if you do copy any code from another project, let us know where it's from". Then the PR can be evaluated, and rejected if it's nonfree or just poor quality, rather than incentivizing people to pass off other people's code as their own, risking bigger consequences for the whole project. People can be honest about getting inspiration from Stack Overflow, a reference book, or another project, if they are allowed to be.

I'm not saying AI should be blanket-allowed; the submitter needs to understand the code well enough to revise it for errors themselves if the devs point something out. They can't just say "I asked AI and it's confident that the code does this and is bug-free".

[–] ell1e@leminal.space 1 points 22 hours ago* (last edited 21 hours ago) (1 children)

Then the PR can be evaluated, rejected if it’s nonfree or just poor quality

I don't see the difficulty in rejecting code "if it's nonfree or just poor quality or known LLM code". I don't think that's a vague criterion.

And in many projects, if you admit something is from a Stack Overflow post, they will reject it as well unless you can show it's not a direct copy. This isn't commonly taken as incentivizing people to lie.

Now whether you think LLMs are worth the trouble to use is a different discussion, but the enforcement point doesn't convince me.

There is also a responsibility and liability question here. If something turns out to be a copyright issue and the contributor skirted a known rule, the moral judgement may look different than if you knew and included it anyway. (I can't comment on the legal outcomes since I'm not a lawyer.)

[–] Rentlar@lemmy.ca 0 points 22 hours ago (1 children)

To be specific, the jump you are making is likening LLM output to non-free code. While on the surface that makes sense, it's much closer to making something based on copied code. In the US at least, there's clear legal precedent that LLM fabrications are not copyrightable.

Blanket AI bans are enforceable, I'm not arguing against that, it's just that I don't think it's worth instituting, that it's not a good fit for this project. My argument is that a Lemmy development policy of "please mark which parts of your code are AI-generated and how you used LLMs, and we will evaluate accordingly" is better than "if you indicate anywhere that your code is AI/LLM-generated, we will automatically reject it".

[–] ell1e@leminal.space 2 points 21 hours ago* (last edited 21 hours ago) (1 children)
[–] Rentlar@lemmy.ca 0 points 20 hours ago

I don't mean in any way to imply that your opinion isn't sound, simply that I don't agree with it here, in the context of whether the Lemmy devs should accept PRs with any reported LLM usage.

[–] cloudskater@piefed.blahaj.zone 1 points 18 hours ago (1 children)

It's different but no better. It's still AI slop, just human-reviewed AI slop.

[–] hitmyspot@aussie.zone 1 points 16 hours ago (1 children)

Not all AI output, or rather LLM output, is slop. Some is useful. The reason for review is to differentiate. And I'm not just talking about coding; I'm talking about their actual useful functionality.

It would be great if they didn't hallucinate or produce slop. It would also be great if companies using them instead of workers meant we worked fewer hours and had more leisure time, rather than fewer paying jobs and more stress. The LLM is not at fault for the structure of society.

LLMs and AI are tools. If used appropriately, there should be no issue. If used inappropriately, it should be called out. Certainly where there is a risk of something appearing good on the surface while not actually being good, like AI-generated code, marking it as such seems reasonable. Banning it doesn't get rid of it; it just hides it. It exists and is now in the world. We need to have policies that support appropriate use.

[–] cloudskater@piefed.blahaj.zone 2 points 15 hours ago* (last edited 15 hours ago) (1 children)

I'm sorry, but no matter how many times I hear this argument, it never addresses the issues with AI that exist regardless of its use case. There are plenty of other unacceptable things in this world that we apply strict bans to. No ban will ever rid the world of the issue, but that doesn't mean you concede to "appropriate" uses of a maliciously envisioned technology. Someone in the world will always be hungry, but that doesn't mean we settle for mostly eradicating world hunger; we try to do all we can.

No amount of "but it's for a good purpose" will erase the issues inherent to LLMs and "generative" AI. I like the idea of pure tedium being automated in the future, but so long as it's based on this tech as it currently exists, any genuine attempt to create something positive is a non-starter. I'm not a "luddite"; I don't hate progress or new ideas. I simply refuse to support projects that rub shoulders with hyper-capitalist theft machines that destroy the planet.

[–] hitmyspot@aussie.zone 0 points 15 hours ago (1 children)

In your analogy, we don't ban processed food because some people go hungry. We use agriculture to feed as many as possible with better foods. We try to do better, and more production is generally better. That's what AI is: the equivalent of processed food. It's not real food, and it's less healthy, but it's functional.

Same with AI. It is an input-and-output machine. It has costs associated with it. We assess the output on its merits and cost. If the output is slop, it should be discarded. If it is functional, it gets used.

[–] cloudskater@piefed.blahaj.zone 1 points 15 hours ago (1 children)

I knew I shouldn't have used that analogy, because then the focus would be redirected to it and I'd end up defending it instead of the position it was meant to represent.

I've said what I intended to say. I don't wanna argue over the uses of AI when it's the foundation itself that's rotten. There's no good way to make use of "gen" AI as it stands.

[–] hitmyspot@aussie.zone -1 points 14 hours ago (1 children)

It's fine that you have that opinion. I disagree, and so do many others. I've used AI to generate notes, checklists, letters, emails, work templates, etc.

The output was correct and valid in most cases. What about the foundation is rotten, in your view? The fact that it's based on other people's work being regurgitated, or the environmental concerns, or how big tech is trying to leverage it to be an arbiter of knowledge and computing power? All are valid concerns, but they don't mean the technology is inherently unusable or unethical.

Banning it because of the views of some is unfair to the views of others. I do think that marking it is appropriate, so that anyone who objects to its use can avoid it. I would be concerned that over time it becomes impossible to avoid, though. However, that's the point of open source: people can fork projects at the point where there is no AI code (except where that is purposefully obfuscated).

[–] cloudskater@piefed.blahaj.zone 0 points 14 hours ago (1 children)

"What about the foundation is rotten, in your view? The fact that it's based on other people's work being regurgitated, or the environmental concerns, or how big tech is trying to leverage it to be an arbiter of knowledge and computing power? All are valid concerns, but they don't mean the technology is inherently unusable or unethical."

It literally does. There's no point in this discussion if we're disagreeing over something so fundamental.

[–] hitmyspot@aussie.zone 0 points 13 hours ago (1 children)

Cool, I can see it's a waste of time too if you're not able to appreciate other people's view or express yours beyond absolutisms. It's not a discussion when the only view you pay attention to is your own.

[–] cloudskater@piefed.blahaj.zone 1 points 13 hours ago

Lol as if I didn't hear you out. At this point anyone could present any point against "generative" AI and you'd find a way to say "but if it produces something that works".

At least, that's how you've come off. I know I'm being abrasive, but I genuinely don't wanna believe people think like that, and I don't enjoy fighting like this.

When tedious tasks can be automated without using tech made by fascists for fascists, I'll be all over that. Until then, it's pretty hard to defend.

[–] wheezy@lemmy.ml 2 points 1 day ago* (last edited 1 day ago) (1 children)

Great perspective and response. Far too many "fuck AI" people are literally advocating for the equivalent of "fuck computers" and "more tedious labor please!"

The reason you should hate AI should be related to its exploitation of labor and its overuse leading to energy and environmental impacts. Trying to ban AI for all applications is just counterproductive and impossible. If the anti-AI crowd is just filled with people who want it banned outright for everything, well, then the pro-AI crowd that wants to slam it into anything and everything will win out.

We need to be pointing to good applications of AI that can benefit open source projects in a responsible way, as examples of how it should be used, not spamming them with hate comments because "AI bad".

[–] ell1e@leminal.space 12 points 1 day ago* (last edited 20 hours ago) (1 children)

far too many “fuck AI” people are literally advocating for the equivalent of “fuck computers” and “more tedious labor please!”

Not what I'm advocating for.

We need to be pointing to good applications of AI

Feel free to do so, but studies are not on your side. Edit: this is a reminder we're talking about LLMs for code and documentation.

The only somewhat clearly useful use case appears to be code review, but then you don't need to allow submitting any LLM-rewritten code or text at all, since code reviews can be done in natural language. And if you use server-side LLMs, you'll probably agree to a ToS that lets them steal your data.

And LLMs seem to be amazing at plagiarism.

[–] FauxLiving@lemmy.world 1 points 1 day ago (2 children)

We need to be pointing to good applications of AI

Feel free to do so, but studies are not on your side.

The only somewhat clearly useful use case appear to be code reviews, but then you don’t need to actually allow submitting any LLM rewritten code or text since code reviews can be done using natural language. And if you use server-side LLMs, you’ll probably agree to ToS that they steal your data.

And they seem to be amazing at plagiarism.

You, like a large portion of the 'fuck AI' community, are angry at LLMs or image/video generation models and their associated capitalist bubble. Yes, LLMs produce poor-quality output compared to humans, and yes, the current marketing and capital explosion is bad for everyone involved who isn't otherwise independently wealthy.

The reason these are the AI you're aware of is that AI needs a lot of data to train, and the only source of a huge amount of data, the Internet, is primarily text, images, and video. So the first large transformer-based neural networks were trained on that dataset.

ChatGPT and Sora are toys, they were just the toys that were easiest to make given the data available when transformers were discovered.

If you train neural networks on different kinds of data you get different models. For example, if you train neural networks on protein folding data, you get neural networks that can predict protein folding based on an amino acid sequence. This is a thing that human-created software has not had great success at.

People may be familiar with Folding@home, a project which attempts to leverage donated computing resources to brute-force the problem. These projects have consumed thousands of person-hours from our best scientists and engineers, and the results are pretty poor.

However, since we now know how to train neural networks on data, we can train an AI to predict protein structures, and the resulting networks, such as AlphaFold (https://en.wikipedia.org/wiki/AlphaFold), produce far better results than human-engineered software.

In addition to predicting the structure, other scientists have used diffusion models (similar to how consumer AI products generate images) to go the other way. Now a scientist can describe a protein's properties in a prompt and instead of generating a picture the network outputs the sequence of amino acids that are most likely to fold into a shape with those properties.

Robotics is another field where AI is making an impact unseen by the public. There isn't an Internet full of bipedal-motion or limb-positioning data, so it is much harder to train an AI to operate robots. Many projects are working to create that data, and the results are pretty impressive. This is a bipedal robot trained on human motion: https://www.youtube.com/watch?v=I44_zbEwz_w. Compare that to pre-AI motion: https://www.youtube.com/watch?v=LikxFZZO2sk

Weather forecasting is another field where AI is useful. Predicting weather requires identifying patterns in huge amounts of data and AI is uniquely able to deal with that level of complexity.

None of these uses of AI can talk to you or produce pictures. They cannot understand sentences, write emails, or generate code. They're trained on data generated specifically for their purpose, not on public data scraped from the Internet. Their output allows us to develop medicines faster, automate dangerous jobs, and predict weather disasters.

I'm with anyone who's concerned about the capitalist frenzy over LLMs and image/video generation products. This is clearly another dotcom bubble and the spending frenzy and disruption in the job markets is damaging the economy and hurting workers at a large scale.

I do not lay the blame for this at the feet of neural networks. The blame lies with the human beings making the decision to take a promising technology and to dump trillions of dollars into it without any endgame other than market dominance.

The community motto should be 'fuck AI executives'. AI has many uses outside of LLMs and image generation, and people are completely missing all of the amazing things this technology is making possible.

[–] baggachipz@sh.itjust.works 4 points 1 day ago (1 children)

Thank you so much for taking the time to put into words what I’ve been too lazy to enunciate. Transformer-based tools are a great development with some fantastic uses. I think the problem is one of nomenclature and extremely aggressive marketing by grifters. The reason I’m in this community isn’t to outright banish anything related to transformer-based tech, but to rail against the insanely overhyped, economy-wrecking shitshow that has commandeered the nebulous term “AI” when it’s really just LLMs.

[–] FauxLiving@lemmy.world 2 points 1 day ago

The reason I’m in this community isn’t to outright banish anything related to transformer-based tech, but to rail against the insanely overhyped, economy-wrecking shitshow that has commandeered the nebulous term “AI” when it’s really just LLMs.

Same, I'm here because capitalism is doing serious damage to the world by taking a promising technology and massively over investing.

I'm not here to side with the Luddites who reflexively downvote anything that says 'AI'.

Though, I will say that this is a nuanced opinion and so I understand that I'm going to be dog piled by the people who're only here for low effort performative activism.

[–] ell1e@leminal.space 4 points 1 day ago* (last edited 1 day ago) (5 children)

We were talking about lemmy and LLMs. They're not part of any use case you're listing.

But my apologies if I missed something here.

[–] _haha_oh_wow_@sh.itjust.works 2 points 19 hours ago (1 children)

Guess it's time to jump ship to Mbin or PieFed...

[–] Liketearsinrain@lemmy.ml 3 points 18 hours ago* (last edited 18 hours ago)

Dunno about Mbin but PieFed very much looks like it uses LLM code from what I've seen

[–] Kolanaki@pawb.social 19 points 1 day ago (1 children)

I mean the lead dev is literally agreeing that LLM code shouldn't be in the project at all as the first reply to the issue. I'm not seeing how it's headed toward integration from what you've linked.

[–] ell1e@leminal.space 14 points 1 day ago (2 children)

Sadly, the Lemmy team seems to have reversed their opinion immediately after: https://github.com/LemmyNet/lemmy-docs/pull/414/changes

[–] ohshit604@sh.itjust.works 1 points 19 hours ago* (last edited 19 hours ago) (1 children)

Use of so-called Artificial Intelligence (AI) is allowed only if it is explicitly mentioned. Additionally all LLM-generated text or code must be manually reviewed by the author before submission (no vibe coding allowed).

Linus Torvalds has the best take on using "AI" for software development and documentation.

As I said in private elsewhere, I do not want any kernel development documentation to be some AI statement. We have enough people on both sides of the "sky is falling" and "it's going to revolutionize software engineering", I don't want some kernel development docs to take either stance.

It's why I strongly want this to be that "just a tool" statement.

source

Using it in a staging environment should be perfectly acceptable: review it and adjust it before introducing it in a production build, and treat it like a tool, not a one-click magic code machine.

[–] ell1e@leminal.space 0 points 17 hours ago* (last edited 17 hours ago)

That doesn't take the extensively researched plagiarism concerns into account. It's not just that the output is low-quality slop; some of us think the GPL won't work if you can train LLMs on GPL code and then have them spit out GPL snippets un-GPL'ed.

Some people literally un-GPL entire projects via AI in one go. That's the egregious version, but any LLM use seems to risk a similar effect at a smaller scope.

This isn't only a legal question. At least if you think the GPL has societal and moral value.

[–] Zetta@mander.xyz -2 points 21 hours ago (1 children)

Better stop using the internet, then. I always say this: within the next five years, every single piece of software you use is going to have generated code in it. You may not like it, but it's happening, so sorry.

[–] ell1e@leminal.space 1 points 20 hours ago* (last edited 20 hours ago)

There is a growing list of projects to collaborate with that reject LLM code: Asahi Linux, elementaryOS, Gentoo, GIMP, GoToSocial, Löve2D, Loupe, NetBSD, postmarketOS, Qemu, RedoxOS, Servo, stb libraries, Zig.

[–] Liketearsinrain@lemmy.ml 1 points 18 hours ago

Seems reasonable, I recommend people actually read the linked discussions instead of just the title.

[–] in_my_honest_opinion@piefed.social 12 points 1 day ago (9 children)
[–] uuj8za@piefed.social 2 points 23 hours ago

Yeah! I was ok with Lemmy, but recently (unrelated) decided to try Piefed. I'm liking Piefed better. Lots of nice UI/UX improvements over Lemmy. Didn't realize what I was missing.
