[–] yucandu@lemmy.world 1 points 5 days ago (1 children)

Can you elaborate on why this is unethical?

I use 0.2 kWh of electricity to spend a day coding with this model:

https://en.wikipedia.org/wiki/Apertus_(LLM)
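For scale, here's a rough back-of-envelope for that number (the extra-draw figure is my own assumption, not a measurement):

```python
# Sanity check on the 0.2 kWh/day figure.
# Assumption: local inference adds roughly 25 W on top of the machine's idle
# draw, spread over an 8-hour coding day.
extra_draw_watts = 25
hours = 8
print(extra_draw_watts * hours / 1000, "kWh")  # -> 0.2 kWh
```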

[–] jaredwhite@humansare.social 1 points 5 days ago (1 children)

It is still trained on open source code on GitHub. These code communities seemingly have no way to opt out of their free (libre) contributions being used as training data, nor does the resulting code generation contribute anything back to those communities. It is a form of license stripping. That's just one issue.

Just because your inference running locally doesn't use much electricity doesn't mean you've sidestepped all of the other ethical issues surrounding LLMs.

[–] yucandu@lemmy.world 1 points 5 days ago (1 children)

It is not trained on open source code on GitHub.

But I can use it to analyze a datasheet and generate a library for an obscure module that I can then upload to GitHub and contribute to the community.
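Roughly like this — a minimal sketch, assuming the model is served locally behind an OpenAI-compatible endpoint (e.g. llama.cpp's llama-server or vLLM); the model name and datasheet file are placeholders, not anything Apertus ships with:

```python
# Minimal "datasheet in, library out" loop against a locally served model.
# Assumptions: an OpenAI-compatible server is running on localhost:8080, the
# model name matches whatever that server exposes, and the datasheet text has
# already been extracted to a plain-text file.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")  # key unused locally

datasheet = open("module_datasheet.txt").read()

resp = client.chat.completions.create(
    model="apertus-8b-instruct",  # placeholder name
    messages=[
        {"role": "system",
         "content": "You write small, well-documented C libraries for embedded modules."},
        {"role": "user",
         "content": "Here is the datasheet for an obscure I2C module:\n\n"
                    + datasheet
                    + "\n\nGenerate a C driver library (header + implementation) for it."},
    ],
)

print(resp.choices[0].message.content)
```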

[–] jaredwhite@humansare.social 1 points 5 days ago (1 children)

Apertus is most certainly trained on source code hosted on GitHub. It is laid out here in their technical report:

https://github.com/swiss-ai/apertus-tech-report

It uses a large dataset called The Stack, among others.

[–] yucandu@lemmy.world 1 points 4 days ago* (last edited 4 days ago) (1 children)

StarCoderData: A large-scale code dataset derived from the permissively licensed GitHub collection The Stack v1.2 (Kocetkov et al., 2022), which applies deduplication and filtering of opted-out files. In addition to source code, the dataset includes supplementary resources such as GitHub Issues and Jupyter Notebooks (Li et al., 2023).

That's not random GitHub accounts or "delicensing" anything. People had to opt IN to be part of "The Stack". Apertus isn't training itself from community code.
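And anyone can inspect what actually ended up in that set — a minimal sketch, assuming the data is still published as bigcode/starcoderdata on Hugging Face (access may require accepting its terms and logging in with huggingface-cli) and that the metadata column names below match the released schema:

```python
# Stream a few rows of StarCoderData and print where each file came from and
# under which license it was published.
# Assumptions: dataset id, the per-language data_dir layout, and the column
# names are taken from the public release notes and may have changed.
from datasets import load_dataset

ds = load_dataset("bigcode/starcoderdata", data_dir="python",
                  split="train", streaming=True)

for i, row in enumerate(ds):
    if i >= 5:
        break
    # .get() because the exact column names are an assumption on my part
    print(row.get("max_stars_repo_name"), row.get("max_stars_repo_licenses"))
```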

[–] jaredwhite@humansare.social 1 points 4 days ago (1 children)

I'm tired of arguing with you about this, and you're still wrong. It was opt-out, not opt-in, based initially on a GitHub crawl of 137M repos and 52B files before filtering & dedup.

[–] yucandu@lemmy.world 1 points 4 days ago

But again, you'd have to set your project to public and your license to "anyone can take my code and do whatever they want with it" before it'd even be added to that list. That's opt-in, not opt-out. I don't see the ethical dilemma here. I'm pretty sure I've found ethical AI that produces good value for me and society, and I'm going to keep telling people about it and how to use it.