this post was submitted on 08 Feb 2026
1118 points (98.5% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

[–] yucandu@lemmy.world 11 points 5 days ago (5 children)

What about the AI that I run on my local GPU that is using a model trained on open source and public works?

[–] Jankatarch@lemmy.world 7 points 5 days ago* (last edited 5 days ago) (2 children)

It's cool as hell to train models, don't get me wrong, but if you use them as assistants you'll still slowly stop thinking, no?

So Nazgûl.

[–] yucandu@lemmy.world -2 points 5 days ago

Feels like telling me not to use a calculator so I don't forget how to add and subtract.

[–] mattvanlaw@lemmy.world -1 points 5 days ago

I've settled on a future model where AIs are familiars that level up from their experience more naturally and are less immediately omnipotent

[–] Mesophar@pawb.social 6 points 5 days ago

Sounds like the rings of the Elves to me

[–] jaredwhite@humansare.social 3 points 5 days ago (1 children)

That is slightly less unethical than Claude or whatever, but it is still unethical.

[–] yucandu@lemmy.world 1 points 5 days ago (1 children)

Can you elaborate on why this is unethical?

I use 0.2 kWh of electricity to spend a day coding with this model:

https://en.wikipedia.org/wiki/Apertus_(LLM)
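
For anyone curious how a day of coding can land around 0.2 kWh, here's a rough back-of-envelope sketch; the GPU draw and duty cycle below are assumed numbers, not measurements:

```python
# Back-of-envelope check of the "0.2 kWh per day of coding" figure.
# Both numbers are assumptions: a consumer GPU drawing ~200 W while
# generating, and ~1 cumulative hour of the GPU actually being busy
# over a workday (it idles between prompts).
gpu_draw_watts = 200  # assumed average draw during inference
busy_hours = 1.0      # assumed total generation time per day

energy_kwh = gpu_draw_watts * busy_hours / 1000
print(f"~{energy_kwh:.1f} kWh per day")  # ~0.2 kWh per day
```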

[–] jaredwhite@humansare.social 1 points 4 days ago (1 children)

It is still trained on open source code on GitHub. These code communities seemingly have no way to opt out of their free (libre) contributions being used as training data, nor does the resulting code generation contribute anything back to those communities. It is a form of license stripping. That's just one issue.

Just because your inference running locally doesn't use much electricity doesn't mean you've sidestepped all of the other ethical issues surrounding LLMs.

[–] yucandu@lemmy.world 1 points 4 days ago (1 children)

It is not trained on open source code on GitHub.

But I can use it to analyze a datasheet and generate a library for an obscure module that I can then upload to GitHub and contribute to the community.

[–] jaredwhite@humansare.social 1 points 4 days ago (1 children)

Apertus is most certainly trained on source code hosted on GitHub. It is laid out here in their technical report:

https://github.com/swiss-ai/apertus-tech-report

It uses a large dataset called The Stack, among others.
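
You can also inspect what went into The Stack yourself. Here's a rough sketch that streams a few records of the dataset from Hugging Face and prints their provenance metadata; it assumes you've accepted the dataset's terms and logged in with `huggingface-cli login`, and the field names are taken from the v1.x schema, so treat them as assumptions:

```python
# Rough sketch: stream a handful of records from The Stack (bigcode/the-stack)
# to see which repos and licenses the code came from. The dataset is gated,
# so this assumes you've accepted its terms on Hugging Face and are logged in.
from datasets import load_dataset

ds = load_dataset(
    "bigcode/the-stack",
    data_dir="data/python",  # one language subset keeps the stream small
    split="train",
    streaming=True,          # avoids downloading the multi-TB dataset
)

for i, record in enumerate(ds):
    if i == 0:
        print(sorted(record.keys()))  # show whatever metadata fields exist
    # Field names assumed from the v1.x schema; adjust if they differ.
    print(record.get("max_stars_repo_name"), record.get("max_stars_repo_licenses"))
    if i >= 4:
        break
```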

[–] yucandu@lemmy.world 1 points 4 days ago* (last edited 4 days ago) (1 children)

StarCoderData: a large-scale code dataset derived from the permissively licensed GitHub collection The Stack v1.2 (Kocetkov et al., 2022), which applies deduplication and filtering of opted-out files. In addition to source code, the dataset includes supplementary resources such as GitHub Issues and Jupyter Notebooks (Li et al., 2023).

That's not random GitHub accounts or "delicensing" anything. People had to opt IN to be part of "The Stack". Apertus isn't training itself from community code.

[–] jaredwhite@humansare.social 1 points 4 days ago (1 children)

I'm tired of arguing with you about this, and you're still wrong. It was opt-out, not opt-in, based initially on a GitHub crawl of 137M repos and 52B files before filtering & dedup.

[–] yucandu@lemmy.world 1 points 3 days ago

But again, you'd have to set your project to public and your license to "anyone can take my code and do whatever they want with it" before it'd even be added to that list. That's opt-in, not opt-out. I don't see the ethical dilemma here. I'm pretty sure I've found ethical AI that produces good value for me and society, and I'm going to keep telling people about it and how to use it.

[–] mattvanlaw@lemmy.world 1 points 5 days ago (1 children)

This is very cool. Any advice a simple software engineer (me) could follow to practice the same?

[–] yucandu@lemmy.world 4 points 5 days ago* (last edited 5 days ago) (1 children)
  1. Install LM Studio

  2. Tell LM Studio to download the Apertus model: https://en.wikipedia.org/wiki/Apertus_(LLM)

  3. Bob's ur uncle.

Stick to 8B models if your video card has 8GB of VRAM.
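
Once a model is loaded, LM Studio can also expose an OpenAI-compatible local server (default port 1234), so you can call it from scripts. A minimal sketch, assuming the server is running; the model identifier below is a placeholder, so use whatever name LM Studio shows for the Apertus build you downloaded:

```python
# Minimal sketch: talk to the locally running model through LM Studio's
# OpenAI-compatible server (default http://localhost:1234/v1).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local endpoint
    api_key="lm-studio",                  # any non-empty string works locally
)

resp = client.chat.completions.create(
    model="apertus-8b-instruct",  # placeholder -- use the identifier LM Studio shows
    messages=[
        {"role": "user", "content": "Write a C function that parses an NMEA GGA sentence."},
    ],
)
print(resp.choices[0].message.content)
```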

[–] mattvanlaw@lemmy.world 2 points 4 days ago

Thanks! I've always wanted an uncle bob, too!

[–] amino@lemmy.blahaj.zone 0 points 5 days ago (1 children)

your local model wouldn't exist without sauron (openai)

[–] Jankatarch@lemmy.world 8 points 5 days ago* (last edited 5 days ago) (1 children)

People were making AI models and LLMs before openai/chatgpt tbf.

It's the "destroy the environment and the economy in an attempt to make something that sucks just enough to justify not paying people fairly, while advertising to rich assholes gambling their generational wealth" part that they invented.

[–] amino@lemmy.blahaj.zone 2 points 5 days ago

what are those LLMs you mention that people are still using? never heard of them, sounds like a cop out