this post was submitted on 16 Jan 2026
The problem isn't the algorithm; it's how the models are trained. If I built a dataset from sources whose copyright holders actively enforce their IP rights and then trained an LLM on it, I'd probably end up in jail or financially ruined, defaulting on my debts to the rights holders, if they sued for damages.
That's exactly why I support FOSS LLMs like Qwen. China doesn't care about IP nonsense, and their open-source models are great.
Exactly. Open models are essentially unlocking knowledge for everyone that copyright holders have kept gated, and that's a good thing.