All my new code will be closed-source from now on. I've contributed millions of lines of carefully written OSS code over the past decade and spent thousands of hours helping other people. If you want to use my libraries (1M+ downloads/month) in the future, you have to pay.
I made good money funneling people through my OSS and being recognized as an expert in several fields. This was entirely based on HUMANS knowing and seeing me by USING and INTERACTING with my code. No human will ever read my docs again when coding agents can digest them in seconds. Nobody will even know it's me who built it.
Look at Tailwind: 75 million downloads/month, more popular than ever, yet revenue down 80%, docs traffic down 40%, and 75% of the engineering team laid off. Someone submitted a PR to add LLM-optimized docs and Adam Wathan had to decline - optimizing for agents accelerates his business's death. He's being asked to build the infrastructure for his own obsolescence.
Two of the most common OSS business models:
- Open Core: Give away the library, sell premium once you reach critical mass (Tailwind UI, Prisma Accelerate, Supabase Cloud...)
- Expertise Moat: Be THE expert in your library - consulting gigs, speaking, higher salary
Tailwind just proved the first one is dying. Agents bypass the documentation funnel. They don't see your premium tier. Every project relying on docs-to-premium conversion will face the same pressure: Prisma, Drizzle, MikroORM, Strapi, and many more.
The core insight: OSS monetization was always about attention. Human eyeballs on your docs, brand, expertise. That attention has literally moved into attention layers. Your docs trained the models that now make visiting you unnecessary. Human attention paid. Artificial attention doesn't.
Some OSS will keep going - wealthy devs doing it for fun or education. That's not a system, that's charity. Most popular OSS runs on economic incentives. Destroy them, they stop playing.
Why go closed-source? When the monetization funnel is broken, you move payment to the only point that still exists: access. OSS gave away access hoping to monetize attention downstream. Agents broke downstream. Closed-source gates access directly. The final irony: OSS trained the models now killing it. We built our own replacement.
My prediction: a new marketplace emerges, built for agents. Want your agent to use Tailwind? Prisma? Pay per access. Libraries become APIs with meters. The old model: free code -> human attention -> monetization. The new model: pay at the gate or your agent doesn't get in.
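If that marketplace ever materializes, the gate itself is trivial to build. Here's a minimal sketch of the idea - every name in it (the route, the `X-Agent-Key` header, the in-memory meter) is hypothetical, invented purely for illustration; a real service would sit on a billing backend, not a dict:

```python
# Hypothetical sketch of a metered "agent gate" for a package registry.
# Route, header name, and meter are invented here for illustration only.
from flask import Flask, abort, request, send_file

app = Flask(__name__)

# Stand-in for a billing backend: API key -> remaining paid accesses.
METER = {"agent-key-123": 100}

@app.route("/pkg/<name>")
def fetch_package(name):
    key = request.headers.get("X-Agent-Key", "")
    if METER.get(key, 0) <= 0:
        abort(402)  # 402 Payment Required: no credit, no code
    METER[key] -= 1  # tick the meter on every download
    return send_file(f"./artifacts/{name}.tar.gz")

if __name__ == "__main__":
    app.run(port=8080)
```

Fitting, in a way: HTTP has had a 402 Payment Required status code sitting mostly unused since the 90s. This is the model where it finally earns its keep.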
you are viewing a single comment's thread
Technically, incorporating code into a model's weights does not trigger the GPL's redistribution clause, so they are legally in the right - even though, morally, you shouldn't scrape copylefted code into a model that can then be used to produce non-copylefted code.
So these weights don't count as "derived works" because they are not code, but can only be used to generate code (among many other things) in conjunction with an LLM architecture?
Well, once again, that's just my hot/IANAL take, but when those weights serve to store information in a way that can easily be extracted losslessly (check out "model extraction attacks"), we should stop treating them as "just weights".
I agree from a moral standpoint, but unfortunately this does not hold up legally. Even for licenses like RAIL that are specifically written so that AI outputs count as derivative works, I couldn't find a single case of one holding up in a US court. The best course of action might just be to add bot-filtering to whatever Git instance hosts your copylefted works until this issue has a legal solution. I'm also curious about the FSF's stance on AI output counting as a derived work, and whether they'd ever consider a GPLv4 or a new license that explicitly targets AI - I couldn't find anything online.
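For anyone wanting to try the bot-filtering route, here's a minimal sketch - a WSGI middleware you could wrap around a self-hosted Git web frontend. The user-agent list is illustrative only and would need constant upkeep, and it only stops crawlers that identify themselves honestly:

```python
# Sketch of the bot-filtering idea: WSGI middleware that refuses requests
# from self-identified AI crawlers before they reach a Git web frontend.
# The UA list is illustrative and needs upkeep; crawlers that lie about
# their User-Agent sail right through, so treat this as a speed bump.
BLOCKED_UA = ("GPTBot", "ClaudeBot", "CCBot", "Google-Extended", "Bytespider")

class BlockAICrawlers:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        ua = environ.get("HTTP_USER_AGENT", "")
        if any(bot in ua for bot in BLOCKED_UA):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"No AI crawlers.\n"]
        return self.app(environ, start_response)

# Usage: application = BlockAICrawlers(your_git_frontend_wsgi_app)
```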
It's not lossless.
Except for when it is. And even when it's not, there's a fine line between a lossy reproduction and outright plagiarism.
The search results I'm seeing for that term point to people extracting (a clone of) the model by interacting with the public API of an otherwise closed model. I'm not seeing anyone interacting with a model to extract its training input data.
Is there a better search term, or do you have a more direct reference to lossless extraction of training data from model weights?
https://arxiv.org/abs/2012.07805
https://openreview.net/forum?id=vjel3nWP2a
https://www.nightfall.ai/ai-security-101/training-data-extraction-attacks
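Thanks. To make concrete what those papers demonstrate: the basic probe is to prompt a model with the beginning of a document you suspect is in its training set, then check whether greedy decoding reproduces the rest verbatim. A toy sketch below - the model name and prefix are placeholders, not a claim about any particular model's training data:

```python
# Toy verbatim-memorization probe in the spirit of the papers above:
# prompt a model with the start of a file you suspect it was trained on
# and see if greedy decoding reproduces the continuation word-for-word.
# "gpt2" and the prefix are placeholders, not claims about any model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prefix = "#define GPL_HEADER  /* first lines of the suspect file */"
inputs = tok(prefix, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=64, do_sample=False)  # greedy
continuation = tok.decode(out[0][inputs["input_ids"].shape[1]:])

# A byte-for-byte match with the real file means the weights stored that
# text, however it happens to be encoded - which is the whole argument.
print(continuation)
```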