this post was submitted on 09 Mar 2026
Technology
Arguing that training models isn't fair use is going to be a massive uphill battle; it's basically reading the book, just with a computer. It's not actually a big deal to most people, unless you hold the copyright to a ton of works and want a percentage of all the AI income these companies have made.
Torrenting the books is almost certainly copyright infringement, but that has a relatively low payout compared to the money these companies are making from their models. If the training is fair use, rights holders can't take any cut of the model's revenue, and the statutory damages for infringement, even assessed per work, aren't significant compared to the legal cost of proving it happened.
Anthropic pirating books for its training corpus resulted in the biggest copyright settlement in history, well over a billion dollars. I believe that is still being quibbled over, but they settled because they were likely to pay out more if the case went forward. So I'm not sure where you're coming from in saying that infringement via torrenting doesn't result in monstrously large liability.
The judge in that case ruled the training wasn't fair use for the pirated books, which left them on the hook for potentially all the revenue the model generated for them (likely a court-determined percentage), on top of statutory damages. That's well north of 1.5 billion.
Which is kind of a pity. Anyone who’s ever written something on the net should be getting royalty checks from these fucks. I’m not exactly famous but I’ve written prolifically in my field of work and have gotten nearly word-for-word reproductions of my articles out of every big model I’ve tested since GPT-3.
There's an argument to be made that it is, in fact, not "reading". Training the model could be considered a lossy compression of the data, and streaming movies in a lossy compression format isn't fair use, is it?
The model doesn't stream out anyone's content though. The article mentions that the plaintiffs have provided no examples of a prompt that creates anything substantial.
Streaming a lossy compression would generally be infringement, but there is definitely a point where it becomes not infringement if it's lossy enough.
What a model generally stores is factual information, which isn't copyrightable in the first place: word counts, sentence lengths, sentiment scores, and so on.
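To make the point concrete, here's a minimal sketch of the kind of derived, factual statistics described above. This is an illustration, not how model training actually works (training stores learned weights, not these exact values), and the tiny sentiment lexicon is a made-up assumption; it just shows that such features are facts about a text rather than the text's expression.

```python
import re
from collections import Counter

# Toy sentiment lexicon -- an assumption for illustration only.
POSITIVE = {"excellent", "good", "great"}
NEGATIVE = {"bad", "poor", "terrible"}

def text_features(text: str) -> dict:
    """Extract non-expressive, factual statistics from a text."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    counts = Counter(words)
    # Crude sentiment: positive hits minus negative hits.
    sentiment = (sum(1 for w in words if w in POSITIVE)
                 - sum(1 for w in words if w in NEGATIVE))
    return {
        "word_count": len(words),
        "unique_words": len(counts),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "sentiment": sentiment,
    }

feats = text_features("The book was excellent. The sequel was terrible!")
print(feats)
# None of these numbers reproduce the text itself.
```

None of the returned values could be used to reconstruct the original sentences, which is the crux of the "facts, not expression" argument.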
It's not the storage of the information that matters as much as the presentation. Google's search index stores a huge amount of copyrighted material, even losslessly. But they only present small snippets at a time which is not considered copyright infringement. The question really is whether or not the information being presented by the models is in a format which is considered copyright infringement. So far, courts have not found that they are.