this post was submitted on 24 Jun 2025
609 points (98.9% liked)

[–] vane@lemmy.world 21 points 1 day ago* (last edited 1 day ago) (2 children)

Ok so you can buy books and scan them, or buy ebooks, and use them for AI training, but you can't just download pirated books from the internet to train AI. Did I understand that correctly?

[–] nednobbins@lemmy.zip 4 points 21 hours ago

That's my understanding too. If you obtained them legally, you can use them the same way anyone else who obtained them legally could use them.

[–] forkDestroyer@infosec.pub 6 points 1 day ago (3 children)

Make an AI that is trained on the books.

Tell it to tell you a story for one of the books.

Read the story without paying for it.

The law says this is ok now, right?

[–] nednobbins@lemmy.zip 8 points 21 hours ago

Sort of.

If you violated laws in obtaining the book (e.g. stole it or downloaded it without permission), it's illegal and you've already violated the law, no matter what you do after that.

If you obtain the book legally, you can do whatever you want with that book under the first sale doctrine. If you want to redistribute the book, you need the proper license. You don't need any license to create a derivative work, but that work has to be "sufficiently transformed" in order to pass.

[–] Enkimaru@lemmy.world 5 points 23 hours ago

The LLM is not repeating the same book. The owner of the LLM has exactly the same rights to do things with what his LLM is reading as you have with whatever YOU are reading.

As long as it is not a verbatim recitation, it is completely okay.

According to storytelling theory, there are only roughly 15 different story types anyway.

[–] LoreleiSankTheShip@lemmy.ml 7 points 1 day ago (1 children)

As long as they don't use exactly the same words as in the book, yeah, as I understand it.

[–] vane@lemmy.world 2 points 23 hours ago* (last edited 23 hours ago) (2 children)

How would they not use the same words as in the book? That's not how LLMs work. They use exactly the same words if the probabilities align. It's proved by this study. https://arxiv.org/abs/2505.12546
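
A toy sketch of what "the probabilities align" could mean here: if a model's next-token distribution strongly peaks on a memorized continuation, greedy decoding reproduces the training text word for word. The tiny lookup-table "model" and the example sentence below are made up for illustration; a real LLM is obviously far more complicated.

```python
# Fake "next-token model": a lookup table of probability distributions
# biased toward one memorized sentence. Under greedy decoding (always
# pick the argmax token), the memorized text comes back verbatim.

memorized = "it was a bright cold day in april".split()

def next_token_probs(context):
    """Toy model: puts most probability mass on the memorized next word."""
    for i in range(len(memorized) - 1):
        if context and context[-1] == memorized[i]:
            return {memorized[i + 1]: 0.9, "something": 0.05, "else": 0.05}
    return {"the": 0.5, "a": 0.5}  # fallback when nothing is memorized

def greedy_decode(prompt, steps):
    tokens = list(prompt)
    for _ in range(steps):
        probs = next_token_probs(tokens)
        tokens.append(max(probs, key=probs.get))  # argmax = greedy choice
    return tokens

print(" ".join(greedy_decode(["it", "was"], steps=5)))
# prints "it was a bright cold day in" - the memorized text, verbatim
```

Sampling with temperature instead of taking the argmax would break the verbatim chain sometimes, but the sharper the peaks, the more often the exact training words come out anyway.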

[–] nednobbins@lemmy.zip 5 points 21 hours ago (1 children)

I'd say there are two issues with it.

First, it's a very new article with only 3 citations. The authors seem like serious researchers, but the paper itself is still in the "hot off the presses" stage and wouldn't qualify as "proven" yet.

It also doesn't exactly say that books are copies. It says that in some models, it's possible to extract some portions of some texts. They cite "1984" and "Harry Potter" as two books that can be extracted almost entirely, under some circumstances. They also find that, in general, extraction rates are below 1%.

[–] vane@lemmy.world 1 points 19 hours ago* (last edited 19 hours ago) (1 children)

Yeah, but it's just a start toward reversing the process and proving that there is no AI. We've only started with generating text; I bet people will figure out how to reverse the process using some sort of Rosetta Stone. It's just probabilities after all.

[–] nednobbins@lemmy.zip 2 points 19 hours ago (1 children)

That's possible but it's not what the authors found.

They spend a fair amount of the conclusion emphasizing how exploratory and ambiguous their findings are. The researchers themselves are very careful to point out that this is not a smoking gun.

[–] vane@lemmy.world 2 points 19 hours ago

Yeah, the authors rely on the recent DeepMind paper https://aclanthology.org/2025.naacl-long.469.pdf (they even cite it) that describes (n, p)-discoverable extraction. These are recent studies because right now there are no boundaries; basically people made something and now they're studying their own creation. We're probably years away from something like GDPR for LLMs.
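
As I understand the idea, "(n, p)-discoverable extraction" roughly means: prompt the model with the true prefix, sample n continuations, and call the suffix extractable if the chance of it showing up at least once is at least p. A minimal sketch of that check, with the per-sample match probability `q` treated as already measured (the numbers are made up):

```python
def prob_extracted_at_least_once(q, n):
    """If one sample matches the true suffix with probability q, then
    the chance at least one of n independent samples matches is
    1 - (1 - q)^n."""
    return 1 - (1 - q) ** n

def np_discoverable(q, n, p):
    """Suffix counts as (n, p)-discoverably extractable if n samples
    give at least probability p of producing it at least once."""
    return prob_extracted_at_least_once(q, n) >= p

# A suffix that matches 30% of the time per sample:
print(np_discoverable(0.3, n=1, p=0.9))   # False: one try isn't enough
print(np_discoverable(0.3, n=10, p=0.9))  # True: 1 - 0.7**10 is about 0.97
```

The point the definition captures is that even a low per-sample probability becomes near-certain extraction if you're allowed enough tries, which is why extraction rates alone don't tell the whole story.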

[–] SufferingSteve@feddit.nu 6 points 22 hours ago

The "if" is working overtime in your statement