this post was submitted on 10 Jul 2023
The training process of LLMs is to copy the source material word for word. In effect, it's instructed to plagiarize during training. The copyrighted material is possibly, in one way or another, embedded into the model itself.
In machine learning, there’s always this concern whether the model is actually learning patterns, or if it’s just memorizing the training data. Same applies to LLMs.
Can LLMs recite entire pieces of work? Who knows?
Does it count as copyright infringement if it does so? Possibly.
No it isn't. That's not how neural networks work, like at all
It's learning patterns. It's not memorising training data. Again, not how the system works at all
No. No they can't.
That'd be one for the lawyers were it to ever come up, but it won't
Here’s a basic description of how (a part of) LLMs work: https://huggingface.co/learn/nlp-course/chapter1/6
LLMs generate text word by word (or token by token, if you're pedantic). This is why ChatGPT slowly produces its response word by word instead of giving you the entire response at once.
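To make the word-by-word idea concrete, here's a toy sketch of an autoregressive generation loop. The hard-coded lookup table standing in for the model is entirely made up for illustration; a real LLM replaces that lookup with a neural network scoring every token in its vocabulary.

```python
# Toy sketch of token-by-token generation (NOT a real LLM).
# The "model" is a hypothetical hard-coded next-token table; a real LLM
# replaces this lookup with a neural network forward pass.

def next_token(context):
    # Stand-in for the network's prediction step.
    table = {
        ("the",): "quick",
        ("the", "quick"): "brown",
        ("the", "quick", "brown"): "fox",
    }
    return table.get(tuple(context), "<eos>")

def generate(prompt, max_tokens=10):
    tokens = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(tokens)   # one prediction per generated token
        if tok == "<eos>":
            break
        tokens.append(tok)         # the new token joins the context
    return tokens

print(generate(["the"]))  # -> ['the', 'quick', 'brown', 'fox']
```

The loop is the important part: each new token is appended to the context and fed back in, which is exactly why the output appears one word at a time.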
The same applies during the training phase. The model gets a piece of text and the word it's supposed to predict next. Then it's tuned to improve its chances of predicting the right word given the text it's shown.
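Here's a minimal sketch of that training signal: the model is scored on the probability it assigns to the actual next word (cross-entropy loss). The candidate words and scores are invented for illustration.

```python
import math

# Cross-entropy for one prediction: softmax over candidate next words,
# then the negative log-probability of the true next word.
def cross_entropy(logits, target):
    z = max(logits.values())                              # for stability
    total = sum(math.exp(v - z) for v in logits.values())
    p_target = math.exp(logits[target] - z) / total
    return -math.log(p_target)

# Hypothetical scores for the word after "the quick brown ...":
before = {"fox": 1.0, "dog": 1.0, "cat": 1.0}  # untrained: no preference
after  = {"fox": 4.0, "dog": 1.0, "cat": 1.0}  # after tuning on this example

# Tuning raised the score of the correct word, so the loss went down.
print(cross_entropy(before, "fox") > cross_entropy(after, "fox"))  # True
```

Training is just this, repeated over enormous amounts of text: nudge the scores so the right next word becomes more likely.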
Ideally it’s supposed to make predictions by learning the patterns of the language. This is not always the case. Sometimes it can just memorize the answer instead of learning why (just like how a child can memorize the multiplication table without understanding multiplication). This is formally known as overfitting, which is a machine learning 101 concept.
There are ways to mitigate overfitting, but there's no silver-bullet solution. Sometimes the model can't help but memorize the training data.
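The multiplication-table analogy above can be sketched in a few lines. This is a caricature, not real ML: the "memorizer" is a hypothetical model with enough capacity to store every training example verbatim, so it scores perfectly on training data while having learned no pattern it can apply to new inputs.

```python
# Caricature of overfitting vs. learning (illustrative only).
train = {(2, 3): 6, (4, 5): 20, (7, 8): 56}  # multiplication examples

def memorizer(x):
    # Perfect recall of the training set, nothing else.
    return train.get(x)

def pattern_learner(x):
    # Actually learned the underlying rule.
    return x[0] * x[1]

print(all(memorizer(x) == y for x, y in train.items()))  # True: flawless on training data
print(memorizer((6, 9)))        # None: no clue on an unseen input
print(pattern_learner((6, 9)))  # 54: generalizes to new inputs
```

Both look identical if you only evaluate on the training data, which is why memorization in real models is hard to rule out.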
When GitHub Copilot was new, people quickly figured out it could generate the fast inverse square root implementation from Quake. Word for word. Including the “what the fuck” comment. It had memorized it completely.
I’m not sure how much OpenAI has done to mitigate this issue. But it’s a thing that can happen. It’s not imaginary.
No, that is simply a complete misunderstanding of... well, the entire concept