submitted 10 months ago* (last edited 10 months ago) by GlitzyArmrest@lemmy.world to c/technology@lemmy.world

OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.

In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated prompts to include regurgitated excerpts of articles. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.

OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.

[-] SheeEttin@programming.dev 95 points 10 months ago

The problem is not that it's regurgitating. The problem is that it was trained on NYT articles and other data in violation of copyright law. Regurgitation is just evidence of that.

[-] blargerer@kbin.social 64 points 10 months ago

It's not clear that training on copyrighted material is in breach of copyright. It is clear that regurgitating copyrighted material is in breach of copyright.

[-] abhibeckert@lemmy.world 16 points 10 months ago* (last edited 10 months ago)

Sure but who is at fault?

If I manually type an entire New York Times article into this comment box, and Lemmy distributes it all over the internet... that's clearly a breach of copyright. But are the developers of the open-source Lemmy software liable for that breach? Of course not. I would be liable.

Obviously Lemmy should (and does) take reasonable steps (such as defederation) to help manage illegal use... but that's the extent of their liability.

All NYT needed to do was show OpenAI how they got the AI to output that content, and I'd expect OpenAI to proactively find a solution. I don't think the courts will look kindly on NYT's refusal to collaborate and find some way to resolve this without a lawsuit. A friend of mine once tried to settle a case, but the other side refused and it went all the way to court. The court found that my friend had been in the wrong (as he freely admitted all along), but also made the other side pay my friend compensation for legal costs (including time spent gathering evidence). In the end, my friend got the outcome he was hoping for, and the guy who "won" the lawsuit lost close to a million dollars.

[-] CleoTheWizard@lemmy.world 5 points 10 months ago

They might look down upon that but I doubt they’ll rule against NYT entirely. The AI isn’t a separate agent from OpenAI either. If the AI infringes on copyright, then so does OpenAI.

Copyright applies to reproduction of a work so if they build any machine that is capable of doing that (they did) then they are liable for it.

Seems like the solution here is to train data to not output copyrighted works and to maybe train a sub-system to detect it and stop the main chatbot from responding with it.
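The detection sub-system idea could, at its simplest, be an n-gram overlap filter run over candidate output before it is returned to the user. This is a hypothetical sketch: the function names, the 8-word n-gram size, and the 50% threshold are all invented for illustration, and nothing suggests OpenAI actually works this way.

```python
# Hypothetical sketch of an output filter that flags near-verbatim
# reproduction by checking word n-gram overlap against an index of
# protected text. A production system would need far more robust
# matching (normalization, fuzzy matching, scalable indexing).

def ngrams(text: str, n: int = 8):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def build_index(protected_docs):
    """Collect the n-grams of every protected document into one set."""
    index = set()
    for doc in protected_docs:
        index |= ngrams(doc)
    return index

def looks_regurgitated(candidate: str, index, threshold: float = 0.5) -> bool:
    """Flag output whose n-gram overlap with protected text is too high."""
    grams = ngrams(candidate)
    if not grams:
        return False
    overlap = len(grams & index) / len(grams)
    return overlap >= threshold
```

A chatbot pipeline could call `looks_regurgitated` on each draft response and refuse or regenerate when it fires; the trade-off is that the index must cover the protected corpus, which is exactly the material at issue.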

[-] lolcatnip@reddthat.com 2 points 10 months ago* (last edited 10 months ago)

Copyright applies to reproduction of a work so if they build any machine that is capable of doing that (they did) then they are liable for it.

That is for sure not the case. The modern world is bursting with machines capable of reproducing copyrighted works, and their manufacturers are not liable for copyright violations carried out by users of those machines. You're using at least one of those machines to read this comment. This was settled around the time the VCR was invented.

[-] CleoTheWizard@lemmy.world 1 points 10 months ago

Sorry, I meant the unlicensed reproduction of those works via machine. Missed a word, but it's important. Most machines do not reproduce works in unlicensed ways, especially not by themselves. Then there are users: yes, if a user utilizes a machine to reproduce a work, it's on the user. But the machine doesn't usually produce the copyrighted work by itself, because that production is illegal. A VCR is fine to make because the VCR itself doesn't violate copyright; the user does, via its inputs. If the NYT fed in its own material and got it back, that's obviously fine. If it didn't, that's illegal reproduction.

So here I expect the court will say that OpenAI has no right to reproduce the work in full or in amounts not covered by fair use and must take measures to prevent the reproduction of irrelevant portions of articles. However, they’ll likely be able to train their AI off of publicly available data so long as they don’t violate anyone’s TOS.

[-] mryessir@lemmy.sdf.org 1 points 10 months ago

I am not familiar with any judicial system. It sounds to me like OpenAI wants to get the evidence the NYT collected beforehand.

[-] 000@fuck.markets 22 points 10 months ago* (last edited 10 months ago)

There hasn't been a court ruling in the US that makes training a model on copyrighted data any sort of violation. Regurgitating exact content is a clear copyright violation, but simply using the original content/media in a model has not been ruled a breach of copyright (yet).

[-] SheeEttin@programming.dev -3 points 10 months ago

True. I fully expect that the court will rule against OpenAI here, because it very obviously does not meet any fair use exemption.

[-] 520@kbin.social 3 points 10 months ago* (last edited 10 months ago)

For that to work, NYT has to prove OpenAI is copying their words verbatim, not just their style.

If the AI isn't outputting a string of words that can be found in an NYT article, they don't stand a chance.

[-] kromem@lemmy.world 1 points 10 months ago

Tell me you haven't actually read legal opinions on the subject without telling me...

[-] SheeEttin@programming.dev 0 points 10 months ago

I'm not aware of any federal case law on copyright and AI. Happy to read some if you have a suggestion.

[-] V1K1N6@lemmy.world 18 points 10 months ago

I've seen and heard your argument made before, not just for LLMs but also for text-to-image programs. My counterpoint is that humans learn in a very similar way to these programs, by taking stuff we've seen/read and developing a certain style inspired by those things. They also don't just recite texts from memory, instead creating new ones based on probabilities of certain words and phrases occurring in the parts of their training data related to the prompt. In a way too simplified but accurate enough comparison, saying these programs violate copyright law is like saying every cosmic horror writer is plagiarising Lovecraft, or that every surrealist painter is copying Dali.
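The word-probability picture above can be made concrete with a toy sketch. Everything here is invented for illustration (the table, the probabilities, the names); real LLMs compute distributions over huge vocabularies with a neural network conditioned on the whole preceding context, not a lookup table.

```python
import random

# Toy next-word sampler: each word maps to a probability distribution
# over possible following words, and generation repeatedly samples
# from that distribution rather than reciting stored text.
NEXT_WORD_PROBS = {
    "cosmic": {"horror": 0.9, "dust": 0.1},
    "horror": {"writer": 0.6, "story": 0.4},
}

def sample_next(word: str, rng: random.Random) -> str:
    """Sample a plausible next word, or end if the word is unknown."""
    dist = NEXT_WORD_PROBS.get(word)
    if not dist:
        return "<end>"
    words, weights = zip(*dist.items())
    return rng.choices(words, weights=weights, k=1)[0]
```

Because each step samples from a distribution, output is stochastic, which is part of why verbatim reproduction tends to be the exception, though memorization of text repeated many times in training data can still occur.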

[-] Catoblepas@lemmy.blahaj.zone 44 points 10 months ago

Machines aren’t people and it’s fine and reasonable to have different standards for each.

[-] lolcatnip@reddthat.com -1 points 10 months ago

But is it reasonable to have different standards for someone creating a picture with a paintbrush as opposed to someone creating the same picture with a machine learning model?

[-] Catoblepas@lemmy.blahaj.zone 3 points 10 months ago

Yes, given that one is creating art and the other is typing words into the plagiarism machine.

[-] lolcatnip@reddthat.com -1 points 10 months ago

plagiarism machine

This is called assuming the consequent. Either you're not trying to make a persuasive argument or you're doing it very, very badly.

[-] LWD@lemm.ee 16 points 10 months ago

LLMs cannot learn or create like humans, and even if they somehow could, they are not humans. So the comparison to human creators expounding upon a genre is false because the premises on which it is based are false.

Perhaps you could compare it to a student getting blackout drunk, copying Wikipedia articles and pasting them together, using a thesaurus app to change a few words here and there... And in the end, the student doesn't know what they created, has no recollection of the sources they used, and the teacher can't detect whether it's plagiarized or who from.

OpenAI made a mistake by taking data without consent, not just from big companies but from individuals who are too small to fight back. Regurgitating information without attribution is gross in every regard, because even if you don't believe in asking for consent before taking from someone else, you should probably ask for a source before using this regurgitated information.

[-] ricecake@sh.itjust.works 19 points 10 months ago

Well, machine learning algorithms do learn; it's not just copy-paste and a thesaurus. It's not exactly the same as people, but arguing that it's entirely different is also wrong.
It isn't a big database full of copyrighted text.

The argument is that it's not wrong to look at data that was made publicly available when you're not making a copy of the data.
It's not copyright infringement to navigate to a webpage in your browser, even though that makes your computer download it, process all of the contents of the page, render the content to the screen and hold onto that download for a finite but indefinite period of time, while you perform whatever operations you like on the downloaded data.
You can even take notes on the data and keep those indefinitely, including using that derivative information to create your own similar works.
The NYT explicitly publishes articles in a format designed to be downloaded, processed and have information extracted from that download by a computer program, and then to have that processed information presented to a human. They just didn't expect that the processing would end up looking like this.

The argument doesn't require that we accept that a human and a computers system for learning be held to the same standard, or that we can't differentiate between the two, it hinges on the claim that this is just an extension of what we already find it reasonable for a computer to do.
We could certainly hold that generative AI is a different and new category for copyright law, but that's very different from saying that their actions are unacceptable under current law.

[-] LWD@lemm.ee 1 points 10 months ago

Their actions are unacceptable, whether it fits under the technicality of legality or not. Just like when the BBC intentionally plagiarized the work of Brian Deer, except at least in his case they had the foresight to try asking first, and not just to assume he consented because of the way the data looked.

The NYT explicitly publishes articles in a format designed to be downloaded, processed and have information extracted from that download by a computer program, and then to have that processed information presented to a human.

Speaking of overutilizing a thesaurus, you buried the lede: The text is designed for a human to read.

I don't like the "just look at it, it was asking for it" defense because that abuses publishers who try to present things in a DRM free fashion for their readers:

"Our authors and readers have been asking for this for a long time," president and publisher Tom Doherty explained at the time. "They're a technically sophisticated bunch, and DRM is a constant annoyance to them. It prevents them from using legitimately-purchased e-books in perfectly legal ways, like moving them from one kind of e-reader to another."

But DRM-free e-books that circulate online are easy for scrapers to ingest.

The SFWA submission suggests "Authors who have made their work available in forms free of restrictive technology such as DRM for the benefit of their readers may have especially been taken advantage of."

[-] ricecake@sh.itjust.works 1 points 10 months ago

Have you deleted and reposted this comment three times now, or is something deeply wrong with your client?

[-] General_Effort@lemmy.world 11 points 10 months ago

It doesn't work that way. Copyright law does not concern itself with learning. Two things make learning permissible.

For one, no one can own facts and ideas. You can write your own history book, taking facts (but not copying text) from other history books. Eventually, that's the only way history books get written (by taking facts from previous writings). Or you can take the idea of a superhero and make your own, which is obviously where virtually all of them come from.

Second, you are generally allowed to make copies for your personal use. For example, you may copy audio files so that you have a copy on each of your devices. Or to tie in with the previous examples: You can (usually) make copies for use as reference, for historical facts or as a help in drawing your own superhero.

In the main, these lawsuits won't go anywhere. I don't want to guarantee that none of the relative side issues will be found to have merit, but basically this is all nonsense.

[-] SheeEttin@programming.dev -2 points 10 months ago

Generally you're correct, but copyright law does concern itself with learning. Fair use exemptions require consideration of the purpose and character of the use, explicitly mentioning nonprofit educational purposes. They also require weighing the effect on the potential market for the original work. (There are other factors, but they're less relevant here.)

So yeah, tracing a comic book to learn drawing is totally fine, as long as that's what you're doing it for. Tracing a comic to reproduce and sell is totally not fine, and that's basically what OpenAI is doing here: slurping up whole works to improve their saleable product, which can generate new works to compete with the originals.

[-] ricecake@sh.itjust.works 3 points 10 months ago

What about the case where you're tracing a comic to learn how to draw with the intent of using the new skills to compete with who you learned from?

Point of the question being, they're not processing the images to make exact duplicates like tracing would.
It's significantly closer to copying a style, which you can't own.

[-] Eccitaze@yiffit.net 1 points 10 months ago

Still a copyright violation, especially if you make it publicly available and claim the work as your own for commercial purposes. At the very minimum, tracing without fully attributing the original work is considered to be in poor enough taste that most art sites will permaban you for doing it, no questions asked.

[-] ricecake@sh.itjust.works 1 points 10 months ago

In the analogy being developed though, they're not making it available.
The initial argument was that tracing something to practice and learn was fine.

Which is why I said, what if you trace to practice, and then draw something independent to try to compete?

To remove the analogy: most generative AI systems don't actually directly reproduce works unless you jump through some very specific and questionable hoops. (If and when they do, that's a problem and needs to not happen).

A lot of the copyright arguments boil down to "it's wrong for you to look at this picture for the wrong reasons", or to wanting to build a protectionist system for creators.

It's totally legit to want to build a protectionist system, but it feels disingenuous to argue that our current system restricts how freely distributed content is used beyond restrictions on making copies or redistribution.

[-] General_Effort@lemmy.world 1 points 10 months ago

I meant "learning" in the strict sense, not institutional education.

I think you are simply mistaken about what AI is typically doing. You can test your "tracing" analogy by making an image with Stable Diffusion. It's trained only on images from the public internet, so if the generated image is similar to one in the training data, then a reverse image search should turn it up.

[-] LodeMike@lemmy.today 1 points 10 months ago

It doesn’t matter how it “”learns””

[-] CrayonRosary@lemmy.world 11 points 10 months ago

violation of copyright law

That's quite the claim to make so boldly. How about you prove it? Or maybe stop asserting things you aren't certain about.

[-] FaceDeer@kbin.social 6 points 10 months ago

But you don't understand, he wants it to be true!

[-] SheeEttin@programming.dev -3 points 10 months ago

17 USC § 106, exclusive rights in copyrighted works:

Subject to sections 107 through 122, the owner of copyright under this title has the exclusive rights to do and to authorize any of the following:

(1) to reproduce the copyrighted work in copies or phonorecords;

(2) to prepare derivative works based upon the copyrighted work;

(3) to distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending;

(4) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and motion pictures and other audiovisual works, to perform the copyrighted work publicly;

(5) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and pictorial, graphic, or sculptural works, including the individual images of a motion picture or other audiovisual work, to display the copyrighted work publicly; and

(6) in the case of sound recordings, to perform the copyrighted work publicly by means of a digital audio transmission.

Clearly, this is capable of reproducing a work, and is derivative of the work. I would argue that it's displayed publicly as well, if you can use it without an account.

You could argue fair use, but I doubt this use would meet any of the four test factors, let alone all of them.

[-] regbin_@lemmy.world 7 points 10 months ago

Training on copyrighted data should be allowed as long as it's something publicly posted.

[-] assassin_aragorn@lemmy.world 6 points 10 months ago

Only if the end result of that training is also something public. OpenAI shouldn't be making money on anything except ads if they're using copyright material without paying for it.

[-] themusicman@lemmy.world 2 points 10 months ago

I was trained on copyrighted material... I guess I should work for free

[-] ricecake@sh.itjust.works 0 points 10 months ago

Why an exception for ads if you're going that route? Wouldn't advertisers deserve the same protections as other creatives?

Personally, since they're not making copies of the input (beyond what's transiently required for processing), and they're not distributing copies, I'm not sure why copyright would come into play.

[-] Bogasse@lemmy.ml 1 points 10 months ago

And I suppose people at OpenAI understand how to build a formal proof and that it is one. So it's straight up dishonest.

[-] tinwhiskers@lemmy.world 0 points 10 months ago

Only publishing it is a copyright issue. You can also obtain copyrighted material with a web browser. The onus is on the person who publishes any material they put together, regardless of source. OpenAI is not responsible for publishing just because their tool was used to obtain the material.

[-] SheeEttin@programming.dev 0 points 10 months ago

There are issues other than publishing, but that's the biggest one. But they are not acting merely as a conduit for the work, they are ingesting it and deriving new work from it. The use of the copyrighted work is integral to their product, which makes it a big deal.

[-] tinwhiskers@lemmy.world 1 points 10 months ago

Yeah, the ingestion part is still to be determined legally, but I think OpenAI will be ok. NYT produces content to be read, and copyright only protects them from people republishing their content. People also ingest their content and can make derivative works without problem. OpenAI are just doing the same, but at a level of ability that could be disruptive to some companies. This isn't even really very harmful to the NYT, since the historical material used doesn't even conflict with their primary purpose of producing new news. It'll be interesting to see how it plays out though.

[-] SheeEttin@programming.dev 1 points 10 months ago

copyright only protects them from people republishing their content

This is not correct. Copyright protects reproduction, derivation, distribution, performance, and display of a work.

People also ingest their content and can make derivative works without problem. OpenAI are just doing the same, but at a level of ability that could be disruptive to some companies.

Yes, you can legally make derivative works, but absent a license, the use has to qualify as fair use. In this case, they didn't just use one whole work in its entirety; they likely scraped thousands of whole NYT articles.

This isn’t even really very harmful to the NYT, since the historical material used doesn’t even conflict with their primary purpose of producing new news.

This isn't necessarily correct either. I assume they sell access to their archives, for research or whatever. Being able to retrieve articles verbatim through ChatGPT does harm their business.

[-] ApexHunter@lemmy.ml 2 points 10 months ago

Yes, you can legally make derivative works, but without license, it has to be fair use. In this case, where not only did they use one whole work in its entirety, they likely scraped thousands of whole NYT articles.

Scraping is the same as reading, not reproducing. That isn't a copyright violation.

this post was submitted on 08 Jan 2024
407 points (96.1% liked)
