421
submitted 1 year ago by dl007@lemmy.ml to c/technology@lemmy.ml
[-] givesomefucks@lemmy.world 103 points 1 year ago* (last edited 1 year ago)

In evidence for the suit against OpenAI, the plaintiffs claim ChatGPT violates copyright law by producing a “derivative” version of copyrighted work when prompted to summarize the source.

Both filings make a broader case against AI, claiming that by definition, the models are a risk to the Copyright Act because they are trained on huge datasets that contain potentially copyrighted information

They've got a point.

If you ask AI to summarize something, it needs to know what it's summarizing. Reading other summaries might be legal, but then why not just read those summaries first?

If the AI "reads" the work first, then it would have needed to pay for it. And how do you deal with that? Is a chatbot treated like one user? Or does it need to pay for a copy for each human that asks for a summary?

I think if they'd paid for a single ebook library subscription they'd be fine. However, the article says they used pirate libraries so it could read anything on the fly.

Pointing an AI at pirated media is going to be hard to defend in court. And a class action full of authors and celebrities isn't going to be a cakewalk. They've got a lot of money to fight with, and plenty of contacts who know copyright law. I'm sure all the publishers are pissed too.

Everyone is going after AI money these days, this seems like the rare case where it's justified

[-] Rivalarrival@lemmy.today 37 points 1 year ago

If the AI "reads" the work first, then it would have needed to pay for it

That's not actually true. Copyright applies to distribution, not consumption. You violate no law when I create an unauthorized copy of a work, and you read that copy. Copyright law prohibits you from distributing further copies, but it does not prohibit you from possessing the copy I provided you, nor are you prohibited from speaking about the copy you have acquired.

Unless the AI is regurgitating substantial parts of the original work, its output is a "transformative derivation", which is not subject to the protections of the original copyright. The AI is doing what English teachers ask of every school-age child: writing a book report.

[-] TWeaK@lemm.ee 11 points 1 year ago* (last edited 1 year ago)

Copyright applies to distribution, not consumption. You violate no law when I create an unauthorized copy of a work

This is completely untrue. Making any unauthorised copy is an infringement of copyright. Hell, the UK determined that merely loading a pirated game into RAM was unauthorised copying, making the act of playing a pirated game unlawful - thankfully this ruling only applies in the UK, but the basic principles of copyright are the same all over the world.

When you buy something, you get a limited license to make copies for the purpose of viewing the material. That license does not extend to making backup copies. However, in a practical sense, it is very unlikely you will be prosecuted for most kinds of infringement like this - particularly when no money is involved. It's still infringement, though.

Edit: I will say though: you violate no law when you view a copy I create. However I would still be infringing for making and showing you the copy.

In the case of making a book report, that is educational, and thus fair use. ChatGPT is not educational - you might use it for education, but ChatGPT's use of copyrighted material is for commercial enterprise.

[-] Rivalarrival@lemmy.today 9 points 1 year ago

The uploader is the person creating the copy. Downloading is not creating a copy; downloading is receiving a copy.

I would love to see a citation on that UK precedent, but as you said: "thankfully this is only the case in the UK" and does not apply in the rest of the world.

Making any unauthorised copy is an infringement of copyright.

The exceptions to that are so numerous that the statement is closer to false than truth. "Fair Use" blows the absolute nature of that statement out of the water.

There has never been a successful prosecution for downloading only.

[-] vrighter@discuss.tchncs.de 7 points 1 year ago

Every single transfer of data is a copy. There is no such thing as moving data, only copying it and then voluntarily deleting the original to fake it having "moved".
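A quick Python illustration of that, with made-up file names: even the standard library's shutil.move falls back to exactly this copy-then-delete dance when a plain rename isn't possible (e.g. across filesystems).

```python
import os
import shutil
import tempfile

# Two separate directories standing in for two filesystems (names made up).
src_dir = tempfile.mkdtemp()
dst_dir = tempfile.mkdtemp()

src = os.path.join(src_dir, "song.mp3")
with open(src, "wb") as f:
    f.write(b"fake audio bytes")

# "Moving" the file is really two steps:
dst = os.path.join(dst_dir, "song.mp3")
shutil.copy2(src, dst)  # 1. create a second copy of the data
os.remove(src)          # 2. delete the original to fake a "move"

print(os.path.exists(src), os.path.exists(dst))  # → False True
```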

[-] lobelia581@lemmy.dbzer0.com 8 points 1 year ago

There was still copyright infringement because the company probably downloaded the text (which created another copy) and modified it (alteration is also protected by copyright) before using it as training data. If you write an original novel and admit that you had pirated a bunch of novels to use for reference, those novels were still downloaded illegally even if you've deleted them by now. The AI isn't copyright infringement itself, it's proof that copyright infringement has happened.

But personally I don't think the actual laws will matter so much as which side has the better case for why they will lead to more innovation and growth for the economy.

[-] limeaide@lemmy.ml 16 points 1 year ago

Can the sources where ChatGPT got its information from be traced? What if it got the information from other summaries?

I think the hardest thing for these companies will be validating the information their AI is using. I can see an encyclopedia-like industry popping up over the next couple years.

Btw I know very little about this topic but I find it fascinating

[-] rainroar@lemmy.ml 5 points 1 year ago

Yes! They publish the data sources and where they got everything from. Diffusion models (Stable Diffusion, Midjourney, etc.) and GPT both use tons of data that was taken in ways that likely violate that data's usage agreement.

Imo they deserve whatever lawsuits they have coming.

[-] fazey_o0o@lemmy.world 16 points 1 year ago

"It was like this when I got it"

[-] beejjorgensen@lemmy.sdf.org 15 points 1 year ago

It depends on if the summary is an infringing derivative work, doesn't it? Wikipedia is full of summaries, for example, and it's not violating copyright.

If they illegally downloaded the works, that feels like a standalone issue to me, not having anything to do with AI.

[-] TWeaK@lemm.ee 5 points 1 year ago

Wikipedia is a non profit whose primary purpose is education. ChatGPT is a business venture.

[-] Rivalarrival@lemmy.today 5 points 1 year ago

A book review published in a newspaper is a commercial venture for the purpose of selling ads. The commercial aspect doesn't make the review an infringement.

A summary is a "Transformative Derivation". It is a related work, created for a fundamentally different purpose. It is a discussion about the work, not a copy of the work. Transformative derivations are not infringements, even where they are specifically intended to be used for commercial purposes.

[-] dartos@reddthat.com 40 points 1 year ago

I’ve noticed that the lemmy crowd seems more accepting of AI stuff than the Reddit crowd was

[-] aniki@lemm.ee 74 points 1 year ago

I mean for tech stuff it's fantastic. I could spend 30 minutes working out a regex to grep the logs in the format I need or I could have a back and forth with ChatGPT and get it sorted in 5.
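The kind of thing I mean: say, pulling out which paths threw server errors from an access log. The log lines here are made up for illustration:

```python
import re

# Hypothetical nginx-style access-log lines, just for illustration.
logs = [
    '203.0.113.9 - - [10/Jul/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 512',
    '198.51.100.4 - - [10/Jul/2023:13:55:40 +0000] "POST /api/login HTTP/1.1" 500 87',
]

# Capture the method, path, and status code out of each request line.
pattern = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

# Keep only the paths that returned a 5xx status.
errors = [m.group("path") for line in logs
          if (m := pattern.search(line)) and m.group("status").startswith("5")]
print(errors)  # → ['/api/login']
```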

I still don't want it to write my TV or movies. Or code to a significant degree.

[-] ColonelSanders@lemmy.world 15 points 1 year ago

On the flip side, anytime I've tried to use it to write python scripts for me, it always seems to get them slightly wrong. Nothing that a little troubleshooting can't handle, and certainly helps to get me in the ballpark of what I'm looking for, but I think it still has a little ways to go for specific coding use cases.

[-] throwsbooks@lemmy.world 8 points 1 year ago

It's probably related to the fact that it seems a lot of Lemmy users are in tech, rather than art.

I think generative AI is a great tool, but a lot of people who don't understand how it works either overestimate (it can do everything and it's so smart!!) or underestimate it (all it does is steal my work!!)

[-] MaxPower@feddit.de 37 points 1 year ago* (last edited 1 year ago)

I like her and I get why creatives are panicking because of all the AI hype.

However:

In evidence for the suit against OpenAI, the plaintiffs claim ChatGPT violates copyright law by producing a “derivative” version of copyrighted work when prompted to summarize the source.

A summary is not a copyright infringement. If anything has a case for fair use, it's a summary.

The comic's suit questions if AI models can function without training themselves on protected works.

A language model does not need to be trained on the text it is supposed to summarize. She clearly does not know what she is talking about.

IANAL though.

[-] jmcs@discuss.tchncs.de 25 points 1 year ago

I guess they will get to analyze OpenAI's dataset during discovery. I bet OpenAI didn't have authorization to use even 1% of the content they used.

[-] maynarkh@feddit.nl 15 points 1 year ago

That's why they don't feel they can operate in the EU, as the EU will mandate that AI companies publish the datasets they trained their models on.

[-] Jaded@lemmy.dbzer0.com 7 points 1 year ago

Things might change, but right now you simply don't need anyone's authorization.

Hopefully it doesn't change, because only a handful of companies have the data or the funds to buy the data; it would kill any kind of open-source or low-budget endeavour.

[-] Asafum@lemmy.world 27 points 1 year ago

I feel like when confronted about a "stolen comedy bit" a lot of these people complaining would also argue that "no work is entirely unique, everyone borrows from what already existed before." But now they're all coming out of the woodwork for a payday or something... It's kinda frustrating especially if they kill any private use too...

[-] TheyHaveNoName@lemmy.fmhy.ml 24 points 1 year ago

I’m a teacher, and the last half of this school year was a comedy of my colleagues trying to “ban” ChatGPT. I’m not so much worried about students using ChatGPT to do work. A simple two-minute conversation with a student who creates an excellent (but suspected) piece of writing will tell you whether they wrote it themselves or not. What worries me is exactly those moments where you’re asking for a summary or a synopsis of something. You really have no idea what data is being used to create that summary.

[-] BedbugCutlefish@lemmy.world 12 points 1 year ago* (last edited 1 year ago)

The issue isn't that people are using others works for 'derivative' content.

The issue is that, for a person to 'derive' comedy from Sarah Silverman the 'analogue' way, you have to get her works legally, be that streaming her comedy specials, or watching movies/shows she's written for.

With chat GPT and other AI, its been 'trained' on her work (and, presumably as many other's works as possible) once, and now there's no 'views', or even sources given, to those properties.

And like a lot of digital work, its reach and speed is unprecedented. Like, previously, yeah, of course you could still 'derive' from people's works indirectly, like from a friend that watched it and recounted the 'good bits', or through general 'cultural osmosis'. But that was still limited by the speed of humans, and of culture. With AI, it can happen a functionally infinite number of times, nearly instantly.

Is all that to say Silverman is 100% right here? Probably not. But I do think that, the legality of ChatGPT, and other AI that can 'copy' artist's work, is worth questioning. But its a sticky enough issue that I'm genuinely not sure what the best route is. Certainly, I think current AI writing and image generation ought to be ineligible for commercial use until the issue has at least been addressed.

[-] Riptide502@lemm.ee 20 points 1 year ago

AI is a double-edged sword. On one hand, you have an incredible piece of technology that can greatly improve the world. On the other, you have technology that can easily be misused to a disastrous degree.

I think most people can agree that an ideal world with AI is one where it is a tool to supplement innovation/research/creative output. Unfortunately, that is not the mindset of venture capitalists and technology enthusiasts. The tools are already extremely powerful, so these parties see them as replacements to actual humans/workers.

The saddest example has to be graphic designers and digital artists. It’s not some job that “anyone can do.” It’s an entire profession that takes years to master and perfect. AI replacement doesn’t just mean taking away their jobs; it renders years of experience worthless. The frustrating thing is that it’s doing all of this with their work, their art. Even with more regulations on the table, companies like Adobe and DeviantArt are still using shady practices to con users into building their AI models (quietly instating automatic opt-in and making opt-out difficult). It’s sort of like forcing someone to dig their own grave.

You can’t blame artists for being mad about the whole situation. If you were in their same position, you would be just as angry and upset. The hard truth is that a large portion of the job market could likely be replaced by AI at some point, so it could happen to you.

These tools need to be TOOLS, not replacements. AI has its downfalls, and expert knowledge should be used as a supplement both to improve these tools and to improve the final product. There was a great video that covered some of these fundamental issues (such as the models not actually “knowing” or understanding what a certain object or concept is), but I can’t find it right now. I think the best work comes when everyone cooperates.

[-] Steeve@lemmy.ca 13 points 1 year ago

Even as tools, every time we increase worker productivity without a similar adjustment to wages we transfer more wealth to the top. It's definitely time to seriously discuss a universal basic income.

[-] MargotRobbie@lemmy.world 20 points 1 year ago

She's going to lose the lawsuit. It's an open and shut case.

"Authors Guild, Inc. v. Google, Inc." is the precedent case, in which the US Second Circuit Court of Appeals established that transformative digitization of copyrighted material inside a search engine constitutes fair use (the Supreme Court declined to hear the appeal), and text used for training LLMs is even more transformative than book digitization, since it is near impossible to reconstitute the original work barring extreme overtraining.

You also have to understand why styles can't and shouldn't be copyrightable, because that would honestly be a horrifying prospect for art.

[-] patatahooligan@lemmy.world 10 points 1 year ago

"Transformative" in this context does not mean simply not identical to the source material. It has to serve a different purpose and to provide additional value that cannot be derived from the original.

The summary that they talk about in the article is a bad example for a lawsuit because it is indeed transformative. A summary provides a different sort of value than the original work. However if the same LLM writes a book based on the books used as training data, then it is definitely not an open and shut case whether this is transformative.

[-] TheSaneWriter@lemm.ee 19 points 1 year ago

If the models were trained on pirated material, the companies here have stupidly opened themselves to legal liability and will likely lose money over this, though I think they're more likely to settle out of court than lose. In terms of AI plagiarism in general, I think that could be alleviated if an AI had a way to cite its sources, i.e. point back to where in its training data it obtained information. If AI cited its sources and did not word for word copy them, then I think it would fall under fair use. If someone then stripped the sources out and paraded the work as their own, then I think that would be plagiarism again, where that user is plagiarizing both the AI and the AI's sources.

[-] ayaya@lemmy.fmhy.ml 10 points 1 year ago* (last edited 1 year ago)

It is impossible for an AI to cite its sources, at least in the current way of doing things. The AI itself doesn't even know where any particular text comes from. Large language models are essentially really complex word predictors, they look at the previous words and then predict the word that comes next.

When it's training it's putting weights on different words and phrases in relation to each other. If one source makes a certain weight go up by 0.0001% and then another does the same, and then a third makes it go down a bit, and so on-- how do you determine which ones affected the outcome? Multiply this over billions if not trillions of words and there's no realistic way to track where any particular text is coming from unless it happens to quote something exactly.

And if it did happen to quote something exactly, which is basically just random chance, the AI wouldn't even be aware it was quoting anything. When it's running it doesn't have access to the data it was trained on, it only has the weights on its "neurons." All it knows are that certain words and phrases either do or don't show up together often.
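A toy sketch of what "word predictor" means here. This is just a bigram counter, vastly simpler than a real LLM's billions of learned weights, but it shows the key property: the model keeps scores, not sources.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for billions of words of training data.
corpus = "the cat sat on the mat and the cat slept".split()

# "Training": each observed word pair nudges a weight up. After training,
# the counts retain no record of which sentence produced them.
weights = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1

def predict(prev):
    # "Inference": pick the highest-weight next word given the previous one.
    return weights[prev].most_common(1)[0][0]

print(predict("the"))  # → cat  ("cat" followed "the" twice, "mat" once)
```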

[-] Zetaphor@zemmy.cc 14 points 1 year ago* (last edited 1 year ago)

Quoting this comment from the HN thread:

On information and belief, the reason ChatGPT can accurately summarize a certain copyrighted book is because that book was copied by OpenAI and ingested by the underlying OpenAI Language Model (either GPT-3.5 or GPT-4) as part of its training data.

While it strikes me as perfectly plausible that the Books2 dataset contains Silverman's book, this quote from the complaint seems obviously false.

First, even if the model never saw a single word of the book's text during training, it could still learn to summarize it from reading other summaries which are publicly available. Such as the book's Wikipedia page.

Second, it's not even clear to me that a model which only saw the text of a book, but not any descriptions or summaries of it, during training would even be particularly good at producing a summary.

We can test this by asking for a summary of a book which is available through Project Gutenberg (which the complaint asserts is Books1 and therefore part of ChatGPT's training data) but for which there is little discussion online. If the source of the ability to summarize is having the book itself during training, the model should be equally able to summarize the rare book as it is Silverman's book.

I chose "The Ruby of Kishmoor" at random. It was added to PG in 2003. ChatGPT with GPT-3.5 hallucinates a summary that doesn't even identify the correct main characters. The GPT-4 model refuses to even try, saying it doesn't know anything about the story and it isn't part of its training data.

If ChatGPT's ability to summarize Silverman's book comes from the book itself being part of the training data, why can it not do the same for other books?

As the commenter points out, I could recreate this result using a smaller offline model and an excerpt from the Wikipedia page for the book.

[-] patatahooligan@lemmy.world 8 points 1 year ago

You are treating publicly available information as free from copyright, which is not the case. Wikipedia content is covered by the Creative Commons Attribution-ShareAlike License 4.0. Images might be covered by different licenses. Online articles about the book are also covered by copyright unless explicitly stated otherwise.

[-] dep@lemmy.world 14 points 1 year ago

Feels like a publicity play

[-] RoundSparrow@lemm.ee 9 points 1 year ago

The comic's suit questions if AI models can function without training themselves on protected works.

I doubt a human can compose chat responses without having trained at school on previous language. Copyright favors the rich, powerful, and established, like Silverman.

[-] patatahooligan@lemmy.world 14 points 1 year ago

Selectively breaking copyright laws specifically to allow AI models also favors the rich, unfortunately. These models will make a very small group of rich people even richer while putting out of work the millions of creators whose works were stolen to train the models.

[-] trachemys@lemmy.world 12 points 1 year ago

We are overdue for strengthening fair use.

[-] Marxine@lemmy.ml 5 points 1 year ago

VC backed AI makers and billionaire-ran corporations should definitely pay for the data they use to train their models. The common user should definitely check the licences of the data they use as well.

[-] RedCanasta@lemmy.fmhy.ml 5 points 1 year ago

Copyright laws are a recent phenomenon and should have never been a thing imo. The only reason it's there is not to "protect creators," but to make sure upper classes extract as much wealth over the maximum amount of time possible.

Music piracy has shown that copyright has too many holes in it to be effective, and now AI is showing us its redundancy as it uses data to give better results.

It stifles creativity to the point that it makes us inhuman. Hell, Chinese writers used to praise others for using a line or two from other writers.

[-] TheSaneWriter@lemm.ee 6 points 1 year ago

I think that copyright laws are fine in a vacuum, but that if nothing else we should review the amount of time before a copyright enters the public domain. Disney lobbied to have it set to something awful like 100 years, and I think it should almost certainly be shorter than that.

this post was submitted on 10 Jul 2023
421 points (94.7% liked)
