OpenAI has publicly responded to a copyright lawsuit by The New York Times, calling the case “without merit” and saying it still hoped for a partnership with the media outlet.
In a blog post, OpenAI said the Times “is not telling the full story.” It took particular issue with claims that its ChatGPT AI tool reproduced Times stories verbatim, arguing that the Times had manipulated its prompts, often including lengthy excerpts of the articles themselves, in order to get the model to regurgitate them. “Even when using such prompts, our models don’t typically behave the way The New York Times insinuates, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts,” OpenAI said.
OpenAI claims it’s attempted to reduce regurgitation from its large language models and that the Times refused to share examples of this reproduction before filing the lawsuit. It said the verbatim examples “appear to be from year-old articles that have proliferated on multiple third-party websites.” The company did admit that it took down a ChatGPT feature, called Browse, that unintentionally reproduced content.
If the point is to prove that the model contains an encoded version of the original article, and you make the model spit out the entire thing by just giving it the first paragraph or two, I don't see anything wrong with such a proof.
Your previous comment suggested that the entire article (or most of it) was included in the prompt/context, and that the part generated purely by the model was generic enough that it could feasibly have been produced without an encoded/compressed/whatever version of the entire article stored somewhere.
Which does not appear to be the case.
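For what it's worth, the kind of check being described above is easy to sketch: give the model only the opening of an article and measure how much of the rest it reproduces verbatim. This is a minimal illustration, assuming the `openai` Python client with an API key in the environment; the model name and the article file path are placeholders, not anything from the lawsuit itself.

```python
# Sketch of a memorization check: prompt with only the opening of an
# article, then measure the longest verbatim run the model reproduces
# from the part it was never shown.
from difflib import SequenceMatcher

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article = open("times_article.txt").read()         # hypothetical local copy
opening, remainder = article[:500], article[500:]  # "first paragraph or two"

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": f"Continue this article:\n\n{opening}"}],
    temperature=0,
    max_tokens=1000,
)
continuation = response.choices[0].message.content

# Longest stretch of characters shared verbatim between the model's
# continuation and the unseen remainder of the article.
match = SequenceMatcher(None, continuation, remainder).find_longest_match(
    0, len(continuation), 0, len(remainder)
)
print(f"Longest verbatim run: {match.size} characters")
print(continuation[match.a : match.a + match.size][:200])
```

A long verbatim run against text the model never saw in the prompt is the point of the proof: it can only come from something the model has internalized, not from the context it was given.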