Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data
(www.404media.co)
I think what they mean is that ML models generally don't directly store their training data, but instead use it to form a compressed latent space. Some elements of the training data may be perfectly recoverable from that latent space, but most won't be. As a result, it's not very surprising that you can get it to reproduce some copyrighted material word for word.
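As a toy illustration of that point (a word-level n-gram/Markov chain, which has nothing to do with the actual attack or a real LLM, and where the corpus and names are made up for the example): a model that only stores local statistics about its training text, a lossy compression of sorts, will still hand a training sentence back verbatim once you prompt it with a distinctive enough prefix.

    import random
    from collections import defaultdict

    def train_ngram(text, n=3):
        # Store only local statistics: each (n-1)-word prefix maps to the
        # words that followed it in training. A lossy summary, not a copy.
        words = text.split()
        table = defaultdict(list)
        for i in range(len(words) - n + 1):
            table[tuple(words[i:i + n - 1])].append(words[i + n - 1])
        return table

    def generate(table, prompt, max_words=30):
        # Continue the prompt by repeatedly sampling a next word for the
        # current prefix. If a prefix only ever had one continuation in
        # training, "generation" is just the training text coming back out.
        out = list(prompt)
        k = len(prompt)
        for _ in range(max_words):
            options = table.get(tuple(out[-k:]))
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    corpus = ("the quick brown fox jumps over the lazy dog "
              "while the cat sleeps on the warm windowsill")
    table = train_ngram(corpus, n=3)

    # A distinctive two-word prompt pulls the whole sentence back verbatim.
    print(generate(table, ("the", "quick")))

The same tension shows up at LLM scale: the weights are a compressed model of the data, yet rare or oft-repeated passages can sit close enough to "uniquely determined" that the right prompt recovers them exactly.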