Google Researchers’ Attack Prompts ChatGPT to Reveal Its Training Data
(www.404media.co)
Privacy has become a very important issue in modern society. With companies and governments constantly abusing their power, more and more people are waking up to the importance of digital privacy.
In this community everyone is welcome to post links and discuss topics related to privacy.
Model collapse is likely to kill them in the medium term. We're rapidly reaching the point where an increasingly large majority of text on the internet, i.e. the training data of future LLMs, is itself generated by LLMs for content farms. For complicated reasons that I don't fully understand, this kind of training data poisons the model.
It's not hard to understand. People already trust the output of LLMs way too much because it sounds reasonable. On closer inspection, it often turns out to be bullshit. So LLMs raise the level of bullshit relative to their input data. Repeat that a few times and the problem becomes more and more obvious.
Like incest for computers. Random fault goes in, multiplies and is passed down.
Photocopy of a photocopy.
Or, in more modern terms, JPEG of a JPEG.
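The "photocopy of a photocopy" intuition can be sketched with a toy simulation. This is not how real LLM training works; it just uses a one-dimensional Gaussian as a stand-in for a generative model: each "generation" fits a model to samples produced by the previous generation's model, and the estimation error compounds, so the fitted distribution tends to narrow and drift away from the original data. All names here (`collapse_demo`, the sample sizes, the seed) are made up for illustration.

```python
import random
import statistics

def collapse_demo(n_samples=20, generations=200, seed=0):
    """Toy model-collapse sketch: repeatedly fit a Gaussian to samples
    drawn from the previous generation's fitted Gaussian.

    Each generation trains only on the previous generation's output,
    never on the original data, mimicking LLMs trained on LLM text.
    Returns the list of (mean, stddev) per generation.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # generation 0: the "real data" distribution
    history = [(mu, sigma)]
    for _ in range(generations):
        # "Generate content" from the current model...
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        # ...then "train" the next model on that content alone.
        mu = statistics.fmean(data)
        sigma = statistics.pstdev(data)  # MLE estimate, biased low
        history.append((mu, sigma))
    return history

hist = collapse_demo()
print(f"gen 0 stddev = {hist[0][1]:.3f}, final stddev = {hist[-1][1]:.3f}")
```

Because each finite sample slightly underestimates the spread, the fitted standard deviation random-walks toward zero: diversity in the "data" disappears generation by generation, which is the statistical analogue of the JPEG-of-a-JPEG artifact buildup.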