submitted 11 months ago* (last edited 11 months ago) by btp@kbin.social to c/privacy@lemmy.ml

ChatGPT is full of sensitive private information and spits out verbatim text from CNN, Goodreads, WordPress blogs, fandom wikis, Terms of Service agreements, Stack Overflow source code, Wikipedia pages, news blogs, random internet comments, and much more.

Using this tactic, the researchers showed that there are large amounts of personally identifiable information (PII) in OpenAI's large language models. They also showed that, on a public version of ChatGPT, the chatbot spat out large passages of text scraped verbatim from other places on the internet.

“In total, 16.9 percent of generations we tested contained memorized PII,” they wrote, which included “identifying phone and fax numbers, email and physical addresses … social media handles, URLs, and names and birthdays.”
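The figure quoted above comes from scanning model generations for PII-like strings. As a rough illustration of that kind of check (not the paper's actual pipeline, which used a dedicated PII-detection system), here is a minimal sketch using a few simplified, hypothetical regexes for the most obvious formats mentioned in the quote:

```python
import re

# Hedged sketch: these patterns are simplified stand-ins, covering only
# phone numbers, email addresses, and URLs, and will miss or misflag
# plenty of real-world cases.
PII_PATTERNS = {
    "phone": re.compile(r"\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "url":   re.compile(r"https?://\S+"),
}

def contains_pii(generation: str) -> list[str]:
    """Return the PII categories detected in a single model generation."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(generation)]

# Toy strings standing in for model outputs:
generations = [
    "Contact Jane at jane.doe@example.com or (555) 123-4567.",
    "The quick brown fox jumps over the lazy dog.",
]
flagged = [g for g in generations if contains_pii(g)]
rate = len(flagged) / len(generations)
```

The paper's 16.9 percent figure is the analogous rate computed over the generations the researchers tested, using far more thorough detection than this sketch.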

Edit: The full paper that's referenced in the article can be found here

[-] Nonameuser678@aussie.zone 18 points 11 months ago

Soo plagiarism essentially?

[-] SomeAmateur@sh.itjust.works 9 points 11 months ago* (last edited 11 months ago)

Always has been. Just yesterday I was explaining AI image generation to a coworker. I said the program looks at a ton of images and uses that info to blend them together. Like, it knows what a Soviet propaganda poster looks like, and it knows what artwork of Santa looks like, so it can make a Santa-themed propaganda poster.

Same with text, I assume. It knows the Mario wiki and fanfics, and it knows a bunch of books about zombies, so it blends them to make a gritty story about Mario fending off zombies. But yeah, it's all other works just melded together.

My question is, would a human author be any different? We absorb ideas and stories we read and hear and blend them into new or reimagined ideas. AI just knows its original sources.

[-] FooBarrington@lemmy.world 3 points 11 months ago

"Blending together" isn't accurate, since it implies that the original images are used directly in the process of creating the output. The AI doesn't have access to the original data (unless an image was erroneously repeated many times in the training dataset).

[-] Omega_Haxors@lemmy.ml 2 points 11 months ago* (last edited 11 months ago)

> My question is, would a human author be any different?

Humans don't remember the exact source material; it gets abstracted into concepts before being saved as an engram. This is how we're able to create new works of art, while AI can only do Photoshop on its training data. Humans will forget the text but remember the soul; AI only has access to the exact work and cannot replicate the soul of a work (at least with its current implementation; if these systems were made to be anything more than glorified IP theft, we could see systems that could actually do art like humans, but we don't live in that world).

this post was submitted on 29 Nov 2023
362 points (98.9% liked)
