submitted 1 year ago* (last edited 1 year ago) by duncesplayed@lemmy.one to c/privacyguides@lemmy.one

It feels like we have a new privacy threat that's emerged in the past few years, and this year especially. I kind of think of the privacy threats over the past few decades as happening in waves of:

  1. First we were concerned about governments spying on us. The way we fought back (and continue to fight back) was through encrypted and secure protocols.
  2. Then we were concerned about corporations (Big Tech) taking our data and selling it to advertisers to target us with ads, or otherwise manipulate us. This is still a hard battle being fought, but we're fighting it mostly by avoiding Big Tech ("De-Googling", switching from social media to communities, etc.).
  3. Now we're in a new wave. Big Tech is now building massive GPTs (ChatGPT, Google Bard, etc.) and it's all trained on our data. Our reddit posts and Stack Overflow posts and maybe even our Mastodon or Lemmy posts! Unlike with #2, avoiding Big Tech doesn't help, since they can access our posts no matter where we post them.

So for that third one...what do we do? Anything that's online is fair game to be used to train the new crop of GPTs. Is this a battle that you personally care a lot about, or are you okay with GPTs being trained on stuff you've provided? If you do care, do you think there's any reasonable way we can fight back? Can we poison their training data somehow?

[-] unfazedbeaver@lemmy.one 2 points 1 year ago

I'm considering using Power Delete Suite to delete my account, overwrite my previous comments, and maybe leave a couple of my top comments up regarding tech support, so people can still find troubleshooting information.
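
For anyone who would rather script this than run Power Delete Suite in the browser, here's a rough sketch of the same overwrite-then-delete idea using the Reddit API through the PRAW library. The credentials, the KEEP_SCORE threshold, and the replacement text are all placeholders I picked for illustration, not anything Power Delete Suite itself uses.

```python
# Rough sketch: overwrite and delete old Reddit comments, keeping the
# highest-scoring answers up for humans. Requires `pip install praw` and a
# Reddit "script" app; every credential below is a placeholder.
import praw

KEEP_SCORE = 100  # arbitrary threshold: comments at or above this are kept

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="comment-cleanup script by u/YOUR_USERNAME",
)

for comment in reddit.user.me().comments.new(limit=None):
    if comment.score >= KEEP_SCORE:
        continue  # leave popular troubleshooting answers in place
    comment.edit(body="[content removed by author]")  # overwrite first...
    comment.delete()                                  # ...then delete
```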

[-] curioushom@lemmy.one 1 point 1 year ago

The issue is that most of the content posted is archived fairly quickly. Deleting/rewriting only hurts the humans that might have gone looking for it. The way I look at it is, if the data is searchable/indexable by search engines (as a proxy for all other tools) at any point of its life cycle then it's essentially permanent.
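
You can check this for yourself: the Internet Archive exposes a public availability endpoint, so a short script will tell you whether a given URL already has a Wayback Machine snapshot. A minimal sketch using the requests library (latest_snapshot is just an illustrative helper, and the URL at the bottom is an arbitrary example):

```python
# Sketch: ask the Internet Archive's public availability endpoint whether a
# URL already has a Wayback Machine snapshot.
import requests

def latest_snapshot(url):
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url},
        timeout=10,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest else None

print(latest_snapshot("https://lemmy.one/c/privacyguides"))
```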

[-] jonah@lemmy.one 2 points 1 year ago

The biggest problem to me is what I just saw you post in another reply, that these models built upon our knowledge exist almost solely within proprietary ecosystems.

> and maybe even our Mastodon or Lemmy posts!

The Washington Post published a great piece that lets you search which websites were included in the "C4" dataset published in 2019. I searched for my personal blog jonaharagon.com and sure enough it was included, and the C4 dataset is practically minuscule compared to what is being compiled for larger models like ChatGPT. If my tiny website was included, Mastodon and Lemmy posts (which are actually very visible and SEO-optimized, tbh) are 100% being scraped as well; there's no maybe about it.
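
If you'd rather not rely on the Washington Post's search page, you can also scan the corpus yourself. A rough sketch, assuming the allenai/c4 mirror on Hugging Face and the datasets library; streaming avoids downloading the whole English split (several hundred gigabytes), though iterating over all of it is still slow. The domain is just the example mentioned above.

```python
# Sketch: stream the C4 corpus from Hugging Face and look for documents
# whose source URL matches a given domain.
from datasets import load_dataset

MY_DOMAIN = "jonaharagon.com"  # example domain from the comment above

c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)

for doc in c4:
    if MY_DOMAIN in doc["url"]:
        print(doc["url"], doc["text"][:100])
```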

[-] greybeard@lemmy.one 1 point 1 year ago

I've been posting publicly for years. I expect that anything I post can be viewed and used by anyone, at any time, for anything. AI hasn't changed that.

[-] bpudding@lemmy.one 1 point 1 year ago

Regardless of how anyone feels about their writing being used for model training, there's definitely nothing anyone can do to prevent it other than simply not writing anything visible to the public.

[-] Neromar@lemmy.one 1 point 1 year ago

Not yet, I think. If AI is regulated more strictly, users might get the chance to set permissions on their data, however that would end up looking. I hope it's better than the cookie opt-out or Do Not Track setting in your browser, though.
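
The closest thing we have today is the advisory robots.txt opt-out, which crawlers are free to ignore. A small sketch of checking such a policy with Python's standard-library robotparser; the site and the CCBot user agent are just examples (CCBot is Common Crawl's crawler, whose output feeds datasets like C4):

```python
# Sketch: check whether a site's robots.txt asks a given crawler to stay away.
# This is purely advisory -- a scraper can simply ignore it.
from urllib.robotparser import RobotFileParser

SITE = "https://lemmy.one"   # example domain
CRAWLER = "CCBot"            # Common Crawl's crawler, used to build C4

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()

if parser.can_fetch(CRAWLER, f"{SITE}/"):
    print(f"{CRAWLER} is allowed to crawl {SITE}")
else:
    print(f"{CRAWLER} is disallowed on {SITE}")
```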

[-] Kalkaline@lemmy.one 1 point 1 year ago

Do I care? Sure, a little: someone is going to get paid for it, and it's not going to be me. But there's nothing I can do about it, and my boss gets paid for my work too.

[-] BacardiT@lemmy.one 1 point 1 year ago

I’m okay with it as long as I’m aware of it. If the platforms are up front about it, then users can choose for themselves whether they want to potentially contribute to training data. It will be interesting to watch the next few years.

[-] mainfrog@beehaw.org 1 point 1 year ago

It depends on whether the data is suitably anonymized or not. If my data can't be reconstructed word for word in a way that directly links back to me, I don't know if I mind that any more than I'd mind someone reading content I wrote and taking inspiration from it.

On the topic of privacy - how do people feel Lemmy compares to Reddit for privacy? I don't really like the way Lemmy handles deleted content for example.

[-] DevCat@lemmy.world 0 points 1 year ago

GIGO: Garbage In, Garbage Out. I asked ChatGPT to write a short essay and include a bibliography with URLs. Every URL was a 404, and when I looked up the bibliographic entries, they were nonexistent as well.
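
That test is easy to automate. A quick sketch using the requests library; the urls list is a placeholder for whatever links the model handed back:

```python
# Sketch: flag model-generated "references" whose URLs don't actually resolve.
import requests

urls = [
    # paste the URLs from the generated bibliography here
    "https://example.com/nonexistent-paper",
]

for url in urls:
    try:
        status = requests.head(url, allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    print(f"{url} -> {status if status is not None else 'unreachable'}")
```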

[-] Limivorous@lemmy.one 1 point 1 year ago

That's because you don't understand the tool you're using, and you're using tech-sounding language in the wrong context to make it look like you do.

GPT models generate text based on the token patterns they learned during training. The URLs they give you don't work because they only have to look legit. It's all statistical patterns.

It's not because they fed it garbage during the semi-supervised training; it's because that is literally what the tool is meant for. Use the right tool, like Google Scholar, if what you need are sources.
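
As a toy demonstration of that point, here's a sketch using the small GPT-2 checkpoint through the transformers library (the prompt and its fake reference are arbitrary): the model happily continues a bibliography with URL-shaped text, because nothing in the sampling loop ever checks a link against reality.

```python
# Toy sketch: GPT-2 continues the prompt with statistically plausible tokens,
# including URL-shaped strings that were never checked against anything real.
from transformers import pipeline, set_seed

set_seed(42)
generator = pipeline("text-generation", model="gpt2")

prompt = "Bibliography:\n[1] Smith, J. (2020). Privacy on the Modern Web. Available at: http://"
result = generator(prompt, max_new_tokens=30, do_sample=True)
print(result[0]["generated_text"])
```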

this post was submitted on 12 Jun 2023
1 point (100.0% liked)

Privacy Guides


In the digital age, protecting your personal information might seem like an impossible task. We’re here to help.

This is a community for sharing news about privacy, posting information about cool privacy tools and services, and getting advice about your privacy journey.


You can subscribe to this community from any Kbin or Lemmy instance.



Check out our website at privacyguides.org before asking your questions here. We've tried to answer the most common questions and list our recommendations there!

Want to get involved? The website is open-source on GitHub, and your help would be appreciated!


This community is the "official" Privacy Guides community on Lemmy, which can be verified here. Other "Privacy Guides" communities on other Lemmy servers are not moderated by this team or associated with the website.


Moderation Rules:

  1. We prefer posting about open-source software whenever possible.
  2. This is not the place for self-promotion if you are not listed on privacyguides.org. If you want to be listed, make a suggestion on our forum first.
  3. No soliciting engagement: Don't ask for upvotes, follows, etc.
  4. Surveys, Fundraising, and Petitions must be pre-approved by the mod team.
  5. Be civil; no violence or hate speech. Assume people here are posting in good faith.
  6. Don't repost topics which have already been covered here.
  7. News posts must be related to privacy and security, and your post title must match the article headline exactly. Do not editorialize titles; you can post your opinions in the post body or a comment.
  8. Memes/images/video posts that could be summarized as text explanations should not be posted. Infographics and conference talks from reputable sources are acceptable.
  9. No help vampires: This is not a tech support subreddit; don't abuse our community's willingness to help. Questions related to privacy, security, or privacy/security-related software and their configurations are acceptable.
  10. No misinformation: Extraordinary claims must be matched with evidence.
  11. Do not post about VPNs or cryptocurrencies which are not listed on privacyguides.org. See Rule 2 for info on adding new recommendations to the website.
  12. General guides or software lists are not permitted. Original sources and research about specific topics are allowed as long as they are high quality and factual. We are not providing a platform for poorly-vetted, out-of-date or conflicting recommendations.

