this post was submitted on 26 Feb 2026
55 points (93.7% liked)

Privacy


New study shows smart chatbots can figure out who you really are from just a few posts... and it only costs a couple of dollars.

top 22 comments
[–] x550@lemmy.dbzer0.com 27 points 1 day ago (2 children)

Nothing of substance here. Stylometric analysis was already a thing, and with correct opsec it's easy enough to defeat: burn accounts regularly, use separate accounts for specific topics or share accounts with others, and don't post personal details online. On the internet, everyone is a cat.
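[Editor's note: the stylometric analysis mentioned above is, in its simplest form, just comparing writing-style fingerprints. A minimal sketch, using character trigram frequencies and cosine similarity (the sample texts are made up for illustration; real stylometry uses far richer features):

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    """Build a frequency profile of character n-grams for one text."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine_similarity(a, b):
    """Cosine similarity between two n-gram frequency profiles (0..1)."""
    shared = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Two samples in a similar style vs. one in a very different register
sample_a = "dont post personal details online, on the internet everyone is a cat"
sample_b = "dont share personal info online, everyone on the internet is a cat"
sample_c = "We evaluate the efficacy of large language models for author attribution."

profile_a = char_ngrams(sample_a)
print(cosine_similarity(profile_a, char_ngrams(sample_b)))  # noticeably higher
print(cosine_similarity(profile_a, char_ngrams(sample_c)))  # noticeably lower
```

This is also why the advice works: burning accounts and varying topics and phrasing shrinks the per-account text sample that any such fingerprint is built from.]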

[–] other_cat@piefed.zip 21 points 1 day ago

I certainly am!

[–] umbrella@lemmy.ml 2 points 1 day ago

what kind of opsec can defeat it?

[–] homesweethomeMrL@lemmy.world 33 points 1 day ago (3 children)

Step 2: Search the whole internet: It quietly checks LinkedIn, Google, other Reddit accounts, etc., to find possible real people who match those clues.

Oh. Whew.

[–] Draconic_NEO@lemmy.dbzer0.com 4 points 1 day ago* (last edited 1 day ago)

Yeah, call this article what it is: clickbait fearmongering.

[–] gigachad@piefed.social 21 points 1 day ago

To be honest, internet search has gotten so shitty that soon it will be a genuinely impressive skill to search the internet efficiently.

[–] floquant@lemmy.dbzer0.com 6 points 1 day ago (2 children)

I thought Lemmy comments might be indexed anyway, but neither Kagi nor DDG turned up anything for my username. Wonder if it's different for other instances?

Not surprising, considering those challenges against AI scrapers likely also affect search engine crawlers. Stuff can get through if it's federated to other servers that don't have such measures, but if you don't participate in communities on those instances, it's less likely.

[–] Mac@mander.xyz 2 points 1 day ago* (last edited 1 day ago)

Searched "lemmy floquant" on DDG and one of your comments is the second result.

First result was the user profile of the same name but different instance.

[–] floquant@lemmy.dbzer0.com 16 points 1 day ago (1 children)

I don't like having to be vague about my age, nationality, job, etc., because I'd rather be honest and relate to others online, but sadly it's a necessity in the modern landscape.

[–] CucumberFetish@lemmy.dbzer0.com 7 points 1 day ago (1 children)

That's why you have multiple accounts: some for above-board things, which can carry your personal details, and others for eating the rich.

[–] GamingChairModel@lemmy.world 6 points 1 day ago

One account that can be correlated to place/city, willing to discuss local news and issues.

One account that can be correlated to family status, willing to mention details about relationships.

One account that can be correlated to career, willing to mention details about educational background, industry news, the job market, the workplace, etc.

One account that can be correlated to each distinct hobby or interest. Some interests can correlate among themselves (like an all sports account that discusses multiple sports) and are safe to discuss on a single account. Like my current account that is tech oriented, including some stuff about games or Linux or networking or even the tech industry. But keep the different interests on separate accounts.

Then different accounts for topics that you consider controversial or private.

And, preferably, spread all those accounts across multiple instances so that instance admins can't link accounts from metadata (client, OS, IP address, email verification), use completely unique usernames, and avoid unique markers like esoteric phrases, unique autocorrect errors, etc.

Even if an adversary can link two accounts, they probably can't link all of them.
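[Editor's note: the compartmentalization scheme described above boils down to a rule: every topic belongs to exactly one account, so no single account accumulates enough correlatable detail. A hypothetical sketch (account names and topics are invented for illustration):

```python
# Each account is allowed to discuss only its own topic set. Related
# hobbies may share an account (as with the tech account above), but
# identity-revealing categories never mix.
ACCOUNT_TOPICS = {
    "local_account":  {"city news", "local politics"},
    "family_account": {"relationships", "parenting"},
    "career_account": {"industry news", "job market"},
    "tech_account":   {"linux", "games", "networking"},
}

def account_for(topic: str) -> str:
    """Return the single account permitted to discuss a topic."""
    matches = [acct for acct, topics in ACCOUNT_TOPICS.items()
               if topic in topics]
    if len(matches) != 1:
        # Unknown topic, or a topic leaked into two accounts: either way,
        # posting it would risk cross-account correlation.
        raise ValueError(f"topic {topic!r} must belong to exactly one account")
    return matches[0]

print(account_for("linux"))  # tech_account
```

The point of the exactly-one check is the last paragraph above: an adversary who links two accounts still only learns the topics those two accounts cover.]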

[–] HubertManne@piefed.social 10 points 1 day ago

and if they're wrong, they still get the couple of dollars, so win-win. I can unmask anyone you want online from just a few posts and a name randomizer.

[–] CucumberFetish@lemmy.dbzer0.com 10 points 1 day ago (2 children)

Looks like the LLM can be used to cross-reference data from your pseudo-private account to your public account. What a surprise.

[–] pivot_root@lemmy.world 10 points 1 day ago (1 children)

It's a good thing that I work for Dick's Fish & Chips located on the main street of a bustling city in Antarctica. I wouldn't want the LLM to get it wrong.

The same Dick's Fish & Chips where in 1998, The Undertaker threw Mankind off Hell In A Cell, and plummeted 16 ft through an announcer's table?

That Dick's Fish & Chips?

Yup. The only difference between this and what any individual could already do is just time and scale.

Data brokers and government surveillance organizations have already had specialized tools to do this sort of thing for a while now, it's just that LLMs reduce the complexity and specialization needed to actually make an implementation that works well as an individual person.

[–] reddig33@lemmy.world 7 points 1 day ago (2 children)

AI can't even get facial identification correct. It might claim to be able to identify people by their online presence, but I wouldn't be surprised if it guesses incorrectly most of the time.

Based on the research, it had 60-something percent accuracy. But the test data was Hacker News accounts that linked to LinkedIn. I would guess that anyone linking their anonymous account to their LinkedIn profile isn't really trying to hide.

[–] chicken@lemmy.dbzer0.com 2 points 1 day ago

On real Hacker News users, the AI correctly linked the secret username to the real person 67% of the time, and when it did make a guess, it was right 90% of the time. The paper also states that matching the same person's Reddit posts from different years or groups succeeded 68% of the time.
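[Editor's note: those two figures aren't contradictory; they imply how often the model guessed at all. A quick back-of-the-envelope check, assuming the 67% is measured over all accounts and the 90% only over cases where a guess was made:

```python
# Figures as quoted in the comment above
overall_correct = 0.67   # correct links / all accounts
precision = 0.90         # correct links / guesses actually made

# correct = guess_rate * precision, so:
guess_rate = overall_correct / precision
print(f"implied guess rate: {guess_rate:.0%}")  # ~74% of accounts got a guess
```

In other words, the model abstained on roughly a quarter of accounts and was right 9 times out of 10 when it did commit.]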

[–] Willoughby@piefed.world 6 points 1 day ago

Oh no, who's making them use all those terrible services that whore them out like that?

[–] corsicanguppy@lemmy.ca 3 points 1 day ago

upto

If the 'journalist' can't use a spell checker, I don't trust his opinions on automation.