this post was submitted on 25 Mar 2026
85 points (95.7% liked)

Privacy

[–] ViatorOmnium@piefed.social 23 points 1 day ago (3 children)

The key points:

If something suggests an account isn’t human, including automation (hi, web agents), we may ask it to confirm there’s a person behind it. This will be rare and will not apply to most users. Accounts that can’t pass may be restricted.

To be clear, this is not sitewide human verification, let alone sitewide ID verification.

Redditors have long been the best bullshit detectors, and increasingly great Turing testers. We’ll make reporting easier and more flexible (these days, we can infer most issues from a report without a lot of context). I’d also like to include comments from other users pointing something out (e.g., “nice post, bot, now fuck off”), since that’s most users’ preferred reporting method.

It also ends with a (valid) rant against AI-generated content posted by humans, but he says they aren't going to block it sitewide for now.

All in all, there's nothing earth-shattering in there.

[–] deliriousdreams@fedia.io 3 points 1 day ago

Thank you. I've stayed away from reddit since they pulled the bullshit where the site tries to auto-sign you in when you visit.

[–] BillyClark@piefed.social 6 points 1 day ago (1 children)

"Please continue to work for free to make me richer."

[–] halfapage@lemmy.world 3 points 1 day ago

"You do it for the love of the game, am I right sport's fans?!"

[–] ken@discuss.tchncs.de 1 point 1 day ago

Redditors have long been the best bullshit detectors, and increasingly great Turing testers.

🦾