this post was submitted on 23 Feb 2026
95 points (94.4% liked)
Fediverse
A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, Mbin, etc.).
founded 2 years ago
That's an interesting example of a user this is designed for/around.
The general system of up/downvotes seems to be doing its job as intended: their views appear routinely unpopular, and there's a seemingly strong community consensus around that.
It looks like their threads have comments that solidly and clearly refute the garbage manosphere stuff. For some people, it's an opportunity to refute it publicly and directly, and the public viewer gets to read those responses too.
So, with that example: what do the flags do that the content of their posts doesn't already communicate?
It warns other users that this commenter may be a bad faith user / troll.
Usually when I encounter a troll, I check their profile to see if they are indeed a troll. The warning saves some time on that, and is accurate the vast majority of the time.
I guess I approach it inversely. I encounter what looks like a troll post, and I'll only check the profile when I'm interacting with them, or when the downvoting is already so deep that I'm doing a morbid dive into someone's history.
Most of the time, though, the user just has one deeply downvoted argument but otherwise normal and/or low-engagement posts, so they wouldn't be flagged by this.
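To make the distinction concrete, here's a minimal sketch of the kind of heuristic being described: flag an account only when heavily downvoted comments recur across several distinct threads, so a single buried argument doesn't trip it. The thresholds, field names, and logic here are all assumptions for illustration, not the actual tool's implementation.

```python
# Hypothetical flagging heuristic: all thresholds are invented.
from dataclasses import dataclass


@dataclass
class Comment:
    thread_id: str
    score: int  # net score: upvotes minus downvotes


def looks_flaggable(comments: list[Comment],
                    score_cutoff: int = -10,
                    min_bad_threads: int = 3) -> bool:
    """Flag only when heavily downvoted comments span several
    distinct threads; one buried argument is not enough."""
    bad_threads = {c.thread_id for c in comments if c.score <= score_cutoff}
    return len(bad_threads) >= min_bad_threads


# One deeply downvoted argument confined to a single thread:
one_off = [Comment("t1", -40), Comment("t1", -25), Comment("t2", 3)]
# A pattern of downvoted comments across many threads:
pattern = [Comment(f"t{i}", -15) for i in range(5)]
```

Under these toy thresholds, `one_off` would not be flagged (only one bad thread) while `pattern` would be, which matches the intuition above.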
So I understand that it can save some time with some niche cases.
But I can't help noting that the system seems intentionally blind to targeted harassment, which can be a source, if not a cause, of bad-faith accounts. (And those likely need different approaches, since they're also niche cases themselves.)
And maybe it's all just because of my instance's Local feed, so that's what I see as a prominent problem on Lemmy.
If you mean using puppet accounts to massively downvote someone, that's also tracked, but with another tool.
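A toy sketch of what that other tool might track, assuming admin access to raw (voter, target) downvote records, which instance admins do have on Lemmy: flag voters whose downvotes concentrate overwhelmingly on one target. The function name, thresholds, and data shape are all hypothetical.

```python
# Toy puppet/brigade detection over assumed (voter, target_user) records.
# Thresholds are invented for illustration only.
from collections import Counter


def suspicious_voters(votes: list[tuple[str, str]],
                      min_hits: int = 3,
                      concentration: float = 0.8) -> set[str]:
    """Return voters whose downvotes concentrate heavily on one target."""
    per_voter: dict[str, Counter] = {}
    for voter, target in votes:
        per_voter.setdefault(voter, Counter())[target] += 1
    flagged = set()
    for voter, targets in per_voter.items():
        total = sum(targets.values())
        top_target, top_hits = targets.most_common(1)[0]
        if top_hits >= min_hits and top_hits / total >= concentration:
            flagged.add(voter)
    return flagged


votes = ([("puppet1", "victim")] * 4
         + [("puppet2", "victim")] * 3
         + [("normal", "victim"), ("normal", "a"),
            ("normal", "b"), ("normal", "c")])
```

Here the two accounts that only ever downvote one user get flagged, while an account spreading its downvotes around does not. A real tool would presumably also weight by time windows and account age.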
Not necessarily puppet accounts, just brigading in general.
It's the rationale many instances used to defederate hexbear. (Even though, IIRC, hexbear disables downvotes, so they were defederated for users mass-posting, usually that hogshit image, rather than mass-voting.) It wasn't puppet or bot accounts at any rate.
But then there are repost communities where users share comments (especially from places they or their audience are banned from) or DMs for a group response.
Not to mention the whole 'block and downvote all .ml on sight' mentality. But hopefully that's something this tool could catch.