I'm lost here. Do you not think fighting toxicity and hate speech is a valid and important function of moderation, one that's just as much or more for the sake of the people as it might be for advertisers?
I think the rise of hate speech on centralised platforms is driven largely by their centralised moderation and algorithmic curation.
They have all known for a long time that their algorithms promote hate speech, but they also know that curbing it would hurt their revenue, so they don't. They chase the fast buck and appease advertisers, who have a naturally conservative bent, and that means rage bait and conventional values.
That's quite apart from the cases where platform owners explicitly support that hate speech and actively suppress left-leaning voices.
I think the model we have on decentralised systems, where we curate and moderate for ourselves, works well because most of that open hate speech ends up siloed, which is the best thing you can do with it.