this post was submitted on 04 May 2026
4 points (83.3% liked)

PieFed Meta


After reading about other instances using LLM-assisted moderation to analyze users, I started thinking about how the same tools could just as easily be run over a user's voting history.

So, here is an idea that could let federated votes remain anonymous while still allowing the admins of a user's own instance to review voting history for any sort of abuse.

Basically, a PieFed instance would maintain a pool of voting accounts created for the sole purpose of federating votes. They could be named something like:

piefed.social_vote1
piefed.social_vote2
...
piefed.social_vote{n}

Anytime a vote is federated to other instances, instead of using the original user's account, the vote would be sent out through one of these voting accounts. For any individual vote cast, the voting account could be selected either randomly or via some sort of round-robin.

This does require the instance to keep a decent number of voting accounts on hand (probably sized as a percentage of the overall active user base). That way, as multiple users from the same instance upvote a particular post, a different voting account federates each of those votes. Voting accounts would not be tied to any one user.
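To make the idea concrete, here's a minimal sketch of the proxy pool with both selection strategies. The class name, account-naming scheme, and pool size are all illustrative assumptions, not anything PieFed actually implements:

```python
import random

class ProxyVotePool:
    """Hypothetical pool of proxy accounts used only to federate votes."""

    def __init__(self, instance: str, n_accounts: int):
        # Accounts follow the naming scheme from the post:
        # piefed.social_vote1 ... piefed.social_vote{n}
        self.accounts = [f"{instance}_vote{i}" for i in range(1, n_accounts + 1)]
        self._next = 0  # cursor for round-robin selection

    def pick_random(self) -> str:
        # Random selection: no linkage to the local user, but the same
        # proxy might happen to federate several votes on one post.
        return random.choice(self.accounts)

    def pick_round_robin(self) -> str:
        # Round-robin: consecutive votes from this instance go out under
        # different proxies, so multiple local upvotes on the same post
        # look like distinct remote actors.
        account = self.accounts[self._next]
        self._next = (self._next + 1) % len(self.accounts)
        return account

pool = ProxyVotePool("piefed.social", n_accounts=3)
print(pool.pick_round_robin())  # piefed.social_vote1
print(pool.pick_round_robin())  # piefed.social_vote2
```

Round-robin guarantees distinct proxies for bursts of votes; random selection avoids keeping per-instance state but only spreads votes out probabilistically.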

At the very least, this would let users who are accustomed to their votes being private vote as freely as they would on other forums, where only admins can see the actual votes.

[–] hendrik@palaver.p3x.de 3 points 1 week ago* (last edited 1 week ago) (1 children)

Well, it'd make it impossible for (non-local) community moderators to detect vote abuse. And furthermore, if a remote moderator decides to ban one of these proxy accounts, random users lose their ability to vote.

And using LLMs for this is a bit silly, IMO. An LLM doesn't really detect voting patterns, because it only looks at text. We'd need to do proper machine learning on the data for that. And it's wasteful. Why not use sentiment analysis? Or train a classifier? Or do old-school statistics on the user's behaviour as reflected in the numbers in the database? That's more powerful, comes at a fraction of the computational cost, and probably has a lower error rate as well.
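As one example of the "old-school statistics" approach, an admin could flag pairs of accounts that vote on many of the same posts and almost always agree — a common signature of vote rings or sockpuppets. The data layout and thresholds here are made up for illustration; a real check would query the instance's vote table:

```python
from itertools import combinations

def suspicious_pairs(votes, min_overlap=5, min_agreement=0.9):
    """Flag account pairs with heavy overlap and near-total agreement.

    votes: {account: {post_id: +1 or -1}} — a toy stand-in for the
    instance's vote table.
    """
    flagged = []
    for a, b in combinations(sorted(votes), 2):
        shared = votes[a].keys() & votes[b].keys()
        if len(shared) < min_overlap:
            continue  # too few common posts to say anything
        agree = sum(votes[a][p] == votes[b][p] for p in shared)
        if agree / len(shared) >= min_agreement:
            flagged.append((a, b, agree / len(shared)))
    return flagged

example = {
    "alice": {1: 1, 2: 1, 3: -1, 4: 1, 5: 1},
    "bob":   {1: 1, 2: 1, 3: -1, 4: 1, 5: 1},
    "carol": {1: -1, 9: 1},
}
print(suspicious_pairs(example))  # [('alice', 'bob', 1.0)]
```

This is a few lines of counting per account pair — no model inference at all — which is the cost argument being made above.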

Edit: But I bet whoever unleashes LLM / bot "moderators" on the Fediverse is up to no good. That sounds like automatically pushing some political agenda, or dynamics even more abusive than on Reddit, not content moderation?!

[–] QuadratureSurfer@piefed.social 3 points 1 week ago (1 children)

In my opinion, detecting vote abuse should be something up to the instance Admins rather than a job for any non-local community moderator.

If a non-local community moderator suspects vote manipulation, they can reach out to the instance Admins to investigate.

The way it is right now, votes are almost entirely public. Most users coming from certain other popular platforms would not expect their votes to be as public as they actually are in the Fediverse.

[–] hendrik@palaver.p3x.de 2 points 1 week ago* (last edited 1 week ago)

> Most users [...] would not expect their votes to be as public as they actually are [...]

That's correct. I hear that all the time.

From what I hear, admins of larger instances are quite busy, and they try to delegate as much as possible to mod teams, etc. Some want to stay neutral and let the specific people handle feuds. Or they just run the infrastructure, and managing the crowd is up to other people. But I've also seen admins step in several times. Some seem to pay attention, or there's some automod. Idk, maybe we should ask some of the admins whether they're willing to handle that workload.

And another issue: we have some badly moderated instances in the network. I guess that'll be a problem here as well, since they don't really have active admins. It's difficult for admins of other instances to handle... If only anonymous votes come in, we'd need to ban entire instances from voting, because we couldn't tell which of their users are problematic.

I think it's just hard from the tech perspective, as the Fediverse is designed to work entirely the other way around, scattering metadata and actual data throughout the network.