Well, you're more than welcome to reach out on more secure comms, such as Matrix, to get the proof! It's strange how that's basically standard for most other respectable instances, but not here.
Do I know whether they are the exact same individual? No. I can't know that, because I don't have IP information from other instances. Using this limitation as a bludgeon is dishonest. Admins who host sockpuppets and know it aren't likely to ever reveal this information.
Do I have clear evidence that the UM/CM0002/BarryGoldWater user(s) who attempted signup on my instance are bots? Definitely yes.
Now, I see you are a mod, not an admin. I don't typically share my methods with non-admins, and definitely not over a public forum like this.
If the dbzer0 admin wants my supporting information, they may DM me with their preferred Matrix handle/server, and I will happily discuss it there.
Give the documents then? And am I a bot now too?
Interesting how you continue to leave out the security implications of posting this publicly.
Odd that.
I don't know whether you are a bot; you aren't on my instance, nor is it likely you could get through my process.
I do know that you are awfully defensive of sockpuppet-like behavior, though.
Nooope, I have IP data, email logs, and other things. However, much of the data has fallen outside my WAF retention period. Oddly convenient how you just assume I don't have these things, rather than that I'm keeping them close because I don't want the bots to figure out how I'm catching them.
When you conveniently leave out that providing proof reduces an admin's ability to re-use certain detection methods, it makes me pretty convinced you're complicit.
How do you identify sockpuppets? Are they all from the same IP?
From an admin's perspective, most botnets do a good job of distributing most of their traffic. But the key is that they don't distribute ALL of their traffic.
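To illustrate the idea (this is a minimal sketch with made-up log entries and a made-up threshold, not the actual detection method): if a botnet concentrates even a fraction of its signup traffic, grouping attempts by network block makes the burst stand out.

```python
from collections import Counter
from ipaddress import ip_network

# Hypothetical signup log entries: (timestamp, source IP, email).
signups = [
    ("2024-01-01T10:00:01", "203.0.113.10", "a@example.com"),
    ("2024-01-01T10:00:05", "203.0.113.11", "b@example.com"),
    ("2024-01-01T10:00:09", "203.0.113.12", "c@example.com"),
    ("2024-01-01T11:30:00", "198.51.100.7", "d@example.com"),
]

def subnet_24(ip: str) -> str:
    """Collapse an IPv4 address to its /24 network block."""
    return str(ip_network(f"{ip}/24", strict=False))

# Count signup attempts per /24: a botnet that distributes *most* of
# its traffic may still concentrate the rest in a single block.
counts = Counter(subnet_24(ip) for _, ip, _ in signups)
suspicious = {net: n for net, n in counts.items() if n >= 3}
print(suspicious)  # -> {'203.0.113.0/24': 3}
```

Real deployments would obviously key on more than the subnet (timing, user agent, email domain), but the principle is the same: look for the traffic that wasn't distributed.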
From a user perspective my advice is generally "if it quacks like a duck"...
That is, is the persona that of an extreme stereotype? Are they overly contrarian? Is what they are doing destructive to those who claim similar identities? Then it's likely a sockpuppet.
And if it isn't - oh well, treat them like one anyways - it's better for society that way.
CM0002 may not be shaking the cage as hard, but he is still a bot, and he was associated with the same botnet when I got a burst of signups from UM and his alts.
BarryGoldWater is a bot associated with UM. UM is also associated with CM0002 from an IP standpoint, given the last "bot signup attack" I experienced. (Fun fact: they use "barrygoldwater" in the email address they sign up with.)
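That kind of cross-check can be sketched roughly like this. The usernames and addresses below are example values only, and this is not presented as the admin's actual method:

```python
# Suspected alt usernames to cross-reference (example values only).
alts = {"barrygoldwater", "cm0002"}

# Hypothetical signup attempts seen during a burst: (username, email).
attempts = [
    ("newuser42", "barrygoldwater99@mailhost.example"),
    ("cm0002", "cm0002@mailhost.example"),
    ("randomperson", "jane.doe@mailhost.example"),
]

def linked_to_alt(email: str) -> bool:
    """Flag emails whose local-part contains a known alt name."""
    local = email.split("@", 1)[0].lower()
    return any(alt in local for alt in alts)

# Signups whose email ties them back to the suspected alt cluster.
flagged = [(user, mail) for user, mail in attempts if linked_to_alt(mail)]
print(flagged)
```

The point of the check is that a username is easy to vary, but operators often reuse identifying strings in places they assume nobody is correlating, like the signup email.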
Or rather, does this admin have a WAF? Are they analyzing the traffic that comes in? Are they sure they're checking every point of interaction for consistency? If not, then they didn't really "check".
This admin will state that UM is a bot. And wouldn't ya know it, some of the other signup attempts used the alt names. Weird, that.
I don't think many admins know infosec practices very well, to be frank.
Fuck all, but luckily TrickDacy is here to instantly believe any baseless accusation.
As an admin who had to fend off UM's bot signups - it's definitely not unfounded.
I have a more effective way of confirming things like this if interested…
Probably not more effective than my method - but you need to be an instance admin to be able to use my method.
I certainly don't doubt the top-line trends in this study. However, I wonder how the fediverse might differ. Anyone can set up a Lemmy or Mastodon instance, regardless of their technical aptitude or their desire to secure the instance from toxic content. It's also inherently more anonymous. A more direct comparison might be 4chan, not Reddit.
Both of the platforms they studied have more sophisticated methods for detecting bad actors because of their dominance, particularly Facebook, where a profile is supposed to map to a single, real identity.
That said, there's a very real concern about how algorithms end up placing these "loudmouths" in other people's feeds. After all, algorithms still favor outrage, so the 3 to 7% of users creating the toxic content might represent an outsized proportion of views.
It's good to know that the reality on these platforms is that most people are reasonable. I guess the bigger question is why people come to the opposite conclusion, and I think algorithms over-indexing on outrage are part of that.