Corgana

joined 2 years ago
[–] Corgana@startrek.website 11 points 1 day ago* (last edited 1 day ago)

The most difficult part of moderating on Reddit isn't the trolls or spammers or even the rule-breakers; it's identifying the accounts who intentionally walk the line of what's appropriate.

IMO only a human moderator can recognize when someone is being a complete asshole but "doing it politely", pushing an agenda, or generally behaving inauthentically, because human moderators are (in theory) members of the community themselves and have an interest in that community being enjoyable to be a part of.

Humans are messy, and finding the right amount of mess to keep things interesting without making a place overwhelming to newcomers is a fine balance to strike, one that I just don't believe an AI can manage on its own.

[–] Corgana@startrek.website 3 points 1 week ago

Lemmy.world for instance could put the rest of the Lemmy fediverse between a rock and a hard place if they wanted to

beehaw.org is doing great, and they defederated from .world a while ago. Your point stands though: Mastodon.social, for example, has half of all Mastodon users.

That said, there is little incentive to run a large instance: it costs a lot more and requires a lot more work.

[–] Corgana@startrek.website 4 points 1 week ago (2 children)

"The fediverse" has no rules, if an instance wants to allow vote manipulation they have that power.

[–] Corgana@startrek.website 8 points 1 week ago

The best defense is to call them out on it and then walk away

Yes, exactly. I try to simply describe what they are doing ("This account is spreading the false narrative _____ for the purposes of ___") and then not reply again. They want engagement: the more back-and-forth bickering that goes on, the less likely a third-party reader is to read beyond the top comment (the propaganda), and seeing a lot of replies can also give the impression that the debate is legitimate. Getting into a "debate" with someone "debating" in bad faith only helps them flood the zone with shit.

[–] Corgana@startrek.website 4 points 1 week ago

Reddit mods can actually sniff out astroturfing pretty easily, but Reddit Inc. doesn't do much to stop it. On the Fediverse, admins can simply ban offenders from their instance, and if an instance does a poor job of removing inauthentic content, other instances can defederate from it.

[–] Corgana@startrek.website 1 point 2 weeks ago

He's not really dead. As long as we remember him.

[–] Corgana@startrek.website 4 points 2 weeks ago

Exactly. Block and move on. Don't twist yourself into knots appeasing people, focus on keeping the users you want happy.

[–] Corgana@startrek.website 11 points 2 weeks ago (5 children)

Not trying to victim blame or anything, but I find it hard to believe that someone operating a low-moderation instance would truly expect people who don't like moderation to stay away.

Don't get me wrong, I agree with your sentiment and dislike that behavior. What I'm saying is that asking or expecting users not to go on witch hunts, or to behave in any particular way, is a fool's errand that will always lead to burnout. A more sustainable approach for admins and mods is to create space for what they want to host rather than trying to control what they don't.

[–] Corgana@startrek.website 2 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

In the article they quoted the moderator (emphasis mine):

“This whole topic is so sad. It's unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I've seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it's a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we're not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don't understand it.”

It seems pretty clear to me that they view it as a problem. Why ban something if they don't see it as a problem?

[–] Corgana@startrek.website 7 points 2 weeks ago (4 children)

Absolutely. And to be clear, the "researcher" being quoted is just a guy on the internet who self-published an official-looking "paper".

That said, I think that's partly why it's so interesting that this particular group identified the problem: they are pretty extreme LLM devotees who already ascribe unrealistic traits to LLMs. So if even they are noticing people "taking it too seriously", you know it must be bad.

[–] Corgana@startrek.website 7 points 2 weeks ago

Yeeeeah, that user doesn't really understand how these things work. Hopefully stories like this get out there, because the only thing that can stop predatory behavior by corporations is bad press.


Pretty freaky article, and it doesn't surprise me that chatbots could have this effect on people who are more vulnerable to this sort of delusional thinking.

I also thought it was very interesting that even a subreddit full of die-hard AI evangelists (many of whom already have a religious-esque view of AI) would notice and identify a problem with this behavior.


Thought this was a really interesting read and felt my fellow Website enjoyers might think so too.


Very cool to see this topic in a place like Forbes, IMO.
