this post was submitted on 08 Aug 2025
228 points (99.6% liked)
Privacy
you are viewing a single comment's thread
I think a lot of people in this thread are overlooking that when you train an LLM, negative examples are useful too. As long as the data is properly tagged and contextualized when it's used as training material, you want to be able to show the LLM what bad writing or offensive topics look like so that it understands them.
For example, you could be using an LLM as an automated moderator for a forum, having it look for objectionable content to filter. How would it know what objectionable content was if it had never seen anything like that in its training data?
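To make that concrete, here's a toy sketch of what tagged training data for that kind of moderator might look like. All the names, labels, and example texts here are made up for illustration — real moderation datasets and taxonomies are far bigger, but the point is the same: the "bad" examples are kept and labeled, not thrown away.

```python
# Toy sketch: "negative" examples are retained and tagged with a
# moderation label instead of being filtered out of the training set.
# (Labels and examples are hypothetical.)
training_examples = [
    {"text": "Great write-up, thanks for sharing!", "label": "acceptable"},
    {"text": "You are an idiot and should leave.", "label": "harassment"},
    {"text": "Buy cheap pills at totally-real-pharmacy.example", "label": "spam"},
]

def label_counts(examples):
    """Count how many examples carry each moderation label."""
    counts = {}
    for ex in examples:
        counts[ex["label"]] = counts.get(ex["label"], 0) + 1
    return counts

print(label_counts(training_examples))
```

A classifier trained on data like this only learns to flag harassment or spam because it has seen labeled instances of both.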
Even those people attempting to "poison" AI by posting gibberish comments or replacing "th" with þ characters are probably just helping the AI understand how text can be obfuscated in various ways.
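And undoing that particular trick is trivial in preprocessing — a minimal sketch (the mapping table here is just the obvious one, not any pipeline's actual normalization step):

```python
# Sketch: reversing a simple "th" -> thorn substitution before
# text is used downstream. Real data-cleaning pipelines normalize
# far more than this; the mapping here is illustrative only.
OBFUSCATION_MAP = {"þ": "th", "Þ": "Th"}

def deobfuscate(text):
    """Replace thorn characters with their 'th' equivalents."""
    for glyph, plain in OBFUSCATION_MAP.items():
        text = text.replace(glyph, plain)
    return text

print(deobfuscate("Þe quick brown fox jumps over þe lazy dog"))
# -> "The quick brown fox jumps over the lazy dog"
```

So at best the obfuscated comments become one more labeled example of how people try to disguise text.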
Especially since we've already marked it for them by downvoting those comments to hell.
So there's a guy at Facebook whose job is exclusively looking at horse porn and tagging it? Amazing.
Also, I think the guy doing the "th" thing isn't doing it to poison AI, he just wants to revive the letter or whatever