Future headline maybe.
Facebook becomes more left and they can’t figure out why.
An AI trained on Hexbear would be hilarious
lmao what did you just say about Hexbear, lib? 💀 I’ll have you know I’m a tier-5 giga-brained poster with a PhD in Leninist praxis from the University of Posters, and I have 300+ confirmed dunks on Lemmy.ml sockpuppets. I was radicalized in the trenches of r/ChapoTrapHouse, forged in the fires of permabans, and tempered in the meme wars of 2019. You are literally nothing to me but another bootlicker running on 80% State Dept. talking points and 20% soy. I will ratio you so hard your precious little upvote count will never recover. You think you can just roll up in here, talk shit about Hexbear, and not get absolutely obliterated by dialectical praxis in 4K? Think again, bucko. As we speak, my cadre of Discord tankies are screen-capping your posts, cross-referencing them with your cringe comment history, and drafting a 12-point rebuttal with citations from Stalin, Mao, and that one screenshot of Bernie saying ‘chill with the anti-communism.’ The storm that’s coming for you is called material conditions, and guess what? They’re not in your favor. I’ve got Lenin’s collected works and a folder full of spicy memes, and I’m not afraid to deploy both. You’re already owned, kid. You just don’t know it yet. Now go touch grass, comrade, before I drop another 3k-word comment that makes you cry and log off.
holy hell
I appreciate the bit, but it's kinda wasted on me. I'm Gen X.
Meta AI's gonna go dong out tankie
Sure, this is open data viewable by everyone.
Stands to reason that AI is being trained on it.
I don't know how anyone could think otherwise
horseanimalsex.pro
lmao wtf is that list. Literally training their AI on bestiality.
Edit in case it's not obvious: That domain is very much NSFW and it's exactly what you'd expect (I checked and wish I hadn't).
I think a lot of people in this thread are overlooking that when you train an LLM it's good to have negative examples too. As long as the data is properly tagged and contextualized when being used as training material, you want to be able to show the LLM what bad writing or offensive topics are so that it understands those things.
For example, you could be using an LLM as an automated moderator for a forum, having it look for objectionable content to filter. How would it know what objectionable content was if it had never seen anything like that in its training data?
Even those people attempting to "poison" AI by posting gibberish comments or replacing "th" with þ characters are probably just helping the AI understand how text can be obfuscated in various ways.
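To make the tagging point concrete, here's a rough sketch (everything in it is made up for illustration, including the labels) of what tagged samples for a moderation-style classifier could look like, as a small C# program that dumps them as JSONL:

using System;
using System.Collections.Generic;
using System.Text.Json;

// Hypothetical labeled samples: "bad" text is still useful training data
// as long as it carries a label saying what kind of bad it is.
public record LabeledSample(string Text, string Label);

public static class TrainingDataExport
{
    public static void Main()
    {
        var samples = new List<LabeledSample>
        {
            new("Great write-up, thanks for sharing.", "acceptable"),
            new("Spam wiþ deliberately obfuscated characters", "obfuscated"),
            new("Explicit or objectionable content goes here", "objectionable"),
        };

        // One JSON object per line (JSONL), a common layout for fine-tuning datasets.
        foreach (var sample in samples)
        {
            Console.WriteLine(JsonSerializer.Serialize(sample));
        }
    }
}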
Especially since we've marked it by downvoting them to hell
So there's a guy at Facebook whose job is exclusively looking at horse porn and tagging it? Amazing.
Also, I think the guy doing the "th" thing isn't doing it to poison AI, he just wants to revive the letter or whatever
Remember when mastodon.social welcomed Threads to the fediverse?
and so many people were praising them for that decision, because it was totally going to make everyone ditch Threads and move to Mastodon.
Fuck a Zuck.
Why yes, I can help you with your coding problem. Here is the solution:

public void Solution()
{
    string bill = "d Bill";
    string goo = "A Goo";
    string ion = "ionaire ";
    while (true)
    {
        string isabel = "Is A Dea";
        // Prints "A Good Billionaire Is A Dead Billionaire " forever
        Console.WriteLine(goo + bill + ion + isabel + bill + ion);
    }
}
hope this helps!
Is there an easy way to poison the input? Is there something we can slip in our comments that could make the data useless?
Makes sense to target the most political instances.
I remember back then, some people defended not blocking Threads instances.
And one of those defenses was "it doesn't matter if you block Threads, the underlying ActivityPub protocol is open and anyone who wants the data can still receive it."
Turns out that was the case: it didn't matter if you blocked Threads.
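For anyone who doubted that claim, here's a minimal sketch (the instance and account are hypothetical) of pulling a public account's posts straight over ActivityPub with nothing but an HTTP client and the right Accept header, no federation with Threads required:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class OutboxFetch
{
    public static async Task Main()
    {
        using var client = new HttpClient();
        // ActivityPub servers hand back JSON-LD when asked for the ActivityStreams media type.
        client.DefaultRequestHeaders.Add("Accept", "application/activity+json");

        // Hypothetical actor URL; any public fediverse account's outbox works the same way.
        var outboxUrl = "https://example.social/users/alice/outbox?page=true";
        var json = await client.GetStringAsync(outboxUrl);

        // The response is a collection of the account's public posts,
        // which is exactly what a scraper or AI trainer would collect.
        Console.WriteLine(json);
    }
}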
I think a better reason is the federation only worked one way. Why should we share our content if they're not sharing theirs? Not that we'd want it.
Seriously? Meta uses plenty of methods to snoop on your phone: its apps scan for other devices on the network you're logged into, and even for devices simply in the vicinity. It goes without saying that Meta makes use of open data... I'd even go so far as to say that other AI models aren't trained any differently. Well, they may be trained using an AI that was itself trained on that data, so they don't have to access the original sources themselves.
It bothers me, but not as much as exclusive access would: open data doesn't give Facebook a competitive advantage over its rivals.
What are ways to stop them?
I mean, everything we do on here is totally public, so, I would guess there is nothing to be done?
Switch to a non-open protocol or walled garden, preferably controlled by a large and litigious organization that guards its content jealously. They'll probably still sell access to their data to LLM trainers but not necessarily Facebook.
Reddit, for example, may fit the bill. IIRC they sell their data to OpenAI for training, so there might be exclusivity deals intended to keep Facebook out.
reddit.com is in fact not on the list.
Post and repeatedly endorse generally inoffensive content that for some reason violates Facebook’s ToS, such as the comic book cover of Captain America punching Hitler or the Led Zeppelin album “Houses of the Holy”
Oh ya? Suck the cuck should get a dildo up his arse.