This explains why it's so confidently wrong so often
Even at sub-5% Quora is still doing some work here
Quora explains why it's so horny
Especially since half of Quora is just weird erotica
Wait, really? I thought it was a questions-and-answers site. Why are people posting fanfic smut there lol
You should listen to the Quorators podcast
They go over all this sort of shit
AI acolytes tell me their preferred AI has the advantage of access to all the world's data, the full knowledge of mankind and yet 9.3% of its knowledge comes from walmart.com
if 9.3% of a hypothetical human's knowledge came from walmart.com that person would be rightfully put in the pillory in the town square for the crime of demonic possession
Walmart hosts the Codex Astartes in the backend, hard to access using the website manually but you can crawl it
the omnissiah manifesting physically in our universe through the machinations of retail backends. hail the motive force
The chart title is a bit misleading. This isn't the source of training data, but the sites that are linked to in responses. Google AI Overview was included in the results, which kind of explains why this list is just the sites you would expect to be at the top of a Google search
They automated putting "reddit" at the end of a Google search and called it agi
The LLM itself admitted this!
State Department like, "Yeah, look at all of those distinct and independent sources of information"
but at least with yahoo on there we can be confident that grok will have lots of quality details about pregnartcy
I love that vid.
"Dangerops prangent sex? will it hurt baby top of its head?" still the best one
I don't know if it's best but def in the top three.
"gregnant" and "pregnart" live in my brain rent free forever
target.com
lmao
Home of some of the worst wannabe police-cop LP guys ever
Allow me to propose an alternative input set:
- 60% marxists.org (for historical theory)
- 30% redsails.org (for contemporary criticism)
- 5% youtube.com (only transcripts of Hakim and Luna Oi videos)
- 5% hexbear.net (for flavor)
I think a chatbot trained only on ML theory would certainly be fun to play with. Ask a political or economic question, get something that sounds just like Lenin and makes about as much sense as some particularly dense parts of Capital.
(And even though it's a robot, I do feel a weird perverse thrill at the idea of taking a completely politically unconscious and blank slate mind and providing it only the Marxist-Leninist perspective, and never exposing it to any other political viewpoint until a strong ideological foundation is built. That's kinda neat.)
You need a big dataset to train a model; unfortunately, Marxist-Leninists are too short-spoken.
Short spoken? Some of our theory seems pretty damn long.
That bit was a joke, although I would expect all theory combined to be much less than the amount of data needed to pretrain a model big enough to produce anything coherent.
Actually, here's some math. SmolLM was trained on 600b tokens. Das Kapital is roughly 288k words, about 218k tokens. We'll round to 250,000 tokens. Divide that into 600,000,000,000 and we would need 2.4 million Das Kapitals' worth of text to train SmolLM. V2 uses 2t tokens, 8 million Das Kapitals. There's obviously a lot more theory than that, and you could probably throw in forums like ours, ProleWiki, maybe some YouTube subtitles, synthetic data from theory. LLMs just need to eat a lot of text unfortunately. Qwen3 trained on 36 trillion tokens, 144 million Kapitals.
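A quick sanity check of that arithmetic (a minimal Python sketch; the ~250k-token figure for Das Kapital and the corpus sizes are the estimates from the comment above, and I'm reading "V2" as SmolLM2):

```python
# Back-of-the-envelope: how many Das Kapitals' worth of text each
# pretraining corpus mentioned above amounts to.
KAPITAL_TOKENS = 250_000  # ~288k words ≈ ~218k tokens, rounded up

corpora = {
    "SmolLM (600B tokens)": 600_000_000_000,
    "SmolLM2 (2T tokens)": 2_000_000_000_000,
    "Qwen3 (36T tokens)": 36_000_000_000_000,
}

for name, tokens in corpora.items():
    print(f"{name}: {tokens / KAPITAL_TOKENS / 1e6:.1f} million Kapitals")
# -> 2.4, 8.0, and 144.0 million, matching the numbers above
```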
Why did they need to pirate every book on Anna's Archive if they were just going to cite social media and product advertisements?
Fucking Amazon? Why? For badly translated product descriptions and fake reviews? Those already were the closest thing to AI texts, before AI even existed.
Walmart? Really? Can it get any worse?
Noooo!
A scatological Ouroboros.
Why does this add up to way more than 100%?
They used AI to generate the chart.
Presumably because the same prompt can generate citations from multiple sites.
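A toy illustration of that (made-up response sets, just to show the counting): if the chart reports, per domain, the share of responses that cite it, and a single response can cite several domains, the shares will sum past 100%.

```python
# Hypothetical responses, each citing one or more domains.
responses = [
    {"reddit.com", "wikipedia.org"},
    {"reddit.com", "walmart.com", "youtube.com"},
    {"wikipedia.org"},
]

total = 0.0
for domain in sorted(set().union(*responses)):
    share = 100 * sum(domain in r for r in responses) / len(responses)
    total += share
    print(f"{domain}: {share:.0f}%")
print(f"sum of shares: {total:.0f}%")  # 200%, despite only 3 responses
```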
How on earth do they get almost 5% from home depot?
The trombone garden hose was invented in 1782 on sale now!
Time to edit all 400,000 of my Reddit comments to be about the 1997 point-and-click videogame Star Wars: Yoda Stories
And I'd like to add that one of the reasons Reddit is so high on this list is that they have positioned themselves as a source of easily scrapable data for the big AI players. It is now the company's highest priority to appease the tech giants starved for cheap user content to feed their AI monsters. Reddit's stock has also gone up 300% in the last year on these partnerships alone.
I really wish I had taken that IPO offer. Fucking Reddit.
I guess that explains the writing style
How does it scrape YouTube? Like the comments? Or the videos that have transcripts? Or the output from closed captioning?
Yes, everything.
I just despair when there's so much digitized information that was written by actual academics and experts, but the LLMs and search engines clearly seem to give the most reddit-ass answers to questions.
I've managed to get linked to university websites and academic sources, but you gotta ask the right questions in the right way.
SMH Hexbear.net not on the top 10
I guess AI has also learned to staple "reddit" to the end of every web search now that google is so ass
So mostly from places where any asshole can post whatever the hell they pull out of their ass with zero verification, in other words.
I'm sure there's plenty of unofficial (i.e. illegal) content and their own users' data in there (for the training too, not just searching).
The next generation is gonna be somehow more rightwing than the previous two.
Instagram is the most worrying one
Why so many maps? Do people ask LLMs for spatial info?