Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many "esoteric" right wing freaks, but there's no appropriate sneer-space for them. I'm talking redscare-ish, reality-challenged "culture critics" who write about everything but understand nothing. I'm talking about reply-guys who make the same 6 tweets about the same 3 subjects. They're inescapable at this point, yet I don't see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this. Also, hope you had a wonderful Valentine's Day!)
Tante.cc writes about Cory Doctorow using a 'Drunk Uncle'-style argument to defend his LLM usage (and going after the left using strawmen).
(To counter one of Cory's arguments: if disliking LLMs were just about the people who run them, the people against them would have stayed in sneerclub.)
Quick update: The post's popped off in the Fediverse, and Doctorow's actively posting through it in the replies.
EDIT: Tante's also written a follow-up post, trying to convince people to tone down their vitriol against Cory.
I assume a lot of people are using this moment to do the 'I never liked him' hate.
I disagree with Tante on the second article btw. Don't think people drop others on a dime; I think it's a slower process where someone you look up to does more and more small things you dislike (or you reread them and start to realize you perhaps had slightly too rose-colored glasses on) and then your opinion turns. (With some exceptions of course; a lot of people have a few things they consider red lines, like a lot of leftwingers not being fans of sex crimes, or people on the right not being fans of treating poc like equals.)
E: I do have a hit skeet on bsky saying 'Guess even Doctorow must eventually enshittify'; hope this didn't trigger the blog post. (I meant it both as 'he got worse', but also I'm using enshittify intentionally wrong, because Cory said a very weird thing about how being anti-AI is neoliberal purity culture, which I also think is misusing terms.)
Good read, thanks.
as someone from a colonial country that never got the chance to partake in the wealth of fossil-fuel society but will take the brunt of its consequences as rich countries continue to burn carbon, what LLMs taught me is that "energy waste by the First World fucks up the Third, even more" does not even register as an ethical argument to the First World. like, it's treated as some sort of purity argument not even worth considering, an extremist position arguing abstractions and future hypotheticals, rather than, say, 478 cities in my country flooding with abnormal weather two years ago, etc.
That was a good read.
Cory Doctorow wrote:
Equivocating what LLMs do and what goes into LLM web scraping with "a search engine" is messed up. The article he links about scraping is mostly about how badly copyright works and how analysing trade-secret-walled data can be beneficial both to consumers and science but occasionally bad for citizen privacy, which you'll recognize as mostly irrelevant to the concerns people tend to have about LLM training-data providers DDoSing the fuck out of everything.
Cory also provides this anecdote:
what the actual shit
This is probably just me, but that doesn't seem particularly shocking. If this AI bubble's taught me anything, it's that tech culture (if not tech as a whole) was deeply, deeply vulnerable to the LLM rot from the start.
I was a bit alarmed by this; a client brought in that Colombia data for their dissertation last month and did not mention this. I looked up the paper (https://www.arxiv.org/abs/2509.04523): what they /actually/ did was use GPT-4o-mini only for feature extraction, then stack the features into a random forest in a supervised setting to dedupe. This is very different from what he described. And the GPT features weren't even the most important ones; the RF preferred cosine similarity of articles, a decidedly not-large approach...
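For anyone unfamiliar with how unglamorous that "decidedly not-large" signal is: here's a minimal sketch of cosine similarity over plain term-frequency vectors, the kind of feature the random forest reportedly preferred over the GPT ones. All names and the threshold are illustrative, not taken from the paper; in the actual pipeline this score would be one feature among several fed to a supervised classifier, not a hard cutoff.

```python
import math
from collections import Counter

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity of simple term-frequency (bag-of-words) vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    # Dot product only needs terms that appear in both documents.
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def looks_like_duplicate(text_a: str, text_b: str, threshold: float = 0.9) -> bool:
    # Illustrative hard threshold; a real dedup setup would feed the
    # score (and other features) into a trained model instead.
    return cosine_similarity(text_a, text_b) >= threshold
```

Identical articles score 1.0, articles with no shared words score 0.0, and near-duplicates land close to the top of the range, which is exactly why this decades-old trick still outperforms LLM-derived features at dedup.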
That he went from that all the way to "it's mostly ok when Sam Altman steals all your data, misrepresents it, and then steals all your traffic" is... bad.
At any rate, it's definitely good to know that that war-crime forensics data project isn't quite the unintentional shambles Cory makes it out to be.
This one hurts. Maybe CD can be brought back around but oof.
In the post he keeps referring to Ollama as an LLM (it's actually a desktop app that runs a local server, letting you download and interface with a local LLM via CLI or HTTP API), so it's possible he's just that far behind in his technical understanding of LLMs that he's fallen to taking the wrong people's word for it.
The post certainly reads like he doesn't even know which local LLM he's using, let alone what it takes to make one.