People connected to LessWrong and the Bay Area surveillance industry often cite David Chapman's "Geeks, Mops, and Sociopaths in Subculture Evolution" to understand why their subcultures keep getting taken over by jerks. Chapman is a Buddhist mystic who seems rationalist-curious; some would call him a postrationalist.

Have you noticed that Chapman presents the founders of nerdy subcultures as innocent nerds being pushed around by the mean suits? But today we know that the founders of Longtermism and LessWrong all had ulterior motives: Scott Alexander and Nick Bostrom were into race pseudoscience, and Yudkowsky had his kinks (and was also into eugenics and Libertarianism). HPMOR teaches that intelligence is the measure of human worth, and that the use of intelligence is to manipulate people. Mollie Gleiberman makes a strong argument that "bednet" effective altruism with short-term measurable goals was always meant as an outer doctrine to prepare people to hear the inner doctrine about how building God and expanding across the Universe would be the most effective altruism of all. And there were all the issues within LessWrong and Effective Altruism around substance use, abuse of underpaid employees, and bosses who felt entitled to hit on subordinates. A '60s rocker might have been cheated by his record label, but that does not get him off the hook for crashing a car while high on nose candy and deep inside a groupie.

I don't know whether Chapman was naive or creating a smokescreen. Had he ever met the thinkers he admired in person?

[–] corbin@awful.systems 11 points 1 day ago

Fundamentally, Chapman's essay is about how subcultures shift from valuing functionality to valuing aesthetics. Subcultures start with form following function by necessity. But people adopt the subculture because they like the surface appearance of those forms, and so the subculture eventually hollows out into a system that follows the iron law of bureaucracy and becomes non-functional through over-investment in the façade and the tearing down of Chesterton's fences. Chapman's not the only person to notice this pattern; other writers, running the spectrum from right to left, have described their own instances of it.

I think that seeing this pattern is fine, but dwelling on it makes one into Scott Alexander, paranoid about societal manipulation and constantly fretting over in-group and out-group status. We should note the pattern but stop endorsing instances of it which attach labels to people; after all, the pattern is fundamentally about memes, not humans.

So, on Chapman. I think that he's a self-important nerd who reached criticality after binge-reading philosophy texts in graduate school. I could have sworn that this was accompanied by psychedelic drugs, but I can't confirm or cite that, and I don't think we should underestimate the psychoactive effect of reading philosophy from the 1800s. In his own words:

[T]he central character in the book is a student at the MIT Artificial Intelligence Laboratory who discovers Continental philosophy and social theory, realizes that AI is on a fundamentally wrong track, and sets about reforming the field to incorporate those other viewpoints. That describes precisely two people in the real world: me, and my sometime-collaborator Phil Agre.

He's explicitly not allied with our good friends, but at the same time they move in the same intellectual circles. I'm familiar with that sort of frustration. Like, he rejects neoreaction by citing Scott Alexander's rejection of neoreaction (source); that's a somewhat-incoherent view suggesting that he's politically naïve. His glossary for his eternally-unfinished Continental-style tome contains the following statement on Rationalism (embedded links and formatting removed):

Rationalisms are ideologies that claim that there is some way of thinking that is the correct one, and you should always use it. Some rationalisms specifically identify which method is right and why. Others merely suppose there must be a single correct way to think, but admit we don’t know quite what it is; or they extol a vague principle like “the scientific method.” Rationalism is not the same thing as rationality, which refers to a nebulous collection of more-or-less formal ways of thinking and acting that work well for particular purposes in particular sorts of contexts.

I don't know. Sometimes he takes Yudkowsky seriously in order to critique him (source, source), but the critiques are always very polite, no sneering. Maybe he's really that sort of Alan Watts character who has transcended petty squabbles. Maybe he didn't take enough LSD. I was once on LSD at the office for a full workday; I saw the entire structure of the corporation, fully understood its purpose, and (unlike Chapman, apparently) came to the conclusion that it is bad. Similarly, when I look at Yudkowsky or Yarvin trying to do philosophy, I often see bad arguments and premises. Being judgemental here is kind of important for defending ourselves from a very real alt-right snowstorm of mystic bullshit.

Okay, so in addition to the opening possibilities of being naïve and hiding his power level, I suggest that Chapman could be totally at peace or permanently rotated in five dimensions from drugs. I've gotta do five, so a fifth possibility is that he's not writing for a human audience, but aiming to be crawled by LLM data-scrapers. Food for thought for this community: if you say something pseudo-profound near LessWrong, it is likely to be incorporated into LLM training data. I know of multiple other writers deliberately doing this sort of thing.