this post was submitted on 19 Jan 2026

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] scruiser@awful.systems 17 points 3 days ago* (last edited 3 days ago) (3 children)

TracingWoodgrains's hit piece on David Gerard (the 2024 one, not the more recent enemies list one, where David Gerard got rated above the Zizians as lesswrong's enemy) is in the top 15 for lesswrong articles from 2024, currently rated at #5! https://www.lesswrong.com/posts/PsQJxHDjHKFcFrPLD/deeper-reviews-for-the-top-15-of-the-2024-review

It's nice to see that, with all the lesswrong content about AI safety and alignment and saving the world and human rationality and fanfiction, an article explaining how terrible David Gerard is (for... checks notes, demanding proper, valid sources about lesswrong and adjacent topics on wikipedia) won out and got voted above them! Let's keep up our support for dgerard!

[–] Soyweiser@awful.systems 6 points 3 days ago* (last edited 3 days ago)

Wonder if that was because it basically broke containment (it still wasn't widely spread, but I have seen it in a few places, more than normal lw stuff) and went after one of their enemies. (And people swallowed it uncritically; wonder how many of those people now worry about NRx/Yarvin and don't make the connection.)

[–] corbin@awful.systems 5 points 3 days ago (1 children)

Picking a few that I haven't read but where I've researched the foundations, let's have a party platter of sneers:

  • #8 is a complaint that it's so difficult for a private organization to approach the anti-harassment principles of the 1964 Civil Rights Act and the 1965 Higher Education Act, which broadly say that women have the right not to be sexually harassed by schools, social clubs, or employers.
  • #9 is an attempt to reinvent skepticism from ~~Yud's ramblings~~ first principles.
  • #11 is a dialogue with no dialectic point; it is full of cult memes and the comments are full of cult replies.
  • #25 is a high-school introduction to dimensional analysis.
  • #36 violates the PBR theorem by attaching epistemic baggage to an Everettian wavefunction.
  • #38 is a short helper for understanding Bayes' theorem. The reviewer points out that Rationalists pay lots of lip service to Bayes but usually don't use probability. Nobody in the thread realizes that there is a semiring which formalizes arithmetic on nines.
  • #39 is an exercise in drawing fractals. It is cosplaying as interpretability research, but it's actually graduate-level chaos theory. It's only eligible for Final Voting because it was self-reviewed!
  • #45 is also self-reviewed. It is an also-ran proposal for a company like OpenAI or Anthropic to train a chatbot.
  • #47 is a rediscovery of the concept of bootstrapping. Notably, they never realize that bootstrapping occurs because self-replication is a fixed point in a certain evolutionary space, which is exactly the kind of cross-disciplinary bonghit that LW is supposed to foster.
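The "arithmetic on nines" aside is the one concrete math nugget in that list: availability folks write 99.9% as "three nines", i.e. nines(p) = -log10(1 - p), and for independent redundant replicas failure probabilities multiply, so nines add. Here is a minimal Python sketch of that additive structure (my own illustration of the general idea; not necessarily the specific semiring corbin has in mind):

```python
import math

def nines(p: float) -> float:
    """Express availability p as a count of nines: 0.999 -> 3.0."""
    return -math.log10(1.0 - p)

def from_nines(n: float) -> float:
    """Inverse: 3.0 nines -> 0.999 availability."""
    return 1.0 - 10.0 ** (-n)

def parallel(*ps: float) -> float:
    """Availability of independent redundant replicas: the system is
    down only when *every* replica is down, so failure probabilities
    multiply -- equivalently, their nines add."""
    failure = math.prod(1.0 - p for p in ps)
    return 1.0 - failure

# Two 99% ("two nines") replicas in parallel: nines add, 2 + 2 = 4.
combined = parallel(0.99, 0.99)  # about 0.9999
assert abs(nines(combined) - (nines(0.99) + nines(0.99))) < 1e-9
```

Which is exactly why "we pay lip service to Bayes but never compute" is such an easy sneer: the log transform that makes this arithmetic trivial has been sitting in reliability engineering for decades.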
[–] scruiser@awful.systems 5 points 1 day ago

To add to your sneers... lots of lesswrong content fits your description of #9, with someone trying to invent something that probably already exists in philosophy, from (rationalist, i.e. the sequences) first principles, and doing a bad job of it.

I actually don't mind content like #25, where someone writes an explainer on a topic. If lesswrong were less pretentious about it, more trustworthy (i.e. cited sources in a verifiable way and called each other out for making stuff up), and didn't include all the other junk, just having stuff like that would make it better at its stated goal of promoting rationality. Of course, even if they tried this, they would probably end up more like #47, rediscovering basic concepts because they don't know how to search existing literature/research and cite it effectively.

#45 is funny. Rationalists and rationalist-adjacent people started OpenAI, which ultimately ignored "AI safety". Rationalists spun off Anthropic, which also abandoned the safety focus pretty much as soon as it had gotten all the funding it could with that line. Do they really think a third company would be any better?

[–] blakestacey@awful.systems 13 points 3 days ago

The #5 article of the year was a crock of a few kinds of shit, and I have already spent too much time thinking about why.