this post was submitted on 20 Sep 2025

SneerClub

So seeing the reaction on lesswrong to Eliezer's book has been interesting. It turns out that even among people who already mostly agree with him, a lot were hoping he would make their case better than he has (either because they aren't as convinced as he is, or because they are, but were hoping for something more palatable to the general public).

This review (lesswrong discussion here) calls out a really obvious issue: Eliezer's AI doom story was formed before deep learning took off, and in fact was mostly focused on GOFAI rather than neural networks, yet somehow the details of the story haven't changed at all. The reviewer is a rationalist who still believes in AI doom, so I wouldn't give her too much credit, but she does note that this is a major discrepancy coming from someone who espouses a philosophy that (nominally) features a lot of updating your beliefs in response to evidence. The reviewer also notes that "it should be illegal to own more than eight of the most powerful GPUs available in 2024 without international monitoring" is kind of unworkable.

This reviewer liked the book more than they expected to, because Eliezer and Nate Soares get some details of the AI doom lore closer to the reviewer's current favored headcanon. The reviewer does complain that maybe weird and condescending parables aren't the best outreach strategy!

This reviewer has written their own AI doom explainer, which they think is better! From their limited description, I kind of agree, because it sounds like they focus on current real-world scenarios and harms (and extrapolate them to doom). But again, I wouldn't give them too much credit: it sounds like they don't understand why existential doom is actually promoted (as a distraction and a source of crit-hype). They also note the 8 GPUs thing is batshit.

Overall, it sounds like lesswrongers view the book as an improvement on the sprawling mess of arguments in the sequences (and scattered across other places like Arbital), but still not as well structured as it could be, or stylistically quite right for a normie audience (i.e. the condescending parables and diversions into unrelated science-y topics). And some are worried that Nate and Eliezer's focus on an unworkable strategy (shut it all down, 8 GPU max!) with no intermediate steps, goals, or options might not be the best.

[–] scruiser@awful.systems 8 points 2 days ago (28 children)

In Eliezer's "utopian" worldbuilding fiction concept, dath ilan, they erased their entire history just to cover up any mention of any concept that might inspire someone to think of "superintelligence" (and, as an added bonus, purge other wrong-think concepts). The ~~Philosopher Kings~~ Keepers have also discouraged investment and improvement in computers (because somehow, despite not holding any direct power, and despite the massive financial incentives and dath ilan being described as capitalist and libertarian, the Keepers can just sort of say their internal secret prediction market predicts bad vibes from improving computers too much, and everyone falls in line). According to several worldbuilding posts, dath ilan has built an entire secret city, funded with 2% of the entire world's GDP, to solve AI safety in utter secrecy.

[–] swlabr@awful.systems 6 points 2 days ago (15 children)

Thanks for the lore, and sorry that you had to ingest all that at some point.

Like a lot of sci-fi that I will comment on without having read, that sounds like a Dune ripoff (that I also have not read). Except, rather than "ok we tried the whole AI thing and it turned out bad", Yud is saying "my world is smarter because they predicted AI would be bad, so they just didn't try it, neener neener neener!"

Also, iterating on the name origins of "dath ilan," as I've said before, it's an anagram of Thailand. But I have a new hypothesis. It's formed from "Death Island" minus the letters "ESD". What could this mean?!?!?! Electrostatic discharge? "Eek, Small Dragon?" "Eliezer Sudden Death?" The abyss pondering continues...

[–] Architeuthis@awful.systems 12 points 2 days ago (8 children)

OG Dune actually had some complex and layered stuff to say about AI before the background lore was retconned to dollar store WH40K by the current handlers of the IP.

There was no superintelligence; thinking machines were gatekept by specialists who formed entrenched elites, overreliance on them was causing widespread intellectual stagnation, and people were becoming content with letting unknowable algorithms decide matters of life and death.

The Butlerian Jihad was first and foremost a cultural revolution.

[–] V0ldek@awful.systems 3 points 1 day ago (1 children)

It was very much a Luddite movement that succeeded.

[–] scruiser@awful.systems 1 points 1 day ago (1 children)

I mean, the aftermath of the Butlerian Jihad eventually led to brutal feudalism that lasted a really long time and halted multiple lines of technological and social development, so I wouldn't exactly call it a success for the common person.

[–] Architeuthis@awful.systems 2 points 7 hours ago* (last edited 7 hours ago)

Well, at least they got to find out exactly how far extreme mental discipline (and space psychedelics) could take you. You got Mentats, Truthsayers, Suk doctors, and so on and so forth. Not that the vast majority of the population ever got to see any benefit from them, because hey, feudalism, and they themselves were basically luxury slaves to the Great Houses, but it's not nothing I guess.
