this post was submitted on 13 Apr 2026
10 points (100.0% liked)

SneerClub


This was posted on Catholic Easter Sunday on the SSC subreddit. It's a posted-on-April-1st-for-plausible-deniability Siskind post from back in 2018, where he outlines a kind of argument that an all-powerful entity that's God in all but name (and obviously emanated from a culture discovering AGI) is actually "logically necessary".

He calls the whole thing "The Hour I First Believed". I think it's notable as a bit of a treasure trove of weird accepted rationalist truths, such as:

  • All copies of a consciousness share a self, because consciousness is like an equation, or something:

But if consciousness is a mathematical object, it might be that two copies of the same consciousness are impossible. If you create a second copy, you just have the consciousness having the same single stream of conscious experience on two different physical substrates.

Which is both the original transhumanist cope to enable so-called consciousness upload so it's not just copying a simulacrum of your personality to a computer while you continue to rot away, and also what makes the basilisk torturing you possible.

  • And its corollary, Simulation Capture:

This means that an AI can actually “capture” you, piece by piece, into its simulation. First your consciousness is just in the real world. Then your consciousness is distributed across one real-world copy and a million simulated copies. Then the AI makes the simulated copies slightly different, and 99.9999% of you is in the simulation.

Which is a kind of nuts I hadn't happened upon before.

There's also a bunch of rationalist decision theory stuff which I think makes it obvious these theories were concocted to serve this type of narrative in the first place, rather than being broadly useful, Yud posing as a decision theory trailblazer notwithstanding.

top 8 comments
[–] YourNetworkIsHaunted@awful.systems 5 points 4 hours ago (1 children)

The decision theory stuff itself ought to be called out more for playing pretty fast and loose with reality to begin with. "If you have a supercomputer that perfectly simulates blah blah blah" is such a fundamentally bad premise because, once you presume such a thing exists, you're committing to the same basic metaphysical problems you would if you replaced the computer with God. In particular, I think it commits you to hard determinism, at which point there's no sense arguing about what the right action is, because the answer was set in stone not just before you entered the room but when the initial state of the universe was set up. Like, there's a version of this where the question is meaningful, in which case the premise is impossible, and a version where we accept the premise as given and render the question pointless. Why are you doing decision theory in a hypothetical world where nobody really makes decisions?

Or we could acknowledge that yudkowskian decision theory is just singularity apologetics and accept the impossible elements of the premise on faith.

[–] Soyweiser@awful.systems 3 points 3 hours ago (1 children)

On a different note, 'our god means you have no free will' is also quite opposed to what I got from Christianity.

[–] Architeuthis@awful.systems 1 points 51 minutes ago

Christianity certainly runs the gamut wrt free will, from it being strictly necessary to explain away the problem of evil to, well, Calvinism.

[–] CinnasVerses@awful.systems 4 points 4 hours ago* (last edited 4 hours ago)

There's also a bunch of rationalist decision theory stuff which I think makes it obvious these theories were concocted to serve this type of narrative in the first place, rather than being broadly useful, Yud posing as a decision theory trailblazer notwithstanding.

Anna Salamon talked about that obliquely after CFAR burned out in 2020.

I think CFAR's actions were far from the kind of straight-forward, sincere attempt to increase rationality, compared to what people might have hoped for from us, or compared to what a relatively untraumatized 12-year-old up-and-coming-LWer might expect to see from adults who said they were trying to save the world from AI via learning how to think...I didn't say things I believed false, but I did choose which things to say in a way that was more manipulative than I let on, and I hoarded information to have more control of people and what they could or couldn't do in the way of pulling on CFAR's plans in ways I couldn't predict, and so on.

It's the same old story as the Libertarians who tell each other they are conning the Liberals, and just have the one thing in common with the fascists and oligarchs. Most of these people think they are conning everyone around them and can spread their favourite crazy idea without being infected by everyone else's.

[–] CinnasVerses@awful.systems 3 points 5 hours ago (1 children)

I am not reading a SlateStar essay early on a Monday, but I think this is a response to Yud's teaching that a copy of you is really you, so Colossus can really bring you back to live in digital heaven / hell. '90s Star Trek had some episodes about 'what if the transporter makes two copies of you?' Scott Alexander / SlateScott avoids talking about Yudkowsky's ideas in detail. I used to think he saw Yudkowsky as someone who got the rubes in the door to hear the good word about race and IQ, but then they worked on AI 2027 together. https://pivot-to-ai.com/2025/08/17/ai-doomsday-and-ai-heaven-live-forever-in-ai-god/

[–] Architeuthis@awful.systems 1 points 49 minutes ago

I think this is a response to Yud’s teaching that a copy of you is really you

It's not so much a response as it is just running with it until you hit the concepts of the soul and the godhead face first.

[–] Evinceo@awful.systems 8 points 7 hours ago (1 children)

Which is both the original transhumanist cope to enable so-called consciousness upload so it’s not just copying a simulacrum of your personality to a computer while you continue to rot away

Often-missed point btw.

Ah yes, the Soma problem. I can't think of another premise of the transhumanist not-faith that can be so viscerally upsetting when wrong.