So two thoughts:
- Per Saltman's comments, the improvised incendiary bounced off the side of the house rather than breaking and spreading the gas on the house proper. Apparently if you want the bottle to break the way you intend, you gotta really just whang that thing, because glass bottles are sturdier than you'd think.
- One thing I find ironic about his referencing the New Yorker article on him is that part of my takeaway from that article was how mundane he is, individually. Like, he's a snake, but not in any way that isn't pretty standard once you get to that level of wealth and power. He credibly pretended to be a proper AI cultist for the critihype, and then as the rubber started hitting the road he pivoted in the direction that gave him and the company more money, even if it meant sacrificing the values that, it turns out, a lot of other people really cared about (however dumb I might think they are). That's shitty, but it's shitty in the most boring way that so many things are in the rot economy, and even if they had managed to kill Altman himself, there would just be another bunch of enterprising sociopaths ready to move into the same position. That profile is one of the strongest pieces of evidence for why, even if you are a hardcore AI doom cultist, you shouldn't focus your ire on the man himself, because he's just not that special.

The decision theory stuff itself ought to be called out more for playing pretty fast and loose with reality to begin with. "If you have a supercomputer that perfectly simulates blah blah blah" is such a fundamentally bad premise, because once you presume such a thing exists you're committing to the same basic metaphysical problems you'd run into if you replaced the computer with God. In particular, I think it commits you to hard determinism, at which point there's no sense arguing about what the right action is, because the answer was set in stone not just before you entered the room but when the initial state of the universe was set up. Like, there's a version of this where the question is meaningful, in which case the premise is impossible, and a version where we accept the premise as given and render the question pointless. Why are you doing decision theory in a hypothetical world where nobody really makes decisions?
Or we could acknowledge that Yudkowskian decision theory is just singularity apologetics and accept the impossible elements of the premise on faith.