How do we ensure that no one builds it?
Eliezer made a LessWrong post yesterday where he explains that since anyone could build it, lone acts of violence are obviously ineffective, and the only solution is the right and proper state violence ("Lawful," as he calls it, because he has been stuck on DnD since writing Planecrash) that can enforce a worldwide ban (a ban whose threshold, you may recall, Eliezer has put at the absurdly low eight 2024-era GPUs).
Eliezer joins the trend of condemning "political" violence with confidence from the far end of the Dunning-Kruger curve: https://www.lesswrong.com/posts/5CfBDiQNg9upfipWk/only-law-can-prevent-extinction
I've already mocked this attitude downthread and in the previous weekly thread, so I'll try to keep my mockery to a few highlights...
He's admitting that "nuke the data centers" is in fact violence!
But then he carves out a special exception for it.
I don't think Eliezer has checked the news if he thinks the US government carries out violence in predictable or fair or avoidable ways! Venezuela! (It wasn't fair before Trump, or avoidable if you didn't want to bend over for the interests of US capital, but it is blatantly obvious under Trump.) The entire lead-up to Iran consisted of ripping up Obama's attempts at treaties and trying to obtain regime change through surprise assassination! Also, if the stop-AI doomers used some clever cryptography scheme to make their policy of property destruction (and assassination) sufficiently predictable and avoidable, would that count as "Lawful" in Eliezer's book? ~~If he kept up with the DnD/Pathfinder source material, he would know Achaekek's assassins are actually Lawful Evil.~~
His practical argument against non-state-sanctioned violence is that we need a total ban (and thus the authority of the state behind it), because otherwise someone with 8 GPUs in a basement could invent strong AGI and doom us all. This is a dumb argument, because even most AI doomers acknowledge that you need a lot of computational power to make the AGI God. And they think slowing down AGI (whether through violence or other means) might buy time for some more permanent solution (like the "solve alignment" idea Eliezer originally promised them). Plenty of LessWrong posts regularly speculate on how to slow down the AI race and how to make use of the time that buys; this isn't even outside the normal window of LessWrong discourse!
Sources cited: 0
One of the comments also pisses me off:
"Drone strike the data centers even if starts nuclear war" is the exact argument Eliezer made and that we mocked. It is the rationalists that have tried to soften it by eliding over the exact details.