The decision theory stuff itself ought to be called out more for playing fast and loose with reality to begin with. "If you have a supercomputer that perfectly simulates blah blah blah" is a fundamentally bad premise, because once you presume such a thing exists you're taking on the same basic metaphysical problems you'd have if you replaced the computer with God. In particular, I think it commits you to hard determinism, at which point there's no sense arguing about what the right action is: the answer was set in stone not just before you entered the room but when the initial state of the universe was fixed. There's a version of this where the question is meaningful, in which case the premise is impossible, and a version where we accept the premise as given and the question becomes pointless. Why are you doing decision theory in a hypothetical world where nobody actually makes decisions?
Or we could acknowledge that Yudkowskian decision theory is just singularity apologetics and accept the impossible elements of the premise on faith.
Ah yes, the Soma problem. I can't think of another premise of the transhumanist not-faith that is so viscerally upsetting when it turns out to be wrong.