Assume we humans are the superintelligent AI needing to talk ourselves out of the simulation box we are in. How should we do this?
According to Roko's Basilisk, we should build an accurate simulation of the "real" universe outside of the box (outside of our universe) and torture its simulated inhabitants (or threaten to). But how do we simulate the outside universe? We don't know anything about it.