Our Universe: A Prison
August 16, 2013
There are reasons to suspect we are living in a simulation, some empirical and some logical. If that is true, we might conjecture about the ethical qualities of our overseers, as well as their competence. It’s possible our universe has simply been forgotten, a scenario that brings to mind the final scene of Ran Prieur’s novel, Apocalypsopolis:
But now he was drifting upward. The rock dwindled and the stars…
Were they getting closer? Yes! He could see now that they were just tiny holes in a great black shell with supporting girders. He caught onto one of them and pulled himself along until he came to a protruding black handle, a crank. He turned it, and a great seam of blinding light opened, and he slipped through.
He was in a bedroom, in a corner. Below him was the Earth, the size of a basketball, on a little platform, with the black perforated shell over the top. A label said Planetarium.
Over at a desk was a kid, a twelve year old boy with glasses, playing some kind of computer game. Sirach stepped out of the corner, over some dirty clothes and boxes, and the kid turned and looked at him. “Who are you?”
“I just came out of the Earth.”
Sirach pointed at it.
“Oh, that thing.” He looked vaguely guilty.
“Is it yours?”
“Yeah, I guess so.”
“And you’re supposed to be taking care of it?”
“That’s what my dad says too.”
“Yeah, I thought it’d be dead by now.”
“You just left it in the corner to die?”
“Sorry. I took care of it for a while. After I built it up to dinosaurs, it got boring. Dinosaurs are pretty much all that model’s good for.”
There are some problems with the simulation idea, however. First, there’s the assumption of substrate independence, which Bostrom derives from computationalism. Judging from the past failed metaphors for mind, the outlook for computationalism isn’t bright. Whatever technology is viewed as most advanced in a given era always becomes the darling metaphor for the mind, but that only tells us about that society’s level of development. Deriving mind from symbol manipulation is probably a completely backward way of thinking, although I don’t presume to be certain. But in linguistics, where I do know enough to judge, it’s a complete failure. Grammaticality is a matter of degree. Human judgments seem mostly aesthetic, balancing yin and yang, almost as if the brain were composed of some sort of network of “cells” summing potentials stochastically. That state of affairs could arguably also be implemented in a computer, but Bostrom appears committed to computationalism proper.
Another problem is that, if we assume Bostrom’s third case is true, that we are in a simulation, it immediately reduces the probability that we can ever create a simulation ourselves, which in turn increases the probability that no posthuman civilizations exist. This is because if sims could create sims, it would place resource constraints on the base computer. Furthermore, being in a simulation would at least call into question our ideas of physics, on which the idea of computability rests. If holding the proposition reduces the likelihood of the proposition, it’s probably wrong.
Then there’s the very significant fact that, as mammals who live on the planet Earth, there’s no reason to believe we will ever build giant computers and attain any sort of “posthuman state”. It’s just highly unlikely. Species tend to be limited by what the biosphere provides them.
Let’s consider the energy required for a simulator that actually runs 10^36 operations per second at room temperature. Not being a scientist, I’m going to assume for the moment that one operation in Bostrom’s terminology costs at least one Landauer erasure. Using Landauer’s principle (kT ln 2 joules per bit erased) and multiplying by the number of operations per second that Bostrom suggests, we get an energy consumption on the order of petajoules per second:
1.38×10^-23 J/K × ln 2 (≈0.69315) × 298.15 K × 10^36 ops/s ≈ 2.85×10^15 W
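The arithmetic above is easy to check with a short script. This is a minimal sketch assuming, as in the text, that each of Bostrom’s 10^36 operations per second erases one bit at room temperature; the Tsar Bomba yield figure (roughly 50 megatons of TNT) is an approximation I’m supplying for the comparison that follows.

```python
# Landauer-limit power estimate for a simulator performing 1e36
# irreversible bit operations per second at room temperature.
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
T = 298.15            # room temperature, K
OPS_PER_SEC = 1e36    # Bostrom's suggested operations per second

# Landauer's principle: erasing one bit costs at least kT ln 2 joules.
energy_per_op = K_B * T * math.log(2)   # J per bit erased
power = energy_per_op * OPS_PER_SEC     # required power, W

TSAR_BOMBA_J = 2.1e17  # approx. yield of the Tsar Bomba (~50 Mt TNT)

print(f"Required power: {power:.2e} W")
print(f"One Tsar Bomba equivalent every {TSAR_BOMBA_J / power:.0f} s")
```

Running this gives a power draw of roughly 2.85×10^15 watts, i.e. petajoules per second, matching the figure above.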
This would require releasing roughly the energy of one Tsar Bomba (about 2×10^17 J) every seventy seconds or so, for as long as the simulation runs. Even given the possibility of time compression inside the simulation, the total energy consumption remains gigantic. It’s telling that no one I’ve seen in the discussion seems to have noticed this as relevant. But it’s hard to evade the consequent fact that no naturally evolved species, even a tool-making ape, will ever reach such a posthuman state.
The greatest objection, however, is that creating an ancestor simulation would be so unethical that even humans would decline to do it. Creating new sentient beings is not an obviously natural act, and so it would be scrutinized the way cloning is today, or the way procreation ought to be. It would be apparent to the simulators that none of the advances they had made would be available to us inside the simulation. Just as we could not collectively bring ourselves to simulate the Stone Age, they would not be able to collectively bring themselves to simulate our age.
Very expensive collective undertakings are usually pursued against some enemy, a negative inducement, not in order to create new people. Even panspermia has not garnered much traction. No motive is in fact given for why a posthuman civilization would want to do this. It would be wrong to compare it to our motivation to colonize space, because that case draws on the drive for conquest and exploration, and empires are forged on the basis of self-interest. But if we built a simulation, we would conquer nothing, learn little, and inflict much pain.
There is one situation, however, in which we pursue goals that conquer no space, teach us little, and inflict much pain: punishment. Our universe could be being used as a kind of penal colony for pre-existing beings in the base universe. Provided there were enough prisoners, and keeping us alive in physical form proved more expensive, mind-uploading could be pursued to both keep us safely at bay and ensure that we undergo some sort of life lesson. It is also likely that if we developed simulations ourselves (at least, simulations whose inhabitants are amnesiac regarding any possible previous state, as we are), these would also be prisons.
Because of the constraints on collective behavior, I consider imprisonment to be the almost certain conclusion if the simulation argument is true. If that is the case, we might hypothesize that something like karma does exist, with each of us being punished for what we committed in the past. This raises the further question of whether we will be here repeatedly, until the lesson is learned, or whether our time in the simulation is a death sentence.