I have a love/hate relationship with the sort of thought experiments that Ezra Klein describes Elon Musk indulging in at the Recode Code Conference last year. They are part of what drew me to philosophy (I got a master's) and part of what pushed me away (I bailed before my PhD).
That was all many, many years ago, and I haven't followed debates in academic philosophy closely since then. Still, what boss man wants, boss man gets.
"I am excited for some Musk-driven do-we-live-in-a-simulation hot takes. This is the debate America needs. #codecon" — Ezra Klein (@ezraklein) June 2, 2016
All right then. Here we go.
We might be brains in vats inside simulations programmed by an evil demon
Musk thinks we're probably living in some advanced civilization's virtual reality video game, a variation on philosopher Nick Bostrom's popular brain bender "Are You Living in a Computer Simulation?"
The idea is that sufficiently complex virtual reality simulations of conscious beings would produce consciousness; the simulations would become self-aware and believe themselves to be in the "real world." Ha ha, joke's on them.
This is just the latest version of a thought experiment that goes back at least to Descartes, whose version of a video game simulation was an evil demon bent on misleading him.
It has popped up in various guises over the years (see: brain in a vat), but it always draws on the same basic premise.
Everything we know about the world comes to us through our five senses, which we experience internally (as neurons firing, though Descartes wouldn't have put it that way). How do we know those firing neurons correspond to anything real out in the world?
After all, if our senses were being systematically and ubiquitously deceived, whether by demon or daemon, we would have no way of knowing. How would we? We have no tools other than our senses with which to fact-check our senses.
Because we can't rule out the possibility of such deception, we can't know for certain that our world is the real world. We could all just be suckers.
This kind of skepticism sent Descartes on an internal journey, searching for something he could know with absolute confidence, something that could serve as a foundation upon which to build a true philosophy. He ended up with cogito, ergo sum — "I think, therefore I am" — but that has not fared well with subsequent philosophers.
All we can really know is, "There are thoughts." Whee!
(Note: Bostrom says the simulation argument is different from the brain-in-the-vat argument, because it raises the probabilities. After all, how many evil scientists with brain vats are likely to exist? Whereas it seems that almost any sufficiently advanced civilization could run virtual reality simulations. If there are such civilizations, and they're inclined to run them, there could be an almost infinite number of such simulations. That means we're much more likely to be in one of them than in the real world. Contra Bostrom, I don't think that changes the structure of the problem, but it's clever.)
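Bostrom's counting move can be made concrete with a bit of toy arithmetic: if each "base" civilization runs many simulations, simulated observers vastly outnumber real ones, so a randomly chosen observer is probably simulated. A minimal sketch (the function name and numbers are illustrative, not Bostrom's):

```python
# Toy version of Bostrom's counting argument.
# Assume one "base" civilization runs n_sims ancestor simulations,
# each containing as many observers as the base world itself.
def p_simulated(n_sims: int) -> float:
    """Probability that a randomly chosen observer is simulated,
    given n_sims simulations per real world (illustrative only)."""
    return n_sims / (n_sims + 1)

print(p_simulated(1))     # one simulation: a coin flip, 0.5
print(p_simulated(1000))  # many simulations: probability approaches 1
```

The point of the sketch is only that the probability climbs toward 1 as the number of simulations grows, which is why Bostrom thinks the simulation version raises the stakes relative to one lone evil scientist with a brain vat.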
The Matrix is compelling because of the bogus red pill
The most iconic pop culture representation of the we-live-in-a-simulation idea is The Matrix, the 1999 movie by the Wachowski siblings, in which humans are indeed brains in vats, or at least bodies in pods, living in a computer simulation created by the computers themselves.
But The Matrix also shows why the thought experiment relies on a bit of a cheat.
One of the most thrilling parts of the movie is the moment when Neo takes the red pill, opens his eyes, and sees the real reality for the first time. That's the intuition these thought experiments draw on: the notion that there's some reality behind the veil, that it might just be possible to see the truth once it's lifted.
But that intuition, tempting as it is, ignores the core premise of the thought experiment: Our senses could be deceived.
Why should Neo think the "real world" he's seeing after taking the red pill is real? It could just as easily be another simulation. After all, what better way to keep strong-willed pod humans occupied than to give them a gritty simulation of rebellion?
No matter how many pills he takes, or how earnestly Morpheus assures him that this new reality is the real one, Neo is still relying on his senses, and his senses can still theoretically be deceived. So he's right back where he started.
[UPDATE: A reader points me to a video that makes the same point.]
And that's the import of the simulation thought experiment: It cannot be proven or disproven. But by the same token, it cannot matter. It cannot make any difference to us.
Insofar as a deception is perfect, it cannot matter
Say you were told, "The universe and all its contents are probably upside down." It would blow your mind for a minute, because you'd imagine taking the red pill and seeing all the upside-down things. But then you'd realize that things can only be upside down in relation to other things, so everything being upside down … doesn't make much sense.
The very same is true of "everything is probably unreal," which is what the simulation thought experiment amounts to. Things are real to us human beings in relation to other parts of our experience (just as the red-pill world is real in relation to the blue-pill world in The Matrix). We are real in relation to other things and people. "Everything is unreal" is no more meaningful than "everything is upside down."
These assertions are not so much true or false as, well, ¯\_(ツ)_/¯. Because their truth or falsehood doesn't relate to anything else, has no practical or epistemological implications, they are inert. They cannot matter.
The way philosopher David Chalmers puts it is that the simulation idea is not an epistemological thesis (about how we know things) or a moral thesis (about how/whether we should value things) but a metaphysical thesis (about the ultimate nature of things). If it's true, it's not that trees and clouds and other people don't exist, it's just that trees and clouds and other people have a different ultimate nature than we thought.
But to me, this amounts to saying: So what? One ultimate reality to which I have no direct access turns out to be a different ultimate reality to which I have no direct access. Big whoop. Meanwhile, the reality I actually inhabit, the one I interact with via my senses and my beliefs, remains the same.
Everything is a computer simulation? Whatever. I'm fine with that. It quite literally makes no difference.
Even Bostrom more or less concedes the point: "To a first approximation, therefore, the answer to how you should live if you are in a Matrix is that you should live the same way as if you are not in a Matrix." What else could the answer be? What other difference could it make? I still want to go home and kiss my wife and hug my kids.
This was one of the insights articulated by (my philosophical heroes) the American pragmatists. Our beliefs and language are not abstract representations that correspond (or fail to correspond) to some numinous realm of independent reality. They are tools. They are valuable insofar as they help us do things — organize, navigate, and predict our world of experience. "We are in a computer simulation" is of absolutely no use.
We'd be better off abandoning the quest for certainty and getting better at probabilities
Descartes lived in the era just preceding the Enlightenment and was an important precursor, in that he wanted to build philosophy on what human beings could assess for themselves, not what was handed down by religion or tradition — to take nothing on faith.
His mistake, and that of many Enlightenment thinkers to follow, is that he believed such a philosophy had to mimic religious knowledge: hierarchical, built on a foundation of certain, incontestable truth from which all other truths follow.
Without that foundation of certainty, many feared (and still fear), mankind is doomed to skepticism in epistemology and nihilism in morality.
But once you abandon religion — once you trade authority for empiricism and the scientific method — you also abandon certainty. It's part of the deal.
What humans can assess for themselves is always partial, always provisional, always a matter of probabilities. We can weigh parts of our experience against other parts, we can test and replicate and remain open to new evidence, but there's no way to step outside experience and build a fixed foundation beneath it. Everything is good, or true, or real, in relation to other things. If they are also good, true, or real in transcendent, independent, "objective" ways, we have no way of knowing.
The essence of the human condition (if I may be so bold) is this: decision-making in the face of insufficient data. The fact is, the evidence of any one person's senses is paltry. What we know from direct experience about the people, places, and things around us is limited. To fill in the gaps, we rely on a whole messy skein of assumptions, inherited beliefs, interpretive frameworks, and heuristics. We have to; the direct evidence wildly underdetermines any sense we might make of it.
Even science, the (cognitively unnatural) mode of inquiry through which we try to suspend our assumptions and accept only what the data shows, is in practice shot through with culturally bound heuristics and value judgments.
And it never achieves certainty, only levels of probability. To rely on science is to abandon certainty.
In the world we live in ("real" or not), we act based on probabilities, using highly fallible and unreliable cognitive tools, amid a flurry of oncoming evidence and a haze of uncertainty. That's what it is to be human. But it makes people anxious. They crave certainty, some fixed point amid the flux, which is what sent philosophers digging down deeper and deeper looking for foundations.
If there just are no fixed foundations, though, we have to learn to live with uncertainty, with provisional knowledge, and adopt the habits of intellect that help us hone our knowledge and make it more useful. Figuring out how to think better, how to organize sociopolitical institutions to absorb new evidence and translate it into useful knowledge, is more techne than episteme.
And if that's true, then philosophy isn't going to be much help. (This sentiment, and those that precede it, owes much to Richard Rorty, one of the inheritors of American pragmatism.)
What will help humanity gain knowledge is practical training and tools that improve our knowledge-gathering skills, lessons drawn from history and social science, ways of overcoming the tribalism and motivated reasoning that bedevil all human endeavor. Those tools, not certainty and ahistorical foundations, will help us develop a more true and useful worldview.
So, yes, it very much matters, determining what is real and what isn't, in relation to our experience. Gravity is real; phlogiston isn't. This is a defeasible, rough-and-ready sense of what's "real," i.e., what holds up to repeated testing, what helps us navigate the world successfully.
But what's real in absolute, independent, ahistorical terms? Whether everything is real? These are basically confusions of language, questions constructed so as to preclude answers. They have no implications for our lives and behavior.
Want evidence? Elon Musk believes that the entire world he and everyone he loves inhabits is an illusion, a simulation. He is not real; his family is not real; climate change is not real; Mars is not real.
Yet what does Musk spend his days doing? Working as hard as he can to improve humanity's lot, reducing carbon emissions on Earth and enabling access to other planets. Why would he work so hard on behalf of simulations?
Because on some level, he knows that this world is real in the only way that matters.