Over the past few years, Silicon Valley denizens have become worried they will invent an artificial intelligence so brilliant that it will quickly become too smart to control, and once it becomes too smart to control, it will make a simple, obvious decision: It's time to kill all the humans.
Phil Libin, the founder of Evernote, disagrees. And in an interview with Tim Ferriss, he gives a brilliant riff on what makes humans so afraid of intelligent machines — and why it says more about our own guilt as a species than about the likely intentions of our future robot overlords:
I'm not afraid of AI. I really think the AI debate is kind of overdramatized. To be honest with you, I kind of find it weird. And I find it weird for several reasons, including this one: there's this hypothesis that we are going to build super-intelligent machines, and then they are going to get exponentially smarter and smarter, and so they will be much smarter than us, and these super-smart machines are going to make the logical decision that the best thing to do is to kill us.
I feel like there's a couple of steps missing in that chain of events. I don't understand why the obviously smart thing to do would be to kill all the humans. The smarter I get, the less I want to kill all the humans! Why wouldn't these really smart machines want to be helpful? What is it about our guilt as a species that makes us think the smart thing to do would be to kill all the humans? I think that actually says more about what we feel guilty about than about what's actually going to happen.
If we really think a smart decision would be to wipe out humanity, then maybe, instead of trying to prevent AI, it would be more useful to think about what we're so guilty about and fix that. Can we maybe get to a point where we feel proud of our species, and where the smart thing to do wouldn't be to wipe it out?
I think there are a lot of important issues that are being sublimated into the AI/kill-all-humans discussion that are probably worth pulling apart and tackling independently ... I think AI is going to be one of the greatest forces for good the universe has ever seen and it's pretty exciting we're making progress towards it.
You can listen to the whole interview here, and you should. At about the 1:25:00 mark, Libin unspools a theory about how some people live their lives in a first-person shooter and others in a third-person shooter, and it's well worth the price of admission.
It's worth noting that AI fears are not just the province of a few cranks. Oxford philosopher Nick Bostrom wrote a whole book about such fears (you can read Vox's interview with him here). Venture capitalist Peter Thiel has given more than $1.6 million to the AI-focused Machine Intelligence Research Institute. Tesla founder Elon Musk tweeted, "Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable." Stephen Hawking and Bill Gates also count themselves among the concerned. And as my colleague Dylan Matthews reported, a recent gathering of nerdish philanthropists at Google was dominated by concerns about AI, rather than concerns about, say, global poverty.
For various reasons, I've never found the AI fears all that convincing. It seems more likely that a superintelligent AI would be completely uninterested in humans than that it would dedicate itself to their destruction. And it isn't obvious, at least to me, that analytical superintelligence is actually enough to take over the world. A lot of very smart people are quite hapless at actually getting big things done — particularly when getting things done requires working with people less smart than they are. To a large degree, I think AI fears speak to computer engineers overestimating the importance of the trait they're most proud of.
Which isn't to say that AI fears can't come true. But the threat posed by other human beings, or by asteroids, or by pandemic flus, seems much more pressing.