This is Part 2 of a two-part interview with Barney Pell, a pioneer of general game-playing programs in artificial intelligence (AI), founder of the Powerset AI-based search engine, and once an autonomous robotics researcher and manager at NASA.
Pell joined interviewer Scott Adelson for a screening of “Transcendence,” directed by Wally Pfister and starring Johnny Depp. The sci-fi movie pivots off the latest AI technology to offer a fictional depiction of what the future may hold. Here is Part 2 of their Q&A.
Warning: Some spoilers ahead!
Scott Adelson: The film’s creators consulted two scientists from Berkeley. We know that this is a piece of fiction meant for entertainment, but what realities do you think the filmmakers did a good job of conveying?
Barney Pell: I think the filmmakers did a great job of conveying what an AI would do as soon as it gained consciousness. First, connect to the Internet. Then, get money, build a safe data center, secure your energy and protect your perimeter. Then, set up a lab to do more research, and create physical surrogates so you can move around in the world as a human. I also think they conveyed the biological tech improvements well. And I liked how they had the AI monitoring his wife across a large number of emotional and physical markers.
What do you think the film was about?
Ultimately, I think the film was about love and marriage. The film depicts a wonderful relationship between the two scientists as work and life partners. The scientist’s wife uploads him so that they can continue to be together. He then carries out her dream to change the world (which, the film notes, was never his own motivation). Any marriage is strained as the partners grow in different ways, and this is all the harder when one partner is evolving exponentially. Ultimately trust is tested, sacrifices are made for the relationship, and once those strains are resolved, the relationship can transcend the past.
In this regard, I found the film to fall short, just as science fiction often has. In the ’50s, sci-fi envisioned a future with all kinds of amazing technologies, including intelligent robots, life on other planets and flying cars. However, it still imagined that women would be housewives, who would benefit from this AI because now they could have robot maids and 3-D printers to make it easier to prepare dinner and lunch for the kids. In “Transcendence,” we have a brilliant and equal partnership between the scientist husband and wife, prior to the upload event. Then the husband becomes increasingly intelligent. He does many things to change the outside world and increase physical security. But he doesn’t seem to do anything to help his wife become more intelligent or powerful. Instead, we see her walking around looking at demos and having candlelit dinners with her husband projected on screens around the room.
I believe that Intelligence Amplification (IA) is as likely as, or more likely than, AI to transform the world. We already benefit from a vast array of cognitive prosthetics, or “power tools for the mind.” A super-intelligent husband would pay as much attention to leveling up his wife as he does to helping the rest of the world. Even the obvious option of uploading her doesn’t seem to occur to either of them until the end of the movie, when the relationship is already damaged.
More generally, while they were researching all kinds of tech innovations, there should have been plenty of time for both of them to research innovations in emotion, communication and partnership. I don’t think they even tried.
In the movie, they touched on how human evolution might lead to the marriage of our flesh with technology. The undercurrent of the movie was that this new form of us would be inferior to being human, and that some people will be extremely afraid of that change. Should they be? Do you think they would be? What does that future look like?
I think we are already “cyborg” and married to our technology. We each carry more computing power in our pockets now than was available to all of humanity 30 years ago. We outsourced memory of phone numbers and such items long ago. We go to the Web for information whenever and wherever we need it. And we wear glasses to improve our vision, and use microscopes and telescopes to see things smaller and more distant than our bodies were built to see. I think it’s not much of a stretch to see technology evolving to give us better peripherals (seeing, hearing, touch, smell, taste), better memories, faster access to internal computations and faster connections to other minds.
I think people are afraid of change because it always comes with unknown and often unintended consequences — and some of those will be bad. There is a real, existential risk that post-Singularity AI could take over the world. Once the genie is out of the bottle, there might be no way to put it back. On this topic, I would say that while a takeover is possible, I think it’s unlikely. In my opinion, the more intelligent people are, the less they need to resort to violence, the more they perceive abundance and possibility instead of scarcity, and the more they are motivated by actualization and helping others. I think super-intelligent AI is likely to take that path.
What did you think of the computer technology they used (data centers, quantum computing, the use of Unix, etc.)?
The movie depicted quantum computing as the new form of computing in data centers. I’m excited about the potential of quantum computing to solve the computationally hard problems of the kind we often find in building AI systems. It’s early days, and I’m not sure what we see today is doing anything particularly special, but the research line has promise.
The idea from the film of a virus infecting every system in the world and shutting down everything connected to the Internet seemed preposterous to me. I laughed when I saw this in “Independence Day” (a virus that works across both human and alien computing environments, even!), and I laughed again when I saw it in this film.
It was fun to see Unix being used in the movie. I think it’s more likely to serve AI system builders than the other OSes we have today. And I think that if we had a resurgence in the use of the LISP and Prolog programming languages, AI would come even faster.
What did you think of the government’s response in the film?
I was really annoyed by the government’s response. First off, once the word gets out that there is a true AI among us, why would you send only a small tactical unit to deal with it? Second, they really jumped to a conclusion about the AI being a threat. Given the positive changes it was already provably making for the world, they should have jumped to learn and harvest as much as possible for a world that needs help. Third, who gets to make the call to: (a) kill the only AI system in the world and (b) inject a virus to take down all computers and electronic devices in the world? The U.S. government couldn’t responsibly do (a); and (b) is not only stupid and likely to cause more damage than any AI, but also not the choice of any single government, as it affects the whole world.
Finally, the government partnering with terrorists was completely silly. And the stated reason to partner (that we’ll likely screw up and can then blame the terrorists) was not just silly but also circular: If you don’t think it’s going to work, then you shouldn’t do it, with or without the terrorists to blame.
Finally, what issues do you wish the movie covered in more detail?
I wish they had covered the AI technology in more detail. I know it’s hard, but I just didn’t find it satisfying at all — but then, the AI technology itself wasn’t the story, as we discussed earlier. I would have also liked to see more detail about the “hive mind.” How did these people experience life, and what could they now do together that they couldn’t do alone? And I would have loved to see some real innovation in marriage — what would a truly superintelligent husband do?
Barney Pell, PhD, is co-founder, chairman and chief strategy officer of LocoMobi, a startup deploying exponential technologies to reinvent parking and transportation. He is also co-founder, vice chairman and chief strategy officer at Moon Express, a startup building autonomous robotic lunar landers, and an associate founder and trustee of Singularity University. He was previously founder and CEO of Powerset, a natural-language search engine that was acquired by Microsoft, where he was search strategist and leader of local and mobile search for Bing. Earlier, he was a researcher and manager in autonomous robotics at NASA, where he worked on the Mars Exploration Rovers mission and the development of the Remote Agent, the first AI system to fly onboard and control a deep space probe. Reach him @barneyp.
Scott Adelson is the executive director of Signal Media Project, a nonprofit organization that promotes and facilitates the accurate portrayal of science, technology and history in popular media.
This article originally appeared on Recode.net.