
Transcending Artificial Intelligence: Part 1

A Q&A with AI guru Barney Pell on the realities behind the recent sci-fi movie, "Transcendence."


While it can be a blast going to see the latest Hollywood sci-fi thriller, watching it alongside a leading expert in the science and technology behind the dystopian color and light in front of you can make it that much better.

Such was the case at a recent screening of “Transcendence” — director Wally Pfister’s sci-fi epic starring Johnny Depp — when I was joined by Barney Pell, a pioneer of general game-playing programs in artificial intelligence (AI), founder of the AI-based search engine Powerset, and a former autonomous robotics researcher and manager at NASA.

The movie pivots off the latest AI technology to offer a fictional depiction of what the future may hold. Needless to say, I had some questions for Barney:

Warning: Some spoilers ahead! While you read the interview, you can listen to Mychael Danna’s score from the “Transcendence” soundtrack.

Scott Adelson: What is “the Singularity,” and what is the history behind it?

Barney Pell: The Technological Singularity, or the Singularity for short, refers to a hypothetical “singular” moment in time when artificial intelligence surpasses human intelligence. Because computers are networked and computing resources increase exponentially, shortly after that moment computers would exceed human intelligence by an exponentially growing margin, and the world would be changed forever.
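Pell’s compounding claim can be made concrete with a toy calculation. The sketch below is purely illustrative — the baseline value and the doubling factor are my assumptions, not figures from the interview — but it shows why “shortly after that moment” matters: once a quantity that doubles at a fixed interval crosses a flat baseline, its lead over that baseline itself grows exponentially.

```python
# Illustrative sketch only: machine capability doubling each "generation,"
# measured against a fixed human-level baseline (both in arbitrary units).

HUMAN_LEVEL = 100.0  # assumed baseline, held constant for illustration

def machine_capability(generations, start=1.0, factor=2.0):
    """Capability after n doubling periods, starting well below human level."""
    return start * factor ** generations

# The first generation at which machines pass the human baseline.
crossover = next(n for n in range(100) if machine_capability(n) > HUMAN_LEVEL)

# Only seven doublings later, the lead is over 100x the baseline, not a
# modest linear increment; this is the intuition behind the claim above.
lead = machine_capability(crossover + 7) / HUMAN_LEVEL
```

The point of the sketch is not the particular numbers but the shape of the curve: under exponential growth, the interval between “roughly human-level” and “vastly superhuman” is short.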

According to Wikipedia, the first use of the term “singularity” in this context was by mathematician John von Neumann. In 1958, summarizing a conversation with von Neumann, Stanislaw Ulam described “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” The term was popularized by science-fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain-computer interfaces could be possible causes of the singularity. Futurist Ray Kurzweil cited von Neumann’s use of the term in a foreword to von Neumann’s classic “The Computer and the Brain.” Kurzweil’s recent book, “The Singularity Is Near,” goes into this in depth, and describes the exponential technologies that make this possibility likely to happen sooner than one might think.

Is “Transcendence” the first mainstream movie that talks about the Singularity? What did you think about its depiction of it?

While there have been many science fiction movies that depict AIs that are smarter than humans, this is the first mainstream movie that talks about the Singularity explicitly. (Recent films about Ray Kurzweil are not mainstream movies in the big box-office sense).

I was excited to see how this idea was captured in a mainstream movie. Overall, I enjoyed it as a story. However, from the AI perspective, the movie largely sidestepped the question of how such an AI would come to be. Essentially, the lead character is a scientist whose intelligence and personality get “uploaded” into a computer. This seems to be achieved by placing a small number of electrodes on his skull and having him read the dictionary while the system records some measure of his brain patterns. Because of the upload process, there remains an understandable question throughout the movie as to the relationship between the deceased scientist and the “uploaded” AI. In one sense, the doubt is perfectly reasonable, because the process would suggest that the upload is at best only an approximation of the original personality. In another sense, the question becomes meaningless, because any human-level personality changes and evolves over time, and a post-Singularity AI would evolve at an exponential rate. So the question is analogous to asking whether a 90-year-old man is the “same” person as the 6-year-old boy who preceded him in time.

I know you are involved with Singularity University. Can you tell us a bit about that?

What I found perhaps most interesting about the movie was that it addressed far more than the AI Singularity. While “The Singularity Is Near” talks about the Singularity, it really describes a whole suite of technologies that are advancing exponentially, including AI and robotics, computers and networks, biotech, nanotech, 3-D printing, and even energy and environmental systems. These “exponential technologies” became the foundation of Singularity University, of which I’m an associate founder. We teach current and future leaders about these technologies, what they mean for how we think about the future, and how they will disrupt existing industries and create opportunities for new ones.

The movie covers many of these technologies beyond AI. For example, the scientist creates advanced quantum computers, nanotech for medicine and prosthetics, nanotech for 3-D printing of any structures, brain-machine interfaces with networked minds, environmental technologies to counter global warming, and sensing systems for health care and emotion detection.

What technology in the movie do you think is possible? How far off are we from it?

For the core AI technology in the movie, which is basically intelligence vastly superior to humans, I believe it is definitely possible that we will have such intelligences. However, the technology approach in the movie, in the form of “uploading” a personality based on scanning a living brain, is more of a concept than a specific technology. A real technology discussion would have to address two things: First, what is the computational architecture that can support a superhuman intelligence? And second, assuming we had such an architecture, how would we instantiate it with a specific person’s personality? I don’t see anything close to such an architecture in the scientific community today, much less an uploading approach compatible with one.

I do believe we will have superhuman AI within 100 years, and I have spoken about my thoughts for how this will happen at the first Singularity Summit, on “Pathways to Advanced General Intelligence.” Ray Kurzweil predicts this will happen around 2037. I wouldn’t bet against this, but I just don’t see anything close to this in the scientific community at this point.

Turning to the other technologies in the movie: I think a wonderful thing about science fiction is that most technologies that can be imagined are likely to become real in the future in one form or another, unless they violate physical laws. Even then, today’s laws reflect only our current understanding, and scientific knowledge evolves rapidly. For example, I believe we will have brain-machine interfaces that will let people interact with each other as if they had “hangouts” in their minds. I’d place that within 20 to 30 years. I think we’ll have increasingly high-fidelity, even better-than-human prosthetics and high-quality tissue engineering within 10 years.

What, if anything, in the movie do you think is not possible?

3-D printing is advancing rapidly, and will change many aspects of life as we know it, even over the next decade. However, I’m not sure about the self-organizing smart particles as depicted in the movie — that seems to require internal energy and propulsion sources that I’m not sure will exist even in the next 50 years, and could quite possibly violate laws of physics.

If I were able to upload myself as an AI, would that really be me, or just a copy of myself?

This is really a philosophical question. As an analogy, consider the classic Ship of Theseus problem: if you have a boat and you replace one plank at a time, is it still the same boat even when all the planks have been replaced? We think people are the “same” person even after sleep, anesthesia, coma, or even “near-death” experiences. So we are comfortable with the notion that there can be breaks in psychological continuity.

But our intuitions are generally based around a person being a single entity that continues, or doesn’t, as a single entity. Our intuitions don’t cope well with the concept of “copying.” If I copy your saved state of a videogame, and then we both start playing from that same saved state, we would agree that we started playing the same game, but then rapidly we are playing different games. Analogously, “you” are playing a different “you” game every second; you just don’t notice it. When there are multiple “copies of you,” it becomes difficult to say which is the original and valid one. The one occupying the original body (if any) has the best claim. So I’d say if you truly uploaded yourself as an AI, with the same fidelity as you had originally, it would validly be you. Our challenge is that we just don’t believe in the fidelity of the upload, which is a major source of concern in the movie. But if we did believe it, I think we should accept that each copy is you — or rather, was you, before it started evolving.
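Pell’s saved-game analogy can be sketched in a few lines of code. This is a toy illustration (the game-state fields are my own invention, not anything from the movie): two copies made from one snapshot are indistinguishable at the moment of copying, and once they diverge, nothing intrinsic to either one marks it as “the original.”

```python
import copy

# Toy model of the saved-game analogy: one snapshot of game state,
# copied twice, with each copy then evolving independently.
saved_state = {"level": 12, "score": 4200, "inventory": ["map", "torch"]}

player_a = copy.deepcopy(saved_state)  # deep copies share no mutable state
player_b = copy.deepcopy(saved_state)

# At the moment of copying, the two games are indistinguishable.
assert player_a == player_b

# Each playthrough now takes its own path.
player_a["score"] += 100
player_b["inventory"].append("key")

# Seconds later they are two different games, and neither copy carries
# any internal mark of being "the original"; only the untouched snapshot
# (the analogue of the original body) could make that claim.
assert player_a != player_b
assert saved_state == {"level": 12, "score": 4200, "inventory": ["map", "torch"]}
```

The design point mirrors the interview: identity of origin is a fact about history (which snapshot was copied from what), not something recoverable by inspecting the diverged copies themselves.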

Did you think the movie was pro- or anti-technology?

I think the movie was pro-technology. The technologies portrayed were, in general, world-improving, and made people’s lives better by their own choice. In the movie, word spreads and people line up from all around to reap the benefits. We don’t see anyone who benefited from the surgery, for example, regretting his or her decision. Rather, the whole community is happier and healthier. There are indeed anti-technology factions in the movie, which creates the drama. However, their reasons for being anti-technology are never truly validated — they are really Luddites who act out of fear of things they don’t understand, and never give the innovations a chance.

This is Part 1 of a two-part interview. Part 2 will continue tomorrow.

Barney Pell, PhD, is co-founder, chairman and chief strategy officer of LocoMobi, a startup deploying exponential technologies to reinvent parking and transportation. He is also co-founder, vice chairman and chief strategy officer at Moon Express, a startup building autonomous robotic lunar landers, and an associate founder and trustee of Singularity University. He was previously founder and CEO of Powerset, a natural-language search engine that was acquired by Microsoft, where he was search strategist and leader of local and mobile search for Bing. Earlier, he was a researcher and manager in autonomous robotics at NASA, where he worked on the Mars Exploration Rovers mission and the development of the Remote Agent, the first AI system to fly onboard and control a deep space probe. Reach him @barneyp.

Scott Adelson is the executive director of Signal Media Project, a nonprofit organization that promotes and facilitates the accurate portrayal of science, technology and history in popular media.

