In theaters around the world, a robot named Ultron is trying to destroy humanity. He's the star of Avengers: Age of Ultron and the apocalyptic embodiment of the singularity — the moment when artificial intelligence exceeds human intelligence.
Today's Ultron is undoubtedly influenced by the intellectual outlook of people like Ray Kurzweil, the concept's biggest living popularizer (who, in turn, built on a 1993 Vernor Vinge lecture that used the term). Kurzweil and other contemporary thinkers probably shaped the Ultron we see on screen.
But the evil robot appeared in comics long before that: in 1968, when computers were as large as a room. Ultron's essence played off science fiction — and scientific concepts — proposed since the 18th century. People were already worried about AI let loose, thanks to the work of some trailblazing thinkers.
Paranoia about evil smart machines has been around for 200 years
Way back in 1794, mathematician Nicolas de Condorcet wrote about how machines might exceed the progress of the human mind. But it was the scenario put forth in Samuel Butler's 1872 novel Erewhon that might be the most influential.
Erewhon was adapted from an 1863 article, "Darwin Among the Machines," in which Butler espoused what is now a standard singularity fear, namely that the machines would take over:
Assume for the sake of argument that conscious beings have existed for some twenty million years: see what strides machines have made in the last thousand! May not the world last twenty million years longer? If so, what will they not in the end become? Is it not safer to nip the mischief in the bud and to forbid them further progress?
Butler lived in a post–Industrial Revolution era, when the prevalence of factories and railroads prompted a lot of examination of machines' increasing influence. That made Erewhon a popular success and a template for how to think about the rise of the machines.
As computers advanced from simple adding machines to devices able to do more complex calculations, those fears about the singularity only worsened. Alan Turing himself later referenced Butler's novel in his own singularity predictions, writing, "At some stage therefore we should have to expect the machines to take control in the way that is mentioned in Samuel Butler's Erewhon."
Alan Turing kicked off the 1950s and '60s fears about machines taking over
In 1951, Turing presented a paper titled "Intelligent Machinery: A Heretical Theory." Though Turing became famous for cracking codes in World War II, inventing early computers, and laying the groundwork for the Turing test (which an artificial intelligence can pass by successfully seeming to be human), he also introduced the concept of the singularity to a large popular audience.
In "Intelligent Machinery," Turing imagined a theoretical computer whose intelligence increased exponentially. By learning from experience, the machine could go from simple to complex in a matter of moments. As Turing wrote in the paper's conclusion, "It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers."
Shortly after the paper's publication, Turing discussed it on the BBC's Third Programme, which sparked a broader contemporary debate about the singularity (though much of the public focused on the unique abilities of humans rather than the threat of an AI takeover).
That theme was later taken up by Turing's colleague Irving (I. J.) Good, who worked with him as a cryptologist during World War II, helping to break the codes of the famous Enigma machine. Good's papers "Speculations Concerning the First Ultraintelligent Machine" and "Logic of Man and Machine," both published in 1965, furthered the conversation about the singularity and put a finer point on the potential threat of robots: smart machines could build smarter machines.
As Good wrote in "Speculations Concerning the First Ultraintelligent Machine":
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind.
He then goes on to explain why computers will end up being better than humans at ... almost everything.
All that shaped 1968, when Ultron combined old and new fears about robots
We know that these ideas quickly gained currency in the wider world, specifically in science fiction circles. Stanley Kubrick hired Good as a consultant on 1968's 2001: A Space Odyssey. That movie featured its own malevolent AI, HAL, which fit into a science fiction landscape in which robots were becoming increasingly popular. And that doubtless set the stage for a comic book robot, especially since 2001 came out three months before Ultron showed up.
Ultron was a mishmash of influences: pop psychology played a big part, as did the psychological issues of Ultron's creator, Henry Pym, also known as Ant-Man (in the current movie, Ultron's creator is Tony Stark). The writers have also said that Ultron's physical appearance was inspired by a character called Mechano from Captain Video. But it was still an unmistakably paranoid vision of the singularity: in an early flashback, the robot's intelligence grows exponentially as it calls its creator "Da Da," then "Dad," and finally "Father" within a matter of seconds.
Because he was built from Henry Pym's brain patterns, Ultron followed the template of older stories in which an inanimate object is animated by a human consciousness, like Pinocchio or Frankenstein. But Ultron also incorporated the modern fear of machines developing a consciousness of their own: the singularity.
The creation of Ultron was a collaborative project, and his success was collaborative as well. It would be absurd to argue that Ultron's real-life creators and comic book readers alike had plowed through the academic papers of Irving Good or been schooled in the classic vision of Erewhon. Those influences, however, involved ideas that rippled through the public consciousness in many ways, from science fiction to politics. Ultron succeeded because the concepts resonated so clearly.
And because of the quickly evolving ideas about the singularity, the vision of Ultron that took over was the one in which he was a representation of the growing presence — and potential threat — of artificial intelligence. It's that unique philosophical grounding that makes him resonant, and frightening, today.
Update: A reader notes that any history of robotic singularity paranoia would be incomplete without a mention of R.U.R. (Rossum's Universal Robots). The 1920 play is generally credited with introducing the word "robot" and stands as an early portrayal of robots taking over.