Artificial intelligence has been having a heyday lately, with open letters and warnings from Elon Musk, Stephen Hawking and other household technology names, and with large acquisitions and investments to further the industry’s development. With all this activity, we must not take lightly the question of how, and in what context, we develop AI.
If we are overly cautious and regulate AI too heavily, we risk crushing progress on innovations that could fundamentally benefit human life and society. Alongside the fear of “summoning the demon,” we must weigh the opposite risk: wreaking havoc through the uneven distribution of AI.
The Chicken and the Egg
What we lose sight of with each Hollywood blockbuster is that AI is in its infancy. Yes, Apple’s Siri is amazing, but we quickly and easily run into her boundaries. And while we have computers that can be trained to beat humans at games such as chess, iRobot’s Roomba still can’t detect whether your dog has had an accident, and will happily spread that accident around the house.
What we have now in AI is a narrow form of technology programmed for specific tasks; pushed outside those tasks, it looks less like intelligence and more like a bug. The success of this narrow AI will proliferate, and in some areas it will become ubiquitous to the point that it may seem as if we’ve reached “general AI.”
But from the perspective of someone inside the industry, we are far from general AI that can learn and decide “like us” across categories without being trained specifically for that function. Humans are able to process new situations and adapt, applying what we know. We can operate far beyond a single task. This is what holds the most promise and fear for society: Machines that are smart enough to mimic humans’ ability to learn and adapt.
So far, the humans-vs.-robots game challenges and Siris of the world do not demonstrate any measure of general intelligence. That said, if we take a cue from technology’s progress in the past, innovation typically follows an S curve, and when we hit the upward part of that S, things happen fast. Once we do see some general AI progress, it’s likely that capabilities will advance quickly, and we’ll reach a human-like ability to process information.
The Risks of Over-Parenting
So, how do we bring out the benefits of AI without ending humanity? If AI is in its child phase, parenting is the best analogy we have. Like a baby, AI can make us swoon with delight, and make us hold our noses at the smell of its misses.
As we have seen, the materials needed to create narrow machine intelligence are readily available now. We’ve already birthed this new technology and, as with a real baby, we can’t put the proverbial bun back in the oven. Trying would not only be futile; it risks driving away the very people we most want in this field — the good guys.
Our job now is to raise this infant industry in a responsible way. We could be overbearing parents and crush AI, or we can give it a well-rounded upbringing in the following ways:
- Extend AI beyond the hands of the tech elite: AI has been driven by mathematicians, physicists and computer scientists in research settings. There’s a good analogy between AI’s potential development and what has happened to software in the last 10 years. As we went from engineers developing code to product managers, designers — heck, even our mothers! — defining what software is, we’ve improved. As with children, raising AI takes a village. We need more people involved, not fewer.
- Give AI the freedom to play outside: Hovering over AI versus giving it freedom will be the most delicate balance we have to strike. As Justice Louis Brandeis said: “Publicity is justly commended as a remedy for social and industrial diseases. Sunlight is said to be one of the best disinfectants.” As with kids, we need AI outside playing and learning, not confined to a basement, a corner, or constantly under our protective wing. Give the inner workings publicity, not hype.
- Expand AI’s experience to many problems: There’s a fear that machines will replace humans, and then what will the humans do? But if we restrict AI’s applications to a few categories, we unnaturally hold back its growth — and its potential for good. There is plenty we do not get done in our world that either machines or newly freed-up humans can do. Yes, there will be retraining, and yes, we need to understand the value-creation mechanisms, but these are problems worth solving. We can do it, and we can do it better than we did in the Industrial Revolution. This time, we have experience.
AI is many infants; there are thousands of efforts taking place in the field, and soon there will be millions. Certainly, there are areas where greater regulation may make sense, such as Elon Musk’s recent call for oversight on military AI development. But broadly speaking, we need more people working on AI, solving more problems in more settings, with more publicity about its actual status and capabilities. We need to keep an eye on it while letting it gain experience. We need to practice healthy parenting. Otherwise, we won’t raise healthy children.
That is my biggest fear for AI — that we won’t treat it like a growing child and practice healthy parenting. And if we don’t, forget about that demon: we’ll have millions of monsters on our hands.
Jana Eggers is a technophile with 20+ years of experience in technology, machine learning and artificial intelligence, including a stint at Los Alamos National Laboratory. She is currently CEO of Nara Logics, a Boston-based company delivering a neuroscience-inspired AI platform for recommendations and decision support. Reach her @jeggers.
This article originally appeared on Recode.net.