It's easy to dream about a future full of autonomous electric vehicles (AEVs), in which combustion engines have been relegated to history, transportation is a service, and urban spaces are adaptable and multimodal, shared by AEVs, bikes, and pedestrians in shifting configurations. I did a little utopian musing about it last week.
What's more difficult is envisioning the path from here to there.
As I see it, AEVs face two fundamental challenges. Both take the form of an "uncanny valley" between partial automation and full automation. It's impossible to predict which challenge will be overcome first, or how, or how the two challenges might interact ... but it's fun to speculate!
AEV challenge No. 1: unreliable drivers
One route that self-driving vehicles might take to the mass market is incremental. It begins with systems that assist drivers in certain conditions — parking, changing lanes, cruising in highway traffic. Perhaps there's a ping, an alarm of some sort, or a nudge of the steering wheel one way or the other.
These assistive systems are already being introduced. Toyota will put "Intelligent Transportation System" technology into three 2016 models. It "communicates with equipment at street corners to detect oncoming cars, pedestrians and other hazards and warns the driver with a beep."
BMW has already demonstrated completely self-parking vehicles. So has Ford. Mercedes-Benz is working with Bosch to include self-parking in the Car2Go fleet soon.
Some 2017 Cadillacs will include "Super Cruise":
Super Cruise isn't intended to completely take over from the person in the driver's seat, unlike Google's autonomous cars. Instead, it will pull together a number of smart systems intended to take some of the repetition out of more predictable driving, like on freeways.
So, there'll be hands-off lane following, braking, and speed control while on the highway.
Super Cruise, GM says, "is being positioned as a safety and driver assistance feature, taking on some of the cognitive load in freeway driving and bumper-to-bumper traffic."
Audi will introduce "Traffic Jam Pilot," which will do similar things, to its vehicles in the next few years. It also notes:
[U]nlike Google's self-driving "pod" cars which will begin their own trials imminently, Audi says drivers will still need to be alert and ready to take over, particularly as low-speed traffic jams give way to regular highway speeds.
And therein lies the rub.
As assistive systems get more sophisticated and are able to handle more varieties of routine traffic, they will take more and more of the "cognitive load" off the driver. Soon the driver will need to pay close attention only 75 percent of the time, then 50 percent of the time, then 25 percent of the time.
Then you start reaching the uncanny valley. Can we trust human drivers who are inattentive 75 percent of the time to pay attention the right 25 percent of the time, and to make the right decisions? What if autonomous systems get really good and the driver can tune out 95 percent of the time? Do we trust that driver to snap to and pilot the vehicle capably during the 5 percent?
It's an issue already facing the airline industry, as autopilot becomes more capable and pilots' skill and judgment begin to atrophy:
... as pilots were being freed of these responsibilities, they were becoming increasingly susceptible to boredom and complacency—problems that were all the more insidious for being difficult to identify and assess. As one pilot whom Wiener interviewed put it: "I know I’m not in the loop, but I’m not exactly out of the loop. It’s more like I’m flying alongside the loop."
As drivers are increasingly "driving alongside the loop," will they too succumb to boredom and complacency and start making dumb errors?
It seems likely. In fact, in California, piloting an autonomous vehicle already requires a special license. "Humans aren’t hardwired to sit and monitor a system for long periods of time and then quickly react properly when an emergency happens," said Patrick Lin of California Polytechnic State University.
Perhaps the safety benefits of autonomous systems will outweigh those errors. But the transition is likely to cause quite a bit of social friction along the way.
So maybe Google has it right. Maybe the way to solve the uncanny valley problem is to vault right over it, straight to 100 percent autonomous vehicles. That's what the company's done with its latest prototype.
But AEVs face a second challenge.
AEV challenge No. 2: unreliable traffic
Move back from the driver-vehicle perspective to the vehicle-traffic perspective, and you find another uncanny valley.
Consider: If there were only AEVs on the road (along with bikes and pedestrians), then the full benefits of AEVs would be unlocked. All vehicles would not only be "smart," they would be communicating with one another, acting in concert. Serious accidents would fall to virtually nil.
Bikes and pedestrians would remain somewhat unpredictable and might cause a few accidents. But AEVs driving among other AEVs could be much, much lighter, and they would move more slowly through crowded spaces, so there would be nothing like the terrible toll of today's traffic.
But if there are even a few heavy, fast internal combustion vehicles piloted by human beings on the road — say, 10 percent of traffic — then the benefits of AEVs drop sharply. For one thing, to meet safety standards, they have to be up-armored against collision with very heavy vehicles, which in turn makes them much heavier, reducing their range and flexibility.
For another, as long as there are unpredictable, human-piloted vehicles on the road, AEVs will be forced into making weird ethical judgments like this: "a child suddenly dashing into the road, forcing the self-driving car to choose between hitting the child or swerving into an oncoming van."
Should the AEV prioritize its own driver? Should it sacrifice its own driver to save the younger victim? Should it rank outcomes by total cost? By lives at risk? These are thorny dilemmas that will also create considerable social friction (and inevitable lawsuits).
Here's the thing: If the oncoming van is also an AEV, it's not a problem if a child dashes into the road. The two vehicles will communicate and coordinate the best way to swerve or stop without touching each other or the child.
But if the oncoming van is piloted by a human — or even might be — all those ethical dilemmas arise, simply because the van's behavior can't be reliably predicted.
It really seems that, to fully unlock the benefits of AEVs, you need to push the last of the internal combustion engines and human-piloted vehicles off the road. And that seems a distant prospect.
Virtually all automakers are making their vehicles smarter at a rapid pace, and fully autonomous vehicles are being tested in real-world traffic today. One Japanese prefecture is testing a fleet of "robot taxis" that mix in normal traffic. Google is sending self-driving Lexus SUVs to cruise around a few blocks of Austin, Texas (and now its AEVs, too). Honda recently got a permit to put its autonomous cars on California streets. Uber has a permit to test autonomous vehicles in Arizona. The US government has plans to require vehicle-to-vehicle wireless communication on all cars soon.
But again, it's hard not to think that the best way across the uncanny valley is simply to leap it all at once — to have places with only AEVs.
That's difficult to envision on a large scale, at least in today's cities, but smaller experiments are popping up. GM is testing a fleet of self-driving Volts on one of its campuses in Michigan. Santa Clara University will have a system of AEVs (self-driving electric golf carts) on campus.
It's possible to envision such spaces expanding, perhaps to the point that a whole separate set of standards and vehicles operates within them. If they demonstrate the system benefits of AEVs (substantially beyond what AEVs can offer when mixing with normal cars), they could catch on in city centers, corporate campuses, or resort towns, spreading over time.
Questions about AEVs will be answered soon enough
Whether we're prepared or not, AEVs are coming. Autonomous vehicles are on the rise, and they will be much more efficient and flexible running on electricity. The trends will converge. But how, and by what route? What unexpected twists in the road can we expect in the next 10 years? Who knows!
But it's happening. Tesla has been testing its autopilot software on drives from Seattle to San Francisco. Elon Musk thinks fully autonomous vehicles will be on the road in "two or three years" (though he worries that the lack of clear federal regulation might delay them). He says Tesla will have a fully autonomous vehicle with 750-mile range by 2020. "A lot of people think of us as the leader in electric cars, but I think we'll also be the leader in autonomous cars," he has said, and speculates, "in the distant future ... people may outlaw driving cars, because it's too dangerous. You can't have a person driving a two-ton death machine."
Toyota says it will have autonomous vehicles ready by 2020. So does Nissan. So does Google. GM president Dan Ammann is similarly bullish. Apple is in the game, somewhat mysteriously (as usual). Ford says an array of driver-assist technologies will feature in its entire fleet by 2020.
A Business Insider report expects fully autonomous vehicles to be introduced by 2019 and 10 million vehicles with "self-driving features" on the road by 2020.
Even at a steep rate of penetration, it will take several decades for a majority of vehicles to have even some self-driving features, plenty of time after that for AEVs to take over, and longer still for urban infrastructure to begin adapting in a serious way.
Best we can tell, anyway.
Perhaps the uncanny valleys described above will put a damper on development at some point. Or perhaps they will inspire a larger leap, moving up the schedule for Truly Futuristic Shit like smart cities and multimodal open spaces and, uh, robot taxis.
There's so much changing in this area, so fast, it's a fool's errand to predict how it will all unfold. Should be cool, though.