5 big challenges that self-driving cars still have to overcome


It's fun to ponder a future filled with self-driving cars, a world with breezy commutes where robot navigators have made deadly crashes a thing of the past. But how far off is that future, really?

Last month, Google suggested that this driverless utopia may actually be much further away than many people may realize. In a speech at SXSW in Austin, Google's car project director Chris Urmson explained that the day when fully autonomous vehicles are widely available, going anywhere that regular cars can, might be as much as 30 years away. There are still serious technical and safety challenges to overcome. In the near term, self-driving cars may be limited to more narrow situations and clearer weather.

As Lee Gomes pointed out at IEEE Spectrum, this was the most conservative roadmap yet offered by Google, which has been operating and tweaking autonomous cars for years on private and public roads. If they're saying it's hard, we ought to listen.

So what are the big hold-ups, anyway? After watching Urmson's presentation, I called two experts — Edwin Olson of the University of Michigan and Nidhi Kalra of the RAND Corporation — to dive more into the obstacles that stand between us and our glorious self-driving future. None of these things are deal-breakers per se, and there are tons of smart people working on these problems. Instead, think of this as a big to-do list:

1) Creating (and maintaining) maps for self-driving cars is difficult work

First, a quick clarification: Lots of car companies, from GM to BMW to Tesla to Uber, are working on various species of autonomous technology. Some of this is partial autonomy, as with Honda's Civic LX, a car now on the market that can stay within its lane. But I'm mostly going to focus on full autonomy — cars that don't need drivers at all. And right now, Google seems to be the furthest along with that technology:

The Google self-driving car maneuvers through the streets of Washington, DC, May 14, 2012.
(Karen Bleier/AFP/Getty Images)

Google's self-driving cars work by relying on a combination of detailed pre-made maps as well as sensors that "see" obstacles on the road in real time. Both systems are crucial and they work in tandem.

Before Google can test a self-driving car in any new city or town, its employees first manually drive the vehicles all over the streets and build a rich, detailed 3-D map of the area using the rotating lidar unit on the car's roof. The sensor sends out laser pulses to gauge its surroundings, and the people on Google's mapping team then pore over the data to categorize different features such as intersections, driveways, or fire hydrants. Like so:

Google makes specialized maps for self-driving cars with features such as the length of a crosswalk, height of a traffic light, and the curve of a turn.
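To get a rough feel for what the raw lidar data looks like: each laser pulse comes back as a range and a pair of angles, which can be converted into a 3-D point; millions of such points make up the map. This is a toy sketch, not Google's actual pipeline, and the coordinate convention and function name are invented for illustration.

```python
import math

# Convert one rotating-lidar return into a 3-D point, assuming the sensor
# reports (range in meters, horizontal angle, vertical angle) in degrees.
def lidar_to_point(r, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)  # forward
    y = r * math.cos(el) * math.sin(az)  # left
    z = r * math.sin(el)                 # up
    return (x, y, z)

# A pulse that returned from 20 meters away, 90 degrees to the left,
# level with the sensor, lands 20 meters to the car's left:
print(lidar_to_point(20.0, 90.0, 0.0))
```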

This is a time-intensive process, but Google thinks it's the best way forward. The idea is that building the map ahead of the time can free up processing power for the car's software to be "alert" while puttering around autonomously. The car uses the map as a reference and then deploys its sensors to look out for other vehicles, pedestrians, as well as any new objects that weren't on the map, such as unexpected signs or construction.
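The division of labor described above, a pre-built map plus live sensors watching for anything new, can be caricatured in a few lines of code. This is a hypothetical sketch (the feature labels, coordinates, and tolerance are invented, not Google's representation), but it captures the idea of flagging detections that have no counterpart in the map.

```python
# Pre-built map features: (label, x, y) in some shared map frame.
map_features = {("fire_hydrant", 10.0, 4.0), ("traffic_light", 22.0, 7.0)}

def find_new_objects(detections, known, tolerance=1.0):
    """Return live detections with no map feature of the same type nearby."""
    new = []
    for label, x, y in detections:
        matched = any(
            k_label == label and abs(k_x - x) <= tolerance and abs(k_y - y) <= tolerance
            for k_label, k_x, k_y in known
        )
        if not matched:
            new.append((label, x, y))
    return new

# The hydrant matches the map; the construction cone is new and needs attention.
live = [("fire_hydrant", 10.2, 4.1), ("cone", 15.0, 5.0)]
print(find_new_objects(live, map_features))  # [('cone', 15.0, 5.0)]
```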

Olson points out that relying on this mapping system will pose some major challenges. Right now, Google has only built detailed 3-D maps for a relatively limited number of test areas, like Mountain View. For self-driving cars to go mainstream, Google would have to build and maintain detailed maps all over the country — across 4 million miles of public roads — and update them constantly. After all, roads change a lot: Researchers at Oxford University recently tracked a single 6-mile stretch of road in England over the course of a year and found its features were constantly shifting. One rotary along the path was moved three times.

Google is confident it can pull this off — mapping, after all, is something the company is extremely good at. As more and more self-driving cars hit the road, they will constantly encounter new objects and obstacles, which they can relay back to the mapping team so that every car's map stays current. Still, it's an incredibly daunting and potentially costly undertaking. Over at MIT Technology Review, Will Knight recently argued that driverless technology might advance more quickly if all the companies testing such vehicles shared the data that their sensors were collecting.

By the way, some car companies don't seem to think that Google's precise mapping is the way to go. Tesla is hoping to build self-driving cars that rely more prominently on imaging and sensor processing. We'll see which approach wins out.

2) Driving requires many complex social interactions — which are still tough for robots

A far more difficult hurdle, meanwhile, is the fact that driving is an intensely social process that frequently involves intricate interactions with other drivers, cyclists, and pedestrians. In many of those situations, humans rely on generalized intelligence and common sense that robots still very much lack.

Much of the testing that Google has been doing over the years has involved "training" the cars' software to recognize various thorny situations that pop up on the roads. For example, the company says its cars can now recognize cyclists and interpret their hand signals — slowing down, say, if the cyclist intends to turn. Here's a demonstration:

So far, so nifty. But Olson points out that there are thousands and thousands of other challenges that pop up, many of them quite subtle and unpredictable. Just imagine, for instance, that you're a driver coming up on a crosswalk and there's a pedestrian standing on the curb looking down at his smartphone. A human driver will use her judgment to figure out whether that person is standing in place or absent-mindedly about to cross the street while absorbed in his phone. A computer can't (yet) make that call.

Or think of all the different driving situations that involve eye contact and subtle communication, like navigating four-way intersections, or a cop waving cars around an accident scene. Easy for us. Still hard for a robot. As Harvard's Sam Anthony points out, AI cars are incredibly easy to troll.

Olson explains that fully self-driving cars will ultimately need to be adept at four key tasks: 1) understanding the environment around them; 2) understanding why the people they encounter on the road are behaving the way they are; 3) deciding how to respond (it's tough to come up with a rule of thumb for four-way stop signs that works every single time); and 4) communicating with other people.

"There's a long ways to go in all of these areas," he says. "And reliability is the biggest challenge of all. Humans aren't perfect, but we're amazingly good drivers when you think about it, with 100 million miles driven for every fatality. The reality is that a robot system has to perform at least at that level, and getting all these weird interactions right can make the difference between a fatality every 100 million miles and a fatality every 1 million miles."

In the interim, to deal with the very toughest situations, companies (or regulators) might end up settling on a compromise: self-driving cars that hand the controls back over to humans when the computer is unsure what to do. Google's cars are meant to be completely driverless, but more traditional car companies such as BMW or Audi are working on autonomous vehicles that can flip between computer and driver control, depending on the situation.

The huge drawback to the latter approach, as plenty of analysts have noted, is that shared control could potentially make self-driving cars much more dangerous. Imagine, say, that the human inside the car has been drifting off but then suddenly has to snap to attention to prevent a crash. (This has been a growing problem in the airline industry as autopilot becomes more prevalent.) Plus, it's a bit of a high-wire act to hand over controls on a highway when the car is going 60 mph.

3) Bad weather makes everything trickier

Not a happy car.
(Ian Forsyth/Getty Images)

Compounding all this is the fact that bad weather still poses a major challenge for self-driving vehicles. Much like our eyes, car sensors don't work as well in fog, rain, or snow. What's more, companies are currently testing cars in locations with benign climates, like Mountain View, California — and not, say, up in the Colorado Rockies.

Olson classifies this as a real, but lesser, hurdle. "Weather adds to the difficulty, but it's not a fundamental challenge," he says. "Also, even if you had a car that only worked in fair weather, that's still enormously valuable. I suspect it might take longer to overcome weather challenges, but I don't think this will derail the technology."

Urmson took a similar view in his SXSW talk: "This technology is almost certainly going to come out incrementally," he said. "We imagine we are going to find places where the weather is good, where the roads are easy to drive — the technology might come there first. And then once we have confidence with that, we will move to more challenging locations."

4) We may have to design regulations before we know how safe self-driving cars really are

Don't ask us.
(Bill Pugliano/Getty Images)

Another big obstacle for self-driving cars isn't technical — it's political. Before self-driving cars can hit the roads, regulators are going to have to approve them for use. One thing they're going to want to ask is: How safe are these things, anyway?

And here's the tricky part: We probably won't know!

Kalra laid this all out in a recent paper for RAND. As noted above, drivers in the United States currently get into fatal accidents at a rate of about one for every 100 million miles driven. Ideally, we'd want self-driving cars to be at least that safe. But it's unlikely we'll be able to prove that any time soon. Google only drove its cars 1.3 million miles total between 2009 and 2015 — not nearly enough to draw rigorous statistical conclusions about safety. It would take many decades to drive the hundreds and hundreds of millions of miles needed to prove safety.
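Kalra's point can be checked with back-of-the-envelope statistics. Under the standard "rule of three," a fleet that drives n miles with zero fatalities has roughly 3/n as the 95 percent upper confidence bound on its fatality rate, so matching the human benchmark takes on the order of 300 million fatality-free miles. A quick sketch:

```python
import math

# Human benchmark: roughly one fatal crash per 100 million miles driven.
fatality_rate = 1 / 100_000_000  # fatalities per mile

# If a fleet drives n miles with zero fatalities, the 95% upper confidence
# bound on its rate is ln(1/0.05)/n (about 3/n). To show it's at least as
# safe as humans, that bound must fall below the human rate.
confidence = 0.95
miles_needed = math.log(1 / (1 - confidence)) / fatality_rate

print(f"Miles needed (zero fatalities observed): {miles_needed:,.0f}")
# Roughly 300 million miles -- far beyond the 1.3 million Google
# logged between 2009 and 2015.
```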

"My hunch is that by the time automakers are ready to sell these things, we still won't know how safe they are," says Kalra. "We're going to have to make these decisions under uncertainty."

What might that look like? Regulators could come up with alternative testing procedures — such as modeling or simulations or even pilot programs in volunteer cities. We might also look to other technologies that get approved even when their safety is uncertain, such as personalized medicine. But this is going to be something to think hard about.

(There are separate legal questions too, such as how these cars will be insured and who exactly will be liable — the driver or the manufacturer — in the event of a crash. But Kalra suspects our courts and insurance industry will be flexible enough to figure that stuff out eventually.)

5) Cybersecurity will likely be an issue — though a surmountable one

Chevy's next-generation touchscreen interface.
(The Verge)

"Another issue is cybersecurity," says Kalra. "How do you make sure these cars can't be hacked? As vehicles get smarter and more connected, there are more ways to get into them and disrupt what they're doing."

This shouldn't be impossible to fix. Software companies have been dealing with this issue for a long time. But as Vox's Timothy Lee has written, it will likely require a culture change in the auto industry, which hasn't traditionally worried much about cybersecurity issues.

Olson raises a related issue: Many car enthusiasts already modify their own vehicles to improve performance. What happens if they do this to self-driving cars and inadvertently compromise the computers' decision-making ability? "Just as an example, someone puts on oversized wheels that distort the car's sense of how fast it's going," he notes. "It's hard to stop anyone from doing that."

Olson points out this could be a particular challenge if the auto industry tries to develop systems that enable different vehicles to talk to each other on the road (say, to make merging easier). "The whole premise of using V2V [vehicle-to-vehicle communication] for safety is that if you get a message to slam on the brakes, you better be able to trust that message. But securing that system could be extremely difficult." Again, not fatal. But something to ponder.
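To make that trust problem concrete: authenticating a single V2V message is conceptually simple, for example with a keyed hash, but distributing and protecting keys across millions of vehicles from different manufacturers is where it gets hard. A minimal sketch, assuming a single shared secret (real V2V work leans toward certificate-based signatures, not a shared key like this):

```python
import hmac
import hashlib

# Hypothetical shared key for illustration only -- in practice, no fleet
# would rely on one secret baked into every car.
KEY = b"fleet-shared-secret"

def sign(message: bytes) -> bytes:
    """Compute an authentication tag for a V2V message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Accept the message only if the tag matches (constant-time compare)."""
    return hmac.compare_digest(sign(message), tag)

msg = b"BRAKE_HARD lane=2 speed=0"
tag = sign(msg)
print(verify(msg, tag))                             # genuine message: True
print(verify(b"BRAKE_HARD lane=2 speed=60", tag))   # tampered message: False
```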

Self-driving cars are coming — but perhaps not all at once

So when might we overcome all these challenges? I asked Olson to wager a prediction, and he (wisely) countered that it's complicated. "What most people envision when they envision autonomous vehicles probably won't be the reality anytime soon. I'm skeptical that you'll be able to buy a car in 2020 that you can just put your kid in and ship off to school. That kind of complete trust and autonomy is a ways off."

"But," he added, "limited forms of autonomy are very plausible. We already have the technology to do automatic parking in garage structures. Or you could envision something like low-speed autonomous vehicles in retirement communities." Singapore, for instance, is hoping to install driverless pods that work on smaller roads in gated communities or school campuses by the end of the year.

Similarly, it wouldn't be surprising to see self-driving buses along fixed routes or trucks that can use autonomous technology to platoon and save fuel on highways. The technology is advancing rapidly, and it's likely to become useful in all sorts of unexpected places.

Google's Urmson took a similar view in his SXSW presentation: "How quickly can we get this into people's hands? If you read the papers, you see maybe it's three years, maybe it's 30 years. And I am here to tell you that honestly, it's a bit of both."
