Car companies' vision of a gradual transition to self-driving cars has a big problem

Investigation Continues Into Tesla Driver's Death While In Autopilot Mode
Photo by Spencer Platt/Getty Images

An important moment in the self-driving car debate came on May 7, 2016, when Joshua Brown lost his life after his Tesla vehicle crashed into a semi-truck trailer. Brown had engaged Tesla’s Autopilot feature, and the software didn’t detect the white side of the trailer against the daytime sky. The car slammed into the truck at full speed — 74 miles per hour — shearing off the top of the car and killing Brown.

The accident — characterized by many as the first self-driving car fatality — caught the attention of the National Transportation Safety Board, which released hundreds of pages of new details about the incident last month.

But the industry that’s trying to make self-driving a reality takes exception to this characterization. Tesla has argued that Autopilot is not a self-driving technology, but more like an advanced form of cruise control. Drivers need to keep their hands on the wheel and their eyes on the road at all times when Autopilot is engaged, the company says. Computer logs released by the NTSB last month show that Brown’s car warned him seven times to keep his hands on the wheel. Each time, Brown put his hands on the wheel for a few seconds — long enough to make the warnings go away — before taking them off again.

In January, the National Highway Traffic Safety Administration sided with Tesla in the case, concluding that the Autopilot system wasn’t defective. “Not all systems can do all things,” said agency spokesman Bryan Thomas. “There are driving scenarios that automatic emergency braking systems are not designed to address.”

Still, Brown’s death illustrates one of the biggest challenges carmakers will face as they work to bring self-driving cars to market. Most car companies envision a gradual path to self-driving capabilities, selling cars with increasingly sophisticated driver-assistance features over several years before ultimately introducing fully self-driving cars with no pedals or steering wheel.

But there’s a danger that people will trust the technology too quickly, as Brown did. After hundreds or even thousands of miles of flawless driving, people could stop paying attention to the road, with deadly consequences. The industry’s answers to this problem reveal a lot about how we already drive — and how easy it is to become dependent on a new technology.

Partially self-driving cars are going to nag drivers constantly

Delphi Automotive Showcases Its Driverless Car, After Completing Cross Country Trip
The lidar sensor is displayed on the driverless specially outfitted Audi Q5 sport-utility vehicle at the Waldorf Astoria following the car's return from a cross country trip, a first for a driverless vehicle, on April 2, 2015 in New York City.
Photo by Spencer Platt/Getty Images

Tesla’s approach to self-driving — one the company has doubled down on since Brown’s death — is for cars with partial self-driving capabilities to pester drivers to pay attention to the road.

Earlier this month, I visited Tesla’s Palo Alto headquarters for my own test drive of the Model S. Before she let me drive the car on my own, a Tesla employee gave me a lecture where she emphasized that Autopilot is not a full self-driving system. She pointed out that before enabling the Autopilot feature for the first time, drivers must read and accept a disclaimer, displayed on the 17-inch screen in Tesla’s center console, that stresses the need to keep hands on the wheel at all times.

And this isn’t just an idle suggestion. The Model S had sensors that allowed it to detect if I had my hands on the steering wheel. If I didn’t, the dashboard would eventually begin to flash until I put my hands back on the wheel.

Joshua Brown’s Tesla gave him these warnings too — seven of them — but he didn’t pay enough attention to them. Since his death, Tesla has established a stricter “three strikes and you’re out” rule: If the driver ignores three consecutive warnings, he gets locked out of Autopilot for the rest of that trip. If the driver still doesn’t grab the wheel, the car will assume the driver is incapacitated and come to a gradual stop with the hazard lights flashing.
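The lockout policy described above can be modeled as a simple state machine. This is an illustrative sketch only — not Tesla’s actual implementation — and it assumes (one plausible reading of "consecutive") that grabbing the wheel resets the count:

```python
# Illustrative sketch only -- NOT Tesla's actual software.
# Models the "three strikes and you're out" policy: three consecutive
# ignored hands-on-wheel warnings lock Autopilot out for the trip.

class AutopilotMonitor:
    MAX_STRIKES = 3  # threshold described in the article

    def __init__(self):
        self.strikes = 0
        self.locked_out = False

    def warning_ignored(self):
        """Driver let a hands-on-wheel warning time out."""
        if self.locked_out:
            return
        self.strikes += 1
        if self.strikes >= self.MAX_STRIKES:
            self.locked_out = True  # unavailable for the rest of the trip

    def hands_detected(self):
        """Driver grabbed the wheel before the warning escalated."""
        if not self.locked_out:
            self.strikes = 0  # assumed: only consecutive warnings count

monitor = AutopilotMonitor()
monitor.warning_ignored()
monitor.hands_detected()    # count resets
monitor.warning_ignored()
monitor.warning_ignored()
monitor.warning_ignored()
assert monitor.locked_out   # third consecutive ignored warning
```

A real system would also have to handle the final escalation the article describes: treating a driver who still won’t take the wheel as incapacitated and bringing the car to a gradual stop.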

Other car companies are working on similar technologies. Audi, for example, will soon introduce a product called Traffic Jam Pilot that will allow hands-free freeway driving at up to 35 miles per hour (self-driving at full highway speeds is four to five years away, Audi says). During a recent test drive, Audi engineer Kaushik Raghu told me that Traffic Jam Pilot will include a “driver-availability monitoring system” that makes sure the driver isn’t sleeping or looking backward for an extended period of time.

Cadillac recently announced a freeway-driving technology called Super Cruise that takes a similar approach. An infrared camera mounted in the steering wheel can tell if the driver is looking out at the road or down at a smartphone. If the driver’s eyes are off the road for too long, the car will start beeping until the driver gets his eyes back on the road.

I got an idea of what this might look like when I visited Nauto, a startup that is developing sophisticated driver-monitoring technology. Nauto makes a windshield-mounted device that looks back at the driver and can tell if the driver isn’t looking at the road. If integrated into a self-driving car — or a conventional car, for that matter — this kind of technology could prevent a lot of distracted driving and highway deaths.

Of course, plenty of people drive regular cars without their hands on the wheel, meaning that driver-assistance systems could ironically wind up demanding more attention from drivers than conventional cars do.

Why almost-perfect autopilot can be dangerous

Transportation Sec'y Foxx Discusses Future Transportation Trends With Google CEO
Photo by Justin Sullivan/Getty Images

Chris Urmson, who led the team of Google’s self-driving car engineers until he left to found his own startup last year, describes the decision over whether to assist drivers or take the drivers out of the equation as “one of the big open debates in the space.”

Google — which recently renamed its self-driving car project Waymo — has been thinking about this problem for years. Back in 2014, Google self-driving car engineer Nathaniel Fairfield gave a talk at a computer vision conference describing how Google had built self-driving technology for freeway driving, then decided that the approach was too dangerous. Waymo customers will never be allowed to touch the steering wheel on its cars.

Early versions of Google’s self-driving car technology worked a lot like Tesla’s Autopilot or the Audi prototype I rode in a few weeks ago. The driver would be responsible for navigating surface streets at the beginning and end of a trip, but would be able to activate self-driving mode during freeway driving. Clearly marked roads and a lack of pedestrians and other obstacles make freeway driving a relatively easy computer science problem.

The problem, Fairfield said, was that people started trusting the system too quickly. “People have this curve where they go from somewhat unreasonable but plausible distrust to way, way overconfidence,” Fairfield said. After a few hours of seeing Google’s freeway driving technology in action, people came to have “complete and utter trust” in its efficacy.

“In-car cameras recorded employees climbing into the back seat, climbing out of an open car window, and even smooching while the car was in motion,” John Markoff reported in the New York Times.

A technology that drives perfectly for 100 or even 1,000 miles might still make a catastrophic mistake once in a while. But after mile after mile of flawless driving, it’s unlikely the driver will be paying close attention to the road. Which means that if the car suddenly encounters a problem it can’t handle, handing off control to the human driver could make things worse rather than better.

This is a problem that the aviation industry has grappled with for decades. In 2009, Air France flight 447 crashed into the ocean on the way from Rio de Janeiro to Paris. The problem: ice on the plane’s sensors caused the autopilot to disengage. The pilots, with little experience handling the plane without computer assistance, did exactly the wrong thing, pitching the plane’s nose up when they should have pushed it down. The plane stalled, and the resulting crash killed all 228 people on board.

“It’s clear that automation played a role in this accident, though there is some disagreement about what kind of role it played,” a 2015 Slate article on the crash argues. “Maybe it was a badly designed system that confused the pilots, or maybe years of depending on automation had left the pilots unprepared to take over the controls.”

This is the paradox of automated systems: The better an autopilot system is, the more human beings will come to depend on it and the worse-prepared they’ll be if they’re forced to suddenly take over control.

It’s easy to imagine something similar happening with self-driving cars — especially to younger drivers. As driver assistance technologies become more common, there will be a growing cohort of teenage drivers who have never driven without computer assistance. But if these cars are designed to fall back on a human driver in unexpected situations, there’s a danger that human drivers won’t be prepared, and will do exactly the wrong thing at a life-or-death moment.

And the closer self-driving cars get to full autonomy, the harder it will be to get drivers to pay attention. Even if cars can force drivers to keep their hands on the wheel and their eyes on the road, they might still zone out if they don’t have to actually make any decisions for thousands of miles in a row. Beyond a certain point, it might become safer for the car to just make its best guess about what to do rather than turn over control to a driver who is likely to be disoriented and out of practice.

Waymo wants to jump straight to full self-driving

North American International Auto Show Features Latest Car Models
John Krafcik, CEO of Waymo, debuts a customized Chrysler Pacifica Hybrid that will be used for Google's autonomous vehicle program at the 2017 North American International Auto Show on January 8, 2017 in Detroit.
Photo by Bill Pugliano/Getty Images

Google ultimately concluded that partially self-driving cars were a technological dead end. Instead, the company set itself a goal to build a fully self-driving vehicle — one that was so reliable it would never need intervention from a human driver.

Urmson, the former Google engineer, believes that driver assistance and full self-driving are “actually two distinct technologies.”

“In a driver-assistance system, most of the time it’s better to do nothing than to do something,” Urmson said in an April talk. “Only when you’re really, really, really sure that you’re going to prevent a bad event, that’s when you should trigger. That will guide you down a selection of technologies that will limit your ability to bridge over to the other side.”

In contrast, Waymo is trying to build cars that never hand off control to a human passenger. That means the software has to be able to choose a reasonable, safe response to every conceivable situation. And it means building in redundancies so that the car can respond gracefully even if key components fail.

“Each of our self-driving vehicles is equipped with a secondary computer to act as a backup in the rare event that the primary computer goes offline,” Google wrote in a 2016 report. “Its sole responsibility is to monitor the main computer and, if needed, safely pull over or come to a complete stop. It can even account for things like not stopping in the middle of an intersection where it could cause a hazard to other drivers.”
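The monitor-and-fail-over arrangement Google describes is a classic watchdog pattern. Here is a minimal sketch — not Waymo’s actual software, with an assumed timeout value — of a secondary computer that watches for heartbeats from the primary and triggers a safe stop if they cease:

```python
# Illustrative watchdog sketch only -- not Waymo's actual design.
# The secondary computer's sole job, per Google's report, is to monitor
# the primary and safely stop the car if the primary goes offline.
import time

HEARTBEAT_TIMEOUT = 0.5  # seconds of silence before failover (assumed value)

class BackupComputer:
    def __init__(self):
        self.last_heartbeat = time.monotonic()
        self.safe_stop_engaged = False

    def receive_heartbeat(self):
        """Called whenever the primary computer checks in."""
        self.last_heartbeat = time.monotonic()

    def check_primary(self):
        """Run periodically; engage a safe stop if the primary is silent."""
        if time.monotonic() - self.last_heartbeat > HEARTBEAT_TIMEOUT:
            # In a real vehicle this would pull over or come to a stop,
            # avoiding hazards like the middle of an intersection.
            self.safe_stop_engaged = True
        return self.safe_stop_engaged

backup = BackupComputer()
backup.receive_heartbeat()
assert not backup.check_primary()  # primary just checked in
```

The hard engineering problem, of course, is not the watchdog itself but making the fallback maneuver — pulling over gracefully — reliable with degraded hardware.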

Building software that can gracefully handle any contingency — and redundant, bulletproof hardware — is a much harder technical problem than building driver-assistance technology that counts on humans to handle tricky situations. But it also has a big upside: If the car never transfers control to a human driver, then it never has to worry about the driver being caught unprepared.

For now, Waymo does still have drivers behind the wheel, but these drivers are Waymo employees with special training on handling self-driving vehicles, who are presumably paid to continue paying attention to the road no matter how boring it gets. And in recent months, the job has been getting pretty boring. According to a regulatory filing, Waymo’s self-driving cars drove more than 600,000 miles in California and only had to hand over control to a human driver 124 times.

That works out to one disengagement every 5,000 miles, a four-fold improvement over 2015, and by far the best showing of any company testing on California roads. At that rate of progress, it’ll take a few more years for Waymo to surpass human levels of driving safety. If the California data is any indication, rivals like GM, Ford, BMW, and Mercedes have a lot of catching up to do.
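The arithmetic behind those figures is simple enough to check with the rounded numbers cited above:

```python
# Back-of-the-envelope check of the disengagement figures above,
# using the rounded numbers cited in the text.
miles_driven = 600_000   # "more than 600,000 miles" in California
disengagements = 124     # times control was handed back to a human

miles_per_disengagement = miles_driven / disengagements
print(round(miles_per_disengagement))  # 4839 -- roughly one every 5,000 miles

# A four-fold improvement over 2015 implies the 2015 rate was about:
print(round(miles_per_disengagement / 4))  # 1210 miles per disengagement
```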

While Waymo seems to be the clear leader in self-driving technology right now, the company is taking a big risk by trying to jump straight to full self-driving technology. Reaching the necessary level of reliability — going from 99.99 percent reliability to 99.9999 percent, say — might take several more years. During that time, companies taking a gradualist approach could steadily gain ground.

One big advantage of the gradualist approach is that it can allow car companies to collect a lot of data, and many experts believe that having a lot of data is crucial for building successful self-driving cars. Waymo’s cars have driven more than 3 million miles on public roads, providing the company with the raw data it uses to tune its software algorithms. In contrast, Tesla has collected more than 1 billion miles of real-world sensor data from its customers’ cars. All that extra data could allow Tesla to make more rapid progress toward a goal of full autonomy, allowing it to eventually catch and surpass Waymo’s early lead.

Disclosure: My brother works at Google.

Correction: My article initially said the Air France flight 447 crash was caused by pilots being disoriented in the seconds after the autopilot disengaged. But it’s more accurate to say the crash occurred because they were inexperienced at handling the plane without computer assistance. I’ve modified the article accordingly. Thanks to reader Mike Chowla for alerting me to the mistake.