Uber’s self-driving software detected the pedestrian in the fatal Arizona crash but did not react in time

Both the company’s internal investigation and the federal investigation are ongoing.

Screengrab from video taken inside Uber’s self-driving car

As part of its ongoing preliminary internal investigation, Uber has determined that its self-driving software did detect the pedestrian who was killed in a recent crash in Arizona but did not react immediately, according to The Information.

The software detected Elaine Herzberg, the 47-year-old woman struck by a semi-autonomous Volvo operated by Uber, as she was crossing the street, but it did not respond right away. That’s in part because the technology had been tuned to react less aggressively, or more slowly, to objects in its path that might be “false positives,” such as a plastic bag.

Both Uber and the National Transportation Safety Board launched investigations into the crash to determine whether the software was at fault. Both investigations are ongoing, but people briefed on some of the findings told The Information that the software was the likely cause of the crash.

Self-driving companies can tune their technology to be more or less cautious when maneuvering around obstacles on public roads. Typically, when the tech is less sophisticated, like the computer vision software that detects and classifies objects, companies make the vehicle overly cautious.

Those rides can be clumsy and punctuated by hard braking as the car stops for everything that might be in its path. According to The Information, Uber adjusted its system so it wouldn’t stop for potential false positives; as a result, it was unable to react immediately to Herzberg despite detecting her in its path.
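
As a rough illustration of that trade-off, here is a minimal, hypothetical sketch in Python. None of the names or numbers (Detection, should_brake, the confidence scores) come from Uber’s system; they are invented to show how raising a detection-confidence threshold can suppress reactions to low-confidence objects, real pedestrians included.

```python
# Hypothetical sketch of the tuning trade-off described above: a
# perception pipeline that ignores detections scoring below a
# confidence threshold. Raising the threshold cuts hard braking on
# "false positives" like plastic bags, but also raises the risk of
# dismissing a real obstacle. All names and values are illustrative,
# not Uber's actual software.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # classifier's best guess, e.g. "pedestrian"
    confidence: float  # 0.0-1.0 score that the object is a real obstacle

def should_brake(detections: list[Detection], threshold: float) -> bool:
    """Brake only for detections the system is confident are real."""
    return any(d.confidence >= threshold for d in detections)

frame = [Detection("plastic bag", 0.30), Detection("pedestrian", 0.55)]

# A cautious tuning (low threshold) brakes for everything in the frame.
print(should_brake(frame, threshold=0.25))  # True

# A less reactive tuning (high threshold) dismisses both detections as
# noise, pedestrian included: the failure mode the article describes.
print(should_brake(frame, threshold=0.60))  # False
```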

Uber has halted all of its self-driving tests on public roads and has hired Christopher Hart, the former chair of the NTSB, to help assess the safety protocols of its self-driving technology.

The company also said it was unable to comment on the investigation, as it is against NTSB policy to reveal any information before it has been vetted by the agency.

“We have initiated a top-to-bottom safety review of our self-driving vehicles program, and we have brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture,” an Uber spokesperson said in a statement. “Our review is looking at everything from the safety of our system to our training processes for vehicle operators, and we hope to have more to say soon.”

Herzberg’s death has ushered in an important debate about Uber’s safety protocols, as well as a broader debate about the safety of testing semi-autonomous technology on public roads. For example, companies typically use two safety drivers (people trained to take back control of the car) until they are completely confident in the capability of the tech. Uber, however, had only one vehicle operator in the car.

That’s in spite of the slow progress of Uber’s self-driving technology relative to that of other companies, like Waymo. As of February 2017, the company’s vehicle operators had to take back control of the car an average of once every mile, Recode first reported. As of March 2018, the company was still struggling to meet its goal of driving an average of 13 miles without a driver having to take back control, according to the New York Times.

Alphabet’s self-driving company, Waymo, reported a rate of 5,600 miles per intervention in California. (At the time, Uber pointed out that this is not the only metric by which to measure self-driving progress.)

But even with multiple vehicle operators, it’s unclear how dependable humans can be as a backup to a technology that is not yet fully developed. As CityLab previously reported, some Uber safety drivers shared those concerns.

This article originally appeared on Recode.net.