Do Self-driving Cars Really Mean Fewer Auto Accidents?

Self-driving cars have been making their way onto streets and highways for a few years now, and the technology continues to improve. But while they hold the promise of reducing accidents caused by human error, they are not yet error-free.

Several injuries and deaths involving self-driving vehicles have made the news, including the death of a pedestrian who was struck by a self-driving Uber in Tempe, Arizona, in March 2018. This was the first known traffic fatality involving a pedestrian and a self-driving car. The car was in self-driving mode with a human safety driver behind the wheel. A preliminary report on the accident showed that the car’s system recognized the need to brake when it detected the victim on the roadway. However, the emergency braking system had been disabled, leaving it to the human safety operator in the vehicle to brake when necessary. Unfortunately, the operator had been distracted watching a show on her mobile device and did not see the pedestrian. According to the report, the system did not alert her to the need to brake.

Following the fatal accident, Uber suspended testing of its self-driving vehicles in several states. Arizona Governor Doug Ducey then suspended Uber’s authority to test its vehicles in Arizona. By the end of May 2018, Uber announced that it would no longer test its self-driving vehicles in Arizona at all.

Arizona had been particularly keen to attract self-driving car companies. In 2015, Governor Ducey issued an executive order promising little in the way of regulation for companies wanting to test their self-driving vehicles in the state, and dozens of companies flocked to Arizona to take advantage of the permissive atmosphere.

Several companies, including Uber, GM, Ford, Waymo, and Tesla, are developing and testing self-driving cars. They vary widely in how far their technology has progressed, including how many miles their vehicles can travel without a human driver having to take control.

Some states, such as California, require manufacturers of self-driving cars to report incidents in which human drivers have to take control (also called an “intervention”). One site reported that in California, Waymo cars traveled approximately 5,600 miles without a human driver taking control or intervening, while General Motors reported one intervention approximately every 1,250 miles. Unlike California, Arizona does not require manufacturers testing self-driving cars to report incidents in which human drivers take control. The New York Times, however, was able to determine that as of March 2018, Uber was barely meeting its Arizona target of just 13 miles per intervention.

As of March 2018, there were an estimated 600 driverless cars on the road in Arizona, and as of November 2017, Waymo was testing cars with an operator in the vehicle but not behind the wheel. Because Arizona still does not require companies to report accidents involving self-driving vehicles, it is difficult to determine how many injuries and non-fatal accidents in the state have involved a self-driving vehicle.

Are we allowing these vehicles on public roads before all the issues that could affect public safety have been worked out? As the tragic accident involving the pedestrian in Tempe shows, even cars with a human safety driver are not infallible. There are also numerous other factors to consider, including insurance, reporting requirements, and liability. As the LA Times pointed out, self-driving cars are here, but are we, and our social and legal structures, ready for them?