By Adam Thierer and Jennifer Huddleston Skees
There was horrible news from Tempe, Arizona this week as a pedestrian was struck and killed by a driverless car owned by Uber. This is the first fatality of its kind, and it is drawing widespread media attention as a result. According to both police statements and Uber itself, the investigation into the accident is ongoing and Uber is assisting authorities. While this is certainly a tragic event, we cannot let it cost us the life-saving potential of autonomous vehicles.
While any fatal traffic accident involving a driverless car is certainly sad, we cannot ignore the fact that, each and every day in the United States, letting human beings drive on public roads is proving far more dangerous. This single event has led some critics to ask why driverless cars are allowed to be tested on public roads at all before they have been proven 100% safe. Driverless cars can help reverse a public health disaster decades in the making, but only if policymakers allow real-world experimentation to continue.
Let’s be more concrete about this: Each day, Americans take 1.1 billion trips, driving 11 billion miles in vehicles that weigh, on average, between 1.5 and 2 tons. Sadly, about 100 people die and over 6,000 are injured each day in car accidents. An estimated 94% of these accidents are attributable to human error, and this deadly trend has been increasing as we become more distracted while driving. Moreover, according to the Centers for Disease Control and Prevention, almost 6,000 pedestrians were killed in traffic accidents in 2016, which means there was roughly one crash-related pedestrian death every 1.5 hours. In Arizona, the problem is even more pronounced: the state ranks 6th worst for pedestrians, and the Phoenix area ranks as the 16th worst metro area nationally for such accidents.
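That last figure is easy to verify with back-of-the-envelope arithmetic, using the rounded 6,000-death total rather than the CDC’s exact count:

\[
\frac{8{,}760 \text{ hours in a year}}{\approx 6{,}000 \text{ pedestrian deaths}} \approx 1.5 \text{ hours between deaths}
\]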
No matter how concerned the public is about the idea of autonomous vehicles on our roadways, one thing should be abundantly clear: Automated technologies can be part of the solution to the harms of our almost 100-year experiment with human drivers behind the wheel. The algorithms behind self-driving cars don’t get drunk, drowsy, or distracted. Unfortunately, humans do those things with great regularity, and the only way for autonomous vehicles to truly learn how to deal with the idiosyncrasies and irrationalities of human drivers is to interact with them in the “real world.” Every time a human driver gets behind the wheel, therefore, an “experiment” of sorts is underway, and we have seen far too often that these human-driven “experiments” on public roads end in catastrophe.
Because these human-caused accidents are so common, they don’t make headlines. While as many as 83% of people admit they are concerned about safety when driving, the aggregate death toll is so large that the numbers aren’t easy to “humanize” unless crashes involve people or places we know. As a result, we don’t heed the warnings, and we continue to engage in risky behavior by choosing to drive every day. But precisely because this week’s driverless car fatality in Arizona is so rare, it is making major news. If we turn a blind eye to all the lives lost to human error while fixating on this one driverless car fatality, we risk many more lives in the long run.
But what should be done when accidents or deaths occur and autonomous cars are involved?
First, we can dispense with the notion that driverless cars are completely unregulated. Anytime these vehicles operate on public roadways, they must still comply with traffic and safety laws. Driverless cars are programmed to operate in compliance with those laws and will be far more likely to do so than human operators. In fact, the concern is not that the cars won’t follow traffic laws, but how they will handle human lawlessness and our misguided reactions to them.
Second, when accidents like the one in Arizona this week do occur, courts are equipped to handle legal claims. This is how we have handled human-caused accidents for decades, and there is no reason to believe that the common law and the courts cannot evolve to handle new technology-created problems, too. The courts have an existing toolkit for handling both defective products and individual bad actors. Some manufacturers have even publicly stated they will accept liability if it is shown that the technology behind the autonomous vehicle caused the accident. Throughout history, courts have been able to apportion fault and deal with the specifics of particular cases without completely overhauling the common law for each new technology. It would be misguided to assume the courts could not determine the true cause of an accident involving an autonomous vehicle, given that they have been dealing with increasingly sophisticated products in a variety of fields for years.
Third, driverless car innovators are currently working together, and with government officials, to address the safety and security of these technologies. In both the Obama and Trump administrations, an open, collaborative effort has been underway to sketch out sensible safety and security policies while keeping innovation in this field moving forward. These conversations have resulted in guidance from the Department of Transportation that is flexible enough to adapt to the emerging technology while still promoting safe development and deployment. This flexible approach is the smart path forward, ensuring that we don’t let overly precautionary concerns block technology that could save many, many more lives.
The most effective way to achieve significant auto safety gains is to make sure that experimentation with new and better automotive technologies continues. That cannot all happen in a closed lab setting, and it cannot happen at all if it is stifled by heavy-handed regulation at every juncture. We need driverless cars on the roadways now more than ever, precisely because those machines need to learn to anticipate and correct for the many real-world scenarios that human drivers struggle with every day.
Any loss of human life is a tragedy. But we cannot let a rare incident cost us the long-term life-saving potential of autonomous vehicles. We also must not rush to conclude that the technology was at fault before knowing all the facts of any particular situation. While Uber has temporarily halted its technology trials, this tragic accident should be treated as a rarity we can learn from rather than a reason to stop moving forward.