Making Sure the “Trolley Problem” Doesn’t Derail Life-Saving Innovation

by Adam Thierer on January 13, 2015

I want to highlight an important new blog post (“Slow Down That Runaway Ethical Trolley”) on the ethical trade-offs at work in autonomous vehicle systems by Bryant Walker Smith, a leading expert on these issues. Writing at Stanford University’s Center for Internet and Society blog, Smith notes that, while serious ethical dilemmas will always be present with such technologies, “we should not allow the perfect to be the enemy of the good.” He points out that many ethical philosophers, legal theorists, and media pundits have recently been debating variations of the classic “Trolley Problem” and its ramifications for the development of autonomous or semi-autonomous systems. (Here’s some quick background on the Trolley Problem, a thought experiment about the choices made in various no-win accident scenarios.) Commenting on the increased prevalence of the Trolley Problem in these debates, Smith observes:

Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.

Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?

That’s a great question and one that Ryan Hagemann and I put some thought into as part of our recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” That paper, which has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy, outlines the many benefits of autonomous or semi-autonomous systems and discusses the potential costs of delaying their widespread adoption. When it comes to “Trolley Problem”-like ethical questions, Hagemann and I argue that “these ethical considerations need to be evaluated against the backdrop of the current state of affairs, in which tens of thousands of people die each year in auto-related accidents due to human error.” We continue later in the paper:

Autonomous vehicles are unlikely to create 100 percent safe, crash-free roadways, but if they significantly decrease the number of people killed or injured as a result of human error, then we can comfortably suggest that the implications of the technology, as a whole, are a boon to society. The ethical underpinnings of what makes for good software design and computer-generated responses are a difficult and philosophically robust space for discussion. Given the abstract nature of the intersection of ethics and robotics, a more detailed consideration and analysis of this space must be left for future research. Important work is currently being done on this subject. But those ethical considerations must not derail ongoing experimentation with intelligent-vehicle technology, which could save many lives and have many other benefits, as already noted. Only through ongoing experimentation and feedback mechanisms can we expect to see constant improvement in how autonomous vehicles respond in these situations to further minimize the potential for accidents and harms. (p. 42-3)

None of this should be read to suggest that the ethical issues being raised by philosophers and other pundits are unimportant. To the contrary, they are raising legitimate concerns about how ethics are “baked into” the algorithms that control autonomous or semi-autonomous systems. It is vital that we continue to debate the wisdom of the choices made by the companies and programmers behind those technologies and consider better ways to inform and improve their judgments about how to ‘optimize the sub-optimal,’ so to speak. After all, when you are making decisions about how to minimize the potential for harm, including the loss of life, there are many thorny issues to consider, and all of them will have downsides. Smith considers a few when he notes:

Automation does not mean an end to uncertainty. How is an automated vehicle (or its designers or users) to immediately know what another driver will do? How is it to precisely ascertain the number or condition of passengers in adjacent vehicles? How is it to accurately predict the harm that will follow from a particular course of action? Even if specific ethical choices are made prospectively, this continuing uncertainty could frustrate their implementation.

Again, these are all valid questions deserving serious exploration, but we are not having this discussion in a vacuum. Ivory tower debates cannot be divorced from what is happening on real-world roads. Although road safety has been improving for many years, people are still dying at a staggering rate in vehicle-related accidents. Specifically, in 2012 there were 33,561 traffic fatalities (92 per day) and 2,362,000 people injured (6,454 per day) in more than 5,615,000 reported crashes. And, to reiterate, the bulk of those accidents were due to human error.
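For readers who want to check the per-day figures themselves, here is a quick back-of-the-envelope calculation, a minimal sketch in Python that simply divides the 2012 annual totals cited above by the 366 days of that leap year:

```python
# Back-of-the-envelope check of the 2012 per-day figures cited above.
# Annual totals are the 2012 U.S. numbers quoted in the text; 2012 had 366 days.
ANNUAL_FATALITIES = 33_561
ANNUAL_INJURIES = 2_362_000
DAYS_IN_2012 = 366  # leap year

fatalities_per_day = ANNUAL_FATALITIES / DAYS_IN_2012
injuries_per_day = ANNUAL_INJURIES / DAYS_IN_2012

print(f"Fatalities per day: {fatalities_per_day:.0f}")  # ~92
print(f"Injuries per day:   {injuries_per_day:.0f}")    # ~6,454
```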

That is a staggering toll, and anything we can do to significantly reduce it is something we should be pursuing with great vigor, even as we continue to sort through the challenging ethical issues associated with automated systems and algorithms. Smith argues, correctly in my opinion, that “a more practical approach in emergency situations may be to weight general rules of behavior: decelerate, avoid humans, avoid obstacles as they arise, stay in the lane, and so forth. … [T]his simplified approach would accept some failures in order to expedite and entrench what could be automation’s larger successes. As Voltaire reminds us, we should not allow the perfect to be the enemy of the good.”
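To make Smith’s suggestion a bit more concrete, here is a purely illustrative sketch of what weighting general rules of behavior might look like in code. Everything in it, the rule names, the weights, and the candidate maneuvers, is hypothetical and invented for illustration; it is not drawn from Smith’s work or from any actual vehicle software:

```python
# Illustrative only: a toy scoring of candidate emergency maneuvers against
# weighted general rules of behavior (decelerate, avoid humans, avoid
# obstacles, stay in the lane). Weights and scenarios are hypothetical.
from dataclasses import dataclass


@dataclass
class Maneuver:
    name: str
    decelerates: bool       # does the maneuver shed speed?
    avoids_humans: bool     # does it keep clear of people?
    avoids_obstacles: bool  # does it keep clear of fixed obstacles?
    stays_in_lane: bool     # does it hold the current lane?


# Hypothetical weights: avoiding humans dominates the other rules.
WEIGHTS = {
    "decelerates": 1.0,
    "avoids_humans": 10.0,
    "avoids_obstacles": 3.0,
    "stays_in_lane": 0.5,
}


def score(m: Maneuver) -> float:
    """Sum the weights of the rules that a maneuver satisfies."""
    return sum(weight for rule, weight in WEIGHTS.items() if getattr(m, rule))


candidates = [
    Maneuver("brake hard in lane", True, True, False, True),
    Maneuver("swerve onto shoulder", False, True, True, False),
    Maneuver("maintain speed in lane", False, False, False, True),
]

best = max(candidates, key=score)
print(f"Chosen maneuver: {best.name} (score {score(best):.1f})")
```

The point of the sketch is only that a handful of weighted, general rules can be specified and applied straightforwardly, which is the practical appeal Smith identifies relative to trying to resolve every “who to kill” dilemma in advance.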

Quite right. Indeed, the next time someone poses an ethical thought experiment along the lines of the Trolley Problem, do what I do and reverse the equation. Ask them about the ethics of slowing the introduction into our society of a technology that could (potentially significantly) lower the nearly 100 deaths and more than 6,000 injuries caused by vehicle-related accidents each day in the United States. Because that is no hypothetical thought experiment; that is the world we live in right now.

______________

(P.S. The late, great political scientist Aaron Wildavsky crafted a framework for considering these complex issues in his brilliant 1988 book, Searching for Safety. No book has had a more significant influence on my thinking about these and other “risk trade-off” issues since I first read it 25 years ago. I cannot recommend it highly enough. I discussed Wildavsky’s framework and vision in my recent little book on “Permissionless Innovation.” Readers might also be interested in my August 2013 essay, “On the Line between Technology Ethics vs. Technology Policy,” which featured an exchange with ethical philosopher Patrick Lin, co-editor of an excellent collection of essays on Robot Ethics: The Ethical and Social Implications of Robotics. You should add that book to your shelf if you are interested in these issues.)

 
