Today, the U.S. Department of Transportation released its eagerly awaited “Federal Automated Vehicles Policy.” There’s a lot to like about the guidance document, beginning with the agency’s genuine embrace of the potential for highly automated vehicles (HAVs) to revolutionize this sector and save thousands of lives annually in the process.
It is important that we get HAV policy right, the DOT notes, because “35,092 people died on U.S. roadways in 2015 alone” and “94 percent of crashes can be tied to a human choice or error.” (p. 5) HAVs could help us reverse that trend and save thousands of lives and billions in economic costs annually. The agency also documents many other benefits associated with HAVs, such as increasing personal mobility, reducing traffic and pollution, and cutting infrastructure costs.
I will not attempt here to comment on every specific recommendation or guideline suggested in the new DOT guidance document. I could nitpick about some of the specific recommended guidelines, but I think many of them are quite reasonable, whether they relate to safety, security, privacy, or state regulatory issues. Other issues still need to be addressed, and CEI’s Marc Scribner does a nice job documenting some of them in his response to the new guidelines.
Instead of discussing those specific issues today, I want to ask a more fundamental and far-reaching question which I have been writing about in recent papers and essays: Is this guidance or regulation? And what does the use of informal guidance mechanisms like these signal for the future of technological governance more generally?
In previous essays here I have discussed the rise of “global innovation arbitrage” for genetic testing, drones, and the sharing economy. I argued that: “Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.” I’ve been working on a longer paper about this with Samuel Hammond, and in doing research on the issue, we keep finding interesting examples of this phenomenon.
The latest example comes from a terrific new essay (“Humans: Unsafe at Any Speed“) about driverless car technology by Wall Street Journal technology columnist L. Gordon Crovitz. He cites some important recent efforts by Ford and Google and he notes that they and other innovators will need to be given more flexible regulatory treatment if we want these life-saving technologies on the road as soon as possible. “The prospect of mass-producing cars without steering wheels or pedals means U.S. regulators will either allow these innovations on American roads or cede to Europe and Asia the testing grounds for self-driving technologies,” Crovitz observes. “By investing in autonomous vehicles, Ford and Google are presuming regulators will have to allow the new technologies, which are developing faster even than optimists imagined when Google started working on self-driving cars in 2009.”
This week, my Mercatus Center colleague Andrea Castillo and I filed comments with the White House Office of Science and Technology Policy (OSTP) in a proceeding entitled, “Preparing for the Future of Artificial Intelligence.” For more background on this proceeding and the accompanying workshops that OSTP has hosted on this issue, see this White House site.
In our comments, Andrea and I make the case for prudence, patience, and a continuing embrace of “permissionless innovation” as the appropriate policy framework for artificial intelligence (AI) technologies at this nascent stage of their development. Down below, I have pasted our full comments, which were limited to just 2,000 words as required by the OSTP. But we plan on releasing a much longer report on these issues in coming months. You can find the full version of the filing, including footnotes, here.
On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.
Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!—A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.
Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are some moments in the book when he resorts to some hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.
The success of the Internet and the modern digital economy was due to its open, generative nature, driven by the ethos of “permissionless innovation.” A “light-touch” policy regime helped make this possible. Of particular legal importance was the immunization of online intermediaries from punishing forms of liability associated with the actions of third parties.
As “software eats the world” and the digital revolution extends its reach to the physical world, policymakers should extend similar legal protections to other “generative” tools and platforms, such as robotics, 3D printing, and virtual reality.
I want to highlight an important new blog post (“Slow Down That Runaway Ethical Trolley“) on the ethical trade-offs at work with autonomous vehicle systems by Bryant Walker Smith, a leading expert on these issues. Writing over at Stanford University’s Center for Internet and Society blog, Smith notes that, while serious ethical dilemmas will always be present with such technologies, “we should not allow the perfect to be the enemy of the good.” He notes that many ethical philosophers, legal theorists, and media pundits have recently been actively debating variations of the classic “Trolley Problem,” and its ramifications for the development of autonomous or semi-autonomous systems. (Here’s some quick background on the Trolley Problem, a thought experiment involving the choices made during various no-win accident scenarios.) Commenting on the increased prevalence of the Trolley Problem in these debates, Smith observes that:
Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.
Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?
This Thanksgiving holiday season, an estimated 39 million people plan on traveling by car. Sadly, according to the National Safety Council, some 418 Americans may lose their lives on the roads over the next few days, in addition to over 44,000 injuries from car crashes.
In a new op-ed for the Orange County Register, Ryan Hagemann and I argue that many of these accidents and fatalities could be averted if more “intelligent” vehicles were on the road. That’s why it is so important that policymakers clear away roadblocks to intelligent vehicle technology (including driverless cars) as quickly as possible. The benefits would be absolutely enormous.