Today, the U.S. Department of Transportation released its eagerly awaited “Federal Automated Vehicles Policy.” There’s a lot to like about the guidance document, beginning with the agency’s genuine embrace of the potential for highly automated vehicles (HAVs) to revolutionize this sector and save thousands of lives annually in the process.
It is important that we get HAV policy right, the DOT notes, because “35,092 people died on U.S. roadways in 2015 alone” and “94 percent of crashes can be tied to a human choice or error.” (p. 5) HAVs could help us reverse that trend and save thousands of lives and billions in economic costs annually. The agency also documents many other benefits associated with HAVs, such as increasing personal mobility, reducing traffic and pollution, and cutting infrastructure costs.
I will not attempt here to comment on every specific recommendation or guideline suggested in the new DOT guidance document. I could nit-pick about some of the specific recommended guidelines, but I think many of them are quite reasonable, whether they relate to safety, security, privacy, or state regulatory issues. Other issues still need to be addressed, and CEI’s Marc Scribner does a nice job documenting some of them in his response to the new guidelines.
Instead of discussing those specific issues today, I want to ask a more fundamental and far-reaching question that I have been writing about in recent papers and essays: Is this guidance or regulation? And what does the use of informal guidance mechanisms like these signal for the future of technological governance more generally?
When Is “Voluntary” Really Mandatory?
The surreal thing about DOT’s new driverless car guidance is how the agency repeatedly stresses that it “is not mandatory” and that the guidelines are voluntary in nature, but then — often in the same paragraph or sentence — the agency hints at how it might convert those recommendations into regulations in the near future. Consider this paragraph on p. 11 of the DOT’s new guidance document:
The Agency expects to pursue follow-on actions to this Guidance, such as performing additional research in areas such as benefits assessment, human factors, cybersecurity, performance metrics, objective testing, and others as they are identified in the future. As discussed, DOT further intends to hold public workshops and obtain public comment on this Guidance and the other elements of the Policy. This Guidance highlights important areas that manufacturers and other entities designing HAV systems should be considering and addressing as they design, test, and deploy HAVs. This Guidance is not mandatory. NHTSA may consider, in the future, proposing to make some elements of this Guidance mandatory and binding through future regulatory actions. This Guidance is not intended for States to codify as legal requirements for the development, design, manufacture, testing, and operation of automated vehicles. Additional next steps are outlined at the end of this Guidance. [emphasis added.]
The agency goes on to request that “manufacturers and other entities voluntarily provide reports regarding how the Guidance has been followed,” but then notes that “[t]his reporting process may be refined and made mandatory through a future rulemaking.” (p. 15)
And so it goes throughout the DOT’s new “guidance” document. With one breath the DOT suggests that everything is informal and voluntary; with the next it suggests that some form of regulation could be right around the proverbial corner.
Agency Threats Are the Future of Technological Governance
What’s going on here? In essence, DOT’s driverless car guidance is another example of how “soft law” and “agency threats” are becoming the dominant governance models for fast-paced emerging technology.
As noted by Tim Wu, a proponent of such regimes, these agency threats can include “warning letters, official speeches, interpretations, and private meetings with regulated parties.” “Soft law” simply refers to any sort of informal governance mechanism that agencies might seek to use to influence private decision-making or in this case the future course of technological innovation.
The problem with agency threats, as my former Mercatus Center colleague Jerry Brito pointed out in a 2014 law review article, is that they are fundamentally undemocratic and represent a betrayal of the rule of law. The use of “threat regimes,” Brito argued, “places undue power in the hands of regulators unconstrained by predictable procedures.” Such regimes breed uncertainty by leaving decisions up to the whim of regulators who will be unconstrained by administrative procedures, legal precedents, and strict timetables. “[B]ecause it has no limiting principle,” Brito concluded, the agency threats model “leaves the regulatory process without much meaning” and “would obviously be ripe for abuse.”
The danger is that we are witnessing gradual mission creep as the DOT’s “guidance” process slowly moves from being a truly voluntary self-certification process to something more akin to a pre-market approval process. Every “informal” request that DOT makes — even when those requests are just presented in the form of vague questions — opens the door to greater technocratic meddling in the innovation process by federal bureaucrats.
Coping with the Pacing Problem
Why are agencies like the DOT adopting this new playbook? In a nutshell, it comes down to the realization on their part that the “pacing problem” is now an undeniable fact of life.
I discussed the pacing problem at length in my recent review of Wendell Wallach’s important new book, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. Wallach nicely defined the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” “There has always been a pacing problem,” Wallach noted, but like other philosophers, he believes that modern technological innovation is occurring at an unprecedented pace, making it harder than ever to “govern” it using traditional legal and regulatory mechanisms.
Which is exactly why the DOT and a whole lot of other agencies are now defaulting to soft law and agency threat models as their old regimes struggle to keep up with the pace of modern technological innovation. As the DOT put it in its new guidance document: “The speed with which HAVs are advancing, combined with the complexity and novelty of these innovations, threatens to outpace the Agency’s conventional regulatory processes and capabilities.” (p. 8) More specifically, the agency notes that:
The remarkable speed with which increasingly complex HAVs are evolving challenges DOT to take new approaches that ensure these technologies are safely introduced (i.e., do not introduce significant new safety risks), provide safety benefits today, and achieve their full safety potential in the future. To meet this challenge, we must rapidly build our expertise and knowledge to keep pace with developments, expand our regulatory capability, and increase our speed of execution. (p. 6)
Rarely has any agency been quite so blunt about how it is racing to get ahead of the pacing problem before it completely loses control of the future course of technological innovation.
But the DOT is hardly alone in its increased reliance on soft law governance mechanisms. In fact, I’m in the early research stages of a new paper about what soft law and agency threat models mean for the future of emerging technology and its governance. In that paper, I hope to document how many different agencies (FAA, FDA, FTC, FCC, NTIA, & DOT among others) are using some variant of the soft law model to informally regulate the growing universe of emerging technologies out there today (commercial drones, connected medical devices, the Internet of Things, 3D printing, immersive technology, the sharing economy, driverless cars, and more).
If nothing else, I would like to devise a taxonomy of soft law/agency threat models and then discuss the upsides and downsides of those models. If anyone has recommendations for additional reading on this topic, please let me know. The best thing I have seen on the issue is a 2013 book of collected essays on Innovative Governance Models for Emerging Technologies, edited by Gary E. Marchant, Kenneth W. Abbott and Braden Allenby. I’m surprised more hasn’t been written about this in law reviews or political science journals.
What Does It Mean for Innovation? And Accountable Government?
So, what does all this mean for the future of driverless cars, autonomous systems, and other emerging technologies? I think it’s both good and bad news.
The good news — at least from the perspective of those of us who want to see innovators freed up to experiment more without prior restraint — is that the technological genie is increasingly out of the bottle. Technology regulators are at an impasse and they know it. Their old regulatory regimes are doomed to always be one step behind the action. Thus, a lot of technological innovation is going to be happening before any blessing has been given to engage in those experiments.
The bad news is that the regulatory regimes of the future will become almost hopelessly arbitrary in their contours and enforcement. Basically, in our new world of soft law and agency threats, you can tear up the Administrative Procedure Act and throw it out the window. When regulatory agencies act in the future, they will do so in a sort of extra-legal Twilight Zone, where things are not always as they seem. Agencies will increasingly act like nagging nannies, constantly pressuring innovators to behave themselves. And sometimes that nagging will work, and sometimes it will even improve consumer welfare at the margin! It will work sometimes precisely because government still wields a mighty big hammer and no innovator wants to be nailed to the ground in the courts, or the court of public opinion for that matter. Thus, many — not all, but many — of those innovators will go along with whatever agencies like DOT suggest as “best practices,” even if those guidelines are horribly misguided or have no force of law whatsoever. And because agencies know that many (perhaps most) innovators will fall in line with whatever “best practices” or “codes of conduct” they concoct, the legitimacy of this model will be reinforced, and it will become the new method of imposing their will on current or emerging technology sectors.
Again, agency threats won’t always work, because some innovators will continue to engage in rough forms of “technological civil disobedience” and just ignore a lot of these informal guidelines and agency threats. Agencies will push back and seek to make an example of specific innovators (especially the ones with deep pockets) in order to send a message to every other innovator out there that they had better fall in line or else!
But what that “or else!” moment or action looks like remains completely unclear. The problem with soft law is that, by its very nature, it is completely open-ended and fundamentally arbitrary. It is really just “non-law law.” That’s the “legal regime” that will “govern” the emerging technologies of the present and the future.
Isn’t Soft Law Better Than the Alternative?
Now, here’s the funny thing about this messy, arbitrary, unaccountable world of soft law and agency threats: It is probably a hell of a lot better than the old world we used to live in!
The old analog era regulatory systems were very top-down and command-and-control in orientation. These traditional regimes were driven by the desire of regulators to enforce policy priorities by imposing prior restraints on innovation and then selectively passing out permission slips to get around those rules.
As I noted in my latest book, the problem with those traditional regulatory systems is that they “tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may not ever come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things.” (Permissionless Innovation, p. 120)
For all the reasons I outlined in my book and other papers on these topics, “permissionless innovation” remains the superior policy default compared to precautionary principle-based prior restraints. But I am not so naïve as to expect that permissionless innovation will prevail in the policy world all of the time. Moreover, I am not one of those technological determinists who goes around saying that technology is an unstoppable force that relentlessly drives history, regardless of what policymakers say. I am more of a soft determinist who believes that technology often can be a major driver of history, but not without a significant shaping from other social, cultural, economic, and political forces.
Thus, as much as I worry about the new “soft law/agency threats” regime being arbitrary, unaccountable, and innovation-threatening, I know that the ideal of permissionless innovation will only rarely be our default policy regime. But I also don’t think we are going back to the old regulatory regimes of the past, and we absolutely wouldn’t want to anyway in light of the deleterious impacts those regimes had on innovation in practice.
The best bet for those of us who care about the freedom to innovate is to make sure that these soft law governance mechanisms have some oversight from Congress (unlikely) and the courts (more likely) when agencies push too far with informal agency threats. Better yet, we can hope that the pace of technological change continues to accelerate and pressures agencies to intervene only to address the most pressing problems, largely leaving the rest of the field wide open for continued experimentation with new and better ways of doing things.
But make no mistake about it: as today’s DOT guidance document for driverless cars makes clear, “agency threats” will increasingly shape the future of emerging technologies whether we like it or not.