The National Academies of Sciences, Engineering, and Medicine has released an important new report, “Assessing the Risks of Integrating Unmanned Aircraft Systems (UAS) into the National Airspace System.” In what the Wall Street Journal rightly refers to as an “unusually strongly worded report,” the group of experts assembled by the National Academies calls for a sea change in regulatory attitudes and policies toward Unmanned Aircraft Systems (or “drones”) and the nation’s airspace more generally.
The report uses the term “conservative” or “overly conservative” more than a dozen times to describe the Federal Aviation Administration’s (FAA) problematic current approach toward drones. It points out that the agency has “a culture with a near-zero tolerance for risk,” and that the agency needs to adjust that culture to take into account “the various ways in which this new technology may reduce risk and save lives.” (Ch. S, p. 2) The report continues:
The committee concluded that “fear of making a mistake” drives a risk culture at the FAA that is too often overly conservative, particularly with regard to UAS technologies, which do not pose a direct threat to human life in the same way as technologies used in manned aircraft. An overly conservative attitude can take many forms. For example, FAA risk avoidance behavior is often rewarded, even when it is excessively risk averse, and rewarded behavior is repeated behavior. Balanced risk decisions can be discounted, and FAA staff may conclude that allowing new risk could endanger their careers even when that risk is so minimal that it does not exceed established safety standards. The committee concluded that a better measure for the FAA to apply is to ask the question, “Can we make UAS as safe as other background risks that people experience daily?” As the committee notes, we do not ground airplanes because birds fly in the airspace, although we know birds can and do bring down aircraft.
[. . . ]
In many cases, the focus has been on “What might go wrong?” instead of a holistic risk picture: “What is the net risk/benefit?” Closely related to this is what the committee considers to be paralysis wherein ever more data are often requested to address every element of uncertainty in a new technology. Flight experience cannot be gained to generate these data due to overconservatism that limits approvals of these flights. Ultimately, the status quo is seen as safe. There is too little recognition that new technologies brought into the airspace by UAS could improve the safety of manned aircraft operations, or may mitigate, if not eliminate, some nonaviation risks. (p. S-2)
Importantly, the report makes it clear that the problem here is not just that “an overly conservative risk culture that overestimates the severity and the likelihood of UAS risk can be a significant barrier to introduction and development of these technologies,” but, more profoundly, the report highlights how, “Avoiding risk entirely by setting the safety target too high creates imbalanced risk decisions and can degrade overall safety and quality of life.” (p. 3-6,7) In other words, we should want a more open and common sense-oriented approach to drones, not only to encourage more life-enriching innovation, but also because it could actually make us safer as a result.
No Reward without Some Risk
What the National Academies report is really saying here is that there can be no reward without some risk. This is something I have spent a great deal of time writing about in my last book, a recent book chapter, and various other essays and journal articles over the past 25 years. As I noted in my last book, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.” If we want a wealthier, healthier, and safer society, we must embrace change and risk-taking to get us there.
This is exactly what the National Academies report is getting at when it notes that the FAA’s “overly conservative culture prevents safety beneficial operations from entering the airspace. The focus is on what might go wrong. More dialogue on potential benefits is needed to develop a holistic risk picture that addresses the question, What is the net risk/benefit?” (p. 3-10)
In other words, all safety regulation involves trade-offs, and if (to paraphrase a classic Hardin cartoon) we consider every potential risk except the risk of avoiding all risks, the result will be not only a decline in short-term innovation, but also a corresponding decline in safety and overall living standards over time.
Countless risk scholars have studied this process and come to the same conclusion. “We could virtually end all risk of failure by simply declaring a moratorium on innovation, change, and progress,” notes engineering historian Henry Petroski. But the costs to society of doing so would be catastrophic, of course. “The history of the human race would be dreary indeed if none of our forebears had ever been willing to accept risk in return for potential achievement,” observed H.W. Lewis, an expert on technological risk trade-offs.
The most important book ever written on this topic was Aaron Wildavsky’s 1988 masterpiece, Searching for Safety. Wildavsky warned of the dangers of “trial without error” reasoning and contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. Wildavsky argued that real wisdom is born of experience and that we can learn how to be wealthier and healthier as individuals and a society only by first being willing to embrace uncertainty and even occasional failure. As he put it:
The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. . . . Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.
When this logic takes the form of public policy prescriptions, it is referred to as the “precautionary principle,” which generally holds that, because new ideas or technologies could pose some theoretical danger or risk in the future, public policies should control or limit the development of such innovations until their creators can prove that they won’t cause any harms.
Again, if we adopt that attitude, human safety actually suffers because it holds back beneficial experiments aimed at improving the human condition. As the great economic historian Joel Mokyr argues, “technological progress requires above all tolerance toward the unfamiliar and the eccentric.” But the regulatory status quo all too often rejects “the unfamiliar and the eccentric” out of an abundance of caution. While usually well-intentioned, that sort of status quo thinking holds back new and better ways of doing old things, as well as entirely new things. The end result is that real health and safety advances are ignored or forgone.
How Status Quo Thinking at the FAA Results in Less Safety
This is equally true for air safety and FAA regulation of drones. “Ultimately, the status quo is seen as safe,” the National Academies report notes. “There is too little recognition that new technologies brought into the airspace by UAS could improve the safety of manned aircraft operations, or may mitigate, if not eliminate, some nonaviation risks.” Examples of the life-saving potential of drones have already been well documented.
Drones have already been used to monitor fires, help with search-and-rescue missions for missing people or animals, assist lifeguards by dropping life vests to drowning people, deliver medicines to remote areas, and help with disaster monitoring and recovery efforts. But that really just scratches the surface in terms of their potential.
Some people scoff at the idea of drones being used to deliver small packages to our offices or homes. But consider how many of those packages are delivered by human-operated vehicles that are far more likely to be involved in dangerous traffic accidents on our overcrowded roadways. If drones were used to make some of those deliveries, we might be able to save a lot of lives. Or consider an elderly person stuck at home during a storm, only to realize they are out of some essential good or medicine that is a long drive away. Are we better off having them (or someone else) get behind the wheel to go get it, or might a drone be able to deliver it more safely?
The authors of the National Academies report understand this, as they made clear when they concluded that, “operation of UAS has many advantages and may improve the quality of life for people around the world. Avoiding risk entirely by setting the safety target too high creates imbalanced risk decisions and can degrade overall safety and quality of life.” (Ch. 3, p. 5-6)
Reform Ideas: Use the “Innovator’s Presumption” & “Sunsetting Imperative”
Given that reality, the National Academies report makes several sensible reform recommendations aimed at countering the FAA’s hyper-conservatism and bias for the broken regulatory status quo. I won’t go through them all, but I think they are an excellent set of reforms that deserve to be taken seriously.
I do, however, want to highly recommend everyone take a close look at one outstanding recommendation in Chapter 3, which is aimed at keeping things moving and making sure that status quo thinking doesn’t freeze beneficial new forms of airspace innovation. Specifically, the National Academies report recommends that:
The FAA should meet requests for certifications or operations approvals with an initial response of “How can we approve this?” Where the FAA employs internal boards of executives throughout the agency to provide input on decisions, final responsibility and authority and accountability for the decision should rest with the executive overseeing such boards. A time limit should be placed on responses from each member of the board, and any “No” vote should be accompanied with a clearly articulated rationale and suggestion for how that “No” vote could be made a “Yes.” (Ch. 3, p. 8)
I absolutely love this reform idea because it essentially combines elements of two general innovation policy reform ideas that I discussed in my recent essay, “Converting Permissionless Innovation into Public Policy: 3 Reforms.” In that piece, I proposed the idea of instituting an “Innovator’s Presumption” that would read: “Any person or party (including a regulatory authority) who opposes a new technology or service shall have the burden to demonstrate that such proposal is inconsistent with the public interest.” I also proposed a so-called “Sunsetting Imperative” that would read: “Any existing or newly imposed technology regulation should include a provision sunsetting the law or regulation within two years.”
The National Academies recommendation above embodies the spirit of both the Innovator’s Presumption and the Sunsetting Imperative. It puts the burden of proof on opponents of change and then creates a sort of shot clock to keep things moving.
These are the kind of reforms we need to make sure status quo thinking at regulatory agencies doesn’t hold back life-enriching and life-saving innovations. It’s time for a change in the ways business is done at the FAA to make sure that regulations are timely, effective, and in line with common sense. Sadly, as the new National Academies report makes clear, today’s illogical policies governing airspace innovation are having counter-productive results that hurt society.