Technopanics & the Precautionary Principle

By Brent Skorup and Trace Mitchell

An important benefit of 5G cellular technology is more bandwidth and more reliable wireless services. This means carriers can offer more niche services, like smart glasses for the blind and remote assistance for autonomous vehicles. A Vox article last week explored an issue familiar to technology experts: will millions of new 5G transmitters and devices increase cancer risk? It’s an important question but, in short, we’re not losing sleep over it.

5G differs from previous generations of cellular technology in that “densification” is important–putting smaller transmitters throughout neighborhoods. This densification process means that cities must regularly approve operators’ plans to upgrade infrastructure and install devices on public rights-of-way. However, some homeowners and activists are resisting 5G deployment because they fear more transmitters will lead to more radiation and cancer. (Under federal law, the FCC has safety requirements for emitters like cell towers and 5G transmitters. Therefore, state and local regulators are not allowed to make permitting decisions based on what they or their constituents believe are the effects of wireless emissions.)

We aren’t public health experts; however, we are technology researchers and decided to explore the telecom data to see if there is a relationship. If radio transmissions increase cancer, we should expect to see a correlation between the number of cellular transmitters and cancer rates. Presumably there is a cumulative effect: the more cellular radiation people are exposed to, the higher the cancer rates.
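To make that test concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of correlation check we have in mind. The numbers below are hypothetical placeholders, not the actual transmitter counts or cancer-incidence data:

```python
# Illustrative sketch only: all figures below are hypothetical placeholders,
# not the actual FCC transmitter counts or cancer-incidence statistics.
from scipy.stats import pearsonr

years = list(range(2000, 2016))
# Hypothetical steady growth in transmitter counts (~300% over 15 years)
transmitters = [100_000 + 20_000 * i for i in range(len(years))]
# Hypothetical flat nervous-system cancer incidence (cases per 100,000 people)
cancer_rate = [6.4, 6.3, 6.5, 6.4, 6.4, 6.5, 6.3, 6.4,
               6.4, 6.5, 6.4, 6.3, 6.4, 6.5, 6.4, 6.4]

r, p = pearsonr(transmitters, cancer_rate)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")
# A cumulative-exposure effect would show up as a strong positive r;
# a flat incidence series against rising transmitter counts yields r near zero.
```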

From what we can tell, there is no link between cellular systems and cancer. Despite a huge increase in the number of transmitters in the US since 2000, the nervous system cancer rate hasn’t budged. In the US, the number of wireless transmitters has increased massively–300%–in 15 years. (This is on the conservative side–there are tens of millions of WiFi devices that are also transmitting but are not counted here.) Continue reading →

I recently posted an essay over at The Bridge about “The Pacing Problem and the Future of Technology Regulation.” In it, I explain why the pacing problem—the notion that technological innovation is increasingly outpacing the ability of laws and regulations to keep up—“is becoming the great equalizer in debates over technological governance because it forces governments to rethink their approach to the regulation of many sectors and technologies.”

In this follow-up article, I wanted to expand upon some of the themes developed in that essay and discuss how they relate to two other important concepts: the “Collingridge Dilemma” and technological determinism. In doing so, I will build on material that is included in a forthcoming law review article I have co-authored with Jennifer Skees and Ryan Hagemann (“Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future”), as well as a book I am finishing up on the growth of “evasive entrepreneurialism” and “technological civil disobedience.”

Recapping the Nature of the Pacing Problem

First, let us quickly recap the nature of “the pacing problem.” I believe Larry Downes did the best job explaining the “problem” in his 2009 book, The Laws of Disruption. Downes argued that “technology changes exponentially, but social, economic, and legal systems change incrementally” and that this “law” was becoming “a simple but unavoidable principle of modern life.” Continue reading →

By Andrea O’Sullivan and Adam Thierer (First published at The Bridge on August 1, 2018.)

Technology is changing the ways that entrepreneurs interact with, and increasingly get away from, existing government regulations. The ongoing legal battles surrounding 3D-printed weapons provide yet another timely example.

For years, a consortium of techies called Defense Distributed has sought to secure more protections for gun owners by making available online the code that allows someone to print their own guns. Rather than taking their fight to Capitol Hill and spending billions of dollars lobbying in potentially fruitless pursuits of marginal legislative victories, Defense Distributed ties its fortunes to the mast of technological determinism and blurs the lines between regulated physical reality and the open world of cyberspace.

The federal government moved fast, with gun control advocates like Senator Chuck Schumer (D-NY) and former Representative Steve Israel (D-NY) proposing legislation to criminalize Defense Distributed’s activities. They failed.

Plan B in the efforts to quash these acts of 3D-printing disobedience was to classify the computer-aided design (CAD) files that Defense Distributed posted online as a kind of internationally controlled munition. The US State Department engaged in a years-long legal brawl over whether Defense Distributed had violated established International Traffic in Arms Regulations (ITAR). The group pulled down the files while the issue was examined in court, but the code had long since been uploaded to sharing sites like The Pirate Bay. The files have also been available on the Internet Archive for many years. The CAD, if you will excuse the pun, is out of the bag.

In a surprising turn, the Department of Justice moved to drop the suit and settle with Defense Distributed last month. It agreed to cover the group’s legal fees and cease its attempt to regulate code already easily accessible online. While no legal precedent was set, since this was merely a settlement, it is likely that the government realized its case would be unwinnable.

Gun control advocates did not react well to this legal retreat. Continue reading →

The National Academies of Sciences, Engineering, and Medicine has released an amazing new report focused on “Assessing the Risks of Integrating Unmanned Aircraft Systems (UAS) into the National Airspace System.” In what the Wall Street Journal rightly refers to as an “unusually strongly worded report,” the group of experts assembled by the National Academies calls for a sea change in regulatory attitudes and policies toward regulation of Unmanned Aircraft Systems (or “drones”) and the nation’s airspace more generally.

The report uses the term “conservative” or “overly conservative” more than a dozen times to describe the Federal Aviation Administration’s (FAA) problematic current approach toward drones. The committee points out that the agency has “a culture with a near-zero tolerance for risk,” and that the agency needs to adjust that culture to take into account “the various ways in which this new technology may reduce risk and save lives.” (Ch. S, p. 2) The report goes on to say:

The committee concluded that “fear of making a mistake” drives a risk culture at the FAA that is too often overly conservative, particularly with regard to UAS technologies, which do not pose a direct threat to human life in the same way as technologies used in manned aircraft. An overly conservative attitude can take many forms. For example, FAA risk avoidance behavior is often rewarded, even when it is excessively risk averse, and rewarded behavior is repeated behavior. Balanced risk decisions can be discounted, and FAA staff may conclude that allowing new risk could endanger their careers even when that risk is so minimal that it does not exceed established safety standards. The committee concluded that a better measure for the FAA to apply is to ask the question, “Can we make UAS as safe as other background risks that people experience daily?” As the committee notes, we do not ground airplanes because birds fly in the airspace, although we know birds can and do bring down aircraft.

[. . . ]

In many cases, the focus has been on “What might go wrong?” instead of a holistic risk picture: “What is the net risk/benefit?” Closely related to this is what the committee considers to be paralysis wherein ever more data are often requested to address every element of uncertainty in a new technology. Flight experience cannot be gained to generate these data due to overconservatism that limits approvals of these flights. Ultimately, the status quo is seen as safe. There is too little recognition that new technologies brought into the airspace by UAS could improve the safety of manned aircraft operations, or may mitigate, if not eliminate, some nonaviation risks. (p. S-2)

Importantly, the report makes it clear that the problem here is not just that “an overly conservative risk culture that overestimates the severity and the likelihood of UAS risk can be a significant barrier to introduction and development of these technologies.” More profoundly, it highlights how “Avoiding risk entirely by setting the safety target too high creates imbalanced risk decisions and can degrade overall safety and quality of life.” (pp. 3-6, 3-7) In other words, we should want a more open and common sense-oriented approach to drones, not only to encourage more life-enriching innovation, but also because it could actually make us safer as a result.

No Reward without Some Risk

What the National Academies report is really saying here is that there can be no reward without some risk. This is something I have spent a great deal of time writing about in my last book, a recent book chapter, and various other essays and journal articles over the past 25 years. As I noted in my last book, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.” If we want a wealthier, healthier, and safer society, we must embrace change and risk-taking to get us there.

This is exactly what the National Academies report is getting at when it notes that the FAA’s “overly conservative culture prevents safety beneficial operations from entering the airspace. The focus is on what might go wrong. More dialogue on potential benefits is needed to develop a holistic risk picture that addresses the question, What is the net risk/benefit?” (p. 3-10)

In other words, all safety regulation involves trade-offs, and if (to paraphrase a classic Hardin cartoon) we consider every potential risk except the risk of avoiding all risks, the result will be not only a decline in short-term innovation, but also a corresponding decline in safety and overall living standards over time.

Countless risk scholars have studied this process and come to the same conclusion. “We could virtually end all risk of failure by simply declaring a moratorium on innovation, change, and progress,” notes engineering historian Henry Petroski. But the costs to society of doing so would be catastrophic, of course. “The history of the human race would be dreary indeed if none of our forebears had ever been willing to accept risk in return for potential achievement,” observed H.W. Lewis, an expert on technological risk trade-offs.

The most important book ever written on this topic was Aaron Wildavsky’s 1988 masterpiece, Searching for Safety. Wildavsky warned of the dangers of “trial without error” reasoning and contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. Wildavsky argued that real wisdom is born of experience and that we can learn how to be wealthier and healthier as individuals and a society only by first being willing to embrace uncertainty and even occasional failure. As he put it:

The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. . . . Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.

When this logic takes the form of public policy prescriptions, it is referred to as the “precautionary principle,” which generally holds that, because new ideas or technologies could pose some theoretical danger or risk in the future, public policies should control or limit the development of such innovations until their creators can prove that they won’t cause any harms.

Again, if we adopt that attitude, human safety actually suffers because it holds back beneficial experiments aimed at improving the human condition. As the great economic historian Joel Mokyr argues, “technological progress requires above all tolerance toward the unfamiliar and the eccentric.” But the regulatory status quo all too often rejects “the unfamiliar and the eccentric” out of an abundance of caution. While usually well-intentioned, that sort of status quo thinking holds back new and better ways of doing old things, as well as entirely new things. The end result is that real health and safety advances are ignored or forgone.

How Status Quo Thinking at the FAA Results in Less Safety

This is equally true for air safety and FAA regulation of drones. “Ultimately, the status quo is seen as safe,” the National Academies report notes. “There is too little recognition that new technologies brought into the airspace by UAS could improve the safety of manned aircraft operations, or may mitigate, if not eliminate, some nonaviation risks.” The life-saving potential of drones has already been well documented.

Drones have already been used to monitor fires, help with search-and-rescue missions for missing people or animals, assist lifeguards by dropping life vests to drowning people, deliver medicines to remote areas, and help with disaster monitoring and recovery efforts. But that really just scratches the surface in terms of their potential.

Some people scoff at the idea of drones being used to deliver small packages to our offices or homes. But consider how many of those packages are delivered by human-operated vehicles that are far more likely to be involved in dangerous traffic accidents on our overcrowded roadways. If drones were used to make some of those deliveries, we might be able to save a lot of lives. Or how about an elderly person stuck at home during a storm, who realizes they are out of some essential good or medicine that is a long drive away? Are we better off having them (or someone else) get behind the wheel to drive and get it, or might a drone be able to deliver it more safely?

The authors of the National Academies report understand this, as they made clear when they concluded that, “operation of UAS has many advantages and may improve the quality of life for people around the world. Avoiding risk entirely by setting the safety target too high creates imbalanced risk decisions and can degrade overall safety and quality of life.” (Ch. 3, p. 5-6)

Reform Ideas: Use the “Innovator’s Presumption” & “Sunsetting Imperative”

Given that reality, the National Academies report makes several sensible reform recommendations aimed at countering the FAA’s hyper-conservatism and bias for the broken regulatory status quo. I won’t go through them all, but I think they are an excellent set of reforms that deserve to be taken seriously.

I do, however, want to recommend that everyone take a close look at one outstanding recommendation in Chapter 3, which is aimed at keeping things moving and making sure that status quo thinking doesn’t freeze beneficial new forms of airspace innovation. Specifically, the National Academies report recommends that:

The FAA should meet requests for certifications or operations approvals with an initial response of “How can we approve this?” Where the FAA employs internal boards of executives throughout the agency to provide input on decisions, final responsibility and authority and accountability for the decision should rest with the executive overseeing such boards. A time limit should be placed on responses from each member of the board, and any “No” vote should be accompanied with a clearly articulated rationale and suggestion for how that “No” vote could be made a “Yes.” (Ch. 3, p. 8)

I absolutely love this reform idea because it essentially combines elements of two general innovation policy reform ideas that I discussed in my recent essay, “Converting Permissionless Innovation into Public Policy: 3 Reforms.” In that piece, I proposed the idea of instituting an “Innovator’s Presumption” that would read: “Any person or party (including a regulatory authority) who opposes a new technology or service shall have the burden to demonstrate that such proposal is inconsistent with the public interest.” I also proposed a so-called “Sunsetting Imperative” that would read: “Any existing or newly imposed technology regulation should include a provision sunsetting the law or regulation within two years.”

The National Academies report recommendation above basically embodies the spirit of both the Innovator’s Presumption and the Sunsetting Imperative. It puts the burden of proof on opponents of change and then creates a sort of shot clock to keep things moving.
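For readers who like to think in code, here is a toy sketch of how such a burden-shifting shot clock might operate. The class names, fields, and the 60-day window are hypothetical illustrations of the idea, not anything specified in the report:

```python
# Toy model of an "Innovator's Presumption" plus a review shot clock.
# Purely illustrative; the 60-day window and field names are hypothetical.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Objection:
    rationale: str      # a "No" vote must come with a clearly articulated rationale...
    path_to_yes: str    # ...and a suggestion for how it could be made a "Yes"

@dataclass
class ApprovalRequest:
    filed: date
    deadline_days: int = 60                       # hypothetical shot clock
    objections: list[Objection] = field(default_factory=list)

    def decide(self, today: date) -> str:
        # Burden of proof sits with objectors: only a fully substantiated
        # "No" (rationale plus a path to "Yes") blocks the request.
        if any(o.rationale and o.path_to_yes for o in self.objections):
            return "needs resolution"
        # Shot-clock logic: silence past the deadline means approval.
        if today >= self.filed + timedelta(days=self.deadline_days):
            return "approved by default"
        return "pending"

# Example: a request filed 90 days ago with no substantiated objection
request = ApprovalRequest(filed=date.today() - timedelta(days=90))
print(request.decide(date.today()))  # -> "approved by default"
```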

These are the kind of reforms we need to make sure status quo thinking at regulatory agencies doesn’t hold back life-enriching and life-saving innovations. It’s time for a change in the ways business is done at the FAA to make sure that regulations are timely, effective, and in line with common sense. Sadly, as the new National Academies report makes clear, today’s illogical policies governing airspace innovation are having counter-productive results that hurt society.

On Monday, April 16th, the Technology Policy Institute hosted an event on “Facebook & Cambridge Analytica: Regulatory & Policy Implications.” I was invited to deliver some remarks on a panel that included Howard Beales of George Washington University, Stuart Ingis of Venable LLP, Josephine Wolff of the Rochester Institute of Technology, and Thomas Lenard of TPI, who moderated. I offered some thoughts about the potential trade-offs associated with treating Facebook like a regulated public utility. I wrote an essay here last week on that topic. My remarks at the event begin at the 13:45 mark of the video.


By Adam Thierer and Jennifer Huddleston Skees

There was horrible news from Tempe, Arizona this week as a pedestrian was struck and killed by a driverless car owned by Uber. This is the first fatality of its type and is drawing widespread media attention as a result. According to both police statements and Uber itself, the investigation into the accident is ongoing, and the company is assisting investigators. While this certainly is a tragic event, we cannot let it cost us the life-saving potential of autonomous vehicles.

While any fatal traffic accident involving a driverless car is certainly sad, we can’t ignore the fact that, each and every day in the United States, letting human beings drive on public roads is proving far more dangerous. This single event has led some critics to wonder why we allow driverless cars to be tested on public roads at all before they have been proven to be 100% safe. Driverless cars can help reverse a public health disaster decades in the making, but only if policymakers allow real-world experimentation to continue.

Let’s be more concrete about this: Each day, Americans take 1.1 billion trips driving 11 billion miles in vehicles that weigh on average between 1.5 and 2 tons. Sadly, about 100 people die and over 6,000 are injured each day in car accidents. 94% of these accidents have been shown to be attributable to human error, and this deadly trend has been increasing as we become more distracted while driving. Moreover, according to the Centers for Disease Control and Prevention, almost 6,000 pedestrians were killed in traffic accidents in 2016, which means there was roughly one crash-related pedestrian death every 1.6 hours. In Arizona, the issue is even more pronounced, with the state ranked 6th worst for pedestrians and the Phoenix area ranked the 16th worst metro for such accidents nationally.
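These rates are easy to sanity-check with back-of-envelope arithmetic. The snippet below uses the rounded figures from the paragraph above, not the exact CDC counts:

```python
# Back-of-envelope check of the rounded figures cited above
# (illustrative arithmetic only; the exact CDC counts differ slightly).
hours_per_year = 365 * 24                       # 8,760 hours

for deaths_per_year in (5_500, 6_000):          # "almost 6,000" pedestrian deaths
    hours_between = hours_per_year / deaths_per_year
    print(f"{deaths_per_year:,} deaths/yr -> one every {hours_between:.1f} hours")

print(f"{100 * 365:,} car-accident deaths/yr")  # "about 100 per day" -> 36,500
```

Continue reading →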

Reason magazine recently published my review of Franklin Foer’s new book, World Without Mind: The Existential Threat of Big Tech. My review begins as follows:

If you want to sell a book about tech policy these days, there’s an easy formula to follow.

First you need a villain. Google and Facebook should suffice, but if you can throw in Apple, Amazon, or Twitter, that’s even better. Paint their CEOs as either James Bond baddies bent on world domination or naive do-gooders obsessed with the quixotic promise of innovation.

Finally, come up with a juicy Chicken Little title. Maybe something like World Without Mind: The Existential Threat of Big Tech. Wait—that one’s taken. It’s the title of Franklin Foer’s latest book, which follows this familiar techno-panic template almost perfectly.

The book doesn’t break a lot of new ground; it serves up the same old technopanicky tales of gloom-and-doom that many others have said will befall us unless something is done to save us. But Foer’s unique contribution is to unify many diverse strands of modern tech criticism in one tome, and then amp up the volume of panic about it all. Hence, the “existential” threat in the book’s title. I bet you didn’t know the End Times were so near!

Read the rest of my review over at Reason. And, if you care to read some of my other essays on technopanics through the ages, here’s a compendium of them.

“First electricity, now telephones. Sometimes I feel as if I were living in an H.G. Wells novel.” –Dowager Countess, Downton Abbey

Every technology we take for granted was once new, different, disruptive, and often ridiculed and resisted as a result. Electricity, telephones, trains, and television all once caused widespread fears, in the way that robots, artificial intelligence, and the internet of things do today. Typically, most people come to realize that these fears were misplaced and overly pessimistic; the technology diffuses, and we can barely remember our lives without it. But in recent technopanics, there has been a concern that the legal system is not properly equipped to handle the possible harms or concerns from these new technologies. As a result, there are often calls to regulate or rein in their use.

In the early 1980s, video cassette recorders (VCRs) caused a legal technopanic. The concern was not that VCRs would lead to some bizarre human mutation, as in many technopanics, but rather that the existing system of copyright infringement and vicarious liability could not adequately address the potential harm to the motion picture industry. The then-president of the Motion Picture Association of America, Jack Valenti, famously told Congress, “I say to you that the VCR is to the American film producer and the American public as the Boston Strangler is to the woman home alone.”

Continue reading →

“Responsible research and innovation,” or “RRI,” has become a major theme in academic writing and conferences about the governance of emerging technologies. RRI might be considered just another variant of corporate social responsibility (CSR), and it indeed borrows from that heritage. What makes RRI unique, however, is that it is more squarely focused on mitigating the potential risks that could be associated with various technologies or technological processes. RRI is particularly concerned with “baking” certain values and design choices into the product lifecycle before new technologies are released into the wild.

In this essay, I want to consider how RRI lines up with the opposing technological governance regimes of “permissionless innovation” and the “precautionary principle.” More specifically, I want to address the question of whether “permissionless innovation” and “responsible innovation” are even compatible. While participating in recent university seminars and other tech policy events, I have encountered a certain degree of skepticism—and sometimes outright hostility—after suggesting that, properly understood, “permissionless innovation” and “responsible innovation” are not warring concepts and that RRI can co-exist peacefully with a legal regime that adopts permissionless innovation as its general tech policy default. Indeed, the application of RRI lessons and recommendations can strengthen the case for adopting a more “permissionless” approach to innovation policy in the United States and elsewhere. Continue reading →

I’ve written here before about the problems associated with the “technopanic mentality,” especially when it comes to how technopanics sometimes come to shape public policy decisions and restrict important new, life-enriching innovations. As I argued in a recent book, the problem with this sort of Chicken Little thinking is that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

Perhaps the worst thing about worst-case thinking is how short-sighted and hubristic it can be. The technopanic crowd often has an air of snootiness about them in that they ask us to largely ignore the generally positive long-run trends associated with technological innovation and instead focus on hypothetical fears about an uncertain future that apparently only they can foresee. This is the case whether they are predicting the destruction of jobs, the economy, lifestyles, or culture. Techno-dystopia lies just around the corner, they say, while the rest of us are ignorant sheep who just can’t see it coming!

In his wonderful 2013 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson correctly noted that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.”

Stated differently, the doomsayers are guilty of a type of social and technical arrogance. They are almost always wrong on history, wrong on culture, and wrong on facts. Again and again, humans have proven remarkably resilient in the face of technological change and have overcome short-term adversity. Yet the technopanic pundits are almost never called out for their elitist attitudes when their prognostications are later proven wildly off-base. Even more concerning, their Chicken Little antics lead them and others to ignore the more serious risks that may actually exist and that are worthy of our attention.

Here’s a nice example of that last point that comes from a silent film made all the way back in 1911! Continue reading →