Articles by Adam Thierer

Adam Thierer is a Senior Fellow in Technology & Innovation at the R Street Institute in Washington, DC. He was formerly a senior research fellow at the Mercatus Center at George Mason University, President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and a Fellow in Economic Policy at the Heritage Foundation.


I recently posted an essay over at The Bridge about “The Pacing Problem and the Future of Technology Regulation.” In it, I explain why the pacing problem—the notion that technological innovation is increasingly outpacing the ability of laws and regulations to keep up—“is becoming the great equalizer in debates over technological governance because it forces governments to rethink their approach to the regulation of many sectors and technologies.”

In this follow-up article, I want to expand upon some of the themes developed in that essay and discuss how they relate to two other important concepts: the “Collingridge Dilemma” and technological determinism. In doing so, I will build on material from a forthcoming law review article I have co-authored with Jennifer Skees and Ryan Hagemann (“Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future”), as well as a book I am finishing up on the growth of “evasive entrepreneurialism” and “technological civil disobedience.”

Recapping the Nature of the Pacing Problem

First, let us quickly recap the nature of “the pacing problem.” I believe Larry Downes did the best job explaining the “problem” in his 2009 book, The Laws of Disruption. Downes argued that “technology changes exponentially, but social, economic, and legal systems change incrementally” and that this “law” was becoming “a simple but unavoidable principle of modern life.” Continue reading →
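
To see the logic of Downes’s “law” at work, here is a toy numerical sketch. The doubling period and revision cycle are my illustrative assumptions, not Downes’s figures:

```python
# Toy model of the "pacing problem" (illustrative assumptions, not Downes's data):
# technological capability grows exponentially (doubling every 2 years),
# while the rules are only revised to catch up once every 10 years.

DOUBLING_PERIOD = 2    # years for capability to double (assumed)
REVISION_CYCLE = 10    # years between major legal revisions (assumed)

capability = 1.0       # relative level of what technology can do
rules_cover = 1.0      # capability level the current rules were written for

for year in range(1, 21):
    capability *= 2 ** (1 / DOUBLING_PERIOD)   # exponential change
    if year % REVISION_CYCLE == 0:
        rules_cover = capability               # periodic catch-up revision
    gap = capability / rules_cover
    print(f"year {year:2d}: capability {capability:6.1f}x, "
          f"rules cover {rules_cover:6.1f}x, gap {gap:5.1f}x")
```

On these assumptions, the gap between what technology can do and what the rules contemplate grows past 20x just before each revision and then snaps back, which is the dynamic Downes’s aphorism captures.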

The ongoing ride-sharing wars in New York City are interesting to watch because they signal the potential move by state and local officials to use infrastructure management as an indirect form of innovation control or competition suppression. It is getting harder for state and local officials to defend barriers to entry and innovation using traditional regulatory rationales and methods, which are usually little more than a front for cronyist protectionism schemes. Now that the public has increasingly enjoyed new choices and better services in this and other fields thanks to technological innovation, it is very hard to convince citizens they would be better off without more of the same.

If, however, policymakers claim that they are limiting entry or innovation based on concerns about how disruptive actors supposedly harm local infrastructure (in the form of traffic or sidewalk congestion, aesthetic nuisance, deteriorating infrastructure, etc.), that narrative can perhaps make it easier to sell the resulting regulations to the public or, more importantly, the courts. Going forward, I suspect that this will become a commonly used playbook for many state and local officials looking to limit the reach of new technologies, including ride-sharing companies, electric scooters, driverless cars, drones, and many others.

To be clear, infrastructure control is both (a) a legitimate state and local prerogative; and (b) something that has been used in the past to control innovation and entry in other sectors. But I suspect that this approach is about to become far more prevalent because a full-frontal defense of barriers to innovation is far more likely to face serious public and legal challenges. Continue reading →

[first published at The Bridge on August 9, 2018]

What happens when technological innovation outpaces the ability of laws and regulations to keep up?

This phenomenon is known as “the pacing problem,” and it has profound ramifications for the governance of emerging technologies. Indeed, the pacing problem is becoming the great equalizer in debates over technological governance because it forces governments to rethink their approach to the regulation of many sectors and technologies.

The Innovation Cornucopia

Had Rip Van Winkle woken up from his famous nap today, he’d be shocked by all the changes around him. At-home genetics tests, personal drones, driverless cars, lab-grown meats, and 3D-printed prosthetic limbs are just some of the amazing innovations that would boggle his mind. New devices and services are flying at us so rapidly that we sometimes forget that most did not even exist a short time ago. Continue reading →

FCC Chairman Ajit Pai recently delivered an excellent speech at the Resurgent Conference in Austin, TX. In it, he stressed the importance of adopting a permissionless innovation policy vision to ensure a bright future for technology, economic growth, and consumer welfare. The whole thing is worth your time, but the last two paragraphs make two essential points worth highlighting.

Pai correctly notes that we should reject the sort of knee-jerk hysteria or technopanic mentality that sometimes accompanies new technologies. Instead, we should have some patience and humility in the face of uncertainty and be open to new ideas and technological creations.

“Here’s the bottom line,” Pai concludes:

Whenever a technological innovation creates uncertainty, some will always have the knee-jerk reaction to presume it’s bad. They’ll demand that we do whatever’s necessary to maintain the status quo. Strangle it with a study. Call for a commission. Bemoan those supposedly left behind. Stipulate absolute certainty. Regulate new services with the paradigms of old.

But we should resist that temptation. “Guilty until proven innocent” is not a recipe for innovation, and it doesn’t make consumers better off. History tells us that it is not preemptive regulation, but permissionless innovation made possible by competitive free markets that best guarantees consumer welfare. A future enabled by the next generation of technology can be bright, if only we choose to let the light in.

Read the whole thing here. Good stuff. I also appreciate him citing my work on the topic, which you can find in my last book and other publications.

By Andrea O’Sullivan and Adam Thierer (First published at The Bridge on August 1, 2018.)

Technology is changing the ways that entrepreneurs interact with, and increasingly get away from, existing government regulations. The ongoing legal battles surrounding 3D-printed weapons provide yet another timely example.

For years, a consortium of techies called Defense Distributed has sought to secure more protections for gun owners by making the code that allows someone to print their own guns available online. Rather than taking its fight to Capitol Hill and spending billions of dollars lobbying in potentially fruitless pursuit of marginal legislative victories, Defense Distributed ties its fortunes to the mast of technological determinism and blurs the lines between regulated physical reality and the open world of cyberspace.

The federal government moved fast, with gun control advocates like Senator Chuck Schumer (D-NY) and former Representative Steve Israel (D-NY) proposing legislation to criminalize Defense Distributed’s activities. They failed.

Plan B in the effort to quash these acts of 3D-printing disobedience was to classify the computer-aided design (CAD) files that Defense Distributed posted online as a kind of internationally controlled munition. The US State Department engaged in a years-long legal brawl over whether Defense Distributed violated established International Traffic in Arms Regulations (ITAR). The group pulled down the files while the issue was examined in court, but the code had long since been uploaded to sharing sites like The Pirate Bay, and the files have been available on the Internet Archive for years. The CAD, if you will excuse the pun, is out of the bag.

In a surprising turn, the Department of Justice moved to drop the suit and settle with Defense Distributed last month. It agreed to cover the group’s legal fees and to cease its attempt to regulate code already easily accessible online. While no legal precedent was set, since this was merely a settlement, it is likely that the government realized its case would be unwinnable.

Gun control advocates did not react well to this legal retreat. Continue reading →

The White House has announced a new effort to help prepare workers for the challenges they will face in the future. While it’s a well-intentioned effort, and one that I hope succeeds, I’m skeptical about it for a simple reason: It’s just really hard to plan for the workforce needs of the future and train people for jobs that we cannot possibly envision today.

Writing in the Wall Street Journal today, Ivanka Trump, senior adviser to the president, outlines the elements of a new Executive Order that President Trump is issuing “to prioritize and expand workforce development so that we can create and fill American jobs with American workers.” Toward that end, the Administration plans on:

  • establishing a National Council for the American Worker, “composed of senior administration officials, who will develop a national strategy for training and retraining workers for high-demand industries.” This is meant to bring more efficiency and effectiveness to the “more than 40 workforce-training programs in more than a dozen agencies,” too many of which “have produced meager results.”
  • “facilitat[ing] the use of data to connect American businesses, workers and educational institutions.” This is meant to help workers find “what jobs are available, where they are, what skills are required to fill them, and where the best training is available.”
  • launching a nationwide campaign “to highlight the growing vocational crisis and promote careers in the skilled trades, technology and manufacturing.”

The Administration also plans to create a new advisory board of experts to address these issues, and it is “asking companies and trade groups throughout the country to sign our new Pledge to America’s Workers—a commitment to invest in the current and future workforce.” The hope is to encourage companies to take additional steps “to educate, train and reskill American students and workers.”

Perhaps some of these steps make sense, and perhaps a few will even help workers deal with the challenges of our more complex, fast-evolving, global economy. But I doubt it.

Continue reading →

I’ve been working on a new book that explores the rise of evasive entrepreneurialism and technological civil disobedience in our modern world. Following the publication of my last book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, people started bringing examples of evasive entrepreneurialism and technological civil disobedience to my attention and asking how they related to the concept of permissionless innovation. As I started exploring and cataloging these case studies, I realized I could probably write an entire book about these developments and their consequences.

Hopefully that book will be wrapped up shortly. In the meantime, I am going to start rolling out some short essays based on content from the book. To begin, I will state the general purpose of the book and define the key concepts discussed therein. In coming weeks and months, I’ll build on these themes, explain why they are on the rise, explore the effect they are having on society and technological governance efforts, and more fully develop some relevant case studies. Continue reading →

In preparation for a Federalist Society teleforum call that I participated in today about the compliance costs of the EU’s General Data Protection Regulation (GDPR), I gathered together some helpful recent articles on the topic and put together some talking points. I thought I would post them here and try to update this list in coming months as I find new material. (My thanks to Andrea O’Sullivan for a major assist on coming up with all this.)

Key Points:

  • GDPR is no free lunch; compliance is very costly
      • All regulation entails trade-offs, no matter how well-intentioned rules are
      • $7.8 billion estimated compliance cost for U.S. firms already
      • Punitive fines can reach €20 million or 4 percent of a firm’s global annual revenue, whichever is greater (see the sketch after this list)
      • Vagueness of language leads to considerable regulatory uncertainty — no one knows what “compliance” looks like
      • Even EU member states do not know what compliance looks like: 17 of 24 regulatory bodies polled by Reuters said they were unprepared for GDPR
  • GDPR will hurt competition & innovation; favors big players over small
      • Google, Facebook & others are beefing up compliance departments (EU official Vera Jourova: “They have the money, an army of lawyers, an army of technicians and so on.”)
      • Smaller firms exiting or dumping data that could be used to provide better, more tailored services
      • PwC survey found that 88% of companies surveyed spent more than $1 million on GDPR preparations, and 40% more than $10 million.
      • Before GDPR, half of all EU ad spend went to Google. The first day after it took effect, an astounding 95 percent went to Google.
      • In essence, with the GDPR, the EU is giving up on the idea that competition is possible going forward
      • The law will actually benefit the same big companies that the EU has been going after on antitrust grounds. Meanwhile, the smaller innovators and innovations will suffer.
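
To make the fine structure concrete, here is a minimal Python sketch of that “whichever is greater” ceiling (the GDPR’s upper fine tier, Art. 83(5)); the revenue figures are my hypothetical examples, purely illustrative:

```python
# Sketch of the GDPR's upper fine tier (Art. 83(5)): up to EUR 20 million
# or 4 percent of global annual revenue, whichever is greater.
# The revenue figures below are hypothetical, for illustration only.

FLAT_CAP_EUR = 20_000_000   # fixed ceiling: EUR 20 million
REVENUE_SHARE = 0.04        # alternative ceiling: 4% of global annual revenue

def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    """Maximum possible fine under the upper tier for a given revenue."""
    return max(FLAT_CAP_EUR, REVENUE_SHARE * global_annual_revenue_eur)

# A EUR 50M-revenue startup faces the same EUR 20M ceiling as a
# EUR 500M-revenue firm; only above EUR 500M does the 4% rule bind.
for revenue in (50e6, 500e6, 100e9):
    print(f"revenue EUR {revenue:>15,.0f} -> max fine EUR "
          f"{max_gdpr_fine(revenue):>14,.0f}")
```

Note that the flat €20 million cap binds for any firm with less than roughly €500 million in revenue, so the maximum exposure is proportionally far heavier for small firms, which is one reason the rule favors the big players.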

Continue reading →

The National Academies of Sciences, Engineering, and Medicine has released an amazing new report, “Assessing the Risks of Integrating Unmanned Aircraft Systems (UAS) into the National Airspace System.” In what the Wall Street Journal rightly calls an “unusually strongly worded report,” the group of experts assembled by the National Academies calls for a sea change in regulatory attitudes and policies toward Unmanned Aircraft Systems (or “drones”) and the regulation of the nation’s airspace more generally.

The report uses the term “conservative” or “overly conservative” more than a dozen times to describe the Federal Aviation Administration’s (FAA) problematic current approach toward drones. The authors point out that the agency has “a culture with a near-zero tolerance for risk” and that it needs to adjust that culture to take into account “the various ways in which this new technology may reduce risk and save lives.” (p. S-2) The report continues:

The committee concluded that “fear of making a mistake” drives a risk culture at the FAA that is too often overly conservative, particularly with regard to UAS technologies, which do not pose a direct threat to human life in the same way as technologies used in manned aircraft. An overly conservative attitude can take many forms. For example, FAA risk avoidance behavior is often rewarded, even when it is excessively risk averse, and rewarded behavior is repeated behavior. Balanced risk decisions can be discounted, and FAA staff may conclude that allowing new risk could endanger their careers even when that risk is so minimal that it does not exceed established safety standards.  The committee concluded that a better measure for the FAA to apply is to ask the question, “Can we make UAS as safe as other background risks that people experience daily?” As the committee notes, we do not ground airplanes because birds fly in the airspace, although we know birds can and do bring down aircraft.

[. . . ]

In many cases, the focus has been on “What might go wrong?” instead of a holistic risk picture: “What is the net risk/benefit?” Closely related to this is what the committee considers to be paralysis wherein ever more data are often requested to address every element of uncertainty in a new technology. Flight experience cannot be gained to generate these data due to overconservatism that limits approvals of these flights. Ultimately, the status quo is seen as safe. There is too little recognition that new technologies brought into the airspace by UAS could improve the safety of manned aircraft operations, or may mitigate, if not eliminate, some nonaviation risks. (p. S-2)

Importantly, the report makes it clear that the problem is not just that “an overly conservative risk culture that overestimates the severity and the likelihood of UAS risk can be a significant barrier to introduction and development of these technologies.” More profoundly, the report highlights how “[a]voiding risk entirely by setting the safety target too high creates imbalanced risk decisions and can degrade overall safety and quality of life.” (pp. 3-6, 3-7) In other words, we should want a more open, common sense-oriented approach to drones, not only to encourage more life-enriching innovation, but also because it could actually make us safer as a result.

No Reward without Some Risk

What the National Academies report is really saying here is that there can be no reward without some risk.  This is something I have spent a great deal of time writing about in my last book, a recent book chapter, and various other essays and journal articles over the past 25 years.  As I noted in my last book, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.”  If we want a wealthier, healthier, and safer society, we must embrace change and risk-taking to get us there.

This is exactly what the National Academies report is getting at when it notes that the FAA’s “overly conservative culture prevents safety beneficial operations from entering the airspace. The focus is on what might go wrong. More dialogue on potential benefits is needed to develop a holistic risk picture that addresses the question, What is the net risk/benefit?” (p. 3-10)

In other words, all safety regulation involves trade-offs, and if (to paraphrase a classic Hardin cartoon) we consider every potential risk except the risk of avoiding all risks, the result will be not only a decline in short-term innovation, but also a corresponding decline in safety and overall living standards over time.

Countless risk scholars have studied this process and come to the same conclusion. “We could virtually end all risk of failure by simply declaring a moratorium on innovation, change, and progress,” notes engineering historian Henry Petroski. But the costs to society of doing so would, of course, be catastrophic. “The history of the human race would be dreary indeed if none of our forebears had ever been willing to accept risk in return for potential achievement,” observed H. W. Lewis, an expert on technological risk trade-offs.

The most important book ever written on this topic was Aaron Wildavsky’s 1988 masterpiece, Searching for Safety. Wildavsky warned of the dangers of “trial without error” reasoning and contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. Wildavsky argued that real wisdom is born of experience and that we can learn how to be wealthier and healthier as individuals and a society only by first being willing to embrace uncertainty and even occasional failure. As he put it:

The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. . . . Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.

When this logic takes the form of public policy prescriptions, it is referred to as the “precautionary principle,” which generally holds that, because new ideas or technologies could pose some theoretical danger or risk in the future, public policies should control or limit the development of such innovations until their creators can prove that they won’t cause any harms.

Again, if we adopt that attitude, human safety actually suffers because it holds back beneficial experiments aimed at improving the human condition. As the great economic historian Joel Mokyr argues, “technological progress requires above all tolerance toward the unfamiliar and the eccentric.” But the regulatory status quo all too often rejects “the unfamiliar and the eccentric” out of an abundance of caution. While usually well-intentioned, that sort of status quo thinking holds back new and better ways of doing old things, as well as entirely new things. The end result is that real health and safety advances are ignored or forgone.

How Status Quo Thinking at the FAA Results in Less Safety

This is equally true for air safety and FAA regulation of drones. “Ultimately, the status quo is seen as safe,” the National Academies report notes. “There is too little recognition that new technologies brought into the airspace by UAS could improve the safety of manned aircraft operations, or may mitigate, if not eliminate, some nonaviation risks.” Examples of the life-saving potential of drones have already been well documented.

Drones have already been used to monitor fires, help with search-and-rescue missions for missing people or animals, assist life guards by dropping life vests to drowning people, deliver medicines to remote areas, and help with disaster monitoring and recovery efforts. But that really just scratches the surface in terms of their potential.

Some people scoff at the idea of drones being used to deliver small packages to our offices or homes. But consider how many of those packages are delivered by human-operated vehicles that are far more likely to be involved in dangerous traffic accidents on our overcrowded roadways. If drones were used to make some of those deliveries, we might be able to save a lot of lives. Or consider an elderly person stuck at home during a storm who realizes they are out of some essential good or medicine that is a long drive away. Are we better off having them (or someone else) get behind the wheel to go get it, or might a drone be able to deliver it more safely?

The authors of the National Academies report understand this, as they make clear when concluding that “operation of UAS has many advantages and may improve the quality of life for people around the world. Avoiding risk entirely by setting the safety target too high creates imbalanced risk decisions and can degrade overall safety and quality of life.” (Ch. 3, pp. 5-6)

Reform Ideas: Use the “Innovator’s Presumption” & “Sunsetting Imperative”

Given that reality, the National Academies report makes several sensible reform recommendations aimed at countering the FAA’s hyper-conservatism and bias for the broken regulatory status quo. I won’t go through them all, but I think they are an excellent set of reforms that deserve to be taken seriously.

I do, however, highly recommend that everyone take a close look at one outstanding recommendation in Chapter 3, which is aimed at keeping things moving and making sure that status quo thinking doesn’t freeze beneficial new forms of airspace innovation. Specifically, the National Academies report recommends that:

The FAA should meet requests for certifications or operations approvals with an initial response of “How can we approve this?” Where the FAA employs internal boards of executives throughout the agency to provide input on decisions, final responsibility and authority and accountability for the decision should rest with the executive overseeing such boards. A time limit should be placed on responses from each member of the board, and any “No” vote should be accompanied with a clearly articulated rationale and suggestion for how that “No” vote could be made a “Yes.” (Ch. 3, p. 8)

I absolutely love this reform idea because it essentially combines elements of two general innovation policy reform ideas that I discussed in my recent essay, “Converting Permissionless Innovation into Public Policy: 3 Reforms.” In that piece, I proposed the idea of instituting an “Innovator’s Presumption” that would read: “Any person or party (including a regulatory authority) who opposes a new technology or service shall have the burden to demonstrate that such proposal is inconsistent with the public interest.” I also proposed a so-called “Sunsetting Imperative” that would read: “Any existing or newly imposed technology regulation should include a provision sunsetting the law or regulation within two years.”

The National Academies report recommendation above basically embodies the spirit of both the Innovator’s Presumption and the Sunsetting Imperative. It puts the burden of proof on opponents of change and then creates a sort of shot clock to keep things moving.

These are the kinds of reforms we need to make sure status quo thinking at regulatory agencies doesn’t hold back life-enriching and life-saving innovations. It’s time for a change in the way business is done at the FAA to make sure that regulations are timely, effective, and in line with common sense. Sadly, as the new National Academies report makes clear, today’s illogical policies governing airspace innovation are having counterproductive results that hurt society.

On March 19th, I had the chance to debate Franklin Foer at a Patrick Henry College event focused on the question, “Is Big Tech Big Brother?” It was billed as a debate over the role of technology in American society and whether government should be regulating media and technology platforms more generally. [The full event video is here.] Foer is the author of the new book World Without Mind: The Existential Threat of Big Tech, in which he advocates a fairly expansive regulatory regime for modern information technology platforms. He is open to building on regulatory ideas from the past, including broadcast-esque licensing regimes, “Fairness Doctrine”-like mandates for digital intermediaries, “fiduciary” responsibilities, beefed-up antitrust intervention, and other types of controls. In a review of the book for Reason, and then again during the debate at Patrick Henry College, I offered some reflections on what history teaches us about how well ideas like those worked out in practice.

My closing statement of the debate, which lasted just a little over three minutes, offers a concise summation of what that history teaches us and why it would be so dangerous to repeat the mistakes of the past by wandering down that disastrous path again. That three-minute clip is posted below. (The audience was polled before and after the event and asked the same question each time: “Do large tech companies wield too much power in our economy, media and personal lives and if so, should government(s) intervene?” Apparently, the poll was roughly 70% Yes and 30% No at the beginning, but after the debate ended it had reversed, with only 30% in favor of intervention and 70% against. Glad to turn around some minds on this one!)
