Technopanics & the Precautionary Principle

“First electricity, now telephones. Sometimes I feel as if I were living in an H.G. Wells novel.” –Dowager Countess, Downton Abbey

Every technology we take for granted was once new, different, and disruptive, and was often ridiculed and resisted as a result. Electricity, telephones, trains, and television all once caused widespread fears in the way robots, artificial intelligence, and the internet of things do today. Eventually, most people come to realize that these fears were misplaced and overly pessimistic; the technology diffuses, and we can barely remember life without it. But in recent technopanics, there has been a concern that the legal system is not properly equipped to handle the possible harms from these new technologies. As a result, there are often calls to regulate or rein in their use.

In the late 1980s, video cassette recorders (VCRs) caused a legal technopanic. The concern was not that VCRs would lead to some bizarre human mutation, as in many technopanics, but rather that the existing system of copyright infringement and vicarious liability could not adequately address the potential harm to the motion picture industry. The then-president of the Motion Picture Association of America, Jack Valenti, famously told Congress, “I say to you that the VCR is to the American film producer and the American public as the Boston Strangler is to the woman home alone.”


“Responsible research and innovation,” or “RRI,” has become a major theme in academic writing and conferences about the governance of emerging technologies. RRI might be considered just another variant of corporate social responsibility (CSR), and it indeed borrows from that heritage. What makes RRI unique, however, is that it is more squarely focused on mitigating the potential risks that could be associated with various technologies or technological processes. RRI is particularly concerned with “baking-in” certain values and design choices into the product lifecycle before new technologies are released into the wild.

In this essay, I want to consider how RRI lines up with the opposing technological governance regimes of “permissionless innovation” and the “precautionary principle.” More specifically, I want to address the question of whether “permissionless innovation” and “responsible innovation” are even compatible. While participating in recent university seminars and other tech policy events, I have encountered a certain degree of skepticism—and sometimes outright hostility—after suggesting that, properly understood, “permissionless innovation” and “responsible innovation” are not warring concepts and that RRI can co-exist peacefully with a legal regime that adopts permissionless innovation as its general tech policy default. Indeed, the application of RRI lessons and recommendations can strengthen the case for adopting a more “permissionless” approach to innovation policy in the United States and elsewhere.

I’ve written here before about the problems associated with the “technopanic mentality,” especially when it comes to how technopanics sometimes come to shape public policy decisions and restrict important new, life-enriching innovations. As I argued in a recent book, the problem with this sort of Chicken Little thinking is that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

Perhaps the worst thing about worst-case thinking is how short-sighted and hubristic it can be. The technopanic crowd often has an air of snootiness about them in that they ask us to largely ignore the generally positive long-run trends associated with technological innovation and instead focus on hypothetical fears about an uncertain future that apparently only they can foresee. This is the case whether they are predicting the destruction of jobs, the economy, lifestyles, or culture. Techno-dystopia lies just around the corner, they say, but the rest of us are just ignorant sheep who can’t see it coming!

In his wonderful 2013 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson correctly noted that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.”

Stated differently, the doomsayers are guilty of a type of social and technical arrogance. They are almost always wrong on history, wrong on culture, and wrong on facts. Again and again, humans have proven remarkably resilient in the face of technological change and have overcome short-term adversity. Yet the technopanic pundits are almost never called out for their elitist attitudes later, when their prognostications are proven wildly off-base. Even more concerning, their Chicken Little antics lead them and others to ignore the more serious risks that do exist and are worthy of our attention.

Here’s a nice example of that last point, and it comes from a silent film made all the way back in 1911!

Written with Christopher Koopman and Brent Skorup (originally published on Medium on 4/10/17)

Innovation isn’t just about the latest gee-whiz gizmos and gadgets. That’s all nice, but something far more profound is at stake: Innovation is the single most important determinant of long-term human well-being. There exists widespread consensus among historians, economists, political scientists and other scholars that technological innovation is the linchpin of expanded economic growth, opportunity, choice, mobility, and human flourishing more generally. It is the ongoing search for new and better ways of doing things that drives human learning and prosperity in every sense — economic, social, and cultural.

As the Industrial Revolution revealed, leaps in economic and human growth cannot be planned. They arise from societies that reward risk takers and legal systems that accommodate change. Our ability to achieve progress is directly proportional to our willingness to embrace and benefit from technological innovation, and it is a direct result of getting public policies right.

The United States is uniquely positioned to lead the world into the next era of global technological advancement and wealth creation. That’s why we and our colleagues at the Technology Policy Program at the Mercatus Center at George Mason University devote so much time and energy to defending the importance of innovation and countering threats to it. Unfortunately, those threats continue to multiply as fast as new technologies emerge.

The future of emerging technology policy will be influenced increasingly by the interplay of three interrelated trends: “innovation arbitrage,” “technological civil disobedience,” and “spontaneous private deregulation.” Those terms can be briefly defined as follows:

  • “Innovation arbitrage” refers to the idea that innovators can, and will with increasing regularity, move to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. Just as capital now fluidly moves around the globe seeking out more friendly regulatory treatment, the same is increasingly true for innovations. And this will also play out domestically as innovators seek to play state and local governments off each other in search of some sort of competitive advantage.
  • “Technological civil disobedience” represents the refusal of innovators (individuals, groups, or even corporations) or consumers to obey technology-specific laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant. New technological devices and platforms are making it easier than ever for the public to openly defy (or perhaps just ignore) rules that limit their freedom to create or use modern technologies.
  • “Spontaneous private deregulation” can be thought of as the de facto rather than de jure elimination of traditional laws and regulations, owing to a combination of rapid technological change as well as the potential threat of innovation arbitrage and technological civil disobedience. In other words, many laws and regulations aren’t being formally removed from the books, but they are being made largely irrelevant by some combination of those factors. “Benign or otherwise, spontaneous deregulation is happening increasingly rapidly and in ever more industries,” noted Benjamin Edelman and Damien Geradin in a Harvard Business Review article on the phenomenon.[1]

I have previously documented examples of these trends in action for technology sectors as varied as drones, driverless cars, genetic testing, Bitcoin, and the sharing economy. (For example, on the theme of global innovation arbitrage, see all these various essays. And on the growth of technological civil disobedience, see, “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” and “Quick Thoughts on FAA’s Proposed Drone Registration System.” I also discuss some of these issues in the second edition of my Permissionless Innovation book.)

In this essay, I want to briefly highlight how, over the course of just the past month, a single company has offered us a powerful example of how both global innovation arbitrage and technological civil disobedience—or at least the threat thereof—might become a more prevalent feature of discussions about the governance of emerging technologies. And, in the process, that could lead to at least the partial spontaneous deregulation of certain sectors or technologies. Finally, I will discuss how this might affect technological governance more generally and accelerate the movement toward so-called “soft law” governance mechanisms as an alternative to traditional regulatory approaches.

On Tuesday, UN Secretary-General Ban Ki-moon delivered an address to the UN Security Council “on the Non-Proliferation of Weapons of Mass Destruction.” He made many of the same arguments he and his predecessors have articulated before regarding the need for the Security Council “to develop further initiatives to bring about a world free of weapons of mass destruction.” In particular, he was focused on the great harm that could come from the use of chemical, biological and nuclear weapons. “Vicious non-state actors that target civilians for carnage are actively seeking chemical, biological and nuclear weapons,” the Secretary-General noted. A stepped-up disarmament agenda is needed, he argued, “to prevent the human, environmental and existential destruction these weapons can cause . . . by eradicating them once and for all.”

The UN has created several multilateral mechanisms to pursue those objectives, including the Nuclear Non-Proliferation Treaty, the Chemical Weapons Convention, and the Biological Weapons Convention. Progress on these fronts has always been slow and limited, however. The Secretary-General observed that nuclear non-proliferation efforts have recently “descended into fractious deadlock,” but the effectiveness of those and similar UN-led efforts has long been challenged by the dual realities of (1) rapid ongoing technological change that has made WMDs more ubiquitous than ever, plus (2) a general lack of teeth in UN treaties and accords to do much to slow those advances, especially among non-signatories.

Despite those challenges, the Secretary-General is right to remain vigilant about the horrors of chemical, biological and nuclear attacks. But what was interesting about this address is that the Secretary-General went on to discuss his concerns about a rising class of emerging technologies that we usually don’t hear mentioned in the same breath as those traditional “weapons of mass destruction.”


“The quickest way to find out who your enemies are is to try doing something new.” Thus begins Innovation and Its Enemies, an ambitious new book by Calestous Juma that will go down as one of the decade’s most important works on innovation policy.

Juma, who is affiliated with the Harvard Kennedy School’s Belfer Center for Science and International Affairs, has written a book that is rich in history and insights about the social and economic forces and factors that have, again and again, led various groups and individuals to oppose technological change. Juma’s extensive research documents how “technological controversies often arise from tensions between the need to innovate and the pressure to maintain continuity, social order, and stability” (p. 5) and how this tension is “one of today’s biggest policy challenges.” (p. 8)

What Juma does better than any other technology policy scholar to date is identify how these tensions develop out of deep-seated psychological biases that eventually come to affect attitudes about innovations among individuals, groups, corporations, and governments. “Public perceptions about the benefits and risks of new technologies cannot be fully understood without paying attention to intuitive aspects of human psychology,” he correctly observes. (p. 24)

This week, my Mercatus Center colleague Andrea Castillo and I filed comments with the White House Office of Science and Technology Policy (OSTP) in a proceeding entitled, “Preparing for the Future of Artificial Intelligence.” For more background on this proceeding and the accompanying workshops that OSTP has hosted on this issue, see this White House site.

In our comments, Andrea and I make the case for prudence, patience, and a continuing embrace of “permissionless innovation” as the appropriate policy framework for artificial intelligence (AI) technologies at this nascent stage of their development. Down below, I have pasted our full comments, which were limited to just 2,000 words as required by the OSTP. But we plan on releasing a much longer report on these issues in coming months. You can find the full version of the filing, which includes footnotes, here.


On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.

Wallach’s latest book is entitled, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. And, as I’ve noted here recently, the greatly expanded second edition of my latest book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, has just been released.

Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!—A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.

Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are some moments in the book when he resorts to some hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.


One of my favorite themes, and not just in the field of tech policy, is the “Unintended Consequences of Well-Intentioned Regulations.” I believe that all laws and regulations have dynamic effects and that to fully appreciate the true impact of any particular public policy, you must always closely investigate the potential opportunity costs and unintended consequences associated with it. All too often, laws and regulations are hastily put on the books with the very best of intentions in mind, only to later be shown to produce the opposite of what was intended.

Today’s case in point comes from a Wall Street Journal article by Rachel Bachman, and it involves how the growing wave of cycling helmet laws is having a net negative impact on public health because such laws discourage ridership in the aggregate. Those potential riders are then either (a) just less active overall or (b) driving their cars to get where they need to go. And both of those results are, ultimately, riskier than cycling without a helmet. For that reason, Bachman reports, cycling advocates “are pushing back against mandatory bike-helmet laws in the U.S. and elsewhere. They say mandatory helmet laws, particularly for adults, make cycling less convenient and seem less safe, thus hindering the larger public-health gains of more people riding bikes.” Supporting evidence comes from this 2012 paper in the journal Risk Analysis by Piet de Jong, a professor in the department of applied finance and actuarial studies at Sydney’s Macquarie University. His paper included an empirical model showing that mandatory bike-helmet laws “have a net negative health impact.”
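The basic logic of that kind of dynamic analysis is easy to sketch: weigh the head-injury costs averted among riders who keep cycling against the exercise benefits lost by riders the mandate deters. The toy model below illustrates the idea; every parameter value is an invented placeholder for illustration, not a figure from de Jong’s paper.

```python
# A toy sketch of dynamic benefit-cost analysis for a helmet mandate.
# All parameter values are illustrative assumptions, not estimates
# from the Risk Analysis paper discussed above.

def net_health_impact(riders_before, deterrence_rate,
                      head_injury_cost_per_rider,
                      helmet_effectiveness,
                      exercise_benefit_per_rider):
    """Net health impact of a mandate (positive = net public-health gain)."""
    riders_after = riders_before * (1 - deterrence_rate)
    deterred = riders_before - riders_after

    # Benefit: remaining riders suffer fewer head injuries.
    injury_savings = (riders_after * head_injury_cost_per_rider
                      * helmet_effectiveness)

    # Cost: deterred riders lose the health benefit of cycling.
    lost_exercise = deterred * exercise_benefit_per_rider

    return injury_savings - lost_exercise

# Illustrative run: even a fairly effective helmet can yield a net
# negative result once deterrence is accounted for.
impact = net_health_impact(
    riders_before=100_000,
    deterrence_rate=0.20,           # 20% of riders give up cycling
    head_injury_cost_per_rider=10,  # expected injury cost, arbitrary units
    helmet_effectiveness=0.40,      # 40% of that cost averted by helmets
    exercise_benefit_per_rider=50,  # health value of cycling, same units
)
print(impact)
```

With these made-up numbers the mandate comes out a net health loss: the modest injury savings among the 80,000 remaining riders are swamped by the exercise benefits forgone by the 20,000 deterred ones. A static analysis that held ridership fixed would miss the entire cost side of the ledger.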

This strikes me as one of the very best examples of how to do dynamic benefit-cost analysis and show the full range of societal impacts associated with well-intentioned regulations. And it reminds me of the playground example I use in several of my papers: Laws and liability threats discouraged tall playground climbing structures in the ’80s and ’90s.