The future of emerging technology policy will be influenced increasingly by the interplay of three interrelated trends: “innovation arbitrage,” “technological civil disobedience,” and “spontaneous private deregulation.” Those terms can be briefly defined as follows:
- “Innovation arbitrage” refers to the idea that innovators can, and will with increasing regularity, move to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. Just as capital now fluidly moves around the globe seeking out more friendly regulatory treatment, the same is increasingly true for innovations. And this will also play out domestically as innovators seek to play state and local governments off each other in search of some sort of competitive advantage.
- “Technological civil disobedience” represents the refusal of innovators (individuals, groups, or even corporations) or consumers to obey technology-specific laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant. New technological devices and platforms are making it easier than ever for the public to openly defy (or perhaps just ignore) rules that limit their freedom to create or use modern technologies.
- “Spontaneous private deregulation” can be thought of as the de facto rather than de jure elimination of traditional laws and regulations owing to a combination of rapid technological change as well as the potential threat of innovation arbitrage and technological civil disobedience. In other words, many laws and regulations aren’t being formally removed from the books, but they are being made largely irrelevant by some combination of those factors. “Benign or otherwise, spontaneous deregulation is happening increasingly rapidly and in ever more industries,” noted Benjamin Edelman and Damien Geradin in a Harvard Business Review article on the phenomenon.
I have previously documented examples of these trends in action for technology sectors as varied as drones, driverless cars, genetic testing, Bitcoin, and the sharing economy. (For example, on the theme of global innovation arbitrage, see all these various essays. And on the growth of technological civil disobedience, see, “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” and “Quick Thoughts on FAA’s Proposed Drone Registration System.” I also discuss some of these issues in the second edition of my Permissionless Innovation book.)
In this essay, I want to briefly highlight how, over the course of just the past month, a single company has offered us a powerful example of how both global innovation arbitrage and technological civil disobedience—or at least the threat thereof—might become a more prevalent feature of discussions about the governance of emerging technologies. And, in the process, that could lead to at least the partial spontaneous deregulation of certain sectors or technologies. Finally, I will discuss how this might affect technological governance more generally and accelerate the movement toward so-called “soft law” governance mechanisms as an alternative to traditional regulatory approaches.
[This is an excerpt from Chapter 6 of the forthcoming 2nd edition of my book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” due out later this month. I was presenting on these issues at today’s New America Foundation “Cybersecurity for a New America” event, so I thought I would post this now. To learn more about the contrast between “permissionless innovation” and “precautionary principle” thinking, please consult the earlier edition of my book or see this blog post.]
Viruses, malware, spam, data breaches, and critical system intrusions are just some of the security-related concerns that often motivate precautionary thinking and policy proposals. But as with privacy- and safety-related worries, the panicky rhetoric surrounding these issues is usually unfocused and counterproductive.
In today’s cybersecurity debates, for example, it is not uncommon to hear frequent allusions to the potential for a “digital Pearl Harbor,” a “cyber cold war,” or even a “cyber 9/11.” These analogies are made even though these historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” or technological “time bombs,” even though no one can be “bombed” with binary code. Michael McConnell, a former director of national intelligence, went so far as to say that this “threat is so intrusive, it’s so serious, it could literally suck the life’s blood out of this country.”
Such outrageous statements reflect the frequent use of “threat inflation” rhetoric in debates about online security. Threat inflation has been defined as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.” Unfortunately, such bombastic rhetoric often conflates minor cybersecurity risks with major ones. For example, dramatic doomsday stories about hackers pushing planes out of the sky misdirect policymakers’ attention from the more immediate, but less gripping, risks of data extraction and foreign surveillance. Well-meaning skeptics, seeing those inflated claims debunked, might then wrongly conclude that our real cybersecurity risks are also not a problem. In the meantime, outdated legislation and inappropriate legal norms continue to impede beneficial defensive measures that could truly improve security.
“Why hasn’t Europe fostered the kind of innovation that has spawned hugely successful technology companies?” asks James B. Stewart in an important new column for the New York Times (“A Fearless Culture Fuels U.S. Tech Giants“).
That’s a great question, and one that I have tried to answer in a series of recent essays. (See, for example, “Europe’s Choice on Innovation” and “Embracing a Culture of Permissionless Innovation.”) What I have suggested in those essays is that the starkly different outcomes on either side of the Atlantic in terms of recent economic growth and innovation can primarily be explained by cultural attitudes toward risk-taking and failure. “For innovation and growth to blossom, entrepreneurs need a clear green light from policymakers that signals a general acceptance of risk-taking—especially risk-taking that challenges existing business models and traditional ways of doing things,” I have argued. And the most powerful proof of this is to examine the amazing natural experiment that has played out on either side of the Atlantic over the past two decades with the Internet and the digital economy.
For example, an annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing and digital technology. None of them are based in Europe, however. Another recent survey revealed that the world’s 15 most valuable Internet companies (based on market capitalizations) have a combined market value of nearly $2.5 trillion, but none of them are European while 11 of them are U.S. firms. Again, it is America’s tech innovators that dominate that list.
Many European officials and business leaders are waking up to this grim reality and are wondering how to reverse this situation. In his Times essay, Stewart quotes Danish economist Jacob Kirkegaard of the Peterson Institute for International Economics, who notes that Europeans “all want a Silicon Valley. . . . But none of them can match the scale and focus on the new and truly innovative technologies you have in the United States. Europe and the rest of the world are playing catch-up, to the great frustration of policy makers there.”
OK, but why is that?
Claire Cain Miller of The New York Times posted an interesting story yesterday noting how, “Technology Has Made Life Different, but Not Necessarily More Stressful.” Her essay builds on a new study by researchers at the Pew Research Center and Rutgers University on “Social Media and the Cost of Caring.” Miller’s essay and this new Pew/Rutgers study indirectly make a point that I am always discussing in my own work, but that is often ignored or downplayed by many technological critics, namely: We humans have repeatedly proven quite good at adapting to technological change, even when it entails some heartburn along the way.
The major takeaway of the Pew/Rutgers study was that, “social media users are not any more likely to feel stress than others, but there is a subgroup of social media users who are more aware of stressful events in their friends’ lives and this subgroup of social media users does feel more stress.” Commenting on the study, Miller of the Times notes:
Fear of technology is nothing new. Telephones, watches and televisions were similarly believed to interrupt people’s lives and pressure them to be more productive. In some ways they did, but the benefits offset the stressors. New technology is making our lives different, but not necessarily more stressful than they would have been otherwise. “It’s yet another example of how we overestimate the effect these technologies are having in our lives,” said Keith Hampton, a sociologist at Rutgers and an author of the study. . . . Just as the telephone made it easier to maintain in-person relationships but neither replaced nor ruined them, this recent research suggests that digital technology can become a tool to augment the relationships humans already have.
I found this of great interest because I have written about how humans assimilate new technologies into their lives and become more resilient in the process as they learn various coping techniques.
As 2014 draws to a close, we take a look back at the most-read posts from the past year at The Technology Liberation Front. Thank you for reading, and enjoy.
What sort of public policy vision should govern the Internet of Things? I’ve spent a lot of time thinking about that question in essays here over the past year, as well as in a new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology early next year.
But I recently heard three policymakers articulate their recommended vision for the Internet of Things (IoT) and I found their approach so inspiring that I wanted to discuss it here in the hopes that it will become the foundation for future policy in this arena.
Last Thursday, it was my pleasure to attend a Center for Data Innovation (CDI) event on “How Can Policymakers Help Build the Internet of Things?” As the title implied, the goal of the event was to discuss how to achieve the vision of a more fully-connected world and, more specifically, how public policymakers can help facilitate that objective. It was a terrific event with many excellent panel discussions and keynote addresses.
Two of those keynotes were delivered by Senators Deb Fischer (R-Neb.) and Kelly Ayotte (R-N.H.). Below I will offer some highlights from their remarks and then relate them to the vision set forth by Federal Trade Commission (FTC) Commissioner Maureen K. Ohlhausen in some of her recent speeches. I will conclude by discussing how the Ayotte-Fischer-Ohlhausen vision can be seen as the logical extension of the Clinton Administration’s excellent 1997 Framework for Global Electronic Commerce, which proposed a similar policy paradigm for the Internet more generally. This shows how crafting policy for the IoT can and should be a nonpartisan affair.
In my previous essay, I discussed a new white paper by my colleague Robert Graboyes, Fortress and Frontier in American Health Care, which examines the future of medical innovation. Graboyes uses the “fortress vs frontier” dichotomy to help explain the different “visions” that shape how public policy debates about technological innovation in the health care arena often play out. It’s a terrific study that I highly recommend for all the reasons I stated in my previous post.
As I was reading Bob’s new report, I realized that his approach shared much in common with a couple of other recent innovation policy paradigms I have discussed here before from Virginia Postrel (“Stasis” vs. “Dynamism”), Robert D. Atkinson (“Preservationists” vs. “Modernizers”), and myself (“Precautionary Principle” vs. “Permissionless Innovation”). In this essay, I will briefly relate Bob’s approach to those other three innovation policy paradigms and then note a deficiency with our common approaches. I’ll conclude by briefly discussing another interesting framework from science writer Joel Garreau.
The sharing economy is growing faster than ever and becoming a hot policy topic these days. I’ve been fielding a lot of media calls lately about the nature of the sharing economy and how it should be regulated. (See latest clip below from the Stossel show on Fox Business Network.) Thus, I sketched out some general thoughts about the issue and thought I would share them here, along with some helpful additional reading I have come across while researching the issue. I’d welcome comments on this outline as well as suggestions for additional reading. (Note: I’ve also embedded some useful images from Jeremiah Owyang of Crowd Companies.)
1) Just because policymakers claim that regulation is meant to protect consumers does not mean it actually does so.
- Cronyism/rent-seeking: Regulation is often “captured” by powerful and politically well-connected incumbents and used to their own benefit. (Plus, lobbying activity creates deadweight losses for society.)
- Innovation-killing: Regulations become a formidable barrier to new innovation, entry, and entrepreneurism.
- Unintended consequences: Instead of resulting in lower prices & better service, the opposite often happens: higher prices & lower-quality service. (Example: requiring all cabs to be painted the same color destroys branding & the ability to differentiate.)
How is it that we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” so many well-established personal, social, cultural, and legal norms?
In recent years, I’ve spent a fair amount of time thinking through that question in a variety of blog posts (“Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society”), law review articles (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”), opeds (“Why Do We Always Sell the Next Generation Short?”), and books (See chapter 4 of my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom”).
It’s fair to say that this issue — how individuals, institutions, and cultures adjust to technological change — has become a personal obsession of mine and it is increasingly the unifying theme of much of my ongoing research agenda. The economic ramifications of technological change are part of this inquiry, of course, but those economic concerns have already been the subject of countless books and essays both today and throughout history. I find that the social issues associated with technological change — including safety, security, and privacy considerations — typically get somewhat less attention, but are equally interesting. That’s why my recent work and my new book narrow the focus to those issues.