If there are two general principles that unify my recent work on technology policy and innovation issues, they would be as follows. To the maximum extent possible:

  1. We should avoid preemptive and precautionary-based regulatory regimes for new innovation. Instead, our policy default should be "innovation allowed" (or "permissionless innovation"), and innovators should be considered "innocent until proven guilty" (unless, that is, a thorough benefit-cost analysis has been conducted that documents the clear need for immediate preemptive restraints).
  2. We should avoid rigid, “top-down” technology-specific or sector-specific regulatory regimes and/or regulatory agencies and instead opt for a broader array of more flexible, “bottom-up” solutions (education, empowerment, social norms, self-regulation, public pressure, etc.) as well as reliance on existing legal systems and standards (torts, product liability, contracts, property rights, etc.).

I was very interested, therefore, to come across two new essays that make opposing arguments and proposals. The first is this recent Slate op-ed by John Frank Weaver, “We Need to Pass Legislation on Artificial Intelligence Early and Often.” The second is Ryan Calo’s new Brookings Institution white paper, “The Case for a Federal Robotics Commission.”

Weaver argues that new robot technology “is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now.” In order to preemptively address concerns about new technologies such as driverless cars or commercial drones, “we need to legislate early and often,” Weaver says. Stated differently, Weaver is proposing “precautionary principle”-based regulation of these technologies. The precautionary principle generally refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

Calo argues that we need “the establishment of a new federal agency to deal with the novel experiences and harms robotics enables” since there exist “distinct but related challenges that would benefit from being examined and treated together.” These issues, he says, “require special expertise to understand and may require investment and coordination to thrive.”

I’ll address both Weaver’s and Calo’s proposals in turn. Continue reading →

How is it that we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” so many well-established personal, social, cultural, and legal norms?

In recent years, I’ve spent a fair amount of time thinking through that question in a variety of blog posts (“Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society”), law review articles (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”), op-eds (“Why Do We Always Sell the Next Generation Short?”), and books (see chapter 4 of my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom”).

It’s fair to say that this issue — how individuals, institutions, and cultures adjust to technological change — has become a personal obsession of mine, and it is increasingly the unifying theme of much of my ongoing research agenda. The economic ramifications of technological change are part of this inquiry, of course, but those economic concerns have already been the subject of countless books and essays, both today and throughout history. I find that the social issues associated with technological change — including safety, security, and privacy considerations — typically get somewhat less attention, but they are equally interesting. That’s why my recent work and my new book narrow the focus to those issues. Continue reading →

My latest law review article is entitled, “Privacy Law’s Precautionary Principle Problem,” and it appears in Vol. 66, No. 2 of the Maine Law Review. You can download the article on my Mercatus Center page, on the Maine Law Review website, or via SSRN. Here’s the abstract for the article:

Privacy law today faces two interrelated problems. The first is an information control problem. Like so many other fields of modern cyberlaw—intellectual property, online safety, cybersecurity, etc.—privacy law is being challenged by intractable Information Age realities. Specifically, it is easier than ever before for information to circulate freely and harder than ever to bottle it up once it is released.

This has not slowed efforts to fashion new rules aimed at bottling up those information flows. If anything, the pace of privacy-related regulatory proposals has been steadily increasing in recent years even as these information control challenges multiply.

This has led to privacy law’s second major problem: the precautionary principle problem. The precautionary principle generally holds that new innovations should be curbed or even forbidden until they are proven safe. Fashioning privacy rules based on precautionary principle reasoning necessitates prophylactic regulation that makes new forms of digital innovation guilty until proven innocent.

This puts privacy law on a collision course with the general freedom to innovate that has thus far powered the Internet revolution, and privacy law threatens to limit innovations consumers have come to expect or even raise prices for services consumers currently receive free of charge. As a result, even if new regulations are pursued or imposed, there will likely be formidable push-back not just from affected industries but also from their consumers.

In light of both these information control and precautionary principle problems, new approaches to privacy protection are necessary. Continue reading →

I’ve spent a lot of time here through the years trying to identify the factors that fuel moral panics and “technopanics.” (Here’s a compendium of the dozens of essays I’ve written here on this topic.) I brought all this thinking together in a big law review article (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”) and then also in my new booklet, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.”

One factor I identify as contributing to panics is the fact that “bad news sells.” As I noted in the book, “Many media outlets and sensationalist authors sometimes use fear-based tactics to gain influence or sell books. Fear mongering and prophecies of doom are always effective media tactics; alarmism helps break through all the noise and get heard.”

In line with that, I want to highly recommend you check out this excellent new op-ed by John Stossel of Fox Business Network on “Good News vs. ‘Pessimism Porn.’” Stossel correctly notes that “the media win by selling pessimism porn.” He says:

Are you worried about the future? It’s hard not to be. If you watch the news, you mostly see violence, disasters, danger. Some in my business call it “fear porn” or “pessimism porn.” People like the stuff; it makes them feel alive and informed.

Of course, it’s our job to tell you about problems. If a plane crashes — or disappears — that’s news. The fact that millions of planes arrive safely is a miracle, but it’s not news. So we soak in disasters — and warnings about the next one: bird flu, global warming, potential terrorism. I won Emmys hyping risks but stopped winning them when I wised up and started reporting on the overhyping of risks. My colleagues didn’t like that as much.

He goes on to note how, even though all the data clearly show that humanity’s lot is improving, the press relentlessly pushes the “pessimism porn.” Continue reading →

Last December, it was my pleasure to take part in a great event, “The Disruptive Competition Policy Forum,” sponsored by Project DisCo (or The Disruptive Competition Project). It featured several excellent panels and keynotes, and they’ve just posted the video of the panel I was on here; I have embedded it below. In my remarks, I discussed:

  • benefit-cost analysis in digital privacy debates (building on this law review article);
  • the contrast between Europe and America’s approach to data & privacy issues (referencing this testimony of mine);
  • the problem of “technopanics” in information policy debates (building on this law review article);
  • the difficulty of information control efforts in various tech policy debates (which I wrote about in this law review article and these two blog posts: 1, 2);
  • the possibility of less-restrictive approaches to privacy & security concerns (which I have written about here as well in those other law review articles);
  • the rise of the Internet of Things and the unique challenges it creates (see this and this as well as my new book); and,
  • the possibility of a splintering of the Internet or the rise of “federated Internets.”

The panel was expertly moderated by Ross Schulman, Public Policy & Regulatory Counsel for CCIA, and also included remarks from John Boswell, SVP & Chief Legal Officer at SAS, and Josh Galper, Chief Policy Officer and General Counsel of Personal, Inc. (By the way, you should check out some of the cool things Personal is doing in this space to help consumers. Very innovative stuff.) The video lasts one hour. Here it is:

I am pleased to announce the release of my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” It’s a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today. Continue reading →

Last night, I appeared on a short segment on the PBS News Hour discussing, “What’s the future of privacy in a big data world?” I was also joined by Jules Polonetsky, executive director of the Future of Privacy Forum. If you’re interested, here’s the video. Transcript is here. Finally, down below the fold, I’ve listed a few law review articles and other essays of mine on this same subject.

Continue reading →

When Google announced it was acquiring digital thermostat company Nest yesterday, it set off another round of privacy and security-related technopanic talk on Twitter and elsewhere. Fear and loathing seemed to be the order of the day. It seems that each new product launch or business announcement in the “Internet of Things” space is destined to set off another round of Chicken Little hand-wringing. We are typically told that the digital sky will soon fall on our collective heads unless we act preemptively to somehow head off some sort of impending privacy or security apocalypse.

Meanwhile, however, a whole heck of a lot of people are demanding more and more of these technologies, and American entrepreneurs are already engaged in heated competition with European and Asian rivals to be at the forefront of the next round of Internet innovation to satisfy those consumer demands. So, how is this going to play out?

This gets to what is becoming the defining policy issue of our time, not just for the Internet but for technology policy more generally: To what extent should the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? We can think of this as “the permission question,” and it is creating a massive rift between those who desire more preemptive, precautionary safeguards for a variety of reasons (safety, security, privacy, copyright, etc.) and those of us who continue to believe that permissionless innovation should be the guiding ethos of our age. The chasm between these two worldviews is only going to deepen in coming years as the pace of innovation around new technologies (the Internet of Things, wearable tech, driverless cars, 3D printing, commercial drones, etc.) continues to accelerate.

Sarah Kessler of Fast Company was kind enough to call me last night and ask for some general comments about Google buying Nest and she also sought out the comments of Marc Rotenberg of EPIC about privacy in the Internet of Things era more generally. Our comments provide a useful example of the divide between these two worldviews and foreshadow debates to come: Continue reading →

Tomorrow, the Federal Trade Commission (FTC) will host an all-day workshop entitled, “Internet of Things: Privacy and Security in a Connected World.” [Detailed agenda here.] According to the FTC: “The workshop will focus on privacy and security issues related to increased connectivity for consumers, both in the home (including home automation, smart home appliances and connected devices), and when consumers are on the move (including health and fitness devices, personal devices, and cars).”

Where is the FTC heading on this front? This Politico story by Erin Mershon from last week offers some possible ideas. Yet it remains unclear whether this is just another inquiry into an exciting set of new technologies or whether it is, as I worried in my recent comments to the FTC on this matter, “the beginning of a regulatory regime for a new set of information technologies that are still in their infancy.”

First, for those not familiar with the “Internet of Things,” this short new report from Daniel Castro & Jordan Misra of the Center for Data Innovation offers a good definition:

The “Internet of Things” refers to the concept that the Internet is no longer just a global network for people to communicate with one another using computers, but it is also a platform for devices to communicate electronically with the world around them. The result is a world that is alive with information as data flows from one device to another and is shared and reused for a multitude of purposes. Harnessing the potential of all of this data for economic and social good will be one of the primary challenges and opportunities of the coming decades.

The report goes on to offer a wide range of examples of new products and services that could fulfill this promise.

What I find somewhat worrying about the FTC’s sudden interest in the Internet of Things is that it opens the door for some regulatory-minded critics to encourage preemptive controls on this exciting new wave of digital age innovation, based almost entirely on hypothetical worst-case scenarios they have conjured up. Continue reading →

Much of my recent research and writing has been focused on the contrast between “permissionless innovation” (the notion that innovation should generally be allowed by default) and its antithesis, the “precautionary principle” (the idea that new innovations should be discouraged or even disallowed until their developers can prove that they won’t cause any harms). I have discussed this dichotomy in three recent law review articles, a couple of major agency filings, and several blog posts. Those essays are listed at the end of this post.

In this essay, I want to discuss a recent speech by Federal Trade Commission (FTC) Chairwoman Edith Ramirez and show how precautionary principle thinking is increasingly creeping into modern information technology policy discussions, prompted by the various privacy concerns surrounding “big data” and the “Internet of Things” among other information innovations and digital developments.

First, let me recap the core argument I make in my recent articles and filings. It can be summarized as follows: Continue reading →