Does “Permissionless Innovation” Even Mean Anything?


[Remarks prepared for Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy & Ethics at Arizona State University, Phoenix, AZ, May 18, 2017.]

_________________

What are we to make of this peculiar new term “permissionless innovation,” which has gained increasing currency in modern technology policy discussions? And how much bearing has this notion had, or should it have, on conversations about the governance of emerging technologies? That’s what I’d like to discuss here today.

Uncertain Origins, Unclear Definitions

I should begin by noting that while I have written a book with the term in the title, I take no credit for coining the phrase “permissionless innovation,” nor have I been able to determine who first used it. The phrase is sometimes attributed to Grace M. Hopper, a computer scientist who was a rear admiral in the United States Navy. She once famously remarked, “It’s easier to ask forgiveness than it is to get permission.”

“Hopper’s Law,” as it has come to be known in engineering circles, is probably the most concise articulation of the general notion of “permissionless innovation” that I’ve ever heard, but Hopper does not appear to have ever used the actual phrase anywhere. Moreover, Hopper was not necessarily applying this notion to the realm of technological governance, but was seemingly speaking more generically about the benefit of trying new things without asking for the blessing of any number of unnamed authorities or overseers—which could include businesses, bosses, teachers, or perhaps even government officials.

Today, however, we most often hear the term “permissionless innovation” used in discussions about the governance of information technologies, as well as a wide variety of emerging technologies. Unfortunately, the scholars and advocates who suggest that permissionless innovation should serve as the governing lodestar in these areas do not always precisely define what they mean by the term.

None of them seem to be suggesting, however, that permissionless innovation is synonymous with anarchy. To the contrary, many of them are quick to note that governments will continue to have a role to play. Indeed, it is rare to see advocates of permissionless innovation in these varied contexts calling for the abolition of existing laws, programs, or agencies.

Instead, most defenders of permissionless innovation seem to use the term as a sort of shorthand, when what they really mean to say is something like: “give innovators a bit more breathing room,” or, “don’t rush to regulate.”

This is consistent with my own articulation of the term, which goes as follows:

“Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.”

Default Policy Positions

Framing the term in this fashion makes it clear that, as it pertains to technological governance, permissionless innovation is about setting our public policy defaults closer to green lights rather than red ones.

It shifts the burden of proof to the opponents of ongoing technological change by asserting five things:

  • First, technological innovation is the single most important determinant of long-term human well-being.
  • Second, there is real value to learning through continued trial-and-error experimentation, resiliency, and ongoing adaptation to technological change.
  • Third, constraints on new innovation should be the last resort, not the first. Innovation should be innocent until proven guilty.
  • Fourth, as regulatory interventions are considered, policy should be based on evidence of concrete potential harm and not fear of worst-case hypotheticals.
  • Fifth, and finally, where policy interventions are deemed needed, flexible, bottom-up solutions of an ex post (responsive) nature are almost always preferable to rigid, top-down controls of an ex ante (anticipatory) nature.

Shared Shortcomings of Both Visions

At least on the surface, that sort of governance vision stands in stark contrast to the “precautionary principle.” Defenders of the precautionary principle as the general default position in technology policy debates generally believe that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, or existing laws, norms, and traditions.

That being said, I’d like to point out some of the shared shortcomings of both of these governance visions.

First, as with attempts to define the parameters of “permissionless innovation,” the precautionary principle is not always as rigid as its critics sometimes suggest. There are as many flavors of the precautionary principle as there are flavors of ice cream. Indeed, this is why many have criticized the precautionary principle not for what it says but rather for what it doesn’t say. It doesn’t tell us exactly how and when to apply precautionary measures, or how to evaluate the trade-offs associated with precaution.

This points to the second and deeper underlying problem faced by advocates of both precautionary measures and permissionless innovation: our collective inability to craft a widely shared definition of what constitutes “technological harm” in various contexts. This is certainly not to suggest that no attempt has been made to do so; it is simply that we don’t seem to be any closer to concrete agreement about how or where to draw those lines.

Of course, let’s not kid ourselves into thinking that we can find bright-line answers to all these questions. After all, for many of these technological governance issues we are operating in the realm of “Level 3” or “Earth-level” systems, as Professors Allenby and Sarewitz refer to it in their book, The Techno-Human Condition. These are systems in which we deal with, as they say, “a context that is always shifting, and on meanings that are never fixed.”

That makes it even more challenging to define what we mean by “responsible innovation” or “socially desirable innovation” for purposes of determining optimal technology policy.

Risk Analysis through the Lens of Permissionless Innovation

For me, there are no easy ways out of this mess. But I do know two things for certain.

First, we must continue to refine and improve our risk analysis tools and techniques to make better determinations of when proposed interventions are sensible and cost-effective relative to the many trade-offs at work.

Again, I recognize the challenge of doing this when many of the issues and values in play are amorphous, and when metaphysical conflicts exist about how to even define some of these things. Most of the emerging technology policy issues I write about today, for example, involve some sort of privacy, safety, or security concern. In each case, however, very little consensus exists about what those terms even mean in varied contexts.

Nonetheless, the fact that benefit-cost analysis is hard should not serve as an excuse for failing to go through the exercise of attempting some sort of valuation of the many variables in play.
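
To make that exercise concrete, here is a minimal sketch of the kind of back-of-the-envelope valuation I have in mind, written in Python. Everything in it is a hypothetical placeholder: the function, the probabilities, and the dollar figures are illustrative assumptions, not estimates from any actual policy analysis, and a real benefit-cost exercise would use ranges and distributions rather than point values.

    # Purely illustrative sketch: comparing the expected net benefit of
    # permitting a technology by default versus requiring prior approval.
    # All numbers are hypothetical placeholders, not real estimates.

    def expected_net_benefit(benefit, p_harm, harm_cost, compliance_cost=0.0):
        """Gross benefit, minus probability-weighted harm, minus compliance burden."""
        return benefit - p_harm * harm_cost - compliance_cost

    # Path A: permit by default, address problems ex post if they develop.
    permit = expected_net_benefit(benefit=100.0, p_harm=0.10, harm_cost=300.0)

    # Path B: require prior approval, trading smaller, delayed benefits and
    # added compliance costs for a lower chance of harm.
    restrict = expected_net_benefit(benefit=60.0, p_harm=0.02, harm_cost=300.0,
                                    compliance_cost=10.0)

    print(f"Permit by default: {permit:+.1f}")   # +70.0
    print(f"Require approval:  {restrict:+.1f}") # +44.0

Even a toy calculation like this forces the analyst to state the trade-offs explicitly, which is the whole point of going through the exercise.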

Soft Law Alternatives

The second thing I know for certain is that, due to the combination of this definitional complexity regarding what constitutes technological harm and the so-called “pacing problem” (the tendency of technological change to outpace the law’s ability to keep up), all roads lead back to soft law solutions instead of hard law remedies.

Last year, I had the pleasure of reading and reviewing Wendell Wallach’s new book and then having a nice conversation with him about it at Microsoft’s DC headquarters. The most interesting thing about our exchange was that, although we do not begin in the same place philosophically speaking, we largely end up in the same place practically speaking.

That is, there seemed to be some grudging acceptance on both our parts that “soft law” systems, multistakeholder processes, and various other informal governance mechanisms will need to fill the governance gap left by the gradual erosion of hard law.

Many other scholars, including many of you in this room, have discussed the growth of soft law mechanisms in specific contexts, but I believe we have probably failed to acknowledge the extent to which these informal governance models have already become the dominant form of technological governance, at least in the United States.

I’m currently co-authoring a very long study which documents how the Obama Administration came to rely quite heavily on multistakeholder processes, negotiated “best practices,” and industry codes of conduct as the primary governance mechanisms for a long list of emerging tech issues, including: driverless cars, commercial drones, big data, facial recognition, the Internet of Things and wearable technology, mobile medical applications, 3D printing, artificial intelligence, the Sharing Economy, and much more.

Most of these soft law processes were driven by the NTIA and FTC, but plenty of other agencies with an “N” or an “F” at the beginning of their name have undertaken some sort of soft law process, including NHTSA, the FDA, the FAA, and so on.

Now, I’m willing to bet that many of those involved in these processes who generally favor more anticipatory regulatory approaches would have preferred to start with hard law solutions to some of these issues. And I am equally certain that many of the innovators involved in those multistakeholder processes would have probably preferred not to have had to come to the table at all.

But at the end of the day, for the most part, all sides did come to the table and worked together in a good faith effort to find some rough consensus about what sort of informal guidelines would govern the future of innovation in these sectors.

The Worst of All Systems, Except All the Others

Plenty of questions remain about such soft law systems, and the irony is that defenders of both permissionless innovation and the precautionary principle will quite often be raising very similar concerns regarding the transparency, accountability, and enforceability of these systems.

But I’m inclined to believe that no matter where you sit on the permissionless vs. precautionary spectrum, and no matter what your reservations may be about the new world of soft law governance that we find ourselves moving into, this is the future, and the future is now.

Much as Churchill said of democracy being “the worst form of Government except for all those other forms that have been tried from time to time,” I think we are well on our way to a world in which soft law is the worst form of technological governance except for all those others that have been tried before.

Of course, the devil is always in the details, and I suspect that we’ll have plenty to discuss and debate in that regard. Let’s get that conversation going.
