Are “Permissionless Innovation” and “Responsible Innovation” Compatible?

July 12, 2017

“Responsible research and innovation,” or “RRI,” has become a major theme in academic writing and conferences about the governance of emerging technologies. RRI might be considered just another variant of corporate social responsibility (CSR), and it indeed borrows from that heritage. What makes RRI unique, however, is that it is more squarely focused on mitigating the potential risks associated with various technologies or technological processes. RRI is particularly concerned with “baking” certain values and design choices into the product lifecycle before new technologies are released into the wild.

In this essay, I want to consider how RRI lines up with the opposing technological governance regimes of “permissionless innovation” and the “precautionary principle.” More specifically, I want to address the question of whether “permissionless innovation” and “responsible innovation” are even compatible. While participating in recent university seminars and other tech policy events, I have encountered a certain degree of skepticism—and sometimes outright hostility—after suggesting that, properly understood, “permissionless innovation” and “responsible innovation” are not warring concepts and that RRI can co-exist peacefully with a legal regime that adopts permissionless innovation as its general tech policy default. Indeed, the application of RRI lessons and recommendations can strengthen the case for adopting a more “permissionless” approach to innovation policy in the United States and elsewhere.

Definitional Ambiguities, Part 1: “Governance”

Before we can have a constructive conversation about these issues, however, we need to agree upon how narrowly or broadly we are defining some relevant terms, beginning with the word “governance.” When some hear the term “governance” their first reaction might be to think “government,” and formal legal and regulatory processes in particular. That is certainly one form of governance, but it is hardly the only one.

We often speak of the “governance” of corporations, schools, churches, other institutions, and even households. When we do, we usually do not mean government administration of these things; we are instead thinking of some other, more amorphous form of governance by a variety of individuals or groups. The “governance” of a company, for example, includes the interaction of shareholders, board members, corporate officials, workers, and so on. The “governance” of a church might involve clergy, the congregation, and sacred scriptures or traditions.  Household “governance” comes down to decisions made by parents and caretakers. And so on.

Thus, “governance” can certainly have the narrow connotation of being associated with formal regulatory enactments by governments, but it can also describe a much broader universe of norms and rules that are established and enforced by a wide variety of people (or groups of people) in a wide variety of ways.

When we consider questions of technological governance—and specifically the notion of “anticipatory governance,” which is a prominent feature of RRI discussions—it helps to specify whether we are speaking of governance in a broad or narrow sense. Whether consciously or not, much of the RRI literature fails to make clear which type of “governance” its scholars and advocates have in mind when proposing new forms of anticipatory technological governance.

Definitional Ambiguities, Part 2: “Precautionary Principle” & “Permissionless Innovation”

These distinctions are particularly important when we compare and contrast the “precautionary principle” and “permissionless innovation.” These concepts are most useful when viewed as governance dispositions or policy postures, and they are usually—although not always—used in the narrow “governance” sense to describe one’s perspective on where legal and regulatory defaults should be set.

Even when applied narrowly, however, both terms are open to interpretation as applied in various policy contexts. For example, precaution could mean an outright prohibition on an innovative activity until such time as it had been proven safe (this is the way many FDA or FAA regulations work). But precaution might be imposed through somewhat less restrictive approaches, such as a set of government-established safety standards buttressed by a recall regime (think NHTSA or CPSC). Even less restrictive but still precautionary in orientation would be a mandatory labeling law or a government-led risk reduction educational campaign. In other words, there are probably as many flavors of the precautionary principle as there are flavors of ice cream.

For the longest time, both proponents and critics of the precautionary principle have failed to put a name on its opposing worldview or governance disposition. I have argued that, despite its uncertain origin and imprecise meaning, “permissionless innovation” provides a useful name for the antithesis of the precautionary principle.

As I noted in a recent speech at an Arizona State University law school conference on technological governance, critics of permissionless innovation sometimes like to imply that it is synonymous with anarchy. (In fact, a few people at that event leveled that accusation at me.) But I’ve written an entire book on this notion and surveyed countless essays and articles that cite the term, and I have never once seen any advocate of permissionless innovation going to such an extreme. In fact, those advocates often don’t even bother calling for the abolition of any laws, programs, or agencies. As I noted in my ASU talk, “most of those defenders of permissionless innovation are using the term as a sort of shorthand when what they really mean to say is something like: ‘give innovators a bit more breathing room,’ or, ‘don’t rush to regulate.’”

And so, as a policy posture, permissionless innovation really comes down to a preference for setting public policy defaults closer to green lights rather than red ones. In my own book on the subject, I defined the term as follows:

“Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.”

By contrast, the precautionary principle posture generally recommends keeping the light red until innovators can prove their new products and services are “safe,” however that is defined. But there are many points along the spectrum between these two policy postures. And if we can accept the idea that the “precautionary principle” and “permissionless innovation” act more as general governance dispositions instead of fixed and rigid edicts, then it is also easier to imagine how both of those dispositions can incorporate “responsible innovation” notions into their governance visions.

Definitional Ambiguities, Part 3: “Responsible Innovation”

But what exactly constitutes “responsible innovation”? Definitions of responsible research and innovation are still evolving, but a leading article on the subject by René von Schomberg from 2011 argues that it can be defined as:

“A transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society).”

A more streamlined definition was offered by Jack Stilgoe, Richard Owen, and Phil Macnaghten in a 2013 article: “Responsible innovation means taking care of the future through collective stewardship of science and innovation in the present.” They also proposed four dimensions of responsible innovation—anticipation, reflexivity, inclusion and responsiveness—which they say “provide a framework for raising, discussing and responding to such questions.”

RRI Tools, a European consortium focused on promoting responsible innovation strategies, identifies the six core goals of RRI as: open access, gender equality in science, ethics, science education, governance, and public engagement. Other groups and individuals promoting RRI focus on privacy, safety, and security as crucial values that they hope to work into more product development processes early on.

As with “corporate social responsibility” before it, “responsible innovation” will remain a term that is open to varying interpretations and which can incorporate many distinct values that are context-dependent. What Milton Friedman said of CSR discussions in 1970—that they “are notable for their analytical looseness and lack of rigor”—continues to be somewhat true for both CSR and RRI circa 2017. Nonetheless, what both concepts hold in common is the belief that, whatever those “responsible” values are, they can be “baked in” to corporate decision-making and product design processes in an anticipatory fashion.

And while not everyone will agree on the contours of these concepts, practically speaking, I think we can expect both the CSR and RRI movements to continue growing in the coming years. That will be the case not only because of the pressures applied by various activists, stakeholders, and governments, but also because many companies and their consumers will demand more than just better products and greater profitability.

But Doesn’t RRI Necessitate the Precautionary Principle as a Policy Prerequisite?

But how precisely should RRI notions and recommendations influence policy deliberations over the future course of technological governance in the narrow (i.e., more legalistic) sense of the term? Here’s where things get more interesting.

The problem is that many RRI advocates seem more sympathetic to precautionary policy regimes and skeptical of the wisdom of permissionless innovation as a policy default. This is not always well-articulated in their writing; rather, it is the attitude on display when I speak with RRI advocates or hear them deliver speeches. Yet most of these advocates will never let you pin them down on the point.

Some RRI advocates do come close to making that connection. In his seminal article, René von Schomberg argues that RRI “can reduce the human cost of trial and error and make advantage of a societal learning process of stakeholders and technical innovators. It creates a possibility for anticipatory governance,” he says. “This should ultimately lead to products which are (more) societal robust.”

He then briefly raises the possibility of RRI informing the application of the precautionary principle in public policy debates:

“The precautionary principle works as an incentive to make safe and sustainable products and allow governmental bodies to intervene with Risk Management decisions (such as temporary licensing, case by case decision making etc) whenever necessary in order to avoid negative impacts.”

Yet, von Schomberg never really spells out the exact relationship between RRI and the precautionary principle as a matter of public policy.

Another leading article on the meaning of RRI, by Grace Eden, Marina Jirotka, and Bernd Stahl, says that “The RRI focus is more on mitigating wider societal long-term risks and so favors incremental rather than radical innovation.” That seems to suggest a closer connection between RRI and a formal application of the precautionary principle in policy deliberations about emerging technologies. They also speak of the “two very different approaches to problem solving (anticipatory vs. evidence-based),” which I have argued gets to the heart of the divergence between the precautionary principle and permissionless innovation policy paradigms. Yet these authors do not dwell on this connection at length, and most of the rest of their article is focused on the ways in which RRI can (and already does) infuse product and service development processes outside of the realm of public policy.

In a 2015 Brookings Institution white paper about RRI, Walter D. Valdivia and David H. Guston offer a more concrete answer to this question when they insist that responsible innovation “is not a doctrine of regulation and much less an instantiation of the precautionary principle; the actions it recommends do not seek to slow down innovation because they do not constrain the set of options for researchers and businesses, they expand it.” They continue on to note that:

“[responsible innovation] considers innovation inherent to democratic life and recognizes the role of innovation in the social order and prosperity. It also recognizes that at any point in time, innovation and society can evolve down several paths and the path forward is to some extent open to collective choice. What RI pursues is a governance of innovation where that choice is more consonant with democratic principles.”

Here, finally, we have a better demarcation between the general notion of RRI and the formal application of the precautionary principle. But is that line really so bright? Do other RRI scholars agree with Valdivia and Guston about this separation between the “responsible innovation” movement and the formal application of the precautionary principle in the policy realm? And, finally, what is meant by “democratic life” and “democratic principles” in this context?

I suspect that many RRI advocates would read that last line from Valdivia and Guston above (“What RI pursues is a governance of innovation where that choice is more consonant with democratic principles.”) and suggest that it favors an embrace of the precautionary principle as the default position in emerging technology policy discussions. But, again, that remains open to debate because so much of the RRI literature lacks precision regarding the connection between these concepts.

How RRI Can be Compatible with Both Visions

Regardless, I would like to suggest that parties on both sides of this debate would be wise to divorce the concept of responsible innovation from their priors regarding optimal regulatory policy toward emerging technology. Properly understood, “responsible innovation” could be a feature of the “precautionary” vision, but it could also be compatible with the “permissionless” governance vision and resulting policy regimes. To reach that understanding, both sides will need to be open to learning from the other and willing to take their concerns seriously.

Advocates of RRI should understand that, just as CSR can do a great deal of good even in the absence of formal regulatory action, the same can be true of RRI, even in a policy regime in which permissionless innovation is the general default.

If, however, the first instinct among the RRI community is to consider advocates of permissionless innovation nothing more than a bunch of uncaring anarchists, they relinquish the opportunity to work with diverse parties to instill wise guidelines into technological development processes. This would be particularly misguided in an age when the so-called “Pacing Problem”—i.e., the growing gap between the introduction of new technologies and the time it takes laws and regulations to adjust or be formulated in response—has become an ever-accelerating reality, making traditional “hard law” regulatory enactment increasingly difficult. If the RRI community wants any of the values it cares about incorporated into technological development processes, then it will need to be open to the idea that perhaps the only way to do so will be through less formal procedures, precisely because law will likely lag so far behind marketplace developments.

Likewise, if the first instinct among the permissionless innovation advocates is to regard the RRI movement as little more than repackaged Ludditism, hell-bent on derailing all the great inventions of the future, then they are foolishly forgoing the chance to work with a diverse group of well-intentioned scholars and stakeholders who could ensure that new products and services gain more widespread acceptance and public trust. More practically, permissionless innovation advocates would be wise to accept the fact that, although technological innovation is generally outpacing the ability of government to keep up, that doesn’t mean most of the traditional regulatory regimes or agencies are going away any time soon. After all, can you name a technocratic law or regulatory body that has been liberalized or eliminated in recent memory? RRI offers a chance to forge a rough peace with agencies and officials who often just want to have a small say in how innovative processes are unfolding. Of course, if regulators seek to have a BIG say in those matters, then policy fights will no doubt ensue. But in my experience, this is less often the case than some defenders of permissionless innovation suggest.

Thus, advocates of permissionless innovation should understand that RRI is not synonymous with a formal precautionary principle-focused policy prescription and that “anticipatory governance” can mean something more generic and beneficial, so long as it does not come to mean the formal application of the precautionary principle as the public policy default.

We Are Already Going Down This Path

Perhaps I am being naïve to think this sort of common ground might exist. But the funny thing is that I know for a fact that it already does! RRI principles have been infusing various multistakeholder processes in the United States for many years now.

For example, here’s a paper I wrote back in 2009 about the various online safety task forces, blue ribbon commissions, and other collaborative efforts that were instilling “safety by design” principles into various online services and digital products. Meanwhile, “privacy by design” and “security by design” efforts are all the rage these days and a wide variety of best practices and codes of conduct have been established to make sure privacy and security values are baked-in to the product design process from the start.

Meanwhile, safety, security, and privacy best practices have increasingly been formulated by the U.S. Department of Commerce (the National Telecommunications and Information Administration in particular), the Federal Trade Commission, FDA, FCC, and the White House Office of Science and Technology Policy. These multistakeholder efforts and agency best practice reports have contained assorted “responsible innovation” principles for technologies as wide-ranging as: big data, artificial intelligence, the Internet of Things, facial recognition, online advertising, mobile phone privacy, mobile apps for kids, driverless cars, commercial drones, genetic testing, medical advertising on social media, 3D printed medical devices, medical device cybersecurity, nanotech, and much more. (I have a paper in the works with Ryan Hagemann of the Niskanen Center in which we attempt to document many of these new “soft law” technological governance efforts. There have been so many of these efforts – many of which are still underway – that we are having a hard time cataloging them all!)

I am utterly perplexed that more RRI scholars have not identified the many ways in which the principles they advocate already infuse multistakeholder processes such as these. Perhaps it is because those scholars feel that some of these multistakeholder processes fail to address the full range of issues or values that they believe are in play. But if you examine recent reports from these agencies and government bodies, I think you will come away quite impressed by the breadth of issues and concerns that they cover. Likewise, the values and best practices they discuss and/or recommend are exactly the sort of responsible innovation principles that the RRI movement cares about.

To some extent, therefore, RRI is already well-entrenched in the technology governance process, it’s just a bit messy. I think some RRI scholars probably fall prey to the old “Goldilocks myth” that we can get these principles just right with enough consideration and oversight. The reality on the ground is that instilling RRI values into the technological design process is a dynamic, iterative, and quite imprecise art.

In closing, there’s still more to the technological governance story that RRI advocates fail to incorporate into their work. To fully appreciate the many ways technological processes are constrained and corrected, they must take into account other governance forces and factors, including the role of:

  • social norms and reputational effects (especially the growing importance of reputational feedback mechanisms);
  • third-party accreditation and standards-setting bodies;
  • courts and common law (including legal solutions like product liability, negligence, design defects law, failure to warn, breach of warranty, and other assorted torts and class action claims);
  • insurance markets as risk calibrators and correctional mechanisms;
  • federal and state consumer protection agencies (such as the FTC), which police “unfair and deceptive practices” and other harms; and
  • the media, academic institutions, non-profit advocacy groups, and the general public, all of which can put pressure on technology developers.

Only by taking into account the full range of players and activities at work can we develop a more robust understanding of how technology is actually “governed” in our modern world. I suspect that many in the RRI community of scholars do appreciate these other factors, even though they don’t always account for all of them in their writing and advocacy. Then again, many of those advocates would perhaps decry the more remedial, ex post nature of these governance tools and insist that more ex ante anticipatory planning must be at the heart of technological design and development processes.

In reality, a mix of these two approaches is already at work today and will likely continue to dominate the governance process well into the future. So long as the anticipatory efforts don’t become formal regulatory proposals, there is no reason that this mix of “responsible innovation” governance tools and methods can’t be embraced by a diverse array of scholars and innovators.

