Who Really Believes in “Permissionless Innovation”?

March 4, 2013

[Note: I later adapted this essay into a short book, which you can download for free here.]

Let’s talk about “permissionless innovation.” We all believe in it, right? Or do we? What does it really mean? How far are we willing to take it? What are its consequences? What is its opposite? How should we balance them?

What got me thinking about these questions was a recent essay over at The Umlaut by my Mercatus Center colleague Eli Dourado entitled, “‘Permissionless Innovation’ Offline as Well as On.” He opened by describing the notion of permissionless innovation as follows:

In Internet policy circles, one is frequently lectured about the wonders of “permissionless innovation,” that the Internet is a global platform on which college dropouts can try new, unorthodox methods without the need to secure authorization from anyone, and that this freedom to experiment has resulted in the flourishing of innovative online services that we have observed over the last decade.

Eli goes on to ask, “why it is that permissionless innovation should be restricted to the Internet. Can’t we have this kind of dynamism in the real world as well?”

That’s a great question, but let’s ponder an even more fundamental one: Does anyone really believe in the ideal of “permissionless innovation”? Is there anyone out there who makes a consistent case for permissionless innovation across the technological landscape, or is it the case that a fair degree of selective morality is at work here? That is, people love the idea of “permissionless innovation” until they find reasons to hate it — namely, when it somehow conflicts with certain values they hold dear.

I’ve written about this here before when referencing the selective morality we often see at work in debates over online safety, digital privacy, and cybersecurity. [See my essays: “When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed,” “Privacy as an Information Control Regime: The Challenges Ahead,” and “And so the IP & Porn Wars Give Way to the Privacy & Cybersecurity Wars.”] In those essays, I’ve noted how ironic it is that the same crowd that preaches about how essential permissionless innovation is when it comes to overly restrictive copyright laws is often among the first to advocate “permissioned” regulations for online data collection and advertising practices. I also noted how many conservatives who demand permissionless innovation on the economic / infrastructure front are quick to call for preemptive content controls to restrict objectionable online content, and a handful of them want “permissioned” cybersecurity rules.

Of course, it’s not really all that surprising that people wouldn’t hold true to the ideal of “permissionless innovation” across the board because at some theoretical point almost every technology has a use scenario that someone — perhaps many of us — would want to see restricted. How do we know when it makes sense to impose some restrictions on innovation to make it more “permissioned”?

The Range of Options

I spend a lot of time thinking about that question these days. The sheer volume and diversity of interesting innovations that surround us today — or that are just on the horizon — are forcing us to struggle both individually and collectively with our tolerance for unabated innovation. Here are just a few of the issues I’m thinking of (many of which I am currently writing about) where these questions come up constantly:

  • Online data aggregation / targeted advertising
  • Commercial drones
  • 3D printing
  • Facial recognition & biometrics
  • Wearable computing
  • Geolocation / Geotagging / RFID
  • Robotics
  • Nanotechnology

When thinking about innovation in these spaces, it is useful to consider a range of theoretical responses to new technological risks. I developed such a model in my new Minnesota Journal of Law, Science & Technology article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.” In that piece, I identify four general responses and place them along a “risk response continuum”:

  1. Prohibition: Prohibition attempts to eliminate potential risk through suppression of technology, product or service bans, information controls, or outright censorship.
  2. Anticipatory Regulation: Anticipatory regulation controls potential risk through preemptive, precautionary safeguards, including administrative regulation, government ownership or licensing controls, or restrictive defaults. Anticipatory regulation can lead to prohibition, although that tends to be rare, at least in the United States.
  3. Resiliency: Resiliency addresses risk through education, awareness building, transparency and labeling, and empowerment steps and tools.
  4. Adaptation: Adaptation involves learning to live with risk through trial-and-error experimentation, experience, coping mechanisms, and social norms. Adaptation strategies often begin with, or evolve out of, resiliency-based efforts.

While these risk-response strategies could also describe the range of responses that individuals or families might employ to cope with technological change, I am using the framework here to consider the theoretical responses of society at large or of governments. That allows us to bring three general policy concepts into the discussion:

  1. “Permissionless Innovation”: Complete freedom to experiment and innovate.
  2. “Permissioned Innovation”: General freedom to experiment and innovate, but with the possibility that innovation might later be restricted in some fashion.
  3. “The Precautionary Principle”: New innovations are discouraged or even disallowed until their developers can prove that they won’t cause any harms.

Here’s how I put all these concepts together in one image:

[Figure: Risk Response Continuum]

This gives us a framework to consider responses to various technological developments we are struggling with today. But how do we decide which response makes the most sense for any given technology? The answer will come down to a complicated (and often quite contentious) cost-benefit analysis that weighs the theoretical harms of technological innovation alongside the many potential benefits of ongoing experimentation.

The Case for Permissionless Innovation or an “Anti-Precautionary Principle”

I believe a strong case can be made that permissionless innovation should be our default position in public policy deliberations about technological change. Here’s how I put it in the conclusion of my “Technopanics” article:

Resiliency and adaptation strategies are generally superior to more restrictive approaches because they leave more breathing room for continuous learning and innovation through trial-and-error experimentation. Even when that experimentation may involve risk and the chance of mistake or failure, the result of such experimentation is wisdom and progress. As Friedrich August Hayek concisely wrote, “Humiliating to human pride as it may be, we must recognize that the advance and even preservation of civilization are dependent upon a maximum of opportunity for accidents to happen.”

I believe this is the more sensible default position toward technological innovation because the opposite default — a technological Precautionary Principle — essentially holds that anything new is “guilty until proven innocent,” as journalist Ronald Bailey has noted in critiquing the notion. When the law mandates “play it safe” as the default policy toward technological progress, progress is far less likely to occur at all. Social learning and adaptation become less likely, perhaps even impossible, under such a regime. In practical terms, it means fewer services, lower-quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living.

Therefore, the default policy disposition toward innovation should be an “anti-Precautionary Principle.” Paul Ohm outlined that concept in his 2008 article, “The Myth of the Superuser: Fear, Risk, and Harm Online.” Ohm, who recently joined the Federal Trade Commission as a Senior Policy Advisor, began his essay by noting that “Fear of the powerful computer user, the ‘Superuser,’ dominates debates about online conflict,” but that this superuser is generally “a mythical figure” concocted by those who are typically quick to set forth worst-case scenarios about the impact of digital technology on society. Fear of the “superuser” and hypothetical worst-case scenarios prompt policy action, since, as Ohm notes, “Policymakers, fearful of his power, too often overreact by passing overbroad, ambiguous laws intended to ensnare the Superuser but which are instead used against inculpable, ordinary users.” “This response is unwarranted,” Ohm argues, “because the Superuser is often a marginal figure whose power has been greatly exaggerated.” (at 1327).

Ohm correctly notes that Precautionary Principle policies are often the result. He prefers the “anti-Precautionary Principle” instead, which he summarized as follows: “when a conflict involves ordinary users in the main and Superusers only at the margins, the harms resulting from regulating the few cannot be justified.” (at 1394) In other words, policy should not be shaped by hypothetical fears and worst-case “boogeyman” scenarios. He elaborates as follows:

Even if Congress adopts the Anti-Precautionary Principle and begins to demand better empirical evidence, it may conclude that the Superuser threat outweighs the harm from regulating. I am not arguing that Superusers should never be regulated or pursued. But given the checkered history of the search for Superusers — the overbroad laws that have ensnared non-Superuser innocents; the amount of money, time, and effort that could have been used to find many more non-Superuser criminals; and the spotty record of law enforcement successes — the hunt for the Superuser should be narrowed and restricted. Policymakers seeking to regulate the Superuser can adopt a few strategies to narrowly target Superusers and minimally impact ordinary users. The chief evil of past efforts to regulate the Superuser has been the inexorable broadening of laws to cover metaphor-busting, impossible-to-predict future acts. To avoid the overbreadth trap, legislators should instead extend elements narrowly, focusing on that which separates the Superuser from the rest of us: his power over technology. They should, for example, write tightly constrained new elements that single out the use of power, or even, the use of unusual power. (at 1396-7)

To summarize, the Anti-Precautionary Principle generally holds that:

  1. society is better off when innovation is not preemptively restricted;
  2. accusations of harm and calls for policy responses should not be premised on worst-case scenarios; and,
  3. remedies to actual harms should be narrowly tailored so that beneficial uses of technology are not derailed.

Alternatives to Precaution / Permissioning

I don’t necessarily believe that the “anti-Precautionary Principle” or the norm of “permissionless innovation” should hold in every case. Neither does Ohm. In fact, in his recent work on privacy and online data collection, Ohm betrays his own rule. He does so too casually, I think, and falls prey to the very “Superuser” boogeyman fears he lamented earlier.

For example, in his latest law review article, “Branding Privacy,” Ohm argues that “Change can be deeply unsettling. Human beings prefer predictability and stability, and abrupt change upsets those desires. . . . Rapid change causes harm by disrupting settled expectations” (at 924). His particular concern is the way that corporate privacy policies continue to evolve, generally in the direction of allowing more and more sharing of personal information. Ohm believes this is a significant enough concern that, at a minimum, companies should be required to assign a new name to any service or product if a material change is made to its information-handling policies and procedures. For example, if Facebook or Google wanted to make a major change to their services in the direction of greater information sharing, they would have to change their names (at least for a time) to something like Facebook Public or Google Public.

Before joining the FTC, Ohm also authored a panicky piece for the Harvard Business Review that outlined a worst-case scenario: a “database of ruin” that would link our every past transgression and most intimate secret. This fear led him to argue that:

We need to slow things down, to give our institutions, individuals, and processes the time they need to find new and better solutions. The only way we will buy this time is if companies learn to say, “no” to some of the privacy-invading innovations they’re pursuing. Executives should require those who work for them to justify new invasions of privacy against a heavy burden, weighing them against not only the financial upside, but also against the potential costs to individuals, society, and the firm’s reputation.

Well geez, Paul, that sounds a lot like the same Precautionary Principle that you railed against in your “Superuser” essay! In a sense, I can’t blame Paul for not being true to his “anti-Precautionary Principle.” I would be the first to admit that use scenarios matter; it’s just that I don’t think Paul has proven that the Precautionary Principle should be the norm we adopt in this case, or even that permissioned regulation is necessary. To be fair, Paul has left it a bit unclear just what he wants the law to accomplish here, and when I challenged him on the issue at a recent policy conference at GMU, I could not nail him down on it. But it is not enough just to claim, as Ohm does, that “change can be deeply unsettling” or that “human beings prefer predictability and stability, and abrupt change upsets those desires.” Those are universal truths that can be applied to almost any new type of technological change that society must come to grips with, but they cannot serve as the test for preemptively restricting innovation. Something more is needed. Before we get to the point where we “slow things down” for online data collection, or anything else for that matter, we should consider:
  1. How serious is the asserted problem or “harm” in question? (And we need to be very concrete about these harms; conjectural fears and hypothetical harms should not drive regulation.)
  2. What alternatives exist to prohibition or administrative regulation as solutions to those problems?
Regarding this second point, we should ask: how can education and awareness-building help solve problems? How might consumers take advantage of the empowerment tools or strategies at their disposal to deal with technological change? How might we learn to assimilate some of these new technologies into our lives in a gradual fashion to take advantage of the many benefits they offer? Short of administrative regulation, what other legal mechanisms exist (contracts, property rights, torts, anti-fraud statutes, etc.) that could be tapped to remedy harms — whether real or perceived? And should we trust the value judgments consumers make and encourage them to exercise personal and parental responsibility before we call in the law to trump everyone’s preferences?
I spend the entire second half of my “Technopanics” paper trying to develop this “bottom-up” approach to dealing with technological change, in the hope that we can remain as true as possible to the ideal of “permissionless innovation.” When real harms are identified and proven, we can then slide our way up the continuum outlined above as needed, but generally speaking, we should start from the default position of innovation allowed.

Applying the Model to Online Safety & Digital Privacy

I’d argue that this “bottom-up” model of coping with technological change is already at work in many areas of modern society. In my “Technopanics” paper, I note that this is pretty much the approach we’ve adopted for online safety concerns, at least here in the United States. Very little innovation (or content) is prohibited or even permissioned today. Instead, we rely on other mechanisms: user education and empowerment, informal household media rules, social pressure, societal norms, and so on. [I’ve documented this in greater detail in this booklet.]

Fifteen years ago, there were many policymakers and policy activists who advocated a very different approach: indecency rules for the Net, mandatory filtering schemes, mandatory age verification, and so on. But that prohibitionary and permission-based approach lost out to the resiliency and adaptation paradigm. As a result, innovation and free speech continue relatively unabated. That doesn’t mean everything is sunshine and roses. The Web is full of filth, and hateful things are said every second of the day across digital networks. But we are finding other ways to deal with those problems — not always perfectly, but well enough to get by and allow innovation and speech to continue. When serious harms can be identified — such as persistent online bullying or predation of youth — targeted legal remedies have been utilized.

In two forthcoming law review articles (for the Harvard Journal of Law & Public Policy and the George Mason Law Review), I apply this same framework to concerns about commercial data collection and digital privacy. I conclude the Harvard essay by noting that:

Many of the thorniest social problems citizens encounter in the information age will be better addressed through efforts that are bottom-up, evolutionary, education-based, empowerment-focused, and resiliency-centered. That framework is the best approach to address personal privacy protection. Evolving social and market norms will also play a role as citizens incorporate new technologies into their lives and business practices. What may seem like a privacy-invasive practice or technology one year might be considered an essential information resource the next. Public policy should embrace—or at least not unnecessarily disrupt—the highly dynamic nature of the modern digital economy.

Two additional factors shape my conclusion that this framework makes as much sense for privacy as it does for online child safety concerns. First, the effectiveness of law and regulation on this front is limited by normative considerations. The inherent subjectivity of privacy as a personal and societal value is one reason why expanded regulation is not sensible. As with online safety, we have a rather formidable “eye of the beholder” problem at work here. What we need, therefore, are diverse solutions for a diverse citizenry, not one-size-fits-all, top-down regulatory solutions that seek to impose the values of the few on the many. Second, enforcement challenges must be taken into consideration. Most of the problems policymakers and average individuals face when it comes to controlling the flow of private information online are similar to the challenges they face when trying to control the free flow of digitized bits in other information policy contexts, such as online safety, cybersecurity, and digital copyright. It will be increasingly difficult and costly to enforce top-down regulatory regimes (assuming we can even agree on common privacy standards); therefore, alternative approaches to privacy protection should be considered.

Of course, some alleged privacy harms involve highly sensitive forms of personal information and can do serious harm to person or property. Our legal regime has evolved to handle those harms. We have targeted legal remedies for health and financial privacy violations, for example, and state torts to fill other gaps. Meanwhile, the FTC has broad discretion under Section 5 of the Federal Trade Commission Act to pursue “unfair or deceptive acts or practices,” including those that implicate privacy. These mechanisms are more “bottom-up” in character in that they leave sufficient breathing room for ongoing experimentation and innovation while still allowing individuals to pursue remedies for egregious harms that can be proven.

Applying the Model Elsewhere

We can apply this model more broadly. Let’s pick an issue that’s been in the news recently: concerns about Google Glass and fears about “wearable computing” more generally, which Jerry Brito wrote about earlier today. Google Glass hasn’t even hit the market yet, but the privacy paranoia has already kicked into high gear. Andrew Keen argues that “Google Glass opens an entirely new front in the digital war against privacy” and that “It is the sort of radical transformation that may actually end up completely destroying our individual privacy in the digital 21st century.” His remedy: “I would make data privacy its default feature. Nobody else sees the data I see unless I explicitly say so. Not advertisers, nor the government, and certainly not those engineers of the human soul at the Googleplex. No, Google Glass must be opaque. For my eyes only.”

There’s even more fear and loathing to be found in this piece by Mark Hurst entitled, “The Google Glass feature no one is talking about.” That feature would be Glass’s ability to record massive amounts of video and audio in both public and private spaces. In reality, plenty of people are talking about that feature and wringing their hands about its implications for our collective privacy. Also see, for example, Gary Marshall’s essay, “Google Glass: Say Goodbye to Your Privacy.”

But Google Glass is just the beginning. For another example of a wearable computing technology that is bound to raise concern once it goes mainstream, check out the Memoto Lifelogging Camera. Here’s the description from the website:

The Memoto camera is a tiny camera and GPS that you clip on and wear. It’s an entirely new kind of digital camera with no controls. Instead, it automatically takes photos as you go. The Memoto app then seamlessly and effortlessly organizes them for you. . . . As long as you wear the camera, it is constantly taking pictures. It takes two geotagged photos a minute with recorded orientation so that the app can show them upright no matter how you are wearing the camera. . . . The camera and the app work together to give you pictures of every single moment of your life, complete with information on when you took it and where you were. This means that you can revisit any moment of your past.

Of course, that means you will also be able to revisit many moments from the lives of others who may have been around you while your Memoto Camera was logging your life. So, what are we going to do about Google Glass, Memoto, and wearable computing? Well, for now I hope that our answer is: nothing. This technology is not even out of the cradle yet and we have no idea how it will be put to use by most people. I certainly understand some of the privacy paranoia and worst-case scenarios that some people are circulating these days. As someone who deeply values their own privacy, and as the father of two digital natives who are already begging for more and more digital gadgets, I’ve thought about a wide variety of worst-case scenarios for me and my kids.

But we’ve been here before. In my Harvard essay, I go back and track privacy panics from the rise of the camera and public photography in the late 1800s all the way down to Gmail in the mid-2000s and note that societal attitudes quickly adjusted to these initially unsettling technologies. That doesn’t mean that all the concerns raised by those technologies disappeared. A century after Warren and Brandeis railed against the camera and called for controls on public photography, many people are still complaining about what people can do with the devices. And although 425 million people now use Gmail and love the free service it provides, some vociferous privacy advocates are still concerned about how it might affect our privacy. And the same is true of a great many other technologies.

But here’s the key question: Are we not better off because we have allowed these technologies to develop in a relatively unfettered fashion? Would we have been better off imposing a Precautionary Principle on cameras and Gmail right out of the gate and then only allowing innovation once some techno-philosopher kings told us that all was safe? I would hope that the costs associated with such restrictions would be obvious. And I would hope that we might exercise similar policy restraint when it comes to new technologies, including Google Glass, Memoto, and other forms of wearable computing. After all, there are a great many benefits that will come from such technologies, and it is likely that many (perhaps most) of us will come to view these tools as an indispensable part of our lives despite the privacy fears of some academics and activists. As Brito notes in his essay on the topic, “in the long run, the public will get the technology it wants, despite the perennial squeamishness of some intellectuals.”

How will we learn to cope? Well, I already have a speech prepared for my kids about the proper use of such technologies that will build on the same sort of “responsible use” talk I have with them about all their other digital gadgets and the online services they love. It won’t be an easy talk because part of it will involve the inevitable chat about responsible use in very personal situations, including times when they may be involved in moments of intimacy with others. But this is the sort of uncomfortable talk we need to be having at the individual level, the family level, and the societal level. How can social norms and smart etiquette help us teach our children and each other responsible use of these new technologies? Such a dialogue is essential since, no matter how much we might hope that these new technologies and the problems they raise will just go away, they won’t.

In those cases where serious harms can be demonstrated — for example “peeping Toms” who use wearable computing to surreptitiously film unsuspecting victims — we can use targeted remedies already on the books to go after them. And I suspect that private contracts might play a stronger role here in the future as a remedy. Many organizations (corporations, restaurants, retail establishments, etc.) will want nothing to do with wearable computing on their premises. I can imagine that they may be on the front line of finding creative contractual solutions to curb the use of such technologies.

Embracing Permissionless Innovation While Rejecting “The Borg Complex”

One final point. It is essential that advocates of the “anti-Precautionary Principle” and the ideal of “permissionless innovation” avoid falling prey to what philosopher Michael Sacasas refers to as “the Borg Complex”:

A Borg Complex is exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile. The name is derived from the Borg, a cybernetic alien race in the Star Trek universe that announces to their victims some variation of the following: “We will add your biological and technological distinctiveness to our own. Resistance is futile.”

Indeed, too often in digital policy texts and speeches these days, we hear Pollyannaish writers adopting a cavalier attitude about the impact of technological change on individuals and society. Some extreme technological optimists are highly deterministic about technology as an unstoppable force and its potential to transform man and society for the better. Such rigid technological determinism and wild-eyed varieties of cyber-utopianism should be rejected. For example, as I noted in my review of Kevin Kelly’s What Technology Wants, “Much of what Kelly sputters in the opening and closing sections of the book sounds like quasi-religious kookiness by a High Lord of the Noosphere” and “at times, Kelly even seems to be longing for humanity’s assimilation into the machine or The Matrix.”

I discussed this problem in more detail in my chapter on “The Case for Internet Optimism, Part 1,” which appeared in the book, The Next Digital Decade. I noted that technological optimists need to appreciate that, as Neil Postman argued, there are some moral dimensions to technological progress that deserve attention. Not all changes will have positive consequences for society. Those of us who espouse the benefits of permissionless innovation as the default rule must simultaneously be mature enough to understand and address the downsides of digital life without casually dismissing the critics. A “just-get-over-it” attitude toward the challenges sometimes posed by technological change is never wise. In fact, it is downright insulting.

For example, when I am confronted with frustrated fellow parents who are irate about some of the objectionable content their kids sometimes discover online, I never say, “Well, just get over it!” Likewise, when I am debating advocates of increased privacy regulation who are troubled by data aggregation or targeted advertising, I listen to their concerns and try to offer constructive alternatives to their regulatory impulses. I also ask them to think through the consequences of prohibiting innovation and to realize that not everyone shares their values when it comes to privacy. In other words, I do not dismiss their concerns, no matter how subjective, about the impact of technological change on their lives or the lives of their children. But I do ask them to be careful about imposing their value judgments on everyone else, especially by force of law. I am not harping at them about how “Resistance is futile,” but I am often explaining to them why a certain amount of societal and individual adaptation will be necessary and why building coping mechanisms and strategies will be absolutely essential. I also share tips about the tools and strategies they can tap to help protect their privacy, noting specifically how it is easier (and cheaper) than ever to find and use ad preference managers, private browsing tools, ad-blocking technologies, cookie blockers, web script blockers, Do Not Track tools, and reputation protection services. This is all part of the resiliency and adaptation paradigm.

Conclusion

In closing, it should be clear by now that I am fairly bullish about humanity’s ability to adapt to technological change, even radical change. Such change can be messy, uncomfortable, and unsettling, but the amazing thing to me is how we humans have again and again and again found ways to assimilate new tools into our lives and march boldly forward. On occasion, we may need to slow that process down a bit when it can be demonstrated that the harms associated with technological change are unambiguous and extreme in character. But I think a powerful case can be made that, more often than not, we can and do find ways to effectively adapt to most forms of change by employing a variety of coping mechanisms. We should continue to allow progress through trial-and-error experimentation — in other words, through permissionless innovation — so that we can enjoy the many benefits that accrue from this process, including the benefits of learning from the mistakes we will sometimes make along the way.
