Innovation & Entrepreneurship

I’ve been working on a new book that explores the rise of evasive entrepreneurialism and technological civil disobedience in our modern world. Following the publication of my last book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, people started bringing examples of evasive entrepreneurialism and technological civil disobedience to my attention and asking how they related to the concept of permissionless innovation. As I started exploring and cataloging these case studies, I realized I could probably write an entire book about these developments and their consequences.

Hopefully that book will be wrapped up shortly. In the meantime, I am going to start rolling out some short essays based on content from the book. To begin, I will state the general purpose of the book and define the key concepts discussed therein. In the coming weeks and months, I’ll build on these themes, explain why they are on the rise, explore the effects they are having on society and technological governance efforts, and more fully develop some relevant case studies. Continue reading →

By Adam Thierer and Jennifer Huddleston Skees

There was horrible news from Tempe, Arizona this week as a pedestrian was struck and killed by a driverless car owned by Uber. This is the first fatality of its type and is drawing widespread media attention as a result. According to both the police and Uber, the investigation into the accident is ongoing, and Uber is assisting investigators. While this certainly is a tragic event, we cannot let it cost us the life-saving potential of autonomous vehicles.

While any fatal traffic accident involving a driverless car is certainly sad, we can’t ignore the fact that, each and every day in the United States, letting human beings drive on public roads is proving far more dangerous. This single event has led some critics to ask why driverless cars are being allowed on public roads at all before they have been proven to be 100% safe. Driverless cars can help reverse a public health disaster decades in the making, but only if policymakers allow real-world experimentation to continue.

Let’s be more concrete about this: Each day, Americans take 1.1 billion trips and drive 11 billion miles in vehicles that weigh, on average, between 1.5 and 2 tons. Sadly, about 100 people die and over 6,000 are injured each day in car accidents. Some 94% of these accidents are attributable to human error, and this deadly trend has been worsening as we become more distracted while driving. Moreover, according to the Centers for Disease Control and Prevention, almost 6,000 pedestrians were killed in traffic accidents in 2016, roughly one crash-related pedestrian death every 1.6 hours. In Arizona, the problem is even more pronounced, with the state ranked the 6th worst for pedestrians and the Phoenix area ranked the 16th worst metro for such accidents nationally. Continue reading →

We hear a lot these days about “technological moonshots.” It’s an interesting phrase because the meanings of both words in it are often left undefined. I won’t belabor the point about how people define–or, rather, fail to define–“technology” when they use it. I’ve already spent a lot of time writing about that problem. See, for example, my constantly updated essay about “Defining ‘Technology.’” It’s a compendium I began curating years ago that collects what dozens of others have had to say on the matter. I’m always struck by how many different definitions I keep unearthing.

The term “moonshots” has a similar problem. The first meaning is the literal one that hearkens back to President Kennedy’s famous 1962 “we choose to go to the moon” speech. That use of the term implies large government programs and agencies, centralized control, and top-down planning with a very specific political objective in mind. Increasingly, however, the term “moonshot” is used more generally, as I note in this new Mercatus essay about “Making the World Safe for More Moonshots.” My Mercatus Center colleague Donald Boudreaux has referred to moonshots as “radical but feasible solutions to important problems,” and Mike Cushing of Enterprise Innovation defines a moonshot as an “innovation that achieves the previously unthinkable.” I like that more generic use of the term and think it could be applied appropriately when discussing the big innovations many of us hope to see in fields as diverse as quantum computing, genetic editing, AI and autonomous systems, supersonic transport, and much more. I still have some reservations about the term, but it’s definitely a better one than “disruptive innovation,” which is also used differently by various scholars and pundits.

Continue reading →

Over at Plain Text, I have posted a new essay entitled, “Converting Permissionless Innovation into Public Policy: 3 Reforms.” It’s a preliminary sketch of some reform ideas that I have been working on as part of my next book project. The goal is to find some creative ways to move the ball forward on the innovation policy front, regardless of what level of government we are talking about.

To maximize the potential for ongoing, positive change and create a policy environment conducive to permissionless innovation, I argue that policymakers should pursue policy reforms based on these three ideas:

  1. The Innovator’s Presumption: Any person or party (including a regulatory authority) who opposes a new technology or service shall have the burden to demonstrate that such proposal is inconsistent with the public interest.
  2. The Sunsetting Imperative: Any existing or newly imposed technology regulation should include a provision sunsetting the law or regulation within two years.
  3. The Parity Provision: Any operator offering a similarly situated product or service should be regulated no more stringently than its least regulated competitor.

These provisions are crafted in a somewhat generic fashion in the hope that they can be modified and adopted by various legislative or regulatory bodies. If you are interested in reading more details about each proposal, jump over to Plain Text to read the entire essay.

The Mercatus Center at George Mason University has just released a new paper, “Artificial Intelligence and Public Policy,” which I co-authored with Andrea Castillo O’Sullivan and Raymond Russell. This 54-page paper can be downloaded via the Mercatus website, SSRN, or ResearchGate. Here is the abstract:

There is growing interest in the market potential of artificial intelligence (AI) technologies and applications as well as in the potential risks that these technologies might pose. As a result, questions are being raised about the legal and regulatory governance of AI, machine learning, “autonomous” systems, and related robotic and data technologies. Fearing concerns about labor market effects, social inequality, and even physical harm, some have called for precautionary regulations that could have the effect of limiting AI development and deployment. In this paper, we recommend a different policy framework for AI technologies. At this nascent stage of AI technology development, we think a better case can be made for prudence, patience, and a continuing embrace of “permissionless innovation” as it pertains to modern digital technologies. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later.

[Originally published on Plain Text on June 21, 2017.]

This summer, we celebrate the 20th anniversary of two developments that gave us the modern Internet as we know it. One was a court case that guaranteed online speech would flow freely, without government prior restraints or censorship threats. The other was an official White House framework for digital markets that ensured the free movement of goods and services online.

The result of these two vital policy decisions was an unprecedented explosion of speech freedoms and commercial opportunities whose benefits we continue to enjoy twenty years later.

While it is easy to take all this for granted today, it is worth remembering that, in the long arc of human history, no technology or medium has more rapidly expanded the range of human liberties — both speech and commercial liberties — than the Internet and digital technologies. But things could have turned out much differently if not for the crucially important policy choices the United States made for the Internet two decades ago. Continue reading →

[Remarks prepared for Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy & Ethics at Arizona State University, Phoenix, AZ, May 18, 2017.]

_________________

What are we to make of this peculiar new term “permissionless innovation,” which has gained increasing currency in modern technology policy discussions? And how much relevance has this notion had—or should it have—for conversations about the governance of emerging technologies? That’s what I’d like to discuss here today.

Uncertain Origins, Unclear Definitions

I should begin by noting that while I have written a book with the term in the title, I take no credit for coining the phrase “permissionless innovation,” nor have I been able to determine who first used the term. The phrase is sometimes attributed to Grace M. Hopper, a computer scientist who was a rear admiral in the United States Navy. She once famously noted, “It’s easier to ask forgiveness than it is to get permission.”

“Hopper’s Law,” as it has come to be known in engineering circles, is probably the most concise articulation of the general notion of “permissionless innovation” that I’ve ever heard, but Hopper does not appear to have ever used the actual phrase anywhere. Moreover, Hopper was not necessarily applying this notion to the realm of technological governance, but was seemingly speaking more generically about the benefit of trying new things without asking for the blessing of any number of unnamed authorities or overseers—which could include businesses, bosses, teachers, or perhaps even government officials. Continue reading →

By Jordan Reimschisel & Adam Thierer

[Originally published on Medium on May 2, 2017.]

Americans have schizophrenic opinions about artificial intelligence (AI) technologies. Ask the average American what they think of AI and they will often respond with a combination of fear, loathing, and dread. Yet, the very same AI applications they claim to be so anxious about are already benefiting their lives in profound ways.

Last week, we posted complementary essays about the growing “technopanic” over artificial intelligence and the potential for that panic to undermine many important life-enriching medical innovations or healthcare-related applications. We were inspired to write those essays after reading the results of a recent poll conducted by Morning Consult, which suggested that the public was very uncomfortable with AI technologies. “A large majority of both Republicans and Democrats believe there should be national and international regulations on artificial intelligence,” the poll found. Of the 2,200 American adults surveyed, “73 percent of Democrats said there should be U.S. regulations on artificial intelligence, as did 74 percent of Republicans and 65 percent of independents.”

We noted that there were reasons to question the significance of those findings in light of the binary way in which the questions were asked. Nonetheless, there are clearly some serious concerns among the public about AI and robotics. You see that when you dig deeper into the poll results for specific questions and find respondents saying that they are “somewhat” to “very uncomfortable” with a wide range of specific AI applications.

Yet, in each case, Americans are already deriving significant benefits from each of the AI applications they claim to be so uncomfortable with.

Continue reading →

Written with Christopher Koopman and Brent Skorup (originally published on Medium on April 10, 2017)

Innovation isn’t just about the latest gee-whiz gizmos and gadgets. That’s all nice, but something far more profound is at stake: Innovation is the single most important determinant of long-term human well-being. There exists widespread consensus among historians, economists, political scientists and other scholars that technological innovation is the linchpin of expanded economic growth, opportunity, choice, mobility, and human flourishing more generally. It is the ongoing search for new and better ways of doing things that drives human learning and prosperity in every sense — economic, social, and cultural.

As the Industrial Revolution revealed, leaps in economic and human growth cannot be planned. They arise from societies that reward risk takers and legal systems that accommodate change. Our ability to achieve progress is directly proportional to our willingness to embrace and benefit from technological innovation, and it is a direct result of getting public policies right.

The United States is uniquely positioned to lead the world into the next era of global technological advancement and wealth creation. That’s why we and our colleagues at the Technology Policy Program at the Mercatus Center at George Mason University devote so much time and energy to defending the importance of innovation and countering threats to it. Unfortunately, those threats continue to multiply as fast as new technologies emerge. Continue reading →

The future of emerging technology policy will be influenced increasingly by the interplay of three interrelated trends: “innovation arbitrage,” “technological civil disobedience,” and “spontaneous private deregulation.” Those terms can be briefly defined as follows:

  • “Innovation arbitrage” refers to the idea that innovators can, and will with increasing regularity, move to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. Just as capital now fluidly moves around the globe seeking out more friendly regulatory treatment, the same is increasingly true for innovations. And this will also play out domestically as innovators seek to play state and local governments off each other in search of some sort of competitive advantage.
  • “Technological civil disobedience” represents the refusal of innovators (individuals, groups, or even corporations) or consumers to obey technology-specific laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant. New technological devices and platforms are making it easier than ever for the public to openly defy (or perhaps just ignore) rules that limit their freedom to create or use modern technologies.
  • “Spontaneous private deregulation” can be thought of as the de facto rather than de jure elimination of traditional laws and regulations owing to a combination of rapid technological change and the potential threat of innovation arbitrage and technological civil disobedience. In other words, many laws and regulations aren’t being formally removed from the books, but they are being made largely irrelevant by some combination of those factors. “Benign or otherwise, spontaneous deregulation is happening increasingly rapidly and in ever more industries,” noted Benjamin Edelman and Damien Geradin in a Harvard Business Review article on the phenomenon.[1]

I have previously documented examples of these trends in action for technology sectors as varied as drones, driverless cars, genetic testing, Bitcoin, and the sharing economy. (For example, on the theme of global innovation arbitrage, see these various essays. And on the growth of technological civil disobedience, see “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” and “Quick Thoughts on FAA’s Proposed Drone Registration System.” I also discuss some of these issues in the second edition of my Permissionless Innovation book.)

In this essay, I want to briefly highlight how, over the course of just the past month, a single company has offered us a powerful example of how both global innovation arbitrage and technological civil disobedience—or at least the threat thereof—might become a more prevalent feature of discussions about the governance of emerging technologies. And, in the process, that could lead to at least the partial spontaneous deregulation of certain sectors or technologies. Finally, I will discuss how this might affect technological governance more generally and accelerate the movement toward so-called “soft law” governance mechanisms as an alternative to traditional regulatory approaches. Continue reading →