Over at the Mercatus Center’s Bridge blog, Chad Reese interviewed me about my forthcoming book and continuing research on “evasive entrepreneurialism” and the freedom to innovate. I provide a quick summary of the issues and concepts that my colleagues and I are currently exploring. Those issues include:

  • free innovation
  • evasive entrepreneurialism & social entrepreneurialism
  • technological civil disobedience
  • the freedom to tinker / freedom to try / freedom to innovate
  • the right to earn a living
  • “moonshots” / deep technologies / disruptive innovation / transformative tech
  • innovation culture
  • global innovation arbitrage
  • the pacing problem & the Collingridge dilemma
  • “soft law” solutions for technological governance

You can read the entire Q&A over at The Bridge, or find it pasted below.


[originally posted on Medium]

Today is the anniversary of the day the machines took over.

Exactly twenty years ago today, on May 11, 1997, the great chess grandmaster Garry Kasparov became the first chess world champion to lose a match to a supercomputer. His battle with IBM’s “Deep Blue” was a highly publicized media spectacle, and when he lost Game 6 of the match, it shocked the world.

At the time, Kasparov was bitter about the loss and even voiced suspicions that Deep Blue’s team of human programmers and chess consultants might have tipped the match in favor of machine over man. Although he still wonders about how things went down behind the scenes, Kasparov is no longer as sore as he once was about losing to Deep Blue. Instead, he has built on his experience from that fateful week in 1997 and learned how he and others can benefit from it.

The result of this evolution in his thinking is Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, a book that serves as a paean to human resiliency and our collective ability as a species to adapt in the face of technological disruption, no matter how turbulent.

Kasparov’s book serves as the perfect antidote to the prevailing gloom-and-doom narrative in modern writing about artificial intelligence (AI) and smart machines. His message is one of hope and rational optimism about a future in which we won’t be racing against the machines but rather running alongside them and benefiting in the process.

Overcoming the Technopanic Mentality

There is certainly no shortage of books and articles being written today about AI, robotics, and intelligent machines. The tone of most of these tracts is extraordinarily pessimistic: page after page drips with dystopian dread, decrying a future in which humanity is essentially doomed.

As I noted in a recent essay about “The Growing AI Technopanic,” after reading through most of these books and articles, one is left to believe that in the future: “Either nefarious-minded robots enslave us or kill us, or AI systems treacherously trick us, or at a minimum turn our brains to mush.” These pessimistic perspectives are clearly on display in the realm of fiction, where seemingly every sci-fi book, movie, or TV show depicts humanity as certain losers in the proverbial “race” against machines. But such lugubrious lamentations are equally prevalent within the pages of many non-fiction books, academic papers, editorials, and journalistic articles.

Given the predominantly panicky narrative surrounding the age of smart machines, Kasparov’s Deep Thinking serves as a welcome breath of fresh air. The aim of his book is to find ways of “doing a smarter job of humans and machines working together” to improve well-being.

On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.

Wallach’s latest book is entitled A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. And, as I’ve noted here recently, the greatly expanded second edition of my latest book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, has just been released.

Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!—A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.

Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are moments in the book when he resorts to hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.


Today, Eli Dourado, Ryan Hagemann, and I filed comments with the Federal Aviation Administration (FAA) in its proceeding on the “Operation and Certification of Small Unmanned Aircraft Systems” (i.e., small private drones). In this filing, we begin by arguing that just as “permissionless innovation” has been the primary driver of entrepreneurialism and economic growth in many sectors of the economy over the past decade, that same model can and should guide policy decisions in other sectors, including the nation’s airspace. “While safety-related considerations can merit some precautionary policies,” we argue, “it is important that those regulations leave ample space for unpredictable innovation opportunities.”

Our filing goes on to note that “while the FAA’s NPRM is accompanied by a regulatory evaluation that includes benefit-cost analysis, the analysis does not meet the standard required by Executive Order 12866. In particular, it fails to consider all costs and benefits of available regulatory alternatives.” After that, we itemize the good and the bad of what the FAA proposes, with an eye toward how the agency can maximize innovation opportunities. We conclude by noting:

 The FAA must carefully consider the potential effect of UASs on the US economy. If it does not, innovation and technological advancement in the commercial UAS space will find a home elsewhere in the world. Many of the most innovative UAS advances are already happening abroad, not in the United States. If the United States is to be a leader in the development of UAS technologies, the FAA must open the American skies to innovation.

You can read our entire 9-page filing here.

Yesterday afternoon, the Federal Aviation Administration (FAA) finally released its much-delayed rules for private drone operations. As The Wall Street Journal points out, the rules “are about four years behind schedule,” but now the agency is asking for expedited public comments over the next 60 days on the whopping 200-page order. (You have to love the irony in that!) I’m still going through all the details in the FAA’s new order — here’s a summary of the major provisions — but below are some high-level thoughts about what the agency has proposed.

Opening the Skies…

  • The good news is that, after a long delay, the FAA is finally taking some baby steps toward freeing up the market for private drone operations.
  • Innovators will no longer have to operate entirely outside the law in a sort of drone black market. There’s now a path to legal operation. Specifically, operators of small unmanned aircraft systems (UAS) weighing under 55 lbs. will be able to go through a formal certification process and, after passing a test, legally operate their systems.


By Adam Thierer & Jennifer Huddleston Skees

“He’s making a list and checking it twice. Gonna find out who’s naughty and nice.”

With the Christmas season approaching, apparently it’s not just Santa who is making a list. The Trump Administration has just asked whether a long list of emerging technologies is naughty or nice — as in, whether those technologies should be heavily regulated or allowed to be developed and traded freely.

If they land on the naughty list, these technologies could be subjected to complex export control regulations, which would limit research and development efforts in many emerging tech fields and inadvertently undermine U.S. innovation and competitiveness. Worse yet, it isn’t even clear there would be any national security benefit associated with such restrictions.  

From Light-Touch to a Long List

Generally speaking, the Trump Administration has adopted a “light-touch” approach to the regulation of emerging technology and relied on more flexible “soft law” approaches to high-tech policy matters. That’s what makes the move to impose restrictions on the trade and usage of these emerging technologies somewhat counter-intuitive. On November 19, the Department of Commerce’s Bureau of Industry and Security launched a “Review of Controls for Certain Emerging Technologies.” The notice seeks public comment on “criteria for identifying emerging technologies that are essential to U.S. national security, for example because they have potential conventional weapons, intelligence collection, weapons of mass destruction, or terrorist applications or could provide the United States with a qualitative military or intelligence advantage.”

I’ve been working on a new book that explores the rise of evasive entrepreneurialism and technological civil disobedience in our modern world. Following the publication of my last book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, people started bringing examples of evasive entrepreneurialism and technological civil disobedience to my attention and asking how they related to the concept of permissionless innovation. As I started exploring and cataloging these case studies, I realized I could probably write an entire book about these developments and their consequences.

Hopefully that book will be wrapped up shortly. In the meantime, I am going to start rolling out some short essays based on content from the book. To begin, I will state the general purpose of the book and define the key concepts discussed therein. In coming weeks and months, I’ll build on these themes, explain why they are on the rise, explore the effect they are having on society and technological governance efforts, and more fully develop some relevant case studies.

In recent months, I’ve come across a growing pool of young professionals looking to enter the technology policy field. Although I was lucky enough to find a willing and capable mentor to guide me through much of the nitty-gritty, many of these would-be policy entrepreneurs haven’t been so fortunate. Most of them are keen on shifting out of their current policy area, or are newcomers to Washington, D.C. looking to break into a technology policy career track. This is a town where there’s no shortage of sage wisdom, and while much of it remains relevant to new up-and-comers, I figured I would pen these thoughts based on my own experiences as a relative newcomer to the D.C. tech policy community.

I came to D.C. in 2013, originally spurred by the then-recent revelations of mass government surveillance in Edward Snowden’s NSA leaks. That event led me to the realization that the Internet was fragile, and that engaging in the battle of ideas in D.C. might be a career calling. So I packed up and moved to the nation’s capital, intent on joining the technology policy fray. When I arrived, however, I was immediately struck by the almost complete lack of jobs in, and focus on, technology issues in libertarian circles.

Through a series of serendipitous circumstances, I ultimately managed to break into what was still a small and relatively under-appreciated field. What we lacked in numbers and support we had to make up for in quality and determined effort. Although the tech policy community has grown rapidly in recent years, it remains a niche vocation compared with other policy tracks. That means there’s a lot of potential for rapid professional growth—if you can manage to get your foot in the door.

So if you’re interested in breaking into technology policy, here are some thoughts that might be of help.

I’ve been thinking about the “right to try” movement a lot lately. It refers to the growing push (especially at the state level here in the U.S.) to allow individuals to experiment with alternative medical treatments, therapies, and devices that are restricted or prohibited in some fashion (typically by the Food and Drug Administration). I think there are compelling ethical reasons for allowing citizens to determine their own course of treatment in terms of what they ingest into their bodies or what medical devices they use, especially when they are facing the possibility of death and have exhausted all other options.

But I also favor a more general “right to try” that allows citizens to make their own health decisions in other circumstances. Such a general freedom entails some risks, of course, but the better way to deal with those potential downsides is to educate citizens about the trade-offs associated with various treatments and devices, not to forbid them from seeking them out at all.

The Costs of Control

But this debate isn’t just about ethics. There’s also the question of the costs associated with regulatory control. Practically speaking, with each passing day it becomes harder and harder for governments to control unapproved medical devices, drugs, therapies, and the like. That, in turn, significantly raises the costs of enforcement and makes one wonder exactly how far the FDA or other regulators will go to stop or slow the advent of new technologies.

I have written about this “cost of control” problem in various law review articles as well as my little Permissionless Innovation book, pointing out that, when enforcement challenges and costs reach a certain threshold, the case for preemptive control grows far weaker because of (1) the massive resources that regulators would have to pour into the task of crafting a workable enforcement regime; and/or (2) the massive loss of liberty that such solutions would entail for society more generally. With the rise of the Internet of Things, wearable devices, mobile medical apps, and other networked health and fitness technologies, these issues are going to become increasingly ripe for academic and policy consideration.

What sort of public policy vision should govern the Internet of Things? I’ve spent a lot of time thinking about that question in essays here over the past year, as well as in a new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology early next year.

But I recently heard three policymakers articulate their recommended vision for the Internet of Things (IoT) and I found their approach so inspiring that I wanted to discuss it here in the hopes that it will become the foundation for future policy in this arena.

Last Thursday, it was my pleasure to attend a Center for Data Innovation (CDI) event on “How Can Policymakers Help Build the Internet of Things?” As the title implied, the goal was to discuss how to achieve the vision of a more fully connected world and, more specifically, how public policymakers can help facilitate that objective. It was a terrific event with many excellent panel discussions and keynote addresses.

Two of those keynotes were delivered by Senators Deb Fischer (R-Neb.) and Kelly Ayotte (R-N.H.). Below I will offer some highlights from their remarks and then relate them to the vision set forth by Federal Trade Commission (FTC) Commissioner Maureen K. Ohlhausen in some of her recent speeches. I will conclude by discussing how the Ayotte-Fischer-Ohlhausen vision can be seen as the logical extension of the Clinton Administration’s excellent 1997 Framework for Global Electronic Commerce, which proposed a similar policy paradigm for the Internet more generally. This shows how crafting policy for the IoT can and should be a nonpartisan affair.