Innovation & Entrepreneurship

In my previous essay, I discussed a new white paper by my colleague Robert Graboyes, Fortress and Frontier in American Health Care, which examines the future of medical innovation. Graboyes uses the “Fortress vs. Frontier” dichotomy to help explain the competing “visions” that shape public policy debates about technological innovation in the health care arena. It’s a terrific study that I highly recommend for all the reasons I stated in my previous post.

As I was reading Bob’s new report, I realized that his approach shares much in common with several other recent innovation policy paradigms I have discussed here before: Virginia Postrel’s (“Stasis” vs. “Dynamism”), Robert D. Atkinson’s (“Preservationists” vs. “Modernizers”), and my own (“Precautionary Principle” vs. “Permissionless Innovation”). In this essay, I will briefly relate Bob’s approach to those three innovation policy paradigms and then note a deficiency common to all of our approaches. I’ll conclude by briefly discussing another interesting framework from science writer Joel Garreau. Continue reading →

I want to bring to everyone’s attention an important new white paper by Dr. Robert Graboyes, a colleague of mine at the Mercatus Center at George Mason University who specializes in the economics of health care. His new 67-page study, Fortress and Frontier in American Health Care, seeks to move away from the tired old dichotomies that drive health care policy discussions: Left versus Right, Democrat versus Republican, federal versus state, public versus private, and so on. Instead, Graboyes seeks to reframe the debate over the future of health care innovation in terms of “Fortress versus Frontier” and to highlight what lessons we can learn from the Internet and the Information Revolution when considering health care policy.

What does Graboyes mean by “Fortress and Frontier”? Here’s how he explains this conflict of visions:

The Fortress is an institutional environment that aims to obviate risk and protect established producers (insiders) against competition from newcomers (outsiders). The Frontier, in contrast, tolerates risk and allows outsiders to compete against established insiders. . . .  The Fortress-Frontier divide does not correspond neatly with the more familiar partisan or ideological divides. Framing health care policy issues in this way opens the door for a more productive national health care discussion and for unconventional policy alliances. (p. 4)

He elaborates in more detail later in the paper: Continue reading →

If you want a devastating portrait of how well-intentioned regulation sometimes has profoundly deleterious unintended consequences, look no further than the Federal Aviation Administration’s (FAA) current ban on commercial drones in domestic airspace. As Jack Nicas reports in a story in today’s Wall Street Journal (“Regulation Clips Wings of U.S. Drone Makers”), the FAA’s heavy-handed regulatory regime is stifling America’s ability to innovate in this space and remain competitive internationally. As Nicas notes:

as unmanned aircraft enter private industry—for purposes as varied as filming movies, inspecting wind farms and herding cattle—many U.S. drone entrepreneurs are finding it hard to get off the ground, even as rivals in Europe, Canada, Australia and China are taking off.

The reason, according to interviews with two-dozen drone makers, sellers and users across the world: regulation. The FAA has banned all but a handful of private-sector drones in the U.S. while it completes rules for them, expected in the next several years. That policy has stifled the U.S. drone market and driven operators underground, where it is difficult to find funding, insurance and customers.

Outside the U.S., relatively accommodating policies have fueled a commercial-drone boom. Foreign drone makers have fed those markets, while U.S. export rules have generally kept many American manufacturers from serving them.

Of course, the FAA simply responds that it is looking out for the safety of the skies and that we shouldn’t blame it. Continue reading →

The sharing economy is growing faster than ever and becoming a hot policy topic these days. I’ve been fielding a lot of media calls lately about the nature of the sharing economy and how it should be regulated. (See latest clip below from the Stossel show on Fox Business Network.) Thus, I sketched out some general thoughts about the issue and thought I would share them here, along with some helpful additional reading I have come across while researching the issue. I’d welcome comments on this outline as well as suggestions for additional reading. (Note: I’ve also embedded some useful images from Jeremiah Owyang of Crowd Companies.)

1) Just because policymakers claim that regulation is meant to protect consumers does not mean it actually does so.

  1. Cronyism/rent-seeking: Regulation is often “captured” by powerful and politically well-connected incumbents and used for their own benefit. (+ Lobbying activity creates deadweight losses for society.)
  2. Innovation-killing: Regulations become a formidable barrier to new innovation, entry, and entrepreneurism.
  3. Unintended consequences: Instead of resulting in lower prices & better service, the opposite often happens: higher prices & lower-quality service. (Example: Requiring all cabs to be painted the same color destroys branding & the ability to differentiate.)

Continue reading →

Today, Ryan Hagemann and I filed comments with the Federal Aviation Administration (FAA) in its proceeding on the “Interpretation of the Special Rule for Model Aircraft.” This may sound like a somewhat arcane topic but it is related to the ongoing policy debate over the integration of unmanned aircraft systems (UASs)—more commonly referred to as drones—into the National Airspace System. As part of the FAA Modernization and Reform Act of 2012, Congress required the FAA to come up with a plan by September 2015 to accomplish that goal. As part of that effort, the FAA is currently accepting comments on its enforcement authority over model aircraft. Because the distinction between “drones” and “model aircraft” is blurring rapidly, the outcome of this proceeding could influence the outcome of the broader debate about drone policy in the United States.

In our comment to the agency, Hagemann and I discuss the need for the agency to conduct a thorough review of the benefits and costs associated with this rule. We argue this is essential because airspace is poised to become a major platform for innovation if the agency strikes the right balance between safety and innovation. To achieve that goal, we stress the need for flexibility and humility in interpreting older standards, such as “line of sight” restrictions, as well as increasingly archaic “noncommercial” vs. “commercial” distinctions or “hobbyist” vs. “professional” designations.

We also highlight the growing tension between the agency’s current regulatory approach and the First Amendment rights of the public to engage in peaceful, information-gathering activities using these technologies. (Importantly, on that point, we attached to our comments a new Mercatus Center working paper by Cynthia Love, Sean T. Lawson, and Avery Holton entitled, “News from Above: First Amendment Implications of the Federal Aviation Administration Ban on Commercial Drones.” See my coverage of the paper here.)

Finally, Hagemann and I close by noting the important role that voluntary self-regulation and codes of conduct already play in governing proper use of these technologies. We also argue that other “bottom-up” remedies are available and should be used before the agency imposes additional restrictions on this dynamic, rapidly evolving space.

You can download the complete comment on the Mercatus Center website here. (Note: The Mercatus Center filed comments with the FAA earlier about the prompt integration of drones into the nation’s airspace. You can read those comments here.)

Continue reading →

If there are two general principles that unify my recent work on technology policy and innovation issues, they would be as follows. To the maximum extent possible:

  1. We should avoid preemptive and precautionary-based regulatory regimes for new innovation. Instead, our policy default should be innovation allowed (or “permissionless innovation”) and innovators should be considered “innocent until proven guilty” (unless, that is, a thorough benefit-cost analysis has been conducted that documents the clear need for immediate preemptive restraints).
  2. We should avoid rigid, “top-down” technology-specific or sector-specific regulatory regimes and/or regulatory agencies and instead opt for a broader array of more flexible, “bottom-up” solutions (education, empowerment, social norms, self-regulation, public pressure, etc.) as well as reliance on existing legal systems and standards (torts, product liability, contracts, property rights, etc.).

I was very interested, therefore, to come across two new essays that make opposing arguments and proposals. The first is this recent Slate op-ed by John Frank Weaver, “We Need to Pass Legislation on Artificial Intelligence Early and Often.” The second is Ryan Calo’s new Brookings Institution white paper, “The Case for a Federal Robotics Commission.”

Weaver argues that new robot technology “is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now.” In order to preemptively address concerns about new technologies such as driverless cars or commercial drones, “we need to legislate early and often,” Weaver says. Stated differently, Weaver is proposing “precautionary principle”-based regulation of these technologies. The precautionary principle generally refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

Calo argues that we need “the establishment of a new federal agency to deal with the novel experiences and harms robotics enables” since there exist “distinct but related challenges that would benefit from being examined and treated together.” These issues, he says, “require special expertise to understand and may require investment and coordination to thrive.”

I’ll address both Weaver and Calo’s proposals in turn. Continue reading →

The use of unmanned aircraft systems, or “drones,” for private and commercial uses remains the subject of much debate. The issue has been heating up lately after Congress ordered the Federal Aviation Administration (FAA) to integrate UASs into the nation’s airspace system by 2015 as part of the FAA Modernization and Reform Act of 2012.

The debate has thus far centered mostly around the safety and privacy-related concerns associated with private use of drones. The FAA continues to move slowly on this front based on a fear that private drones could jeopardize air safety or the safety of others on the ground. Meanwhile, some privacy advocates are worried that private drones might be used in ways that invade private spaces or even public areas where citizens have a reasonable expectation of privacy. For these and other reasons, the FAA’s current ban on private operation of drones in the nation’s airspace remains in place.

But what about the speech-related implications of this debate? After all, private and commercial UASs can have many peaceful, speech-related uses. Indeed, to borrow Ithiel de Sola Pool’s term, private drones can be thought of as “technologies of freedom” that expand and enhance the ability of humans to gather and share information, thus expanding the range of human knowledge and freedom.

A new Mercatus Center at George Mason University working paper, “News from Above: First Amendment Implications of the Federal Aviation Administration Ban on Commercial Drones,” deals with these questions. This 59-page working paper was authored by Cynthia Love, Sean T. Lawson, and Avery Holton. (Love is currently a law clerk for Judge Carolyn B. McHugh of the 10th Circuit U.S. Court of Appeals. Lawson and Holton are affiliated with the Department of Communication at the University of Utah.)

“To date, little attention has been paid to the First Amendment implications of the [FAA] ban,” note Love, Lawson, and Holton. Their article argues that “aerial photography with UASs, whether commercial or not, is protected First Amendment activity, particularly for news-gathering purposes. The FAA must take First Amendment-protected uses of this technology into account as it proceeds with meeting its congressional mandate to promulgate rules for domestic UASs.” They conclude by noting that “The dangers of [the FAA's] regulatory approach are no mere matter of esoteric administrative law. Rather, as we have demonstrated, use of threats to enforce illegally promulgated rules, in particular a ban on journalistic use of UASs, infringes upon perhaps our most cherished constitutional right, that of free speech and a free press.” Continue reading →

I’m pleased to announce that the Mercatus Center at George Mason University has just released my latest working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” This paper, which was co-authored with Ryan Hagemann, has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy.

In the paper, Hagemann and I explore the growing market for both “connected car” technologies as well as autonomous (or “driverless”) vehicle technology. We argue that intelligent-vehicle technology will produce significant benefits. Most notably, these technologies could save many lives. In 2012, 33,561 people were killed and 2,362,000 injured in traffic crashes, largely as a result of human error. Reducing the number of accidents by allowing intelligent vehicle technology to flourish would constitute a major public policy success. As Philip E. Ross noted recently at IEEE Spectrum, thanks to these technologies, “eventually it will be positively hard to use a car to hurt yourself or others.” The sooner that day arrives, the better.

These technologies could also have positive environmental impacts in the form of improved fuel economy, reduced traffic congestion, and reduced parking needs. They might also open up new mobility options for those who are unable to drive, for whatever reason. Any way you cut it, these are exciting technologies that promise to substantially improve human welfare.

Of course, as with any new disruptive technology, connected cars and driverless vehicles raise a variety of economic, social, and ethical concerns. Hagemann and I address some of the early policy concerns about these technologies (safety, security, privacy, liability, etc.) and we outline a variety of “bottom-up” solutions to ensure that innovation continues to flourish in this space. Importantly, we also argue that policymakers should keep in mind that individuals have gradually adapted to similar disruptions in the past and, therefore, patience and humility are needed when considering policy for intelligent-vehicle systems. Continue reading →

On Thursday, it was my great pleasure to present a draft of my forthcoming paper, “The Internet of Things & Wearable Technology: Addressing Privacy & Security Concerns without Derailing Innovation,” at a conference on “Regulating the Evolving Broadband Ecosystem” that took place at the Federal Communications Commission. The 3-day event was co-sponsored by the American Enterprise Institute and the University of Nebraska College of Law.

The 65-page working paper I presented is still going through final peer review and copyediting, but I posted a very rough first draft on SSRN for conference participants. I expect the paper to be released as a Mercatus Center working paper in October and then I hope to find a home for it in a law review. I will post the final version once it is released.

In the meantime, however, I thought I would post the 46 slides I presented at the conference, which offer an overview of the nature of the Internet of Things and wearable technology, the potential economic opportunities that exist in this space, and the various privacy and security challenges that could hold this technological revolution back. I also outlined some constructive solutions to those concerns. I plan to be very active on these issues in coming months.

Continue reading →

How is it that we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” so many well-established personal, social, cultural, and legal norms?

In recent years, I’ve spent a fair amount of time thinking through that question in a variety of blog posts (“Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society”), law review articles (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”), op-eds (“Why Do We Always Sell the Next Generation Short?”), and books (see chapter 4 of my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom”).

It’s fair to say that this issue — how individuals, institutions, and cultures adjust to technological change — has become a personal obsession of mine and it is increasingly the unifying theme of much of my ongoing research agenda. The economic ramifications of technological change are part of this inquiry, of course, but those economic concerns have already been the subject of countless books and essays both today and throughout history. I find that the social issues associated with technological change — including safety, security, and privacy considerations — typically get somewhat less attention, but are equally interesting. That’s why my recent work and my new book narrow the focus to those issues. Continue reading →