Technology Liberation Front
Keeping politicians' hands off the Net & everything else related to technology

The Right to Try, 3D Printing, the Costs of Technological Control & the Future of the FDA
Mon, 10 Aug 2015 13:28:37 +0000

I’ve been thinking about the “right to try” movement a lot lately. The term refers to the growing effort (especially at the state level here in the U.S.) to allow individuals to experiment with alternative medical treatments, therapies, and devices that are restricted or prohibited in some fashion (typically by the Food and Drug Administration). I think there are compelling ethical reasons for allowing citizens to determine their own course of treatment in terms of what they put into their bodies or what medical devices they use, especially when they are facing the possibility of death and have exhausted all other options.

But I also favor a more general “right to try” that allows citizens to make their own health decisions in other circumstances. Such a general freedom entails some risks, of course, but the better way to deal with those potential downsides is to educate citizens about the trade-offs associated with various treatments and devices, not to forbid them from seeking them out at all.

The Costs of Control

But this debate isn’t just about ethics. There’s also the question of the costs associated with regulatory control. Practically speaking, with each passing day it becomes harder and harder for governments to control unapproved medical devices, drugs, and therapies. That, in turn, significantly raises the costs of enforcement and makes one wonder exactly how far the FDA or other regulators will go to stop or slow the advent of new technologies.

I have written about this “cost of control” problem in various law review articles as well as my little Permissionless Innovation book and pointed out that, when enforcement challenges and costs reach a certain threshold, the case for preemptive control grows far weaker simply because of (1) the massive resources that regulators would have to pour into the task of crafting a workable enforcement regime; and/or (2) the massive loss of liberty it would entail for society more generally to devise such solutions. With the rise of the Internet of Things, wearable devices, mobile medical apps, and other networked health and fitness technologies, these issues are going to become increasingly ripe for academic and policy consideration.

A Hypothetical Regulatory Scenario

Here’s an interesting case study to consider in this regard: Can 3D printing of prosthetics be controlled? Clearly prosthetics are medical devices in the traditional regulatory sense, but few people are going to the FDA and asking for permission or a “right to try” new 3D-printed limbs. They’re just doing it. And the results have been incredibly exciting, as my Mercatus Center colleague Robert Graboyes has noted.

But let’s imagine what the regulators might do if they really wanted to impose their will and limit the right to try in this context:

  • Could government officials ban 3D printers outright? I don’t see how. The technology is already too diffuse and is utilized for so many alternative (and uncontroversial) uses that it doesn’t seem likely such a control regime would work or be acceptable. And if any government did take this extreme step, “global innovation arbitrage” would kick in. That is, innovators would just move offshore.
  • Could government officials ban the inputs used by 3D printers? Again, I don’t see how. After all, we are primarily talking about plastics and glue here!
  • Could government officials ban 3D printer blueprints? Two problems with that. First, such blueprints are a form of free speech, and government efforts to censor them would represent a form of prior restraint that would violate the First Amendment of the U.S. Constitution. Second, even ignoring the First Amendment issues, information control is just damned hard, and I don’t see how you could suppress such blueprints effectively when they are freely available across the Internet. Or people would just “torrent” them, as they do (illegally) with copyrighted files today.
  • Could government officials ban and/or fine specific companies (especially those with deep pockets)? Perhaps, but that is likely a losing strategy since 3D printing is already so highly decentralized and is done by average citizens in the comfort of their own home (and often for no monetary gain). So, attempting to go after a handful of corporate players and “make an example out of them” to deter others from experimenting isn’t likely to work. And, again, it’ll just lead to more offshoring and undergrounding of these devices and innovative activities.
  • Could government officials ban the sale of certain 3D printing applications? They could try, but enterprising minds would likely start using alternative payment methods (like Bitcoin) to conduct their deals. But the question of payments is largely irrelevant in many fields because much of this activity is non-commercial and open-source in character. People are freely distributing blueprints for 3D-printed prosthetics, for example, and they are even giving away the actual 3D-printed prosthetic devices to those who need them.
  • Could government officials just create a licensing / approval regime for narrowly-targeted 3D printed medical devices? Of course, but for all the reasons outlined above, it would likely be pretty easy to evade such a regime. Moreover, the very effort to enforce such a licensing regime would likely deter many beneficial innovations in the process, while also leading to the old cronyist problems associated with firms engaging in rent-seeking and courting favor with regulators in order to survive or prosper.

Anyway, you get the point: The practicality of control makes a difference, and at some point the enormous costs associated with enforcement become an ethical matter in their own right. Stated differently, it’s not just that citizens should generally be at liberty to determine their own treatments and decide what drugs they ingest and what medical devices they use; it’s also the case that regulatory efforts aimed at limiting that right carry enforcement costs that spill over onto society more generally. And that’s an ethical matter of a different sort when you get right down to it. At a minimum, it’s an increasingly costly strategy, and the costs associated with such technological control regimes should be considered closely and quantified where possible.

The Need for a Shift toward Risk Education

Let’s return to the question I raised above regarding the educational role that the FDA, or governments more generally, could play in the future. As I noted, a world in which citizens are granted the liberty to make more of their own health decisions is a world in which they could, at times, be rolling the dice with their health and lives. The highly paternalistic approach of modern food and drug regulation is rooted in the belief that citizens simply cannot be trusted to make such decisions on their own because they will never be able to appreciate the relative risks. You might be surprised to hear that I am somewhat sympathetic to that argument. People can and do make rash and unwise decisions about their health based on misinformation or a general lack of quality information presented in an easy-to-understand fashion. As a result, policymakers have taken the right to make these decisions away from us in many circumstances.

Although motivated by the best of intentions, paternalistic controls are not the optimal way to address these concerns. The better approach is rooted in risk education. To reiterate, the wise way to deal with the potential downsides associated with freedom of choice is to educate citizens about the relative risks associated with various medical treatments and devices, not to forbid them from seeking them out at all.

What does that mean for the future of the FDA? If the agency were smart, it would recognize that traditional command-and-control regulation is no longer a sensible strategy; it’s increasingly unworkable and imposes too many other costs on innovators and personal liberty. Thus, the agency needs to reorient its focus toward becoming a risk educator. Its goal should be to help create a more fully informed citizenry that is empowered with more and better information about relative risk trade-offs.

Overcoming the Opposition & Getting Consent Mechanisms Right

Such an approach (i.e., shifting the FDA’s mission from being primarily a risk regulator to becoming a risk educator) will encounter opposition from strident defenders and opponents of the FDA alike.

The defenders of the FDA and its traditional approach will continue to insist that people can never be trusted to make such decisions on their own, regardless of how much information they have at their disposal or how many warnings we might give them. The problem with that position is that it treats citizens like ignorant sheep and denies them the most basic of all human rights: The right to live a life of your own choosing and to make the ultimate determinations about your own health and welfare. And, again, blindly defending the old system isn’t wise because traditional command-and-control regulatory methods are increasingly impractical and incredibly costly to enforce.

Opponents of the FDA, by contrast, will insist that the agency can’t even be trusted to provide us with good information to make these decisions on our own. Additionally, critics will likely argue that the agency might give us the wrong information or try to “nudge” us in certain directions. I share some of those concerns, but I am willing to live with that possibility so long as we are moving toward a world in which that is the only real power the FDA possesses over me and my fellow citizens. Because if all the agency is doing is providing us with information about risk trade-offs, then at least we remain free to seek out alternative information from other experts and then choose our own courses of action.

The tricky issue here is getting consent mechanisms right. In fact, it’s the linchpin of the new regime I am suggesting. In other words, even if we could agree that a more fully informed citizenry should be left free to make these decisions on their own, we need to make sure that those individuals have provided clear and informed consent to the parties they might need to contract with when seeking alternative treatments. That’s particularly essential in a litigious society like America, where the threat of liability always looms large over doctors, nurses, hospitals, insurers, and medical innovators. Those parties will only be willing to go along with an expanded “right to try” regime if they can be assured they won’t be held to blame when citizens make controversial choices against their advice, so long as they clearly laid out all the potential risks and alternatives. This will require an evolution not only of statutory law and regulatory standards but also of the common law and insurance norms.

Once we get all that figured out—and it will, no doubt, take some time—we’ll be on our way to a better world where the idea of having a “right to try” is the norm instead of the exception.


(My thanks to Adam Marcus for commenting on a draft of this essay. For more general background on 3D printing, see his excellent 2011 primer here, “3D Printing: The Future is Here.”)

What market failure? The weak transaction cost argument for TV compulsory licenses. Fri, 31 Jul 2015 15:12:45 +0000

At the same time FilmOn, an Aereo look-alike, is seeking a compulsory license to broadcast TV content, free market advocates in Congress and officials at the Copyright Office are trying to remove this compulsory license. A compulsory license to copyrighted content gives parties like FilmOn the use of copyrighted material at a regulated rate without the consent of the copyright holder. There may be sensible objections to repealing the TV compulsory license, but transaction costs–the ostensible inability to acquire the numerous permissions to retransmit TV content–should not be one of them.

Economists can devise situations where transaction costs are immense and compulsory licenses are needed for a well-functioning market. Today, as when the compulsory license was created, the conventional wisdom is that TV compulsory licenses are still needed to prevent market failure.

In the 1970s, cable companies were capturing broadcast channels and retransmitting them to their subscribers for free because, per the Supreme Court, cable was a passive transmitter and didn’t need copyright permission. In 1976, to correct this perceived unfairness, Congress amended the Copyright Act and said this cable retransmission did necessitate copyright authorization. To make it easier on cable systems (most of which were small, local operations), the law created a compulsory license to broadcast TV content like NBC, ABC, and CBS programming.

The compulsory license primarily does two things: it provides cable operators local TV content royalty-free and provides non-local (“distant”) content (imagine a DC cable company importing a WGN broadcast from Chicago) at regulated rates.

As the House report says:

The Committee recognizes…that it would be impractical and unduly burdensome to require every cable system to negotiate with every copyright owner whose work was retransmitted by a cable system.

The Copyright Office opposed the compulsory license early on and has called for the repeal of the broadcast TV compulsory license since 1981. As the Register of Copyrights said at a 2000 congressional hearing,

A compulsory license is not only a derogation of a copyright owner’s exclusive rights, but it also prevents the marketplace from deciding the fair value of copyrighted works through government-set price controls.

But when the issue of repeal comes up, many parties cite “significant transaction costs” as a problem with conventional, direct licensing. GAO echoed these objections in an April 2015 report,

we have previously found that obtaining the copyright holders’ permission for all this content would be challenging. Each television program may have multiple copyright holders, and rebroadcasting an entire day of content may require obtaining permission from hundreds of copyright holders. The transaction costs of doing so make this impractical for cable operators.

That sounds sensible, but we have powerful contradictory evidence: for decades, hundreds of TV channels, each requiring the bundling of thousands of copyright licenses, have been distributed seamlessly and entirely outside of the compulsory license regime.

So it’s a mystery to me why analysts still talk about the difficulty of acquiring copyright permission from hundreds or thousands of rights holders. TV distributors outside of the compulsory license scheme do these complex content acquisition deals routinely. Hundreds of non-broadcast channels–like ESPN, CNN, Bravo, HGTV, MTV, and Fox News–are distributed to tens of millions of households via private contractual agreements and without regulated compulsory licenses. TBS, uniquely, went in the late 1990s from a broadcast channel, subject to a compulsory license, to a cable channel distributed via direct licensing with no apparent ill effects. Analysts who raise transaction costs as grounds for keeping compulsory licenses never, to my knowledge, explain why the market failure they predict is absent for these hundreds of cable and satellite channels.

Further, while cable and satellite companies don’t need to negotiate broadcast TV copyrights because of the compulsory license, the FCC’s retransmission consent process, part of the 1992 Cable Act, requires these companies to negotiate payment to retransmit broadcast signals–signals that contain the underlying copyrighted content. This process, though bizarre and artificial, is essentially the same negotiation cable and satellite companies would need to enter into in a world without compulsory license.

Finally, online distributors like Hulu, Netflix, and (potentially) Apple TV operate entirely outside of the retrans-compulsory copyright system and undermine the transaction costs objection. Netflix, for instance, doesn’t negotiate with every individual rights holder, as GAO and Congress imply would be necessary in a non-compulsory license regime. Content aggregators and intermediaries, not regulation, streamline the rights acquisition process without the need for a compulsory license. The ostensibly burdensome transaction costs don’t stop Netflix from licensing over 10,000 titles worth around $9 billion.

Certainly, converting from compulsory licensing to direct licensing has issues. Changing legal regimes can be costly, and there is a need to prevent anticompetitive withholding of content. Understandably, many cable and satellite distributors oppose repeal of compulsory licenses if the complex FCC system of retransmission consent and must-carry is maintained. I tend to agree. Nevertheless, it’s time to strike the transaction cost argument from the policy discussion. The predicted market failure is overcome by market forces.

For more background on TV regulation, see Adam Thierer and Brent Skorup, Video Marketplace Regulation: A Primer on the History of Television Regulation and Current Legislative Proposals (Mercatus working paper).

Important New White House Report Documents Costs of Occupational Licensing Wed, 29 Jul 2015 22:25:37 +0000

Yesterday, the White House Council of Economic Advisers released an important new report entitled, “Occupational Licensing: A Framework for Policymakers.” (PDF, 76 pgs.) The report highlighted the costs that outdated or unneeded licensing regulations can have on diverse portions of the citizenry. Specifically, the report concluded that:

the current licensing regime in the United States also creates substantial costs, and often the requirements for obtaining a license are not in sync with the skills needed for the job. There is evidence that licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines. Too often, policymakers do not carefully weigh these costs and benefits when making decisions about whether or how to regulate a profession through licensing.

The report supported these conclusions with a wealth of evidence. In that regard, I was pleased to see that research from Mercatus Center-affiliated scholars was cited in the White House report (specifically on pg. 34). Mercatus Center scholars have repeatedly documented the costs of occupational licensing and offered suggestions for how to reform or eliminate unnecessary licensing practices. Most recently, my colleagues and I have explored the costs of licensing restrictions for new sharing economy platforms and innovators. The White House report cited, for example, the recently-released Mercatus paper on “How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the ‘Lemons Problem,’” which I co-authored with Christopher Koopman, Anne Hobson, and Chris Kuiper. And it also cited a new essay by Tyler Cowen and Alex Tabarrok on “The End of Asymmetric Information.”

Moreover, along with Christopher Koopman and Matt Mitchell, I recently submitted comments to the Federal Trade Commission for a sharing economy workshop. In those comments, as well as a recent paper on the same subject, we documented how occupational licensing rules were often “captured” by affected interests and are then used to discourage new forms of competition and innovation. This harms both consumers and workers by depriving them of new and better options. Many sharing economy operations are having great success in breaking down these barriers and proving that consumers and workers do better in an environment free of unnecessary and costly licensing restrictions. This suggests that consumer welfare would be improved even more by reforming other licensing regimes.

Mercatus has published dozens of other things related to this issue, many of which I have listed down below. Just recently, in fact, we published a new paper on “Breaking Down the Barriers: Three Ways State and Local Governments Can Improve the Lives of the Poor,” by economist Steven Horwitz. The report begins by documenting how “occupational licensure laws disproportionately burden the poor by requiring them to spend significant resources just to enter a market.” This is consistent with the findings from other Mercatus reports and other academic publications.

Anyway, check out the new White House report and, if you are serious about studying the issue of occupational licensing in more detail, you’ll want to take a closer look at some of these other Mercatus Center publications on the issue. The case for occupational licensing reform is strong and non-partisan, as the release of this White House report makes clear.


Mercatus Center publications and related material on occupational licensing & barriers to entry 

Spectrum NIMBYs and the Return of FCC Beauty Contests? Thu, 23 Jul 2015 17:43:42 +0000

The FCC is being dragged–reluctantly, it appears–into disputes that resemble the infamous beauty contests of bygone years, where the agency takes on the impossible task of deciding which wireless services deliver more benefits to the public. Two novel technologies used for wireless broadband–TLPS and LTE-U–reveal the growing tensions in unlicensed spectrum. The two technologies are different and pose slightly different regulatory issues but each is an attempt to bring wireless Internet to consumers. Their advocates believe these technologies will provide better service than existing wifi technology and will also improve wifi performance. Their major similarity is that others, namely wifi advocates, object that the unlicensed bands are already too crowded and these new technologies will cause interference to existing users.

The LTE-U issue is new and developing. The TLPS proceeding, on the other hand, has been pending for a few years and there are warning signs the FCC may enter into beauty contests–choosing which technologies are entitled to free spectrum–once again.

What are FCC beauty contests and why does the FCC want to avoid them? From the 1930s to the 1990s (aside from a few short-lived spectrum lotteries), the FCC handed out valuable spectrum licenses for free to applicants who showed they would benefit the public with their planned services. TV broadcasters, taxicab dispatchers, satellite communications companies, medical facilities, and others lobbied to claim their stake when new spectrum became available.

These time-consuming proceedings became known as beauty contests, reflecting the subjective nature of giving away an input often worth tens or hundreds of millions of dollars to “deserving” applicants. The inefficiency, delay, and predictable corruption of beauty contests were widely criticized, but it wasn’t until the 1990s that Congress permitted auctioning spectrum. Allowing markets to allocate spectrum greatly improved the chances spectrum would go to the firms that had financial incentives to put it to good use, rather than the firms that had the most persuasive insiders.

But not all spectrum is auctioned today. Decades ago the FCC realized that short-range, innovative new services could be deployed without expensive and time-consuming licensing. The agency decided to authorize low-power devices in certain bands of spectrum. Essentially, any device maker could freely deploy technologies in these bands as long as they complied with a few basic FCC rules, the Part 15 rules. The FCC left technology choices to the device makers, who share the spectrum with other–sometimes interference-prone–device makers and users. While wifi technology is the most popular and most economically significant user of unlicensed spectrum, there are many other technologies coexisting in unlicensed bands. Today, hardware companies make dozens of short-range technologies like toy RC cars, wireless speakers, Bluetooth earpieces, baby monitors, garage door openers, cordless phones, and wifi routers.

Unlicensed spectrum has downsides for device makers, however. As the FCC said in a recent proceeding, “As a general condition of operation, Part 15 devices … must accept any interference that may be received from [licensed users] or other Part 15 devices.” Operators like AT&T, Sprint, and Dish pay millions or billions of dollars for their licensed spectrum at auction. In return, however, they can exclude other wireless operators from using their spectrum assignments. In contrast, using free unlicensed spectrum means you have no protection from interference from other unlicensed and licensed users. This is intended to create an environment of permissionless innovation, where wireless entrants can be free to try new services.

In theory, this means unlicensed users cannot object when other unlicensed users deploy new technologies. In practice, however, now that unlicensed spectrum is occupied by services like Bluetooth and wifi-delivered Internet, new entrants often modify their technology to be “good neighbors.” The potential for interference also motivates established players to prevent entrants like TLPS and LTE-U from using the bands.

Richard Bennett has a good explanation of the LTE-U engineering issues before the FCC. TLPS has slightly different issues. After a few years of testing, TLPS may be approved soon, but not without a fight. TLPS is a novel wireless technology that uses a channel of spectrum that straddles unlicensed spectrum and licensed spectrum. The licensed portion is currently used by Globalstar for satellite communications but the FCC generally wishes to get away from mandating certain services–like satellite communications–and to allow licensees to use their spectrum for whatever service is demanded by consumers. For that reason, the FCC has sought, since releasing the 2010 National Broadband Plan, to make this relatively unproductive “satellite spectrum” available for land-based wireless broadband use. Knowing that the FCC is willing to be flexible to meet growing consumer broadband needs, Globalstar saw an opportunity to merge its licensed spectrum with a portion of the free, adjacent unlicensed spectrum. With this wider channel, some of it shared with existing unlicensed users, wireless broadband delivered via TLPS technology became feasible. As TLPS approval nears the finish line, however, some unlicensed users are objecting that TLPS will interfere with their services.

The FCC proceedings reveal a technical debate about interference measurements. These claims distract from the larger issue: Either the Part 15 rules mean what they say–unlicensed users have no interference protection–or the FCC is increasingly back in the business of beauty contests and deciding which services are entitled to free spectrum.

Henry Goldberg, a communications lawyer who represented Apple years ago in getting more unlicensed spectrum allocated, predicted these fights at a 2008 Information Economy Project conference.

[I]f you are a company or a municipality or a port authority or a university who has invested in unlicensed spectrum to provide a WiFi service for a fee, you’re not so sure you want someone using unlicensed spectrum to compete with you. Such players may try to use contractual rights, lawsuits, etc. to seek to limit additional entry to what has become “their” spectrum. If a “not-in-my-back-yard” dynamic takes over, the very essence of Part 15 is compromised. Vigilance is needed to fight Part 15 NIMBY.

It’s this growing Part 15 NIMBYism that concerns many spectrum policy watchers. No one wants the return of beauty contests and the FCC picking winners among different technologies.

But Goldberg has a discouraging addendum to his prescient warning against NIMBYism in unlicensed bands:

Supporter of unfettered grazing rights that I am, it doesn’t offend me to have the town permit grazing by sheep and cows, but forbid elephants.

Herein lies the problem. The FCC is being pressured to declare that TLPS is an elephant that should not be allowed in the commons filled with wifi sheep and Bluetooth cows. LTE-U will be the next target.

If the FCC encourages these kinds of complaints, the result will be customary law that is destructive to innovation in unlicensed bands. Firms will sink investments in technologies and business plans that comply with the rules, and only later learn they are violating unwritten rules.

The bigger problem is that the FCC is entering beauty contest territory once again. Even if the FCC someday prohibits “elephants” in unlicensed–current Part 15 rules say unlicensed users have no protection against others–the agency has to determine what that means. The FCC does not want to go back to the bad old days of beauty contests, specifying, in the face of intense lobbying, that only certain technologies were allowed on certain frequencies in certain places.

As firms find ways to intensely use free unlicensed spectrum, more conflicts like these may arise. Unfortunately these fights politicize FCC decisionmaking and could stymie new wireless innovations.

It may be that NIMBYism in unlicensed is inevitable. If interference in unlicensed is a regular problem and the FCC finds itself picking winners, the FCC needs to be much more cautious about allocating unlicensed spectrum. It’s worth noting that auctioning spectrum removes the temptation to engage in the ad hoc dispensations of spectrum that plagued the agency for decades. In any case, the results of the TLPS and LTE-U proceedings will have ramifications beyond the approval or denial of those technologies.

Related Reading:
Super Wifi and Unlicensed Spectrum: “Spectrum Condos”
How the FCC Killed a Nationwide Wireless Broadband Network

How Attitudes about Risk & Failure Affect Innovation on Either Side of the Atlantic Fri, 19 Jun 2015 22:15:06 +0000

“Why hasn’t Europe fostered the kind of innovation that has spawned hugely successful technology companies?” asks James B. Stewart in an important new column for the New York Times (“A Fearless Culture Fuels U.S. Tech Giants“).

That’s a great question, and one that I have tried to answer in a series of recent essays. (See, for example, “Europe’s Choice on Innovation” and “Embracing a Culture of Permissionless Innovation.”) What I have suggested in those essays is that the starkly different outcomes on either side of the Atlantic in terms of recent economic growth and innovation can primarily be explained by cultural attitudes toward risk-taking and failure. “For innovation and growth to blossom, entrepreneurs need a clear green light from policymakers that signals a general acceptance of risk-taking—especially risk-taking that challenges existing business models and traditional ways of doing things,” I have argued. And the most powerful proof of this is to examine the amazing natural experiment that has played out on either side of the Atlantic over the past two decades with the Internet and the digital economy.

For example, an annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing and digital technology. None of them are based in Europe, however. Another recent survey revealed that the world’s 15 most valuable Internet companies (based on market capitalizations) have a combined market value of nearly $2.5 trillion, but none of them are European while 11 of them are U.S. firms. Again, it is America’s tech innovators that dominate that list.

Many European officials and business leaders are waking up to this grim reality and are wondering how to reverse this situation. In his Times essay, Stewart quotes Danish economist Jacob Kirkegaard of the Peterson Institute for International Economics, who notes that Europeans “all want a Silicon Valley. . . . But none of them can match the scale and focus on the new and truly innovative technologies you have in the United States. Europe and the rest of the world are playing catch-up, to the great frustration of policy makers there.”

OK, but why is that? Again, it comes down to those different cultural attitudes about risk and the stark differences over the potential lessons to be gained from allowing firms, business models, and entire professions to fail and/or be significantly disrupted.

Stewart quotes German economist Petra Moser on this point. She noted that “Europeans are worried. . . . They’re trying to recreate Silicon Valley in places like Munich, so far with little success. The institutional and cultural differences are still too great.” “In Europe, stability is prized,” she adds. Here’s the key passage from the Stewart piece elaborating on this point:

Often overlooked in the success of American start-ups is the even greater number of failures. “Fail fast, fail often” is a Silicon Valley mantra, and the freedom to innovate is inextricably linked to the freedom to fail. In Europe, failure carries a much greater stigma than it does in the United States. Bankruptcy codes are far more punitive, in contrast to the United States, where bankruptcy is simply a rite of passage for many successful entrepreneurs.

Moreover, he notes, “Europeans are also much less receptive to the kind of truly disruptive innovation represented by a Google or a Facebook.”

And that remains the heart of the problem for Europe. What many leaders there fail to appreciate, as I noted in my earlier essays, is that:

Innovation is more likely in systems that maximize breathing room for ongoing economic and social experimentation, evolution, and adaptation. Societies that appreciate those values—and allow them to influence both social norms and policy decisions—are likely to experience greater economic growth. By contrast, those that deride such values and adopt a more precautionary policy approach are more likely to discourage innovation and languish economically.

The remarkable aversion to failure and its effect on deterring entrepreneurialism and long-term growth in Europe and elsewhere cannot be overstated. As I will argue in a forthcoming book chapter on this topic, we can conclude, paradoxically, that individuals, institutions, and countries that over-zealously seek to avoid short-term failures are actually far more prone to dangerous, systemic failures in the long term. Put more simply: the more you try to avoid all the little failures, the harder you fail more generally. This is Europe’s fundamental predicament circa 2015.

Of course, changing long-entrenched cultural attitudes toward risk and failure can be challenging and take many years, even decades. But the path forward–at least in terms of legal policy and regulatory reforms–has been charted by Larry Downes in his new Harvard Business Review essay, “How Europe Can Create Its Own Silicon Valley.” EU policymakers, he correctly observes, will “have to learn to appreciate in the first place the profound role regulation (or the lack of it) plays in the creation of economic value in the Internet economy.” Downes then continues on to itemize some of the policy changes that would help put Europe on the right track to unlock the amazing entrepreneurial spirit that lies dormant across the continent.

Whether or not the Europeans are willing to take those steps remains to be seen. Regardless, the lesson for U.S. policymakers should be clear: If you want to continue to produce world-beating tech innovators, you must avoid Europe’s overly precautionary and highly risk-averse approach to policy. “Permissionless innovation” remains the better default policy position toward new entrepreneurs and technologies, no matter how disruptive they may be in the short-term.

The Challenge of Defining Privacy Harm Fri, 19 Jun 2015 18:12:30 +0000

On Thursday, it was my great pleasure to participate in a Washington Legal Foundation (WLF) event on “Online Privacy Regulation: The Challenge of Defining Harm.” The entire event video can be found on YouTube here, but down below I pasted the clip of just my remarks. Other speakers at the event included: FTC Commissioner Maureen K. Ohlhausen; John B. Morris, Jr., Associate Administrator and Director of Internet Policy at the U.S. Department of Commerce’s National Telecommunications and Information Administration; and Katherine Armstrong, Counsel at the law firm of Hogan Lovells. Glenn Lammi of the WLF moderated the session.

My remarks drew upon a few recent law review articles I have published relating digital privacy debates to previous debates over free speech and online child safety issues. (Here are those articles: 1, 2, 3).

New Paper Surveying Growth Projections for the Internet of Things Mon, 15 Jun 2015 19:16:15 +0000

The “Internet of Things” (IoT) is already growing at a breakneck pace and is expected to continue to accelerate rapidly. In a short new paper (“Projecting the Growth and Economic Impact of the Internet of Things”) that I’ve just released with my Mercatus Center colleague Andrea Castillo, we provide a brief explanation of IoT technologies before describing the current projections of the economic and technological impacts that IoT could have on society. In addition to creating massive gains for consumers, IoT is projected to provide dramatic improvements in manufacturing, health care, energy, transportation, retail services, government, and general economic growth. Take a look at our paper if you’re interested, and you might also want to check out my 118-page law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” as well as my recent congressional testimony on the policy issues surrounding the IoT.



Video of FTC Workshop Panel on Sharing Economy Policy Issues Fri, 12 Jun 2015 15:38:48 +0000

On June 9th, the Federal Trade Commission hosted an excellent workshop on “The ‘Sharing’ Economy: Issues Facing Platforms, Participants, and Regulators,” which included four major panels and dozens of experts speaking about these important issues. It was my great pleasure to be part of the fourth panel of the day on the policy implications of the sharing economy. Along with my Mercatus colleagues Christopher Koopman and Matt Mitchell, I submitted a 20-page filing to the Commission summarizing our research findings in this area. (We also released a major new working paper that same day, “How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the ‘Lemons Problem.’”) All Mercatus Center research on sharing economy issues can be found on this page, and we plan on releasing additional papers in coming months.

The FTC has now posted the videos from their workshop and down below I have embedded my particular panel. My remarks begin around the 5-minute mark of the video.

New Filing & Working Paper on the Regulation of the Sharing Economy Tue, 26 May 2015 17:41:04 +0000

Along with colleagues at the Mercatus Center at George Mason University, I am releasing two major new reports today dealing with the regulation of the sharing economy. The first report is a 20-page filing to the Federal Trade Commission that we are submitting to the agency for its upcoming June 9th workshop on “The ‘Sharing’ Economy: Issues Facing Platforms, Participants, and Regulators.” We have been invited to participate in that event and I will be speaking on the fourth panel of the workshop. The filing I am submitting today for that workshop was co-authored with my Mercatus colleagues Christopher Koopman and Matt Mitchell.

The second report we are releasing today is a new 47-page working paper entitled, “How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the ‘Lemons Problem.'” This study was co-authored with my Mercatus colleagues Christopher Koopman, Anne Hobson, and Chris Kuiper.

I will summarize each report briefly here.

In our new filing to the FTC, we address the five questions the Commission set forth in its workshop announcement. Those five questions are as follows:

  • How can state and local regulators meet legitimate regulatory goals (such as protecting consumers, and promoting public health and safety) in connection with their oversight of sharing economy platforms and business models, without also restraining competition or hindering innovation?
  • How have sharing economy platforms affected competition, innovation, consumer choice, and platform participants in the sectors in which they operate? How might they in the future?
  • What consumer protection issues—including privacy and data security, online reviews and disclosures, and claims about earnings and costs—do these platforms raise, and who is responsible for addressing these issues?
  • What particular concerns or issues do sharing economy transactions raise regarding the protection of platform participants? What responsibility does a sharing economy platform bear for consumer injury arising from transactions undertaken through the platform?
  • How effective are reputation systems and other trust mechanisms, such as the vetting of sellers, insurance coverage, or complaint procedures, in encouraging consumers and suppliers to do business on sharing economy platforms?

We provide detailed answers to each of these questions as well as one additional major question that was not posed by the Commission in its workshop notice but which is, no doubt, on the minds of many at the agency and outside it: What should the FTC do about state and local barriers to entry and innovation that might be thwarting the growth of the sharing economy? (I blogged about that issue here a couple of weeks ago and our filing includes that discussion.)

Please take a look at our filing for detailed answers to each of these questions. (Incidentally, our filing is an extension of an earlier working paper that Koopman, Mitchell, and I released late last year on “The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change.”) But, to briefly highlight the thrust of our argument, here’s a passage from our new filing:

As the debate surrounding the sharing economy moves forward, policymakers must keep in mind that merely because regulations were once justified on the grounds of consumer protection does not mean they accomplished those goals or that they are still needed today. Even well-intentioned policies must be judged against real-world evidence. Unfortunately, the evidence shows that many traditional consumer protection regulations hurt consumers; in the words of New York Attorney General Eric Schneiderman, they are often “cumbersome, and some are just plain protectionist.”

Markets, competition, reputational systems, and ongoing innovation often solve problems better than regulation when they are given a chance to do so. There are two reasons for this. First, market imperfections create powerful profit opportunities for entrepreneurs who are able to find ways to correct them. Second, regulatory solutions too often undermine competition and lock in inefficient business models.

We continue on to explain exactly why that is the case, while also offering some constructive solutions to other issues that are on the minds of regulators.

Meanwhile, the new working paper we are releasing today provides much greater detail on the fifth of the five questions the FTC posed in its workshop notice regarding reputation systems and other trust mechanisms. Here is the abstract from the paper:

This paper argues that the sharing economy—through the use of the Internet and real time reputational feedback mechanisms—is providing a solution to the lemons problem that many regulators have spent decades attempting to overcome. Section I provides an overview of the sharing economy and traces its rapid growth. Section II revisits the lemons theory as well as the various regulatory solutions proposed to deal with the problem of asymmetric information. Section III discusses the relationship between reputation and trust and analyzes how reputational incentives affect commercial interactions. Section IV discusses how information asymmetries were addressed in the pre-Internet era. It also discusses how the evolution of both the Internet and information systems (especially the reputational feedback mechanisms of the sharing economy) addresses the lemons problem. Section V explains how these new realities affect public policy and concludes that asymmetric information is not a legitimate rationale for policy intervention in light of technological changes. We also argue that continued use of this rationale to regulate in the name of consumer protection might, in fact, make consumers worse off. This has ramifications for the current debate over regulation of the sharing economy.

We believe that our research makes it clear “how the sharing economy relies upon—and has helped spur the growth of—sophisticated reputational feedback mechanisms that facilitate online trust and commerce, overcoming many of the information asymmetries that seemed intractable… just a generation ago. In combination with online review services and other information-sharing technologies enabled by the Internet,” we conclude, “these reputational tools can help create more effective, and largely self-regulating, markets that provide more information to more individuals than ever before.”

We look forward to continuing engagement with officials at the FTC and other policymakers at the federal, state, and even international level on these issues. We hope our research will help legislators and regulators find sensible ways to adjust policy for the sharing economy so as not to derail the sort of “permissionless innovation” that has thus far powered this exciting sector and produced the many pro-consumer benefits flowing from it. Check out our filing and new paper for more details.

What Should the FTC Do about State & Local Barriers to Sharing Economy Innovation? Tue, 12 May 2015 20:21:02 +0000

The Federal Trade Commission (FTC) is taking a more active interest in state and local barriers to entry and innovation that could threaten the continued growth of the digital economy in general and the sharing economy in particular. The agency recently announced it would be hosting a June 9th workshop “to examine competition, consumer protection, and economic issues raised by the proliferation of online and mobile peer-to-peer business platforms in certain sectors of the [sharing] economy.” Filings are due to the agency in this matter by May 26th. (Along with my Mercatus Center colleagues, I will be submitting comments and also releasing a big paper on reputational feedback mechanisms that same week. We have already released this paper on the general topic.)

Relatedly, just yesterday, the FTC sent a letter to Michigan policymakers about restricting entry by Tesla and other direct-to-consumer sellers of vehicles. Michigan passed a law in October 2014 prohibiting such direct sales. The FTC’s strongly worded letter decries the state’s law as “protectionism for independent franchised dealers,” noting that “current provisions operate as a special protection for dealers—a protection that is likely harming both competition and consumers.” The agency argues that:

consumers are the ones best situated to choose for themselves both the vehicles they want to buy and how they want to buy them. Automobile manufacturers have an economic incentive to respond to consumer preferences by choosing the most effective distribution method for their vehicle brands. Absent supportable public policy considerations, the law should permit automobile manufacturers to choose their distribution method to be responsive to the desires of motor vehicle buyers.

The agency cites the “well-developed body of research on these issues strongly suggests that government restrictions on distribution are rarely desirable for consumers” and the staff letter continues on to utterly demolish the bogus arguments set forth by defenders of the blatantly self-serving, cronyist law. (For more discussion of just how anti-competitive and anti-consumer these laws are in practice, see this January 2015 Mercatus Center study, “State Franchise Law Carjacks Auto Buyers,” by Jerry Ellig and Jesse Martinez.)

The FTC’s letter is another example of how the agency can take steps using its advocacy tools to explain to state and local policymakers how their laws may be protectionist and anti-consumer in character. Needless to say, this also has ramifications for how the agency approaches parochial restraints on entry and innovation affecting the sharing economy.

In our forthcoming Mercatus Center comments to the FTC for its June 9th sharing economy workshop, Christopher Koopman, Matt Mitchell, and I will address many issues related to the sharing economy and its regulation. Beyond addressing all five of the specific questions asked in the Commission’s workshop notice, we also include a discussion about “Federal Responses to Local Anticompetitive Regulations.” Down below I have reproduced the current rough draft of that section of our filing in the hope of getting input from others. Needless to say, the idea of the FTC aggressively using its advocacy efforts or even federal antitrust laws to address state and local barriers to trade and innovation will make some folks uncomfortable–especially on federalism grounds. But we argue that a good case can be made for the agency using both its advocacy and antitrust tools to address these issues. Let us know what you think.



The Federal Trade Commission possesses two primary tools to address public restraints of trade created by state and local authorities: advocacy and antitrust.[1]

Through its advocacy program, the Commission can provide specific comments to state and local officials regarding the effects of both proposed and existing regulations.[2] Commissioner Joshua Wright has noted that, “For many years, the FTC has used its mantle to comment on legislation and regulation that may restrain competition in a way that harms consumers.”[3] Thus, at a minimum, the Commission can and should shine light on parochial governmental efforts to restrain trade and limit innovation throughout the sharing economy.[4] By shining more light on state or local anti-competitive rules, the Commission will hopefully make governments, or their surrogate bodies (such as licensing boards), more transparent about their practices and more accountable for laws or regulations that could harm consumer welfare. However, to be successful, the Commission’s advocacy efforts depend upon the willingness of state and local legislators and regulators to heed its advice.[5]

The Commission has already used its advisory role in its recent guidance to state and local policymakers regarding the regulation of ridesharing services. The Commission noted then that “a regulatory framework should be responsive to new methods of competition,” and set forth the following vision regarding what it regards as the proper approach to parochial regulation of passenger transportation services:

Staff recommends that a regulatory framework for passenger vehicle transportation should allow for flexibility and adaptation in response to new and innovative methods of competition, while still maintaining appropriate consumer protections. [Regulators] also should proceed with caution in responding to calls for change that may have the effect of impairing new forms or methods of competition that are desirable to consumers. . . .  In general, competition should only be restricted when necessary to achieve some countervailing procompetitive virtue or other public benefit such as protecting the public from significant harm.[6]

This represents a reasonable framework for addressing concerns about parochial regulation of the sharing economy more generally.

Unfortunately, in areas relevant to the regulation of the sharing economy (e.g., taxicab regulations and rules governing home and apartment rentals) anticompetitive regulations have remained on the books—and in some instances have expanded—in spite of more than 30 years of Commission comment and advocacy.[7]  In fact, as Public Citizen noted in a recent Supreme Court filing:

[M]any more occupations are regulated than ever before, and most boards doing the regulating—in both traditional and new professions—are dominated by industry members who compete in the regulated market. Those board member-competitors, in turn, commonly engage in regulation that can be seen as anticompetitive self-protection. The particular forms anticompetitive regulations take are highly varied, the possibilities seemingly limited only by the imaginations of the board members.[8]

In these instances, the Commission’s antitrust enforcement authority may need to be utilized when its advocacy efforts fall short with regard to regulations that favor incumbents by limiting competition and entry.[9] Many academics have endorsed expanded antitrust oversight of public barriers to trade and innovation.[10] As Commissioner Wright has argued, “the FTC is in a good position to use its full arsenal of tools to ensure that state and local regulators do not thwart new entrants from using technology to disrupt existing marketplace.”[11] He notes specifically that he is “quite confident that a significant shift of agency resources away from enforcement efforts aimed at taming private restraints of trade and instead toward fighting public restraints would improve consumer welfare.”[12] We agree.

The Supreme Court’s recent decision in North Carolina State Board of Dental Examiners v. Federal Trade Commission made it clear that local authorities cannot claim broad immunity from federal antitrust laws.[13] This is particularly true, the Court noted, “where a State delegates control over a market to a nonsovereign actor,” such as a professional licensing board consisting primarily of members of the affected interest being regulated.[14] “Limits on state-action immunity are most essential when a State seeks to delegate its regulatory power to active market participants,” the Court held, “for dual allegiances are not always apparent to an actor and prohibitions against anticompetitive self-regulation by active market participants are an axiom of federal antitrust policy.”[15]

The touchstone of this case and the Court’s related jurisprudence in this area is political accountability.[16] State officials must (1) “clearly articulate” and (2) “actively supervise” licensing arrangements and regulatory bodies if they hope to withstand federal antitrust scrutiny.[17] The Court clarified this test in N.C. Dental, holding that “the Sherman Act confers immunity only if the State accepts political accountability for the anticompetitive conduct it permits and controls.”[18] In other words, if state and local officials want to engage in protectionist activities that restrain trade in pursuit of some other countervailing objective, then they need to own up to it by being transparent about their anticompetitive intentions and then actively oversee the process after that to ensure it is not completely captured by affected interests.[19]

Some might argue that this does not go far enough to eradicate anti-competitive barriers to trade at the state or local level that could restrain the innovative potential of the sharing economy. While that may be true, some limits on the Commission’s federal antitrust discretion are necessary to avoid impinging upon legitimate state and local priorities.

Over time, it is our hope that by empowering the public with more options, more information and better ways to shine light on bad actors, the sharing economy will continue to make many of those old regulations unnecessary. Thus, in line with Commissioner Maureen Ohlhausen’s wise advice, the Commission should encourage state and local officials to exercise patience and humility as they confront technological changes that disrupt traditional regulatory systems.[20]

But when parochial regulators engage in blatantly anti-competitive activities that restrain trade, foster cartelization, or harm consumer welfare in other ways, the Commission can act to counter the worst of those tendencies.[21] The Commission’s standard of review going forward was appropriately articulated by Commissioner Wright recently when he noted that, “in the context of potentially disruptive forms of competition through new technologies or new business models, we should generally be skeptical of regulatory efforts that have the effect of favoring incumbent industry participants.”[22]

Such parochial protectionist barriers to trade and innovation will become even more concerning as the potential reach of so many sharing economy businesses grows larger. The boundary between intrastate and interstate commerce is sometimes difficult to determine for many sharing economy platforms. Clearly, much of the commerce in question occurs within the boundaries of a state or municipality, but sharing economy services also rely upon Internet-enabled platforms with a broader reach. To the extent state or local restrictions on sharing economy operations create negative externalities in the form of “interstate spillovers,” the case for federal intervention is strengthened.[23] It would be preferable if Congress chose to deal with such spillovers using its Commerce Clause authority (Art. 1, Sec. 8 of the Constitution),[24] but the presence of such negative externalities might also bolster the case for the Commission’s use of antitrust to address parochial restraints on trade.


[1]     See Maureen K. Ohlhausen, Reflections on the Supreme Court’s North Carolina Dental Decision and the FTC’s Campaign to Rein in State Action Immunity, before the Heritage Foundation, Washington, DC, March 31, 2015, at 19-20.

[2]     Id., at 20. (“The primary goal of such advocacy is to convince policymakers to consider and then minimize any adverse effects on competition that may result from regulations aimed at preventing various consumer harms.”) Also see James C. Cooper and William E. Kovacic, “U.S. Convergence with International Competition Norms: Antitrust Law and Public Restraints on Competition,” Boston University Law Review, Vol. 90, No. 4, (August 2010): 1582, “Competition advocacy helps solve consumers’ collective action problem by acting within the regulatory process to advocate for regulations that do not restrict competition unless there is a compelling consumer protection rationale for imposing such costs on citizens.”).

[3]     Joshua D. Wright, “Regulation in High-Tech Markets: Public Choice, Regulatory Capture, and the FTC,” Remarks of Joshua D. Wright, Commissioner, Federal Trade Commission, at the Big Ideas about Information Lecture, Clemson University, Clemson, South Carolina, April 2, 2015, at 15.

[4]     Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1610, (“Competition agencies could devote greater resources to conduct research to measure the effects of public policies that restrict competition. A research program could accumulate and analyze empirical data that assesses the consumer welfare effects of specific restrictions. Such a program could also assess whether the stated public interest objectives of government restrictions are realized in practice.”)

[5]     Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1582, (“The value of competition advocacy should be measured by (1) the degree to which comments altered regulatory outcomes times (2) the value to consumers of those improved outcomes. For all practical purposes, however, both elements are difficult to measure with any degree of certainty.”).

[6]     Federal Trade Commission, Staff Comments Before the Colorado Public Utilities Commission In The Matter of The Proposed Rules Regulating Transportation By Motor Vehicle, 4 Code of Colorado Regulations (March 6, 2013).

[7]     Marvin Ammori, “Can the FTC Save Uber,” Slate, March 12, 2013, (noting that “not only does the FTC have the authority to take these cities to impartial federal courts and end their anticompetitive actions; it also has deep expertise in taxi markets and antitrust doctrines.”). Also see Edmund W. Kitch, “Taxi Reform—The FTC Can Hack It,” Regulation, May/June 1984.

[8]     Brief of Amici Curiae Public Citizen in Support of Respondent, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 24.

[9]     Brief of Antitrust Scholars as Amici Curiae in Support of Respondent, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 6, 2014): 24, (“Antitrust review is entirely appropriate for curbing the excesses of occupational licensing because the anticompetitive effect has a similar effect on the market—and in particular consumers—as does traditional cartel activity.”)

[10]   See Mark A. Perry, “Municipal Supervision and State Action Antitrust Immunity,” The University of Chicago Law Review, Vol. 57 (Fall 1990): 1413-1445; William J. Martin, “State Action Antitrust Immunity for Municipally Supervised Parties,” The University of Chicago Law Review, Vol. 72 (Summer 2005): 1079-1102; Jarod M. Bona, “The Antitrust Implications of Licensed Occupations Choosing Their Own Exclusive Jurisdiction,” University of St. Thomas Journal of Law & Public Policy, Vol. 5 (Spring 2011): 28-51; Ingram Weber, “The Antitrust State Action Doctrine and State Licensing Boards,” The University of Chicago Law Review, Vol. 79 (2012); Aaron Edlin and Rebecca Haw, “Cartels by Another Name: Should Licensed Occupations Face Antitrust Scrutiny?,” University of Pennsylvania Law Review, Vol. 162 (2014): 1093-1164.

[11]   Wright, “Regulation in High-Tech Markets,” at 28-9.

[12]   Wright, “Regulation in High-Tech Markets,” at 29.

[13]   North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015).

[14]   Id.

[15]   Id. Also see Edlin & Haw, “Cartels by Another Name,” at 1143, (“Who could seriously argue that an unsupervised group of competitors appointed to regulate their own profession can be counted on to neglect their selfish interests in favor of the state’s?”); Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 3, (“Antitrust immunity for private parties who act under color of state law is especially problematic, given that anticompetitive conduct is most likely to occur when private parties are in a position to exploit government’s regulatory powers.”)

[16]   See Maureen K. Ohlhausen, Reflections on the Supreme Court’s North Carolina Dental Decision and the FTC’s Campaign to Rein in State Action Immunity, before the Heritage Foundation, Washington, DC, March 31, 2015, at 16, (“states need to be politically accountable for whatever market distortions they impose on consumers.”); Edlin & Haw, “Cartels by Another Name,” at 1137, (“political accountability is the price a state must pay for antitrust immunity.”)

[17]   See Federal Trade Commission, Office of Policy and Planning, Report of the State Action Task Force (2003): 54, (“clear articulation requires that a state enunciate an affirmative intent to displace competition and to replace it with a stated criterion. Active supervision requires the state to examine individual private conduct, pursuant to that regulatory regime, to ensure that it comports with that stated criterion. Only then can the underlying conduct accurately be deemed that of the state itself, and political responsibility for the conduct fairly placed with the state.”) This test has been developed and refined in a variety of cases over the past 35 years. See: California Retail Liquor Dealers Ass’n v. Midcal Aluminum, Inc., 445 U.S. 97 (1980); Cmty. Comm’ns Co., Inc. v. City of Boulder, 455 U.S. 40, 48-51 (1982); City of Columbia v. Omni Outdoor Advertising, Inc., 499 U.S. 365 (1991); FTC v. Ticor Title Ins. Co., 504 U.S. 621 (1992).

[18]   North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015).

[19]   Edlin & Haw, “Cartels by Another Name,” at 1156. (“Requiring that the state place its imprimatur on regulation is at least better than the status quo, in which states too often delegate self-regulation to professionals and walk away.”) See also North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015) (“[Federal antitrust] immunity requires that the anticompetitive conduct of nonsovereign actors, especially those authorized by the State to regulate their own profession, result from procedures that suffice to make it the State’s own.”).

[20]  Maureen K. Ohlhausen, Commissioner, Fed. Trade Commission, “Regulatory Humility in Practice,” Remarks at the American Enterprise Institute, Washington, D.C. (April 1, 2015).

[21]   Edlin & Haw, “Cartels by Another Name,” at 1094, (“state action doctrine should not prevent antitrust suits against state licensing boards that are comprised of private competitors deputized to regulate and to outright exclude their own competition, often with the threat of criminal sanction.”). See also Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 2, 21, (noting that courts “should presume strongly against granting state-action immunity in antitrust cases. It makes little sense to impose powerful civil and criminal punishments on private parties who are deemed to have engaged in anti-competitive conduct, while exempting government entities—or, worse, private parties acting under the government’s aegis—when they engage in the exact same conduct. . . . Whatever one’s opinion of antitrust law in general, there is no justification for allowing states broad latitude to disregard federal law and erect private cartels with only vague instructions and loose oversight.”)

[22]   Wright, “Regulation in High-Tech Markets,” at 7.

[23]   FTC, Report of the State Action Task Force, 44, (“an unfortunate gap has emerged between scholarship and case law. Although many of the leading commentators have expressed serious concern regarding problems posed by interstate spillovers, their thinking has yet to take root in the law. Such spillovers undermine both economic efficiency and some of the same political representation values thought to be protected by principles of federalism.”); Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 13, (“Allowing states expansive power to exempt private actors from antitrust laws would also disrupt national economic policy by encouraging a patchwork of state-established entities licensed to engage in cartel behavior. This would disrupt interstate investment and consumer expectations, and would have spillover effects across state lines.”); Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1598, (“When a state exports the costs attendant to its anticompetitive regulatory scheme to those who have not participated in the political process, however, there is no political backstop; arguments for immunity based on federalism concerns are severely weakened, if not wholly eviscerated, in these situations.”)

[24]   See Adam Thierer, The Delicate Balance: Federalism, Interstate Commerce, and Economic Freedom in the Technological Age (Washington, DC: The Heritage Foundation, 1998): 81-118.

Mercatus Filing to FAA on Small Drones Fri, 24 Apr 2015 18:46:09 +0000

Today, Eli Dourado, Ryan Hagemann, and I filed comments with the Federal Aviation Administration (FAA) in its proceeding on the “Operation and Certification of Small Unmanned Aircraft Systems” (i.e. small private drones). In this filing, we begin by arguing that just as “permissionless innovation” has been the primary driver of entrepreneurialism and economic growth in many sectors of the economy over the past decade, that same model can and should guide policy decisions in other sectors, including the nation’s airspace. “While safety-related considerations can merit some precautionary policies,” we argue, “it is important that those regulations leave ample space for unpredictable innovation opportunities.”

We continue on in our filing to note that “while the FAA’s NPRM is accompanied by a regulatory evaluation that includes benefit-cost analysis, the analysis does not meet the standard required by Executive Order 12866. In particular, it fails to consider all costs and benefits of available regulatory alternatives.” After that, we itemize the good and the bad of the FAA’s proposal with an eye toward how the agency can maximize innovation opportunities. We conclude by noting:

 The FAA must carefully consider the potential effect of UASs on the US economy. If it does not, innovation and technological advancement in the commercial UAS space will find a home elsewhere in the world. Many of the most innovative UAS advances are already happening abroad, not in the United States. If the United States is to be a leader in the development of UAS technologies, the FAA must open the American skies to innovation.

You can read our entire 9-page filing here.


Additional Reading

The Wrong Way to End the Terrestrial Radio Exemption Mon, 20 Apr 2015 00:53:22 +0000

A bill before Congress would for the first time require radio broadcasters to pay royalty fees to recording artists and record labels pursuant to the Copyright Act. The proposed Fair Play Fair Pay Act (H.R. 1733) would “[make] sure that all radio services play by the same rules, and all artists are fairly compensated,” according to Congressman Jerrold Nadler (D-NY).

… AM/FM radio has used whatever music it wants without paying a cent to the musicians, vocalists, and labels that created it. Satellite radio has paid below market royalties for the music it uses …

The bill would still allow for different fees for AM/FM radio, satellite radio and Internet radio, but it would mandate a “minimum fee” for each type of service for the first time.

A February report from the U.S. Copyright Office cites the promotional value of airtime as the longstanding justification for exempting terrestrial radio broadcasters from paying royalties under the Copyright Act.

In the traditional view of the market, broadcasters and labels representing copyright owners enjoy a mutually beneficial relationship whereby terrestrial radio stations exploit sound recordings to attract the listener pools that generate advertising dollars, and, in return, sound recording owners receive exposure that promotes record and other sales.

The Copyright Office now feels there are “significant questions” whether the traditional view remains credible today. But significant questions are not the same thing as clear evidence.

The problem with the proposed Fair Play Fair Pay Act is two-fold. First, notwithstanding that there is now some uncertainty around the traditional view of the AM/FM market, the bill mandates new minimum fees anyway. Second, it would empower a government panel consisting of three judges appointed by the Librarian of Congress to engage in what could become highly subjective decision-making.

The Copyright Royalty Judges shall establish rates and terms that most clearly represent the rates and terms that would have been negotiated in the marketplace between a willing buyer and a willing seller.

The most efficient way to get an accurate indicator of what a willing buyer and a willing seller would’ve negotiated in the marketplace is to call for private negotiations. The Copyright Office recommends this approach, too. Only when a music rights organization (MRO) and a licensee are unsuccessful in reaching an agreement on their own would the Copyright Royalty Board set the rates.

Each MRO would enjoy an antitrust exemption to negotiate performance and mechanical licenses collectively on behalf of its members—as would licensee groups negotiating with the MROs—with the CRB available to establish a rate in case of a dispute.

If Congress wants to end the terrestrial radio exemption, this is the better way to do it. Plainly, however, promotional value counts for something—and even the proposed Fair Play Fair Pay Act acknowledges that the value of the promotional effect qualifies as a legitimate form of compensation to recording artists and record labels.

Autonomous Vehicles Under Attack: Cyber Dashboard Standards and Class Action Lawsuits Sat, 14 Mar 2015 13:06:08 +0000

In a recent Senate Commerce Committee hearing on the Internet of Things, Senators Ed Markey (D-Mass.) and Richard Blumenthal (D-Conn.) “announced legislation that would direct the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) to establish federal standards to secure our cars and protect drivers’ privacy.” Spurred by a recent report from his office (Tracking and Hacking: Security and Privacy Gaps Put American Drivers at Risk), Markey argued that Americans “need the equivalent of seat belts and airbags to keep drivers and their information safe in the 21st century.”

Among the many conclusions reached in the report, it says, “nearly 100% of cars on the market include wireless technologies that could pose vulnerabilities to hacking or privacy intrusions.” This comes across as a tad tautological given that everything from smartphones and computers to large-scale power grids is prone to being hacked, yet the Markey-Blumenthal proposal would enforce a separate set of government-approved, regulated standards for privacy and security, displayed on every vehicle in the form of a “Cyber Dashboard” decal.

Leaving aside the irony of legislators attempting to dictate privacy standards, especially in the post-Snowden world, it would behoove legislators like Markey and Blumenthal to take a closer look at just what it is they are proposing and ask whether such a law is indeed necessary to protect consumers. For security in particular, there may be concerns that require redress, but if one looks at the report, it becomes apparent that it lacks a very important feature: no specific examples of real car hacking are mentioned. The only examples in the report are described only briefly:

An application was developed by a third party and released for Android devices that could integrate with a vehicle through the Bluetooth connection. A security analysis did not indicate any ability to introduce malicious code or steal data, but the manufacturer had the app removed from the Google Play store as a precautionary measure.

Great! The company solved the problem. What about the other instance cited in the report?

Some individuals have attempted to reprogram the onboard computers of vehicles to increase engine horsepower or torque through the use of “performance chips”. Some of these devices plug into the mandated onboard diagnostic port or directly into the under-the-hood electronics system.

So the only two examples of “car hacking” described in the Markey report are essentially duds. The first is a non-issue, since the company (1) determined there was little security risk involved and (2) removed the item from the market anyway, just to be sure. The second is, in a sense, hacking, but it is individual car owners doing it to their own cars. Neither of these cases appears to be sufficient grounds for imposing a set of arbitrary and, in many cases, capriciously anti-innovation approaches to privacy and data security in cars.

In the wake of the report’s release, this past Tuesday, March 10, General Motors, Toyota, and Ford were all hit with a nationwide class action lawsuit, alleging that the companies concealed “dangers posed by a lack of electronic security in a vast swath of vehicles.” Specifically, the lawsuit is aimed at the presence of controller area network (CAN) buses, which act as data hubs between the various electronic systems in a car. These systems are, indeed, susceptible to hacking, but no more than any personal computer that is connected to the Internet.
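For readers unfamiliar with CAN, the architecture is easy to sketch in code. The snippet below is a simplified illustration, not taken from any automotive codebase (the 0x244 message ID and payload are invented for the example). The notable point is what the classic CAN frame format omits: there is no sender-address or authentication field, which is why any node on the bus, legitimate or compromised, can transmit any message.

```python
from dataclasses import dataclass

@dataclass
class CanFrame:
    """A simplified classic CAN 2.0A frame: an 11-bit arbitration ID and
    up to 8 data bytes. Note what is absent: no sender address and no
    authentication field, so receivers trust the message ID alone."""
    arbitration_id: int  # 11-bit message identifier; lower IDs win bus arbitration
    data: bytes          # payload, 0 to 8 bytes

    def __post_init__(self):
        if not 0 <= self.arbitration_id <= 0x7FF:
            raise ValueError("classic CAN arbitration IDs are 11 bits")
        if len(self.data) > 8:
            raise ValueError("classic CAN payloads are at most 8 bytes")

# A hypothetical "vehicle speed" broadcast. Any device on the bus could
# emit a frame with this same ID, and receivers have no way to tell.
frame = CanFrame(arbitration_id=0x244, data=bytes([0x00, 0x5A]))
```

The design made sense for a closed, wired network of trusted controllers; the security question only arises once wireless interfaces bridge that bus to the outside world.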

The trouble with this lawsuit, brought by the Stanley Law Group, is that it has not cited any specific harms that have occurred as a result of this “defect” (as a side note, saying that a computer’s susceptibility to hacking constitutes a design defect is equivalent to saying that an airplane’s susceptibility to lightning strikes makes it fundamentally defective). Rather, the plaintiffs argue that “[w]e shouldn’t need to wait for a hacker or terrorist to prove exactly how dangerous this is before requiring car makers to fix the defect.”

As Adam Thierer and I pointed out in our 2014 paper, Removing Roadblocks to Intelligent Vehicles and Driverless Cars:

Manufacturers have powerful reputational incentives at stake here, which will encourage them to continuously improve the security of their systems. Companies like Chrysler and Ford are already looking into improving their telematics systems to better compartmentalize the ability of hackers to gain access to a car’s controller-area-network bus. Engineers are also working to solve security vulnerabilities by utilizing two-way data-verification schemes (the same systems at work when purchasing items online with a credit card), routing software installs and updates through remote servers to check and double-check for malware, adopting of routine security protocols like encrypting files with digital signatures, and other experimental treatments. (pg. 40-41)
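The signed-update idea in that excerpt can be made concrete. The sketch below is purely illustrative: the key, payload, and function names are invented, and it uses an HMAC from Python’s standard library as a stand-in for the public-key signatures a real over-the-air update system would use. The point is simply that the vehicle refuses to install any firmware image whose tag does not verify.

```python
import hmac
import hashlib

# Illustrative only: a real system would use public-key signatures so the
# verification key on the vehicle cannot be used to forge new updates.
SHARED_KEY = b"demo-key-not-a-real-secret"

def sign_update(firmware: bytes) -> bytes:
    """Tag a firmware image (done on the manufacturer's build server)."""
    return hmac.new(SHARED_KEY, firmware, hashlib.sha256).digest()

def install_if_valid(firmware: bytes, tag: bytes) -> bool:
    """The vehicle recomputes the tag and installs only on a match."""
    expected = hmac.new(SHARED_KEY, firmware, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

image = b"\x7fELF...pretend firmware image"
good_tag = sign_update(image)
assert install_if_valid(image, good_tag)             # legitimate update accepted
assert not install_if_valid(b"malicious", good_tag)  # tampered image rejected
```

Even this toy version illustrates why such checks are a market-driven engineering practice rather than something that needs to be specified in statute.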

It’s always easy to see the potential for abuse and harm with any new emerging technology, but optimism and fortitude in the face of the uncertain are what help society, and individuals, grow and progress. Car hacking, while certainly a viable concern, is not so ubiquitous that it necessitates a heavy-handed regulatory approach. Rather, we should permit various standards to emerge and attempt to deal with possible harms. In this way, we can experiment to properly determine which approaches work and which do not. Federal standards imposed from on high assume that firms and individuals are not capable of working through these murky issues. We should be a bit more optimistic about the human capacity for ingenuity and adaptability.

To end on something of a more optimistic note, Tom Vanderbilt of Wired magazine gives keen insight into the reality of regulating based on hypothetical scenarios:

Every scenario you can spin out of computer error – what if the car drives the wrong way – already exists in analog form, in abundance. Yes, computer-guidance systems and the rest will require advances in technology, not to mention redundancy and higher standards of performance, but at least these are all feasible, and capable of quantifiable improvement. On the other hand, we’ll always have lousy drivers.



Additional Reading 

Bipartisan Internet of Things Resolution Introduced in Senate Wed, 04 Mar 2015 21:08:24 +0000

A new bipartisan “sense of the Senate” resolution was introduced today calling for “a national strategy for the Internet of Things to promote economic growth and consumer empowerment.” [PDF is here.] The resolution was cosponsored by U.S. Senators Deb Fischer (R-Neb.), Cory A. Booker (D-N.J.), Kelly Ayotte (R-N.H.), and Brian Schatz (D-Hawaii), who are all members of the Senate Commerce Committee, which oversees these issues. Just last month, on February 11th, the full Commerce Committee held a hearing titled “The Connected World: Examining the Internet of Things,” which examined the policy issues surrounding this exciting new space.

[Update: The U.S. Senate unanimously approved the resolution on the evening of March 24th, 2015.]

The new Senate resolution begins by stressing the many current or potential benefits associated with the Internet of Things (IoT), which, it notes, “currently connects tens of billions of devices worldwide and has the potential to generate trillions of dollars in economic opportunity.” It continues on to note how average consumers will benefit because “increased connectivity can empower consumers in nearly every aspect of [our] daily lives, including in the fields of agriculture, education, energy, healthcare, public safety, security, and transportation, to name just a few.” The resolution also discusses the commercial benefits, noting, “businesses across our economy can simplify logistics, cut costs in supply chains, and pass savings on to consumers because of the Internet of Things and innovations derived from it.” More generally, the Senators argue “the United States should strive to be a world leader in smart cities and smart infrastructure to ensure its citizens and businesses, in both rural and urban parts of the country, have access to the safest and most resilient communities in the world.”

In light of those amazing potential benefits, the resolution continues on to argue that while “the United States is the world leader in developing the Internet of Things technology,” an even more focused and dedicated policy vision is needed to promote continued success. “[W]ith a national strategy guiding both public and private entities,” it argues, “the United States will continue to produce breakthrough technologies and lead the world in innovation.” 

Toward that end, the resolution says that it is the sense of the Senate that:

(1) the United States should develop a national strategy to incentivize the development of the Internet of Things in a way that maximizes the promise connected technologies hold to empower consumers, foster future economic growth, and improve our collective social well-being;

(2) the United States should prioritize accelerating the development and deployment of the Internet of Things in a way that recognizes its benefits, allows for future innovation, and responsibly protects against misuse;

(3) the United States should recognize the importance of consensus-based best practices and communication among stakeholders, with the understanding that businesses can play an important role in the future development of the Internet of Things;

(4) the United States Government should commit itself to using the Internet of Things to improve its efficiency and effectiveness and cut waste, fraud, and abuse whenever possible; and,

(5) using the Internet of Things, innovators in the United States should commit to improving the quality of life for future generations by developing safe, new technologies aimed at tackling the most challenging societal issues facing the world.

This is a pretty solid statement from this group of Senators, who appear committed to advancing a pro-innovation, pro-growth approach to the emerging Internet of Things universe of technologies. This is exciting because it reflects the strong bipartisan approach American policymakers adopted two decades ago for the Internet more generally. America’s unified, “light-touch” Internet policy vision worked wonders for consumers and our economy before, and it can happen again thanks to a vision like the one these four Senators floated today.

As I explained in more detail when I testified at the February 11th Senate Commerce hearing on IoT issues:

America took a commanding lead in the digital economy because, in the mid-1990s, Congress and the Clinton administration crafted a nonpartisan vision for the Internet that protected “permissionless innovation” — the idea that experimentation with new technologies and business models should generally be permitted without prior approval. Congress embraced permissionless innovation by passing the Telecommunications Act of 1996 and rejecting archaic Analog Era command-and-control regulations for this exciting new medium. The Clinton administration embraced permissionless innovation with its 1997 “Framework for Global Electronic Commerce,” which outlined a clear vision for Internet governance that relied on civil society, voluntary agreements, and ongoing marketplace experimentation. This nonpartisan blueprint sketched out almost two decades ago for the Internet is every bit as sensible today as we begin crafting a policy paradigm for the Internet of Things.

I view this new Senate resolution on the Internet of Things as an effort to freshen up and extend that original vision that lawmakers crafted for the Internet back in the mid-1990s.  As I documented in my recent essay, “Why Permissionless Innovation Matters,” that vision has worked wonders for American consumers and our modern economy. Meanwhile, our international rivals languished on this front because they strapped their tech sectors with layers of regulatory red tape that thwarted digital innovation.

We got policy right once before in the United States, and we can get it right again with a policy vision like that found in this new Senate resolution for the Internet of Things.


Additional Reading

Initial Thoughts on Obama Administration’s “Privacy Bill of Rights” Proposal Fri, 27 Feb 2015 21:28:30 +0000

The Obama Administration has just released a draft “Consumer Privacy Bill of Rights Act of 2015.” Generally speaking, the bill aims to translate fair information practice principles (FIPPs) — which have traditionally been flexible and voluntary guidelines — into a formal set of industry best practices that would be federally enforced on private sector digital innovators. This includes federally-mandated Privacy Review Boards, approved by the Federal Trade Commission, the agency that will be primarily responsible for enforcing the new regulatory regime.

Many of the principles found in the Administration’s draft proposal are quite sensible as best practices, but the danger here is that they could soon be converted into a heavy-handed, bureaucratized regulatory regime for America’s highly innovative, data-driven economy.

No matter how well-intentioned this proposal may be, it is vital to recognize that restrictions on data collection could negatively impact innovation, consumer choice, and the competitiveness of America’s digital economy.

Online privacy and security are vitally important, but we should look to alternative and less costly approaches to protecting them that rely on education, empowerment, and targeted enforcement of existing laws. Serious and lasting long-term privacy protection requires a layered, multifaceted approach incorporating many solutions.

That is why flexible data collection and use policies and evolving best practices will ultimately serve consumers better than one-size-fits-all, top-down regulatory edicts. Instead of imposing these FIPPs in a rigid regulatory fashion, privacy and security best practices will need to evolve gradually with new marketplace realities and be applied in a more organic and flexible fashion, often outside the realm of public policy.

Regulatory approaches, like the Obama Administration’s latest proposal, will instead impose significant costs on consumers and the economy. Data is the fuel that powers our information economy. Privacy-related mandates that curtail the use of data to better target or personalize new services could raise costs for consumers. There is no free lunch. Something has to pay for all the wonderful free sites and services we enjoy today. If data can’t be used to cross-subsidize those services, prices will go up.

Data regulations could also indirectly cost consumers by diminishing the abundance of content and culture now supported by the data-driven economy. In other words, even if prices and paywalls don’t go up, quantity or quality could suffer if data collection is restricted.

Data regulations could also hurt the competitiveness of domestic markets and the global competitive advantage that America’s tech sector has in this space. That regulatory burden would fall hardest on smaller operators and new start-ups. Today’s “app economy” has given countless small innovators a chance to compete on even footing with the biggest players. Burdensome data collection restrictions could short-circuit the engine that drives entrepreneurial innovation among mom-and-pop companies if ad dollars get consolidated in the hands of only the larger companies that can afford to comply with new rules.

We don’t want to go down the path the European Union charted in the 1990s with heavy-handed data directives. That suffocated high-tech entrepreneurialism and innovation there. America’s Internet sector came to be the envy of the world because our more flexible, light-touch regulatory regime leaves more breathing room for competition and innovation compared to Europe’s top-down regime. We should not abandon that approach now.

Finally, the Obama Administration’s proposal deals exclusively with private sector data collection and has nothing to say about government surveillance activities. The Administration would be wise to channel its energies into that far more significant privacy problem first.


Additional Reading from Adam Thierer of the Mercatus Center

Law Review Articles:

Testimony / Filings


Mercatus Center Scholars Contributions to Cybersecurity Research Mon, 23 Feb 2015 16:46:00 +0000

by Adam Thierer & Andrea Castillo

Cybersecurity policy is a big issue this year, so we thought it would be worth reminding folks of some contributions to the literature made by Mercatus Center-affiliated scholars in recent years. Our research, which can be found here, can be condensed to these five core points:

1)         Institutions, societies, and economies are more resilient than we give them credit for and can deal with adversity, even cybersecurity threats.

See: Sean Lawson, “Beyond Cyber-Doom: Assessing the Limits of Hypothetical Scenarios in the Framing of Cyber-Threats,” December 19, 2012.

2)         Companies and organizations have a vested interest in finding creative solutions to these problems through ongoing experimentation, and they are pursuing them with great vigor.

See: Eli Dourado, “Internet Security Without Law: How Service Providers Create Order Online,” June 19, 2012.

3)         Over-arching, top-down “cybersecurity frameworks” threaten to undermine dynamism in cybersecurity and Internet governance, and could promote rent-seeking and corruption. Instead, the government should foster continued dynamic cybersecurity efforts through the development of a robust private-sector cybersecurity insurance market.

See: Eli Dourado and Andrea Castillo, “Why the Cybersecurity Framework Will Make Us Less Secure,” April 17, 2014.

4)         The language used to describe cybersecurity threats sometimes borders on “techno-panic” rhetoric that is based on “threat inflation.”

See the Lawson paper already cited as well as: Jerry Brito & Tate Watkins, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy,” April 10, 2012; and Adam Thierer, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” January 25, 2013.

5)         Finally, taking these other points into account, our scholars have concluded that academics and policymakers should be very cautious about how they define “market failure” in the cybersecurity context. Moreover, to the extent they propose new regulatory controls to address perceived problems, those rules should be subjected to rigorous benefit-cost analysis.

See: Eli Dourado, “Is There a Cybersecurity Market Failure?” January 23, 2012.


Developing cybersecurity policies—like the White House’s “Securing Cyberspace” proposal and the Senate Intelligence Committee’s risen-from-the-grave Cybersecurity Information Sharing Act (CISA) of 2015—prioritize government-led “information-sharing” among federal agencies and private organizations as a one-stop technocratic solution to the dynamic problem of cybersecurity provision. But, as Eli and Andrea pointed out in a Mercatus chart series from this year, the federal government’s own success with internal information-sharing policies has been abysmal for decades.

The Federal Information Security Management Act of 2002 compelled federal investment in IT security infrastructure along with internal information-sharing of system breaches and proactive responses among agencies. Apparently, this has not worked like a charm. The chart shows that reported federal breaches have risen by over 1,000 percent since 2006, despite the billions of dollars spent on agency systems and information-sharing capabilities over the same period.
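For a sense of magnitude, the arithmetic behind a 1,000-plus percent rise is straightforward. The figures below (roughly 5,500 reported federal incidents in fiscal 2006 versus roughly 61,000 in fiscal 2013) are approximations of the GAO numbers underlying the chart, not quotations from it, and should be checked against the original data:

```python
# Approximate reported federal security incidents (rounded from GAO data;
# these are illustrative figures, not the chart's exact values).
incidents_2006 = 5_500
incidents_2013 = 61_000

pct_increase = (incidents_2013 - incidents_2006) / incidents_2006 * 100
print(f"{pct_increase:.0f}% increase")  # on the order of a 1,000% increase
```

However the exact endpoints are chosen, the direction of the trend is the point: more spending and more mandated sharing coincided with an order-of-magnitude rise in reported incidents.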

Many of the same agencies that would be imbued with power to coordinate information-sharing among private and government entities through CISA and other cybersecurity proposals were already responsible for coordinating threat-sharing at the federal level. These are the National Institute of Standards and Technology (NIST), the Office of Management and Budget (OMB), and the Department of Homeland Security (DHS). Are we to believe these bodies will become magically efficient once they have more power to cajole the private sector?

Government Accountability Office (GAO) reports analyzing the failure of federal information security practices and threat coordination find that technocratic solutions that look perfectly rational and controlled on paper break down when imposed from above on employees who have no buy-in. One report concludes, “As we and inspectors general have long pointed out, federal agencies continue to face challenges in effectively implementing all elements of their information security programs.” Repeating the same failed policies in the private sector is unlikely to result in success.

Cybersecurity provision is too important an issue to be left to brittle, technocratic policies with proven track records of failure. Rather, good cybersecurity policy will be grounded in an understanding of the incentives and norms that have allowed the Internet to develop and thrive, and will target specific sources of failure.

Industry analyses find again and again that with cybersecurity, the problem exists between chair and keyboard—“human error,” not insufficient government meddling, is responsible for the vast majority of cyber incidents. Introducing more error-prone humans to the equation, as government cybersecurity plans seek to do, will only complicate the problem while neglecting the underlying factors that need addressing.

Cybersecurity will be an issue we continue to cover closely at the Mercatus Center Technology Policy Program.

Initial Thoughts on New FAA Drone Rules Mon, 16 Feb 2015 20:08:55 +0000

Yesterday afternoon, the Federal Aviation Administration (FAA) finally released its much-delayed rules for private drone operations. As The Wall Street Journal points out, the rules “are about four years behind schedule,” but now the agency is asking for expedited public comments over the next 60 days on the whopping 200-page order. (You have to love the irony in that!) I’m still going through all the details in the FAA’s new order — and here’s a summary of its major provisions — but here are some high-level thoughts about what the agency has proposed.

Opening the Skies…

  • The good news is that, after a long delay, the FAA is finally taking some baby steps toward freeing up the market for private drone operations.
  • Innovators will no longer have to operate entirely outside the law in a sort of drone black market. There’s now a path to legal operation. Specifically, operators of small unmanned aircraft systems (UAS) weighing under 55 lbs. will be able to go through a formal certification process and, after passing a test, operate their systems.

… but Not Without Some Serious Constraints

  • The problem is that the rules only open the skies incrementally for drone innovation.
  • You can’t read through these 200 pages of regulations without getting the sense that the FAA still wishes that private drones would just go away.
  • For example, the FAA still wants to keep a bit of a leash on drones by (1) limiting flights to daylight hours, (2) requiring that drones remain within the operator’s visual line of sight at all times, and (3) prohibiting flights over people.
  • Those three limitations will hinder some obvious innovations, such as same-day drone delivery for small packages, which Amazon has said it is interested in pursuing. (Amazon isn’t happy about these restrictions.)

Impact on Small Innovators?

  • But what I worry about more are all the small ‘Mom-and-Pop’ drone entrepreneurs, who want to use airspace as a platform for open, creative innovation. These folks are out there, but they don’t have the name recognition or the resources to weather these restrictions the way that Amazon can. After all, if Amazon has to abandon same-day drone delivery because of the FAA rules, the company will still have a thriving commercial operation to fall back on. But all those small, nameless drone innovators currently experimenting with new, unforeseeable innovations may not be so lucky.
  • As a result, there’s a real threat here of drone entrepreneurs bolting the U.S. and offering their services in more hospitable environments if the FAA doesn’t take a more flexible approach.
  • [For more discussion of this problem, see my recent essay on “global innovation arbitrage.”]

Impact on News-Gathering?

  • It’s also worth asking how these rules might limit legitimate news-gathering operations by both journalistic enterprises and average citizens. If we can never fly a drone over a crowd of people, as the rules stipulate, that places some rather serious constraints on our ability to capture real-time images and video from events of societal importance (such as political protests, sporting events, or concerts).
  • [For more discussion about this, see this September 2014 Mercatus Center working paper, “News from Above: First Amendment Implications of the Federal Aviation Administration Ban on Commercial Drones.”]

Still Time to Reconsider More Flexible Rules

  • Of course, these aren’t final rules and the agency still has time to relax some of these restrictions to free the skies for less fettered private drone operation.
  • I suspect that drone innovators will protest the three specific limitations I identified above and ask for a more flexible approach to enforcing those rules.
  • But it’s good that the FAA has finally taken the first step toward decriminalizing private drone operations in the United States.


Additional Reading

What Cory Booker Gets about Innovation Policy Mon, 16 Feb 2015 15:32:43 +0000

Last Wednesday, it was my great pleasure to testify at a Senate Commerce Committee hearing entitled, “The Connected World: Examining the Internet of Things.” The hearing focused “on how devices… will be made smarter and more dynamic through Internet technologies. Government agencies like the Federal Trade Commission, however, are already considering possible changes to the law that could have the unintended consequence of slowing innovation.”

But the session went well beyond the Internet of Things and became a much more wide-ranging discussion about how America can maintain its global leadership for the next-generation of Internet-enabled, data-driven innovation. On both sides of the aisle at last week’s hearing, one Senator after another made impassioned remarks about the enormous innovation opportunities that were out there. While doing so, they highlighted not just the opportunities emanating out of the IoT and wearable device space, but also many other areas, such as connected cars, commercial drones, and next-generation spectrum.

I was impressed by the energy and nonpartisan vision that the Senators brought to these issues, but I wanted to single out the passionate statement that Sen. Cory Booker (D-NJ) delivered when his turn came to speak, because he very eloquently articulated what’s at stake in the battle for global innovation supremacy in the modern economy. (Sen. Booker’s remarks were not published, but you can watch them starting at the 1:34:00 mark of the hearing video.)

Embrace the Opportunity

First, Sen. Booker stressed the enormous opportunity with the Internet of Things. “This is a phenomenal opportunity for a bipartisan, profoundly patriotic approach to an issue that can explode our economy. I think that there are trillions of dollars, creating countless jobs, improving quality of life, [and] democratizing our society,” he said. “We can’t even imagine the future that this portends of, and we should be embracing that.”

Sen. Booker has it exactly right. And for more details about the enormous innovation opportunities associated with the Internet of Things, see Section 2 of my new law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” which provides concrete evidence.

Protect America’s Competitive Advantage in the Innovation Age

Second, Sen. Booker highlighted the importance of getting our policy vision right to achieve those opportunities. He noted that “a lot of my concerns are what my Republican colleagues also echoed, which is we should be doing everything possible to encourage this and nothing to restrict it.”

“America right now is the net exporter of technology and innovation in the globe, and we can’t lose that advantage,” he said, and “we should continue to be the global innovators on these areas.” He continued on to say:

And so, from copyright issues, security issues, privacy issues… all of these things are worthy of us wrestling and grappling with, but to me we cannot stop human innovation and we can’t give advantages in human innovation to other nations that we don’t have. America should continue to lead.

This is something I have been writing actively about now for many years and I agree with Sen. Booker that America needs to get our policy vision right to ensure we don’t lose ground in the international competition to see who will lead the next wave of Internet-enabled innovation. As I noted in my testimony, “If America hopes to be a global leader in the Internet of Things, as it has been for the Internet more generally over the past two decades, then we first have to get public policy right. America took a commanding lead in the digital economy because, in the mid-1990s, Congress and the Clinton administration crafted a nonpartisan vision for the Internet that protected “permissionless innovation”—the idea that experimentation with new technologies and business models should generally be permitted without prior approval.”

Meanwhile, as I documented in my longer essay, “Why Permissionless Innovation Matters: Why does economic growth occur in some societies & not in others?” our international rivals languished on this front because they strapped their tech sectors with layers of regulatory red tape that thwarted digital innovation.

Reject Fear-Based Policymaking

Third, and perhaps most importantly, Sen. Booker stressed how essential it was that we reject a fear-based approach to public policymaking. As he noted at the hearing about these new information technologies, “there’s a lot of legitimate fears, but in the same way of every technological era, there must have been incredible fears.”

He cited, for example, the rise of air travel and the onset of humans taking flight. Sen. Booker correctly noted that while that must have been quite jarring at first, we quickly came to realize the benefits of that new innovation. The same will be true for new technologies such as the Internet of Things, connected cars, and private drones, Booker argued. In each case, early fears about these technologies could lead to an overly precautionary approach to policy. “But for us to do anything to inhibit that leap in humanity to me seems unfortunate,” he said.

Once again, the Senator has it exactly right. As I noted in my law review article on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as my recent essay, “Muddling Through: How We Learn to Cope with Technological Change,” humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. More often than not, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Booker gets that and understands why we need to be patient to allow that process to unfold once again so that we can enjoy the abundance of riches that will accompany a more innovative economy.

Avoiding Global Innovation Arbitrage

Sen. Booker also highlighted how some existing government legal and regulatory barriers could hold back progress. On the wireless spectrum front he noted that “the government hoards too much spectrum and there is a need for more spectrum out there. Everything we are talking about,” he argued, “is going to necessitate more spectrum.” Again, 100% correct. Although some spectrum reform proposals (licensed vs. unlicensed, for example) will still prove contentious, we can at least all agree that we have to work together to find ways to open up more spectrum since the coming Internet of Things universe of technologies is going to demand lots of it.

Booker also noted that another area where fear undermines American leadership is the issue of private drone use. He noted that, “the potential possibilities for drone technology to alleviate burdens on our infrastructure, to empower commerce, innovation, jobs… to really open up unlimited opportunities in this country is pretty incredible to me.”

The problem is that existing government policies, enforced by the Federal Aviation Administration (FAA), have been holding back progress. And that has had consequences in terms of global competitiveness. “As I watch our government go slow in promulgating rules holding back American innovation,” Booker said, “what happened as a result of that is that innovation has spread to other countries that don’t have these rules (or have) put in place sensible regulations. But now we’re seeing technology exported from America and going other places.”

Correct again! I wrote about this problem in a recent essay on “global innovation arbitrage,” in which I noted how “Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.”

That’s already happening with drone innovation, as I documented in that piece. Evidence suggests that the FAA’s heavy-handed and overly precautionary approach to drones has encouraged some innovators to flock overseas in search of a more hospitable regulatory environment.

Luckily, just this weekend, the FAA finally announced its (much-delayed) rules for private drone operations. (Here’s a summary of those rules.) Unfortunately, the rules are a bit of a mixed bag: they provide some greater leeway for very small drones, but they remain too restrictive to allow other innovative applications, such as widespread drone delivery (which has Amazon, among others, unhappy).

Bottom line: if our government doesn’t take a more flexible, light-touch approach to these and other cutting-edge technologies, then some of our most creative minds and companies are going to bolt.

I dealt with all of these innovation policy issues in far more detail in my latest little book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, which I condensed further still into this essay on, “Embracing a Culture of Permissionless Innovation.” But Sen. Booker has offered us an even more concise explanation of just what’s at stake in the battle for innovation leadership in the modern economy. His remarks point the way forward and illustrate, as I have noted before, that innovation policy can and should be a nonpartisan issue.



Additional Reading


My Testimony for Senate Internet of Things Hearing Wed, 11 Feb 2015 14:31:34 +0000

This morning at 9:45, the Senate Committee on Commerce, Science, and Transportation is holding a full committee hearing entitled, “The Connected World: Examining the Internet of Things.” According to the Committee press release, the hearing “will focus on how devices — from home heating systems controlled by users online, to wearable devices that track health and activity with the help of Internet-based analytics — will be made smarter and more dynamic through Internet technologies. Government agencies like the Federal Trade Commission, however, are already considering possible changes to the law that could have the unintended consequence of slowing innovation.”

It is my pleasure to have been invited to testify at this hearing. I’ve long had an interest in the policy issues surrounding the Internet of Things. All my relevant research products can be found online here, including my latest law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation.”

My testimony, which can be found on the Mercatus Center website here, begins by highlighting the three general conclusions of my work:

  1. First, the Internet of Things offers compelling benefits to consumers, companies, and our country’s national competitiveness that will only be achieved by adopting a flexible policy regime for this fast-moving space.
  2. Second, while there are formidable privacy and security challenges associated with the Internet of Things, top-down or one-size-fits-all regulation will limit innovative opportunities.
  3. Third, with those first two points in mind, we should seek alternative and less costly approaches to protecting privacy and security that rely on education, empowerment, and targeted enforcement of existing legal mechanisms. Long-term privacy and security protection requires a multifaceted approach incorporating many flexible solutions.

I continue on to elaborate on each point and then conclude my testimony on a note of optimism:

we should also never forget that, no matter how disruptive these new technologies may be in the short term, we humans have an extraordinary ability to adapt to technological change and bounce back from adversity. That same resilience will be true for the Internet of Things. We should remain patient and continue to embrace permissionless innovation to ensure that the Internet of Things thrives and American consumers and companies continue to be global leaders in the digital economy.

My testimony also includes 7 appendices offering more detail for those interested. Two of those appendices focus on defining the parameters of the Internet of Things and then documenting the projected economic impact associated with this rapidly growing market. The other appendices reproduce essays I have published here before, including articles about the Federal Trade Commission’s recent Internet of Things report as well as my thoughts on how to craft a nonpartisan policy vision for the Internet of Things.

Finally, here’s a list of most of my recent work on Internet of Things and wearable technology policy issues for those interested in reading even more about the topic:

Don’t Hit the (Techno-)Panic Button on Connected Car Hacking & IoT Security Tue, 10 Feb 2015 20:15:02 +0000

On Sunday night, 60 Minutes aired a feature with the ominous title, “Nobody’s Safe on the Internet,” that focused on connected car hacking and Internet of Things (IoT) device security. It was followed yesterday morning by the release of a new report from the office of Senator Edward J. Markey (D-Mass.) called Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk, which focused on connected car security and privacy issues. Employing more than a bit of techno-panic flare, these reports basically suggest that we’re all doomed.

On 60 Minutes, we meet former game developer turned Department of Defense “cyber warrior” Dan (“call me DARPA Dan”) Kaufman–and learn his fears of the future: “Today, all the devices that are on the Internet [and] the ‘Internet of Things’ are fundamentally insecure. There is no real security going on. Connected homes could be hacked and taken over.”

60 Minutes reporter Lesley Stahl, for her part, is aghast. “So if somebody got into my refrigerator,” she ventures, “through the internet, then they would be able to get into everything, right?” Replies DARPA Dan, “Yeah, that’s the fear.” Prankish hackers could make your milk go bad, or hack into your garage door opener, or even your car.

This segues to a humorous segment wherein Stahl takes a networked car for a spin. DARPA Dan and his multiple research teams have been hard at work remotely programming this vehicle for years. A “hacker” on DARPA Dan’s team proceeded to torment poor Lesley with automatic windshield wiping, rude and random beeps, and other hijinks. “Oh my word!” exclaims Stahl.

Never mind that we are told that the “hackers” who “hacked” into this car had been directly working on its systems for years—a luxury scarcely available to the shadowy malicious hackers about whom DARPA Dan and his team so hoped to frighten us. The careful setup, editing, and Lesley Stahl’s squeals made for convincing theater.

Then there’s the Markey report. On the surface, the findings appear grim. For instance, we are warned that “Nearly 100% of cars on the market include wireless technologies that could pose vulnerabilities to hacking or privacy intrusions.” Nearly 100%? We’re practically naked out there! But digging through the report, we learn that the basis for this claim is that most of the 16 manufacturers surveyed responded that 100% of their vehicles are equipped with wireless entry points (WEPs)—like Bluetooth, Wi-Fi, navigation, and anti-theft features. Because these features “could pose vulnerabilities,” they are listed as a threat—one that lurks in nearly 100% of the cars on the market, at that.

Much of the report is similarly panicky and sometimes humorous (complaint #3: “many manufacturers did not seem to understand the questions posed by Senator Markey.”) The report concludes that the “alarmingly inconsistent and incomplete state of industry security and privacy practice,” warrants recommendations that federal regulators — led by the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) — “promulgate new standards that will protect the data, security and privacy of drivers in the modern age of increasingly connected vehicles.”

Take a Deep Breath

As we face an uncertain future full of rapidly-evolving technologies, it’s only natural that some might feel a little anxiety about how these new machines and devices operate. Despite the exaggerated and sometimes silly nature of techno-panic reports like these, they reflect many people’s real and understandable concerns about new technologies.

But the problem with these reports is that they embody a “panic-first” approach to digital security and privacy issues. It is certainly true that our cars are becoming rolling computers, complete with an arsenal of sensors and networking technologies, and the rise of the Internet of Things means almost everything we own or come into contact with will possess networking capabilities. Consequently, just as our current generation of computing and communications technologies is vulnerable to some forms of hacking, our cars and IoT devices likely will be as well.

But don’t you think that automakers and IoT developers know that? Are we really to believe that journalists, congressmen, and DARPA Dan have a greater incentive to understand these issues than the manufacturers whose companies and livelihoods are on the line? And wouldn’t these manufacturers only take on these risks if consumer demand and expected value supported them? Watching the 60 Minutes spot and reading through the Markey report, one is led to think that innovators in this space are completely oblivious to these threats, simply don’t care enough to address them, and don’t have any plans in motion. But that is lunacy.

No Mention of Liability?

To begin, neither report even mentions the possibility of massive liability for future hacking attacks on connected cars or IoT devices. That is amazing considering how the auto industry already attracts an absolutely astonishing amount of litigation activity. (Ambulance-chasing is a full-time legal profession, after all.) Thus, to the extent that some automakers don’t want to talk about everything they are doing to address security issues, it’s likely because they are still figuring out how to address the various vulnerabilities out there without attracting the attention of either enterprising hackers or trial lawyers.

Nonetheless, contrary to the absurd statement by Mr. Kaufman that “There is no real security going on” for connected cars or the Internet of Things, the reality is that these are issues that developers are actively studying and trying to address. Manufacturers of connected devices know that: (1) nobody wants to own or use devices that are fundamentally insecure or dangerous; and (2) if they sell such devices to the public, they are in for a world of hurt once the trial lawyers see the first headlines about it.

It is also still quite unclear how big the threat really is. Writing over at Forbes yesterday, Doug Newcomb notes that “the threat of car hacking has largely been overblown by the media – there’s been only one case of a malicious car hack, and that was an inside job by a disgruntled former car dealer employee. But it’s a surefire way to get the attention of the public and policymakers,” he correctly observes. Newcomb also interviewed Damon McCoy, an assistant professor of computer science at George Mason University and a car security researcher, who noted that car hacking hasn’t become prevalent and that “Given the [monetary] motivation of most hackers, the chance of [automotive hacking] is very low.”

Security is a Dynamic, Evolving Process

Regardless, the notion that we can just clean this whole device security situation up with a single set of federal standards, as the Markey report suggests, is appealing but fanciful. “Security threats are constantly changing and can never be holistically accounted for through even the most sophisticated flowcharts,” observed my Mercatus Center colleagues Eli Dourado and Andrea Castillo in their recent white paper on “Why the Cybersecurity Framework Will Make Us Less Secure.” “By prioritizing a set of rigid, centrally designed standards, policymakers are neglecting potent threats that are not yet on their radar,” Dourado and Castillo note elsewhere.

We are at the beginning of a long process. There is no final destination when it comes to security; it’s a never-ending process of devising and refining policies to address vulnerabilities on the fly. The complex problem of cybersecurity readiness requires dynamic solutions that properly align incentives, improve communication and collaboration, and encourage good personal and organizational stewardship of connected systems. Implementing the brittle bureaucratic standards that Markey and others propose could have the tragic unintended consequence of rendering our devices even less secure.

Standards Are Developing Rapidly

Meanwhile, the auto industry has already come up with privacy standards that go above and beyond what most other digital innovators apply to their own products today. Here are the Auto Alliance’s “Consumer Privacy Protection Principles: Privacy Principles for Vehicle Technologies and Services,” which 23 major automobile manufacturers agreed to abide by. And, according to a press release yesterday, “automakers are currently working to establish an Information Sharing Analysis Center (or “Auto-ISAC”) for sharing vehicle cybersecurity information among industry stakeholders.”

Again, progress continues and standards are evolving. This needs to be a flexible, evolutionary process, instead of a static, top-down, one-size-fits-all bureaucratic political proceeding.

We can’t set down security and privacy standards in stone for fast-moving technologies like these for another reason, and one I am constantly stressing in my work on “Why Permissionless Innovation Matters.” If we spend all our time worrying about hypothetical worst-case scenarios — and basing our policy interventions on a parade of hypothetical horribles — then we run the risk that best-case scenarios will never come about.  As analysts at the Center for Data Innovation correctly argue, policymakers should only intervene to address specific, demonstrated harms. “Attempting to erect precautionary regulatory barriers for purely speculative concerns is not only unproductive, but it can discourage future beneficial applications of the Internet of Things.” And the same is true for connected cars.

Trade-Offs Matter

Technopanic indulgence isn’t always merely silly or annoying—it can be deadly.

“During the four deadliest wars the United States fought in the 20th century, 39 percent more Americans were dying in motor vehicles” than on the battlefield. So writes Washington Post reporter Matt McFarland in a powerful new post today. The ongoing toll associated with human error behind the wheel is falling but remains absolutely staggering, with almost 100 people losing their lives and almost 6,500 people injured every day.

We must never fail to appreciate the trade-offs at work when we are pondering precautionary regulation. Ryan Hagemann and I wrote about these issues in our recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” That paper, which has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy, outlines the many benefits of autonomous or semi-autonomous systems and discusses the potential cost of delaying their widespread adoption.

When it comes to the various security, privacy, and ethical considerations related to intelligent vehicles, Hagemann and I argue that they “need to be evaluated against the backdrop of the current state of affairs, in which tens of thousands of people die each year in auto-related accidents due to human error.” We continue on later in the paper:

Autonomous vehicles are unlikely to create 100 percent safe, crash-free roadways, but if they significantly decrease the number of people killed or injured as a result of human error, then we can comfortably suggest that the implications of the technology, as a whole, are a boon to society. The ethical underpinnings of what makes for good software design and computer-generated responses are a difficult and philosophically robust space for discussion. Given the abstract nature of the intersection of ethics and robotics, a more detailed consideration and analysis of this space must be left for future research. Important work is currently being done on this subject. But those ethical considerations must not derail ongoing experimentation with intelligent-vehicle technology, which could save many lives and have many other benefits, as already noted. Only through ongoing experimentation and feedback mechanisms can we expect to see constant improvement in how autonomous vehicles respond in these situations to further minimize the potential for accidents and harms. (p. 42-3)

As I noted here in another recent essay, “anything we can do to reduce it significantly is something we need to be pursuing with great vigor, even while we continue to sort through some of those challenging ethical issues associated with automated systems and algorithms.”

No Mention of Alternative Solutions

Finally, it is troubling that neither the 60 Minutes segment nor the Markey report spend any time on alternative solutions to these problems. In my forthcoming law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” I devote the second half of the 90-page paper to constructive solutions to the sort of complex challenges raised in the 60 Minutes segment and the Markey report.

Many of the solutions I discuss in that paper — such as education and awareness-building efforts, empowerment solutions, the development of new social norms, and so on – aren’t even touched on by the reports. That’s a real shame because those methods could go a long way toward helping to alleviate many of the issues the reports identify.

We need a better public dialogue than this about the future of connected cars and Internet of Things security. Political scare tactics and techno-panic journalism are not going to help make the world a safer place. In fact, by whipping up a panic and potentially discouraging innovation, reports such as these can actually serve to prevent critical, life-saving technologies that could change society for the better.


Additional Reading


My State of the Net panel on Bitcoin Tue, 10 Feb 2015 16:19:50 +0000

A couple weeks ago at State of the Net, I was on a panel on Bitcoin moderated by Coin Center’s Jerry Brito. The premise of the panel was that the state of Bitcoin is like the early Internet. Somehow we got policy right in the mid-1990s to allow the Internet to become the global force it is today. How can we reprise this success with Bitcoin today?

In my remarks, I recall making two basic points.

First, in my opening remarks, I argued that on a technical level, the comparison between Bitcoin and the Internet is apt.

What makes the Internet different from the telecommunications media that came before is the separation of an application layer from a transport layer. The transport layer (and the layers below it) does the work of getting bits to where they need to go. This frees anybody up to develop new applications on a permissionless basis, taking this transport capability basically for granted.

Earlier telecom systems did not function this way. The applications were jointly defined with the transport mechanism. Phone calls are defined in the guts of the network, not at the edges.

Like the Internet, Bitcoin separates out not a transport layer, but a fiduciary layer, from the application layer. The blockchain gives applications access to a fiduciary mechanism that they can take basically for granted.

No longer will fiduciary applications (payments, contracts, asset exchange, notary services, voting, etc.) and fiduciary mechanisms need to be developed jointly. Unwieldy fiduciary mechanisms (banks, legal systems, oversight) will be able to be replaced with computer code.
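To make the layering argument concrete, here is a toy sketch (all class and function names are my own illustrative inventions, not any real Bitcoin API): applications program against a generic fiduciary interface, just as Internet applications program against a generic transport layer without caring how the bits get delivered.

```python
from abc import ABC, abstractmethod

class FiduciaryLayer(ABC):
    """A generic trust mechanism that applications can take for granted,
    the way Internet apps take the transport layer for granted."""
    @abstractmethod
    def record(self, statement: str) -> str:
        """Durably record a statement; return a receipt identifier."""

class ToyBlockchain(FiduciaryLayer):
    """A stand-in for the blockchain: an append-only ledger."""
    def __init__(self):
        self.ledger = []

    def record(self, statement: str) -> str:
        self.ledger.append(statement)
        return f"entry-{len(self.ledger)}"

# Any fiduciary application (payments, notary services, voting) can now
# be written against the interface, without jointly defining the trust
# mechanism underneath it:
def notarize(layer: FiduciaryLayer, document: str) -> str:
    return layer.record(f"notarized:{document}")

print(notarize(ToyBlockchain(), "deed"))  # prints "entry-1"
```

Swapping in a different fiduciary mechanism (a bank, a legal registry) would not change the application code at all; that separation is the permissionless property the panel discussed.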

Second, in the panel’s back and forth, particularly with Chip Poncy, I argued that technological change may necessitate a rebalancing of our laws and regulations on financial crimes.

We have payment systems because they improve human welfare. We have laws against certain financial activities because those activities harm human welfare. Ideally, we would balance the gains against the losses to come up with the optimal, human-welfare-maximizing level of regulation.

However, when a new technology like the blockchain comes along, the gains from payment freedom increase. People in a permissionless environment will be able to accomplish more than before. This means that we have to redo our balancing calculus. Because the benefits of unimpeded payments are higher, we need to tolerate more harms from unsavory financial activities if our goal remains to maximize human welfare.
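That rebalancing can be sketched with a toy model (the numbers and functional forms below are purely illustrative assumptions of mine, not anything from the panel): welfare is the benefit of payment freedom minus the harm from illicit activity, and when a new technology raises the payoff to freedom, the welfare-maximizing level of regulation falls.

```python
def optimal_regulation(benefit_scale: float) -> float:
    """Toy welfare calculus: pick the regulation level r in [0, 1] that
    maximizes W(r) = benefits of freedom - harms tolerated.
    benefit_scale models how much value unimpeded payments create."""
    levels = [r / 100 for r in range(101)]

    def welfare(r: float) -> float:
        freedom = 1 - r                     # freedom shrinks as regulation rises
        benefits = benefit_scale * freedom  # gains from permissionless payments
        harms = freedom ** 2                # illicit activity grows when r is low
        return benefits - harms

    return max(levels, key=welfare)

# When technology raises the value of payment freedom (benefit_scale up),
# the welfare-maximizing amount of regulation falls:
print(optimal_regulation(0.5))  # prints 0.75
print(optimal_regulation(2.0))  # prints 0.0
```

The point is not the particular curves, only the comparative statics: a higher benefit to unimpeded payments shifts the optimum toward tolerating more harm.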

Thanks to my co-panelists for a great discussion.

Wanted: talented, gritty libertarians who are passionate about technology Mon, 09 Feb 2015 18:07:04 +0000

Ten or fifteen years ago, when I sat around and thought about what I would do with my life, I never considered directing the technology policy program at Mercatus. It’s not exactly a career track you can get on — not like being a lawyer, a doctor, a professor.

One of the things I loved about Peter Thiel’s book Zero to One is that it is self-consciously anti-track. The book is a distillation of Thiel’s 2012 Stanford course on startups. In the preface, he writes,

“My primary goal in teaching the class was to help my students see beyond the tracks laid down by academic specialties to the broader future that is theirs to create.”

I think he is right. The modern economy provides unprecedented opportunity for people with talent and grit and passion to do unique and interesting things with their lives, not just follow an expected path.

This is great news if you are someone with talent and grit and passion. Average is Over. What you have is valuable. You can do amazing things. We want to work with you, invest in you—maybe even hire you—and unleash you upon the world.

The biggest problem we have is finding you.

There is no technology policy career track, nor would we want there to be one. Frankly, we don’t want someone who needs the comfort and safety of a future that someone else designed for him.

Unfortunately, this also means that there is no defined pool of talented, gritty libertarians who are passionate about technology for Mercatus or our tech policy allies to hire from.

So how are we supposed to find you? We need your help. You need to do two things.

First, get started now.

Just start doing technology policy.

Write about it every day. Say unexpected things; don’t just take a familiar side in a drawn-out debate. Do something new. What is going to be the big tech policy issue two years from now? Write about that. Let your passion show.

The tech policy world is small enough — and new ideas rare enough — that doing this will get you a following in our community.

It also sends a very strong signal come interview time. Anybody can say that they are talented, or gritty, or passionate. You’ll be able to show it.

I literally got hired because of a blog post. There were other helpful inputs, of course — credentials, references, some contract work that turned out well. But what initially got me on Mercatus’s radar screen was a single post.

Second, get in touch.

Everyone on the Mercatus tech policy team is highly Googleable (on Twitter, here’s me, Adam, Brent, and Andrea). We want to know who you are, what you are doing, and what your plans are.

There is almost no downside to this.

Best case scenario: we create a position for you. No one on our team was hired to fill a vacancy. Instead, we hire people because it’s too good of an opportunity for us to pass up.

Alternatively, maybe we’ll pay you to write a paper or a book.

If for some reason you’re not a great fit for Mercatus, we can connect you with allied groups in tech policy. My discussions with people running other tech policy programs confirm that finding talent is an ever-present problem for them, too.

And at a minimum, we’ll know who you are when we see your work online.

We are serious about winning the battle of ideas over technology, but we can’t do it alone. As technology policy eats the world, the opportunities in our field are going to grow. Let us know if you want to get in on this.

This Is Not How We Should Ensure Net Neutrality Fri, 06 Feb 2015 00:30:30 +0000

Chairman Thomas E. Wheeler of the Federal Communications Commission unveiled his proposal this week for regulating broadband Internet access under a 1934 law. Since there are three Democrats and two Republicans on the FCC, Wheeler’s proposal is likely to pass on a party-line vote and is almost certain to be appealed.

Free market advocates have pointed out that FCC regulation is not only unnecessary for continued Internet openness, but it could lead to years of disruptive litigation and jeopardize investment and innovation in the network.

Writing in WIRED magazine, Wheeler argues that the Internet wouldn’t even exist if the FCC hadn’t mandated open access for telephone network equipment in the 1960s, and that his mid-1980s startup was doomed because the cable networks (on which it depended) were closed while the phone network was open. He also predicts that regulation can be accomplished while encouraging investment in broadband networks, because there will be “no rate regulation, no tariffs, no last-mile unbundling.” There are a number of problems with Chairman Wheeler’s analysis. First, let’s examine the historical assumptions that underlie his proposal.

The FCC had to mandate open access for network equipment in the late 1960s only because of the unintended consequences of another regulatory objective—that of ensuring that basic local residential phone service was “affordable.” In practice, strict price controls required phone companies to set local rates at or below cost. The companies were permitted to earn a profit only by charging high prices for all of their other services including long-distance. Open access threatened this system of cross-subsidies, which is why the FCC strongly opposed open access for years. The FCC did not seriously rethink this policy until it was forced to do so by a federal appeals court ruling in the 1950s. That court decision set the stage for the FCC’s subsequent open access rules. Wheeler is trying to claim credit for a heroic achievement, when actually all the commission did was clean up a mess it created.

The failure of Wheeler’s Canadian government-subsidized startup in 1985 had nothing to do with open access, according to Wikipedia. NABU Network was attempting to sell up to 6.4 Mbps broadband service over Canadian cable networks notwithstanding the extremely limited capabilities of those networks at the time. For one thing, most cable networks of that era were not bi-directional. The reason Wheeler’s startup didn’t offer broadband over open telephone networks is that under-investment rendered those networks unsuitable. The copper loop simply didn’t offer the same bandwidth as coaxial cable. Why was there under-investment? Because of over-regulation.

Next, let’s examine Chairman Wheeler’s prediction that new regulation won’t discourage investment because there will be “no rate regulation, no tariffs, no last-mile unbundling.” Let’s be real. Wheeler simply cannot guarantee there will be no rate regulation, no tariffs, no last-mile unbundling nor other inappropriate regulation in the future. Anyone can petition the FCC to impose more regulation at any time, and nothing will prevent the commission from going down that road. The FCC will become a renewed target for special-interest pleading if Chairman Wheeler’s proposal is adopted by the commission and upheld by the courts.

Wheeler’s proposal would reclassify broadband as a “telecommunications” service notwithstanding the fact that the commission has previously found that broadband is an “information” service and the Supreme Court upheld that determination. These terms are clearly defined in the 1996 telecom act, in which bipartisan majorities in Congress sought to create a regulatory firewall. Communications services would continue to be regulated until they became competitive. Services that combine communications and computing (“information” services) would not be regulated at all. Congress wanted to create appropriate incentives for firms that provide communications service to invest and innovate by adding computing functionality. Congress was well aware that the commission tried over many years to establish a bright-line separation between communications and computing, and it failed. It’s an impossible task, because communications and computing are becoming more integrated all the time. The solution was to maintain legacy regulation for legacy network services, and open the door to competition for advanced services. The key issue now is whether broadband is a competitive industry. If the broadband offerings of cable operators, telephone companies and wireless providers are all taken into account, the answer is clearly yes.

In the view of Chairman Wheeler and others, regulation is needed to ensure the Internet is fast, fair and open. In reality, the Internet wants to be fast, fair and open. So-called “walled garden” experiments of the past have all ended in failure. Before broadband, the open telephone network was significantly more profitable than the closed cable network. Now, broadband either is or soon will become more profitable than cable. Since open networks are more profitable than closed networks, legacy regulation is more than likely to be unnecessary and almost certain to be counter-productive. Internet openness is chiefly a function not of regulation but of innovation and investment in bandwidth abundance. With sufficient bandwidth, all packets travel at the speed of light.

Then again, this debate isn’t really about open networks. Republican leaders in Congress are offering to pass a bill that would prevent blocking and paid prioritization, and they can’t find any Democratic co-sponsors. That’s because the bill would prohibit reclassification of broadband as a “telecommunications” service, a reclassification that would give the FCC a green light to regulate like it’s 1934. The idea that we need to give the commission unfettered authority so it can enact a limited amount of “smart” regulation that can be accomplished while encouraging private investment–and that we can otherwise rely on the FCC to practice regulatory restraint and not abuse its power–sounds a lot like the sales pitch for the Affordable Care Act, i.e., that we can have it all, there are no trade-offs. Right.

Permissionless Innovation & Commercial Drones Wed, 04 Feb 2015 23:20:57 +0000

Farhad Manjoo’s latest New York Times column, “Giving the Drone Industry the Leeway to Innovate,” discusses how the Federal Aviation Administration’s (FAA) current regulatory morass continues to thwart many potentially beneficial drone innovations. I particularly appreciated this point:

But perhaps the most interesting applications for drones are the ones we can’t predict. Imposing broad limitations on drone use now would be squashing a promising new area of innovation just as it’s getting started, and before we’ve seen many of the potential uses. “In the 1980s, the Internet was good for some specific military applications, but some of the most important things haven’t really come about until the last decade,” said Michael Perry, a spokesman for DJI [maker of Phantom drones]. . . . He added, “Opening the technology to more people allows for the kind of innovation that nobody can predict.”

That is exactly right and it reflects the general notion of “permissionless innovation” that I have written about extensively here in recent years. As I summarized in a recent essay: “Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention or business model will bring serious harm to individuals, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.”

The reason that permissionless innovation is so important is that innovation is more likely in political systems that maximize breathing room for ongoing economic and social experimentation, evolution, and adaptation. We don’t know what the future holds. Only incessant experimentation and trial-and-error can help us achieve new heights of greatness. If, however, we adopt the opposite approach of “precautionary principle”-based reasoning and regulation, then these chances for serendipitous discovery evaporate. As I put it in my recent book, “living in constant fear of worst-case scenarios—and premising public policy upon them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

In this regard, the unprecedented growth of the Internet is a good example of how permissionless innovation can significantly improve consumer welfare and our nation’s competitive status relative to the rest of the world. And this also holds lessons for how we treat commercial drone technologies, as Jerry Brito, Eli Dourado, and I noted when filing comments with the FAA back in April 2013. We argued:

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators. We therefore urge the FAA not to impose any prospective restrictions on the use of commercial UASs without clear evidence of actual, not merely hypothesized, harm.

Manjoo builds on that same point in his new Times essay when he notes:

[drone] enthusiasts see almost limitless potential for flying robots. When they fantasize about our drone-addled future, they picture not a single gadget, but a platform — a new class of general-purpose computer, as important as the PC or the smartphone, that may be put to use in a wide variety of ways. They talk about applications in construction, firefighting, monitoring and repairing infrastructure, agriculture, search and rescue, Internet and communications services, logistics and delivery, filmmaking and wildlife preservation, among other uses.

If only the folks at the FAA and in Congress saw things this way. We need to open up the skies to the amazing innovative potential of commercial drone technology, especially before the rest of the world seizes the opportunity to jump into the lead on this front.



New FCC rules will kick at least 4.7 million households offline Tue, 03 Feb 2015 18:11:09 +0000

This month, the FCC is set to issue an order that will reclassify broadband under Title II of the Communications Act. As a result of this reclassification, broadband will suddenly become subject to numerous federal and local taxes and fees.

How much will these new taxes reduce broadband subscribership? Nobody knows for sure, but using the existing economic literature we can come up with a back-of-the-envelope calculation.

According to a policy brief by Brookings’s Bob Litan and the Progressive Policy Institute’s Hal Singer, reclassification under Title II will increase fixed broadband costs on average by $67 per year due to both federal and local taxes. With pre-Title II costs of broadband at $537 per year, this represents a 12.4 percent increase.

[I have updated these estimates at the end of this post.]

How much will this 12.4 percent increase in broadband costs reduce the number of broadband subscriptions demanded? For that, we must turn to the literature on the elasticity of demand for broadband.

As is often the case, the literature on this subject does not give one clear answer. For example, Austan Goolsbee, who was chairman of President Obama’s Council of Economic Advisors in 2010 and 2011, estimated in 2006 that broadband elasticity ranged from -2.15 to -3.76, with an average of around -2.75.

A 2014 study by two FCC economists and their coauthors estimates the elasticity of demand for marginal non-subscribers. That is, they use survey data of people who are not currently broadband subscribers, exclude the 2/3 of respondents who say they would not buy broadband at any price, and estimate their demand elasticity at -0.62.

Since the literature doesn’t settle the matter, let’s pick the more conservative number and use it as a lower bound.

With 84 million fixed broadband subscribers facing a 12.4 percent increase in prices and an elasticity of demand of -0.62, there will be a 7.7 percent reduction in broadband subscribers, or a decline of 6.45 million households.

Obviously, this is a terrible result.

A question for my friends in the tech policy world who support reclassification: How many households do you think will lose broadband access due to new taxes and fees? Please show your work.

UPDATE: Looks like I missed this updated post from Singer and Litan, which notes that due to the extension of the Internet Tax Freedom Act, the total amount of new taxes from reclassification will be only about $49/year, not $67/year as stated above.

This represents a 9.1 percent increase in costs, so the number of households with broadband will decline by only 5.6 percent, or 4.7 million.
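The back-of-the-envelope arithmetic in both scenarios can be sketched as follows. The figures come from the Litan/Singer brief and the FCC economists’ elasticity estimate cited above; small differences from the numbers in the post come from where you round the intermediate percentages.

```python
# Price increase (as a percent of the $537/year baseline) times the demand
# elasticity of -0.62 gives the percent change in quantity demanded, which
# is then applied to the 84 million fixed broadband subscribers.

def households_lost(annual_tax_increase, baseline_cost=537.0,
                    elasticity=-0.62, subscribers_millions=84.0):
    pct_price_increase = annual_tax_increase / baseline_cost
    pct_decline = -elasticity * pct_price_increase
    return pct_decline, pct_decline * subscribers_millions

# Original estimate: $67/year in new taxes and fees
pct, lost = households_lost(67.0)
print(f"{pct:.1%} decline, {lost:.2f} million households")  # ~7.7%, ~6.5M

# Updated estimate after the Internet Tax Freedom Act extension: $49/year
pct, lost = households_lost(49.0)
print(f"{pct:.1%} decline, {lost:.2f} million households")  # ~5.7%, ~4.7M
```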

While I regret the oversight, this is still a very high number that deserves attention.

Network Neutrality’s Watershed Moment Tue, 03 Feb 2015 14:31:16 +0000

After some ten years, gallons of ink and thousands of megabytes of bandwidth, the debate over network neutrality is reaching a climactic moment.

Bills are expected to be introduced in both the Senate and House this week that would allow the Federal Communications Commission to regulate paid prioritization, the stated goal of network neutrality advocates from the start. Led by Sen. John Thune (R-S.D.) and Rep. Fred Upton (R-Mich.), the legislation represents a major compromise on the part of congressional Republicans, who until now have held fast against any additional Internet regulation. Their willingness to soften on paid prioritization has gotten the attention of a number of leading Democrats, including Sens. Bill Nelson (D-Fla.) and Cory Booker (D-N.J.). The only question that remains is whether FCC Chairman Thomas Wheeler and President Barack Obama are willing to buy into this emerging spirit of bipartisanship.

Obama wants a more radical course—outright reclassification of Internet services under Title II of the Communications Act, a policy Wheeler appears to have embraced in spite of reservations he expressed last year. Title II, however, would give the FCC the same type of sweeping regulatory authority over the Internet as it has over monopoly phone service—a situation that stands to create a “Mother, may I” regime over what, to date, has been a wildly successful environment of permissionless innovation.

Important to remember is that Title II reclassification is a response to repeated court decisions preventing the FCC from enforcing certain provisions against paid prioritization. Current law, the courts affirmed, classifies the Internet as an information service, a definition that limits the FCC’s regulatory control over it. Using reclassification, the FCC hopes to give itself the necessary legal cover.

But the paid prioritization matter can be addressed easily, elegantly and, most important, constitutionally, through Congress.

As a libertarian, I question the value of any regulation on the Internet on principle. And practically speaking, there’s been no egregious abuse of paid prioritization that justifies unilateral reclassification. It’s not in an ISP’s interest to block any websites. And, contrary to being a consumer problem, allowing major content companies like Netflix to purchase network management services that improve the quality of video delivery while reducing network congestion for other applications might actually serve the market.

But if paid prioritization is the concern, then Thune-Upton addresses it. It would allow the FCC to investigate and impose penalties on ISPs that throttle traffic, or demand payment for quality delivery. On the other hand, Thune-Upton would also create carve outs for certain types of applications that require prioritization to work, like telemedicine and emergency services, and would allow for the reasonable network management that is necessary for optimum performance—answering criticisms that come not only from center-right policy analysts, but from network engineers.

Legislation also gives the FCC specific instructions, whereas Title II reclassification opens the door to large-scale, open-ended regulation. Here’s where I do indulge my libertarian leanings. Giving the government vague, unspecified powers asks for trouble. All we have to do is look at the National Security Agency’s widespread warrantless wiretapping and the Drug Enforcement Administration’s tracking of private vehicle movements around the country. Disturbing as they are to all citizens who value liberty and privacy, these practices are technically legal because there are no laws setting rules of due process for contemporary communications technology (a blog for another day). As much as the FCC promises to “forbear” from more extensive Internet regulation, it’s better for all if specific limits are written in.

At the same time, the addition of regulatory powers invites corporate rent-seeking whereby companies turn to the government to protect them in the marketplace. Even as the FCC was drafting its Title II proposal, BlackBerry’s CEO, John Chen, was complaining that applications developers were only focusing on the iPhone and Android platforms. Chen seeks “app neutrality,” essentially a law to require any applications that work on iPhone and Android platforms to work on BlackBerry’s operating system, too, despite the low market penetration of the devices.

Also, forcing the FCC to work inside narrow parameters means it can more readily ease up or even reverse itself in case a ban on paid prioritization leads to unintended consequences, like a significant uptick in bandwidth congestion and measurable degradation in applications performance.

Finally, successful bipartisan legislation can put net neutrality to bed. If the White House remains stubborn and instead pushes the FCC to reclassify, it almost assures a lengthy court case that not only would drag out the debate, but likely end with another decision against the FCC. But even if the court rulings go the FCC’s way, Title II is no guarantee against paid prioritization. Allowing Congress to give the FCC the necessary authority is a constitutionally sound approach and has a better chance of meeting the desired objectives. Congress is offering a bipartisan solution that is reasonable and workable. The Obama administration has been banging the drum for network neutrality since Day 1. This is its moment to seize.

Money for graduate students who love liberty Mon, 02 Feb 2015 15:45:05 +0000

My employer, the Mercatus Center, provides ridiculously generous funding (up to $40,000/year) for graduate students. There are several opportunities depending on your goals, but I encourage people interested in technology policy to particularly consider the MA Fellowship, as that can come with an opportunity to work with the tech policy team here at Mercatus. Mind the deadlines!

The PhD Fellowship is a three-year, competitive, full-time fellowship program for students who are pursuing a doctoral degree in economics at George Mason University. Our PhD Fellows take courses in market process economics, public choice, and institutional analysis and work on projects that use these lenses to understand global prosperity and the dynamics of social change. Successful PhD Fellows have secured tenure track positions at colleges and universities throughout the US and Europe.

It includes full tuition support, a stipend, and experience as a research assistant working closely with Mercatus-affiliated Mason faculty. It is a total award of up to $120,000 over three years. Acceptance into the fellowship program is dependent on acceptance into the PhD program in economics at George Mason University. The deadline for applications is February 1, 2015.

The Adam Smith Fellowship is a one-year, competitive fellowship for graduate students attending PhD programs at any university, in a variety of fields, including economics, philosophy, political science, and sociology. The aim of this fellowship is to introduce students to key thinkers in political economy that they might not otherwise encounter in their graduate studies. Smith Fellows receive a stipend and spend three weekends during the academic year and one week during the summer participating in workshops and seminars on the Austrian, Virginia, and Bloomington schools of political economy.

It includes a quarterly stipend and travel and lodging to attend colloquia hosted by the Mercatus Center. It is a total award of up to $10,000 for the year. Acceptance into the fellowship program is dependent on acceptance into a PhD program at an accredited university. The deadline for applications is March 15, 2015.

The MA Fellowship is a two-year, competitive, full-time fellowship program for students pursuing a master’s degree in economics at George Mason University who are interested in gaining advanced training in applied economics in preparation for a career in public policy. Successful fellows have secured public policy positions as Presidential Management Fellows, economists and analysts with federal and state governments, and policy analysts at prominent research institutions.

It includes full tuition support, a stipend, and practical experience as a research assistant working with Mercatus scholars. It is a total award of up to $80,000 over two years. Acceptance into the fellowship program is dependent on acceptance into the MA program in economics at George Mason University. The deadline for applications is March 1, 2015.

The Frédéric Bastiat Fellowship is a one-year competitive fellowship program for graduate students interested in pursuing a career in public policy. The aim of this fellowship is to introduce students to the Austrian, Virginia, and Bloomington schools of political economy as academic foundations for pursuing contemporary policy analysis. They will explore how this framework is utilized to analyze policy implications of a variety of topics, including the study of American capitalism, state and local policy, regulatory studies, technology policy, financial markets, and spending and budget.

It includes a quarterly stipend and travel and lodging to attend colloquia hosted by the Mercatus Center. It is a total award of up to $5,000 for the year. Acceptance into the fellowship program is dependent on acceptance into a graduate program at an accredited university. The deadline for applications is April 1, 2015.

The LAPD versus the First Amendment Fri, 30 Jan 2015 19:35:57 +0000

Last month, my Mercatus Center colleague Brent Skorup published a major scoop: police departments around the country are scanning social media to assign people individualized “threat ratings” — green, yellow, or red. This week, police are complaining that the public is using social media to track them back.

LAPD Chief Charlie Beck has expressed concerns that Waze, the social traffic app owned by Google, could be used to target police officers. The National Sheriffs’ Association has also complained about the app.

To be clear, Waze does not allow anybody to track individual officers. Users of the app can drop a pin on a map letting drivers know that there is police activity (or traffic jams, accidents, or traffic enforcement cameras) in the area.

That’s it.

And police departments around the country frequently publicize their locations. They are essentially required to do so for sobriety checkpoints by Supreme Court order and NHTSA guidelines.

But in a letter to Google CEO Larry Page, Beck writes breathlessly that Waze “poses a danger to the lives of police officers in the United States.” The letter also (falsely) states that the app was used by Ismaaiyl Brinsley to kill two NYPD officers. The Associated Press notes that “Investigators do not believe he used Waze to ambush the officers, in part because police say Brinsley tossed his cellphone more than two miles from where he shot the officers.”

It’s somewhat rich of the LAPD to cite fear for its officers’ lives while the department is in possession of some 3408 assault rifles, 7 armored vehicles, and 3 grenade launchers.

In fact, what Waze poses a danger to is police department revenue. Drivers are using the app as a crowdsourced radar detector, as a means of avoiding traffic tickets. But unlike radar detectors, which have been outlawed in my home state of Virginia, Waze benefits from First Amendment protection.

The fundamental activity that Waze users are engaging in is speech. “Hey, there is a cop over there,” is protected speech under the First Amendment. As all LAPD officers must swear an oath affirming that they “will support and defend the Constitution of the United States,” it seems reasonable to expect the police chief not to stifle, by lobbying private corporations, the First Amendment rights of those citizens who choose to engage in this protected activity.

The Waze kerfuffle is a symptom of a longer-term breakdown in trust between police departments around the country and the publics they are sworn to protect and serve. This is a widely recognized problem, and some in the law enforcement community are working on strategies to remedy it.

But as long as departments continue to view the public as the enemy or even as a passive revenue source, not as the rightful recipients of their service and protection, we will continue to see the public respond by introducing technologies that protect users from the police’s arbitrary powers.

Fortunately, police complaints about Waze have backfired. Many smartphone users had no idea there was an app for avoiding speeding tickets until Beck and the Sheriffs’ Association made it national news. As a result of the publicity, downloads of Waze have skyrocketed.

This is how the modern world works, and it gives me great hope for the future.

DRM for Drones Will Fail Wed, 28 Jan 2015 22:00:18 +0000

I suppose it was inevitable that the DRM wars would come to the world of drones. Reporting for the Wall Street Journal today, Jack Nicas notes that:

In response to the drone crash at the White House this week, the Chinese maker of the device that crashed said it is updating its drones to disable them from flying over much of Washington, D.C. SZ DJI Technology Co. of Shenzhen, China, plans to send a firmware update in the next week that, if downloaded, would prevent DJI drones from taking off within the restricted flight zone that covers much of the U.S. capital, company spokesman Michael Perry said.

Washington Post reporter Brian Fung explains what this means technologically:

The [DJI firmware] update will add a list of GPS coordinates to the drone’s computer telling it where it can and can’t go. Here’s how that system works generally: When a drone comes within five miles of an airport, Perry explained, an altitude restriction gets applied to the drone so that it doesn’t interfere with manned aircraft. Within 1.5 miles, the drone will be automatically grounded and won’t be able to fly at all, requiring the user to either pull away from the no-fly zone or personally retrieve the device from where it landed. The concept of triggering certain actions when reaching a specific geographic area is called “geofencing,” and it’s a common technology in smartphones. Since 2011, iPhone owners have been able to create reminders that alert them when they arrive at specific locations, such as the office.
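The mechanism Fung describes boils down to a distance check against a list of coordinates. Here is a rough sketch of that geofencing logic; the haversine distance helper, the example coordinates, and the tier labels are my own illustrative assumptions, not DJI’s actual code.

```python
import math

def distance_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points via the haversine formula."""
    r = 3958.8  # Earth's mean radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def geofence_action(drone_lat, drone_lon, airport_lat, airport_lon):
    """The two-tier rule described above: altitude cap within 5 miles
    of an airport, automatic grounding within 1.5 miles."""
    d = distance_miles(drone_lat, drone_lon, airport_lat, airport_lon)
    if d < 1.5:
        return "ground"        # drone refuses to fly at all
    if d < 5.0:
        return "altitude-cap"  # restricted ceiling near manned aircraft
    return "no-restriction"

# Example: a drone about 3 miles east of a hypothetical airport at 38.85N, 77.04W.
# One degree of longitude at this latitude is roughly 54 miles.
print(geofence_action(38.85, -77.04 + 3 / 54.0, 38.85, -77.04))  # altitude-cap
```

The firmware’s restricted-zone list would simply be a larger table of such coordinates checked on takeoff and in flight.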

This is complete overkill and it almost certainly will not work in practice. First, this is just DRM for drones, and just as DRM has failed in most other cases, it will fail here as well. If you sell somebody a drone that doesn’t work within a 15-mile radius of a major metropolitan area, they’ll be online minutes later looking for a hack to get it working properly. And you better believe they will find one.

Second, other companies or even non-commercial innovators will just use such an opportunity to promote their DRM-free drones, making the restrictions on other drones futile.

Perhaps, then, the government will push for all drone manufacturers to include DRM on their drones, but that’s even worse. The idea that the Washington, DC metro area should be a completely drone-free zone is hugely troubling. We might as well put up a big sign at the edge of town that says, “Innovators Not Welcome!”

And this isn’t just about commercial operators either. What would such a city-wide restriction mean for students interested in engineering or robotics in local schools? Or how about journalists who might want to use drones to help them report the news?

For these reasons, a flat ban on drones throughout this or any other city just shouldn’t fly.

Moreover, the logic behind this technopanic is particularly silly. It’s like saying that we should install some sort of kill switch in all automobile ignitions so that they will not start anywhere in the DC area on the off chance that one idiot might use their car to drive into the White House fence. We need clear and simple rules for drone use, not technically unworkable and unenforceable bans on all private drone use in major metro areas.

[Update 1/30: Washington Post reporter Matt McFarland was kind enough to call me and ask for comment on this matter. Here’s his excellent story on “The case for not banning drone flights in the Washington area,” which included my thoughts.]

Some Initial Thoughts on the FTC Internet of Things Report Wed, 28 Jan 2015 14:54:30 +0000

Yesterday, the Federal Trade Commission (FTC) released its long-awaited report on “The Internet of Things: Privacy and Security in a Connected World.” The 55-page report is the result of a lengthy staff exploration of the issue, which kicked off with an FTC workshop on the issue that was held on November 19, 2013.

I’m still digesting all the details in the report, but I thought I’d offer a few quick thoughts on some of the major findings and recommendations from it. As I’ve noted here before, I’ve made the Internet of Things my top priority over the past year and have penned several essays about it here, as well as in a big new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology shortly. (Also, here’s a compendium of most of what I’ve done on the issue thus far.)

I’ll begin with a few general thoughts on the FTC’s report and its overall approach to the Internet of Things and then discuss a few specific issues that I believe deserve attention.

Big Picture, Part 1: Should Best Practices Be Voluntary or Mandatory?

Generally speaking, the FTC’s report contains a variety of “best practice” recommendations to get Internet of Things innovators to take steps to ensure greater privacy and security “by design” in their products. Most of those recommended best practices are sensible as general guidelines for innovators, but the really sticky question here continues to be this: When, if ever, should “best practices” become binding regulatory requirements?

The FTC does a bit of a dance when answering that question. Consider how, in the executive summary of the report, the Commission answers the question regarding the need for additional privacy and security regulation: “Commission staff agrees with those commenters who stated that there is great potential for innovation in this area, and that IoT-specific legislation at this stage would be premature.” But, just a few lines later, the agency (1) “reiterates the Commission’s previous recommendation for Congress to enact strong, flexible, and technology-neutral federal legislation to strengthen its existing data security enforcement tools and to provide notification to consumers when there is a security breach;” and (2) “recommends that Congress enact broad-based (as opposed to IoT-specific) privacy legislation.”

Here and elsewhere, the agency repeatedly stresses that it is not seeking IoT-specific regulation; merely “broad-based” digital privacy and security legislation. The problem is that once you understand what the IoT is all about, you come to realize that this largely represents a distinction without a difference. The Internet of Things is simply the extension of the Net into everything we own or come into contact with. Thus, this idea that the agency is not seeking IoT-specific rules sounds terrific until you realize that it is actually seeking something far more sweeping: greater regulation of all online / digital interactions. And because “the Internet” and “the Internet of Things” will eventually (if they are not already) be considered synonymous, this notion that the agency is not proposing technology-specific regulation is really quite silly.

Now, it remains unclear whether there exists any appetite on Capitol Hill for “comprehensive” legislation of any variety – although perhaps we’ll learn more about that possibility when the Senate Commerce Committee hosts a hearing on these issues on February 11. But at least thus far, “comprehensive” or “baseline” digital privacy and security bills have been non-starters.

And that’s for good reason in my opinion: Such regulatory proposals could take us down the path that Europe charted in the late 1990s with onerous “data directives” and suffocating regulatory mandates for the IT / computing sector. The results of this experiment have been unambiguous, as I documented in congressional testimony in 2013. I noted there how America’s Internet sector came to be the envy of the world while it was hard to name any major Internet company from Europe. Whereas America embraced “permissionless innovation” and let creative minds develop one of the greatest success stories in modern history, the Europeans adopted a “Mother, May I” regulatory approach for the digital economy. America’s more flexible, light-touch regulatory regime leaves more room for competition and innovation compared to Europe’s top-down regime. Digital innovation suffered over there while it blossomed here.

That’s why we need to be careful about adopting the sort of “broad-based” regulatory regime that the FTC recommends in this and previous reports.

Big Picture, Part 2: Does the FTC Really Need More Authority?

Something else is going on in this report that has also been happening in all the FTC’s recent activity on digital privacy and security matters: The agency has been busy laying the groundwork for its own expansion.

In this latest report, for example, the FTC argues that

Although the Commission currently has authority to take action against some IoT-related practices, it cannot mandate certain basic privacy protections… The Commission has continued to recommend that Congress enact strong, flexible, and technology-neutral legislation to strengthen the Commission’s existing data security enforcement tools and require companies to notify consumers when there is a security breach.

In other words, this agency wants more authority. And we are talking about sweeping authority here that would transcend its already sweeping authority to police “unfair and deceptive practices” under Section 5 of the FTC Act. Let’s be clear: It would be hard to craft a law that grants an agency more comprehensive and open-ended consumer protection authority than Section 5. The meaning of those terms — “unfairness” and “deception” — has always been a contentious matter, and at times the agency has abused its discretion by exploiting that ambiguity.

Nonetheless, Sec. 5 remains a powerful enforcement tool for the agency and one that has been wielded aggressively in recent years to police digital economy giants and small operators alike. Generally speaking, I’m alright with most Sec. 5 enforcement, especially since that sort of retrospective policing of unfair and deceptive practices is far less likely to disrupt permissionless innovation in the digital economy. That’s because it does not subject digital innovators to the sort of “Mother, May I” regulatory system that European entrepreneurs face. But an expansion of the FTC’s authority via more “comprehensive, baseline” privacy and security regulatory policies threatens to convert America’s more sensible bottom-up and responsive regulatory system into the sort of innovation-killing regime we see on the other side of the Atlantic.

Here’s the other thing we can’t forget when it comes to the question of what additional authority to give the FTC over privacy and security matters: The FTC is not the end of the enforcement story in America. Other enforcement mechanisms exist, including privacy torts, class action litigation, property and contract law, state enforcement agencies, and other targeted privacy statutes. I’ve summarized all these additional enforcement mechanisms in my recent law review article referenced above. (See section VI of the paper.)

FIPPS, Part 1: Notice & Choice vs. Use-Based Restrictions

Next, let’s drill down a bit and examine some of the specific privacy and security best practices that the agency discusses in its new IoT report.

The FTC report highlights how the IoT creates serious tensions for many traditional Fair Information Practice Principles (FIPPs). The FIPPs generally include: (1) notice, (2) choice, (3) purpose specification, (4) use limitation, and (5) data minimization. But the report is mostly focused on notice and choice as well as data minimization.

When it comes to notice and choice, the agency wants to keep hope alive that it will still be applicable in an IoT world. I’m sympathetic to this effort because it is quite sensible for all digital innovators to do their best to provide consumers with adequate notice about data collection practices and then give them sensible choices about it. Yet, like the agency, I agree that “offering notice and choice is challenging in the IoT because of the ubiquity of data collection and the practical obstacles to providing information without a user interface.”

The agency has a nuanced discussion of how context matters in providing notice and choice for IoT, but one can’t help but think that even they must realize that the game is over, to some extent. The increasing miniaturization of IoT devices and the ease with which they suck up data means that traditional approaches to notice and choice just aren’t going to work all that well going forward. It is almost impossible to envision how a rigid application of traditional notice and choice procedures would work in practice for the IoT.

Relatedly, as I wrote here last week, the Future of Privacy Forum (FPF) recently released a new white paper entitled, “A Practical Privacy Paradigm for Wearables,” that notes how FIPPs “are a valuable set of high-level guidelines for promoting privacy, [but] given the nature of the technologies involved, traditional implementations of the FIPPs may not always be practical as the Internet of Things matures.” That’s particularly true of the notice and choice FIPPs.

But the FTC isn’t quite ready to throw in the towel and make the complete move toward “use-based restrictions,” as many academics have. (Note: I have a lengthy discussion of this migration toward use-based restrictions in section IV.D of my law review article.) Use-based restrictions would focus on specific uses of data that are particularly sensitive and for which there is widespread agreement that they should be limited or disallowed altogether. But use-based restrictions are, ironically, controversial from the perspective of both industry and privacy advocates (albeit for different reasons, obviously).

The FTC doesn’t really know where to go next with use-based restrictions. On the one hand, the agency says it “has incorporated certain elements of the use-based model into its approach” to enforcement in the past. On the other hand, the agency says it has concerns “about adopting a pure use-based model for the Internet of Things,” since it may not go far enough in addressing the growth of more widespread data collection, especially of more sensitive information.

In sum, the agency appears to be keeping the door open on this front and hoping that a best-of-all-worlds solution miraculously emerges that extends both notice and choice and use-based limitations as the IoT expands. But the agency’s new report doesn’t give us any sort of blueprint for how that might work, and that’s likely for good reason: because it probably won’t work all that well in practice, and there will be serious costs in terms of lost innovation if the agency tries to force unworkable solutions on this rapidly evolving marketplace.

FIPPS, Part 2: Data Minimization

The biggest policy fight that is likely to come out of this report involves the agency’s push for data minimization. The report recommends that, to minimize the risks associated with excessive data collection:

companies should examine their data practices and business needs and develop policies and practices that impose reasonable limits on the collection and retention of consumer data. However, recognizing the need to balance future, beneficial uses of data with privacy protection, staff’s recommendation on data minimization is a flexible one that gives companies many options. They can decide not to collect data at all; collect only the fields of data necessary to the product or service being offered; collect data that is less sensitive; or deidentify the data they collect. If a company determines that none of these options will fulfill its business goals, it can seek consumers’ consent for collecting additional, unexpected categories of data…

This is an unsurprising recommendation in light of the fact that, in previous major speeches on the issue, FTC Chairwoman Edith Ramirez argued that, “information that is not collected in the first place can’t be misused,” and that:

The indiscriminate collection of data violates the First Commandment of data hygiene: Thou shall not collect and hold onto personal information unnecessary to an identified purpose. Keeping data on the off chance that it might prove useful is not consistent with privacy best practices. And remember, not all data is created equally. Just as there is low quality iron ore and coal, there is low quality, unreliable data. And old data is of little value.

In my forthcoming law review article, I discussed the problem with such reasoning at length and note:

if Chairwoman Ramirez’s approach to a preemptive data use “commandment” were enshrined into a law that said, “Thou shall not collect and hold onto personal information unnecessary to an identified purpose,” such a precautionary limitation would certainly satisfy her desire to avoid hypothetical worst-case outcomes because, as she noted, “information that is not collected in the first place can’t be misused,” but it is equally true that information that is never collected may never lead to serendipitous data discoveries or new products and services that could offer consumers concrete benefits. “The socially beneficial uses of data made possible by data analytics are often not immediately evident to data subjects at the time of data collection,” notes Ken Wasch, president of the Software & Information Industry Association. If academics and lawmakers succeed in imposing such precautionary rules on the development of IoT and wearable technologies, many important innovations may never see the light of day.

FTC Commissioner Josh Wright issued a dissenting statement to the report that lambasted the staff for not conducting more robust cost-benefit analysis of the new proposed restrictions, and specifically cited how problematic the agency’s approach to data minimization was. “[S]taff merely acknowledges it would potentially curtail innovative uses of data. . . [w]ithout providing any sense of the magnitude of the costs to consumers of foregoing this innovation or of the benefits to consumers of data minimization,” he says. Similarly, in her separate statement, FTC Commissioner Maureen K. Ohlhausen worried about the report’s overly precautionary approach on data minimization when noting that, “without examining costs or benefits, [the staff report] encourages companies to delete valuable data — primarily to avoid hypothetical future harms. Even though the report recognizes the need for flexibility for companies weighing whether and what data to retain, the recommendation remains overly prescriptive,” she concludes.

Regardless, the battle lines have been drawn by the FTC staff report as the agency has made it clear that it will be stepping up its efforts to get IoT innovators to significantly slow or scale back their data collection efforts. It will be very interesting to see how the agency enforces that vision going forward and how it impacts innovation in this space. All I know is that the agency has not conducted a serious evaluation here of the trade-offs associated with such restrictions. I penned another law review article last year offering “A Framework for Benefit-Cost Analysis in Digital Privacy Debates” that they could use to begin that process if they wanted to get serious about it.

The Problem with the “Regulation Builds Trust” Argument

One of the interesting things about this and previous FTC reports on privacy and security matters is how often the agency premises the case for expanded regulation on “building trust.” The argument goes something like this (as found on page 51 of the new IoT report): “Staff believes such legislation will help build trust in new technologies that rely on consumer data, such as the IoT. Consumers are more likely to buy connected devices if they feel that their information is adequately protected.”

This is one of those commonly heard claims that sounds so straightforward and intuitive that few dare question it. But there are problems with the logic of the “we-need-regulation-to-build-trust-and-boost-adoption” argument we often hear in debates over digital privacy.

First, the agency bases its argument mostly on polling data. “Surveys also show that consumers are more likely to trust companies that provide them with transparency and choices,” the report says. Well, of course surveys say that! It’s only logical that consumers will say this, just as they will always say they value privacy and security more generally when asked. You might as well ask people if they love their mothers!

But what consumers claim to care about and what they actually do in the real world are often two very different things. In the real world, people balance privacy and security alongside many other values, including choice, convenience, cost, and more. This leads to the so-called “privacy paradox”: the problem of many people saying one thing and doing quite another when it comes to privacy matters. Put simply, people take some risks — including some privacy and security risks — in order to reap other rewards or benefits. (See this essay for more on the problem with most privacy polls.)

Second, online activity and the Internet of Things are both growing like gangbusters despite the privacy and security concerns that the FTC raises. Virtually every metric I’ve looked at that tracks IoT activity shows astonishing growth and product adoption, and projections by all the major consultancies that have studied this consistently predict the continued rapid growth of IoT activity. Now, how can this be the case if, as the FTC claims, we’ll only see the IoT really take off after we get more regulation aimed at bolstering consumer trust? Of course, the agency might argue that the IoT would grow at an even faster clip than it is right now, but there is no way to prove that one way or the other. In any event, the agency cannot possibly claim that the IoT isn’t already growing at a very healthy clip — indeed, a lot of the hand-wringing the staff engages in throughout the report is premised precisely on the fact that the IoT is exploding faster than our ability to keep up with it! In reality, it seems far more likely that cost and complexity are the bigger impediments to faster IoT adoption, just as cost and complexity have always been the factors weighing most heavily on the adoption of other digital technologies.

Third, let’s say that the FTC is correct – and it is – when it says that a certain amount of trust is needed in terms of IoT privacy and security before consumers are willing to use more of these devices and services in their everyday lives. Does the agency imagine that IoT innovators don’t know that? Are markets and consumers completely irrational? The FTC says on page 44 of the report that, “If a company decides that a particular data use is beneficial and consumers disagree with that decision, this may erode consumer trust.” Well, if such a mismatch does exist, then the assumption should be that consumers can and will push back, or seek out new and better options. And other companies should be able to sense the market opportunity here to offer a more privacy-centric offering for those consumers who demand it in order to win their trust and business.

Finally, and perhaps most obviously, the problem with the argument that increased regulation will help IoT adoption is that it ignores how the regulations put in place to achieve greater “trust” might become so onerous or costly in practice that there won’t be as many innovations for us to adopt to begin with! Again, regulation — even very well-intentioned regulation — has costs and trade-offs.

In any event, if the agency is going to premise the case for expanded privacy regulation on this notion, it is going to have to do far more to make its case than simply asserting it.

Once Again, No Appreciation of the Potential for Societal Adaptation

Let’s briefly shift to a subject that isn’t discussed in the FTC’s new IoT report at all.

Regular readers may get tired of me making this point, but I feel it is worth stressing again: Major reports and statements by public policymakers about rapidly evolving emerging technologies are always initially prone to stress panic over patience. Rarely are public officials willing to step back, take a deep breath, and consider how a resilient citizenry might adapt to new technologies as citizens gradually assimilate new tools into their lives.

That is really sad, when you think about it, since humans have again and again proven capable of responding to technological change in creative ways by adopting new personal and social norms. I won’t belabor the point because I’ve already written volumes on this issue elsewhere. I tried to condense all my work into a single essay entitled, “Muddling Through: How We Learn to Cope with Technological Change.” Here’s the key takeaway:

humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Again, you almost never hear regulators or lawmakers discuss this process of individual and social adaptation even though they must know there is something to it. One explanation is that every generation has its own techno-boogeymen and loses faith in humanity’s ability to adapt to them.

To believe that we humans are resilient, adaptable creatures should not be read as indifference to the significant privacy and security challenges associated with any of the new technologies in our lives today, including IoT technologies. Overly exuberant techno-optimists are often too quick to adopt a “Just-Get-Over-It!” attitude in response to the privacy and security concerns raised by others. But it is equally unforgivable for those who are worried about those same concerns to utterly ignore the reality of human adaptation to new technologies.

Why are Educational Approaches Merely an Afterthought?

One final thing that troubled me about the FTC report was the way consumer and business education is mostly an afterthought. This is one of the most important roles that the FTC can and should play in terms of explaining potential privacy and security vulnerabilities to the general public and product developers alike.

Alas, the agency devotes so much ink to the more legalistic questions about how to address these issues that all we end up with in the report is this one paragraph on consumer and business education:

Consumers should understand how to get more information about the privacy of their IoT devices, how to secure their home networks that connect to IoT devices, and how to use any available privacy settings. Businesses, and in particular small businesses, would benefit from additional information about how to reasonably secure IoT devices. The Commission staff will develop new consumer and business education materials in this area.

I applaud that language, and I very much hope that the agency is serious about plowing more effort and resources into developing new consumer and business education materials in this area. But I’m a bit shocked that the FTC report didn’t even bother mentioning the excellent material already available on the “On Guard Online” website it helped create with a dozen other federal agencies. Worse yet, the agency failed to highlight the many other privacy education and “digital citizenship” efforts that are underway today to help on this front. I discuss those efforts in more detail in the closing section of my recent law review article.

I hope that the agency spends a little more time working on the development of new consumer and business education materials in this area instead of trying to figure out how to craft a quasi-regulatory regime for the Internet of Things. As I noted last year in this Maine Law Review article, that would be a far more productive use of the agency’s expertise and resources. I argued there that “policymakers can draw important lessons from the debate over how best to protect children from objectionable online content” and apply them to debates about digital privacy. Specifically, after a decade of searching for legalistic solutions to online safety concerns — and convening a half-dozen blue ribbon task forces to study the issue — we finally saw a rough consensus emerge that no single “silver-bullet” technological solution or legal quick-fix would work and that, ultimately, education and empowerment represented the better use of our time and resources. What was true for child safety is equally true for privacy and security for the Internet of Things.

It’s a shame the FTC staff squandered the opportunity it had with this new report to highlight all the good that could be done by getting more serious about focusing first on those alternative, bottom-up, less costly, and less controversial solutions to these challenging problems. One day we’ll all wake up and realize that we spent a lost decade debating legalistic solutions that were either technically unworkable or politically impossible. Just imagine if all the smart people who are spending their time and energy on those approaches right now were instead busy devising and pushing educational and empowerment-based solutions!

One day we’ll get there. Sadly, if the FTC report is any indication, that day is still a ways off.
