Technology Liberation Front: Keeping politicians’ hands off the Net & everything else related to technology

How Much Tax? Thu, 30 Oct 2014 04:13:01 +0000

As I and others have recently noted, if the Federal Communications Commission reclassifies broadband Internet access as a “telecommunications” service, broadband would automatically become subject to the federal Universal Service tax—currently 16.1%, or more than twice the highest state sales tax (California’s 7.5%), according to the Tax Foundation.

Erik Telford, writing in The Detroit News, has reached a similar conclusion.

U.S. wireline broadband revenue rose to $43 billion in 2012 from $41 billion in 2011, according to one estimate. “Total U.S. mobile data revenue hit $90 billion in 2013 and is expected to rise above $100 billion this year,” according to another estimate.  Assuming that the wireline and wireless broadband industries as a whole earn approximately $150 billion this year, the current 16.1% Universal Service Contribution Factor would generate over $24 billion in new revenue for government programs administered by the FCC if broadband were defined as a telecommunications service.

The Census Bureau reports that there were approximately 90 million households with Internet use at home in 2012. Wireline broadband providers would have to collect approximately $89 from each one of those households in order to satisfy a 16.1% tax liability on earnings of $50 billion. There were over 117 million smartphone users over the age of 15 in 2011, according to the Census Bureau. Smartphones would account for the bulk of mobile data revenue. Mobile broadband providers would have to collect approximately $137 from each of those smartphone users to shoulder a tax liability of 16.1% on earnings of $100 billion.
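The per-subscriber figures above are easy to verify with a back-of-the-envelope calculation. A quick sketch, using the revenue and subscriber estimates cited above (rounded, and assumed rather than official figures):

```python
# Back-of-the-envelope check of the per-subscriber Universal Service burden,
# using the revenue and subscriber estimates cited in the text.

USF_RATE = 0.161              # Universal Service Contribution Factor (16.1%)

wireline_revenue = 50e9       # assumed wireline broadband revenue this year ($)
wireline_households = 90e6    # households with Internet use at home (2012)

mobile_revenue = 100e9        # projected mobile data revenue this year ($)
smartphone_users = 117e6      # smartphone users over age 15 (2011)

per_household = USF_RATE * wireline_revenue / wireline_households
per_smartphone = USF_RATE * mobile_revenue / smartphone_users

print(f"Wireline: ${per_household:.2f} per household")    # $89.44
print(f"Mobile:   ${per_smartphone:.2f} per smartphone")  # $137.61
```

The combined liability, 16.1% of $150 billion, works out to the roughly $24 billion figure cited above.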

The FCC adjusts the Universal Service Contribution Factor quarterly with the goal of generating approximately $8 billion annually to subsidize broadband for some users. One could argue that if the tax base increases by $150 billion, the FCC could afford to drastically reduce the Universal Service Contribution Factor. However, nothing would prevent the FCC from raising the contribution factor back up into the double digits again in the future. The federal income tax started out at 2%.

The FCC is faced with the problem of declining international and interstate telecommunications revenues upon which to impose the tax—since people are communicating in more ways besides making long-distance phone calls—and skeptics might question whether the FCC could resist the temptation to make vast new investments in the “public interest.” For example, at this very moment the FCC is proposing to update the broadband speed required for universal service support to 10 Mbps.

What Role Will the States Play?

Another interesting question is how the states will react to this. There is a long history of state public utility commissions and taxing authorities acting to maximize the scope of state regulation and taxes. Remember that telecommunications carriers file tax returns in every state and local jurisdiction—numbering in the thousands.

In Smith v. Illinois (1930), the United States Supreme Court ruled that telecommunications service expenses and revenues must be apportioned between interstate and intrastate jurisdictions. The Communications Act of 1934 is scrupulously faithful to Smith v. Illinois.

In 2003, Minnesota tried to regulate voice over Internet Protocol (VoIP) services the same as “telephone services.” The FCC declined to rule whether VoIP was a telecommunications service or an information service, but it preempted state regulation anyway, concluding that it is “impossible or impractical to separate the intrastate components of VoIP service from its interstate components.” The FCC emphasized that

the significant costs and operational complexities associated with modifying or procuring systems to track, record and process geographic location information as a necessary aspect of the service would substantially reduce the benefits of using the Internet to provide the service, and potentially inhibit its deployment and continued availability to consumers.

The U.S. Court of Appeals for the Eighth Circuit agreed with the FCC in 2007. Unfortunately, this precedent did not act as a brake on the FCC.

In 2006—while the Minnesota case was still working its way through the courts—the FCC was concerned that the federal Universal Service Fund was “under significant strain”; the commission therefore did not hesitate to establish universal service contribution obligations for providers of fixed interconnected VoIP services. The FCC had no difficulty resolving the problem of distinguishing between intrastate and interstate components: It simply took the telephone traffic percentages reported by long-distance companies (64.9% interstate versus 35.1% intrastate) and applied them to interconnected VoIP services. Vonage Holdings Corp., the litigant in the Minnesota case (as well as in the subsequent Nebraska case, discussed below), did not offer fixed interconnected VoIP service, so it was unaffected.
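The safe-harbor mechanics are simple enough to sketch: a fixed share of an interconnected VoIP provider’s revenue is deemed interstate and taxed at the federal contribution factor. The revenue figure below is purely illustrative:

```python
# Sketch of the FCC's 2006 safe-harbor apportionment for interconnected VoIP:
# a fixed 64.9% of revenue is deemed interstate (and thus subject to the
# federal USF contribution factor); the remaining 35.1% is intrastate.

INTERSTATE_SHARE = 0.649
INTRASTATE_SHARE = 0.351
USF_RATE = 0.161  # federal contribution factor applies only to interstate share

def federal_usf_liability(voip_revenue: float) -> float:
    """Federal USF contribution under the safe-harbor split (illustrative)."""
    return voip_revenue * INTERSTATE_SHARE * USF_RATE

# Illustrative: a provider with $1,000,000 of interconnected VoIP revenue
print(f"${federal_usf_liability(1_000_000):,.0f}")  # $104,489
```

The same fixed 35.1% intrastate share is what Nebraska later seized on as the base for its state universal service tax, as discussed below.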

Before long, Nebraska tried to require “nomadic” interconnected VoIP service providers (including Vonage) to collect a state universal service tax on the intrastate portion (35.1%) of their revenues. Following the Minnesota precedent, the Eighth Circuit rejected the Nebraska universal service tax.

Throughout these legal and regulatory proceedings, the distinction between “fixed” and “nomadic” VoIP services was observed. According to the Nebraska court,

Nomadic service allows a customer to use the service by connecting to the Internet wherever a broadband connection is available, making the geographic originating point difficult or impossible to determine.   Fixed VoIP service, however, originates from a fixed geographic location. * * * As a result, the geographic originating point of the communications can be determined and the interstate and intrastate portions of the service are more easily distinguished.

Nebraska argued that it wasn’t impossible at all to determine the geographic origin of nomadic VoIP service—the state simply used the customer’s billing address as a proxy for where nomadic services occurred.  If Nebraska had found itself in a more sympathetic tribunal, it might have won.

The bottom line is that the FCC has been successful so far in imposing limits on state taxing authority—at least within the Eighth Circuit (Arkansas, Iowa, Minnesota, Missouri, Nebraska, North Dakota and South Dakota)—but there are no limits on the FCC.


Reclassifying broadband Internet access as a telecommunications service will have significant tax implications. Broadband providers will have to collect from consumers and remit to government approximately $24 billion—equivalent to approximately $89 per household for wireline Internet access and approximately $137 per smartphone. The FCC could reduce these taxes, but it will be under enormous political pressure to collect and spend the money.  States can be expected to seek a share of these revenues, resulting in litigation that will create uncertainty for consumers and investors.

Regarding the Use of Apocalyptic Rhetoric in Policy Debates Wed, 29 Oct 2014 18:08:40 +0000

Evan Selinger, a super-sharp philosopher of technology up at the Rochester Institute of Technology, is always alerting me to interesting new essays and articles and this week he brought another important piece to my attention. It’s a short new article by Arturo Casadevall, Don Howard, and Michael J. Imperiale, entitled, “The Apocalypse as a Rhetorical Device in the Influenza Virus Gain-of-Function Debate.” The essay touches on something near and dear to my own heart: the misuse of rhetoric in debates over the risk trade-offs associated with new technology and inventions. Casadevall, Howard, and Imperiale seek to “focus on the rhetorical devices used in the debate [over infectious disease experiments] with the hope that an analysis of how the arguments are being framed can help the discussion.”

They note that “humans are notoriously poor at assessing future benefits and risks” and that this makes many people susceptible to rhetorical ploys based on the artificial inflation of risks. Their particular focus in this essay is the debate over so-called “gain-of-function” (GOF) experiments involving influenza virus, but what they have to say here about how rhetoric is being misused in that field is equally applicable to many other fields of science and the policy debates surrounding various issues. The last two paragraphs of their essay are masterful and deserve everyone’s attention:

Who has the upper hand in the GOF debate? The answer to this question will be apparent only when the history of this time is written. However, it is possible that in the near future, arguments about risk will trump arguments about benefits, because the risk of a GOF experiment unleashing a devastating epidemic plays on a well-founded human fear, while the potential benefits of the research are considerably harder to articulate. In debates about benefits and risks, arguments based on positing extreme risks, however unlikely, are powerful rhetorical devices because they play into human fears. While we all agree that the risk of a GOF experiment unleashing a deadly epidemic is not zero, such an event would be at the extreme end of the likely outcomes from GOF experimentation. Arguing against GOF on the basis of pandemic dangers is a powerful rhetorical device because anyone can understand it. The problem with the use of apocalyptic scenarios in risk-benefit analysis is that they invoke the possibility for infinite suffering, irrespective of the probability of such an event, and the prospect of infinite suffering can potentially overwhelm any good obtained from knowledge gained from such experiments.

Repeatedly invoking the apocalypse can create a sophistry that we call the apocalyptic fallacy, which, when applied in a vacuum of evidence and theory, proposes consequences that are so dire, however low the probability, that this tactic can be employed to quash any new invention, technique, procedures, and/or policy. The apocalyptic fallacy is an effective rhetorical tool that is meaningless in the absence of objective numbers. We remind those who invoke the apocalypse that the DNA revolution went on to deliver a multitude of benefits without unleashing the fears of Asilomar and that the large hadron collider was turned on, the Higgs boson was discovered, the standard model in physics was validated, and we are still here. Hence, we caution individuals against overreliance on the apocalypse in the debates ahead, for rhetoric can win the day, but rhetoric never gave us a single medical advance.

This is spot-on and, again, has applicability in many other arenas. Indeed, it aligns quite nicely with what I had to say about the use and misuse of rhetoric in information technology debates in my recent law review article on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle” (Minnesota Journal of Law, Science and Technology, Vol. 14, No. 1, Winter 2013). In that piece, I began by noting that:

Fear is an extremely powerful motivational force. In public policy debates, appeals to fear are often used in an attempt to sway opinion or bolster the case for action. Such appeals are used to convince citizens that threats to individual or social well-being may be avoided only if specific steps are taken. Often these steps take the form of anticipatory regulation based on the precautionary principle. Such “fear appeal arguments” are frequently on display in the Internet policy arena and often take the form of a full-blown “moral panic” or “technopanic.” These panics are intense public, political, and academic responses to the emergence or use of media or technologies, especially by the young. In the extreme, they result in regulation or censorship. While cyberspace has its fair share of troubles and troublemakers, there is no evidence that the Internet is leading to greater problems for society than previous technologies did. That has not stopped some from suggesting there are reasons to be particularly fearful of the Internet and new digital technologies. There are various individual and institutional factors at work that perpetuate fear-based reasoning and tactics.

I continued on to document the structure of “fear appeal” arguments, and then outlined how those arguments can be deconstructed and refuted using sound analysis and real-world evidence. The logic pattern behind fear appeal arguments looks something like this (as documented by Douglas Walton, in his outstanding textbook, Fundamentals of Critical Argumentation):

  • Fearful Situation Premise: Here is a situation that is fearful to you.
  • Conditional Premise: If you carry out A, then the negative consequences portrayed in this fearful situation will happen to you.
  • Conclusion: You should not carry out A.

In the field of rhetoric and argumentation, this logic pattern is referred to as argumentum in terrorem or argumentum ad metum. A closely related variant of this argumentation scheme is known as argumentum ad baculum, or an argument based on a threat. Argumentum ad baculum literally means “argument to the stick,” and the logic pattern in this case looks like this (again, according to Walton’s book on the subject):

  • Conditional Premise: If you do not bring about A, then consequence B will occur.
  • Commitment Premise: I commit myself to seeing to it that B will come about.
  • Conclusion: You should bring about A.

The problem is that these logic patterns and rhetorical devices are logical fallacies or are based on outright myths. Once you start carefully unpacking arguments built on these patterns and applying reasoned, evidence-based analysis, you can quickly show why their premises are not valid.

Unfortunately, that doesn’t stop some people (including a great many policymakers) from utilizing such faulty logic or misguided rhetorical devices. Even worse, as I note in my paper, is that,

fear appeals are facilitated by the use of threat inflation. Specifically, threat inflation involves the use of fear-inducing rhetoric to inflate artificially the potential harm a new development or technology poses to certain classes of the population, especially children, or to society or the economy at large. These rhetorical flourishes are empirically false or at least greatly blown out of proportion relative to the risk in question.

I then go on for many pages in my paper to document the use of fear appeals and threat inflation in a variety of information technology debates. I show that in every case where such tactics are common, they are unjustified once the evidence is evaluated dispassionately.  Regrettably, those who employ fear tactics and use threat inflation often don’t care because they know exactly what they are doing: The use of apocalyptic rhetoric grabs attention and sometimes ends serious deliberation. It is often an intentional ploy to scare people into action (or perhaps just into silence), even if that result is not based on a reasoned, level-headed evaluation of all the facts on hand.

The lesson here is simple: The ends do not justify the means. No matter how passionately you feel about a particular policy issue — even one that you believe potentially involves life-and-death ramifications — it is wise to avoid apocalyptic rhetoric. Generally speaking, the sky is not falling, and anyone who insists that it is should back up that assertion with a substantial body of evidence. Otherwise, they are just using fear appeal arguments and apocalyptic rhetoric to needlessly scare people and shut down serious debate over issues that are likely far more complex and nuanced than their rhetoric suggests.


[For all my essays on "technopanics," moral panics, and threat inflation, see this compendium I have assembled.]

Tax Consequences of Net Neutrality Tue, 21 Oct 2014 21:36:00 +0000

Would the Federal Communications Commission expose broadband Internet access services to tax rates of at least 16.1% of every dollar spent on international and interstate data transfers—and averaging 11.23% on transfers within a particular state and locality—if it reclassifies broadband as a telecommunications service pursuant to Title II of the Communications Act of 1934?

As former FCC Commissioner Harold Furchtgott-Roth notes in a recent Forbes column, the Internet Tax Freedom Act only prohibits state and local taxes on Internet access.  It says nothing about federal user fees.  The House Energy & Commerce Committee report accompanying the “Permanent Internet Tax Freedom Act” (H.R. 3086) makes this distinction clear.

The law specifies that it does not prohibit the collection of the 911 access or Universal Service Fund (USF) fees. The USF is imposed on telephone service rather than Internet access anyway, although the FCC periodically contemplates broadening the base to include data services.

The USF fee applies to all interstate and international telecommunications revenues.  If the FCC reclassifies broadband Internet access as a telecommunications service in the Open Internet Proceeding, the USF fee would automatically apply unless and until the commission concluded a separate rulemaking proceeding to exempt Internet access.  The Universal Service Contribution Factor is not insignificant. Last month, the commission increased it to 16.1%.  According to Furchtgott-Roth,

At the current 16.1% fee structure, it would be perhaps the largest, one-time tax increase on the Internet.  The FCC would have many billions of dollars of expanded revenue base to fund new programs without, according to the FCC, any need for congressional authorization.

In another Forbes column, Steve Pociask discusses the possibility that reclassification could also trigger state and local taxes.  The committee report notes that if Congress allows the Internet access tax moratorium (which expires on Dec. 11, 2014) to lapse, states and localities could impose a crippling burden on the Internet.

In 2007, the average tax rate on communications services was 13.5%, more than twice the rate of 6.6% on all other goods and services. Some rates even exceed sin tax rates. For example, in Jacksonville, Florida, households pay 33.24% wireless taxes, higher than beer (19%), liquor (23%) and tobacco (28%). Moreover, these tax burdens fall heavier on low income households. They pay ten times as much in communications taxes as high income households as a share of income.  (citation omitted.)

For more information on state and local taxation of communications services, see, e.g., the report from the Tax Foundation on wireless taxation that came out this month.

The House committee report also notes that broadband Internet access is highly price-elastic, which means that higher taxes would be economically inefficient.

former White House Chief economist Austan Goolsbee authored a paper finding the average elasticity for broadband to be 2.75. Elasticity is a measure of price sensitivity and here indicates that a $1.00 increase in Internet access taxes would reduce expenditures on those services by an average of $2.75.  (citation omitted.)
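Elasticity estimates like Goolsbee’s 2.75 are conventionally read in percentage terms: each 1% increase in price reduces quantity demanded by roughly 2.75%. A minimal illustration of that conventional reading, with hypothetical dollar figures:

```python
# Illustrative demand response for a price-elastic good (elasticity = 2.75,
# Goolsbee's estimate for broadband). All dollar figures are hypothetical.

ELASTICITY = 2.75  # |% change in quantity| per 1% change in price

def demand_change_pct(price_change_pct: float) -> float:
    """Approximate % change in quantity demanded for a given % price change."""
    return -ELASTICITY * price_change_pct

# A tax that raises a hypothetical $50/month bill to $52.50 is a 5% increase...
print(demand_change_pct(5.0))  # -13.75, i.e. roughly 13.75% fewer subscriptions
```

An elasticity above 1 means any new tax shrinks the base it is levied on by proportionally more than the rate increase, which is what makes the committee’s “economically inefficient” conclusion plausible.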

Even if the Internet Tax Freedom Act is renewed by the lame duck Congress, the act isn’t exactly a model of clarity on this issue. The definition (see Sec. 1104) of “Internet access,” for example, specifically excludes telecommunications services.

INTERNET ACCESS.—The term ‘‘Internet access’’ means a service that enables users to access content, information, electronic mail, or other services offered over the Internet, and may also include access to proprietary content, information, and other services as part of a package of services offered to users. Such term does not include telecommunications services.

And the definition of “telecommunications service” (also in Sec. 1104) is the same one (by cross-reference) that the FCC may try to interpret as including broadband.

TELECOMMUNICATIONS SERVICE.—The term ‘‘telecommunications service’’ has the meaning given such term in section 3(46) of the Communications Act of 1934 (47 U.S.C. 153(46)) and includes communications services (as defined in section 4251 of the Internal Revenue Code of 1986).

When the Internet Tax Freedom Act was enacted in 1998, Congress took great pains not to jeopardize the pre-existing authority of state and local governments to levy substantial taxes on telecommunications carriers. Nevertheless, state and local tax collectors have been fighting the moratorium ever since.  Their point of view was summarized by Michael Mazerov in The Hill as follows:

Beyond costing states the $7 billion a year in potential revenue to support education, healthcare, roads, and other services, the bill would violate an understanding between Congress and the states dating back to the 1998 Internet Tax Freedom Act (ITFA): that any ban on applying sales taxes to Internet access charges would be temporary and not apply to existing access taxes.

The House passed H.R. 3086 in July. The “Internet Tax Freedom Forever Act” (S. 1431) is pending in the Senate Finance Committee.  Even if Congress renews the Internet Tax Freedom Act but fails to clarify the definitions in current law, there is a distinct possibility that state and local tax collectors will test the limits of the law if and when the FCC rules that broadband is no different than a Title II telecommunications service.

Driverless Cars, Privacy & Security: Event Video & Talking Points Mon, 20 Oct 2014 19:23:01 +0000

Last week, it was my pleasure to speak at a Cato Institute event on “The End of Transit and the Beginning of the New Mobility: Policy Implications of Self-Driving Cars.” I followed Cato Institute Senior Fellow Randal O’Toole and Marc Scribner, a Research Fellow at the Competitive Enterprise Institute. They provided a broad and quite excellent overview of all the major issues at play in the debate over driverless cars. I highly recommend you read the excellent papers that Randal and Marc have published on these issues.

My role on the panel was to do a deeper dive into the privacy and security implications of not just the autonomous vehicles of our future, but also the intelligent vehicle technologies of the present. I discussed these issues in greater detail in my recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars,” which was co-authored with Ryan Hagemann. (That article will appear in a forthcoming edition of the Wake Forest Journal of Law & Policy.)  I’ve embedded the video of the event down below (my remarks begin at the 38:15 mark) as well as my speaking notes. Again, please consult the longer paper for details.


The privacy & security implications of self-driving cars are already driving public policy concerns because of the amount of data they collect. Here are a few things we should keep in mind as we consider new regulations for these technologies:

1)      Security & privacy are relative concepts with amorphous boundaries

  • Not everyone places the same value on security & privacy; these assessments are very subjective
  • Some people are hyper-cautious about security or hyper-sensitive about their privacy; others are risk-takers or are just somewhat indifferent (or pragmatic) about these things

2)      Security & privacy norms can and often do evolve very rapidly over time

  • With highly disruptive technologies, we tend to panic first but then move to a new plateau with new ethical and legal baselines
  • [I’ve written about this in my recent law review articles on privacy and security]
  • The familiar cycle at work: initial resistance, gradual adaptation, eventual assimilation
  • This was true for the first cars a century ago; true today as well

3)      For almost every perceived privacy or security harm, there is a corresponding consumer benefit that may outweigh the feared harm

  • We see this reality at work with the broader Internet & we will see it at work with intelligent vehicles
  • Ex: Compare vehicle telematics to locational tracking technologies for smartphones
  • In both contexts, locational tracking raises rather obvious privacy considerations
  • But locational tracking has many benefits, and some services (e.g., real-time traffic data) could not exist without it
  • “tracking” concerns may dissipate for cars as they did for smartphones (but not evaporate!)

4)      As it pertains to intelligent vehicle technologies, today’s security & privacy concerns are not the same as yesterday’s, and they will not be the same as tomorrow’s either.

  • Today’s “intelligent vehicle” technology privacy issues may be more concerning than tomorrow’s issues for fully autonomous vehicles
  • today’s on-board EDRs & telematics may cause more privacy concerns for us as drivers than tomorrow’s technologies
  • ex: concerns about tailored insurance & automated law enforcement
  • That may lead to some privacy concerns in the short-term (or fears of “discrimination”)
  • BUT… What happens when cars are no longer a final good but merely a service for hire? (i.e., What happens when we combine Sharing Economy w/ self-driving cars?)
  • Car of future = robotic chauffeur (like Uber + Zip Car)
  • Old privacy concerns will evolve rapidly; security likely to become bigger concern

5)      Any security & privacy solutions must take these realities into account in order to be successful and those solutions must also accommodate the need to balance many different values and interests simultaneously.

  • There are no silver bullet solutions to privacy & security problems
  • + it will be difficult for law to keep up with pace of innovations
  • Therefore, We need a flexible, “layered approach” with many different solutions

we need “simple rules for a complex world” (Richard Epstein) 

  • Contracts / enforce Terms of Service
  • Common law / torts / products liability
  • see excellent new Brookings paper by John Villasenor: “when confronted with new, often complex, questions involving products liability, courts have generally gotten things right. . . . Products liability law has been highly adaptive to the many new technologies that have emerged in recent decades, and it will be quite capable of adapting to emerging autonomous vehicle technologies as the need arises.”
  • liability norms & insurance standards will evolve rapidly as cars move from final good to service
  • “least-cost avoider” implications (the more you know, the more responsible you become)

Privacy & Security “by design” (“Baking-in” best practices)

  • Data collection minimization
  • Limit sharing w 3rd parties
  • Transparency about all data collection and use practices
  • Clear consent for new uses
  • see Future of Privacy Forum best practices for intelligent vehicle tech providers
  • this is already happening (GAO report noted 10 smart car tech makers already doing so)
  • Hopefully some firms compete on privacy & exceed these standards for those who want it
  • And hopefully privacy & security advocates develop tools to better safeguard these values, again for those who want more protection

 Query: But shouldn’t there be some minimal standards? Federal or state regulation?

  • Things moving too quick; hard for law to keep pace w/o limiting innovation opportunities
  • The flexible approach and methods I just listed are better suited to evolve with the cases and controversies that pop up along the way
  • it is better to utilize a “wait and see” strategy & see if serious & persistent problems develop that require regulatory remedies; but don’t lead with preemptive, precautionary controls
  • “permissionless innovation” should remain our default policy position
  • Ongoing experimentation should be permitted not just with technology in general, but also with privacy and security solutions and standards
  • In sum… avoid One Size Fits All solutions

6)      Special consideration should be paid to government actions that affect user privacy

  • Whereas many of the privacy and security concerns involving private data collection can be handled using the methods discussed previously, governmental data collection raises different issues
  • Private entities cannot fine, tax, or imprison us since they lack the coercive powers governments possess.
  • Moreover, although it is possible to ignore or refuse to be a part of various private services, the same is not true for governments, whose grasp cannot be evaded.
  • Thus, special protections are needed when law enforcement agencies and officials seek access to the data these technologies generate.
  • When government seeks access to privately-held data collected from these technologies, strong constitutional and statutory protections should apply.
  • We need stronger 4th Amendment constraints
  • Courts should revisit the “third-party doctrine,” which holds that an individual sacrifices their Fourth Amendment interest in their personal information when they divulge it to a third party, even if that party has promised to safeguard that data.


ITU agrees to open access for Plenipot contributions Mon, 20 Oct 2014 14:38:13 +0000

Good news! As the ITU’s Plenipotentiary Conference gets underway in Busan, Korea, the heads of delegation have met and decided to open up access to some of the documents associated with the meeting. At this time, it is only the documents that are classified as “contributions”—other documents such as meeting agendas, background information, and terms of reference remain password protected. It’s not clear yet whether that is an oversight or an intentional distinction. While I would prefer all documents to be publicly available, this is a very welcome development. It is gratifying to see the ITU membership taking transparency seriously.

Special thanks are due to ITU Secretary-General Hamadoun Touré. When Jerry Brito and I launched WCITLeaks in 2012, at first, the ITU took a very defensive posture. But after the WCIT, the Secretary-General demonstrated tremendous leadership by becoming a real advocate for transparency and reform. I am told that he was instrumental in convincing the heads of delegation to open up access to Plenipot documents. For that, Dr. Touré has my sincere thanks—I would be happy to buy him a congratulatory drink when I arrive in Busan, although I doubt his schedule would permit it.

It’s worth noting that this decision only applies to the Plenipotentiary conference. The US has submitted a proposal, to be considered at the conference, that would make something like this arrangement permanent by instructing the incoming Secretary-General to develop a policy of open access to all ITU meeting documents. That is a development that I will continue to watch closely.

More evidence that ‘SOPA for Search Engines’ is a bad idea Fri, 17 Oct 2014 14:30:54 +0000

Although SOPA was ignominiously defeated in 2012, the content industry never really gave up on the basic idea of breaking the Internet in order to combat content piracy. The industry now claims that a major cause of piracy is search engines returning results that direct users to pirated content. To combat this, they would like to regulate search engine results to prevent them from linking to sites that contain pirated music and movies.

This idea is problematic on many levels. First, there is very little evidence that content piracy is a serious concern in objective economic terms. Most content pirates would not, but for the availability of pirated content, empty their wallets to incentivize the creation of more movies and music. As Ian Robinson and I explain in our recent paper, industry estimates of the jobs created by intellectual property are absurd. Second, there are serious free speech implications associated with regulating search engine results. Search engines perform an information distribution role similar to that of newspapers, and they have an editorial voice. They deserve protection from censorship as long as they are not hosting the pirated material themselves. Third, as anyone who knows anything about the Internet knows, nobody uses the major search engines to look for pirated content. The serious pirates go straight to sites that specialize in piracy. Fourth, this is all part of a desperate attempt by the content industry to avoid modernizing and offering more of their content online through convenient packages such as Netflix.

As if these were not sufficient reason to reject the idea of “SOPA for Search Engines,” Google has now announced that they will be directing users to legitimate digital content if it is available on Netflix, Amazon, Google Play, Spotify, or other online services. The content industry now has no excuse—if they make their music and movies available in convenient form, users will see links to legitimate content even if they search for pirated versions.


Google also says they will be using DMCA takedown notices as an input into search rankings and autocomplete suggestions, demoting sites and terms that are associated with piracy. This is above and beyond what Google needs to do, and in fact raises some concerns about fraudulent DMCA takedown notices that could chill free expression—such as when CBS issued a takedown of John McCain’s campaign ad on YouTube even though it was likely legal under fair use. Google will have to carefully monitor the DMCA takedown process for abuse. But in any case, these moves by Google should once and for all put the nail in the coffin of the idea that we should compromise the integrity of search results through government regulation for the sake of fighting a piracy problem that is not that serious in the first place.

How to Destroy American Innovation: The FAA & Commercial Drones Mon, 06 Oct 2014 14:56:38 +0000

If you want a devastating portrait of how well-intentioned regulation sometimes has profoundly deleterious unintended consequences, look no further than the Federal Aviation Administration’s (FAA) current ban on commercial drones in domestic airspace. As Jack Nicas reports in a story in today’s Wall Street Journal (“Regulation Clips Wings of U.S. Drone Makers“), the FAA’s heavy-handed regulatory regime is stifling America’s ability to innovate in this space and remain competitive internationally. As Nicas notes:

as unmanned aircraft enter private industry—for purposes as varied as filming movies, inspecting wind farms and herding cattle—many U.S. drone entrepreneurs are finding it hard to get off the ground, even as rivals in Europe, Canada, Australia and China are taking off.

The reason, according to interviews with two-dozen drone makers, sellers and users across the world: regulation. The FAA has banned all but a handful of private-sector drones in the U.S. while it completes rules for them, expected in the next several years. That policy has stifled the U.S. drone market and driven operators underground, where it is difficult to find funding, insurance and customers.

Outside the U.S., relatively accommodating policies have fueled a commercial-drone boom. Foreign drone makers have fed those markets, while U.S. export rules have generally kept many American manufacturers from serving them.

Of course, the FAA simply responds that it is looking out for the safety of the skies and that we shouldn’t blame it. Again, there’s no doubt that the agency’s hyper-cautious approach to commercial drone integration is based on the best of intentions. But as we’ve noted here again and again, the best of intentions don’t count for much (or at least shouldn’t) when stacked against real-world evidence and results. And the results in this case are quite troubling.

An article last week from Alan McQuinn of the Information Technology and Innovation Foundation (“Commercial Drone Companies Fly Away from FAA Regulations, Go Abroad“) documented how problematic this situation has become:

With no certainty surrounding a timeline, limited access to exemptions, and a dithering pace for setting its rules, the FAA is slowing innovation. . . .  These overbearing rules have pushed U.S. companies to move their drone research and development projects to more permissive nations, such as Australia, where Google chose to test its drones. Australia’s Civil Aviation Safety Authority, the agency in charge of commercial drones, offers a great example of unrestrictive regulations. While it has not yet finalized its drone laws, it still allows companies and citizens to test and use these technologies under certain rules. Instead of forcing companies to reveal their technologies at government test sites, it allows them to test outdoors if they receive an operator’s certificate and submit their test area for approval. Australia’s more permissive nature shows how a country can allow innovation to thrive while simultaneously examining it for potential safety concerns.

The Wall Street Journal’s Nicas similarly observes that foreign innovators are already taking advantage of America’s regulatory mistakes to leapfrog us in drone innovation. He reports that Germany, Canada, Australia and China are starting to move ahead of us, and quotes Steve Klindworth, head of a DJI drone retailer in Liberty Hill, Texas, who says that if the United States doesn’t soon adopt a more sensible drone policy, “It’ll reach a point of no return where American companies won’t ever be able to catch up.”

In essence, the United States is adopting the exact opposite of the approach it took a generation ago for the Internet and digital technology. I’ve written recently about how “permissionless innovation” powered the Information Revolution and helped American companies become the envy of the globe. (See my essay, “Why Permissionless Innovation Matters,” for more details and data.) That happened because America got policy right, whereas other countries either tried to micromanage the Information Revolution into existence or adopted policies that actively stifled it. (See my recent book on this subject for more discussion.)

Now we see this story playing out in reverse with commercial drones. The FAA is adopting a hyper-precautionary position that holds back innovation based on worst-case scenarios. Certainly the safety of the national airspace is a vital matter. But shutting down all other aerial innovation in the meantime is completely unreasonable. As I wrote in a filing to the FAA with my Mercatus Center colleagues Eli Dourado and Jerry Brito last year:

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators. We therefore urge the FAA not to impose any prospective restrictions on the use of commercial UASs without clear evidence of actual, not merely hypothesized, harm.

Countless life-enriching innovations are being sacrificed because of the FAA’s draconian policy. (Below I have embedded a video, taped earlier this year, of me discussing those innovations with John Stossel.) New industry sectors and many jobs are also being forgone. It’s time for the FAA to get moving and open up the skies to drone innovation. Congress should be pushing the agency harder on this front, since the FAA seems determined to ignore the law that requires it to integrate commercial drones into the nation’s airspace.

Additional Reading

Trust (but verify) the engineers – comments on Transatlantic digital trade Sun, 28 Sep 2014 18:29:33 +0000

Last week, I participated in a program co-sponsored by the Progressive Policy Institute, the Lisbon Council, and the Georgetown Center for Business and Public Policy on “Growing the Transatlantic Digital Economy.”

The complete program, including keynote remarks from EU VP Neelie Kroes and U.S. Under Secretary of State Catherine A. Novelli, is available below.

My remarks reviewed worrying signs of old-style interventionist trade practices creeping into the digital economy in new guises, and urged traditional governments to stay the course (or correct it) on leaving the Internet ecosystem largely to its own organic forms of regulation and market correctives:

Vice President Kroes’s comments underscore an important reality about innovation and regulation. Innovation, thanks to exponential technological trends including Moore’s Law and Metcalfe’s Law, gets faster and more disruptive all the time, a phenomenon my co-author and I have dubbed “Big Bang Disruption.”

Regulation, on the other hand, proceeds at the same pace it always has (at best). Even the most well-intentioned regulators, and I certainly include Vice President Kroes in that list, find in retrospect that interventions aimed at heading off possible competitive problems and potential consumer harms rarely achieve their objectives and, indeed, often generate more harmful unintended consequences.

This is not a failure of government. The clock speeds of innovation and regulation are simply different, and diverging faster all the time. The Internet economy has been governed from its inception by the engineering-driven multistakeholder process embodied in the task forces and standards groups that operate under the umbrella of the Internet Society. Innovation, for better or for worse, is regulated more by Moore’s Law than by traditional law.

I happen to think the answer is “for better,” but I am not one of those who take that view to the extreme of arguing that there is no place for traditional governments in the digital economy. Governments have played, and continue to play, an essential part in laying the legal foundations for the remarkable growth of that economy and in providing incentives, if not outright funding, for basic research that might not otherwise find investors.

And when genuine market failures appear, traditional regulators can and should step in to correct them as efficiently and narrowly as they can. Sometimes this has happened. Sometimes it has not.

Where in particular I think regulatory intervention is least effective and most dangerous is in regulating ahead of problems—in enacting what the FCC calls “prophylactic rules.” The effort to create legally sound Open Internet regulations in the U.S. has faltered repeatedly, yet in the interim investment in both infrastructure and applications continues at a rapid pace—far outstripping the rest of the world.

The results speak for themselves. U.S. companies dominate the digital economy, and, as Prof. Christopher Yoo has definitively demonstrated, U.S. consumers overall enjoy the best wired and mobile infrastructure in the world at competitive prices.

At the same time, those who continue to pursue interventionist regulation in this area often have hidden agendas. Let me give three examples:

1.  As we saw earlier this month at the Internet Governance Forum, which I attended along with Vice President Kroes and 2,500 other delegates, representatives of the developing world were told by so-called consumer advocates from the U.S. and the EU that they must reject so-called “zero rated” services, in which mobile network operators partner with service providers including Facebook, Twitter and Wikimedia to offer their popular services to new Internet users without that usage counting against data caps.

Zero rating is an extremely popular tool for helping the two-thirds of the world’s population not currently on the Internet get connected and, likely, graduate from these services to many others. But such services violate the “principle” of neutrality that has mutated from an engineering concept into a nearly religious conviction. And so zero rating must be sacrificed, along with the users who are too poor to otherwise join the digital economy.

2.  Closer to home, we see the wildly successful Netflix service making a play to hijack the Open Internet debate and turn it into one about back-end interconnection, peering, and transit—engineering arrangements that work so well that 99% of the agreements between networks, according to the OECD, aren’t even written down.

3.  And in Europe, there are other efforts to turn the neutrality principle on its head, using it as a hammer not to regulate ISPs but to slow the progress of leading content and service providers, including Apple, Amazon and Google, which have what the French Digital Council and others refer to as non-neutral “platform monopolies” that must be broken up.

To me, these are in fact new faces on very old strategies—colonialism, rent-seeking, and protectionist trade warfare respectively. My hope is that Internet users—an increasingly powerful and independent source of regulatory discipline in the Internet economy—will see these efforts for what they truly are…and reject them resoundingly.

The more we trust (but also verify) the engineers, the faster the Internet economy will grow, both in the U.S. and Europe, and the more our trade in digital goods and services will strengthen the ties between our traditional economies. This approach has worked brilliantly for almost two decades.

The alternatives, not so much.

WCITLeaks is Ready for Plenipot Fri, 26 Sep 2014 19:23:16 +0000

The ITU is holding its quadrennial Plenipotentiary Conference in Busan, South Korea from October 20 to November 7, 2014. The Plenipot, as it is called, is the ITU’s “supreme organ” (a funny term that I did not make up). It represents the highest level of decision making at the ITU. As it has for the last several ITU conferences, WCITLeaks will host leaked documents related to the Plenipot.

For those interested in transparency at the ITU, two interesting developments are worth reporting. On the first day of the conference, the heads of delegation will meet to decide whether documents related to the conference should be available to the public directly through the TIES system, without a password. All of the documents associated with the Plenipot are already available in English on WCITLeaks, but direct public access would have the virtue of including those around the world who do not speak English but do speak one of the other official UN languages. Given this additional benefit of inclusion, I hope that the heads of delegation will seriously consider the advantages of adopting a more open model for document access during this Plenipot. If you would like to contact the head of delegation for your country, you can find their names in this document. A polite email asking them to support open access to ITU documents might not hurt.

In addition, at the meeting, the ITU membership will consider a proposal from the United States to, as a rule, provide open access to all meeting documents.


This is what WCITLeaks has always supported—putting ourselves out of business. As the US proposal notes, the ITU Secretariat has conducted a study finding that other UN agencies are much more forthcoming in terms of public access to their documents. A more transparent ITU is in everyone’s interest—including the ITU’s. This Plenipot has the potential to remedy a serious deficiency with the institution; I’m cheering for them and hoping they get it right.

The Debate over the Sharing Economy: Talking Points & Recommended Reading Fri, 26 Sep 2014 15:40:11 +0000

The sharing economy is growing faster than ever and becoming a hot policy topic these days. I’ve been fielding a lot of media calls lately about the nature of the sharing economy and how it should be regulated. (See latest clip below from the Stossel show on Fox Business Network.) Thus, I sketched out some general thoughts about the issue and thought I would share them here, along with some helpful additional reading I have come across while researching the issue. I’d welcome comments on this outline as well as suggestions for additional reading. (Note: I’ve also embedded some useful images from Jeremiah Owyang of Crowd Companies.)

1) Just because policymakers claim that regulation is meant to protect consumers does not mean it actually does so.

  1. Cronyism/ Rent-seeking: Regulation is often “captured” by powerful and politically well-connected incumbents and used to their own benefit. (+ Lobbying activity creates deadweight losses for society.)
  2. Innovation-killing: Regulations become a formidable barrier to new innovation, entry, and entrepreneurship.
  3. Unintended consequences: Instead of lower prices & better service, regulation often yields the opposite: higher prices & lower-quality service. (Example: requiring all cabs to be painted the same color destroys branding & the ability to differentiate.)

2) The Internet and information technology alleviate the need for top-down regulation & actually do a better job of serving consumers.

  1. Ease of entry/innovation in online world means that new entrants can come in to provide better options and solve problems previously thought to be unsolvable in the absence of regulation.
  2. Informational empowerment: The Internet and information technology solve the old problem of consumers’ lack of access to information about products and services, giving them monitoring tools to find more and better choices (i.e., lowering both search costs & transaction costs). (“To the extent that consumer protection regulation is based on the claim that consumers lack adequate information, the case for government intervention is weakened by the Internet’s powerful and unprecedented ability to provide timely and pointed consumer information.” – John C. Moorhouse)
  3. Feedback mechanisms (product & service rating / review systems) create powerful reputational incentives for all parties involved in transactions to perform better.
  4. Self-regulating markets: The combination of these three factors results in a powerful check on market power or abusive behavior. The result is reasonably well-functioning and self-regulating markets. Bad actors get weeded out.
  5. Law should evolve: When circumstances change dramatically, regulation should as well. If traditional rationales for regulation evaporate, or new technology or competition alleviates need for it, then the law should adapt.
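The reputational check in point 3 can be made concrete. Many review systems damp a seller’s score toward a site-wide prior, so a single glowing review moves the needle only modestly while a sustained pattern of complaints is decisive. The sketch below is an illustrative formula of this kind, not any particular site’s actual algorithm:

```python
def bayesian_average(ratings, prior_mean=3.0, prior_weight=5):
    """Blend a seller's own ratings with a site-wide prior so that a few
    reviews shift the score only modestly, while a sustained pattern of
    bad service drags it down decisively."""
    total = sum(ratings) + prior_mean * prior_weight
    count = len(ratings) + prior_weight
    return total / count

newcomer = bayesian_average([5.0])        # one glowing review
bad_actor = bayesian_average([1.0] * 40)  # consistent complaints
print(round(newcomer, 2), round(bad_actor, 2))  # 3.33 1.22
```

The newcomer’s single 5-star review yields only a middling 3.33, while forty 1-star reviews pin the bad actor near the bottom, which is the “weeding out” dynamic described above.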

3) Sharing economy has demonstrably improved consumer welfare. It provides:

  1. more choices / competition
  2. more service innovation / differentiation
  3. better prices
  4. higher-quality services (safety & cleanliness / convenience / peace of mind)
  5. better options & conditions for workers

4) If we need to “level the (regulatory) playing field,” best way to do so is by “deregulating down” to put everyone on equal footing; not by “regulating up” to achieve parity.

  1. Regulatory asymmetry is real: Incumbents are right that they are at disadvantage relative to new sharing economy start-ups.
  2. Don’t punish new innovations for it: But the solution is not simply to roll the old regulatory regime onto the new innovators.
  3. Parity through liberalization: Instead, policymakers should “deregulate down” to achieve regulatory parity. Loosen old rules on incumbents as new entrants challenge status quo.
  4. “Permissionless innovation” should trump “precautionary principle” regulation: Preemptive, precautionary regulation does not improve consumer welfare. Competition and choice do better. Thus, our default position toward the sharing economy should be “innovation allowed” or permissionless innovation.
  5. Alternative remedies exist: Accidents will always happen, of course. But insurance, contracts, product liability, and other legal remedies exist when things go wrong. The difference is that ex post remedies don’t discourage innovation and competition like ex ante regulation does. By trying to head off every hypothetical worst-case scenario, preemptive regulations actually discourage many best-case scenarios from ever coming about.

5) Bottom line = Good intentions only get you so far in this world.

  1. Just because a law was put on the books for noble purposes does not mean it really accomplished those goals, or still does so today.
  2. Markets, competition, and ongoing innovation typically solve problems better than law when we give them a chance to do so.

[P.S. On 9/30, my Mercatus Center colleague Matt Mitchell posted this excellent follow-up essay building on my outline and improving it greatly.]


Why People Use Sharing Services (Source: Jeremiah Owyang, Crowd Companies)


Additional Reading

Net Neutrality and the Dangers of Title II Fri, 26 Sep 2014 14:40:32 +0000

There are several “flavors” of net neutrality–Eli Noam at Columbia University estimates there are seven distinct meanings of the term–but most net neutrality proponents agree that reinterpreting the 1934 Communications Act and “classifying” Internet service providers as Title II “telecommunications” companies is the best way forward. Proponents argue that ISPs are common carriers and therefore should be regulated much like common carrier telephone companies. Last week I filed a public interest comment about net neutrality and pointed out why the Title II option is unwise and possibly illegal.

For one, courts have defined “common carriers” in such a way that ISPs don’t look much like common carriers. It’s also unlikely that ISPs can be classified as telecommunications providers, because Congress defines “telecommunications” as the transmission of information “between or among points specified by the user.” Phone calls are telecommunications because callers select the endpoint–a person associated with a known phone number. Even simple web browsing, however, requires substantial processing by an ISP, which often coordinates several networks, servers, and routers to bring the user the correct information, say, a Wikipedia article or Netflix video. Under normal circumstances, this process is completely opaque to the user. By classifying ISPs as common carriers and telecommunications providers, therefore, the FCC would invite immense legal risk.
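The “points specified by the user” distinction is easy to see concretely: a web user supplies only a name, and which machine actually answers is worked out by DNS, CDNs, and the ISP’s routing. A minimal sketch (the URL is just an illustration, and real resolved addresses vary by resolver and location):

```python
from urllib.parse import urlparse

# The user "specifies" only a URL: a name, not a network endpoint.
url = "https://en.wikipedia.org/wiki/Net_neutrality"
host = urlparse(url).hostname
print(host)  # en.wikipedia.org

# Turning that name into an actual endpoint is the network's job.
# A live lookup such as socket.getaddrinfo(host, 443) typically returns
# the address of a nearby CDN node, and different users get different
# answers, which is exactly why the "points" are not user-specified.
```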

As I’ve noted before, prioritized data can provide consumer benefits, and stringent net neutrality rules would harm the development of new services on the horizon. Title II–in making the Internet more “neutral”–is anti-progress, akin to trying to put the toothpaste back in the tube. The Internet has never been neutral, as computer scientist David Clark and others point out, and it’s getting less neutral all the time. VoIP phone service is already prioritized for millions of households. VoLTE will do the same for wireless phone customers.
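The prioritization already in place for VoIP rides on ordinary, standardized QoS machinery rather than anything exotic. As a rough sketch, an application can request “Expedited Forwarding” treatment simply by setting the DSCP bits on its socket; whether any given network honors the marking is up to that network:

```python
import socket

# DSCP "Expedited Forwarding" (EF, decimal 46) occupies the top six bits
# of the IP ToS byte, so it is shifted left by two: 46 << 2 == 184.
EF_TOS = 46 << 2

# Mark a UDP socket (the transport VoIP media typically uses) for EF.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

Routers along the path may honor, remark, or ignore these bits; the point is simply that differential treatment is a routine capability of the network, not a departure from how the Internet works.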

It’s a largely unreported story that many of the most informed net neutrality proponents, including President Obama’s former chief technology officer, are fine with so-called “fast lanes”–particularly if it’s the user, not the ISP, selecting the services to be prioritized. There is general agreement that prioritized services are demanded by consumers, but Title II would have a predictable chilling effect on new services because of the regulatory burdens.

MetroPCS, for example, a small wireless carrier with about 3% market share, attempted to sell a purportedly non-neutral phone plan that allowed unlimited YouTube viewing and was pilloried for it by net neutrality proponents. MetroPCS, chastened, dropped the plan. Under Title II, a small ISP or wireless carrier wouldn’t dream of attempting such a thing.

In the comment, I note other undesirable effects of Title II, including that it undermines the position the US has publicly held for years that the Internet is different from traditional communications.

If the FCC further intermingles traditional telecommunications with broadband, it may increase the probability of the [International Telecommunications Union] extending sender-pays or other tariffing and tax rules to the exchange of Internet traffic. Several countries proposed instituting sender-pays at a contentious 2012 ITU forum and the United States representatives vigorously fought sender-pays for the Internet. Many developing countries, particularly, would welcome such a change in regulations, because, as Mercatus scholar Eli Dourado found, sender-pays rules “allow governments to export some of their statutory tax burden.” New foreign tariffing rules would function essentially as a transfer of wealth from popular US-based companies like Facebook and Google to corrupt foreign governments and telephone cartels.

Finally, I note that classifying ISPs as common carriers weakens the enforcement of antitrust and consumer protection laws. Generally, it is difficult to bring antitrust lawsuits in extensively regulated industries. After filing my comment, I learned that the FTC also filed a comment noting, similarly, that its Section 5 authority would be limited if the FCC goes the Title II route. Brian Fung and others have since written about this interesting political and legal development. This detrimental effect on antitrust enforcement should weigh against Title II regulation.

There are substantial drawbacks to Title II regulation of ISPs, and the FCC should exercise regulatory humility and maintain its traditional hands-off approach to the Internet. In the end, Title II would harm investment in nascent technologies and network upgrades. The harms to consumers and small carriers, particularly, would be immense. It almost makes one think that comedy sketches and “death of the Internet” reporting don’t lead to good public policy.

More Information

See my presentation (36 minutes) on net neutrality and “fast lanes” on the Mercatus website.

Don’t Ban In-Flight Calls; Allow Experimentation Tue, 23 Sep 2014 22:13:09 +0000

According to this article by Julian Hattem in The Hill (“Lawmakers warn in-flight calls could lead to fights“), 77 congressional lawmakers have sent a letter to the heads of four federal agencies warning them not to allow people to have in-flight cellphone conversations on the grounds that it “could lead to heated arguments among passengers that distract officials’ attention and make planes less safe.”  The lawmakers say “arguments in an aircraft cabin already start over mundane issues, like seat selection and overhead bin space, and the volume and pervasiveness of voice communications would only serve to exacerbate and escalate these disputes.” They’re also concerned that it may distract passengers from important in-flight announcements.

Well, I think I speak for a lot of other travelers when I say I find the idea of gabby passengers — whether on a phone or just among themselves — insanely annoying. For those of us who value peace and quiet and find airline travel to be among the most loathsome of experiences to begin with, it might be tempting to sympathize with this letter and just say, “Sure, go ahead and make this a federal problem and solve this for us with an outright ban.”

But isn’t there a case to be made here for differentiation and choice over yet another one-size-fits-all mandate? Why must we have federal lawmakers or bureaucrats dictating that every flight be the same? I don’t get that. After all, enough of us would be opposed to in-flight calls that we would likely pressure airlines not to offer many of them. But perhaps a few flights or routes might be “business traveler”-oriented and offer this option to those who want it. Or perhaps some airlines would restrict calling to certain areas of the cabin, or limit when calls could occur.

I dealt with a similar issue back in 2007 when Democratic representative Heath Shuler of North Carolina, along with several cosponsors, introduced the “Family Friendly Flights Act,” which would have required that airlines create “child safe viewing areas”: no publicly viewable TV screens would air violent programming within ten rows of the designated zones. The act defined “violent programming” as any movie originally rated PG-13 or above, or any television show rated PG-V or PG-14-V or above.

As I noted at the time, it was somewhat easy for me to sympathize with other parents of young children who didn’t want them seeing violent fare during flights. However, I noted that there were some alternatives to government censorship of in-flight films, including privacy film to cover the screens or “no-video” flight options. The law never passed and instead what we got was airlines toning down some of the more violent and racy stuff because of public pressure.

I think that same sort of “social pressure / social norms” response would deter some of the most egregious behavior by passengers who used cell phones to carry on conversations. After all, legislators are certainly correct when they assert that tensions already run high over lesser matters inside the cabin of planes (like bin space and reclining seats). But somehow we get by without new laws on that front.

So, instead of always looking first to one-size-fits-all mandates to solve such problems, perhaps a little experimentation and differentiation among carriers could yield better solutions. Sure, I know that’s not easy because of the relatively standardized airplane cabin designs. But perhaps that could change, too. Is it really all that unthinkable that some carrier in the future might create segregated cabin areas for “connected class” vs. “quiet class,” for example? Why couldn’t some enterprising airline retrofit at least some of its planes to accommodate such travelers, either on the same plane or perhaps, if needed, on completely different flights catering to both types of travelers? And, again, let’s remember that a lot of airlines aren’t going to want to deal with any of this and, therefore, most of them will likely tightly self-regulate cell phone talking on their own.

The bottom line is that we should not foreclose experimentation and choice so hastily when competition might spur creative solutions to complex problems. Not everything needs to be a federal matter.


Filing to FAA on Drones & “Model Aircraft” Tue, 23 Sep 2014 20:37:00 +0000

Today, Ryan Hagemann and I filed comments with the Federal Aviation Administration (FAA) in its proceeding on the “Interpretation of the Special Rule for Model Aircraft.” This may sound like a somewhat arcane topic, but it is related to the ongoing policy debate over the integration of unmanned aircraft systems (UASs)—more commonly referred to as drones—into the National Airspace System. As part of the FAA Modernization and Reform Act of 2012, Congress required the FAA to come up with a plan by September 2015 to accomplish that goal. As part of that effort, the FAA is currently accepting comments on its enforcement authority over model aircraft. Because the distinction between “drones” and “model aircraft” is blurring rapidly, the outcome of this proceeding could influence the broader debate about drone policy in the United States.

In our comment to the agency, Hagemann and I discuss the need for a thorough review of the benefits and costs associated with this rule. We argue this is essential because airspace is poised to become a major platform for innovation if the agency strikes the right balance between safety and innovation. To achieve that goal, we stress the need for flexibility and humility in interpreting older standards, such as “line of sight” restrictions, as well as increasingly archaic “noncommercial” vs. “commercial” distinctions or “hobbyist” vs. “professional” designations.

We also highlight the growing tension between the agency’s current regulatory approach and the First Amendment rights of the public to engage in peaceful, information-gathering activities using these technologies. (Importantly, on that point, we attached to our comments a new Mercatus Center working paper by Cynthia Love, Sean T. Lawson, and Avery Holton entitled, “News from Above: First Amendment Implications of the Federal Aviation Administration Ban on Commercial Drones.” See my coverage of the paper here.)

Finally, Hagemann and I close by noting the important role that voluntary self-regulation and codes of conduct already play in governing proper use of these technologies. We also argue that other “bottom-up” remedies are available and should be used before the agency imposes additional restrictions on this dynamic, rapidly evolving space.

You can download the complete comment on the Mercatus Center website here. (Note: The Mercatus Center filed comments with the FAA earlier about the prompt integration of drones into the nation’s airspace. You can read those comments here.)

Problems with Precautionary Principle-Minded Tech Regulation & a Federal Robotics Commission Mon, 22 Sep 2014 15:55:03 +0000

If there are two general principles that unify my recent work on technology policy and innovation issues, they would be as follows. To the maximum extent possible:

  1. We should avoid preemptive and precautionary-based regulatory regimes for new innovation. Instead, our policy default should be innovation allowed (or “permissionless innovation”) and innovators should be considered “innocent until proven guilty” (unless, that is, a thorough benefit-cost analysis has been conducted that documents the clear need for immediate preemptive restraints).
  2. We should avoid rigid, “top-down” technology-specific or sector-specific regulatory regimes and/or regulatory agencies and instead opt for a broader array of more flexible, “bottom-up” solutions (education, empowerment, social norms, self-regulation, public pressure, etc.) as well as reliance on existing legal systems and standards (torts, product liability, contracts, property rights, etc.).

I was very interested, therefore, to come across two new essays that make opposing arguments and proposals. The first is this recent Slate oped by John Frank Weaver, “We Need to Pass Legislation on Artificial Intelligence Early and Often.” The second is Ryan Calo’s new Brookings Institution white paper, “The Case for a Federal Robotics Commission.”

Weaver argues that new robot technology “is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now.” In order to preemptively address concerns about new technologies such as driverless cars or commercial drones, “we need to legislate early and often,” Weaver says. Stated differently, Weaver is proposing “precautionary principle”-based regulation of these technologies. The precautionary principle generally refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

Calo argues that we need “the establishment of a new federal agency to deal with the novel experiences and harms robotics enables” since there exist “distinct but related challenges that would benefit from being examined and treated together.” These issues, he says, “require special expertise to understand and may require investment and coordination to thrive.”

I’ll address Weaver’s and Calo’s proposals in turn.

Problems with Precautionary Regulation

Let’s begin with Weaver’s proposed approach to regulating robotics and autonomous systems.

What Weaver seems to ignore—and which I discuss at greater length in my latest book—is that “precautionary” policy-making typically results in technological stasis and lost opportunities for economic and social progress. As I noted in my book, if we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon such fears—it means that best-case scenarios will never come about. Wisdom and progress are born from experience, including experiences that involve risk and the possibility of occasional mistakes and failures. As the old adage goes, “nothing ventured, nothing gained.”

More concretely, the problem with “permissioning” innovation is that traditional regulatory policies and systems tend to be overly-rigid, bureaucratic, costly, and slow to adapt to new realities. Precautionary-based policies and regulatory systems focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may not ever come about. As a result, preemptive bans or highly restrictive regulatory prescriptions can limit innovations that yield new and better ways of doing things.

Weaver doesn’t bother addressing these issues. He instead advocates regulating “early and often” without stopping to think through the potential costs of doing so. Yet, all regulation has trade-offs and opportunity costs. Before we rush to adopt rules based on knee-jerk negative reactions to new technology, we should conduct comprehensive benefit-cost analysis of the proposals and think carefully about what alternative approaches exist to address whatever problems we have identified.

Incidentally, Weaver also does not acknowledge the contradiction inherent in his thinking when he says robotic technology “is going to develop fast, almost certainly faster than we can legislate it. That’s why we need to get ahead of it now.” Well, if robotic technology is truly developing “faster than we can legislate it,” then “getting ahead of it” would be seemingly impossible! Unless, that is, he envisions regulating robotic technologies so stringently as to effectively bring new innovation to a grinding halt (or ban it altogether).

To be clear, my criticisms should not be read to suggest that zero regulation is the best option. There are plenty of thorny issues that deserve serious policy consideration and perhaps even some preemptive rules. But how potential harms are addressed matters deeply. We should exhaust all other potential nonregulatory remedies first — education, empowerment, transparency, etc. — before resorting to preemptive controls on new forms of innovation. In other words, ex post (or after the fact) solutions should generally trump ex ante (preemptive) controls.

I’ll say more on this point in the conclusion since my response addresses general failings in Ryan Calo’s Federal Robotics Commission proposal, to which we now turn.

Problems with a Federal Robotics Commission

Moving on to Calo, it is important to clarify what he is proposing because he is careful not to overstate his case in favor of a new agency for robotics. He elaborates as follows:

“The institution I have in mind would not “regulate” robotics in the sense of fashioning rules regarding their use, at least not in any initial incarnation. Rather, the agency would advise on issues at all levels—state and federal, domestic and foreign, civil and criminal—that touch upon the unique aspects of robotics and artificial intelligence and the novel human experiences these technologies generate. The alternative, I fear, is that we will continue to address robotics policy questions piecemeal, perhaps indefinitely, with increasingly poor outcomes and slow accrual of knowledge. Meanwhile, other nations that are investing more heavily in robotics and, specifically, in developing a legal and policy infrastructure for emerging technology, will leapfrog the U.S. in innovation for the first time since the creation of steam power.”

Here are some of my concerns with Calo’s proposed Federal Robotics Commission.

Will It Really Just Be an Advisory Body?

First, Calo claims he doesn’t want a formal regulatory agency, but something more akin to a super-advisory body. He does, however, sneak in the disclaimer that he doesn’t envision it being regulatory “at least not in any initial incarnation.” Perhaps, then, he is suggesting that more formal regulatory controls would be in the cards down the road. It remains unclear.

Regardless, I think it is a bit disingenuous to propose the formation of a new governmental body like this and pretend that it will not someday very soon come to possess sweeping regulatory powers over these technologies. Now, you may well feel that that is a good thing. But I fear that Calo is playing a bit of a game here by asking the reader to imagine his new creation would merely stick to an advisory role.

Regulatory creep is real. There just aren’t many examples of agencies being created solely for their advisory expertise that did not also get into the business of regulating the technology or topic included in the agency’s name. And in light of some of Calo’s past writing and advocacy, I can’t help but think he is actually hoping that the agency comes to take on a greater regulatory role over time. Regardless, I think we can bank on that happening, and there are reasons to worry about it, as noted above and as I will elaborate below.

Incidentally, if Calo is really more interested in furthering just this expert advisory capacity, there are plenty of other entities (including non-governmental bodies) that could play that role. How about the National Science Foundation, for example? Or how about a multi-stakeholder body consisting of many different experts and institutions? I could go on, but you get the point. A single point of action is also a single point of failure. I don’t want just one big robotics bureaucracy making policy or even advising. I’d prefer a more decentralized approach, and one that doesn’t carry a (potential) big regulatory club in its hand.

Public Choice / Regulatory Capture Problems

Second, Calo underestimates the public choice problems of creating a sector-specific or technology-specific agency just for robotics. To his credit, he does admit that, “agencies have their problems, of course. They can be inefficient and are subject to capture by those they regulate or other special interests.” He also notes he has criticized other agencies for various failings. But he does not say anything more on this point.

Let’s be clear. There exists a long and lamentable history of sector-specific regulators being “captured” by the entities they regulate. To read the ugly reality, see my compendium, “Regulatory Capture: What the Experts Have Found.” That piece documents what leading academics of all political stripes have had to say about this problem over the past century. No one ever summarized the nature and gravity of this problem better than the great Alfred Kahn in his masterpiece, The Economics of Regulation: Principles and Institutions (1971):

“When a commission is responsible for the performance of an industry, it is under never completely escapable pressure to protect the health of the companies it regulates, to assure a desirable performance by relying on those monopolistic chosen instruments and its own controls rather than on the unplanned and unplannable forces of competition. [...] Responsible for the continued provision and improvement of service, [the regulatory commission] comes increasingly and understandably to identify the interest of the public with that of the existing companies on whom it must rely to deliver goods.” (pgs. 12, 46)

The history of the Federal Communications Commission (FCC) is highly instructive in this regard and was documented in a 66-page law review article I penned with Brent Skorup entitled, “A History of Cronyism and Capture in the Information Technology Sector,” (Journal of Technology Law & Policy, Vol. 18, 2013). Again, it doesn’t make for pleasant reading. Time and time again, instead of serving the “public interest,” the FCC served private interests. The entire history of video marketplace regulation is one of the most sickening examples to consider since there are almost eight decades’ worth of case studies of the broadcast industry using regulation as a club to beat back new entry, competition, and innovation. [Skorup and I have another paper discussing that specific history and how to go about reversing it.] This history is important because, in the early days of the Commission, many proponents thought the FCC would be exactly the sort of “expert” independent agency that Calo envisions his Federal Robotics Commission would be. Needless to say, things did not turn out so well.

But the FCC isn’t the only guilty offender in this regard. Go read the history of how airlines so effectively cartelized their industry following World War II with the help of the Civil Aeronautics Board. Thankfully, President Jimmy Carter appointed Alfred Kahn to clean things up in the 1970s. Kahn, a life-long Democrat, came to realize that the problem of capture was so insidious and inescapable that abolition of the agency was the only realistic solution to make sure consumer welfare would improve. As a result, he and various other Democrats in the Carter Administration and in Congress worked together to sunset the agency and its hideously protectionist, anti-consumer policies. (Also, please read this amazing 1973 law review article on “Economic Regulation vs. Competition,” by Mark Green and Ralph Nader if you need even more proof of why this is such a problem.)

In other words, the problem of regulatory capture is not something one can casually dismiss. The problem is still very real and deserves more consideration before we casually propose creating new agencies, even “advisory” agencies. At a minimum, when proposing new agencies, you need to get serious about what sort of institutional constraints you might consider putting in place to make sure that history does not repeat itself. Because if you don’t, various large, well-heeled, and politically-connected robotics companies could come to capture any new “Federal Robotics Commission” in very short order.

Can We Clean Up Old Messes Before Building More Bureaucracies?

Third, speaking of agencies, if it is the case that the alphabet soup collection of regulatory agencies we already have in place are not capable of handling “robotics policy” right now, can we talk about reforming them (or perhaps even getting rid of a few of them) first? Why must we just pile yet another sector-specific or technology-specific regulator on top of the many that already exist? That’s just a recipe for more red tape and potential regulatory capture. Unless you believe there is value in creating bureaucracy for the sake of creating bureaucracy, there is no excuse for not phasing out agencies that failed in their original mission, or whose mission is now obsolete, for whatever reason. This is a fundamental “good government” issue that politicians and academics of all stripes should agree on.

Calo indirectly addresses this point by noting that “we have agencies devoted to technologies already and it would be odd and anomalous to think we are done creating them.” Curiously, however, he spends no time talking about those agencies or asking whether they have done a good job. Again, the heart of Calo’s argument comes down to the assertion that another specialized, technology-specific “expert” agency is needed because there are “novel” issues associated with robotics. Well, if it is true, as Calo suggests, that we have been down this path before (and we have), and if you believe our economy or society has been made better off for it, then you need to prove it. Because the objection to creating another regulatory bureaucracy is not simply based on distaste for Big Government; it comes down to two simple questions: (1) Do these things work; and (2) Is there a better alternative?

This is where Calo’s proposal falls short. There is no effort to prove that technocratic or “scientific” bureaucracies, on net, are worth their expense (to taxpayers) or cost (to society, innovation, etc.) when compared to alternatives. Of course, I suspect this is where Calo and I might part ways regarding what metrics we would use to gauge success. I’ll save that discussion for another day and shift to what I regard as the far more serious deficiency of Calo’s proposal.

Do We Become Global Innovation Leaders Through Bureaucratic Direction?

Fourth, and most importantly, Calo does not offer any evidence to prove his contention that we need a sector-specific or technology-specific agency for robotics in order to develop or maintain America’s competitive edge in this field. Moreover, he does not acknowledge how his proposal might have the exact opposite result. Let me spend some time on this point because this is what I find most problematic about his proposal.

In his latest Brookings essay and his earlier writing about robotics, Calo keeps suggesting that we need a specialized federal agency for robotics to avoid “poor outcomes” due to the lack of “a legal and policy infrastructure for emerging technology.” He even warns us that other countries that are looking into robotics policy and regulation more seriously “will leapfrog the U.S. in innovation for the first time since the creation of steam power.”

Well, on that point, I must ask: Did America need a Federal Steam Agency to become a leader in that field? Because unless I missed something in history class, steam power developed fairly rapidly in this country without any centralized bureaucratic direction. Or how about a more recent example: Did America need a Federal Computer Commission or Federal Internet Commission to obtain or maintain a global edge in computing, the Internet, or the Digital Economy?

To the contrary, we took the EXACT OPPOSITE approach. It’s not just that no new agencies were formed to guide the development of computing or the Internet in this country. It’s that our government made a clear policy choice to break with the past by rejecting top-down, command-and-control regulation by unelected bureaucrats in some shadowy Beltway agency.

Incidentally, it was Democrats who accomplished this. While many Republicans today love to crack wise-ass comments about Al Gore and the Internet while simultaneously imagining themselves to be the great defenders of Internet freedom, the reality is that we have the Clinton Administration and one of its most liberal members—Ira Magaziner—to thank for the most blessedly “light-touch,” market-oriented innovation policy that the world has ever seen.

What did Magaziner and the Clinton Administration do? They crafted the amazing 1997 Framework for Global Electronic Commerce, a statement of the Administration’s principles and policy objectives toward the Internet and the emerging digital economy. It recommended reliance upon civil society, contractual negotiations, voluntary agreements, and ongoing marketplace experiments to solve information age problems. First, “the private sector should lead. The Internet should develop as a market driven arena not a regulated industry,” the Framework recommended. “Even where collective action is necessary, governments should encourage industry self-regulation and private sector leadership where possible.” Second, “governments should avoid undue restrictions on electronic commerce” and “parties should be able to enter into legitimate agreements to buy and sell products and services across the Internet with minimal government involvement or intervention.”

I’ve argued elsewhere that the Clinton Administration’s Framework “remains the most succinct articulation of a pro-freedom, innovation-oriented vision for cyberspace ever penned.” Of course, this followed the Administration’s earlier move to allow the full commercialization of the Internet, which was even more important. The policy disposition they established with these decisions resulted in an unambiguous green light for a rising generation of creative minds who were eager to explore this new frontier for commerce and communications. And to reiterate, they did it without any new bureaucracy.

If You Regulate “Robotics,” You End Up Regulating Computing & Networking

Incidentally, I do not see how we could create a new Federal Robotics Commission without it also becoming a de facto Federal Computing Commission. Robotics, along with the many technologies and industries it already includes (driverless cars, commercial drones, the Internet of Things, etc.), is becoming a hot policy topic, and proposals for regulation are already flying. These robotic technologies are developing on top of the building blocks of the Information Revolution: microprocessors, wireless networks, sensors, “big data,” etc.

Thus, I share Cory Doctorow’s skepticism about how one could logically separate “robotics” from these other technologies and sectors for regulatory purposes:

I am skeptical that “robot law” can be effectively separated from software law in general. … For the life of me, I can’t figure out a legal principle that would apply to the robot that wouldn’t be useful for the computer (and vice versa).

In his Brookings paper, Calo responded to Doctorow’s concern as follows:

the difference between a computer and a robot has largely to do with the latter’s embodiment. Robots do not just sense, process, and relay data. Robots are organized to act upon the world physically, or at least directly. This turns out to have strong repercussions at law, and to pose unique challenges to law and to legal institutions that computers and the Internet did not.

I find this fairly unconvincing. Just because robotic technologies have a physical embodiment does not mean their impact on society is that much more profound than that of computing, the Internet, and digital technologies. Consider all the hand-wringing going on today in cybersecurity circles about how hacking, malware, or various other types of digital attacks could take down entire systems or economies. I’m not saying I buy all that “technopanic” talk (and here are about three dozen of my essays arguing the contrary), but the theoretical ramifications are nonetheless on par with dystopian scenarios about robotics.

The Alternative Approach

Of course, it certainly may be the case that some worst-case scenarios are worth worrying about in both cases—for robotics and computing, that is. Still, is a Federal Robotics Commission or a Federal Computing Commission really the sensible way to address those issues?

To the contrary, this is why we have a Legislative Branch! So many of the problems of our modern era of dysfunctional government are rooted in an unwise delegation of authority to administrative agencies. Far too often, congressional lawmakers delegate broad, ambiguous authority to agencies instead of facing up to the hard issues themselves. This results in waste, bloat, inefficiencies, and an endless passing of the buck.

There may very well be some serious issues raised by robotics and AI that we cannot ignore, and which may even require a little preemptive, precautionary policy. And the same goes for general computing and the Internet. But that is not a good reason to just create new bureaucracies in the hope that some set of mythical technocratic philosopher kings will ride in to save the day with their supposed greater “expertise” about these matters. Either you believe in democracy or you don’t. Running around calling for agencies and unelected bureaucrats to make all the hard choices means that “the people” have even less of a say in these matters.

Moreover, there are many other methods of dealing with robotics and the potential problems robotics might create than through the creation of new bureaucracy. The common law already handles many of the problems that both Calo and Weaver are worried about. To the extent robotic systems are involved in accidents that harm individuals or their property, product liability law will kick in.

On this point, I strongly recommend another new Brookings publication. John Villasenor’s outstanding April white paper, “Products Liability and Driverless Cars: Issues and Guiding Principles for Legislation,” correctly argues that,

“when confronted with new, often complex, questions involving products liability, courts have generally gotten things right. … Products liability law has been highly adaptive to the many new technologies that have emerged in recent decades, and it will be quite capable of adapting to emerging autonomous vehicle technologies as the need arises.”

Thus, instead of trying to micro-manage the development of robotic technologies in an attempt to plan for every hypothetical risk scenario, policymakers should be patient while the common law evolves and liability norms adjust. Traditionally, the common law has dealt with products liability and accident compensation in an evolutionary way through a variety of mechanisms, including strict liability, negligence, design defects law, failure to warn, breach of warranty, and so on. There is no reason to think the common law will not adapt to new technological realities, including robotic technologies. (I address these and other “bottom-up” solutions in my new book.)

In the meantime, let’s exercise some humility and restraint here and avoid heavy-handed precautionary regulatory regimes or the creation of new technocratic bureaucracies. And let’s not forget that many solutions to the problems created by new robotic technologies will develop spontaneously and organically over time as individuals and institutions learn to cope and “muddle through,” as they have many times before.


Private Drones & the First Amendment Fri, 19 Sep 2014 17:56:24 +0000

The use of unmanned aircraft systems, or “drones,” for private and commercial uses remains the subject of much debate. The issue has been heating up lately after Congress ordered the Federal Aviation Administration (FAA) to integrate UASs into the nation’s airspace system by 2015 as part of the FAA Modernization and Reform Act of 2012.

The debate has thus far centered mostly around the safety and privacy-related concerns associated with private use of drones. The FAA continues to move slowly on this front based on a fear that private drones could jeopardize air safety or the safety of others on the ground. Meanwhile, some privacy advocates are worried that private drones might be used in ways that invade private spaces or even public areas where citizens have a reasonable expectation of privacy. For these and other reasons, the FAA’s current ban on private operation of drones in the nation’s airspace remains in place.

But what about the speech-related implications of this debate? After all, private and commercial UASs can have many peaceful, speech-related uses. Indeed, to borrow Ithiel de Sola Pool’s term, private drones can be thought of as “technologies of freedom” that expand and enhance the ability of humans to gather and share information, in turn expanding the range of human knowledge and freedom.

A new Mercatus Center at George Mason University working paper, “News from Above: First Amendment Implications of the Federal Aviation Administration Ban on Commercial Drones,” deals with these questions. This 59-page working paper was authored by Cynthia Love, Sean T. Lawson, and Avery Holton. (Love is currently a law clerk for Judge Carolyn B. McHugh of the 10th Circuit U.S. Court of Appeals. Lawson and Holton are affiliated with the Department of Communication at the University of Utah.)

“To date, little attention has been paid to the First Amendment implications of the [FAA] ban,” note Love, Lawson, and Holton. Their article argues that “aerial photography with UASs, whether commercial or not, is protected First Amendment activity, particularly for news-gathering purposes. The FAA must take First Amendment-protected uses of this technology into account as it proceeds with meeting its congressional mandate to promulgate rules for domestic UASs.” They conclude by noting that “The dangers of [the FAA's] regulatory approach are no mere matter of esoteric administrative law. Rather, as we have demonstrated, use of threats to enforce illegally promulgated rules, in particular a ban on journalistic use of UASs, infringes upon perhaps our most cherished constitutional right, that of free speech and a free press.”

The authors observe that we already have a well-established set of principles governing how government may impose content-neutral regulations on the time, place, or manner in which certain technologies can be used. Unfortunately, the FAA doesn’t seem to be paying any attention to this time-tested jurisprudence. As the authors note:

Because the airspace within a public forum should itself be considered a public forum, the government may only restrict the journalistic use of UAS technology with content-neutral regulations of the time, place, or manner of such use. Such regulations must be “justified without reference to the content of the regulated speech,” be “narrowly tailored to serve a significant government interest,” and “leave open ample alternative channels of communication.” The FAA’s blanket ban on commercial use fails to meet this test. The FAA’s ban is not a reasonable time, place, or manner restriction.

This new paper by Love, Lawson, and Holton will hopefully inform future policymaking and judicial activity on this front and, if nothing else, make the FAA realize that it is not above the law, including, in this case, the First Amendment, when it comes to drone policy. Please read the entire paper for more details. It is exceptionally well done and could be a real game-changer in these debates.

P.S. I plan on attaching Love, Lawson, and Holton’s paper to my filing to the FAA next week in its proceeding on model aircraft regulation. The filing date for that proceeding was extended this summer and comments are now due next week. I will post my filing here shortly. The Mercatus Center filed comments with the FAA earlier about the prompt integration of drones into the nation’s airspace. You can read those comments here. You can also read Eli Dourado’s excellent Wired editorial on the matter here and here’s a video of me talking about these issues on the Stossel show a few months ago.

New Paper: “Removing Roadblocks to Intelligent Vehicles and Driverless Cars” Wed, 17 Sep 2014 15:03:42 +0000

I’m pleased to announce that the Mercatus Center at George Mason University has just released my latest working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” This paper, which was co-authored with Ryan Hagemann, has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy.

In the paper, Hagemann and I explore the growing market for both “connected car” technologies as well as autonomous (or “driverless”) vehicle technology. We argue that intelligent-vehicle technology will produce significant benefits. Most notably, these technologies could save many lives. In 2012, 33,561 people were killed and 2,362,000 injured in traffic crashes, largely as a result of human error. Reducing the number of accidents by allowing intelligent vehicle technology to flourish would constitute a major public policy success. As Philip E. Ross noted recently at IEEE Spectrum, thanks to these technologies, “eventually it will be positively hard to use a car to hurt yourself or others.” The sooner that day arrives, the better.

These technologies could also have positive environmental impacts in the form of improved fuel economy, reduced traffic congestion, and reduced parking needs. They might also open up new mobility options for those who are unable to drive, for whatever reason. Any way you cut it, these are exciting technologies that promise to substantially improve human welfare.

Of course, as with any new disruptive technology, connected cars and driverless vehicles raise a variety of economic, social, and ethical concerns. Hagemann and I address some of the early policy concerns about these technologies (safety, security, privacy, liability, etc.) and we outline a variety of “bottom-up” solutions to ensure that innovation continues to flourish in this space. Importantly, we also argue that policymakers should keep in mind that individuals have gradually adapted to similar disruptions in the past and, therefore, patience and humility are needed when considering policy for intelligent-vehicle systems.

More generally, we note that the debate over intelligent vehicle technologies foreshadows many other tech policy debates to come in that it raises the larger question of what principle will guide the future of technological progress. Will “permissionless innovation” be our lodestar, allowing individuals to pursue a world of which they can, as of now, only dream? Or will “precautionary principle”-based reasoning prevail instead, driven by a desire to preserve the status quo?

To the maximum extent possible, we argue, policymakers should embrace permissionless innovation for intelligent vehicles. Creative minds–especially those most vociferously opposed to technological change–will always be able to concoct horrific-sounding scenarios about the future. Best-case scenarios will never develop if we are gripped by fear of the worst-case scenarios and try to preemptively plan for all of them with policy interventions.

This 55-page (double-spaced) working paper is available on the Mercatus Center website as well as SSRN, ResearchGate, and Scribd. In coming weeks and months, we'll be writing more about the themes addressed in this paper. Stay tuned; things are unfolding rapidly in this highly innovative arena.


Additional Reading

Slide Presentation: Policy Issues Surrounding the Internet of Things & Wearable Technology Fri, 12 Sep 2014 16:04:09 +0000

On Thursday, it was my great pleasure to present a draft of my forthcoming paper, “The Internet of Things & Wearable Technology: Addressing Privacy & Security Concerns without Derailing Innovation,” at a conference that took place at the Federal Communications Commission on “Regulating the Evolving Broadband Ecosystem.” The 3-day event was co-sponsored by the American Enterprise Institute and the University of Nebraska College of Law.

The 65-page working paper I presented is still going through final peer review and copyediting, but I posted a very rough first draft on SSRN for conference participants. I expect the paper to be released as a Mercatus Center working paper in October and then I hope to find a home for it in a law review. I will post the final version once it is released.

In the meantime, however, I thought I would post the 46 slides I presented at the conference, which offer an overview of the nature of the Internet of Things and wearable technology, the potential economic opportunities that exist in this space, and the various privacy and security challenges that could hold this technological revolution back. I also outlined some constructive solutions to those concerns. I plan to be very active on these issues in coming months.

Additional Reading




How Universal Service Fails Us Sat, 23 Aug 2014 15:56:26 +0000

If there is one thing I have learned in almost 23 years of covering communications and media regulation it is this: No matter how well-intentioned, regulation often has unintended consequences that hurt the very consumers the rules are meant to protect. Case in point: “universal service” mandates that require a company to serve an entire area as a condition of offering service at all. The intention is noble: Get service out to everyone in the community, preferably at a very cheap rate. Alas, the result of mandating that result is clear: You get less competition, less investment, less innovation, and less consumer choice. And often you don’t even get everyone served.

Consider this Wall Street Journal article today, “Google Fiber Is Fast, but Is It Fair? The Company Provides Neighborhoods With Faster and Cheaper Service, but Are Some Being Left Behind?” In the story, Alistair Barr notes that:

U.S. policy long favored extending service to all. AT&T touted its “universal service” in advertisements more than a century ago. The concept was codified in a 1934 law requiring nationwide “wire and radio services” to reach everyone at “reasonable charges.” In exchange for wiring a community, telecommunications providers often gained a monopoly. Cities made similar deals with cable-TV providers beginning in the 1960s.

The problem, of course, is that while this model allowed for the slow spread of service to most communities, it came at a very steep cost: Monopoly and plain vanilla service. I documented this in a 1994 essay entitled, “Unnatural Monopoly: Critical Moments in the Development of the Bell System Monopoly.” As well-intentioned regulatory mandates started piling up, competition slowly disappeared. And a devil’s deal was eventually cut between regulators and AT&T to adopt the company’s advertising motto — “One Policy, One System, Universal Service” — as the de facto law of the land.

It took us almost a century to dig ourselves out of that mess and move towards telecommunications competition. Alas, we’re still living with the vestiges of this old regulatory mentality. Cities and counties across America still impose a wide variety of “universal service” regulatory mandates. Again, their intention is noble: They want everyone in their community served. You can’t blame them for that. But the result is still the same: Limited facilities-based competition and investment.

And so we return to today’s Wall Street Journal story about Google Fiber, which explains how local officials are finally starting to understand these realities. The story notes:

In 2011, Google struck a deal with authorities in both Kansas City, Kan., and Kansas City, Mo., to build the service based on customer demand. City officials say they didn’t push hard for universal coverage because they thought faster Internet service would boost the local economy and they were competing against so many other cities. “The main point was to win and bring that infrastructure to our city,” said Rick Usher, assistant city manager of Kansas City, Mo. As phone and cable companies slowed their own expansion plans, more cities allowed the selective approach.

Google’s ‘build-to-demand’ model is catching on because it produces results: More infrastructure investment, innovation, and competition. Traditional telecom and broadband operators are prepared to step up investment, too, when the incentives are right:

Verizon was required by cities and some state laws to build and offer its FiOS service widely across cities. It stopped expanding to new cities in 2010; to date, it has spent more than $23 billion on the FiOS rollout. Chief Financial Officer Fran Shammo said in March that the company wouldn’t expand to additional markets until FiOS had “finally returned its cost of capital.” If Verizon resumes expansion, the company would consider Google’s build-to-demand model because it has the potential to be more profitable, said Chris Levendos, a Verizon executive overseeing the FiOS build-out in Manhattan.

Others are doing just that. AT&T said in April it would offer Internet speeds of up to one gigabit in as many as 100 cities. It is building to demand and working with local authorities to reduce construction costs, the company said. Tuesday, it said it would bring the high-speed service to Cupertino, Calif., close to Google’s headquarters. This approach “starts to make this business model look quite attractive,” John Stankey, AT&T’s chief strategy officer, said at an investor conference on Aug. 13.

Again, when you get the incentives right and give investors and innovators a green light, they will seize the opportunity. And that's even true — actually, it is especially true — for high fixed-cost investments like fiber networks.

But wait, aren't there some pockets of the population that will fall through the cracks under this alternative arrangement? In the short term, potentially yes. But the right answer to that "digital divide" problem is never to restrict short-term investment and innovation opportunities just because you think you have a better, more "well-intentioned" plan. That is the crucial mistake policymakers made in the past. Their desire to get everyone served at the exact same time with the exact same plain vanilla service meant we got sub-optimal technologies and stagnant markets with little hope of any new innovation or investment over the long haul.

This is how "universal service" consistently fails us. Universal service sells us short. It sells human ingenuity short. The logic that motivates universal service regulation is: 'Well, this is about the best we can do. Let's just get everyone some basic level of service and that will be just and good.' Can you imagine if we had applied this logic to other major markets and technologies?!

But what about the underserved communities? First, when you allow new innovation in networks, you never know how or where they might spread next. If you have more competitors offering unique network architectures and services, there is a very good chance that entrepreneurial minds will figure out how to push out the boundaries of what is possible, especially in terms of how the service is delivered.

Consider this: Back in the old days, did it really make sense to try to stretch a thin copper wire way, way out into the middle of every valley, desert, farm field, and mountain? The myopic universal service mindset says: 'Well, that's all we had at the time.' Perhaps for a time it really was. But how much quicker might we have seen some sort of alternative system if we hadn't locked in those old assumptions as policy requirements? Is it impossible to believe that wireless technologies might have developed much more quickly if the incentives had been right? Again, there was no reason for any innovator or investor to even consider the idea at a time when policymakers were mandating that copper wires be stretched to every corner of the land, and showering favored companies with subsidies to achieve that goal. That's not something a new entrant could compete with, and so no one did. It would have been as if policymakers had declared a "universal service" policy for cheap hamburgers for the masses and then showered McDonald's with subsidies because it was the first company in many local markets that could deliver on that promise. Had we had such a universal cheap hamburger policy, do you think any other fast food chains would have come to town and tried to compete against those subsidized burgers? Not likely.

The lesson for today’s policymakers is clear: Open up markets, relax regulatory burdens, eliminate discriminatory taxes and subsidies, and clear away other barriers to investment. Then see what happens. As the Google Fiber experience suggests, innovative minds can and will emerge to offer constructive solutions and slowly spread new networks and technologies.

OK, but won't there still be some communities that are underserved, even with all that new innovation and investment? It's certainly possible. And where those communities exist, some government action may be necessary to incentivize the spread of some sort of network to them, or even to have the government build it for the community. I'm not opposed to that. (Have you ever driven through the hills of West Virginia or the mountains of rural Western states? Hard places to get wired networks out to!) I'm not very optimistic that local governments will do a good job of building sophisticated networks, because they already have a horrible track record in this regard. But, again, I don't oppose local action on this front if no other alternatives appear after a certain period of time.

But, again, the answer here is not crazy national and state-based universal service mandates that regulate everyone in every community as if they had the same problem. Let competition and innovation work their magic where they can, and do not mess that up. Where it proves much harder for that network competition and innovation to take root, use smart incentives to get companies to build out their networks further, offer alternative wireless infrastructure of some sort, or just have the government build the networks itself. But we should always give competition and innovation the benefit of the doubt and see what happens first.

So, let me be perfectly clear about what I am saying here: GOOD INTENTIONS ARE NEVER ENOUGH! [And yes, I am using all caps because I am shouting!] The next time somebody starts mouthing something about how they have the moral high ground in these debates because their intentions are supposedly pure as the driven snow, ask them to show you results. Tell them you want evidence that their intentions have actually produced something concrete and positive for society. If their answer is, in essence, 'Well, with our regulatory mandates we can at least get everybody some basic level of really crappy monopoly service,' then tell them that they can take their good intentions and shove them. We can do better.

Comments to the New York Department of Financial Services on the Proposed Virtual Currency Regulatory Framework Thu, 14 Aug 2014 14:48:37 +0000

Today my colleague Eli Dourado and I have filed a public interest comment with the New York Department of Financial Services on their proposed “BitLicense” regulatory framework for digital currencies. You can read it here. As we say in the comment, NYDFS is on the right track, but ultimately misses the mark:

State financial regulators around the country have been working to apply their existing money transmission licensing statutes and regulations to new virtual currency businesses. In many cases, existing rules do not take into account the unique properties of recent innovations like cryptocurrencies. With this in mind, the department sought to develop rules that were “tailored specifically to the unique characteristics of virtual currencies.”

As Superintendent Benjamin Lawsky has stated, the aim of this project is “to strike an appropriate balance that helps protect consumers and root out illegal activity—without stifling beneficial innovation.” This is the right goal and one we applaud. It is a very difficult balance to strike, however, and we believe that the BitLicense regulatory framework as presently proposed misses the mark, for two main reasons.

First, while doing much to take into account the unique properties of virtual currencies and virtual currency businesses, the proposal nevertheless fails to accommodate some of the most important attributes of software-based innovation. To the extent that one of its chief goals is to preserve and encourage innovation, the BitLicense proposal should be modified with these considerations in mind—and this can be done without sacrificing the protections that the rules will afford consumers. Taking into account the "unique characteristics" of virtual currencies is the key consideration that will foster innovation, and it is the reason why the department is creating a new BitLicense. The department should, therefore, make sure that it is indeed taking these features into account.

Second, the purpose of a BitLicense should be to take the place of a money transmission license for virtual currency businesses. That is to say, but for the creation of a new BitLicense, virtual currency businesses would be subject to money transmission licensing. Therefore, to the extent that the goal behind the new BitLicense is to protect consumers while fostering innovation, the obligations faced by BitLicensees should not be any more burdensome than those faced by traditional money transmitters. Otherwise, the new regulatory framework will have the opposite effect of the one intended. If it is more costly and difficult to acquire a BitLicense than a money transmission license, we should expect less innovation. Additional regulatory burdens would put BitLicensees at a relative disadvantage, and in several instances the proposed regulatory framework is more onerous than traditional money transmitter licensing.

As Superintendent Lawsky has rightly stated, New York should avoid virtual currency rules that are “so burdensome or unwieldy that the technology can’t develop.” The proposed BitLicense framework, while close, does not strike the right balance between consumer protection and innovation. For example, its approach to consumer protection through disclosures rather than prescriptive precautionary regulation is the right approach for giving entrepreneurs flexibility to innovate while ensuring that consumers have the information they need to make informed choices. Yet there is much that can be improved in the framework to reach the goal of balancing innovation and protection. Below we outline where the framework is missing the mark and recommend some modifications that will take into account the unique properties of virtual currencies and virtual currency businesses.

We hope this comment will be helpful to the department as it further develops its proposed framework, and we hope that it will publish a revised draft of the framework and solicit a second round of comments so that we can make sure we all get it right. And it’s important that we get it right.

Other jurisdictions, such as London, are looking to become the “global centre of financial innovation,” as Chancellor George Osborne put it in a recent speech about Bitcoin. If New York drops the ball, London may just pick it up. As Garrick Hileman, economic historian at the London School of Economics, told CNet last week:

The chancellor is no doubt aware that very little of the $250 million of venture capital which has been invested in Bitcoin startups to date has gone to British-based companies. Many people believe Bitcoin will be as big as the Internet. Today’s announcement from the chancellor has the potential to be a big win for the UK economy. The bottom line on today’s announcement is that Osborne thinks he’s spotted an opportunity for the City and Silicon Roundabout to siphon investment and jobs away from the US and other markets which are taking a more aggressive Bitcoin regulatory posture.

Let’s get it right.

Study: No, US Broadband is not Falling Behind Wed, 13 Aug 2014 16:25:08 +0000

There's a small but influential number of tech reporters and scholars who seem to delight in making the US sound like a broadband and technology backwater. A new Mercatus working paper by Roslyn Layton, a PhD fellow at a research center at Aalborg University, and Michael Horney, a researcher at the Free State Foundation, counters that narrative, highlighting data from several studies showing that the US is at or near the top in important broadband categories.

For example, per Pew and ITU data, the vast majority of Americans use the Internet and the US is second in the world in data consumption per capita, trailing only South Korea. Pew reveals that for those who are not online the leading reasons are lack of usability and the Internet’s perceived lack of benefits. High cost, notably, is not the primary reason for infrequent use.

I’ve noted before some of the methodological problems in studies claiming the US has unusually high broadband prices. In what I consider their biggest contribution to the literature, Layton and Horney highlight another broadband cost frequently omitted in international comparisons: the mandatory media license fees many nations impose on broadband and television subscribers.

These fees can add as much as $44 to the monthly cost of broadband. When these fees are included in comparisons, American prices are frequently an even better value. In two-thirds of European countries and half of Asian countries, households pay a media license fee on top of the subscription fees to use devices such as connected computers and TVs.

…When calculating the real cost of international broadband prices, one needs to take into account media license fees, taxation, and subsidies. …[T]hese inputs can materially affect the cost of broadband, especially in countries where broadband is subject to value-added taxes as high as 27 percent, not to mention media license fees of hundreds of dollars per year.
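To make the comparison concrete, here is a minimal back-of-the-envelope sketch in Python. The $44 monthly license fee and 27% VAT are the upper-bound figures quoted above; the advertised subscription prices are hypothetical, chosen only to illustrate how the omitted charges can flip which country looks cheaper:

```python
def real_monthly_cost(advertised_price, vat_rate=0.0, monthly_license_fee=0.0):
    """Advertised subscription price plus VAT, plus any mandatory media license fee."""
    return advertised_price * (1 + vat_rate) + monthly_license_fee

# Hypothetical US plan: no VAT on broadband, no media license fee.
us = real_monthly_cost(40.00)

# Hypothetical European plan: cheaper on paper, but subject to a 27% VAT
# and a mandatory media license fee of up to $44/month (figures from above).
eu = real_monthly_cost(35.00, vat_rate=0.27, monthly_license_fee=44.00)

print(f"US: ${us:.2f}/month")  # prints "US: $40.00/month"
print(f"EU: ${eu:.2f}/month")  # prints "EU: $88.45/month"
```

Under these illustrative inputs, the plan that looks $5 cheaper per month actually costs more than twice as much once the full tax-and-fee burden is counted, which is the authors' point about international price comparisons.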

US broadband providers, the authors point out, have priced broadband relatively efficiently for heterogeneous uses–there are low-cost, low-bandwidth connections available as well as more expensive, higher-quality connections for intensive users.

Further, the US is well-positioned for future broadband use. Unlike in many wealthy countries, Americans typically have access to broadband from a telephone company (like AT&T DSL or U-verse) as well as from a local cable provider. Competition between ISPs has meant steady investment in network upgrades, despite the 2008 global recession. The story is very different in much of Europe, where broadband investment, as a percentage of the global total, has fallen noticeably in recent years. US wireless broadband is also a bright spot: 97% of Americans can subscribe to 4G LTE while only 26% in the EU have access (which partially explains, by the way, why Europeans often pay less for mobile subscriptions–they're using an inferior product).

There’s a lot to praise in the study and it’s necessary reading for anyone looking to understand how US broadband policy compares to other nations’. The fashionable arguments that the US is at risk of falling behind technologically were never convincing–the US is THE place to be if you’re a tech company or startup, for one–but Layton and Horney show the vulnerability of that narrative with data and rigor.

Is STELA the Vehicle for Video Reform? Fri, 08 Aug 2014 18:21:31 +0000

Even though few things are getting passed this Congress, the pressure is on to reauthorize the Satellite Television Extension and Localism Act (STELA) before it expires at the end of this year. Unsurprisingly, many have hoped this "must-pass" bill will be the vehicle for broader reform of video law. Getting video law right is important for our content-rich world, but the discussion needs to expand much further than STELA.

Over at the American Action Forum, I explore a bit of what would be needed, and just how far the problems are rooted:

The Federal Communications Commission's (FCC) efforts to spark localism and diversity of voices in broadcasting stand in stark contrast to the relative lack of regulation governing non-broadcast content providers like Netflix and HBO, which have revolutionized delivery and upped the demand for quality content. These amorphous social goals have also limited broadcasters. Without any consideration for the competitive balance in a local market, broadcasters are limited in what they can own, are saddled with various programming restrictions, and are subject to countless limitations in the use of their spectrum. Moreover, the FCC has sought to outlaw deals between broadcasters who negotiate jointly for services and ads.

In the effort to support specific "public interest" goals, the FCC has implemented certain regulations which have cabined both broadcasters and paid TV distributors. These regulations forced companies to develop in proscribed ways, which in turn prompted further regulatory action when they tried to innovate. Speaking about this cat-and-mouse game in the financial sector, Professor Edward Kane termed the relationship the "regulatory dialectic."

But unwrapping the regulatory dialectic in video law will require a vehicle far more expansive than STELA. Ultimately, I conclude,

Both the quality of programming and the means of accessing it have undergone dramatic changes in the past two decades but the regulations have not. Consumer preferences and choices are shifting, which needs to be met by alterations in the regulatory regime. STELA is one part of the puzzle, but like so many other areas of telecommunication law, a comprehensive look at the body of laws ruling video is needed. It is increasingly clear that the laws governing programming must be updated to meet the 21st century marketplace.

On this site especially, there has been a vigorous debate on just what this framework would entail. For a more comprehensive look, check out:

  • Geoffrey Manne’s testimony on STELA before the House of Representatives’ Energy and Commerce Committee;
  • Adam Thierer’s and Brent Skorup’s paper on video law entitled, “Video Marketplace Regulation: A Primer on the History of Television Regulation and Current Legislative Proposals”;
  • Ryan Radia’s blog post entitled, “A Free Market Defense of Retransmission Consent”;
  • Fred Campbell’s white paper on the “Future of Broadcast Television,” as well as his various posts on the subject;
  • And Hance Haney’s posts on video law.
You know how IP creates millions of jobs? That’s pseudoscientific baloney Wed, 06 Aug 2014 14:26:56 +0000

In 2012, the US Chamber of Commerce put out a report claiming that intellectual property is responsible for 55 million US jobs—46 percent of private sector employment. This is a ridiculous statistic if you merely stop and think about it for a minute. But the fact that the statistic is ridiculous doesn't mean that it won't continue to circulate around Washington. For example, last year Rep. Marsha Blackburn cited it uncritically in an op-ed in The Hill.

In a new paper from Mercatus (here's the PDF), Ian Robinson and I expose this statistic, and others like it, as pseudoscience. They are based on incredibly shoddy and misleading reasoning. Here's the abstract of the paper:

In the past two years, a spate of misleading reports on intellectual property has sought to convince policymakers and the public that implausibly high proportions of US output and employment depend on expansive intellectual property (IP) rights. These reports provide no theoretical or empirical evidence to support such a claim, but instead simply assume that the existence of intellectual property in an industry creates the jobs in that industry. We dispute the assumption that jobs in IP-intensive industries are necessarily IP-created jobs. We first explore issues regarding job creation and the economic efficiency of IP that cut across all kinds of intellectual property. We then take a closer look at these issues across three major forms of intellectual property: trademarks, patents, and copyrights.

As they say, read the whole thing, and please share with your favorite IP maximalist.

New York’s financial regulator releases a draft of ‘BitLicense’ for Bitcoin businesses. Here are my initial thoughts. Thu, 17 Jul 2014 17:56:26 +0000

Today the New York Department of Financial Services released a proposed framework for licensing and regulating virtual currency businesses. Their "BitLicense" proposal [PDF] is the culmination of a yearlong process that included widely publicized hearings.

My initial reaction to the rules is that they are a step in the right direction. Whether one likes it or not, states will want to license and regulate Bitcoin-related businesses, so it’s good to see that New York engaged in a thoughtful process, and that the rules they have proposed are not out of the ordinary.

That said, I’m glad DFS will be accepting comments on the proposed framework because there are a few things that can probably be improved or clarified. For example:

  1. Licensees would be required to maintain “the identity and physical addresses of the parties involved” in “all transactions involving the payment, receipt, exchange or conversion, purchase, sale, transfer, or transmission of Virtual Currency.” That seems a bit onerous and unworkable.

    Today, if you have a wallet account with Coinbase, the company collects and keeps your identity information. Under New York’s proposal, however, they would also be required to collect the identity information of anyone you send bitcoins to, and anyone that sends bitcoins to you (which might be technically impossible). That means identifying every food truck you visit, and every alpaca sock merchant you buy from online.

    The same would apply to merchant service companies like BitPay. Today they identify their merchant account holders–say a coffee shop–but under the proposed framework they would also have to identify all of their merchants’ customers–i.e. everyone who buys a cup of coffee. Not only is this potentially unworkable, but it also would undermine some of Bitcoin’s most important benefits. For example, the ability to trade across borders, especially with those in developing countries who don’t have access to electronic payment systems, is one of Bitcoin’s greatest advantages and it could be seriously hampered by such a requirement.

    The rationale for creating a new “BitLicense” specific to virtual currencies was to design something that took the special characteristics of virtual currencies into account (something existing money transmission rules didn’t do). I hope the rule can be modified so that it can come closer to that ideal.

  2. The definition of who is engaged in “virtual currency business activity,” and thus subject to the licensing requirement, is quite broad. It has the potential to swallow up online wallet services, like Blockchain, who are merely providing software to their customers rather than administering custodial accounts. It might potentially also include non-financial services like Proof of Existence, which provides a notary service on top of the Bitcoin block chain. Ditto for other services, perhaps like NameCoin, that use cryptocurrency tokens to track assets like domain names.

  3. The rules would also require a license of anyone “controlling, administering, or issuing a Virtual Currency.” While I take this to apply to centralized virtual currencies, some might interpret it to also mean that you must acquire a license before you can deploy a new decentralized altcoin. That should be clarified.

In order to grow and reach its full potential, the Bitcoin ecosystem needs regulatory certainty from dozens of states. New York is taking a leading role in developing that regulatory structure, and the path it chooses will likely influence other states. This is why we have to make sure that New York gets it right. They are on the right track, and I look forward to engaging in the comment process to help them get all the way there.

SCOTUS Rules in Favor of Freedom and Privacy in Key Rulings Thu, 26 Jun 2014 07:36:08 +0000

Yesterday, June 25, 2014, the U.S. Supreme Court issued two important opinions that advance free markets and free people: Riley v. California and ABC v. Aereo. I'll soon have more to say about the latter case, Aereo, in which my organization filed an amicus brief along with the International Center for Law and Economics. But for now, I'd like to praise the Court for reaching the right result in a duo of cases involving police warrantlessly searching cell phones incident to lawful arrests.

Back in 2011, when I wrote a feature story in Ars Technica—which I discussed on these pages—police in many jurisdictions were free to search the cell phones of individuals incident to their arrest. If you were arrested for a minor traffic violation, for instance, the unencrypted contents of your cell phone were often fair game for searches by police officers.

Now, however, thanks to the Supreme Court, police may not search an arrestee’s cell phone incident to her or his arrest—without specific evidence giving rise to an exigency that justifies such a search. Given the broad scope of offenses for which police may arrest someone, this holding has important implications for individual liberty, especially in jurisdictions where police often exercise their search powers broadly.


Muddling Through: How We Learn to Cope with Technological Change Tue, 17 Jun 2014 17:38:18 +0000

How is it that we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” so many well-established personal, social, cultural, and legal norms?

In recent years, I’ve spent a fair amount of time thinking through that question in a variety of blog posts (“Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society”), law review articles (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”), opeds (“Why Do We Always Sell the Next Generation Short?”), and books (See chapter 4 of my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom”).

It’s fair to say that this issue — how individuals, institutions, and cultures adjust to technological change — has become a personal obsession of mine and it is increasingly the unifying theme of much of my ongoing research agenda. The economic ramifications of technological change are part of this inquiry, of course, but those economic concerns have already been the subject of countless books and essays both today and throughout history. I find that the social issues associated with technological change — including safety, security, and privacy considerations — typically get somewhat less attention, but are equally interesting. That’s why my recent work and my new book narrow the focus to those issues.

Optimistic (“Heaven”) vs. Pessimistic (“Hell”) Scenarios

Modern thinking and scholarship on the impact of technological change on societies have been largely dominated by skeptics and critics.

In the past century, for example, French philosopher Jacques Ellul (The Technological Society), German historian Oswald Spengler (Man and Technics), and American historian Lewis Mumford (Technics and Civilization) penned critiques of modern technological processes that took a dour view of technological innovation and our collective ability to adapt positively to it. (Concise summaries of their thinking can be found in Christopher May’s edited collection of essays, Key Thinkers for the Information Society.)

These critics worried about the subjugation of humans to “technique” or “technics” and feared that technology and technological processes would come to control us before we learned how to control them. Media theorist Neil Postman was the most notable of the modern information technology critics and served as the bridge between the industrial era critics (like Ellul, Spengler, and Mumford) and some of today’s digital age skeptics (like Evgeny Morozov and Nick Carr). Postman decried the rise of a “technopoly” — “the submission of all forms of cultural life to the sovereignty of technique and technology” — that would destroy “the vital sources of our humanity” and lead to “a culture without a moral foundation” by undermining “certain mental processes and social relations that make human life worth living.” We see that attitude on display in countless works of technological criticism since then.

Of course, there’s been some pushback from some futurists and technological enthusiasts. But there’s often a fair amount of irrational exuberance at work in their tracts and punditry. Many self-proclaimed “futurists” have predicted that various new technologies would produce a nirvana that would overcome human want, suffering, ignorance, and more.

In a 2010 essay, I labeled these two camps technological “pessimists” and “optimists.” It was a crude and overly simplistic dichotomy, but it was an attempt to begin sketching out a rough taxonomy of the personalities and perspectives that we often see pitted against each other in debates about the impact of technology on culture and humanity.

Sadly, when I wrote that earlier piece, I was not aware of a similar (and much better) framing of this divide that was developed by science writer Joel Garreau in his terrific 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human. In that book, Garreau is thinking in much grander terms about technology and the future than I was in my earlier essay. He was focused on how various emerging technologies might be changing our very humanity and he notes that narratives about these issues are typically framed in “Heaven” versus “Hell” scenarios.

Under the “Heaven” scenario, technology drives history relentlessly, and in almost every way for the better. As Garreau describes the beliefs of the Heaven crowd, they believe that going forward, “almost unimaginably good things are happening, including the conquering of disease and poverty, but also an increase in beauty, wisdom, love, truth, and peace.” (p. 130) By contrast, under the “Hell” scenario, “technology is used for extreme evil, threatening humanity with extinction.” (p. 95) Garreau notes that what unifies the Hell scenario theorists is the sense that in “wresting power from the gods and seeking to transcend the human condition,” we end up instead creating a monster — or maybe many different monsters — that threatens our very existence. Garreau says this “Frankenstein Principle” can be seen in countless works of literature and technological criticism throughout history, and it is still very much with us today. (p. 108)

Theories of Collapse: Why Does Doomsaying Dominate Discussions about New Technologies?

Indeed, in examining the way new technologies and inventions have long divided philosophers, scientists, pundits, and the general public, one can find countless examples of that sort of fear and loathing at work. “Armageddon has a long and distinguished history,” Garreau notes. “Theories of progress are mirrored by theories of collapse.” (p. 149)

In that regard, Garreau rightly cites Arthur Herman’s magisterial history of apocalyptic theories, The Idea of Decline in Western History, which documents “declinism” over time. The irony of much of this pessimistic declinist thinking, Herman notes, is that:

In effect, the very things modern society does best — providing increasing economic affluence, equality of opportunity, and social and geographic mobility — are systematically deprecated and vilified by its direct beneficiaries. None of this is new or even remarkable. (p. 442)

Why is that? Why has the “Hell” scenario been such a dominant reoccurring theme in past writing and commentary throughout history, even though the general trend has been steady improvements in human health, welfare, and convenience?

There must be something deeply rooted in the human psyche that accounts for this tendency. As I have discussed in my new book as well as my big “Technopanics” law review article, our innate tendency to be pessimistic, combined with our desire to be certain about the future, means that “the gloom-mongers have it easy,” as author Dan Gardner argues in his book, Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better. He goes on to note of the techno-doomsday pundits:

Their predictions are supported by our intuitive pessimism, so they feel right to us. And that conclusion is bolstered by our attraction to certainty. As strange as it sounds, we want to believe the expert predicting a dark future is exactly right, because knowing that the future will be dark is less tormenting than suspecting it. Certainty is always preferable to uncertainty, even when what’s certain is disaster. (p. 140-1)

Similarly, in his new book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson notes that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.” (p. 283)

Another explanation is that humans are sometimes very poor judges of the relative risks to themselves or those close to them. Harvard University psychology professor Steven Pinker, author of The Blank Slate: The Modern Denial of Human Nature, notes:

The mind is more comfortable in reckoning probabilities in terms of the relative frequency of remembered or imagined events. That can make recent and memorable events—a plane crash, a shark attack, an anthrax infection—loom larger in one’s worry list than more frequent and boring events, such as the car crashes and ladder falls that get printed beneath the fold on page B14. And it can lead risk experts to speak one language and ordinary people to hear another. (p. 232)

Put simply, there exists a wide variety of explanations for why our collective first reaction to new technologies often is one of dystopian dread. In my work, I have identified several other factors, including: generational differences; hyper-nostalgia; media sensationalism; special interest pandering to stoke fears and sell products or services; elitist attitudes among intellectuals; and the so-called “third-person effect hypothesis,” which posits that when some people encounter perspectives or preferences at odds with their own, they are more likely to be concerned about the impact of those things on others throughout society and to call on government to “do something” to correct or counter those perspectives or preferences.

Some combination of these factors ends up driving the initial resistance we have seen to new technologies that disrupt long-standing social norms, traditions, and institutions. In the extreme, it results in that gloom-and-doom, sky-is-falling disposition in which we are repeatedly told how humanity is about to be steamrolled by some new invention or technological development.

The “Prevail” (or “Muddling Through”) Scenario

“The good news is that end-of-the-world predictions have been around for a very long time, and none of them has yet borne fruit,” Garreau reminds us. (p. 148) Why not? Let’s get back to his framework for the answer. After discussing the “Heaven” (optimistic) and “Hell” (skeptical or pessimistic) scenarios cast about by countless tech writers throughout history, Garreau outlines a third, and more pragmatic “Prevail” option, which views history “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.”

That pretty much sums up my own perspective on things, and in the remainder of this essay I want to sketch out the reasons why I think the “prevail” or “muddling through” scenario offers the best explanation for how we learn to cope with technological disruption and prosper in the process.

As Garreau explains it, under the “Prevail” scenario, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he rightly notes. (p. 154) As John Seely Brown and Paul Duguid noted in their excellent 2001 essay, “Response to Bill Joy and the Doom-and-Gloom Technofuturists”:

technological and social systems shape each other. The same is true on a larger scale. […] Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge for futurology (and for all of us) is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.  Social and technological systems do not develop independently; the two evolve together in complex feedback loops, wherein each drives, restrains and accelerates change in the other.

It is this process of “constantly forming and reforming new dynamic equilibriums” that interests me most. In a recent exchange with Michael Sacasas – one of the most thoughtful modern technology critics I’ve come across — I noted that the nature of individual and societal acclimation to technological change is worthy of serious investigation if for no other reason than that it has continuously happened! What I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies disrupted our personal, social, economic, cultural, and legal norms.

In a response to me, Sacasas put forth the following admonition: “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.” This is undoubtedly true, but it does not undermine the reality of societal adaptation. What can we learn from this? What were the mechanics of that adaptive process? As social norms, personal habits, and human relationships were disrupted, what helped us muddle through and find a way of coping with new technologies? Likewise, as existing markets and business models were disrupted, how were new ones formulated in response to the given technological disruption? Finally, how did legal norms and institutions adjust to those same changes?

Of course, this raises an entirely different issue: What metrics are we using to judge whether “the changes were inconsequential or benign”? As I noted in my exchange with Sacasas, at the end of the day, it may be that we won’t be able to even agree on a standard by which to make that judgment and will instead have to settle for a rough truce about what history has to teach us that might be summed up by the phrase: “something gained, something lost.”

Resiliency: Why Do the Skeptics Never Address It (and Its Benefits)?

Nonetheless, I believe that while technological change often brings sweeping and quite consequential change, there is great value in the very act of living through it.

In my work, including my latest little book, I argue that humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

What we’re talking about here is resiliency. Andrew Zolli and Ann Marie Healy, authors of Resilience: Why Things Bounce Back, define resilience as “the capacity of a system, enterprise, or a person to maintain its core purpose and integrity in the face of dramatically changed circumstances.” (p. 7) “To improve your resilience,” they note, “is to enhance your ability to resist being pushed from your preferred valley, while expanding the range of alternatives that you can embrace if you need to. This is what researchers call preserving adaptive capacity—the ability to adapt to changed circumstances while fulfilling one’s core purpose—and it’s an essential skill in an age of unforeseeable disruption and volatility.” (p. 7-8, emphasis in original) Moreover, they note, “by encouraging adaptation, agility, cooperation, connectivity, and diversity, resilience-thinking can bring us to a different way of being in the world, and to a deeper engagement with it.” (p. 16)

Even if one doesn’t agree with all of that, again, I would think one would find great value in studying the process by which such adaptation happens precisely because it does happen so regularly. And then we could argue about whether it was all really worth it! Specifically, was it worth whatever we lost in the process (i.e., a change in our old moral norms, our old privacy norms, our old institutions, our old business models, our old laws, or whatever else)?

As Sacasas correctly argues, “That people before us experienced similar problems does not mean that they magically cease being problems today.” Again, quite right. On the other hand, the fact that people and institutions learned to cope with those concerns and become more resilient over time is worthy of serious investigation because somehow we “muddled through” before and we’ll have to muddle through again. And, again, what we learned from living through that process may be extremely valuable in its own right.

Of Course, Muddling Through Isn’t Always Easy

Now, let’s be honest about this process of “muddling through”: it isn’t always neat or pretty. To put it crudely, sometimes muddling through really sucks! Think about the modern technologies that violate our visceral sense of privacy and personal space today. I am an intensely private person and if I had a life motto it would probably be: “Leave Me Alone!” Yet, sometimes there’s just no escaping the pervasive reach of modern technologies and processes. On the other hand, I know that, like so many others, I derive amazing benefits from all these new technologies, too. So, like most everyone else I put up with the downsides because, on net, there are generally more upsides.

Almost every digital service that we use today presents us with these trade-offs. For example, email has allowed us to connect with a constantly growing universe of our fellow humans and organizations. Yet, spam clutters our mailboxes and the sheer volume of email we get sometimes overwhelms us. Likewise, in just the past five years, smartphones have transformed our lives in so many ways for the better in terms of not just personal convenience but also personal safety. On the other hand, smartphones have become more than a bit of a nuisance in certain environments (theaters, restaurants, and other closed spaces). And they also put our safety at risk when we use them while driving automobiles.

But, again, we adjust to most of these new realities and then we find constructive solutions to the really hard problems – yes, and that sometimes includes legal remedies to rectify serious harms. But a certain amount of social adaptation will, nonetheless, be required. Law can only slightly slow that inevitability; it can’t stop it entirely. And as messy and uncomfortable as muddling through can be, we have to (a) be aware of what we gain in the process and (b) ask ourselves what the cost of taking the alternative path would be. Attempts to throw a wrench in the works and derail new innovations or delay various types of technological change are always going to be tempting, but such interventions will come at a very steep cost: less entrepreneurialism, diminished competition, stagnant markets, higher prices, and fewer choices for citizens. As I note in my new book, if we spend all our time living in constant fear of worst-case scenarios — and premising public policy upon such fears — it means that many best-case scenarios will never come about.

Social Resistance / Pressure Dynamics

There’s another part to this story that often gets overlooked. “Muddling through” isn’t just some sort of passive process where individuals and institutions have to figure out how to cope with technological change. Rather, there is an active dynamic at work, too. Individuals and institutions push back and actively shape their tools and systems.

In a recent Wired essay on public attitudes about emerging technologies such as the controversial Google Glass, Issie Lapowsky noted that:

If the stigma surrounding Google Glass (or, perhaps more specifically, “Glassholes”) has taught us anything, it’s that no matter how revolutionary technology may be, ultimately its success or failure ride on public perception. Many promising technological developments have died because they were ahead of their times. During a cultural moment when the alleged arrogance of some tech companies is creating a serious image problem, the risk of pushing new tech on a public that isn’t ready could have real bottom-line consequences.

In my new book, I spend some time thinking about this process of “norm-shaping” through social pressure, activist efforts, educational steps, and even public shaming. A recent Ars Technica essay by Joe Silver offered some powerful examples of how when “shamed on Twitter, corporations do an about-face.” Silver notes that “A few recent case-study examples of individuals who felt they were wronged by corporations and then took to the Twitterverse to air their grievances show how a properly placed tweet can be a powerful weapon for consumers to combat corporate malfeasance.” In my book and in recent law review articles, I have provided other examples of how this works at both a corporate and individual level to constrain improper behavior and protect various social norms.

Edmund Burke once noted that, “Manners are of more importance than laws. Manners are what vex or soothe, corrupt or purify, exalt or debase, barbarize or refine us, by a constant, steady, uniform, insensible operation, like that of the air we breathe in.” Cristina Bicchieri, a leading behavioral ethicist, calls social norms “the grammar of society” because,

like a collection of linguistic rules that are implicit in a language and define it, social norms are implicit in the operations of a society and make it what it is. Like a grammar, a system of norms specifies what is acceptable and what is not in a social group. And analogously to a grammar, a system of norms is not the product of human design and planning.

Put simply, more than law can regulate behavior — whether it is organizational behavior or individual behavior. It’s yet another way we learn to cope and “muddle through” over time. Again, check out my book for several other examples.

A Case Study: The Long-Standing “Problem” of Photography

Let’s bring all this together and be more concrete about it by using a case study: photography. With all the talk of how unsettling various modern technological developments are, they really pale in comparison to just how jarring the advent of widespread public photography must have been in the late 1800s and beyond. “For the first time photographs of people could be taken without their permission—perhaps even without their knowledge,” notes Lawrence M. Friedman in his 2007 book, Guarding Life’s Dark Secrets: Legal and Social Controls over Reputation, Propriety, and Privacy.

Thus, the camera was viewed as a highly disruptive force as photography became more widespread. In fact, the most important essay ever written on privacy law, Samuel D. Warren and Louis D. Brandeis’s famous 1890 Harvard Law Review essay on “The Right to Privacy,” decried the spread of public photography. The authors lamented that “instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life” and claimed that “numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’”

Warren and Brandeis weren’t alone. Plenty of other critics existed and many average citizens were probably outraged by the rise of cameras and public photography. Yet, personal norms and cultural attitudes toward cameras and public photography evolved quite rapidly and they became ingrained in human experience. At the same time, social norms and etiquette evolved to address those who would use cameras in inappropriate, privacy-invasive ways.

Again, we muddled through. And we’ve had to continuously muddle through in this regard because photography presents us with a seemingly endless set of new challenges. As cameras grow still smaller and get integrated into other technologies (most recently, smartphones, wearable technologies, and private drones), we’ve had to learn to adjust and accommodate. With wearable technologies (check out Narrative, Butterflye, and Autographer, for example), personal drones (see “Drones are the future of selfies”), and other forms of microphotography all coming online now, we’ll have to adjust still more and develop new norms and coping mechanisms. There’s never going to be an end to this adjustment process.

Toward Pragmatic Optimism

Should we really remain bullish about humanity’s prospects in the midst of all this turbulent change? I think so.

Again, long before the information revolution took hold, the industrial revolution produced its share of cultural and economic backlashes, and it is still doing so today. Most notably, many Malthusian skeptics and environmental critics lamented the supposed strain of population growth and industrialization on social and economic life. Catastrophic predictions followed.

In his 2007 book, Prophecies of Doom and Scenarios of Progress, Paul Dragos Aligica, a colleague of mine at the Mercatus Center, documented many of these industrial era “prophecies of doom” and described how this “doomsday ideology” was powerfully critiqued by a handful of scholars — most notably Herman Kahn and Julian Simon. Aligica explains that Kahn and Simon argued for “the alternative paradigm, the pro-growth intellectual tradition that rejected the prophecies of doom and called for realism and pragmatism in dealing with the challenge of the future.”

Kahn and Simon were pragmatic optimists or what author Matt Ridley calls “rational optimists.” They were bullish about the future and the prospects for humanity, but they were not naive regarding the many economic and social challenges associated with technological change. Like Kahn and Simon, we should embrace the amazing technological changes at work in today’s information age but with a healthy dose of humility and appreciation for the disruptive impact and pace of that change.

But the rational optimists never get as much attention as the critics and catastrophists. “For 200 years pessimists have had all the headlines even though optimists have far more often been right,” observes Ridley. “Arch-pessimists are feted, showered with honors and rarely challenged, let alone confronted with their past mistakes.” At least part of the reason for that, as already noted, goes back to the amazing rhetorical power of good intentions. Techno-pessimists often exhibit a deep passion about their particular cause and are typically given more than just the benefit of the doubt in debates about progress and the future; they are treated as superior to opponents who challenge their perspectives or proposals. When a privacy advocate says they are just looking out for consumers, or an online safety advocate claims they have the best interests of children in mind, or a consumer advocate argues that regulation is needed to protect certain people from some amorphous harm, they are assuming the moral high ground through the assertion of noble-minded intentions. Even if their proposals will often fail to bring about the better state of affairs they claim or derail life-enriching innovations, they are more easily forgiven for those mistakes precisely because of their fervent claim of noble-minded intentions.

If intentions are allowed to trump empiricism and a general openness to change, however, the results for a free society and for human progress will be profoundly deleterious. That is why, when confronted with pessimistic, fear-based arguments, the pragmatic optimist must begin by granting that the critics clearly have the best of intentions, but then point out how intentions can only get us so far in the real-world, which is full of complex trade-offs.

The pragmatic optimist must next meticulously and dispassionately outline the many reasons why restricting progress or allowing planning to enter the picture will have many unintended consequences and hidden costs. The trade-offs must be explained in clear terms. Examples of previous interventions that went wrong must be proffered.

The Evidence Speaks for Itself

Luckily, we pragmatic optimists have plenty of evidence working in our favor when making this case. As Pulitzer Prize-winning historian Richard Rhodes noted in his 1999 book, Visions of Technology: A Century of Vital Debate about Machines, Systems, and the Human World:

it’s surprising that [many intellectuals] don’t value technology; by any fair assessment, it has reduced suffering and improved welfare across the past hundred years. Why doesn’t this net balance of benevolence inspire at least grudging enthusiasm for technology among intellectuals? (p. 23)

Great question, and one that we should never stop asking the techno-critics to answer. After all, as Joel Mokyr notes in his wonderful 1990 book, Lever of Riches: Technological Creativity and Economic Progress, “Without [technological creativity], we would all still live nasty and short lives of toil, drudgery, and discomfort.” (p. viii) “Technological progress, in that sense, is worthy of its name,” he says. “It has led to something that we may call an ‘achievement,’ namely the liberation of a substantial portion of humanity from the shackles of subsistence living.” (p. 288) Specifically,

The riches of the post-industrial society have meant longer and healthier lives, liberation from the pains of hunger, from the fears of infant mortality, from the unrelenting deprivation that were the part of all but a very few in preindustrial society. The luxuries and extravagances of the very rich in medieval society pale compared to the diet, comforts, and entertainment available to the average person in Western economies today. (p. 303)

In his new book, Smaller Faster Lighter Denser Cheaper: How Innovation Keeps Proving the Catastrophists Wrong, Robert Bryce hammers this point home when he observes that:

The pessimistic worldview ignores an undeniable truth: more people are living longer, healthier, freer, more peaceful, lives than at any time in human history… the plain reality is that things are getting better, a lot better, for tens of millions of people around the world. Dozens of factors can be cited for the improving conditions of humankind. But the simplest explanation is that innovation is allowing us to do more with less.

This is the framework that Herman Kahn, Julian Simon, and the other champions of progress used to deconstruct and refute the pessimists of previous eras. In line with that approach, we modern pragmatic optimists must continuously point to the unappreciated but unambiguous benefits of technological innovation and dynamic change. But we should also continue to remind the skeptics of the amazing adaptability of the human species in the face of adversity. As Kahn taught us long ago, when it comes to technological progress and humanity’s ingenious responses to it, “we should expect to go on being surprised” — and in mostly positive ways. Humans have consistently responded to technological change in creative and sometimes completely unexpected ways. There’s no reason to think we can’t get through modern technological disruptions using similar coping and adaptation strategies. As Mokyr noted in his recent City Journal essay on “The Next Age of Invention”:

Much like medication, technological progress almost always has side effects, but bad side effects are rarely a good reason not to take medication and a very good reason to invest in the search for second-generation drugs. To a large extent, technical innovation is a form of adaptation—not only to externally changing circumstances but also to previous adaptations.

In sum, we need to have a little faith in the ability of humanity to adjust to an uncertain future, no matter what it throws at us. We’ll muddle through and come out better because of what we have learned in the process, just as we have so many times before.

I’ll give venture capitalist Marc Andreessen the last word on this since he’s been on an absolute tear on Twitter lately when discussing many of the issues I’ve raised in this essay. While addressing the particular fear that automation is running amuck and that robots will eat all our jobs, Andreessen eloquently noted:

We have no idea what the fields, industries, businesses, and jobs of the future will be. We just know we will create an enormous number of them. Because if robots and AI replace people for many of the things we do today, the new fields we create will be built on the huge number of people those robots and AI systems made available. To argue that huge numbers of people will be available but we will find nothing for them (us) to do is to dramatically short human creativity. And I am way long human creativity.

Me too, buddy. Me too.



New Law Review Article: “Privacy Law’s Precautionary Principle Problem” Mon, 16 Jun 2014 17:50:30 +0000

My latest law review article is entitled, “Privacy Law’s Precautionary Principle Problem,” and it appears in Vol. 66, No. 2 of the Maine Law Review. You can download the article on my Mercatus Center page, on the Maine Law Review website, or via SSRN. Here’s the abstract for the article:

Privacy law today faces two interrelated problems. The first is an information control problem. Like so many other fields of modern cyberlaw—intellectual property, online safety, cybersecurity, etc.—privacy law is being challenged by intractable Information Age realities. Specifically, it is easier than ever before for information to circulate freely and harder than ever to bottle it up once it is released.

This has not slowed efforts to fashion new rules aimed at bottling up those information flows. If anything, the pace of privacy-related regulatory proposals has been steadily increasing in recent years even as these information control challenges multiply.

This has led to privacy law’s second major problem: the precautionary principle problem. The precautionary principle generally holds that new innovations should be curbed or even forbidden until they are proven safe. Fashioning privacy rules based on precautionary principle reasoning necessitates prophylactic regulation that makes new forms of digital innovation guilty until proven innocent.

This puts privacy law on a collision course with the general freedom to innovate that has thus far powered the Internet revolution, and privacy law threatens to limit innovations consumers have come to expect or even raise prices for services consumers currently receive free of charge. As a result, even if new regulations are pursued or imposed, there will likely be formidable push-back not just from affected industries but also from their consumers.

In light of both these information control and precautionary principle problems, new approaches to privacy protection are necessary. We need to invert the process of how we go about protecting privacy by focusing more on practical “bottom-up” solutions—education, empowerment, public and media pressure, social norms and etiquette, industry self-regulation and best practices, and an enhanced role for privacy professionals within organizations—instead of “top-down” legalistic solutions and regulatory techno-fixes. Resources expended on top-down regulatory pursuits should instead be put into bottom-up efforts to help citizens better prepare for an uncertain future.

In this regard, policymakers can draw important lessons from the debate over how best to protect children from objectionable online content. In a sense, there is nothing new under the sun; the current debate over privacy protection has many parallels with earlier debates about how best to protect online child safety. Most notably, just as top-down regulatory constraints came to be viewed as constitutionally suspect, economically inefficient, and highly unlikely even to be workable in the long run for protecting online child safety, the same will likely be true for most privacy-related regulatory enactments.

This article sketches out some general lessons from those online safety debates and discusses their implications for privacy policy going forward.

Read the full article here [PDF].


video: Cap Hill Briefing on Emerging Tech Policy Issues Thu, 12 Jun 2014 15:53:33 +0000

I recently did a presentation for Capitol Hill staffers about emerging technologies (driverless cars, the “Internet of Things,” wearable tech, private drones, “biohacking,” etc.) and the various policy issues they will give rise to (privacy, safety, security, economic disruptions, etc.). The talk is derived from my new little book on “Permissionless Innovation,” but in coming months I will be releasing big papers on each of the topics discussed here.

Has Copyright Gone Too Far? Watch This “Hangout” to Find Out Tue, 10 Jun 2014 01:20:19 +0000

Last week, the Mercatus Center and the R Street Institute co-hosted a video discussion about copyright law. I participated in the Google Hangout, along with co-liberator Tom Bell of Chapman Law School (and author of the new book Intellectual Privilege), Mitch Stoltz of the Electronic Frontier Foundation, Derek Khanna, and Zach Graves of the R Street Institute. We discussed the Aereo litigation, compulsory licensing, statutory damages, the constitutional origins of copyright, and many more hot copyright topics.

You can watch the discussion here:


Outdated Policy Decisions Don’t Dictate Future Rights in Perpetuity Mon, 09 Jun 2014 13:19:04 +0000

Congressional debates about STELA reauthorization have resurrected the notion that TV stations “must provide a free service” because they “are using public spectrum.” This notion, which is rooted in 1930s government policy, has long been used to justify the imposition of unique “public interest” regulations on TV stations. But outdated policy decisions don’t dictate future rights in perpetuity, and policymakers abandoned the “public spectrum” rationale long ago.

All wireless services use the public spectrum, yet none of them are required to provide a free commercial service except broadcasters. Satellite television operators, mobile service providers, wireless Internet service providers, and countless other commercial spectrum users are free to charge subscription fees for their services.

There is nothing intrinsic in the particular frequencies used by broadcasters that justifies their discriminatory treatment. Mobile services use spectrum once allocated to broadcast television, but aren’t treated like broadcasters.

The fact that broadcast licenses were once issued without holding an auction is similarly irrelevant. All spectrum licenses were granted for free before the mid-1990s. For example, cable and satellite television operators received spectrum licenses for free, but are not required to offer their video services for free.

If the idea is to prevent companies that were granted free licenses from receiving a “windfall,” it’s too late. As Jeffrey A. Eisenach has demonstrated, “the vast majority of current television broadcast licensees [92%] have paid for their licenses through station transactions.”

The irrelevance of the free spectrum argument is particularly obvious when considering the differential treatment of broadcast and satellite spectrum. Spectrum licenses for broadcast TV stations are now subject to competitive bidding at auction while satellite television licenses are not. If either service should be required to provide a free service on the basis of spectrum policy, it should be satellite television.

Although TV stations were loaned an extra channel during the DTV transition, the DTV transition is over. Those channels have been returned and were auctioned for approximately $19 billion in 2008. There is no reason to hold TV stations accountable in perpetuity for a temporary loan.

Even if there were, the loan was not free. Though TV stations did not pay lease fees for the use of those channels, they nevertheless paid a heavy price. TV stations were required to invest substantial sums in HDTV technology and to broadcast signals in that format long before it was profitable. The FCC required “rapid construction of digital facilities by network-affiliated stations in the top markets, in order to expose a significant number of households, as early as possible, to the benefits of DTV.” TV stations were thus forced to “bear the risks of introducing digital television” for the benefit of consumers, television manufacturers, MVPDs, and other digital media.

The FCC did not impose comparable “loss leader” requirements on MVPDs. They are free to wait until consumer demand for digital and HDTV content justifies upgrading their systems — and they are still lagging TV stations by a significant margin. According to the FCC, only about half of the collective footprints of the top eight cable MVPDs had been transitioned to all-digital channels at the end of 2012. By comparison, the DTV transition was completed in 2009.

There simply is no satisfactory rationale for requiring broadcasters to provide a free service based on their use of spectrum or the details of past spectrum licensing decisions. If the applicability of a free service requirement turned on such issues, cable and satellite television subscribers wouldn’t be paying subscription fees.

Son’s Criticism of U.S. Broadband Misleading and Misplaced Mon, 02 Jun 2014 23:43:19 +0000

SoftBank Chairman and CEO Masayoshi Son again criticized U.S. broadband (see this and this) at last week’s Code Conference.

The U.S. created the Internet, but its speeds rank 15th out of 16 major countries, ahead of only the Philippines.  Mexico is No. 17, by the way.

It turns out that Son couldn’t have been referring to the broadband service he receives from Comcast, since the survey data he was citing—as he has in the past—appears to be from OpenSignal and was gleaned from a subset of the six million users of the OpenSignal app who had 4G LTE wireless access in the second half of 2013.

Oh, and Son neglected to mention that immediately ahead of the U.S. in the OpenSignal survey is Japan.

Son, who is also the chairman of Sprint, has a legitimate grievance with overzealous U.S. antitrust enforcers.  But he should be aware that for many years the proponents of network neutrality regulation have cited international rankings in support of their contention that the U.S. broadband market is under-regulated.

It is a well-established fact that measuring broadband speeds and prices from one country to the next is difficult as a result of “significant gaps and variations in data collection methodologies,” and that “numerous market, regulatory, and geographic factors determine penetration rates, prices, and speeds.”  See, e.g., the  Federal Communications Commission’s most recent International Broadband Data Report.  In the case of wireless services, as one example, the availability of sufficient airwaves can have a huge impact on speeds and prices.  Airwaves are assigned by the FCC.

There are some bright spots in the broadband comparisons published by a number of organizations.

For example, U.S. consumers pay the third-lowest average price for entry-level fixed broadband among the 161 countries surveyed by the ITU (International Telecommunication Union).

And as David Balto notes over at Huffington Post, Akamai reports that the average connection speeds in Japan and the U.S. aren’t very far apart—12.8 megabits per second in Japan versus 10 Mbps in the U.S.

Actual speeds experienced by broadband users reflect the service tiers consumers choose to purchase, and not everyone elects to pay for the highest available speed. It’s unfair to blame service providers for that.

A more relevant metric for judging service providers is investment.  ITU reports that the U.S. leads every other nation in telecommunications investment by far.  U.S. service providers invested more than $70 billion in 2010 versus less than $17 billion in Japan.  On a per capita basis, telecom investment in the U.S. is almost twice that of Japan.
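The per-capita arithmetic can be sanity-checked in a few lines. The investment figures come from the ITU data cited above; the 2010 population figures are my own rough assumptions, not from the post:

```python
# Rough per-capita telecom investment comparison, U.S. vs. Japan (2010).
# Investment figures: ITU data as cited in the post. Population figures
# (millions) are approximate assumptions for 2010.
us_investment_busd = 70   # U.S. invested more than $70 billion
jp_investment_busd = 17   # Japan invested less than $17 billion
us_pop_m = 309            # assumed U.S. population, millions
jp_pop_m = 128            # assumed Japan population, millions

# Convert billions of dollars / millions of people -> dollars per person.
us_per_capita = us_investment_busd * 1000 / us_pop_m
jp_per_capita = jp_investment_busd * 1000 / jp_pop_m

print(f"U.S.:  ${us_per_capita:.0f} per person")
print(f"Japan: ${jp_per_capita:.0f} per person")
print(f"Ratio: {us_per_capita / jp_per_capita:.1f}x")
```

With these assumed populations, the ratio works out to roughly 1.7x, consistent with the “almost twice” figure; the exact number depends on the precise investment and population values used.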

In Europe, per capita investment in telecommunications infrastructure is less than half what it is in the U.S., according to Martin Thelle and Bruno Basalisco.

Incidentally, the European Commission has concluded,

Networks are too slow, unreliable and insecure for most Europeans; telecoms companies often have huge debts, making it hard to invest in improvements. We need to turn the sector around so that it enables more productivity, jobs and growth.

It should be noted that for the past decade or so Europe has been pursuing the same regulatory strategy that net neutrality boosters are advocating for the U.S.  Thelle and Basalisco observe that,

The problem with the European unbundling regulation is that it pitted short-term consumer benefits, such as low prices, against the long-run benefits from capital investment and innovation. Unfortunately, regulators often sacrificed the long-term interest by forcing an infrastructure owner to share its physical wires with competing operators at a cheap rate. Thus, the regulated company never had a strong incentive to invest in new infrastructure technologies — a move that would considerably benefit the competing operators using its infrastructure.

Europe’s experience with the unintended consequences of unnecessary regulation is perhaps the most useful lesson the U.S. can learn from abroad.
