Supporters of Title II reclassification for broadband Internet access services point to the fact that some wireless services have been governed by a subset of Title II provisions since 1993.  No one is complaining about that.  So what, then, is the basis for opposition to similar regulatory treatment for broadband?

Austin Schlick, the former FCC general counsel, outlined the so-called “Third Way” legal framework for broadband in a 2010 memo that proposed Title II reclassification along with forbearance from all but six of Title II’s 48 provisions.  He noted that “this third way is a proven success for wireless communications.”  This is the model that President Obama is backing.  Title II reclassification “doesn’t have to be a big deal,” Harold Feld reminds us, since the wireless industry seems to be doing okay despite the fact that mobile phone service was classified as a Title II service in 1993.

To be clear, only mobile voice services are subject to Title II, since the FCC classified broadband access to the Internet over wireless networks as an “information” service (and thus completely exempt from Title II) in March of 2007.

Sec. 6002(c) of the Omnibus Budget Reconciliation Act of 1993 (Public Law 103-66) modified Sec. 332 of the Communications Act so commercial mobile services would be treated “as a common carrier … except for such provisions of title II as the Commission may specify by regulation as inapplicable…”

The FCC commendably did forbear.  Former Chairman Reed E. Hundt would later boast in his memoir that the commission “totally deregulated the wireless industry.” He added that this was possible thanks to a Democratic Congress and former Vice President Al Gore’s tie-breaking Senate vote.

Last week marked the conclusion of the ITU’s Plenipotentiary Conference, the quadrennial gathering at which ITU member states revise the treaty that establishes the Union and conduct other high-level business. I had the privilege of serving as a member of the US delegation, as I did for the WCIT, and of seeing the negotiations first hand. This year’s Plenipot was far less contentious than the WCIT was two years ago. For other summaries of the conference, let me recommend the write-ups by Samantha Dickinson, Danielle Kehl, and Amb. Danny Sepulveda. Rather than recap their posts or the entire conference, I just want to add a couple of observations of my own.

We mostly won on transparent access to documents

Through my involvement with WCITLeaks, I have closely followed the issue of access to ITU documents, both before and during the Plenipot. My assessment is that we mostly won.

Going forward, most inputs and outputs to ITU conferences and assemblies will be available to the public from the ITU website. This excludes a) working documents, b) documents related to other meetings, such as those of Council Working Groups and Study Groups, and c) non-meeting documents that arguably should also be public.

However, in February, an ITU Council Working Group will be meeting to develop what is likely to be a more extensive document access policy. In May, the whole Council will meet to provisionally approve an access policy. And in 2018, the next Plenipot will permanently decide what to do about this provisional access policy.

There are no guarantees, and we will need to closely monitor the outcomes in February and May to see what policy is adopted—but if it is a good one, I would be prepared to shut down WCITLeaks as it would become redundant. If the policy is inadequate, however, WCITLeaks will continue to operate until the policy improves.

I was gratified that WCITLeaks continued to play a constructive role in the discussion. For example, the Arab States cited us in their proposal on ITU document access, considering “that there are some websites on the Internet which are publishing illegally to the public ITU documents that are restricted only to Member States.” In addition, I am told that at the CEPT coordination meeting, WCITLeaks was thanked for giving the issue of transparency at the ITU a shot in the arm.

A number of governments were strong proponents of transparency at the ITU, but I think special thanks are due to Sweden, who championed the issue on behalf of Europe. I was very grateful for their leadership.

The collapse of the WCIT was an input into a harmonious Plenipot

We got through the Plenipot without a single vote (other than officer elections)! That’s great news—it’s always better when the ITU can come to agreement without forcing some member states to go along.

I think it’s important to recognize the considerable extent to which this consensus agreement was driven by events at the WCIT in 2012. At the WCIT, when the US (and others) objected and said that we could not agree to certain provisions, other countries thought we were bluffing. They decided to call our bluff by engineering a vote, and we wisely decided not to sign the treaty, along with 54 other countries.

In Busan this month, when we said that we could not agree to certain outcomes, nobody thought we were bluffing. Our willingness to walk away at the WCIT gave us added credibility in negotiations at the Plenipot. While I also believe that good diplomacy helped secure a good outcome at the Plenipot, the occasional willingness to walk the ITU off a cliff comes in handy. We should keep this in mind for future negotiations—making credible promises and sticking to them pays dividends down the road.

The big question of the conference is in what form the India proposal will re-emerge

At the Plenipot, India offered a sweeping proposal to fundamentally change the routing architecture of the Internet so that a) IP addresses would be allocated by country, like telephone numbers, with a country prefix and b) domestic Internet traffic would never be routed out of the country.

This proposal was obviously very impractical. It is unlikely, in any case, that the ITU has the expertise or the budget to undertake such a vast reengineering of the Internet. But the idea would also be very damaging from the perspective of individual liberty—it would make nation-states, even more than they are now, mediators of human communication.

I was very proud that the United States not only made the practical case against the Indian proposal but also made a principled one. Amb. Sepulveda made a very strong statement indicating that the United States does not share India’s goals as expressed in this proposal, and that we would not be a part of it. This statement, along with those of other countries and subsequent negotiations, effectively killed the Indian proposal at the Plenipot.

The big question is in what form this proposal will re-emerge. The idea of remaking the Internet along national lines is unlikely to go away, and we will need to continue monitoring ITU study groups to ensure that this extremely damaging proposal does not rear its head again.

In my previous essay, I discussed a new white paper by my colleague Robert Graboyes, Fortress and Frontier in American Health Care, which examines the future of medical innovation. Graboyes uses the “fortress vs. frontier” dichotomy to help explain the differing “visions” that shape how public policy debates about technological innovation in the health care arena play out.  It’s a terrific study that I highly recommend for all the reasons I stated in my previous post.

As I was reading Bob’s new report, I realized that his approach had much in common with a few other recent innovation policy paradigms I have discussed here before from Virginia Postrel (“Stasis” vs. “Dynamism”), Robert D. Atkinson (“Preservationists” vs. “Modernizers”), and myself (“Precautionary Principle” vs. “Permissionless Innovation”). In this essay, I will briefly relate Bob’s approach to those other three innovation policy paradigms and then note a deficiency with our common approaches. I’ll conclude by briefly discussing another interesting framework from science writer Joel Garreau.

I want to bring to everyone’s attention an important new white paper by Dr. Robert Graboyes, a colleague of mine at the Mercatus Center at George Mason University who specializes in the economics of health care. His new 67-page study, Fortress and Frontier in American Health Care, seeks to move away from the tired old dichotomies that drive health care policy discussions: Left versus Right, Democrat versus Republican, federal versus state, public versus private, and so on. Instead, Graboyes seeks to reframe the debate over the future of health care innovation in terms of “Fortress versus Frontier” and to highlight what lessons we can learn from the Internet and the Information Revolution when considering health care policy.

What does Graboyes mean by “Fortress and Frontier”? Here’s how he explains this conflict of visions:

The Fortress is an institutional environment that aims to obviate risk and protect established producers (insiders) against competition from newcomers (outsiders). The Frontier, in contrast, tolerates risk and allows outsiders to compete against established insiders. . . .  The Fortress-Frontier divide does not correspond neatly with the more familiar partisan or ideological divides. Framing health care policy issues in this way opens the door for a more productive national health care discussion and for unconventional policy alliances. (p. 4)

He elaborates in more detail later in the paper.

How Much Tax?


As I and others have recently noted, if the Federal Communications Commission reclassifies broadband Internet access as a “telecommunications” service, broadband would automatically become subject to the federal Universal Service tax—currently 16.1%, or more than twice the highest state sales tax (California–7.5%), according to the Tax Foundation.

Erik Telford, writing in The Detroit News, has reached a similar conclusion.

U.S. wireline broadband revenue rose to $43 billion in 2012 from $41 billion in 2011, according to one estimate. “Total U.S. mobile data revenue hit $90 billion in 2013 and is expected to rise above $100 billion this year,” according to another estimate.  Assuming that the wireline and wireless broadband industries as a whole earn approximately $150 billion this year, the current 16.1% Universal Service Contribution Factor would generate over $24 billion in new revenue for government programs administered by the FCC if broadband were defined as a telecommunications service.

The Census Bureau reports that there were approximately 90 million households with Internet use at home in 2012. Wireline broadband providers would have to collect approximately $89 from each one of those households in order to satisfy a 16.1% tax liability on earnings of $50 billion. There were over 117 million smartphone users over the age of 15 in 2011, according to the Census Bureau. Smartphones would account for the bulk of mobile data revenue. Mobile broadband providers would have to collect approximately $137 from each of those smartphone users to shoulder a tax liability of 16.1% on earnings of $100 billion.
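For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch (not from the original post) that reproduces the figures above. It assumes the full 16.1% contribution factor is passed through to end users and uses the revenue, household, and smartphone-user counts cited in the preceding paragraphs.

```python
# Back-of-the-envelope check of the USF pass-through figures cited above.
# Assumption: the full 16.1% contribution factor is passed through to end
# users and is levied on the revenue totals quoted in the post.

USF_FACTOR = 0.161            # Universal Service Contribution Factor

wireline_revenue = 50e9       # approximate wireline broadband revenue ($)
mobile_revenue = 100e9        # approximate mobile data revenue ($)
households = 90e6             # households with Internet use at home (2012)
smartphone_users = 117e6      # smartphone users over age 15 (2011)

total_liability = USF_FACTOR * (wireline_revenue + mobile_revenue)
per_household = USF_FACTOR * wireline_revenue / households
per_smartphone_user = USF_FACTOR * mobile_revenue / smartphone_users

print(f"Total USF liability:    ${total_liability / 1e9:.2f} billion")  # ~$24.15 billion
print(f"Per wireline household: ${per_household:.0f}")                  # ~$89
print(f"Per smartphone user:    ${per_smartphone_user:.0f}")            # ~$138 (the post rounds to ~$137)
```

The small gap between $137 and $138 on the mobile side comes from the “over 117 million” user count; with a slightly larger denominator, the per-user figure rounds down to the post’s $137.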

Evan Selinger, a super-sharp philosopher of technology up at the Rochester Institute of Technology, is always alerting me to interesting new essays and articles and this week he brought another important piece to my attention. It’s a short new article by Arturo Casadevall, Don Howard, and Michael J. Imperiale, entitled, “The Apocalypse as a Rhetorical Device in the Influenza Virus Gain-of-Function Debate.” The essay touches on something near and dear to my own heart: the misuse of rhetoric in debates over the risk trade-offs associated with new technology and inventions. Casadevall, Howard, and Imperiale seek to “focus on the rhetorical devices used in the debate [over infectious disease experiments] with the hope that an analysis of how the arguments are being framed can help the discussion.”

They note that “humans are notoriously poor at assessing future benefits and risks” and that this makes many people susceptible to rhetorical ploys based on the artificial inflation of risks. Their particular focus in this essay is the debate over so-called “gain-of-function” (GOF) experiments involving influenza virus, but what they have to say here about how rhetoric is being misused in that field is equally applicable to many other fields of science and the policy debates surrounding various issues. The last two paragraphs of their essay are masterful and deserve everyone’s attention.

Would the Federal Communications Commission expose broadband Internet access services to tax rates of at least 16.6% of every dollar spent on international and interstate data transfers—and averaging 11.23% on transfers within a particular state and locality—if it reclassifies broadband as a telecommunications service pursuant to Title II of the Communications Act of 1934?

As former FCC Commissioner Harold Furchtgott-Roth notes in a recent Forbes column, the Internet Tax Freedom Act only prohibits state and local taxes on Internet access.  It says nothing about federal user fees.  The House Energy & Commerce Committee report accompanying the “Permanent Internet Tax Freedom Act” (H.R. 3086) makes this distinction clear.

The law specifies that it does not prohibit the collection of the 911 access or Universal Service Fund (USF) fees. The USF is imposed on telephone service rather than Internet access anyway, although the FCC periodically contemplates broadening the base to include data services.

The USF fee applies to all interstate and international telecommunications revenues.  If the FCC reclassifies broadband Internet access as a telecommunications service in the Open Internet Proceeding, the USF fee would automatically apply unless and until the commission concluded a separate rulemaking proceeding to exempt Internet access.  The Universal Service Contribution Factor is not insignificant. Last month, the commission increased it to 16.1%.  According to Furchtgott-Roth,

At the current 16.1% fee structure, it would be perhaps the largest, one-time tax increase on the Internet.  The FCC would have many billions of dollars of expanded revenue base to fund new programs without, according to the FCC, any need for congressional authorization.


Last week, it was my pleasure to speak at a Cato Institute event on “The End of Transit and the Beginning of the New Mobility: Policy Implications of Self-Driving Cars.” I followed Cato Institute Senior Fellow Randal O’Toole and Marc Scribner, a Research Fellow at the Competitive Enterprise Institute. They provided a broad and quite excellent overview of all the major issues at play in the debate over driverless cars. I highly recommend you read the excellent papers that Randal and Marc have published on these issues.

My role on the panel was to do a deeper dive into the privacy and security implications of not just the autonomous vehicles of our future, but also the intelligent vehicle technologies of the present. I discussed these issues in greater detail in my recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars,” which was co-authored with Ryan Hagemann. (That article will appear in a forthcoming edition of the Wake Forest Journal of Law & Policy.)  I’ve embedded the video of the event down below (my remarks begin at the 38:15 mark) as well as my speaking notes. Again, please consult the longer paper for details.



Good news! As the ITU’s Plenipotentiary Conference gets underway in Busan, Korea, the heads of delegation have met and decided to open up access to some of the documents associated with the meeting. At this time, it is only the documents that are classified as “contributions”—other documents such as meeting agendas, background information, and terms of reference remain password protected. It’s not clear yet whether that is an oversight or an intentional distinction. While I would prefer all documents to be publicly available, this is a very welcome development. It is gratifying to see the ITU membership taking transparency seriously.

Special thanks are due to ITU Secretary-General Hamadoun Touré. When Jerry Brito and I launched WCITLeaks in 2012, at first, the ITU took a very defensive posture. But after the WCIT, the Secretary-General demonstrated tremendous leadership by becoming a real advocate for transparency and reform. I am told that he was instrumental in convincing the heads of delegation to open up access to Plenipot documents. For that, Dr. Touré has my sincere thanks—I would be happy to buy him a congratulatory drink when I arrive in Busan, although I doubt his schedule would permit it.

It’s worth noting that this decision only applies to the Plenipotentiary conference. The US has submitted a proposal, to be considered at the conference, that would make something like this arrangement permanent by instructing the incoming Secretary-General to develop a policy of open access to all ITU meeting documents. That is a development that I will continue to watch closely.

Although SOPA was ignominiously defeated in 2012, the content industry never really gave up on the basic idea of breaking the Internet in order to combat content piracy. The industry now claims that a major cause of piracy is search engines returning results that direct users to pirated content. To combat this, they would like to regulate search engine results to prevent them from linking to sites that contain pirated music and movies.

This idea is problematic on many levels. First, there is very little evidence that content piracy is a serious concern in objective economic terms. Absent the availability of pirated content, most content pirates would not open their wallets to fund the creation of more movies and music. As Ian Robinson and I explain in our recent paper, industry estimates of the jobs created by intellectual property are absurd. Second, there are serious free speech implications associated with regulating search engine results. Search engines perform an information distribution role similar to that of newspapers, and they have an editorial voice. They deserve protection from censorship as long as they are not hosting the pirated material themselves. Third, as anyone who knows anything about the Internet knows, nobody uses the major search engines to look for pirated content. The serious pirates go straight to sites that specialize in piracy. Fourth, this is all part of a desperate attempt by the content industry to avoid modernizing and offering more of their content online through convenient packages such as Netflix.

As if these were not sufficient reason to reject the idea of “SOPA for Search Engines,” Google has now announced that they will be directing users to legitimate digital content if it is available on Netflix, Amazon, Google Play, Spotify, or other online services. The content industry now has no excuse—if they make their music and movies available in convenient form, users will see links to legitimate content even if they search for pirated versions.

[Image: star-trek-search-results]

Google also says they will be using DMCA takedown notices as an input into search rankings and autocomplete suggestions, demoting sites and terms that are associated with piracy. This is above and beyond what Google needs to do, and in fact raises some concerns about fraudulent DMCA takedown notices that could chill free expression—such as when CBS issued a takedown of John McCain’s campaign ad on YouTube even though it was likely legal under fair use. Google will have to carefully monitor the DMCA takedown process for abuse. But in any case, these moves by Google should once and for all put the nail in the coffin of the idea that we should compromise the integrity of search results through government regulation for the sake of fighting a piracy problem that is not that serious in the first place.