September 2012

By Geoffrey Manne, Matt Starr & Berin Szoka

“Real lawyers read the footnotes!”—thus did Harold Feld chastise Geoff and Berin in a recent blog post about our CNET piece on the Verizon/SpectrumCo transaction. We argued, as did Commissioner Pai in his concurrence, that the FCC provided no legal basis for its claims of authority to review the Commercial Agreements that accompanied Verizon’s purchase of spectrum licenses—and that these agreements for joint marketing, etc. were properly subject only to DOJ review (under antitrust).

Harold insists that the FCC provided “actual analysis of its authority” in footnote 349 of its Order. But real lawyers read the footnotes carefully. That footnote doesn’t provide any legal basis for the FCC to review agreements beyond a license transfer; indeed, the footnote doesn’t even assert such authority. In short, we didn’t cite the footnote because it is irrelevant, not because we forgot to read it.

First, a reminder of what we said:

>The FCC’s review of the Commercial Agreements accompanying the spectrum deal exceeded the limits of Section 310(d) of the Communications Act. As Commissioner Pai noted in his concurring statement, “Congress limited the scope of our review to the proposed transfer of spectrum licenses, not to other business agreements that may involve the same parties.” We (and others) raised this concern in public comments filed with the Commission. Here’s the agency’s own legal analysis — in full: “The Commission has authority to review the Commercial Agreements and to impose conditions to protect the public interest.” There’s not even an accompanying footnote.

Even if Harold were correct that footnote 349 provides citations to possible sources of authority for the FCC to review the Commercial Agreements, it remains irrelevant to our claim: The FCC exceeded its authority under 310(d) and asserted its authority under 310(d) without any analysis or citation. Footnote 349 begins with the phrase, “[a]side from Section 310(d)….” It is no surprise, then, that the footnote contains no analysis of the agency’s authority under that section. Continue reading →

The House Energy and Commerce Committee’s Subcommittee on Communications and Technology is holding a hearing this morning to examine how federal agencies and commercial wireless companies might benefit from more efficient government use of spectrum. The hearing is intended to address a report issued by the President’s Council of Advisors on Science and Technology (PCAST) that rejects the Constitutional role of Congress in managing our nation’s spectrum resources and neuters the FCC. The issues raised in the PCAST report should be subject to further study and not implemented through an unconstitutional Presidential memorandum. Only Congress can delegate this authority.

The findings and recommendations of the PCAST described above are an obvious attempt by the Administration to usurp Congressional authority and muscle it out of its constitutional jurisdiction over commercial spectrum use.

And one would expect that some in Congress would be downright angry that the Chairman of the FCC, an independent agency, is supporting a Presidential power grab. Continue reading →

Christopher Steiner, author of Automate This: How Algorithms Came to Rule the World, discusses his new book. Steiner originally set out to study the prevalence of algorithms in Wall Street stock trading but soon found they were everywhere. Stock traders were the first to use algorithms as a substitute for human judgment, making trades automatically and allowing for much faster trading. But now algorithms are used to diagnose illnesses, interpret legal documents, analyze foreign policy, and write newspaper articles. Algorithms have even been used to analyze how people form sentences in order to infer their personality and mental state, so that customer service agents can better handle upset customers. Steiner discusses the benefits and risks of algorithmic automation and how it will change the world.

Listen to the Podcast

Download MP3


In a [recent post](http://www.forbes.com/sites/timothylee/2012/09/08/the-weird-economics-of-utility-networks/), Tim Lee does a good job of explaining why facilities-based competition in broadband is difficult. He writes,

>As Verizon is discovering with its FiOS project, it’s much harder to turn a profit installing the second local loop; both because fewer than 50 percent of customers are likely to take the service, and because competition pushes down margins. And it’s almost impossible to turn a profit providing a third local loop, because fewer than a third of customers are likely to sign up, and even more competition means even thinner margins.

Tim thus concludes that

>the kind of “facilities-based” competition we’re seeing in Kansas City, in which companies build redundant networks that will sit idle most of the time, is extremely wasteful. In a market where every household has n broadband options (each with its own fiber network), only 1/n local loops will be in use at any given time. The larger n is, the more resources are wasted on redundant infrastructure.

I don’t understand that conclusion. You would imagine that redundant infrastructure would be built only if it were profitable to its builder. Tim is right that we probably should not expect more than a few competitors, but I don’t see how more than one pipe is necessarily wasteful. If laying down a second set of pipes is profitable, shouldn’t we welcome the competition? The question is whether that second pipe is profitable without government subsidy.

That brings me to a larger point: I think what Tim is missing is what makes Google Fiber so unique. Tim is assuming that all competitors in broadband will make their profits from the subscription fees they collect from subscribers. As we all know, that’s not [how Google tends to operate](http://elidourado.com/blog/theory-of-google/). Google’s primary business model is advertising, and that’s likely [where they expect their return to come from](http://community.nasdaq.com/News/2012-08/google-seeking-more-ad-impressions-with-fast-fiber.aspx?storyid=162788). One of Google Fiber’s price points is [free](http://www.techdirt.com/blog/innovation/articles/20120726/11200919842/google-fiber-is-official-free-broadband-up-to-5-mbps-pay-symmetrical-1-gbps.shtml), so we might expect greater adoption of the service. That’s disruptive innovation that could sustainably increase competition and bring down prices for consumers–without a government subsidy.

Kansas City sadly gave Google all sorts of subsidies, like free power and rackspace for its servers as [Tim has pointed out](http://arstechnica.com/tech-policy/2012/09/how-kansas-city-taxpayers-support-google-fiber/), but it also cut serious red tape. For example, there is no build-out requirement for Google Fiber, a fact [now bemoaned](http://www.wired.com/business/2012/09/google-fiber-digital-divide/) by digital divide activists. Such requirements, I would argue, are the [true cause](http://news.cnet.com/How-to-squelch-growth-of-the-high-speed-Net/2010-1034_3-6106690.html) of the unused and wasteful overbuilding that Tim laments.

So what matters more? The in-kind subsidies or the freedom to build only where it’s profitable? I think that’s the empirical question we’re really arguing about. It’s not a foregone conclusion of broadband economics that [there can be only one](http://www.youtube.com/watch?v=4AoOa-Fz2kw). And do we want to limit competition in part of a municipality in order to achieve equity for the whole? That’s another question over which “original recipe” and bleeding-heart libertarians may have a difference of opinion.

Psychologists Daniel Simons and Christopher Chabris had an interesting editorial in The Wall Street Journal this weekend asking, “Do Our Gadgets Really Threaten Planes?” They conducted an online survey of 492 American adults who have flown in the past year and found that “40% said they did not turn their phones off completely during takeoff and landing on their most recent flight; more than 7% left their phones on, with the Wi-Fi and cellular communications functions active. And 2% pulled a full Baldwin, actively using their phones when they weren’t supposed to.”

Despite the widespread prevalence of such law-breaking activity, planes aren’t falling from the sky, and yet the Federal Aviation Administration continues to enforce the rule prohibiting the use of digital gadgets at certain times during flight. “Why has the regulation remained in force for so long despite the lack of solid evidence to support it?” Simons and Chabris ask. They note:

>Human minds are notoriously overzealous “cause detectors.” When two events occur close in time, and one plausibly might have caused the other, we tend to assume it did. There is no reason to doubt the anecdotes told by airline personnel about glitches that have occurred on flights when they also have discovered someone illicitly using a device. But when thinking about these anecdotes, we don’t consider that glitches also occur in the absence of illicit gadget use. More important, we don’t consider how often gadgets have been in use when flights have been completed without a hitch. Our survey strongly suggests that there are multiple gadget violators on almost every flight.

That’s all certainly true, but what actually motivated this ban — and has ensured its continuation despite a lack of evidence that it is needed to diminish technological risk — is the precautionary principle. As the authors correctly note: Continue reading →

The privacy debate has been increasingly shaped by an apparent consensus that de-identifying sets of personally identifiable information doesn’t work. In particular, this has led the FTC to abandon the PII/non-PII distinction on the assumption that re-identification is too easy. But a new paper shatters this supposed consensus by rebutting the methodology of Latanya Sweeney’s seminal 1997 study of re-identification risks, which, in turn, has shaped HIPAA’s rules for de-identification of health data and the larger privacy debate ever since.

This new critical paper, “The ‘Re-Identification’ of Governor William Weld’s Medical Information: A Critical Re-Examination of Health Data Identification Risks and Privacy Protections, Then and Now” was published by Daniel Barth-Jones, an epidemiologist and statistician at Columbia University. After carefully re-examining the methodology of Sweeney’s 1997 study, he concludes that re-identification attempts will face “far-reaching systemic challenges” that are inherent in the statistical methods used to re-identify. In short, re-identification turns out to be harder than it seemed—so our identity can more easily be obscured in large data sets. This more nuanced story must be understood by privacy law scholars and public policy-makers if they want to realistically assess current privacy risks posed by de-identified data—not just for health data, but for all data.

The importance of Barth-Jones’s paper is underscored by the example of Vioxx, which stayed on the market years longer than it should have because of HIPAA’s privacy rules, resulting in between 88,000 and 139,000 unnecessary heart attacks and 27,000 to 55,000 avoidable deaths—as University of Arizona Law Professor Jane Yakowitz Bambauer explained in a recent Huffington Post piece.

Ultimately, overstating the risk of re-identification causes policymakers to strike the wrong balance in the trade-off of privacy with other competing values.  As Barth-Jones and Yakowitz have suggested, policymakers should instead focus on setting standards for proper de-identification of data that are grounded in a rigorous statistical analysis of re-identification risks.  A safe harbor for proper de-identification, combined with legal limitations on re-identification, could protect consumers against real privacy harms while still allowing the free flow of data that drives research and innovation throughout the economy.

Unfortunately, the Barth-Jones paper has not received the attention it deserves. So I encourage you to consider writing about this, or just take a moment to share it with your friends on Twitter or Facebook.

In my last post, I discussed an outstanding new paper from Ronald Cass on “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk.” As I noted, it’s one of the best things I’ve ever read about the relationship between antitrust regulation and the modern information economy. That got me thinking about what other papers on this topic I might recommend to others. So, for what it’s worth, here are the 12 papers that have most influenced my own thinking on the issue. (If you have other suggestions for what belongs on the list, let me know. No reason to keep it limited to just 12.)

  1. J. Gregory Sidak & David J. Teece, “Dynamic Competition in Antitrust Law,” 5 Journal of Competition Law & Economics (2009).
  2. Geoffrey A. Manne & Joshua D. Wright, “Innovation and the Limits of Antitrust,” 6 Journal of Competition Law & Economics (2010): 153.
  3. Joshua D. Wright, “Antitrust, Multi-Dimensional Competition, and Innovation: Do We Have an Antitrust-Relevant Theory of Competition Now?” (August 2009).
  4. Daniel F. Spulber, “Unlocking Technology: Antitrust and Innovation,” 4(4) Journal of Competition Law & Economics (2008): 915.
  5. Ronald Cass, “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk,” 9(2) Journal of Law, Economics and Policy (forthcoming Spring 2012).
  6. Richard Posner, “Antitrust in the New Economy,” 68 Antitrust Law Journal (2001).
  7. Stan J. Liebowitz & Stephen E. Margolis, “Path Dependence, Lock-in, and History,” 11(1) Journal of Law, Economics and Organization (April 1995): 205-26.
  8. Robert Crandall & Charles Jackson, “Antitrust in High-Tech Industries,” Technology Policy Institute (December 2010).
  9. Bruce Owen, “Antitrust and Vertical Integration in ‘New Economy’ Industries,” Technology Policy Institute (November 2010).
  10. Douglas H. Ginsburg & Joshua D. Wright, “Dynamic Analysis and the Limits of Antitrust Institutions,” 78(1) Antitrust Law Journal (2012): 1-21.
  11. Thomas Hazlett, David Teece & Leonard Waverman, “Walled Garden Rivalry: The Creation of Mobile Network Ecosystems,” George Mason University Law and Economics Research Paper No. 11-50 (November 21, 2011).
  12. David S. Evans, “The Antitrust Economics of Two-Sided Markets.”

Ronald Cass, Dean Emeritus of Boston University School of Law, has penned the best paper on antitrust regulation that you will read this year, especially if you’re interested in the relationship between antitrust and the information technology sector. His paper is entitled “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk,” and it makes two straightforward points:

  1. Antitrust enforcement has characteristics and risks similar to other forms of regulation.
  2. Antitrust authorities need to exercise special care in making enforcement decisions respecting conduct of individual dominant firms in high-technology industries.

Here are some highlights from the paper that build on those two points. Continue reading →

The New WCITLeaks


Today, Jerry and I are pleased to announce a major update to WCITLeaks.org, our project to bring transparency to the ITU’s World Conference on International Telecommunications (WCIT, pronounced wicket).

If you haven’t been following along, WCIT is an upcoming treaty conference to update the International Telecommunication Regulations (ITRs), which currently govern some parts of the international telephone system, as well as other antiquated communication methods, like telegraphs. There has been a push from some ITU member states to bring some aspects of Internet policy into the ITRs for the first time.

We started WCITLeaks.org to provide a public hosting platform for people with access to secret ITU documents. We think that if ITU member states want to discuss the future of the Internet, they need to do so on an open and transparent basis, not behind closed doors.

Today, we’re taking our critique one step further. Input into the WCIT process has been dominated by member states and private industry. We believe it is important that civil society have its say as well. That is why we are launching a new section of the site devoted to policy analysis and advocacy resources. We want the public to have the very best information from a broad spectrum of civil society, not just whatever information most serves the interests of the ITU, member states, and trade associations.

Continue reading →

Adam Thierer, senior research fellow at the Mercatus Center at George Mason University, discusses recent calls for nationalizing Facebook or at least regulating it as a public utility. Thierer argues that Facebook is not a public good in any formal economic sense, and that nationalizing the social network would be a big step in the wrong direction. He argues that nationalization is neither the only nor the most effective means of addressing the privacy concerns that surround Facebook and other social networks. Nor is Facebook a monopoly, he says, arguing that customers have many other choices. Thierer also points out that regulation is not without its own problems, including the potential that a regulator will be captured by the regulated network, thus making monopoly a self-fulfilling prophecy.

Listen to the Podcast

Download MP3
