Articles by Jerry Brito

Jerry is a senior research fellow at the Mercatus Center at George Mason University, and director of its Technology Policy Program. He also serves as adjunct professor of law at GMU. His web site is jerrybrito.com.


Today the Mercatus Center at George Mason University has released a new working paper by Boston College Law School Professor Daniel Lyons entitled, “The Impact of Data Caps and Other Forms of Usage-Based Pricing for Broadband Access.”

There has been much hand-wringing as fixed and mobile broadband providers increasingly look to move to usage-based pricing or to impose data caps. Some have even suggested an outright ban on the practice. As Adam Thierer has catalogued in these pages, the ‘net neutrality’ debate has in many ways been leading to this point: pricing flexibility vs. price controls.

In his new paper, Lyons explores the implications of this trend toward usage-based pricing. He finds that data caps and other forms of metered consumption are not inherently anti-consumer or anticompetitive.

Rather, they reflect different pricing strategies through which a broadband company may recover its costs from its customer base and fund future infrastructure investment. By aligning costs more closely with use, usage-based pricing may effectively shift more network costs onto those consumers who use the network the most. Companies can thus avoid forcing light Internet users to subsidize the data-heavy habits of online gamers and movie torrenters. Usage-based pricing may also help alleviate network congestion by encouraging customers, content providers, and network operators to use broadband more efficiently.
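Lyons’s cross-subsidy point is easy to see with a little arithmetic. Here is a toy sketch in Python, with every number invented for illustration (it is not Lyons’s model): it compares what three stylized users would pay under a flat rate versus a metered plan.

```python
# Toy illustration, all numbers made up: under a flat rate every user pays
# the same bill, so light users cover costs imposed by heavy users; a
# metered bill instead tracks each user's actual cost.

cost_per_gb = 0.25  # hypothetical network cost per gigabyte delivered
usage_gb = {"light user": 5, "typical user": 40, "heavy user": 300}

total_cost = sum(usage_gb.values()) * cost_per_gb
flat_bill = total_cost / len(usage_gb)  # cost split evenly across users

for name, gb in usage_gb.items():
    metered_bill = gb * cost_per_gb      # bill proportional to use
    subsidy = flat_bill - metered_bill   # >0: paying for someone else's bits
    print(f"{name:>12}: flat ${flat_bill:.2f} vs metered ${metered_bill:.2f} "
          f"(cross-subsidy paid: ${subsidy:+.2f})")
```

With these invented figures, the flat rate overcharges the light user by $27.50 a month and undercharges the heavy user by $46.25; that gap is the cross-subsidy Lyons describes.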

Opponents of usage-based pricing have noted that data caps may be deployed for anticompetitive purposes. But data caps can be a problem only when a firm with market power exploits that power in a way that harms consumers. Absent a specific market failure, which critics have not yet shown, broadband providers should be free to experiment with usage-based pricing and other pricing strategies as tools in their arsenal to meet rising broadband demand. Public policies allowing providers the freedom to experiment best preserve the spirit of innovation that has characterized the Internet since its inception.

Lyons does a magnificent job of walking the reader through every aspect of the usage-based pricing issue, its benefits as a cost-recovery and congestion management tool, and its potential anticompetitive effects. “Ultimately, data caps and other pricing strategies are ways that broadband companies can distinguish themselves from one another to achieve a competitive advantage in the marketplace,” he concludes. “When firms experiment with different business models, they can tailor services to niche audiences whose interests are inadequately satisfied by a one-size-fits-all flat-rate plan. Absent anticompetitive concerns, public policy should encourage companies to experiment with different pricing models as a way to compete against one another.”

Scott Shackelford, assistant professor of business law and ethics at Indiana University, and author of the soon-to-be-published book Managing Cyber Attacks in International Law, Business, and Relations: In Search of Cyber Peace, explains how polycentric governance could be the answer to modern cybersecurity concerns.

Shackelford originally began researching collective action problems in physical commons, including Antarctica, the deep seabed, and outer space, where he discovered the efficacy of polycentric governance in addressing these problems. Noting the similarities between these communally owned resources and the Internet, Shackelford was drawn to polycentric governance as a solution to the collective action problems he identified in the online realm, particularly when it came to cybersecurity.

Shackelford contrasts the bottom-up form of governance characterized by self-organization and networking regulations at multiple levels with the increasingly state-centric approach prevailing in forums like the International Telecommunication Union (ITU). Analyzing the debate between Internet sovereignty and Internet freedom through the lens of polycentric regulation, Shackelford reconceptualizes both cybersecurity and the future of Internet governance.


Designer Dan Provost, co-founder of the indie hardware and software company Studio Neat, and co-author of It Will Be Exhilarating: Indie Capitalism and Design Entrepreneurship in the 21st Century, discusses how technological innovation helped him build his business. Provost explains how he and his co-founder Tom Gerhardt were able to rely on crowdfunding to finance their business. Avoiding loans or investors, he says, has allowed them to more freely experiment and innovate. Provost also credits 3D printing for his company’s success, saying their hardware designs–very popular tripod mounts for the iPhone and a stylus for the iPad–would not have been possible without the quick-prototyping technology.


Vinton Cerf, one of the “fathers of the internet,” discusses what he sees as one of the greatest threats to the internet—the encroachment of the United Nations’ International Telecommunication Union (ITU) into the internet realm. ITU member states will meet this December in Dubai to update international telecommunications regulations and consider proposals to regulate the net. Cerf argues that, as the face of telecommunications is changing, the ITU is attempting to justify its continued existence by expanding its mandate to include the internet. Cerf says that the business model of the internet is fundamentally different from that of traditional telecommunications, and as a result, the ITU’s regulatory model will not work. In place of top-down ITU regulation, Cerf suggests that open multi-stakeholder processes and bilateral agreements may be better solutions to the challenges of governance on the internet.


Tomorrow the Information Economy Project at George Mason University will present the latest installment of its Tullock Lecture series, featuring Dr. Bronwyn Howell of the New Zealand Institute for the Study of Competition and Regulation. Here is the notice:

Dr. Bronwyn Howell – Tuesday, Sept. 25, 2012
New Zealand Institute for the Study of Competition and Regulation
4:00 to 5:30 pm @ Founder’s Hall Room 111, GMU School of Law, 3301 Fairfax Drive, Arlington, Va.
Reception to follow in the Levy Atrium, 5:30-6:30 pm. Admission is free but seating is limited.

“Regulating Broadband Networks: The Global Data for Evidence-Based Public Policy”: Policy makers in the U.S. and around the world are wrestling with “the broadband problem” – how to get advanced forms of Internet access to businesses and consumers. A variety of regulatory approaches have been used, some focusing on incentives to drive deployment of rival networks, others on network sharing mandates or government subsidies. Despite a wealth of diverse experience, there seems to be a great deal of confusion about what the data actually suggest. Few people have studied these data more carefully, however, than New Zealand economist Bronwyn Howell, who will frame the lessons of the global broadband marketplace. Prof. Howell will be introduced by Dr. Scott Wallsten, Senior Fellow at the Technology Policy Institute, who served as Economics Director for the FCC’s National Broadband Plan. RSVP online here or by email to iep.gmu@gmail.com.

Ryan Radia recently posted an impassioned and eminently reasonable defense of copyright with which I generally agree, especially since he acknowledges that “our Copyright Act abounds with excesses and deficiencies[.]” However, Ryan does this in the context of defending broadcaster rights against internet retransmitters, such as ivi and Aereo, and I have a bone to pick with that. He writes,

>[Copyright] is why broadcasters may give their content away for free to anybody near a metropolitan area who has an antenna and converter box, while simultaneously preventing third parties like ivi from distributing the same exact content (whether free of charge or for a fee). At first, this may seem absurd, but consider how many websites freely distribute their content on the terms they see fit. That’s why I can read all the Techdirt articles I desire, but only on Techdirt’s website. If copyright protection excluded content distributed freely to the general public, creators of popular ad-supported content would soon find others reproducing their content with fewer ads.

I think what Ryan is missing is that copyright is not why broadcasters give away their content for free over the air. The real reason is that they are required to do so as a condition of their broadcast license. In exchange for free access to one of the main inputs of their business–spectrum–broadcasters agree to make their signal available freely to the public. Also, the fact that TV stations broadcast to metro areas (and not regionally or nationally) is not the product of technical limitations or business calculus; it is the result of the FCC’s decision to offer only metro-sized licenses in the name of “localism.” That’s not a system I like, but it’s the system we have.

So, if what the public gets for giving broadcasters free spectrum is the right to put up an antenna and grab the signals without charge, why does it matter how they do it? To me, a service like Aereo is just an antenna with a very long cable to one’s home, just as the Supreme Court found with respect to CATV systems in Fortnightly. What broadcasters are looking to do is double-dip. They want free spectrum, but then they also want to use copyright to limit how the public can access their over-the-air signals. To address Ryan’s analogy above, Techdirt is not like a broadcaster because it isn’t getting anything from the government in exchange for a “public interest” obligation.

Ideally, of course, spectrum would be privatized. In that world I think we’d see little if any ad-supported broadcast TV because there are much better uses for the spectrum. If there were any broadcast TV, it would be national or regional, as there is hardly any market for local content. And the signal would likely be encrypted and pay-per-view, not free over-the-air. In such a world the copyright system Ryan favors makes sense, but that’s not the world we live in. As long as the broadcasters are getting free goodies like spectrum and must-carry, their copyright claims ring hollow.

Ryan Radia, associate director of technology studies at the Competitive Enterprise Institute, discusses the amicus brief he helped author in Verizon v. Federal Communications Commission, now before the D.C. Circuit Court of Appeals. Radia analyzes the case, which will determine the fate of the FCC’s net neutrality rule. While Verizon is arguing that the FCC does not have the authority to issue such rules, Radia says that the constitutional implications of the net neutrality rule are more important. He explains that the amicus brief outlines both First and Fifth Amendment arguments against the rule, stating that net neutrality impinges on the speech of Internet service providers and constitutes an illegal taking of their private property.


Christopher Steiner, author of Automate This: How Algorithms Came to Rule the World, discusses his new book. Steiner originally set about studying the prevalence of algorithms in Wall Street stock trading but soon found they were everywhere. Stock traders were the first to use algorithms as a substitute for human judgment, making trades automatically and allowing for much faster trading. But now algorithms are used to diagnose illnesses, interpret legal documents, analyze foreign policy, and write newspaper articles. Algorithms have even been used to analyze how people form sentences to determine a person’s personality and mental state, so that customer service agents can better handle upset customers. Steiner discusses the benefits–and risks–of algorithmic automation and how it will change the world.


In a [recent post](http://www.forbes.com/sites/timothylee/2012/09/08/the-weird-economics-of-utility-networks/), Tim Lee does a good job of explaining why facilities-based competition in broadband is difficult. He writes,

>As Verizon is discovering with its FiOS project, it’s much harder to turn a profit installing the second local loop; both because fewer than 50 percent of customers are likely to take the service, and because competition pushes down margins. And it’s almost impossible to turn a profit providing a third local loop, because fewer than a third of customers are likely to sign up, and even more competition means even thinner margins.

Tim thus concludes that

>the kind of “facilities-based” competition we’re seeing in Kansas City, in which companies build redundant networks that will sit idle most of the time, is extremely wasteful. In a market where every household has n broadband options (each with its own fiber network), only 1/n local loops will be in use at any given time. The larger n is, the more resources are wasted on redundant infrastructure.

I don’t understand that conclusion. You would imagine that redundant infrastructure would be built only if it is profitable to its builder. Tim is right that we probably should not expect more than a few competitors, but I don’t see how more than one pipe is necessarily wasteful. If laying down a second set of pipes is profitable, shouldn’t we welcome the competition? The question is whether that second pipe is profitable without government subsidy.
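To put rough numbers on both intuitions, consider a toy model (every figure, and the crude assumption that margins shrink in proportion to the number of rivals, is invented for illustration rather than taken from Tim’s post). It reproduces Tim’s point that (n-1)/n of the loops sit idle, while also showing that entry stops on its own once another loop can’t pay for itself.

```python
# Toy model, all numbers hypothetical: n rival networks each wire every one
# of `homes` households, subscribers split evenly, and competition thins
# margins (modeled crudely here as base_margin / n).

homes = 100_000
cost_per_loop = 700    # assumed construction cost per home passed
base_margin = 50       # assumed monthly gross margin with a single network
horizon_months = 120   # payback window an entrant might require

build_cost = homes * cost_per_loop  # each entrant passes every home

for n in range(1, 5):
    subscribers = homes / n              # even split of the market
    idle_share = (n - 1) / n             # Tim's fraction of unused loops
    margin = base_margin / n             # thinner margins as rivals enter
    lifetime = subscribers * margin * horizon_months
    verdict = "profitable" if lifetime > build_cost else "unprofitable"
    print(f"n={n}: {idle_share:.0%} of loops idle; entrant is {verdict} "
          f"(${lifetime / 1e6:.1f}M margin vs ${build_cost / 1e6:.1f}M cost)")
```

Under these made-up parameters a second pipe pays for itself but a third does not, which squares with both Tim’s observation and my point: the redundancy only gets built where someone expects it to be profitable.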

That brings me to a larger point: I think what Tim is missing is what makes Google Fiber so unique. Tim is assuming that all competitors in broadband will make their profits from the subscription fees they collect from subscribers. As we all know, that’s not [how Google tends to operate](http://elidourado.com/blog/theory-of-google/). Google’s primary business model is advertising, and that’s likely [where they expect their return to come from](http://community.nasdaq.com/News/2012-08/google-seeking-more-ad-impressions-with-fast-fiber.aspx?storyid=162788). One of Google Fiber’s price points is [free](http://www.techdirt.com/blog/innovation/articles/20120726/11200919842/google-fiber-is-official-free-broadband-up-to-5-mbps-pay-symmetrical-1-gbps.shtml), so we might expect greater adoption of the service. That’s disruptive innovation that could sustainably increase competition and bring down prices for consumers–without a government subsidy.

Kansas City sadly gave Google all sorts of subsidies, like free power and rackspace for its servers, as [Tim has pointed out](http://arstechnica.com/tech-policy/2012/09/how-kansas-city-taxpayers-support-google-fiber/), but it also cut serious red tape. For example, there is no build-out requirement for Google Fiber, a fact [now bemoaned](http://www.wired.com/business/2012/09/google-fiber-digital-divide/) by digital divide activists. Such requirements, I would argue, are the [true cause](http://news.cnet.com/How-to-squelch-growth-of-the-high-speed-Net/2010-1034_3-6106690.html) of the unused and wasteful overbuilding that Tim laments.

So what matters more? The in-kind subsidies or the freedom to build only where it’s profitable? I think that’s the empirical question we’re really arguing about. It’s not a foregone conclusion of broadband economics that [there can be only one](http://www.youtube.com/watch?v=4AoOa-Fz2kw). And do we want to limit competition in part of a municipality in order to achieve equity for the whole? That’s another question over which “original recipe” and bleeding-heart libertarians may have a difference of opinion.

Adam Thierer, senior research fellow at the Mercatus Center at George Mason University, discusses recent calls for nationalizing Facebook or at least regulating it as a public utility. Thierer argues that Facebook is not a public good in any formal economic sense, and that nationalizing the social network would be a big step in the wrong direction. He argues that nationalization is neither the only nor the most effective means of solving the privacy concerns that surround Facebook and other social networks. Nor is Facebook a monopoly, he says, arguing that customers have many other choices. Thierer also points out that regulation is not without its problems, including the potential that a regulator will be captured by the regulated network, thus making monopoly a self-fulfilling prophecy.
