If the FCC stops moving forward on Internet transformation, the universal service and intercarrier compensation reform order will become a death warrant for telephone companies.
CLIP hosted an event earlier this month to discuss Internet transformation. What is Internet transformation? In a recent op-ed, FCC Commissioner Ajit Pai noted that it “is really two different things—a technology revolution and a regulatory transition.”
The technology revolution began with the commercialization of the Internet, which enables the delivery of any communications service over any network capable of handling Internet Protocol (IP). According to the National Broadband Plan, the “Internet is transforming the landscape of America more rapidly and more pervasively than earlier infrastructure networks.” In little more than a decade, the Internet destroyed the monopoly structure of the old communications industry from within and replaced it with intermodal competition. Continue reading →
Perry Keller, Senior Lecturer at the Dickson Poon School of Law at King’s College London, and author of the recently released paper “Sovereignty and Liberty in the Internet Era,” discusses how the internet affects the relationship between the state and the media. According to Keller, media has played a formative role in the development of the modern state and, as it evolves, the way in which the state governs must change as well. However, that does not mean that there is a one-size-fits-all solution. In fact, as Keller demonstrates using real-world examples in the U.S., U.K., E.U., and China, the ways in which new media is governed can differ radically based upon the local legal and cultural environment.
Looking for a concise overview of how Internet architecture has evolved and a principled discussion of the public policies that should govern the Net going forward? Then look no further than Christopher Yoo‘s new book, The Dynamic Internet: How Technology, Users, and Businesses are Transforming the Network. It’s a quick read (just 140 pages) and is worth picking up. Yoo is a Professor of Law, Communication, and Computer & Information Science at the University of Pennsylvania and also serves as the Director of the Center for Technology, Innovation & Competition there. For those who monitor ongoing developments in cyberlaw and digital economics, Yoo is a well-known and prolific intellectual who has established himself as one of the giants of this rapidly growing policy arena.
Yoo makes two straightforward arguments in his new book. First, the Internet is changing. In Part 1 of the book, Yoo offers a layman-friendly overview of the changing dynamics of Internet architecture and engineering. He documents the evolving nature of Internet standards, traffic management and congestion policies, spam and security control efforts, and peering and pricing policies. He also discusses the rise of peer-to-peer applications, the growth of mobile broadband, the emergence of the app store economy, and what the explosion of online video consumption means for ongoing bandwidth management efforts. Those are the supply-side issues. Yoo also outlines the implications of changes on the demand-side of the equation, such as changing user demographics and rapidly evolving demands from consumers. He notes that these new demand-side realities of Internet usage are resulting in changes to network management and engineering, further reinforcing changes already underway on the supply-side.
Yoo’s second point in the book flows logically from the first: as the Internet continues to evolve in such a highly dynamic fashion, public policy must as well. Yoo is particularly worried about calls to lock in standards, protocols, and policies from what he regards as a bygone era of Internet engineering, architecture, and policy. “The dramatic shift in Internet usage suggests that its founding architectural principles from the mid-1990s may no longer be appropriate today,” he argues. (p. 4) “[T]he optimal network architecture is unlikely to be static. Instead, it is likely to be dynamic over time, changing with the shifts in end-user demands,” he says. (p. 7) Thus, “the static, one-size-fits-all approach that dominates the current debate misses the mark.” (p. 7) Continue reading →
I’ve been hearing more rumblings about “API neutrality” lately. This idea, which originated with Jonathan Zittrain’s book, The Future of the Internet–And How to Stop It, proposes to apply Net neutrality to the code/application layer of the Internet. A blog called “The API Rating Agency,” which appears to be written by Mehdi Medjaoui, posted an essay last week endorsing Zittrain’s proposal and adding some meat to the bones of it. (My thanks to CNet’s Declan McCullagh for bringing it to my attention).
Medjaoui is particularly worried about some of Twitter’s recent moves to crack down on 3rd party API uses. Twitter is trying to figure out how to monetize its platform and, in a digital environment where advertising seems to be the only business model that works, the company has decided to establish more restrictive guidelines for API use. In essence, Twitter believes it can no longer be a perfectly open platform if it hopes to find a way to make money. The company apparently believes that some restrictions will need to be placed on 3rd party uses of its API if the firm hopes to be able to attract and monetize enough eyeballs.
While no one is sure whether that strategy will work, Medjaoui doesn’t even want the experiment to go forward. Building on Zittrain, he proposes the following approach to API neutrality:
- Absolute non-discrimination toward third parties: all content, data, and views are distributed equally across the third-party ecosystem. Even a competitor could use the API under the same conditions as everyone else, with unrestricted re-use of the data.
- Limited discrimination without tiering: a third party that doesn’t pay specific fees for quality of service cannot get better treatment (rate limits, quotas, SLAs) than anyone else in the API ecosystem. A third party that pays for a higher level of service gets that higher level, but on the same terms as every other customer paying the same fee.
- First come, first served: API calls are handled in the order they arrive. Calls from paying third-party applications are not queued ahead of others, so long as the free third parties stay within their rate limits.
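Taken together, these rules amount to a simple admission-control policy for an API gateway. Here is a minimal sketch in Python of how they might be enforced; the tier names, limits, and function names are my own illustrative assumptions, not anything Medjaoui or Twitter has actually specified:

```python
import time
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Tier:
    """A quality-of-service tier. Every client on the same tier gets
    identical limits: Medjaoui's 'limited discrimination without tiering.'"""
    name: str
    calls_per_minute: int


# Illustrative tiers; the names and limits are assumptions.
FREE = Tier("free", calls_per_minute=60)
PAID = Tier("paid", calls_per_minute=600)


@dataclass
class Client:
    """Any third party, competitor or not, is just a Client: the gateway
    keeps no per-identity carve-outs (the non-discrimination rule)."""
    client_id: str
    tier: Tier
    window_start: float = field(default_factory=time.monotonic)
    calls_in_window: int = 0


def allow_call(client: Client) -> bool:
    """First come, first served: every call within the caller's rate limit
    is served immediately in arrival order; calls over quota are rejected,
    never queued behind a paying client's traffic."""
    now = time.monotonic()
    if now - client.window_start >= 60:
        # Start a fresh one-minute accounting window.
        client.window_start = now
        client.calls_in_window = 0
    if client.calls_in_window < client.tier.calls_per_minute:
        client.calls_in_window += 1
        return True
    return False
```

What the sketch leaves out is the point: there is no per-client blocklist and no special case for competitors, which is precisely the kind of discretion Twitter’s new guidelines reserve and API neutrality would forbid.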
Before I critique this, let’s go back and recall why Zittrain suggested we might need API neutrality for certain online services or digital platforms. Continue reading →
In my last post, I discussed an outstanding new paper from Ronald Cass on “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk.” As I noted, it’s one of the best things I’ve ever read about the relationship between antitrust regulation and the modern information economy. That got me thinking about what other papers on this topic I might recommend to others. So, for what it’s worth, here are the 12 papers that have most influenced my own thinking on the issue. (If you have other suggestions for what belongs on the list, let me know. No reason to keep it limited to just 12.)
- J. Gregory Sidak & David J. Teece, “Dynamic Competition in Antitrust Law,” 5 Journal of Competition Law & Economics (2009).
- Geoffrey A. Manne & Joshua D. Wright, “Innovation and the Limits of Antitrust,” 6 Journal of Competition Law & Economics (2010): 153.
- Joshua D. Wright, “Antitrust, Multi-Dimensional Competition, and Innovation: Do We Have an Antitrust-Relevant Theory of Competition Now?” (August 2009).
- Daniel F. Spulber, “Unlocking Technology: Antitrust and Innovation,” 4(4) Journal of Competition Law & Economics (2008): 915.
- Ronald Cass, “Antitrust for High-Tech and Low: Regulation, Innovation, and Risk,” 9(2) Journal of Law, Economics and Policy (forthcoming Spring 2012).
- Richard Posner, “Antitrust in the New Economy,” 68 Antitrust Law Journal (2001).
- Stan J. Liebowitz & Stephen E. Margolis, “Path Dependence, Lock-In, and History,” 11(1) Journal of Law, Economics and Organization (April 1995): 205-26.
- Robert Crandall & Charles Jackson, “Antitrust in High-Tech Industries,” Technology Policy Institute (December 2010).
- Bruce Owen, “Antitrust and Vertical Integration in ‘New Economy’ Industries,” Technology Policy Institute (November 2010).
- Douglas H. Ginsburg & Joshua D. Wright, “Dynamic Analysis and the Limits of Antitrust Institutions,” 78(1) Antitrust Law Journal (2012): 1-21.
- Thomas Hazlett, David Teece & Leonard Waverman, “Walled Garden Rivalry: The Creation of Mobile Network Ecosystems,” George Mason University Law and Economics Research Paper No. 11-50 (November 21, 2011).
- David S. Evans, “The Antitrust Economics of Two-Sided Markets.”
I have always found it strange that the ACLU speaks with two voices when it comes to user empowerment as a response to government regulation of the Internet. That is, when responding to government efforts to regulate the Internet for online safety or speech purposes, the ACLU stresses personal responsibility and user empowerment as the first-order response. But as soon as the conversation switches to online advertising and data collection, the ACLU suggests that people are basically sheep who can’t possibly look out for themselves and, therefore, increased Internet regulation is essential. They’re not the only ones adopting this paradoxical position. In previous essays I’ve highlighted how both EFF and CDT do the same thing. But let me focus here on ACLU.
Writing today on the ACLU “Free Future” blog, ACLU senior policy analyst Jay Stanley cites a new paper that he says proves “the absurdity of the position that individuals who desire privacy must attempt to win a technological arms race with the multi-billion dollar internet-advertising industry.” The new study Stanley cites says that “advertisers are making it impossible to avoid online tracking” and that it isn’t paternalistic for government to intervene and regulate if the goal is to enhance user privacy choices. Stanley wholeheartedly agrees. In this and other posts, he and other ACLU analysts have endorsed greater government action to address this perceived threat on the grounds that, in essence, user empowerment cannot work when it comes to online privacy.
Again, this represents a very different position from the one that ACLU has staked out and brilliantly defended over the past 15 years when it comes to user empowerment as the proper and practical response to government regulation of objectionable online speech and pornography. For those not familiar, beginning in the mid-1990s, lawmakers started pursuing a number of new forms of Internet regulation — direct censorship and mandatory age verification were the primary methods of control — aimed at curbing objectionable online speech. In case after case, the ACLU rose up to rightly defend our online liberties against such government encroachment. (I was proud to have worked closely with many former ACLU officials in these battles.) Most notably, the ACLU pushed back against the Communications Decency Act of 1996 (CDA) and the Child Online Protection Act of 1998 (COPA) and they won landmark decisions for us in the process. Continue reading →
Yesterday POLITICO Pro said both political parties are on the verge of declaring support for some version of Internet freedom in their 2012 platforms. The Democratic platform contained a lengthy statement in 2008, but according to Politico, its 2012 platform will consist of a simple sentence about protecting the open Internet. Politico also noted that, though Republicans hardly mentioned the Internet in 2008, they are expected to consider several Internet proposals during their platform meeting early next week. Will the new Republican platform address Internet freedom? If so, what is the platform likely to say? Continue reading →
Google’s first lesson for building affordable, one Gbps fiber networks with private capital is crystal clear: If government wants private companies to build ultra high-speed networks, it should start by waiving regulations, fees, and bureaucracy.
Executive Summary
For three years now the Obama Administration and the Federal Communications Commission (FCC) have been pushing for national broadband connectivity as a way to strengthen our economy, spur innovation, and create new jobs across the country. They know that America requires more private investment to achieve their vision. But, despite their good intentions, their policies haven’t encouraged substantial private investment in communications infrastructure. That’s why the launch of Google Fiber is so critical to policymakers who are seeking to promote investment in next generation networks.
The Google Fiber deployment offers policymakers a rare opportunity to examine policies that successfully spurred new investment in America’s broadband infrastructure. Google’s intent was to “learn how to bring faster and better broadband access to more people.” Over the two years it planned, developed, and built its ultra high-speed fiber network, Google learned a number of valuable lessons for broadband deployment – lessons that policymakers can apply across America to meet our national broadband goals.
To my surprise, however, the policy response to the Google Fiber launch has been tepid. After reviewing Google’s deployment plans, I expected to hear the usual chorus of Rage Against the ISP from Public Knowledge, Free Press, and others from the left-of-center, so-called “public interest” community (PIC) who seek regulation of the Internet as a public utility. Instead, they responded to the launch with deafening silence.
Maybe they were stunned into silence. Google’s deployment is a real-world rejection of the public interest community’s regulatory agenda more powerful than any hypothetical. Google is building fiber in Kansas City because its officials were willing to waive regulatory barriers to entry that have discouraged broadband deployments in other cities. Google’s first lesson for building affordable, one Gbps fiber networks with private capital is crystal clear: If government wants private companies to build ultra high-speed networks, it should start by waiving regulations, fees, and bureaucracy. Continue reading →
On Forbes today, I have a long article on the progress being made to build gigabit Internet testbeds in the U.S., particularly by Gig.U.
Gig.U is a consortium of research universities and their surrounding communities, created a year ago by Blair Levin, an Aspen Institute Fellow and, most recently, the principal architect of the FCC’s National Broadband Plan. Its goal is to work with private companies to build ultra high-speed broadband networks with sustainable business models.
Gig.U, Google Fiber’s Kansas City project, and the White House’s recently announced US Ignite project spring from similar origins and share similar goals. Their general belief is that building ultra high-speed broadband in selected communities will give consumers, developers, network operators, and investors a clear sense of the true value of Internet speeds 100 times as fast as those available today through high-speed cable-based networks. The bet is that once they see it, they will go build a lot more of them.
Google Fiber, for example, announced last week that it would be offering fully symmetrical 1 Gbps connections in Kansas City, perhaps as soon as next year. (By comparison, my home broadband service from Xfinity is 10 Mbps down and considerably slower going up.)
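To put that hundredfold gap in concrete terms, here is a quick back-of-the-envelope comparison; the 5 GB file size (roughly an HD movie) is my own illustrative assumption:

```python
# Transfer time for a 5 GB file: bits = bytes * 8, time = bits / line rate.
FILE_BYTES = 5 * 10**9
bits_to_move = FILE_BYTES * 8

for label, bits_per_second in [("10 Mbps cable", 10 * 10**6),
                               ("1 Gbps fiber", 1 * 10**9)]:
    seconds = bits_to_move / bits_per_second
    print(f"{label}: {seconds / 60:.1f} minutes")

# Prints:
# 10 Mbps cable: 66.7 minutes
# 1 Gbps fiber: 0.7 minutes
```

A download that eats more than an hour on a typical cable connection finishes in well under a minute on gigabit fiber, and on a fully symmetrical link the upload direction is where the gap is widest.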
US Ignite is encouraging public-private partnerships to build demonstration applications that could take advantage of next generation networks and near-universal adoption. It is also looking at the most obvious regulatory impediments at the federal level that make fiber deployments unnecessarily complicated, painfully slow, and unduly expensive.
I think these projects are encouraging signs of native entrepreneurship focused on solving a worrisome problem: the U.S. is nearing a dangerous stalemate in its communications infrastructure. We have the technology and scale necessary to replace much of our legacy wireline phone networks with native IP broadband. Right now, ultra high-speed broadband is technically possible by running fiber to the home. Indeed, Verizon’s FiOS network currently delivers 300 Mbps broadband and is available to some 15 million homes.
Continue reading →
On CNET today, I’ve posted a long critique of the recent report by the President’s Council of Advisors on Science and Technology (PCAST) urging the White House to reverse course on a two-year old order to free up more spectrum for mobile users.
In 2010, soon after the FCC’s National Broadband Plan raised alarms about the need for more spectrum to accommodate an explosion in mobile broadband use, President Obama issued a Memorandum ordering federal agencies to free up as much as 500 MHz of radio frequencies currently assigned to them.
After a great deal of dawdling, the National Telecommunications and Information Administration (NTIA), which oversees spectrum assignments within the federal government, issued a report earlier this year that seemed to offer progress: 95 MHz of very attractive spectrum could in fact be cleared within the ten years called for by the White House.
But reading between the lines, it was clear that the 20 agencies involved in the plan had no serious intention of cooperating. Their cost estimates for relocation (which were simply reported by NTIA without any indication of how they’d been arrived at or even whether NTIA had been given any details) appeared to be based on an amount that would make any move economically impossible. Continue reading →