Broadband & Neutrality Regulation

Before commenting on Lawrence Lessig’s latest call to abolish the Federal Communications Commission (he issued a similar call earlier this year, which I commented on here), let’s recall what Tim Lee posted yesterday about “Real Regulators”:

Too many advocates of regulation seem to have never considered the possibility that the FCC bureaucrats in charge of making these decisions at any point in time might be lazy, incompetent, technically confused, or biased in favor of industry incumbents. That’s often what “real regulators” are like, and it’s important that when policy makers are crafting a regulatory scheme, they assume that some of the people administering the law will have these kinds of flaws, rather than imagining that the rules they write will be applied by infallible philosopher-kings.

Ironically, Prof. Lessig — who typically defends many forms of high-tech regulation, like Net neutrality and online content labeling — is essentially agreeing with Tim’s critique of bureaucracy. Yet Lessig seems to ignore the underlying logic of that critique, imagining instead that we need only reinvent bureaucracy in order to save it. But I’m getting ahead of myself. First, let’s hear what Lessig proposes.

In a Newsweek column this week entitled “Reboot the FCC,” Lessig argues that the FCC is beyond saving because, instead of protecting innovation, the agency has succumbed to an “almost irresistible urge to protect the most powerful instead.” Consequently, he continues:

The solution here is not tinkering. You can’t fix DNA. You have to bury it. President Obama should get Congress to shut down the FCC and similar vestigial regulators, which put stability and special interests above the public good. In their place, Congress should create something we could call the Innovation Environment Protection Agency (iEPA), charged with a simple founding mission: “minimal intervention to maximize innovation.” The iEPA’s core purpose would be to protect innovation from its two historical enemies–excessive government favors, and excessive private monopoly power.

As was the case with his earlier call to “blow up the FCC,” I am tickled to hear Lessig call for shutting down an agency that many of us have been fighting against for the last few decades. (Here’s a 1995 blueprint for abolishing the FCC that I contributed to, and here’s PFF’s recent “DACA” project to comprehensively reform and downsize the agency.)

But is Lessig really calling for the same sort of sweeping regulatory reform and downsizing that others have been calling for? And has he identified the real source of the problem he hopes to correct? I don’t think so. There are three basic problems with the argument Lessig puts forward in his essay. I will address each in turn.

Continue reading →

The 12 Days of Christmas

on December 24, 2008 · 12 comments

EFF-style.

Real Regulators

by Tim Lee on December 24, 2008 · 32 comments

Don’t miss Jim Harper’s excellent post on the strange way people have responded to the failures of regulation on Wall Street. In a Meet the Press exchange, we learn that people reported Bernie Madoff’s suspicious books to the SEC, which chose not to do anything about it. And it was agreed around the table that the Madoff affair debunks “the idea that wealthy individuals and ‘sophisticated’ institutional investors don’t need the protection of government regulators.” “There’s no question we need a real regulator,” says CNBC’s Erin Burnett.

The problem is that we had a “real regulator.” Ponzi schemes and dishonest bookkeeping are already illegal. Had the SEC been so motivated, it had all the authority it needed to investigate Madoff’s books, discover the problems, and shut his firm down. In a rational world, this would be taken as a cautionary tale about the dangers of assuming that regulators will be vigilant, competent, or interested in defending the interests of the general public rather than those with political clout. Instead, we live in a bizarro world in which people believe that the SEC’s failure to do its job is an illustration of the need to give agencies like the SEC more power.

We of course see the same sort of confusion in debates over regulation of the technology sector. For example, the leading network neutrality proposals invariably wind up placing a significant amount of authority in the hands of the FCC to decide the exact definition of network neutrality and to resolve complex questions about what constitutes a network neutrality violation. Too many advocates of regulation seem to have never considered the possibility that the FCC bureaucrats in charge of making these decisions at any point in time might be lazy, incompetent, technically confused, or biased in favor of industry incumbents. That’s often what “real regulators” are like, and it’s important that when policy makers are crafting a regulatory scheme, they assume that some of the people administering the law will have these kinds of flaws, rather than imagining that the rules they write will be applied by infallible philosopher-kings.

In several of our previous podcasts (see episodes 34, 35, and 37), we’ve discussed what we’ve called the “Comcast Kerfuffle,” the controversy surrounding the steps Comcast took to manage BitTorrent traffic on its networks. Critics called it a violation of Net neutrality principles, while Comcast and others called it sensible network management.

This week we saw a new kerfuffle of sorts develop over the revelation, in a Monday front-page Wall Street Journal story, that Google had approached major cable and phone companies and supposedly proposed to create a fast lane for its own content. What exactly is it that Google is proposing, and does it mean – as the Wall Street Journal and some others have suggested – that Google is somehow going back on its support for Net neutrality principles and regulation? More importantly, what does it all mean for the future of the Internet, network management, and consumers? That’s what we discussed on the TLF’s latest “Tech Policy Weekly” podcast.

Today’s 30-minute discussion featured two of our regular contributors at the TLF, both of whom wrote about this issue multiple times this week. Cord Blomquist of the Competitive Enterprise Institute wrote about the issue here and here, and Bret Swanson of the Progress & Freedom Foundation wrote about it here and here. To help us wade through some of the more technical networking issues in play, we were also joined on the podcast by Richard Bennett, a computer scientist and network engineering guru who blogs at Broadband Politics as well as Circle ID and pens occasional columns for The Register. Also appearing on the show was Adam Marcus, Research Fellow & Senior Technologist at PFF, who wrote a “nuts and bolts” essay full of excellent technical background on edge caching and net neutrality.

You can download the MP3 file here, or use the online player below to start listening to the show right now.

[display_podcast]

The introduction below was originally written by Adam Thierer, but now that I (Adam Marcus) am a full-fledged TLF member, I have taken authorship.


My PFF colleague Bret Swanson had a nice post here yesterday talking about the evolution of the debate over edge caching and network management (“Bandwidth, Storewidth, and Net Neutrality“), but I also wanted to draw your attention to a related essay by another PFF colleague of mine. Adam Marcus, who serves as a Research Fellow and Senior Technologist at PFF, has started a wonderful series of “Nuts & Bolts” essays meant to “provide a solid technical foundation for the policy debates that new technologies often trigger.” His latest essay is on network neutrality and edge caching, which has been the topic of heated discussion since the Wall Street Journal’s front-page story on Monday reporting that Google had approached major cable and phone companies and supposedly proposed to create a fast lane for its own content.

Anyway, Adam Marcus gave me permission to reprint the article in its entirety down below. I hope you find this background information useful.


Nuts and Bolts: Network neutrality and edge caching

by Adam Marcus, Progress & Freedom Foundation

December 17, 2008

This is the second in a series of articles about Internet technologies. The first article was about web cookies. This article explains the network neutrality debate. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed.

To understand the network neutrality debate, you must first understand bandwidth and latency. There are lots of analogies equating the Internet to roadways because those analogies are quite instructive. For example, if one or two people need to travel across town, a fast sports car is probably the fastest method. But if 50 people need to travel across town, it may require 25 trips in a single sports car. So a bus which can transport all 50 people in a single trip may be “faster” overall. The sports car is faster, but the bus has more capacity. Bandwidth is a measure of capacity, of how much data can be transmitted in a fixed period of time. It is usually measured in megabits per second (Mbps). Latency is a measure of speed, of the time it takes a single packet of data to travel between two points. It is usually measured in milliseconds. The “speeds” that ISPs advertise have nothing to do with latency; they’re actually referring to bandwidth. ISPs don’t advertise latency because it’s different for each site you’re trying to reach.
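To make the distinction concrete, here’s a quick back-of-the-envelope sketch in Python (ours, not from Marcus’s essay, and the link numbers are invented assumptions). It shows why the high-bandwidth “bus” wins once the payload gets large:

```python
# Rough delivery time: one-way latency plus time to push the bits through.
def transfer_seconds(payload_megabits, bandwidth_mbps, latency_ms):
    return latency_ms / 1000 + payload_megabits / bandwidth_mbps

# Hypothetical links: a low-latency "sports car" and a high-capacity "bus".
dsl = {"bandwidth_mbps": 3, "latency_ms": 20}          # assumed numbers
satellite = {"bandwidth_mbps": 20, "latency_ms": 600}  # assumed numbers

for megabits in (0.1, 800):  # a tiny web request vs. a ~100 MB download
    print(f"{megabits} Mb payload:")
    print(f"  DSL:       {transfer_seconds(megabits, **dsl):8.2f} s")
    print(f"  Satellite: {transfer_seconds(megabits, **satellite):8.2f} s")
```

For the tiny request, the low-latency DSL link wins easily; for the big download, the high-bandwidth satellite link finishes minutes sooner, even though each individual packet takes far longer to arrive.

Continue reading →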

Very happy to see the discussion over The Wall Street Journal‘s Google/net neutrality story. Always good to see holes poked and the truth set free.

But let’s not allow the eruptions, backlashes, recriminations, and “debunkings” — This topic has been debunked. End of story. Over. Sit down! — obscure the still-fundamental issues. This is a terrific starting point for debate, not an end.

Content delivery networks (CDNs) and caching have always been a part of my analysis of the net neutrality debate. Here was testimony that George Gilder and I prepared for a Senate Commerce Committee hearing almost five years ago, in April 2004, where we predicted that a somewhat obscure new MCI “network layers” proposal, as it was then called, would be the next big communications policy issue. (At about the same time, my now-colleague Adam Thierer was also identifying this as an emerging issue/threat.)

Gilder and I tried to make the point that this “layers” — or network neutrality — proposal would, even if attractive in theory, be very difficult to define or implement. Networks are a dynamic realm of ever-shifting bottlenecks, where bandwidth, storage, caching, and peering, in the core, edge, and access, in the data center, on end-user devices, from the heavens and under the seas, constantly require new architectures, upgrades, and investments, thus triggering further cascades of hardware, software, and protocol changes elsewhere in this growing global web. It seemed to us at the time, ill-defined as it was, that this new policy proposal was probably a weapon for one group of Internet companies, with one type of business model, to bludgeon another set of Internet companies with a different business model. 

We wrote extensively about storage, caching, and content delivery networks in the pages of the Gilder Technology Report, first laying out the big conceptual issues in a 1999 article, “The Antediluvian Paradigm.” [Correction: “The Post-Diluvian Paradigm”] Gilder coined a word for this nexus of storage and bandwidth: Storewidth. Gilder and I even hosted a conference, also dubbed “Storewidth,” dedicated to these storage, memory, and content delivery network technologies. See, for instance, this press release for the 2001 conference with all the big players in the field, including Akamai, EMC, Network Appliance, Mirror Image, and one Eric Schmidt, chief executive officer of . . . Novell. In 2002, Google’s Larry Page spoke, as did Jay Adelson, founder of the big data-center-network-peering company Equinix, along with Yahoo! and many of the big network and content companies.

Continue reading →

Claims that Google has abandoned its stance on network neutrality have been thoroughly debunked, as Cord and Adam note below. Over at Broadband Reports, Karl Bode explains that Google is seeking edge-caching agreements, not preferential treatment. Edge-caching involves Google housing its content on servers located inside consumer ISP networks, cutting bandwidth costs by allowing users to access Google content located just a few hops away.

Even though edge-caching doesn’t violate network neutrality as defined by Google, it’s still one of the many advantages that big players have over new entrants. Edge-caching isn’t a “fast track,” as the WSJ imprecisely terms it, but rather a short track—functionally, there’s a lot of similarity between the two. As Richard Bennett has explained time and time again, being close to end users is quite advantageous even without preferential treatment, as it eliminates the need to push vast amounts of data across the congestion-prone core of the public Internet.

We’ve heard about how edge-caching enables content providers and ISPs to cut their bandwidth bills and make more efficient use of finite network resources. Both of these are true, but there’s more—edge caching makes it much less likely that users will experience long load times or buffering hiccups while watching streaming video online. That high-def YouTube clip might take a few extra seconds to buffer if it has to make its way through congested central network exchanges—not so, however, if that video is housed just a few hops away, within your ISP’s network.
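To put some toy numbers on that, here’s a minimal sketch of the “short track” effect. This is our illustration, not Google’s or any ISP’s actual architecture, and the latency figures (and the URL) are invented assumptions:

```python
# Toy model of edge caching: the first request for a clip crosses the
# congestion-prone public core; later requests are served from a cache
# sitting inside the ISP's own network, just a few hops from the user.

ORIGIN_MS = 120  # assumed round trip to the origin server across the core
EDGE_MS = 10     # assumed round trip to an in-network edge cache

edge_cache = {}  # url -> content held on the ISP-side server

def fetch(url):
    """Return (content, milliseconds), taking the short track when possible."""
    if url in edge_cache:
        return edge_cache[url], EDGE_MS    # served from inside the ISP
    content = b"...video bytes..."         # stand-in for the real transfer
    edge_cache[url] = content              # later viewers skip the long haul
    return content, ORIGIN_MS

_, first = fetch("example.com/hd-clip")  # first viewer pays the long-haul price
_, later = fetch("example.com/hd-clip")  # everyone after gets the cached copy
print(first, later)                      # 120 10
```

Note the design point Bennett keeps making: nothing here prioritizes packets or degrades anyone else’s traffic; the advantage comes purely from shortening the path the popular bits have to travel.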

Continue reading →

Over just the past 24 hours, there’s been quite a hullabaloo surrounding the Wall Street Journal’s controversial front-page story on Google’s edge caching plan and whether it violates Net neutrality. (See Cord’s post and Bret’s). Lessig calls it a “made-up drama“, David Isenberg says it’s “bogus” and “bullshit,” and Google’s Rick Whitt has said it’s much ado about nothing.

Regardless, here’s the important thing not to overlook about this episode: It is a prime example of what Tim Lee has referred to as “the fundamental problem of backlash” that ensues whenever there is even a hint of a potential violation of network neutrality (however one defines it). As Tim argued in his excellent Cato paper on Net neutrality, “No widespread manipulation would go unnoticed for very long,” and a “firestorm of controversy would… be unleashed if a major network owner embarked on a systematic campaign of censorship on its network.” (p. 23). Indeed, this (non-)story about Google’s edge-caching plans has spawned an intense “firestorm of controversy” over the past 24 hours, and it doesn’t even involve serious network meddling or censorship! I’ve been trying to keep up with all the traffic about this on TechMeme and Google News during that time, but I have given up trying to digest it all. (Take a look at the snapshots I pasted down below to get a feel for the volume we are talking about here.)

In that regard, I love this quote from the always-bloodthirsty Tim Karr of the (inappropriately-named) regulatory activist group Free Press:

If Google or any other tech company were secretly violating Net Neutrality, there would be an absolute and cataclysmic backlash from the grassroots and netroots who have made Net Neutrality a signature issue in 21st Century politics. The Internet community would come crashing down on their heads like Minutemen on Benedict Arnold.

Indeed, that’s exactly what we saw today. But it wasn’t just pro-regulatory fanatics like Free Press. The entire tech and business blogosphere and even some of the mainstream media were on top of this. That’s the “fundamental problem of backlash” at work, and with a vengeance.

[Screenshot: TechMeme Google headlines]

[Screenshot: more Google headlines]

Big news in these parts.

The celebrated openness of the Internet — network providers are not supposed to give preferential treatment to any traffic — is quietly losing powerful defenders. Google Inc. has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content, according to documents reviewed by The Wall Street Journal. Google has traditionally been one of the loudest advocates of equal network access for all content providers.

TLFers and commenters: Go.

Chairman Mao–er… Martin–has canceled (WSJ) the FCC’s December 18 meeting, when the Commission was set to vote on Martin’s proposal to rig an auction to give away a valuable piece of spectrum (“AWS-3”) to M2Z Networks. In exchange for a sweetheart deal on the spectrum, the company would have been required to use a quarter of it to provide a free (but very slow) wireless broadband service. Martin had initially proposed to require that the service be made porn-free, but eventually suggested that users over 18 would be able to opt out of network-level filtering.

Two weeks ago, when it became clear that Martin would attempt to ram this proposal through while he still could, I asked how the ascendant Left would respond:

Will the defenders of free expression triumph over those who see ensuring free broadband as a social justice issue? Or will those on the Left who usually join us in opposing censorship simply remain silent as the government extends the architecture of censoring the “public airways” onto the Net (where the underlying rationale of traditional broadcast regulation–that parents are powerless–does not apply)?

I’m glad to see that the deathblow to this unconstitutional proposal did indeed come from the political Left–specifically, from Sen. John Rockefeller (D-W.Va.) and Rep. Henry Waxman (D-Calif.), who will be responsible for overseeing the FCC in the new Congress. (The Bush administration had already opposed the proposal but, as with so many of Martin’s abuses, had failed to stop it.)

With President-elect Obama having declared, “Here in the country that invented the Internet, every child should have the chance to get online,” it seems almost certain that the Administration will press ahead with some kind of universal broadband proposal of its own. But what would such a proposal look like? If it’s another public broadband utility, would it include network-level filtering like Martin’s proposal? If so, will the Democratic opponents of government censorship stick by their principles and fight that, too?

I suspect we may find that what’s constitutional is politically impossible (unfiltered free Internet) and what’s politically possible (filtered free Internet) is unconstitutional.

Continue reading →