Having a Sense of Proportion on Network Neutrality

November 14, 2008

On Wednesday I responded to the first half of Steve Schultze’s critique of my network neutrality paper, which focused on my historical argument about the dangers of unintended consequences. Let me now turn to the second half of his post, which I regard as closer to the core of my paper’s argument.

One of the frustrating things about the network neutrality debate is that every proponent of network neutrality regulation seems to have a different story about the types of regulation he or she is concerned about. Some are worried about ISPs targeting particular applications to be degraded or blocked. Others are worried that ISPs will use the threat of blockage to force website operators to pay for access to their customers. Still others believe that ISPs will use subtle traffic shaping schemes to advantage their own content. Still others believe that ISPs will construct a “fast lane” and relegate the rest of the web to a pipe that never gets much faster than today’s Internet connections. Still others are worried about the potential for ISP censorship.

I’ve found that any time I take one of these ISP strategies seriously and put forth an argument about why it’s unlikely to be feasible or profitable, the response from supporters of regulation is often to concede that the particular scenario I’ve chosen is not realistic (terms like “straw man” sometimes come up), but that I haven’t accounted for some other scenario that’s much more likely to occur. Now, the Internet is a big, complicated place, and so it’s not possible to enumerate every conceivable way that an ISP could screw around with traffic and prove that none of them could ever be profitable. In my paper, I tried to pick the scenarios that are most commonly discussed and describe why I think they are likely to be poor business strategies for network providers, but I didn’t—and can’t—analyze the favorite scenario of every single network neutrality activist.

But here’s a pattern that I think is revealing: supporters of regulation tend to describe things in apocalyptic terms. We’re told that if regulations aren’t enacted soon, online innovation, competition, and maybe even freedom of speech are in jeopardy. It’s claimed that the stakes are too high to wait and see if actual problems develop. Yet I’ve found that when you get down to specifics, the savvier advocates of regulation concede that in fact the stakes aren’t really that high. For example, I have yet to find anyone willing to seriously defend Yochai Benkler’s claim that we should be worried about network owners censoring online speech.

Rather, the response is invariably to shift the focus to more plausible but far less significant infringements of network neutrality: isolated incidents like the Comcast/BitTorrent controversy rather than comprehensive plans to transform the Internet into AOL 2.0.

Yet advocates of regulation tend to get sloppy about these distinctions. They seem to believe that if an ISP has the power to block a single website or application, then it necessarily has the power to undertake much more ambitious discriminatory schemes. If Comcast can block BitTorrent today, it can censor liberal blogs or charge websites outrageous fees tomorrow.

So for example, take this passage from Steve’s post:

Lee claims that even if [sporadic interference with network neutrality] occurred, it would not be a real problem because it wouldn’t be severe. “To be sure, such discrimination would be a headache for these firms, but a relatively small chance of being cut off from a minority of residential customers is unlikely to rank very high on an entrepreneur’s list of worries.” His assumption that the chance of being cut off is “small” is belied by recent experience in the Comcast/BitTorrent case. The idea that one would be cut off only from a “minority of residential customers” is technically true because no one firm currently controls over 50% of residential connections, but there are some truly significant market shares that entrepreneurs would undoubtedly care about.

It seems that I didn’t make the passage he’s criticizing here (on page 24 of my paper) as clear as I could have. This passage comes after a lengthy section of the paper where I argue that various economic and technological obstacles would make it unprofitable for ISPs to engage in comprehensive filtering of Internet applications and content. Even if my argument is right, there remains the risk that some ISPs will engage in counterproductive filtering in the mistaken belief that doing so will enhance their bottom line. Comcast’s interference with BitTorrent seems to be in this category.

My point in this passage was simply that there’s a big difference between the kind of random, sporadic interference with BitTorrent we saw from Comcast last year, and the kind of comprehensive traffic filtering that network neutrality activists fear. Comprehensive filtering would create a serious disincentive for entrepreneurship. Sporadic interference with individual applications or websites just isn’t in the same category, because by definition it affects only a tiny fraction of the web. That’s not to say this isn’t something to be concerned about—I wasn’t shy about criticizing Comcast myself. It’s just to say a sense of perspective is important. It matters whether Comcast’s actions last year were an isolated incident or the first signs of a much broader trend toward widespread Internet filtering. I see no evidence of the latter, and theoretical reasons to think the former is more likely.

It’s worth remembering that alarmism about the future of the Web is almost as old as the Web itself. Steve suggests that the open Internet was made newly precarious by the Brand X decision in 2005, but advocates of regulation have been predicting impending disaster for a lot longer than that. For example, people tend to forget that Larry Lessig’s 1999 classic Code and Other Laws of Cyberspace included some fairly specific predictions about the end of the open Internet that turned out to be spectacularly wrong. This isn’t to say that there’s nothing to worry about. I think it’s great that network neutrality activists are publicizing the importance of open networks and pressuring network providers to respect their users’ freedom. But it does mean that it would be a massive overreaction to impose new regulations on a network that has thrived for more than a decade without active FCC oversight.

  • MikeRT

    Question for you, Tim. Wouldn't it be possible to solve the issue of websites being blackmailed by ISPs over bandwidth by punishing the ISPs under truth-in-advertising regulations? The abuse would harm the ISP's own customers, who have been led by the ISP's advertising to believe that the ISP would not behave like that.

