Net Neutrality and Architecture Avoidance

September 22, 2009

If I can amplify a bit on a post at the Cato blog earlier today, I want to clarify that I fully agree that some of the ISP behaviors net neutrality proponents have identified as demanding a regulatory response really are seriously problematic. My point of departure is that I’d rather see whether there are narrower grounds for addressing the objectionable behaviors than making sweeping rules about network architecture. So in the case of Comcast’s throttling of BitTorrent, which is the big one that seems to confirm the fears of the neutralists, I think it’s significant that for a long while the company was—“lying about” assumes intent, so I’ll charitably go with “misrepresenting”—their practices. And I don’t think you need any controversial premises about optimal network management to think it’s impermissible for a company to charge a fee for a service and then secretly cripple that service. So without even having to hit the more controversial “nondiscrimination” principle Julius Genachowski proposed on Monday, you can point to this as a failure of the “transparency” principle, about which I think there’s a good deal more consensus. Now, there are bigger guns out there looking for dodgy filtering practices these days, so I’d expect the next attempt at this sort of thing to get caught more quickly, but by all means, enforce transparency about business practices too. Consumers have a right to get the service they’ve bought without having to be 1337 haxx0rz to discover how they’re being shortchanged. But before we get the feds involved in writing code for ISP routers, I’d like to see whether that proves sufficient to limit genuinely objectionable deviations from neutrality.

There’s a hoary rule of jurisprudence called the canon of constitutional avoidance. It means, very crudely, that judges don’t decide broad constitutional questions—they don’t go mucking with the basic architecture of the legal system—when they have some narrower grounds on which to rule. So if, for instance, there are two reasonable interpretations of a statute, one of which avoids a potential conflict with a constitutional rule, judges are supposed to prefer that interpretation. It’s not always possible, of course: Sometimes judges have to tackle the big, broad questions. But it’s supposed to be something of a last resort. Lawyers and civil liberties advocates, of course, tend to get more animated by those broad principles, whether the First Amendment or end-to-end. But there’s often good reason to start small—to look to the specific fact patterns of problem cases and see whether there are narrower bases for resolution. It may turn out that in the kinds of cases that neutralists rightly warn could harm innovation, it’s not one big principle, but a diverse array of responses or fixes that will resolve the different issues. In a case like this one, perhaps a mix of mandated transparency, consumer demand, and user adaptation (e.g. encrypting traffic) will get you the same (or a better) result than an architectural mandate.
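The “user adaptation” point is concrete: Comcast’s throttling reportedly identified BitTorrent traffic by its plaintext protocol signature, and encrypting or obfuscating the stream defeats that kind of pattern matching. Here’s a toy sketch of the idea—the `dpi_flags_as_bittorrent` classifier and the XOR obfuscation are illustrative stand-ins, not the real message-stream encryption (MSE/PE) that BitTorrent clients actually negotiate:

```python
import os

# BitTorrent handshakes begin with a fixed plaintext header, which is
# what a naive deep-packet-inspection box can key on to throttle flows.
BT_HEADER = b"\x13BitTorrent protocol"

def dpi_flags_as_bittorrent(packet: bytes) -> bool:
    """Toy DPI classifier: flag anything matching the handshake prefix."""
    return packet.startswith(BT_HEADER)

def xor_obfuscate(payload: bytes, key: bytes) -> bytes:
    """Stand-in for real stream encryption: any cipher destroys the
    recognizable plaintext pattern the classifier depends on."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

# A plausible 68-byte handshake: header, reserved bytes, info hash, peer id.
handshake = BT_HEADER + bytes(8) + os.urandom(20) + b"-AZ2060-" + os.urandom(12)

print(dpi_flags_as_bittorrent(handshake))                       # True: throttled
print(dpi_flags_as_bittorrent(xor_obfuscate(handshake, b"k")))  # False: passes
```

The point is only that once the payload no longer matches a known plaintext pattern, naive deep-packet inspection has nothing to key on—which is exactly the sort of decentralized, permissionless countermeasure I have in mind.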

One reason to prefer narrower solutions is that the more sweeping your fix is, the broader and more unpredictable the effects will tend to be. So, in the Cato post, I floated the possibility that a neutrality mandate might skew investment incentives, and I’d like to elaborate a little on what I had in mind there. In wireline we have a legacy system where the open Internet is transiting over the very same coax cables as more traditional television signals, which now include an array of not-so-traditional services like On Demand. Now, neutrality advocates are pretty explicit that they’re totally cool with this, though there’s nothing more discriminatory and closed than cable TV, where the menu of content you can access is rigidly determined by your service provider. Not only are these signals sharing (finite) space on a wire, they’re often bundled in one package, so consumers pay a discounted price for getting their TV and Internet together. I may even have two Comcast wires from the same line coming into my TV set, allowing me to download the same show from Comcast’s On Demand or the Playstation Store. But what Comcast can’t do, consistent with principles of neutrality, is fold their video offerings into the data stream with priority for their packets, allowing me to download the same array of movies and shows at the speed I’m used to, rather than at the somewhat lower speed at which I can download Playstation content.
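What “priority for their packets” buys can be seen in a toy queueing sketch—a strict-priority scheduler with a made-up arrival model, purely illustrative rather than a claim about Comcast’s actual network. The favored flow sees essentially no queueing delay, while the best-effort flow absorbs all of the congestion:

```python
import random
from collections import deque

def simulate(strict_priority: bool, ticks: int = 20000, seed: int = 1):
    """Toy shared link: capacity is one packet per tick, and two flows
    ("video" and "data") each generate a packet with probability 0.45
    per tick, so the link runs near 90% load and queues build up."""
    rng = random.Random(seed)
    queues = {"video": deque(), "data": deque()}  # queues hold arrival times
    delays = {"video": [], "data": []}
    for t in range(ticks):
        for flow in ("video", "data"):
            if rng.random() < 0.45:
                queues[flow].append(t)
        if not (queues["video"] or queues["data"]):
            continue  # link idle this tick
        if strict_priority:
            flow = "video" if queues["video"] else "data"
        else:
            # FIFO across flows: serve whichever head packet is older
            v, d = queues["video"], queues["data"]
            flow = "video" if v and (not d or v[0] <= d[0]) else "data"
        delays[flow].append(t - queues[flow].popleft())
    return {k: sum(v) / len(v) for k, v in delays.items()}

fair = simulate(strict_priority=False)
prio = simulate(strict_priority=True)
print("FIFO:           ", fair)  # both flows see similar average delay
print("strict priority:", prio)  # video waits ~0; data absorbs all the delay
```

Both schedulers are work-conserving, so priority doesn’t reduce total queueing delay; it reallocates all of it onto the unprioritized traffic, which is why prioritized video could keep its accustomed speed at the expense of everything else on the wire.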

Obviously there are numerous reasons cable companies continue to maintain segregated networks, some of which, again, have to do with cable being legacy tech. I’m not really interested in getting tangled in the question of the real-world conditions under which it would be more efficient to combine them. I am interested in the possibility that if it were more efficient, an overly broad rule designed as a response to a narrow problem with BitTorrent throttling could nevertheless provide a strong incentive to keep them segregated—and, ironically, for the very type of reason neutrality rules are supposed to make moot: to avoid cannibalizing their video offerings.

In the wireless context, think of a technology like MediaFLO. That stands for Forward Link Only—a one-way video stream from a tower to a mobile device, with interactivity provided by a conventional 3G connection. There are various reasons, again, why it might be efficient for spectrum to be allocated to this delivery mechanism rather than having people download their video on all fours with every other packet on a generic LTE Internet connection. But it seems to me that a really bad reason to allocate spectrum this way is that you’ve got a regulatory asymmetry that lets you take advantage of cross-subsidies for content delivered this way at high speed, but not if you want to prioritize it on the all-purpose 4G network.

Just to be clear, I’m not claiming this particular thing will happen—I have no idea whether it’s remotely probable for any specific market or technology.  But I very much doubt anyone can say how significant this type of allocative bias would be—certainly not five years down the line with whatever other standards are in the offing. And that’s the problem: To weigh the effects of the broader rule, you need to start factoring in effects like these, which seems like an impossible task. But if your concern is that the owners of the physical layer are going to leverage their control of the platform to privilege their content on the network, it seems like you’ve got to be equally concerned about whether they’ll privilege networks for their content. Put another way, it seems like there’s a potential tension between a policy of neutrality within the network and a policy that’s neutral across networks. I can’t predict how serious an issue that will be in two years or ten, and if I had to bet I’d put my money on the open, neutral network beating out some wireless Minitel. All the old walled-garden online services of the 90s turned out to be no competition for the unfettered Internet… and it’s for this very reason that I expect packet discrimination to be a losing proposition for ISPs, with or without regulation. But there are reasons things at least might be different for wireless. Until we know, I’d rather stick with the narrowest available fixes to such particular problems as do crop up, and then figure out as we go whether a broader remedy is needed, than have an overbroad fix that prompts some further lurching correction when we figure out, belatedly, what unintended second-order effects our first solution created.

Addendum: My friend Tom Lee from the Sunlight Foundation (confusingly distinct from my colleague Tim Lee!) has some characteristically smart things to say, and suggests that while disagreement is bound to persist, arguments in this space appear to be getting less hysterical and stupid. Which, if true, would mean net neutrality is on some kind of countercyclical trend relative to… every single other aspect of American political discourse. Hope springs eternal.

Addendum II: From Tom’s post:

[T]he FCC is essentially saying that if ISP, Inc. is interested in undertaking some network monkey business, it would behoove them to get on the phone with Washington before they get on the phone with Cisco.  This is a burden, I suppose, but network-wide changes are a big enough deal and pursued at a sufficiently careful pace that I don’t think it’s likely to be a particularly onerous one.

So, the thing we all like about end-to-end is that it enables innovation by decentralizing experimentation—you don’t need permission to connect a new application or device to the network. What I like about end-to-end in markets is that you don’t need permission to hook up a new business model either. Obviously, ISPs are vastly fewer and far slower moving than coders and users. Maybe he’s right that this makes the burden low relative to the gains. But I’d like to see some of the neutralists go a little fractal and turn that geek candlepower to the question of how the market itself might be maintained as a more open platform rather than looking for the best network management strategies. Benkler has suggested spectrum commons as a means of introducing last-mile competition—a real Public Option, as it were—and something like that strikes me as more attractive than duopoly, whether or not it’s regulated duopoly.

Addendum III: Jon Zittrain notes that he had a similar thought in his book The Future of the Internet (and How to Stop It):

The cable television experience is a walled garden. Should a cable or satellite company choose to offer a new feature in the lineup called the “Internet channel,” it could decide which Web sites to allow and which to prohibit. It could offer a channel that remains permanently tuned to one Web site, or a channel that could be steered among a preselected set of sites, or a channel that can be tuned to any Internet destination the subscriber enters so long as it is not on a blacklist maintained by the cable or satellite provider. Indeed, some video game consoles are configured for broader Internet access in this manner. Puzzlingly, parties to the network neutrality debate have yet to weigh in on this phenomenon.


  • dm

    Really a very nice piece. The pointer to Tom Lee's essay is particularly appreciated, but the notion of “architecture avoidance” is a nice insight to bring to this discussion.

    I’d like to see some of the neutralists go a little fractal and turn that geek candlepower to the question of how the market itself might be maintained as a more open platform rather than looking for the best network management strategies

    That geek candlepower tends to shine brightest when applied to standards and protocols that support flexibility in applications. That requires a certain openness (your “transparency”, in a way), but also an informal form of regulation known as standards and standards bodies (even though those standards begin as “rough consensus and running code”).

    Rough consensus has worked for a long time and, surprisingly, largely continues to work. There are things, for example, one could do to one's TCP implementation that would make one's TCP transfers much faster, at the cost of network congestion for everyone else — this would be a competitive advantage, but it seems no self-respecting engineer will lower themselves to put those changes into a product because their peers would disdain them.

    But some of the net neutrality debate appears to be a breakdown in that rough consensus. Where does one go then?


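The commenter’s TCP point is easy to illustrate with a toy AIMD (additive-increase, multiplicative-decrease) model, with purely illustrative parameters: two senders share a bottleneck, and one backs off less sharply on loss than the standard halving.

```python
def share_bottleneck(backoff2: float, ticks: int = 2000, capacity: float = 100.0):
    """Toy AIMD model: two senders share a link of `capacity`. Each tick
    both windows grow by 1 (additive increase); when combined demand
    exceeds capacity, both see loss and multiply their window by their
    own backoff factor (0.5 is standard multiplicative decrease)."""
    w1, w2 = 1.0, 1.0
    sent1 = sent2 = 0.0
    for _ in range(ticks):
        w1 += 1.0
        w2 += 1.0
        if w1 + w2 > capacity:  # congestion: both flows see loss
            w1 *= 0.5           # well-behaved sender halves its window
            w2 *= backoff2      # the other backs off by its own factor
        sent1 += w1
        sent2 += w2
    return sent1 / ticks, sent2 / ticks

print(share_bottleneck(0.5))  # equal backoff: equal shares of the link
print(share_bottleneck(0.9))  # gentler backoff: sender 2 takes most of it
```

The well-behaved sender keeps conceding capacity it never gets back, which is roughly why TCP fairness depends on everyone honoring the same backoff rules, and why it has held up on rough consensus and peer disdain rather than enforcement.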
