Network Neutrality Is the Division of Labor

by on November 12, 2006

Brooke Oberwetter and I have been having an interesting discussion here and here about network neutrality. I want to start by emphasizing that I wholeheartedly agree with Brooke’s broad point that technologies change over time, and so we should be skeptical of proposals to artificially restrain that evolution by giving government bureaucrats the power to second-guess the design decisions of network engineers. Doubtless the Internet will evolve in ways that neither of us can predict today, and we don’t want the law to stand in the way.

But Brooke went beyond that general point and offered some specific examples of ways she thinks the Internet might change. Her main contention seems to be that the end-to-end principle is overrated, and that “the only reason they’re so revered is because they are simply what is.” I think this is fundamentally mistaken: people have good reasons for revering the end-to-end principle, and abandoning it would be a very bad idea. I’ll discuss some of her specific arguments below the fold.

    Tim, I think the bigger issue that you aren’t wrapping your head around is that you’re assuming that the network will always operate exactly as it does today. This is a 30-year-old technology we’re talking about that hasn’t seen any real change to its basic operating protocols. To assume that the way it works now is the best possible way for it to continue working is woefully shortsighted, I think. End-to-end and TCP aren’t written in stone, and from my understanding, the only reason they’re so revered is because they are simply what is.

This is a bit like saying that 60-cycle AC power is a 100-year-old technology, and so we should be looking at things to replace it. The basic specs for TCP/IP haven’t changed in 20 years because there’s no reason for them to change. Like 60-cycle AC power, it’s more important that everyone agree on the same standard than what the particular standard is. And like 60-cycle power, it doesn’t place any particular constraints on what kinds of applications can be built atop it. I can power my iPods and DVD players with 60-cycle power just as well as my grandparents could power their toasters and Victrolas. Likewise, YouTube and World of Warcraft work just as well via TCP/IP as email and usenet did in the 1980s.

And that’s why the end-to-end principle is revered. The people who launched YouTube didn’t have to talk to Comcast or Verizon or anybody else to get help making their application work with their networks, any more than Apple had to call Pepco to make sure the iPod would work on its electrical grid.

Now obviously, some of the specifics of TCP/IP might change over time to accommodate the needs of new applications. There is, in fact, a new version of the IP protocol called IPv6 that offers a variety of enhancements to the way the current version (IPv4) operates. But IPv6 relies on the end-to-end principle every bit as much as IPv4 does. There’s no reason to expect that to change in the future.

    I think the concept of the dumb network and the end-to-end principle make less sense now than they did when there was less variety in applications and less variety in content. As you pointed out to me, right now, identifying the kind of content (video, voice, email, web, etc.) being sent over the packet-switched dumb network is a difficult technical challenge. Well, maybe some applications need a network to evolve on which such differentiations aren’t a difficult technical challenge. And maybe some consumers want a network that can keep objectionable content out of their homes without having to rely on end-technologies or pop-up blocking software. Wayne has made the argument in favor of the “splinternets” on several occasions, and it is a very reasoned and serious–though unpopular–argument that network neutrality should be abandoned as the Internet’s broad organizing principle.

That first sentence gets the situation precisely backwards. The astonishing variety of applications and content we now enjoy would be impossible without the end-to-end principle. Imagine if every Internet startup had to meet with every ISP in the world (or at least send them an application or a check) before launching a new website. The reason they don’t have to do that is because open standards like TCP/IP ensure that data flowing across the Internet gets treated the same way regardless of whose network it’s on.

I’m having trouble imagining a kind of application that would require a network to be able to distinguish among different kinds of content. I certainly don’t see any reason to think that network routers in the middle of the network would be better at filtering “objectionable content” than dedicated mail servers at the endpoints.

One possible example is quality-of-service guarantees, which I discussed here. There’s considerable disagreement among computer scientists about whether building QoS into the IP layer of the networking stack would be a good idea. My sense is that the majority view is that it would not, but there are certainly plenty of people on the other side. For our purposes, though, I think the important point is that adding QoS guarantees to the TCP/IP protocol stack would be a very modest adjustment to the end-to-end principle. Packets might come stamped with a priority level, and routers would forward higher-priority packets before lower-priority ones. But they still wouldn’t be “smart” in the sense of looking inside the packet to figure out where the packet came from or what kind of packet it is (video, voice, email, web, etc.). QoS would be a minor adjustment of end-to-end, not an abandonment of it.
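One concrete mechanism for this kind of stamping already exists in the IP header: the TOS/DSCP byte. The sketch below is a simplification, assuming a Linux-style sockets API; the address, port, and payload are placeholders. It marks a datagram for expedited forwarding without the network ever reading the payload:

```python
import socket

# Hypothetical sender marking its packets for priority treatment.
# DSCP "Expedited Forwarding" is code point 46; it occupies the top
# six bits of the IP header's TOS byte.
DSCP_EF = 46
TOS_VALUE = DSCP_EF << 2  # 184, i.e. 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)
applied = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)

# The payload stays opaque to the network; only the header mark matters.
sock.sendto(b"voice frame", ("127.0.0.1", 5004))
sock.close()
```

Routers that honor DSCP can queue on that one header byte alone, which is why this sort of prioritization leaves the end-to-end principle largely intact.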

I have to confess I don’t quite understand what “splinternets” are or what problem they would solve. I remember reading a bit of Wayne’s work on the subject when he was at Cato, and I don’t think he ever laid his proposal out in much detail. The connectedness of the Internet is precisely what makes the Internet useful. It seems to me it would be quite unfortunate if, say, we had ComcastNet and VerizonNet, and ComcastNet customers couldn’t send emails to VerizonNet customers and vice versa. (CEI’s blog appears to be undergoing some kind of upgrade, so I can’t link directly to it, but here is a Google cache of a recent blog post on the subject)

I think it’s important to keep in mind that the end-to-end principle is a principle, not a protocol or standard. It’s not tied to TCP/IP or any other specific technology. We could build a totally new network based on a completely different architecture, and we might still choose to build it in a way that’s consistent with the end-to-end principle.

In fact, one way for libertarians to think about the subject is that the end-to-end principle is an example of the division of labor. Companies like Verizon and Comcast know a lot about how to get packets from point A to point B. They’re generally not so good at designing great computer games, web sites, or video applications. On the other hand, companies like Google and Yahoo are good at designing (or buying other companies that design) great applications but they aren’t necessarily good at designing better networks. So the end-to-end principle is just the principle that each company should focus on doing what it does best. Verizon and Comcast should focus on delivering packets quickly without worrying about what’s in them. Google and Yahoo should focus on creating applications and content without worrying about the details of how the packets will be delivered to their destination. Each firm focuses on its comparative advantage, and the result is greater efficiency all around.
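The division can be made concrete with a toy model (the `Carrier` and `App` classes here are invented purely for illustration, not real networking code): the carrier moves opaque bytes from A to B, and only the endpoints give those bytes meaning.

```python
class Carrier:
    """Moves payloads from A to B without interpreting them."""
    def __init__(self):
        self.inboxes = {}
    def deliver(self, dest, payload: bytes):
        # Video, voice, and email all look the same from here: bytes.
        self.inboxes.setdefault(dest, []).append(payload)

class App:
    """Gives meaning to payloads; knows nothing about routing."""
    def __init__(self, name, carrier):
        self.name, self.carrier = name, carrier
    def send(self, dest, text):
        self.carrier.deliver(dest, text.encode("utf-8"))
    def receive(self):
        return [p.decode("utf-8") for p in self.carrier.inboxes.get(self.name, [])]

net = Carrier()
alice, bob = App("alice", net), App("bob", net)
alice.send("bob", "hello")
print(bob.receive())  # ['hello']
```

Neither class needs to know the other’s internals, which is the comparative-advantage point in miniature: the carrier can get better at delivery, and the apps can get better at applications, independently.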

Libertarians generally extol the division of labor as an example of free markets at their finest. It’s not clear to me why we should be skeptical of the end-to-end principle, which at root is just the division of labor applied to network design.

  • http://abstractfactory.blogspot.com/ Cog

    Good post, but you are likely to get a few dings from networking nerds.

    First, a number of important Internet applications don’t run on TCP/IP, but rather custom (non-TCP) transports built on raw IP datagrams. This includes most real-time applications, including streaming audio/video and online games. TCP is the “reliable” transport that you use when you want every byte to arrive at its destination in order; if it’s more important to get a large fraction of bytes to the destination in a timely manner, you want something else. These other transports also respect the end-to-end principle, of course.
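    The trade-off is easy to sketch with datagram sockets (the loopback address and frame names below are arbitrary): a UDP sender fires independent datagrams and moves on, whereas TCP would stall until every earlier byte arrived in order.

```python
import socket

# Receiver bound to an ephemeral loopback port; a real-time app would
# read frames as they arrive and simply skip any that never show up.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]
receiver.settimeout(0.5)

# Sender: no handshake, no retransmission, no ordering guarantee.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for frame in (b"frame-1", b"frame-2", b"frame-3"):
    sender.sendto(frame, ("127.0.0.1", port))

received = []
try:
    for _ in range(3):
        data, _ = receiver.recvfrom(1024)
        received.append(data)
except socket.timeout:
    pass  # play on past missing frames rather than stall
sender.close()
receiver.close()
```

    On a lossy link some frames would simply be absent from `received`, and a streaming application would carry on regardless, which is exactly the behavior TCP’s reliability guarantees make impossible.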

    Second, it’s not really accurate to say that TCP/IP doesn’t need to be improved. The research literature on extensions to TCP/IP is legion. Just peek into a Google Scholar search for transport protocol. TCP has many warts. For example, TCP confuses congestion with random packet drops due to unreliable links, making it pretty bad for networks with wireless hops.
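    That failure mode can be sketched with a toy AIMD loop (a deliberate simplification; real TCP dynamics are more involved): the sender halves its window on every loss, so random radio losses depress throughput even on an uncongested link.

```python
import random

# Toy AIMD sender: additive increase each round, multiplicative
# decrease on any loss -- whether the loss came from congestion or
# from a noisy wireless link.
def average_window(loss_rate, rounds=10_000, seed=1):
    rng = random.Random(seed)
    cwnd, total = 1.0, 0.0
    for _ in range(rounds):
        if rng.random() < loss_rate:
            cwnd = max(1.0, cwnd / 2)  # treats every loss as congestion
        else:
            cwnd += 1.0
        total += cwnd
    return total / rounds

wired = average_window(loss_rate=0.001)    # clean link
wireless = average_window(loss_rate=0.05)  # lossy radio link
```

    Comparing the two shows the lossy link sustaining a far smaller average window despite having capacity to spare, which is the wart being described here.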

    However, this doesn’t really bolster the position of the anti-neutrality brigades. Proposed improvements to TCP/IP generally preserve the division of labor between the application and network provider.

    And besides, it’s kind of silly to compare the work of networking researchers with the crude extortion that Verizon and SBC have in mind. The scenarios that Brooke’s speculating about are fantasies, for all the reasons you’ve already pointed out and more. What we’ll see instead, if the telcos decide they’re not afraid of regulation anymore, is much simpler: SBC will degrade service for non-SBC VoIP applications in order to drive customers to SBC’s VoIP service. Comcast will degrade service for non-Comcast Video over Internet applications in order to drive customers to Comcast’s own video offerings. You’ll be able to pick your poison — degraded audio, or degraded video? wheee — but until competitors like wide area wireless and broadband over power lines actually come online in a serious way, you won’t see reform.

    I also want to say that in your previous posts, you’ve seemed overly sanguine about the prospects for wide area wireless and other new competitors to the DSL/cable duopoly. These are technologies that, as far as I can tell, have been in the “three to five years” range for, well, three to five years. Of course, all tech watchers know that “three to five years” is code for “we have no clue when it will happen”. Frankly, if either one becomes a serious competitor nationwide before the decade’s out, I’ll be amazed.

  • http://www.techliberation.com/ Tim

    Cog: My understanding is that “TCP/IP” is a commonly-used shorthand for the suite of networking protocols that includes TCP, IP, UDP, etc. An application that uses a custom transport layer atop IP is still broadly speaking a member of the TCP/IP family. I didn’t really belabor the point because, as you point out, other transport protocols are based on the end-to-end principle as much as TCP is.

    Now that I think about it, it strikes me that the end-to-end principle is really a property of the IP layer. If you’ve got a network that faithfully transmits packets from A to B, that’s going to be a neutral network pretty much no matter what transports different applications use.

    You’ve got a good point about new wireless technologies. I’ll confess I don’t know enough about them to have a strong opinion about whether they’re vaporware or not. However, I don’t think that my arguments necessarily hinge on the imminent availability of more wireless broadband technologies.

    I doubt SBC will find it as easy as you suggest to degrade VoIP service. First, the FCC already has some authority over voice service and in at least one case (the Madison River case) has employed it to prevent such shenanigans. Second, I think any attempt to degrade VoIP will lead to a sort of cat-and-mouse game that SBC is likely to lose. There are lots of ways you can camouflage a VoIP packet as some other kind of packet, and companies like Vonage will very quickly employ such tactics if SBC starts interrupting their service. Finally, even in a duopoly, customer outrage can have some impact. Given the outrage that’s been generated over an almost entirely hypothetical threat, my sense is that there would be an even bigger uproar if a big phone company engaged in actual discrimination.

    With video you have the added dimension that the vast majority of video programming isn’t real time, and so a lot of the tactics you might use against voice (such as introducing jitter) simply won’t work to disrupt a video service. My sense is that a lot of video will be distributed to users’ hard drives via peer-to-peer networks like BitTorrent for viewing on demand. It’s not obvious to me how Comcast would go about disrupting that, short of simply capping total bandwidth, which would make their overall Internet product much less appealing.

    Wireless Internet technologies may not mature in the 3-5 year time frame, but it strikes me as very likely that they will be available in the 10-15 year timeline. Any legislation we enact today will almost certainly be on the books at that point, and if history is any guide, there’s a very real threat it will eventually be twisted into a barrier to entry for new entrants (see: the ICC and CAB in the 70s, cable entry into the phone market in the ’90s, telco entry into the video market today). The absolute worst-case scenario would be one in which we enact regulations that prove to be unnecessary, but that the telcos have twisted into a barrier to entry by the time wireless technology matures sometime next decade.

  • http://www.ceiopenmarket.org Brooke

    Tim, reading the July 31 post you linked to above has confused me immensely:

    “It may be that QoS can be deployed in a cost-effective manner, and that non-QoS network management techniques simply won’t give us the quality of service we need for high-bandwidth, interactive applications. Which is why we should leave network owners with some freedom to experiment. No one has a monopoly of wisdom on network design, and if anyone did, it certainly wouldn’t be Congress or the FCC!”

    Between that and the idea that packets could be “stamped” to identify their priority level, I don’t see how this is any different from my original post, other than that you’ve added some detail by identifying a mechanism that makes it plausible, QoS agreements. I also don’t understand how that could be considered a “modest” adjustment to the end-to-end principle; the network identifies which content is priority and which is not and it discriminates between them accordingly. I’m assuming there’s a price involved? I’m assuming no one is forced into these contracts? I’m assuming that, say, two providers of identical applications are not both required to spend more to upgrade to priority service agreements? That is, one could upgrade to compete on speed and another could abstain to compete on overhead? That’s what it sounds like…

    Anyway, other quick comments:

    1. I’m not suggesting abandoning current protocols just because they’re old, I’m suggesting looking into alternatives that meet current (and future) technologies’ needs as they arise. And I’m suggesting thinking outside the end-to-end box to do so.

    2. “I’m having trouble imagining a kind of application that would require a network to be able to distinguish among different kinds of content.” Yes, yes you are. You’re having trouble imagining anything that departs too terribly far from what exists right now.

    3. Splinternets are to the Internet what gated communities are to the real world: ridiculous bubbles where people voluntarily give up some of their liberties in exchange for peace of mind/religious filters/no spam/etc. I wouldn’t do it and you wouldn’t do it, but there are a lot of joyless, soulless zombies out there who would. Let us not begrudge them their Epcot Center version of the Internet.

    4. Yes, division of labor is good. But what good is fast delivery if grandma’s china gets broken during the move? Variability in packet delivery isn’t such a big deal for browsing or e-mail; but it’s wicked annoying for streaming video and such. A smart network that can determine what sort of packets it’s carrying might someday be able to allocate bandwidth resources not just more efficiently, but more effectively, so that end products are delivered intact. If that isn’t a part of the delivery process, I don’t know under whose divided portion of the labor it falls.

  • http://www.techliberation.com/ Tim Lee

    Brooke,

    In my opinion, such a QoS scheme is unlikely to actually be helpful, for the reasons Ed Felten gives here. Now, we could be wrong, and so I certainly think it’s a good idea to avoid regulations that would unnecessarily prevent companies from experimenting with such guarantees.

    If that’s all you’re saying, then we agree. But you seemed to be making the stronger claim that network discrimination will be essential to an innovative Internet. I think it’s possible that this is true, but I think it’s probably not. And I certainly don’t think a search engine is a good example, because a search engine isn’t going to be helped by QoS guarantees. QoS guarantees are helpful for streaming applications like VoIP. They aren’t especially helpful for the Web, where occasional network congestion doesn’t significantly affect the browsing experience.

    I think our disagreement may ultimately boil down to a matter of tone and semantics. I see the end-to-end principle as just that: a principle. It’s not a hard-and-fast rule, and there may very well be particular instances where it makes sense to relax it. But I think the phrase “abandoning network neutrality” suggests something much more radical, in which more and more of the intelligence is moved to the middle of the network. I think that would be a disaster.

    Likewise, I think the crack about “drinking the kool-aid at the altar of Internet worship” is unfair to network neutrality advocates. These are reasonable people who have serious concerns about the future evolution of the network. I think their concerns are unfounded, but not so unfounded that we should dismiss them as religious zealots.

  • http://www.ceiopenmarket.org Brooke

    It sounds more and more like our entire disagreement is based on my using search engines as an example, to which I happily say, “My bad.” When I write about tech stuff, I like to ask myself if I’ve put it in words my mother could understand. So I used search engines, because they’re an application everyone’s familiar with and because Google is always boasting about its speed. (Yes, I know that Google’s speed is a function of their capital investment, not of a QoS agreement, but you can certainly see how one might get the impression that speed is very much a dimension on which search engines compete.) So apologies for the bad example. Maybe next time you can say, “Hey, Brooke, that’s a bad example; here’s a better one,” instead of saying “Hey Brooke, your line of thinking is stupid and indicative of your utter lack of tech savvy.”

    Also, as I said before, there are a lot of smart people–yourself included–who think net neutrality is good. I don’t think they think it because they drink the kool-aid. I do, however, think there are a lot of people who did drink the kool-aid–maybe some of those people are even smart people who think net neutrality is good. When they make good arguments, I’ll engage in that discussion. When they begin referring to themselves as evangelists and proselytizing about the spirit of the one true net, I’m less likely to take them seriously.

  • http://www.techliberation.com/ Tim Lee

    It does sound like much of our disagreement centers around that particular example. My apologies for not approaching the subject in a more constructive fashion.
