Brooke Oberwetter and I have been having an interesting discussion here and here about network neutrality. I want to start by emphasizing that I wholeheartedly agree with Brooke’s broad point that technologies change over time, and so we should be skeptical of proposals to artificially restrain that evolution by giving government bureaucrats the power to second-guess the design decisions of network engineers. Doubtless the Internet will evolve in ways that neither of us can predict today, and we don’t want the law to stand in the way.
But Brooke went beyond that general point and offered some specific examples of ways she thinks the Internet might change. Her main contention seems to be that the end-to-end principle is overrated, and that “the only reason they’re so revered is because they are simply what is.” I think this is fundamentally mistaken: people have good reasons for revering the end-to-end principle, and abandoning it would be a very bad idea. I’ll discuss some of her specific arguments below the fold.
Tim, I think the bigger issue that you aren’t wrapping your head around is that you’re assuming that the network will always operate exactly as it does today. This is a 30 year old technology we’re talking about that hasn’t seen any real change to its basic operating protocols. To assume that the way it works now is the best possible way for it to continue working is woefully shortsighted, I think. End-to-end and TCP aren’t written in stone, and from my understanding, the only reason they’re so revered is because they are simply what is.
This is a bit like saying that 60-cycle AC power is a 100-year-old technology, and so we should be looking at things to replace it. The basic specs for TCP/IP haven’t changed in 20 years because there’s no reason for them to change. Like 60-cycle AC power, it’s more important that everyone agree on the same standard than what the particular standard is. And like 60-cycle power, it doesn’t place any particular constraints on what kinds of applications can be built atop it. I can power my iPods and DVD players with 60-cycle power just as well as my grandparents could power their toasters and Victrolas. Likewise, YouTube and World of Warcraft work just as well via TCP/IP as email and usenet did in the 1980s.
And that’s why the end-to-end principle is revered. The people who launched YouTube didn’t have to talk to Comcast or Verizon or anybody else to get help making their application work with their networks, any more than Apple had to call Pepco to make sure the iPod would work on its electrical grid.
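To make that concrete, here is a minimal sketch in Python, using nothing beyond the standard socket library. The host name and the use of plain HTTP are illustrative assumptions, not anyone's actual code; the point is that the only interface the application ever touches is the standard one at its own endpoint, and every network in between simply forwards packets without being consulted.

```python
# Hypothetical illustration: an application talks to a remote endpoint using
# only the standard socket API. No agreement with any ISP in between is needed;
# the networks just forward addressed packets.
import socket

def fetch_page(host: str, path: str = "/") -> bytes:
    """Open an ordinary TCP connection and speak HTTP over it.

    The application protocol is defined entirely at the two endpoints;
    the networks in the middle only see packets to deliver.
    """
    with socket.create_connection((host, 80)) as sock:
        request = f"GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

if __name__ == "__main__":
    print(fetch_page("example.com")[:200])
```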
Now obviously, some of the specifics of TCP/IP might change over time to accommodate the needs of new applications. There is, in fact, a new version of the IP protocol called IPv6 that offers a variety of enhancements to the way the current version (IPv4) operates. But IPv6 relies on the end-to-end principle every bit as much as IPv4 does. There’s no reason to expect that to change in the future.
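The continuity is visible even at the programming level. Here is a small sketch, again standard-library Python and not tied to any real service, showing that an application written against the socket API works the same way whether the endpoint it reaches happens to use IPv4 or IPv6: the addressing changes, but the end-to-end model does not.

```python
# Illustrative sketch: IPv6 changes the address family, not the end-to-end model.
# The same application code connects over whichever IP version is available.
import socket

def open_connection(host: str, port: int) -> socket.socket:
    """Connect to an endpoint over IPv4 or IPv6, whichever works.

    getaddrinfo may return AF_INET (IPv4) and/or AF_INET6 (IPv6) addresses;
    the application logic is identical in either case.
    """
    last_err = None
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
        host, port, type=socket.SOCK_STREAM
    ):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
    raise last_err or OSError(f"could not connect to {host}:{port}")
```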
I think the concept of the dumb network and the end-to-end principle make less sense now than they did when there was less variety in applications and less variety in content. As you pointed out to me, right now, identifying the kind of content (video, voice, email, web, etc) being sent over the packet-switched dumb network is a difficult technical challenge. Well, maybe some applications need a network to evolve on which such differentiations aren’t a difficult technical challenge. And maybe some consumers want a network that can keep objectionable content out of their homes without having to rely on end-technologies or pop-up blocking software. Wayne has made the argument in favor of the “splinternets” on several occasions, and it is a very reasoned and serious–though unpopular–argument that network neutrality should be abandoned as the Internet’s broad organizing principle.
That first sentence gets the situation precisely backwards. The astonishing variety of applications and content we now enjoy would be impossible without the end-to-end principle. Imagine if every Internet startup had to meet with every ISP in the world (or at least send each of them an application or a check) before launching a new website. They don’t have to because open standards like TCP/IP ensure that data flowing across the Internet gets treated the same way regardless of whose network it’s on.
I’m having trouble imagining a kind of application that would require the network to be able to distinguish among different kinds of content. I certainly don’t see any reason to think that routers in the middle of the network would be better at filtering “objectionable content” than dedicated mail servers at the endpoints.
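To illustrate the asymmetry, here is a toy sketch in which the classes and the filtering rule are entirely made up. It contrasts what a router in the middle works with (an address and an opaque payload) against what an endpoint mail server works with (the fully reassembled message it can actually apply policies to).

```python
# Toy illustration only: why content policy is easier at the endpoints than in
# the middle of the network.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes  # to a router, just opaque bytes, possibly encrypted

def router_forward(pkt: Packet, routing_table: dict[str, str]) -> str:
    """A router's job: pick the next hop from the destination address.
    It never needs to interpret the payload, and often can't."""
    return routing_table.get(pkt.dst, "default-gateway")

def endpoint_mail_filter(message: str, blocked_words: set[str]) -> bool:
    """An endpoint (a mail server or client) sees the reassembled, decoded
    message, so it can apply content policies the middle of the network can't."""
    return any(word in message.lower() for word in blocked_words)
```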
One possible example is quality-of-service guarantees, which I discussed here. There’s considerable disagreement among computer scientists about whether building QoS into the IP layer of the networking stack would be a good idea. My sense is that the majority view is that it would not, but there are certainly plenty of people on the other side. For our purposes, though, I think the important point is that adding QoS guarantees to the TCP/IP protocol stack would be a very modest adjustment to the end-to-end principle. Packets might come stamped with a priority level, and routers would forward higher-priority packets before lower-priority ones. But the routers still wouldn’t be “smart” in the sense of looking inside the packet to figure out where it came from or what kind of packet it is (video, voice, email, web, etc.). QoS would be a minor adjustment of end-to-end, not an abandonment of it.
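Here is a rough sketch of the kind of scheduler that modest adjustment implies. The packet structure and field names are invented for illustration; think of the priority mark as playing roughly the role of the DSCP bits in a real IP header. Packets are serviced strictly by the priority stamped on them, and the payload is never examined.

```python
# Sketch of priority-based queuing: the router services packets by a priority
# mark carried in the header and never looks at the payload. Field names are
# illustrative, not a real router's data structures.
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPacket:
    priority: int          # lower number = more urgent, stamped by the sender
    seq: int               # tie-breaker to keep FIFO order within a priority
    payload: bytes = field(compare=False)  # opaque to the scheduler

class PriorityScheduler:
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()

    def enqueue(self, priority: int, payload: bytes) -> None:
        heapq.heappush(self._queue, QueuedPacket(priority, next(self._counter), payload))

    def dequeue(self) -> bytes:
        """Send the most urgent queued packet, judged only by its header mark."""
        return heapq.heappop(self._queue).payload

sched = PriorityScheduler()
sched.enqueue(priority=5, payload=b"bulk email data")
sched.enqueue(priority=1, payload=b"voice sample")
assert sched.dequeue() == b"voice sample"  # higher priority goes out first
```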
I have to confess I don’t quite understand what “splinternets” are or what problem they would solve. I remember reading a bit of Wayne’s work on the subject when he was at Cato, and I don’t think he ever laid his proposal out in much detail. The connectedness of the Internet is precisely what makes the Internet useful. It seems to me it would be quite unfortunate if, say, we had ComcastNet and VerizonNet, and ComcastNet customers couldn’t send emails to VerizonNet customers and vice versa. (CEI’s blog appears to be undergoing some kind of upgrade, so I can’t link directly to it, but here is a Google cache of a recent blog post on the subject.)
I think it’s important to keep in mind that the end-to-end principle is a principle, not a protocol or standard. It’s not tied to TCP/IP or any other specific technology. We could build a totally new network based on a completely different architecture, and we might still choose to build it in a way that’s consistent with the end-to-end principle.
In fact, one way for libertarians to think about the subject is that the end-to-end principle is an example of the division of labor. Companies like Verizon and Comcast know a lot about how to get packets from point A to point B. They’re generally not so good at designing great computer games, web sites, or video applications. On the other hand, companies like Google and Yahoo are good at designing (or buying other companies that design) great applications but they aren’t necessarily good at designing better networks. So the end-to-end principle is just the principle that each company should focus on doing what it does best. Verizon and Comcast should focus on delivering packets quickly without worrying about what’s in them. Google and Yahoo should focus on creating applications and content without worrying about the details of how the packets will be delivered to their destination. Each firm focuses on its comparative advantage, and the result is greater efficiency all around.
Libertarians generally extol the division of labor as an example of free markets at their finest. It’s not clear to me why we should be skeptical of the end-to-end principle, which at root is just the division of labor applied to network design.