Tim Lee on Net Neut: “The Durable Internet”

November 12, 2008

Tim Lee’s long-anticipated Cato Institute Policy Analysis was released today.

The Durable Internet: Preserving Network Neutrality without Regulation is a must-read for people on both sides of the debate over network neutrality regulation.

What I like best about this paper is how Tim avoids joining one “team” or another. He even-handedly gives each side its due – each side is right about some things, after all – and calls out the specific instances where he thinks each is wrong.

Tim makes the case for treating the “end-to-end principle” as an important part of the Internet’s fundamental design. He disagrees with those who argue for a network with “smarter” innards, and he credits neutrality advocates with seeking the best engineering for the network. But they are wrong, he argues, to believe that the network is fragile or susceptible to control: the Internet’s end-to-end architecture is durable, even if it is not an absolute.

Tim has history lessons for those who believe that regulatory control of network management will have salutary effects. Time and time again, regulatory agencies have fallen into the service of the industries they regulate.

“In 1970,” Tim tells us, “a report released by a Ralph Nader group described the [Interstate Commerce Commission] as ‘primarily a forum at which transportation interests divide up the national transportation market.’” Such is the likely fate of the Internet were management of it given to regulators at the FCC and their lobbyist friends at Verizon, AT&T, Comcast, and so on.

This paper has something for everyone, and will be a reference work as the network neutrality discussion continues. Highly recommended: The Durable Internet: Preserving Network Neutrality without Regulation.

  • sjschultze

    I critique Tim's article:
    http://managingmiracles.blogspot.com/2008/11/ti

    Excerpt:

    Cato has finally gotten around to publishing Tim Lee's article, “The Durable Internet: Preserving Network Neutrality without Regulation.” I first saw a draft of his paper in March, and Tim engaged in a good-spirited back-and-forth with me over email. The primary failings that I perceived then remain unaddressed in this final version. They are twofold:

    1. The fallacy that any non-discrimination regulation is the same as the combined force of all misguided regulation since the advent of administrative agencies

    2. The fallacy that there is an underlying “durability” of the technology/market structures of the internet that will successfully resist strong carrier incentives

    Tim Lee's article repeats the standard refrain of no-government-regulation libertarianism, but then goes beyond it. However, his novel arguments for why the internet will take care of itself are not persuasive. Ultimately, we are left with his well-put argument for the benefits of network neutrality, but without any assurances that it will be preserved. Into this vacuum might flow reasonable discussion of how targeted government regulation might be the only means of achieving the ends we both seek.

  • http://bennett.com/blog Richard Bennett

    Tim exaggerates the role of end-to-end control in the overall structure of the Internet, and attributes many qualities to it that it does not in fact dictate. There is no inconsistency at all between an architecture where end systems negotiate flow rates with each other and service levels from the network itself. This insistence that network architecture is either/or is the fundamental failing of advocates who try to make end-to-end a religious creed instead of an engineering work item.

    This essay doesn't move the debate forward, unfortunately.

  • http://www.tc.umn.edu/~leex1008 Tim Lee

    There is no inconsistency at all between an architecture where end systems negotiate flow rates with each other and service levels from the network itself.

    I can't parse this sentence.

  • http://bennett.com/blog Richard Bennett

    I left a phrase out, read: “there is no inconsistency at all between an architecture where end systems negotiate flow rates with each other and one in which end systems negotiate service levels from the network itself.”

    On the Internet, and in fact in any layered-architecture network, protocols at a given layer negotiate with partners at the same layer and also with the lower layer. For example, TCP negotiates window size with the partner TCP across the network, and also negotiates it with the network core, which signals congestion by dropping packets. In the Internet's odd allocation of functions, TCP doesn't know why a packet has been lost – it could have happened due to a noise hit on a Wi-Fi channel, a congested route, or a buffer shortage at the other end – so it treats them all more or less the same (some TCP stacks have special handling for the first packet drop in a while, to accommodate Wi-Fi, but that's not universal).
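
    To make that loss-blindness concrete, here is a rough sketch of the additive-increase/multiplicative-decrease behavior described above – a toy model with illustrative constants, not any particular TCP stack:

        # Toy AIMD sender: it reacts identically to every loss, whether the
        # drop came from congestion, a Wi-Fi noise hit, or a full buffer.
        class AimdSender:
            def __init__(self):
                self.cwnd = 10.0  # congestion window, in segments (illustrative)

            def on_ack(self):
                # Additive increase: grow by roughly one segment per round trip.
                self.cwnd += 1.0 / self.cwnd

            def on_loss(self):
                # Multiplicative decrease: halve the window on ANY loss.
                # The sender has no idea why the packet disappeared.
                self.cwnd = max(1.0, self.cwnd / 2)

        sender = AimdSender()
        for _ in range(100):
            sender.on_ack()
        sender.on_loss()  # a congestion drop and a radio noise hit look the same
        print(round(sender.cwnd, 1))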

    Imagine a network with two service levels, one for low-latency delivery and the other for low-cost delivery. The first class is volume-limited and the second isn't. Does this network allow as much innovation and experimentation as the existing Internet, and does it do any damage to any end-to-end principles we have in the closet?
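
    To sketch that two-class idea in miniature – a toy model with an invented TwoClassScheduler and an arbitrary byte budget, nothing like real router code:

        from collections import deque

        # Toy two-class scheduler: a volume-limited low-latency queue that is
        # always served first, plus an unlimited low-cost queue for the rest.
        class TwoClassScheduler:
            def __init__(self, priority_budget_bytes):
                self.budget = priority_budget_bytes  # volume cap on the fast class
                self.fast = deque()
                self.cheap = deque()

            def enqueue(self, payload, low_latency=False):
                if low_latency and self.budget >= len(payload):
                    self.budget -= len(payload)
                    self.fast.append(payload)
                else:
                    # Over-budget "fast" traffic falls back to the cheap class.
                    self.cheap.append(payload)

            def dequeue(self):
                # Strict priority: drain the low-latency queue first.
                if self.fast:
                    return self.fast.popleft()
                return self.cheap.popleft() if self.cheap else None

        s = TwoClassScheduler(priority_budget_bytes=1500)
        s.enqueue(b"voip frame", low_latency=True)
        s.enqueue(b"bulk download chunk")
        print(s.dequeue())  # the voip frame comes out first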

    Network neutrality is not “the end-to-end principle”; it's a wholly distinct regulatory argument.

  • http://www.tc.umn.edu/~leex1008 Tim Lee

    I don't have any beef with an end-to-end-friendly, two-tiered network like the one you describe other than being skeptical that it could be made to work on a network with a billion people. My sense is that some NN advocates (including Tim Wu) feel the same way.

    A big part of the problem, of course, is that network neutrality means different things to different people who advocate it. But at least some NN advocates mean something like end-to-end, and for clarity I chose to use that definition.

  • http://bennett.com/blog Richard Bennett

    The network I've described, the one that Tim Wu doesn't think practical, is actually the thing we call The Internet today. There is one service level for UDP and another for TCP. The network's core routers treat their packets differently when applying congestion avoidance algorithms, which in effect creates two classes of service. UDP doesn't have to back off under congestion, but TCP does. So thanks for proving my point: a dumb core is not an actual part of the Internet's operation today, nor has it ever been.
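
    A toy illustration of that difference in sender behavior – the rates and drop probability here are invented for the sketch:

        import random

        # Under the same random drops, the TCP-like sender backs off while
        # the UDP-like sender keeps transmitting at a constant rate.
        random.seed(1)
        tcp_rate, udp_rate = 100.0, 100.0  # packets per tick (illustrative)
        for tick in range(50):
            loss = random.random() < 0.2  # 20% chance of a drop this tick
            if loss:
                tcp_rate = max(1.0, tcp_rate / 2)  # TCP halves on loss
                # udp_rate is unchanged: UDP has no built-in backoff
            else:
                tcp_rate += 1.0  # TCP slowly probes for more bandwidth
        print(f"TCP ends near {tcp_rate:.0f} pkt/tick; UDP still at {udp_rate:.0f}")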

    It's a fallacy to assert that an “end-to-end principle” dominates the Internet's design and is responsible for its success. The Internet absolutely depends on quite a different principle or architecture, namely the “network-to-network principle” that governs the agreements between network carriers to exchange packets with each other. Without a robust “network-to-network principle” the Internet wouldn't be an Internet; it would be an Intranet. Intranets still have end-to-end protocols, but they're not as interesting as a world-wide network that connects every system to every other system. So the dogma of “end-to-end” is a false emphasis.

    Incidentally, the “End to End Arguments” paper was originally published in 1981, before the Great Switchover, but even then it was historical revisionism. The architects of TCP/IP weren't out to assert their independence from authority as much as to make a grant-friendly network, and none of the three authors of the paper was among them. They were a second generation of network theorists looking to protect the flame.

    In my experience, virtually no network engineers and architects working today have read that paper, so it's only “seminal” among law professors and other policy wonks. Doctrinaire recitations tend not to have much audience among actual engineers.

  • Brett Glass

    Actually, the “end to endians” are merely one camp in the debate over Internet regulation (aka “network neutrality”), though they are the most extreme ones, to be sure. They are also the ones with the most counterfactual ideas. They champion the idea of a “dumb” network, when in fact the core routers that hold the Internet together today (and without which it would fall apart in a minute) are very intelligent special-purpose supercomputers. They reject any form of network management – as if the Internet were still an academic “toy” network with no potentially hostile users who would abuse it for things such as spam or malware. And they totally ignore economics, failing to recognize that ISPs do have cost constraints and that users won't pay to use an Internet which is unreliable or insecure.

    Yes, the ICC and other examples of “regulatory capture” are good arguments against regulation of the Internet. But so are the specifics of the proposed regulation. Except for the few provisions which merely prohibit anticompetitive conduct, most of the provisions of these regulations are designed to get someone something for nothing — in short, to favor some interest group. Which is anything but “neutral.”
