Is Prioritizing Actually Useful?

by on February 6, 2007 · 6 comments

One of the most interesting things in Cory Doctorow’s article was a link to this network neutrality write-up from last year in Salon. I thought this was fascinating:

There is fractious division among network engineers on whether prioritizing certain time-sensitive traffic would actually improve network performance. Introducing intelligence into the Internet also introduces complexity, and that can reduce how well the network works. Indeed, one of the main reasons scientists first espoused the end-to-end principle is to make networks efficient; it seemed obvious that analyzing each packet that passes over the Internet would add some computational demands to the system.

Gary Bachula, vice president for external affairs of Internet2, a nonprofit project by universities and corporations to build an extremely fast and large network, argues that managing online traffic just doesn’t work very well. At the February Senate hearing, he testified that when Internet2 began setting up its large network, called Abilene, “our engineers started with the assumption that we should find technical ways of prioritizing certain kinds of bits, such as streaming video, or video conferencing, in order to assure that they arrive without delay. As it developed, though, all of our research and practical experience supported the conclusion that it was far more cost effective to simply provide more bandwidth. With enough bandwidth in the network, there is no congestion and video bits do not need preferential treatment.”

Today, Bachula continued, “our Abilene network does not give preferential treatment to anyone’s bits, but our users routinely experiment with streaming HDTV, hold thousands of high-quality two-way videoconferences simultaneously, and transfer huge files of scientific data around the globe without loss of packets.”

Not only is adding intelligence to a network not very useful, Bachula pointed out, it’s not very cheap. A system that splits data into various lanes of traffic requires expensive equipment, both within the network and at people’s homes. Right now, broadband companies are spending a great deal on things like set-top boxes, phone routers and other equipment for their advanced services. “Simple is cheaper,” Bachula said. “Complex is costly”–a cost that may well be passed on to customers.

I think the people arguing for network prioritization mistakenly believe that this is a new issue. In fact, this is an issue that network engineers have been arguing about for decades, and people have been predicting the imminent collapse of the Internet due to congestion problems for years. I think it’s safe to say that the doomsayers of the 1990s were wrong; introducing congestion pricing wasn’t necessary and probably wouldn’t have been helpful a decade ago. It’s not obvious why today is any different.

  • http://www.codemonkeyramblings.com MikeT

    I am a big fan of the idea of metered bandwidth. $15/month should get you 1.5 Mbps, with a real guarantee of that 1.5 Mbps, but only about 2-3 GB of data transfer a month included in the $15. I would have no problem paying an extra $0.25-$0.50/GB for a few years while the network expands, with rates dropping as applications and the network adjust. Today we have a system that doesn’t charge users anything extra for using a $40/month connection to consume 600 GB of content, and that doesn’t deliver the advertised service to people who use at most a few GB a month. That ought to change.
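
    A minimal sketch of the arithmetic behind this proposal, using only the hypothetical figures above (a $15 base, roughly 3 GB included, and an overage rate around $0.50/GB; none of this is any real ISP’s pricing):

    ```python
    # Toy metered-billing calculation based on the commenter's hypothetical rates.
    def monthly_bill(gb_used, base=15.00, included_gb=3, overage_per_gb=0.50):
        """Flat base fee plus a per-GB charge for usage beyond the included allowance."""
        overage_gb = max(0, gb_used - included_gb)
        return base + overage_gb * overage_per_gb

    # A light user pays only the base fee; a heavy user pays for what they consume.
    print(monthly_bill(2))    # 15.00  -- within the included allowance
    print(monthly_bill(50))   # 38.50  -- 47 GB of overage at $0.50/GB
    print(monthly_bill(600))  # 313.50 -- the 600 GB user no longer rides free
    ```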

  • Christopher Anderson

    It’s great for a researcher building a next-generation network to simply recommend adding capacity. That has long been the model for private internal networks: Ethernet LANs have grown by orders of magnitude, which is how Ethernet beat out ATM in the LAN 10 years or so ago. Now that those very same LANs are deploying VoIP, they wish for some ATM features such as prioritization. And they add that prioritization (‘e’ tagging, for example) to deploy VoIP more often than they upgrade the whole thing to 10G.

    In the real world, however, there are business concerns such as resource scarcity and profit. In an ISP business model, users are charged for service. If usage goes up but subscriptions do not (i.e., the average user consumes more bandwidth), there is a financial motivation to prioritize or shape traffic rather than add capacity and sacrifice profit through higher capital outlays, as long as the ‘quality’ or ‘satisfaction’ observed by the consumer does not suffer.

    We can compare this to the freeway system in some ways. Should we simply add capacity, or should we add both capacity and intelligence? Is it better to build new freeways to solve every transportation problem, or do traffic congestion maps, signs, radio broadcasts, etc. also make sense?

    Do built-in automobile GPS units with maps showing traffic congestion cost too much and require too much processing power?

    And let’s not forget Moore’s law: processing power doubles every 18 months or so (or at least transistor count does). What is expensive and complex today will soon be cheap and simple, thanks to the exponential increase in computational capability.

    Additionally, the researcher’s line of reasoning only extends to a network with effectively infinite bandwidth. Wireless networks, for example, practically require prioritization schemes to deliver both capacity and performance. This is something we learned the hard way in WLANs (and why city-wide WLANs won’t work in the long run). In that case, spectrum is a scarce resource that must be shared as effectively as possible. Many argue the best way to do this in a WWAN (such as Mobile WiMAX) is through prioritization and service differentiation for customers: a user who primarily wants downloads can and should pay a different price than one who needs VoIP phones that always work and can live with less download throughput. Tagging these packets differently at an edge device is already trivial and will soon be simpler (a short sketch of such tagging appears at the end of this comment), and prioritizing on those marks in the core isn’t far behind.

    Should user A be forced to suffer through his occasional video watching because user B is stealing pornographic content via peer-to-peer networks at all hours of the day?

    In a more fundamental sense, Internet connections are valuable resources that require infrastructure, and resources need to be distributed. In a capitalist system we use private ownership to distribute these resources via the communication medium of money (the most efficient system discovered so far!). The government should referee, not play goalie, in this distribution game. Private enterprises therefore have the motivation to shape and prioritize traffic, and they will. As long as the government referees the game and doesn’t play it, we will get the most efficient distribution of these resources, because ISPs will compete for our dollars. Much better than the government stealing our dollars and then running the network as it sees fit.
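
    To make the tagging point concrete, here is a minimal sketch of marking traffic at the edge with a DiffServ code point. It assumes a Linux-style socket API (socket.IP_TOS); the address and port are placeholders, and whether any router along the path honors the mark is a separate policy question.

    ```python
    import socket

    # DSCP 46 ("Expedited Forwarding") is the class commonly used for VoIP.
    # The DSCP occupies the upper six bits of the old IP TOS byte, hence the shift.
    EF_DSCP = 46
    TOS_BYTE = EF_DSCP << 2  # 0xB8

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)

    # Datagrams sent on this socket now carry DSCP 46 in their IP headers;
    # a DiffServ-aware router can map them into a low-latency queue.
    sock.sendto(b"voice frame", ("192.0.2.10", 5004))  # placeholder destination
    ```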

  • http://bennett.com/blog Richard Bennett

    The infamous Bachula testimony has made all the rounds, and is typically cited by the pro-regulation crowd as “proof” that there’s no legitimate value in prioritizing traffic on the Internet. But few people have read the report on which Bachula (a lobbyist, not an engineer) based his testimony. You can join this select group by clicking here. Here’s the abstract:

    Between May 1998 and October 2001, Internet2 worked to specify and deploy the QBone Premium Service (QPS) [QBone], an interdomain virtual leased-line IP service built on diff-serv [RFC2475] forwarding primitives and hereafter referred to simply as “Premium”. Despite considerable effort and success with proof-of-concept demonstrations, this effort yielded no operational deployments and has been suspended indefinitely.

    In this document, we attempt to explain the reasons for this failure. The focus is on non-architectural, and largely non-technical, obstacles to deployment, foremost among them: Premium’s poor incremental deployment properties, intimidating new complexity for network operators, missing functionality on routers, and serious economic challenges.

    The costs of Premium are too high relative to the perceived benefits. Moreover, even if successfully deployed, Premium fundamentally changes the Internet architecture, running contrary to the end-to-end design principle, and threatening the future scalability and flexibility of the Internet. The conclusions reached herein apply not just to Premium, but to any IP quality of service (QoS) architecture offering a service guarantee.

    You don’t have to be an engineer to grasp these key points: Internet2 hasn’t looked at this problem in over five years; their failure to make it work was not related to “architectural issues” as much as deployment issues; the researchers invoke religion (“end-to-end”) to explain why they feel they shouldn’t look too closely at DiffServ.

    Reading on, you’ll see that the ultimate hang-up I2 encountered was a limitation of the old-fashioned routers they used: in those days, QoS was implemented in firmware, not in hardware, so turning it on carried an automatic 30% performance hit. Today’s routers do this in hardware, so the issue goes away completely (a simplified sketch of the scheduling decision that such hardware makes appears at the end of this comment).

    Bachula’s testimony was both misleading and harmful, and because of it I’d like to see federal funding for Internet2 suspended.
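
    For readers who haven’t waded through the DiffServ details, the sketch below models, in deliberately simplified form, the per-port scheduling decision that such QoS hardware makes: packets marked “Premium” are always served before best-effort packets. It is a conceptual toy, not a description of any particular router.

    ```python
    from collections import deque

    class StrictPriorityScheduler:
        """Two-queue strict-priority output port: premium traffic always drains first."""

        def __init__(self):
            self.premium = deque()      # e.g. packets marked Expedited Forwarding (DSCP 46)
            self.best_effort = deque()  # everything else

        def enqueue(self, packet, dscp=0):
            (self.premium if dscp == 46 else self.best_effort).append(packet)

        def dequeue(self):
            # Best-effort traffic only gets whatever capacity premium traffic leaves over.
            if self.premium:
                return self.premium.popleft()
            if self.best_effort:
                return self.best_effort.popleft()
            return None

    port = StrictPriorityScheduler()
    port.enqueue("bulk-download-segment")   # best effort
    port.enqueue("voip-frame", dscp=46)     # premium
    print(port.dequeue())  # "voip-frame" leaves first even though it arrived second
    ```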
