More on the Economics of Prioritization

February 8, 2007 · 18 comments

In response to my post on network prioritization on Tuesday, Christopher Anderson left a thoughtful comment that represents a common, but in my opinion mistaken, perspective on the network discrimination question:

It’s great for a researcher building a next generation network to simply recommend adding capacity. This has been the LAN model for a long time in private internal networks, and LANs with Ethernet have grown by orders of magnitude. This is how Ethernet beat out ATM in the LAN 10 years ago or so. Now that those very same LANs are deploying VOIP, they wish for some ATM features such as prioritization. And now they add that prioritization (‘e’ tagging for example) to deploy VOIP more often than they upgrade the whole thing to 10G.

There are, however, business concerns in the real world. Business concerns such as resource scarcity and profit. In an ISP business model, users are charged for service. If usage goes up but subscriptions do not (e.g., the average user consumes more bandwidth), there is financial motivation to prioritize or shape the network, not to add capacity and sacrifice profit with higher outlays of capital, as long as the ‘quality’ or ‘satisfaction’ as observed by the consumer does not suffer.

Anderson is presenting a dichotomy between what we might call the frugal option of using prioritization schemes to make more efficient use of the bandwidth we’ve got and the profligate option of simply building more capacity when we run out of the bandwidth we’ve got. The former is supposed to be the cheap, hard-headed, capitalist way of doing things, while the latter is the sort of thing that works fine in the lab, but is too wasteful to work in the real world.


I think this dichotomy is false. Prioritization does not come for free. Let me highlight two ways in which I think Anderson’s argument underestimates the costs of introducing prioritization into the network. The first is the really obvious one: for a given throughput, a “smart” router with more sophisticated prioritization will require more transistors, and therefore will be more expensive, than a “dumb” router that simply routes packets on a first-in, first-out basis. That means that the choice is not between spending more money on capacity or pushing the “prioritize” button on the network control panel. The choice is between spending money on the routers (and other equipment) necessary for a faster network or spending money on the routers (and other equipment) necessary for a smarter network.
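
The trade-off is easy to see in a toy model. The sketch below is my own illustration, not anything from Anderson's comment, and every parameter in it is made up: a single congested link is shared by a small fraction of latency-sensitive "voice" packets and a majority of "bulk" packets, and we run it once as a plain first-in, first-out queue and once with strict priority for voice.

```python
import heapq
import random

def simulate(priority: bool, n: int = 2000, service: float = 1.0,
             load: float = 0.95) -> dict:
    """Toy single-link queue: 10% 'voice' and 90% 'bulk' packets share one
    transmitter. With priority=True, queued voice packets always go first;
    otherwise the queue is first-in, first-out. Returns mean waiting times."""
    rng = random.Random(1)               # identical arrivals for both runs
    arrivals, t = [], 0.0
    for _ in range(n):
        t += rng.expovariate(load / service)
        arrivals.append((t, 'voice' if rng.random() < 0.1 else 'bulk'))

    free_at = 0.0                        # time the link next goes idle
    waits = {'voice': [], 'bulk': []}
    queue, i = [], 0
    while i < len(arrivals) or queue:
        # admit every packet that has arrived by the time the link is free
        while i < len(arrivals) and (not queue or arrivals[i][0] <= free_at):
            at, kind = arrivals[i]
            rank = (0 if kind == 'voice' else 1, at) if priority else (at,)
            heapq.heappush(queue, (rank, at, kind))
            i += 1
        rank, at, kind = heapq.heappop(queue)
        start = max(free_at, at)
        waits[kind].append(start - at)
        free_at = start + service
    return {k: sum(v) / len(v) for k, v in waits.items()}
```

Under heavy load, the priority run shows a much lower average wait for voice packets and a somewhat longer wait for bulk traffic: prioritization reallocates delay rather than eliminating it, and implementing even this toy scheduler takes more logic than the FIFO version.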

Now, it’s an empirical question whether, for a given budget, you’ll see a larger improvement in network performance from buying faster equipment than you will from buying smarter equipment. My second-hand and very cursory understanding of the research is that this is a subject on which network engineers disagree, but that the most common view is that the “faster” option tends to work better. But obviously someone could come along and prove them wrong in the future. What’s not in doubt, though, is that to make an apples-to-apples comparison, you have to consider the costs of prioritization alongside the costs of adding capacity.

Anderson also ignores what I think are likely to be the more significant costs of a prioritization scheme: the need to build the kind of global infrastructure that would be necessary to make a network-wide prioritization scheme functional. A prioritization scheme that only worked on one network wouldn’t be terribly useful, since high-priority packets could still get stuck in congestion the moment they left their home network. So if we’re serious about giving a customer end-to-end quality-of-service guarantees for their VoIP calls, we need to come up with a standard that the vast majority of ISPs adopt. Such a standard would almost certainly come with substantial overhead. Someone would have to police each user’s (or ISP’s) use of the prioritization flags to ensure no one simply marks all of their packets high priority. There’d need to be an elaborate billing system so that the payment for prioritization is distributed in an equitable manner among the various network owners who carried the traffic for part of its journey.
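
To give a flavor of what "policing the flags" entails, here is a minimal token-bucket policer, a standard traffic-conditioning technique; the class name, rates, and labels are my own invention, not any particular vendor's API. Each subscriber gets a budget of high-priority bytes per second, and marked packets beyond that budget are demoted to best effort rather than trusted.

```python
class PriorityPolicer:
    """Token-bucket policer (illustrative sketch): a subscriber may send
    high-priority traffic at up to `rate` bytes/sec, with bursts of up to
    `burst` bytes. Marked packets beyond the budget are demoted."""

    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst
        self.tokens = burst          # start with a full bucket
        self.last = 0.0

    def classify(self, size: int, marked_priority: bool, now: float) -> str:
        # refill tokens for the elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if marked_priority and self.tokens >= size:
            self.tokens -= size      # spend budget only on honored markings
            return "priority"
        return "best-effort"
```

A user who marks every packet high priority gets priority for only the first `burst` bytes plus `rate` bytes per second; everything else falls back to best effort. The mechanism itself is simple; the hard part, as argued above, is deploying and administering something like it consistently across every network a packet crosses.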

I’m skeptical about whether such a system is even feasible, but perhaps it is. What can’t be disputed, however, is that all that infrastructure would cost money. Which means that Anderson is completely wrong about this:

In a capitalist system we use private ownership to distribute these resources via the communication medium of money (most efficient system discovered so far!).

In fact, money is an extraordinarily inefficient means of communicating value when transactions are small. As I’ve said before, this is precisely why micropayments never took off, and are unlikely ever to do so. It’s also why ISPs have almost entirely moved to flat-rate, rather than hourly, billing. Below a certain level, the costs of financial transactions overwhelm the value of what’s being exchanged.
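
The arithmetic is straightforward. Suppose each billing event carries a fixed processing cost; the $0.25 figure below is a made-up stand-in for payment processing, support, and dispute overhead, not a measured number.

```python
def net_revenue(value_per_txn: float, txn_cost: float, txns: int) -> float:
    """What metered billing collects, minus the fixed cost of
    processing each transaction."""
    return txns * (value_per_txn - txn_cost)

# 10,000 two-cent micro-charges: the overhead swamps the value exchanged.
micro = net_revenue(0.02, 0.25, 10_000)   # -2300.0: a net loss
# One flat-rate charge covering the same usage:
flat = net_revenue(200.00, 0.25, 1)       # 199.75: nearly all value retained
```

Below the break-even point where the value of a transaction exceeds its processing cost, every additional billing event destroys value, which is exactly the micropayments problem.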

In those situations, it often makes more sense to come up with non-monetary ways to allocate resources. Bartering systems such as BitTorrent are one particularly elegant solution. But in other cases, flat-rate pricing and ad hoc rationing turn out to work pretty well too. There’s a reason that ISPs in the 1990s all abandoned hourly rates and implemented flat-rate billing instead. For somewhat complex reasons, it turned out that the cost of trying to bill users by the bit simply outweighed the benefits of doing so.
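
BitTorrent's barter works roughly like tit-for-tat choking: each peer reciprocates with whichever peers have recently uploaded the most to it, with no money changing hands. A simplified sketch (real BitTorrent also rotates an "optimistic unchoke" slot to discover new partners, which this omits; the peer names and byte counts are hypothetical):

```python
def choose_unchoked(uploaded_to_us: dict, slots: int = 4) -> set:
    """Tit-for-tat, simplified: reciprocate with the `slots` peers who
    have uploaded the most bytes to us recently."""
    ranked = sorted(uploaded_to_us, key=uploaded_to_us.get, reverse=True)
    return set(ranked[:slots])

# Hypothetical recent upload totals (bytes) from five peers:
peers = {"a": 120.0, "b": 300.0, "c": 10.0, "d": 45.0, "e": 220.0}
# choose_unchoked(peers) -> {"a", "b", "d", "e"}: stingy peer "c" is choked
```

No prices, no invoices, no settlement between parties: the resource being traded (upload capacity) is also the currency, which is what makes the scheme cheap enough to run on every tiny exchange.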

Now again, it’s not impossible that technological breakthroughs will change this equation and make more finely grained pricing cost-effective. But the important point is that you can’t treat such pricing as free. The more complex your pricing scheme is, the more resources (both labor and transistors) it takes to implement it. Prioritization only makes sense if the benefits outweigh the costs, including the administrative and technological overhead.

  • Charles

    I believe part of Anderson’s argument in that post was that in the U.S. market, consumers are charged a flat rate regardless of the bandwidth they use. Hence prioritizing gives the ISP the chance to deploy a differentiable service where a new revenue stream is possible: you pay so much for the basic service and you pay more for the better service. This is obviously not true in all markets. I believe it is customary in Canada (or at least in Quebec) to charge for bandwidth utilization with a base price for a certain amount of bandwidth and a price per usage on top. Obviously, in this case, adding bandwidth to their network gives the ISP a possibly increased revenue stream (assuming everyone goes over their flat rate every once in a while).

  • http://www.techliberation.com/ Tim Lee

    Charles,

    Capping a customer’s bandwidth and charging extra for going over the cap isn’t prioritization. It’s just metering. No one’s arguing that should be illegal, are they?

    And it’s not true that US customers pay the same rate regardless of how much bandwidth they use. Customers aren’t charged by the bit, but they do pay different rates depending on the average speed of their connections. So an ISP facing high usage could perfectly well charge more for their high-bandwidth plans. This is, again, not something that anyone is suggesting should be illegal.

  • Charles

    Tim,
    I agree with you. I’m just pointing out my understanding of the post you were quoting. What I understood was that, at least in this market, prioritization was preferable because it provided a more immediate way of recouping investment.

    Note that I’m not saying I agree with what he wrote, just that my understanding of his post seemed different from yours. Then again, I had to read it several times before getting the point. No disrespect to Anderson, but it wasn’t the best-worded post I’d seen around here. All this to say, I may be misreading his thoughts.

  • http://bennett.com/blog Richard Bennett

    Back to your post, I see several of the common mistakes civilians generally make in trying to grasp network engineering. The most significant one is your attribution of cost to prioritization and not to over-provisioning. Speed takes transistors too, so the real trade-off that engineers make in designing network protocols is whether it’s more effective to devote logic to speed or to management. In real networks we always have some element of both, so the trade-off is a matter of degree. And as a practical matter, modern routers have hardware support for DiffServ, so it really is simply a matter of pressing the priority button for most ISPs.

    Given that many of the circuits comprising the Internet core are leased, and those leases have a hard bandwidth limit, the choice that your ISP has to make when customers complain about VoIP not working right is to press the priority button on the router or spend more money on a faster circuit. The economics of that choice are not very hard to calculate.

  • http://www.techliberation.com/ Tim Lee

    Richard,

    Obviously over-provisioning has costs. What makes you think I believe otherwise? I didn’t comment on that specifically because I thought it too obvious to mention. But yes, when upgrading a network, both increasing speed and increasing management capability cost money, and so you face a trade-off between them. That’s what I was trying to say when I wrote that “to make an apples-to-apples comparison, you have to consider the costs of prioritization alongside the costs of adding capacity.”

  • http://bennett.com/blog Richard Bennett

    Right, both have costs. As it turns out in the real world, increased provisioning solves some traffic congestion problems optimally, and prioritization solves some other problems. Let’s say you have a network that has adequate capacity to meet consumer demand all the time except for Wednesday evenings between six and seven PM. The cost of increasing your bandwidth to provide a benefit for that limited window is very high compared to the cost of prioritizing traffic. And if the traffic mix contains a lot of BitTorrent streams, you can downgrade their priority without anybody actually noticing. So your clear winner in this instance is prioritization.

    Now if you’re in a situation of chronic congestion, things are obviously different. But even in that state, if a lot of the traffic is background stuff like BitTorrent, downgrading its priority in order to boost interactive priority (Voice especially) makes a lot of sense.

    Now you also make some arguments about pricing and the costs of enforcing pricing. I think these are wrong. Billing is something that computers are exceptionally good at, so it costs very little to charge for digital things. Dial-up ISPs didn’t go from metered service to flat rate because billing was hard; they did it because it actually reduced peak-time demand for system resources. If people can only log in for an hour a day, they’re going to all pretty much log in at the same time and use the network heavily. If they can stay connected all the time, more of the usage will be at off-hours when system load isn’t as high. So for dial-up, flat rate spreads demand over a wider window of time, and that’s good for the network.

    Regarding the cost of network prioritization hardware, consider that every WiFi adapter sold today has it built-in, so the advance of Moore’s Law has made the traditional speed vs. control arguments in network engineering moot.

  • Christopher Anderson

    In response:

    • The first is the really obvious one: for a given throughput, a “smart” router with more sophisticated prioritization will require more transistors, and therefore will be more expensive, than a “dumb” router that simply routes packets on a first-in, first-out basis. That means that the choice is not between spending more money on capacity or pushing the “prioritize” button on their network control panel. The choice is between spending money on routers (and other stuff) necessary for a faster network or spending money on routers (and other stuff) necessary for smarter network.

    Your basic assumption seems to be that network utilization does not vary. In fact, utilization is highly variable. Real-time applications in converged networks require fundamentally different network characteristics than traditional applications such as the file transfers for which the Internet is currently so popular. File transfers and real-time applications do not suffer the same way during peak utilization times. A minimal amount of ‘smart’ can guarantee real-time applications during peak times, without adding additional capacity. To be more specific, file transfers can tolerate variable bit rates and variable latency (jitter). Real-time applications cannot. Simple big-buffer FIFO techniques don’t make as much sense for converged networks with high utilization. With low utilization, of course, it doesn’t matter. In many cases, current ‘smart’ capability is already available in excess to meet some basic prioritization requirements without expanding capacity of any sort.

    You also seem to forget wireless networks. Bandwidth is a scarce resource in a wireless network. It is much more efficient to prioritize real-time traffic and allow best-effort services to utilize the remaining variable capacity in a wireless network. This is why WiMAX was designed to support QoS from the start.

    • A prioritization scheme that only worked on one network wouldn’t be terribly useful, since the high-priority packets could still get stuck in congestion the moment they left their home network.

    Nope. You neglect contention in the access layer and in first-hop backhauls. A wireless network is a great example where a prioritization scheme in the RF devices alone allows substantial improvement in the user experience. Buffering and then emptying a real-time queue first and a best-effort queue later allows real-time traffic to continue to operate during peak utilization, even if utilization reaches capacity.

    At 6PM, when everyone is home using the Internet, VoIP calls still need to always work. In your best effort only model, certain critical services such as emergency calls over IP are not possible unless the peak is always less than capacity, even if peak utilization is orders of magnitude higher than average utilization.

    You also neglect the fact that all consumer broadband services are over-subscribed, typically 10:1 or more. A consumer broadband ‘line rate’ is not guaranteed. Compare a 1.5 Mbps home DSL line (over-subscribed) monthly charge to a data T1 service (dedicated circuit). In the over-subscription model, prioritization makes sense for adding real-time traffic revenue streams.

    • So if we’re serious about giving a customer end-to-end quality of service guarantees for their VoIP calls, we need to come up with a standard that the vast majority of ISPs adopt.

    It’s called IMS. It comes from the mobile provider world. The basic idea is to allow service guarantees and billing over an IP network. http://en.wikipedia.org/wiki/IP_Multimedia_Subsystem

    • Someone would have to police each user (or ISP’s) use of the prioritization flags to ensure no one simply marks all of their packets high priority.

    This is a trivial problem.

    • There’d need to be an elaborate billing system so that the payment for prioritization is distributed in an equitable manner to the various network owners who carried the traffic for part of its journey.

    IMS. Think service billing, but priority is a part of the service guarantee.

    • In fact, money is an extraordinarily inefficient means of communicating value when the size of the transactions are small. As I’ve said before, this is precisely why micropayments never took off, and are unlikely ever to do so. It’s also why ISPs have almost entirely moved to flat-rate, rather than hourly, billing. Below a certain level, the costs of financial transactions overwhelm the value of what’s being exchanged.

    I think we are moving to a service billing model. I think money is the best way to communicate resource availability. We are moving to a model where I may pay dramatically less for a multimedia experience depending upon the current resource availability to me.

    Example with oranges: Weather damages the orange crop and prices go up. For lunch, I want to buy an orange or an apple. This is a very small transaction. Do I need to stay abreast of current events to realize the effect of relative orange scarcity on my buying habits? Nope, I just notice that the price has doubled for an orange, and I instead eat an apple. Money worked to communicate that oranges are relatively scarce compared to apples and abstracted away the need for me to know why.

    Example with a digital movie: A new movie is released for digital consumption. I have the option of purchasing the right to watch this movie from two vendors. Vendor A runs a large data center that I can connect to over the Internet. To watch this movie I need to download the bits across multiple ISPs, each charging each other fees that are then charged to my ISP and me (eventually). Vendor B instead hosts a caching server in my ISP. If multiple customers of my ISP watch the movie from Vendor B, the total bandwidth bill to my ISP is lessened as the bits are only downloaded once from the Internet and not during each customer viewing experience. One can even distribute this initial download via an out of band network such as a satellite receiver or optical disc, eliminating all upstream bandwidth charges. Do I as the customer need to know about Vendor B’s advanced distributed caching network? No, I just need to know that Vendor B offers me a better price. This better price reflects the resource availability and abstracts away my need to know why.

    Capacity rationing does not solve this problem as well as the distributed information system of money.

    To restate one of my earlier points:

    I believe that private enterprises will have the motivation to shape and prioritize traffic, and they will. As long as the government referees the game and doesn’t play, we will have the most efficient distribution of these resources as these ISPs compete for our dollars. Much better than the government stealing our dollars and then running the network as it sees fit.

    I’ll add now “Or forcing us to deal with bureaucrats to offer innovative Internet services.”

    Of course, in the long run, the US is a small market anyway.

  • Henry Miller

    Prioritization is fine, so long as you only use it for minority cases when you don’t have the bandwidth required. I won’t notice if my bittorrent download rate is slightly below optimum, but if it is substantially below optimum I will notice. If it gets bad enough I will change providers. I currently use a wireless ISP at my house, but I have cable and DSL options. The ISP I use currently fits my needs best, but they are not optimal for everyone.

    If my ISP uses QoS to make sure VoIP works for me 100%, with no noticeable loss to other activities, I’m happy. However, if the loss to other activities becomes noticeable, I will get mad.

    Thus anyone using QoS still needs to pay attention to network usage. They still need proactive network speed upgrades.

    In the future cable TV (in highest quality high definition) will be delivered over IP. (There may be other uses for such bandwidth that we cannot imagine yet because we don’t have it) Not today, but not too many years in the future. Anyone in the network business needs to plan all upgrades with the question of how it will get them closer to that world. (Of course you will need upgrades that are dead ends just to compete along the way, but remember the end will demand a lot more than we get now)

  • http://bennett.com/blog Richard Bennett

    Google and AT&T are learning that delivering HDTV over the Internet is harder than it looks. “In the future” is a pretty expansive term, and in this case it’s more likely to be “years away” than “right around the corner.” See this post and the one it links.
