Comcast, Reset Packets, and Network Neutrality

October 22, 2007

Comcast was kind enough to invite me to a conference call between one of their engineers and some think tank folks. They feel their policies have been mischaracterized in the press. While I found some of the information they shared helpful, I frankly don’t think they helped their case very much.

While he didn’t say so explicitly, the Comcast guy seemed to implicitly concede that the basic allegations are true. He emphasized that they were not blocking any traffic, but that in high-congestion situations they did “delay” peer-to-peer traffic to ease the load. Apparently the Lotus Notes thing was a bug that they’re working to fix. He refused to go into much detail about exactly how this “delay” was accomplished, but presumably if the AP’s story about TCP resets were inaccurate, he would have said so.

To be fair, most of the people on the call were lawyers or economists, not technologists, so it’s possible he just didn’t think anyone other than me would care about these details. Still, it seems like part of the point of having an engineer on the call would be to answer engineering-type questions. He also made a couple of points that I found a little patronizing. For example, he emphasized that most users wouldn’t even be able to detect the traffic-shaping activities they use without special equipment and training. Which is true, I guess, but rather beside the point.

If you haven’t read it yet, I recommend the discussion in response to Jerry’s post. I don’t know enough about the internals of cable modem protocols to know for sure who’s right, but Tom seems to me to make a good point when he says that forging reset packets is a wasteful and disruptive way to accomplish traffic shaping. The TCP/IP protocol stack is layered for a reason, and I can’t see any reason for routers to be mucking around at the TCP layer, when throttling can perfectly well be accomplished in a protocol-neutral manner at the IP layer.
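
To make the “mucking around at the TCP layer” concrete: a reset injection device fabricates a bare TCP segment with the RST flag set and addresses it as if it came from the other endpoint. Here is a minimal sketch of such a segment’s header; the ports and sequence number are hypothetical, and a real injector would also spoof the IP source address, compute the checksum over a pseudo-header, and pick a sequence number inside the victim’s receive window.

```python
import struct

def build_tcp_rst(src_port, dst_port, seq):
    """Pack a minimal 20-byte TCP header with only the RST flag set.

    Illustrative only: a working injector must also spoof the IP source
    address and fill in a valid checksum, both omitted here.
    """
    offset_flags = (5 << 12) | 0x04   # data offset = 5 words; RST bit is 0x04
    return struct.pack(
        "!HHIIHHHH",
        src_port, dst_port,
        seq,       # sequence number: must fall inside the peer's window
        0,         # ack number (unused when the ACK flag is clear)
        offset_flags,
        0,         # window size: irrelevant on a reset
        0,         # checksum: left zero in this sketch
        0,         # urgent pointer
    )

segment = build_tcp_rst(6881, 51413, 123456789)
assert len(segment) == 20
assert segment[13] == 0x04  # RST flag set, nothing else
```

By contrast, throttling at the IP layer needs none of this machinery: the router simply delays or drops datagrams without inspecting or forging TCP state.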

Someone asked why Comcast didn’t throttle on a user-by-user basis rather than a protocol-by-protocol basis, and he said they were concerned with the privacy implications of that approach. That doesn’t make a lot of sense to me. Very few users are going to consider the number of bits they’ve transferred in a given time period to be confidential information.

We also asked about why there wasn’t more transparency about what throttling methods were being used and against which protocols. Apparently, Comcast feels that disclosing those sorts of details will make it easier for users to circumvent their throttling efforts. That doesn’t strike me as terribly persuasive; customers are entitled to know what they’re getting for their money, and people are going to figure it out sooner or later anyway. All secrecy accomplishes is to make them look bad when someone discovers it and reports it to the press.

With all that said, I’m not sure I see an obvious policy response. It seems to me that regardless of what the law says, there’s always going to be a certain amount of cat-and-mouse between ISPs and the heaviest network users. As Don Marti has pointed out, workarounds are easy to find. Add in a healthy dose of negative publicity, and it seems to me that while Comcast’s behavior is far from laudable, it’s far from obvious it’s a serious enough problem to justify giving the FCC the opportunity to second-guess every ISP’s routing policies.

  • http://bennett.com/blog Richard Bennett

    Tim makes a number of erroneous assumptions in this post, but let’s just focus on one of them: “The TCP/IP protocol stack is layered for a reason, and I can’t see any reason for routers to be mucking around at the TCP layer, when throttling can perfectly well be accomplished in a protocol-neutral manner at the IP layer.”

    The problem Comcast needs to address is at neither the TCP layer nor the IP layer; it’s lower down, at the MAC layer. The cable modem protocol, DOCSIS, requires each user to request bandwidth for upstream transfers one packet at a time. All applications using TCP, or even UDP with feedback, need to send upstream ACKs even if most of their traffic is downstream.

    The problem that DOCSIS has is that the bandwidth request packets are sent in a slot in which all the cable modems on the wire are permitted to transmit, so these packets are lost under high load due to collisions. When the bandwidth request fails due to a collision with other bandwidth requests, the cable remains idle in the upstream direction and nobody gets to transmit. So everybody’s user experience goes to hell.

    Delaying or discarding IP packets will not alleviate this problem, it will simply cause more retransmissions and essentially make it worse. The only way out in DOCSIS 1.1 and 2.0 is to reduce the number of bandwidth requests, which is what Comcast’s method does.

    From a network engineering perspective, Comcast is behaving pretty much as they should behave. Please read the paper I cited in the previous comment: The Interaction Between the DOCSIS 1.1/2.0 MAC Protocol and TCP Application Performance. It’s not that hard to understand this: “Figure 5 shows that the collision rates get extremely high as the number of active CMs increase. When only 100 users are active, the collision rate is about 50%. What makes this result alarming is that the web traffic model accounts for the heavy tailed distribution associated with web user idle times.

    Consequently, the number of users actually competing for bandwidth at any given time is much less than 100. As the load increased, the collision rate approached 90-100% depending on the MAP_TIME setting.”

    Comcast has to reduce the rate of DOCSIS collisions to keep customers happy. The method they use is the best one going.
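
Richard’s collision argument can be illustrated with a toy model. The following is a simplified slotted-contention simulation, not the actual DOCSIS request/backoff machinery, and the probabilities are invented for illustration:

```python
import random

def contention_round(n_active, p_request, trials=10000, rng=None):
    """Fraction of transmitted bandwidth requests that collide.

    Simplified slotted-ALOHA stand-in for the DOCSIS contention region:
    each of n_active modems sends a request with probability p_request;
    a slot carrying two or more requests is a collision and every
    request in it is lost.
    """
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    sent = collided = 0
    for _ in range(trials):
        requests = sum(rng.random() < p_request for _ in range(n_active))
        if requests >= 2:
            collided += requests
        sent += requests
    return collided / sent if sent else 0.0

# The collision rate climbs steeply as more modems contend, which is
# the shape of the result the paper reports.
assert contention_round(5, 0.1) < contention_round(100, 0.1)
```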

  • http://www.freedom-to-tinker.com Ed Felten

    Richard,

    The paper you cite points out inefficiencies in DOCSIS (when used with TCP, which it almost always will be). But it doesn’t provide technical justification for the steps Comcast is taking. The paper’s main result is that DOCSIS has trouble when lots of users are browsing the Web. Then why doesn’t Comcast throttle Web traffic? Why do they throttle by sending RSTs rather than dropping packets or adjusting behavior at the DOCSIS level? Isn’t the real problem that their network is underprovisioned, with too many users sharing the same termination system?

  • http://bennett.com/blog Richard Bennett

    DOCSIS assumes that most traffic will be downstream, and patterns of usage are bursty. Plug those assumptions into a deployment model and some figures pop out regarding the number of modems per CMTS that may reasonably be deployed.

    Servers violate these assumptions, and make network access hell for everybody. Comcast bans servers inside its network, and that’s just what a BitTorrent seed is.

    BitTorrent downloads aren’t affected, and neither is seeding at the same time that a download is taking place.

    So what’s the problem?

    You ask: “Why do they throttle by sending RSTs rather than dropping packets or adjusting behavior at the DOCSIS level?”

    If the problem is that excessive requests for upstream bandwidth result in collisions, and the number of requests that rate as “excessive” is small, then there really isn’t anything they can do short of RSTs to reduce the number of requests, is there?

    And yes, overprovisioning the network would defer the problem, but they’d rather spend that money upgrading to DOCSIS 3.0. I don’t blame them; it’s a wise move.
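
The deployment model Richard alludes to, where “figures pop out” about modems per CMTS, amounts to oversubscription arithmetic. The channel rate, sold rate, and duty cycle below are invented for illustration, not Comcast’s actual figures:

```python
def modems_per_upstream(channel_kbps=10240, sold_kbps=512, duty_cycle=0.05):
    """Back-of-envelope oversubscription: how many modems one upstream
    channel supports if subscribers use their sold rate only duty_cycle
    of the time. Illustrative numbers only.
    """
    return int(channel_kbps / (sold_kbps * duty_cycle))

# Bursty users: one ~10 Mbps DOCSIS 2.0 upstream covers hundreds of modems.
assert modems_per_upstream() == 400
# A seeder pushes its duty cycle toward 1.0, collapsing the ratio.
assert modems_per_upstream(duty_cycle=1.0) == 20
```

This is why a handful of always-on seeders can invalidate the assumptions the whole deployment was sized against.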

  • John Jackson

    Servers violate these assumptions, and make network access hell for everybody. Comcast bans servers inside its network, and that’s just what a BitTorrent seed is.

    And so in effect are applications such as iChat, which send lots of traffic upstream. I find that an iChat session with someone on a cable network is generally terrible. They get a great picture from me, while I get a blocky bunch of pixels since their upload speed is atrocious.

    Cable networks really defeat the idea of dumb networks and smart applications.

  • Ryan Radia

    Richard, I’ve read all your comments on this issue and you’ve made a convincing case that from a technical standpoint, what Comcast is doing is the most efficient way of preventing network congestion on the node level. I appreciate your shedding some light on why Comcast chose this method, and had Comcast simply offered that explanation I think people would be less angry and more understanding.

    I’d still like to know why Rogers and Shaw seem to be able to manage P2P traffic using QoS and deep packet inspection instead of spoofing RST packets, considering they use the same version of DOCSIS. And why is Comcast focusing on a single protocol? There are lots of internet applications causing node saturation that Comcast could limit. A protocol-neutral solution would be less likely to draw ire from net neutrality advocates.

    Also, wouldn’t you agree that Comcast would appear less sinister if, instead of manipulating network traffic, it implemented policies to discourage demand for bandwidth? What about providing customers a finite amount of upstream bandwidth during peak hours, or charging extra to individuals who want to use BitTorrent without their connections being terminated? These methods really wouldn’t be too hard to deploy. In fact, Comcast allegedly has a tiered pricing model ready to roll out, but they are scared to do it. But what are they scared of? Surely the blog hell they’re experiencing at the moment is far worse than if they had designed a transparent, clear-cut method for allowing customers to work within Comcast’s limitations.
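
The protocol-neutral upstream cap Ryan describes is conventionally built from a token bucket. A minimal sketch, with invented rate and burst figures rather than any ISP’s real policy:

```python
class TokenBucket:
    """Per-subscriber upstream cap: rate_bps sustained with burst_bytes of
    headroom, applied regardless of which application the bytes belong to.
    """
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8          # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.clock = 0.0

    def allow(self, packet_bytes, now):
        # Refill tokens for the elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.clock) * self.rate)
        self.clock = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False  # caller queues or drops the packet instead

bucket = TokenBucket(rate_bps=512_000, burst_bytes=64_000)
assert bucket.allow(64_000, now=0.0)       # burst allowance spent at once
assert not bucket.allow(1_500, now=0.0)    # bucket drained
assert bucket.allow(1_500, now=1.0)        # refilled at 64 KB/s a second later
```

Unlike RST injection, this never touches TCP state and treats BitTorrent, iChat, and web uploads identically.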

  • http://bennett.com/blog Richard Bennett

    I’m not familiar with the methods Shaw and Rogers use, beyond the reports that they limit BT bandwidth and (in the case of Rogers) also prevent seeding: see the Azureus Wiki. There are other ways to do this, of course, and there’s a running game of escalation between BT and the ISPs as the ISPs develop new ways to detect BT and it devises new ways to hide.

    Such methods as deep packet inspection and QoS have also drawn the ire of NN dogmatists, so it wouldn’t benefit Comcast any to use them instead of the ad hoc admission control they’re using.

    One thing I’ll gladly concede is that Comcast hasn’t handled the media and public relations aspect of the kerfuffle at all well, and hasn’t even defended itself from the charges of identity theft lobbed by Susan Crawford et al. The game of blog journalism is rough and tumble, and Comcast is too polite and too aloof to play it well.

  • http://bennett.com/blog Richard Bennett

    Pardon my fingers, the Wiki I mentioned is here: http://www.azureuswiki.com/index.php/Bad_ISPs#Canada.

  • http://add-blogbanner.blogspot.com pramodh

    Comcast is in the cable TV business. BitTorrent is an efficient way to deliver video content to large numbers of consumers – which makes BitTorrent a natural competitor to cable TV. BitTorrent isn’t a major rival yet, but it might plausibly develop into one.
