If a Trend Can’t Continue, It Won’t Continue

June 24, 2008

Ars has an interesting interview with the head of Sandvine, the company at the heart of the Comcast Kerfuffle:

What Caputo seems to think he’s doing with Sandvine is enabling “all-you-can-eat” models at reasonable prices. People who argue for network neutrality are “painting the service providers into a corner,” he says in the interview. “If all packets are created equal then it’s equal utility and we should be charging on a per-packet basis, and I don’t think anybody wants to go there.”

Without traffic management, especially of P2P, the idea is that either prices would go up or congestion would reach truly terrible new heights, and Caputo believes that most users would rather just throttle P2P; let it work, but slowly and in the background, so that ISPs don’t need to make expensive infrastructure improvements and everyone can continue eating at the buffet for $30 or $40 a month. We might also see tiers emerge that allow P2P users free rein for, say, $70 a month, while non-P2P users could keep paying lower prices. Caputo insists, “it’s going to be laughable in the next two or three years that people used to say all packets should be treated equally.”

This strikes me as seriously misguided. The obvious problem is the issue of evasion. For example, I’ve written before about BitTorrent header encryption, a technique that helps BitTorrent users evade deep packet inspection. No doubt Sandvine is working on finding new ways to detect encrypted BitTorrent packets. And if they do, the BitTorrent hordes will start looking for better evasion strategies. An arms race, and one that Sandvine is unlikely to win.
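To make the detection-and-evasion point concrete, here’s a minimal Python sketch of the naive signature matching a DPI box might perform (the function and variable names are mine, for illustration only). The unencrypted BitTorrent handshake begins with the byte 0x13 followed by the literal string “BitTorrent protocol”, which is trivial to match; an MSE/PE-encrypted handshake carries no such fixed marker, so a check like this comes up empty:

```python
# Illustrative only: naive signature-based detection of a plaintext
# BitTorrent handshake, the sort of check that header encryption defeats.
import os

# The unencrypted handshake starts with the length byte 0x13 followed by
# the literal protocol string, then 8 reserved bytes, a 20-byte info_hash,
# and a 20-byte peer_id.
BT_HANDSHAKE_PREFIX = b"\x13BitTorrent protocol"

def looks_like_bittorrent(payload: bytes) -> bool:
    """Return True if a TCP payload begins with the plaintext handshake."""
    return payload.startswith(BT_HANDSHAKE_PREFIX)

plain = BT_HANDSHAKE_PREFIX + bytes(8) + bytes(20) + bytes(20)
print(looks_like_bittorrent(plain))        # True: trivially detected

# An MSE/PE-encrypted handshake looks like random bytes, so the same
# signature check finds nothing to match on.
encrypted = os.urandom(68)
print(looks_like_bittorrent(encrypted))    # False (overwhelmingly likely)
```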

The more fundamental problem, though, is that it’s not clear what business strategy Sandvine thinks it’s supporting. Right now, there’s a reasonably clear distinction between “bad” peer-to-peer technologies (which use a lot of bandwidth and are mostly used for “bad” purposes) and “good” everything else. But there’s no reason to think this distinction will hold up in the long run. We’ve seen the rapid growth of video services, and that will only accelerate as people start downloading high-definition movies over the Internet. You’ll start to see video junkies who consume as much bandwidth as the worst BitTorrent abuser but aren’t doing anything anyone would regard as unethical or even abusive. Network owners who haven’t over-provisioned will face the same dilemma, but without the easy out of being able to pick on “bad” technologies.

At that point, networks will have three options. One is to start picking and choosing among network applications: deciding that, say, iTunes is a “good” application and gets priority bandwidth, while Hulu is a “bad” application that needs to be throttled. This strikes me as rather unworkable, even leaving aside the evasion issues. The deeper problem is that large ISPs are sluggish bureaucratic organizations. Their default posture will have to be to disallow applications until they’ve passed some kind of approval process, and the approval process will inevitably be overly conservative. That means that in the long run, networks that choose this strategy will put themselves at a serious competitive disadvantage relative to competitors that choose neutral management strategies.

Indeed, the best prioritization scheme would probably be “prioritize low-bandwidth applications over high-bandwidth ones.” But at the limit, this strategy becomes indistinguishable from metering. Metering could take two forms. One would be to simply enforce some kind of rough per-user equality: that is, during high-congestion periods, drop the packets of the heaviest users first. This would achieve largely the same effect as application-based throttling with few of the downsides. When users needed good performance, they could switch off their peer-to-peer clients and get priority service. The rest of the time, they would get whatever bandwidth was left over after lower-bandwidth users had had their fill.
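As a toy illustration of that first form, here’s a short Python sketch of a scheduler that tracks recent bytes per subscriber and, when offered load exceeds the link’s capacity, throttles the heaviest users first. Everything here (class name, thresholds, usage numbers) is invented for illustration; it’s not Sandvine’s or any ISP’s actual logic:

```python
from collections import defaultdict

class FairCongestionManager:
    """Toy model: under congestion, shed load from the heaviest users first."""

    def __init__(self, link_capacity_bytes_per_sec: int):
        self.capacity = link_capacity_bytes_per_sec
        self.recent_usage = defaultdict(int)   # user_id -> bytes this interval

    def record(self, user_id: str, nbytes: int) -> None:
        self.recent_usage[user_id] += nbytes

    def users_to_throttle(self, offered_load: int) -> list[str]:
        """If offered load exceeds capacity, pick the heaviest users until the
        excess is covered; lighter users keep full-speed service."""
        excess = offered_load - self.capacity
        if excess <= 0:
            return []                          # no congestion, no throttling
        throttled = []
        # Heaviest first: the users contributing most to the overload.
        for user, used in sorted(self.recent_usage.items(),
                                 key=lambda kv: kv[1], reverse=True):
            throttled.append(user)
            excess -= used
            if excess <= 0:
                break
        return throttled

# Example: a 10 MB/s link shared by one heavy seeder and two light browsers.
mgr = FairCongestionManager(10_000_000)
mgr.record("heavy-seeder", 9_000_000)
mgr.record("web-user-1", 2_000_000)
mgr.record("web-user-2", 1_000_000)
print(mgr.users_to_throttle(offered_load=12_000_000))  # ['heavy-seeder']
```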

The second form of metering is actual usage-based billing, which Adam has championed here at TLF. I think this could work reasonably well, provided that it comes with a generous flat-rate bandwidth allotment high enough that most users won’t reach it. Metering from the very first bit is likely to be a customer service disaster, because casual Internet users will find the idea of paying by the megabit (a totally mysterious unit of measurement to most people) confusing. But an ISP that capped bandwidth at, say, the 95th percentile of usage and metered above that point could do pretty well.
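For a back-of-the-envelope sense of how such a plan might be priced, here’s a small sketch that sets the cap at roughly the 95th percentile of monthly usage and bills only the overage above it. The allotment, the per-gigabyte rate, and the usage figures are all made up:

```python
import math

def percentile(values: list[float], pct: float) -> float:
    """Nearest-rank percentile: the smallest value with at least pct% of
    users at or below it."""
    ordered = sorted(values)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def monthly_bill(usage_gb: float, cap_gb: float,
                 flat_rate: float = 40.0, per_gb_over: float = 0.50) -> float:
    """The flat rate covers everything up to the cap; only overage is metered."""
    overage = max(0.0, usage_gb - cap_gb)
    return flat_rate + overage * per_gb_over

# Hypothetical monthly usage (in GB) for a small subscriber base.
usage = [2, 3, 4, 5, 6, 8, 10, 12, 15, 18,
         20, 25, 30, 40, 50, 60, 80, 120, 200, 950]
cap = percentile(usage, 95)   # only the heaviest ~5% exceed this (200 here)

for gb in (20, 120, 950):
    print(f"{gb} GB -> ${monthly_bill(gb, cap):.2f}")
# 20 GB and 120 GB stay at the $40 flat rate; only the 950 GB user pays more.
```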

The final option is to maintain the current “all you can eat” business model: building out more capacity and accepting that things will get congested during peak periods. For all the hand-wringing we’ve seen in recent years, this strategy has served us well for more than a decade and I think it may continue to work. We’ve had alarmist predictions of impending doom for just as long, and so far they’ve always been wrong. In practice, the Internet has a variety of self-regulating mechanisms that tend to prevent things from getting out of control. One of them is the bare fact that truly high-bandwidth applications don’t take off until there’s a critical mass of users with sufficient bandwidth to take advantage of them. High-def video applications won’t appear magically at a fixed date. They’ll succeed when enough home users have broadband connections fast enough to accommodate them.

It’s important to bear in mind that it’s probably difficult for ISPs to keep a sense of perspective on this. They’re on the front lines, always dealing with the latest bandwidth crisis. Their attention is focused on the parts of the network that are congested, and they’re forever being forced to invest in upgrades to keep up with demand. From their perspective, they’ve stayed a step ahead of demand only with the greatest effort, and it can feel as though traffic growth is about to overwhelm them.

Yet things have looked roughly the same for more than a decade, and as far as I can see, there’s no reason to think this is a unique moment of crisis. ISPs will probably figure out ways to accommodate most of their customers’ bandwidth needs most of the time, as they have since the mid-1990s.

“Network management” with deep packet inspection strikes me as a clumsy solution to an oversold problem. The Internet was designed to give end-users control for good reasons, and from where I sit DPI looks like a counterproductive effort to change that. Certainly Caputo’s blustery assertion that neutral networks will be a thing of the past by 2011 doesn’t pass the straight face test.
