Network Neutrality and Jitter

July 11, 2006

Ed Felten has a great new paper out on network neutrality regulations. Here’s his policy conclusion:

The network neutrality issue is more complex and subtle than most of the advocates on either side would have you believe. Net neutrality advocates are right to worry that ISPs can discriminate–and have the means and motive to do so–in ways that might be difficult to stop. Opponents are right to say that enforcing neutrality rules may be difficult and error-prone. Both sides are right to say that making the wrong decision can lead to unintended side-effects and hamper the Internet’s development.

There is a good policy argument in favor of doing nothing and letting the situation develop further. The present situation, with the network neutrality issue on the table in Washington but no rules yet adopted, is in many ways ideal. ISPs, knowing that discriminating now would make regulation seem more necessary, are on their best behavior; and with no rules yet adopted we don’t have to face the difficult issues of line-drawing and enforcement. Enacting strong regulation now would risk side-effects, and passing toothless regulation now would remove the threat of regulation. If it is possible to maintain the threat of regulation while leaving the issue unresolved, time will teach us more about what regulation, if any, is needed.

And here’s one other thought that struck me while reading the paper. In his discussion of encryption, Felten writes:

But the ISP can use a different and more effective strategy. If the ISP wants to hamper a particular application, and there is a way to manipulate the user’s traffic that affects that application much more than it does other applications, then the ISP has a way to punish the targeted application. Recall from earlier that VoIP is especially sensitive to jitter (unpredictable changes in delay), but most other applications can tolerate jitter without much trouble. If the ISP imposes jitter on all of the user’s packets, the result will be a big problem for VoIP services, but will not have much impact on other applications.

This policy would disproportionately harm any application that requires low-latency communications. Obviously, VoIP is one of the most important such applications, but it’s far from the only one. Others include networked video games such as World of Warcraft and Quake, Unix shell and X Window sessions, and virtual desktop applications like VNC.
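To get a feel for how blunt an instrument jitter is, here’s a minimal simulation (a sketch with made-up but plausible numbers, not a measurement): a VoIP stream sends a packet every 20 ms and plays it through a 50 ms de-jitter buffer, while a bulk download cares only about total transfer time. The same random delay cripples the former and barely touches the latter.

```python
import random

random.seed(1)

PACKETS = 1000        # 20 seconds of audio at 50 packets per second
INTERVAL_MS = 20      # VoIP sends one packet every 20 ms
BUFFER_MS = 50        # playout buffer: packets later than this are dropped
BASE_DELAY_MS = 30    # normal one-way network delay

def imposed_jitter_ms():
    # ISP-imposed jitter: 0 to 200 ms of extra random delay per packet
    return random.uniform(0, 200)

# VoIP: each packet must arrive in time to be played back.
late = 0
for i in range(PACKETS):
    send_ms = i * INTERVAL_MS
    arrival_ms = send_ms + BASE_DELAY_MS + imposed_jitter_ms()
    deadline_ms = send_ms + BASE_DELAY_MS + BUFFER_MS
    if arrival_ms > deadline_ms:
        late += 1     # missed the playout deadline: an audible glitch

print(f"VoIP packets lost to jitter: {late / PACKETS:.0%}")  # roughly 75%

# A bulk download over the same link sees the same per-packet delays, but
# only total transfer time matters: at most ~0.2 s added to a transfer
# that already takes many seconds. Throughput is essentially unchanged.
```

Three quarters of the voice packets miss their deadline, which makes the call unusable, while the same policy adds a barely perceptible fraction of a second to a file download.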

And this problem will get worse as web applications get more interactive and “ajaxy.” Google Maps, for example, loads new tiles on the fly as the user scrolls around the map. If you introduced a second of jitter into a user’s broadband connection, there’d be a noticeable delay in loading map tiles. More and more, web applications depend on constant communication between client and server.

Hence, while ISPs can discriminate against low-latency applications as a class, I don’t see how they can distinguish among different low-latency applications. If they want to make their pipes unusable for VoIP, they have to make them unusable for Xbox Live and Remote Desktop too. Which raises the stakes of a “jitter” policy considerably, because you’re going to piss off a lot of customers who weren’t using VoIP at all.

But couldn’t the ISP just add jitter to encrypted packets and leave unencrypted packets alone? The problem is that it’s not always possible to tell which packets are encrypted. It’s a trivial matter to disguise an encrypted VoIP session as, say, a World of Warcraft session: the packets would have the same format as WoW game data, but the data within them would be voice data rather than game data.
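Here’s a sketch of what such a disguise looks like. The “game protocol” header below is entirely hypothetical (I’m not describing WoW’s real wire format); the point is just that a tunnel can wrap encrypted voice bytes in whatever framing an innocuous protocol uses, so the packets are indistinguishable by format alone.

```python
import os
import struct

# Hypothetical game-protocol framing: a 2-byte opcode, a 2-byte payload
# length, and a 4-byte sequence number. Any concrete layout would do; the
# tunnel just has to reproduce it faithfully.
FAKE_MOVEMENT_OPCODE = 0x00EE

def wrap_as_game_packet(encrypted_voice: bytes, seq: int) -> bytes:
    """Frame an encrypted voice payload as if it were game traffic."""
    header = struct.pack(">HHI", FAKE_MOVEMENT_OPCODE,
                         len(encrypted_voice), seq)
    return header + encrypted_voice

def unwrap_game_packet(packet: bytes) -> bytes:
    """Receiving side: strip the fake framing to recover the voice data."""
    opcode, length, seq = struct.unpack(">HHI", packet[:8])
    return packet[8:8 + length]

# Encrypted audio is indistinguishable from random bytes -- which is what
# the payload of many real protocols looks like to a filter anyway.
voice_frame = os.urandom(160)   # one 20 ms voice frame, already encrypted
on_the_wire = wrap_as_game_packet(voice_frame, seq=1)
assert unwrap_game_packet(on_the_wire) == voice_frame
```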

Of course, you might be able to create a filter that distinguishes real WoW packets from bogus ones, but there are two problems with that. First, Internet protocols, like most things on the Internet, follow a long-tail distribution, which means your filter would have to know about hundreds and hundreds of different protocols in order to effectively block this kind of steganographic tunneling.
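To see why the long tail bites, picture the ISP’s classifier as a table of known protocol signatures. This is a toy sketch (real deep-packet inspection is more sophisticated, and these signatures are only illustrative), but it faces the same coverage problem: everything the table doesn’t know about lands in the same “unknown” bucket.

```python
# Toy traffic classifier: match the first bytes of a packet against a
# table of known protocol signatures.
SIGNATURES = {
    b"GET ":     "http",
    b"SSH-":     "ssh",
    b"\x16\x03": "tls-handshake",
    # ...even hundreds more entries can't cover the long tail of protocols
}

def classify(payload: bytes) -> str:
    for prefix, protocol in SIGNATURES.items():
        if payload.startswith(prefix):
            return protocol
    return "unknown"

print(classify(b"GET /index.html HTTP/1.1"))   # "http"
print(classify(b"\x00\xee\x00\xa0binary..."))  # "unknown"

# The ISP's dilemma: "unknown" contains disguised VoIP *and* every
# obscure-but-legitimate protocol the table has never heard of. Jittering
# the "unknown" bucket punishes all of them alike.
```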

More importantly, you’d be in an arms race with hundreds of clever hackers, who would find ever more creative ways to hide encrypted packets inside unencrypted protocols. And it’s likely that the hackers would find ways to rapidly distribute new hacks. We’ve already seen this phenomenon with unauthorized instant messaging clients: they all use the same libraries to achieve interoperability, and when AOL or Yahoo! changed its protocol in hopes of cutting off unauthorized clients, the creators of the library would find a new workaround, release it, and the authors of the unauthorized clients would incorporate the new library into their applications.

The entire process typically took about two days, and required the user only to download and install a new version of the application. And it was done entirely by volunteers. Now imagine how much harder the ISPs’ job would be if multiple commercial companies were developing, sharing, and distributing workarounds.

Now, it might be that ordinary users won’t have the stomach for the occasional delays such an arms race would create. But that cuts in both directions. If Vonage works for two months and then suddenly stops working for a day or two until Vonage releases a new workaround, Comcast (or whichever ISP is doing the blocking) is likely to get flooded with angry customer calls. People are much more incensed at having an existing service cut off than at never having had the service in the first place.

In short, I don’t believe Comcast or Verizon could block VoIP without making the Internet substantially less useful to a large number of non-VoIP users. There are too many smart hackers with too many tools in their toolboxes for blocking software to ever be especially effective. And that’s hardly a good business strategy.
