Multicast and Network Neutrality

May 9, 2006

Robert X. Cringely has an interesting article about the future of digital content distribution and peer-to-peer networks. I think his big thesis–that the existing one-to-many, end-to-end model for distributing video content won’t scale–is right. But I think he’s missing a few things when he points to peer-to-peer technologies as the savior.

Here’s the technical problem: Right now, if ABC wants to deliver 20 million copies of Desperate Housewives over the Internet, it would have to transmit the same stream of bits 20 million times to its ISP. The ISP, in turn, might have to transmit 5 million copies to each of 4 peers. Those peers, in turn, might each have to transmit a million copies to each of 5 of their peers. And so on down the line, until each end user receives a single copy of the content. That’s wasteful, because sending 20 million redundant copies of a file uses a lot of bandwidth.
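
To check the arithmetic on that hypothetical tree–one ISP, 4 peers, then 5 peers each–here’s a quick Python sketch. Under the current model, every tier ends up carrying the full 20 million copies in aggregate:

    # Back-of-the-envelope check on the hypothetical tree described above:
    # ABC -> its ISP -> 4 peers -> 5 peers each, 20 million viewers in total.
    VIEWERS = 20_000_000
    FANOUT = [1, 4, 5]  # branching factor at each tier below ABC

    links = 1
    for tier, branch in enumerate(FANOUT, start=1):
        links *= branch              # number of links at this tier
        per_link = VIEWERS // links  # copies each link has to carry
        print(f"tier {tier}: {links} links x {per_link:,} copies = {links * per_link:,} total")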

In a perfect world, ABC should only have to transmit one copy to its ISP, and the ISP, in turn, should only have to transmit one copy to each interested peer, and so on. Each Internet node would receive one copy and transmit several, until everyone who wants a copy is able to get one. Geeks call this multicast. It’s theoretically part of the TCP/IP protocol suite, but for a variety of technical reasons I don’t fully understand, it hasn’t proved feasible to implement multicast across the Internet as a whole.
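
For the curious, here’s roughly what joining a multicast group looks like at the socket level, in Python. The group address and port are placeholders I made up for illustration; this sort of thing works fine within a single network, and it’s Internet-wide (interdomain) multicast that never really got deployed. In the tree above, multicast would mean the three tiers carry 1, 4, and 20 copies instead of 20 million apiece:

    import socket
    import struct

    GROUP = "239.1.2.3"  # placeholder multicast group address
    PORT = 5004          # placeholder port

    # A receiver joins the group; the network, not the sender, is responsible
    # for fanning one transmitted copy out to every member.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Tell the kernel (and, via IGMP, the local routers) we want this group's traffic.
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

    data, sender = sock.recvfrom(65535)  # blocks until someone sends to the group
    print("received", len(data), "bytes from", sender)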

However, there are plenty of quasi-multicast technologies out there. One of the most important is Akamai’s EdgePlatform. It’s a network of 18,000 servers around the world that serve as local caches for distributing content. So when a company like Apple wants to distribute 20 million copies of a file, it doesn’t have to transmit it 20 million times. Instead, it transmits the content to Akamai’s servers (and presumably Akamai’s servers distribute it among themselves in a peer-to-peer fashion) and then users download the files from the Akamai server that’s topologically closest to them on the network.
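
I don’t know the details of how Akamai actually steers users to a particular server, but the basic idea looks something like this toy Python sketch (the server names and round-trip times are invented): the client gets pointed at whichever replica is closest on the network, so the origin only ships one copy to the cache tier rather than one copy per viewer.

    # Toy illustration only--not Akamai's actual system. Server names and RTTs are made up.
    edge_caches = {
        "cache-nyc.example.net": 12,  # hypothetical round-trip times, in milliseconds
        "cache-chi.example.net": 31,
        "cache-sfo.example.net": 74,
    }

    def pick_edge(caches):
        """Pick the replica with the lowest measured RTT (a stand-in for DNS-based steering)."""
        return min(caches, key=caches.get)

    print("fetch from", pick_edge(edge_caches))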

This sort of arrangement gives you many of the advantages of true multicast without having to solve the thorny technical problems raised by genuine multicast at the IP level. And it’s not really a “peer-to-peer” solution, either: Akamai’s servers are commercial boxes owned by Akamai, which charges content companies for the use of the network.

This sort of technology is likely to be a major component of any wide-scale video broadcasting over the Internet. Indeed, the logical people to do this sort of local caching are the broadband ISPs themselves. Comcast could set up a bunch of caching servers that receive content from the Internet and re-transmit it to its own customers. Not only could it likely charge content companies for the service, but it would also save money on bandwidth, because traffic over its backbone links would drop.
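
Conceptually, the cache an ISP would run isn’t much more complicated than this Python sketch (the URL is a placeholder, and a real deployment would obviously need expiration, storage limits, and so on). The first request for a file crosses the backbone; every later request is served from the local copy:

    import urllib.request

    _cache = {}

    def fetch(url):
        """Return the file at url, pulling it across the backbone only once."""
        if url not in _cache:
            with urllib.request.urlopen(url) as response:  # the one upstream transfer
                _cache[url] = response.read()
        return _cache[url]  # every subsequent request stays local

    # fetch("http://example.com/episode.mp4")  # first call downloads; later calls don't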

For the most part, peer-to-peer applications perform an equivalent function. But they’re unlikely to be as efficient or as reliable. Peer-to-peer applications generate a lot of unnecessary traffic of their own, because content is downloaded to a node and then immediately uploaded again from that same node. Although this is often bandwidth that wouldn’t have been used anyway, it’s clearly not the most efficient way of doing things. Moreover, although peer-to-peer networks do their best to find the peers closest to them, their knowledge of network topology can’t possibly be as good as the ISP’s. Hence, peer-to-peer networks will have much less ability to optimize the process and minimize the number of redundant packets transmitted.
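
To illustrate the topology point, here’s a toy Python comparison (the peer addresses and network numbers are invented): a tracker that knows nothing about the network hands back random peers, while a topology-aware selector–say, one fed hints by the ISP–could prefer peers inside the downloader’s own network and keep the traffic off expensive transit links.

    import random

    MY_NETWORK = 7922  # the downloader's own ISP (number is illustrative)

    peers = [
        {"addr": "10.0.0.1", "network": 7922},  # same ISP as the downloader
        {"addr": "10.0.0.2", "network": 7922},
        {"addr": "10.0.1.7", "network": 3356},  # reachable only over a transit link
        {"addr": "10.0.2.9", "network": 1299},
    ]

    def random_peers(peers, k=2):
        """What a topology-blind tracker effectively does."""
        return random.sample(peers, k)

    def topology_aware_peers(peers, k=2):
        """Prefer peers inside the downloader's own network, then fall back to distant ones."""
        return sorted(peers, key=lambda p: p["network"] != MY_NETWORK)[:k]

    print("random pick:    ", random_peers(peers))
    print("topology-aware: ", topology_aware_peers(peers))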

None of which is to say peer-to-peer applications are bad. They clearly have some advantages, most notably the fact that they don’t require any dedicated hardware. However, as the scale of digital distribution grows, it seems likely that they will be partially supplanted by more efficient and robust caching schemes. And if the ISPs are smart, they’ll do the caching themselves.

Which brings me to the network neutrality point: how would a network neutrality rule regard this sort of scheme? After all, Akamai already does precisely what NN advocates fret about: it allows big companies to pay for their content to get to consumers faster. It seems to me that’s indisputably a good thing in this case, but it’s not clear how the FCC would regard it when applying a NN rule. Would a network neutrality rule forbid Comcast from signing deals with content companies to cache their content locally for faster delivery? If so, is that a good thing or a bad thing?

Update: You should all read the comments by George below, who offers a compelling argument that peer-to-peer swarms will generally outperform centrally-managed caching schemes.
