Some basics about edge caching, network management, & Net neutrality

by Adam Marcus on December 18, 2008

The introduction below was originally written by Adam Thierer, but now that I (Adam Marcus) am a full-fledged TLF member, I have taken authorship.
___________________________________________________

My PFF colleague Bret Swanson had a nice post here yesterday talking about the evolution of the debate over edge caching and network management (“Bandwidth, Storewidth, and Net Neutrality”), but I also wanted to draw your attention to a related essay by another PFF colleague of mine. Adam Marcus, who serves as a Research Fellow and Senior Technologist at PFF, has started a wonderful series of “Nuts & Bolts” essays meant to “provide a solid technical foundation for the policy debates that new technologies often trigger.” His latest essay is on Network neutrality and edge caching, which has been the topic of heated discussion since the Wall Street Journal’s front-page story on Monday reporting that Google had approached major cable and phone companies and supposedly proposed to create a fast lane for its own content.

Anyway, Adam Marcus gave me permission to reprint the article in its entirety down below. I hope you find this background information useful.
___________________________________________________

Nuts and Bolts: Network neutrality and edge caching

by Adam Marcus, Progress & Freedom Foundation

December 17, 2008

This is the second in a series of articles about Internet technologies. The first article was about web cookies. This article explains the network neutrality debate. The goal of this series is to provide a solid technical foundation for the policy debates that new technologies often trigger. No prior knowledge of the technologies involved is assumed.

To understand the network neutrality debate, you must first understand bandwidth and latency. There are lots of analogies equating the Internet to roadways, and that’s because the analogies are quite instructive. For example, if one or two people need to travel across town, a fast sports car is probably the fastest method. But if 50 people need to travel across town, a single sports car might need 25 trips, so a bus that can carry all 50 people in one trip may be “faster” overall. The sports car is quicker, but the bus has more capacity. Bandwidth is a measure of capacity: how much data can be transmitted in a fixed period of time, usually measured in Megabits per second (Mbps). Latency is a measure of speed: the time it takes a single packet of data to travel between two points, usually measured in milliseconds. The “speeds” that ISPs advertise have nothing to do with latency; they’re actually referring to bandwidth. ISPs don’t advertise latency because it’s different for each site you’re trying to reach.
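
To make this concrete, here’s a back-of-the-envelope sketch in Python (my own illustration; the payload sizes, link speeds, and latencies below are invented for the example) showing how latency and bandwidth each contribute to total delivery time:

def transfer_time_seconds(payload_bytes, bandwidth_mbps, latency_ms):
    """Rough total delivery time: one trip's latency plus transmission time."""
    latency_s = latency_ms / 1000.0
    bytes_per_second = bandwidth_mbps * 1_000_000 / 8
    return latency_s + payload_bytes / bytes_per_second

# A short instant message: latency dominates, extra bandwidth barely helps.
print(transfer_time_seconds(500, 1.5, 200))            # ~0.203 seconds
print(transfer_time_seconds(500, 50, 200))             # ~0.200 seconds

# A 100 MB system update: bandwidth dominates, latency barely matters.
print(transfer_time_seconds(100_000_000, 1.5, 200))    # ~533 seconds
print(transfer_time_seconds(100_000_000, 50, 200))     # ~16 seconds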

The Internet consists of devices and wires connecting those devices. The speed of data along the wires is fixed–there are no fast lanes and slow lanes. The only way to increase speeds is to either travel a shorter path or to get priority at the routers, the virtual traffic lights of the Internet. ISPs advertise bandwidth because with more bandwidth, more data can get to you in fewer trips, making your broadband connection seem much faster than a dial-up connection.

Sometimes latency and bandwidth matter a great deal, and sometimes they hardly matter at all. The typical response time between any two points on the Internet is roughly 1/5th of one second, so the difference between a relatively fast and a relatively slow connection isn’t much. If you’re sending an email (without any attachments) or chatting with someone using an Instant Messaging program, you’re not using much bandwidth, and if your messages are delayed by a second it’s probably not a problem. Likewise, when Microsoft Windows is downloading system updates in the background, whether the download completes in a few minutes or an hour really doesn’t matter–as long as it completes. The emails and IMs are low-bandwidth and the system updates are usually high-bandwidth, but in both of these examples, latency is not that important. But if you’re playing a real-time online multiplayer game, making a VoIP phone call, videoconferencing, or remotely connecting to another computer using pcAnywhere, GoToMyPC, or Remote Desktop Services, both bandwidth and latency matter. Without a high-bandwidth, low-latency connection, you’ll experience drop-outs and lag. NOTE – Latency is a measure of time, so the lower the latency, the better.

Latency is most affected by the Internet’s equivalent of traffic lights: routers. Data transmitted over the Internet is sent in packets, each containing a header that specifies, among a few other things, the IP address of the intended destination computer. At every junction between connections sits a router. For every packet that arrives, the router must look at its header to determine where to send it, and then forward the packet out along the proper connection. Normally, routers inspect and forward packets with almost no delay. But when there are too many packets for a router to handle or the tubes get filled, packets are temporarily queued in the router’s memory, and that queuing imposes some delay. If the memory becomes full, the router drops (deletes) some of the packets and tries to keep going. If the sending computer doesn’t get a response within a certain amount of time, it assumes the packet has been dropped and sends it again, resulting in even more delay. On average, about 6% of packets are lost.
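
Here’s a minimal sketch of that behavior in Python (my own illustration; real routers are vastly more sophisticated): a drop-tail queue holds packets in a bounded buffer, and anything arriving once the buffer is full is simply deleted:

from collections import deque

BUFFER_SIZE = 4                  # how many packets this router can hold
queue = deque()
dropped = 0

for i in range(10):              # ten packets arrive in a burst
    packet = f"pkt-{i}"
    if len(queue) < BUFFER_SIZE:
        queue.append(packet)     # queued: the packet waits its turn (delay)
    else:
        dropped += 1             # buffer full: the packet is dropped

# The sender never hears back about dropped packets, times out,
# and retransmits them, adding still more delay and more traffic.
print(f"queued: {list(queue)}, dropped: {dropped}")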

One way to deal with overloaded routers is to simply install more and bigger routers. Another is to build more connections so packets don’t have to travel through as many routers. But both of these options are costly, and it’s not clear whether simply increasing capacity will be enough to keep pace with increasing demand. A third option is to prioritize the packets. Prioritizing packets is kind of like the Mobile InfraRed Transmitter (MIRT) system that allows emergency response vehicles (e.g., fire, police, and EMS) to immediately turn specially-equipped traffic lights green. Most people would probably agree that this form of traffic prioritization is a good idea. But when referring to the Internet, talk of traffic prioritization starts arguments.

The Network Neutrality Debate: What’s It All About?

The network neutrality debate is a debate about the best method to manage traffic on the Internet. Those who advocate for network neutrality are actually advocating for legislation that would set strict rules for how ISPs manage traffic. They essentially want to re-classify ISPs as common carriers. Those on the other side of the debate believe that the government is unable to set rules for something that changes as rapidly as the Internet. They want ISPs to have complete freedom to experiment with different business models and believe that anything that approaches real discrimination will be swiftly dealt with by market forces.

But what both sides seem to ignore is that traffic must be managed. Even if every connection and router on the Internet were built to carry ten times the expected traffic, there would still be occasional outages. It is foolish to believe that routers will never become overburdened–they already do. Current routers already have a system for prioritizing packets when they get overburdened: they simply drop all packets received after their buffers are full. This system is fair, but it’s not optimized.

The network neutrality debate needs to shift to a debate about what should be prioritized, and how. One way packets can be prioritized is by the type of data they’re carrying: applications that require low latency would get priority, and those that don’t would not. But who makes those determinations? And what happens if someone hacks their computer to tag packets for priority they shouldn’t receive? Another method is for ISPs to offer prioritization for a fee. ISPs could decide which packets get priority based on the source or destination IP address in the packet header, or content providers could pay ISPs to prioritize only the packets they tag with a special marker.
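
As a rough sketch of what such prioritization could look like, the Python snippet below (my own illustration; the priority tags are invented, though real networks carry similar markings in header fields such as DSCP) forwards latency-sensitive packets ahead of bulk traffic:

import heapq

LATENCY_SENSITIVE, BEST_EFFORT = 0, 1    # lower number = higher priority

arrivals = [
    (BEST_EFFORT, 1, "email"),
    (LATENCY_SENSITIVE, 2, "voip-frame"),
    (BEST_EFFORT, 3, "update-chunk"),
    (LATENCY_SENSITIVE, 4, "game-state"),
]

heap = []
for priority, seq, payload in arrivals:
    heapq.heappush(heap, (priority, seq, payload))   # seq breaks ties fairly

while heap:
    _, _, payload = heapq.heappop(heap)
    print("forwarding:", payload)

# voip-frame and game-state leave the router before email and
# update-chunk, even though some of them arrived later.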

Opponents of network neutrality mandates argue that it’s simply not feasible to increase capacity to the extent that would be necessary without prioritization. They believe that with prioritization, they will be able to charge more for faster access to those willing to pay, and the increased revenue will provide the funding necessary to upgrade the networks, which will benefit everyone. As the saying goes, a rising tide lifts all boats. Network neutrality advocates fear that if ISPs are allowed to charge for prioritization, they will have no incentive to increase speeds for those who don’t pay for prioritization. While that may be true, price discrimination is very different from other forms of discrimination. It would be a real shame if the net neutrality debate over latency hampered efforts to increase bandwidth. Even common carriers were not restricted from setting different prices for different classes of service; they simply had to offer the same rates to all comers. If those who claim the Internet should be a completely level playing field applied the same logic to the phone system, toll-free numbers wouldn’t be allowed.

Edge Caching: What It Is and Isn’t

Monday’s Wall Street Journal ran an article suggesting that Google is abandoning its stance as an advocate for Network Neutrality because of a plan to set up edge caching servers. Edge caching is just a way to more efficiently balance the costs of storage space and bandwidth in an attempt to decrease latency. It’s a way to move content “closer” to the end-users who view it, avoiding the latency that occurs as packets traverse longer distances across the network.

To continue the roadways analogy, imagine the Internet arranged like a city. The end-users are all in the suburbs and the data they want to access is downtown in the network’s “core.” With this model, every request from a user needs to “commute” from the suburbs to the core, and the requested data needs to then travel from the core all the way back to the suburbs. Just like companies realized that setting up satellite offices nearer to their workers would decrease commuting times and increase productivity, content providers have realized that setting up edge caching servers at major ISPs decreases latency and saves on bandwidth costs.

Edge caching doesn’t work for all types of Internet content. If the content changes rapidly, edge caching doesn’t save much bandwidth because you’re constantly pushing new content to the edge servers. But for popular YouTube videos, edge caching is a great way for Google to save on bandwidth costs. Before Google bought YouTube, YouTube outsourced the hosting of its videos to edge caching provider LimeLight. So it’s no surprise that Google is now looking to do the same with its own edge caching servers.
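
The core logic fits in a few lines of Python. This is an assumed design for illustration, not Google’s or LimeLight’s actual system; note how the expiry time captures the trade-off just described: rapidly-changing content forces frequent trips back to the origin, while popular, stable content is served from the nearby copy:

import time

ORIGIN = {"popular-video": "video bytes"}    # the faraway server in the core
TTL_SECONDS = 300        # how long an edge copy stays fresh before expiring

edge_cache = {}          # url -> (content, time it was fetched)

def fetch(url):
    entry = edge_cache.get(url)
    if entry and time.time() - entry[1] < TTL_SECONDS:
        return entry[0]                      # hit: served from the nearby edge
    content = ORIGIN[url]                    # miss: long round trip to the core
    edge_cache[url] = (content, time.time())
    return content

fetch("popular-video")   # the first viewer pays the trip to the origin
fetch("popular-video")   # later viewers are served from the edge copy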

The fact that Google can afford to set up edge caching servers around the network does give it a bit of an advantage. But the advantage is mostly a savings in bandwidth costs for the content provider. The use of edge servers is meant to be almost imperceptible to users. Accessing content from edge servers may be a bit faster for users, but nobody is being discriminated against, and most content on the Internet is not latency-sensitive. In the example of Internet video, the difference between playing a video hosted on an edge caching server versus playing video from a server located far away may be just a matter of a few seconds’ delay before the video begins playing.

Some, like the Wall Street Journal, argue that even edge caching violates the net neutrality principle of the Internet being a level playing field. I would suggest that only discriminatory practices, such as an ISP offering packet prioritization to only some companies, should be considered a violation of net neutrality principles.

As Google points out, other companies are free to set up their own edge caching servers or use one of the many companies that offer edge caching services. There have been economies of scale in other industries for generations. The fact that edge caching provides economies of scale for Internet content providers is not a game changer. On the Internet, just as in other media industries, it’s not who can get their goods to market the fastest; it’s whose content best satisfies their audiences.

— Adam Marcus (adamm@pff.org)
