PFF Filing on Network Management Practices

February 13, 2008

My Progress & Freedom Foundation colleagues Ken Ferree, president of PFF, and Bret Swanson, a senior fellow with PFF, have filed comments today at the FCC in the heated proceeding about broadband network management policies. [Note: For more background, listen to our recent TLF podcast on the issue.] In their filing, Ken and Bret argue that:

Traffic shaping or channeling by broadband Internet access providers should be no more controversial than the examples provided above. Broadband access is not an unlimited resource. To the contrary, video and other rich media applications are profoundly changing the nature and volume of Internet traffic, straining network capacity. Video applications require between 100 and 1,000 times more bandwidth than static applications involving text, voice, or simple graphics. As video and graphics move to high-definition, many observers believe that web content and applications will grow in data-density by yet an additional factor of 10. Internet and IP traffic in the U.S. could grow more than 50-fold by 2015. The challenge facing providers of broadband access is how to maintain high-speed service for the vast majority of consumers while demands on the network mount.
[…]
Far from some nefarious plot to undermine the communications of their own subscribers, broadband access providers using traffic management tools to maintain the highest level of service for the greatest number of users simply are mirroring the commercially reasonable conduct of service providers everywhere, in nearly every field.

They go on to detail the technical reasons why various types of network management activities are necessary and beneficial:

In many cases bandwidth can act as a substitute for quality of service (QoS), and vice versa. The mix of raw capacity, or bandwidth (which is a physical resource) and of coding and traffic management (which are logical resources) is the very stuff of network architecture and planning. Network architecture decisions are based on a complex interplay of bandwidth technologies, digital technologies, capital and operating expenses, financial projections, and of course the business plan.

The use of buffering, queuing, scheduling, marking, labeling, parsing, replicating, prioritizing, modifying, metering, policing, collision avoiding, packet re-setting, and packet re-sending is becoming ubiquitous. Today’s newest communications equipment is specifically designed for ever-more fine grained “traffic management” so that “triple play services”—voice, data, video—and service level agreements—SLAs—can be delivered efficiently and robustly on converged networks.
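
To make concrete what one of those mechanisms actually does, here is a minimal sketch, entirely my own rather than anything from the filing, of the "metering" and "policing" on that list: a token bucket that admits traffic up to a contracted rate. The class name and parameters are hypothetical.

```python
# Illustrative token-bucket meter/policer (hypothetical; not from the filing).
import time

class TokenBucketPolicer:
    """Admit traffic up to `rate` bytes/sec, with bursts up to `burst` bytes."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate            # sustained rate contract (bytes/sec)
        self.burst = burst          # maximum bucket depth (bytes)
        self.tokens = burst         # start with a full bucket
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Meter a packet; True means forward it, False means drop or mark it."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes   # conforming packet: spend tokens
            return True
        return False                      # non-conforming: police it
```

Real equipment does this per queue or per flow in silicon; the point is simply that "policing" is arithmetic against a rate contract, not skulduggery.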

They also address arguments that alternative forms of management would be superior…

Some opponents of the traffic management techniques under question have proposed that service providers instead use some form of “dynamic throttling.” They assert that the techniques under question–namely packet re-sets–are too crude and blunt. More sophisticated and agile methods should be used, they say. But networks are made of real hardware and software that must last years to recoup large investments. Most networks today do not have the capabilities called for by the critics. Because networks require large capital investments and must last many years, they are at the outset rather capacious. Only as new applications and demands grow does congestion normally arise. Congestion is then relieved through a mix of traffic management and capacity increases. But it is often possible to deploy traffic management solutions more quickly than it is to build more capacity. Thus in the intervening period, we may see disputes, like the one at question here.

New, more supple traffic management technologies are indeed on the way, but it will take years to deploy them across the world’s networks. In addition, it is by no means obvious that the newer techniques will satisfy the critics. Many of the harshest critics of today’s relatively crude traffic management techniques have denounced the new, sophisticated, and supposedly menacing QoS technologies. Too crude, or too sophisticated? Which is it? One can only conclude that the critics do not want service providers to be able to manage their networks at all.
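
For what it's worth, the "dynamic throttling" the critics have in mind is easy enough to sketch on paper; the hard part, as Ken and Bret say, is that deployed hardware often cannot do it. Here is a rough, hypothetical illustration (mine, not the filing's) of a per-user rate cap that tightens as link utilization climbs:

```python
# Hypothetical sketch of "dynamic throttling": rather than resetting a heavy
# user's connections outright, shrink every user's allowed rate as the link
# approaches saturation. The function name, threshold, and floor are invented
# for illustration, not drawn from the filing.

def throttled_rate(base_rate_bps: float, link_utilization: float,
                   congestion_threshold: float = 0.85) -> float:
    """Return a per-user rate cap given current link utilization (0.0 to 1.0)."""
    if link_utilization <= congestion_threshold:
        return base_rate_bps  # uncongested: no throttling at all
    # Congested: scale the cap down linearly as utilization nears 100%.
    headroom = (1.0 - link_utilization) / (1.0 - congestion_threshold)
    return base_rate_bps * max(headroom, 0.1)  # keep a 10% floor per user

# At 95% utilization, a 10 Mbps subscriber would be capped near 3.3 Mbps.
```

The catch, per the filing, is that enforcing a cap like this per subscriber, instant by instant, requires measurement and scheduling machinery that much installed equipment simply lacks.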

I think that last point is extremely important and entirely correct. As I have stressed before in my writing on Net neutrality, network management regulations limiting the flexibility of network owners to respond to traffic and congestion problems would essentially tell infrastructure operators, and potential future operators of high-speed networks: your networks are yours in name only, and the larger community of Internet users, through the FCC or other regulatory bodies, will be free to set the parameters of how your infrastructure will be used in the future. Having heard that message, a network operator or would-be operator could fairly ask why it should ever invest another penny of risk capital in a sector governed as a monolithic commons or public good.

One final point that Ken and Bret discuss in their filing is worth mentioning. There’s a lot of talk around town right now that the likely outcome of this FCC proceeding will be some sort of disclosure or transparency mandate regarding network management policies. As I said on this week’s TLF podcast, it’s tough to be against disclosure or transparency. And, generally speaking, I agree with Ken and Bret that “Consumers should have information about the impact and resulting service quality of a service provider’s network policies.”

But disclosure of general practices is one thing; disclosure of specific practices is quite another. The FCC shouldn’t be in the business of forcing network operators to hand their entire network management playbook over to the world. As Ken and Bret note, “Any rules that forced service providers to divulge particular methods of network management would be highly counterproductive. Such disclosures of trade secrets could allow wrongdoers to attack networks in a way that erodes service quality and security.” Internet engineering guru Richard Bennett made this point quite eloquently on a previous TLF podcast:

You have to understand that Comcast is playing a cat and mouse game with BitTorrent. And if you look into the details of how BitTorrent is engineered, it’s fairly obvious that concealment of BitTorrent streams from traffic shaping and admission control and other sorts of network management technologies is an explicit goal of the project. Every concealment method that you can think of is used by BitTorrent to escape detection by the kind of network management systems that people like Comcast have to run. So to the extent that Comcast is transparent, they’re simply making themselves vulnerable to a new version of BitTorrent that can escape whatever techniques they’re employing.
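
Bennett's cat-and-mouse point is easy to see in miniature. An unencrypted BitTorrent session really does open with a fixed 20-byte handshake prefix that naive inspection can match; the following sketch (my illustration, not Bennett's or Comcast's actual method) shows both the check and why it fails once the stream is obfuscated:

```python
# Illustrative signature check for the plaintext BitTorrent handshake, which
# begins with a length byte (19) followed by the string "BitTorrent protocol".

BT_HANDSHAKE_PREFIX = b"\x13BitTorrent protocol"

def looks_like_bittorrent(payload: bytes) -> bool:
    """Flag a TCP payload that begins with the plaintext BitTorrent handshake."""
    return payload.startswith(BT_HANDSHAKE_PREFIX)

# The catch: BitTorrent's optional encryption (Message Stream Encryption)
# randomizes the stream, so this check returns False and a classifier must
# fall back on traffic patterns instead. Publish exactly which patterns you
# match, and the next client release will be built to avoid them.
```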

Anyway, make sure to read the entire PFF filing by Ken and Bret.
