Bandwidth, Storewidth, and Net Neutrality

December 16, 2008

Very happy to see the discussion of The Wall Street Journal’s Google/net neutrality story. Always good to see holes poked and the truth set free.

But let’s not allow the eruptions, backlashes, recriminations, and “debunkings” (“This topic has been debunked. End of story. Over. Sit down!”) to obscure the still-fundamental issues. This is a terrific starting point for debate, not an end.

Content delivery networks (CDNs) and caching have always been a part of my analysis of the net neutrality debate. Here is the testimony that George Gilder and I prepared for a Senate Commerce Committee hearing almost five years ago, in April 2004, where we predicted that a somewhat obscure new MCI “network layers” proposal, as it was then called, would be the next big communications policy issue. (At about the same time, my now-colleague Adam Thierer was also identifying this as an emerging issue/threat.)

Gilder and I tried to make the point that this “layers” — or network neutrality — proposal would, even if attractive in theory, be very difficult to define or implement. Networks are a dynamic realm of ever-shifting bottlenecks, where bandwidth, storage, caching, and peering, in the core, edge, and access, in the data center, on end-user devices, from the heavens and under the seas, constantly require new architectures, upgrades, and investments, thus triggering further cascades of hardware, software, and protocol changes elsewhere in this growing global web. It seemed to us at the time that this new policy proposal, ill-defined as it was, was probably a weapon for one group of Internet companies, with one type of business model, to bludgeon another set of Internet companies with a different business model.

We wrote extensively about storage, caching, and content delivery networks in the pages of the Gilder Technology Report, first laying out the big conceptual issues in a 1999 article, “The Antediluvian Paradigm.” [Correction: “The Post-Diluvian Paradigm”] Gilder coined a word for this nexus of storage and bandwidth: Storewidth. Gilder and I even hosted a conference, also dubbed “Storewidth,” dedicated to these storage, memory, and content delivery network technologies. See, for instance, this press release for the 2001 conference with all the big players in the field, including Akamai, EMC, Network Appliance, Mirror Image, and one Eric Schmidt, chief executive officer of . . . Novell. In 2002, Google’s Larry Page spoke, as did Jay Adelson, founder of the big data-center and network-peering company Equinix, along with Yahoo! and many of the big network and content companies.

This interplay of bandwidth, storage, and latency, of caching, content, and conduit, was the very point of the conference. What are the technical and economic trade-offs? Where will the Net be modular, and where will it be integrated? Where will content be stored, and who will pay? In many ways, the conference was ahead of its time. And my humble view is that Schmidt and Page may even have adopted some of the key insights of these conferences and turned them into some of Google’s most successful applications and architectures. A talk by Yale computer scientist David Gelernter, I remember, seemed to have a particularly profound impact on the way attendees visualized this coming “cloud” that would enable the death of the desktop. Remember, at the time, Google was still just a search engine company that hosted its then-thousands of servers in the data centers of Equinix and a few other hosting companies. Today, Google, with its global cloud platform and desktop-killing apps, has become the supreme storewidth company.

I offer this background because some of us have been thinking about these topics for a (relatively) long time. When we first began analyzing this new “network layers” and then “network neutrality” policy concept five or more years ago, we did so with these profound architectural questions in mind. The Net, and the bits and applications traversing it, moves so fast that we need all these technical solutions — routing, switching, QoS, CDNs, etc. — to make it work at all, let alone make it fast and robust.
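To make the point concrete, here is a back-of-envelope sketch. The model and every number in it are my own illustrative assumptions, not anything reported in the Journal story or in our testimony; the point is simply that a handful of round trips to a distant origin server adds delay that raw capacity alone cannot remove, which is exactly what caching and content delivery networks address.

```python
# Rough, illustrative model only: fetch time ~ (round trips x RTT) + transfer time.
# All numbers are assumptions chosen for the example, not measurements.

def fetch_time_ms(size_kb, rtt_ms, bandwidth_mbps, round_trips=3):
    """Approximate time to fetch an object: connection/request round trips
    plus the time to push the bits through the pipe."""
    transfer_ms = size_kb * 8 / bandwidth_mbps  # kilobits / Mbps = milliseconds
    return round_trips * rtt_ms + transfer_ms

PAGE_KB = 500  # assumed page size

origin = fetch_time_ms(PAGE_KB, rtt_ms=100, bandwidth_mbps=10)  # distant origin server
edge = fetch_time_ms(PAGE_KB, rtt_ms=10, bandwidth_mbps=10)     # nearby cached copy

print(f"origin: ~{origin:.0f} ms, edge cache: ~{edge:.0f} ms")
# origin: ~700 ms, edge cache: ~430 ms. Same bandwidth; the entire difference
# comes from shorter round trips to a copy cached near the user.
```

Crude as it is, that is the basic economics behind caching and content delivery: moving copies closer to users buys back latency that more bandwidth alone cannot.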

So yesterday’s Wall Street Journal story was not noteworthy for exposing some brand-new network technology or architectural scheme. No, it seemed noteworthy (again, pending the accuracy of the reporting and the follow-on assertions) because (1) it highlighted the reality of this already-existing architecture — something a few of us have spent years trying to expose as a shortcoming of the neutrality concept — and (2) it suggested Google and others were softening their stance on the net neutrality policy issue.

Now it’s perfectly possible the article is mistaken, and that no one is softening on the push for net neutrality regulation. Let’s have the truth, indeed. But it is a good thing that we are getting deeper into the technology and architecture of the Net, because a clearer understanding will expose net neutrality’s big flaws. As Gilder and I surmised five years ago, net neutrality, as ill-defined as it still is after all this time, seems to be one group’s attempt to get the upper hand on competitors using the heavy hand of government. My networks, good; your networks, bad. My bandwidth-saving, latency-reducing content-delivery fix, good; your bandwidth-saving, latency-reducing content-delivery method, “evil.”

More to come. . . .

Correction: The issue of the Gilder Technology Report I referred to was of course titled “The Post-Diluvian Paradigm.” The meaning of the title was that even after the flood of bandwidth — or capacity — was deployed, we would still need latency-reducing, hop-reducing, and other performance-enhancing technologies and architectures to make the cloud function robustly.
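A tiny illustration of that post-diluvian point, again with assumed numbers of my own choosing: for a small, chatty web transaction, round trips dominate, so multiplying bandwidth several hundredfold barely changes total fetch time, while cutting the number or length of round trips does.

```python
# Illustrative arithmetic with assumed numbers: for a small object, round trips
# dominate, so extra bandwidth alone shows sharply diminishing returns.

RTT_MS = 80        # assumed round-trip time to a distant server
ROUND_TRIPS = 4    # assumed handshake plus request/response exchanges
SIZE_KB = 50       # a small web object

for mbps in (1.5, 10, 100, 1000):
    transfer_ms = SIZE_KB * 8 / mbps            # kilobits / Mbps = milliseconds
    total_ms = ROUND_TRIPS * RTT_MS + transfer_ms
    print(f"{mbps:>6} Mbps -> ~{total_ms:.0f} ms ({transfer_ms:.0f} ms of it is transfer)")

# 1.5 Mbps -> ~587 ms; 1000 Mbps -> ~320 ms. Past a point, only fewer or shorter
# round trips (caching, CDNs, smarter protocols) make the cloud feel fast.
```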
