I want to associate myself with Cord’s excellent post about whether Google pays its fair share for bandwidth. Let me also add a more theoretical observation: we know Google is paying its fair share because if it weren’t, the companies that provide it with bandwidth would raise their rates. That may sound like a tautology, but I think it’s actually an important point that tends to get lost in these discussions. Nobody forces ISPs to interconnect, so we can be confident that each party to the web of interconnection agreements we call the Internet is getting at least as much value out as he puts in.
The obsession with whether Google is paying its “fair share” for bandwidth is nonsense for precisely the same reasons it’s silly to fret over trade deficits and “unfair” trade deals. In both cases, people fail to appreciate that we’re talking about positive-sum interactions: interactions in which both parties are made better off and no one is made worse off. Just as the buyer and the seller in any given international transaction each value what they’re getting more than what they’re giving up, so too does everyone in the chain of contracts between me and Google get more from carrying our traffic than the cost of doing so. Each is making a profit, or at least expected to make a profit ex ante when it agreed to carry the traffic. Which means that in both cases, interfering is likely only to reduce the gains from trade.
This means that if you add up all the value Google gets from the Internet and subtract Google’s costs, of course the number you get will be positive. The fact that Google benefits more from the Internet than it pays isn’t an indictment of Google; it’s a reflection of basic economics. I’ve written before that the commonplace idea that there’s no such thing as a free lunch is actually nonsense. To the contrary, a market economy is a free lunch for everyone who participates: almost everybody gets dramatically more value from their participation in the economy than their cost of participation. The same is true of the Internet. Google, Google’s users, Google’s customers, and the various network owners are all “free riding” on each other. And this isn’t a problem; it’s the whole point of having positive-sum social institutions like the Internet.
Precursor LLC released a study that claims to have calculated Google’s total bandwidth use, declaring that “Google uses 21 times more bandwidth than it pays for.”
The study is an attempt to foil Google’s pursuit of Net Neutrality as a federal policy by claiming that Google is already a kind of free-rider and its policy goals will only allow it to mooch more.
The study estimates the total bandwidth “used” by Google in a circuitous way: it calculates the bandwidth Google-originating data consumes while traveling around the web, adds the bandwidth used by search bots sending data back to Google, assigns a dollar value to that total, and then compares it to an estimate of Google’s total outlays for bandwidth (a number that itself had to be estimated, as Google does not disclose it).
The result: Google doesn’t pay for all the bandwidth used by data flowing in and out of its servers.
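The study's accounting can be sketched in a few lines. This is purely illustrative: every number below is hypothetical (the study's actual inputs are not given in this post) and is chosen only to reproduce the 21x headline figure.

```python
# Hypothetical sketch of the Precursor-style accounting described above.
# All figures are made up for illustration; only the arithmetic matters.

def bandwidth_ratio(outbound_tb, crawler_tb, price_per_tb, paid_outlays):
    """Imputed dollar value of traffic in and out of Google's servers,
    divided by Google's (estimated) actual bandwidth outlays."""
    total_tb = outbound_tb + crawler_tb        # Google-originating data + search-bot traffic
    imputed_value = total_tb * price_per_tb    # dollar value assigned to that bandwidth
    return imputed_value / paid_outlays

# Invented numbers that happen to reproduce the study's 21x claim.
ratio = bandwidth_ratio(outbound_tb=16_000_000, crawler_tb=5_000_000,
                        price_per_tb=1.0, paid_outlays=1_000_000)
print(round(ratio))  # → 21
```

The sketch makes the objection in the next line concrete: the numerator counts traffic on *every* link the data crosses, while the denominator counts only what Google pays its own providers, so the ratio exceeds one for any site on the web.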
But this is true for any site on the web!
Over the past year or so, many market-oriented critics of Google, like Scott Cleland and Richard Bennett, have criticized the company for aligning itself with Left-leaning causes and intellectuals. Lately, however, what I find interesting is how many leading leftist intellectuals and organizations have begun turning on the company and becoming far more critical of America’s greatest capitalist success story of the past decade. The reason this concerns me is that I see an unholy Right-Left alliance slowly forming that could lead to more calls for regulation not just of Google, but of the entire search marketplace. In other words, “Googlephobia” could bubble over into something truly ugly.
Consider the comments of Tim Wu and Lawrence Lessig in Jeff Rosen’s huge New York Times Magazine article this weekend, “Google’s Gatekeepers.” Along with Yochai Benkler, Lessig and Wu form the Holy Trinity of the Digital Left; they set the intellectual agenda for the Left on information technology policy issues. Rosen quotes both Wu and Lessig in his piece going negative on Google. Wu tells Rosen that “To love Google, you have to be a little bit of a monarchist, you have to have faith in the way people traditionally felt about the king.” Moreover:
[Hat tip to Richard Bennett for the recommendation.] I haven’t had a chance to read through the entire thing yet, but this new study by Nemertes Research seems worthy of attention: “Internet Interrupted: Why Architectural Limitations Will Fracture the ‘Net.” From the exec sum:
In 2007, Nemertes Research conducted the first-ever study to independently model Internet and IP infrastructure (which we call “capacity”) and current and projected traffic (which we call “demand”) with the goal of evaluating how each changes over time. In that study, we concluded that if current trends were to continue, demand would outstrip capacity before 2012. Specifically, access bandwidth limitations will throttle back innovation, as users become increasingly frustrated with their ability to run sophisticated applications over primitive access infrastructure. This year, we revisit our original study, update the data and our model, and extend the study to look beyond physical bandwidth issues to assess the impact of potential logical constraints. Our conclusion? The situation is worse than originally thought! We continue to project that capacity in the core, and connectivity and fiber layers will outpace all conceivable demand for the near future. However, demand will exceed access line capacity within the next two to four years. Even factoring in the potential impact of a global economic recession on both demand (users purchasing fewer Internet-attached devices and services) and capacity (providers slowing their investment in infrastructure) changes the impact by as little as a year (either delaying or accelerating, depending on which is assumed to have the greater effect).
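The shape of the Nemertes projection is easy to illustrate with a toy model. The growth rates and starting values below are hypothetical, not the study's: the point is only that when demand compounds faster than access capacity, the crossover arrives quickly and modest shocks to either rate move it by only about a year.

```python
# Toy capacity-vs-demand projection in the spirit of the Nemertes model.
# All parameters are hypothetical; chosen so the crossover lands in the
# two-to-four-year window the study describes.

def crossover_year(start_year, demand, capacity, demand_growth, capacity_growth):
    """First year in which projected demand exceeds projected capacity."""
    year = start_year
    while demand <= capacity:
        demand *= 1 + demand_growth      # traffic compounds at demand_growth per year
        capacity *= 1 + capacity_growth  # access capacity compounds more slowly
        year += 1
    return year

# Hypothetical: demand at 60% of access capacity in 2008, growing 50%/yr
# against 20%/yr capacity growth.
print(crossover_year(2008, demand=60, capacity=100,
                     demand_growth=0.50, capacity_growth=0.20))  # → 2011
```

Rerunning with demand growth trimmed to 40% (a stand-in for a recession) pushes the crossover out to 2012, which mirrors the study's claim that an economic downturn shifts the result by as little as a year.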
This is a subject that my colleague Bret Swanson has written a great deal about, so I’m sure he’ll be commenting on this study at some point. Even if you don’t agree with the conclusion Nemertes reaches, as Richard Bennett notes, the report is well worth reading just for the background information on public and private peering, content delivery networks, and overlay networks.
Kevin Donovan has a thoughtful post about “The Durable Internet.” He asks:
Now, there are examples of trickle down and mass rebellion. Tim does a nice job in “The Durable Net” of exploring these and does the most to bring me closer to faith in lay users. He cites the Digg rebellion against censorship and the fight for open IM protocols. But in my observation, very few non-technical folks use Adium or the other IM unifiers. In fact, iChat and AIM are dominant defaults. As for the Digg example, the users of Digg tend to be technically inclined, and the costs of posting a hex code and pushing “Digg” are so minimal that, yes, even my mother could do it (though I doubt she would). It is possible that the select few will be motivated enough to free their own iPhone or create tools to detect violations of the end-to-end principle, but I worry that the critical mass will not be reached. Although 40% of Saudis are disturbed by Internet censorship, I’d be willing to bet that 40% do not and cannot make use of Tor or Psiphon or the other anti-censorship technologies. These are the people who would suffer from a non-generative, non-neutral future if the technical few do not successfully defend their interests. I’m mostly thinking out loud, so I’d love to hear your thoughts: are users capable of protecting their interests?
In my paper, I go into a lot of detail with specific examples in which open technologies persevered in the face of organized resistance. But let me step back and make a more general point about the underlying argument of that section of the paper: In a nutshell, we should be optimistic about the future of open platforms for the same reason we’re in favor of open platforms in the first place. Put simply, they work better. Open platforms harness the distributed knowledge of millions of people and produce ecosystems that are greater than the sum of their parts. Closed platforms are hampered by the limitations of central planning, and as a result they tend to be sterile, inflexible, and incapable of keeping up with developments on more open platforms.
Brian Boyko at Network Performance Daily has a thorough interview with yours truly about The Durable Internet. Brian asked some really sharp questions and helped to flesh out some of the thornier aspects of my argument. Check it out.
The Technology Liberation Front is the tech policy blog dedicated to keeping politicians' hands off the 'net and everything else related to technology.