Bill and Keep and the Free Market

July 31, 2008

My last post sparked some interesting discussion about the economics of the Internet. With all due respect to my co-blogger Hance, though, this is precisely the sort of thing I was talking about:

[Tim's post] unfortunately overlooks the essence of what NN regulation is really about as far as commercial entities are concerned, i.e., profitable online properties don’t want to be asked or obliged to negotiate service agreements with network providers in which they agree to share some of their profits with network providers for the upkeep of the Internet and for the improvement of the overall online experience — just like retailers in a shopping mall share a small percentage of their profits with the landlord.

Bret likewise says that “NN advocates have for several years now wanted to force service providers into one business plan where the end-user pays ALL the costs of the network.” It will surely be news to Eric Schmidt, Steve Ballmer, and Jerry Yang that they aren’t “obliged to negotiate service agreements with network providers.” In point of fact, Google, Microsoft, Yahoo! and other big service providers pay millions of dollars to their ISPs to help finance the “upkeep of the Internet.” The prices they pay are negotiated in a fully competitive market.

Here’s a thumbnail sketch of how the Internet is structured: it’s made up of thousands of networks of various sizes, with lots and lots of interconnections between them. When two networks decide to interconnect, they typically evaluate their relative sizes. If one network is larger or better-connected than the other, the smaller network will typically pay the larger network for connectivity. If the networks are roughly the same size, they will typically swap traffic on a settlement-free basis.
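As a toy illustration of that decision rule (a simplification with made-up sizes and a made-up threshold, not any real network’s peering policy):

```python
# Illustrative sketch of the peering-vs-transit decision described above.
# "Size" is a stand-in for traffic volume, customer base, and route coverage;
# the 2x threshold is an arbitrary assumption for the example.

def interconnection_terms(size_a: float, size_b: float, threshold: float = 2.0) -> str:
    """Return who pays whom when two networks interconnect.

    If one network is much larger (by `threshold`x), the smaller network
    buys transit from the larger; comparable networks peer settlement-free.
    """
    if size_a >= threshold * size_b:
        return "B pays A for transit"
    if size_b >= threshold * size_a:
        return "A pays B for transit"
    return "settlement-free peering"

print(interconnection_terms(100, 10))   # a small ISP buys transit from a backbone
print(interconnection_terms(95, 100))   # two backbones of similar size peer for free
```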

The result is that a typical packet generally travels “upstream” through progressively larger networks, may cross a peering point between two similarly sized networks, and then travels back “downstream” to its destination. Payments along any route flow “upstream” from each end toward the peering point in the middle. The net result is that each side of a given connection roughly pays the cost of reaching its side of the Internet’s “backbone”: Google pays the cost of getting its traffic to the backbone, and consumers pay the cost of getting traffic from the backbone to their homes.

I take Hance’s complaint to be that Google should be paying for more than just its “half” of the connection: that Google should help defray the costs of getting packets from the backbone to individual consumers. To take Hance’s shopping mall example, this would be akin to Macy’s being required to contribute to the upkeep not only of the streets around its store, but also to the residential streets and driveways of every one of its customers, no matter how far away those customers live. That’s not how we do street finance, and it’s not how Internet pricing works either.

Hance doesn’t really elaborate on why this model would be better, but the primary argument against it is simple arithmetic. Under the present Internet architecture, any given node on the network only has to negotiate contractual relationships with the nodes immediately adjacent to it. For most people, that means a single payment to the people “upstream.” For ISPs, it may mean paying a handful of companies for “upstream” connectivity and charging many “downstream” customers for access. The result is that the number of contractual relationships is of the same order of magnitude as the number of nodes on the network.

In contrast, if every pair of nodes on the network needed to negotiate a contractual relationship, the number of contracts would potentially be the square of the number of nodes. There are about a billion nodes on the Internet, so there could theoretically be on the order of 10^18 contracts. Even if we assume contracts only between websites and residential ISPs, we’re still talking about millions of websites and thousands of ISPs. Abandoning the current peering model for some kind of cost-recovery model would mean an enormous increase in logistical overhead.
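The arithmetic can be sketched directly (the figures below are the post’s own rough orders of magnitude, not measurements):

```python
# Back-of-the-envelope comparison of the two pricing models discussed above.

nodes = 1_000_000_000  # roughly a billion Internet hosts

# Bill-and-keep: each node contracts only with its immediate neighbors,
# so the total number of contracts grows linearly with the number of nodes.
bill_and_keep_contracts = nodes  # O(n)

# Pairwise cost-recovery: in the worst case every pair of nodes needs a
# contract, i.e. n*(n-1)/2 -- on the order of 10^18 as the post says.
pairwise_contracts = nodes * (nodes - 1) // 2  # O(n^2)

# Even the restricted version (websites x residential ISPs) explodes:
websites, isps = 100_000_000, 10_000  # illustrative orders of magnitude
restricted_contracts = websites * isps  # 10^12 contracts

print(f"{bill_and_keep_contracts:.1e} vs {pairwise_contracts:.1e} vs {restricted_contracts:.1e}")
```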

Bret says that “we should let the market find the way in this dynamic arena,” and I agree. The thing is, the market has been finding the way for the last decade; that’s how we got the current “bill and keep” pricing structure. Which is precisely why I think it’s silly for free-market social scientists to be criticizing it. It’s one thing to say that the FCC shouldn’t rule out alternative pricing mechanisms—I quite agree—but it’s quite another to suggest that the status quo is defective and needs to be changed posthaste. I think both the evidence and libertarian theory suggest otherwise.

  • Bret Swanson

    Tim, what planet are you on?

    These are the very points we’ve all been making for YEARS. We’ve been saying that the NN advocates don’t know how the Net works. That the content companies already purchase all sorts of bandwidth and SLA-level services from the network companies. That the CDNs already help speed priority content to priority customers. That new switching and routing technologies are already sorting packets based on priority. Yes, yes, yes. Keep letting these private architectures, partnerships, vending relationships, and prices evolve.

    Do NOT inject a new regulatory regime into this mix. It is ***Net Neutrality*** that would be the break with the status quo market of Internet technologies, products, and relationships.

  • Tim Lee

    “That the CDNs already help speed priority content to priority customers.”

    Sure, but this isn’t a violation of network neutrality. For the most part, CDNs are ordinary web servers that provide load-balancing via a clever DNS hack.
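    To sketch what that DNS trick looks like (a hypothetical toy with made-up names and reserved example addresses, not any real CDN’s implementation):

```python
# Hypothetical sketch of DNS-based request routing: the CDN's authoritative
# nameserver answers the same hostname with a different A record depending on
# where the query comes from. All names and IPs below are made up
# (192.0.2.0/24 is reserved for documentation).

EDGE_SERVERS = {
    "us-east": "192.0.2.10",
    "eu-west": "192.0.2.20",
    "asia":    "192.0.2.30",
}

def resolve(hostname: str, client_region: str) -> str:
    """Answer a DNS query for the CDN hostname with a nearby edge server."""
    # Fall back to a default region if the client's location is unknown.
    return EDGE_SERVERS.get(client_region, EDGE_SERVERS["us-east"])

# The same name resolves differently for different clients:
print(resolve("cdn.example.com", "eu-west"))  # 192.0.2.20
print(resolve("cdn.example.com", "asia"))     # 192.0.2.30
```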

    “That new switching and routing technologies are already sorting packets based on priority.”

    Really? There’s some of this going on on private networks, but I’m not aware of this occurring on the public Internet, and that seems unlikely to change.

    “Do NOT inject a new regulatory regime into this mix.”

    Agreed.

    “It is ***Net Neutrality*** that would be the break with the status quo market of Internet technologies, products, and relationships.”

    Here’s where you lose me. As far as I can see, network neutrality (as a technical principle, not a regulatory scheme) is the Internet status quo. With rare and counterproductive exceptions, Internet routers route packets without regard for their contents. I think that’s a good thing, and I’d like to see it preserved. I’m just not convinced that putting the FCC in charge is the way to do it.

  • Hance Haney

    “To take Hance’s shopping mall example, this would be akin to Macy’s being required to contribute to the upkeep not only of the streets around its store, but also to the residential streets and driveways of every one of its customers, no matter how far away those customers live. That’s not how we do street finance, and it’s not how Internet pricing works either.”

    Tim – I am not advocating that but I am also not advocating against it. If content and delivery providers agree to subsidize last-mile connections to some extent with advertising revenue — as opposed to letting content providers keep all of it — they should be allowed to do so.

    Remember, Google CEO Eric Schmidt suggested the possibility that cell phones ought to be free, subsidized by targeted ads.

    Why would it be okay to let the market decide to subsidize cell phones, if it wants, but not fiber, coaxial or DSL connections? The only reason I can think of is because cable and phone companies used to be monopolies. But they aren’t monopolies anymore — not even close — although here as in so many other places there is admittedly a problem of lagging perception.

  • http://www2.blogger.com/profile/14380731108416527657 Steve R.

    Bret, I find that much of the discussion on Net Neutrality from those opposed to it avoids a very fundamental concern: that Internet users should have a guarantee that their information is transmitted.

    Congestion management is a legitimate concern; the problem I have is that users never really know whether an ISP has a legitimate basis for failing to route packets or whether the failure is simply a “dirty trick.”

    When we go to the post office to mail a package, we expect the package to be delivered. If we are anxious we can spend $20 to overnight it; if it isn’t time-sensitive we can pay $0.42. Either way, we still expect it to be delivered.

    Anecdotal evidence from reading many posts on this issue suggests that some companies cannot be trusted to deliver content. To resolve this kind of demonstrated abuse without resorting to the evil word “regulation,” I would expect those opposed to net neutrality to suggest an industry “code of conduct” assuring that content is delivered and not conveniently “lost.” We can discuss pricing and congestion management till we are blue in the face, but that does not address the fundamental concern of requiring that content be delivered. So far, all I hear is the monotonous chant “Don’t regulate us,” but I never hear anything about responsibility. If companies abuse their freedoms, then they deserve to be regulated.

  • http://bobbrill.net Bob Brill

    Tim: Thank you for the post, and your weaving of the technology and business explanation — brought to mind Byte magazine last decade. I also cited you on my blog last month. Regards, Bob

  • http://bennett.com/blog Richard Bennett

    Tim says: “…CDNs are ordinary web servers that provide load-balancing…”

    That’s not nearly accurate enough for this esteemed blog, Tim. CDNs are *networks* of web servers placed close to large pockets of consumers to deliver traffic more quickly than a single complex of servers can ever hope to do. They exploit the flaw in the Jacobson Algorithm that allows nearby senders to open their window faster than far-away servers. If NN were applied consistently with the fantasy that “all packets are equal” they would have to be regarded as a violation of NN.

    But if NN means anything at all, it means that carriers can’t create artificial scarcity in order to extract monopoly rents. It’s naive, because scarcity creates itself.
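    A rough model of the RTT advantage at issue (an approximation of TCP slow start, in which the sender’s window doubles once per round trip; this is a sketch, not Jacobson’s full algorithm):

```python
import math

# Rough model of TCP slow start: the congestion window doubles once per
# round trip, so the time to ramp up to a given window size is proportional
# to the round-trip time (RTT). A nearby CDN edge server therefore ramps up
# far faster than a distant origin server.

def ramp_up_time_ms(rtt_ms: float, target_window_segments: int) -> float:
    """Round trips needed to grow the window from 1 to the target, times RTT."""
    return rtt_ms * math.ceil(math.log2(target_window_segments))

# Same 64-segment target window, different distances to the sender:
print(ramp_up_time_ms(10, 64))   # 60 ms at a 10 ms RTT (nearby edge server)
print(ramp_up_time_ms(100, 64))  # 600 ms at a 100 ms RTT (distant origin)
```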

  • Tim Lee

    Richard, that’s a good point about CDNs. You’re right that they’re networks of web servers rather than individual web servers. But the point is that they communicate with each other using vanilla TCP/IP and don’t get any kind of special treatment at the IP layer.

  • http://bennett.com/blog Richard Bennett

    Right, they exploit a flaw that the neutoids aren’t even aware of.
