Network Neutrality and Transaction Costs

by Tim Lee on November 14, 2008 · 22 comments

A few more people have weighed in on my new paper. I tend to think that if I’m angering both sides of a given debate, I must be doing something right, so I’m going to take the fact that fervent neutrality opponent Richard Bennett hated the study as a good sign.

Others have been more positive. Mike Masnick has an extremely generous write-up over at Techdirt. And at Ars Technica, my friend Julian has the most extensive critique so far.

I thought most of it was spot on, but this seemed worth commenting on:

Lee thinks that the history of services like America Online shows that “walled garden” approaches tend to fail because consumers demand the full range of content available on the “unfettered Internet.” But most service providers already offer “tiered” service, in that subscribers can choose from a variety of packages that provide different download speeds at different prices. Many of these include temporary speed “boosts” for large downloads.

If many subscribers are demonstrably willing to accept slower pipes than the network can provide, companies providing streaming services that require faster connections may well find it worth their while to subsidize a more targeted “boost” for those users in order to make their offerings more attractive. In print and TV, we see a range of models for divvying up the cost of getting content to the audience—from paid infomercials to ad-supported programming to premium channels—and it’s never quite clear why the same shouldn’t pertain to online.

The key point here is the relative transaction costs of managing a proprietary network versus an open one. As we’ve learned from the repeated failure of micropayments, financial transactions are surprisingly expensive. The infrastructure required to negotiate, meter, and bill for connectivity, content, or other services means that overly-complicated billing schemes tend to collapse under their own weight. Likewise, proprietary content and services have managerial overhead that open networks don’t. You have to pay a lot of middle managers, salesmen, engineers, lawyers, and the like to do the sorts of things that happen automatically on an open network.

Now, in the older media Julian mentions, this overhead was simply unavoidable. Newspaper distribution cost a significant amount of money, and so newspapers had no choice but to charge their customers, pay their writers, sign complex deals with their advertisers, etc. Similarly, television stations had extremely scarce bandwidth, and so it made sense to expend resources to make sure that only the best content went on the air.

The Internet is the first medium where content can go from a producer to many consumers with no human beings intermediating the process. And because there are no human beings in between, the process is radically more efficient. When I visit the New York Times website, I’m not paying the Times for the content and they’re not paying my ISP for connectivity. That means that the Times’s web operation can be much smaller than its subscription and distribution departments.

In a world where these transaction costs didn’t exist, you’d probably see the emergence of the kinds of complex financial transactions Julian envisions here. But given the existence of these transaction costs, the vast majority of Internet content creators will settle for free, best-effort connectivity rather than going to the trouble of negotiating separate agreements with dozens of different ISPs. Which means that if ISPs only offer high-speed connectivity to providers who pay to be a part of their “walled garden,” the service will wind up being vastly inferior to (and, as a consequence, much less lucrative than) what it would be if they offered full-speed access to the whole Internet.
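To get a feel for the scaling involved, here is a back-of-the-envelope sketch in Python. The numbers are made up for illustration (they are not figures from the paper); the point is only that pairwise deals grow multiplicatively while open best-effort interconnection requires none of them.

    # Illustrative numbers only: the point is the multiplicative growth of
    # pairwise "fast lane" deals, not the particular dollar figures.
    content_providers = 10_000
    isps = 50
    cost_per_deal = 20_000  # hypothetical legal/engineering/billing overhead per agreement

    pairwise_deals = content_providers * isps
    total_overhead = pairwise_deals * cost_per_deal
    print(f"{pairwise_deals:,} separate agreements, ~${total_overhead:,} in overhead")
    # -> 500,000 separate agreements, ~$10,000,000,000 in overhead
    # Open, best-effort interconnection requires none of these per-content deals.

However crude, the arithmetic suggests why most content creators would settle for free, best-effort delivery rather than negotiate with every ISP.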

  • http://bennett.com/blog Richard Bennett

    “Hate” is a strong word, but it's a more or less appropriate description of my reaction to your paper (I wouldn't call it a “study”, btw, it's more of an “argument”.) The entire net neutrality discussion annoys me because combatants make such a hash of the technical subject matter, conflating so many distinct concepts and making so many demonstrably false claims. We all like the Internet's openness, for example, but what reason do we have to guess that it follows from a decentralized control structure? There are many decentralized networks that aren't at all open, such as military battlefield networks, or even the Internet itself before 1992 when it was only “open” to a select group of academics and defense contractors. The argument against prioritization errs in assuming that access to content is the Internet's only, or chief function. We need prioritization so that non-content-based network services – VoIP is the classic example – can thrive on a network where most traffic is content-based.

    In general, your error is to concede a point that's larger than the regulatory kerfuffle, namely that the Internet is an internet because of its end-to-end behavior. By endorsing that view you lend support to the regulators, who simply have to show that they can prevent the ISPs and carriers from owning the FCC to be successful.

    In fact, the FCC has come under the effective control of Google and other content mongers in recent years, who have essentially a 1.000 batting average in terms of convincing the regulator to adopt their program. The Internet is rapidly becoming a network optimized for ad sales, and that's a much bigger threat to innovation than the optimizations that carriers need to make to service plans and peering agreements to turn it into a network that supports innovative non-content-related applications.

    The “stupid network” construct is fine for content, not so fine for real-time communication. This point needs to be made clear to regulators lest they subtract value from the Internet to mollify the angry mob of consumer advocates, law professors, and professional telco haters who clamor for more and better constraints on network operators.

  • http://www.techliberation.com Adam Thierer

    Richard… Do you have any articles or studies — written by you or anyone else — that substantiate your alternative conception of the Net's underlying technical nature? In raising this question, I do not mean to challenge your assertion; rather, I would be very interested in reading such material for my own edification.

    Tim … how do you respond to Richard’s alternative interpretation of the Net’s technical underpinnings?

  • http://bennett.com/blog Richard Bennett

    It shouldn't be too controversial that the Internet is a collection of different protocols, some end-to-end and some net-to-net; the controversy is about which is more important. E2e folks claim their slice is all-important without any evidence in support.

    It's simply a matter of opinion which is the key, and the opinion on the e2e side is relatively uninformed.

  • http://www.tc.umn.edu/~leex1008 Tim Lee

    I think it's a lot of nitpicking and a little bit of disagreement about the technical feasibility of providing performance guarantees on a network as large and heterogeneous as the Internet.

    As far as the nitpicking goes, if you examine the Internet with a powerful enough microscope, you'll find stuff that could be plausibly described as non-neutral routing behavior. It's true, for example, that routers employ a variety of low-level optimization techniques that don't quite measure up to the ideal of completely neutral routing. However, I think this is missing the forest for the trees. Throughout its history, what has made TCP/IP different from other networks has been that it has been more decentralized and offered end users fewer guarantees than competing networks. Many partisans for other networks regarded this as a weakness, but of course it worked out pretty well.

    As Richard would know if he read my paper carefully, my advocacy of end-to-end isn't a dogmatic opposition to any sort of routing optimization, nor am I even opposed, in principle, to network owners offering prioritized services, although I'm skeptical that will work very well. Rather, I think the fundamental question is who will be in control: users or network owners. I think that any prioritization scheme that gets implemented should respect the end-to-end principle in the sense that the prioritization levels should be set by end users, rather than by the networks themselves trying to calculate the appropriate priority level using techniques like DPI. (There's a rough sketch of what user-set priorities might look like at the end of this comment.)

    I give a couple of quotes from prominent network engineers who don't believe that end-to-end prioritization is feasible on a network the size of the Internet. Richard apparently disagrees with them. I'm not an expert on network architecture, so I'm not going to make a strong statement either way on that, but at a minimum I think we can say that a lot of people have talked about adding prioritization to TCP/IP networks and we have yet to see anyone deploy it at Internet scales.
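    For what it's worth, here's a minimal sketch of the user-set-priority idea mentioned above. It assumes a Linux host where the IP_TOS socket option is available; whether any network along the path honors the marking is entirely up to each operator, and the address and port are made up for illustration.

        import socket

        # DSCP "Expedited Forwarding" (46), shifted into the upper six bits of
        # the old ToS byte -- a common marking for latency-sensitive traffic.
        DSCP_EF = 46 << 2

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        # The application (i.e., the end user's software) chooses the marking;
        # no DPI box in the middle has to guess what the traffic is.
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF)
        sock.sendto(b"voip frame goes here", ("192.0.2.10", 5004))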

  • http://bennett.com/blog Richard Bennett

    This is a pretty good example of mixing apples and oranges. Tim says: “what has made TCP/IP different from other networks has been that it has been more decentralized and offered end users fewer guarantees than competing networks”

    Huh? Look at the IP header. It has had a Type of Service field from Day One, out of respect to the diverse Link Layer networks that offer a range of delivery services. Network peering agreements are free to honor this field or to ignore it as they see fit, and the same goes for MPLS, another technique for specifying flows. And we have a similar mechanism in IPv6, called the “Flow Label.” (A small sketch of pulling this field out of a raw header appears at the end of this comment.)

    So there's no doubt that traffic classification and differential treatment has been part of the Internet Architecture from the beginning (even ARPANET had it before TCP/IP was invented.) The actual issue is to what extent it's ever been part of Internet Operation, something that doesn't actually flow right out of architecture. The architecture permits a wide range of operational strategies, and network operators decide which to use.

    There is an active debate inside the router protocols community about how much QoS the current revision of BGP actually permits. There are a number of things that are done in BGP that are suboptimal from the scaling perspective but necessary, such as multi-homing. But I digress.

    I'm not nit-picking when I say e2e is less important than n2n. The big pushers of e2e are lawyers and other non-technical people who couldn't describe BGP if their lives depended on it, but BGP is actually the key to the modern, open, commercial Internet.
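    To make the Type of Service point above concrete, here's a toy Python sketch. The header values are hand-built and made up for illustration; this is not anyone's production code.

        import struct

        def dscp_of(ipv4_header: bytes) -> int:
            """Return the DSCP value (upper six bits of the old ToS byte)."""
            _version_ihl, tos = struct.unpack_from("!BB", ipv4_header, 0)
            return tos >> 2

        # A hand-built 20-byte IPv4 header: version/IHL 0x45, ToS byte 0xB8
        # (DSCP 46, "Expedited Forwarding"), UDP, documentation addresses.
        hdr = struct.pack("!BBHHHBBH4s4s",
                          0x45, 0xB8, 40, 0, 0, 64, 17, 0,
                          bytes([192, 0, 2, 1]), bytes([192, 0, 2, 2]))
        print(dscp_of(hdr))  # -> 46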

  • http://bennett.com/blog Richard Bennett

    On the subject of the allegedly prominent network engineers who claim prioritization is infeasible, I see a quote from Ed Felten to the effect that some applications can be re-written to use buffering instead of QoS, and a dated reference to the work of some unnamed engineers in the Internet2 project to the effect that it was expensive. The first comment is off-topic, as it is certainly the case that some applications cannot be rewritten so as not to need QoS, VoIP being the best known.

    And on the other point, that's what happens when you do your research on Wikipedia. Stanislav Shalunov reported some ten years ago that the Cisco 7500 slowed down a hell of a lot when it had to enforce DiffServ, no big revelation when you consider that it did DiffServ in firmware rather than in hardware. Modern routers can do DiffServ at wirespeed, so that objection has long since been irrelevant. Last I heard, SS was working for BitTorrent, Inc, btw.

    Which brings us to the problem with over-provisioning as a method of achieving QoS (not as an alternative to it, but as a means to the end of providing a statistical guarantee of low-latency delivery). That method only works if there aren't applications standing in wait that can consume all the bandwidth added to the Internet and its access networks. When bulk data transfer on the Internet was confined to gentlemanly ftp and disks were expensive, it was a plausible argument, but it's not any more. The Japan data tells us that the more bandwidth a provider adds, the more is consumed by P2P as a percentage of overall volume. Bummer.

    That's the danger of backward-looking tech policy, Tim. People who write rules are always fighting yesterday's war with the tools of tomorrow. It's best not to justify their quixotic quest.

  • http://www.tc.umn.edu/~leex1008 Tim Lee

    I'm not sure what point you're making here. As I said in my paper, DiffServ is an e2e-friendly way to do packet prioritization. My guess is that it's not going to be practical to implement DiffServ on the public Internet, but the argument of my paper doesn't depend on that.

  • http://bennett.com/blog Richard Bennett

    The point I'm making is that your assertion: “I give a couple of quotes from prominent network engineers who don't believe that end-to-end prioritization is feasible on a network the size of the Internet” doesn't hold water.

    My main point is that there's a real danger in e2e exceptionalism. Real networks allow the e2e layer and the n2n layer to communicate and negotiate with each other. The NN platform argues that such negotiation is always harmful to the consumer and must be prevented at all costs. You essentially agree with them on this point, and only quibble about the need to prevent it with regulation.

    All they have to do to win the argument is to show that negotiation is taking place, and then we all lose. That's the problem I noted in my first comment on your paper – the point you concede is the critical one. Negotiation is good, we don't need a wall of separation between end-users and network providers.

  • http://www.tc.umn.edu/~leex1008 Tim Lee

    OK, I stand corrected. I quoted one prominent network engineer who found end-to-end prioritization impractical.

    I think you're choosing a hyper-technical meaning of “network neutrality” that doesn't accurately reflect the concerns of actual network neutrality advocates. I don't entirely blame you for this, since they seem to have difficulty agreeing among themselves about what they mean, but I also don't think you're likely to convince anyone by throwing up an impenetrable wall of technical jargon.

  • http://bennett.com/blog Richard Bennett

    Right, when we're discussing the regulations that should surround a technology, the last thing we want to do is make an effort to understand how that technology actually works. Best to hide behind ideology and let the technology fend for itself.

    You betcha.

  • http://www.techliberation.com Adam Thierer

    Richard… I encourage you to put together a mega-post on your blog putting some meat on the bones of what you have said here. I think this is a very interesting discussion you guys have going here — although I do wish we could keep it more civil — and I would love to get even more background.

  • http://bennett.com/blog Richard Bennett

    That's not a bad idea at all. If we're ever going to get serious about regulating the Internet, as I think we must, we'll have to cut through the e2e fog and get to the essentials of the matter. This is probably a good time to open that discussion.

  • Gus

    What vitriol… sheesh. There's more than one sphere in which any idea must survive. The technical details of how things work speak to the question of “Can we?” There's another important question: “Should we?” To answer that we need to decide what we value the most; then the technical details become relevant. I don't participate in file-sharing except when some product I use for other purposes uses it to distribute content (certain prominent games come to mind). Yet I still feel that it's not right for the providers to get me coming and going. It's just double dipping and hiding the costs from the consumer. I'm willing to pay for bandwidth twice (once for my connection, once for a business's connection speed as part of the cost of goods), but I don't think I should have to pay four times. (Add in fees to the cost of goods for all the people who want me to reach them quickly, and add in fees I might pay so that anyone can get to me quickly, not just my provider's customers!)

    I just don't like the idea of needing to pay for a service on the net, and then having to add another item/region/whatever to my service plan with my provider so it can get to me from some other segment of the internet (which is where this leads).

    My values are this: I want my costs up front, and one time. I'm fine with paying for a level of service for a specific technology (such as VOIP), but once I do I don't want to have to keep adjusting things when my friends move or change their provider. Pay for the service, not pay for whom I get to talk to using the service.

    (And I've used VOIP as an example, but you can extend this to any technology, and “I” could probably be replaced by any individual or a business; I suspect that unless they own part of, or are paid by, a service provider they'll agree.)

  • http://gathman.org/vitae Stuart Gathman

    One thing that seems to be missed is that prioritizing internet packets is not the problem. *Where* the packets are prioritized and who makes the decision is the key. I envision an internet where ISPs support QoS, but do *not* decide which packets are high priority. They simply charge more for higher priorities. Thus, consumers would have an “optimize VOIP” setting on their consumer router that would give real-time priority to their VOIP packets – reducing dropout (assuming the other end does so as well), but raising their ISP bill. Linux hackers would create their own complex rules in iptables (this can be done now – but ISPs don't honor QoS flags from the customer). A consumer that wants faster web browsing can press the “Turbo” button on their router – for faster browsing and higher bills.
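    A toy model of what Stuart describes, sketched in Python (the names and numbers are purely illustrative, not a real router's API): the subscriber marks packets, and the network's scheduler simply honors the marking rather than deciding priorities itself.

        import heapq

        EF, BEST_EFFORT = 46, 0  # DSCP values: Expedited Forwarding vs. default

        class PriorityLink:
            """Strict-priority queue: higher DSCP drains first, FIFO within a class."""

            def __init__(self):
                self._queue, self._seq = [], 0

            def enqueue(self, dscp, payload):
                heapq.heappush(self._queue, (-dscp, self._seq, payload))
                self._seq += 1

            def transmit(self):
                while self._queue:
                    _, _, payload = heapq.heappop(self._queue)
                    print("sent:", payload)

        link = PriorityLink()
        link.enqueue(BEST_EFFORT, "bulk download chunk")
        link.enqueue(EF, "VoIP frame")  # the user pressed "optimize VOIP"
        link.transmit()  # the VoIP frame goes out first; billing could key off EF volume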

  • http://gathman.org/vitae Stuart Gathman

    Hmm, what I missed at first is that “e2e” is what I was asking for. That buzzword seems to mean that end users and suppliers (end points) choose priority – not the ISP. It is highly annoying (or worse) when the ISP “prioritizes” your traffic for you without your consent (other than a blanket prioritization of all your traffic without respect to content – say because you went beyond your bandwidth limit and haven't paid to upgrade yet).

  • Brett Glass

    It's unclear that the transaction costs involved in “express delivery” of specific content are onerous at all. Somehow, if I order a book from Amazon.com, I have no problem specifying UPS Ground (or UPS Chopped, or UPS Sliced), or FedEx 2-day delivery, or FedEx overnight delivery. Likewise, many vendors (including Amazon) offer specials in which they eat part of the cost of shipping.

    Tiered service (analogous to different shipping options), or having the content provider pay for some or all of the extra bandwidth necessary to deliver the content quickly (analogous to a “shipping special”), is perfectly reasonable and isn't “non-neutral” at all. In fact, prohibiting these practices would be “non-neutral,” because it would bias the Net toward a particular business model that favors certain parties at the expense of others.

  • http://bennett.com/blog Richard Bennett

    Your values are well and good, but you're missing a very relevant fact when you whine about broadband billing: your residential broadband bill is as low as it is because you've agreed to share resources with your neighbors. If you're not being a good neighbor, you're not playing by the rules, and for that you get whacked with a higher bill.

    That's the relevant point on values.

  • Marcus Brubaker

    The key difference here is YOU decide to pay for faster service from UPS. Imagine if you paid for overnight delivery. The UPS truck gets to the airport to send your package on its way and is stopped at the gate. A security guard says “Sorry. FedEx just paid us extra for priority access to the gate. You'll have to wait while the 20 FedEx trucks behind you go first unless you want to pay for express access. By the way, the airport will not reschedule the departure time of your plane, so if you don't pay now your cargo won't make the flight.” UPS may take the hit on cost, but your next shipment will cost more. This wasn't done in response to increased traffic or backups at the gate. It's just a way to get more money. At this point someone might say “Too bad, UPS. That's the cost of doing business.” Now imagine FedEx signs an exclusive priority access deal that causes all other trucks to be forced to wait at the gate for two hours even if there is no FedEx truck waiting. UPS isn't given the option to just pay more for better service and there are no other airports nearby. This is the concern I have.

  • Brett Glass

    The absurd, far-flung scenario which you describe above is not only not analogous to what happens when ISPs manage Internet traffic — it does not even occur in real life in the shipping of packages! UPS, Federal Express, and even the US Postal Service offer expedited service for an additional fee. This does not affect the service guarantees for non-expedited service.
