November 2008

During my recent debate with Jonathan Zittrain about his book The Future of the Internet, I argued that there was just no way to bottle up digital generativity and that he had little to fear in terms of the future of the Net or digital devices being “sterile, tethered,” and closed. I noted that the iPhone — which Jonathan paints as the villain in his drama — is the perfect example of how people will make a device more generative even when the manufacturers didn’t originally plan for it or allow it. I went so far as to joke that there were countless ways to hack your iPhone now, so much so that I wouldn’t be surprised if one day soon our iPhones would be taking out the trash and mowing our lawns!

Well, I was engaging in a bit of hyperbole there, but I am consistently amazed by what people can make their digital devices do. Witness the fact that some enterprising soul has found a way to turn the iPhone into a flute! Better yet, they have trained a group to play “Stairway to Heaven” using that application!! It’s enough to make one wonder: How long before someone converts the iPhone into a bong?

[Uttered to JZ in my best stoner voice…] “Seriously, dude, generativity is alive and well. Now chill, and pass the iBong.”

A few more people have weighed in on my new paper. I tend to think that if I’m angering both sides of a given debate, I must be doing something right, so I’m going to take the fact that fervent neutrality opponent Richard Bennett hated the study as a good sign.

Others have been more positive. Mike Masnick has an extremely generous write-up over at Techdirt. And at Ars Technica, my friend Julian has the most extensive critique so far.

I thought most of it was spot on, but this passage seemed worth a response:

Lee thinks that the history of services like America Online shows that “walled garden” approaches tend to fail because consumers demand the full range of content available on the “unfettered Internet.” But most service providers already offer “tiered” service, in that subscribers can choose from a variety of packages that provide different download speeds at different prices. Many of these include temporary speed “boosts” for large downloads.

If many subscribers are demonstrably willing to accept slower pipes than the network can provide, companies providing streaming services that require faster connections may well find it worth their while to subsidize a more targeted “boost” for those users in order to make their offerings more attractive. In print and TV, we see a range of models for divvying up the cost of getting content to the audience—from paid infomercials to ad-supported programming to premium channels—and it’s never quite clear why the same shouldn’t pertain to online.

The key point here is the relative transaction costs of managing a proprietary network versus an open one. As we’ve learned from the repeated failure of micropayments, financial transactions are surprisingly expensive. The infrastructure required to negotiate, meter, and bill for connectivity, content, or other services means that overly complicated billing schemes tend to collapse under their own weight. Likewise, proprietary content and services carry managerial overhead that open networks don’t. You have to pay a lot of middle managers, salesmen, engineers, lawyers, and the like to do the sorts of things that happen automatically on an open network.

Now, in the older media Julian mentions, this overhead was simply unavoidable. Newspaper distribution cost a significant amount of money, and so newspapers had no choice but to charge their customers, pay their writers, sign complex deals with their advertisers, etc. Similarly, television stations had extremely scarce bandwidth, and so it made sense to expend resources to make sure that only the best content went on the air.

The Internet is the first medium where content can go from a producer to many consumers with no human beings intermediating the process. And because there are no human beings in between, the process is radically more efficient. When I visit the New York Times website, I’m not paying the Times for the content and they’re not paying my ISP for connectivity. That means that the Times’s web operation can be much smaller than its subscription and distribution departments.

In a world where these transaction costs didn’t exist, you’d probably see the emergence of the kinds of complex financial transactions Julian envisions here. But given the existence of these transaction costs, the vast majority of Internet content creators will settle for free, best-effort connectivity rather than going to the trouble of negotiating separate agreements with dozens of different ISPs. Which means that if ISPs only offer high-speed connectivity to providers who pay to be a part of their “walled garden,” the service will wind up being vastly inferior to (and as a consequence much less lucrative than) what it would be if they offered full-speed access to the whole Internet.
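
To put a rough number on the micropayments point, here is a toy back-of-the-envelope sketch in Python. The fee and price figures are invented for illustration (they come from no billing system or study); the point is only that a fixed cost per billing event swamps tiny payments, while a flat bill amortizes it.

# Toy illustration, with hypothetical numbers, of why per-transaction overhead
# sinks micropayment-style billing schemes.

def net_revenue(price_per_item, transactions, fixed_fee_per_txn, overhead_rate):
    """Revenue left after fixed per-transaction costs and proportional overhead."""
    gross = price_per_item * transactions
    txn_costs = fixed_fee_per_txn * transactions   # billing, settlement, support
    overhead = overhead_rate * gross               # metering, negotiation, disputes
    return gross - txn_costs - overhead

# Selling a million 5-cent articles with 25 cents of fixed cost per sale loses
# money at any volume:
print(net_revenue(0.05, 1_000_000, 0.25, 0.10))    # -205000.0

# Bundling a month of usage into one flat $50 bill amortizes the fixed cost:
print(net_revenue(50.00, 1, 0.25, 0.10))           # 44.75

Nothing hinges on the exact figures; any appreciable fixed cost per transaction produces the same asymmetry.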

Let me make a few final points about Steve Schultze’s network neutrality post. Steve writes:

The last-mile carrier “D” need not block site “A” or start charging everyone extra to access it; it need only degrade (or maintain current) quality of service to nascent A (read: Skype, YouTube, BitTorrent) to the point that it is less usable. This is neither a new limitation (from the consumer’s perspective) nor an explicit fee. If a user suddenly lost all access to 90% of the internet, the last-mile carrier could not keep their business (or at least price). But, discrimination won’t look like that. It will come in the form of improving video services for providers who pay. It will come in the form of slightly lower quality Skyping which feels ever worse as compared to CarrierCrystalClearIP. It will come in the form of [Insert New Application] that I never find out about because it couldn’t function on the non-toll internet and the innovators couldn’t pay up or were seen as competitors.

I think there are several problems with this line of argument. First, notice that the kind of discrimination he’s describing here is much more modest than the scenarios commonly described by network neutrality activists. Under the scenario he’s describing, all current Internet applications will continue to work for the foreseeable future, and any new Internet applications that can work with current levels of bandwidth will work just fine. If this is how things are going to play out, we’ll have plenty of time to debate what to do about it after the fact.

But this isn’t how things have been playing out. If Steve’s story were true, we would expect the major network providers to be holding broadband speeds constant. But there’s no sign that they’re doing that. To the contrary, Verizon is pouring billions of dollars into its FiOS service, and Comcast has responded by upgrading to DOCSIS 3.0. Maybe we’ll begin to see major providers shift away from offering faster Internet access toward offering proprietary network services instead, but I don’t see any evidence of that.

Also, it’s worth remembering that many broadband providers already have a proprietary high-bandwidth video service. It’s called cable television and it’s explicitly exempted from network neutrality rules by Snowe-Dorgan. If the worry is that Comcast will choose to devote its bandwidth to a proprietary digital video service rather than providing customers with enough bandwidth to download high-def videos from the providers of their choice, that ship sailed a long time ago, and no one is seriously advocating legislation to change it. Note also that Comcast’s 250 GB bandwidth cap would not have been illegal under Snowe-Dorgan. Network neutrality legislation just doesn’t address this particular concern.

The reason broadband providers are likely to continue offering fast, unfettered access to the Internet is that consumers are going to continue demanding it. Providers may offer various proprietary digital services, but those services just aren’t going to be a viable replacement for unfettered Internet access, any more than AOL and Compuserve were viable replacements for the Internet of the 1990s. Broadband providers are ultimately in business to make money, and refusing to offer high-speed, unfettered Internet access means leaving money on the table.

Finally, this paragraph seems to misunderstand the concept of settlement-free peering:

Lee makes the argument that the current norm of “settlement-free” peering in the backbone of the internet will restrict last-mile providers’ ability to discriminate and to create a two-tiered internet because they will be bound by the equal treatment terms of the agreements. This is not supported by practical evidence, given the fact that none of the push-back against existing discriminatory practices has come from network peers. It is also not supported by sound economic reasoning. It is certainly not in backbone-provider E’s business interest to raise prices for all of its customers (an inevitable result). But, assuming E does negotiate for equal terms, the best-case scenario is that E becomes a more expensive “premium” backbone provider by paying monopoly rents to last-mile provider D, while F becomes a “budget” backbone provider by opting out (and hence attracts the “budget” customers).

“Settlement-free” means that no money changes hands. If D and E are peers [this example assumes that D is a “last mile” backbone provider like Verizon and E and F are competitive “tier 1” providers such as Level 3 or Global Crossing], that by definition means that E pays D nothing to carry its traffic, and vice versa. So I don’t understand what Steve means by “negotiate for equal terms.” If D had the ability to charge E for interconnection, it would be doing so already. The fact that they are peering suggests that D does not believe it has enough leverage to get any money out of E. If E is interconnecting with D for free, it’s hard to see how F could undercut that.
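
To make the peering point concrete, here is a toy model in Python. It is entirely my own illustration, not something from the paper or from Steve’s post, and the providers and prices are invented. It simply encodes the definition above: under settlement-free peering the price is zero in both directions, so there is no positive price for a “budget” backbone to undercut.

# Toy model of interconnection pricing (all relationships and prices invented).
# D is a last-mile provider; E and F are backbone providers that peer with D.

PRICES = {
    # (payer, payee): dollars per Mbps per month
    ("E", "D"): 0.0,   # settlement-free peering: no money changes hands
    ("D", "E"): 0.0,
    ("F", "D"): 0.0,   # F likewise peers with D for free
    ("D", "F"): 0.0,
    ("X", "E"): 4.0,   # by contrast, a small network X buying transit from E pays
}

def monthly_cost(payer, payee, mbps):
    """What `payer` pays `payee` to exchange `mbps` of traffic."""
    return PRICES.get((payer, payee), 0.0) * mbps

# E already reaches D's customers at zero cost...
print(monthly_cost("E", "D", 10_000))   # 0.0
# ...so F cannot position itself as a cheaper path to D's customers; there is no
# positive price to undercut unless D first converts peering into paid transit.
print(monthly_cost("F", "D", 10_000))   # 0.0

The “premium backbone versus budget backbone” scenario only gets off the ground if that conversion happens, which is exactly the leverage the paragraph above argues D does not have.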

On Wednesday I responded to the first half of Steve Schultze’s critique of my network neutrality paper, which focused on my historical argument about the dangers of unintended consequences. Let me now turn to the second half of his post, which I regard as closer to the core of my paper’s argument.

One of the frustrating things about the network neutrality debate is that every proponent of network neutrality regulation seems to have a different story about the kind of ISP misbehavior he or she is most concerned about. Some are worried about ISPs targeting particular applications to be degraded or blocked. Others are worried that ISPs will use the threat of blockage to force website operators to pay for access to their customers. Still others believe that ISPs will use subtle traffic shaping schemes to advantage their own content. Still others believe that ISPs will construct a “fast lane” and relegate the rest of the web to a pipe that never gets much faster than today’s Internet connections. Still others are worried about the potential for ISP censorship.

I’ve found that any time I take one of these ISP strategies seriously and put forth an argument about why it’s unlikely to be feasible or profitable, the response from supporters of regulation is often to concede that the particular scenario I’ve chosen is not realistic (terms like “straw man” sometimes come up), but that I haven’t accounted for some other scenario that’s much more likely to occur. Now, the Internet is a big, complicated place, and so it’s not possible to enumerate every conceivable way that an ISP could screw around with traffic and prove that none of them could ever be profitable. In my paper, I tried to pick the scenarios that are most commonly discussed and describe why I think they are likely to be poor business strategies for network providers, but I didn’t—and can’t—analyze the favorite scenario of every single network neutrality activist.

But here’s a pattern that I think is revealing: supporters of regulation tend to describe things in apocalyptic terms. We’re told that if regulations aren’t enacted soon, online innovation, competition, and maybe even freedom of speech are in jeopardy. It’s claimed that the stakes are too high to wait and see if actual problems develop. Yet I’ve found that when you get down to specifics, the savvier advocates of regulation concede that in fact the stakes aren’t really that high. For example, I have yet to find anyone willing to seriously defend Yochai Benkler’s claim that we should be worried about network owners censoring online speech.

Rather, the response is invariably to shift the focus to more plausible but far less significant infringements of network neutrality: isolated incidents like the Comcast/BitTorrent controversy rather than comprehensive plans to transform the Internet into AOL 2.0.

Yet advocates of regulation tend to get sloppy about these distinctions. They seem to believe that if an ISP has the power to block a single website or application, then it necessarily has the power to undertake much more ambitious discriminatory schemes. If Comcast can block BitTorrent today, it can censor liberal blogs or charge websites outrageous fees tomorrow.

So for example, take this passage from Steve’s post:

There’s news today that the Department of Justice (DOJ) is imposing fines on three leading electronics manufacturers — LG Display Co. Ltd., Sharp Corp. and Chunghwa Picture Tubes Ltd. — “for their roles in conspiracies to fix prices in the sale of liquid crystal display (LCD) panels.” According to the DOJ’s press release, of the $585 million in fines, LG will pay $400 million, the second highest criminal fine ever imposed by the DOJ’s Antitrust Division.

Regardless of the merits of the DOJ’s case, I have to ask: Has there ever been a worse attempt at fixing prices in the entire history of price fixing? After all, have you looked at flat-screen prices lately? They do nothing but fall, fall, fall — fast! Here are some numbers from Steve Lohr’s New York Times article about the DOJ case:

The LCD business is a $100-billion-a-year market and growing, but prices are falling relentlessly. Recently, panel prices have often been cut in half each year, a downward trajectory even steeper than in other technology markets known for steady price pressure, like those for computer chips and hard drives. In the last six months alone, the price of a 15.4-inch panel for a notebook PC has dropped to $63, from $97, and a 32-inch LCD for a television has gone to $223, from $321, according to iSuppli, a market research firm. The price-fixing conspiracy, industry analysts said, was an effort to slow the speed of price declines. “These companies were trying to get a toehold to protect profits in a very difficult market,” said Richard Doherty, director of research at Envisioneering, a technology consulting firm.

Yeah, well, that “toehold” didn’t protect squat. And how could it? It’s not like these are the only three companies in the LCD business. And you’ll forgive those of us who only have plasmas or projectors in our homes for wondering what the big deal is (although I am certainly aware that LCDs are the primary technology for smaller flat-screen displays in computer monitors, cell phones, and other handhelds).

But hey, I’m sure the DOJ’s effort was worth it at some level. Some lucky handful of consumers will probably get a check for 65 cents once the class action dust settles on this one. In the meantime, if there is some sort of Antitrust Hall of Fame out there, I hereby nominate LG, Sharp, and Chunghwa for the “Worst Price Fixers in History” award.

This catfight between Ron Rosenbaum of Slate and Jeff Jarvis of Buzz Machine about the future of journalism in the Internet Age is quite a heated affair. But what I found most interesting about it is that it reflects one element of the Net “optimist-pessimist” divide that I have been writing about here recently. Specifically, it touches on the divide over whether the Internet and digital technologies are reshaping the media marketplace and the field of journalism for better or for worse.

Rosenbaum is playing the pessimist role here and asking some sharp questions about the advice being dished out by “Web futurists” and “new-media gurus” as it relates to reversing the decline of the journalism profession. Rosenbaum says that the problem with Jarvis is that:

he’s become increasingly heartless about the reporters, writers, and other “content providers” who have been put out on the street by the changes in the industry. Not only does he blame the victims, he denies them the right to consider themselves victims. They deserve their miserable fate — and if they don’t know it, he’ll tell them why at great length. Sometimes it sounds as if he’s virtually dancing on their graves.


My new network neutrality paper has prompted a cogent criticism from Steve Schultze at Harvard’s Berkman Center. Since Steve has helpfully broken his critique up into two parts, I’ll deal with them in turn. Here’s his first point:

The gating question is whether or not the elements of the Interstate Commerce Commission that led to the inefficiencies that Lee claims are at all related to the non-discriminatory language that he claims connects the two. If and only if the answer is “yes,” then a responsible analysis would consider whether or not the markets are relatively analogous, whether or not the administrative agencies tend toward the same failures, and whether the costs of regulation truly outweigh the benefits. In short, it is not enough to simply assert that net neutrality smells like the ICA, therefore it is doomed to fail.

I think this rather badly misunderstands the thrust of my argument with respect to the ICC (and the CAB and FCC). I’m absolutely not claiming that enacting network neutrality regulations will lead to exactly the same policy failures that befell the ICC. This would be a silly thing to argue, especially since (we hope) policymakers learn from their mistakes and take steps to avoid the precise mistakes they made in the past.

So my point is not that “net neutrality smells like the ICA, therefore it is doomed to fail.” Let me re-state my point this way: imagine putting yourself in the shoes of an average member of Congress in 1887. You’re worried about the monopolistic behavior of the railroads, and you’re about to vote on legislation that will require rates to be reasonable, non-discriminatory, and so forth. You would be extremely surprised to learn that the commission whose creation you just supported would wind up working primarily to limit competition and line the pockets of incumbent railroads. That’s not what the legislation said it would do, that’s not what you intended to accomplish, but it turns out that’s what actually did happen.

Now imagine it’s 2009, and you’re a member of Congress deciding whether to vote on legislation. You’re worried about the monopolistic behavior of the telcos, and you’re about to vote on legislation that will require their routing policies to be reasonable, non-discriminatory, and so forth. My point is simply that there’s a significant probability that the practical effect of that legislation will be very different from what you or the legislation’s authors intended. And that history tells us that the regulatory process has a systematic bias in favor of well-organized incumbents and against poorly-organized consumers. And so if you’re going to give a regulatory agency more power, you’d better be absolutely certain you know what you’re doing, because any mistakes are likely to benefit industry incumbents at the expense of consumers.

What specific problems will we have? Well, it’s hard to say. That’s why it’s called “unintended consequences.” If we could predict exactly how legislation would be applied, the argument for regulation would be a lot stronger. My point is that precisely because it’s hard to predict how regulation will be applied, and because industry incumbents have more influence than the rest of us, we shouldn’t be too cavalier about giving regulators more power.

With that caveat in mind, I do point to some aspects of popular network neutrality proposals that could lead to problems. Most importantly, I have yet to see anyone produce a clear and unambiguous definition of network neutrality. Indeed, network neutrality advocates disagree among themselves about such issues as prioritization and DNS servers. Legal ambiguity creates a variety of problems, including broad discretion in the hands of regulators and greater difficulty for private-sector actors in determining what the law requires of them.

But to demand that I predict exactly what problems network neutrality legislation will cause is to miss the point. One of the biggest reasons we should be reluctant to enact network neutrality regulation is that legislation often has unintended consequences. Now, obviously that doesn’t mean that regulation is never a good idea, but it does mean that we should regard regulation as a last resort to deal with clear problems we can’t solve in other ways. It’s not a good way to deal with the kind of highly speculative threats that are the bread and butter of network neutrality activists.

Tim Lee’s long anticipated Cato Institute Policy Analysis has been released today.

The Durable Internet: Preserving Network Neutrality without Regulation is a must-read for people on both sides of the debate over network neutrality regulation.

What I like best about this paper is how Tim avoids joining one “team” or another. He evenly gives each side its due – each side is right about some things, after all – and calls out the specific instances where he thinks each is wrong.

Tim makes the case for treating the “end-to-end principle” as an important part of the Internet’s fundamental design. He disagrees with the people who argue for a network with “smarter” innards, and he credits neutrality advocates with seeking the best engineering for the network. But they are wrong, he argues, to believe that the network is fragile or susceptible to control. The Internet’s end-to-end architecture is durable, even if it is not absolute in every instance.

Tim has history lessons for those who believe that regulatory control of network management will have salutary effects. Time and time again, regulatory agencies have fallen into service of the industries they regulate.

“In 1970,” Tim tells us, “a report released by a Ralph Nader group described the [Interstate Commerce Commission] as ‘primarily a forum at which transportation interests divide up the national transportation market.'” Such is the likely fate of the Internet were management of it given to regulators at the FCC and their lobbyist friends at Verizon, AT&T, Comcast, and so on.

This paper has something for everyone, and will be a reference work as the network neutrality discussion continues. Highly recommended: The Durable Internet: Preserving Network Neutrality without Regulation.

ZDNet ran a story last week reporting that security guru Bruce Schneier slams the US-VISIT program, which collects biometrics from people entering the country, saying that it has “zero benefit.”

I respect and like Bruce – he will be a participant in a major counterterrorism strategy conference we are having at the Cato Institute in January – but I have to voice my disagreement with him on this score. My belief is that border biometrics have an extremely small benefit – a benefit that rounds to zero, and one that is more than cancelled out by the costs. But not zero.


NebuAd Lawsuit


I don’t have an opinion about the specific legal issues involved, but I think the general approach of this lawsuit against NebuAd is the right one. Consumers have a reasonable expectation of privacy when they sign up for Internet service. As it happens, I was a Charter customer during the last three years, and I don’t remember them disclosing that they would be sharing the contents of my Internet communications with a third party for advertising purposes.