Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Why Be an Internet Optimist?

by Timothy B. Lee on November 24, 2008 · 19 comments

Kevin Donovan has a thoughtful post about “The Durable Internet.” He asks:

Now, there are examples of trickle down and mass rebellion. Tim does a nice job in “The Durable Net” of exploring these and does the most to bring me closer to faith in lay users. He cites the Digg rebellion against censorship and the fight for open IM protocols. But in my observation, very few non-technical folks use Adium or the other IM unifiers. In fact, iChat and AIM are dominant defaults. As for the Digg example, the users of Digg tend to be technically inclined and the cost of posting a hex code and pushing “Digg” are so minimal that, yes, even my mother could do it (though I doubt she would).

It is possible that the select few will be motivated enough to free their own iPhone or create tools to detect violations of the end-to-end principle, but I worry that the critical mass will not be reached. Although 40% of Saudis are disturbed by Internet censorship, I’d be willing to bet that 40% do not nor can they make use of Tor or Psiphon or the other anti-censorship technologies. These are the people who would suffer from a non-generative, non-neutral future if the technical few do not successfully defend their interests.

I’m mostly thinking out loud, so I’d love to hear your thoughts: are users capable of protecting their interests?

In my paper, I go into a lot of detail with specific examples in which open technologies persevered in the face of organized resistance. But let me step back and make a more general point about the underlying argument of that section of the paper: In a nutshell, we should be optimistic about the future of open platforms for the same reason we’re in favor of open platforms in the first place. Put simply, they work better. Open platforms harness the distributed knowledge of millions of people and produce ecosystems that are greater than the sum of their parts. Closed platforms are hampered by the limitations of central planning, and as a result they tend to be sterile, inflexible, and incapable of keeping up with developments on more open platforms.
Continue reading →

A fantastic post from Matt Yglesias:

The basic business outlook is very focused on the key role of the executive. Good, profitable, growing firms are run by brilliant executives. And the ability of the firm to grow and be profitable is evidence of its executives’ brilliance. And profit ultimately stems from executive brilliance. This is part of the reason that CEO salaries need to keep escalating — recruiting the best is integral to success. The leaders of large firms become revered figures. Not only important because, in practice their decisions are significant. But they become celebrities and dispensers of advice and wisdom. Their success stems from overall brilliance, and thus they must have enlightening things to say on a variety of subjects.

The thing about this is that if this were generally true — if the CEOs of the Fortune 500 were brilliant economic seers — then it would really make a lot of sense to implement socialism. Real socialism. Not progressive taxation to finance a mildly redistributive welfare state. But “let’s let Vikram Pandit and Jeff Immelt centrally plan the economy — after all, they’re really brilliant!”

But in the real world, the point of markets isn’t that executives are clever and bureaucrats are dimwitted. The point is that nobody is all that brilliant. Nobody really has a reliable method of surveying the scene and accurately gauging What Is To Be Done. But in a market economy, we don’t need anyone to have such a method. Instead, a bunch of people get to do some inquiries into the issue and then give it their best shot. And the ones who are wrong will fail. And the ones who are right will succeed.

This is spot-on, and it’s a theme I’ve blogged about in the past. One of the reasons Matt’s point isn’t more obvious, I think, is that most industries are relatively homogeneous, so it’s hard to make an apples-to-apples comparison of different forms of industrial organization.

One of the things that makes the software industry interesting is that you have genuine institutional experimentation. You have 2-person startups toppling multi-billion-dollar firms. You have free software projects embarrassing proprietary software companies with budgets three orders of magnitude larger. You have venture capital firms investing tens of millions of dollars and angel investors investing tens of thousands. And so we get some real data on how efficient different forms of economic organization are. And it turns out that the centralized, bureaucratic ones tend to be massively wasteful relative to other ways of organizing software development.

Of course, the software industry is special in part because its primary output is made of infinitely reproducible bits. You need a certain minimum of capital to start a car company or a bank. But still, it’s worth keeping in mind that the inefficiency of central planning isn’t limited to the government; the government just happens to be the largest bureaucracy with the least competition. Other organizations exhibit the same problems in proportion to their size and lack of competition. The larger an organization is, the more dysfunctional it’s likely to be and the harder it will be to reform. Which is one of the many reasons I hope Congress lets GM collapse under its own weight.

Boyko on the Durable Internet

by Timothy B. Lee on November 19, 2008 · 4 comments

Brian Boyko at Network Performance Daily has a thorough interview with yours truly about The Durable Internet. Brian asked some really sharp questions and helped to flesh out some of the thornier aspects of my argument. Check it out.

Matt Yglesias gets through the TSA checkpoint with a Swiss Army Knife and the agents don’t bat an eye. But God help him if he tries to bring a can of shaving cream that’s more than 3 oz onto an airplane.

A few more people have weighed in on my new paper. I tend to think that if I’m angering both sides of a given debate, I must be doing something right, so I’m going to take the fact that fervent neutrality opponent Richard Bennett hated the study as a good sign.

Others have been more positive. Mike Masnick has an extremely generous write-up over at Techdirt. And at Ars Technica, my friend Julian has the most extensive critique so far.

I thought most of it was spot on, but this seemed worth commenting on:

Lee thinks that the history of services like America Online shows that “walled garden” approaches tend to fail because consumers demand the full range of content available on the “unfettered Internet.” But most service providers already offer “tiered” service, in that subscribers can choose from a variety of packages that provide different download speeds at different prices. Many of these include temporary speed “boosts” for large downloads.

If many subscribers are demonstrably willing to accept slower pipes than the network can provide, companies providing streaming services that require faster connections may well find it worth their while to subsidize a more targeted “boost” for those users in order to make their offerings more attractive. In print and TV, we see a range of models for divvying up the cost of getting content to the audience—from paid infomercials to ad-supported programming to premium channels—and it’s never quite clear why the same shouldn’t pertain to online.

The key point here is the relative transaction costs of managing a proprietary network versus an open one. As we’ve learned from the repeated failure of micropayments, financial transactions are surprisingly expensive. The infrastructure required to negotiate, meter, and bill for connectivity, content, or other services means that overly complicated billing schemes tend to collapse under their own weight. Likewise, proprietary content and services have managerial overhead that open networks don’t. You have to pay a lot of middle managers, salesmen, engineers, lawyers, and the like to do the sorts of things that happen automatically on an open network.

Now, in the older media Julian mentions, this overhead was simply unavoidable. Newspaper distribution cost a significant amount of money, and so newspapers had no choice but to charge their customers, pay their writers, sign complex deals with their advertisers, etc. Similarly, television stations had extremely scarce bandwidth, and so it made sense to expend resources to make sure that only the best content went on the air.

The Internet is the first medium where content can go from a producer to many consumers with no human beings intermediating the process. And because there are no human beings in between, the process is radically more efficient. When I visit the New York Times website, I’m not paying the Times for the content and they’re not paying my ISP for connectivity. That means that the Times‘s web operation can be much smaller than its subscription and distribution departments.

In a world where these transaction costs didn’t exist, you’d probably see the emergence of the kinds of complex financial transactions Julian envisions here. But given the existence of these transaction costs, the vast majority of Internet content creators will settle for free, best-effort connectivity rather than going to the trouble of negotiating separate agreements with dozens of different ISPs. Which means that if ISPs only offer high-speed connectivity to providers who pay to be a part of their “walled garden,” the service will wind up being vastly worse (and as a consequence much less lucrative) than it would be if they offered full-speed access to the whole Internet.

Let me make a few final points about Steve Schultze’s network neutrality post. Steve writes:

The last-mile carrier “D” need not block site “A” or start charging everyone extra to access it, it need only degrade (or maintain current) quality of service to nascent A (read: Skype, YouTube, BitTorrent) to the point that it is less useable. This is neither a new limitation (from the consumer’s perspective) nor an explicit fee. If a user suddenly lost all access to 90% of the internet, the last-mile carrier could not keep their business (or at least price). But, discrimination won’t look like that. It will come in the form of improving video services for providers who pay. It will come in the form of slightly lower quality Skyping which feels ever worse as compared to CarrierCrystalClearIP. It will come in the form of [Insert New Application] that I never find out about because it couldn’t function on the non-toll internet and the innovators couldn’t pay up or were seen as competitors.

I think there are several problems with this line of argument. First, notice that the kind of discrimination he’s describing here is much more modest than the scenarios commonly described by network neutrality activists. Under the scenario he’s describing, all current Internet applications will continue to work for the foreseeable future, and any new Internet applications that can work with current levels of bandwidth will work just fine. If this is how things are going to play out, we’ll have plenty of time to debate what to do about it after the fact.

But this isn’t how things have been playing out. If Steve’s story were true, we would expect the major network providers to be holding broadband speeds constant. But there’s no sign that they’re doing that. To the contrary, Verizon is pouring billions of dollars into its FiOS service, and Comcast has responded by upgrading to DOCSIS 3.0. Maybe we’ll begin to see major providers shift away from offering faster Internet access and toward offering proprietary network services, but I don’t see any evidence of that.

Also, it’s worth remembering that many broadband providers already have a proprietary high-bandwidth video service. It’s called cable television and it’s explicitly exempted from network neutrality rules by Snowe-Dorgan. If the worry is that Comcast will choose to devote its bandwidth to a proprietary digital video service rather than providing customers with enough bandwidth to download high-def videos from the providers of their choice, that ship sailed a long time ago, and no one is seriously advocating legislation to change it. Note also that Comcast’s 250 GB bandwidth cap would not have been illegal under Snowe-Dorgan. Network neutrality legislation just doesn’t address this particular concern.

The reason broadband providers are likely to continue offering fast, unfettered access to the Internet is that consumers are going to continue demanding it. Providers may offer various proprietary digital services, but those services just aren’t going to be a viable replacement for unfettered Internet access, any more than AOL and Compuserve were viable replacements for the Internet of the 1990s. Broadband providers are ultimately in business to make money, and refusing to offer high-speed, unfettered Internet access means leaving money on the table.

Finally, this paragraph seems to misunderstand the concept of settlement-free peering:

Lee makes the argument that the current norm of “settlement-free” peering in the backbone of the internet will restrict last-mile providers’ ability to discriminate and to create a two-tiered internet because they will be bound by the equal treatment terms of the agreements. This is not supported by practical evidence, given the fact that none of the push-back against existing discriminatory practices has come from network peers. It is also not supported by sound economic reasoning. It is certainly not in backbone-provider E’s business interest to raise prices for all of its customers (an inevitable result). But, assuming E does negotiate for equal terms, the best-case scenario is that E becomes a more expensive “premium” backbone provider by paying monopoly rents to last-mile provider D, while F becomes a “budget” backbone provider by opting out (and hence attracts the “budget” customers).

“Settlement-free” means that no money changes hands. If D and E are peers [this example assumes that D is a “last mile” provider like Verizon and E and F are competitive “tier 1” backbone providers such as Level 3 or Global Crossing], that by definition means that E pays D nothing to carry its traffic, and vice versa. So I don’t understand what Steve means by “negotiate for equal terms.” If D had the ability to charge E for interconnection, it would be doing so already. The fact that they are peering suggests that D does not believe it has enough leverage to get any money out of E. If E is interconnecting with D for free, it’s hard to see how F could undercut that.

On Wednesday I responded to the first half of Steve Schultze’s critique of my network neutrality paper, which focused on my historical argument about the dangers of unintended consequences. Let me now turn to the second half of his post, which I regard as closer to the core of my paper’s argument.

One of the frustrating things about the network neutrality debate is that every proponent of network neutrality regulation seems to have a different story about the kind of ISP behavior he or she is concerned about. Some are worried about ISPs singling out particular applications for degradation or blocking. Others are worried that ISPs will use the threat of blockage to force website operators to pay for access to their customers. Still others believe that ISPs will use subtle traffic shaping schemes to advantage their own content. Still others believe that ISPs will construct a “fast lane” and relegate the rest of the web to a pipe that never gets much faster than today’s Internet connections. Still others are worried about the potential for ISP censorship.

I’ve found that any time I take one of these ISP strategies seriously and put forth an argument about why it’s unlikely to be feasible or profitable, the response from supporters of regulation is often to concede that the particular scenario I’ve chosen is not realistic (terms like “straw man” sometimes come up), but that I haven’t accounted for some other scenario that’s much more likely to occur. Now, the Internet is a big, complicated place, and so it’s not possible to enumerate every conceivable way that an ISP could screw around with traffic and prove that none of them could ever be profitable. In my paper, I tried to pick the scenarios that are most commonly discussed and describe why I think they are likely to be poor business strategies for network providers, but I didn’t—and can’t—analyze the favorite scenario of every single network neutrality activist.

But here’s a pattern that I think is revealing: supporters of regulation tend to describe things in apocalyptic terms. We’re told that if regulations aren’t enacted soon, online innovation, competition, and maybe even freedom of speech are in jeopardy. It’s claimed that the stakes are too high to wait and see if actual problems develop. Yet I’ve found that when you get down to specifics, the savvier advocates of regulation concede that in fact the stakes aren’t really that high. For example, I have yet to find anyone willing to seriously defend Yochai Benkler’s claim that we should be worried about network owners censoring online speech.

Rather, the response is invariably to shift the focus to more plausible but far less significant infringements of network neutrality: isolated incidents like the Comcast/BitTorrent controversy rather than comprehensive plans to transform the Internet into AOL 2.0.

Yet advocates of regulation tend to get sloppy about these distinctions. They seem to believe that if an ISP has the power to block a single website or application, then it necessarily has the power to undertake much more ambitious discriminatory schemes. If Comcast can block BitTorrent today, it can censor liberal blogs or charge websites outrageous fees tomorrow.

So for example, take this passage from Steve’s post:
Continue reading →

My new network neutrality paper has prompted a cogent criticism from Steve Schultze at Harvard’s Berkman Center. Since Steve has helpfully broken his critique up into two parts, I’ll deal with them in turn. Here’s his first point:

The gating question is whether or not the elements of the Interstate Commerce Commission that led to the inefficiencies that Lee claims are at all related to the non-discriminatory language that he claims connects the two. If and only if the answer is “yes,” then a responsible analysis would consider whether or not the markets are relatively analogous, whether or not the administrative agencies tend toward the same failures, and whether the costs of regulation truly outweigh the benefits. In short, it is not enough to simply assert that net neutrality smells like the ICA, therefore it is doomed to fail.

I think this rather badly misunderstands the thrust of my argument with respect to the ICC (and the CAB and FCC). I’m absolutely not claiming that enacting network neutrality regulations will lead to exactly the same policy failures that befell the ICC. This would be a silly thing to argue, especially since (we hope) policymakers learn from their mistakes and take steps to avoid the precise mistakes they made in the past.

So my point is not that “net neutrality smells like the ICA, therefore it is doomed to fail.” Let me re-state my point this way: imagine putting yourself in the shoes of an average member of Congress in 1887. You’re worried about the monopolistic behavior of the railroads, and you’re about to vote on legislation that will require rates to be reasonable, non-discriminatory, and so forth. You would be extremely surprised to learn that the commission whose creation you just supported would wind up working primarily to limit competition and line the pockets of incumbent railroads. That’s not what the legislation said it would do, and it’s not what you intended to accomplish, but it’s what actually happened.

Now imagine it’s 2009, and you’re a member of Congress facing a similar decision. You’re worried about the monopolistic behavior of the telcos, and you’re about to vote on legislation that will require their routing policies to be reasonable, non-discriminatory, and so forth. My point is simply that there’s a significant probability that the practical effect of that legislation will be very different from what you or the legislation’s authors intended. And that history tells us that the regulatory process has a systematic bias in favor of well-organized incumbents and against poorly organized consumers. And so if you’re going to give a regulatory agency more power, you’d better be absolutely certain you know what you’re doing, because any mistakes are likely to benefit industry incumbents at the expense of consumers.

What specific problems will we have? Well, it’s hard to say. That’s why it’s called “unintended consequences.” If we could predict exactly how legislation would be applied, the argument for regulation would be a lot stronger. My point is that precisely because it’s hard to predict how regulation will be applied, and because industry incumbents have more influence than the rest of us, we shouldn’t be too cavalier about giving regulators more power.

With that caveat in mind, I do point to some aspects of popular network neutrality proposals that could lead to problems. Most importantly, I have yet to see anyone produce a clear and unambiguous definition of network neutrality. Indeed, network neutrality advocates disagree among themselves about such issues as prioritization and DNS servers. Legal ambiguity creates a variety of problems, including broad discretion for regulators and greater difficulty for private-sector actors in determining what the law requires of them.

But to demand that I predict exactly what problems network neutrality legislation will cause is to miss the point. One of the biggest reasons we should be reluctant to enact network neutrality regulation is that legislation often has unintended consequences. Now, obviously that doesn’t mean that regulation is never a good idea, but it does mean that we should regard regulation as a last resort for dealing with clear problems we can’t solve in other ways. It’s not a good way to deal with the kind of highly speculative threats that are the bread and butter of network neutrality activists.

NebuAd Lawsuit

by Timothy B. Lee on November 12, 2008 · 30 comments

I don’t have an opinion about the specific legal issues involved, but I think the general approach of this lawsuit against NebuAd is the right one. Consumers have a reasonable expectation of privacy when they sign up for Internet service. As it happens, I was a Charter customer during the last three years, and I don’t remember them disclosing that they would be sharing the contents of my Internet communications with a third party for advertising purposes.

Me around the Web

by Timothy B. Lee on November 10, 2008 · 6 comments

Over at Ars Technica, the final installment of my story on self-driving cars is up. This one focuses on the political and regulatory aspects of self-driving technologies. In particular, I offer three suggestions for the inevitable self-driving regulatory regime:

Three principles should govern the regulation of self-driving cars. First, it’s important to ensure that regulation be a complement to, rather than a substitute for, liability for accidents. Private firms will always have more information than government regulators about the safety of their products, and so the primary mechanism for ensuring car safety will always be manufacturers’ desires to avoid liability. Tort law gives carmakers an important, independent incentive to make safer cars. So while there may be good arguments for limiting liability, it would be a mistake to excuse regulated auto manufacturers from tort liability entirely.

Second, regulators should let industry take the lead in developing the basic software architecture of self-driving technologies. The last couple of decades have given us many examples of high-tech industries converging on well-designed technical standards. It should be sufficient for regulators to examine these standards after they have been developed, rather than trying to impose government-designed standards on the industry.

Finally, regulators need to bear in mind that too much regulation can be just as dangerous as too little. If self-driving cars will save lives, then delaying their introduction can kill just as many people as approving a dangerous car can. Therefore, it’s important that regulators focus narrowly on safety and that they don’t impose unrealistically high standards. If self-driving software can be shown to be at least as safe as the average human driver, it should be allowed on the road.

Meanwhile, Josephine Wolff at the Daily Princetonian was kind enough to quote me in an article about self-driving technologies. For the record, I was exaggerating a bit when I said “The only reasons there are pilots is because people feel safer with pilots.” Most aspects of flying can be done on autopilot, but I’m not sure we’re at the point where you could literally turn on the autopilot, close the cockpit door, and let the plane take you to the destination.

And if any TLF readers are in the Princeton area, I hope you’ll come to my talk on the future of self-driving technology, which will be a week from Thursday.

Finally, over at Techdirt, I’ve got the final installment of my series (1 2 3 4) on network neutrality regulation. I’ve got a new Cato Policy Analysis coming out later this week that will expand on many of the themes of those posts. Stay tuned.