Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


I contributed the Cato Institute’s side in this debate at Opposing Viewpoints, taking the “no” position on the question “Should the Government Regulate Net Neutrality?” Arguing the opposite side are the Save the Internet coalition, the Open Internet Coalition, and Public Knowledge.

I wrote my points before seeing the other participants’ contributions, but my take on the debate is best summarized by this comment, which should be appearing on the site in the near future:

What’s striking about the arguments of all three pro-regulation contributors is that while they adopt the rhetoric of urgency, none of them has offered a specific explanation of what will happen if Congress does not enact new regulations. It may very well be true that the major incumbents would like to transform the Internet into a proprietary network, but thus far, there are precious few examples of them actually attempting to do so. Indeed, the only example of any significance that they’ve been able to cite is Comcast’s interference with BitTorrent. And that example certainly doesn’t support their argument.

Comcast interfered in a relatively minor way with one of the dozens of applications on the Internet. For its trouble, the company got a bunch of negative publicity, a customer backlash, and (thanks to header encryption technology) no real control over the use of BitTorrent on its network. By March, Comcast was in full-scale retreat, signing an agreement with BitTorrent, Inc., and pledging to stop blocking BitTorrent traffic by the end of the year.

By the time the FCC ruled on the issue in July, its involvement had been rendered completely superfluous by the progress of events. Comcast wasn’t posing a looming threat to network neutrality that the FCC had beaten back. Comcast had already surrendered months earlier, and the FCC showed up long after the battle was over to claim credit for the victory.

The other examples network neutrality activists like to cite are even weaker. For example, Verizon briefly refused to give an SMS short code to a pro-choice group. Not only did this incident have nothing to do with the Internet, but Verizon voluntarily reversed itself just a few days later. Once again, any regulatory response would have been far too late to make a difference.

In short, there’s no “precipice” here. Network owners don’t have a magic wand that will transform the Internet into a proprietary network. The filtering and blocking tools network providers do have are clumsy and easily circumvented. There are plenty of people monitoring broadband providers’ behavior, and they will ensure that any network neutrality violations get widely publicized. Network owners saw what happened to Comcast, and they learned that interfering with network neutrality is a bad business strategy: it’s more likely to produce angry customers than larger profits.

The advocates of new regulation have been predicting the imminent death of network neutrality for three years now. Yet network neutrality is no more endangered today than it was at the height of the Congressional debate over network neutrality in 2006. If we do start to see real evidence that technological and market forces are inadequate to protect the neutral Internet, there will be plenty of time to debate and pass appropriate regulations at that point. But it would be a mistake to pass new regulations now based on purely speculative concerns.

Peering and Transit at Ars

by Timothy B. Lee on September 2, 2008 · 5 comments

My favorite thing about Ars Technica (aside from the fact that I get to write for them) is their in-depth features on technical issues. Out today is the best discussion I’ve seen of transit and peering for the lay reader. One key section:

I once heard the following anecdote at a RIPE meeting.

Allegedly, a big American software company was refused peering by one of the incumbent telco networks in the north of Europe. The American firm reacted by finding the most expensive transit route for that telco and then routing its own traffic to Europe over that link. Within a couple of months, the European CFO was asking why the company was paying out so much for transit. Soon afterward, there was a peering arrangement between the two networks…

Tier 1 networks are those networks that don’t pay any other network for transit yet still can reach all networks connected to the internet. There are about seven such networks in the world. Being a Tier 1 is considered very “cool,” but it is an unenviable position. A Tier 1 is constantly faced with customers trying to bypass it, and this is a threat to its business. On top of the threat from customers, a Tier 1 also faces the danger of being de-peered by other Tier 1s. This de-peering happens when one Tier 1 network thinks that the other Tier 1 is not sufficiently important to be considered an equal. The bigger Tier 1 will then try to get a transit deal or paid peering deal with the smaller Tier 1, and if the smaller one accepts, then it is acknowledging that it is not really a Tier 1. But if the smaller Tier 1 calls the bigger Tier 1’s bluff and actually does get de-peered, some of the customers of either network can’t reach each other.

When I first learned about the Internet’s basic peering model, it seemed like there was a real danger of a natural monopoly developing if too many Tier 1 providers merged or colluded. But what this misses is that larger networks face a constant threat of having their customers bypass them and peer directly with one another. As a result, even if there were only one Tier 1 provider, that provider wouldn’t have much monopoly power, because any time it raised its prices it would see its largest customers start building out infrastructure to bypass its network.

In effect, the BGP protocol that controls the interactions of the various networks creates a highly liquid market for interconnection. Because a network has the technical ability to change its local topology in a matter of hours, it’s always in a reasonably strong bargaining position, even when dealing with a larger network.
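To make that concrete, here is a simplified sketch of the part of BGP best-path selection an operator actually turns when re-routing traffic. It models only the local-preference and AS-path-length tie-breakers (real BGP has several more), and the AS numbers and routes are purely illustrative:

```python
# Simplified sketch of the BGP best-path choice that makes re-routing cheap:
# raise LOCAL_PREF on the route via a preferred neighbor and traffic toward
# that prefix shifts as soon as the new policy takes effect.
from dataclasses import dataclass

@dataclass
class Route:
    next_hop_as: int    # neighbor AS advertising the route
    local_pref: int     # operator-set policy value; higher wins
    as_path: tuple      # sequence of ASes to the destination

def best_path(routes):
    # Highest local preference wins; ties broken by shortest AS path.
    return max(routes, key=lambda r: (r.local_pref, -len(r.as_path)))

routes_to_prefix = [
    Route(next_hop_as=3356, local_pref=100, as_path=(3356, 1299, 64512)),
    Route(next_hop_as=174,  local_pref=100, as_path=(174, 64512)),
]
print(best_path(routes_to_prefix).next_hop_as)   # 174: shorter AS path wins

# Policy change: prefer the other transit link regardless of path length.
routes_to_prefix[0].local_pref = 200
print(best_path(routes_to_prefix).next_hop_as)   # 3356: local_pref now decides
```

This is also why the anecdote above works: steering traffic onto the telco’s most expensive transit route is a small policy change for the sender, but a large and immediate cost for the network on the receiving end.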

Things are trickier in the “last mile” broadband market, but at least if we’re talking about the Internet backbone, this is a fiercely competitive market and seems likely to remain that way for the foreseeable future.

Bandwidth Cap Worries

by Timothy B. Lee on August 30, 2008 · 15 comments

Susan Crawford worries about the implications of Comcast’s bandwidth cap:

Comcast sees a future in which people use the internet to send a few emails or look at a few web pages. They don’t want people watching HD content from other sources online, because that doesn’t fit their business model. So rather than increase capacity, they’d rather lower expectations. 250GB/month is about 50-60 HD movies a month, but we’re not necessarily going to be watching movies. Maybe we’ll be doing constant HD video sessions with other freelancers, or interacting with big groups all over the world in real-time. Who knows what we’ll be doing – it’s all in the future.

But rather than build towards a user-powered future, Comcast wants to shape that future — in advance — in its own image. The company is not offering additional bandwidth packages to people who want more. They just want to be able to shut service off at a particular point – a point of bandwidth use that most people aren’t using right now, so that they won’t be unhappy. By the time we all want to be doing everything online, Comcast users (the company hopes) won’t expect anything better.

There are several observations to make here. In the first place, there isn’t an either-or choice between building more capacity and limiting current users. Comcast is doing both: they’re upgrading to DOCSIS 3.0 at the same time they’re experimenting with new usage limits. Obviously, the ideal situation is one in which capacity upgrades are sufficient to accommodate increased demand. But if they’re not, network owners have to do something about it. A high and transparent bandwidth cap isn’t a terrible approach.

Second, this cap really is quite high. 250 GB/month is roughly 1 Mbps for every waking hour, or 10 Mbps (which is faster than my current broadband connection) for about 2 hours a day. Her estimate of 50-60 HD movies a month sounds high to me, but certainly there’s enough bandwidth there to download more HD movies than the average family watches in a month.
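For readers who want to check the arithmetic, here’s a quick back-of-the-envelope sketch. It assumes decimal gigabytes, a 30-day month, and roughly 4.5 GB per HD movie, all of which are rough figures:

```python
# Rough arithmetic behind the 250 GB/month cap (decimal GB, 30-day month).
CAP_GB = 250
DAYS = 30

megabits_per_day = CAP_GB * 8 * 1000 / DAYS        # ~66,700 Mb of traffic per day

hours_at_1_mbps = megabits_per_day / 1 / 3600      # ~18.5 hours/day at a steady 1 Mbps
hours_at_10_mbps = megabits_per_day / 10 / 3600    # ~1.9 hours/day at a steady 10 Mbps
hd_movies = CAP_GB / 4.5                           # ~55 movies at ~4.5 GB per HD film

print(f"{hours_at_1_mbps:.1f} h/day @ 1 Mbps, "
      f"{hours_at_10_mbps:.1f} h/day @ 10 Mbps, "
      f"~{hd_movies:.0f} HD movies/month")
```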

Third, there’s absolutely no reason to think that this cap is permanent, or that they won’t give consumers reasonable options to get more bandwidth. Comcast is in business to make money. There’s lots of valuable content on the Internet. Therefore, it’s in Comcast’s interest to sell consumers the bandwidth they need to access the Internet content they want. Now, Comcast might charge more for a really high-speed, high-cap Internet access plan. That’s their right, and I’m at a loss to see why it would be a problem. Infrastructure upgrades cost money. It’s only fair to charge the most to the people who use the infrastructure the most. Provided that users do have the option to access the content they want, I fail to see what the problem is.

Finally, Crawford is upset that usage of Comcast’s digital voice service isn’t counted against the cap. But VoIP uses so little bandwidth that, as a practical matter, the exemption will matter very little. More to the point, if Crawford is worried about Comcast dedicating bandwidth to its own proprietary services, I’ve got a much bigger target for her to worry about: cable television. For decades, Comcast’s cable service has been soaking up bandwidth that could otherwise have gone to Internet connectivity. Does Crawford think it’s unethical for Comcast to offer traditional cable television service? If not, then how is dedicating bandwidth to Comcast’s VoIP offering any different?

As I said in my last post, Lindberg uses a number of computer metaphors to explain legal concepts. Here’s one I thought was particularly clever:

The shortcut in Figure 5-1 [a screenshot of the Firefox shortcut on a Windows desktop] is not the Firefox web browser itself. Rather, this icon is associated with a shortcut, or link. It points to the real executable file, which is located somewhere else on the disk. Without the linked application, the shortcut has no purpose. In fact, in Windows, a shortcut without a properly linked application reverts to a generic icon. It is the linked application that gives the shortcut both its appearance and its meaning.

Like the shortcut icon, a trademark is a symbol that is linked in the mind of consumers with a real company or with real products and services. Without the association of the symbol with the real product, service, or company, the symbols that we currently recognize as trademarks would be nothing but small, unrelated bits of art. The purpose of the trademark is to be a pointer to the larger “real” entity that the trademark represents. It is the larger entity that defines the trademark and gives it form and meaning.
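Lindberg’s example is a Windows shortcut, but on a Unix-like system a symlink makes the same point and is easy to demonstrate. The sketch below is my own illustration, not from the book:

```python
import os
import tempfile
from pathlib import Path

tmp = Path(tempfile.mkdtemp())
target = tmp / "firefox"                   # stand-in for the real executable
target.write_text("#!/bin/sh\necho browser\n")

link = tmp / "shortcut"
os.symlink(target, link)                   # the link merely points at the target

print(link.is_file())                      # True: the link resolves to a real file
print(link.resolve())                      # .../firefox, the thing that gives it meaning

target.unlink()                            # delete the "application"
print(link.is_file())                      # False: a dangling link, a mark with nothing behind it
```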

[This post will be geekier than average. Apologies in advance to non-programmers]

One of the interesting aspects of Intellectual Property and Open Source is the frequent use of programming metaphors to explain legal concepts. Given the audience, it’s a clever approach. Most of the analogies work well. A few fall flat.

I found one analogy particularly illuminating, albeit not in quite the way Lindberg intended. He analogizes the patent system to memoization, the programming technique in which a program stores the results of past computations in a table to avoid having to re-compute them. If computing a value is expensive, but recalling it from a table is cheap, memoization can dramatically speed up computation. Lindberg then compares this to the patent system:

The patent system as a whole can be compared to applying memoization to the process of invention. Creating a new invention is like calling an expensive function. Just as it is inefficient to recompute the Fibonacci numbers for each function invocation, it is inefficient to force everyone facing a technical problem to independently invent the solution to that problem. The patent system acts like a problem cache, storing the solutions to specific problems for later recall. The next time someone has the same problem, the saved solution (as captured by the patent document) can be used.

Just as with memoization, there is a cost associated with the patent process, specifically, the 20-year term of exclusive rights associated with the patent. Nevertheless, the essence of the utilitarian bargain is that granting temporary exclusive rights to inventions is ultimately less expensive than forcing people to independently recreate the same invention.

The caveat at the beginning of the second paragraph is huge. In the software industry, at least, any patent filed in the 1980s is virtually worthless today. But even setting that point aside, Lindberg’s comparison provides a helpful way to explain why patents are a bad fit for the software industry: it’s like implementing memoization using a lookup table without a hash function.
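To make the analogy concrete, here is a minimal memoization sketch in Python (my own illustration, not Lindberg’s code). The first version uses a hash table, so recalling a cached result is cheap; the second keeps a table with no hash function, so every lookup is a linear scan of everything stored so far, which is roughly the position a would-be inventor is in when searching the patent corpus:

```python
from functools import lru_cache

# Memoization with a hash table: previously computed values are recalled cheaply.
@lru_cache(maxsize=None)
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# The same idea with a "table but no hash function": every lookup is a linear
# scan through all stored (problem, solution) pairs, so the cache itself
# becomes a cost that grows with the number of entries.
table = []  # list of (n, fib(n)) pairs

def fib_slow_cache(n):
    for key, value in table:          # linear search, like reading every patent
        if key == n:
            return value
    value = n if n < 2 else fib_slow_cache(n - 1) + fib_slow_cache(n - 2)
    table.append((n, value))
    return value

print(fib(30), fib_slow_cache(30))    # both print 832040
```

As the table grows, each lookup in the second version gets slower, which is the nub of the lookup-table-without-a-hash-function jab.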
Continue reading →

I’m reviewing Van Lindberg’s Intellectual Property and Open Source for Ars Technica. The first chapter is an introduction to the theoretical concepts that Lindberg describes as the “foundations of intellectual property law”—public goods, free-riding, market failure, and so forth. I’ve found several of the assertions in this chapter frustrating.

For example, on p. 8, Lindberg writes:

We want more knowledge (or more generally, more information) in society. As discussed above, however, normal market mechanisms do not provide incentives for individuals to create and share new knowledge

Italics mine. Now, this claim is simply untrue. Normal market mechanisms do, in fact, create incentives for individuals to create and share new knowledge. Mike Masnick has offered one excellent explanation of how they do so. See also Chris Sprigman and Jacob Loshin and the restaurant industry. Plainly, lots of new knowledge is created without the benefit of copyright, patent, or trade secret protection.

It’s likely that Lindberg is just being sloppy here, and that he meant that markets do not provide sufficient incentives for creativity. This is a perfectly plausible view—indeed, it’s the mainstream view among scholars of patent and copyright policy. But even this weaker formulation is controversial: Boldrin and Levine, for example, are two respected economists who deny it. Stating even the weaker claim as established fact, therefore, is too strong. Certainly, many scholars (myself included) believe markets produce insufficient creative expression, but the point has not been proven conclusively.
Continue reading →

Still More xkcd

by Timothy B. Lee on August 26, 2008 · 8 comments

Apropos Julian’s excellent story of watchlist incompetence, a Slashdot commenter linked to this gem:

TLF Ads

by Timothy B. Lee on August 23, 2008 · 9 comments

Just to chime in on Berin’s post, here are two other things that readers ought to know: the ad revenue we generate is trivial—on the order of dozens of dollars per month—and none of us gets a dime of it as individuals. Rather, the money gets plowed into shared expenses for the site, such as advertising and promotional materials, hosting costs, etc. Sonia won’t get a dime of the advertising revenue generated by the McCain ads on this site, so whatever her reasons for praising his tech agenda, the lure of dozens of dollars of McCain payola from this site wasn’t among them.

One of the frustrating things about telecom debates is participants’ tendency to play fast and loose with the numbers. This tendency exists on both sides, but I think it’s more pronounced for the pro-regulatory side. Consider, for example, Susan Crawford’s post from last week on John McCain’s tech agenda:

First, here’s the fact: We don’t have a functioning “free market” in online access. John McCain thinks we do. That kind of magical thinking takes real practice.

Instead, we’ve got four or so enormous companies that control most of the country’s access, and they’re probably delighted that McCain is promising not to regulate them.

I can’t think of any plausible way of defining the broadband market that gives you four as the number of major firms. We have three major telephone companies and (depending on where you draw the line) somewhere between four and eight major cable companies. And that, of course, is focusing exclusively on high-speed residential service. T-Mobile and Sprint provide lower-speed wireless Internet access, and there are a number of companies that provide access to business customers.

Maybe that’s just nitpicking about the numbers, but her qualitative view of the marketplace is just as distorted:
Continue reading →

Tom Lee critiques professional-gossip-turned-professional-navel-gazer Emily Gould, who has a new article about the supposed shallowness of Shirky-style Internet triumphalism:

Gould thinks Shirky is a callow idealist, but he’s not. He’s just noting the incredible bounty that technology can afford us while politely declining to complain about the places where it falls short.

Not only is Gould preoccupied with the latter, she’s blind to the former. And hey, I can relate. Digital technology has its own Benjaminian aura, you know — excitement born of novelty, and exclusivity, and revolutionary rhetoric. Once that novelty wears off, though, things can start to look kind of drab. I mean, it’s exciting that the world has collaboratively built an encyclopedia! But it is an encyclopedia. And the idea of an encyclopedia — a comprehensive reference document written without passion or position — is actually kind of boring. The same holds for social communication and our lofty rhetoric about the triumph of a world where information can flow freely. Once you’re done patting yourself on the back you need to start paying attention to what people are actually saying. And that’s hard. Sometimes it’s even boring.

It’s depressing when you realize how much of your excitement about a thing was tied up in its aura; to find out that superficial considerations formed the basis of your enthusiasm. I struggle with this myself: I’m overcome with contempt at every useless, vowel-less internet startup I see, its founders desperate to think of themselves as brilliant revolutionaries despite no one — least of all them — actually caring a whit about what they say they’re trying to do. But that contempt is motivated in no small part by feeling the exact same ignoble impulse.

I think this is basically right, but I’d make a somewhat stronger case. It’s certainly true that the most superficial aspects of the Internet get a lot of press, but I think it’s important not to let the existence of such froth obscure the enormous flow of real, non-superficial value that the Internet revolution is producing. The non-frothy parts of the Internet seem boring precisely because they’ve become so profoundly important to our society that we’ve started taking them for granted.
Continue reading →