Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


A couple of years ago I plugged Jerry Brito’s spectrum commons paper. What I said in that post is still true: it’s a great paper that highlights the central challenge of the commons approach. Specifically, a commons will typically require a controller, that controller will almost always be the government, and there’s therefore a danger of re-introducing all the maladies that have traditionally afflicted command-and-control regulation of spectrum.

I’m re-reading the paper after having read the FCC’s spectrum task force report, and while I still agree with the general thrust of Jerry’s paper, I think he overstates his case in a few places. In particular:

Only if spectrum is first allocated for flexible use, with few if any conditions on its use, can a commons or a property rights regime help overcome the inefficiencies of command-and-control spectrum management. For example, if spectrum is allocated for flexible use, a property rights regime will allow the owner of spectrum to put it to the most valuable use or sell it to someone who will. Similarly, if there are no restrictions on use, a commons will allow anyone to use the spectrum however she sees fit, thus overcoming command-and-control misallocation.

However, while title to spectrum could theoretically be auctioned off in fee simple with no strings attached, a government-created and -managed commons will always have its usage rules set through a command-and-control process. Users of a government commons might not be explicitly restricted in the applications they can deploy over the spectrum, but they will have to comply with the sharing rules that govern the commons. Sharing rules, which will be established through regulation, will in turn limit the types and number of applications that can be deployed.

I think the difficulty here is that just as Benkler and Lessig over-idealize the commons by ignoring the inevitable role for government in setting standards, so this over-idealizes the spectrum property regime. It’s not true that spectrum “could theoretically be auctioned off in fee simple with no strings attached.” The key thing to remember here is that electromagnetic waves don’t respect boundaries established by the legal system. There will always be a need for technical rules to prevent interference between adjacent rights holders. If you hold a spectrum right in a geographic territory adjacent to mine, the government is going to need rules about how much of your transmissions can “leak” onto my property before it counts as a trespass.
Continue reading →

I’m finally reading Cato’s 2006 Policy Analysis on spectrum property rights. It’s got a lot of good information, but this sentence made me do a double-take:

In free space, radio waves steadily weaken in a very uniform, predictable way and at a rate that depends on frequency. In particular, the higher the frequency, the faster the waves weaken. In the real world—on the earth and in its environs—the situation is much more complicated, and radio links are affected by the earth itself, the atmosphere, and the intervening topography and natural and manmade objects such as foliage and buildings.

It’s been a while since I took physics, but I seem to recall (and Wikipedia seems to agree) that the strength of an electromagnetic wave falls with the square of the distance from the source. Indeed, this result seems to be compelled by the geometry of the situation and the conservation of energy. What am I missing?
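For concreteness, here’s the relation I have in mind. This is just the standard free-space spreading law written out in my own notation, not anything quoted from the Cato paper:

```latex
% A transmitter radiating power P_t isotropically spreads that power over a
% sphere of area 4\pi d^2, so the power flux density at distance d is
S(d) = \frac{P_t}{4\pi d^2}
% Note that frequency does not appear anywhere in this expression.
```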

Assuming I’m not just confused, one possibility is that they’re talking about propagation in the atmosphere rather than free space. It appears to be true that lower-frequency radio waves travel further along the surface of the Earth because they are affected more by the Earth’s atmosphere.

My last post sparked some interesting discussion about the economics of the Internet. With all due respect to my co-blogger Hance, though, this is precisely the sort of thing I was talking about:

[Tim’s post] unfortunately overlooks the essence of what NN regulation is really about as far as commercial entities are concerned, i.e., profitable online properties don’t want to be asked or obliged to negotiate service agreements with network providers in which they agree to share some of their profits with network providers for the upkeep of the Internet and for the improvement of the overall online experience — just like retailers in a shopping mall share a small percentage of their profits with the landlord.

Bret likewise says that “NN advocates have for several years now wanted to force service providers into one business plan where the end-user pays ALL the costs of the network.” It will surely be news to Eric Schmidt, Steve Ballmer, and Jerry Yang that they aren’t “obliged to negotiate service agreements with network providers.” In point of fact, Google, Microsoft, Yahoo! and other big service providers pay millions of dollars to their ISPs to help finance the “upkeep of the Internet.” The prices they pay are negotiated in a fully competitive market.

Here’s a thumbnail sketch of how the Internet is structured: it’s made up of thousands of networks of various sizes, with lots and lots of interconnections between them. When two networks decide to interconnect, they typically evaluate their relative sizes. If one network is larger or better-connected than the other, the smaller network will typically pay the larger network for connectivity. If the networks are roughly the same size, they will typically swap traffic on a settlement-free basis.
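If it helps, here’s a toy sketch of that decision in code. The “size” numbers and the parity threshold below are made up purely for illustration; they aren’t drawn from any actual peering agreement:

```python
# Toy illustration of the peering-vs-transit decision described above.
# The "size" numbers and the parity threshold are made up for illustration;
# real interconnection agreements weigh traffic ratios, geography, and more.

def interconnection_terms(size_a: float, size_b: float, parity: float = 1.5) -> str:
    """Return a rough description of who pays whom when networks A and B connect."""
    larger, smaller = max(size_a, size_b), min(size_a, size_b)
    if larger / smaller <= parity:
        return "settlement-free peering: traffic is swapped, no money changes hands"
    smaller_name = "A" if size_a < size_b else "B"
    return f"transit: the smaller network ({smaller_name}) pays the larger one for connectivity"

if __name__ == "__main__":
    print(interconnection_terms(100, 90))  # roughly equal networks -> peering
    print(interconnection_terms(100, 5))   # small network buys transit
```

Real-world negotiations obviously turn on more than a single “size” number, but the basic shape is the point: smaller networks buy transit, and peers of roughly equal size swap traffic for free.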
Continue reading →

The Journal has an editorial today on Kevin Martin’s crusade against Comcast that generally reaches the right conclusions—network neutrality regulations aren’t necessary, and even if they were the FCC doesn’t have the authority to impose them unilaterally. But in the process, they repeat a line that is repeated fairly often by free-marketeers, but is nevertheless seriously confused:

Net neutrality proponents want all Internet traffic treated “equally.” They would prohibit Internet service providers from using price to address the ever-growing popularity of streaming video and other bandwidth-intensive programs that cause bottlenecks.

I don’t know of any plausible interpretation of any network neutrality proposal that would preclude ISPs from using price to address bandwidth scarcity. On the contrary, as Adam noted earlier today, network neutrality advocates like Tim Wu are quite clear that metering, bandwidth caps, and charging different rates for different connection speeds would be legal under leading network neutrality proposals.

In fact, this passage gets things almost precisely backwards. What network neutrality proposals are actually designed to do is prevent ISPs from dealing with congestion using non-price, content-based routing policies. Snowe-Dorgan, for example, would have required that ISPs “Not impose a charge on the basis of the type of content, applications, or services made available via the Internet into the network of such broadband service provider.” “Quantity of bandwidth consumed” is not on that list.

This is part of the broader problem of opponents of network neutrality regulation becoming knee-jerk critics of network neutrality as such. The arguments against network neutrality as a technical principle aren’t especially strong, and they’re especially likely to be confused when they’re made by people who know very little about how the Internet works. As I’ll argue in my forthcoming Cato Policy Analysis, the best reason to oppose network neutrality regulation isn’t that network neutrality is a bad idea (it isn’t), but that advocates of regulation (1) overestimate the fragility of network neutrality and (2) underestimate the unintended consequences that are likely to flow from new regulations.

As an aside, it’s kind of ironic that so many network neutrality critics have found themselves in the position of critiquing the structure of an industry—long-haul Internet access—that was forged by more than a decade of brutal market competition. The Internet’s current “neutral” architecture and the web of contractual relationships that binds it together have endured for a quarter-century precisely because they’re phenomenally efficient. Google and Yahoo don’t pay Verizon and AT&T for “last mile” bandwidth because those kinds of payments would greatly increase the Internet’s billing overhead with no real benefit. Ordinarily, when the marketplace produces an outcome, free marketeers are inclined to leave well enough alone. But in their zeal to stop network neutrality regulation, a lot of free marketeers have become amateur network architects, insisting that unless AT&T can charge YouTube extra money for video downloads (or whatever), the Internet will grind to a halt. There are some real network engineers who make arguments of this sort, and they should be taken seriously, but when they’re made by people whose expertise is in the social sciences, it just makes them look silly.

Customer-Owned Fiber

by Tim Lee on July 30, 2008

I’ve got a new piece up at Ars Technica that explores the concept of customer-owned fiber. It was inspired by a post by Google’s Derek Slater, who is working with Tim Wu on a paper making the case for customer-owned fiber in more detail.

The structure of today’s telecom market is roughly analogous to a road system in which most people’s driveways were owned by a private company. (To make the analogy work, you’d have to imagine that most homes have two driveways owned by different companies.) If another company owns your driveway, that company has a significant amount of leverage over you. Regulatory proposals like “open access” and “network neutrality” are like taking this system of third-party-owned driveways for granted and trying to use regulatory levers to ensure that companies don’t abuse that power.

But a much better approach may be to recognize that the whole setup is screwy: it makes more sense for the owner of a particular piece of property to also own that part of the telecommunications infrastructure that’s exclusive to that property. A world of customer-owned fiber would solve a lot of the thorny policy problems because the barriers to entry in telecommunications markets would be radically lower. Entering the ISP market in a given neighborhood would simply require running a single strand of fiber to that neighborhood’s peering point.

The question is how you get there from here. In the near future, this will no longer be just a theoretical discussion, as a private company in Ottawa recently completed construction of a 400-household fiber network that it plans to sell to local homeowners. The preliminary cost estimate, based on 10 percent take-up, is $2700 per participating home. A higher sign-up rate means a lower cost per home. We’ll know in a few months if that estimate was optimistic or pessimistic.
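To see why the take-up rate matters so much, here’s a quick back-of-the-envelope sketch. The numbers below are hypothetical, chosen only so that the 10 percent case lands near the $2700 estimate; the point is that most of the cost of a build like this is shared, so every additional household that signs up drives down everyone’s share:

```python
# Back-of-the-envelope: per-home cost of a neighborhood fiber build.
# All numbers are hypothetical, picked only so the 10 percent case lands
# near the $2700 estimate; the real Ottawa cost breakdown will differ.

HOMES_PASSED = 400       # homes the network passes
SHARED_COST = 88_000.0   # hypothetical shared cost: trenching, conduit, the neighborhood hub
DROP_COST = 500.0        # hypothetical per-home cost of the individual fiber drop

def cost_per_participant(take_up: float) -> float:
    """Cost borne by each participating home at a given take-up rate."""
    participants = HOMES_PASSED * take_up
    return SHARED_COST / participants + DROP_COST

for rate in (0.10, 0.25, 0.50, 1.00):
    print(f"{rate:4.0%} take-up -> ${cost_per_participant(rate):,.0f} per home")
```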

In my piece, I talk about the economics of fiber rollout and the likely obstacles. I’m skeptical that it can be made to work, because the costs are substantial and the status quo has a lot of inertia. There is also a group of incumbent companies—not a cartel—who are likely to do everything they can to kill such an effort if it gains momentum. But I think it’s absolutely worth trying, because if it could be made to work, the benefits would be huge.

Over at Techdirt I respectfully disagree with Adam’s broadside against Tim Wu’s “absurd” piece on the broadband cartel.

Tim Wu’s an ideologically savvy guy, and he’s a master at deploying libertarian rhetoric in defense of not-very-libertarian proposals. I get that, and I’m perfectly willing to call him out when he does so. But in other cases, Wu makes arguments that are just straightforwardly libertarian. For example, I’m finding it hard to detect the hidden socialist message in this passage:

Our current approach is a command and control system dating from the 1920s. The federal government dictates exactly what licensees of the airwaves may do with their part of the spectrum. These Soviet-style rules create waste that is worthy of Brezhnev.

Many “owners” of spectrum either hardly use the stuff or use it in highly inefficient ways. At any given moment, more than 90 percent of the nation’s airwaves are empty.

Now, as I say in my Techdirt post, Wu would take this line of reasoning in a somewhat different direction than most of us libertarians would. Wu wants to allocate more spectrum to use as a commons, whereas TLFers would generally like to see it allocated to a system of private ownership. But the op-ed isn’t an argument for spectrum commons, it’s an argument against the FCC’s current command-and-control model.

Even if Wu’s article were a brief for spectrum commons, I think we should remember what Adam so eloquently wrote in 2002:

The intellectual battle between adherents to the property rights and commons models of spectrum governance has been a refreshing telecommunications debate for two reasons. First, at the heart of both models is a desire to promote increased flexibility, innovation, and efficient use of the spectrum resource. More important, both groups generally agree that the current command-and-control system is a complete failure and must be replaced. Indeed, both commons and property rights proponents question the continuing need for the FCC in this process at all. Second, and perhaps because of these preceding points, this war of ideas has not been characterized by the rancor typically witnessed in other telecom industry disputes.

Exactly right. So I hope we can dial down the rancor a couple of notches, acknowledge that Wu makes some valid (even, dare I say it, libertarian) points, and engage the arguments Wu actually makes, rather than trying to ferret out the secret agenda lurking behind his words.

What Mike Said

by Tim Lee on July 29, 2008

Sometimes Mike Masnick has posts that are so spot-on that I can’t resist quoting them almost in their entirety:

As you may recall, a few years back, the entertainment industry pushed for the FCC to mandate a broadcast flag that would allow it to define rules for whether or not its content could be recorded by DVRs. The courts rightfully determined that such a mandate was outside the scope of the FCC’s authority. However, an FCC ruling on net neutrality is basically covering identical grounds, yet many of the groups cheering this decision are the same who fought against the Broadcast Flag, claiming the FCC had no mandate.

Now, to be clear, the concept of network neutrality is definitely a good thing — but having the FCC suddenly put itself in charge of regulating such things (even if it’s regulating it in a reasonable manner) is really dangerous. Those who are celebrating this decision should be worried about what it means. Specifically, they’re going to have little leg to stand on when the FCC next tries to mandate something outside of its authority (which is almost certainly going to happen in the near future).

That doesn’t mean that the apocalyptic predictions from the industry will come true, however. Represented by a positively ridiculous and blatantly silly editorial in the Washington Post by FCC commissioner Robert McDowell, the suggestion that this ruling by the FCC means the internet might “grind to a halt” is totally unsubstantiated sensationalism that has been shown time and time again to be false. There isn’t a serious bandwidth crunch — and whatever potential crunch may be coming could be dealt with by some modest improvements in infrastructure, not necessarily by breaking network neutrality, which is more of an attempt to double charge for bandwidth than anything else.

However, supporters of net neutrality may be making a big mistake in cheering on the FCC as it expands its authority in this area. The FCC has never been about protecting consumer rights, and granting them this authority (which the law appears not to do) opens the door to a lot more trouble down the road.

Lucky for me, Mike isn’t a stickler about enforcing his rights under copyright.

I’ve got two new articles on patent reform out today, and by sheer coincidence, both of them are related to the work of law professor John Duffy. First, over at Ars Technica, I analyze Duffy’s article at Patently-O, where he argued that the US Patent Office has shown a growing hostility toward software patents over the last couple of years. He seems to be right that the Patent Office is becoming more skeptical about software patents, but of course we have a difference of opinion about whether this is a good thing:

Duffy seems to regard the end of software patents as a calamity for innovative companies, but his argument is awfully thin. Duffy focuses on Google’s PageRank patent, which he has long regarded as a poster child for software patenting. He describes it as “surely one of the most famous and valuable of all modern software patents,” and suggests that the invalidation of Google’s patents would be a calamity for the company. Curiously, however, he never explains how Google benefits from this or other patents in its portfolio.

Google derives little, if any, of its revenue from patent royalties and has managed to dominate the search engine marketplace without suing its major rivals for patent infringement. Indeed, it appears that the primary function of Google’s patent portfolio is as a defensive stockpile to be used if any competitors should sue it for patent infringement. If that’s true, then the only real effect of software patent abolition on Google would be that the company could lay off its patent lawyers.

Continue reading →

Skype Back Door?

by Tim Lee on July 26, 2008

How credible are these rumors? It seems like it should be possible to confirm or deny them by either monitoring Skype network traffic (to see if it’s sending data to a third party) or by reverse-engineering the Skype binaries. It also seems like if the “back door” were made available to a significant fraction of the world’s governments, it would be a hard thing to keep secret.
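For what it’s worth, the first approach doesn’t require anything fancy. Here’s a minimal sketch, assuming a machine where Skype is the only thing generating traffic and the scapy packet-capture library is installed. It only shows where packets are going, not what’s inside them:

```python
# Minimal sketch: log the remote endpoints a machine talks to while Skype runs,
# to see whether traffic flows anywhere unexpected. Assumes the scapy library
# is installed and that this runs with enough privileges to sniff packets.
from collections import Counter
from scapy.all import sniff, IP

remote_hosts = Counter()

def record(pkt):
    # Tally the destination address of every outgoing IP packet we see.
    if IP in pkt:
        remote_hosts[pkt[IP].dst] += 1

# Capture for 60 seconds (adjust as needed), then print the busiest destinations.
sniff(prn=record, store=False, timeout=60)
for host, packets in remote_hosts.most_common(20):
    print(f"{host}: {packets} packets")
```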

On the other hand, the showdown I predicted has not yet occurred, so it’s conceivable that Skype reached some kind of accommodation with US and EU regulators and quietly pushed a back door out with new versions of the software.

Update: One Slashdot commenter points to this report from Black Hat on efforts to reverse-engineer Skype. Looks like they’ve gone out of their way to thwart both tactics. Everything’s encrypted, and the peer-to-peer architecture means that the client sometimes randomly transmits data when you’re not making calls.

RIP Randy Pausch

by Tim Lee on July 25, 2008

Randy Pausch, a computer science professor who was diagnosed with pancreatic cancer in 2006, died today. You can watch his amazing and now-famous “last lecture,” delivered in September 2007, here:

You can also buy a copy of his book here.