Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Check out my write-up of the State Secrets Protection Act, Ted Kennedy’s response to the Bush administration’s habit of answering every lawsuit with “we can’t litigate about that because it’s a state secret. We can’t tell you why it’s a state secret because that’s a state secret too.” It would create some clear ground rules for when the state secrets privilege can be invoked and how judges should handle such assertions. I haven’t given this a great deal of thought, but from a quick read-through of the bill, it seems like a pretty worthwhile approach.

More Broadband Progress

by Timothy B. Lee on April 24, 2008 · 6 comments

Comcast recently unveiled a 50 Mbps broadband connection for $150/month, and has promised to have it available nationwide (and most likely bring the price down somewhat) by 2010. Verizon is in the process of rolling out a new fiber infrastructure that will allow it to offer a similar deal (you can also get an 8 Mbps connection for $50/month). All of this makes Qwest look like a relative laggard, with its announcement that it will soon be offering a 20 Mbps service for $100/month, and 12 Mbps for $50/month. And AT&T brings up the rear with plans for “only” 10 Mbps service.

One sometimes sees hand-wringing about the anemic state of the American broadband market. To the extent that other countries are doing even better (a debatable contention), we should certainly be looking for ways to make the broadband market more competitive. No doubt things would be progressing even faster if there were more players in the market. But the stronger claim you sometimes see, that there’s something deeply dysfunctional about the progress of the US broadband industry, is positively silly.

If Comcast and Verizon deliver on their promises to roll out 50 Mbps service by the end of the decade, and if prices follow their historical pattern of dropping over time, consumers in their service footprints will have seen the average speed of a $50 Internet connection increase by roughly three orders of magnitude in under 15 years. And it’s been a reasonably steady pace of growth: in round numbers, $50 would buy you a 56 kbps connection in 1998, a 512 kbps connection in 2001, a 5 Mbps connection in 2006, and (I hope) a 50 Mbps connection sometime early next decade. Things have consistently improved by a factor of 10 every three to five years.
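For the curious, here’s a quick back-of-the-envelope Python script that checks those round numbers. The 2011 endpoint is my own guess at “sometime early next decade,” not a date any carrier has promised:

    # Speed, in kbps, that $50/month bought in a given year (round numbers;
    # 2011 is a guess at "sometime early next decade", not a carrier's date).
    data = {1998: 56, 2001: 512, 2006: 5000, 2011: 50000}

    years = sorted(data)
    for a, b in zip(years, years[1:]):
        factor = data[b] / data[a]
        annual = factor ** (1.0 / (b - a))
        print(f"{a}-{b}: {factor:.0f}x overall, about {annual:.2f}x per year")

    print(f"1998-2011: {data[2011] / data[1998]:.0f}x total")

Run it and you get a 9x to 10x jump per period, a steady 1.6x to 2.1x per year, and roughly 900x overall: three orders of magnitude, give or take.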

It’s interesting to look back at the broadband argument we were having a couple of years ago. In mid-2006, TLF reader Luis Villa was reporting that he and his folks were still seeing typical broadband speeds of 2 Mbps. Maybe he can chime in to let us know if that number has gone up at all. Personally, my apartment in St. Louis gets around 5 Mbps for about $30/month, and the house where I’m currently staying in DC has a 17 Mbps Comcast connection.

One of the things that makes progress hard to judge is that broadband speeds and prices don’t tend to improve continuously and evenly around the country. Rather, carriers take turns leap-frogging one another, with each upgrade accompanied by a temporary price increase. So it can be tricky to judge the average rate of improvement by looking at just one market, which may seem to stagnate for several years at a time. But if one looks at the country as a whole, and focuses on time horizons closer to a decade, I think it’s undeniable that things are improving at a fairly rapid pace.

Don Marti offers a good example of the point of my last post:

The Smoot-Hawley Tariff was law. Many of the “code is law” design choices are similar: they can cause big effects without necessarily implementing the lawmaker’s intent.

I couldn’t have put it better myself. And this brings to mind another important point about “law” that I think is missed in Lessig’s “code is law” formulation: the real “law” Lessig is analogizing computer code to often isn’t exactly omnipotent. The War on Drugs is law, but there’s still lots of drug use going on. Strict gun control is law in some big cities, but there’s still plenty of gun violence. The McCain-Feingold soft money ban is law, but there’s still plenty of corruption in Washington. The copyright system is law, but there’s still plenty of illicit file sharing going on. And in all of these cases, the laws have had significant effects that were very different from their authors’ intent.

And indeed, if we’re going to analogize ISPs to legislators, it’s important to remember that legislators have several key advantages over ISPs. In the first place, actual legislators can have you thrown in jail, whereas the worst an ISP can do is disconnect your service. Second, governments have effectively unlimited sums of money to waste on futile efforts like the drug war. In contrast, ISPs are profit-seeking institutions, so there are limits on the amount of idiocy they can undertake before financial considerations force them to stop. Finally, the law has at least some moral suasion behind it; the fact that something is illegal causes most people to hide that they’re doing it. In contrast, people evading the “law” of an ISP would be able to claim the moral high ground.

Yet despite all these advantages, our government hasn’t come close to stopping people from using drugs, sharing files, hiring prostitutes, owning guns, corrupting public officials, and so forth. Not because the laws for those purposes were badly designed, or because the people implementing them haven’t tried hard enough, but because there are fundamental limits to government’s ability to control the behavior of citizens in a free society.

I think something similar is true of a large-scale network like the Internet. Incompetent network owners, as the “governments” of their corners of the Internet, can certainly cause a lot of havoc, just like real governments. But their ability to control what their customers do is constrained for many of the same reasons that governments’ ability to stop drug use or file sharing is constrained. And that means that network owners that try to restrict their customers’ use of their network are as likely to shoot themselves in the foot as they are to enhance their bottom lines.

Writing my critique of Larry Lessig’s Stanford lecture brought to mind an important ambiguity in Lessig’s oft-repeated slogan that code is law. I think the phrase actually has two distinct meanings that are often conflated but actually have quite different implications. One is, I think, indisputably true, whereas the other one is at best greatly overstated.

It’s certainly true that code is law in the sense that technological architectures place important constraints on the actions of their users. A TCP/IP network allows me to do different things than Verizon’s mobile network or the global ATM network. Wikipedia is the way it is largely because the software on which it runs constrains users in certain ways and empowers them in others. CALEA likely has given law enforcement agencies surveillance powers they couldn’t have had otherwise. To the extent that this was Lessig’s point, he’s obviously right.

However, when Lessig says “code is law,” he often seems to be making a significantly stronger claim about the power of code as law: not simply that code constrains us in some ways, but that the authors of code have a great deal of control over the exact nature of the constraints the technology they build will place on people, so that virtually any outcome the designer wants to achieve, within reason, can be achieved by a sufficiently clever and determined programmer.

This stronger formulation strikes me as obviously wrong, for at least two reasons. First, the set of tools available to the code-writer is often rather limited. Barring major breakthroughs in AI technology, many concepts and categories that are common sense to human beings cannot easily be translated into code. Rules like “block messages critical of President Bush” or “don’t run applications that undermine our business model” can’t easily be translated into hardware or software. There are a variety of heuristics that can be used to approximate these results, but human beings will almost always be able to circumvent them, as the toy sketch below illustrates.
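Consider a deliberately naive keyword filter of the sort a censor might deploy. This is entirely hypothetical, not anyone’s actual code, but it shows why such rules are both easy to evade and prone to misfiring:

    import re

    # Try to block messages critical of the president by keyword matching.
    BLOCKED = re.compile(r"\b(bush|president)\b.*\b(liar|corrupt|impeach)\b", re.I)

    def should_block(message):
        return bool(BLOCKED.search(message))

    print(should_block("President Bush is corrupt"))          # True: the intended case
    print(should_block("B-u-s-h is c0rrupt"))                 # False: trivially evaded
    print(should_block("Is President Bush corrupt? Hardly!")) # True: a false positive

Smarter heuristics narrow the gap, but the underlying problem doesn’t go away: “critical of the president” is a human judgment, not a computable predicate.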

The deeper challenge is a Hayekian point about spontaneous order: with a sufficiently complex and powerful technological platform, it often will not be possible to even predict, much less control, how the technology will be used in practice. Complex technologies often exhibit emergent properties, with the whole exhibiting behaviors that “emerge” from the complex interplay of much simpler constituent parts. It would have been hard for anyone to predict, for example, that the simple rules of wiki software could form the basis for a million-entry encyclopedia. Indeed, it’s pretty clear that Wikipedia was possible only because the site’s creators gave up any semblance of centralized control and allowed spontaneous order to work its magic.
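Conway’s Game of Life (my stock example, not Lessig’s) makes the emergence point concrete in a few lines of Python: each cell follows trivial birth-and-survival rules, nothing more, and yet patterns that “move” appear at the level of the whole grid:

    from collections import Counter

    def step(live):
        """Advance one generation; 'live' is a set of (x, y) cells."""
        counts = Counter((x + dx, y + dy)
                         for x, y in live
                         for dx in (-1, 0, 1)
                         for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # Birth on exactly 3 live neighbors; survival on 2 or 3.
        return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

    # A "glider": five cells that crawl diagonally forever. The rules say
    # nothing about movement; it simply emerges.
    cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):
        cells = step(cells)
    print(sorted(cells))  # the same shape, shifted one cell diagonally

Nobody could have deduced gliders, still less Wikipedia, by staring at the rules; you have to run the system and watch what its participants do.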

A similar point applies to the Internet, and to the network owners that nominally control it. They certainly have some levers they can push to change certain micro-level characteristics of their networks, just as Jimmy Wales could make changes to the code of Wikipedia. But there’s no reason to think that either Wikipedia or Comcast would have any way to either predict or control what effects any given micro-level change to their platforms would have on the macro-level behavior of the whole system. They’re both large, complex systems whose millions of participants are each pursuing their own idiosyncratic objectives, and they’re likely to react and interact in surprising ways.

Now, the fact that Comcast can’t predict or control what effects its tinkering with its network might have does not, of course, mean that it won’t try. But it does mean that we should be skeptical of just-so stories about telcos turning the Internet into a well-manicured walled garden. When you create complex, open systems, they tend to take on a life of their own, and once you’ve ceded centralized authority, it can be very difficult to take back. Code may be law, but I think it’s a much more limited kind of law than a lot of people seem to think.

Over at Techdirt, I’ve got a series of posts that highlight some of the major arguments in my forthcoming paper on network neutrality. I’m particularly pleased with the latest installment of the series:

He claims that “owners have the power to change [the Internet’s architecture], using it as a tool, not to facilitate competition but to weaken competition.” Do they? He doesn’t spend any time explaining how networks would do this, or what kind of architectural changes he has in mind. But he does give an example that I think is quite illuminating, although not quite in the way he had in mind.

Lessig imagines a world of proprietary power outlets, in which the electricity grid determines the make and model of an appliance before deciding whether to supply it with power. So your power company might charge you one price for a Sony TV, another price for a Hitachi TV, and it might refuse to work at all with an RCA TV. Lessig is certainly right that that would be a bad way for the electricity grid to work, and it would certainly be a headache for everybody if things had been set up that way from the beginning. But the really interesting question is what a power company would have to do if it wanted to switch an existing electricity grid over to a discriminatory model. Because the AT&Ts and Comcasts of the world wouldn’t be starting from scratch; they’d be changing an existing, open network.

I focus on an aspect of the debate that I’ve seen receive almost no attention elsewhere: assuming that network discrimination remains legal, how much power could network owners actually exert over the use of their networks? There’s an assumption on both sides that ownership of a network automatically translates to comprehensive control over how it’s used. But I have yet to see anyone give a plausible explanation of how a last mile provider would get from the open network they’ve got now to the tightly-controlled one that network neutrality advocates fear. It’s always simply asserted, as Lessig does here, and then taken as a given for the rest of the argument. It’s a weakness in the pro-regulation argument that I wish more critics of regulation would highlight.

I go on to discuss the difficulties of real-world architectural transitions. Read the whole thing here.

I want to second Adam’s great post on the silly alarmism over the state of the media. I never watch network TV, so I don’t have an opinion on whether it’s become a cultural wasteland, but to the extent that that’s true, it’s primarily because there are so many alternative entertainment sources (basic and premium cable, DVDs, the Internet, indie movie theaters) competing for more discerning viewers. There’s a lot more of everything—great entertainment and drivel alike—being produced. And anyone who cares is free to seek out great shows like The Wire rather than watching the latest formulaic crap on NBC.

This mirrors the broader critique commonly leveled at the Internet (ironically, often by defenders of traditional mass media): that most of what’s on the Internet—be it blogs, YouTube videos, amateur poetry, or whatever—is crap. This is true. But it’s also totally irrelevant, because nobody spends their time consuming the median content online. Rather, people have a variety of increasingly sophisticated filters at their disposal that allow them to find the best stuff and ignore the rest. What we ought to care about isn’t the quality of the average content that’s available, but the quality of the average content that’s actually consumed, as judged by the person consuming it. It’s almost a tautology that more options mean people will be able to find more stuff they’ll like.

By the same token, there’s no reason to care especially about the quality of the average network TV show when people are abandoning network TV in droves in favor of higher-quality content available elsewhere. What matters is whether there’s enough high-quality stuff for people to watch, and on that score things have never been better.

No Intelligence Allowed

by Timothy B. Lee on April 21, 2008 · 10 comments

Since ads for Ben Stein’s “Intelligent Design” movie are in heavy rotation over on the right-hand side of the page, now seems like a good time to reiterate that most of the ads on this site are automatically placed by Google, and shouldn’t be taken as an endorsement by TLF or any of its contributors. Ron Bailey’s take on Expelled seems pretty spot-on to me.

ALF 5 Tonight!

by Timothy B. Lee on April 21, 2008 · 8 comments

The big day has arrived. Tonight at 5:30, we’ll be getting together to drink, talk tech policy, and raise money for the Jefferson 1. Please note that the location has been changed from what was originally announced: it’ll be at Science Club, 1136 19th Street, NW, Washington D.C. Hope to see you there.

Threat Level has been providing gavel-to-gavel coverage of the murder trial of Linux developer Hans Reiser, who is accused of killing his wife. His defense attorney’s argument is that Reiser is a jerk, but being a jerk doesn’t make you a murderer:

Hans and Nina met in 1998, in Russia, when he was overseas hiring programmers. He picked her out of a mail-order bride catalog, where she was advertised as “5279 Nina.” They married the following year after she became pregnant with their first of two children.

DuBois, as he displayed for jurors Nina Reiser’s bride advertisement, said she moved to divorce him five years later, just as she became a U.S. citizen.

“She had an ulterior motive to marry Hans,” DuBois said.

“It couldn’t have been out of love that she married Hans Reiser,” DuBois said. “I can’t see anybody loving Hans Reiser.”

“He has to be one of the least attractive people you can imagine,” DuBois continued. “And she’s a doll.”

Sounds like a charming guy. I feel really sorry for his two kids.

Change of Venue for ALF 5

by Timothy B. Lee on April 17, 2008 · 5 comments

I foolishly neglected to do my homework, and unfortunately, the 18th St. Lounge is closed on Monday afternoons. So ALF 5 will be at Science Club instead. Still April 21, still 5:30 to 7:30. Hope to see you there.