Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Anti-spam Theater

by Tim Lee on January 17, 2008 · 8 comments

When I stumbled across John Gilmore’s argument against the heavy-handed tactics of the anti-spam cabal a while back, I was surprised to find it pretty compelling. My years as a sysadmin had drilled into my head that Open Relays Are Bad, but this is an awfully good point:

What’s the difference between an “open router” and an “open relay”? An open router takes any packet that you send it, and forwards it toward its destination. An open relay takes any email that you send it, and forwards it toward its destination. They’re the same thing, just operating at different levels of the protocol stack. Should we outlaw open routers? Look at all these evil guys on the Internet backbone, all over companies and campuses, and even in private homes! They’re routing packets without authenticating who sent each one! They’ll accept packets from ANYWHERE ON THE INTERNET, and just send them onward, even if they contain spam or viruses! There oughta be a law!!! If we just shut down all those guys with their big Cisco spam tools, then we wouldn’t get any spam any more. Let’s all black-hole every packet that comes from any ISP that doesn’t authenticate every packet. We have perfectly good standards for authenticating packets (IPSEC — I even funded the free Linux implementation, called FreeS/WAN.) so lack of standards is no excuse. Come on guys, if we apply your rationale about open relays just two levels down in the protocol stack, we ought to shut down the entire Internet. What makes the application-level email service on port 25 so special? (Both sarcasm and logical argument are probably lost on this audience, but I’ll give it a try.)
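To make the analogy concrete, here’s roughly what an “open relay” looks like at the SMTP level: you ask a mail server to accept a message for a domain it doesn’t host and see whether it agrees. This is a minimal illustrative sketch, not a real diagnostic tool, and the hostnames are placeholders.

```python
# Illustrative sketch only: ask a mail server (hostname is a placeholder)
# whether it will accept mail addressed to a domain it does not host.
import smtplib

def looks_like_open_relay(host):
    with smtplib.SMTP(host, 25, timeout=10) as server:
        server.helo("relay-test.example")
        server.mail("sender@relay-test.example")
        code, _ = server.rcpt("someone@unrelated-domain.example")
        return code == 250  # a 250 reply means it agreed to relay the message
```

An “open router,” in Gilmore’s telling, is the same behavior two layers down the stack: it forwards whatever IP packets it is handed, no questions asked.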

And…

Continue reading →

Public Service Announcement

by Tim Lee on January 15, 2008 · 2 comments

Incidentally, I think I might have to try flying without showing ID next time I fly. Chris says that not only does the TSA rarely give him much trouble over it, but it often actually gets him through the line faster, because they stick him in a special (shorter) trouble-makers’ line. Or if you don’t feel like explicitly asserting your rights, you can just state that you forgot to bring your ID.

Memo to Facebook…

by Tim Lee on January 15, 2008 · 2 comments

“Neoconservative libertarianism” is an oxymoron. And if you find it shocking that some rich entrepreneurs are libertarians, you need to get out more.

Cloud Computing Conference

by Tim Lee on January 15, 2008 · 0 comments

I’m at Princeton’s cloud computing workshop. One of the most interesting people I’ve met here is Chris Soghoian, creator of the legendary TSA boarding pass generator and author of a CNet blog on privacy and security issues. Somehow until now his blog had not yet made its way into my feed reader, but that oversight has been corrected.

One of the conference speakers is the founder of wesabe, a fascinating site for managing your finances. As Luis notes, they seem to be a company that takes security and user autonomy seriously.

Steve Bellovin points out a silly proposal to require licenses for Geiger counters. Like Bellovin, I’m at a loss as to why anyone would think this was a good idea. The police department says the legislation would “prevent false alarms and unnecessary public concern,” but it’s not clear either that false alarms are a major problem, or that this registration requirement would prevent them. Strangely enough, the article doesn’t cite a single example in which “false alarms” created serious problems for anybody.

A couple of other problems with the legislation spring to mind. First, it’s likely to be totally unenforceable. Geiger counters are widely available for a few hundred dollars. Any New Yorker who wants one will have little trouble going to New Jersey and buying one.

Second, I got to play around with a Geiger counter in my high school physics class. Does this legislation have an exception for instructional use? If not, this seems like a serious burden on education for no good reason.

ISPs Aren’t “Editors”

by Tim Lee on January 10, 2008 · 7 comments

I also disagreed with this part of Yoo’s argument:

The Internet has historically been regarded as a “pull” technology in which end users specified the exact content that they wished to see. The explosion of content on the World Wide Web has increasingly given the Internet the characteristics of a “push” technology in which end users rely on intermediaries to aggregate content into regular e-mail bulletins. Even search engine technologies have begun to exhibit forms of editorial discretion as they begin to compete on the quality of their search methodologies.

Mandating content nondiscrimination would represent an ill-advised interference with the exercise of editorial discretion that is playing an increasingly important role on the Internet. Editors perform numerous functions, including guaranteeing quality and ensuring that customers receive an appropriate mix of material. For example, consider the situation that would result if a publication such as Sports Illustrated could not exercise editorial control over its pages. One particular issue of the magazine might consist solely of articles on one sport without any coverage of other sports, and there would be no way to guarantee the quality of the writing…

The same principles apply to the Internet as it moves away from person-to-person communications to media content. This shift argues in favor of allowing telecommunications networks to exercise editorial control. Indeed, anyone confronting the avalanche of content available on the Internet can attest to the benefits provided by editorial filters. This transition also weakens the case for network neutrality.

I think this misfires on several levels. The first is that he’s mischaracterizing what advocates of network neutrality regulations are trying to accomplish. I don’t know of any prominent advocates of regulation who think the regulations should apply to Google’s search engine, much less Sports Illustrated’s home page. Of course editorial discretion is important in a world of increasing information.

Continue reading →

Ron Paul

by Tim Lee on January 10, 2008 · 34 comments

Since I’ve mentioned Ron Paul a few times in this space, I wanted to mention that after appalling examples of racist and anti-gay sentiments from his newsletters came to light, I would no longer characterize myself as a Ron Paul supporter. Before Tuesday, the only evidence of Paul’s racism I’d seen was one issue of the newsletter. I took Paul at his word that the comments in question were written without his knowledge or approval, and that the writer was let go when they were brought to his attention. But now it appears that at least a dozen issues of his newsletter over a period of some 5 years contained similarly appalling comments. I no longer find Paul’s rationalizations plausible. Whether Paul wrote the newsletters himself is irrelevant. If he is not a bigot himself, he had no qualms about associating with bigots over the course of many years. I have more thoughts on Paul’s newsletters here and here.

One of the things I disagreed with in Yoo’s paper is that he puts a lot of stock in the notion that Akamai is a violation of network neutrality. Akamai is a distributed caching network that speeds the delivery of popular content by keeping copies of it at various points around the ‘net so that there’s likely to be a cache near any given end user. Yoo says that the existence of Akamai “attests to the extent to which the Internet is already far from ‘neutral.'” I think this is either an uncharitable interpretation of the pro-regulation position or a misunderstanding of how Akamai works.

Network neutrality is about the routing of packets. A network is neutral if it faithfully transmits information from one end of the network to the other and doesn’t discriminate among packets based on their contents. Neutrality is, in other words, about the behavior of the routers that move packets around the network. It has nothing to do with the behavior of servers at the edges of the network because they don’t route anyone’s packets.
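To make the distinction concrete, here is a toy sketch (my own illustration, not anyone’s actual router code) of the difference between a neutral forwarding decision and a discriminatory one:

```python
# Toy illustration: a neutral router looks only at where a packet is going;
# a non-neutral one also peers at application-level details like the port.

ROUTING_TABLE = {"203.0.113.7": "next-hop-A", "198.51.100.9": "next-hop-B"}

def neutral_forward(packet):
    # Decision depends only on the destination address.
    return ROUTING_TABLE[packet["dst_ip"]]

def discriminating_forward(packet, disfavored_ports=frozenset({25, 6881})):
    # Hypothetical non-neutral policy: degrade traffic based on what it is.
    if packet["dst_port"] in disfavored_ports:
        return None  # drop it, or shunt it into a slow queue
    return ROUTING_TABLE[packet["dst_ip"]]
```

The neutrality question, on this view, is entirely about which of these two policies the routers in the middle of the network follow; servers at the edges never make forwarding decisions at all.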

Now, Yoo thinks content delivery networks like Akamai violate network neutrality:

When a last-mile network receives a query for content stored on a content delivery network, instead of blindly directing that request to the designated URL, the content delivery network may redirect the request to a particular cache that is more closely located or less congested. In the process, it can minimize delay and congestion costs by taking into account the topological proximity of each server, the load on each server, and the relative congestion of different portions of the network. In this manner, content delivery networks can dynamically manage network traffic in a way that can minimize transmission costs, congestion costs, and latency…

The problem is that content delivery networks violate network neutrality. Not only does URL redirection violate the end-to-end argument by introducing intelligence into the core of the network; the fact that content delivery networks are commercial entities means that their benefits are available only to those entities willing to pay for their services.

I think Yoo is misreading how Akamai works because he takes the word “network” too literally. Content delivery networks are not “networks” in the strict sense of physical infrastructure for moving data around. The Akamai “network” is just a bunch of servers sprinkled around the Internet. They use vanilla Internet connections to communicate with each other and the rest of the Internet. Internet routers route Akamai packets exactly the same way they route any other packets.

The “intelligence at the core of the network” Yoo discusses doesn’t actually exist in routers (which would violate network neutrality), but in Akamai’s magical DNS servers. DNS is the protocol that translates a domain name like techliberation.com to an IP address like 72.32.122.135. When you query an Akamai DNS server, it calculates which of its thousands of caching servers is likely to provide the best performance for your particular request (based on your location, the load on various servers, congestion, and other factors) and returns its IP address. Now, from the perspective of the routers that make up “the core of the network,” DNS is just another application, like the web or email. Nothing a DNS server does can violate network neutrality, just as nothing a web server does can violate network neutrality, because both operate entirely at the application layer.
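Here is a toy sketch of the kind of application-level decision I’m describing. The server list, coordinates, load numbers, and cost formula are all invented for illustration; Akamai’s real selection logic is proprietary and far more sophisticated.

```python
# Invented data: a handful of caching servers with rough locations and loads.
CACHES = [
    {"ip": "192.0.2.10", "location": (40.7, -74.0), "load": 0.3},   # near New York
    {"ip": "192.0.2.20", "location": (37.8, -122.4), "load": 0.8},  # near San Francisco
]

def pick_cache(client_location):
    """Return the IP of the cache with the best distance/load trade-off."""
    def cost(cache):
        dx = cache["location"][0] - client_location[0]
        dy = cache["location"][1] - client_location[1]
        distance = (dx * dx + dy * dy) ** 0.5
        return distance * (1 + cache["load"])  # penalize busier servers
    return min(CACHES, key=cost)["ip"]

# The DNS server hands back pick_cache(...) as an ordinary A record. From the
# routers' point of view this is just another application-layer response.
```

The point is that all of this happens inside an ordinary DNS reply; the routers carrying it neither know nor care how the answer was chosen.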

So in a strict technical sense, Akamai is entirely consistent with network neutrality. It’s an ordinary Internet application that works just fine on a vanilla Internet connection. Now, it is true that one of the ways Akamai enhances performance is by placing some of its caching servers inside the networks of broadband providers. This improves performance by moving the servers closer to the end user, and it saves broadband providers money by minimizing the amount of traffic that traverses their backbones. This might be a violation of some extremely broad version of network neutrality, and there’s certainly reason to worry that an overzealous future FCC might start trying to regulate the relationship between ISPs and Akamai. But Akamai is not, as Yoo would have it, evidence that the Internet is already non-neutral.

Reader Deane had some great questions that I thought would be worth addressing in a new post:

Your major contention seems to be that closed-source software would be less efficient than open source because the latter is more decentralized..

but arnt’ you really talking about a management style here? I think its completely plausible for a closed-source software to have a very decentralized development process.. isnt that how google seems to do stuff? i dont know about microsoft.

Does this have anything to do with the source being open? i dont think so.

Any product development seems to need a degree of centralization, with teams, firms n so on. the degree of centralization depending on the relative cost/benefits of the management decision.

So i don’t see the point really of having a debate on open source vs closed source software. or to the fact that what’s efficient at creativity n so on, as libertarians i thought we’d trust the market to make those kind of decisions, on a product by product basis.

First of all, let me make clear that I don’t see this as a debate about “open source vs. closed source software.” Both styles of software development have their place, and I certainly wouldn’t want to be misunderstood as being opposed to closed-source development. I just think that the advantages of open source software development processes tend to be underestimated, and that in particular Lanier’s criticisms were rather misguided.

Continue reading →

I just finished a second read-through of Chris Yoo’s paper, “Network Neutrality and the Economics of Congestion.” It’s an excellent paper, and in this post I’m going to highlight some of the points I found most compelling. In a follow-up post I’ll offer a few criticisms of the parts I didn’t find persuasive.

One point Yoo makes very well is that people often overestimate the ability of large media companies to dominate the online discussion. He points out, for example, that fears that the merged AOL Time Warner would become an unstoppable online juggernaut turned out to be overblown. The combined firm had little ability to shape the browsing habits of AOL customers, and AOL continued to bleed customers.

Similarly, Yoo makes the important point that when evaluating the ability of a broadband provider to coerce a website operator, it is the broadband company’s national market share, not its local market share, that matters:

application and content providers care about the total number of users they can reach. So long as their total potential customer base is sufficiently large, it does not really matter whether they are able to reach users in any particular city. This point is well illustrated by a series of recent decisions regarding the market for cable television programming. As the FCC and the D.C. Circuit recognized, a television programmer’s viability does not depend on its ability to reach viewers in any particular localities, but rather on the total number of viewers it is able to reach nationwide. So long as a cable network can reach a sufficient number of viewers to ensure viability, the fact that a particular network owner may refuse carriage in any particular locality is of no consequence. The FCC has similarly rejected the notion that the local market power enjoyed by early cellular telephone providers posed any threat to the cellular telephone equipment market, since any one cellular provider represented a tiny fraction of the national equipment market. Simply put, it is national reach, not local reach, that matters. This in turn implies that the relevant geographic market is a national one, not a local one. What matters is not the percentage of broadband subscribers that any particular provider controls in any geographic area, but rather the percentage of a nationwide pool of subscribers that that provider controls.

Once the relevant market is properly defined in this manner, it becomes clear that the broadband market is too unconcentrated for vertical integration to pose a threat to competition. The standard measure of market concentration is the Herfindahl-Hirschman Index (HHI), which is calculated by summing the squares of the market shares of each individual firm. The guidelines employed by the Justice Department and the Federal Trade Commission establish 1800 as the HHI threshold for determining when vertical integration would be a cause for anticompetitive concern. The FCC has applied an HHI threshold of 2600 in its recent review of mergers in the wireless industry. The concentration levels for the broadband industry as of September 2005 yield an HHI of only 1110, well below the thresholds identified above. The imminent arrival of 3G, WiFi, WiMax, BPL, and other new broadband technologies promises to deconcentrate this market still further in the near future.
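To see how the arithmetic works, here is a quick worked example. The market shares below are hypothetical, chosen only so they sum to 100 and illustrate the calculation; they are not the actual 2005 broadband figures Yoo relied on.

```python
# Herfindahl-Hirschman Index: sum of the squared market shares (in percent).
# The shares below are hypothetical, not real data.

def hhi(shares_percent):
    return sum(share * share for share in shares_percent)

# e.g. two firms around 20 percent each and a long tail of smaller players
example_shares = [20, 20, 9, 8, 7, 6, 5, 5, 4, 4, 3, 3, 2, 2, 2]
print(hhi(example_shares))  # 1142 -- roughly the ~1110 figure Yoo cites
```

A monopolist, for comparison, would score 100² = 10,000, which is why the 1800 and 2600 thresholds mark markets far short of that extreme.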

To put this in concrete terms, if Verizon wants to twist Google’s arm into paying fees for the privilege of reaching Verizon’s customers, Verizon’s relatively limited national market share (about 9 percent) doesn’t give it a whole lot of leverage. It is, of course, important to Google to be able to reach Verizon’s customers, but Google has enough opportunities in the other 91 percent of the broadband market that it wouldn’t be catastrophic if Verizon blocked its customers from accessing Google sites. Conversely, Google would have a strong incentive not to accede to Verizon’s demands, because if it did so it would immediately face demands from the other 91 percent of the broadband market for similar payoffs. Which means that Verizon, knowing that blocking Google would be a PR disaster for itself and that Google would be unlikely to capitulate quickly, isn’t likely to try such a stunt.

The analysis would be different if we had a single firm that controlled a significant chunk of the broadband market—say 50 percent. But there aren’t any firms like that. The largest appear to be Comcast and AT&T, with about 20 percent each. That’s a small enough market share that they’re unlikely to have too much leverage over web providers, which makes me think it unlikely that broadband providers would have the ability to unilaterally impose a “server pays” regime on them.