Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Yesterday I responded to Solveig Singleton’s comments about the Linux DVD player issue. Today I want to focus on the other major argument of her posts, namely that DRM is sufficiently effective that the DMCA is worth it even if it does have some disadvantages.

Unfortunately, the debate over whether DRM is effective often has a “Does not! Does too!” quality to it. I’m going to try to dig into the matter a little more deeply to see if we can get past that to some substantive discussion of when DRM is effective and when it isn’t. I apologize in advance for the length of this post.


Kerr on NSA and FISA

by Tim Lee on May 11, 2006

Orin Kerr has a lengthy analysis of the latest NSA spying revelations. He concludes that it doesn’t violate the Fourth Amendment but likely runs afoul of several statutes. This paragraph didn’t strike me as being quite right:

The legality of the program under FISA is somewhat similar to the legality of the NSA program we learned about a few months ago. The key question is, did the monitoring constitute “electronic surveillance” under FISA, and if so, does the Authorization to Use Military Force allow it? Note that FISA’s definition of “electronic surveillance” goes beyond accessing only content information and extends to some non-content information. If the program did involve “electronic surveillance” under FISA, then we’re right back to the same question that has been raised about the legality of the known NSA domestic surveillance program. If that’s right, your views of the legality of the new NSA program will pretty much coincide with your views of the legality of the NSA program disclosed a few months ago.

It seems to me that one of the arguments frequently deployed by the president and his supporters is that the wiretapping program only targeted international calls. On this theory, FISA doesn't apply at all because FISA only governs domestic surveillance. I don't think I buy that argument, but I can easily see someone who does concluding that the wiretapping program is legal, but this new program is not.

Remember how the administration said it was only monitoring international calls? Well, never mind. It turns out that the NSA is building a database of every phone call made in the United States:

The NSA program reaches into homes and businesses across the nation by amassing information about the calls of ordinary Americans–most of whom aren’t suspected of any crime. This program does not involve the NSA listening to or recording conversations. But the spy agency is using the data to analyze calling patterns in an effort to detect terrorist activity, sources said in separate interviews. “It’s the largest database ever assembled in the world,” said one person, who, like the others who agreed to talk about the NSA’s activities, declined to be identified by name or affiliation. The agency’s goal is “to create a database of every call ever made” within the nation’s borders, this person added.

Having the NSA know whom I called is less creepy than having my conversations recorded. But it's still creepy. And probably illegal.
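To see why bare calling records are still revealing, consider how little code it takes to turn them into a social graph. Here's a minimal sketch of the kind of pattern analysis the article describes; the numbers are entirely made up:

```python
from collections import defaultdict

# Hypothetical call-detail records: (caller, callee) pairs.
records = [
    ("555-0101", "555-0202"),
    ("555-0101", "555-0303"),
    ("555-0202", "555-0303"),
    ("555-0404", "555-0101"),
]

# Build an undirected who-talked-to-whom graph.
contacts = defaultdict(set)
for caller, callee in records:
    contacts[caller].add(callee)
    contacts[callee].add(caller)

# Without hearing a word of any conversation, the graph already
# exposes social structure: who is central, who talks to whom.
for number, peers in sorted(contacts.items()):
    print(number, "talked to", len(peers), "distinct numbers")
```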

MPEG Patent Thicket

by Tim Lee on May 11, 2006

In comments to my big DVD post, Urijah points out another obstacle to a completely free and legal version of MPlayer or Xine: the MPEG format is heavily encumbered by patents, and commercial entities generally must pay $2.50 per installation for a license.

I haven’t looked into this issue in detail, but if this article is right, this problem likely extends beyond MPEG-2 to other video-playback technologies:

All patents in the list of the MPEG licence association in regard to the MPEG-4 standard were examined and analysed. After intensive study of relevant literature and more than 100 patents of the relevant companies we can say now: Upon careful examination, we can not find any advances over the prior art in said list that could justify the granting of a patent. Most of these patents should be attackable in court, but who would take the burden of litigation against 900 patents owned by dozens of large companies?

When people talk about “the MPEG-2 patents,” they aren’t referring to a specific patent that describes the MPEG-2 standard. Rather, they’re referring to a pool of some 640 patents, held by dozens of companies, covering various aspects of the MPEG-2 standard. If 640 patents describe a single video format, it’s a safe bet that a substantial fraction of them cover any conceivable alternative video format. Which means that, technically speaking, all free video-playing software probably infringes numerous patents.

It’s also quite possible that the MPEG-LA wouldn’t bother suing an open source project, which doesn’t have any money anyway. At worst, I would think MPlayer and Xine could charge people $2.50 to download copies and turn the revenue over to the MPEG-LA. They could still distribute the source code for free, so this wouldn’t greatly hamper their development as open source projects.

In any event, the best solution is to repeal software patents, which impede innovation in this and many other software categories.

Mind the Spin

by Tim Lee on May 10, 2006

Art Brodsky of Public Knowledge blogs about the fantastic success of the British telecom market:

The advertisement on the wall in the subway station was hard to believe–a broadband service with 24 meg download for about $45 per month. That was the good news. Unfortunately, the service isn’t available in the U.S. The ad was on the wall of a tube stop in London and the company, Be, http://www.bethere.co.uk, is British. Just to rub it in a little, it gets better. There is also a cheaper option, about $25 per month, which still gets you the 24 mbps download, but with a slower upload speed. This in a city in which a bottle of water will set you back about $2.25. Now, let’s contrast that combination of price and service with an ad in today’s Washington Post, in which Verizon will sell you the blinding speed of 768 kbps for $17.99 per month with a yearly contract. And for one more bit of shopping–Verizon’s FIOS service, their fiber optic super-speedy, up to 30 mbps version. What will that cost you? According to the Verizon web site, up to 30 mbps can be had for between $180 per month and $200 per month.

Sounds pretty terrible! In the United States, you have to pay about 4 times as much for slightly more bandwidth, or you can pay slightly less for 1/30 the bandwidth. However, with a little bit of research it becomes apparent that Brodsky might be cherry-picking his examples just a little bit.

Let’s start with the high end. For $45–exactly the same price as Be’s unlimited plan–you can get FiOS from Verizon at 15 megs down and 2 megs up. That’s clearly slower downstream, though not dramatically so, and slightly faster upstream.

As for the low end, in my area Charter is offering a 3 Mbit service for $19.99 for the first six months, after which it goes up to $29.99. 24 is obviously a lot more than 3! At least, until you read the fine print: Brodsky doesn’t mention that the low-end $25 Be plan has a 1 GB monthly download cap. That means that if you download at full speed, you get to saturate your 24 mbit connection for a whopping five and a half minutes every month. If you shell out another $5/month, you can get another 5 GB, which means you can download at full speed for about half an hour every month. Clearly, this plan is not designed for people who would make much use of the full 24 mbit link.
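For anyone who wants to check the arithmetic, here's a quick sketch:

```python
def minutes_at_full_speed(cap_gb: float, link_mbps: float) -> float:
    """Minutes of saturated downloading before hitting a monthly cap."""
    cap_megabits = cap_gb * 8_000  # 1 GB = 8,000 megabits (decimal units)
    return cap_megabits / link_mbps / 60

print(minutes_at_full_speed(1, 24))  # ~5.6 minutes on the 1 GB plan
print(minutes_at_full_speed(6, 24))  # ~33 minutes with the extra 5 GB
```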

I’d like to learn more about the British model. It does sound like they have more competition, which is intriguing. But in any event, if you do an apples-to-apples comparison, it doesn’t look to me like the British are very far ahead of us in price/performance.

On Linux DVD Players

by Tim Lee on May 10, 2006

Ms. Singleton has wrapped up her two-part critique of me and my paper. She takes a few more personal jabs at me than I think were called for, but she also makes some substantive arguments. I’m going to keep my focus on the latter.

Singleton makes two broad points: first, that CSS licensing is not a barrier to the creation of Linux DVD players, and that in any event, Linux DVD players are a tiny market, so we shouldn’t be too concerned about them. Second, that DRM is a more effective piracy deterrent than I claim. I’ll address her first point here and come back to the second in a future post.

I want to start by stepping back for a bit of perspective. In my paper, I claimed there were no Linux software DVD players. It turns out that isn’t true. There appear to be two DVD-playing programs for desktop Linux operating systems: one for the Linspire version of Linux, and the other for TurboLinux. (She also mentions LinDVD, but it’s designed for proprietary set-top boxes, not the general-purpose computers I was discussing in the paper.)


Just What Consumers Need

by Tim Lee on May 10, 2006

David Berlind points out a charming feature of the new BitTorrent movie-distribution network: its DRM scheme apparently isn’t compatible with the other DRM schemes already on the market. His reaction is about the same as mine:

To go with yet another proprietary DRM technology when the market is already full of existing non-interoperable ones that are screwing it up is quite an unnatural act and evidence that either Warner Bros., the Motion Picture Association of America (MPAA), or the movie industry as a whole have no clue how to right a ship that’s about to sink as it floods with stupidity. So. Let’s see. I need PlaysForSure-compliant technology to playback content X, FairPlay-compliant technology to playback content Y, and Bittorrent technology to playback content Z. Why don’t we bring back BetaMax and VHS while we’re at it?

He mentions Sun’s DReaM as an alternative, but as I’ve written before it’s not clear to me how that would be an improvement. DReaM would simply be a fourth (or fifth if you count Google’s DRM) incompatible DRM scheme. The fundamental problem here is that Hollywood is prioritizing (ineffective) piracy-fighting higher than giving their customers products they’ll actually want to buy.

BitTorrent and Piracy

by Tim Lee on May 10, 2006

Apropos the discussion of peer-to-peer technologies below, I have to say that the headline of this Forbes article, “WB Sails With Tech Pirate,” is rather obnoxious. Here’s how the article concludes:

However, BitTorrent raised $8.75 million last year in a bid to transform itself from the leading developer of piracy software into a legitimate company that distributes content on the Internet.

BitTorrent is a software tool for efficient file distribution. Do a lot of people use it to commit copyright infringement? Sure. But the same can be said of many other Internet technologies. Indeed, most users who download illegal files with BitTorrent find those files using web-based directories of files available for download. By that standard, the web itself is “piracy software,” since many people find illegal BitTorrent trackers using web-based search engines.

Just like the web, BitTorrent has plenty of legitimate uses. Many open source projects, including SUSE Linux and OpenOffice, use BitTorrent to distribute their software and save money on bandwidth. Blizzard distributes World of Warcraft updates via BitTorrent.
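Incidentally, the protocol itself is content-neutral: a .torrent file lists SHA-1 hashes of fixed-size pieces of a file, and downloaders verify each piece against those hashes regardless of which peer the bits came from. A minimal sketch of that verification idea (the piece size below is just a common choice):

```python
import hashlib

PIECE_SIZE = 256 * 1024  # 256 KB pieces; real torrents use a power of two

def piece_hashes(path):
    """SHA-1 digest of each fixed-size piece, as listed in a .torrent file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            piece = f.read(PIECE_SIZE)
            if not piece:
                break
            hashes.append(hashlib.sha1(piece).digest())
    return hashes

def verify_piece(piece, expected_digest):
    """A downloader checks every piece it receives, whoever sent it."""
    return hashlib.sha1(piece).digest() == expected_digest
```

Nothing in that machinery knows or cares whether the file is a Linux ISO or a bootleg movie, which is exactly the point.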

Finally, the Forbes article seems to have difficulty distinguishing between the company and the technology. BitTorrent-the-technology is open source software that anyone is free to use to distribute any type of content. BitTorrent-the-company runs a search engine that lets users find torrent files for download, and it removes links to infringing files as they’re discovered. The fact that some people use the technology to distribute illegal files is no more the company’s fault than it’s the Apache Foundation’s fault when somebody uses its web server to distribute infringing files.

Larry Lessig periodically links to this 2000 article in the Prospect about network neutrality. In it, he makes the closest thing I’ve seen to a convincing argument that network neutrality regulation was responsible for the Internet’s growth:

But there is one part of the Internet where end-to-end is more than just a norm. Here the principle has the force of law, and the network owner cannot favor one kind of content over another or prefer one form of service over another. Instead the network owner must keep its network open for any application or use the customers might demand. Competitors must be allowed to interconnect; consumers must be allowed to try new uses. In this part of the Internet, “open access” is the rule. This part of the Internet is–ironically enough–the telephone network, where because of increasing regulation imposed by the D.C. Circuit Court of Appeals in the 1970s–leading to a breakup of AT&T by the Justice Department in 1984 and culminating with the Telecommunications Act of 1996–the old telephone network has been replaced with a new one over which the owner has very little control. Instead, the FCC spends an extraordinary amount of effort making sure the telephone lines remain open to innovators and consumers on terms analogous to the terms required by an end-to-end principle: nondiscrimination and a right to access.

The argument here is that by ensuring that any consumer could call any ISP in his or her area code, the FCC’s regulation of the telephone network had the unintended consequence of imposing a de facto network neutrality rule on telecom companies, thereby ensuring that the Baby Bells couldn’t leverage their ownership of phone lines to control the Internet.

This isn’t a crazy argument. I’m rather annoyed that my local telco, SBC (now AT&T), requires me to get Yahoo-branded Internet service, even if I’d rather connect via another ISP. The fact that anybody could become an ISP by connecting to the public phone network indisputably had a positive impact on competition among ISPs.

Still, this argument doesn’t strike me as being quite right.


Robert X. Cringely has an interesting article about the future of digital content distribution and peer-to-peer networks. I think his big thesis–that the existing one-to-many, end-to-end model for distributing video content won’t scale–is right. But I think he’s missing a few things when he points to peer-to-peer technologies as the savior.

Here’s the technical problem: Right now, if ABC wants to deliver 20 million copies of Desperate Housewives over the Internet, it would have to transmit the same stream of bits 20 million times to its ISP. The ISP, in turn, might have to transmit 5 million copies to each of 4 peers. Those peers, in turn, might have to transmit a million copies to each of 5 of their peers. And so on down the line, until each end user receives a single copy of the content. That’s wasteful, because sending 20 million redundant copies of a file uses a lot of bandwidth.
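To put rough numbers on that waste (the episode size below is my assumption, not a figure from Cringely):

```python
viewers = 20_000_000
episode_gb = 0.35  # assumed: ~350 MB for an hour of standard-definition video

# Under pure unicast, the origin sends a full copy per viewer.
origin_traffic_pb = viewers * episode_gb / 1_000_000  # GB -> PB (decimal)
print(f"~{origin_traffic_pb:.0f} PB leaves the origin for one episode")
# ~7 PB of traffic, of which only ~0.35 GB is unique data.
```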

In a perfect world, ABC should only have to transmit one copy to its ISP, and the ISP, in turn, should only have to transmit one copy to each interested peer, and so on. Each Internet node would receive one copy and transmit several, until everyone who wants a copy is able to get one. Geeks call this multicast. It’s theoretically part of the TCP/IP protocol suite, but for a variety of technical reasons I don’t fully understand, it hasn’t proved feasible to implement multicast across the Internet as a whole.
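At the socket level, joining a multicast group is straightforward; here's a minimal receiver sketch (the group address and port are arbitrary examples):

```python
import socket
import struct

MCAST_GROUP = "239.1.1.1"  # arbitrary address in the private multicast range
MCAST_PORT = 5007          # arbitrary example port

# Open a UDP socket and join the group. Once joined, the *network*
# is responsible for delivering one copy of each datagram to every
# member; the sender transmits each packet exactly once.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))
mreq = struct.pack("4sl", socket.inet_aton(MCAST_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, addr = sock.recvfrom(65535)
print(f"received {len(data)} bytes from {addr}")
```

The hard part isn't the end hosts; it's that every router between sender and receivers has to cooperate, and most routers on the public Internet don't forward multicast traffic.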

However, there are plenty of quasi-multicast technologies out there. One of the most important is Akamai’s EdgePlatform. It’s a network of 18,000 servers around the world that serve as local caches for distributing content. So when a company like Apple wants to distribute 20 million copies of a file, it doesn’t have to transmit it 20 million times. Instead, it transmits the content to Akamai’s servers (and presumably Akamai’s servers distribute it among themselves in a peer-to-peer fashion) and then users download the files from the Akamai server that’s topologically closest to them on the network.
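The bandwidth savings from that design are easy to sketch; the deployment numbers below are invented for illustration:

```python
# Hypothetical deployment: clients grouped by their nearest edge cache.
clients_per_cache = {
    "london": 4_000_000,
    "tokyo": 6_000_000,
    "new-york": 10_000_000,
}

# Without caches the origin sends one copy per client; with them,
# one copy per cache, and each cache fans out to its local clients.
print("origin sends without caches:", sum(clients_per_cache.values()))
print("origin sends with caches:   ", len(clients_per_cache))
```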
