Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


More Common Sense

by Tim Lee on February 24, 2006

It appears that Judge Spencer flinched from issuing a BlackBerry injunction. He continued to rattle his saber, but he stopped short of ordering the network shut down. And for good reason–you don’t piss off the majority of the nation’s lawyers and business execs without consequences.

Meanwhile, the patent office issued another rejection of an NTP patent. Things look pretty bleak for the nation’s leading patent troll.

Uncommon Common Sense

by Tim Lee on February 24, 2006

I never want to disappoint Jim (and in my timezone it’s only 9:46 AM, so he wins his bet), but I don’t have a whole lot to say about this other than “duh”.

Update: Well, OK, I do have one other thing to say about Goldberg’s comments: Steve Jobs agrees with him:

A lot of [music execs] didn’t use computers–weren’t on e-mail; didn’t really know what Napster was for a few years. They were pretty doggone slow to react. Matter of fact, they still haven’t really reacted, in many ways. And so they’re fairly vulnerable to people telling them technical solutions will work, when they won’t. When we first went to talk to these record companies–you know, it was a while ago. It took us 18 months. And at first we said: None of this technology that you’re talking about’s gonna work. We have Ph.D.’s here, that know the stuff cold, and we don’t believe it’s possible to protect digital content.

Of course, he doesn’t talk about this point much now that DRM is helping Apple to cement their dominance of the digital music business. But Jobs knows perfectly well that DRM doesn’t benefit the recording industry.

It’s a safe bet that Larry Page knows it too, but made the same calculation Jobs did: Hollywood won’t do business with Google unless Google agrees to wrap their content in DRM.

Judge Martz Needs Internet Smartz

by Tim Lee on February 23, 2006

Having read the Perfect 10 v. Google decision, I agree with Fred von Lohmann’s analysis of it: this is a basically solid decision that goes off the rails because Judge Martz didn’t seem clear on the relationship between Google Image Search and AdSense.

Here’s how those two products work: Google Image Search is a search engine for images. It does not serve ads. AdSense is a third-party ad program whereby any website on the Internet can allow Google to place ads on its pages in exchange for a cut of the revenues. The relationship between these programs is… well, there isn’t really a relationship, except that they’re both Google products. Sometimes users use Google Image Search to find infringing pages that happen to carry AdSense ads. The court decided this was evidence that Google Image Search was profiting off of infringement.

But that’s ridiculous. Google Image Search doesn’t give any particular preference to web sites that serve up AdSense ads. And AdSense serves up ads regardless of what search engine brought the user to the site. If Google cancelled Google Image Search altogether, there’s little reason to think AdSense would suffer financially–users would likely find the same pages using other search engines.

If this standard is to be taken seriously, search engine companies are going to have to divest themselves of all other online services that might involve infringing copyrights. Yahoo! will have to sell off GeoCities. Microsoft will have to stop selling IIS, its web server.

Google Image Search and AdSense are unrelated products. It makes no sense to consider them as a single product for the purposes of fair use analysis. That should be obvious to anyone with substantial experience using the web. It seems like a reasonable assumption that Judge Matz isn’t the most Internet-savvy guy around.

Update: Oops! It looks like I imagined an “r” in Judge Matz’s name. Sorry!

Video Security Blanket

by Tim Lee on February 23, 2006

Via the Commons Music blog, I see this in-depth article about the fact that hardly any graphics cards you buy today will be compatible with the forthcoming HDCP copy-protection standard:

HDCP stands for High-bandwidth Digital Content Protection and is an Intel-initiated program that was developed with Silicon Image. This content protection system is mandatory for high-definition playback of HD-DVD or Blu-Ray discs. If you want to watch movies at 1920×1080, your system will need to support HDCP. If you don’t have HDCP support, you’ll only get a quarter of the resolution. As part of the Windows-Vista Ready Monitor article, I was going to publish a list of all of the graphics cards that currently support HDCP. I mean, I remember GPUs dating as far back as the Radeon 8500 that had boasted of HDCP support. Turns out, we were all deceived. Although ATI has had “HDCP support” in their GPUs since the Radeon 8500, and NVIDIA has had “HDCP support” in their GPUs since the GeForce FX5700, it turns out that things are more complicated–just because the GPU itself supports HDCP doesn’t mean that the graphics card can output a DVI/HDCP compliant stream. There needs to be additional support at the board level, which includes licensing the HDCP decoding keys from the Digital Content Protection, LLC (a spin-off corporation within the walls of Intel).

The more I read about these kinds of enterprises, the more I’m struck by how brittle they are. Each and every component in the HDCP content stream–the optical drive, the operating system, the graphics card, the monitor, and numerous smaller components–must be specifically reviewed and approved by the HDCP consortium to make sure it follows the rules. The millions of drives, computers, graphics cards, and monitors designed prior to the release of the HDCP spec (i.e. virtually all the video hardware in use today–even hardware that’s physically capable of playing high-resolution video) will have to be thrown out if consumers want to view Blu-Ray or HD-DVD content at full resolution. This is a tremendous cost in time, money, and consumer inconvenience.

Yet if a vulnerability is found in even one of those components (something that history and theory say is inevitable), the entire exercise becomes pointless. Somebody will exploit the vulnerability to decode the file and upload it to P2P networks. At that point, all the DRM in the world won’t stop someone from downloading an unprotected copy.
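To make the weakest-link point concrete, here’s a toy model (purely illustrative, and not how the real HDCP handshake works): the content stays inside the protected path only if every device in the chain enforces the rules, so a single compromised component is all an attacker needs.

    # Toy model of a "protected path." Illustrative only; real HDCP
    # key exchange and revocation are far more involved than this.
    def play_protected_content(devices):
        """The stream stays protected only if every device is compliant."""
        for device in devices:
            if not device["compliant"]:
                # One bad link and the decrypted stream escapes.
                return "unprotected copy available"
        return "content never leaves the protected path"

    chain = [
        {"name": "optical drive", "compliant": True},
        {"name": "operating system", "compliant": True},
        {"name": "graphics card", "compliant": False},  # e.g. a leaked device key
        {"name": "monitor", "compliant": True},
    ]

    print(play_protected_content(chain))  # -> "unprotected copy available"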

The HDCP effort is akin to adding a third deadbolt to your front door when the back door doesn’t even have a lock. It might make some of us feel better, but it’s not going to do much to stop the bad guys.

Mistrust-based DRM

by Tim Lee on February 22, 2006 · 2 comments

Last week, Randy Picker wrote about an idea for “mistrust based” digital rights management technology:

Watermarks are a form of identity-based DRM. The embedded watermark would allow a content owner to scan p2p networks in search of available content. Having found the content and the associated identity, the content owner would be able to respond to the illegal distribution. But respond how and won’t the anti-DRM software just strip the watermark anyhow? This is where mistrust comes in. In embedding identity into content, we may also need to embed access to something valuable, a hostage or mini-bond as it were. Consider a couple of versions of this. If access to content brought with it full-access to a customer’s account, customers would be quite careful about sharing access to the content.

Today, Ed Felten reacts to the proposal:

In the more traditional system, the watermark is secret–it can be read only by the copyright owner or its agents–and users fear being sued for infringement if their files end up on P2P. In Randy’s system, the watermark is public–anybody can read it–and users fear being victimized by fraud if their files end up on P2P. I’ll call these two alternatives “secret-watermark” and “public-watermark”. How do they compare? For starters, a secret watermark is much harder for an adversary to find and remove. If a watermark is public, everybody knows exactly where in the music it is stored. Common sense, and experience too, says that if you know where in a file information is stored, you can modify that part of the file and obliterate the information. But if the watermark is secret, then an adversary isn’t told where to look for it or how to change the file to remove it. Robustness of the watermark is an important issue that has been the downfall of past watermark systems. A bigger problem with the public-watermark design, I think, are the forces unleashed when your design principle is to enable fraud. For example, the system will lose its force if unrelated anti-fraud measures become more effective, or if the financial system acts to protect users from fraud. Today, a consumer’s liability for fraudulent credit card transactions is capped at $50, and credit card companies often forgive even that $50. (You could use some other account information instead of the credit card number, but similar issues would still apply.) Copyright owners would be the only online merchants who wanted a higher level of fraud on the Net.

I think Felten has the better argument here. Like most DRM proposals, Picker’s idea is great in theory but is likely to fall short when it comes to implementation. It’s much easier to imagine a watermark scheme with the characteristics Picker describes than to build one. Both Picker’s and Felten’s posts are worth reading in full.
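Felten’s point about public watermarks is easy to see with a toy example. Suppose (purely for illustration) that the mark is stored in the low-order bit of each audio sample: anyone who knows that can overwrite those bits and erase the mark with no audible change, whereas a mark embedded at secret locations with a secret key is much harder to find, let alone remove.

    # Toy least-significant-bit watermark. Illustrative only; real audio
    # watermarking schemes are far more sophisticated than this sketch.
    def embed_public_watermark(samples, bits):
        """Hide the watermark bits in the low-order bit of the first samples."""
        marked = list(samples)
        for i, bit in enumerate(bits):
            marked[i] = (marked[i] & ~1) | bit
        return marked

    def strip_public_watermark(samples):
        """If everyone knows where the mark lives, anyone can overwrite it."""
        return [s & ~1 for s in samples]

    audio = [1000, 1001, 1002, 1003]
    marked = embed_public_watermark(audio, [1, 0, 1, 1])
    print(strip_public_watermark(marked))  # mark erased, audio barely changed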

DRM in Action

by Tim Lee on February 21, 2006 · 2 comments

I’ve just finished reading Felten and Halderman’s excellent paper on the XCP and MediaMax copy-protection schemes adopted by Sony BMG. It’s well worth the read if you’re interested in getting a glimpse at the real-world implementation details faced by DRM designers.

What I found most striking was how unsophisticated most of the security mechanisms in these programs were. Felten and Halderman found several ways that a moderately technically sophisticated user could defeat the controls (that’s not counting “hold down the shift key” and “get a Mac”). It’s not clear to me what Sony BMG was trying to accomplish with this software, but it clearly wasn’t to keep determined users from getting unscrambled copies of their music.

A couple of weeks ago, in comments, I got a tongue-lashing from Solveig Singleton for my suggestion that DRM was a legal, rather than a purely private, enforcement mechanism:

The DMCA is certainly a legal barrier. And to some extent, effective DRM, or some of it, relies indirectly in turn on some kind of backup by the DMCA, enough to stop the commercial proliferation of cracking tools. But DRM and the DMCA are not the same thing!!! DRM is a private mechanism. Its basic operation is physical. Like a lock on a door. The fact that a policeman will bust you if you break a lock doesn’t make the lock any less a private mechanism. It has costs, but these are quite different from the costs of a legal mechanism as such.

But it appears that her colleague, James DeLong, disagrees with her:

Continue reading →

Mea Culpa on GoodMail

by Tim Lee on February 18, 2006

I haven’t checked Declan’s site in a few days, but I see that he’s posted a couple of insightful emails about the Yahoo/AOL/Goodmail pay-for-email program I criticized last week:

Imagine that you are an online service that needs to ensure that a customer order confirmation, or an equivalent critical transaction message, is delivered to the customer. Then imagine that you are offered a means of safely and reliably identifying this specific class of mail, so that it receives differential handling. The incentives for a company to pay to ensure that delivery are substantial. And that is what the recent announcement is about. It concerns a means of ensuring delivery of “transactional” mail. This is quite different from “marketing” mail and it is not in the least controversial.

This makes a lot more sense to me, and it makes me think my previous comments criticizing the program were too hasty. I thought it was a bad idea because much of the media coverage suggested that AOL’s long-term goal was to make all commercial bulk emailers pay postage if they wanted to reach AOL users. But it sounds like the purpose is rather different: it guarantees that high-value content like travel itineraries and bank statements will get through spam filters, while the treatment of other mail remains unchanged.

This is particularly important because many spammers do their best to emulate legitimate documents like bank statements, in the hopes of tricking users into clicking on them. That makes it difficult for spam filters to tell the difference, and raises the risk of both false positives and false negatives. Not only do users benefit by getting their email delivered promptly, but more importantly, the email comes with a “seal of approval” assuring the user that it’s genuine.
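Here’s a minimal sketch of how such a “seal of approval” might work, assuming the certified sender and the receiving mail provider share a secret key. Goodmail’s actual protocol isn’t described in these reports, so the mechanism below is hypothetical; the point is just that a verifiable token lets the provider whitelist a specific message instead of guessing from its content.

    # Hypothetical certified-mail token based on an HMAC; not Goodmail's
    # actual protocol, just an illustration of the idea.
    import hmac, hashlib

    SHARED_KEY = b"secret negotiated between certified sender and mail provider"

    def sign_message(body: bytes) -> str:
        return hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()

    def is_certified(body: bytes, token: str) -> bool:
        return hmac.compare_digest(sign_message(body), token)

    itinerary = b"Your itinerary: flight 123, departing 9:00 AM"
    token = sign_message(itinerary)

    # The provider routes around the spam filter only when the token verifies.
    print(is_certified(itinerary, token))        # True  -> deliver to inbox
    print(is_certified(b"Cheap pills!", token))  # False -> normal spam filtering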

If Declan’s commenter is right, this is not primarily about marketing emails, as the media reports I read implied. And it certainly isn’t targeted at bulk email in general. While companies certainly could use this service to ensure their monthly emails and such get through, many are likely to conclude it’s not worth the expense: 95% of their email likely makes it through already, and it’s probably not worth the cost to reach that final 5%. But on the other hand, when I purchase an airplane ticket, it’s pretty important that my itinerary reach my inbox. I bet Travelocity will be more than happy to kick in a quarter of a penny to make sure it reaches me.

So I take it back: this does sound like a promising concept. I should have done more digging before badmouthing it.

TechDirt points out yet another article about how the content industries are shooting themselves in the foot with overly aggressive copy protection. Next-generation video formats will only allow themselves to be viewed at full resolution on certain hardware. A lot of the computer hardware being sold today doesn’t make the cut, despite being physically capable of displaying the content at full quality. The result: you buy a shiny new computer with a Blu-Ray drive, and find that it plays Blu-Ray movies at lower quality than your old computer played DVDs. That will really get users excited about adopting the new format!

Why is Hollywood going out of their way to piss off their own customers?

The copy-protection muddle stems from Hollywood studios’ desire to avoid the film piracy that was born when tools for unlocking the encryption technology on today’s DVDs began spreading online in late 1999.

But that completely misunderstands why the DVD’s copy protection system failed. Hackers didn’t use the “analog hole” to record unprotected copies of DVD content. Rather, they reverse-engineered the encryption itself, allowing them to decode DVDs directly. All the “analog hole” countermeasures in the world won’t do a bit of good once the content itself has been decrypted.

This wouldn’t be the first time Hollywood has done its best to strangle a promising technology in its cradle by treating its customers like criminals.

Update: Ars highlights another way that copy protection on next-gen video formats is likely to irritate customers: because the copy-protection specs are still being negotiated with barely a month to go before release, the first batch of HD-DVD players will probably require a firmware upgrade before they’ll be able to actually play videos.

Software Libre

by Tim Lee on February 17, 2006

I often scratch my head when James DeLong writes about open source software:

Torwalds emphasis on reciprocity as a dominant value is right. It is a word used often here at PFF to describe the workings of the IP system, and to explain why unauthorized P2P violates the social contract. But he has an awfully limited view of reciprocity in that he insists that code can only be traded for code. This may do in a research context, but once one enters the world of affairs, not even the most primitive barter economy trades like for like. Og the Cro Magnon traded meat for a finely crafted club, or a log canoe for a tent. Now, granted, Torwalds is not talking about trading exactly the same code, but this is still a strange and unnecessary conttraint.

This is, shall we say, a strange and unnecessary argument. We on the libertarian side of the fence often extol markets, commerce, and for-profit institutions, because they work very well and provide us all with a lot of goods and services we value. But I think we too often fall into the trap of assuming that market institutions are always superior to non-market institutions, or (even worse) that for-profit institutions are always superior to not-for-profit ones.

But that’s silly. The essence of the libertarian position is that decentralized, voluntary institutions are better than centralized, coercive ones. As it happens, markets are among the most important examples of decentralized, non-coercive institutions, but they’re far from the only ones. Churches, private charities, universities, think tanks, and families are all examples of private organizations that do good things without primarily relying on the profit motive. I can’t remember ever seeing a libertarian attack churches for relying so much on volunteers rather than paid workers to get things done. Volunteering at your church is an example of reciprocity that doesn’t involve an exchange of money. We libertarians usually praise such arrangements as worthwhile alternatives to government coercion.

An open source software project is another example of a private, decentralized, voluntary institution. It’s the sort of thing that free-market types should be promoting, as another example of how valuable products can be created without regulations and subsidies. Yet DeLong regularly does just the opposite.

Now obviously, the fact that DeLong’s criticism isn’t intrinsically libertarian doesn’t mean it’s wrong. Here’s what he’s missing: Torvalds demands reciprocity in the form of code rather than money because the source code is actually useful to him. Ed Felten named his blog “Freedom to Tinker” for a reason. Software that comes with its source code is more useful than software that doesn’t. Being able to “tinker” with the software we use is an ability many of us programmers value, and it’s taken away from us by proprietary software.

DeLong seems to think that open source programmers are just ideologically driven zealots who don’t like paying for things. But that misunderstands their motivation. Primarily, their concern is technical, not ideological or financial. The ability to examine and change a program’s source code is valuable, independently of whether you paid for the software in the first place, and independently of whether you’re planning to share it with others. So Torvalds’ motivation in trading code for code is that he actually wants the code. Not because he hates the profit motive, but simply because the code is useful to him, and he can’t get it with proprietary software.

This is a point that non-programmers have trouble understanding. When they hear the phrase “free software,” they hear “software I don’t have to pay for.” That’s not what the phrase means–it’s an unfortunate limitation of the English language. The open source movement uses the phrase “free as in free speech, not free beer” to try to explain the distinction. It’s about what you can do with the software, not how much you paid for it. This confusion doesn’t exist as much in Spanish, where there are different words for these two concepts: “gratis” for “free as in beer,” and “libre” for “free as in speech.” The purpose of the GPL is to preserve the latter for the benefit of programmers. The former is just an incidental benefit to users.