Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Making a Cat That Barks

by Tim Lee on December 15, 2005

This BusinessWeek column sounds very sensible:

Situations like this, together with the Sony BMG mess, have given the whole concept of DRM a bad name. To win public acceptance, the industries involved–content, information technology, and consumer electronics–are going to have to put maneuvering for advantage aside and stick to clear, consumer-first goals. Above all, users should not have to notice the existence of the particular DRM as long as they abide by clearly stated copying limitations. Digital content should use standard DRM technology built into players such as iTunes and Windows Media Player. And any content should play on any device that can physically display it, without regard to operating system. The entertainment industry has a great opportunity for new markets, and the PC and consumer-electronics industries have an opening for new products. But realizing this potential will require all of them to show some respect for their customers.

This is an admirable sentiment. There’s just one problem: “standard DRM technology” is a contradiction in terms. There’s never been such a thing, and there never will be. DRM technology is proprietary by necessity.

As I’ve argued in the past, DRM schemes must be proprietary formats, with a single authority (say, Apple or Microsoft) setting the rules and deciding who may participate. Moreover, the security of the format is inversely proportional to the number of devices that adopt it. Every new device is another opportunity for hackers to break it.

I think it’s hard to overestimate the importance of this point. It’s easy to gloss it over in policy debates, to assume that achieving interoperability is just a technical problem that the geeks are working on and will solve in a few years. But it’s not. Building an interoperable DRM system is like making a cat that barks.

The problem is that the vast majority of the people who write about technology policy aren’t programmers. They don’t really have a clear idea of what DRM does, so they don’t have the technical background to evaluate the claims of the DRM snake-oil salesmen. When a big technology company announces an “open” DRM format, the tech press reports on it dutifully, without really pressing the company for details.

If they did, I suspect they would find that the various “open” and “interoperable” DRM schemes now being developed are vaporware: years from completion, with a lot of the implementation details not quite worked out. It’s easy to talk about interoperable DRM in the abstract. But so far, no one has succeeded in actually implementing such a system. That’s not a coincidence, because what they’re trying to do is, as Ed Felten puts it, a “logical impossibility.”

Humorous Site of the Day

by Tim Lee on December 14, 2005

I could waste countless hours perusing patently silly, a blog featuring ridiculous “inventions” that have been granted patent protection.

What’s scary is that these inventions are mostly things that would be obvious (or obviously useless) to a sixth grader. If there are dozens of those, imagine how many thousands of illegitimate patents there are on subjects requiring some technical know-how to evaluate obviousness and usefulness.

Credit Where Due

by Tim Lee on December 13, 2005

I’ve been beating up on PFF a lot lately, so I think I ought to give credit where credit is due: their amicus brief from this May, urging the Supreme Court to grant cert in KSR v. Teleflex in order to reconsider the patent obviousness standard, is excellent. They do a fantastic job of explaining the dangers of granting patents too easily. I found this passage particularly entertaining:

The defects of such a doctrine may well be illustrated by the notorious Patent 6,368,227, “Method of Swinging on a Swing,” obtained by a five-year-old whose parent happened to be a patent lawyer. It is also called the “sideways swinging” patent, because that is what it covers–the idea that a swing can be made to move sideways as well as back and forth by pulling on the chains in a particular way. The Patent Commissioner ordered a re-examination of the ‘227 patent on May 21, 2002, and ultimately PTO found sufficient prior art in patents granted in 1939, 1943, and 1969 to result in its invalidation. But the case ought never to have gotten so far; scarce patent-examiner hours and public resources had to be expended to officially recognize the obvious. The difficulty the office faced resulted from the fact that the Federal Circuit’s standard forbade the examiners to take notice of what, literally, any child would know. As a newspaper report on the matter said, “The patent office is searching for documented proof that children have always powered their swings by pulling on the chains. Then, and only then, will it kill the patent as quietly as possible.” Had USPTO been unable to find written proof of something known to all, then under the Federal Circuit test the patent would have stood. “[D]eficiencies of the cited references cannot be remedied by the Board’s general conclusions about what is ‘basic knowledge’ or ‘common sense.’” “‘Common knowledge and common sense,’ even if assumed to derive from the agency’s expertise, do not substitute for authority when the law requires authority.” In the swing patent case, ultimately, the PTO reached the sensible result. But the Federal Circuit decisions present an obstacle to the office’s doing so in a significant number of cases. For example, in 1999, the Federal Circuit reversed the PTO’s rejection of a patent application for orange trash bags with jack-o-lantern faces. A prior art search had turned up instructions for a children’s craft project involving the drawing of pumpkin faces on large orange bags. But this was not sufficient, because the instructions referred to paper bags, not to trash bags.

All of the concerns they raise–that people don’t always publish obvious ideas, that spurious patents create a “landmine” for genuine innovators, that mere combinations of well-known elements don’t constitute a new invention–apply with a vengeance to software. It’s possible (I’ll have to do some more reading and thinking before I make up my mind) that revising the obviousness standard as they propose would solve the problems with software patents. Regardless, PFF’s proposals for revising the obviousness standard deserve a serious look by Congresscritters as they ponder the grim possibility of a future without Blackberries.

Update: I’m also curious what DeLong had in mind when he mentioned software patents in the post I criticized last week. Having read his brief, it’s clear he does understand the dangers of granting patents too freely, which makes me wonder why he singled out the “no software patents” view for criticism. The criticisms of software patents that I’ve seen, at least, rest heavily on the contention that software “inventions” are almost always obvious to a skilled practitioner of software development. I imagine he would at least agree with that critique, if not with the proposed remedy.

A Solution in Search of a Problem

by Tim Lee on December 13, 2005

Via TechDirt, the Wall Street Journal is reporting that Harper Collins is going to scan its books and provide the digital scans for use by search engines such as Google.

I don’t get it.

Presumably this was contemplated as a response to Google Book Search, but I don’t really see how it clarifies or addresses any of the concerns raised by that case. The question at issue in that case is whether Google has the right to scan and index publishers’ books without their permission. Obviously, if Harper Collins is giving Google the digital copies itself, then it’s implicitly giving Google permission, which kind of makes the lawsuit a moot point, doesn’t it?

Perhaps Harper Collins’s executives think that scanning the books themselves and letting Google “borrow” the digital copies would somehow make the process more secure. But that doesn’t make a lot of sense. This program simply changes the source of Google’s data. It doesn’t do anything, as far as I can tell, to change how and where the data is stored within Google’s search engine. Google is still going to need to keep local copies of the books (or at least indexes of them, from which the books can easily be reconstructed) on its servers for performance reasons, so it’s not likely to reduce the total number of copies in circulation.

My guess is that HC’s leadership simply isn’t thinking clearly about the way digital content works. If the “digital copies” in question were physical books, it would make sense for HC to keep the physical copies in its warehouse and “lend” them to Google and others to use and then “give back.” That would increase security because “the originals” would always stay in HC’s possession. But digital data doesn’t work like that. Digital data isn’t “moved,” it’s copied. So when Google “borrows” HC’s digital books, it is, in fact, making a copy of them. That copy is just as good as the original, and every bit as much a security risk.
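The copy-vs.-move point can be illustrated in a few lines of Python (the filenames and text here are hypothetical, purely for illustration):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

# Hypothetical files, purely for illustration: "lending" digital data
# to another party means duplicating the bytes, not moving them.
workdir = Path(tempfile.mkdtemp())
original = workdir / "book_at_publisher.txt"
original.write_text("The full text of a digitized book...")

# The publisher "lends" the scan to a search engine.
borrowed = workdir / "book_at_search_engine.txt"
shutil.copyfile(original, borrowed)

def digest(path):
    """Fingerprint a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# "The original" never leaves the publisher, and the borrowed copy is
# bit-for-bit identical -- every bit as much of a security risk.
assert original.exists()
assert digest(original) == digest(borrowed)
```

Every digital “lend,” “send,” or “borrow” bottoms out in an operation like `copyfile`; duplication, not transfer, is the fundamental operation.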

The other possibility is that they’re hoping the courts will be similarly confused by the analogy. Even though it doesn’t make any sense in a digital context, it’s possible that a judge will be persuaded that allowing the copyright holder to hold “the original” copy of a digital book is more secure or otherwise more legitimate than allowing Google to create “its own” original. HC’s move might be savvy legal strategy even if it doesn’t make any sense from a technical perspective.

Update: Jerry suggests that licensing this database to smaller search engines that lack the resources to scan the books on their own could be a nice revenue stream, which is an excellent point. However, I don’t see how that helps “protect authors’ rights,” which is what most news stories claim the point is. This story, for example, quotes an HC executive complaining that there are “too many digital copies” of the books around, which I think fundamentally misunderstands how search engines work. A good search engine needs to be able to do full-text searches of the book, and to do that, you almost certainly need to have a copy of the full text on your server. If the goal of this project is to protect authors’ rights, I still say they’re barking up the wrong tree.
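To see why, consider a toy full-text index (the two “books” here are invented stand-ins): answering word queries requires a server-side index built from the complete text, and a positional index is rich enough to reconstruct each book outright.

```python
from collections import defaultdict

# Hypothetical mini search engine. To answer full-text queries it must
# hold the complete text of every book, or an index derived from it.
books = {
    "book_a": "the quick brown fox jumps over the lazy dog",
    "book_b": "a lazy afternoon with a good book",
}

# Inverted index: word -> set of (title, position) pairs.
index = defaultdict(set)
for title, text in books.items():
    for position, word in enumerate(text.split()):
        index[word].add((title, position))

def search(word):
    """Return the titles of books containing `word`."""
    return sorted({title for title, _ in index.get(word, set())})

def reconstruct(title):
    """Rebuild a book word-for-word from the index alone."""
    placed = sorted((pos, word) for word, postings in index.items()
                    for t, pos in postings if t == title)
    return " ".join(word for _, word in placed)
```

The `reconstruct` function is the security point: hosting a full-text index is effectively hosting a copy, no matter who did the original scanning.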

I just read last year’s Federal Circuit decision in NTP v. RIM. And if I’m reading it right (I should stress here that I’m not an expert on patent law), NTP’s patents covered a relatively broad class of wireless email services: more or less, wireless email systems in which the user could both view his email on a wireless device and download it to a desktop computer.

That’s simply ridiculous. Email has been around for more than a quarter century. Wireless technology has been around for decades. The idea of combining the two is blindingly obvious. (And it would have been pretty obvious even back in 1991, when the first NTP patent was granted.) Once the technologies for wireless transmission of digital data became cheap enough to be cost-effective for consumer products, it was inevitable that people would use them to exchange email.

In other words, once you’ve got wireless technology and an email network, combining the two is a “shallow” problem. It takes some engineering know-how to do, but it doesn’t require any great flashes of genius. This, I think, is true of virtually all programming tasks. The challenge in software development lies in managing the complexity created when you’re building a program that has thousands of components that must all work together. The best programmers are those who can make a program that’s more than the sum of its parts by organizing them in a particularly clever or elegant manner. But no one component by itself is an “invention.”

Imagine if, in 1920, somebody had tried to patent car radios. At that time, cars and radios were both well-known inventions, but (based on a very cursory Google search, at least) you couldn’t buy a car with a radio. The patent office, I assume, would have thrown the patent application out, ruling that combining two well-known devices in a common-sense way isn’t a new invention. It doesn’t take very much effort to think of the concept, and there’s no reason why someone should be able to extort money from everyone else who stumbles on the idea simply because he happened to think of it first.

Patent law requires that inventions be “non-obvious” precisely in order to prevent that kind of extortion. The idea is that the inventor should have to expend a significant amount of effort developing a new invention before it merits the protection of patent law.

But viewed from this perspective, virtually all software “inventions” are obvious–that is, they involve combining well-known components (albeit a large number of them) in common-sense ways. They only look non-obvious to non-programmers because the non-programmers aren’t familiar with the underlying components. Unfortunately, non-programmers tend to be the ones who make decisions in patent cases.

Competing with Free

by Tim Lee on December 12, 2005

I have to admit I’m surprised and a little saddened to see that Overpeer is being shut down. Overpeer worked for the recording industry to pollute peer-to-peer networks with bogus versions of its songs. Apparently, the peer-to-peer networks have instituted new user-rating systems that have made Overpeer’s tactics increasingly ineffective.

I’m surprised it happened so quickly. It was of course inevitable that the peer-to-peer programs would adapt by offering users ways to filter the bad songs out of the system, but I would think Overpeer could take countermeasures, such as automated positive rankings of the bogus songs. But it seems the peer-to-peer networks won this particular arms race in just three years.

This, I think, is one more data point in favor of the thesis that the record labels need to focus less on the stick of preventing piracy (although they should certainly do some of that) and more on the carrot of providing users with easy-to-use, convenient, and affordable legitimate download options. They’ve made some baby steps in the right direction, but they still mostly sell low-quality audio files encumbered with irritating and restrictive “digital rights management.” Improving the quality of the songs they sell online, and abandoning digital rights management, would be important steps toward enticing customers back into the legal fold.

In the long run, I think they’re going to need to be more radical. Google is probably the best model. Google gives away online services worth billions of dollars and funds them with ads. So here’s one model: imagine if the recording industry set up free, ad-supported Internet radio stations. They could do things that ordinary radio stations could never do. For example, users could be required to fill out a survey giving some basic demographic information (age, zip code, industry). Then the ads on each Internet radio stream could be targeted at that individual user. Advertisers could also buy ads to run with particular playlists, of which there could be thousands. The Britney Spears playlist might have ads targeting teenyboppers, while the oldies playlist would have ads targeted at middle-aged people. This could conceivably generate considerably more revenue than traditional radio stations, since advertisers will pay more for precisely targeted advertising.
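A rough sketch of how such targeting might work (the ads, age cutoffs, and playlist names are all invented for illustration): each advertiser declares which listeners it wants, and the stream picks a matching ad per user.

```python
# Hypothetical ad inventory. Each ad lists optional targeting rules;
# an ad with no rules matches everyone and serves as a fallback.
ads = [
    {"name": "acne cream", "max_age": 19, "playlists": {"britney_spears"}},
    {"name": "retirement fund", "min_age": 45, "playlists": {"oldies"}},
    {"name": "generic soda"},  # untargeted fallback
]

def pick_ad(user_age, playlist):
    """Return the first ad whose targeting rules match this listener."""
    for ad in ads:
        if user_age < ad.get("min_age", 0):
            continue
        if user_age > ad.get("max_age", 200):
            continue
        if "playlists" in ad and playlist not in ad["playlists"]:
            continue
        return ad["name"]

# A teenager on the Britney Spears stream, a 50-year-old on the oldies
# stream, and everyone else each get a different ad.
print(pick_ad(16, "britney_spears"))  # acne cream
print(pick_ad(50, "oldies"))          # retirement fund
print(pick_ad(30, "britney_spears"))  # generic soda
```

The point of the sketch is simply that per-listener targeting is a small matching problem once you have the survey data, which is exactly what broadcast radio can never collect.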

To be clear, I’m not claiming that peer-to-peer infringement is acceptable, or that the RIAA should stop trying to prevent it. But I also think they have to face the fact that, sooner or later, this is a war they’re likely to lose. So they need to be thinking about what they’ll do if that happens. You can, in fact, compete with free (Google has made billions doing just that), but it requires more creativity than the recording industry has shown to date.

BlackBerry Extortion

by Tim Lee on December 12, 2005

I linked to this classic article on the problems of software patents on Saturday. I think this passage is worth highlighting:

Even the giants cannot protect themselves with cross-licensing from companies whose only business is to obtain exclusive rights to patents and then threaten to sue. For example, consider the New York-based Refac Technology Development Corporation, representing the owner of the “natural order recalc” patent. Contrary to its name, Refac does not develop anything except lawsuits–it has no business reason to join a cross-licensing compact. Cadtrak, the owner of the exclusive-or patent, is also a litigation company. Refac is demanding five percent of sales of all major spread-sheet programs. If a future program infringes on twenty such patents–and this is not unlikely, given the complexity of computer programs and the broad applicability of many patents–the combined royalties could exceed 100% of the sales price. (In practice, just a few patents can make a program unprofitable.)

Sound familiar? When this was written, in 1991, Research in Motion was an obscure developer of wireless networking components, and the first BlackBerries were a decade away. NTP hadn’t even been founded yet. Yet this passage perfectly describes the RIM-NTP controversy. NTP doesn’t do anything useful, it’s strictly a lawsuit shop, or “patent trolling” firm. As Fortune describes it:

NTP has this remarkable power because it is nearing victory in its four-year-old patent litigation with Research in Motion, the maker of the BlackBerry. RIM faces the real likelihood of a court-ordered BlackBerry blackout (government devices would be exempted) unless it agrees to pay essentially whatever sum NTP names, which some analysts think will approach ten figures. However the endgame plays out, it vividly illustrates a recurring lightning-rod issue in patent debates–one that pits the information technology industry, which favors reform, against many others, such as the pharmaceutical industry, which don’t. Should plaintiffs like NTP–which does not market a competing product, never has, and never will–be entitled to an automatic injunction shutting down a productive infringer such as RIM? NTP was founded in 1991 by the late inventor Thomas Campana and his patent attorney, Donald Stout, of Arlington, Va. It has no employees and makes no products. Its main assets, Campana’s patents, have spent most of the past decade in Stout’s file drawer. But in 2002 a federal jury found that RIM had infringed five NTP patents that relate to integrating e-mail systems with wireless networks. An appellate court largely agreed in August 2005, and in late October the U.S. Supreme Court declined to issue a stay while it ponders whether to hear the case.

I think it’s impossible to overemphasize the importance of this point: NTP is not a BlackBerry competitor marketing a competing product. Its only “product” is lawsuits against companies that have the misfortune of developing products that happen to resemble those described in NTP’s patents. How exactly does this kind of extortion “promote the progress of science and the useful arts?”

The Fortune article, by the way, is worth reading in full.

Cable Franchise Reform

by Tim Lee on December 12, 2005

I’ve got a new article in last Friday’s Kansas City Business Journal on the need for cable franchise reform. My article focuses on the Missouri system (since I work at a Missouri think tank), but this is an issue that’s applicable across the country. Most states (with Texas being the notable exception) have an outmoded “municipal franchising” regime in which each city government gets to make a Soviet-style five-year plan for cable service in its community. This might have made sense in the 1970s, when each community had only one option for pay TV service, but it makes no sense whatsoever when virtually every consumer has satellite as an option, and the Baby Bells are pouring money into building out fiber networks in order to offer a third alternative. Today, the franchises themselves have become a major barrier to entry.

Texas dealt with this by replacing the local franchising system with a streamlined statewide franchise. Instead of having to negotiate with hundreds of city governments for permission to offer video service, you just file a single application with the state government. This is a big step in the right direction–one that other states should emulate.

I haven’t had time to read enough to know exactly what technology Research in Motion and NTP are fighting about, but I think it’s shocking to see accounts like this of the dispute:

The patent office has issued preliminary rejections of all five NTP patents that a jury in 2002 decided RIM had infringed upon with the BlackBerry device and service. NTP has downplayed those rulings as largely procedural, while RIM has called the rejections proof that the technology behind its popular BlackBerry handhelds and e-mail service is not stolen.

After three years of intense litigation, it still isn’t clear whether RIM’s technology is “stolen” or not. That should send chills down the spine of anyone who values private property and the rule of law. Perhaps the most basic characteristic of any good system of property rights is predictability. An economic actor needs to know where the property lines–physical or intellectual–are so that he can avoid crossing them without permission. Even copyright, which certainly has its share of fuzzy lines, at least has a reliable method of staying on the safe side–when in doubt, don’t make unauthorized copies. That’s not true of software patents, which can cover anything that some patent office bureaucrat decides to declare an “invention.”

So what exactly was RIM supposed to do when it first deployed this technology? Patent searches are expensive, and even with unlimited resources it’s unlikely that you could ever find every patent that could potentially be construed as infringing. Even if you could find every such patent, some patents are so vague that it might not be possible to re-design your product to avoid infringing them.

Is this really how we want to run our software industry? Do we really want to require lawyers to inspect every line of computer code to make sure a programmer didn’t accidentally “invent” something that some other company previously patented?

Software isn’t like most R&D. A large software product consists of hundreds and hundreds of components that perform various tasks. Many of these components could be considered “inventions” by the patent office. Yet it’s unlikely that any of these “inventions” are unique–other programmers, dealing with similar challenges on other projects, are likely to independently “invent” the same techniques. Indeed, programmers consider such “inventions” to be so commonplace that they don’t bother to write them down. That’s why it’s often difficult to find prior art for things that are painfully obvious to any competent programmer.

In effect, patents create a legal minefield for software developers simply trying to go about their business. Because the patent office gives out patents so promiscuously, the developer has no way of predicting when code he writes might run afoul of somebody’s patent. That means that even if he developed every line of code himself, without looking at anyone else’s code, he still can’t be sure that somebody won’t come along and sue him for patent infringement.

If there’s a silver lining to this fiasco, it’s that all the Hill staffers, judges, and patent office bureaucrats who created the current mess will have the opportunity to think long and hard about software patent reform while they’re waiting for their BlackBerries to start working again. Geeks have been bitching about the problems with the software patent regime for more than a decade, but if our complaints won’t get their attention, maybe a judicially-mandated BlackBerry blackout will.

More Felten on CD-DRM

by Tim Lee on December 9, 2005

I don’t want to turn this blog into a Felten-summary service, but I couldn’t resist linking to a pair of fantastic posts over at Freedom to Tinker.

First, Ed Felten explains why we shouldn’t be surprised that MediaMax, like XCP, has security flaws. Security is all about managing risk, and SunnComm, like First4Internet, designed its software with reckless disregard for the risks it might impose on users. So while the particular bugs that have been discovered were almost certainly honest mistakes, they would have been much less harmful had the companies not been so cavalier about ordinary security practices in developing their spyware-like software.

In his second post, Prof. Felten explains that it wasn’t a coincidence that both XCP and MediaMax behaved like spyware. By its nature, DRM software is designed to restrict how users use their computers. Obviously, most users would rather not have that software on their computers at all. So in order to function, the software must deceive the user into installing it, and then must avoid detection and/or resist removal. And what do you know, those are exactly the same design parameters that spyware authors face. Is it any wonder they came up with similar solutions?

Anyway, he explains all of this much better than I can, so go read his first post and his second post.