Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


I’ve got a new article up at Ars on Microsoft v. AT&T:

The Software Freedom Law Center filed an amicus brief urging the court to take this line of reasoning even further. The courts have long held that laws of nature, abstract ideas, and mathematical algorithms are not eligible for patent protection. Software is nothing more than a description of a mathematical algorithm. Therefore, the center asks the courts to declare that software is not eligible for patent protection on that basis.

The brief, written by SFLC counsel Eben Moglen, argues that the Court of Appeals for the Federal Circuit, which has had jurisdiction over all patent cases since it was created in the 1980s, has gone off the rails in recent years. Moglen claims that the Federal Circuit has misread the Supreme Court’s precedents on the patentability of software from the 1970s and early 1980s in a way that removes almost any limit on the scope of patentable subject matter.

Continue reading →

Philips, one of the amici in Microsoft v. AT&T, displays some of the same conceptual confusion that Seth Waxman did in oral arguments:

[Not] every computer program is a component of a patented invention. But a program that has the same technical effect as an electronic hardware component surely is. In particular, there are two factors that illustrate that executable software or firmware code is in fact a component of a patented invention.

First, executable code is distributed in its final form such that it cannot be changed. The software developer designs the software in the form of source code, and then fixes it in an executable form by compiling it. The act of compiling manufactures the executable code. In order to modify the executable code, it must be decompiled, modified, then recompiled–a process similar to using a sample to manufacture new copies of a gear. Although the software developer may allow the installer to customize certain parameters, the installer is not allowed to modify the executable code. For example, Microsoft requires original equipment manufacturers (“OEMs”) to attach a Certificate of Authenticity to each fully assembled computer system. This certificate assures customers that they have acquired “genuine Microsoft Windows software.”
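For what it’s worth, the compile-and-fix cycle the brief describes can be sketched in a few lines of Python, with the interpreter’s `compile` built-in standing in, loosely, for a compiler. The snippet and its names are purely illustrative, not anything from the briefs:

```python
# Source text is "manufactured" into a fixed executable artifact (here, a
# Python code object). To change the program's behavior, you go back to the
# source and compile again; the compiled artifact isn't edited directly.
source_v1 = "result = 2 + 3"
code_v1 = compile(source_v1, "<demo>", "exec")  # fix the source in executable form

ns = {}
exec(code_v1, ns)
print(ns["result"])  # 5

# Modifying the program means editing the source and recompiling.
source_v2 = source_v1.replace("2 + 3", "2 * 3")
code_v2 = compile(source_v2, "<demo>", "exec")
ns = {}
exec(code_v2, ns)
print(ns["result"])  # 6
```

Of course, the analogy only goes so far: the brief’s whole argument turns on treating that compiled artifact as a manufactured “component” rather than as information, which is exactly the conceptual question in dispute.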

Continue reading →

I just read through this morning’s oral arguments in the case of Microsoft v. AT&T. It’s a fascinating case because it beautifully highlights the conceptual confusion that lies at the root of software patents.

The case involves a patent dispute between AT&T and Microsoft, in which AT&T claimed some Microsoft software infringed on an AT&T patent. They’ve sorted things out with regard to domestic infringement, but their dispute is over whether Microsoft is liable for infringement overseas. What happened is that Microsoft shipped a single copy of its software to an overseas distributor, who in turn installed thousands of copies of the software on overseas computers.

Under a 1984 revision to patent law, it’s patent infringement to ship the components of a patented invention overseas for the purpose of evading US patents by having the final assembly of the components occur overseas. AT&T claims that Microsoft is liable because its software was a “component” under the law. Microsoft counters that software cannot be a “component,” because it’s an abstract string of 1s and 0s. The component, they argued, was the individual copy of the software, which was created overseas.

The really illuminating thing about the oral arguments, for my money, is when the counsel for AT&T, Seth Waxman, twists himself into pretzels trying to argue that software is more than just information:

Continue reading →

Over at my other blog, Brian Moore points out this article that looks back at the great off-shoring debate of three years ago:

Then-candidate John Kerry issued a statement denouncing what he called “Benedict Arnold CEOs” who shipped U.S. jobs overseas. The airwaves and cables fairly hummed with angry talk about offshoring.

And what happened next? Nothing.

Nothing, that is, like the massive outflow of jobs that many feared. Employment growth, which had been notably slow after the 2001 recession, picked up in the United States. (We’ve gained more than five million jobs since early 2004.) Recruiters who specialize in information-technology workers say they have more openings than they can fill…

Most economists who’ve looked at the issue rate the long-run economic impact of offshoring as either (1) minimal, or (2) positive. Using overseas workers to save money or boost productivity generally results in better or cheaper services, which in turn leads to more competition, more innovation, and growth.

But you don’t have to take my word for it. Listen to Scott Kirwin, who made a return appearance in December to Wired magazine. Things have changed. He shut down his anti-offshoring Web site in 2006 and has since found himself a better job in the software business. “I don’t view outsourcing as the big threat it was,” he told the magazine. “In the end, America may be stronger for it.”

I wonder if any of the pundits who excoriated Greg Mankiw in 2004 are ready to apologize yet.

I’m sure Leander Kahney of Wired makes a lot of sense when he’s talking about music and copyright protection, but when the topic is schools, he seems completely clueless:

Jobs has also been a longtime advocate of a school voucher system, another ridiculous idea based on the misplaced faith that the mythical free market will fix schools by giving parents choice.

Jobs argues that vouchers will allow parents, the “customers,” to decide where to send their kids to school, and the free market will sort it out. Competition will spur innovation, improve quality and drive bad schools (and bad teachers) out of business. The best schools will thrive.

Continue reading →

Wikinomics

February 21, 2007

I’m reading Wikinomics: How Mass Collaboration Changes Everything. The authors, Don Tapscott and Anthony Williams, have managed an impressive feat: they’ve translated Yochai Benkler’s The Wealth of Networks into marketing copy. Well, OK, that’s a little bit unfair. But their book is definitely unlike anything else I’ve read about peer production. To my knowledge, it’s the first book on the subject that’s pitched toward business leaders rather than academics or techies. Accordingly, it stays at the treetop level and focuses on business implications wherever possible, explaining what peer production is, why businesses should care, and how businesses can use peer production to their advantage.

The authors are extremely enthusiastic about the phenomena they describe. They’re positively effusive in their predictions that mass collaborations will revolutionize industries, empower consumers, and democratize markets. Yet despite the gee-whiz tone, this is not a shallow book. They do a good job of summarizing the thesis of Benkler’s “Coase’s Penguin” without getting bogged down in academic formalisms. They discuss the various controversies surrounding Wikipedia (such as this one), mostly coming down on the pro-Wikipedia side of the arguments. And they introduce the reader to a variety of programs, companies, and concepts–blogs, Linux, Apache, Flickr, Boing Boing, Second Life, etc. Obviously, few of these are new to TLF readers, but for the business people who are the book’s target audience, this is likely to be a welcome introduction to the concepts.

Continue reading →

Cool! I just stumbled across this 4-year-old post at Catallarchy making a point that I’ve mentioned a few times in the past: peer production isn’t an assault on the principles of a free society, but an extension of those principles to aspects of human life that don’t directly involve money. Jonathan Wilde offers the blogosphere (and specifically, Technorati) as an example of the same phenomenon:

One of the things that undoubtedly adds to Technorati’s success is that Sifry knows blogging. He runs a blog himself. He has likely had to spend a late night tinkering with Movable Type. At one time or another, he probably has wanted to know who is reading his blog, or has wanted a way to search other blogs. He has, in the words of Friedrich A. Hayek, “the knowledge of the particular circumstances of time and place”.

What inventions like Technorati do is give structure to the blogosphere. And Technorati is not the only tool that does this. The Truth Laid Bear Blog Ecosystem acts as a filtering mechanism to display the blogs that are most frequently linked by other blogs. Blogrolling can create a useful, easily manipulated directory of blogs to visit regularly. The Trackback feature in Movable Type and Typepad has made it easier to see which other bloggers are commenting on your posts on their own blogs. The comments feature allows interactive discussion to take place without interfering with the media look of a blog. Archiving by category, date, and author allows readers easy ways of browsing the past material. RSS feeds allow delivery of blog content to newsreaders so that readers can organize their favorite blogs in a single window.

Each of these implementations were created by different individuals, such as Sifry, pursuing their own ends. There was no central authority barking out orders or making grand designs. The inception of a solid anatomy to the blogosphere was an entirely peripheral phenomenon.

This is an excellent point, and one that Jim Harper and I are hoping to expand upon in the near future: a lot of the intellectual tools that libertarians use to analyze markets apply equally well to other, non-monetary forms of decentralized coordination. It’s a shame that some libertarians see open source software, Wikipedia, and other peer-produced wealth as a threat to the free market rather than a natural complement.

Michael Geist has an excellent BBC article on a recent report purportedly documenting inadequate copyright protections outside of the United States. But as Geist explains, in many cases it’s US law that’s out of touch:

Countries that have preserved their public domain by maintaining their term of copyright protection at the international treaty standard of life of the author plus an additional fifty years are criticised for not matching the US extension to life plus 70 years.

There are literally hundreds of similar examples, as countries from Europe, Asia, Africa, North and South America are criticised for not adopting the DMCA, not extending the term of copyright, not throwing enough people in jail, or creating too many exceptions to support education and other societal goals.

In fact, the majority of the world’s population finds itself on the list, with 23 of the world’s 30 most populous countries targeted for criticism (the exceptions are the UK, Germany, Ethiopia, Iran, France, Congo, and Myanmar).

Countries singled out for criticism should not be deceived into thinking that their laws are failing to meet an international standard, no matter what US lobby groups say.

Rather, those countries should know that their approach – and the criticism that it inevitably brings from the US – places them in very good company.

The really funny thing about this (aside from us being on a list with Iran, the Congo, and Myanmar) is that on multiple occasions I’ve heard it argued that we needed to pass the DMCA and the CTEA to comply with international treaty obligations, as though we’re somehow behind the rest of the world in expanding copyright protections. But of course, those “treaty obligations” were largely imposed at the behest of the American copyright lobby.

Matt Yglesias channels Adam Thierer’s point about the XM-Sirius merger:

If The New York Times says a Sirius-XM merger is “sure to raise antitrust issues” then I’m happy to believe them. I have a hard time seeing a serious issue here, however. As is typical in these cases, the relevant thing is the definition of the market. If you think there’s a discrete “satellite radio” market then, yes, a combined Sirius-XM entity would clearly have monopoly power in that market. Realistically, though, the product both Sirius and XM are selling–audio broadcasts–is one for which there’s a great deal of competition. Cable and satellite television providers are capable of delivering similar content, though in not as convenient-to-use a manner. People can listen to CDs, buy internet music subscription services, subscribe to “podcasts,” and, of course, satellite radio needs to compete with its freely available terrestrial radio counterpart.

After all, at the moment I–like most Americans–don’t have a satellite radio subscription even though I’m pretty gadget inclined. The logic of the business is that the merged entity needs to grow, which is to say continue trying to offer a deal that people find appealing compared to our many other entertainment options, not our satellite radio options.

Another, purely pragmatic consideration that makes me think this merger will be a good thing is that satellite radio is currently locked in a couple of high-profile lobbying battles in which they are, as far as I can see, on the side of the angels. They’re battling the RIAA over the “analog hole.” And they’re also fighting a protectionist proposal by terrestrial broadcasters to ban satellite radio from offering local programming. The merged company is likely to have the financial resources to retain a higher caliber of lobbying talent which will, I hope, allow them to prevail in both of those fights. Obviously, I’d prefer if companies could just focus on their business and not retain lobbyists at all, but if the RIAA and broadcasters are going to push blatantly anti-consumer legislation, I’d at least like to see the other side have the resources to fight back.

Invention vs. Innovation

February 20, 2007

Mike Masnick draws a distinction I hadn’t given much thought to before:

Over at Computerworld, Mike Elgan has written up a great piece highlighting how the iPhone is a fantastic piece of innovation that really has very little new in it. As we pointed out when we questioned Apple’s claim to 200 patents around the iPhone, the multi-touch interface isn’t new and has been publicly demonstrated numerous times. Elgan points to that video as well as other examples of how almost all of the “new” things in the iPhone have actually been around for quite some time–but that what’s special about the iPhone is that it will really be the first time that such features and tools are available to the general public, and how it’s then likely to move those same features from research labs into all sorts of common computing applications. That’s great for everyone–but it’s about innovation, not invention, and it seems like the market can do a great job rewarding such innovation without resorting to patent-based monopolies.

If you think about it, Apple’s strength really doesn’t come from inventing things. I’ve been a Mac guy for pretty much my whole life, and so during college I was one of those people who’d watch the Steve Jobs keynote every year. Almost every time, my techie officemates would see a new Apple product and say “hey, there’s nothing new there. Linux has been able to do that for 6 months.”

Continue reading →