Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Non-DRM DRM?

by Tim Lee on June 6, 2007

Mike Masnick wonders if Lala is engaging in Newspeak when it describes its tracks as “DRM-free.” Something certainly smells fishy:

We noted the oddity of supposedly DRM-free files only being able to be loaded onto iPods, since that suggested there clearly was some form of restriction on the files. However, it’s becoming clear that there are certainly some types of DRM being used. In Bob Lefsetz’ latest blog post, he notes that each file has a watermark that identifies its owner, and if you’re not the owner, you won’t be able to play that song. In other words, the supposedly DRM-free tracks… have DRM. It’s just a slightly different type of DRM.

I don’t think this is necessarily true. It’s possible, for example, that it’s just a watermark, in which case the files wouldn’t play in other users’ Lala players, but they would play in any other music player. Of course, that would be kind of a stupid business strategy, because it would put your own software at a disadvantage relative to everyone else’s. But maybe the labels, who are not exactly known for their business savvy, were convinced that this would be an effective piracy deterrent.

I haven’t had time to look into this in detail, and so far I haven’t been able to find a clear description of how the watermarking system would work. The Lefsetz post is rather vague. Does anyone know whether Lala has stated clearly what format the songs will be in and how the watermarking will work?
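In the meantime, here’s a minimal sketch of the distinction at issue. Everything below is hypothetical (Lala hasn’t published a spec), but it shows why a forensic watermark, on its own, isn’t playback-blocking DRM: the restriction lives entirely in the vendor’s own player, not in the file.

```python
# Hypothetical sketch -- not Lala's actual (unpublished) implementation.
# A forensic watermark tags each file with its buyer; only the vendor's
# own player consults the tag. Any other player just sees audio.

MARKER = b"\x00OWNER:"

def embed_watermark(audio: bytes, owner_id: str) -> bytes:
    """Tag the track with its buyer's ID. A real system would hide the
    mark inside the audio signal so it survives transcoding."""
    return audio + MARKER + owner_id.encode()

def read_watermark(audio: bytes) -> str | None:
    """Recover the owner ID, if any."""
    pos = audio.rfind(MARKER)
    return audio[pos + len(MARKER):].decode() if pos != -1 else None

def lala_player(audio: bytes, logged_in_user: str) -> None:
    """The vendor's player *could* refuse tracks owned by someone else..."""
    if read_watermark(audio) != logged_in_user:
        raise PermissionError("this track belongs to another user")
    print("playing...")

def generic_player(audio: bytes) -> None:
    """...but a third-party player ignores the mark entirely. The
    watermark identifies the buyer; it doesn't prevent playback."""
    print("playing...")

track = embed_watermark(b"<mp3 data>", "alice")
generic_player(track)        # plays anywhere
lala_player(track, "alice")  # plays for its owner
```

If that’s all Lala is doing, Masnick’s charge is too strong; if the files also refuse to load or play outside Lala’s software, it isn’t.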

ATI and Crippleware

by Tim Lee on June 5, 2007

Danny O’Brien points out that ATI is releasing software “upgrades” that reduce the functionality of its hardware:

The latest update to ATI’s Catalyst drivers now offers “improved TV quality and Broadcast Flag support which enables full US terrestrial DTV support”.

It’s a little unclear from that README whether the new support is for a new hardware revision of ATI’s Theater 650 digital TV tuner, or simply a new software implementation of the digital TV copy control for current owners of the Theater 650. However you look at it, though, “broadcast flag support” is hardly an upgrade.

Prior to such support, you could be confident that you could use these cards for their given purpose: to record whatever you want off the air, whenever you want, in whatever format you want. Now, ATI, recently purchased by AMD, is announcing support for equipment’s right to take that power away from you, and substitute a crippled subset of their tuner’s capabilities whenever a broadcaster commands it.

But this isn’t just an unfeature: it’s an unnecessary unfeature. You can have full terrestrial HD support without the Broadcast Flag – mainly because thousands of concerned citizens fought hard for that right. AMD must surely have noticed that the Broadcast Flag proposal has been dead for over two years, ever since the courts threw it out as FCC overreach. Thanks in part to your letters and calls, no politician has managed to sneak it into law since.

It doesn’t seem like reducing the functionality of your products is a very good business strategy.
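For the curious, the flag itself is tiny. In ATSC broadcasts it was to be signaled by a redistribution control descriptor carried alongside the program data (tag 0xAA, if I’m reading the ATSC tables right), and “supporting” it just means adding a branch that does less than the hardware can. Here’s a rough sketch; this is my own hypothetical illustration, not ATI’s driver code:

```python
# Illustrative sketch of flag-honoring tuner logic (hypothetical, not ATI's).
# The redistribution control descriptor's mere presence IS the flag;
# it carries no payload worth reading.

RC_DESCRIPTOR_TAG = 0xAA  # per ATSC A/65, as I understand it

def broadcast_flag_set(descriptors: list[tuple[int, bytes]]) -> bool:
    """Scan a program's (tag, payload) descriptors for the flag."""
    return any(tag == RC_DESCRIPTOR_TAG for tag, _ in descriptors)

def compliant_path(stream: bytes) -> bytes:
    """Stand-in for whatever the robustness rules would demand:
    encrypted output, down-resolution, or no recording at all."""
    raise PermissionError("broadcast flag set: unrestricted recording disabled")

def record(descriptors: list[tuple[int, bytes]], stream: bytes) -> bytes:
    """Same silicon, less functionality: the 'upgrade' is the if-branch."""
    if broadcast_flag_set(descriptors):
        return compliant_path(stream)
    return stream  # record whatever you want, in whatever format

clear = record([], b"<mpeg2 data>")            # no flag: full recording
# record([(RC_DESCRIPTOR_TAG, b"")], b"...")   # flag present: raises
```

Nothing in that diff makes the tuner better. O’Brien’s point is that the only new “feature” is the branch that says no.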

Spectrum Collusion?

by Tim Lee on June 5, 2007

The Washington Post reports on an effort by Public Knowledge, Google, and others to change several parameters of the upcoming 700 MHz auction:

The groups called on the FCC to require that the winners of a chunk of the spectrum allow “open access” — sell access to competitors on a wholesale basis. The open access requirement would allow a nearly unlimited number of competitors to offer wireless broadband services, said Gigi Sohn, president of Public Knowledge.

The groups also want the FCC to conduct anonymous auctions — where bidders wouldn’t know whom they’re bidding against. In the past, large wireless carriers have used transparent auctions to drive up prices on chunks of spectrum being bid on by small competitors, said Gregory Rose, an econometrician and game theorist working with open access proponents. In many cases, the larger carriers then dropped their bids after the smaller carriers were eliminated, he said.

Some carriers have also engaged in retaliatory bidding against other companies that bid on spectrum they were interested in, he said. The retaliating bidder will bid on spectrum the second company is interested in, “as a signal to say, ‘back off on the license I want, or I will drive the price of the license you want through the roof,'” Rose said.

As I wrote last week, I’m skeptical about the FCC telling the auction winners what to do with the spectrum once they’ve purchased it. But requiring that the bids be anonymous strikes me as a sensible idea. As I understand it (and I haven’t looked into the relevant research in any detail), in a less-than-liquid market like this, the details of the auction rules matter quite a bit. We should all be able to support the idea that the auction rules should be carefully designed to minimize the potential for collusive behavior by bidders.
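Rose’s retaliation story is easy to see in a toy model. The simulation below is my own illustration, not his econometrics: a small bidder wants license A and also contests license B to keep the big bidder honest, while the big bidder wants B and, when it can see who is bidding, punishes the small bidder by running up the price of A until it backs off. Make the bids anonymous and the targeted punishment becomes impossible:

```python
def run_auction(anonymous: bool, rounds: int = 15) -> dict[str, int]:
    """Toy ascending auction over two licenses, A and B."""
    price = {"A": 0, "B": 0}
    small_backed_off = False

    for _ in range(rounds):
        # Small bidder raises A (its real target) and, unless it has
        # been scared off, also contests B.
        price["A"] += 1
        small_bid_on_B = not small_backed_off
        if small_bid_on_B:
            price["B"] += 1

        # Big bidder re-takes B every round.
        price["B"] += 1

        # Retaliation requires knowing WHO is contesting B.
        if small_bid_on_B and not anonymous:
            price["A"] += 5          # punish small on the license it wants
            small_backed_off = True  # message received: stop contesting B

    return price

print("transparent:", run_auction(anonymous=False))  # A expensive, B cheap
print("anonymous:  ", run_auction(anonymous=True))   # A cheap, B contested
```

In the transparent run, the big bidder walks away with license B at roughly half the anonymous-auction price while the small bidder overpays for A, which is exactly the pattern Rose describes.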

I’ve gotten too busy to do my weekly software patent series, but if I were still doing it, this would be a great installment. Mike Masnick says this is the patent in question:

A database search system that retrieves multimedia information in a flexible, user friendly system. The search system uses a multimedia database consisting of text, picture, audio and animated data. That database is searched through multiple graphical and textual entry paths. Those entry paths include an idea search, a title finder search, a topic tree search, a picture explorer search, a history timeline search, a world atlas search, a researcher’s assistant search, and a feature articles search.

A search engine that lets you search using multiple types of media and multiple criteria? It’s no wonder Britannica is legendary for its innovative products.
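In case the obviousness isn’t obvious enough, here’s roughly what “multiple graphical and textual entry paths” into one multimedia database boils down to in code. This is a toy of my own, obviously not Britannica’s implementation:

```python
# Toy illustration of the patent's "multiple entry paths": a dispatch
# table of search strategies over one multimedia "database".

database = [
    {"title": "Apollo 11", "topic": "space", "year": 1969,
     "text": "First crewed Moon landing.", "media": ["photo", "audio"]},
    {"title": "Printing press", "topic": "technology", "year": 1440,
     "text": "Movable type spreads across Europe.", "media": ["picture"]},
]

entry_paths = {
    "title":    lambda q: [r for r in database if q.lower() in r["title"].lower()],
    "topic":    lambda q: [r for r in database if r["topic"] == q],
    "idea":     lambda q: [r for r in database if q.lower() in r["text"].lower()],
    "timeline": lambda q: [r for r in database if r["year"] <= int(q)],
    "picture":  lambda q: [r for r in database if q in r["media"]],
}

def search(path: str, query: str) -> list[dict]:
    """Pick an 'entry path' and run its query against the database."""
    return entry_paths[path](query)

print(search("topic", "space"))    # topic tree search
print(search("timeline", "1500"))  # history timeline search
```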

Here’s one other section of Braden’s post that I found problematic:

You don’t have to be a Nobel prize-winning economist to understand that an emphasis on building an ICT sector around inexpensive labor will drive wages down. In a global economy, a worker in a lesser developed country could live on just dollars a day. A race to the bottom is the kind of race the EU will wish it hadn’t entered, let alone run.

The FLOSS study authors say that developers will be so inexpensive that even small and medium-sized companies will hire them to work in-house, which they say will help local employment. However, in a globalized race to the bottom, it’s not a stretch to say that the EU would lose to even cheaper programmers in China, India and the former Soviet bloc. In the U.S., for example, the cost savings of IT offshoring in 2004 reached $7.0 billion, according to a study by ITAA — a 36.2% savings rate…

By increasing demand for FLOSS through preferences and mandates, the EU will find that in a “Flat World”, lower cost developers from other countries would rush to fill that demand. The result is more likely to be an increase in offshoring to China and India—not job creation in the EU.

I think it’s debatable whether free software will drive down programming wages, and whether most of those jobs could be outsourced to China in any event. But let’s assume he’s right on both counts. When he writes that such a race to the bottom “is the kind of race the EU will wish it hadn’t entered, let alone run,” he seems to be suggesting that a “race to the bottom” in wages would be a bad thing. It seems to me that this is at odds with basic economics: falling prices for software development are a cost saving for everyone who buys software, even if they’re bad news for incumbent producers.

Continue reading →

I thought Braden made some good points in his post yesterday about free software in Europe, but this argument struck me as a bit wide of the mark:

OTJ training has always been a great way to learn programming skills. But is working on FLOSS projects the only way to learn how to code? Buried in a footnote (#48), the report says that “perhaps” OTJ training works for proprietary software too. However, it is a different kind of experience, according to the authors. They say FLOSSers can start in their teens and can learn at home, whereas developers of proprietary software have to learn on the job or start their own firms.

Does FLOSS actually promise new and better ways to learn how to write software? Not really. Aspiring developers have always been able to learn programming skills pre-employment and on their own time. And there’s always been a network of online resources and training material for developers of proprietary software (think: Novell and Microsoft developer certifications).

It’s obviously possible to pick up some programming skills using online resources and proprietary developer certifications, but there are some important differences that make free software decisively better when it comes to developing job skills. In general, practice is more useful the closer it is to the real thing. When you contribute to an open source project, you’re making real changes to real code that are being used by real people. You learn about software development in its full complexity and nuance. There’s just no way that an MSDN training manual can compete with that, no matter how well-written it might be.

Second, your contributions to a free software project can often lead directly to paid opportunities helping clients integrate the software you’re working on into their environments and customize it for their needs. I rather doubt anyone gets consulting work on the strength of the example programs they wrote for an MSDN certification test.

Finally, your contributions to a free software project can allow you to stand out in a way that an MSDN certification cannot. By necessity, a certification program tests a fairly fixed set of skills: you can demonstrate basic competence, but it’s difficult to distinguish yourself from the pack. A body of free software contributions, in contrast, gives an employer a much richer basis for evaluating your abilities. A certification shows you studied for a test; a free software contribution shows you can produce working code that real users depend on. I have trouble imagining a competent IT manager being more impressed by an MSDN certification than by a significant contribution to a free software project.

You should be sure to check out Tim Wu’s smart comments on my Wireless Carterfone article.

I’m currently reading Virginia Postrel’s excellent The Future and Its Enemies. Chapter four gives a lucid exposition of the concept of tacit knowledge, and it occurs to me that its insights bear directly on patent policy:

As Polanyi suggested, much of our most important knowledge is tacit—difficult to articulate, even to ourselves. Contrary to Sale’s imaginings, such knowledge is expensive to share, assuming it can be transferred at all. It is “sticky,” in management scholar Eric von Hippel’s term: “costly to acquire, transfer, and use in a new locus.” Von Hippel notes, for instance, the difficulty of duplicating a scientific apparatus. Subtle information about the lab environment, or procedures that people at the original site take for granted, can make the difference between success and failure. “It’s very difficult to make a carbon copy,” says a researcher quoted by von Hippel. “You can make a near one, but if it turns out that what’s critical is the way he glued his transducers, and he forgets to tell you that the technician always puts a copy of Physical Review on top of them for weight, well, it could make all the difference.”

As a result of this stickiness, tacit knowledge often travels only through apprenticeship, the trial-and-error process of learning from a master. (A form of “apprenticeship” is essentially how as children we learn such complex basic skills as speech.) Writing in the 1950s, Polanyi argued that the art of scientific research, as opposed to the scientific information that can be taught in a classroom, had still not passed much beyond the European centers where it had originated centuries earlier: “without the opportunity offered to young scientists to serve an apprenticeship in Europe, and without the migration of European scientists to new countries, research centres overseas could hardly have made much headway.”

The application to patent debates should be pretty obvious. Some patent proponents blithely assume that you can copy an invention as easily as you can copy a song or a piece of paper. It’s pretty common, for example, to see the argument that without patent protection, a small software company couldn’t negotiate on an equal footing with a large one, because the large one would simply listen to the smaller company’s pitch, take careful notes, and then steal the company’s idea without paying a penny.

The problem with this story is that it completely ignores the role of tacit knowledge in duplicating technology. If it’s difficult to duplicate a scientific experiment when the technical details of that experiment are publicly available, how much more difficult is it to duplicate a new technology based on the fragmentary information you get from a technology demo? A company seeking to duplicate a competitor’s technology will typically be forced to go through virtually the same trial-and-error process the original company went through. That means that in many cases, licensing the smaller company’s technology will be faster and cheaper than trying to re-invent the wheel.

Obviously, the force of this argument will vary with the degree to which a product embodies tacit knowledge. Pharmaceutical products, for example, are probably relatively easy to copy because they can be fully characterized by their chemical formulas. Software seems to be at the opposite extreme—especially since copyright law prevents the verbatim copying of source code. There’s a tremendous amount of tacit knowledge embedded in any software product of non-trivial complexity, so the idea that software companies can duplicate their competitors’ products quickly and easily is unrealistic.

DeLong leaves PFF

by Tim Lee on June 1, 2007

I’ve been thinking about the best way to respond to the news of Jim DeLong’s apparent semi-retirement from the public policy world. For the last decade, DeLong has been the most prolific, and perhaps the most influential, libertarian thinker on patent and copyright issues. He has probably done more than any other person in the libertarian think tank world to promote the view that patents and copyrights are no different from any other kind of property right, and that libertarians should therefore almost always come down in favor of broadening the scope and duration of copyright and patent rights, stiffening the penalties for violating those laws, and enacting new regulations of third parties to make it easier for copyright holders to enforce their rights.

I’ve criticized DeLong’s writings repeatedly on this blog, so I won’t re-hash those arguments. But I am disappointed that (with one exception I can recall) DeLong never engaged any of my criticisms. Perhaps he was offended by the derisive tone some of my posts took. Maybe my work just never made it onto his radar screen. In any event, I think it’s sad that a significant opportunity for substantive engagement on these issues was missed. DeLong often seemed to be arguing with caricatures of his ideological opponents, ignoring the more nuanced substantive work he could have found if he’d looked for it. I was particularly disappointed that he never took the time to offer a substantive critique of my DMCA paper, one work of mine that I know he did read. I doubt he would have been able to change my mind (or vice versa), but I bet I would have learned a few things from his criticisms.

One place I do have to give DeLong credit is his amicus brief (with TLF contributor Solveig Singleton) in the Teleflex case. This was probably the most important patent case in the last quarter-century, and in my view he came down on the right side of it, recognizing that the patent system becomes an obstacle to progress if patents are granted too liberally.

In the latest installment of TechKnowledge, I critique Tim Wu’s recent article on “wireless Carterfone”:

True, a government-designed standard is not impossible, but “not impossible” is a long way from a good idea. Indeed, Wu seems to be implicitly conceding that it is far from the “simple requirement” he touts in his Forbes article. He seems to be proposing that the FCC dictate to wireless carriers what network services they must offer, who may access them, on what terms, and at what price.

History suggests that such efforts often end badly. Even when a government-created monopoly situation makes public utility regulation unavoidable, as in the Carterfone case, it can take a decade or longer for the dust to settle. The Clinton-era FCC attempted to create competition in the telephone and DSL markets by requiring the Baby Bells to “unbundle” their local phone lines and lease them at FCC-determined prices to competitors. The Bells ultimately killed the plan using a combination of lobbying, litigation, and foot-dragging. But for the nine years between the passage of the Telecom Act in 1996 and the Supreme Court’s Brand X decision in 2005, telecommunications firms spent tens of millions of dollars on lawyers and lobbyists to seek advantage in the regulatory arena.

Continue reading →