Articles by Tim Lee

Timothy B. Lee (Contributor, 2004-2009) is an adjunct scholar at the Cato Institute. He is currently a PhD student and a member of the Center for Information Technology Policy at Princeton University. He contributes regularly to a variety of online publications, including Ars Technica, Techdirt, Cato @ Liberty, and The Angry Blog. He has been a Mac bigot since 1984, a Unix, vi, and Perl bigot since 1998, and a sworn enemy of HTML-formatted email for as long as certain companies have thought that was a good idea. You can reach him by email at leex1008@umn.edu.


Movies and Granularity

by Tim Lee on May 17, 2006

According to its website, Star Wreck took 7 years to create and involved more than 300 volunteers. If we assume that each volunteer contributed time worth $5,000 per year to the project, the labor cost of the project comes to about $10 million. Alternatively, if we only count the efforts of the ten principal crew members, and assume they each contributed $25,000 worth of their time each year–far less than you’d have to pay a professional cameraman, to say nothing of a good director or producer–the movie still required about $2 million in labor costs alone.
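(For anyone who wants to play with the assumptions, here is a minimal sketch of that back-of-envelope arithmetic in Python. The per-person dollar figures are my own illustrative assumptions, not numbers from the Star Wreck site.)

    # Back-of-envelope labor-cost estimate for a volunteer film project.
    # The per-person valuations are illustrative assumptions, not real payroll data.
    def labor_cost(people, value_per_person_per_year, years):
        """Total imputed labor cost of a volunteer project, in dollars."""
        return people * value_per_person_per_year * years

    # Scenario 1: all ~300 volunteers, valued at $5,000 per person per year, for 7 years.
    all_volunteers = labor_cost(300, 5000, 7)   # 10,500,000 -- "about $10 million"

    # Scenario 2: only the ten principal crew members, valued at $25,000 per year.
    core_crew = labor_cost(10, 25000, 7)        # 1,750,000 -- "about $2 million"

    print(f"All volunteers: ${all_volunteers:,}")
    print(f"Core crew only: ${core_crew:,}")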

Now, obviously, that’s not a problem if the participants enjoyed the project enough to do it as volunteers. Plenty of good software is produced that way. But it seems unlikely that you’d be able to get anywhere near enough volunteers to make the number and quality of movies that are on the market today. Moreover, we have to keep in mind that science fiction fans are (and I can say this because I am one) weird. A little bit obsessive, perhaps. These are people who sign up for Klingon language camps and go to conventions dressed up in funny costumes. There’s nothing wrong with that, but you’re not likely to inspire that kind of devotion in the creation of a typical romantic comedy.

Moreover, they acknowledge that the quality of their acting is sub-par:

The movie is being produced by a small team led by Samuli Torssonen, supported by a large group of both amateur and professional volunteers interested in the project. As the production team is self-taught, In the Pirkinning could be called an amateur movie – but we are aiming at professional standards in all aspects of the production (except acting, even though we do our best it is more or less impossible to find a cast of experienced actors for a project of this size and budget).

As Yochai Benkler argues in Coase’s Penguin, part of what makes peer production viable is granularity: the ability of individual volunteers to make small contributions to a much larger project. It seems to me that in most cases, a movie is insufficiently granular for peer production to be a viable option. You’ve gotta have at least a dozen people–the director, the producer, the principal actors, and the technical leads–working more or less full time over several months (or part time over several years). The number of people who are willing to devote seven years of their life to producing a single movie is probably rather small. In fact, it may be limited to the sort of people who can tell you, in great detail, the differences between a Ferengi and a Cardassian. Again, there’s nothing wrong with that (I can quote Ferengi “Rules of Acquisition” myself), but I doubt you can build an entire film industry on that kind of devotion.

The last major topic of Solveig Singleton’s defense of the DMCA that I want to address is the interoperability issue. Here’s her take on it:

The main obstacle to more interoperable DRM or reverse engineering is not the DMCA, it is a business problem. DRM has an advantage in security and speed to market when only one company need be involved in its development. A more open process is slow and the result is not usually cutting-edge. There are endless negotiations, a host of issues with compatibility with legacy equipment, and serious trust issues. CSS protection for DVD’s was jeopardized partly because a licensee, Xing, neglected to encrypt a key (though the system had other weaknesses as well). The more players, the more risk. Let us examine the example of Apple and the iPod/iTunes model more closely. Apple has the idea of selling music at a low price and making money on the hardware, just as radio broadcasters once sponsored programming in the hope of selling radios. To offer a library of music, Apple needs to convince music rights-holders that the files aren’t going to show up free everywhere. This they are most likely to be able to do if they control the security technology. Furthermore, would Apple bother if it anticipates other companies coming in and cutting into the profits they want on the hardware? Unlikely; this was the reason that radio broadcasters were moved to advertising to fund radio, but this won’t work in a digital world if advertising can easily be stripped out. Finally, what gains do we get if, say, Real Networks hacks the iPod, Apple puts out a fix, Real Networks hacks it again, and Apple fixes it again? There’s nothing there for consumers or entrepreneurs but quicksand. We have not Schumpeter, describing the process by which the candle is replaced by the light bulb, but Hobbes’ war of all against all. Meanwhile, there is plenty of competition in the tunes market without breaking Apple’s code. There are many different kinds and levels of competition.

Now, what’s most interesting about this passage is that it concedes one of the central contentions of my paper: that the DMCA doesn’t merely assist in the enforcement of copyrights, but actually creates a new kind of quasi-property right in technology platforms. In my paper, I spend a fair amount of time talking about the IBM PC example as a model for how interoperability and reverse engineering worked before the enactment of the DMCA. IBM would have loved to prevent competitors from building PC clones, but copyright law didn’t give them any way to do that.


Two Great Papers

by Tim Lee on May 16, 2006

I spent part of my weekend attacking the big stack of reading material I’ve been accumulating recently, and I wanted to (rather belatedly) plug two excellent papers.

The first is Yochai Benkler’s “Coase’s Penguin”, which is a few years old now but every bit as relevant today as it was when it came out in 2002. It offers a framework for analyzing peer-production processes (such as the development of open source software) that puts peer production on par with the firm and the market as methods for organizing cooperative ventures.

His central insight is that many types of information-processing tasks are characterized by enormous variation in the motivation, knowledge, and talents of potential producers, and that this information often cannot be efficiently standardized in the way needed to transmit it via prices (in markets) or hierarchical decision-making (in firms). Because firms and markets segregate people and resources into many private fiefdoms, it’s unlikely that the person best suited to perform a particular task will happen to be at the firm that needs the task performed. And if the tasks are granular enough (say, fixing a bug in the Linux kernel), the search and transaction costs of finding the right person and negotiating a contract with him or her may be too large to justify the effort. In contrast, because peer production allows people to self-select into the projects that interest them most, people can often be allocated to projects at very low cost.

I found this argument particularly compelling because it fits with my experience trying to find people to do work for the Show-Me Institute. It’s very difficult to judge from a resume whether a particular individual would make a good intern or research assistant, or would write a good paper. One of the best ways to improve your chances of getting a job at a think tank or a public policy magazine like Reason is to start a blog. In the first place, a blog allows a potential employer to peruse real-world examples of your work and judge whether you’re a good writer without asking you to produce writing samples specifically for him or her. In the second place, the blogosphere has some built-in mechanisms that help potential employers sift through the candidates: particularly talented bloggers catch the attention of other bloggers, who add them to their blogrolls. Finally (and this is the part where Benkler’s argument is particularly relevant), starting a blog might help you find opportunities you wouldn’t otherwise have known about. If you write a blog about foreign policy, for example, there might be people out there looking for foreign policy writers whom you wouldn’t have been able to find via other means.

So I highly recommend Benkler’s paper if you haven’t read it yet.

The other paper, ironically enough, is largely a critique of one of Benkler’s other pet issues, instituting a spectrum commons. It’s by my friend and co-blogger Jerry Brito. It’s titled The Spectrum Commons in Theory and Practice.

This isn’t an issue I’d really looked at in any detail before I read Jerry’s paper. I’d read various people claim that you could manage spectrum as a commons, and I was always suspicious of the argument. Jerry’s paper confirmed my suspicions. He argues that in a world of scarcity, a commons presupposes a manager whose job it is to set ground rules that prevent over-grazing of the commons. A spectrum commons can theoretically be managed either by a private company or by the government, but in practice, spectrum commons end up being managed by the government. And as Jerry shows in an extended example on the history of the FCC’s decision-making concerning the 3650 MHz frequency band, the FCC’s first experiment in creating a spectrum commons was far from ideal. First, the proceedings generated a lot of rent-seeking by those lobbying for competing uses of the spectrum. And second, it’s not clear that the rules the FCC ultimately imposed will lead to efficient utilization of the spectrum.

Jerry concludes that a spectrum commons is not a “third way” between command-and-control and markets. A commons still requires a regulator; the question is simply who regulates: the government or a private actor. Jerry makes a persuasive argument that creating property rights in spectrum and allowing private actors to put it to its highest-valued uses is likely to lead to the most efficient utilization.

More King Kong

by Tim Lee on May 15, 2006

Mike Masnick offers another answer to the King Kong question:

Last month, at the CATO Institute conference on copyrights, someone from NBC Universal asked both Professor David Levine and me how NBC could keep making $200 million movies like King Kong without super strong copyright regulations. We each gave our answers that didn’t satisfy some. However, as I noted in the recap to the event, the guy from NBC Universal was asking the wrong question. It’s like going back to the early days of the PC and asking how IBM would keep making mainframes. The point is that $200 million movies may mostly be a thing of the past. The near immediate response from NBC Universal and other stronger copyright supporters is that this is a “loss” to society–since we want these movies. However, that shows a misunderstanding of the answer. No one is saying to make worse movies–but to recognize that it should no longer cost so much to make a movie. The same economics professor, David Levine, who was asked the question is now highlighting exactly this point on his blog. Last week there was a lot of publicity around a group of Finns who created a Star Trek spoof and are trying to help others make and promote inexpensive, high quality movies as well. Levine notes that the quality of the spoof movie is astounding–not all that far off from what you’d expect from a huge blockbuster sci-fi picture, but was done with almost no budget at all. Given the advances in technology, the quality is only going to improve. So, again, it would appear that a big part of the answer to the $200 million movie question is simply that anyone spending $200 million on a movie these days is doing an awful job containing costs.

This is a good argument, but I still don’t entirely buy it. It certainly gives us reason to think that the typical cost of making a movie in the future won’t be $200 million, as better technology cuts the cost of the expensive special effects and film-based recording technologies that drive up the price of Hollywood productions. That will bring down the cost of blockbuster movies somewhat.

But I don’t think it gets you down to the point where you could fund movies entirely through indirect means like advertising and a first-mover advantage. I’m no expert on the economics of the film industry, but it seems to me that technology is not the only cost driver for the typical Hollywood film, and probably not even the most important one. Rather, I have the impression (please correct me if I’m wrong) that much of the cost is driven by the immense amount of labor required to create a top-quality film: lighting crews, camera crews, makeup crews, set crews, sound crews, post-production crews, and on and on. Another major cost is creating the environments in which the actors perform. Either you have to move your cast and crew to a new location, which involves a lot of travel costs, or you have to construct sets, which requires considerable materials and labor.

Even costs that are primarily technological have a major labor component. For example, a lot of special effects require people to build computer models, design textures for the models, set up and run complex motion-capture software, and tweak and polish the finished product to make sure it looks realistic. Even if the cost of the hardware and software drops to zero, salaries for the artists and technicians who run all that equipment are likely to be significant. Video game companies, similarly, have become dominated less and less by programmers and more and more by creative types who take the models built by the programmers and bring them to life with artwork and stories.

In short, movies in the future may not cost $200 million, but it seems likely to me that there will be movies for which (say) a $50 million budget is entirely justified. I remain skeptical that studios could recoup that kind of investment absent some help from copyright law. And I’m not willing to shrug my shoulders and accept that those kinds of movies simply wouldn’t be made any more.

So I remain pro-copyright. The world would be a much simpler place if we didn’t have to bother with intellectual property rights, but unfortunately, the world seems to be a complicated place.

My co-blogger Solveig Singleton has a new paper out about the DMCA. I’ll probably have some criticism of it in a future post, but I wanted to start off with a point I find pretty persuasive:

In the world of some libertarian DMCA critics (including a slimmer version of myself, some years back), legal barriers enforced in lawsuits against myriad copying individuals are a mainstay. More vigorous enforcement is sometimes presented as an alternative to the DMCA. With respect to my peers, this is non-responsive. The problem that the DMCA is intended to solve is in large part the limited usefulness of ordinary enforcement mechanisms; it does not solve the problem to invoke them… The Internet lacks a dispute resolution mechanism appropriate to quickly resolve millions of small-value disputes, especially where the parties are geographically dispersed. The courts have serious limitations here; they are far too slow and far too expensive. They will work as a last resort in disputes where large value is at stake. This simply does not describe illicit personal copying by individuals. One sometimes hears commentators speaking as if it would work to just crack down on individuals in a few token, high-visibility cases. But this is neither fair to those individuals, nor will it deter. Study after study of deterrence suggests that harsh penalties do little or nothing if the probability of being caught remains below a certain threshold.

In my paper, I cited lawsuits against file-sharers as one possible weapon available to the recording industry, but I did so half-heartedly. I fear Singleton may be right about the futility of ever more lawsuits as a means of deterring casual infringement. When I was in DC for the Cato copyright conference, I had lunch with two good friends who are fresh out of college. When I told them the subject of the conference, the conversation soon turned toward their own experiences with peer-to-peer file sharing. They told me–with no apparent guilt–about the peer-to-peer programs they use to download copyrighted content.

Now, these friends would be mortified to be caught stealing a candy bar at 7-11. And in my experience, their attitude is typical of young adults their age. I didn’t ask, but if I had, I suspect they would have been nonchalant about the possibility of being hit by an RIAA lawsuit. The odds of being caught are pretty low, and for logistical and PR reasons the RIAA can’t be too draconian with the people they catch.


Techdirt flags another inaccurate and alarmist story about the dangers of allowing others to borrow your WiFi connection:

To demonstrate the danger, trouble shooters from Data Doctors take us “War Driving”. It’s where hackers drive around neighborhoods, and for the sport of it, record the address of unsecured networks and map them out on the Internet. James Chandler, Data Doctors: “It’s strictly a game to them, but they provide a tool for anybody who is interested in the malicious, the criminal intent.” And after war driving for just a few minutes, we find dozens of open home networks, some even registered in the family name. James Chandler, Data Doctors: “They’re basically telling you, ‘I’m free, I’m not secure, connect to me.’ All I have to do is click a button and I’m ready to connect.” And from the car, they could access everything sent across the signal–account numbers, passwords, business documents. If that’s not enough, they can make you criminally liable by downloading illegal material, using your computer’s Internet address.

The article makes the unsupported assumption that anyone seeking a WiFi connection must have criminal intent. In point of fact, a lot of wardrivers are just looking for a convenient place to check their email. Mike also points out that a hacker can only access traffic that’s not encrypted, and nowadays encryption is a standard feature of any website that deals with sensitive data.

As I wrote back in March, the increasing ubiquity of free wireless networks is a generally positive development. While we can and should educate users about the possible risks and how to lock down their network if they choose to do so, we should also recognize that if done properly, the risk of sharing your WiFi is quite small, and the potential to make the world a more helpful place is significant.

Fred Von Lohmann criticizes Solveig Singleton’s new essay for failing to discuss the darknet critique of DRM. Ray [Gifford, I assume] at PFF had this reaction:

The talismanic authority of the Darknet paper baffles me. It simply proves too much; namely that because some can circumvent DRM and, in a sense IPR, therefore there should be no DRM. It is not a completely bankrupt argument–you can certainly argue that the net effect of, say, antitrust laws or drug laws is negative. With Darknet, though, it is treated as something of an instant QED. To the contrary, it simply makes the point that a black market will arise on the internet for illicitly copied content. The challenge of law (and the usefulness of property rights) is, through social norms and legal sanctions, to shrink the size of that black market so that productive activity continues and a market thrives. Seems to me, on balance, that’s what the DMCA is doing in its best instantiations.

I think this response misunderstands the darknet critique, and the nature of peer-to-peer file distribution more generally.


Seth Finkelstein offers a rebuttal to Solveig Singleton’s contention that DeCSS wasn’t related to the effort to build an open Linux DVD player.

Ed Felten links to an alarming report about flaws in Diebold’s voting machines:

The attacks described in Hursti’s report would allow anyone who had physical access to a voting machine for a few minutes to install malicious software code on that machine, using simple, widely available tools. The malicious code, once installed, would control all of the functions of the voting machine, including the counting of votes. Hursti’s findings suggest the possibility of other attacks, not described in his report, that are even more worrisome. In addition, compromised machines would be very difficult to detect or to repair. The normal procedure for installing software updates on the machines could not be trusted, because malicious code could cause that procedure to report success, without actually installing any updates. A technician who tried to update the machine’s software would be misled into thinking the update had been installed, when it actually had not. On election day, malicious software could refuse to function, or it could silently miscount votes.

As I’ve written before, I’m not convinced there are any good reasons to use computerized voting machines. Their adoption seems to be driven by a simplistic notion that computerized stuff is always better than non-computerized stuff. But as Felten says, these sorts of vulnerabilities are inevitable on a general-purpose computer.

The most important features for a voting machine are reliability and transparency. In general, the simpler a machine is, the easier it is to verify that it’s working correctly and the more likely ordinary voters are to trust it. Optical-scan voting machines appear to be plenty reliable, and they have the advantage that if anything goes wrong, there’s always an option for a manual recount.

When it comes to voting, we should be very, very hesitant to fix what’s not broken.

Joe at TechDirt debunks a reactionary column about the need for tech workers to unionize:

Historically, a powerful union tool has been the ability to exclude non-members from the workforce. This is why unions are so vehemently against “right to work” laws, or these days, outsourcing labor overseas. Closely related to this is opposition to technologies that reduce the need for human employees, like in the example of the plumbers that were against waterless urinals. Though such a mentality is completely anathema to the tech world, it’s not surprising to see it at a column called The Luddite–the fear that technology would take jobs away from humans was the same fear that the original Luddites had. Even more important, perhaps, is that the delineation between labor and management–central to the union ethos–doesn’t hold at most technology companies. Often, company equity is part of an employee’s compensation package; so even if their wages seem to stagnate due to competition from Indian programmers, they benefit when their company saves money.

Indeed.