Open Source, Open Standards & Peer Production

Movies and Granularity

by on May 17, 2006

According to its website, Star Wreck took 7 years to create and involved more than 300 volunteers. If we assume that each volunteer contributed time worth $5,000 per year to the project, the labor cost of the project was about $10 million. Alternatively, if we count only the efforts of the ten principal crew members, and assume they each contributed $25,000 worth of their time each year–far less than you’d have to pay a professional cameraman, to say nothing of a good director or producer–the movie still required about $2 million in labor costs alone.
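The back-of-the-envelope math above works out as follows (a quick check of the figures, using the assumptions stated in the text):

```python
# Rough labor-cost estimates for the Star Wreck project.
years = 7

# Scenario 1: all 300 volunteers, each contributing $5,000/year of labor.
broad_estimate = 300 * 5_000 * years
print(broad_estimate)  # 10500000 -> "about $10 million"

# Scenario 2: only the 10 principal crew members, at $25,000/year each.
narrow_estimate = 10 * 25_000 * years
print(narrow_estimate)  # 1750000 -> "about $2 million"
```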

Now, obviously, that’s not a problem if the participants enjoyed the project enough to do it as volunteers. Plenty of good software is produced that way. But it seems unlikely that you’d be able to get anywhere near enough volunteers to make the number and quality of movies that are on the market today. Moreover, we have to keep in mind that science fiction fans are (and I can say this because I am one) weird. A little bit obsessive, perhaps. These are people who sign up for Klingon language camps and go to conventions dressed up in funny costumes. There’s nothing wrong with that, but you’re not likely to inspire that kind of devotion in the creation of a typical romantic comedy.

Moreover, they acknowledge that the quality of their acting is sub-par:

The movie is being produced by a small team led by Samuli Torssonen, supported by a large group of both amateur and professional volunteers interested in the project. As the production team is self-taught, In the Pirkinning could be called an amateur movie – but we are aiming at professional standards in all aspects of the production (except acting; even though we do our best, it is more or less impossible to find a cast of experienced actors for a project of this size and budget).

As Yochai Benkler argues in Coase’s Penguin, part of what makes peer production viable is granularity: the ability of individual volunteers to make small contributions to a much larger project. It seems to me that in most cases, a movie is insufficiently granular for peer production to be a viable option. You’ve gotta have at least a dozen people–the director, the producer, the principal actors, and the technical leads–working more or less full time over several months (or part time over several years). The number of people who are willing to devote seven years of their life to producing a single movie is probably rather small. In fact, it may be limited to the sort of people who can tell you, in great detail, the differences between a Ferengi and a Cardassian. Again, there’s nothing wrong with that (I can quote Ferengi “Rules of Acquisition” myself), but I doubt you can build an entire film industry on that kind of devotion.

MikeT has a great post comparing the relative performance of Sun’s Java platform and Microsoft’s .NET platform:

Microsoft has refused to open up the .NET platform to the same degree as Sun has, and while there is freedom to implement parts of the base specifications, the legal status of any alternative implementation is in limbo because Microsoft won’t commit to the same open licensing of its patents for .NET that Sun has for Java. There is no “.NET Community Process” that can provide the same sort of assurance that the Java Community Process can provide. One of the benefits of the JCP has been that it has provided a few major corporations with sufficient incentive to write their own Java implementations that a credible open source implementation was not even started until recently, whereas no one has hitherto dared to develop a commercial .NET reimplementation.

The very existence of these implementations is important because they provide the developers and users of Java and .NET with the assurance that they are not betting everything on just one vendor. Advocates of software patents almost always underestimate the importance of such multiple, non-patent-encumbered implementations to the success of a platform. Developing Java or .NET may be expensive, but for the actual developers and users who will make use of them, there is often even more to lose by tying an entire infrastructure to a technology that can only be provided by one vendor.

Lastly, consider this, you skeptics out there. Where would the market for Java application servers be today were it not for Apache Tomcat and JBoss providing high quality open source alternatives to the extremely expensive proprietary servers? This alone is a very good reason why software patents ought to go. Had Sun left other companies and open source groups in limbo on what they would do with their patents, few companies would have invested into the development of Java application servers. It was Sun’s decision to not use their patents except as a mechanism to control the purity of the implementation of the Java platform that really got things going. Since Microsoft has not yet committed to allowing others to develop .NET runtimes that are fully compatible with theirs without licensing technology from them, the market for .NET products has been primarily limited to those who wanted to replace existing Microsoft-specific code rather than build a totally new market the way that Java has started.

I can think of a few factors that might help explain Java’s lead (perhaps most notably, Sun had about a 4-year head start) but I think the general point is a sound one. In fact, this is a point I plan to stress at Wednesday’s Cato conference: just as planned economies don’t work as well as decentralized, market-oriented ones, technological platforms controlled by a single vendor tend not to do as well as platforms in which anyone is free to participate without seeking the permission of a centralized authority. What software patents and the DMCA do, in effect, is encourage the technological equivalent of central planning. Central planning doesn’t work for national economies, and it doesn’t work for software platforms.

Epstein’s New Paper

by on April 20, 2006

Richard Epstein has a new report on intellectual property. Epstein is a brilliant legal theorist (seriously–several of his books are classics of libertarian scholarship) but unfortunately, I think his analysis of IP issues–especially technology-related IP issues–is hampered by his lack of familiarity with the underlying domain. Take this passage about open source, for example:

One ongoing question is how well open source stacks up against traditional proprietary software. Much depends on the scale of the enterprise. The decentralized methods for open source work well with small systems, but are difficult to maintain as the network expands–a problem that any proprietary system also faces in integrating backwards to existing products while introducing new products. In addition, loose cooperatives must organize to fend off outsiders claiming that the entire system incorporates their trade secrets or IP. The present SCO litigation, for example, puts the entire Linux system at risk on these grounds, prompting the formation of a litigation committee to coordinate the common defense. Right now at the heart of the movement lies a commercial joint venture spearheaded by well-established firms like IBM, Intel, and Hewlett-Packard, which develop service and proprietary programs that operate on top of an open source infrastructure. The new development gives ample testimony that no loose assemblage of voluntary contributors will be able to carry the day any longer.

To be honest, I’m not sure I follow the third sentence. I think that, to the contrary, in many ways open source scales better than proprietary development models, because it takes advantage of decentralized, spontaneous processes to solve problems rather than relying on hierarchical, top-down processes. Of course, generally speaking, large corporations like dealing with other large corporations for their IT needs, so it’s not surprising that IBM does a lot of business selling open source software (along with some of its own proprietary software) to Fortune 500 companies. But that’s not because open source can’t solve the technical problems of large companies. It’s simply that “open source,” as an idea, doesn’t have a sales force and can’t meet with corporate IT directors. IBM does, and can, so it tends to get the IT contracts. But most of the value was created by the volunteers who built the underlying software.

He then claims that “at the heart of the open source movement” are IBM, Intel, and HP. He doesn’t elaborate, but I assume he’s equating “open source” with “Linux.” This is misleading for several reasons. First of all, those companies might be spending the most money on Linux-related products, but they’re hardly the core of the Linux community. Linux is still developed by a decentralized group of mostly-volunteer programmers from a wide variety of institutions, led by Linus Torvalds. They probably don’t seem significant to Mr. Epstein because they don’t have PR departments or billion-dollar balance sheets, but they’re the ones who control the direction of the core product. The work of IBM, Intel, HP, and their ilk is largely focused on making Linux work better on their particular systems, as well as building software on top of Linux to meet the needs of particular clients. Obviously, that’s often helpful to the overall project, but it hardly puts Big Blue “at the heart” of the Linux effort.

But the broader point is that Linux is just one of dozens of major, successful open source products that are used by millions of people every day–and most of them receive far less corporate support than Linux does. Most of them are programs that Epstein has probably never heard of–projects with names like Apache, Samba, Perl, Python, gcc, MySQL, KDE, Gnome, FreeBSD, OpenSSH–but they make up the “plumbing” that makes the Internet work. Each of these projects has a core team made up of, well, “a loose assemblage of voluntary contributors.” Some of them get corporate support, but in most cases that support is incidental to the projects’ viability. I can’t think of any recent developments that prove that the open source model will not “be able to carry the day any longer.” To the contrary, the open source development model continues to demonstrate its vitality by churning out spectacular products without significant corporate subsidies.

Now, obviously it wouldn’t be fair to expect a 50-something law professor to be intimately familiar with products like gcc and FreeBSD. Linux is the product that gets the most press, and IBM is the Linux contributor that gets the most attention, and so Epstein naturally assumes that IBM is the biggest driver of open source software.

It’s an understandable error, but these kinds of blind spots are dangerous when you’re doing public policy analysis. If you misdiagnose the source of innovation, you’re likely to misunderstand the institutions required to promote it. Computer geeks are the ones closest to the ground of high-tech innovation. When they’re shouting from the rooftops about problems with our IP system, I think the law professors of the world ought to pay a bit more attention to what they have to say.

OJ Simpson Open Source

by on April 15, 2006

A few months ago, I was pleased to see this post by ZDNet editor David Berlind, in which he did a great job of explaining why “open DRM” is a contradiction in terms:

Until last night, when I met Brad Templeton, chairman of the board at the Electronic Frontier Foundation, my position has basically been that DRM as an idea is a bad idea (especially the way it is being implemented) but that if we must have it, then at least let’s have one that’s based on an open standard so that the content you buy can flow frictionlessly from one of your devices to the other without running into a playback gotcha. But, based on what Templeton told me, I now realize that even an open standard won’t do much to solve the problem. This for me–a huge proponent of open standards–was such devastating news that Templeton will tell you that at first, I refused to believe it. But it’s true and perhaps just as troubling is how open source software is one of the reasons why.

Templeton taught me something about how DRM works that I had never stopped to consider. As it turns out, a proprietary DRM scheme relies on the proprietary closed source software that works with it to form the one-two punch of what makes DRM function. The great thing about open standards is that they make it possible for anybody including open source developers to implement them in their software. But if there was an open standard for DRM, the resulting open source implementations would very likely defeat the purpose of the DRM in the first place. The reason proprietary DRM works is that the vendor is in control of both the DRM technology that secures the content and the playback technology that knows how to unlock it and play it back. So, by virtue of what the proprietary playback software is capable of, that vendor is completely in charge of what happens to the content once it’s unlocked.
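Templeton’s point can be illustrated with a toy sketch (hypothetical code; trivial XOR “encryption” stands in for a real cipher, since only the structure of the argument matters): any player, open or closed, must hold the key and decrypt the content in order to play it, so an open source player can trivially be modified to save the plaintext instead of playing it.

```python
# Toy illustration: why open source playback software defeats DRM.
# XOR is a stand-in for a real cipher; the structure is what matters.

KEY = 0x2A  # any compliant player must embed (or be able to derive) this key

def decrypt(locked: bytes) -> bytes:
    """Every player must perform this step to play the content at all."""
    return bytes(b ^ KEY for b in locked)

def closed_player(locked: bytes) -> None:
    audio = decrypt(locked)
    # A closed source player only ever sends `audio` to the speakers;
    # the vendor controls what else the binary can do with it.

def modified_open_player(locked: bytes) -> bytes:
    # With the source available, one small edit turns "play" into "save":
    return decrypt(locked)  # unrestricted plaintext copy

locked = bytes(b ^ KEY for b in b"some licensed song")
print(modified_open_player(locked))  # b'some licensed song'
```

The DRM “lock” and the player that opens it have to ship together, which is exactly why openness in the player breaks the scheme.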

But in a Friday post, he seemed to be changing his tune, at least when it comes to Sun’s “open source” DRM scheme:


Software Libre

by on February 17, 2006

I often scratch my head when James DeLong writes about open source software:

Torwalds’ emphasis on reciprocity as a dominant value is right. It is a word used often here at PFF to describe the workings of the IP system, and to explain why unauthorized P2P violates the social contract.

But he has an awfully limited view of reciprocity in that he insists that code can only be traded for code. This may do in a research context, but once one enters the world of affairs, not even the most primitive barter economy trades like for like. Og the Cro Magnon traded meat for a finely crafted club, or a log canoe for a tent.

Now, granted, Torwalds is not talking about trading exactly the same code, but this is still a strange and unnecessary constraint.

This is, shall we say, a strange and unnecessary argument. We on the libertarian side of the fence often extol markets, commerce, and for-profit institutions, because they work very well and provide us all with a lot of goods and services we value. But I think we too often fall into the trap of assuming that market institutions are always superior to non-market institutions, or (even worse) that for-profit institutions are always superior to not-for-profit ones.

But that’s silly. The essence of the libertarian position is that decentralized, voluntary institutions are better than centralized, coercive ones. As it happens, markets are one of the most important examples of a decentralized, non-coercive institution. But they are far from the only one. Churches, private charities, universities, think tanks, and families are all examples of private organizations that do good things without primarily relying on the profit motive. I can’t remember ever seeing a libertarian attack churches for relying so much on volunteers rather than paid workers to get things done. Volunteering at your church is an example of reciprocity that doesn’t involve an exchange of money. We libertarians usually praise such arrangements as worthwhile alternatives to government coercion.

An open source software project is another example of a private, decentralized, voluntary institution. It’s the sort of thing that free-market types should be promoting, as another example of how valuable products can be created without regulations and subsidies. Yet DeLong regularly does just the opposite.

Now obviously, the fact that DeLong’s criticism isn’t intrinsically libertarian doesn’t mean it’s wrong. Here’s what he’s missing: Torvalds demands reciprocity in the form of code rather than money because the source code is actually useful to him. Ed Felten named his blog “Freedom to Tinker” for a reason. Software that comes with its source code is more useful than software that doesn’t. Being able to “tinker” with the software we use is an ability many of us programmers value, and it’s taken away from us by proprietary software.

DeLong seems to think that open source programmers are just ideologically driven zealots who don’t like paying for things. But that misunderstands their motivation. Primarily, their concern is technical, not ideological or financial. The ability to examine and change a program’s source code is valuable, independently of whether you paid for the software in the first place, and independently of whether you’re planning to share it with others. So Torvalds’ motivation in trading code for code is that he actually wants the code. Not because he hates the profit motive, but simply because the code is useful to him, and he can’t get it with proprietary software.

This is a point that non-programmers have trouble understanding. When they hear the phrase “free software,” they hear “software I don’t have to pay for.” That’s not what the phrase means–it’s an unfortunate limitation of the English language. The open source movement uses the phrase “free as in free speech, not free beer” to try to explain the distinction. It’s about what you can do with the software, not how much you paid for it. This confusion doesn’t exist as much in Spanish, where there are different words for these two concepts: “gratis” for “free as in beer,” and “libre” for “free as in speech.” The purpose of the GPL is to preserve the latter for the benefit of programmers. The former is just an incidental benefit to users.

Via IPCentral, there’s an interesting article over at DRM Watch about the development of DRM standards.

The short version is: DRM standards continue to be a disaster. The only “standard” that has gotten any traction is the OMA DRM that’s used to lock content for mobile phones.

It’s not hard to see why mobile phone makers would have an easier time limiting copying than other platforms: mobile phones are proprietary devices on proprietary networks, and consumers use them to consume a small amount of proprietary content. (Amusingly, at one point it looked as though the annual licensing fees for OMA would exceed the value of all content traded using the scheme.) The challenges faced by OMA are nothing like the challenges faced by someone distributing a lot of content on an open network like the Internet. OMA has hardly been a roaring success, and other DRM “standards” continue to be dead in the water:

The issue of technology licensing, and fees associated with it, pervades just about every DRM-related standards initiative–so much that it calls the term “standard” into question. Most DRM standards bodies are now really consortia that have IP licensing pools attached to them. Sun Microsystems is attempting to buck this trend with its DReaM Project, which it announced back in September: Sun intends to create an open DRM standard through collaborative community source development that “invents around” the existing patents. We believe this effort to be naive and unrealistic, and we do not expect it to succeed in its proposed form.

For anyone who’s familiar with the way real open standards work, that ought to make your skin crawl. Genuine open standards like HTML, PDF, and WiFi are available for anyone to implement, and to freely combine with other technologies to create something new. When I want to design a new web browser, I don’t have to run out and negotiate a licensing agreement with the company that owns the HTML standard. I don’t have to comply with hundreds of pages of detailed regulations before I’m allowed to release my product. And I don’t have to pay anyone royalties. The result of that openness has been a flourishing market for both web servers and web browsers, many of them developed by volunteers. The market would look very different if someone were collecting license fees on every web browser downloaded.

The expectation that “open standards” will be actually open–unencumbered by restrictive licensing terms and burdensome royalties–might be “naive,” but it’s been essential to the rapid growth of the Internet. I think DRM Watch is actually right that Sun is “naive and unrealistic” if it thinks it can develop an “open” DRM standard. But DRM Watch seems to think that Sun should instead jump on board one of the more proprietary alternatives.

In contrast, I’m inclined to think that DRM is fundamentally at odds with the open, competitive technological environment from which the Internet emerged. The events of 2005 seem to provide more evidence for that thesis.

Creative Commons

by on December 29, 2005

Larry Lessig has spent the last week exhorting people to contribute to Creative Commons, the organization responsible for the Creative Commons license.

The CC license is a concept that every libertarian–regardless of their views on intellectual property–should be excited about. Sometimes, an author, artist, or musician chooses to market his or her products commercially. But other times, for a variety of reasons, a creator would simply like his or her works to be widely available. The Creative Commons license is a convenient and efficient means for copyright holders to make their works available to the public while placing various conditions on the way those rights are used. So, for example, a musician might release a song under a CC license that’s free for non-commercial use, but requires his permission before the song is used for commercial purposes.

Copyright is all about putting creators in control of their creations. The right of control includes the right to relinquish that control, or to relinquish some rights (such as the right to non-commercial use) while reserving others (such as the right to attribution and commercial exploitation). Thus, CC extends the power of copyright by giving authors finer-grained control over their rights under copyright.
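That finer-grained control can be pictured as choosing a combination of a few standard conditions. Here is a simplified model (the condition codes are real CC abbreviations, but the rules are reduced to the two essentials: Attribution is always included, and ShareAlike is incompatible with NoDerivatives):

```python
# Simplified model of how CC licenses combine standard conditions.
from itertools import combinations

CONDITIONS = ["BY", "NC", "ND", "SA"]  # Attribution, NonCommercial,
                                       # NoDerivatives, ShareAlike

def valid(combo: frozenset) -> bool:
    # ShareAlike governs derivative works, so it can't be combined with
    # NoDerivatives; the standard licenses all include Attribution.
    return "BY" in combo and not {"ND", "SA"} <= combo

licenses = [
    frozenset(c)
    for n in range(1, len(CONDITIONS) + 1)
    for c in combinations(CONDITIONS, n)
    if valid(frozenset(c))
]
for name in sorted("-".join(sorted(lic)) for lic in licenses):
    print("CC", name)
```

Those two rules yield exactly six combinations–BY, BY-NC, BY-ND, BY-SA, BY-NC-ND, and BY-NC-SA–which are the six standard Creative Commons licenses.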

I’d been aware of the CC license for a long time. What I didn’t realize until recently was that Creative Commons, the organization, has about 20 employees and is involved in a wide variety of exciting projects. Lessig described the organization’s current activities in a series of emails over the last few weeks. They’re well worth reading to understand what the CC project is, what it’s trying to accomplish, and where it’s heading in the coming years.

Anyway, until now CC has been supported primarily by a handful of generous foundations. But they’re currently undergoing a public fundraising campaign in order to preserve their status as a publicly-supported non-profit organization–and the deadline is this Saturday! So check out his most recent appeal, which answers some common questions, and perhaps you’ll be persuaded that they’re an organization worthy of your support. I was.