May 2006

Mike at TechDirt takes issue with an article by Tom Lenard on C|Net that argues that market allocation of “white space” spectrum is more efficient than a “commons” designation. He writes that “Unlicensed spectrum is hardly a ‘centralized allocation system,’ and it’s hard to see how anyone could make such a claim with a straight face.” As I explained in a recent paper, in order to have a “commons” that works, you need to have rules that govern how devices operate in the space so that they don’t interfere with each other. For example, devices in the chunk of spectrum in which Wi-Fi operates, by regulation, cannot operate above 5 Watts EIRP. Therefore, the rules that govern the “commons” we now have are centrally planned by the government. It’s not controversial to say that central planning is inefficient because a planner cannot possibly have all the information about all the possible competing uses of the spectrum.
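
To make the flavor of such centrally planned rules concrete, here is a minimal sketch of the EIRP arithmetic behind a power cap like the one cited above. The 5-watt figure is simply the number from the paragraph above, and the antenna gains are made up for illustration; actual FCC limits vary by band and configuration.

```python
# Minimal sketch: does a radio/antenna combination stay under an EIRP cap?
# The 5 W cap is the figure cited in the post; the gains below are hypothetical.

EIRP_CAP_WATTS = 5.0

def eirp_watts(tx_power_watts: float, antenna_gain_dbi: float) -> float:
    """EIRP: transmitter power multiplied by linear antenna gain (dBi -> ratio)."""
    return tx_power_watts * 10 ** (antenna_gain_dbi / 10)

def is_compliant(tx_power_watts: float, antenna_gain_dbi: float) -> bool:
    return eirp_watts(tx_power_watts, antenna_gain_dbi) <= EIRP_CAP_WATTS

print(is_compliant(1.0, 6.0))  # True: 1 W radio + 6 dBi antenna -> ~4 W EIRP
print(is_compliant(1.0, 9.0))  # False: same radio + 9 dBi antenna -> ~7.9 W EIRP
```

The point of the sketch is simply that a cap like this constrains the whole combination of radio and antenna in advance, which is exactly the sort of engineering trade-off the central planner must fix for everyone.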

While there is no doubt a place for unlicensed devices, one has to admit that designating some spectrum as a commons with certain specific rules will prevent that spectrum from being used in another, perhaps more innovative, way that cannot operate within the commons’ rules. In a market you could just buy the spectrum and deploy your more innovative use; in a commons regime you would have to petition the central planner to change the rules (and we all know how well and how quickly that works).

Second, Mike takes Lenard to task for not understanding the concept of zero, claiming that when scarcity is removed, market-oriented folks have a hard time understanding policy. I tend to agree with him, and I think it’s a great observation as it applies to intellectual property. Ideas truly are not scarce; their scarcity is created artificially through IP laws. However, I’m afraid that while new technologies have been able to eke out more communications capacity from existing spectrum, that capacity is still finite and, despite the rhetoric one often hears, spectrum scarcity has not been eliminated.

Mike writes that “what those who understand zero recognize, is that unlicensed spectrum turns spectrum into a free input, lowering the costs and allowing companies to provide products that serve the market at much more reasonable rates.” What he doesn’t see is that while unlicensed spectrum might be a “free input” for certain uses, a whole host of other uses are precluded. While a commons can allow low-power, short-range devices such as Wi-Fi, Bluetooth, and cordless phones–great innovations all–you could not deploy a new national wireless competitor in voice or video over unlicensed spectrum. The only way the cost of spectrum could truly be zero is if all potential uses of spectrum could be deployed without precluding any other use. This is the case in intellectual property, where I can use any idea as much as I want without ever affecting someone else’s ability to use that same idea. But it’s not the case for spectrum, where one use (even the use of spectrum for an unlicensed commons) will necessarily preclude some other potential use.

Movies and Granularity

May 17, 2006

According to its website, Star Wreck took 7 years to create and involved more than 300 volunteers. If we assume that each volunteer contributed time worth $5,000 per year to the project, the labor cost of the project was about $10 million. Alternatively, if we count only the efforts of the ten principal crew members, and assume they each contributed $25,000 worth of their time each year–far less than you’d have to pay to get a professional cameraman, to say nothing of a good director or producer–the movie still required about $2 million in labor costs alone.
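
For what it’s worth, here is the back-of-envelope arithmetic spelled out. The dollar figures are the assumptions stated above, not actual production data:

```python
# Back-of-envelope labor valuation for Star Wreck, using the assumed
# figures from the paragraph above; none of these numbers come from
# the production itself.
years = 7

# Scenario 1: all ~300 volunteers contribute $5,000 worth of time per year.
print(300 * 5_000 * years)   # 10,500,000 -> "about $10 million"

# Scenario 2: only the ten principal crew members, at $25,000 per year each.
print(10 * 25_000 * years)   # 1,750,000 -> "about $2 million"
```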

Now, obviously, that’s not a problem if the participants enjoyed the project enough to do it as volunteers. Plenty of good software is produced that way. But it seems unlikely that you’d be able to get anywhere near enough volunteers to make the number and quality of movies that are on the market today. Moreover, we have to keep in mind that science fiction fans are (and I can say this because I am one) weird. A little bit obsessive, perhaps. These are people who sign up for Klingon language camps and go to conventions dressed up in funny costumes. There’s nothing wrong with that, but you’re not likely to inspire that kind of devotion in the creation of a typical romantic comedy.

Moreover, they acknowledge that the quality of their acting is sub-par:

The movie is being produced by a small team lead by Samuli Torssonen, supported by a large group of both amateur and professional volunteers interested in the project. As the production team is self taught, In the Pirkinning could be called an amateur movie – but we are aiming at professional standards in all aspects of the production (except acting, even though we do our best it is more or less impossible to find a cast of experienced actors for a project of this size and budget).

As Yochai Benkler argues in Coase’s Penguin, part of what makes peer production viable is granularity: the ability of individual volunteers to make small contributions to a much larger project. It seems to me that in most cases, a movie is insufficiently granular for peer production to be a viable option. You’ve gotta have at least a dozen people–the director, the producer, the principal actors, and the technical leads–working more or less full time over several months (or part time over several years). The number of people who are willing to devote seven years of their life to producing a single movie is probably rather small. In fact, it may be limited to the sort of people who can tell you, in great detail, the differences between a Ferengi and a Cardassian. Again, there’s nothing wrong with that (I can quote Ferengi “Rules of Acquisition” myself), but I doubt you can build an entire film industry on that kind of devotion.

Last week, in a much-anticipated decision, a judge in London ruled that Apple Computer’s use of the Apple logo on its iPod did not violate an agreement with Apple Corps, a Beatles-owned music label. It’s no doubt a fascinating intellectual property case, and I’ve no doubt that TLF’s Tim Lee will cover it (and blame the DMCA for it somehow).

But the really interesting thing here isn’t the case, but the BBC’s coverage of it a few days later. It seems that the British network’s News 24 channel had meant to interview Guy Kewney, the editor of newswireless.net and an expert on the subject. It turns out, though, that due to an inexplicable Green Room fiasco, the man actually interviewed, live on-air, was Guy Goma, a Congolese IT expert who was at the BBC’s offices for a job interview.

The video tells the whole story. Mr. Goma’s face shows horror when he is introduced as Mr. Kewney. The first question: was he surprised by the verdict? Yes, he was surprised, he says, no doubt sincerely.

Meanwhile, the real Guy Kewney, back in the reception area, sees himself being interviewed on TV. Well, not him, but another Guy. (Kewney later recounted the story in his blog.)

Meanwhile, the interview goes on. And it turns out that Guy Goma, despite a thick accent and precious little knowledge of the subject, actually does rather well, mumbling things about Internet cafes and the popularity of downloads.

There are no doubt lessons here about 24-hour news and the mistakes that format engenders. But as a policy wonk and occasional guest on similar shows, I felt an enormous empathy for Mr. Goma. Who hasn’t felt that nagging fear that the next question will be on a subject they know nothing about? And that they will nevertheless be expected to answer?

The whole episode should raise concerns, moreover, for the whole policy wonk community. After all, if journalists find they can get half-decent interviews from someone randomly plucked from their lobby, we may be out of a job.

Well, there’s always the blog.

The last major topic of Solveig Singleton’s defense of the DMCA that I want to address is the interoperability issue. Here’s her take on it:

The main obstacle to more interoperable DRM or reverse engineering is not the DMCA, it is a business problem. DRM has an advantage in security and speed to market when only one company need be involved in its development. A more open process is slow and the result is not usually cutting-edge. There are endless negotiations, a host of issues with compatibility with legacy equipment, and serious trust issues. CSS protection for DVD’s was jeopardized partly because a licensee, Xing, neglected to encrypt a key (though the system had other weaknesses as well). The more players, the more risk.

Let us examine the example of Apple and the iPod/iTunes model more closely. Apple has the idea of selling music at a low price and making money on the hardware, just as radio broadcasters once sponsored programming in the hope of selling radios. To offer a library of music, Apple needs to convince music rights-holders that the files aren’t going to show up free everywhere. This they are most likely to be able to do if they control the security technology. Furthermore, would Apple bother if it anticipates other companies coming in and cutting into the profits they want on the hardware? Unlikely; this was the reason that radio broadcasters were moved to advertising to fund radio, but this won’t work in a digital world if advertising can easily be stripped out. Finally, what gains do we get if, say, Real Networks hacks the iPod, Apple puts out a fix, Real Networks hacks it again, and Apple fixes it again? There’s nothing there for consumers or entrepreneurs but quicksand. We have not Schumpeter, describing the process by which the candle is replaced by the light bulb, but Hobbes’ war of all against all. Meanwhile, there is plenty of competition in the tunes market without breaking Apple’s code. There are many different kinds and levels of competition.

Now, what’s most interesting about this passage is that it concedes one of the central contentions of my paper: that the DMCA doesn’t merely assist in the enforcement of copyrights, but actually creates a new kind of quasi-property right in technology platforms. In my paper, I spend a fair amount of time talking about the IBM PC example as a model for how interoperability and reverse engineering worked before the enactment of the DMCA. IBM would have loved to prevent competitors from building PC clones, but copyright law didn’t give them any way to do that.


Assorted comments have filtered in concerning my DMCA paper, and I respond here:

Sympathy: I again underscore that I am sympathetic to Ed Felten and others caught up in litigation. My point was not to trivialize their concerns, but to put them in a larger perspective. The plight of a hungry person arrested for snatching bread is a real plight, but it is not an argument for abolishing the law in question. I am not an “act utilitarian.” I do not believe that every application of a rule needs to be optimal for the rule to do more good than harm. The courts are good at dealing with individual hard cases; forums exist for reform and for exemptions; but, still, in boundary disputes, lines must be drawn somewhere. If one has a strong concept of natural rights, that will be jarring. But I believe that rights need some flexibility or they will not survive the transformation of the economy into one where less value is bound up in physical capital and more in intellectual capital.

Piracy and More Piracy and Less Piracy: Yes, there is rather a lot of it, isn’t there? Offline, online, and so on. If DRM does nothing to impede it, and if keeping hacker/cracker tools in the realm of the black market or grey market does nothing to inconvenience anyone even slightly, then that is certainly a problem. But I do think that there is a vast stretch of ground between failing completely and preventing all piracy of any kind. And although there is a great deal of piracy going on, there could easily be a great deal more.

A note about P2P: of course the DMCA doesn’t do anything about that; the stuff is already decrypted. To address that problem, we have the Grokster case. Different problem, but similar analysis. Markets can contend with black-market P2P, fraught with viruses and other nasty things. The expectations of students are misleading here. Students are used to getting things for free from their parents and others. They generally do not buy the machines or software they use, and have little cash flow to spend on content. So they are not averse to risking giving their machines horrible diseases, and on the other hand “need” to get content for free. They have a great deal of time on their hands. Flash forward a few years: these same people will have jobs, and many of those jobs (an increasing number) will involve intellectual property (journalism, photography, science, trade secrets). They will be short on time and have more money. They will be much more wary of viruses. Their views are quite likely to evolve.

If DRM itself is all a waste of effort, well, one ought to see investors supporting business models that use something else. But, again, we see only a few small experiments. Very few. Very small. I find it extremely implausible that everyone across a wide range of content developers–games, music, movies, photos, books, and so on–and all their investors, should be entirely lacking in vision. There is money to be made here.

It remains possible that someone will come forward and discover how it is to be done without relying on any of the types of boundaries that have traditionally been used. The idea of voluntary compliance is attractive, but unrealistic in a large community. People do voluntarily comply with a great many laws. But these norms came to be internalized, in part, through centuries of past enforcement patterns and the gradual evolution of human expectations.

It is also possible that some of the need for liability rules to take up the slack on the enforcement side would lessen if the Internet for other reasons evolves in the direction of being more friendly to enforcement. An infrastructure supporting identification, authentication, and reputation mechanisms might help. But bear in mind that should such an infrastructure develop, so will efforts to crack and spoof, and then we are right back at the problem of the DMCA again.

Beyond Short Papers and Hard Arguments: The best response to my paper comes from Fred von Lohmann at EFF, who brings the argument back to the question of whether the DMCA and/or DRM is needed at all. The larger point of my paper is that if the DMCA is necessary, the hard cases we have seen cannot justify its repeal, but rather call for tinkering or further explication from the courts. I underscore that my paper was not intended as a final or complete defense of the DMCA; that would have required a much longer paper. But FVL’s argument takes us beyond the scope of the paper, where I think the debate is more serious.

To justify repeal, one would need to show that the DMCA is not necessary for the viability of markets. This argument can take two forms.


In thinking about how much the communications and computing world has changed in just the last few decades, I’m always wondering how my kids will react when I tell them about how Dad used technology in the past. Case in point: “long-distance service.” My kids won’t even know what the heck that term means. And they will certainly laugh when I tell them how their grandmother used to impose a strict, time-limited plan on my calling activities to keep our phone bill down. (I used to have a girlfriend in high school who lived across an “inter-LATA” boundary, which made my calls to her absurdly expensive even though she was less than 30 minutes away. Once the monthly phone bill went over $100, my call privileges were severely curtailed by Mom!)

Anyway, what got me thinking about all this was this announcement yesterday by Skype that all US- and Canadian-based Skype customers can now make free SkypeOut calls to traditional landline and mobile phones in the US and Canada. Sure, Skype isn’t a perfect substitute for traditional telephony. But it’s good enough for many. And its move will force other telecom providers to reconsider their current pricing plans and cut rates even further. One wonders how Vonage and other VoIP providers can survive in a world of cut-throat competition.

Meanwhile, back in the surreal world of Washington, we continue to impose extensive regulations on various phone companies due to fears of consumer harm. Hmmm… let’s see, someone is providing free phone service to the world and regulators are still worried about consumer harm. Seems silly to me.

Two Great Papers

May 16, 2006

I spent part of my weekend attacking the big stack of reading material I’ve been accumulating recently, and I wanted to (rather belatedly) plug two excellent papers.

The first is Yochai Benkler’s “Coase’s Penguin”, which is a few years old now but every bit as relevant today as it was when it came out in 2002. It offers a framework for analyzing peer-production processes (such as the development of open source software) that puts them on par with the firm and the market as methods for organizing cooperative ventures.

His central insight is that many types of information-processing tasks are characterized by enormous variations in the motivation, knowledge, and talents of potential producers, and that this information often cannot be efficiently standardized for transmission via prices (in markets) or hierarchical decision-making (in firms). Because firms and markets segregate people and resources into many private fiefdoms, it’s unlikely that the person best suited to perform a particular task will be found in the firm that needs the task performed. And if the tasks are granular enough (say, fixing a bug in the Linux kernel), the search and transaction costs of finding the right person and negotiating a contract with him or her might be too large to justify the effort. In contrast, because peer production allows people to self-select into the projects that interest them most, people can often be allocated to projects at very low cost.

I found this argument particularly compelling because it fits with my experiences trying to find people to do work for the Show-Me Institute. It’s very difficult to judge from a resume whether a particular individual would make a good intern or research assistant, write a good paper, etc. One of the best ways of improving your chances of getting a job in a think tank or a public policy magazine like Reason is to start a blog. In the first place, a blog allows a potential employer to peruse real-world examples of your work and judge whether you’re a good writer without having to ask you to prepare writing samples specifically for him or her. In the second place, the blogosphere has some built-in mechanisms to help potential employers sift through the potential candidates. Particularly talented bloggers can catch the attention of other bloggers, who add them to their blogrolls. Finally (and this is the part where Benkler’s argument is particularly relevant), starting a blog might help you find opportunities you wouldn’t otherwise have even known about. If you write a blog about foreign policy, for example, there might be people out there looking for foreign policy writers whom you wouldn’t have been able to find via other means.

So I highly recommend Benkler’s paper if you haven’t read it yet.

The other paper, ironically enough, is largely a critique of one of Benkler’s other pet issues: instituting a spectrum commons. It’s by my friend and co-blogger Jerry Brito, and it’s titled “The Spectrum Commons in Theory and Practice.”

This isn’t an issue I’d really looked at in any detail before I read Jerry’s paper. I’d read various people claim that you could manage spectrum as a commons, and I was always suspicious of the argument. Jerry’s paper confirmed my suspicions. He argues that in a world of scarcity, a commons presupposes a manager whose job it is to set ground rules that can prevent over-grazing of the commons. A spectrum commons can theoretically be managed either by a private company or by the government, but in practice, spectrum commons end up being managed by the government. And as Jerry shows in an extended example on the history of the FCC’s decision-making concerning the 3650 MHz frequency band, the FCC’s first experiment in creating a spectrum commons was far from ideal. First, the proceedings generated a lot of rent-seeking by those lobbying for competing uses of the spectrum. And second, it’s not clear that the rules the FCC ultimately imposed will lead to efficient utilization of the spectrum.

Jerry concludes that a spectrum commons is not a “third way” between command-and-control and markets. A commons requires a regulator; the question is still who regulates: the government or a private actor. Jerry makes a persuasive argument that creating property rights in spectrum and allowing private actors to deploy it for their highest-valued use is likely to lead to the most efficient utilization.

More King Kong

May 15, 2006

Mike Masnick offers another answer to the King Kong question:

Last month, at the CATO Institute conference on copyrights, someone from NBC Universal asked both Professor David Levine and me how NBC could keep making $200 million movies like King Kong without super strong copyright regulations. We each gave our answers that didn’t satisfy some. However, as I noted in the recap to the event, the guy from NBC Universal was asking the wrong question. It’s like going back to the early days of the PC and asking how IBM would keep making mainframes. The point is that $200 million movies may mostly be a thing of the past. The near immediate response from NBC Universal and other stronger copyright supporters is that this is a “loss” to society–since we want these movies. However, that shows a misunderstanding of the answer. No one is saying to make worse movies–but to recognize that it should no longer cost so much to make a movie. The same economics professor, David Levine, who was asked the question is now highlighting exactly this point on his blog. Last week there was a lot of publicity around a group of Finns who created a Star Trek spoof and are trying to help others make and promote inexpensive, high quality movies as well. Levine notes that the quality of the spoof movie is astounding–not all that far off from what you’d expect from a huge blockbuster sci-fi picture, but was done with almost no budget at all. Given the advances in technology, the quality is only going to improve. So, again, it would appear that a big part of the answer to the $200 million movie question is simply that anyone spending $200 million on a movie these days is doing an awful job containing costs.

This is a good argument, but I still don’t entirely buy it. Certainly, this gives us a reason to think the optimal cost of movies in the future will not be $200 million, as better technology allows us to cut the costs of the expensive special effects and film-based recording technologies that contribute to the cost of Hollywood movies. No doubt that will bring down the cost of blockbuster movies somewhat.

But I don’t think it gets you down to the point where you could fund movies entirely through indirect means like advertising and a first-mover advantage. I’m no expert on the economics of the film industry, but it seems to me that technology is not the only, and probably not even the most important, cost driver for the typical Hollywood film. Rather, I have the impression (please correct me if I’m wrong) that much of the cost is driven by the immense amount of labor required to create a top-quality film. You’ve got lighting crews, camera crews, makeup crews, set crews, sound crews, post-production crews, and on and on. Another major cost is the environment in which the actors perform: either you have to move your cast and crew to a new location, which involves a lot of travel costs, or you have to construct sets, which requires considerable materials and labor.

Even costs that are primarily technological have a major labor component. For example, a lot of special effects require people to build computer models, design textures for the models, set up and run complex motion-capture software, and tweak and polish the finished product to make sure it looks realistic. Even if the cost of the hardware and software drops to zero, salaries for the artists and technicians who run all that equipment are likely to be significant. Video game companies have become dominated less and less by programmers and more and more by creative types who take the models built by the programmers and bring them to life with artwork and stories.

In short, movies in the future may not cost $200 million, but it seems likely to me that there will be movies for which (say) a $50 million budget is entirely justified. I remain skeptical that studios could recoup that kind of investment absent some help from copyright law. And I’m not willing to shrug my shoulders and accept that those kinds of movies simply wouldn’t be made any more.

So I remain pro-copyright. The world would be a much simpler place if we didn’t have to bother with intellectual property rights, but unfortunately, the world seems to be a complicated place.

My co-blogger Solveig Singleton has a new paper out about the DMCA. I’ll probably have some criticism of it in a future post, but I wanted to start off with a point I find pretty persuasive:

In the world of some libertarian DMCA critics (including a slimmer version of myself, some years back), legal barriers enforced in lawsuits against myriad copying individuals are a mainstay. More vigorous enforcement is sometimes presented as an alternative to the DMCA. With respect to my peers, this is non-responsive. The problem that the DMCA is intended to solve is in large part the limited usefulness of ordinary enforcement mechanisms; it does not solve the problem to invoke them…

The Internet lacks a dispute resolution mechanism appropriate to quickly resolve millions of small-value disputes, especially where the parties are geographically dispersed. The courts have serious limitations here; they are far too slow and far too expensive. They will work as a last resort in disputes where large value is at stake. This simply does not describe illicit personal copying by individuals. One sometimes hears commentators speaking as if it would work to just crack down on individuals in a few token, high-visibility cases. But this is neither fair to those individuals, nor will it deter. Study after study of deterrence suggests that harsh penalties do little or nothing if the probability of being caught remains below a certain threshold.

In my paper, I cited lawsuits against file-sharers as one possible weapon available to the recording industry, but I did so half-heartedly. I fear Singleton may be right about the futility of ever more lawsuits as a means of deterring casual infringement. When I was in DC for the Cato copyright conference, I had lunch with two good friends who are fresh out of college. When I told them the subject of the conference, the conversation soon turned toward their own experiences with peer-to-peer file sharing. They told me–with no apparent guilt–about the peer-to-peer programs they use to download copyrighted content.

Now, these friends would be mortified to be caught stealing a candy bar at 7-11. And in my experience, their attitude is typical of young adults their age. I didn’t ask, but if I had, I suspect they would have been nonchalant about the possibility of being hit by an RIAA lawsuit. The odds of being caught are pretty low, and for logistical and PR reasons the RIAA can’t be too draconian with the people they catch.
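
Singleton’s deterrence point is, at bottom, expected-value arithmetic. Here is a toy sketch with purely hypothetical numbers (neither figure comes from her paper or from RIAA data) showing why low odds of detection swamp even a stiff penalty:

```python
# Expected cost of infringement = probability of being caught x penalty.
# Both numbers below are hypothetical round figures for illustration only.
p_caught = 1 / 10_000        # assumed chance a casual file-sharer is ever sued
penalty = 4_000.0            # assumed settlement cost in dollars if caught

print(p_caught * penalty)       # 0.4 -- forty cents of expected liability
print(p_caught * penalty * 10)  # 4.0 -- even a tenfold harsher penalty barely registers
```

On those (made-up) numbers, the rational response to the lawsuit campaign looks a lot like my friends’ nonchalance.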


Techdirt flags another inaccurate and alarmist story about the dangers of allowing others to borrow your WiFi connection:

To demonstrate the danger, trouble shooters from Data Doctors take us “War Driving”. It’s where hackers drive around neighborhoods, and for the sport of it, record the address of unsecured networks and map them out on the Internet.

James Chandler, Data Doctors: “It’s strictly a game to them, but they provide a tool for anybody who is interested in the malicious, the criminal intent.”

And after war driving for just a few minutes, we find dozens of open home networks, some even registered in the family name.

James Chandler, Data Doctors: “They’re basically telling you, ‘I’m free, I’m not secure, connect to me.’ All I have to do is click a button and I’m ready to connect.”

And from the car, they could access everything sent across the signal–account numbers, passwords, business documents. If that’s not enough, they can make you criminally liable by downloading illegal material, using your computer’s Internet address.

The article makes the unsupported assumption that anyone seeking a WiFi connection must have criminal intent. In point of fact, a lot of wardrivers are just looking for a convenient place to check their email. Mike also points out that a hacker can only access traffic that’s not encrypted, and nowadays encryption is a standard feature of any website that deals with sensitive data.

As I wrote back in March, the increasing ubiquity of free wireless networks is a generally positive development. While we can and should educate users about the possible risks and how to lock down their network if they choose to do so, we should also recognize that if done properly, the risk of sharing your WiFi is quite small, and the potential to make the world a more helpful place is significant.