May 2008

Lawrence Lessig has an op-ed in the New York Times today calling the orphan works bill now before Congress “unfair and unwise.” He agrees that the orphan works problem is real and merits an immediate response, but finds fault with the bill because it is unfair to copyright holders who have relied on existing law and “because for all this unfairness, it simply wouldn’t do much good.” Lessig writes: “The uncertain standard of the bill doesn’t offer any efficient opportunity for libraries or archives to make older works available, because the cost of a ‘diligent effort’ is not going to be cheap.” Instead Lessig suggests an alternative reform:

Congress could easily address the problem of orphan works in a manner that is efficient and not unfair to current or foreign copyright owners. Following the model of patent law, Congress should require a copyright owner to register a work after an initial and generous term of automatic and full protection.

For 14 years, a copyright owner would need to do nothing to receive the full protection of copyright law. But after 14 years, to receive full protection, the owner would have to take the minimal step of registering the work with an approved, privately managed and competitive registry, and of paying the copyright office $1.

This rule would not apply to foreign works, because it is unfair and illegal to burden foreign rights-holders with these formalities. It would not apply, immediately at least, to work created between 1978 and today. And it would apply to photographs or other difficult-to-register works only when the technology exists to develop reliable and simple registration databases that would make searching for the copyright owners of visual works an easy task.

I’ve addressed his concerns about fairness before, and I’ve also critiqued his proposal, but I’d like to restate that critique here.

An orphan work is a work one finds without any idea who owns it, or even whether it is copyrighted at all. The uncertainty is crippling: anyone who uses the work runs the risk of being sued for stiff damages. Works that would otherwise spawn new creation (and thereby promote the progress of science) therefore go unused.

Let’s say I find a photograph in my school’s archive that I would like to reproduce in a book I’m writing. The photo has no marks on it and there’s no other information, a true orphan work. How would Lessig’s proposal apply?

Megan McArdle’s critique of Dean Baker’s post on free trade is mostly solid, but I think her reply on copyright and patent protections is a little bit off base:

Property rights are not inconsistent with free trade. I cannot justify selling stolen televisions on the grounds that this is just the working of the free market. The US thinks, with good reason, that intellectual property protections benefit everyone in the country over the long run. Thus, it enforces them by preventing other industries from selling property here that has, legally, been stolen.

How is this different from labor and environmental standards, liberals will ask. Well, we have copyright and patents because otherwise, you have goods with an enormous positive externality, but virtually no positive internality. Companies that use patented ideas without paying for them are creating a big negative externality–reduced incentive to innovate–while internalizing all the benefit from doing so. This is one of those situations where we look for some sort of legal arrangement, which we might call, oh, “intellectual property law”, to keep those skewed incentives from making us all ultimately worse off.

In the case of labor and environmental standards, whatever negatives there are are largely internalized to the countries. The awfulness of low wages and environmental standards is presumably even more awful if you are already extremely poor with limited recourse to a safety net. You’re unlikely to end up with an inefficient outcome.

There are a number of problems with this argument:

It’s inaccurate, or at least begging the question, to say that a company that infringes a patent is “creating a big negative externality.” Such a company is certainly failing to create an incentive for future patenting, but this is only a negative externality if we assume as our baseline a world in which all infringers obtain licenses and all patent royalties create incentives for innovation. In the real world, neither of these conditions holds. For example, when an extremely poor nation allows local pharmaceutical companies to produce patented drugs for the local market, it is not necessarily the case that the patent holder is thereby deprived of significant income. Most of the people who buy such patent-infringing drugs would not have been able to afford the drugs at anything close to full price.


If you love video games and follow video game politics closely, then you really should add “Bruce on Games” to your reading list. It’s the blog of Bruce Everiss, a UK-based video games industry guru. I always enjoy reading his essays, and I almost always find myself in agreement with him. I’m not sure, however, that I would just let any kid of any age play Grand Theft Auto as he suggested in this essay a few weeks ago, “Let the Kids Play GTA IV.” I think the hyper-violent stuff should be kept away from the really young kids until parents think they are ready for it. Regardless, I absolutely love this passage from that essay:

It is the job of parents to bring up their children, it is not the job of government. Unfortunately anyone can have a child any time they want, if they are physically capable. There is no intelligence test, no aptitude test and no means test. So all sorts of unsuitable people become parents. And governments use this as an excuse to force stupid legislation on the rest of us. We have nanny states that poke their noses into areas where they have no business and where things would work a lot better without them.

Amen, brother. I also found myself giving a second “amen” out loud to this passage:

I’m on today’s Cato podcast, giving some background on the network neutrality fight.

Here’s a video highlighting the Peer-to-Patent project originated by Beth Noveck and New York Law School’s “Do Tank.”

Whether because of inappropriately low standards for granting patents or recent decades’ outburst of inventiveness in technological fields, the Patent and Trademark Office is swamped. Patent examiners lack the breadth of knowledge in relevant fields to do the job they should be doing on each patent application. Drawing on the knowledge of interested and knowledgeable people can only improve the process, and this project aims to do just that.

I’ve written favorably about Peer-to-Patent a couple of times, but here’s a cautionary note: A successful Peer-to-Patent would result in a dispersion of power from patent examiners and the USPTO to the participants in the project. Surface support from the USPTO notwithstanding, the application of public choice theory to bureaucracies tells us that the agency won’t give up this power without a fight.

Much of my draft paper, Private Prediction Markets and the Law, focuses on nuts-and-bolts fixes for the legal uncertainty that currently afflicts private prediction markets under U.S. law. I’ll say more about those in later posts to Agoraphilia and Midas Oracle. The paper also discusses a more theoretical and general issue, though: the benefits of designing regulatory schemes to include exit options.

The Commodity Futures Trading Commission recently issued a request for comments about whether and how it should regulate prediction markets. In earlier papers, I explained why the CFTC cannot rightly claim jurisdiction over many types of prediction markets. I recap that view in my most recent paper, but add some suggestions about how the CFTC might properly regulate some types of prediction markets. In brief, I suggest that the CFTC build exit options into any regulations it writes for prediction markets, giving those who run such markets the same sort of freedom of choice that U.S. consumers already enjoy with regard to using prediction markets, thanks to internet access to overseas markets like Intrade. Here’s an excerpt from the paper:

Those practical limits on the CFTC’s power should encourage it to write any new regulations so as to allow qualifying prediction markets to operate legally, and fairly freely, under U.S. law. . . . Ideally, the CFTC would offer prediction markets something like these three tiers, each divided from the next with clear boundaries.
  • Designated Contract Markets. Regulations designed for designated contract markets, such as the HedgeStreet Exchange, would apply to retail prediction markets that offer trading in binary option contracts and significant hedging functions.
  • Exempt Markets. Regulations for “exempt” markets, which impose only limited anti-fraud and manipulation rules, would apply to prediction markets that:

    • offer trading in binary option contracts;
    • thanks to market capitalization limits or other CFTC-defined safe harbor provisions do not primarily support significant hedging functions; and
    • offer retail trading on a for-profit basis.

  • No Action Markets. A general “no action” classification, similar to the one now enjoyed by the Iowa Electronic Markets, would apply to any market that duly notifies traders of its legal status and that is either:

    • a public prediction market run by a tax-exempt organization offering trading in binary option contracts but not offering significant hedging functions;
    • a private prediction market offering trading in binary option contracts, but not significant hedging functions, only to members of a particular firm; or
    • any prediction market that offers only spot trading in conditional negotiable notes.

Notably, regulation under either of the first two regimes would definitely afford a prediction market the benefit of the CFTC’s power to preempt state laws. It remains rather less clear whether the third and lightest regulatory regime would offer the same protection, though the cover afforded by its two “no action” letters has allowed the Iowa Electronic Markets to fend off state regulators. Markets that by default qualify for the third regulatory tier described above thus might want to opt into the second tier, so as to win a guarantee against state anti-gambling laws and the like. So long as they satisfy the first two conditions for such an “exempt market” status, public prediction markets run by non-profit organizations or private prediction markets that offer trading only to members of a particular firm should have that right. Why offer this sort of domestic exit option? Because it would, like the exit option already open to U.S. residents who opt to trade on overseas prediction markets, have the salutary effect of curbing the CFTC’s regulatory zeal.

The footnotes omitted from the above text include this observation: “Because they fall outside the CFTC’s jurisdiction, markets offering only spot trading in conditional negotiable notes could not opt into the second regulatory tier.”
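The three-tier scheme excerpted above amounts to a classification rule, which can be sketched as a toy function. The boolean attribute names below are hypothetical simplifications for illustration, not terms drawn from the paper or from CFTC regulations:

```python
# Toy sketch of the proposed three-tier classification for prediction
# markets. Attribute names are illustrative assumptions, not legal terms.

def classify_market(binary_options: bool,
                    significant_hedging: bool,
                    retail_for_profit: bool,
                    nonprofit_public: bool,
                    firm_internal: bool,
                    spot_conditional_notes: bool) -> str:
    """Map a market's simplified features to one of the proposed tiers."""
    # Tier 3: "no action" markets, including those outside CFTC jurisdiction
    if spot_conditional_notes:
        return "no action"
    if binary_options and not significant_hedging and (nonprofit_public or firm_internal):
        return "no action"
    # Tier 1: designated contract markets (binary options plus hedging)
    if binary_options and significant_hedging:
        return "designated contract market"
    # Tier 2: exempt markets (binary options, no significant hedging, retail for-profit)
    if binary_options and not significant_hedging and retail_for_profit:
        return "exempt"
    return "unclassified"

# e.g. an Iowa Electronic Markets-style nonprofit public market:
print(classify_market(True, False, False, True, False, False))  # → no action
```

A market that lands in the third tier by default could, as the post notes, opt into the second tier by meeting its conditions; the function above classifies only default status.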

Please feel free to download the draft paper and offer me your comments.

[Crossposted at Agoraphilia, Technology Liberation Front, and Midas Oracle.]

Over at the New York Times Bits blog, Eric Taub is wondering who is winning the (video game) console wars. But the more interesting question is: How is it that we have been lucky enough to have sustained, vigorous competition among three major platform developers for so long?

Honestly, I never understood how there was enough room for three competing consoles in the video game market. I figured that if consumers didn’t do in one of the platforms first, game developers would sink one of them in the name of simplifying development and minimizing costs. In fact, last October, an EA executive called for a “single, open platform” for developers to replace the competing console model. It would be interesting to see how a single platform would affect game development, but I think most of us find real benefits in having competing consoles at our disposal.

For example, I’m lucky enough to own both an Xbox 360 and a Sony PS3, and although most of the games I play are available on both, each system has its own advantages and keeps the other on its toes. Specifically, the Xbox offers an outstanding online marketplace with tons of great downloadable content, including HD movies and more TV shows than I can count. Sony, by contrast, is struggling to catch up to Microsoft’s online offerings, but the PS3 is an outstanding media player in its own right. Most electronics and home theater magazines agree that the PS3 is still the best Blu-ray player on the market today. And, although I don’t have a Nintendo Wii, I think we can all appreciate the innovative controller that Nintendo brought to the market and the way it has injected an entirely new element into the home console wars. Finally, I haven’t even mentioned the unique advantages that the PC platform offers gamers who are into simulators or more intense online, interactive gameplay than what consoles offer.

In sum, video game console competition is playing out quite nicely, even though I still find it hard to understand how all three systems (four if you include the PC market) continue to coexist.

Not all muni Wi-Fi experiments are failing, but some rather important ones seem to be in serious trouble. EarthLink is abandoning the Philadelphia Wi-Fi network, in which so many people placed great faith three years ago. And MetroFi is selling its muni Wi-Fi networks in Portland and other cities. I’ve been reading some stories and commentaries about what’s gone wrong, but I’d be interested in hearing others offer up their thoughts here. Here are a few general explanations that I’ve culled from these reports for you to build on, or just offer your own:

1. Wrong technology: Need to wait for WiMAX or something more efficient (scalable) than Wi-Fi.
2. Lack of demand, Part 1: Existing broadband providers are filling whatever need is out there.
3. Lack of demand, Part 2: Just not as many people want broadband as policymakers think.
4. Lack of investment or competence, Part 1: The private contractors didn’t know what they were doing or just didn’t invest the necessary resources.
5. Lack of investment or competence, Part 2: The local government didn’t know what they were doing or just didn’t have the heart in it.
6. Lack of awareness: Municipalities and corporate partners failed to promote the benefits of the systems.
7. Private machinations: It’s a conspiracy by private interests to quash the competition!
8. Wait, they’re not failing: We just need to give them more time to pan out.
9. Others???

Yglesias points to “Money Ruins Everything”, a paper by John Quiggin and Dan Hunter about the rise of peer production and its implications for public policy. It covers much the same ground as (and cites) Greg Lastowka and Dan Hunter’s excellent Cato Policy Analysis “Amateur-to-Amateur.” The basic point is that non-pecuniary motives have become more important in recent years, as illustrated by the success of projects like Wikipedia and Linux.

I’ll have more to say about the paper at Techdirt, but I wanted to note a couple of minor quibbles:

Criticism the first: On p. 235 the authors draw a distinction between free software (which has a sharp distinction between producers and consumers) and the blogosphere (in which producers and consumers are often one and the same). I don’t think this dichotomy works on either side of the ledger. On the one hand, free software has made the most progress in precisely those areas where software producers and consumers are the same people. A lot of Apache contributors, for example, work as commercial web developers and submit patches they’ve developed for their own use or the use of their clients. While there are certainly a lot more users than developers on almost any software project, it’s quite common for a project’s developers to be drawn from the same pool as its users.

I’m often asked what one can do to avoid becoming the victim of “identity theft” – actually identity fraud, the use of one’s personal information to impersonate that person, typically in the financial services world.

My advice is usually “not very much,” and I specifically recommend against any of the credit or ID theft monitoring services. My rough cost-benefit analysis of these services is that it isn’t worth $8 or $10 per month to avoid the relatively low risk of being a victim of any kind of serious identity fraud. Credit card fraud is the most common form of “identity theft.” It threatens no liability and only a little bit of inconvenience to most consumers in the United States – consumers who are prudent, anyway. And I’ve never understood what these services would or could do to prevent or mitigate a true impersonation fraud.
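The rough cost-benefit argument can be made concrete with a back-of-the-envelope expected-value calculation. The probability, loss, and mitigation figures below are illustrative assumptions for the sketch, not numbers from the post; only the $8-$10 monthly fee comes from the text:

```python
# Back-of-the-envelope expected-value check on ID theft monitoring.
# All probabilities and loss figures are assumed for illustration.

monthly_fee = 9.00          # roughly the $8-$10/month quoted in the post
annual_cost = 12 * monthly_fee

p_serious_fraud = 0.005     # assumed annual chance of serious identity fraud
expected_loss = 2000.00     # assumed out-of-pocket loss if it happens
mitigation = 0.5            # assumed fraction of the loss the service prevents

expected_benefit = p_serious_fraud * expected_loss * mitigation

print(f"Annual cost of service:  ${annual_cost:.2f}")
print(f"Expected annual benefit: ${expected_benefit:.2f}")
```

Under these assumptions the service costs on the order of $100 a year while its expected benefit is a few dollars; the conclusion flips only if you assume a far higher fraud probability or loss than most consumers face.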

The one thing they might do is place “fraud alerts” on your identity with credit bureaus, but that’s burning the village to save it. Anticipatorily sullying your own credit file may reduce your likelihood of being a subject of identity fraud, yes, but it destroys the benefit of having good credit in the first place – that’s what you’re trying to protect.

Now comes news that LifeLock, one of the most prominent purveyors of “proactive identity theft protection,” is being sued in several states. The allegations cluster around . . . oh, I’ll put it this way: B.S.ing people into paying them money. I don’t know whether the specific allegations are merited, or whether selling people assurance about something they needn’t fear is actionable, but my gut is that LifeLock is closer to a scam than a real service. It’s certainly not worth $100+ a year.

Check your bank and credit card statements when they come. You might get a copy of your credit file from each of the major credit bureaus if you’ve got a big financial transaction like a home purchase or refinancing. Other than that, my advice is to relax and have a good time. You’re not going to avoid being a subject of identity fraud using these services, and only in the rare, exotic case will being a victim of identity fraud cause you a great deal of harm.