My final contribution to the June edition of Cato Unbound is up. I criticize Doug Lichtman's call for "more complicated [copyright policy] interventions that, by design, influence the development of technology tools and services":
Back in the late 1990s, companies started to develop MP3 players that were essentially miniature musical jukeboxes. The recording industry sued to block their sale, but was unsuccessful. The result was a surge of innovation, culminating in the iTunes/iPod ecosystem that now dominates the digital music marketplace. It's tough to say what would have happened if the recording industry had won that lawsuit, but I think it's safe to say that it would have taken longer for portable music players to emerge on the scene, and that the digital music ecosystem would be less advanced today.
Fast forward a few years, and we can see that hard drives are now large enough that one could easily build a set-top box that does for your DVD collection what the first iPod did for your CDs. Insert each DVD you own once, and the box copies it to your hard drive. From then on, you can watch any DVD you own with the touch of a button. And of course, you'd likely be able to do much more than that: stream movies wirelessly to different TVs around your house, stream them to yourself while you're on the road, transfer them to an iPod or other mobile device, and so forth. Even more important, the existence of a competitive DVD jukebox market would likely produce spin-off innovations, just as the MP3 player did, with people developing devices, software, and accessories that interoperate with the DVD jukeboxes.
Unfortunately, Hollywood sued the first DVD jukebox out of existence. And this time, thanks to the DMCA, they’ve won. CDs have no copy protection, so under copyright law anyone is free to make a device to play or manipulate music on CDs. But DVDs do have copy protection, so in effect no one may innovate in the DVD marketplace without Hollywood’s blessing.
Libertarians are rightly uneasy with government “industrial policy,” efforts to reshape the marketplace by legislative or administrative fiat. In a sense, I think the theory Lichtman articulates suffers from much the same defect. Policy makers will never know if the extra creative works supposedly stimulated by the DMCA are worth more than the foregone innovations. We should therefore be suspicious of proposals to encourage the development of one part of the market at the expense of another. Such efforts rarely turn out as well as policymakers hope.
I’ve just finished reading this amazing paper by Gerard N. Magliocca about the 19th-century phenomenon of “patent sharks.” In the 1860s, the Patent Office inaugurated an experiment with eased standards on design patents for farm tools. The result was a flood of low-quality patents, and the emergence of a new character in the patent system: the “patent shark” who would show up in a small town with a fistful of patents and use them to extort money from hapless farmers whose farm tools may have been covered by the shark’s patent portfolio.
Farmers' groups reacted with outrage and pushed Congress for legislative solutions that will sound eerily familiar to anyone following today's patent debates: an "innocent user" defense that would shield a farmer who unwittingly uses a patented tool, and changes to the rules regarding damages for infringement. These changes were never passed by Congress because they encountered the staunch opposition of the holders of other types of patents, who feared that they would undermine the rights of all patent holders.
The problem was ultimately solved when the Patent Office—and later the Congress—formally restored the higher bar for patentability that had prevailed prior to the Civil War. In other words, the solution was to abolish the class of patents that had created the "patent shark" problem in the first place. Magliocca suggests that the solution to today's patent troll problem may likewise be to abolish software and business method patents, the favorites of today's trolls.
He notes several similarities between design patents in the 19th century and software patents today. But one factor that I don't think he emphasizes enough is the sheer breadth of the subject matter being covered. The best patents—pharmaceutical patents, say—apply to a well-defined industry. Pharmaceutical companies need to monitor pharmaceutical patents in order to determine what they're allowed to do. In contrast, every business on Earth uses software and "business methods." Therefore, every business on Earth is a potential target. That means it's much easier for trolls to find potential victims. It also means that the targets—many of whom don't think of themselves as being in the software industry or the "business method industry"—will be ill-equipped to respond to a lawsuit.
Precisely the same observation applies to 19th century patent sharks. Because most people in the 19th century were farmers, patents on farm tools were likely to be infringed by millions of individual farmers who lacked the expertise to evaluate the patent and the resources to hire lawyers to defend themselves. Hence, 19th-century farmers, like 21st-century “business method” users, were easy pickings for patent sharks who preyed on their targets’ lack of preparation for patent litigation.
Magliocca closes his paper with the following slightly frustrating observation:
With respect to design patents granted in the past on incremental improvements, there was no real evidence that they helped anyone. The only concrete result was a school of rabid sharks. By contrast, it is hard to say that patents for software or business methods do not spur creativity in a meaningful way. Abolishing these patents may well cause more harm than the trolls do. Without more evidence on the effect opportunistic licensing has on high-tech investment, this analysis cannot rule out the possibility that there is a justification for these technology patents that breaks the parallel with the design patents that were abolished during the nineteenth century.
To a large extent, I'm sure this is just an instance of academic caution. But while I suppose it's true that the analysis in the paper "cannot rule out the possibility that there is a justification for these technology patents," I don't think it's "hard to say that patents for software or business methods do not spur creativity in a meaningful way." That is, indeed, what the vast majority of software developers will tell you, and it's also what most software executives would have told you until they started amassing patent portfolios of their own. It is, moreover, strongly suggested by the evidence Bessen and Meurer have amassed on the subject.
Tim Lee has published a Cato TechKnowledge piece discussing the growing problem of "orphan works"—copyrighted material whose owners can't be found. He highlights the work of our own Jerry Brito.
Oakland Wireless appears to be in trouble. Add it to the list.
[Actually, is anyone out there keeping a running tally of the muni failures? If so, let me know so I can just start linking to it instead of all the random blog links.]
With California’s law against talking on a cell phone while driving taking effect next week, Mike Masnick is asking what else should be banned while driving.
I think the TLF audience of public policy sophisticates could add to the tenor and quality of the list. I’ve done my part (comment #42), and I obviously need a life.
Via tenacious-Google-needler Scott Cleland, Vint Cerf apparently mused at a Personal Democracy Forum panel this week about whether the Internet should be nationalized. Erick Schoenfeld of TechCrunch, who heard and reported the comment first-hand, is not shy with his criticisms:
[N]ationalizing the Internet is a bad idea. (I can't believe I even have to say this). It would set a horrible precedent, would undermine confidence in the American economy, and would be difficult to pull off.
There are more reasons than that, and they include: slowing down decision-making about technical issues by subjecting them to regulatory processes; giving power over the Internet’s functioning to well-heeled interests most experienced and skilled at lobbying; giving power over Internet content to self-interested politicians; and much, much more.
An interesting thing about politics and public policy is that people who are expert in a subject matter are often deemed therefore to be experts in the public policies related to that subject matter. They’re not.
A fine technologist who has made great contributions, Vint Cerf has little awareness of the profound error it would be to make the Internet a public utility. Yet he's one of the leaders put forward to promote Google's "Internet for Everyone" campaign.
[Note: This is the third in a series of essays about the legacy of the Supreme Court's FCC v. Pacifica Foundation decision, which celebrates its 30th anniversary on July 3rd. Part 1 presented a general overview of the issue. Part 2 sketched a short history of FCC indecency regulation. This installment will examine the misguided logic of the Court's reasoning in Pacifica as it stood in 1978. Part 4 will then examine why that logic is even more misguided in light of modern developments.]
For the past three decades, regulation of television programming has been premised on the "pervasiveness rationale" as articulated in the landmark Supreme Court case FCC v. Pacifica Foundation. In Pacifica, in a 5-4 plurality decision, the Court held:
Of all forms of communication, broadcasting has the most limited First Amendment protection. Among the reasons for specially treating indecent broadcasting is the uniquely pervasive presence that medium of expression occupies in the lives of our people. Broadcasts extend into the privacy of the home and it is impossible completely to avoid those that are patently offensive. Broadcasting, moreover, is uniquely accessible to children.
In one portion of the decision, Justice John Paul Stevens, who authored the majority opinion, even referred to broadcast signals as an “intruder” into the home.
There were always serious problems with the “media-as-invader” logic of Pacifica.
Personalized medicine is touted as the wave of the future, but recent government action points to problems for Americans looking to join the health revolution. Last week, California’s Department of Public Health issued cease-and-desist letters to 13 genetic testing startups, threatening to deny service to consumers curious about their DNA.
“Any laboratory offering genetic tests to California residents must be licensed as a clinical laboratory in California. The tests must be ordered by a licensed physician and validated,” reads a statement on the department’s Web site. 23andMe didn’t require a physician’s note when this author and many others used its service, so it seems the company, along with most of the others, may be in trouble.
Despite this threat, 23andMe maintained this week that it is in compliance with California law and is continuing to operate in the state. However, not all genomics firms are taking such an aggressive stance.
Sciona, which tests genes in order to offer nutritional and fitness advice, also received a cease-and-desist letter. The company’s reaction was to yank its US$299 products off the market in both California and New York, another state that is targeting the industry.
Those attempting to read their own genetic data, not somebody else's, find it appalling that government would stand in the way. One's genome contains important personal information that each individual should be able to access, without a doctor acting as gatekeeper. Tests like the ones 23andMe supplies not only imply possible futures, but also reveal a lot about one's past. There is something frighteningly Orwellian about government bureaucrats deciding that individuals are not allowed to view their body's map without official permission.
It is appropriate, of course, for government agencies to enforce the laws on the books, which is what California's Department of Public Health is doing. However, when the old rules are so out of sync with the health landscape created by new technology, that is a call for new rules. As with anything in the technology industry, the faster things are fixed, the better.
[…]

Read more here.
Some surprising news from the folks at Broadcasting and Cable magazine: Barack Obama is now against restoring the Fairness Doctrine. In an email Wednesday to B&C, press secretary Michael Ortiz wrote: "Sen. Obama does not support re-imposing the Fairness Doctrine on broadcasters." With John McCain already firmly in the anti-fairness-regulation camp, both major presidential candidates are now on record against reinstituting the former FCC policy.
So is it time for fans of the First Amendment to break open the bubbly? Well, not quite. While welcome, the Obama statement was hardly a vigorous denunciation of the doctrine, or of its chilling effect on speech. In fact, the senator doesn't appear to actually oppose the rule so much as decline to support its return. (Notably, he hasn't yet signed on to the "Broadcaster Freedom Act," which would ban its re-imposition.) According to Ortiz, the reason for the senator's non-support is that he "considers this debate to be a distraction from the conversation we should be having about opening up the airwaves and modern communications to as many diverse viewpoints as possible."
Not because it is a violation of free speech principles, or because it is insidious government censorship, not even because it is counter-productive, but because it’s a “distraction.”
Beyond what Harper already said about it, I was searching for the right words to express how silly I find the far-fetched rhetorical B.S. being flung about to describe this quixotic new "Broadband for Everyone" crusade. And then I found this great little comment by Steve Boriss over at The Future of News blog. He really nails the utopian silliness that animates this movement in his essay, "Net neutrality proponents' ideals as contradictory as French Revolution's":
Government regulation always begins with a call from those who claim they are only trying to right some hard-to-argue-against wrongs, but whose consequences are poorly thought out. Today we learn of a new such party, InternetForEveryone.org, which has a mission so contradictory that it almost makes my head explode. Their ideals call to mind the French Revolutionists, who called for “liberty, equality, and fraternity,” not realizing that liberty and equality are incompatible — that making people equal requires liberty-suppressing force. The new group calls for guaranteed high speed Internet access for everyone (a basic right of all Americans, they say), lower usage prices, more competition, and more innovation. Tell me, if we force Internet providers to give access to everyone, then force them to charge less than the marketplace tells them they should, where will the money come from for innovation? And what would happen to the potential profits that might entice others to join in the competition? Guess it will have to come from taxpayers and that government will have to run the show. InternetForEveryone.org claims to be neutral on the net, but it is surely not neutral on government — they want a lot more of it.
Exactly. It’s ‘something-for-nothing’ economics meets utopian egalitarianism as applied to broadband. But, as Steve notes, there is no free lunch. Every time I debate one of the people or groups involved in this movement, I always ask questions like: What about incentives to invest and innovate? What role do they play in your model? Where is the risk capital going to come from to build these high-fixed cost networks going forward? How will those networks be upgraded over time? And so on.
And they never have any good answers. To the extent they have any answers at all, it always seems to come back to the idea of treating broadband networks like a lazy public utility. You know, because we've had so much success with those! And yet, this crowd seems to want to paint a revisionist history of public utilities and convince us that we are just ONE MORE muni fiber or muni wi-fi experiment away from getting it right! Uh-huh, sure we are. Meanwhile, taxpayers are bailing out those past failed experiments all over America right now.
The fundamental problem with the entire Net neutrality movement can be summarized as follows: They obsess about investment and innovation at the margin of networks but spend little time thinking about the preconditions for serious innovation and investment at the core of networks. Government micro-management ain't ever going to get us where we need to be in that regard.