I haven’t had time to read enough to know exactly what technology Research in Motion and NTP are fighting about, but I find it shocking to see accounts of the dispute like this one:

The patent office has issued preliminary rejections of all five NTP patents that a jury in 2002 decided RIM had infringed upon with the BlackBerry device and service.

NTP has downplayed those rulings as largely procedural, while RIM has called the rejections proof that the technology behind its popular BlackBerry handhelds and e-mail service is not stolen.

After three years of intense litigation, it still isn’t clear whether RIM’s technology is “stolen” or not. That should send chills down the spine of anyone who values private property and the rule of law. Perhaps the most basic characteristic of any good system of property rights is predictability. An economic actor needs to know where the property lines, physical or intellectual, are so that he can avoid crossing them without permission. Even copyright, which certainly has its share of fuzzy lines, at least offers a reliable way to stay on the safe side: when in doubt, don’t make unauthorized copies. That’s not true of software patents, which can cover anything that some patent office bureaucrat decides to declare an “invention.”

So what exactly was RIM supposed to do when it first deployed this technology? Patent searches are expensive, and even with unlimited resources it’s unlikely that you could ever find every patent your product might be construed to infringe. And even if you could find every such patent, some patents are so vague that it might not be possible to redesign your product to avoid infringing them.

Is this really how we want to run our software industry? Do we really want to require lawyers to inspect every line of computer code to make sure a programmer didn’t accidentally “invent” something that some other company previously patented?

Software isn’t like most R&D. A large software product consists of hundreds and hundreds of components that perform various tasks. Many of these components could be considered “inventions” by the patent office. Yet it’s unlikely that any of these “inventions” are unique–other programmers, dealing with similar challenges on other projects, are likely to independently “invent” the same techniques. Indeed, programmers consider such “inventions” to be so commonplace that they don’t bother to write them down. That’s why it’s often difficult to find prior art for things that are painfully obvious to any competent programmer.

In effect, patents create a legal minefield for software developers simply trying to go about their business. Because the patent office gives out patents so promiscuously, the developer has no way of predicting when code he writes might run afoul of somebody’s patent. That means that even if he developed every line of code himself, without looking at anyone else’s code, he still can’t be sure that somebody won’t come along and sue him for patent infringement.

If there’s a silver lining to this fiasco, it’s that all the Hill staffers, judges, and patent office bureaucrats who created the current mess will have the opportunity to think long and hard about software patent reform while they’re waiting for their BlackBerries to start working again. Geeks have been bitching about the problems with the software patent regime for more than a decade, but if our complaints won’t get their attention, maybe a judicially-mandated BlackBerry blackout will.

More on Susan Kennedy

by on December 9, 2005

California Gov. Arnold Schwarzenegger rocked the political world recently with the appointment of Susan Kennedy, a Democrat and a commissioner on the Public Utilities Commission (PUC), as his chief of staff. Republicans might feel snubbed, but Kennedy’s appointment is good for the technology sector.

A thriving technology sector is good for California, and next year key policy issues will affect both consumers and technology companies. These include the so-called “Consumer Bill of Rights,” cable franchise reform, and broadband deployment.

Although the Golden State is home to Silicon Valley, many legislators remain surprisingly unaware of how their actions affect innovation, economic growth, and consumer well-being. Now that Susan Kennedy is joining forces with the Governator, that ignorance should start to dissipate.

A hard-working and tough-talking Susan Kennedy didn’t know much about telecommunications issues when she was appointed to the PUC just under three years ago. But after a lot of reading, observing, and discussing, she came to the same conclusion that any honest and informed person would: the telecommunications sector is over-regulated.

Read more here.

I just finished reading the complaint (PDF) by the publishers in their suit against Google. Two things struck me. The first is that they are not asking for damages, only injunctive relief, even though if Google were found liable for willful infringement, it would be on the hook for up to $150,000 per book scanned. The Author’s Guild suit, on the other hand, does ask for damages, which has caused much consternation. The second interesting thing is that rather than challenging Kelly v. Arriba-Soft, the publishers are merely trying to distinguish it. Here’s what they say:

Google analogizes the Google Library Project’s scanning of entire books to its reproduction of the content of websites for search purposes. This comparison fails. On the Internet, website owners have allowed their sites to be searchable via a Google (or other) search engine by not adopting one or more technological measures. That is not true of printed books found in library shelves. Moreover, books in libraries can be researched in a variety of ways without unauthorized copying. There is, therefore, no “need,” as Google would have it, to scan copyrighted books.

So what do these things mean? Is it just magnanimity on the part of the publishers? It could be that while the authors and their lawyers are acting exactly as one would expect a class in a class action suit to act (maximize damages), the publishers want to continue to work with Google on (what used to be called) Google Print Publisher, so they don’t want to destroy Google. On the Kelly point, I think this signals that the publishers understand that a court would have to be unbelievably shortsighted not to see the wisdom of Kelly, namely that a fantastically valuable set of services (search engines, for starters) would be destroyed if they were saddled with the impossible transaction costs of having to ask permission of each site indexed. Maybe the publishers have figured out that their best course is to show that books are different. Of course, they’re not.
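As an aside, the main “technological measure” the complaint alludes to is the robots exclusion protocol: a site owner who doesn’t want to be indexed publishes a robots.txt file, and well-behaved crawlers consult it before copying anything. Here’s a minimal sketch of that crawler-side check using Python’s standard library (the URLs and user-agent string are illustrative):

```python
# A crawler-side check of the robots exclusion protocol, the opt-out
# mechanism the publishers' complaint alludes to. URLs are illustrative.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("http://example.com/robots.txt")
rp.read()  # fetch and parse the site's robots.txt, if any

# A well-behaved crawler skips any page the file disallows.
if rp.can_fetch("Googlebot", "http://example.com/books/page1.html"):
    print("indexing permitted")
else:
    print("site owner has opted out")
```

Note that the default is opt-out: the crawler may index unless the owner says otherwise. That default is precisely what makes indexing at web scale possible, and flipping it to opt-in would impose the impossible transaction costs described above.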

More Felten on CD-DRM

by on December 9, 2005

I don’t want to turn this blog into a Felten-summary service, but I couldn’t resist linking to a pair of fantastic posts over at Freedom to Tinker.

First, Ed Felten explains why we shouldn’t be surprised that MediaMax, like XCP, has security flaws. Security is all about managing risk, and SunComm, like First4Internet, designed its software with reckless disregard for the risks it might impose on users. So while the particular bugs that have been discovered were almost certainly honest mistakes, they would have been far less harmful had the company not been so cavalier about ordinary security practices in developing its spyware-like software.

In his second post, Prof. Felten explains that it was no coincidence that both XCP and MediaMax behaved like spyware. By its nature, DRM software is designed to restrict how users use their computers. Obviously, most users would rather not have that software on their computers at all. So in order to function, the software must deceive the user into installing it, and then must avoid detection and/or resist removal. And what do you know: those are exactly the design parameters that spyware authors face. Is it any wonder they came up with similar solutions?

Anyway, he explains all of this much better than I do, so go read his first post and his second post.

Need more proof that the a la carte debate has very little to do with economics and everything to do with content regulation? Well, here’s the Parents Television Council’s Brent Bozell in the Los Angeles Times yesterday, commenting on his desired outcome of an a la carte regulatory regime:

“Maybe you won’t have 100 channels, maybe you’ll only have 20. But good programming is going to survive, and you will get rid of some waste.”

Well isn’t that nice. Mr. Bozell is fine with consumer choices shrinking so long as what’s left on the air is the “good programming” that he desires. It just goes to show that, as I argued in an essay earlier this week, the fight over a la carte is really a moral battle about what we can see on cable and satellite television.

But is Mr. Bozell correct that a la carte “will get rid of some waste” on cable and satellite TV? As I suggest in my essay, it’s highly unlikely because one man’s trash is another man’s treasure. The networks that Mr. Bozell considers “waste” (Comedy Central, F/X, MTV, Spike, etc.) happen to be some of the most popular channels on cable and satellite today. And it’s likely to stay that way, even under an a la carte regulatory regime.

So, despite the crusade to “clean up” cable, people will still flock to those networks in fairly large numbers. And the channels that Bozell & Co. want everyone to get (religious and family channels) could be threatened by a la carte if too few people choose to continue subscribing.

Groping in the Dark

by on December 9, 2005

James DeLong: “I am not a programmer.”

He can say that again!

He’s got a whole post on the implications of multi-threading for open source software. All he really proves is that he doesn’t understand the software development process:

IMHO, much of the general discussion of FOSS, Microsoft, patents, and other software issues has been based on an unspoken premise that software is a mature industry, with its great leaps of innovation behind it, and that public policy should be devoted not to fostering innovation but to turning software into a cheap commodity and to preventing its purveyors from milking products for which they have already recovered the creation costs.

If this premise is wrong, if the situation is one in which massive leaps of creativity are needed, along with the funding for such leaps, then a great many currently popular policy recommendations–such as “no software patents” or “FOSS preferences”–go out the window.

It’s hard to even know where to start. I don’t know of anyone on the copyleft side who bases their support for FOSS on this “unspoken premise.” (Although it is, by definition, unspoken, so who knows?) Open source advocates argue that their development model is a better way of fostering innovation because it allows for the collaboration of thousands of the brightest people around the world. They believe they are at the cutting edge of software development, at least in certain domains. For example, there’s a reason that Apache, MySQL, PHP, and Perl are among the most popular tools in web development.

The policy implications he cites are just non sequiturs, and they show the same tendency to misrepresent (or maybe just fail to understand) his opponents. Programmers oppose software patents because they impede innovation by forcing software companies to hire lawyers to navigate the patent minefield. As is explained here, software is different from other kinds of inventions. Now, DeLong might not find that argument persuasive. But he should at least do us the courtesy of characterizing our arguments accurately. If he’s going to knock down straw men, he should make some effort to choose straw men that are at least tangentially related to his opponents’ actual arguments.

The “FOSS preferences” argument is equally nonsensical. That debate is about things like office software and mail servers. These are not applications at the cutting edge of high-performance computing. Whatever the merits of using commercial software in such circumstances, certainly promoting the development of better multi-threading software isn’t one of them. If someone proposes FOSS preferences in the military or the National Weather Service, then we can talk, but as far as I know no one has.

These errors, I think, are a symptom of DeLong’s general cluelessness about how software actually works. Virtually every sentence he writes about technology is confused. (As just one example, some of the highest-performance commercial operating systems are “basically a spin-off of 1970s Unix.” So what?) I could make this already too-long post even longer by fisking every sentence of his post and correcting all the confusion found therein. But what would be the point? DeLong clearly feels that his understanding of law and economics trumps geeks’ understanding of how the policies he advocates affect their profession.

When geeks complain that software patents are impeding their work, he misrepresents and belittles their arguments without bothering to understand them. When they point out that open source development methods have compelling advantages for certain kinds of applications, he misrepresents and belittles their accomplishments without really understanding them. And when they complain that DRM technologies lock open source software out of access to digital media, he pats them on the head and tells them that open source software isn’t that great anyway.

The problem is that most of the people making policy are just as clueless about technology as he is. So when he makes clueless but plausible-sounding arguments, most of them can’t tell the difference. And because he’s got a JD from Harvard and most geeks don’t, his arguments tend to carry more weight than ours do.

He says he “wants to hear more from the tech community.” That’s great. I just wish he’d listen.

Learning from the Old-Timers

by on December 8, 2005 · 2 comments

I don’t have a lot to add to Jim’s insightful post about software piracy and the varying approaches to it. I agree with Techdirt that the methodology the BSA’s study uses appears to be bogus: obviously, not everyone who’s currently pirating software would purchase it if they weren’t able to get a pirated copy. The BSA’s hand-waving (and, to my mind, unpersuasive) response to this argument is on page 14 of its report.

I also agree with Techdirt and Jim that it’s unfortunate that the BSA is funding shoddy research, because I agree with its conclusion: software piracy is bad for all of us because it reduces incentives for software development. (Whether cracking down on software piracy is the best use of scarce police resources is a more complicated question.) Bogus research like this paper makes it much easier for the anti-IP radicals of the world to simply dismiss everything the pro-IP side has to say, which I think is a mistake.

But if you’ll forgive me for jumping on my soapbox, I’d like to point out what the software industry is not doing, for the most part, in the face of widespread piracy of its products: it’s not resorting to anything resembling digital rights management, at least for ordinary consumer software. When I buy a copy of Office or Photoshop, I typically have to enter a serial number, but that’s about it. It doesn’t try to limit the number of times I can install the software on my computer. It doesn’t install spyware-like monitoring programs deep in the bowels of my operating system.
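To give a sense of how lightweight that protection is, here’s a toy sketch of the kind of serial-number check I’m describing: a purely local, one-time validation, with nothing installed in the operating system and nothing phoning home. (The check-digit scheme is invented for illustration; real products use fancier math, but the shape is the same.)

```python
# A toy serial-number check: local, one-time, no monitoring software.
# The check-digit scheme here is invented purely for illustration.
def is_valid_serial(serial: str) -> bool:
    digits = [int(c) for c in serial if c.isdigit()]
    if len(digits) != 16:
        return False
    # The last digit must match a checksum of the first fifteen.
    return sum(digits[:-1]) % 10 == digits[-1]

# The installer asks once, at install time, and never again.
print(is_valid_serial("1234-5678-9012-3450"))  # True
print(is_valid_serial("0000-0000-0000-0001"))  # False
```

The notable thing is what’s absent: no kernel drivers, no hidden processes, no calls home. That’s the contrast with the CD-DRM schemes discussed above.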

Continue reading →

The Congressional Research Service (CRS), which is basically a small, non-partisan think tank within Congress, just released a new report on the “Constitutionality of Applying the FCC’s Indecency Restriction to Cable Television.” While trying his best to avoid any controversial statements on the matter, the report’s author, Henry Cohen, a legislative attorney at CRS, concludes that “It appears likely that a court would find that to apply the FCC’s indecency restriction to cable television would be unconstitutional.” Cohen addresses the many different ways the courts might approach the issue, but points out that the government’s case would be weak in almost every respect.

This is an issue we have invested a lot of intellectual energy in at the Progress & Freedom Foundation. If you are interested in our take on the constitutionality of the many ways in which Congress might seek to expand content controls to cable and satellite television, you might want to read the following three reports:

* “Can Broadcast Indecency Regulations Be Extended to Cable Television and Satellite Radio?” by Robert Corn-Revere, PFF Progress on Point 12.8, May 2005.

* “Thinking Seriously about Cable & Satellite Censorship: An Informal Analysis of S-616, The Rockefeller-Hutchison Bill,” by Adam D. Thierer, PFF Progress on Point 12.6, April 2005.

* “‘Kid-Friendly’ Tiering Mandates: More Government Nannyism for Cable TV,” by Adam D. Thierer, PFF Progress Snapshot 1.2, May 2005.

The New Millennium Research Council today released a study on the benefits of broadband, which finds almost $1 trillion in benefits related to elderly and disabled Americans alone. It’s a pretty good study, authored by Brookings’ Robert Litan. Overall, it’s a good contribution to the debate, underscoring the importance of this technology. Of course, the number is just an estimate (as Litan himself says), since so many of the benefits of quick and easy Internet access just aren’t quantifiable. For instance, how do you value the innovations that haven’t happened yet? The analysis in the paper gives a pretty good sense of this unpredictability. The report is worth reading and is sure to come up often in broadband policy debates.

Perspectives on Piracy

by on December 8, 2005 · 4 comments

The Business Software Alliance is touting a study reporting that “Cutting the global piracy rate of 35 percent by 10 percentage points over four years could generate 2.4 million new jobs, $400 billion in economic growth and $67 billion in tax revenues worldwide.”

Tax revenue, huh? Sounds like a wonderful public-private partnership brewing. (Yes, sarcasm.)

What’s interesting about it is the interpretation, or lack of interpretation, being given the study in various quarters.

TechDirt, where I read about it first, provides a lot of interpretation:

Every so often the Business Software Alliance comes out with a press release, based on a study they paid IDC to do, where they misrepresent the issue of illegal software copying. They make huge claims that anyone with half a brain can see is incorrect. . . . The BSA pretends that every copy of software would have been bought if the copy wasn’t available. That seems to be their basis for saying it would help stimulate economies. They say things like: “Some companies know they are losing 40 percent of their business. If they could recoup that, they could employ more people.” Indeed, any company would like to sell more product–but many of the people copying software could never afford it, and never would buy it–so it’s pretty difficult to say they’re really “losses.” At the same time, the BSA seems to completely discount the other side of the equation. That is, companies who are illegally copying software are saving money that they can then invest in hiring more people. Also having the software often makes companies more productive, thereby helping the economy.
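To make TechDirt’s point concrete, here’s a toy back-of-the-envelope calculation showing how much work the “every copy is a lost sale” assumption does. All the numbers are invented for illustration; none come from the BSA/IDC report.

```python
# Toy numbers, invented for illustration (not from the BSA/IDC study).
pirated_copies = 1_000_000
retail_price = 300.00       # dollars per license

# BSA-style accounting: every pirated copy counts as a lost sale.
claimed_losses = pirated_copies * retail_price

# TechDirt's point: only some fraction of those users would ever
# have bought the software at full price.
conversion_rate = 0.10      # assumed, for illustration
plausible_losses = pirated_copies * retail_price * conversion_rate

print(f"claimed:   ${claimed_losses:,.0f}")    # claimed:   $300,000,000
print(f"plausible: ${plausible_losses:,.0f}")  # plausible: $30,000,000
```

Pick whatever conversion rate you like; the point is that the headline figure scales linearly with an assumption the report never defends.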

Over on another favorite resource, IP Central, the recounting of the BSA report entertains no such skepticism. Indeed, the conclusion is treated as obvious: more law enforcement. PFF was equally uncritical of the previous report, which TechDirt, a sensible market-oriented site, lambasted.

PFF is a good group of friends, old and new. They had a nice holiday reception last night and I took the opportunity to encourage a few of said friends to read TLF and, specifically, to engage with Tim Lee because he has a lot to say. (I ought to hurry and publish this post because he probably will have something to say about the BSA report before me if I don’t.)

Copyright is not only about income for content producers, but also about overall welfare. More nuance in our thinking about copyright seems warranted, and a more careful discussion of the issues among us free-market types is needed.