Making a Cat That Barks

by on December 15, 2005 · 4 comments

This BusinessWeek column sounds very sensible:

Situations like this, together with the Sony BMG mess, have given the whole concept of DRM a bad name. To win public acceptance, the industries involved–content, information technology, and consumer electronics–are going to have to put maneuvering for advantage aside and stick to clear, consumer-first goals. Above all, users should not have to notice the existence of the particular DRM as long as they abide by clearly stated copying limitations. Digital content should use standard DRM technology built into players such as iTunes and Windows Media Player. And any content should play on any device that can physically display it, without regard to operating system.

The entertainment industry has a great opportunity for new markets, and the PC and consumer-electronics industries have an opening for new products. But realizing this potential will require all of them to show some respect for their customers.

This is an admirable sentiment. There’s just one problem: “standard DRM technology” is a contradiction in terms. There’s never been such a thing, and there never will be. DRM technology is proprietary by necessity.

As I’ve argued in the past, DRM schemes must be proprietary formats, with a single authority (say, Apple or Microsoft) setting the rules and deciding who may participate. Moreover, the security of the format is inversely proportional to the number of devices that adopt it. Every new device is another opportunity for hackers to break it.

I think it’s hard to over-estimate the importance of this point. It’s easy to gloss it over in policy debates, to assume that achieving interoperability is just a technical problem that the geeks are working on and will solve in a few years. But it’s not. Building an interoperable DRM is like making a cat that barks.

The problem is that the vast majority of the people who write about technology policy aren’t programmers. They don’t really have a clear idea of what DRM does, so they don’t have the technical background to evaluate the claims of the DRM snake-oil salesmen. When a big technology company announces an “open” DRM format, the tech press reports on it dutifully, without really pressing the company for details.

If they did, I suspect that they would find that the various “open” and “interoperable” DRM schemes now being developed are vapor-ware: years from completion and with a lot of the implementation details not quite worked out yet. It’s easy to talk about interoperable DRM in the abstract. But so far, no one has succeeded in actually implementing such a system. That’s not a coincidence, because what they’re trying to do is, as Ed Felten puts it, a “logical impossibility.”

Humorous Site of the Day

by on December 14, 2005 · 2 comments

I could waste countless hours perusing Patently Silly, a blog featuring ridiculous “inventions” that have been granted patent protection.

What’s scary is that these inventions are mostly things that would be obvious (or obviously useless) to a sixth grader. If there are dozens of those, imagine how many thousands of illegitimate patents there are on subjects requiring some technical know-how to evaluate obviousness and usefulness.

Gadget Hacking – RFID

by on December 14, 2005 · 4 comments

For those who think that consumers will be powerless in the face of the “worldwide RFID infrastructure” ™, I submit: DCist.

Wait. Isn’t DCist that local newsy ‘blog with plenty o’ nightlife information for D.C.-area residents? How could such an airy, lightweight site have anything to do with the privacy onslaught posed by RFID?

RFID is a highly technical challenge that furrows the brows of smart and serious technology analysts and privacy activists. It takes privacy expertise, public policymaking experience, and tech savvy to handle RFID. These things are in short supply among the – sorry to say it – great unwashed . . .

Balderdash.

DCist has posted a brief, entertaining recipe for hacking your Metro SmartTrip card. For entertainment, people are learning about technology. My favorite comment so far: “Big next winter: Smart trip mittens.”

Here’s the significance: Ordinary people are getting access to relevant information about where RFID is and how it works. Ordinary people are fully capable of understanding RFID. Ordinary people will use it to their advantage when they want and decline to use it when they want.

The premise of experts (a premise that serves the expert class quite well) is that people can’t figure this stuff out so they have to be protected from it by law and regulation, after thousands of hearings, meetings, forums, and conferences.

Repeat: balderdash.

This is evidence of what I wrote about some time ago: a variety of social forces will constrain and contain RFID.

Credit Where Due

by on December 13, 2005

I’ve been beating up on PFF a lot lately, so I think I ought to give credit where credit is due: their amicus brief from this May, urging the Supreme Court to grant cert in KSR v. Teleflex in order to reconsider the patent obviousness issue, is excellent. They do a fantastic job of explaining the dangers of granting patents too easily. I found this passage particularly entertaining:

The defects of such a doctrine may well be illustrated by the notorious Patent 6,368,227, “Method of Swinging on a Swing,” obtained by a five-year old whose parent happened to be a patent lawyer. It is also called the “sideways swinging” patent, because that is what it covers–the idea that a swing can be made to move sideways as well as back and forth by pulling on the chains in a particular way.

The Patent Commissioner ordered a re-examination of the ‘227 patent on May 21, 2002, and ultimately PTO found sufficient prior art in patents granted in 1939, 1943, and 1969 to result in its invalidation. But the case ought never to have gotten so far; scarce patent-examiner hours and public resources had to be expended to officially recognize the obvious. The difficulty the office faced resulted from the fact that the Federal Circuit’s standard forbade the examiners to take notice of what, literally, any child would know. As a newspaper report on the matter said, “The patent office is searching for documented proof that children have always powered their swings by pulling on the chains. Then, and only then, will it kill the patent as quietly as possible.”

Had USPTO been unable to find written proof of something known to all, then under the Federal Circuit test the patent would have stood. “[D]eficiencies of the cited references cannot be remedied by the Board’s general conclusions about what is ‘basic knowledge’ or ‘common sense.'” “‘Common knowledge and common sense,’ even if assumed to derive from the agency’s expertise, do not substitute for authority when the law requires authority.”

In the swing patent case, ultimately, the PTO reached the sensible result. But the Federal Circuit decisions present an obstacle to the office’s doing so in a significant number of cases. For example, in 1999, the Federal Circuit reversed the PTO’s rejection of a patent application for orange trash bags with jack-o-lantern faces. A prior art search had turned up instructions for a children’s craft project involving the drawing of pumpkin faces on large orange bags. But this was not sufficient, because the instructions referred to paper bags, not to trash bags.

All of the concerns they raise–that people don’t always publish obvious ideas, that spurious patents create a “landmine” for genuine innovators, that mere combinations of well-known elements don’t constitute a new invention–apply with a vengeance to software. It’s possible (I’ll have to do some more reading and thinking before I make up my mind) that revising the obviousness standard as they propose would solve the problems with software patents. Regardless, PFF’s proposals for revising the obviousness standard deserve a serious look by Congresscritters as they ponder the grim possibility of a future without Blackberries.

Update: I’m also curious what DeLong had in mind when he mentioned software patents in the post I criticized last week. Having read his brief, it’s clear he does understand the dangers of granting patents too freely, which makes me wonder why he singled out the “no software patents” view for criticism. The criticisms of software patents that I’ve seen, at least, rest heavily on the contention that software “inventions” are almost always obvious to a skilled practitioner of software development. I imagine he would at least agree with that critique, if not with the proposed remedy.

After 15 years of covering communications and media policy in Washington, I have found that the most important book to keep handy is not any book of law or economics, but rather a good dictionary. That’s because I constantly need to reassure myself that I haven’t forgotten the true meaning of some words in the English language after hearing how they are used (and abused) by Washington policymakers.

Take the word “voluntary,” for example. It’s a fairly simple word that most of us learned very early in life. I didn’t really feel that I needed to look it up in my dictionary until I started working in Washington. Here in Washington, you see, “voluntary” appears to mean something very different from what we learned long ago in school. Consider this week’s announcement that the cable industry will “voluntarily” be adopting “family-friendly” tiers of programming.


A Solution in Search of a Problem

by on December 13, 2005

Via TechDirt, the Wall Street Journal is reporting that Harper Collins is going to scan its books and provide the digital scans for use by search engines such as Google.

I don’t get it.

Presumably this was contemplated as a response to Google Book Search, but I don’t really see how it clarifies or addresses any of the concerns raised by that case. The question at issue in that case is whether Google has the right to scan and index publishers’ books without their permission. Obviously, if Harper Collins is giving Google the digital copies itself, then it’s implicitly giving Google permission, which kind of makes the lawsuit a moot point, doesn’t it?

Perhaps Harper Collins’s executives think that scanning the books themselves and letting Google “borrow” the digital copies would somehow make the process more secure. But that doesn’t make a lot of sense. This program simply changes the source of Google’s data. It doesn’t do anything, as far as I can tell, to change how and where the data is stored within Google’s search engine. Google is still going to need to keep local copies of the books (or at least indexes of them, from which the books can easily be reconstructed) on its servers for performance reasons, so it’s not likely to reduce the total number of copies in circulation.

My guess is that HC’s leadership simply isn’t thinking clearly about the way digital content works. If the “digital copies” in question were physical books, it would make sense for HC to keep the physical copies in its warehouse and “lend” them to Google and others to use and then “give back.” That would increase security because “the originals” would always stay in HC’s possession. But digital data doesn’t work like that. Digital data isn’t “moved,” it’s copied. So when Google “borrows” HC’s digital books, it is, in fact, making a copy of them. That copy is just as good as the original, and every bit as much a security risk.
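For the skeptical, here’s a toy sketch of that point in Python. The file names and contents are made up for the example; the point is only that “lending” a digital file is really a copy operation, the lender’s original never goes anywhere, and the borrowed copy is bit-for-bit identical to it.

```python
# Toy illustration: "lending" a digital file leaves the original untouched
# and hands over a bit-for-bit identical copy. File names and contents here
# are invented purely for the example.
import hashlib
import shutil
import tempfile
from pathlib import Path

def digest(path):
    """SHA-256 fingerprint of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

with tempfile.TemporaryDirectory() as workdir:
    original = Path(workdir) / "publisher_scan.pdf"
    original.write_bytes(b"full text of the scanned book ...")

    lent_copy = Path(workdir) / "borrowed_by_search_engine.pdf"
    shutil.copy(original, lent_copy)               # the "loan" is really a copy

    assert original.exists()                       # the publisher still has it
    assert digest(original) == digest(lent_copy)   # and the copy is identical
    print("copy is indistinguishable from the original:", digest(lent_copy))
```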

The other possibility is that they’re hoping the courts will be similarly confused by the analogy. Even though it doesn’t make any sense in a digital context, it’s possible that a judge will be persuaded that allowing the copyright holder to hold “the original” copy of a digital book is more secure or otherwise more legitimate than allowing Google to create “its own” original. HC’s move might be savvy legal strategy even if it doesn’t make any sense from a technical perspective.

Update: Jerry suggests that licensing this database to smaller search engines that lack the resources to scan the books on their own could be a nice revenue stream, which is an excellent point. However, I don’t see how that helps “protect authors’ rights,” which is what most news stories claim the point is. This story, for example, quotes an HC executive complaining that there are “too many digital copies” of the books around, which I think fundamentally misunderstands how search engines work. A good search engine needs to be able to do full-text searches of the book, and to do that, you almost certainly need to have a copy of the full text on your server. If the goal of this project is to protect authors’ rights, I still say they’re barking up the wrong tree.
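To make that concrete, here’s a minimal sketch of the kind of positional inverted index a full-text search engine builds. The book IDs, sample text, and function names are all invented for illustration, not a description of Google’s actual system, but the sketch shows why the indexer has to ingest the complete text in the first place, and why a positional index like this one contains enough information to reconstruct it.

```python
# Minimal positional inverted index: answering full-text queries requires
# ingesting the complete text of each book, and the resulting index can be
# used to reconstruct that text. Sample data is invented for illustration.
from collections import defaultdict

def build_index(books):
    """Map each word to the (book_id, position) pairs where it occurs."""
    index = defaultdict(list)
    for book_id, text in books.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((book_id, position))
    return index

def search(index, query):
    """Return IDs of books containing every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = {book_id for book_id, _ in index.get(words[0], [])}
    for word in words[1:]:
        results &= {book_id for book_id, _ in index.get(word, [])}
    return results

if __name__ == "__main__":
    books = {
        "hc-0001": "the quick brown fox jumps over the lazy dog",
        "hc-0002": "a digital copy is every bit as good as the original",
    }
    index = build_index(books)
    print(search(index, "digital copy"))   # {'hc-0002'}
```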

I just read last year’s Federal Circuit decision in NTP v. RIM. And if I’m reading it right (I should stress here that I’m not an expert on patent law), NTP’s patents covered a relatively broad class of wireless email services: more or less, wireless email systems in which the user could both view his email on a wireless device and download it to a desktop computer.

That’s simply ridiculous. Email has been around for more than a quarter century. Wireless technology has been around for decades. The idea of combining the two is blindingly obvious. (And it would have been pretty obvious even back in 1991, when the first NTP patent was granted.) Once the technologies for wireless transmission of digital data became cheap enough to be cost-effective for consumer products, it was inevitable that people would use them to exchange email.

In other words, once you’ve got wireless technology and an email network, combining the two is a “shallow” problem. It takes some engineering know-how to do, but it doesn’t require any great flashes of genius. This, I think, is true of virtually all programming tasks. The challenge in software development lies in managing the complexity created when you’re building a program that has thousands of components that must all work together. The best programmers are those who can make a program that’s more than the sum of its parts by organizing them in a particularly clever or elegant manner. But no one component by itself is an “invention.”

Imagine if, in 1920, somebody had tried to patent car radios. At that time, cars and radios were both well-known inventions, but (based on a very cursory Google search, at least) you couldn’t buy a car with a radio. The patent office, I assume, would have thrown the patent application out, ruling that combining two well-known devices in a common-sense way isn’t a new invention. It doesn’t take very much effort to think of the concept, and there’s no reason why someone should be able to extort money from everyone else who stumbles on the idea simply because he happened to think of it first.

Patent law requires that inventions be “non-obvious” precisely in order to prevent that kind of extortion. The idea is that the inventor should have to expend a significant amount of effort developing the new invention before the invention will merit the protection of patent law.

But viewed from this perspective, virtually all software “inventions” are obvious–that is, they involve combining well-known components (albeit a large number of them) in common-sense ways. They only look non-obvious to non-programmers because the non-programmers aren’t familiar with the underlying components. Unfortunately, non-programmers tend to be the ones who make decisions in patent cases.

Competing with Free

by on December 12, 2005 · 2 comments

I have to admit I’m surprised and a little saddened to see that Overpeer is being shut down. Overpeer worked for the recording industry to pollute peer-to-peer networks with bogus versions of its songs. Apparently, the peer-to-peer networks have instituted new user-rating systems that have made Overpeer’s tactics increasingly ineffective.

I’m surprised it happened so quickly. It was of course inevitable that the peer-to-peer programs would adapt by offering users ways to filter the bad songs out of the system, but I would think Overpeer could take countermeasures, such as automated positive rankings of the bogus songs. But it seems the peer-to-peer networks won this particular arms race in just three years.

This, I think, is one more data point in favor of the thesis that the record labels need to focus less on the stick of preventing piracy (although they should certainly do some of that) and more on the carrot of providing users with easy-to-use, convenient, and affordable legitimate download options. They’ve made some baby steps in the right direction, but they still mostly sell low-quality audio files encumbered with irritating and restrictive “digital rights management.” Improving the quality of the songs they sell online, and abandoning digital rights management, would be important steps toward enticing customers back into the legal fold.

In the long run, I think they’re going to need to be more radical. Google is probably the best model. Google gives away online services worth billions of dollars and funds them with ads. So here’s one model: imagine if the recording industry set up free, ad-supported Internet radio stations. They could do things that ordinary radio stations could never do. For example, users could be required to fill out a survey giving some basic demographic information (age, zip code, industry). Then the ads on each Internet radio stream could be targeted at that individual user. Advertisers could also buy up ads to play with particular playlists, of which there could be thousands. The Britney Spears playlist might have ads targeting teeny boppers, while the oldies playlist would have ads targeted at middle-aged people. This could conceivably generate considerably more revenue than traditional radio stations, since advertisers will pay more for precisely targeted advertising.
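For what it’s worth, here’s a rough sketch of how that targeting might be wired up. The demographic fields, playlists, and ads are all invented for the example; the point is just that matching ad inventory to a listener’s survey answers and chosen playlist is straightforward.

```python
# Rough sketch of the ad-targeting model described above: each listener fills
# out a short demographic survey, each playlist carries its own ad inventory,
# and the stream picks an ad whose targeting rules match the listener.
# Every field, playlist, and ad here is invented purely for illustration.
from dataclasses import dataclass, field

@dataclass
class Listener:
    age: int
    zip_code: str
    industry: str

@dataclass
class Ad:
    name: str
    min_age: int = 0
    max_age: int = 120
    industries: set = field(default_factory=set)   # empty set = any industry

    def matches(self, listener):
        in_age_range = self.min_age <= listener.age <= self.max_age
        in_industry = not self.industries or listener.industry in self.industries
        return in_age_range and in_industry

# Each playlist's ad slots are sold separately to advertisers.
playlist_ads = {
    "britney_spears": [Ad("acne cream", max_age=19), Ad("soda", max_age=25)],
    "oldies":         [Ad("retirement planning", min_age=50),
                       Ad("golf clubs", min_age=40)],
}

def pick_ad(playlist, listener):
    """Return the first ad in the playlist's inventory that targets this listener."""
    for ad in playlist_ads.get(playlist, []):
        if ad.matches(listener):
            return ad.name
    return "untargeted house ad"

print(pick_ad("oldies", Listener(age=55, zip_code="64111", industry="law")))
# -> "retirement planning"
```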

To be clear, I’m not claiming that peer-to-peer infringement is acceptable, or that the RIAA should stop trying to prevent it. But I also think they have to face the fact that, sooner or later, this is a war they’re likely to lose. So they need to be thinking about what they’re going to do if that happens. You can, in fact, compete with free (Google has made billions doing just that), but it requires more creativity than the recording industry has shown to date.

BlackBerry Extortion

by on December 12, 2005

I linked to this classic article on the problems of software patents on Saturday. I think this passage is worth highlighting:

Even the giants cannot protect themselves with cross-licensing from companies whose only business is to obtain exclusive rights to patents and then threaten to sue. For example, consider the New York-based Refac Technology Development Corporation, representing the owner of the “natural order recalc” patent. Contrary to its name, Refac does not develop anything except lawsuits–it has no business reason to join a cross-licensing compact. Cadtrak, the owner of the exclusive-or patent, is also a litigation company.

Refac is demanding five percent of sales of all major spread-sheet programs. If a future program infringes on twenty such patents–and this is not unlikely, given the complexity of computer programs and the broad applicability of many patents–the combined royalties could exceed 100% of the sales price. (In practice, just a few patents can make a program unprofitable.)

Sound familiar? When this was written, in 1991, Research in Motion was an obscure developer of wireless networking components, and the first BlackBerries were a decade away. NTP hadn’t even been founded yet. Yet this passage perfectly describes the RIM-NTP controversy. NTP doesn’t do anything useful; it’s strictly a lawsuit shop, or “patent trolling” firm. As Fortune describes it:

NTP has this remarkable power because it is nearing victory in its four-year-old patent litigation with Research in Motion, the maker of the BlackBerry. RIM faces the real likelihood of a court-ordered BlackBerry blackout (government devices would be exempted) unless it agrees to pay essentially whatever sum NTP names, which some analysts think will approach ten figures.

However the endgame plays out, it vividly illustrates a recurring lightning-rod issue in patent debates–one that pits the information technology industry, which favors reform, against many others, such as the pharmaceutical industry, which don’t. Should plaintiffs like NTP–which does not market a competing product, never has, and never will–be entitled to an automatic injunction shutting down a productive infringer such as RIM?

NTP was founded in 1991 by the late inventor Thomas Campana and his patent attorney, Donald Stout, of Arlington, Va. It has no employees and makes no products. Its main assets, Campana’s patents, have spent most of the past decade in Stout’s file drawer. But in 2002 a federal jury found that RIM had infringed five NTP patents that relate to integrating e-mail systems with wireless networks. An appellate court largely agreed in August 2005, and in late October the U.S. Supreme Court declined to issue a stay while it ponders whether to hear the case.

I think it’s impossible to over-emphasize the importance of this point: NTP is not a BlackBerry competitor marketing a competing product. Its only “product” is lawsuits against companies that have the misfortune of developing products that happen to resemble those described in NTP’s patents. How exactly does this kind of extortion “promote the progress of science and the useful arts?”

The Fortune article, by the way, is worth reading in full.

Cable Franchise Reform

by on December 12, 2005

I’ve got a new article in last Friday’s Kansas City Business Journal on the need for cable franchise reform. My article focuses on the Missouri system (since I work at a Missouri think tank), but this is an issue that’s applicable across the country. Most states (with Texas being the notable exception) have an outmoded “municipal franchising” regime in which each city government gets to make a Soviet-style five-year plan for cable service in its community. This might have made sense in the 1970s when each community only had one option for pay TV service, but it makes no sense whatsoever when virtually every consumer has satellite as an option, and the Baby Bells are pouring money into building out fiber networks in order to offer a third alternative. Today, the franchises themselves have become a major barrier to entry.

Texas dealt with this by replacing the local franchising system with a streamlined statewide franchise. Instead of having to negotiate with hundreds of city governments for permission to offer video service, you just file a single application with the state government. This is a big step in the right direction–one that other states should emulate.