January 2007

Every week, I look at a software patent that’s been in the news. You can see previous installments in the series here. This week, my starting point is this story reporting that one of National Instruments’ patents on its LabVIEW software has been upheld. I have not been able to determine for certain which patent was upheld, but I’ve arbitrarily chosen this one as a likely candidate. If anyone knows for sure which patent was upheld, or how I can look that up, please let me know.

Here’s the abstract for this week’s patent:

A method for programming a computer to execute a procedure, is based on a graphical interface which utilizes data flow diagrams to represent the procedure. The method stores a plurality of executable functions, scheduling functions, and data types. A data flow diagram is assembled in response to the user input utilizing icons which correspond to the respective executable functions, scheduling functions, and data types which are interconnected by arcs on the screen. A panel, representative of an instrument front panel having input and output formats is likewise assembled for the data flow diagram. An executable program is generated in response to the data flow diagram and the panel utilizing the executable functions, scheduling functions, and data types stored in the memory. Furthermore, the executable functions may include user defined functions that have been generated using the method for programming. In this manner, a hierarchy of procedures is implemented, each represented by a data flow diagram.
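To make the abstract concrete: the claimed method amounts to building a graph whose nodes are executable functions and whose arcs carry data between them, then deriving an execution order from the graph itself. Below is a minimal sketch of that idea. It is purely illustrative: the names are mine, and this is not how LabVIEW is actually implemented.

```python
from collections import defaultdict, deque

class Node:
    """One icon in the diagram: an executable function plus its input arcs."""
    def __init__(self, name, func, inputs=()):
        self.name = name        # label on the icon
        self.func = func        # the executable function the icon stands for
        self.inputs = inputs    # names of upstream nodes wired into this one

def run_diagram(nodes):
    """Execute a data flow diagram: a node fires once every one of its
    input arcs has delivered a value (a topological schedule)."""
    by_name = {n.name: n for n in nodes}
    indegree = {n.name: len(n.inputs) for n in nodes}
    dependents = defaultdict(list)
    for n in nodes:
        for src in n.inputs:
            dependents[src].append(n.name)
    ready = deque(name for name, deg in indegree.items() if deg == 0)
    results = {}
    while ready:
        name = ready.popleft()
        node = by_name[name]
        results[name] = node.func(*(results[s] for s in node.inputs))
        for dep in dependents[name]:
            indegree[dep] -= 1
            if indegree[dep] == 0:
                ready.append(dep)
    return results

# A trivial diagram: two constants flow into an adder, whose output flows
# into a display node standing in for the "front panel" output.
diagram = [
    Node("a", lambda: 2),
    Node("b", lambda: 3),
    Node("sum", lambda x, y: x + y, inputs=("a", "b")),
    Node("panel", lambda s: print("front panel reads:", s), inputs=("sum",)),
]
run_diagram(diagram)
```

Note how a whole diagram could itself be wrapped in a function and used as a node in a larger one, which is the hierarchy of procedures the abstract’s last sentence describes.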

Continue reading →

Since I’ve been criticizing a paper opposing neutrality regulation lately, it seemed only fair to criticize the other side for a change. A reader wrote in to point out that Bill Herman has recently released a new version of his paper advocating neutrality regulations. Before I get started with my critique, I want to mention that I’ve interacted with Herman in the past, and he’s a smart and thoughtful guy. I didn’t find myself persuaded by his paper, but it’s a well-written and thorough argument for his position.

In this post, I’ll critique one of the two arguments he makes in parts II and III of the paper, in which he sketches out the dangers posed by network discrimination: the contention that without new regulations, ISPs will begin censoring Internet content. It seems to me that the paper demonstrates one of the same shortcomings I noted in the Hahn/Litan paper: although it talks a lot about network discrimination in the abstract, it’s extremely vague about the details of how such a regime would actually work.

Continue reading →

…because if so, someone’s gotta send him this link:

Hat tip: EFF

Also, I’m pleased to see that EFF is demanding hearings to ensure that the Bush administration is telling the truth when it claims it’s finally complying with the law.

Kahle and Orphan Works

January 25, 2007

I’ve got a new article up at Ars about the Ninth Circuit’s Kahle decision:

Kahle plans to appeal the ruling to a larger panel of the Ninth Circuit, but his prospects don’t look good. With three Ninth Circuit judges already ruling against him, Kahle will face an uphill battle convincing the full Ninth Circuit that his arguments are different from those the Supreme Court rejected in Eldred.

That’s a shame, because Kahle’s lawsuit highlights a serious and growing problem. New technologies are greatly enhancing the opportunity to make better use of older creative works. Books that have traditionally sat unread on dusty library shelves can now be made available in searchable form via the Internet. Old films that once languished unwatched in vaults could be digitized and made available for consumers to view in their living rooms. The main thing standing in the way is copyright law.

If the courts ultimately reject Kahle’s arguments, the battle to free orphan works will shift back to Congress. Some scholars have suggested that Congress should enact an orphan works defense that would shield individuals who reproduced a copyrighted work after making a diligent effort to find the copyright holder. The UK’s Gowers Review has recommended that a similar rule be adopted in the European Union. Although this would not make orphan works as widely available as placing them in the public domain, it might be enough for the likes of Kahle and Google.

Since it was a quasi-news article, I didn’t spend much time discussing the case on the merits. Although I certainly hope they prevail, their argument didn’t strike me as terribly strong. And even if the courts are sympathetic to their argument on the legal merits, it’s hard to see what remedy the courts could fashion. They certainly can’t throw all works created between 1964 and 1977 into the public domain, nor could they realistically reinstate a registration system that’s atrophied over the last decade. About all they could conceivably do is rule that the works will fall into the public domain by some particular date unless Congress acts first to reinstate the registration system. But it seems unlikely that a Supreme Court that shied away from locking horns with Congress in Eldred would take the even more confrontational stance that’s urged in this case.

On Knowing Your Subject

January 25, 2007

“Cog” at The Abstract Factory has a pretty damning critique of the Hahn/Litan paper’s citation of RFCs. He considers each of the four RFCs in turn and makes a compelling argument that none of them says what Hahn and Litan claim it says.

Before I get to Cog’s post, I should note that I was wrong in saying they didn’t cite the RFCs in question. There are, apparently, a short version and a long version of the paper, and I was looking at the shorter version, from which the footnotes are omitted. But the citations do appear in the full version.

Anyway, I’ll just quote Cog’s comments on one of the four RFCs Hahn and Litan quote:

Continue reading →

Bad News for Apple

January 24, 2007

From the Financial Times:

Apple was dealt a blow in Europe on Wednesday when Norway’s powerful consumer ombudsman ruled that its iTunes online music store was illegal because it did not allow downloaded songs to be played on rival technology companies’ devices.

The decision is the first time any jurisdiction has concluded iTunes breaks its consumer protection laws and could prompt other European countries to review the situation.

The ombudsman has set a deadline of October 1 for Apple to make its codes available to other technology companies so that it abides by Norwegian law. If it fails to do so, it will be taken to court, fined and eventually closed down.

Apple, whose iTunes dominates the legal download market, has its proprietary system Fairplay. Songs and tunes downloaded through iTunes are designed to work with Apple’s MP3 player iPod, but cannot be played on rival devices.

Although I applaud the goal of increasing competition in the legal download market, I don’t think having Norwegian bureaucrats overseeing Apple’s software development process is a good solution. But as I’ve pointed out before, this shouldn’t be a surprise. Regulatory schemes like the DMCA (and the EUCD in Europe) frequently have unintended consequences. And those unintended consequences are often cited as a justification for enacting more regulations to mitigate the harms caused by the previous round of regulation.

What we need, instead, is deregulation: Congress should repeal the anti-circumvention provisions of the DMCA, so that companies are free to reverse-engineer Apple’s products in order to build compatible devices.

For markets, for fair use

January 24, 2007

In an op-ed in The American today (and also in comments to National Journal on the reintroduction of the Boucher fair use bill), PFF’s Patrick Ross writes that those of us who advocate reversing the DMCA and strengthening fair use rights have little faith in markets. According to him, curtailing the DMCA means government intervention in emerging markets.

What arguments like Patrick’s ignore is that copyright is unlike other property rights; it is a different animal. This is evident in the fact that the power to create copyright is one of the enumerated powers of Congress laid out in the Constitution. Copyright would not exist but for the grace of Congress. If Congress decides to create copyrights, it has complete discretion (within constitutional bounds) to set the outlines of copyright. Congress can decide, among many other parameters, that copyright lasts only one year, or 100, or any length of time in between. Therefore, whatever market in copyrighted works emerges once Congress has created copyright must conform to the shape of the copyright Congress created.

Continue reading →

More on the Hahn/Litan Paper

January 24, 2007

A couple of other quick points about the Hahn/Litan paper:

  • Throughout the paper, the authors fail to distinguish between neutrality as a means and neutrality as an end. The standard argument for regulation isn’t that all Internet services must operate at precisely the same speed. It’s that certain means of advantaging some traffic over others–namely, network providers setting up routing policies that prioritize incoming traffic based on who has paid extra for the privilege–will be damaging to the Internet as a whole. You can agree or disagree with that premise, but I don’t think it’s that hard of a point to understand. And it obviously doesn’t implicate services like Akamai, which aren’t network providers at all, and which achieve “non-neutral” ends through scrupulously neutral means. (A toy sketch of the paid-prioritization mechanism follows this list.)
  • The paper’s citation of Ed Felten is a little bit odd. They describe him as a “proponent of the end-to-end principle,” which he is, but they fail to mention that he ultimately comes down against new regulations. I think that’s unfortunate, because I think Felten’s line of argument–that discrimination is a complicated concept, and writing a good neutrality rule will be a lot more difficult than people expect–is pretty compelling. Indeed, it’s precisely the sort of argument that should be old hat to two old hands at analyzing regulatory issues, an arena where the law of unintended consequences is constantly rearing its ugly head. So it’s a little strange that the authors would implicitly lump Felten in with Wu and Lessig as a proponent of new regulation, rather than citing him as one of the most articulate skeptics of new regulation.
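As promised above, here is a toy sketch of the kind of paid prioritization the first bullet describes: a router that drains its queue according to who has paid, rather than strictly first-come-first-served. The sender names and the two-class scheme are invented for illustration; no actual ISP’s practice is being described.

```python
import heapq

# Hypothetical list of senders who have paid the network provider for priority.
PAID_SENDERS = {"bigco.example"}

class PriorityRouter:
    """A toy router that forwards paid traffic ahead of everyone else's."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker: preserves arrival order within a class

    def enqueue(self, sender, packet):
        # Paying senders get class 0 (served first); everyone else gets class 1.
        cls = 0 if sender in PAID_SENDERS else 1
        heapq.heappush(self._queue, (cls, self._seq, sender, packet))
        self._seq += 1

    def drain(self):
        while self._queue:
            cls, _, sender, packet = heapq.heappop(self._queue)
            print(f"forwarding {packet} from {sender} (class {cls})")

router = PriorityRouter()
router.enqueue("smallblog.example", "packet-1")
router.enqueue("bigco.example", "packet-2")
router.enqueue("smallblog.example", "packet-3")
router.drain()  # bigco's packet jumps the queue despite arriving second
```

Contrast that with Akamai, which speeds its customers up by caching their content closer to users; it never reorders anyone’s queue, which is why it reaches “non-neutral” ends through neutral means.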

I should emphasize that I agree with Hahn and Litan’s policy conclusion. And I certainly think it’s possible that priority-based pricing will be beneficial, and that’s a reasonable argument against premature regulatory intervention. But it doesn’t strike me as very likely, and I think the debate would be enhanced if those who do think it likely (on both sides of the debate) paid a little more attention to the details of how it would actually work. I think that if they did so, supporters of regulation would find that discrimination isn’t as big a threat as they’d imagined, and critics would find that it won’t solve as many problems as they hope.

The AEI-Brookings Joint Center has a new paper on network neutrality regulation.

The economic logic of the paper is impeccable; price discrimination often benefits consumers because it allows service providers to offer premium services to those willing to pay, while giving other consumers the option of bare-bones service at cut-rate prices. The example they use is airline seats: both first class and coach passengers benefit from the airlines’ price discrimination–those who care about comfort get a nicer ride, while those who care more about price get a cheaper ticket.
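To see that logic in miniature, here’s a toy calculation. All of the numbers are invented for illustration, and the model assumes the two cabins keep the two groups of travelers separate (the standard screening assumption):

```python
# Toy model: 10 travelers value a first-class seat at $900, 100 travelers
# value a coach seat at $150, and every seat costs the airline $100 to fly.
N_PREMIUM, V_PREMIUM = 10, 900
N_BUDGET, V_BUDGET = 100, 150
COST = 100

# With a single uniform price, the airline must choose between two options:
low_price_profit = (N_PREMIUM + N_BUDGET) * (V_BUDGET - COST)  # 110 * 50 = 5,500
high_price_profit = N_PREMIUM * (V_PREMIUM - COST)             # 10 * 800 = 8,000
# It picks the high price, and the 100 budget travelers don't fly at all.

# With two fares for two cabins, both groups are served and profit rises:
discrimination_profit = (N_PREMIUM * (V_PREMIUM - COST)
                         + N_BUDGET * (V_BUDGET - COST))       # 13,000

print(high_price_profit, discrimination_profit)  # 8000 13000
```

In the toy, a uniform price leaves the airline charging $900 and flying only ten passengers; with two fares, the budget travelers fly too and the airline earns more. That’s the sense in which price discrimination can leave both groups of consumers better off.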

But like most people commenting on this issue–on both sides of the debate–Hahn and Litan are frustratingly vague about how exactly the price discrimination regime would work. For example:

Continue reading →

My favorite press critic, Jack Shafer of Slate, penned a fun piece last week entitled "The Case for Killing the FCC and Selling Off the Spectrum." The essay builds heavily on the work of Tom Hazlett and Peter Huber, two fine libertarian minds that many of us here at the TLF admire. Here’s some of what Shafer has to say:

Although today’s FCC is nowhere near as controlling as earlier FCCs, it still treats the radio spectrum like a scarce resource that its bureaucrats must manage for the “public good,” even though the government’s scarcity argument has been a joke for half a century or longer. The almost uniformly accepted modern view is that the information-carrying capacity of the airwaves isn’t static, that capacity is a function of the technology and design architecture that inventors and entrepreneurs throw at spectrum. To paraphrase this forward-thinking 1994 paper, the old ideas about spectrum capacity are out, and new ones about spectrum efficiency are in.

Technology alone can’t bring the spectrum feast to entrepreneurs and consumers. More capitalism–not less–charts the path to abundance. Hazlett and others, going back to economist Ronald H. Coase in 1959, have advocated the establishment of spectrum property rights and would leave it to the market to reallocate the airwaves to the highest bidders. Such a price system would tend to encourage the further expansion of spectrum capacity.

Amen, brother. Read the whole thing.
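A closing aside on the engineering intuition behind “capacity is a function of technology”: the standard formalization is the Shannon–Hartley theorem, given below. This gloss is mine, not something Shafer, Hazlett, or Huber put in these terms.

```latex
% Shannon–Hartley: the maximum error-free bit rate C of a single link
% using bandwidth B (in hertz) at signal-to-noise ratio S/N:
C = B \log_2\left(1 + \frac{S}{N}\right)
```

The theorem bounds one link, not the system as a whole: better antennas and receivers raise S/N, smarter modulation and coding get closer to the bound, and spatial reuse (many low-power transmitters sharing a frequency, as in cellular networks) multiplies the number of links a band can support. Total capacity therefore grows with engineering investment even though physics fixes each individual link’s ceiling, which is exactly the sense in which spectrum “scarcity” is a function of technology rather than a constant of nature.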