Patents

Nobel laureate Gary Becker and I are on the same page. He says patent terms should be short:

Major reforms to reduce these unproductive opportunities would include lowering typical patent length and the scope of innovations that are eligible for patents. The current patent length of 20 years (longer for drug companies) from the date of filing for a patent can be cut in half without greatly discouraging innovation. One obvious advantage of cutting patent length in half is that the economic cost from the temporary monopoly power given to patent holders would be made much more temporary. In addition, a shorter patent length gives patent holders less of an effective head start in developing follow on patents that can greatly extend the effective length of an original patent.

More importantly, he says we should carve out particularly troublesome areas, like software, from the patent system:

In narrowing the type of innovations that are patentable, one can start by eliminating the patenting of software. Disputes over software patents are among the most common, expensive, and counterproductive. Their exclusion from the patent system would discourage some software innovations, but the saving from litigation costs over disputed patent rights would more than compensate the economy for that cost. Moreover, some software innovations would be encouraged because the inability to patent software will eliminate uncertainty over whether someone else with a similar patent will sue and do battle in the courts.

[…]

In addition to eliminating patents on software, no patents should be allowed on DNA, such as identification of genes that appear to cause particular diseases. Instead, they should be treated like other scientific discoveries and be in the public domain. The Supreme Court recently considered a dispute over whether the genes with the BRCA1 and BRCA2 deviations that greatly raise the risk of breast cancer are patentable. Their ruling banned patenting of human DNA, and this is an important step in the right direction.

Other categories of innovations should also be excluded from the patent system. Essentially, patents should be considered a last resort, not a first resort, to be used only when market-based methods of encouraging innovations are likely to be insufficient, and when litigation costs will be manageable. With such a “minimalist” patent system, patent intermediaries would have a legitimate and possibly important role to play in helping innovators get and protect their patent rights.

It’s good to see a consensus for major reform developing among economists. I hope that legal scholars and policymakers will start to listen.

Today, the Obama administration announced 5 executive actions it is taking and 7 legislative proposals it is making to address the problem of patent trolls. While these are incremental steps in the right direction, they are still pretty weak sauce. The reforms could alleviate some of the litigation pressure on Silicon Valley firms, but there’s a long way to go if we want a patent system that maximizes innovation.

The proposals aim to reduce anonymity in patent litigation, improve review at the USPTO, give more protection to downstream users, and improve standards at the International Trade Commission, a venue which has been gamed by patent plaintiffs. These are all steps worth taking. But they’re not enough. The White House’s press release quotes the president as saying that “our efforts at patent reform [i.e. the America Invents Act, passed in 2011] only went about halfway to where we need to go.” Presumably the White House believes these steps will take us the rest of the way there.

But the problem with patents on computer-enabled inventions isn’t merely that they result in a lot of opportunistic litigation, though they do. The problem is that almost every new idea is actually pretty obvious, in the sense that it is “invented” at the same time by lots of companies innovating in the same space. Granting patents in a field where everyone is innovating in the same way at the same time is a recipe for slowing down, not speeding up, innovation. Instead of just getting on with the process of building great new products, companies have to file for patents, assemble patent portfolios, license patents from competitors who “invented” certain software techniques a few months earlier, deal with litigation, and so on. A device like a smartphone requires thousands of patents to be filed, licensed, or litigated.

If we really want to speed up innovation, we need to take bolder steps. New Zealand recently abolished software patents by declaring that software is not an invention at all. It would be terrific if the White House would get behind that kind of bold thinking. In the meantime, we’ll have to watch closely as the Obama administration’s executive actions are implemented and its legislative recommendations move through Congress. I hope for the best, but for now I’m not too impressed.

Alex Tabarrok, author of the ebook Launching The Innovation Renaissance: A New Path to Bring Smart Ideas to Market Fast, discusses America’s declining growth rate in total factor productivity, what this means for the future of innovation, and what can be done to improve the situation.

According to Tabarrok, patents, which were designed to promote the progress of science and the useful arts, have instead become weapons in a war for competitive advantage, with innovation as collateral damage. College, once a foundation for innovation, has been oversold. And regulations, passed with the best of intentions, have spread like kudzu and now impede progress to everyone’s detriment. Tabarrok puts forth simple reforms in each of these areas and also explains the role immigration plays in innovation and national productivity.

The US Patent and Trademark Office is starting to recognize that it has a software patent problem and is soliciting suggestions for how to improve software patent quality. A number of parties, including Google and the EFF, have filed comments.

I am on record against the idea of patenting software at all. I think it is too difficult for programmers, as they write code, to constantly check whether they are violating existing software patents, which are not, after all, easy to identify. Furthermore, any complex piece of software is likely to violate hundreds of patents owned by competitors, which makes license negotiation costly and far from straightforward.

However, given that the abolition of software patents seems unlikely in the medium term, there are some good suggestions in the Google and EFF briefs. They both note that the software patents granted to date have been overbroad, equivalent to patenting headache medicine in general rather than patenting a particular molecule for use as a headache drug.

This argument highlights one significant problem with patent systems generally: they depend on extremely high-quality review of patent applications to function effectively. If we’re going to have patents for software, or anything else, we need to take the review process seriously. Consequently, I would favor whatever increase in patent application fees is necessary to ensure that the quality of review is rock solid. Give the USPTO the resources it needs to comply with existing patent law, which seems to preclude such overbroad patents. Simply applying patent law consistently would reduce some of the problems with software patents.

Higher fees would also function as a Pigovian tax on patenting, disincentivizing patent protection for minor innovations. This is desirable because the licensing cost of these minor innovations is likely to exceed the social benefits the patents generate, if any.

While it remains preferable to undertake major patent reform, many of the steps proposed by Google and EFF are good marginal policy improvements. I hope the USPTO considers these proposals carefully.

Sean Flaim, an attorney focusing on antitrust, intellectual property, cyberlaw, and privacy, discusses his new paper “Copyright Conspiracy: How the New Copyright Alert System May Violate the Sherman Act,” recently published in the New York University Journal of Intellectual Property and Entertainment Law.

Flaim describes content owners’ early attempts to enforce copyright through lawsuits as a “public relations nightmare” that humanized piracy and created outrage over large fines imposed on casual downloaders. According to Flaim, the Copyright Alert System is a more nuanced approach by the content industry to crack down on copyright infringement online, one that arose in response to a government failure to update copyright law to reflect the nature of modern information exchange.

Flaim explains the six stages of the Copyright Alert System in action, noting his own suspicions about the program’s stated intent as an education tool for repeat violators of copyright law online. In addition to antitrust concerns, Flaim worries that appropriate cost-benefit analysis has not been applied to this private regulation system, and, ultimately, that private companies are being granted a government-like power to punish individuals for breaking the law.

Last week I attended an event on software patents at GW Law School. The event made me uncomfortable because it was—as one would expect at a law school event—dominated by lawyers. The concerns of the legal academics, practitioners, and lobbyists participating in the round table discussion were very different from those one would expect for a policy audience. For example, the participants agreed that there is no elegant way to partition software patents from other patents under current law and that current Supreme Court jurisprudence is unsophisticated, relying on the wrong sections of the U.S. Code.

Missing from the discussion was the single most important fact about patents: that they are negatively correlated with economic growth.

Continue reading →

While there is evidence that patents encourage investment in industries like pharmaceuticals and materials science, their effect on many other industries is markedly negative. In the computing, software, and Internet space, patents represent a serious barrier to innovation, as companies that need to assemble a huge number of licenses are subject to the holdout problem, and as incumbent or has-been firms use patents as weapons against more innovative upstarts. In some cases, these firms deliberately transfer patents to entities known as “trolls,” which exist solely for the purpose of suing the competition.

In theory, it is possible for firms to contract around these problems on a bilateral basis—as a basic reading of Coase suggests, because patents are inefficient in the tech industry, there exists in principle a bargain in which any two firms could agree to ignore patent law. The problem, of course, is the transaction costs. Transaction costs don’t merely add up in the tech industry; they multiply, because of holdout considerations and all the strategic maneuvering associated with firms competing on multiple margins.

I was thrilled, therefore, to see that Google is taking steps to solve this problem. They are proposing to set up a pool that would cross-license their patents to any other firms willing to reciprocate. All members of the pool would receive licenses to all of the patents in the pool. Unlike existing patent pools, this one seems aimed at the broadest possible participation, and it is being created purely for defensive purposes, not to gain a competitive advantage over firms excluded from the pool.

The proposal is still at a relatively early stage: Google is seeking feedback on which of four licenses the pool should use, which differ on features such as the permanence of licenses (“sticky” vs. “non-sticky”) and whether firms would be required to license their entire portfolios. For what it’s worth, I hope they choose the Sticky DPL, which seems like the most aggressive of the licenses in terms of taking weapons off the table.

An excellent feature of the pool, particularly if the participants decide to go with the Sticky DPL, is that it would feature very strong network effects. If several firms license their entire patent portfolios to the pool, then that strongly increases the incentive of other firms to join the pool. There is an intriguing tension here between the stated aim of the pool and the incentives pool members have to force other firms to join—by suing non-pool members who infringe on the pool’s patents, they can increase the membership of the pool. I do not strongly oppose this, but I imagine that there will be some philosophical discussion about whether such actions would be right.

Another wrinkle is that firms might transfer several crucial patents to trolls right before they join the pool (keeping a license for themselves, of course). More generally, they may look for legal ways to reap the benefits of the pool while continuing to use trolls to skirmish with their competitors.

But nevertheless, this is an encouraging development that I hope succeeds. If, as I strongly suspect, we are on the wrong side of the Tabarrok curve, the creation of a large cross-licensing pool could further increase the dynamism of our most dynamic industry.

Brookings has a new report out by Jonathan Rothwell, José Lobo, Deborah Strumsky, and Mark Muro that “examines the importance of patents as a measure of invention to economic growth and explores why some areas are more inventive than others.” (p. 4) Since I doubt that non-molecule patents have a substantial effect on growth, I was curious to examine the paper’s methodology. So I skimmed through the study, which referred me to a technical appendix, which referred me to the authors’ working paper on SSRN.

The authors are basically regressing log output per worker on 10-year-lagged measures of patenting in a fixed effects model using metropolitan areas in the United States.
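
To make that specification concrete, here is a minimal sketch of a fixed-effects regression of this kind. It is my own illustration, not the authors' code: the file name, the column names (metro, year, patents_per_worker, output_per_worker), and the use of statsmodels are all assumptions.

```python
# Illustrative sketch (not the authors' code): regress log output per worker on
# a 10-year lag of patenting, with metro-area and year fixed effects.
import numpy as np  # used by the formula below via np.log
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per metro area per year
df = pd.read_csv("metro_panel.csv")
df = df.sort_values(["metro", "year"])

# 10-year lag of the patenting measure within each metro area
# (assumes annual observations with no gaps)
df["patents_lag10"] = df.groupby("metro")["patents_per_worker"].shift(10)
df = df.dropna(subset=["patents_lag10"])

# Metro (entity) and year (time) fixed effects entered as dummy variables
model = smf.ols(
    "np.log(output_per_worker) ~ patents_lag10 + C(metro) + C(year)",
    data=df,
).fit()

print(model.params["patents_lag10"])  # coefficient of interest
```

Entering the fixed effects as dummy variables is just the simplest way to write this down; a dedicated within (panel) estimator would give the same coefficient on the lagged patenting measure.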

Continue reading →

When the smoke cleared and I found myself half caught-up on sleep, the information and sensory overload that was CES 2013 had ended.

There was a kind of split-personality to how I approached the event this year. Monday through Wednesday was spent in conference tracks, most of all the excellent Innovation Policy Summit put together by the Consumer Electronics Association. (Kudos again to Gary Shapiro, Michael Petricone and their team of logistics judo masters.)

The Summit has become an important annual event bringing together legislators, regulators, industry and advocates to help solidify the technology policy agenda for the coming year and, in this case, a new Congress.

I spent Thursday and Friday on the show floor, looking in particular for technologies that satisfy what I have coined The Law of Disruption: social, political, and economic systems change incrementally, but technology changes exponentially.

What I found, as I wrote in a long post-mortem for Forbes, is that such technologies are well-represented at CES, but mostly at the edges of the show, literally. Continue reading →

Gabriella Coleman, the Wolfe Chair in Scientific and Technological Literacy in the Art History and Communication Studies Department at McGill University, discusses her new book, “Coding Freedom: The Ethics and Aesthetics of Hacking,” which has been released under a Creative Commons license.

Coleman, whose background is in anthropology, shares the results of her cultural survey of free and open source software (F/OSS) developers, the majority of whom, she found, shared similar backgrounds and world views. Among these similarities were an early introduction to technology and a passion for civil liberties, specifically free speech.

Coleman explains the ethics behind hackers’ devotion to F/OSS, the social codes that guide its production, and the political struggles through which hackers question the scope and direction of copyright and patent law. She also discusses the tension between the overtly political free software movement and the “politically agnostic” open source movement, as well as what the future of the hacker movement may look like.
