Articles by Eli Dourado

Eli is a research fellow in the Technology Policy Program at the Mercatus Center at George Mason University. His research focuses on Internet governance, the economics of technology, and political economy. His personal site is elidourado.com.


People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.

— Adam Smith, The Wealth of Nations

As we approach the World Telecommunication/ICT Policy Forum, the debate over whether intergovernmental organizations like the International Telecommunication Union should have a role to play in Internet governance continues. One argument in favor of intergovernmentalism, advanced, for instance, by former ITU Counsellor Richard Hill (now operating his own ITU lobbying organization, delightfully named APIG), goes as follows:


Last week I attended an event on software patents at GW Law School. The event made me uncomfortable because it was—as one would expect at a law school event—dominated by lawyers. The concerns of the legal academics, practitioners, and lobbyists participating in the round table discussion were very different from those one would expect for a policy audience. For example, the participants agreed that there is no elegant way to partition software patents from other patents under current law and that current Supreme Court jurisprudence is unsophisticated, relying on the wrong sections of the U.S. Code.

Missing from the discussion was the single most important fact about patents: that they are negatively correlated with economic growth.


Benjamin Lennett and Danielle Kehl have an article in the Chronicle of Higher Education that is representative of a genre: worrying about the adverse consequences of mobile data “caps.” In this installment, Lennett and Kehl argue that pricing structures imposed by wireless carriers will limit the future of online education. “As a nation, we should embrace the potential benefits of online education. But we must not ignore the disparities that may keep many from taking advantage of those innovations,” they warn.

But are mobile data caps really what is holding back online education? Let’s take a look.

While there is evidence that patents encourage investment in industries like pharmaceuticals and materials science, their effect on many other industries is markedly negative. In the computing, software, and Internet space, patents represent a serious barrier to innovation: companies that need to assemble a huge number of licenses are subject to the holdout problem, and incumbent or has-been firms use patents as weapons against more innovative upstarts. In some cases, these firms deliberately transfer patents to entities known as “trolls,” which exist solely to sue the competition.

In theory, it is possible for firms to contract around these problems on a bilateral basis—as a basic reading of Coase suggests, because patents are inefficient in the tech industry, there exists in principle a bargain in which any two firms could agree to ignore patent law. The problem, of course, is the transaction costs. Transaction costs don’t merely add up in the tech industry; they multiply, because of holdout considerations and all the strategic maneuvering associated with firms competing on multiple margins.

I was thrilled, therefore, to see that Google is taking steps to solve this problem. It is proposing to set up a pool that would cross-license its patents to any other firm willing to reciprocate. All members of the pool would receive licenses to all of the patents in the pool. Unlike existing patent pools, this one aims for the broadest possible participation, and it is being created purely for defensive purposes, not to gain a competitive advantage over firms excluded from the pool.

The proposal is still at a relatively early stage—Google is seeking feedback on which of four candidate licenses the pool should use. The licenses differ in features such as the permanence of the grant (“sticky” vs. “non-sticky”) and whether firms would be required to license their entire portfolios. For what it’s worth, I hope they choose the Sticky DPL, which seems like the most aggressive of the licenses in terms of taking weapons off the table.

An excellent feature of the pool, particularly if the participants decide to go with the Sticky DPL, is that it would feature very strong network effects. If several firms license their entire patent portfolios to the pool, then that strongly increases the incentive of other firms to join the pool. There is an intriguing tension here between the stated aim of the pool and the incentives pool members have to force other firms to join—by suing non-pool members who infringe on the pool’s patents, they can increase the membership of the pool. I do not strongly oppose this, but I imagine that there will be some philosophical discussion about whether such actions would be right.

Another wrinkle is that firms might transfer several crucial patents to trolls right before they join the pool (keeping a license for themselves, of course). More generally, they may look for legal ways to reap the benefits of the pool while continuing to use trolls to skirmish with their competitors.

Nevertheless, this is an encouraging development that I hope succeeds. If, as I strongly suspect, we are on the wrong side of the Tabarrok curve, the creation of a large cross-licensing pool could further increase the dynamism of our most dynamic industry.

Today, the House Science Committee is holding a hearing on “Cyber R&D Challenges and Solutions.” Under consideration is a bill reintroduced by Rep. Mike McCaul that takes numerous steps that purport to expand the network security workforce. The bill passed overwhelmingly last year.

I have no doubt that, as we move more of our lives online, we need to draw more people into computer security. But just as we need more network security professionals, we need more programmers, geneticists, biomedical engineers, statisticians, and countless other professionals. We will also continue to need some number of doctors, lawyers, mechanics, plumbers, and grocery clerks. Does it make sense to introduce legislation to fine-tune the number of practitioners of every trade?

Of course not. Which raises the question: what is so special about computer security? And the answer, I think, is “nothing is so special about computer security.” More people will get trained in computer security if the returns to doing so are higher, and fewer people will get trained in computer security if the returns to doing so are lower. Entry into the computer security business is simply a function of supply and demand.

The Washington Post reports, “The median salary for a graduate earning a degree in security was $55,000 in 2009, compared with $75,000 for computer engineering.” Is it any surprise, then, that more smart, tech-savvy students have pursued the latter route in recent years?

Intervening in a market that shows no signs of failing can have lots of unintended consequences. Most obviously, subsidies would run the serious risk of drawing *too many* workers into the computer security workforce. Those workers might find that they had spent years investing in specialized skills without as much of a payoff as they expected. Tinkering could also affect the composition of people drawn into the field, with ill effect: by lowering the equilibrium salary, for example, subsidies would reduce the incentive for those with natural talent, who need no formal training, to work in security.

The bottom line is that a shortage of a particular kind of worker is a problem that solves itself. As salaries for security workers get bid up, more people will get training in security. The supply and demand dynamic is completely sufficient to get people into the correct professions in sufficient numbers.

The McCaul bill works through various subsidies and governmental reports to try to accomplish the same thing that the market would do if left to operate on its own. If the government wants to hire more computer security professionals, let them pay the money needed to draw people into this field. But let’s not jump through needless hoops to accomplish what should really be a straightforward task.

When Jerry and I started WCITLeaks, we didn’t know if our idea would gain traction. But it did. We made dozens of WCIT-related documents available to civil society and the general public—and in some cases, even to WCIT delegates themselves. We are happy to have played a constructive role, by fostering improved access to the information necessary for the media and global civil society to form opinions on such a vital issue as the future of the Internet. You can read my full retrospective account of WCITLeaks and the WCIT over at Ars Technica.

But now it’s time to look beyond the WCIT. The WCIT revealed substantial international disagreement over the future direction of Internet governance, particularly on the issues of whether the ITU is an appropriate forum to resolve Internet issues and whether Internet companies such as Google and Twitter should be subject to the provisions of ITU treaties. This disagreement led to a split in which 55 countries opted not to sign the revised ITRs, the treaty under negotiation.


Brookings has a new report out by Jonathan Rothwell, José Lobo, Deborah Strumsky, and Mark Muro that “examines the importance of patents as a measure of invention to economic growth and explores why some areas are more inventive than others.” (p. 4) Since I doubt that non-molecule patents have a substantial effect on growth, I was curious to examine the paper’s methodology. So I skimmed through the study, which referred me to a technical appendix, which referred me to the authors’ working paper on SSRN.

The authors are basically regressing log output per worker on 10-year-lagged measures of patenting in a fixed effects model using metropolitan areas in the United States.
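To make the specification concrete, here is a minimal sketch of the “within” (fixed-effects) estimator that such a model implies. This uses simulated data, not the authors’ dataset, and all the numbers (50 metros, 20 years, an assumed elasticity of 0.05) are invented for illustration; the actual study lags patenting by 10 years, which is omitted here for brevity.

```python
import numpy as np

# Simulate a metro-by-year panel in which unobserved metro-level
# heterogeneity is correlated with patenting. This correlation is the
# reason a pooled OLS regression would be biased, and the motivation
# for including metro fixed effects.
rng = np.random.default_rng(0)
n_metros, n_years = 50, 20
beta_true = 0.05  # assumed elasticity, for illustration only

metro_effect = rng.normal(0.0, 1.0, n_metros)
patents = rng.normal(0.0, 1.0, (n_metros, n_years)) + metro_effect[:, None]
log_output = (beta_true * patents
              + metro_effect[:, None]
              + rng.normal(0.0, 0.1, (n_metros, n_years)))

# Within transformation: demeaning each metro's series removes the fixed
# effect, leaving an ordinary least-squares problem in the deviations.
x = (patents - patents.mean(axis=1, keepdims=True)).ravel()
y = (log_output - log_output.mean(axis=1, keepdims=True)).ravel()
beta_hat = (x @ y) / (x @ x)
print(f"within estimate: {beta_hat:.3f}")  # close to beta_true
```

In this simulation the within estimator recovers the true coefficient even though metros with bigger unobserved advantages also patent more, which is precisely the confound the fixed-effects design is meant to absorb; whether patenting is a cause of growth or merely a marker of already-productive places is, of course, the question at issue.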


As some of you know, I’ve been closely following the World Conference on International Telecommunication, an international treaty conference in December that will revise rules, for example, on how billing for international phone calls is handled. Some participants are interested in broadening the scope of the current treaty to include rules about the Internet and services provided over the Internet.

I haven’t written much publicly about the WCIT lately because I am now officially a participant—I have joined the US delegation to the conference. My role is to help prepare the US government for the conference, and to travel to Dubai to advise the government on the issues that arise during negotiations.

To help the general public better understand what we can expect to happen at WCIT, Mercatus has organized an event next week that should be informative. Ambassador Terry Kramer, the head of the US delegation, will give a keynote address and take questions from the audience. This will be followed by what should be a lively panel discussion featuring me, Paul Brigner of the Internet Society, Milton Mueller of Syracuse University, and Gary Fowlie of the ITU, the UN agency organizing the conference. The event will be on Wednesday, November 14, at 2 pm at the W hotel in Washington.

If you’re in the DC area and are interested in getting a preview of the WCIT, I hope to see you at the event on Wednesday. Be sure to register now since we are expecting a large turnout.

The New WCITLeaks

September 6, 2012

Today, Jerry and I are pleased to announce a major update to WCITLeaks.org, our project to bring transparency to the ITU’s World Conference on International Telecommunications (WCIT, pronounced wicket).

If you haven’t been following along, WCIT is an upcoming treaty conference to update the International Telecommunication Regulations (ITRs), which currently govern some parts of the international telephone system, as well as other antiquated communication methods, like telegraphs. There has been a push from some ITU member states to bring some aspects of Internet policy into the ITRs for the first time.

We started WCITLeaks.org to provide a public hosting platform for people with access to secret ITU documents. We think that if ITU member states want to discuss the future of the Internet, they need to do so on an open and transparent basis, not behind closed doors.

Today, we’re taking our critique one step further. Input into the WCIT process has been dominated by member states and private industry. We believe it is important that civil society have its say as well. That is why we are launching a new section of the site devoted to policy analysis and advocacy resources. We want the public to have the very best information from a broad spectrum of civil society, not just whatever information most serves the interests of the ITU, member states, and trade associations.


Today is a big day for WCIT: Ambassador Kramer gave a major address on the US position, and the Bono Mack resolution is up for a vote in the House. But don’t overlook this Portuguese-language interview with ITU Secretary-General Hamadoun Touré.

In the interview, Secretary-General Touré says that we need $800 billion of telecom infrastructure investment over the next five years. He adds that this money is going to have to come from the private sector, and that the role of government is to adopt dynamic regulatory policies so that the investment will be forthcoming. It seems to me that if we want dynamism in our telecom sector, then we should have a free market in telecom services, unencumbered by…outdated international regulatory agencies such as the ITU.

The ITU has often insisted that it has no policy agenda of its own, that it is merely a neutral arbiter between member states. But in the interview, Secretary-General Touré calls the ETNO proposal “welcome,” categorically rejects Internet access at different speeds, and speaks in favor of global cooperation to prevent cyberwar. These are policy statements, so it seems clear that the ITU is indeed pursuing an agenda. And when the interviewer asks whether Dr. Touré sees any risks associated with greater state involvement in telecom, he says no.

If you’re following WCIT, the full interview is worth a read, through Google Translate if necessary. Hat tip goes to the Internet Society’s Scoop page for WCIT.