Articles by Eli Dourado

Eli is a research fellow with the Technology Policy Program at the Mercatus Center at George Mason University. His research focuses on Internet governance, the economics of technology, and political economy. His personal site is elidourado.com.


In 2012, the US Chamber of Commerce put out a report claiming that intellectual property is responsible for 55 million US jobs—46 percent of private sector employment. This is a ridiculous statistic if you merely stop and think about it for a minute. But the fact that the statistic is ridiculous doesn’t mean that it won’t continue to circulate around Washington. For example, last year Rep. Marsha Blackburn cited it uncritically in an op-ed in The Hill.

In a new paper from Mercatus (here’s the PDF), Ian Robinson and I expose this statistic, and others like it, as pseudoscience. They are based on incredibly shoddy and misleading reasoning. Here’s the abstract of the paper:

In the past two years, a spate of misleading reports on intellectual property has sought to convince policymakers and the public that implausibly high proportions of US output and employment depend on expansive intellectual property (IP) rights. These reports provide no theoretical or empirical evidence to support such a claim, but instead simply assume that the existence of intellectual property in an industry creates the jobs in that industry. We dispute the assumption that jobs in IP-intensive industries are necessarily IP-created jobs. We first explore issues regarding job creation and the economic efficiency of IP that cut across all kinds of intellectual property. We then take a closer look at these issues across three major forms of intellectual property: trademarks, patents, and copyrights.

As they say, read the whole thing, and please share with your favorite IP maximalist.

There seems to be increasing chatter among net neutrality activists lately on the subject of reclassifying ISPs as Title II services, subject to common carriage regulation. Although the intent in pushing reclassification is to make the Internet more open and free, in reality such a move could backfire badly. Activists don’t seem to have considered the effect of reclassification on international Internet politics, where it would likely give enemies of Internet openness everything they have always wanted.

At the WCIT in 2012, one of the major issues up for debate was whether the revised International Telecommunication Regulations (ITRs) would apply to Operating Agencies (OAs) or to Recognized Operating Agencies (ROAs). OA is a very broad term that covers private network operators, leased line networks, and even ham radio operators. Since “OA” would have included IP service providers, the US and other more liberal countries were very much opposed to the application of the ITRs to OAs. ROAs, on the other hand, are OAs that operate “public correspondence or broadcasting service.” That first term, “public correspondence,” is a term of art that means, roughly, common carriage. The US government was OK with the use of ROA in the treaty because it would have essentially cabined the regulations to international telephone service, leaving the Internet free from UN interference. What actually happened was a failed compromise: ITU Member States created a new term, Authorized Operating Agency, that was arguably somewhere in the middle—the definition included the word “public” but not “public correspondence.” The US and other countries refused to sign the treaty out of concern that it was still too broad.

If the US reclassified ISPs as Title II services, that would arguably make them ROAs for ITU purposes (arguably, because it depends on how you read the definition of ROA and Article 6 of the ITU Constitution). This potentially opens ISPs up to regulation under the ITRs. This might not be so bad if the US were the only country in the world—after all, the US did not sign the 2012 ITRs, and it does not use the ITU’s accounting rate provisions to govern international telecom payments.

But what happens when other countries start copying the US, imposing common carriage requirements, and classifying their ISPs as ROAs? Then the story gets much worse. Countries that are signatories to the 2012 ITRs would have ITU mandates on security and spam imposed on their networks, which is to say that the UN would start essentially regulating content on the Internet. This is what Russia, Saudi Arabia, and China have always wanted. Furthermore (and perhaps more frighteningly), classification as ROAs would allow foreign ISPs to forgo commercial peering arrangements in favor of the ITU’s accounting rate system. This is what a number of African governments have always wanted. Ethiopia, for example, considered a bill (I’m not 100 percent sure it ever passed) that would send its own citizens to jail for 15 years for using VoIP, because VoIP decreases Ethiopian international telecom revenues. Having the option of using the ITU accounting rate system would make it easier to extract revenues from international Internet use.

Whatever you think of, e.g., Comcast and Cogent’s peering dispute, applying ITU regulation to ISPs would be significantly worse in terms of keeping the Internet open. By reclassifying US ISPs as common carriers, we would open the door to exactly that. The US government has never objected to ITU regulation of ROAs, so if we ever create a norm under which ISPs are arguably ROAs, we would be essentially undoing all of the progress that we made at the WCIT in standing up for a distinction between old-school telecom and the Internet. I imagine that some net neutrality advocates will find this unfair—after all, their goal is openness, not ITU control over IP service. But this is the reality of international politics: the US would have a very hard time at the ITU arguing that regulating for neutrality and common carriage is OK, but regulating for security, content, and payment is not.

If the goal is to keep the Internet open, we must look somewhere besides Title II.

My friend Tim Lee has an article at Vox that argues that interconnection is the new frontier on which the battle for the future of the Internet is being waged. I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless.

How the Internet used to work

The Internet is a network of networks. Your ISP is a network. It connects to other ISPs and exchanges traffic with them. Because these connections are often about equally valuable to both networks, they frequently take the form of “settlement-free peering,” in which the networks exchange traffic on an unpriced basis.

Not every ISP connects directly to every other ISP. For example, a local ISP in California probably doesn’t connect directly to a local ISP in New York. An ISP that wants to be sure its customers can reach every other network on the Internet has to purchase “transit” service from a bigger or more specialized ISP. Transit carries data along what used to be called “the backbone” of the Internet. Transit providers that exchange roughly equally valued traffic with other networks have settlement-free peering arrangements of their own with those networks.
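
To make the peer-or-buy-transit logic concrete, here is a minimal sketch. Everything in it is a stylized assumption—the 2:1 traffic-ratio threshold and the transit price are hypothetical, not any real network’s published policy—but it captures the basic rule of thumb: peer settlement-free when traffic is roughly balanced, pay when it isn’t.

```python
# Toy model of the peer-or-pay decision between two networks.
# Illustrative only: the 2:1 ratio threshold and the transit price
# are hypothetical, not any real network's published peering policy.

def interconnect_choice(sent_gbps: float, received_gbps: float,
                        max_ratio: float = 2.0,
                        price_per_gbps: float = 1000.0) -> str:
    """Guess how two networks would likely interconnect."""
    ratio = max(sent_gbps, received_gbps) / min(sent_gbps, received_gbps)
    if ratio <= max_ratio:
        # Roughly balanced traffic: the link is about equally valuable
        # to both sides, so neither pays the other.
        return "settlement-free peering"
    # Lopsided traffic: the net sender buys transit (or paid peering)
    # at some monthly price per Gbps of capacity.
    cost = price_per_gbps * max(sent_gbps, received_gbps)
    return f"paid transit, roughly ${cost:,.0f}/month"

print(interconnect_choice(90, 110))  # -> settlement-free peering
print(interconnect_choice(500, 40))  # -> paid transit, roughly $500,000/month
```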

How the Internet works now

A few things have changed in the last several years. One major change is that most major ISPs now have very large, geographically dispersed networks. For example, Comcast serves customers in 40 states, and other networks can peer with it in 18 different locations across the US. These 18 locations are connected to each other through very fast cables that Comcast owns. In other words, Comcast is not just a residential ISP anymore. It is part of what used to be called “the backbone,” although it no longer makes sense to call it that since there are so many big pipes crossing the country and so much traffic is exchanged directly through ISP interconnection.

Another thing that has changed is that content providers increasingly deliver a) bandwidth-intensive and b) time-sensitive content across the Internet. This has created the incentive to use what are known as content-delivery networks (CDNs). CDNs are specialized ISPs that locate servers right at the edge of terminating ISPs’ networks. There are a lot of CDNs—here is one list.

By locating at the edge of each consumer ISP’s network, CDNs are able to deliver content to end users with very low latency and at very fast speeds. For this service, they charge their customers. However, they also have to pay consumer ISPs for access to their networks: the traffic flows almost entirely in one direction, so without payment, CDNs would be making money by using up resources on the consumer ISP’s network.
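
Some back-of-the-envelope arithmetic shows why edge placement matters. The sketch below counts only propagation delay through fiber (light in fiber travels at roughly two-thirds the vacuum speed of light); the distances are hypothetical, and real-world latency adds routing and queuing overhead on top.

```python
# Why CDNs put servers at the network edge: propagation delay alone
# makes distant origins slow. Light in optical fiber travels at
# roughly 2/3 the vacuum speed of light. Distances are hypothetical.

C_FIBER_KM_PER_S = 300_000 * 2 / 3  # ~200,000 km/s in fiber

def min_rtt_ms(distance_km: float) -> float:
    """Lower bound on round-trip time over fiber, in milliseconds."""
    return 2 * distance_km / C_FIBER_KM_PER_S * 1000

print(f"origin server 4,000 km away: {min_rtt_ms(4000):.0f} ms RTT floor")
print(f"edge cache 50 km away:       {min_rtt_ms(50):.1f} ms RTT floor")
# ~40 ms vs ~0.5 ms before any routing or queuing overhead is added.
```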

CDNs’ payments to consumer ISPs are also a matter of equity among the ISP’s customers. Let’s suppose that Vox hires Amazon CloudFront to serve traffic to Comcast customers (it does). If the 50 percent of Comcast customers who want to read Vox suddenly used up so many network resources that Comcast and CloudFront needed to upgrade their interconnection, who should pay for the upgrade? The naïve answer is that Comcast should, because that is what its customers are paying it for. But the efficient answer is that the 50 percent who want to access Vox should pay, and the 50 percent who don’t shouldn’t. When Comcast charges CloudFront for access to its network, CloudFront passes those costs along to Vox, and Vox passes them along to readers in the form of advertising, the resource costs of the network are paid by those who use it and not by those who don’t.
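
A quick worked example, with entirely made-up numbers, shows the difference between the two allocation rules: suppose the interconnection upgrade costs $1 million and Comcast has 20 million subscribers, half of whom read Vox.

```python
# Who pays for the upgrade? All numbers are made up for illustration.
upgrade_cost = 1_000_000        # cost of the Comcast-CloudFront upgrade
subscribers = 20_000_000        # Comcast subscriber base
vox_readers = subscribers // 2  # the half who actually read Vox

# Naive rule: Comcast absorbs the cost, so every subscriber pays a
# share through the broadband bill, readers and non-readers alike.
naive_share = upgrade_cost / subscribers

# Efficient rule: Comcast bills CloudFront, CloudFront bills Vox, and
# Vox recovers the cost from its readers (here, through advertising).
efficient_share = upgrade_cost / vox_readers

print(f"naive: every subscriber bears ${naive_share:.2f}")
print(f"efficient: each Vox reader bears ${efficient_share:.2f}; "
      f"non-readers bear $0.00")
```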

What happened with the Netflix/Comcast dust-up?

Netflix used multiple CDNs to serve its content to subscribers. For example, it used a CDN provided by Cogent to serve content to Comcast customers. Cogent ran out of capacity and refused to upgrade its link to Comcast. As a result, some of Comcast’s customers experienced a decline in quality of Netflix streaming. However, Comcast customers who accessed Netflix with an Apple TV, which is served by CDNs from Level 3 and Limelight, never had any problems. Cogent has had peering disputes in the past with many other networks.

To solve the congestion problem, Netflix and Comcast negotiated a direct interconnection. Instead of Netflix paying Cogent and Cogent paying Comcast, Netflix is now paying Comcast directly. They signed a multi-year deal that is reported to reduce Netflix’s costs relative to what they would have paid through Cogent. Essentially, Netflix is vertically integrating into the CDN business. This makes sense. High-quality CDN service is essential to Netflix’s business; they can’t afford to experience the kind of incident that Cogent caused with Comcast. When a service is strategically important to your business, it’s often a good idea to vertically integrate.

It should be noted that what Comcast and Netflix negotiated was not a “fast lane”—Comcast is prohibited from offering prioritized traffic as a condition of its merger with NBC/Universal.

What about Comcast’s market power?

I think that one of Tim’s hang-ups is that Comcast has a lot of local market power. There are lots of barriers to creating a competing local ISP in Comcast’s territories. Doesn’t this mean that Comcast will abuse its market power and try to gouge CDNs?

Let’s suppose that Comcast is a pure monopolist in a two-sided market. It’s already extracting the maximum amount of rent that it can on the consumer side. Now it turns to the upstream market and tries to extract rent. The problem with this is that it can only extract rents from upstream content producers insofar as it lowers the value of the rent it can collect from consumers. If customers have to pay higher Netflix bills, then they will be less willing to pay Comcast. The fact that the market is two-sided does not significantly increase the amount of monopoly rent that Comcast can collect.
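A stylized numeric example makes the point; all of the values below are hypothetical. Suppose each subscriber values broadband-plus-Netflix at a fixed $100 a month, and any fee Comcast charges Netflix is passed through to the subscriber’s Netflix bill.

```python
# Two-sided-market sketch: all values hypothetical. Each subscriber
# values broadband-plus-Netflix at $100/month, and any interconnection
# fee Comcast charges Netflix is passed through to the Netflix bill.

def comcast_total_take(fee: float, bundle_value: float = 100.0,
                       netflix_base_price: float = 10.0) -> float:
    netflix_bill = netflix_base_price + fee
    # A monopolist ISP can charge at most what remains of the
    # subscriber's willingness to pay after the Netflix bill.
    broadband_price = bundle_value - netflix_bill
    # Comcast collects the broadband price plus the fee itself.
    return broadband_price + fee

for fee in (0, 2, 5):
    print(f"fee ${fee}/month -> Comcast collects "
          f"${comcast_total_take(fee):.0f}/month")
# $90/month in every case: each dollar extracted upstream is a dollar
# subscribers are no longer willing to pay Comcast directly.
```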

Interconnection fees paid to Comcast (and to virtually all other major ISPs) have nothing to do with Comcast’s market power and everything to do with the fact that the Internet has changed, in both structure and content. This is simply how the Internet works today. I use CloudFront, the same CDN that Vox uses, to serve even a small site like my Bitcoin Volatility Index. CloudFront negotiates payments to Comcast and other ISPs on my behalf and Vox’s. There is nothing unseemly about Netflix making similar payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).

For more reading material on the Netflix/Comcast arrangement, I recommend Dan Rayburn’s posts here, here, and here. Interconnection is a very technical subject, and someone with very specialized expertise like Dan is invaluable in understanding this issue.

NETmundial wrap-up


NETmundial is over; here’s how it went down. Previous installments (1, 2, 3).

  • The final output of the meeting is available here. It is being referred to as the Multistakeholder Statement of São Paulo. I think the name is designed to put the document in contention with the Tunis Agenda. Insofar as it displaces the Tunis Agenda, that is fine with me.
  • Most of the civil society participants are not happy. Contrary to my prediction, and in a terrible PR move, the US government (among others) weakened the language on surveillance. A statement on net neutrality also did not make it into the final draft. These were the top two issues for most civil society participants.
  • I of course oppose US surveillance, but I am not too upset about the watered-down language, since I don’t see this as an Internet governance issue. Also, unlike virtually all of the civil society people, I oppose net neutrality laws, so I’m pleased with that aspect of the document.
  • What bothers me most in the final output are two statements that seem to have been snuck in at the last moment by the drafters without approval from others. These are real shenanigans. The first is on multistakeholderism. The Tunis language said that stakeholders should participate according to their “respective roles and responsibilities.” The original draft of the NETmundial document used the same language, but participants agreed to remove it, indicating that all stakeholders should participate equally and that no stakeholders were more special than others. Somehow the final document contained the sentence, “The respective roles and responsibilities of stakeholders should be interpreted in a flexible manner with reference to the issue under discussion.” I have no idea how it got in there. I was in the room when the final draft was approved, and that text was not announced.
  • Similarly, language in the “roadmap” portion of the document now refers to non-state actors in the context of surveillance. “Collection and processing of personal data by state and non-state actors should be conducted in accordance with international human rights law.” The addition of non-state actors was also done without consulting anyone in the final drafting room.
  • Aside from the surveillance issue, the other big mistake by the US government was their demand to weaken the provision on intermediary liability. As I understand it, their argument was that they didn’t want to consider safe harbor for intermediaries without a concomitant recognition of the role of intermediaries in self-policing, as is done through the notice-and-takedown process in the US. I would have preferred a strong, free-standing statement on intermediary liability, but instead, the text was replaced with OECD language that the US had previously agreed to.
  • Overall, the meeting was highly imperfect—it was non-transparent, disorganized, inefficient in its use of time, and so on. I don’t think it was a rousing success, but it was nevertheless successful enough that the organizers were able to claim success, which I think was their original goal. Other than the two last-minute additions that I saw (I wonder if there are others), nothing in the document gives me major heartburn, so maybe that is actually a success. It will be interesting to see whether the São Paulo Statement is cited in other fora, and whether this process is repeated next year.

Today is the second and final day of NETmundial and the third in my series (parts 1 and 2) of quick notes on the meeting.

  • Yesterday, Dilma Rousseff did indeed sign the Marco Civil into law as expected. Her appearance here began with the Brazilian national anthem, which is a very strange way to kick off a multistakeholder meeting.
  • The big bombshell in Rousseff’s speech was her insistence that the multilateral model can peacefully coexist with the multistakeholder model. Brazil had been making a lot of pro-multistakeholder statements, so many of us viewed this as something of a setback.
  • One thing I noticed during the speech was that the Portuguese word for “multistakeholder” literally translates as “multisectoral.” This goes a long way toward explaining some of the disconnect between Brazil and the liberals. Multisectoral means that representatives from all “sectors” are welcome, while multistakeholder implies that every stakeholder is welcome to participate, even if stakeholders sometimes organize into constituencies. This is a pretty major difference, and NETmundial has been organized on the former model.
  • The meeting yesterday got horribly behind schedule. There were so many welcome speeches, and they went so much over time, that we did not even begin the substantive work of the conference until 5:30pm. I know that sounds like a joke, but it’s not.
  • After three hours of substantive work, during which participants made 2-minute interventions suggesting changes to the text, a drafting group retreated to a separate room to work on the text of the document. The room was open to all participants, but only the drafting group was allowed to work on the drafting; everyone else could only watch (and drink).
  • As of this morning, we still don’t have the text that was negotiated last night. Hopefully it will appear online some time soon.
  • One thing to watch for is the status of the document. Will it be a “declaration” or a “chairman’s report” (or something else)? What I’m hearing is that most of the anti-multistakeholder governments, like Russia and China, want it to be a chairman’s report because that implies a lesser claim to legitimacy. Brazil, the host of the conference, presumably wants to make a maximal claim to legitimacy. I tend to think that there’s enough wrong with the document that I’d prefer the outcome to be a chairman’s report, but I don’t feel too strongly.

As I blogged last week, I am in São Paulo to attend NETmundial, the meeting on the future of Internet governance hosted by the Brazilian government. The opening ceremony is about to begin. A few more observations:

  • The Brazilian Senate passed the landmark Marco Civil bill last night, and Dilma Rousseff, the Brazilian president, may use her appearance here today to sign it into law. The bill subjects data about Brazilians stored anywhere in the world to Brazilian jurisdiction and imposes net neutrality domestically. It also provides a safe harbor for ISPs and creates a notice-and-takedown system for offensive content.
  • Some participants are framing aspects of the meeting, particularly the condemnation of mass surveillance in the draft outcome document, as civil society v. the US government. There is a lot of concern that the US will somehow water down the surveillance language so that it doesn’t apply to the NSA’s surveillance. WikiLeaks has stoked some of this concern with breathless tweets. I don’t see events playing out this way. I am as opposed to mass US surveillance as anyone, but I haven’t seen much resistance from the US government participants in this regard. Most of the comments by the US on the draft have been benign. For example, WikiLeaks claimed that the US “stripped” language referring to the UN Human Rights Council; in fact, the US hasn’t stripped anything because it is not in charge (it can only make suggestions), and eliminating the reference to the HRC is actually a good idea because the HRC is a multilateral, not a multistakeholder, body. I expect a strong anti-surveillance statement to be included in the final outcome document. If it is not, it will probably be other governments, not the US, that block it.
  • In my view, however, the privacy section of the draft still needs work. In particular, it is important to cabin the paragraph so that it addresses governmental surveillance without interfering with voluntary, private arrangements in which users disclose information in exchange for free services.
  • I expect discussions over net neutrality to be somewhat contentious. Civil society participants are generally for it, with some governments, businesses, parts of the technical community, and yours truly opposed.
  • Although surveillance and net neutrality have received a lot of attention, they are not the important issues at NETmundial. Instead, look for the language that will affect “the future of Internet governance,” which is after all what the meeting is about. For example, will the language on stakeholders’ “respective roles and responsibilities” be stricken? This is language held over from the Tunis Agenda and it has a lot of meaning. Do stakeholders participate as equals or do they, especially governments, have separate roles? There is also a paragraph on “enhanced cooperation,” which is a codeword for governments running the show. Look to see in the final draft if it is still there.
  • Speaking of the final draft, here is how it will be produced: During the meeting, participants will have opportunities to make 2-minute interventions on specific topics. The drafting group will make note of the comments and then retreat to a drafting room to make final edits to the draft. This is, of course, not really the open governance process that many of us want for the Internet—select, unaccountable participants have the final say. Yet two days is not long enough to have a truly open, free-wheeling drafting conference. I think the structure of the conference, driven by the perceived need to produce an outcome document with certainty, is unfortunate and somewhat detracts from the legitimacy of whatever is produced, even though I expect the final document to be OK on substance.

Pre-NETmundial Notes


Next week I’ll be in São Paulo for the NETmundial meeting, which will discuss “the future of Internet governance.” I’ll blog more while I’m there, but for now I just wanted to make a few quick notes.

  • This is the first meeting of its kind, so it’s difficult to know what to expect, in part because it’s not clear what others’ expectations are. There is a draft outcome document, but no one knows how significant it will be or what weight it will carry in other fora.
  • The draft outcome document is available here. The web-based tool for commenting on individual paragraphs is quite nice. Anyone in the world can submit comments on a paragraph-by-paragraph basis. I think this is a good way to lower the barriers to participation and get a lot of feedback.
  • I worry that we won’t have enough time to give due consideration to the feedback being gathered. The meeting is only two days long. If you’ve ever participated in a drafting conference, you know that this is not a lot of time. What this means, unfortunately, is that the draft document may be something of a fait accompli. Undoubtedly it will change a little, but the number of changes that can be contemplated will be limited by sheer time constraints.
  • Time will be even more constrained by the absurd amount of time allocated to opening ceremonies and welcome remarks. The opening ceremony begins at 9:30 am and the welcome remarks are not scheduled to conclude until 1 pm on the first day. This is followed by a lunch break, and then a short panel on setting goals for NETmundial, so that the first drafting session doesn’t begin until 2:30 pm. This seems like a mistake.
  • Speaking of the agenda, it was not released until yesterday. While NETmundial has indeed been open to participation by all, it has not been very transparent. An earlier draft outcome document had to be leaked by WikiLeaks on April 8. Not releasing an agenda until a few days before the event is also not very transparent. In addition, the processes by which decisions have been made have not been transparent to outsiders.

See you all next week.

Andrea Castillo and I have a new paper out from the Mercatus Center entitled “Why the Cybersecurity Framework Will Make Us Less Secure.” We contrast emergent, decentralized, dynamic provision of security with centralized, technocratic cybersecurity plans. Money quote:

The Cybersecurity Framework attempts to promote the outcomes of dynamic cybersecurity provision without the critical incentives, experimentation, and processes that undergird dynamism. The framework would replace this creative process with one rigid incentive toward compliance with recommended federal standards. The Cybersecurity Framework primarily seeks to establish defined roles through the Framework Profiles and assign them to specific groups. This is the wrong approach. Security threats are constantly changing and can never be holistically accounted for through even the most sophisticated flowcharts. What’s more, an assessment of DHS critical infrastructure categorizations by the Government Accountability Office (GAO) finds that the DHS itself has failed to adequately communicate its internal categories with other government bodies. Adding to the confusion is the proliferating amalgam of committees, agencies, and councils that are necessarily invited to the table as the number of “critical” infrastructures increases. By blindly beating the drums of cyber war and allowing unfocused anxieties to clumsily force a rigid structure onto a complex system, policymakers lose sight of the “far broader range of potentially dangerous occurrences involving cyber-means and targets, including failure due to human error, technical problems, and market failure apart from malicious attacks.” When most infrastructures are considered “critical,” then none of them really are.

We argue that instead of adopting a technocratic approach, the government should take steps to improve the existing emergent security apparatus. This means declassifying information about potential vulnerabilities and kickstarting the cybersecurity insurance market by buying insurance for federal agencies, which experienced 22,000 breaches in 2012. Read the whole thing, as they say.

Today on Capitol Hill, the House Energy and Commerce Committee is holding a hearing on the NTIA’s recent announcement that it will relinquish its small but important administrative role in the Internet’s domain name system. The announcement has alarmed some policymakers with a well-placed concern for the future of Internet freedom; hence the hearing. Tomorrow, I will be on a panel at ITIF discussing the IANA oversight transition, which promises to be a great discussion.

My general view is that if well executed, the transition of the DNS from government oversight to purely private control could actually help secure a measure of Internet freedom for another generation—but the transition is not without its potential pitfalls.

The Internet began as a U.S. military project. For two decades, the government restricted the network to government, academic, and other authorized non-commercial uses. In 1989, the U.S. gave up control—it allowed private, commercial use of the Internet, a decision that allowed the network to flourish and grow as few could have imagined at the time.

Late Friday, the NTIA announced its intent to give up the last vestiges of its control over the Internet, the last real evidence that it began as a government experiment. Control of the Domain Name System’s (DNS’s) Root Zone File has remained with the agency despite the creation of ICANN in 1998 to perform the other high-level domain name functions, called the IANA functions.
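
The root zone is not an abstraction; anyone can query a root server and watch it hand out TLD delegations, which is exactly what the Root Zone File records. Here is a minimal sketch using the third-party dnspython library (assuming it is installed; 198.41.0.4 is a.root-servers.net):

```python
# Ask a DNS root server who is authoritative for the .org TLD.
# Requires the third-party dnspython package (pip install dnspython).
import dns.message
import dns.query
import dns.rdatatype

A_ROOT = "198.41.0.4"  # a.root-servers.net

query = dns.message.make_query("org.", dns.rdatatype.NS)
response = dns.query.udp(query, A_ROOT, timeout=5)

# The root server does not answer for org. itself; it refers us to
# the .org nameservers—the delegation recorded in the Root Zone File.
for rrset in response.authority:
    print(rrset)
```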

The NTIA announcement is not a huge surprise. The U.S. government has always said it eventually planned to devolve IANA oversight, albeit with lapsed deadlines and changes of course along the way.

The U.S. giving up control over the Root Zone File is a step toward a world in which governments no longer assert oversight over the technology of communication. Just as freedom of the printing press was important to the founding generation in America, an unfettered Internet is essential to our right to unimpeded communication. I am heartened to see that the U.S. will not consider any proposal that involves IANA oversight by an intergovernmental body.

Relatedly, next month’s global multistakeholder meeting in Brazil will consider principles and roadmaps for the future of Internet governance. I have made two contributions to the meeting, a set of proposed high-level principles that would limit the involvement of governments in Internet governance to facilitating participation by their nationals, and a proposal to support experimentation in peer-to-peer domain name systems. I view these proposals as related: the first keeps governments away from Internet governance and the second provides a check against ICANN simply becoming another government in control of the Internet.