Articles by Eli Dourado

Eli is a research fellow with the Technology Policy Program at the Mercatus Center at George Mason University. His research focuses on Internet governance, the economics of technology, and political economy. His personal site is elidourado.com.


Last week, two very interesting events happened in the world of copyright and content piracy. First, the Pirate Bay, the infamous torrent indexing site, was raided by police and removed from the Internet. Pirate Bay co-founder Peter Sunde (who was no longer involved with the project) expressed his indifference to the raid; there was no soul left in the site, he said, and in any case, he is “pretty sure the next thing will pan out.”

Second, a leaked trove of emails from the Sony hack showed that the MPAA continues to pursue their dream of blocking websites that contribute to copyright infringement. With the failure of SOPA in 2012, the lobbying organization has pivoted to trying to accomplish the same ends through other means, including paying for state attorneys general to attack Google for including some of these sites in its index. Over at Techdirt, Mike Masnick argues that some of this activity may have been illegal.

I’ll leave the illegality of the MPAA’s lobbying strategy for federal prosecutors to sort out, but like some others, I am astonished by how out of touch with reality the MPAA is. They seem to believe that opposition to SOPA was a fluke, whipped up by Google, which they believe they will be able to neutralize through their “Project Goliath.” And according to a meeting agenda reported on by TorrentFreak, they want to bring “on board ‘respected’ people in the technology sector to agree on technical facts and establish policy support for site blocking.”

The reality is that opposition to SOPA-style controls remains strong in the tech policy community. The only people in Washington who support censoring the Internet to protect copyright are paid by Hollywood. If, through their generous war chest, the MPAA were able to pay a “respected” tech-sector advocate to build policy support for site blocking, that very fact would cause that person to lose respect.

Moreover, on a technical level, the MPAA is fighting a battle it is sure to lose. As Rick Falkvinge notes, the content industry had a unique opportunity in 1999 to embrace and extend Napster. Instead, it got Napster shut down, which eventually led to decentralized piracy over BitTorrent. Now, it wants to shut down sites that index torrents, but torrent indexes are tiny amounts of data. The whole Pirate Bay index was only 90MB in 2012, and a magnet link for an individual torrent is only a few dozen bytes. Between Bitmessage and projects like Bitmarkets, it seems extremely unlikely that the content industry will ever be able to shut down distribution of torrent data.
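
To put the size point in perspective, here is a minimal sketch (the info-hash below is a made-up placeholder, not a real torrent) showing that a magnet link is nothing more than a short text string:

```python
# A magnet link is just a URI built around a torrent's info-hash.
# The 40-character hex hash below is a made-up placeholder.
magnet = (
    "magnet:?xt=urn:btih:"
    "0123456789abcdef0123456789abcdef01234567"
    "&dn=example-file-name"
)

print(len(magnet.encode("utf-8")))  # ~80 bytes; the bare hash portion is 60
```

Anything that can relay a string of that size, from email to chat to a QR code, can pass a magnet link along, which is part of why shutting down the distribution of torrent data looks so hopeless.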

Instead of fighting this inevitable trend, the MPAA and RIAA should be trying to position themselves well in a world in which content piracy will always be possible. They should make it convenient for customers to access their paid content through bundling deals with companies like Netflix and Spotify. They should accept some background level of content piracy and embrace at least its buzz-generating benefits. They should focus on soft enforcement through systems like six strikes, which more gently nudge consumers to pay for content. And they should explicitly disavow any effort to censor the web—without such a disavowal, they are making enemies not just of tech companies, but of the entire community of tech enthusiasts and policy wonks.

Last week marked the conclusion of the ITU’s Plenipotentiary Conference, the quadrennial gathering during which ITU member states get together to revise the treaty that establishes the Union and conduct other high-level business. I had the privilege of serving as a member of the US delegation, as I did for the WCIT, and of seeing the negotiations firsthand. This year’s Plenipot was far less contentious than the WCIT was two years ago. For other summaries of the conference, let me recommend to you Samantha Dickinson, Danielle Kehl, and Amb. Danny Sepulveda. Rather than recap their posts or the entire conference, I just wanted to add a few additional observations.

We mostly won on transparent access to documents

Through my involvement with WCITLeaks, I have closely followed the issue of access to ITU documents, both before and during the Plenipot. My assessment is that we mostly won.

Going forward, most inputs and outputs to ITU conferences and assemblies will be available to the public from the ITU website. This excludes a) working documents, b) documents related to other meetings, such as Council Working Groups and Study Groups, and c) other non-meeting documents that should also be available to the public.

However, in February, an ITU Council Working Group will be meeting to develop what is likely to be a more extensive document access policy. In May, the whole Council will meet to provisionally approve an access policy. And in 2018, the next Plenipot will permanently decide what to do about this provisional access policy.

There are no guarantees, and we will need to closely monitor the outcomes in February and May to see what policy is adopted—but if it is a good one, I would be prepared to shut down WCITLeaks as it would become redundant. If the policy is inadequate, however, WCITLeaks will continue to operate until the policy improves.

I was gratified that WCITLeaks continued to play a constructive role in the discussion. For example, in the Arab States’ proposal on ITU document access, they cited us, considering “that there are some websites on the Internet which are publishing illegally to the public ITU documents that are restricted only to Member States.” In addition, I am told that at the CEPT coordination meeting, WCITLeaks was thanked for giving the issue of transparency at the ITU a shot in the arm.

A number of governments were strong proponents of transparency at the ITU, but I think special thanks are due to Sweden, who championed the issue on behalf of Europe. I was very grateful for their leadership.

The collapse of the WCIT was an input into a harmonious Plenipot

We got through the Plenipot without a single vote (other than officer elections)! That’s great news—it’s always better when the ITU can come to agreement without forcing some member states to go along.

I think it’s important to recognize the considerable extent to which this consensus agreement was driven by events at the WCIT in 2012. At the WCIT, when the US (and others) objected and said that we could not agree to certain provisions, other countries thought we were bluffing. They decided to call our bluff by engineering a vote, and we wisely decided not to sign the treaty, along with 54 other countries.

In Busan this month, when we said that we could not agree to certain outcomes, nobody thought we were bluffing. Our willingness to walk away at the WCIT gave us added credibility in negotiations at the Plenipot. While I also believe that good diplomacy helped secure a good outcome at the Plenipot, the occasional willingness to walk the ITU off a cliff comes in handy. We should keep this in mind for future negotiations—making credible promises and sticking to them pays dividends down the road.

The big question of the conference is in what form the India proposal will re-emerge

At the Plenipot, India offered a sweeping proposal to fundamentally change the routing architecture of the Internet so that a) IP addresses would be allocated by country, like telephone numbers, with a country prefix and b) domestic Internet traffic would never be routed out of the country.
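
To make the telephone analogy concrete, here is a purely hypothetical sketch of what country-prefixed addressing might look like. The proposal did not specify a format, and the prefixes below are invented (drawn from the IPv6 documentation range 2001:db8::/32), so this is only meant to convey the flavor of allocating addresses by country rather than by network topology.

```python
# Purely hypothetical illustration of "telephone-style" country-prefixed IP
# addressing. These prefixes are invented (taken from the 2001:db8::/32
# documentation range); today's addresses are allocated by regional Internet
# registries based on network topology and need, not national borders.
HYPOTHETICAL_COUNTRY_PREFIXES = {
    "IN": "2001:0db8:0091",  # invented prefix for India (91 echoes its phone code)
    "US": "2001:0db8:0001",  # invented prefix for the United States
}

def country_prefixed_address(country: str, host_suffix: str) -> str:
    """Build a made-up address with the country identifier in the high bits."""
    return f"{HYPOTHETICAL_COUNTRY_PREFIXES[country]}:{host_suffix}"

print(country_prefixed_address("IN", "0000:0000:0000:0000:0001"))
```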

This proposal was obviously very impractical. It is unlikely, in any case, that the ITU has the expertise or the budget to undertake such a vast reengineering of the Internet. But the idea would also be very damaging from the perspective of individual liberty—it would make nation-states, even more than they are now, mediators of human communication.

I was very proud that the United States not only made the practical case against the Indian proposal, it made a principled one. Amb. Sepulveda made a very strong statement indicating that the United States does not share India’s goals as expressed in this proposal, and that we would not be a part of it. This statement, along with those of other countries and subsequent negotiations, effectively killed the Indian proposal at the Plenipot.

The big question is in what form this proposal will re-emerge. The idea of remaking the Internet along national lines is unlikely to go away, and we will need to continue monitoring ITU study groups to ensure that this extremely damaging proposal does not rear its head again.

Good news! As the ITU’s Plenipotentiary Conference gets underway in Busan, Korea, the heads of delegation have met and decided to open up access to some of the documents associated with the meeting. At this time, it is only the documents that are classified as “contributions”—other documents such as meeting agendas, background information, and terms of reference remain password protected. It’s not clear yet whether that is an oversight or an intentional distinction. While I would prefer all documents to be publicly available, this is a very welcome development. It is gratifying to see the ITU membership taking transparency seriously.

Special thanks are due to ITU Secretary-General Hamadoun Touré. When Jerry Brito and I launched WCITLeaks in 2012, at first, the ITU took a very defensive posture. But after the WCIT, the Secretary-General demonstrated tremendous leadership by becoming a real advocate for transparency and reform. I am told that he was instrumental in convincing the heads of delegation to open up access to Plenipot documents. For that, Dr. Touré has my sincere thanks—I would be happy to buy him a congratulatory drink when I arrive in Busan, although I doubt his schedule would permit it.

It’s worth noting that this decision applies only to the Plenipotentiary conference. The US has a proposal that will be considered at the conference to make something like this arrangement permanent by instructing the incoming Secretary-General to develop a policy of open access to all ITU meeting documents. That is a development that I will continue to watch closely.

Although SOPA was ignominiously defeated in 2012, the content industry never really gave up on the basic idea of breaking the Internet in order to combat content piracy. The industry now claims that a major cause of piracy is search engines returning results that direct users to pirated content. To combat this, they would like to regulate search engine results to prevent them from linking to sites that contain pirated music and movies.

This idea is problematic on many levels. First, there is very little evidence that content piracy is a serious concern in objective economic terms. Most content pirates would not, if pirated content were unavailable, open their wallets to pay for the creation of more movies and music. As Ian Robinson and I explain in our recent paper, industry estimates of the jobs created by intellectual property are absurd. Second, there are serious free speech implications associated with regulating search engine results. Search engines perform an information distribution role similar to that of newspapers, and they have an editorial voice. They deserve protection from censorship as long as they are not hosting the pirated material themselves. Third, as anyone who knows anything about the Internet can tell you, nobody uses the major search engines to look for pirated content. The serious pirates go straight to sites that specialize in piracy. Fourth, this is all part of a desperate attempt by the content industry to avoid modernizing and offering more of its content online through convenient packages such as Netflix.

As if these were not sufficient reason to reject the idea of “SOPA for Search Engines,” Google has now announced that they will be directing users to legitimate digital content if it is available on Netflix, Amazon, Google Play, Spotify, or other online services. The content industry now has no excuse—if they make their music and movies available in convenient form, users will see links to legitimate content even if they search for pirated versions.

[Image: Google search results for “Star Trek”]

Google also says they will be using DMCA takedown notices as an input into search rankings and autocomplete suggestions, demoting sites and terms that are associated with piracy. This is above and beyond what Google needs to do, and in fact raises some concerns about fraudulent DMCA takedown notices that could chill free expression—such as when CBS issued a takedown of John McCain’s campaign ad on YouTube even though it was likely legal under fair use. Google will have to carefully monitor the DMCA takedown process for abuse. But in any case, these moves by Google should once and for all put the nail in the coffin of the idea that we should compromise the integrity of search results through government regulation for the sake of fighting a piracy problem that is not that serious in the first place.

The ITU is holding its quadrennial Plenipotentiary Conference in Busan, South Korea from October 20 to November 7, 2014. The Plenipot, as it is called, is the ITU’s “supreme organ” (a funny term that I did not make up). It represents the highest level of decision making at the ITU. As it has for the last several ITU conferences, WCITLeaks will host leaked documents related to the Plenipot.

For those interested in transparency at the ITU, two interesting developments are worth reporting. On the first day of the conference, the heads of delegation will meet to decide whether documents related to the conference should be available to the public directly through the TIES system without a password. All of the documents associated with the Plenipot are already available in English on WCITLeaks, but direct public access would have the virtue of including those in the world who do not speak English but do speak one of the other official UN languages. Considering this additional benefit of inclusion, I hope that the heads of delegation will seriously consider the advantages of adopting a more open model for document access during this Plenipot. If you would like to contact the head of delegation for your country, you can find their names in this document. A polite email asking them to support open access to ITU documents might not hurt.

In addition, at the meeting, the ITU membership will consider a proposal from the United States to, as a rule, provide open access to all meeting documents.

[Image: US proposal on open access to ITU documents]

This is what WCITLeaks has always supported—putting ourselves out of business. As the US proposal notes, the ITU Secretariat has conducted a study finding that other UN agencies are much more forthcoming in terms of public access to their documents. A more transparent ITU is in everyone’s interest—including the ITU’s. This Plenipot has the potential to remedy a serious deficiency with the institution; I’m cheering for them and hoping they get it right.

In 2012, the US Chamber of Commerce put out a report claiming that intellectual property is responsible for 55 million US jobs—46 percent of private sector employment. This is a ridiculous statistic if you merely stop and think about it for a minute. But the fact that the statistic is ridiculous doesn’t mean that it won’t continue to circulate around Washington. For example, last year Rep. Marsha Blackburn cited it uncritically in an op-ed in The Hill.

In a new paper from Mercatus (here’s the PDF), Ian Robinson and I expose this statistic, and others like it, as pseudoscience. These estimates are based on incredibly shoddy and misleading reasoning. Here’s the abstract of the paper:

In the past two years, a spate of misleading reports on intellectual property has sought to convince policymakers and the public that implausibly high proportions of US output and employment depend on expansive intellectual property (IP) rights. These reports provide no theoretical or empirical evidence to support such a claim, but instead simply assume that the existence of intellectual property in an industry creates the jobs in that industry. We dispute the assumption that jobs in IP-intensive industries are necessarily IP-created jobs. We first explore issues regarding job creation and the economic efficiency of IP that cut across all kinds of intellectual property. We then take a closer look at these issues across three major forms of intellectual property: trademarks, patents, and copyrights.

As they say, read the whole thing, and please share with your favorite IP maximalist.

There seems to be increasing chatter among net neutrality activists lately on the subject of reclassifying ISPs as Title II services, subject to common carriage regulation. Although the intent in pushing reclassification is to make the Internet more open and free, in reality such a move could backfire badly. Activists don’t seem to have considered the effect of reclassification on international Internet politics, where it would likely give enemies of Internet openness everything they have always wanted.

At the WCIT in 2012, one of the major issues up for debate was whether the revised International Telecommunication Regulations (ITRs) would apply to Operating Agencies (OAs) or to Recognized Operating Agencies (ROAs). OA is a very broad term that covers private network operators, leased line networks, and even ham radio operators. Since “OA” would have included IP service providers, the US and other more liberal countries were very much opposed to the application of the ITRs to OAs. ROAs, on the other hand, are OAs that operate “public correspondence or broadcasting service.” That first term, “public correspondence,” is a term of art that means basically common carriage. The US government was OK with the use of ROA in the treaty because it would have essentially cabined the regulations to international telephone service, leaving the Internet free from UN interference. What actually happened was that there was a failed compromise in which ITU Member States created a new term, Authorized Operating Agency, that was arguably somewhere in the middle—the definition included the word “public” but not “public correspondence”—and the US and other countries refused to sign the treaty out of concern that it was still too broad.

If the US reclassified ISPs as Title II services, that would arguably make them ROAs for ITU purposes (arguably, because it depends on how you read the definition of ROA and Article 6 of the ITU Constitution). This would potentially open ISPs up to regulation under the ITRs. This might not be so bad if the US were the only country in the world—after all, the US did not sign the 2012 ITRs, and it does not use the ITU’s accounting rate provisions to govern international telecom payments.

But what happens when other countries start copying the US, imposing common carriage requirements, and classifying their ISPs as ROAs? Then the story gets much worse. Countries that are signatories to the 2012 ITRs would have ITU mandates on security and spam imposed on their networks, which is to say that the UN would start essentially regulating content on the Internet. This is what Russia, Saudi Arabia, and China have always wanted. Furthermore (and perhaps more frighteningly), classification as ROAs would allow foreign ISPs to forgo commercial peering arrangements in favor of the ITU’s accounting rate system. This is what a number of African governments have always wanted. Ethiopia, for example, considered a bill (I’m not 100 percent sure it ever passed) that would send its own citizens to jail for 15 years for using VoIP, because VoIP decreases Ethiopian international telecom revenues. Having the option of using the ITU accounting rate system would make it easier to extract revenues from international Internet use.

Whatever you think of, e.g., Comcast and Cogent’s peering dispute, applying ITU regulation to ISPs would be significantly worse in terms of keeping the Internet open. By reclassifying US ISPs as common carriers, we would open the door to exactly that. The US government has never objected to ITU regulation of ROAs, so if we ever create a norm under which ISPs are arguably ROAs, we would be essentially undoing all of the progress that we made at the WCIT in standing up for a distinction between old-school telecom and the Internet. I imagine that some net neutrality advocates will find this unfair—after all, their goal is openness, not ITU control over IP service. But this is the reality of international politics: the US would have a very hard time at the ITU arguing that regulating for neutrality and common carriage is OK, but regulating for security, content, and payment is not.

If the goal is to keep the Internet open, we must look somewhere besides Title II.

My friend Tim Lee has an article at Vox that argues that interconnection is the new frontier on which the battle for the future of the Internet is being waged. I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless.

How the Internet used to work

The Internet is a network of networks. Your ISP is a network. It connects to other ISPs and exchanges traffic with them. Since a connection between two ISPs is typically about equally valuable to both of them, this exchange often happens through “settlement-free peering,” in which the networks exchange traffic on an unpriced basis.

Not every ISP connects directly to every other ISP. For example, a local ISP in California probably doesn’t connect directly to a local ISP in New York. If you’re an ISP that wants to be sure your customers can reach every other network on the Internet, you have to purchase “transit” services from a bigger or more specialized ISP. Transit allows an ISP to transmit data along what used to be called “the backbone” of the Internet. Transit providers that exchange roughly equally valued traffic with other networks have settlement-free peering arrangements of their own with those networks.
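
As a toy illustration of that logic (the threshold and numbers here are invented; real peering policies weigh many more factors), a network deciding whether to peer settlement-free or to treat the relationship as paid transit might reason roughly like this:

```python
# Toy model of a peering decision with an invented ratio threshold.
# Real peering policies also weigh geographic scope, number of
# interconnection points, total volume, and more.
def interconnection_type(sent_gbps: float, received_gbps: float,
                         max_ratio: float = 2.0) -> str:
    """Settlement-free peering if traffic is roughly balanced; otherwise
    the relationship looks more like a customer buying transit."""
    low, high = sorted((sent_gbps, received_gbps))
    ratio = high / max(low, 0.001)  # avoid division by zero
    return "settlement-free peering" if ratio <= max_ratio else "paid transit"

print(interconnection_type(90, 100))  # roughly balanced -> settlement-free peering
print(interconnection_type(900, 10))  # highly asymmetric -> paid transit
```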

How the Internet works now

A few things have changed in the last several years. One major change is that most major ISPs have very large, geographically dispersed networks. For example, Comcast serves customers in 40 states, and other networks can peer with it in 18 different locations across the US. These 18 locations are connected to each other through very fast cables that Comcast owns. In other words, Comcast is not just a residential ISP anymore. It is part of what used to be called “the backbone,” although it no longer makes sense to call it that since there are so many big pipes that cross the country and so much traffic is transmitted directly through ISP interconnection.

Another thing that has changed is that content providers are increasingly delivering a lot of a) traffic-intensive and b) time-sensitive content across the Internet. This has created the incentive to use what are known as content-delivery networks (CDNs). CDNs are specialized ISPs that locate servers right on the edge of all terminating ISPs’ networks. There are a lot of CDNs—here is one list.

By locating on the edge of each consumer ISP, CDNs are able to deliver content to end users with very low latency and at very fast speeds. For this service, they charge money to their customers. However, they also have to pay consumer ISPs for access to their networks, because the traffic flow is all going in one direction and otherwise CDNs would be making money by using up resources on the consumer ISP’s network.

CDNs’ payments to consumer ISPs are also a matter of equity between the ISP’s customers. Let’s suppose that Vox hires Amazon CloudFront to serve traffic to Comcast customers (they do). If the 50 percent of Comcast customers who wanted to read Vox suddenly started using up so many network resources that Comcast and CloudFront needed to upgrade their connection, who should pay for the upgrade? The naïve answer is to say that Comcast should, because that is what customers are paying them for. But the efficient answer is that the 50 percent who want to access Vox should pay for it, and the 50 percent who don’t want to access it shouldn’t. By Comcast charging CloudFront to access the Comcast network, and CloudFront passing along those costs to Vox, and Vox passing along those costs to customers in the form of advertising, the resource costs of using the network are being paid by those who are using them and not by those who aren’t.
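
Here is the same allocation logic as a toy calculation (all numbers invented): routing the upgrade cost through CloudFront and Vox puts it on the subscribers who actually generate the traffic, rather than spreading it across everyone.

```python
# Toy cost-allocation example with invented numbers.
upgrade_cost = 100_000.0        # cost of upgrading the CDN<->ISP interconnection
subscribers = 1_000_000
vox_readers = subscribers // 2  # the half of subscribers generating the traffic

# If the ISP absorbs the cost, every subscriber pays, readers and non-readers alike.
per_subscriber_if_isp_pays = upgrade_cost / subscribers   # $0.10 each

# If the ISP charges the CDN, which passes it to Vox, which passes it to its
# readers (via advertising), only the users creating the load bear the cost.
per_reader_if_cdn_pays = upgrade_cost / vox_readers       # $0.20 each

print(per_subscriber_if_isp_pays, per_reader_if_cdn_pays)
```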

What happened with the Netflix/Comcast dust-up?

Netflix used multiple CDNs to serve its content to subscribers. For example, it used a CDN provided by Cogent to serve content to Comcast customers. Cogent ran out of capacity and refused to upgrade its link to Comcast. As a result, some of Comcast’s customers experienced a decline in quality of Netflix streaming. However, Comcast customers who accessed Netflix with an Apple TV, which is served by CDNs from Level 3 and Limelight, never had any problems. Cogent has had peering disputes in the past with many other networks.

To solve the congestion problem, Netflix and Comcast negotiated a direct interconnection. Instead of Netflix paying Cogent and Cogent paying Comcast, Netflix is now paying Comcast directly. They signed a multi-year deal that is reported to reduce Netflix’s costs relative to what they would have paid through Cogent. Essentially, Netflix is vertically integrating into the CDN business. This makes sense. High-quality CDN service is essential to Netflix’s business; they can’t afford to experience the kind of incident that Cogent caused with Comcast. When a service is strategically important to your business, it’s often a good idea to vertically integrate.

It should be noted that what Comcast and Netflix negotiated was not a “fast lane”—as a condition of its merger with NBC/Universal, Comcast is prohibited from offering prioritized traffic.

What about Comcast’s market power?

I think that one of Tim’s hangups is that Comcast has a lot of local market power. There are lots of barriers to creating a competing local ISP in Comcast’s territories. Doesn’t this mean that Comcast will abuse its market power and try to gouge CDNs?

Let’s suppose that Comcast is a pure monopolist in a two-sided market. It’s already extracting the maximum amount of rent that it can on the consumer side. Now it turns to the upstream market and tries to extract rent. The problem with this is that it can only extract rents from upstream content producers insofar as it lowers the value of the rent it can collect from consumers. If customers have to pay higher Netflix bills, then they will be less willing to pay Comcast. The fact that the market is two-sided does not significantly increase the amount of monopoly rent that Comcast can collect.
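
A stylized way to see this (a deliberately simple model with invented numbers, not a full two-sided-market analysis): if subscribers place a fixed total value on “broadband plus Netflix,” then every dollar of interconnection fee that gets passed through to the Netflix bill is a dollar Comcast can no longer charge on the subscriber side, so the total rent it can extract barely moves.

```python
# Stylized model with invented numbers: consumers value "broadband + Netflix"
# at a fixed amount, and upstream interconnection fees are passed through to
# the Netflix price.
willingness_to_pay = 100.0   # consumer's total monthly valuation (invented)
netflix_base_price = 10.0    # Netflix's price before any interconnection fee

def comcast_total_rent(interconnection_fee: float) -> float:
    netflix_price = netflix_base_price + interconnection_fee   # pass-through
    max_broadband_price = willingness_to_pay - netflix_price   # what's left over
    return max_broadband_price + interconnection_fee           # both revenue streams

for fee in (0.0, 5.0, 20.0):
    print(fee, comcast_total_rent(fee))  # total rent is 90.0 at every fee level
```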

Interconnection fees that are being paid to Comcast (and virtually all other major ISPs) have virtually nothing to do with Comcast’s market power and everything to do with the fact that the Internet has changed, both in structure and content. This is simply how the Internet works. I use CloudFront, the same CDN that Vox uses, to serve even a small site like my Bitcoin Volatility Index. CloudFront negotiates payments to Comcast and other ISPs on my and Vox’s behalf. There is nothing unseemly about Netflix making similar payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).

For more reading material on the Netflix/Comcast arrangement, I recommend Dan Rayburn’s posts here, here, and here. Interconnection is a very technical subject, and someone with very specialized expertise like Dan is invaluable in understanding this issue.

NETmundial wrap-up


NETmundial is over; here’s how it went down. Previous installments (1, 2, 3).

  • The final output of the meeting is available here. It is being referred to as the Multistakeholder Statement of São Paulo. I think the name is designed to put the document in contention with the Tunis Agenda. Insofar as it displaces the Tunis Agenda, that is fine with me.
  • Most of the civil society participants are not happy. Contrary to my prediction, and in a terrible PR move, the US government (among others) weakened the language on surveillance. A statement on net neutrality also did not make it into the final draft. These were the top two issues for most civil society participants.
  • I of course oppose US surveillance, but I am not too upset about the watered down language since I don’t see this as an Internet governance issue. Also, unlike virtually all of the civil society people, I oppose net neutrality laws, so I’m pleased with that aspect of the document.
  • What bothers me most in the final output are two statements that seem to have been snuck in at the last moment by the drafters without approval from others. These are real shenanigans. The first is on multistakeholderism. The Tunis language said that stakeholders should participate according to their “respective roles and responsibilities.” The original draft of the NETmundial document used the same language, but participants agreed to remove it, indicating that all stakeholders should participate equally and that no stakeholders were more special than others. Somehow the final document contained the sentence, “The respective roles and responsibilities of stakeholders should be interpreted in a flexible manner with reference to the issue under discussion.” I have no idea how it got in there. I was in the room when the final draft was approved, and that text was not announced.
  • Similarly, language in the “roadmap” portion of the document now refers to non-state actors in the context of surveillance. “Collection and processing of personal data by state and non-state actors should be conducted in accordance with international human rights law.” The addition of non-state actors was also done without consulting anyone in the final drafting room.
  • Aside from the surveillance issue, the other big mistake by the US government was their demand to weaken the provision on intermediary liability. As I understand it, their argument was that they didn’t want to consider safe harbor for intermediaries without a concomitant recognition of the role of intermediaries in self-policing, as is done through the notice-and-takedown process in the US. I would have preferred a strong, free-standing statement on intermediary liability, but instead, the text was replaced with OECD language that the US had previously agreed to.
  • Overall, the meeting was highly imperfect—it was non-transparent, disorganized, inefficient in its use of time, and so on. I don’t think it was a rousing success, but it was nevertheless successful enough that the organizers were able to claim success, which I think was their original goal. Other than the two last-minute additions that I saw (I wonder if there are others), nothing in the document gives me major heartburn, so maybe that is actually a success. It will be interesting to see if the São Paulo Statement is cited in other fora, and if they decide to repeat this process again next year.

Today is the second and final day of NETmundial and the third in my series (parts 1 and 2) of quick notes on the meeting.

  • Yesterday, Dilma Rousseff did indeed sign the Marco Civil into law as expected. Her appearance here began with the Brazilian national anthem, which is a very strange way to kick off a multistakeholder meeting.
  • The big bombshell in Rousseff’s speech was her insistence that the multilateral model can peacefully coexist with the multistakeholder model. Brazil had been making a lot of pro-multistakeholder statements, so many of us viewed this as something of a setback.
  • One thing I noticed during the speech was that the Portuguese word for “multistakeholder” literally translates as “multisectoral.” This goes a long way toward explaining some of the disconnect between Brazil and the liberals. Multisectoral means that representatives from all “sectors” are welcome, while multistakeholder implies that every stakeholder is welcome to participate, even if they sometimes organize into constituencies. This is a pretty major difference, and NETmundial has been organized on the former model.
  • The meeting yesterday got horribly behind schedule. There were so many welcome speeches, and they went so much over time, that we did not even begin the substantive work of the conference until 5:30pm. I know that sounds like a joke, but it’s not.
  • After three hours of substantive work, during which participants made 2-minute interventions suggesting changes to the text, a drafting group retreated to a separate room to work on the text of the document. The room was open to all participants, but only the drafting group was allowed to work on the drafting; everyone else could only watch (and drink).
  • As of this morning, we still don’t have the text that was negotiated last night. Hopefully it will appear online some time soon.
  • One thing to watch for is the status of the document. Will it be a “declaration” or a “chairman’s report” (or something else)? What I’m hearing is that most of the anti-multistakeholder governments like Russia and China want it to be a chairman’s report because that implies a lesser claim to legitimacy. Brazil, the host of the conference, presumably wants to make a maximal claim to legitimacy. I tend to think that there’s enough wrong with the document that I’d prefer the outcome to be a chairman’s report, but I don’t feel too strongly.