Articles by Milton Mueller

Milton L. Mueller is Professor at the Georgia Institute of Technology's School of Public Policy. He is the author of Will the Internet Fragment? (Polity Press, 2017) and Networks and States: The global politics of Internet governance (MIT Press, 2010). Visit his blog at http://internetgovernance.org.


One would think that if there is any aspect of Internet policy libertarians could agree on, it would be that the government should not be in control of basic Internet infrastructure. So why are TechFreedom and a few other so-called “liberty” groups making a big fuss about the plan to complete the privatization of ICANN? The IANA transition, as it has become known, would set the domain name system root, IP addressing and Internet protocol parameter registries free of direct governmental control, and make those aspects of the Internet transnational and self-governing.

Yet the same groups that informed us that net neutrality is the end of Internet freedom, because it would have a government agency indirectly regulating discriminatory practices by private-sector ISPs, are now telling us that retaining direct U.S. government regulation of the content of the domain name system root, and indirect control of the domain name industry and IP addressing via a contract with ICANN, is essential to maintaining global Internet freedom. It’s insane.

One mundane explanation is that TechFreedom, which is known for responding eagerly to anyone offering it a check, has found some funding source that doesn’t like the IANA transition and, in the spirit of a true political entrepreneur, has taken up the challenge of twisting, turning and spinning freedom rhetoric into some rationalization for opposing the transition. But that doesn’t explain the opposition of Senator Cruz and other conservatives who feign a concern for Internet freedom. No, I think this split represents something bigger. At bottom, it’s a debate about the role of nation-states in Internet governance and the state’s role in preserving freedom.

In this regard it would be good to review my May 2016 blog post at the Internet Governance Project, which smashes the myths being asserted about the US government’s role in ICANN. In it, I show that NTIA’s control of ICANN has never been used to protect Internet freedom, but has been used multiple times to limit or attack it. I show that the US control of the DNS root was never put into place to “protect Internet freedom,” but was established for other reasons, and that the US explicitly rejected putting a free expression clause in ICANN’s constitution. I show that the new ICANN Articles of Incorporation created as part of the transition contain good mission limitations and protections against content regulation by ICANN. Finally, I show that in the real world of international relations (as opposed to the unilateralist fantasies of conservative nationalists) the privileged US role is a magnet for other governments, inviting them to push for control, rather than a bulwark against it.

Another libertarian tech policy analyst, Eli Dourado, has also argued that going ahead with the IANA transition is a ‘no-brainer.’

Assistant Secretary of Commerce Larry Strickling’s speech at the US Internet Governance Forum last month goes through the FUD being advanced by TechFreedom and the nationalist Republicans one by one. Among other points, he contends that if the U.S. tries to retain control, Internet infrastructure will become increasingly politicized as rival states, such as China, Russia and Iran, argue for a sovereignty-based model and try to get internet infrastructure in the hands of intergovernmental organizations:

Privatizing the domain name system has been a goal of Democratic and Republican administrations since 1997. Prior to our 2014 announcement to complete the privatization, some governments used NTIA’s continued stewardship of the IANA functions to justify their demands that the United Nations, the International Telecommunication Union or some other body of governments take control over the domain name system. Failing to follow through on the transition or unilaterally extending the contract will only embolden authoritarian regimes to intensify their advocacy for government-led or intergovernmental management of the Internet via the United Nations.

The TechFreedom “coalition letter” raises no new arguments or issues – it is a nakedly political appeal for Congress to intervene to stop the transition, based mainly on partisan hatred of the Obama administration. But I think this debate is highly significant nevertheless. It’s not about rational policy argumentation, it’s about the diverging political identity of people who say they are pro-freedom.

What is really happening here is a rift between nationalist conservatism of the sort represented by the Heritage Foundation and the nativists in the Tea Party, on the one hand, and true free-market libertarians, on the other. The root of this difference is a radically different conception of the role of the nation-state in the modern world. Real libertarians see national borders as, at best, necessary administrative evils, and at worst as unjustifiable obstacles to society and commerce. A truly classical liberal ethic is founded on individual rights and a commitment to free and open markets and free political institutions everywhere, and is thus universalist and globalist in outlook. Libertarians of this stripe see the economy and society as increasingly globalized, and understand that the institution of the state has to evolve in new directions if basic liberal and democratic values are to be institutionalized in that environment.

The nationalist Republican conservatives, on the other hand, want to strengthen the state. They are hemmed in by a patriotic and exceptionalist view of its role. Insofar as they are motivated by liberal impulses at all – and of course many parts of their political base are not – their conception of freedom is situated entirely in national-level institutions. As such, it implies walling the world off or, worse, dominating the world as a pre-eminent nation-state. The rise of Trump and the ease with which he took over the Republican Party ought to be a signal to the real libertarians that the party is no longer viable as a lesser-of-two-evils home for true liberals. The base of the Republican Party, the coalition of constituencies and worldviews of which it is composed, is splitting into two camps with irreconcilable differences over fundamental issues. Good riddance to the nationalists, I say. This split poses a tremendous opportunity for libertarians to finally free themselves of the social conservatives, nationalist militarists, nativists and theocrats that have dragged them down in the GOP.

In her UN General Assembly speech denouncing NSA surveillance, Brazil’s President Dilma Rousseff said:

Information and communications technologies cannot be the new battlefield between States. Time is ripe to create the conditions to prevent cyberspace from being used as a weapon of war, through espionage, sabotage, and attacks against systems and infrastructure of other countries. … For this reason, Brazil will present proposals for the establishment of a civilian multilateral framework for the governance and use of the Internet and to ensure the protection of data that travels through the web.

We share her outrage at mass surveillance. We share her opposition to the militarization of the Internet. We share her concern for privacy.

But when President Rousseff proposes to solve these problems by means of a “multilateral framework for the governance and use of the Internet,” she reveals a fundamental flaw in her thinking. It is a flaw shared by many in civil society.

You cannot control militaries, espionage and arms races by “governing the Internet.” Cyberspace is one of many aspects of military competition. Unless one eliminates or dramatically diminishes political and military competition among sovereign states, states will continue to spy, break into things, and engage in conflict when it suits their interests. Cyber conflict is no exception.

Rousseff is mixing apples and oranges. If you want to control militaries and espionage, then regulate arms, militaries and espionage – not “the Internet.”

This confusion is potentially dangerous. If the NSA outrages feed into a call for global Internet governance, and this governance focuses on critical Internet resources and the production and use of Internet-enabled services by civil society and the private sector, as it inevitably will, we are certain to get lots of governance of the Internet, and very little governance of espionage, militaries, and cyber arms.

In other words, Dilma’s “civilian multilateral framework for the governance and use of the Internet” is only going to regulate us – the civilian users and private-sector producers of Internet products and services. It will not control the NSA, the Chinese People’s Liberation Army, the Russian FSB or the British GCHQ.

Realism in international relations theory is based on the view that the international system is anarchic. This does not mean that it is chaotic, but simply that the system is composed of independent states and there is no central authority capable of coercing all of them into following rules. The other key tenet of realism is that the primary goal of states in the international system is their own survival.

It follows that the only way one state can compel another state to do anything is through some form of coercion, such as war, a credible threat of war, or economic sanctions. And the only time states agree to cooperate to set and enforce rules is when it is in their self-interest to do so. Thus, when sovereign states come together to agree to regulate things internationally, their priorities will always be to:

  • Preserve or enlarge their own power relative to other states; and
  • Ensure that the regulations are designed to bring under control those aspects of civil society and business that might undermine or threaten their power.

Any other benefits, such as privacy for users or freedom of expression, will be secondary concerns. That’s just the way it is in international relations. Asking states to prevent cyberspace from being used as a weapon of war is like asking foxes to guard henhouses.

That’s one reason why it is so essential that these conferences be fully open to non-state actors, and that they not be organized around national representation.

Let’s think twice about linking the NSA reaction too strongly to Internet governance. There is some linkage, of course. The NSA revelations should remind us to be realist in our approach to Internet governance. This means recognizing that all states will approach Internet regulation with their own survival and power uppermost in their agenda; it also means that any single state cannot be trusted as a neutral steward of the global Internet but will inevitably use its position to benefit itself. These implications of the Snowden revelations need to be recognized. But let us not confuse NSA regulation with Internet regulation.

Remember all the businesses, internet techies and NGOs who were screaming about an “ITU takeover of the Internet” a year ago? Where are they now? Because this time, we actually need them.

May 14–21 is Internet governance week in Geneva. We have declared it so because there will be three events that week for the global community concerned with global Internet governance. From May 14–16, the International Telecommunication Union (ITU) holds its World Telecommunication Policy Forum (WTPF). This year it is devoted to Internet policy issues. With the polarizing results of the Dubai World Conference on International Telecommunications (WCIT) still reverberating, the meeting will revisit debates about the role of states in Internet governance. Next, on May 17 and 18, the Graduate Institute of International and Development Studies and the Global Internet Governance Academic Network (GigaNet) will hold an international workshop on The Global Governance of the Internet: Intergovernmentalism, Multi-stakeholderism and Networks. Here, academics and practitioners will engage in what should be a more intellectually substantive debate on modes and principles of global Internet governance.

Last but not least, the UN Internet Governance Forum will hold its semi-annual consultations to prepare the program and agenda for its next meeting in Bali, Indonesia. The IGF consultations are relevant because, to put it bluntly, it is the failure of the IGF to bring governments, the private sector and civil society together in a commonly agreed platform for policy development that is partly responsible for the continued tension between multistakeholder and intergovernmental institutions. Whether the IGF can get its act together and become more relevant is one of the key issues going forward.


On Wednesday, April 10, a bill “to Affirm the Policy of the United States Regarding Internet Governance” was marked up in the U.S. House of Representatives. The bill is an attempt to put a formal policy statement into statute law. The effective part says simply:

It is the policy of the United States to promote a global Internet free from government control and to preserve and advance the successful multistakeholder model that governs the Internet.

Yet this attempt to formulate a clear principle and make it legally binding policy has become controversial. This has happened because the bill brings to a head the latent contradictions and elisions that characterize U.S. international Internet policy. In the process it has driven a wedge between what was once a unified front by U.S. Democrats and Republicans against incursions into Internet governance by intergovernmental organizations such as the ITU.

The problem, it seems, is that the Democratic side of the aisle can’t bring itself to say that it is against ‘government control’ per se. Indeed, the bill has forced people linked to the Obama administration to come out and openly admit that ‘government control’ of the internet is OK when we exercise it; it’s just those other countries and international organizations that we need to worry about.


ARIN is the Internet numbers registry for the North American region. It likes to present itself as a paragon of multistakeholder governance and a staunch opponent of the International Telecommunication Union’s encroachments into Internet governance. Surely, if anyone wants to keep the ITU out of Internet addressing and routing policy, it would be ARIN. And conversely, in past years the ITU has sought to carve away some of the authority over IP addressing from ARIN and other RIRs.

But wait, what is this? On March 15, the ITU Secretary-General released a preparatory report for the ITU’s World Telecommunication Policy Forum, which will take place in Geneva May 14–16. The report contains six Internet-related policy resolutions “to provide a basis for discussion … focusing on key issues on which it would be desirable to reach conclusions.” Draft Opinion #3 pertains to Internet addressing. Among other things, the draft resolves:

  • “that needs-based address allocation should continue to underpin IP address allocation, irrespective of whether they are IPv6 or IPv4, and in the case of IPv4, irrespective of whether they are legacy or allocated address space;”
  • “that all IPv4 transactions be reported to the relevant RIRs, including transactions of legacy addresses that are not necessarily subject to the policies of the RIRs regarding transfers, as supported by the policies developed by the RIR communities;”
  • “that policies of inter-RIR transfer across all RIRs should ensure that such transfers are needs based and be common to all RIRs irrespective of the address space concerned.”

These policy positions thrust the ITU and its intergovernmental machinery directly into the realm of IP addressing policy. But that is quite predictable; the ITU has always wanted to do that. What is unusual about these resolutions is that they bear an uncanny resemblance to the policy positions currently advocated by ARIN and the U.S. Department of Commerce.


A market has developed in which specialized firms discover new vulnerabilities in software and sell that knowledge for tens or hundreds of thousands of dollars. These vulnerabilities are known as “zero-day exploits” because the vendor has had zero days to fix them before they can be exploited. In this blog post, we recognize that this market may require some kind of action, but reject simplistic calls for “regulation” of suppliers. We recommend focusing on the demand side of the market.

Although there is surprisingly little hard evidence of its scope and scale, the market for vulnerabilities is considered troublesome or dangerous by many. While the bounties paid may stimulate additional research into security, it is the exclusive and secret possession of this knowledge by a single buyer that raises concerns. It is clear that when someone other than the software vendor pays $100,000 for a zero-day, they are probably not paying for defense, but rather for an opportunity to take advantage of someone else’s vulnerability. Thus, the vulnerabilities remain unpatched. (Secrecy also makes the market rather inefficient; it may be possible to sell the same “secret” to several buyers.)

The supply side of the market consists of small firms and individuals with specialized knowledge. They compete to be the first to identify new vulnerabilities in software or information systems and then bring them to buyers. Many buyers are reputed to be government intelligence, law enforcement or military agencies using tax dollars to finance purchases. But we know less about the demand side than we should. The point, however, is that buyers are empowered to initiate an attack, a power that even legitimate organizations could easily abuse.

Insofar as the market for exploits shifts incentives away from publicizing and fixing vulnerabilities toward competitive efforts to gain private, exclusive knowledge of them so they can be held in reserve for possible use, the market has important implications for global security. It puts a premium on dangerous vulnerabilities, and thus may put the social and economic benefits of the Internet at risk. While the US might think it has an advantage in this competition, as a leader in the Internet economy and one of the most cyber-dependent countries, it also has the most to lose.

Unfortunately, so far the only policy response proposed has been vague calls for “regulation.” Chris Soghoian in particular has made “regulation” the basis of his response, calling suppliers “modern-day merchants of death” and claiming that “Security researchers should not be selling zero-days to middle man firms…These firms are cowboys and if we do nothing to stop them, they will drag the entire security industry into a world of pain.”

Such responses, however, are too long on moral outrage and too short on hard-headed analysis and practical proposals. The idea that “regulation” can solve the problem overlooks major constraints:


There are hundreds of applications for generic words in ICANN’s new top level domain program. They include .BOOK, .MUSIC, .CLOUD, .ACCOUNTANT, .ARAB and .ART. Some of the applicants for these domains have chosen to make direct use of the name space under the TLD for their own sites rather than offering them for broad general use. Amazon, for example, would probably make .BOOK an extension of its online bookstore rather than part of a large-scale domain name registration business; Google would probably make .CLOUD an extension of its own cloud computing enterprises.

This is really no different from Barnes and Noble registering BOOK.COM and using it only for its bookstore, Scripps registering FOOD.COM and controlling the content of the site, or CNET registering NEWS.COM and making exclusive use of the site for its own news and advertising. Nor is it terribly different from the .MUSEUM top level domain.

Yet these proposals have generated a loud chorus of objections from competing businesses. They have dubbed these applications ‘closed generics’ and shouted so loudly that ICANN is once again considering changing its policies in mid-implementation. ICANN staff has called for public comment and asked specifically whether it should change its rules to determine what is a ‘generic term’ and whether ICANN should enlarge even further its role as a top-down regulator and dictate whether certain business models can be associated with certain domain names.

A group of Noncommercial Stakeholders Group (NCSG) members have weighed in with some badly-needed disinterested public comment. It isn’t about ‘open’ or ‘closed,’ they maintain, it is about the freedom to innovate.

As NCSG stakeholders, our position is driven neither by paying clients nor by an interest in the success of specific applications. It is based on a principled commitment to the ‘permissionless innovation’ that has made the Internet a source of creativity and growth. Our aim is to maximize the options available to DNS users and to minimize top-down controls. We support the freedom of individuals and organizations to register domains and use them legally in any way they see fit. We support experimentation with new ideas about what a TLD can do. We see no reason to impose ex ante restrictions on specific business models or methods of managing the name space under a TLD.

The group warns ICANN of the danger of giving itself the power to decide what qualifies as a ‘generic word’ and rejects any attempt to retroactively create new policies that would dictate business models for TLD applicants. Hopefully ICANN’s board will be able to look past the self-interested cries of businesses that want to eliminate competitors and consider the public interest in Internet freedom. The comments and list of supporters are available at this link.

In another blog post, I put the International Telecommunication Union’s WCIT into perspective. I ended that discussion with a question that no one else seems to be asking: should there be International Telecommunication Regulations (ITRs) at all? Why do we need them?

I don’t think we do need sector-specific international regulations. I think they can cause more trouble than benefit. To briefly explain why, I noted that every country has its own national regulations regarding interconnection, privacy, antitrust, consumer protection, and so on. Compatibility across platforms and services is much easier technically than it was in the 1930s and before, and tends to get worked out in the market through a variety of bridging technologies and nongovernmental standards forums. International telecommunications is a form of trade in services, and the WTO agreements already provide a sufficient regulatory basis for foreign or multinational providers to enter national markets and offer transnational services. Though not all countries are members of WTO, membership can be expanded and bilateral or regional agreements can supplement it.

Imagine my surprise when someone informed me that the Europeans were calling for the abrogation of the ITRs for exactly those reasons. Apparently they defended that position for years. But the European drive to get rid of the ITRs was opposed and eventually blocked by — wait for it — the United States of America! The US, I am told, argued that the existing treaty was essential because most of the world’s international communications were regulated by it.

That puts a dramatically new spin on the US’s current campaign to fend off an ITU “takeover” of the Internet. If revision of the ITRs is such a threat to the Internet, why did the US insist on retaining them? If the ITRs are retained, it is inevitable that they will have to be updated and revised. And yet now the US government is warning us that the revision process poses a major threat to the independence and freedom of the Internet. Something is wrong with this picture.

Most of my information about this is second-hand, from sources that want to remain off the record. But there is proof that the US has defended the importance of the ITRs in an ITU list of documents that can be viewed here. There, in a document repository of an ITU expert group that was preparing the ground for the WCIT, one finds a document submitted by the US entitled the “Continued Critical Role of the ITRs.” Now if you click on the link that I have mischievously placed to that document, you will be taken to a closed, login-required page; before you can read that document, you have to be a TIES member. In other words, this is yet another example of the closed nature of the ITU process. There is another set of papers here that would be of interest in understanding why we even have the ITRs. But they, too, are locked inside TIES.

And that means, this is a job for WCITleaks! The U.S. government should release this document, and if it doesn’t, inside whistleblowers and other people with access to a TIES account need to leak it to us.

ICANN’s plan to open up the domain name space to new top level domains is scheduled to begin January 12, 2012. This long overdue implementation is the result of an open process that began in 2006. It would, in fact, be more realistic to say that the decision has been in the works for 15 years; i.e., since early 1997. That is when demand for new top-level domain names, and the need for other policy decisions regarding the coordination of the domain name system, made it clear that a new institutional framework had to be created. ICANN was the progressive and innovative U.S. response to that need. It was created to become a nongovernmental, independent, truly global and representative policy development authority.

The result has been far from perfect, but human institutions never are. Over the past 15 years, every stakeholder with a serious interest in the issue of top level domains has had multiple opportunities to make their voice heard and to shape the policy. The resulting new gTLD policy reflects that diversity and complexity. From our point of view, it is too regulatory, too costly, and makes too many concessions to content regulators and trademark holders. But it will only get worse with delay. The existing compromise output that came out of the process paves the way for movement forward after a long period of artificial scarcity, opening up new business opportunities.

Now there is a cynical, illegitimate last-second push by a few corporate interests in the United States to derail that process.

Paul Vixie, a renowned Internet pioneer who runs the Internet Systems Consortium, has written an article in ACM Queue attacking “those who would unilaterally supplant or redraw the existing Internet resource governance or allocation systems.” The publication of this article is a sign of a growing, important debate around the reform of IP address registries in the age of IPv4 exhaustion.

Vixie defends the Regional Internet Registries’ monopoly on IP address registration services and their current, needs-based policies toward address transfers. I am sure that Paul sincerely believes in the arguments he makes, but it is also true that Vixie is the chairman of the board of the American Registry for Internet Numbers (ARIN), the regional address registry for North America. When Vixie argues that ARIN’s exclusive control over Whois and address transfer services is beneficial and “in the physics,” he is also defending the authority and revenue model of his own organization against a perceived threat.

And that takes us to another relevant fact. The argument Vixie makes is cast in generalities, but he is really attacking a specific firm, a holding company known as Denuo. Denuo has formed both a secondary marketplace called Addrex for the legitimate trading of IPv4 number blocks, and an IP address registrar company known as Depository. Let’s set aside Depository for the moment (I will come back to it) and concentrate on Addrex, which has become the first end-to-end platform for legacy address holders to sell their IPv4 number blocks. Famously, Addrex scored a major success as the intermediary for the Nortel-Microsoft trade. But Nortel-Microsoft was unusually visible because it had to go through bankruptcy court.

Is anything else happening? I spoke to Addrex’s president, Charles Lee, to find out. “We are very busy signing up a growing number of global corporate and governmental customers to sell their unused assets,” he said. I asked him what the buyer side of the marketplace was beginning to look like, and he said, “Our value proposition to large Asian network operators has resonated quite effectively and we expect to enter into many agreements with them over the coming months.” Surely Vixie and the ARIN Board have gotten wind of this. So when Vixie begins a public attack on this company and its business model, he is signaling to the rest of us that ARIN is worried.