Miscellaneous

ICANN has posted an official “GAC Indicative Scorecard” in advance of the Feb. 28 showdown in Brussels between the Governmental Advisory Committee (GAC) and the ICANN Board. The “scorecard” is intended to identify the areas where the small number of governmental officials who participate in GAC differ from the positions developed by ICANN’s open policy development process. The scorecard constitutes a not-so-subtle threat that ICANN should throw out its staff- and community-developed policies and make them conform to the GAC’s preferences. Amusingly, the so-called GAC position follows almost verbatim the text submitted as the “US position” back in January. It’s clear that the US calls the shots in GAC and that other governments, including the EU, are cast in the role of making minor modifications to U.S. initiatives.

There is one interesting modification, however. The new GAC scorecard still allows GAC to conduct an initial review of all new top level domain applications and still allows any GAC member to object to any string “for any reason.” But GAC has been publicly shamed into pulling back from the U.S. government’s recommendation that a single GAC objection, if not overruled by other governments, would kill the application. Instead, the GAC as a whole will “consider” any objection and develop written “advice” that will be forwarded to the Board. This would put such advice in the framework of ICANN’s bylaws, and thus the advice would not be binding on the board.

While it is heartening that public pressure has forced the governments to pull back from their more outrageous demands, the resulting procedure is still arbitrary and an unacceptable incursion on free expression and free markets. For a more complete analysis, see the IGP blog.

For better or worse, my first post here is going to be a rather urgent call to action. I’d like to encourage everyone who reads this blog to register their support for this petition. Entitled, “Say no to the GAC veto,” it expresses opposition to a shocking and dangerous turn in U.S. policy toward the global domain name system. It is a change that would reverse more than a decade of commitment to a transnational, bottom-up, civil society-led approach to governance of Internet identifiers, in favor of a top-down policy making regime dominated by national governments.

If the U.S. Commerce Department has its way, not only would national governments call the shots regarding what new domains could exist and what ideas and words they could and could not use, but they would be empowered to do so without any constraint or guidance from law, treaties or constitutions. Our own U.S. Commerce Department wants to let any government in the world censor a top level domain proposal “for any reason.” A government or two could object simply because they don’t like the person behind it or the ideas it espouses, or because they are offended by the name, and make these objections fatal. This kind of direct state control over content-related matters sets an ominous precedent for the future of Internet governance.

On February 28 and March 1, ICANN and its Governmental Advisory Committee will meet in Brussels to negotiate over ICANN’s program to add new top level domain names to the root. The U.S. Commerce Department has chosen to make this meeting a showdown, in which the so-called Governmental Advisory Committee (GAC) will demand that the organization re-write and re-do policies and procedures that ICANN and its stakeholder groups have been laboring for the past six years to reach agreement on. The GAC veto, assailed by our petition, is only the most objectionable feature of a long list of bad ideas our Commerce Department is dragging into the consultation. We need to make a strong showing to ensure that ICANN has the backbone to resist these pressures.

For those concerned about the role of the state in communications and information, I can’t think of a better, clearer flashpoint for focusing your efforts. A great deal of the Internet’s innovation and revolutionary character came from the fact that it escaped control of national states and began to evolve new, transnational forms of governance. As governments wake up to the power and potential of the Internet, they have increasingly sought to assert traditional forms of control.

The relationship between national governments and ICANN, which came into being during the Clinton administration as an attempt to “privatize” and globalize the policy making and coordination of the Internet’s domain name system, has always been a fraught one. Whatever its flaws (and they are many), ICANN at least gives us a globalized governance regime that is rooted in the Internet’s technical community and users, and one step removed from the miasma of national governments and intergovernmental organizations. The GAC was initially just an afterthought tacked on to ICANN’s structure to appease the European Union. It was – and is still supposed to be – purely advisory in function. Initially it was conceived as simply providing ICANN with information about the way its policies interacted with national policies.

Those of you with long memories may be feeling a sense of deja vu. Didn’t we think we were settling the issue of an intergovernmental takeover of ICANN back in 2005, during the World Summit on the Information Society? Wasn’t it the U.S. government that went into that summit playing to fears of a “UN takeover of the Internet” and swearing that it was protecting the Internet from “burdensome intergovernmental oversight and control”? Wouldn’t most Americans be surprised to learn that the Commerce Department is now using ICANN’s Governmental Advisory Committee to reassert intergovernmental control over what kind of new web sites can be created? Ironically, the US has become the most formidable world advocate of burdensome government oversight and control in Internet governance. And it has done so without any public consultation or legal authority.

Please spread the word about this petition and use whatever channels you have to isolate the Commerce Department’s illegitimate incursions on constitutional free expression guarantees.

It’s my great pleasure to welcome Milton Mueller to the TLF as an occasional contributor. Milton is a professor at the School of Information Studies at Syracuse University. Cyberlaw and Internet policy scholars are certainly familiar with Milton’s impressive body of research on communications, media, and high-tech issues over the past 25 years. You can find much of it on his website here.  Regular readers of the TLF will also recall that I have praised Milton’s one-of-a-kind research on Internet governance issues, going so far as to label him the “de Tocqueville of cyberspace.”  His work with the Internet Governance Project and the Global Internet Governance Academic Network is truly indispensable, and in books like Ruling the Root: Internet Governance and the Taming of Cyberspace (2002) and his more recent Networks and States: The Global Politics of Internet Governance (2010), Milton brilliantly explores the forces shaping Internet policy across the globe. (Also, make sure to listen to this podcast that Jerry Brito did with Milton about the book and his ongoing research.)

More importantly, as I noted in my review of Networks and States last year, Milton has sought to breathe new life into the old cyber-libertarian philosophy that was more prevalent during the Net’s founding era but has lost favor today. In the book, he notes that his “normative stance is rooted in the Internet’s early promise of unfettered and borderless global communication, and its largely accidental and temporary escape from traditional institutional mechanisms of control.”  He has also given our little movement its marching orders, arguing that “we need to find ways to translate classical liberal rights and freedom into a governance framework suitable for the global Internet.  There can be no cyberliberty without a political movement to define, defend, and institutionalize individual rights and freedoms on a transnational scale,” he says.  Terrific stuff, and I very much look forward to Milton developing this framework in more detail here at the TLF in coming years.

Milton will continue to do much of his blogging over at the Internet Governance Project blog, but will drop by here on occasion to cross-post some of those writings or to comment on other pressing Internet policy issues of the day. Welcome to the TLF, Milton!

Today I filed roughly 30 pages’ worth of comments with the Federal Trade Commission (FTC) in its proceeding on “Protecting Consumer Privacy in an Era of Rapid Change: a Proposed Framework for Businesses and Policy Makers.” [Other comments filed in the proceeding can be found here.] Down below, I’ve attached the Table of Contents from my filing so you can see the major themes I’ve addressed, and I’ve also attached the entire document in a Scribd reader. In coming days and weeks, I’ll be expanding upon some of these themes in follow-up essays.

In my filing, I argue that while it remains impossible to predict with precision the impact a new privacy regulatory regime will have on the Internet economy and digital consumers, regulation will have consequences; of that much we can be certain.  As the FTC and other policy makers move forward with proposals to expand regulation in this regard, it is vital that the surreal “something-for-nothing” quality of the current privacy debate cease. Those who criticize data collection or online advertising and call for expanded regulation should be required to provide a strict cost-benefit analysis of the restrictions they would impose upon America’s vibrant digital marketplace.

In particular, it should be clear that the debate over Do Not Track and online advertising regulation is fundamentally tied up with the future of online content, culture, and services. Thus, regulatory advocates must explain how the content and services currently supported by advertising and marketing will be sustained if current online data collection and ad targeting techniques are restricted.

In his [column on Monday](http://www.nytimes.com/2011/02/15/opinion/15brooks.html?ref=davidbrooks), David Brooks put his finger on what I found most interesting about Tyler Cowen’s *[The Great Stagnation](http://www.amazon.com/gp/product/B004H0M8QS?ie=UTF8&tag=jerrybritocom)*. Namely:

>It could be that in an industrial economy people develop a materialist mind-set and believe that improving their income is the same thing as improving their quality of life. But in an affluent information-driven world, people embrace the postmaterialist mind-set. They realize they can improve their quality of life without actually producing more wealth.

As Tyler points out in this book, and catalogued at length in his other excellent book, *[Create Your Own Economy](http://www.amazon.com/gp/product/B002XULWOS?ie=UTF8&tag=jerrybritocom)*, recent increases in happiness come from growth in internal economies. That is, internal to humans. In the past, increased well-being came from material advances: going from not having a toilet to having one, or the invention of cheap air travel. Today it comes from blogging, watching *Lost* on Netflix, listening to a symphony from iTunes, tweeting with your friends, seeing their pictures on Facebook or Path, and learning and collaborating on Wikipedia. As a result, once one secures a certain income to cover basic needs, greater happiness and well-being can be had for virtually nothing.

The problem some see with this is that the Internet sector, while it may give us amazing innovations, produces little by way of revenue or jobs. Brooks also laments that because Americans have not come to grips with this growing distinction between wealth and standard of living, we tend to live beyond our means, which is certainly true in a personal and public fiscal sense.

But I’d like to see this seeming decoupling of wealth and well-being as an opportunity.

You’ll want to visit, follow, friend, and whatever-the-hell-else-people-do the new Digital Liberty project from Americans for Tax Reform.

Digital Liberty’s introductory blog post says:

Digital Liberty is dedicated to preserving a free market by pushing back against heavy regulation and taxation of all things Internet, tech, telecom, and media. DigitalLiberty.net will serve as a resource for those who believe in constitutionally limited government by providing news updates and policy briefs on tech issues, sharing research from likeminded organizations, and serving the grassroots who believe that technology and media innovation thrives best when markets are free and individuals are free to choose.

Sounds good to me.

Digital Liberty isn’t really new, but an expansion of ATR’s work on tech freedom. They tell us that their Web site will provide news and policy briefs, share research from like-minded free-market organizations, and serve the grassroots focusing on free-market tech policy.

That’s DigitalLiberty.net. Right on to my brethren and sistren from the happy home of the leave-us-alone coalition!

Congrats are due to Tim Wu, who’s just been appointed as a senior advisor to the Federal Trade Commission (FTC). Tim is a brilliant and gracious guy; easily one of the most agreeable people I’ve ever had the pleasure of interacting with in my 20 years covering technology policy. He’s a logical choice for such a position in a Democratic administration since he has been one of the leading lights on the Left on cyberlaw issues over the past decade.

That being said, Tim’s ideas on tech policy trouble me deeply. I’ll ignore the fact that he gave birth to the term “net neutrality” and that he chaired the radical regulatory activist group, Free Press. Instead, I just want to remind folks of one very troubling recommendation for the information sector that he articulated in his new book, The Master Switch: The Rise and Fall of Information Empires. While his book was preoccupied with corporate power and the ability of media and communications companies to possess a supposed “master switch” over speech or culture, I’m more worried about the “regulatory switch” that Tim has said the government should throw.

Tim has suggested that a so-called “Separations Principle” govern our modern information economy. “A Separations Principle would mean the creation of a salutary distance between each of the major functions or layers in the information economy,” he says. “It would mean that those who develop information, those who control the network infrastructure on which it travels, and those who control the tools or venues of access must be kept apart from one another.”  Tim calls this a “constitutional approach” because he models it on the separations of power found in the U.S. Constitution.

I critiqued this concept in Part 6 of my ridiculously long multi-part review of his new book, and I discuss it further in a new Reason magazine article, which is due out shortly. As I note in my Reason essay, Tim’s blueprint for “reforming” technology policy represents an audacious industrial policy for the Internet and America’s information sectors.

Via TechDirt, “The news media always need a bogeyman,” says Cracked.com in their well-placed attack on techno-panics, “5 Terrifying Online Trends (Invented By the News Media).” It’s a popular topic here, too.

(HT: Schneier) Here’s a refreshingly careful report on cybersecurity from the Organization for Economic Cooperation and Development’s “Future Global Shocks” project. Notably: “The authors have concluded that very few single cyber-related events have the capacity to cause a global shock.” There will be no cyber-“The Day After.”

Here are a few cherry-picked top lines:

Catastrophic single cyber-related events could include: a successful attack on one of the underlying technical protocols upon which the Internet depends, such as the Border Gateway Protocol which determines routing between Internet Service Providers, and a very large-scale solar flare which physically destroys key communications components such as satellites, cellular base stations and switches. For the remainder of likely breaches of cybersecurity such as malware, distributed denial of service, espionage, and the actions of criminals, recreational hackers and hacktivists, most events will be both relatively localised and short-term in impact.

The vast majority of attacks about which concern has been expressed apply only to Internet-connected computers. As a result, systems which are stand-alone or communicate over proprietary networks or are air-gapped from the Internet are safe from these. However these systems are still vulnerable to management carelessness and insider threats.

Analysis of cybersecurity issues has been weakened by the lack of agreement on terminology and the use of exaggerated language. An “attack” or an “incident” can include anything from an easily-identified “phishing” attempt to obtain password details, a readily detected virus or a failed log-in to a highly sophisticated multi-stranded stealth onslaught. Rolling all these activities into a single statistic leads to grossly misleading conclusions. There is even greater confusion in the ways in which losses are estimated. Cyberespionage is not a “few keystrokes away from cyberwar”, it is one technical method of spying. A true cyberwar is an event with the characteristics of conventional war but fought exclusively in cyberspace.

The hyping of “cyber” threats—bordering on hucksterism—should stop. Many different actors have a good deal of work to do on securing computers, networks, and data. But there is no crisis, and the likelihood of any cybersecurity failure causing a crisis is extremely small.

My colleague Dr. Richard Williams, who serves as the Director of Policy Research at the Mercatus Center, has just released an excellent little primer on “The Impact of Regulation on Investment and the U.S. Economy.” Those who attempt to track and analyze regulation in the communications and high-tech arenas will find the piece of interest since it provides a framework for how to evaluate the sensibility of new rules.

Williams, who is an expert in benefit-cost analysis and risk analysis, opens the piece by noting that:

The total cost of regulation in the United States is difficult to calculate, but one estimate puts the cost at $1.75 trillion in 2008. Total expenditures by the U.S. government were about $2.9 trillion in 2008. Thus, out of a total of $4.6 trillion in resources allocated by the federal government, 38% of the total is for regulations.

If regulations always produced goods and services that were valued as highly as market-produced goods and services, then this would not be a cause for alarm. But that is precisely what is not known. In fact, there is evidence to the contrary for many regulations. Where regulations take resources out of the private sector for less valuable uses, overall consumer welfare is diminished. … Regulation also impacts the creation and sustainability of jobs… [which] can have very real consequences for the economy.
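Williams’s quoted figures hold up to a quick back-of-the-envelope check (the 2008 numbers below come straight from the passage above; the quote rounds the $4.65 trillion total to $4.6 trillion):

```python
# Back-of-envelope check of the quoted 2008 figures (in trillions of dollars).
regulation_cost = 1.75   # one estimate of total U.S. regulatory cost, 2008
federal_spending = 2.9   # approximate total federal expenditures, 2008

total = regulation_cost + federal_spending   # 4.65 trillion (quote rounds to 4.6)
share = regulation_cost / total              # regulation's share of resources allocated

print(f"{total:.2f} trillion total, regulation share {share:.0%}")
# prints: 4.65 trillion total, regulation share 38%
```

So the 38% figure is simply regulatory costs divided by the sum of regulatory costs and federal spending, consistent with the quote.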

He also explains how regulation can affect international competitiveness, especially when burdensome rules limit the ability of companies to attract capital for new innovations and investment.