On Wednesday morning, the U.S. House of Representatives Energy & Commerce Subcommittee on Communications and Technology will hold a hearing on “The Future of Video.”

As we Tech Liberators have long argued on these pages (1, 2, 3, 4, 5, 6, 7), government’s hands have been all over the video market since its inception, primarily in the form of the FCC’s rulemaking and enforcement enabled by the Communications Act. While the 1996 Telecommunications Act scrapped some obsolete video regulations, volumes of outdated rules remain law, and the FCC wields vast and largely unchecked authority to regulate video providers of all shapes and sizes. Wednesday’s hearing offers members an excellent opportunity to question every law that enables governmental intervention in, and restricts liberty in, the television market.

It’s high time for Congress to free up America’s video marketplace and unleash the forces of innovation. Internet entrepreneurs should be free to experiment with novel approaches to creating, distributing, and monetizing video content without fear of FCC regulatory intervention. At the same time, established media businesses—including cable operators, satellite providers, telecom companies, broadcast networks and affiliates, and studios—should compete on a level playing field, free from both federal mandates and special regulatory treatment.

The Committee should closely examine the Communications and Copyright Acts, and rewrite, or repeal outright, provisions of law that inhibit a free video marketplace. Adam Thierer has chronicled many such laws. The Committee should, among other reforms, consider:

Here’s to the success of Sen. Jim DeMint, Rep. Steve Scalise, and other members of Congress who are working to achieve real reform and ensure that the future of video is bounded only by the dreams of entrepreneurs.

Tyler Cowen [asks on his blog today](http://marginalrevolution.com/marginalrevolution/2012/06/today-is-probably-a-funny-blogging-day.html):

>By the way, didn’t it just [come out in](http://www.washingtonpost.com/world/national-security/us-israel-developed-computer-virus-to-slow-iranian-nuclear-efforts-officials-say/2012/06/19/gJQA6xBPoV_story.html) *The Washington Post* that the United States helped attack Iran with Flame, Stuxnet and related programs? If they did this to us, wouldn’t we consider it an act of war? Didn’t we just take a major step toward militarizing the internet? Doesn’t it seem plausible to you that the cyber-assault is not yet over and thus we face immediate questions looking forward? Won’t somebody fairly soon try to do it to us? Won’t it encourage substitution into more dangerous biological weapons?

Those are good questions. Let’s take them in turn.

**If they did it to us, would we consider it an act of war?** I tend to agree with [Franz-Stefan Gady’s perspective](http://www.huffingtonpost.com/franzstefan-gady/the-cyberwar-hoax_b_1549927.html) that Stuxnet should not be considered an act of war. One of the most overlooked aspects of the great reporting done by the NYT and WaPo uncovering the details of Stuxnet is that the U.S. did not “hack in” to Iran’s nuclear facilities from thousands of miles away. Instead it [had to rely on Israel’s](http://jerrybrito.org/post/24193112996/nyt-reveals-the-backstory-on-stuxnet) extensive intelligence apparatus not only to understand the target, but also to deliver the worm. That is, humans had to physically infiltrate Iran’s operations to engage in the spying and then the sabotage.

Espionage [is not an act of war](http://legal-dictionary.thefreedictionary.com/espionage) under international law. Nations expect and tolerate espionage as an inevitable political practice. Spies are sometimes prosecuted criminally when caught, sometimes traded for other spies, and often simply expelled from the country. Sabotage I’m less certain about, but I think it inhabits a similar space as espionage: frowned upon, prosecuted criminally, but not an act of war *per se*. (I’ve been trying to find the answer to that question in vain, so if any international law experts would like to send me the answer, I’d appreciate it.)

So what do we have with Flame? It’s essentially spying, albeit in a frighteningly efficient manner. But it’s not an act of war. Stuxnet is similarly not an act of war if we assume sabotage is not. There’s little difference between Stuxnet and a spy infiltrating Natanz and throwing a wrench into the works. Stuxnet is just the wrench. Now, it’s key to point out what makes Stuxnet political sabotage rather than terrorism: there were no deaths, much less civilian deaths.

**Did we take a big step in militarizing the Internet? Won’t somebody fairly soon try to do it to us?** Well, it’s already happening and it’s been happening for years. U.S. government networks are very often the subject of espionage–and maybe even sabotage–by foreign states. If something feels new about Stuxnet, it’s that for the first time we have definitive attribution to a state. As a result, the U.S. loses moral high ground when it comes to cybersecurity, and if someone doing it to the U.S. gets caught, they will be able to say, “You started it.” But they’re already doing it. Not that it’s necessarily a good thing, but the militarization of cyberspace is not just inevitable, it’s been [well underway](http://en.wikipedia.org/wiki/United_States_Cyber_Command) for some time.

Finally, Tyler asks, **Won’t it encourage substitution into more dangerous biological weapons?** The answer to that, I think, is a definitive no. “Cyber weapons” are completely different from biological weapons, and from chemical, conventional, and certainly nuclear ones. For one thing, they are [nowhere near](http://jerrybrito.org/post/23994462855/the-united-states-is-more-secure-than-washington-wants) [as dangerous](http://jerrybrito.org/post/23994472311/how-scary-was-the-white-houses-cyber-simulation-for). No one has ever died from a cyber attack. Again, short of already being in a shooting war, these capabilities won’t be employed beyond espionage and surgical sabotage like Stuxnet.

That raises the question, however: if we’re in a shooting war with a Libya or a Syria, say, will they resort to cyber? Perhaps, but as Thomas Rid has pointed out, the more destructive a “cyber weapon,” the more [difficult and costly](http://jerrybrito.org/post/23994467276/why-anonymous-will-never-be-able-to-take-down-the-power) it is to employ. Massively so. This is why it’s probably only the U.S. at this point that has the capability to pull off an operation as difficult as Stuxnet, and then only with the assistance of Israel’s existing traditional intelligence operation. Neither al Qaeda, nor Anonymous, nor even Iran will be able to carry out an operation on the same level as Stuxnet any time soon.

So, Tyler, you can sleep well. For now at least. ;o) Yes, we should have a national discussion about what sorts of weapons we want our government employing, and what sort of authorization and oversight should be required, but we should not panic or think we’re a few keystrokes away from Armageddon. The more important question to me is, [why does one keep $2.85 million in bitcoin?](http://jerrybrito.org/post/25726774959/someone-is-holding-2-85-million-in-bitcoins)

Thanks to TLFers Jerry Brito and Eli Dourado, and the anonymous individual who leaked a key planning document for the International Telecommunication Union’s World Conference on International Telecommunications (WCIT) on Jerry and Eli’s inspired WCITLeaks.org site, we now have a clearer view of what a handful of regimes hope to accomplish at WCIT, scheduled for December in Dubai, U.A.E.

Although there is some danger of oversimplification, essentially a number of member states in the ITU, an arm of the United Nations, are pushing for an international treaty that would give their governments a much more powerful role in the architecture of the Internet and the economics of cross-border interconnection. Dispensing with the fancy words, it represents a desperate, last-ditch effort by several authoritarian nations to regain control of their national telecommunications infrastructure and operations.

A little history may help. Until the 1990s, the U.S. was the only country where telephone companies were owned by private investors. Even then, from AT&T and GTE on down, they were government-sanctioned monopolies. Just about everywhere else, including western democracies such as the U.K., France, and Germany, the phone company was a state-owned monopoly. Its president generally reported to the Minister of Telecommunications.

Since most phone companies were large state agencies, the ITU, as a UN organization, could wield a lot of clout in terms of telecom standards, policy, and governance–and indeed that was the case for much of the last half of the 20th century. That changed, for nations as much as for the ITU, with the advent of privatization and the introduction of wireless technology. In a policy change that directly connects to the issues at stake here, just about every country in the world embarked on full or partial telecom privatization and, moreover, allowed at least one private company to build wireless telecom infrastructure. As ITU membership was reserved for governments, not enterprises, the ITU’s political influence as a global standards and policy agency has since diminished greatly. Add to that the concurrent emergence of the Internet, which changed the fundamental architecture and cost of public communications from a capital-intensive hierarchical mechanism to inexpensive peer-to-peer connections, and the stage was set for today’s environment where every smartphone owner is a reporter and videographer. Telecommunications, once part of the commanding heights of government control, was decentralized down to street level.


When it comes to the UN exerting greater control over Internet governance, all of us who follow Internet policy in the U.S. seem to be on the same page: keep the Internet free of UN control. Many folks have remarked how rare this moment of agreement among all sides–right, left, and center–can be. And Congress seized that moment yesterday, [unanimously approving](http://techdailydose.nationaljournal.com/2012/06/house-committee-votes-to-preve.php) a bipartisan resolution calling on the Secretary of State “to promote a global Internet free from government control[.]”

However, below the surface of this “Kumbaya moment,” astute observers will have noticed quite a bit of eye-rolling. Adam Thierer and I wrote [a piece](http://www.theatlantic.com/technology/archive/2012/06/a-note-to-congress-the-united-nations-isnt-a-serious-threat-to-internet-freedom-151-but-you-are/258709/) for *The Atlantic* pointing out the obvious fact that when a unanimous Congress votes “to promote a global Internet free from government control,” they are being hypocrites. That’s a pretty uncontroversial statement, as far as I can tell, but of course no one likes a skunk at the garden party.

Count me among those who are rolling their eyes as the Department of Justice initiates an investigation into whether cable companies are using data caps to strong-arm so-called “over-the-top” on-demand video providers like Netflix, Walmart’s Vudu, Amazon.com, and YouTube.

The Wall Street Journal reported last week that DoJ investigators “are taking a particularly close look at the data caps that pay-TV providers like Comcast and AT&T Inc. have used to deal with surging video traffic on the Internet. The companies say the limits are needed to stop heavy users from overwhelming their networks.”

Internet video providers like Netflix have expressed concern that the limits are aimed at stopping consumers from dropping cable television and switching to online video providers. They also worry that cable companies will give priority to their own online video offerings on their networks to stop subscribers from leaving.

Here are five reasons why the current anticompetitive Sturm und Drang is an absurd waste of time and might end up doing more harm than good.


This morning, the Secretary-General of the ITU, Hamadoun Touré, [gave a speech at the WCIT Council Working Group](http://www.itu.int/en/osg/speeches/Pages/2012-06-20.aspx) meeting in Geneva in which he said,

> It has come as a surprise — and I have to say as a great disappointment — to see that some of those who have had access to proposals presented to this working group have gone on to publicly mis-state or distort them in public forums, sometimes to the point of caricature.

> These distortions and mis-statements could be found plausible by credulous members of the public, and could even be used to influence national parliaments, given that the documents themselves are not officially available — in spite of recent developments, **including the leaking of Document TD 64.**

> As many of you surely know, a group of civil society organizations has written to me to request public access to the proposals under discussion.

> **I would therefore be grateful if you could consider this matter carefully, as I intend to make a recommendation to the forthcoming session of Council regarding open access to these documents, and in particular future versions of TD 64.**

> I would also be grateful if you would consider the opportunity of conducting an open consultation regarding the ITRs. I also intend to make a recommendation to Council in this regard as well.

By Geoffrey Manne and Berin Szoka

Everyone loves to hate record labels. For years, copyright-bashers have ranted about the “Big Labels” trying to thwart new models for distributing music in terms that would make JFK assassination conspiracy theorists blush. Now they’ve turned their sights on the pending merger between Universal Music Group and EMI, insisting the deal would be bad for consumers. There’s even a Senate Antitrust Subcommittee hearing tomorrow, led by Senator Herb “Big is Bad” Kohl.

But this is a merger users of Spotify, Apple’s iTunes and the wide range of other digital services ought to love. UMG has done more than any other label to support the growth of such services, cutting licensing deals with hundreds of distribution outlets—often well before other labels. Piracy has been a significant concern for the industry, and UMG seems to recognize that only “easy” can compete with “free.” The company has embraced the reality that music distribution paradigms are changing rapidly to keep up with consumer demand. So why are groups like Public Knowledge opposing the merger?

Critics contend that the merger will elevate UMG’s already substantial market share and “give it the power to distort or even determine the fate of digital distribution models.” For these critics, the only record labels that matter are the four majors, and four is simply better than three. But this assessment hews to the outmoded, “big is bad” structural analysis that has been consistently demolished by economists since the 1970s. Instead, the relevant touchstone for all merger analysis is whether the merger would give the merged firm a new incentive and ability to engage in anticompetitive conduct. But there’s nothing UMG can do with EMI’s catalogue under its control that it can’t do now. If anything, UMG’s ownership of EMI should accelerate the availability of digitally distributed music.

To see why this is so, consider what digital distributors—whether of the pay-as-you-go, iTunes type, or the all-you-can-eat, Spotify type—most want: access to as much music as possible on terms on par with those of other distribution channels. For the all-you-can-eat distributors this is a *sine qua non*: their business models depend on being able to distribute as close as possible to all the music every potential customer could want. But given UMG’s current catalogue, it already has the ability, if it wanted to exercise it, to extract monopoly profits from these distributors, as they simply can’t offer a viable product without UMG’s catalogue.

That is the title of my [new working paper](http://mercatus.org/publication/internet-security-without-law-how-service-providers-create-order-online), out today from Mercatus. The abstract:

> Lichtman and Posner argue that legal immunity for Internet service providers (ISPs) is inefficient on standard law and economics grounds. They advocate indirect liability for ISPs for malware transmitted on their networks. While their argument accurately applies the conventional law and economics toolkit, it ignores the informal institutions that have arisen among ISPs to mitigate the harm caused by malware and botnets. These informal institutions carry out the functions of a formal legal system—they establish and enforce rules for the prevention, punishment, and redress of cybersecurity-related harms.

> In this paper, I document the informal institutions that enforce network security norms on the Internet. I discuss the enforcement mechanisms and monitoring tools that ISPs have at their disposal, as well as the fact that ISPs have borne significant costs to reduce malware, despite their lack of formal legal liability. I argue that these informal institutions perform much better than a regime of formal indirect liability. The paper concludes by discussing how the fact that legal polycentricity is more widespread than is often recognized should affect law and economics scholarship.

While I frame the paper as a reply to Lichtman and Posner, I think it also conveys information that is relevant to the debate over CISPA and related Internet security bills. Most politicians and commentators do not understand the extent to which Internet security is peer-produced, or why security institutions have developed in the way they have. I hope that my paper will lead to a greater appreciation of the role of bottom-up governance institutions on the Internet and beyond.

Comments on the paper are welcome!

John Palfrey of the Berkman Center at Harvard Law School discusses his new book, written with Urs Gasser, *Interop: The Promise and Perils of Highly Interconnected Systems*. Interoperability is a term used to describe the standardization and integration of technology. Palfrey explains that the term can describe many relationships in the world and need not be limited to technical systems. He finds that greater levels of interoperability can lead to greater competition, collaboration, and the development of standards, but it can also weaken protections for privacy and security. The trick is to get to the right level of interoperability. If systems become too complex, nobody can understand them and they can become unstable; Palfrey suggests the current financial crisis may be an example of this. He also describes the difficulty of finding the proper role for government in encouraging or discouraging interoperability.



There was an important article about online age verification in *The New York Times* yesterday entitled, “Verifying Ages Online Is a Daunting Task, Even for Experts.” It’s definitely worth a read since it reiterates the simple truth that online age verification is enormously complicated and hugely contentious (especially legally). It’s also worth reading since this issue might be getting hot again as Facebook considers allowing kids under 13 on its site.

Just five years ago, age verification was a red-hot tech policy issue. The rise of MySpace and social networking in general had sent many state AGs, other lawmakers, and some child safety groups into full-blown moral panic mode. Some wanted to ban social networks in schools and libraries (recall that a 2006 House measure proposing just that actually received 410 votes, although the measure died in the Senate), but mandatory online age verification for social networking sites was also receiving a lot of support. This generated much academic and press inquiry into the sensibility and practicality of mandatory age verification as an online safety strategy. Personally, I was spending almost all my time covering the issue between late 2006 and mid-2007. The title of one of my papers on the topic reflected the frustration many shared about the issue: “Social Networking and Age Verification: Many Hard Questions; No Easy Solutions.”

Simply put, too many people were looking for an easy, silver-bullet solution to complicated problems regarding how kids get online and how to keep them safe once they get there. For a time, age verification became that silver bullet for those who felt that “we must do something” politically to address online safety concerns. Alas, mandatory age verification was no silver bullet. As I summarized in this 2009 white paper, “Five Online Safety Task Forces Agree: Education, Empowerment & Self-Regulation Are the Answer,” all previous research and task force reports looking into this issue have concluded that a diverse toolbox and a “layered approach” must be brought to bear on these problems. There are no simple fixes. Specifically, here’s what each of the major online child safety task forces that have been convened since 2000 had to say about the wisdom of mandatory age verification: