By Geoffrey Manne and Berin Szoka
Everyone loves to hate record labels. For years, copyright-bashers have ranted about the “Big Labels” trying to thwart new models for distributing music, in terms that would make JFK assassination conspiracy theorists blush. Now they’ve turned their sights on the pending merger between Universal Music Group and EMI, insisting the deal would be bad for consumers. There’s even a Senate Antitrust Subcommittee hearing tomorrow, led by Senator Herb “Big is Bad” Kohl.
But this is a merger that users of Spotify, Apple’s iTunes, and the wide range of other digital services ought to love. UMG has done more than any other label to support the growth of such services, cutting licensing deals with hundreds of distribution outlets—often well before other labels. Piracy has been a significant concern for the industry, and UMG seems to recognize that only “easy” can compete with “free.” The company has embraced the reality that music distribution paradigms are changing rapidly to keep up with consumer demand. So why are groups like Public Knowledge opposing the merger?
Critics contend that the merger will elevate UMG’s already substantial market share and “give it the power to distort or even determine the fate of digital distribution models.” For these critics, the only record labels that matter are the four majors, and four is simply better than three. But this assessment hews to the outmoded, “big is bad” structural analysis that has been consistently demolished by economists since the 1970s. Instead, the relevant touchstone for all merger analysis is whether the merger would give the merged firm a new incentive and ability to engage in anticompetitive conduct. But there’s nothing UMG can do with EMI’s catalogue under its control that it can’t do now. If anything, UMG’s ownership of EMI should accelerate the availability of digitally distributed music.
To see why this is so, consider what digital distributors—whether of the pay-as-you-go, iTunes type, or the all-you-can-eat, Spotify type—most want: Access to as much music as possible on terms on par with those of other distribution channels. For the all-you-can-eat distributors this is a sine qua non: their business models depend on being able to distribute as close as possible to all the music every potential customer could want. But given UMG’s current catalogue, it already has the ability, if it wanted to exercise it, to extract monopoly profits from these distributors, as they simply can’t offer a viable product without UMG’s catalogue.
That is the title of my [new working paper](http://mercatus.org/publication/internet-security-without-law-how-service-providers-create-order-online), out today from Mercatus. The abstract:
> Lichtman and Posner argue that legal immunity for Internet service providers (ISPs) is inefficient on standard law and economics grounds. They advocate indirect liability for ISPs for malware transmitted on their networks. While their argument accurately applies the conventional law and economics toolkit, it ignores the informal institutions that have arisen among ISPs to mitigate the harm caused by malware and botnets. These informal institutions carry out the functions of a formal legal system—they establish and enforce rules for the prevention, punishment, and redress of cybersecurity-related harms.
> In this paper, I document the informal institutions that enforce network security norms on the Internet. I discuss the enforcement mechanisms and monitoring tools that ISPs have at their disposal, as well as the fact that ISPs have borne significant costs to reduce malware, despite their lack of formal legal liability. I argue that these informal institutions perform much better than a regime of formal indirect liability. The paper concludes by discussing how the fact that legal polycentricity is more widespread than is often recognized should affect law and economics scholarship.
While I frame the paper as a reply to Lichtman and Posner, I think it also conveys information that is relevant to the debate over CISPA and related Internet security bills. Most politicians and commentators do not understand the extent to which Internet security is peer-produced, or why security institutions have developed in the way they have. I hope that my paper will lead to a greater appreciation of the role of bottom-up governance institutions on the Internet and beyond.
Comments on the paper are welcome!
John Palfrey of the Berkman Center at Harvard Law School discusses his new book, written with Urs Gasser, Interop: The Promise and Perils of Highly Interconnected Systems. Interoperability is a term used to describe the standardization and integration of technology. Palfrey discusses how the term can describe many relationships in the world and need not be limited to technical systems. He also describes the potential pitfalls of too much interoperability. Palfrey finds that greater interoperability can lead to more competition, collaboration, and the development of standards, but it can also erode protections for privacy and security. The trick is to get to the right level of interoperability. If systems become too complex, nobody can understand them and they can become unstable; Palfrey suggests the current financial crisis could be an example of this. Palfrey also describes the difficulty of finding the proper role for government in encouraging or discouraging interoperability.
There was an important article about online age verification in The New York Times yesterday entitled, “Verifying Ages Online Is a Daunting Task, Even for Experts.” It’s definitely worth a read since it reiterates the simple truth that online age verification is enormously complicated and hugely contentious (especially legally). It’s also worth reading since this issue might be getting hot again as Facebook considers allowing kids under 13 on its site.
Just five years ago, age verification was a red-hot tech policy issue. The rise of MySpace and social networking in general had sent many state AGs, other lawmakers, and some child safety groups into full-blown moral panic mode. Some wanted to ban social networks in schools and libraries (recall that a 2006 House measure proposing just that actually received 410 votes, although the measure died in the Senate), but mandatory online age verification for social networking sites was also receiving a lot of support. This generated much academic and press inquiry into the wisdom and practicality of mandatory age verification as an online safety strategy. Personally, I was spending almost all my time covering the issue between late 2006 and mid-2007. The title of one of my papers on the topic reflected the frustration many shared about the issue: “Social Networking and Age Verification: Many Hard Questions; No Easy Solutions.”
Simply put, too many people were looking for an easy, silver-bullet solution to complicated problems regarding how kids get online and how to keep them safe once they get there. For a time, age verification became that silver bullet for those who felt that “we must do something” politically to address online safety concerns. Alas, mandatory age verification was no silver bullet. As I summarized in this 2009 white paper, “Five Online Safety Task Forces Agree: Education, Empowerment & Self-Regulation Are the Answer,” all previous research and task force reports looking into this issue have concluded that a diverse toolbox and a “layered approach” must be brought to bear on these problems. There are no simple fixes. Specifically, here’s what each of the major online child safety task forces that have been convened since 2000 had to say about the wisdom of mandatory age verification:
As Jerry noted [ten days ago](http://jerrybrito.org/post/24687446662/an-update-on-wcitleaks-org), [our little side project](http://wcitleaks.org/) got some good press right after we launched it. I am delighted to report that the media love continues. On Saturday, WCITLeaks was covered by [Talking Points Memo](http://idealab.talkingpointsmemo.com/2012/06/un-proposals-to-regulate-internet-are-troubling-leaked-documents-reveal.php), and a [Wall Street Journal article](http://online.wsj.com/article/SB10001424052702303822204577470532859210296.html) appeared online last night and in print this morning.
I think it’s great that both left- and right-of-center publications are covering WCIT and the threat to our online freedoms posed by international bureaucracy. But I worry that people will infer that since this is not a left vs. right issue, it must be a USA vs. the world issue. This is an unhelpful way to look at it.
**This is an Internet users vs. their governments issue.** Who benefits from increased ITU oversight of the Internet? Certainly not ordinary users in foreign countries, who would then be censored and spied upon by their governments with full international approval. The winners would be autocratic regimes, not their subjects. And let’s not pretend the US government is innocent on this score; it intercepts and records international Internet traffic all the time, and the SOPA/PIPA kerfuffle shows how much some interests, especially Big Content, want to use the government to censor the web.
The bottom line is that yes, the US should walk away from WCIT, but not because the Internet is our toy and we want to make the rules for the rest of the world. The US should walk away from WCIT as part of a repentant rejection of Internet policy under Bush and Obama, which has consistently carved out a greater role for the government online. I hope that the awareness we raise through WCITLeaks will not only highlight how foolish the US government is for playing the lose-lose game with the ITU, but how hypocritical it is for preaching net freedom while spying on, censoring, and regulating its own citizens online.
Today, WCITLeaks.org posted a new document called TD-62. It is a compilation of all the proposals for modification of the International Telecommunication Regulations (ITRs), which will be renegotiated at WCIT in Dubai this December. Some of the most troubling proposals include:
- The modification of section 1.4 and addition of section 3.5, which would make some or all ITU-T “Recommendations” mandatory. ITU-T “Recommendations” compete with the standards developed by bodies like the Internet Engineering Task Force (IETF), which proposes new standards for protocols and best practices on a completely voluntary and transparent basis.
- The modification of section 2.2 to explicitly include Internet traffic termination as a regulated telecommunication service. Under the status quo, Internet traffic is completely exempt from regulation under the ITRs because it is a “private arrangement” under article 9. If this proposal—supported by Russia and Iran—were adopted, Internet traffic would be metered along national boundaries and billed to the originator of the traffic, as is currently done with international telephone calls. This would create a new revenue stream for corrupt, autocratic regimes and raise the cost of accessing international websites and information on the Internet.
- The addition of a new section 2.13 to define spam in the ITRs. This would create an international legal excuse for governments to inspect our emails. This provision is supported by Russia, several Arab states, and Rwanda.
- The addition of a new section 3.8, the text of which is still undefined, that would give the ITU a role in allocating Internet addresses. The Internet Society points out in a comment that this “would be disruptive to the existing, successful mechanism for allocating/distributing IPv6 addresses.”
- The modification of section 4.3, subsection a) to introduce content regulation, starting with spam and malware, in the ITRs for the first time. The ITRs have always been about the pipes, not the content that flows through them. As the US delegation comments, “this text suggests that the ITU has a role in content related issues. We do not believe it does.” This is dangerous because many UN members do not have the same appreciation for freedom of speech that many of us do.
- The addition of a new section 8.2 to regulate online crime. Again, this would introduce content regulation into the ITRs.
- The addition of a new section 8.5, proposed by China, that would give member states what the Internet Society describes as “a very active and inappropriate role in patrolling and enforcing newly defined standards of behaviour on telecommunication and Internet networks and in services.”
These proposals show that many ITU member states want to use international agreements to regulate the Internet by crowding out bottom-up institutions, imposing charges for international communication, and controlling the content that consumers can access online.
In my most recent weekly Forbes column, “Common Sense About Kids, Facebook & The Net,” I consider the wisdom of an online petition that the child safety advocacy group Common Sense Media is pushing, which demands that Facebook give up any thought of letting kids under the age of 13 on the site. “There is absolutely no proof of any meaningful social or educational value of Facebook for children under 13,” their petition insists. “Indeed, there are very legitimate concerns about privacy, as well as its impact on children’s social, emotional, and cognitive development.” Common Sense Media doesn’t offer any evidence to substantiate those claims, but one can sympathize with some of the general worries. Nonetheless, as I argue in my essay:
> Common Sense Media’s approach to the issue is short-sighted. Calling for a zero-tolerance, prohibitionist policy toward kids on Facebook (and interactive media more generally) is tantamount to a bury-your-head-in-sand approach to child safety. Again, younger kids are increasingly online, often because their parents allow or even encourage it. To make sure they get online safely and remain safe, we’ll need a different approach than Common Sense Media’s unworkable “just-say-no” model.
>
> Think about it this way: Would it make sense to start a petition demanding that kids be kept out of town squares, public parks, or shopping malls? Most of us would find the suggestion ludicrous.
The Wall Street Journal reports that “The Justice Department is conducting a wide-ranging antitrust investigation into whether cable companies are acting improperly to quash nascent competition from online video.” In particular, the DOJ is concerned that data caps may discourage consumers from switching to online video providers like Hulu and Netflix. The following statement can be attributed to Berin Szoka, President of TechFreedom:
> It’s hard to see how tiered broadband pricing keeps users tethered to their cable service. Even watching ten hours of Hulu or Netflix a day wouldn’t exceed Comcast’s 300 GB basic data tier. And Comcast customers can buy additional blocks of 50 GB for just $10/month—enough for nearly two more hours a day of streamed video. Such tiers provide a much-needed incentive for online content providers to economize on bandwidth. They also allow ISPs to offer fairer broadband pricing, charging light users less than heavy users. Consumers might have been better off if cable companies could have simply charged online video providers for wholesale bandwidth use, but the FCC’s net neutrality rules bar that.
>
> Counting cable content against caps might seem more fair, but it’s not necessarily something the law should mandate. Discriminating against a competitor isn’t a problem under antitrust law unless, on net, it harms consumers. Would consumers really be better off if their cable viewing reduced the amount of data available for streaming competing online video services? As long as the basic tier’s cap is high enough, few users will ever exceed it anyway—leaving consumers free to experiment with alternatives to cable subscriptions, just as cable providers are experimenting with new ways of offering cable content on multiple devices at no extra charge.
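For readers who want to check the arithmetic behind those figures, here is a minimal back-of-the-envelope sketch in Python. It is not from the original statement; the roughly 1 GB-per-hour streaming rate and the 30-day billing month are my assumptions, chosen only to show how the cited numbers line up.

```python
# A back-of-the-envelope check of the data-cap arithmetic quoted above.
# Assumptions (mine, not the statement's): ~1 GB per hour of streamed video
# and a 30-day billing month.

GB_PER_HOUR = 1.0      # assumed average streaming rate
BASIC_TIER_GB = 300    # Comcast basic data tier cited in the statement
EXTRA_BLOCK_GB = 50    # add-on block size cited in the statement ($10/month)
DAYS_PER_MONTH = 30

def monthly_usage_gb(hours_per_day: float) -> float:
    """Approximate monthly data use for a given daily streaming habit."""
    return hours_per_day * GB_PER_HOUR * DAYS_PER_MONTH

# Ten hours of streaming a day lands right at the 300 GB tier.
print(monthly_usage_gb(10))  # 300.0

# A 50 GB add-on block buys just under two extra hours of streaming per day.
print(EXTRA_BLOCK_GB / (GB_PER_HOUR * DAYS_PER_MONTH))  # ~1.67
```

Under those assumptions, a ten-hour-a-day streamer sits right around the basic tier, and each $10 block adds roughly an hour and two-thirds of daily viewing, which is the comparison the statement is drawing.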
I’m impressed with the job Ryan Radia did in this Federalist Society podcast/debate about CISPA, the Cyber Intelligence Sharing and Protection Act.
It’s also notable how his opponent Stewart Baker veers into a strange ad hominem against “privacy groups” in his rejoinder to Ryan. Baker speaks as though arguable overbreadth in privacy statutes written years ago makes it appropriate to scythe down all law that might affect information sharing for cybersecurity purposes. That’s what language like “[n]otwithstanding any other provision of law” would do, and it’s in the current version of the bill three times.
I’m pretty rough on all the Internet and info-tech policy books that I review. There are two reasons for that. First, the vast majority of tech policy books being written today should never have been books in the first place. Most of them would have worked just fine as long-form (magazine-length) essays. Too many authors stretch a promising thesis into a long-winded, highly repetitive narrative just to say they’ve written an entire book about a subject. Second, many info-tech policy books are poorly written or poorly argued. I’m not going to name names, but I am frequently unimpressed by the quality of many books being published today about digital technology and online policy issues.
The books of Harvard University cyberlaw scholars John Palfrey and Urs Gasser offer a welcome break from this mold. Their recent books, Born Digital: Understanding the First Generation of Digital Natives, and Interop: The Promise and Perils of Highly Interconnected Systems, are engaging and extremely well-written books that deserve to be books. There’s no wasted space or mindless filler. It’s all substantive and it’s all interesting. I encourage aspiring tech policy authors to examine their works for a model of how a book should be done.
In a 2008 review, I heaped praise on Born Digital and declared that this “fine early history of this generation serves as a starting point for any conversation about how to mentor the children of the Web.” I still recommend it highly today. I’m going to be a bit more critical of their new book, Interop, but I assure you that it is a text you absolutely must have on your shelf if you follow digital policy debates. It’s a supremely balanced treatment of a complicated and sometimes quite contentious set of information policy issues.
In the end, however, I am concerned about the open-ended nature of the standard that Palfrey and Gasser develop to determine when government should intervene to manage or mandate interoperability between or among information systems. I’ll push back against their amorphous theory of “optimal interoperability” and offer an alternative framework that suggests patience, humility, and openness to ongoing marketplace experimentation as the primary public policy virtues that lawmakers should instead embrace.