October 2012

This Wednesday the Information Economy Project at George Mason University will present the latest installment of its Tullock Lecture series, featuring Thomas G. Krattenmaker, former director of research at the FCC. Here is the notice:

Thomas G. Krattenmaker
Former Director of Research, FCC
Former Professor of Law, Georgetown University Law Center
Former Dean and Professor, William and Mary Law School

Wednesday, October 17, 2012

The Information Economy Project at George Mason University
proudly presents The Tullock Lecture on Big Ideas About Information

4:00 – 5:30 pm @ Hazel Hall Room 215
GMU School of Law, 3301 Fairfax Drive, Arlington, Va.
(Orange Line: Virginia Square-GMU Metro)
Reception to Follow in the Levy Atrium, 5:30-6:30 pm

In its June 21, 2012 opinion in FCC v. Fox, the Supreme Court vacated reasoned judgments of the Second Circuit, without one sentence questioning the validity or wisdom of those judgments. Although the Court absolved Fox on a technicality, its opinion appears to reflect a post-modern approach to First Amendment jurisprudence concerning broadcast speech, whereby neither precedent nor principle controls outcomes. This indulgent approach to a government censorship bureau appears to acquiesce in an unconfined, unprincipled, and unwarranted seizure of regulatory power by the FCC. The Fox opinion thus compounds and enables a grave regulatory failure; whether any sound broadcast indecency policy or legal regime is feasible is perhaps debatable, but the Federal Communications Commission is wholly incapable of administering such a regime. The lecture will be preceded by a short introduction by Fernando Laguarda.

Register here.

On the front page of today’s New York Times, Defense Secretary Leon Panetta again sounds the alarm about a "cyber Pearl Harbor."

“An aggressor nation or extremist group could use these kinds of cyber tools to gain control of critical switches,” Mr. Panetta said. “They could derail passenger trains, or even more dangerous, derail passenger trains loaded with lethal chemicals. They could contaminate the water supply in major cities, or shut down the power grid across large parts of the country.”

Defense officials insisted that Mr. Panetta’s words were not hyperbole, and that he was responding to a recent wave of cyberattacks on large American financial institutions. He also cited an attack in August on the state oil company Saudi Aramco, which infected more than 30,000 computers and rendered them useless.

Not hyperbole, hmm? It’s the usual cyber fear two-step. First lay out a doomsday scenario involving hackers remotely derailing trains full of lethal chemicals. Second, cite recent attacks as evidence that the threat is real. Except let’s look at the cited evidence.

Here’s how the New York Times itself described the recent attacks on banks:

The banks suffered denial of service attacks, in which hackers barrage a Web site with traffic until it is overwhelmed and shuts down. Such attacks, while a nuisance, are not technically sophisticated and do not affect a company’s computer network — or, in this case, funds or customer bank accounts. But they are enough to upset customers.

Explosive stuff. And what about that attack on Saudi Aramco? Serious, to be sure, even if no control systems were breached, but as Reuters recently reported,

One or more insiders with high-level access are suspected of assisting the hackers who damaged some 30,000 computers at Saudi Arabia’s national oil company last month, sources familiar with the company’s investigation say. …

The hackers’ apparent access to a mole, willing to take personal risk to help, is an extraordinary development in a country where open dissent is banned.

“It was someone who had inside knowledge and inside privileges within the company,” said a source familiar with the ongoing forensic examination.

What this shows is that one of the greatest threats to networks is not master hackers tunneling their way in, but good old-fashioned spies. The cybersecurity legislation that Panetta and the administration are pushing cannot prevent a determined insider with access and permissions from carrying out an attack. It can, however, distort the incentives of businesses and hamper innovation.

A few weeks ago I wrote an [intentionally provocative post](http://jerrybrito.com/2012/07/25/how-copyright-is-like-solyndra/) comparing copyright to Solyndra. My argument was that just as Congress faces a knowledge problem and a public choice problem when picking which technologies to subsidize, it faces the same problems when setting out the contours of copyright.

I’m grateful for all the [wonderful feedback](https://plus.google.com/u/0/117169003326996777677/posts/TjzX6ZHLTK6) I got on that post, and I agree with those who pointed out that a problem with my analogy was that unlike subsidies to Solyndra, copyright doesn’t pick particular politically connected individuals or companies to privilege. I think it’s much more accurate to say that Congress can use copyright to privilege certain classes of well-organized industries or companies.

A case in point that shows how Congress picks winners and losers is being [debated right now](http://www.theverge.com/2012/9/24/3381396/pandora-internet-radio-royalty-bill): the framework that governs digital music broadcasting royalties for satellite radio and internet radio stations like Pandora. Today you pay a different royalty rate for playing a sound recording (like the latest LMFAO opus) depending on what kind of radio station you are. Satellite radio stations pay 6 to 8 percent of their gross revenues each year in royalties. Pandora, however, pays around 50 percent, and it will likely be more next year. Meanwhile, traditional AM and FM radio stations pay nothing, zip, zero, zilch.
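To make the disparity concrete, here is a back-of-the-envelope comparison using the rough percentages cited above. This is an illustrative simplification: actual webcaster royalties are set per performance, not as a flat share of revenue, so treating each regime as a revenue percentage is only a sketch.

```python
# Rough revenue shares from the post: AM/FM pays nothing for sound
# recordings, satellite pays 6-8 percent of gross revenues, and an
# internet radio station like Pandora pays around half.
ROYALTY_SHARE = {
    "terrestrial_am_fm": 0.00,   # no sound-recording royalty at all
    "satellite": 0.07,           # midpoint of the 6-8 percent range
    "internet_radio": 0.50,      # roughly half of gross revenues
}

def annual_royalty(station_type: str, gross_revenue: float) -> float:
    """Royalty owed for a year, under the flat-share simplification."""
    return ROYALTY_SHARE[station_type] * gross_revenue

# Same $100 million in revenue, very different royalty bills:
# roughly $0, $7 million, and $50 million respectively.
for kind in ROYALTY_SHARE:
    print(kind, annual_royalty(kind, 100_000_000))
```

Same input, same use of the sound recordings, and the bill differs by tens of millions of dollars depending solely on the statutory category.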

I won’t get into the public choice problems that may have led to this situation, but the fact is that one legacy industry is not just being subsidized with free access to an essential input (sound recordings); it is also being protected. That protection comes at the expense of a new and innovative industry, internet radio, which is being charged punishing rates for the same essential input.

My colleague Matt Mitchell recently published an excellent paper entitled [The Pathology of Privilege: The Economic Consequences of Government Favoritism](http://mercatus.org/publication/pathology-privilege-economic-consequences-government-favoritism) that catalogs the different ways government has favored particular industries. He also shows how this behavior “misdirects resources, impedes genuine economic progress, breeds corruption, and undermines the legitimacy of both the government and the private sector.” The uneven playing field in the digital music space fits right in with the type of privilege he discussed.

Conservatives and libertarians who are wary of such government extensions of privilege should keep their eye on copyright as the source of many such imbalances.

For some time now I’ve been trying to teach myself how to program. I’m proficient in HTML and CSS, and I could always tinker around the edges of PHP, but I really couldn’t code something from scratch to save my life. Well, there’s no better way to learn than by doing, at least when it comes to programming, so I gave myself a project to complete and, by golly, I did it. It’s called TechWire. It’s a tech policy news aggregator and I’m making it available on the web because I think it might be useful to other tech policy nerds.

Quite simply it’s a semi-curated reverse-chronological list of the most recent tech policy news with links to the original sources. And it’s not just news stories. You’ll also see opinion columns, posts from the various policy shops around town, new scholarly papers, and new books on tech policy. And you can also drill down into just one of these categories to see just the latest news, or just the latest opinions, or just the latest papers. Leave the page open in a tab and the site auto-refreshes as new items come in. Alternatively there is a Twitter feed at @TechWireFTW to get the latest.

Other features include the ability to look up the news for a particular day in the past, as well as the ability to click on a story to see what other stories are related. That’s especially useful if you want to see how different outlets are covering the same issue or how an issue has developed over time. Just click on the linked timestamp at the end of a story to see related posts.
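The core idea of a site like this can be sketched in a few lines: pull items from several feeds, tag each with a category, and merge them into one reverse-chronological list that can be filtered. This is only a toy illustration of the concept, not TechWire's actual code; the `Item` class and `merge_feeds` helper are invented for the example.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Item:
    title: str
    source: str
    category: str        # e.g. "news", "opinion", "paper", or "book"
    published: datetime

def merge_feeds(*feeds: list, category: str = None) -> list:
    """Combine feeds, newest first, optionally limited to one category."""
    items = [i for feed in feeds for i in feed
             if category is None or i.category == category]
    return sorted(items, key=lambda i: i.published, reverse=True)

# Two hypothetical source feeds, merged into one timeline:
wire = [Item("FCC hearing recap", "news-site", "news", datetime(2012, 10, 15))]
shop = [Item("Data caps paper", "policy-shop", "paper", datetime(2012, 10, 16))]
timeline = merge_feeds(wire, shop)             # paper first, news second
papers_only = merge_feeds(wire, shop, category="paper")
```

The "drill down into one category" feature is just the `category` filter; the auto-refreshing page would re-run this merge as new items arrive.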

I hope TechWire (at http://techwireftw.com/) is as useful to you as it was fun for me to code. I’d like to thank some folks who really helped me along the way: Pete Snyder and Eli Dourado for putting up with my dumb questions, Cord Blomquist for great hosting and serene patience, Adam Thierer for being the best QA department I could wish for, and my wife Kathleen for putting up with me staring at the computer for hours.

Today the Mercatus Center at George Mason University has released a new working paper by Boston College Law School Professor Daniel Lyons entitled, “The Impact of Data Caps and Other Forms of Usage-Based Pricing for Broadband Access.”

There’s been much hand-wringing about fixed and mobile broadband services increasingly looking to move to usage-based pricing or to impose data caps. Some have even suggested an outright ban on the practice. As Adam Thierer has catalogued in these pages, the ‘net neutrality’ debate has in many ways been leading to this point: pricing flexibility vs. price controls.

In his new paper, Lyons explores the implications of this trend toward usage-based pricing. He finds that data caps and other forms of metered consumption are not inherently anti-consumer or anticompetitive.

Rather, they reflect different pricing strategies through which a broadband company may recover its costs from its customer base and fund future infrastructure investment. By aligning costs more closely with use, usage-based pricing may effectively shift more network costs onto those consumers who use the network the most. Companies can thus avoid forcing light Internet users to subsidize the data-heavy habits of online gamers and movie torrenters. Usage-based pricing may also help alleviate network congestion by encouraging customers, content providers, and network operators to use broadband more efficiently.
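The cross-subsidy point can be illustrated with a toy calculation: recover a fixed network cost from users either as a flat rate or in proportion to each user's consumption. All the numbers below are made up for illustration; nothing in Lyons's paper prescribes this particular allocation.

```python
def flat_rate_bills(network_cost: float, usage_gb: dict) -> dict:
    """Flat-rate pricing: everyone splits the cost evenly, regardless of use."""
    share = network_cost / len(usage_gb)
    return {user: share for user in usage_gb}

def usage_based_bills(network_cost: float, usage_gb: dict) -> dict:
    """Usage-based pricing: cost allocated in proportion to consumption."""
    total = sum(usage_gb.values())
    return {user: network_cost * gb / total for user, gb in usage_gb.items()}

# Hypothetical monthly consumption, in GB:
usage = {"light_user": 5, "average_user": 45, "torrenter": 450}

print(flat_rate_bills(100.0, usage))    # each pays ~$33.33
print(usage_based_bills(100.0, usage))  # $1 / $9 / $90
```

Under the flat rate, the light user's $33 bill subsidizes the torrenter's; under usage-based pricing the bills track the costs each user actually imposes, which is the alignment the paper describes.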

Opponents of usage-based pricing have noted that data caps may be deployed for anticompetitive purposes. But data caps can be a problem only when a firm with market power exploits that power in a way that harms consumers. Absent a specific market failure, which critics have not yet shown, broadband providers should be free to experiment with usage-based pricing and other pricing strategies as tools in their arsenal to meet rising broadband demand. Public policies allowing providers the freedom to experiment best preserve the spirit of innovation that has characterized the Internet since its inception.

Lyons does a magnificent job of walking the reader through every aspect of the usage-based pricing issue, its benefits as a cost-recovery and congestion management tool, and its potential anticompetitive effects. “Ultimately, data caps and other pricing strategies are ways that broadband companies can distinguish themselves from one another to achieve a competitive advantage in the marketplace,” he concludes. “When firms experiment with different business models, they can tailor services to niche audiences whose interests are inadequately satisfied by a one-size-fits-all flat-rate plan. Absent anticompetitive concerns, public policy should encourage companies to experiment with different pricing models as a way to compete against one another.”

Scott Shackelford, assistant professor of business law and ethics at Indiana University, and author of the soon-to-be-published book Managing Cyber Attacks in International Law, Business, and Relations: In Search of Cyber Peace, explains how polycentric governance could be the answer to modern cybersecurity concerns.

Shackelford originally began researching collective action problems in physical commons, including Antarctica, the deep sea bed, and outer space, where he discovered the efficacy of polycentric governance in addressing these issues. Noting the similarities between these communally owned resources and the Internet, Shackelford was drawn to the idea of polycentric governance as a solution to the collective action problems he identified in the online realm, particularly when it came to cybersecurity.

Shackelford contrasts the bottom-up form of governance characterized by self-organization and networking regulations at multiple levels with the increasingly state-centric approach prevailing in forums like the International Telecommunication Union (ITU). Analyzing the debate between Internet sovereignty and Internet freedom through the lens of polycentric regulation, Shackelford reconceptualizes both cybersecurity and the future of Internet governance.


Looking for a concise overview of how Internet architecture has evolved and a principled discussion of the public policies that should govern the Net going forward? Then look no further than Christopher Yoo’s new book, The Dynamic Internet: How Technology, Users, and Businesses are Transforming the Network. It’s a quick read (just 140 pages) and is worth picking up. Yoo is a Professor of Law, Communication, and Computer & Information Science at the University of Pennsylvania and also serves as the Director of the Center for Technology, Innovation & Competition there. For those who monitor ongoing developments in cyberlaw and digital economics, Yoo is a well-known and prolific intellectual who has established himself as one of the giants of this rapidly growing policy arena.

Yoo makes two straightforward arguments in his new book. First, the Internet is changing. In Part 1 of the book, Yoo offers a layman-friendly overview of the changing dynamics of Internet architecture and engineering. He documents the evolving nature of Internet standards, traffic management and congestion policies, spam and security control efforts, and peering and pricing policies. He also discusses the rise of peer-to-peer applications, the growth of mobile broadband, the emergence of the app store economy, and what the explosion of online video consumption means for ongoing bandwidth management efforts. Those are the supply-side issues. Yoo also outlines the implications of changes on the demand side of the equation, such as changing user demographics and rapidly evolving demands from consumers. He notes that these new demand-side realities of Internet usage are resulting in changes to network management and engineering, further reinforcing changes already underway on the supply side.

Yoo’s second point in the book flows logically from the first: as the Internet continues to evolve in such a highly dynamic fashion, public policy must as well. Yoo is particularly worried about calls to lock in standards, protocols, and policies from what he regards as a bygone era of Internet engineering, architecture, and policy. “The dramatic shift in Internet usage suggests that its founding architectural principles from the mid-1990s may no longer be appropriate today,” he argues. (p. 4) “[T]he optimal network architecture is unlikely to be static. Instead, it is likely to be dynamic over time, changing with the shifts in end-user demands,” he says. (p. 7) Thus, “the static, one-size-fits-all approach that dominates the current debate misses the mark.” (p. 7)


Designer Dan Provost, co-founder of the indie hardware and software company Studio Neat, and co-author of It Will Be Exhilarating: Indie Capitalism and Design Entrepreneurship in the 21st Century, discusses how technological innovation helped him build his business. Provost explains how he and his co-founder Tom Gerhardt were able to rely on crowdfunding to finance their business. Avoiding loans or investors, he says, has allowed them to more freely experiment and innovate. Provost also credits 3D printing for his company’s success, saying their hardware designs–very popular tripod mounts for the iPhone and a stylus for the iPad–would not have been possible without the quick-prototyping technology.


If the FCC had adopted the eligibility restrictions proposed by PISC in 2007, the United States would not have achieved the LTE leadership touted by current FCC Chairman Genachowski.

I was pleased to see FCC Chairman Genachowski praise the market-based policies of his predecessors in his remarks at Vox last week. He noted that the United States is currently leading the world in next generation mobile wireless services with 69 percent of the world’s LTE subscribers, which he attributes to “smart government policies.” He didn’t mention, however, that the “smart government policies” that led to America’s renewed mobile leadership were based on market principles adopted by the previous FCC.