Articles by Adam Thierer 
Senior Fellow in Technology & Innovation at the R Street Institute in Washington, DC. Formerly a senior research fellow at the Mercatus Center at George Mason University, President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and a Fellow in Economic Policy at the Heritage Foundation.
In this very entertaining piece, our frequent intellectual sparring partner Tim Wu admits that certain New York City bureaucrats may be driving him to libertarianism.
I really wish Tim would become a true libertarian. As that essay and his brilliant five-part series of essays on “American Lawbreaking” for Slate illustrate, he is an incredibly gifted writer and a first-rate thinker. And, at times, his thinking does lean in the libertarian direction, but not enough to grant him credentials to the club just yet! (Tim and I also share a nerdy affection for Dungeons & Dragons, so I have to admit to liking him for that reason. I was far too familiar with 20-sided dice as a youngster. Sad, but true.)
I’ve written plenty here before about the potential pitfalls associated with a la carte regulation of cable and satellite television. What troubles me most about a la carte regulatory proposals is that proponents make grandiose claims about how it would offer consumers greater “choice” and lower prices without thinking about the long-term consequences of regulation. As I pointed out in a recent editorial in the Los Angeles Daily Journal, the problem with these regulatory activists is that “Their static view of things takes the 500-channel universe for granted; they assume it will always be with us and that it’s just a question of dividing up the pie in different (and cheaper) ways.” But as I go on to explain, a la carte regulation could bring all that to an end:
To understand why [it will harm consumers], we need to consider how it is that we have gained access to a 500-channel universe of diverse viewing options on cable and satellite. All of these channels didn’t just fall like manna from heaven. Companies and investors took risks developing unique networks to suit diverse interests. Thirty years ago, few could have imagined a world of 24-hour channels devoted to cooking, home renovation, travel, weather, religion, women’s issues, and golf. Yet, today we have The Food Channel, Home & Garden TV, The Travel Channel, The Weather Channel, EWTN, Oxygen, The Golf Channel, and countless other niche networks devoted to almost every conceivable human interest. How did this happen?
The answer is “bundling.” Many niche-oriented cable networks only exist because they are bundled with stronger networks. On their own, the smaller channels can’t survive; nor would anyone have risked launching them in the first place. “Bundling” is a means for firms to cover the enormous fixed costs associated with developing TV programming while also satisfying the wide diversity of audience tastes. Bundling channels together allows the niche, specialty networks to remain viable alongside popular networks such as CNN, ESPN and TBS. Bundles, therefore, are not anticonsumer but proconsumer.
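The economics here are easy to see with a toy numerical sketch. Everything in it is hypothetical (the viewer types, the dollar valuations, even the two-channel lineup), chosen only to illustrate the mechanism, but it shows why a niche channel can be viable inside a bundle when it could never survive a la carte:

```python
# Toy model of channel bundling: two viewer types, two channels.
# Every number is hypothetical, invented purely for illustration.

viewers = {
    "sports_fan": {"ESPN": 10, "GolfChannel": 2},
    "golf_fan":   {"ESPN": 3,  "GolfChannel": 9},
}

def best_standalone_revenue(channel):
    """Best revenue from selling one channel a la carte at a single price."""
    valuations = [v[channel] for v in viewers.values()]
    # Try each viewer's valuation as the price; buyers are those who value
    # the channel at or above that price.
    return max(p * sum(1 for v in valuations if v >= p) for p in valuations)

def best_bundle_revenue():
    """Best revenue from selling both channels together as one bundle."""
    totals = [sum(v.values()) for v in viewers.values()]
    return max(p * sum(1 for t in totals if t >= p) for p in totals)

a_la_carte = sum(best_standalone_revenue(c) for c in ("ESPN", "GolfChannel"))
print("A la carte revenue:", a_la_carte)             # 10 + 9 = 19
print("Bundle revenue:    ", best_bundle_revenue())  # both buy the $12 bundle = 24
```

Sold a la carte, each channel reaches only its own fans; bundled at $12, both viewers buy both channels, revenue rises from 19 to 24, and that extra margin is exactly what covers a niche network’s fixed costs.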
Continue reading →
Must reading from George Ou of ZDNet on the Comcast kerfuffle. (He benefits from a detailed exchange with Richard Bennett, as we also did when Richard was kind enough to join us for a TLF podcast on the issue two weeks ago.) George goes into great detail about what is going on here and why it’s so important that people understand a bit about technology and network engineering before rushing to impose regulation that would make routine traffic management illegal. Ultimately, George concludes:
BitTorrent is by far the largest consumer of bandwidth and a single BitTorrent user is capable of generating hundreds of times more network load than conventional applications. Throttling the number of BitTorrent connections or any application that has similarly aggressive characteristics is critical to keeping the network healthy with reasonable round-trip response times. That means a better gaming and VoIP (Voice over Internet Protocol) experience since they are both highly sensitive to network latency despite the fact that they are low-bandwidth. If the Net Neutrality extremists get their way and get the Government to ban active network management, cable broadband customers will suffer and those web hog TV commercials might just come true.
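For readers who want a concrete picture of what “throttling the number of BitTorrent connections” could mean in practice, here is a minimal sketch. To be clear, this is not Comcast’s actual mechanism; the cap, the names, and the admission logic are all assumptions of mine, meant only to show how rationing connection counts (rather than blocking an application outright) protects latency-sensitive traffic:

```python
# Minimal sketch of a per-subscriber connection cap, one plausible form of
# "active network management." NOT Comcast's actual system; the cap and the
# traffic classification are hypothetical.

from collections import defaultdict

MAX_P2P_CONNECTIONS = 8        # hypothetical per-subscriber cap
active_p2p = defaultdict(int)  # subscriber_id -> currently open P2P flows

def admit_flow(subscriber_id: str, is_p2p: bool) -> bool:
    """Admit a new flow unless it pushes a subscriber past the P2P cap.

    Latency-sensitive traffic (VoIP, gaming) is never refused; only
    connection-hungry P2P flows are rationed, which keeps queues short and
    round-trip times low for everyone sharing the local loop.
    """
    if not is_p2p:
        return True
    if active_p2p[subscriber_id] >= MAX_P2P_CONNECTIONS:
        return False  # defer this flow rather than let it add to the queue
    active_p2p[subscriber_id] += 1
    return True

def close_flow(subscriber_id: str, is_p2p: bool) -> None:
    """Release a slot when a P2P flow ends."""
    if is_p2p and active_p2p[subscriber_id] > 0:
        active_p2p[subscriber_id] -= 1
```

The point of a scheme like this is that VoIP and gaming packets are never touched; the connection-hungry protocol is simply asked to wait its turn.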
Bruce Owen, one of the finest communications and media economists of our generation, has written a powerful piece for Cato’s Regulation magazine asking, “After the long fight to end the ‘common carrier,’ why are we trying to resurrect it?” He’s referring, of course, to the ongoing efforts by some to impose Net neutrality regulation on broadband networks. In his new article, “Antecedents to Net Neutrality,” Owen points out that we’ve been down this path before, and with troubling results:
[T]he architects of the concept of net neutrality have invented nothing new. They have simply resurrected the traditional but uncommonly naïve “common carrier” solution to the threats they fear. By choosing new words to describe a solution already well understood by another name, the economic interests supporting net neutrality may mislead themselves and others into repeating a policy error much more likely to harm consumers than to promote competition and innovation.
Net neutrality policies could only be implemented through detailed price regulation, an approach that has generally failed, in the past, to improve consumer welfare relative to what might have been expected under an unregulated monopoly. Worse, regulatory agencies often settle into a well-established pattern of subservience to politically influential economic interests. Consumers, would-be entrants, and innovators are not likely to be among those influential groups. History thus counsels against adoption of most versions of net neutrality, at least in the absence of refractory monopoly power and strong evidence of anticompetitive behavior — extreme cases justifying dangerous, long-shot remedies.
Continue reading →
Last week, a mob of anti-media activists gathered outside the FCC to protest what they regarded as the agency’s willingness to embrace a radical deregulatory agenda on the media ownership front. The critics fear that the whole media marketplace is being gobbled up by a handful of evil media tycoons in New York and LA. If only the critics spent some time reading the headlines in the media outlets they criticize, they’d know that the marketplace reality is quite different.
In fact, over the past few years, I have been documenting the ongoing DE-consolidation taking place in America’s media market. This series has built upon the themes and evidence I first presented in my 2005 book, Media Myths: Making Sense of the Debate over Media Ownership, in which I made the case that the media marketplace was far more dynamic than critics cared to admit.
And today we have yet another case study of DE-consolidation to report: Media tycoon Barry Diller announced yesterday that his conglomerate IAC/InterActiveCorp would be splitting into not 2, not 3, not 4, but FIVE different divisions. IAC controls more than 60 brands, including Ticketmaster, Ask.com, and the Home Shopping Network, but it has not been able to find a way to build “synergies” (an over-used business school term if there ever was one) among them. And so Diller is separating those divisions so that they can pursue their “core competencies” (another business school term, but one that does not get enough attention).
Here’s how the NY Times summarized what is going on:
Continue reading →
Our old friend Declan McCullagh, the dean of high-tech policy journalists, has just posted an excellent column outlining his concerns with the “Do Not Track List” notion that Harper and I blasted yesterday. As usual, Declan explains better than any of us can why this is such a silly and dangerous regulatory proposal:
Nobody’s holding a gun to Internet users’ heads and forcing them to visit Amazon or Yahoo. They do it because they trust those companies to take reasonable steps to protect their privacy. To insist that the feds must step in because a few vocal lobbyists and activists don’t like those steps should be insulting to Americans: it suggests that they’re too simpleminded to make their own decisions about what’s best for them and their families. (It’s similar in principle to price regulation, when special-interest lobbyists insist that prices are too high or too low and must be altered by legislative fiat.)
What makes this an even sillier debate is that there already are a wealth of ways to accomplish “Do Not Track” without the feds. This is the third principle of Internet regulation: If technology exists to solve a perceived problem, it’s probably better to encourage its use rather than ask federal agencies for more regulations or demand that the techno half-wits in Congress draft a new law.
Amen, brother. He continues:
Continue reading →
Earlier today, Jim Harper raised some valid concerns about the new “Do Not Track List” that some groups are proposing be mandated by the FTC. I’d like to point out another concern with this concept: a mandatory “Do Not Track” registry creates a potentially dangerous precedent, and a ready-made framework, for a nationwide mandatory registry of URLs of websites that some policymakers might deem objectionable in ways that have nothing to do with tracking. When I first read these two provisions on page 4 of the Do Not Track proposal, I could not help but think of how a savvy Net-censor might use them in an attempt to regulate Internet content in other ways:
“Any advertising entity that sets a persistent identifier on a user device should be required to provide to the FTC the domain names of the servers or other devices used to place the identifier.”
…and…
“Companies providing web, video, and other forms of browser applications should provide functionality (i.e., a browser feature, plugin, or extension) that allows users to import or otherwise use the Do Not Track List of domain names, keep the list up-to-date, and block domains on the list from tracking their Internet activity.”
I can easily imagine would-be Net censors using that language as a blueprint to regulate other types of online speech. For example, it could be rewritten as follows [with my additions in brackets]:
“Companies providing web, video, and other forms of browser applications should provide functionality (i.e., a browser feature, plugin, or extension) that allows users to import or otherwise use the [government-approved] list of domain names, keep the list up-to-date, and block domains on the list [that are harmful to minors].”
Perhaps I’m just being paranoid. But would-be Net censors have struck out on other regulatory fronts over the past 10 years, and they are looking for a new framework. A mandatory Do Not Track List might give them an opening.
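Part of what makes this framework so reusable is how little machinery it takes. Here is a minimal sketch of the browser-side blocking that both versions of the provision describe; the listed domains are invented for illustration, and nothing here comes from any actual Do Not Track list:

```python
# Minimal sketch of the list-blocking a browser feature or plugin would do.
# The domains below are hypothetical examples, not entries from a real list.

BLOCKED_DOMAINS = {"tracker.example", "ads.example.net"}

def host_is_blocked(host: str) -> bool:
    """True if `host` matches a listed domain or is a subdomain of one."""
    labels = host.lower().split(".")
    # Check the host itself and every parent domain against the list.
    return any(".".join(labels[i:]) in BLOCKED_DOMAINS for i in range(len(labels)))

assert host_is_blocked("pixel.tracker.example")  # subdomain of a listed domain
assert not host_is_blocked("example.com")        # unlisted host passes through
```

Those dozen lines work identically whether the imported list contains ad trackers or domains a regulator deems “harmful to minors.” The mechanism is content-neutral, which is precisely why a mandated list is such a tempting hook for censors.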
Just when you think the debate over media ownership regulation in this country can’t get any more absurd, along comes this letter from FCC Commissioner Michael Copps arguing that Rupert Murdoch’s deal for the Wall Street Journal should be blocked to somehow save the nation (especially those poor New Yorkers) from an evil media monopoly. “It will create a single company with enormous influence over politics, art and culture across the nation and especially in the New York metropolitan area.”
PUH-LEASE! How can someone make such an argument with a straight face? Rupert Murdoch is going to control “the politics, art and culture” of the nation with the WSJ?? Come on, get serious. The Journal isn’t exactly the standard-bearer when it comes to setting artistic or cultural trends for the nation. And the argument that Murdoch is somehow going to control “the politics, art and culture” of the New York area with the Journal is even more absurd. Is there really any shortage of inputs in the New York area when it comes to those things? Are the artsy-fartsy liberals of NYC suddenly going to wake up one day, start reading the Journal, and completely change their lifestyles? Please.
Anyway, I wrote a much longer essay for the City Journal back in August predicting all this “Chicken Little” nonsense would be coming. As I said then:
Continue reading →
My colleague Scott Wallsten, PFF’s Director of Communications Policy Studies, has just released an excellent short essay on one of my favorite pet issues: metered pricing as a solution to broadband congestion and traffic-management problems. This is very relevant right now, of course, because of the Comcast kerfuffle regarding how the company has gone about managing BitTorrent traffic.
In his essay, entitled “Managing the Network? Rethink Prices, not Net Neutrality,” Scott points out that:
Comcast should have been more forthcoming in its response and should be more transparent about its actions. Even so, Comcast isn’t the culprit and net neutrality regulations aren’t the answer. Instead, network congestion problems caused by some people’s excessive use are a direct and predictable result of the all-you-can-eat pricing that nearly every ISP charges for broadband service.
We know that this kind of pricing gives people little incentive to pay attention to how much of the service they use. People whose electricity is included in their rent rather than metered, for example, may as well leave the lights on all day and keep their homes frigid in the summer and toasty in the winter. To be sure, some people conserve simply because they care about the environment, but most won’t since they don’t see any savings from using energy more efficiently.
It is often complicated to determine prices in network industries that have high fixed costs and low marginal costs–like broadband. As long as the cost of sending an extra bit down the pipe is close to nothing, a flat rate for unlimited use is probably efficient. In that case, the operator must cover the fixed cost of the infrastructure, but it might not be worthwhile to monitor usage. If usage costs begin to increase, however, flat rate pricing may become inefficient.
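Scott’s incentive point is easy to see with a toy calculation. All of the rates and usage figures below are invented for illustration; only the shape of the comparison matters:

```python
# Toy comparison of flat-rate vs. metered broadband pricing.
# Every figure is hypothetical, invented purely for illustration.

MARGINAL_COST_PER_GB = 0.10  # assumed network cost of carrying one extra GB
FLAT_RATE = 40.00            # assumed all-you-can-eat monthly price
METERED_RATE_PER_GB = 0.25   # assumed per-GB price under metering

usage_gb = {"light_user": 20, "heavy_p2p_user": 400}

for user, gb in usage_gb.items():
    cost_to_network = gb * MARGINAL_COST_PER_GB
    flat_margin = FLAT_RATE - cost_to_network  # left over after usage costs
    metered_bill = gb * METERED_RATE_PER_GB
    print(f"{user}: imposes ${cost_to_network:.2f} in network cost; "
          f"flat-rate margin ${flat_margin:.2f}; metered bill ${metered_bill:.2f}")
```

With these numbers the heavy user consumes his entire $40 flat-rate payment in usage costs while facing a marginal price of zero, so he has no reason to economize; the light user quietly subsidizes him. A metered bill, by contrast, tracks the cost each user actually imposes on the network.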
Continue reading →
In response to my post two days ago about my new paper on interoperability standards in the cable marketplace, one of our savvy TLF commenters (Eric) made the following argument about how he believed the lack of standardization killed high-def audio:
“In the world of high definition audio, the lack of standardization did not lead to innovation and exciting new services. It led to the languishing of two competing formats, SACD and DVD-Audio. The current fight between two high definition video formats may delay the mass market penetration of any hi-def video disc. Virtually everyone loses. … Freedom is great, but when you need a mass market application, standardization becomes a crucial consideration.”
But another reader (Mike Sullivan) makes an excellent counterpoint when he notes:
“Isn’t it also possible that the two HD audio formats have ‘languished’ not because of the fact that there are two competing formats, but because there is limited demand for HD audio recordings at a premium price?”
This is something I happen to know quite a bit about, so I wanted to respond in a separate, detailed post.
Continue reading →