March 2011

[Cross-posted at Truthonthemarket.com]

There is an antitrust debate brewing concerning Google and “search bias,” a term used to describe search engine results that favor the search provider’s own content.  For example, Google might list Google Maps prominently when one searches for “maps,” and Microsoft’s Bing might give prominent placement to Microsoft-affiliated content or products.

Apparently both antitrust investigations and Congressional hearings are in the works; regulators and commentators appear poised to attempt to impose “search neutrality” through antitrust or other regulatory means to limit or prohibit the ability of search engines (or perhaps just Google) to favor their own content.  At least one proposal goes so far as to advocate a new government agency to regulate search.  Of course, when I read proposals like this, I wonder where Google’s share of the “search market” will be by the time the new agency is built.

As with the net neutrality debate, I understand that much of the push for search neutrality involves an intense effort to discard the traditional, economically grounded antitrust framework.  The logic of this effort is simple.  The economic literature on vertical restraints and vertical integration provides no support for ex ante regulation arising out of the concern that a vertically integrating firm will harm competition by favoring its own content and discriminating against rivals.  Economic theory suggests that such arrangements may be anticompetitive in some instances, but it also provides a plethora of pro-competitive explanations.  Lafontaine & Slade explain the state of the evidence in their recent survey paper in the Journal of Economic Literature:

We are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. Furthermore, we have found clear evidence that restrictions on vertical integration that are imposed, often by local authorities, on owners of retail networks are usually detrimental to consumers. Given the weight of the evidence, it behooves government agencies to reconsider the validity of such restrictions.

Of course, this does not bless all instances of vertical contracts or integration as pro-competitive.  The antitrust approach appropriately eschews ex ante regulation in favor of a fact-specific rule of reason analysis that requires plaintiffs to demonstrate competitive harm in a particular instance. Again, given the strength of the empirical evidence, it is no surprise that advocates of search neutrality, like advocates of net neutrality before them, either do not rely on consumer welfare arguments or are willing to sacrifice consumer welfare for other objectives.

I wish to focus on the antitrust arguments for a moment.  In an interview with SFGate, Harvard’s Ben Edelman sketches out an antitrust claim against Google based on search bias; and to his credit, Edelman provides some evidence in support of his claim.

I’m not convinced.  Edelman’s interpretation of evidence of search bias is detached from antitrust economics.  The evidence is all about identifying whether or not there is bias.  That, however, is not the relevant antitrust inquiry; instead, the question is whether such vertical arrangements, including preferential treatment of one’s own downstream products, are generally procompetitive or anticompetitive.  Examples from other contexts illustrate this point.

Continue reading →


Venture capitalist Bill Gurley asked a good question in a tweet late last night, “wondering if Apple’s 30% rake isn’t a foolish act of hubris. Why drive Amazon, Facebook, and others to different platforms?” As most of you know, Gurley is referring to Apple’s announcement in February that it would take a 30% cut of app developers’ revenues as the price of a place in the Apple App Store.

Indeed, why would Apple be so foolish? Of course, some critics will cry “monopoly!” and claim that Apple’s “act of hubris” was simply the logical move of a platform monopolist exploiting its supposedly dominant position in the mobile OS / app store marketplace.  But what then are we to make of Amazon’s big announcement yesterday that it is jumping into the ring with its new app store for Android? And what are we to make of the fact that Google immediately responded to Apple’s 30% announcement by offering publishers a more reasonable 10% cut?  And, as Gurley notes, you can’t forget about Facebook. Who knows what they have up their sleeve next.  They’ve denied any interest in marketing their own phone and, at least so far, have not announced any intention to offer a competing app store, but why would they need to? Apps can be integrated directly into their platform!  Oh, and don’t forget that there’s a little company called Microsoft out there still trying to stake its claim to a patch of land in the mobile OS landscape. And have you visited the HP-Palm development center lately?  Some very interesting things are going on there that we shouldn’t ignore.
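
To see why the size of the rake matters so much to publishers and developers, a bit of back-of-the-envelope arithmetic helps. In the sketch below, the $9.99 subscription price is a hypothetical of my own; only the 30% and 10% figures come from the announcements discussed above.

```python
# Hypothetical publisher economics under competing platform rakes.
# The $9.99 price point is invented for illustration; only the 30%
# (Apple) and 10% (Google) cuts come from the announcements above.

price = 9.99  # assumed monthly subscription price

def publisher_net(price: float, rake: float) -> float:
    """Revenue the publisher keeps after the platform takes its cut."""
    return price * (1 - rake)

apple_net = publisher_net(price, 0.30)   # ~$6.99 per subscriber
google_net = publisher_net(price, 0.10)  # ~$8.99 per subscriber

# Moving to the 10% platform raises the publisher's take per
# subscriber by roughly 29%.
print(f"{google_net / apple_net - 1:.0%}")  # 29%
```

On margins that thin, a 20-point gap in the rake is precisely the kind of thing that could drive partners to rival platforms, which is Gurley’s point.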

What these developments illustrate is a point that I have constantly reiterated here: Continue reading →

[Cross-Posted at Truthonthemarket.com]

There has been, as is to be expected, plenty of casual analysis of the AT&T / T-Mobile merger to go around.  As I mentioned, I think there are a number of interesting issues that can only be resolved by an investigation with access to the facts necessary to conduct the appropriate analysis.  Annie Lowrey’s piece in Slate is one of the more egregious offenders, liberally applying “folk economics” to the merger while reaching some very confident conclusions concerning its competitive effects:

Merging AT&T and T-Mobile would reduce competition further, creating a wireless behemoth with more than 125 million customers and nudging the existing oligopoly closer to a duopoly. The new company would have more customers than Verizon, and three times as many as Sprint Nextel. It would control about 42 percent of the U.S. cell-phone market.

That means higher prices, full stop. The proposed deal is, in finance-speak, a “horizontal acquisition.” AT&T is not attempting to buy a company that makes software or runs network improvements or streamlines back-end systems. AT&T is buying a company that has the broadband it needs and cutting out a competitor to boot—a competitor that had, of late, pushed hard to compete on price. Perhaps it’s telling that AT&T has made no indications as of yet that it will keep T-Mobile’s lower rates.

Full stop?  I don’t think so.  Nothing in economic theory says so.  And by the way, 42 percent simply isn’t high enough to tell a merger-to-monopoly story here; and Lowrey concedes some efficiencies from the merger (“buying a company that has the broadband it needs” is an efficiency!).  To be clear, the merger may or may not pose competitive problems as a matter of fact.  The point is that serious analysis must be done in order to evaluate its likely competitive effects.  And of course, Lowrey (H/T: Yglesias) has no obligation to conduct serious analysis in a column, nor do I in a blog post. But the idea that market concentration is an incredibly useful and, in her case, perfectly accurate predictor of price effects is devoid of analytical content and misleads on the relevant economics.
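
For what a concentration screen actually does (and does not) tell you, here is a minimal sketch of the standard HHI arithmetic. The market shares are hypothetical, chosen only so that the merged firm lands near the 42 percent Lowrey cites; a real analysis would start by properly defining the market.

```python
# Herfindahl-Hirschman Index screen for the deal. All shares are
# hypothetical round numbers; the merged AT&T / T-Mobile is pegged
# near the 42 percent figure quoted above.

pre_merger = {"AT&T": 31, "Verizon": 31, "Sprint": 12,
              "T-Mobile": 11, "Others": 15}

def hhi(shares):
    """HHI: the sum of squared percentage market shares."""
    return sum(s ** 2 for s in shares.values())

post_merger = dict(pre_merger)
post_merger["AT&T"] = post_merger.pop("AT&T") + post_merger.pop("T-Mobile")

pre, post = hhi(pre_merger), hhi(post_merger)
print(pre, post, post - pre)  # 2412 3094 682
```

Under the 2010 Horizontal Merger Guidelines, a post-merger HHI above 2,500 combined with an increase of more than 200 points creates a presumption that the merger is likely to enhance market power. But that presumption is a screen inviting further analysis; it is not, full stop, a prediction of higher prices.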

Continue reading →

[Cross-Posted at Truthonthemarket.com]

The big merger news is that AT&T is planning to acquire T-Mobile.  From the AT&T press release:

AT&T Inc. (NYSE: T) and Deutsche Telekom AG (FWB: DTE) today announced that they have entered into a definitive agreement under which AT&T will acquire T-Mobile USA from Deutsche Telekom in a cash-and-stock transaction currently valued at approximately $39 billion. The agreement has been approved by the Boards of Directors of both companies.

AT&T’s acquisition of T-Mobile USA provides an optimal combination of network assets to add capacity sooner than any alternative, and it provides an opportunity to improve network quality in the near term for both companies’ customers. In addition, it provides a fast, efficient and certain solution to the impending exhaustion of wireless spectrum in some markets, which limits both companies’ ability to meet the ongoing explosive demand for mobile broadband.

With this transaction, AT&T commits to a significant expansion of robust 4G LTE (Long Term Evolution) deployment to 95 percent of the U.S. population to reach an additional 46.5 million Americans beyond current plans – including rural communities and small towns.  This helps achieve the Federal Communications Commission (FCC) and President Obama’s goals to connect “every part of America to the digital age.” T-Mobile USA does not have a clear path to delivering LTE.

As the press release suggests, the potential efficiencies of the deal lie in relieving spectrum exhaustion in some markets and in accelerating 4G LTE deployment.  AT&T Mobility President and CEO Ralph de la Vega, in an interview, described the potential gains as follows:

The first thing is, this deal alleviates the impending spectrum exhaust challenges that both companies face. By combining the spectrum holdings that we have, which are complementary, it really helps both companies.  Second, just like we did with the old AT&T Wireless merger, when we combine both networks what we are going to have is more network capacity and better quality as the density of the network grid increases.

In major urban areas, whether Washington, D.C., New York or San Francisco, by combining the networks we actually have a denser grid. We have more cell sites per grid, which allows us to have a better capacity in the network and better quality. It’s really going to be something that customers in both networks are going to notice.

The third point is that AT&T is going to commit to expand LTE to cover 95 percent of the U.S. population.

T-Mobile didn’t have a clear path to LTE, so their 34 million customers now get the advantage of having the greatest and latest technology available to them, whereas before that wasn’t clear. It also allows us to deliver that to 46.5 million more Americans than we have in our current plans. This is going to take LTE not just to major cities but to rural America.
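
De la Vega’s density point can be made concrete with a toy capacity model. Assume, simplifying heavily, that after integration every cell site can eventually run the companies’ pooled spectrum (ignoring interference, coverage overlap, and band compatibility), so that aggregate capacity scales roughly with sites times spectrum. Every site count, spectrum holding, and efficiency figure below is invented for illustration.

```python
# Toy model of the "denser grid" argument: with modern frequency
# reuse, each cell site can in principle serve the full pooled
# spectrum, so area capacity scales with sites x spectrum.
# All numbers are hypothetical.

SPECTRAL_EFF = 1.5  # assumed bits/s/Hz delivered per site

def capacity_gbps(sites: int, spectrum_mhz: float) -> float:
    """Aggregate capacity if every site runs the full spectrum."""
    return sites * spectrum_mhz * 1e6 * SPECTRAL_EFF / 1e9

att_alone = capacity_gbps(1000, 80)   # hypothetical AT&T grid
tmo_alone = capacity_gbps(600, 50)    # hypothetical T-Mobile grid
combined = capacity_gbps(1600, 130)   # denser grid, pooled spectrum

print(att_alone + tmo_alone, combined)  # 165.0 vs 312.0 Gbps
```

On these made-up numbers, the combined grid delivers nearly twice the capacity of the two networks operated separately, which is the intuition behind the claim that customers on both networks would notice the difference.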

Continue reading →

Today, the U.S. District Court for the Southern District of New York rejected a proposed class action settlement agreement between Google, the Authors Guild, and a coalition of publishers. Had it been approved, the settlement would have enabled Google to scan and sell millions of books, including out-of-print books, without getting explicit permission from the copyright owner. (Back in 2009, I submitted an amicus brief to the court regarding the privacy implications of the settlement agreement, although I didn’t take a position on its overall fairness.)

While the court recognized in its ruling (PDF) that the proposed settlement would “benefit many” by creating a “universal digital library,” it ultimately concluded that the settlement was not “fair, adequate, and reasonable.” The court further concluded that addressing the troubling absence of a market in orphan works is a “matter for Congress,” rather than the courts.

Both chambers of Congress are currently working hard to tackle patent reform and rogue websites. Whatever one thinks about the Google Books settlement, Judge Chin’s ruling today should serve as a wake-up call that orphan works legislation should also be a top priority for lawmakers.

Today, millions of expressive works cannot be enjoyed by the general public because their copyright owners cannot be found, as we’ve frequently pointed out on these pages (1, 2, 3, 4). This amounts to a massive black hole in copyright, severely undermining the public interest. Unfortunately, past efforts in Congress to meaningfully address this dilemma have failed.

In 2006, the U.S. Copyright Office recommended that Congress amend the Copyright Act by adding an exception for the use and reproduction of orphan works contingent on a “reasonably diligent search” for the copyright owner. The proposal also would have required that users of orphan works pay “reasonable compensation” to copyright owners if they emerge.

A similar solution to the orphan works dilemma was put forward by Jerry Brito and Bridget Dooling. They suggested in a 2006 law review article that Congress establish a new affirmative defense in copyright law that would permit a work to be reproduced without authorization if no rightsholder can be found following a reasonable, good-faith search.

Jane Yakowitz of Brooklyn Law School recently posted an interesting 63-page paper on SSRN entitled “Tragedy of the Data Commons.” For those following the current privacy debates, it is a must-read, since it points out a simple truism: increased data privacy regulation could result in the diminution of many beneficial information flows.

Cutting against the grain of modern privacy scholarship, Yakowitz argues that “The stakes for data privacy have reached a new high water mark, but the consequences are not what they seem. We are at great risk not of privacy threats, but of information obstruction.” (p. 58)  Her concern is that “if taken to the extreme, data privacy can also make discourse anemic and shallow by removing from it relevant and readily attainable facts.” (p. 63)  In particular, she worries that “The bulk of privacy scholarship has had the deleterious effect of exacerbating public distrust in research data.”

Yakowitz is right to be concerned. Access to data, including broad data sets that contain anonymized profiles of individuals, is profoundly important for countless sectors and professions: journalism, medicine, economics, law, criminology, political science, environmental sciences, and many, many others. Yakowitz does a brilliant job documenting the many “fruits of the data commons” by showing how “the benefits flowing from the data commons are indirect but bountiful.” (p. 5) This isn’t about those sectors making money. It’s more about how researchers in those fields use information to improve the world around us. In essence, more data = more knowledge. If we want to study and better understand the world around us, researchers need access to broad (and continuously refreshed) data sets. Overly restrictive privacy regulations or forms of liability could slow that flow, diminish research capabilities and output, and leave society less well off because of the resulting ignorance. Continue reading →

On this week’s podcast, Patri Friedman, executive director and chairman of the board of The Seasteading Institute, discusses seasteading. Friedman explains how and why his organization works to enable floating ocean cities that will allow people to test new ideas for government. He talks about the advantages of starting new systems of government in lieu of trying to change existing ones, comparing seasteading to tech start-ups that are ideally positioned to challenge entrenched companies. Friedman also suggests when such experimental communities might become viable and talks about a few inspirations behind his “vision of multiple floating Hong Kongs”: intentional communities, Burning Man, and Ephemerisle.


To keep the conversation around this episode in one place, we’d like to ask you to comment at the web page for this episode on Surprisingly Free. Also, why not subscribe to the podcast on iTunes?

Five years ago this month, I penned a white paper, “Fact and Fiction in the Debate over Video Game Regulation,” that I have been meaning to update ever since but never seem to get around to.  One of the myths I aimed to debunk in the paper was the belief that most video games contain intense depictions of violence or sexuality.  This argument drives many of the crusades to regulate video games. In my old study, I aggregated several years’ worth of data about video game ratings and showed that the exact opposite was the case: the majority of games sold each year were rated “E” for Everyone or “E10+” (Everyone 10 and over) by the Entertainment Software Rating Board (ESRB).

Thanks to this new article by Ars Technica‘s Ben Kuchera, we know that this trend continues. Kuchera reports that, of the 1,638 games rated by the ESRB in 2010, only 5% were rated “M” for Mature. Among top sellers, the share of “M”-rated games is a bit higher, coming in at 29%. But that’s hardly surprising, since there are always a few big “M”-rated titles that are the power-sellers among young adults each year.  Still, most of the best sellers don’t contain extreme violence or sexuality.

Continue reading →

Here are some quick thoughts on the proposed AT&T – T-Mobile merger, mostly borrowed from my previous writing on the wireless marketplace. First, however, I highly recommend this excellent analysis by Larry Downes, which cuts through the hysteria we’re already hearing and offers a sober look at the issues at stake.  With that, here are a few of my random thoughts on the deal:

* The deal will likely be approved: First, to cut to the chase. After much wrangling, the deal will probably be approved, primarily because of two factors, both of which help political officials as much as AT&T: (1) the deal delivers on the National Broadband Plan’s promise of getting the country blanketed with wireless broadband; and (2) it “brings home” T-Mobile by giving an American company control of a German-held interest. As Larry Dignan of ZDNet says, it is tantamount to “playing the patriotism card.”

* One reason it might not be approved: Some Administration critics, especially from the more liberal part of the Democratic base, could make this a litmus test for the Obama administration’s antitrust enforcement efforts. In the wake of the Comcast-NBC Universal merger approval (granted only after several pounds of flesh were handed over “voluntarily”), some of the Administration’s base will be looking for blood. I remember how the Powell FCC was under real heat to “get tough” on mergers back in 2001-02, and during that time it blocked the proposed DirecTV-EchoStar deal, possibly as a result of that pressure. The same thing could happen to AT&T – T-Mobile here.

* It’s all about spectrum: From AT&T’s perspective, this deal is all about getting more high-quality spectrum, which is in increasingly short supply. Indeed, as Jerry Brito noted earlier, this merger should serve as another wake-up call regarding the need to get spectrum reform going again to ensure that existing players can reallocate their spectrum to those who demand it most. (Hint: Incentivize the TV broadcasters to sell... NOW!) But in the short term, this deal helps AT&T build out a more robust nationwide wireless network. Over the long haul, that should mean better service for T-Mobile’s customers. Continue reading →