Economics

PaidContent.org has posted a chart showing “Who’s Getting Buzz Settlement Money.” This refers to the $9.5 million payout following the Federal Trade Commission settlement with Google over its “Buzz” social networking service. Last week, the Federal Trade Commission entered into a consent decree with Google over its botched rollout of Buzz, saying the search giant violated its own privacy policy. Google will also pay out to various advocacy groups, according to the distribution seen in the chart, as part of a separate class action. Payouts to advocates like this are not uncommon, although they are more often the result of a class action settlement than a regulatory agency consent decree. [Update/Correction 5:13 pm: I should have made it clear that this payout was the result of a class action lawsuit against Google and not the direct result of the FTC settlement. Apologies for that mistake, but I’m still interested in the questions raised below.]

But that got me wondering whether this might make for good fodder for a case study by a public choice economist or political scientist. There are some really interesting questions raised by settlements like this that would be worth studying.

Continue reading →

[Cross-posted at Truthonthemarket.com]

There is an antitrust debate brewing concerning Google and “search bias,” a term used to describe search engine results that favor the content of the search provider.  For example, Google might list Google Maps prominently if one searches “maps,” or Microsoft’s Bing might prominently place Microsoft-affiliated content or products.

Apparently both antitrust investigations and Congressional hearings are in the works; regulators and commentators appear poised to attempt to impose “search neutrality” through antitrust or other regulatory means to limit or prohibit the ability of search engines (or perhaps just Google) to favor their own content.  At least one proposal goes so far as to advocate a new government agency to regulate search.  Of course, when I read proposals like this, I wonder where Google’s share of the “search market” will be by the time the new agency is built.

As with the net neutrality debate, I understand some of the push for search neutrality involves an intense effort to discard the traditional, economically grounded antitrust framework.  The logic for this push is simple.  The economic literature on vertical restraints and vertical integration provides no support for ex ante regulation arising out of the concern that a vertically integrating firm will harm competition by favoring its own content and discriminating against rivals.  Economic theory suggests that such arrangements may be anticompetitive in some instances, but it also provides a plethora of pro-competitive explanations.  Lafontaine & Slade explain the state of the evidence in their recent survey paper in the Journal of Economic Literature:

We are therefore somewhat surprised at what the weight of the evidence is telling us. It says that, under most circumstances, profit-maximizing vertical-integration decisions are efficient, not just from the firms’ but also from the consumers’ points of view. Although there are isolated studies that contradict this claim, the vast majority support it. Moreover, even in industries that are highly concentrated so that horizontal considerations assume substantial importance, the net effect of vertical integration appears to be positive in many instances. We therefore conclude that, faced with a vertical arrangement, the burden of evidence should be placed on competition authorities to demonstrate that that arrangement is harmful before the practice is attacked. Furthermore, we have found clear evidence that restrictions on vertical integration that are imposed, often by local authorities, on owners of retail networks are usually detrimental to consumers. Given the weight of the evidence, it behooves government agencies to reconsider the validity of such restrictions.

Of course, this does not bless all instances of vertical contracts or integration as pro-competitive.  The antitrust approach appropriately eschews ex ante regulation in favor of a fact-specific rule of reason analysis that requires plaintiffs to demonstrate competitive harm in a particular instance. Again, given the strength of the empirical evidence, it is no surprise that advocates of search neutrality, as net neutrality before it, either do not rely on consumer welfare arguments or are willing to sacrifice consumer welfare for other objectives.

I wish to focus on the antitrust arguments for a moment.  In an interview with SFGate, Harvard’s Ben Edelman sketches out an antitrust claim against Google based upon search bias; and to his credit, Edelman provides some evidence in support of his claim.

I’m not convinced.  Edelman’s interpretation of evidence of search bias is detached from antitrust economics.  The evidence is all about identifying whether or not there is bias.  That, however, is not the relevant antitrust inquiry; instead, the question is whether such vertical arrangements, including preferential treatment of one’s own downstream products, are generally procompetitive or anticompetitive.  Examples from other contexts illustrate this point.

Continue reading →

[Cross-Posted at Truthonthemarket.com]

There has been, as is to be expected, plenty of casual analysis of the AT&T / T-Mobile merger to go around.  As I mentioned, I think there are a number of interesting issues to be resolved in an investigation with access to the facts necessary to conduct the appropriate analysis.  Annie Lowrey’s piece in Slate is one of the more egregious examples of applying “folk economics” liberally to the merger while reaching some very confident conclusions concerning its competitive effects:

Merging AT&T and T-Mobile would reduce competition further, creating a wireless behemoth with more than 125 million customers and nudging the existing oligopoly closer to a duopoly. The new company would have more customers than Verizon, and three times as many as Sprint Nextel. It would control about 42 percent of the U.S. cell-phone market. That means higher prices, full stop. The proposed deal is, in finance-speak, a “horizontal acquisition.” AT&T is not attempting to buy a company that makes software or runs network improvements or streamlines back-end systems. AT&T is buying a company that has the broadband it needs and cutting out a competitor to boot—a competitor that had, of late, pushed hard to compete on price. Perhaps it’s telling that AT&T has made no indications as of yet that it will keep T-Mobile’s lower rates.

Full stop?  I don’t think so.  Nothing in economic theory says so.  And by the way, 42 percent simply isn’t high enough to tell a merger-to-monopoly story here; and Lowrey concedes some efficiencies from the merger (“buying a company that has the broadband it needs” is an efficiency!).  To be clear, the merger may or may not pose competitive problems as a matter of fact.  The point is that serious analysis must be done in order to evaluate its likely competitive effects.  And of course, Lowrey (H/T: Yglesias) has no obligation to conduct serious analysis in a column — nor do I in a blog post. But the idea that market concentration is an incredibly useful — and, in her case, perfectly accurate — predictor of price effects is devoid of analytical content and misleads on the relevant economics.
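Lowrey’s leap from a 42 percent share to “higher prices, full stop” skips even the screening arithmetic the agencies themselves use. A minimal sketch of the Herfindahl-Hirschman Index (HHI) calculation, using hypothetical shares chosen only so that the combined firm matches her 42 percent figure (these are illustrative numbers, not actual market data):

```python
# Illustrative HHI screen with HYPOTHETICAL market shares (percentages).
# Labels and numbers are assumptions for the sketch, not actual data.

def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared percentage shares (0-10,000)."""
    return sum(s ** 2 for s in shares)

# Hypothetical pre-merger shares: Verizon, AT&T, T-Mobile, Sprint, others
pre_merger = [31, 32, 10, 16, 11]
# Post-merger: AT&T absorbs T-Mobile's share (32 + 10 = 42)
post_merger = [31, 42, 16, 11]

delta = hhi(post_merger) - hhi(pre_merger)
print(f"pre: {hhi(pre_merger)}, post: {hhi(post_merger)}, increase: {delta}")
```

Under the 2010 DOJ/FTC Horizontal Merger Guidelines, a post-merger HHI above 2,500 combined with an increase above 200 triggers a presumption of enhanced market power; but that presumption is rebuttable, which is precisely why concentration numbers begin the analysis rather than end it.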

Continue reading →

[Cross-Posted at Truthonthemarket.com]

The big merger news is that AT&T is planning to acquire T-Mobile.  From the AT&T press release:

AT&T Inc. (NYSE: T) and Deutsche Telekom AG (FWB: DTE) today announced that they have entered into a definitive agreement under which AT&T will acquire T-Mobile USA from Deutsche Telekom in a cash-and-stock transaction currently valued at approximately $39 billion. The agreement has been approved by the Boards of Directors of both companies. AT&T’s acquisition of T-Mobile USA provides an optimal combination of network assets to add capacity sooner than any alternative, and it provides an opportunity to improve network quality in the near term for both companies’ customers. In addition, it provides a fast, efficient and certain solution to the impending exhaustion of wireless spectrum in some markets, which limits both companies’ ability to meet the ongoing explosive demand for mobile broadband. With this transaction, AT&T commits to a significant expansion of robust 4G LTE (Long Term Evolution) deployment to 95 percent of the U.S. population to reach an additional 46.5 million Americans beyond current plans – including rural communities and small towns.  This helps achieve the Federal Communications Commission (FCC) and President Obama’s goals to connect “every part of America to the digital age.” T-Mobile USA does not have a clear path to delivering LTE.

As the press release suggests, the potential efficiencies of the deal lie in relieving spectrum exhaustion in some markets as well as 4G LTE.  AT&T President Ralph De La Vega, in an interview, described the potential gains as follows:

The first thing is, this deal alleviates the impending spectrum exhaust challenges that both companies face. By combining the spectrum holdings that we have, which are complementary, it really helps both companies.  Second, just like we did with the old AT&T Wireless merger, when we combine both networks what we are going to have is more network capacity and better quality as the density of the network grid increases. In major urban areas, whether Washington, D.C., New York or San Francisco, by combining the networks we actually have a denser grid. We have more cell sites per grid, which allows us to have a better capacity in the network and better quality. It’s really going to be something that customers in both networks are going to notice. The third point is that AT&T is going to commit to expand LTE to cover 95 percent of the U.S. population. T-Mobile didn’t have a clear path to LTE, so their 34 million customers now get the advantage of having the greatest and latest technology available to them, whereas before that wasn’t clear. It also allows us to deliver that to 46.5 million more Americans than we have in our current plans. This is going to take LTE not just to major cities but to rural America.

Continue reading →

[Cross-posted at Truth on the Market]

[UPDATE:  Josh links to a WSJ article telling us that EU antitrust enforcers raided several (unnamed) e-book publishers as part of an apparent antitrust investigation into the agency model and whether it is “improperly restrictive.”  Whatever that means.  Key grafs:

At issue for antitrust regulators is whether agency models are improperly restrictive. Europe, in particular, has strong anticollusion laws that limit the extent to which companies can agree on the prices consumers will eventually be charged. Amazon, in particular, has vociferously opposed the agency practice, saying it would like to set prices as it sees fit. Publishers, by contrast, resist the notion of online retailers’ deep discounting.

It is unclear whether the animating question is whether the publishers might have agreed to a particular pricing model, or to particular prices within that model.  As a legal matter that distinction probably doesn’t matter at all; as an economic matter it would seem to be more complicated–to be explored further another day . . . .]

A year ago I wrote about the economics of the e-book publishing market in the context of the dispute between Amazon and some publishers (notably Macmillan) over pricing.  At the time I suggested a few things about how the future might pan out (never a good idea . . . ):

And that’s really the twist.  Amazon is not ready to be a platform in this business.  The economic conditions are not yet right and it is clearly making a lot of money selling physical books directly to its users.  The Kindle is not ubiquitous and demand for electronic versions of books is not very significant–and thus Amazon does not want to take on the full platform development and distribution risk.  Where seller control over price usually entails a distribution of inventory risk away from suppliers and toward sellers, supplier control over price correspondingly distributes platform development risk toward sellers.  Under the old system Amazon was able to encourage the distribution of the platform (the Kindle) through loss-leader pricing on e-books, ensuring that publishers shared somewhat in the costs of platform distribution (from selling correspondingly fewer physical books) and allowing Amazon to subsidize Kindle sales in a way that helped to encourage consumer familiarity with e-books.  Under the new system it does not have that ability and can only subsidize Kindle use by reducing the price of Kindles–which impedes Amazon from engaging in effective price discrimination for the Kindle, does not tie the subsidy to increased use, and will make widespread distribution of the device more expensive and more risky for Amazon.

This “agency model,” if you recall, is one where, essentially, publishers, rather than Amazon, determine the price for electronic versions of their books sold via Amazon and pay Amazon a percentage.  The problem from Amazon’s point of view, as I mention in the quote above, is that without the ability to control the price of the books it sells, Amazon is limited essentially to fiddling with the price of the reader–the platform–itself in order to encourage more participation on the reader side of the market.  But I surmised (again in the quote above) that fiddling with the price of the platform would be far more blunt and potentially costly than controlling the price of the books themselves, mainly because the latter correlates almost perfectly with usage, and the former does not–and in the end Amazon may subsidize lots of Kindle purchases from which it can never recoup its losses, because many of those subsidized buyers had no interest in actually using the devices very much (either because they’re sticking with paper or because Apple has leapfrogged the competition).

It appears, nevertheless, that Amazon has indeed been pursuing this pricing strategy.  According to this post from Kevin Kelly,

John Walkenbach noticed that the price of the Kindle was falling at a consistent rate, lowering almost on a schedule. By June 2010, the rate was so unwavering that he could easily forecast the date at which the Kindle would be free: November 2011.
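Walkenbach’s forecast is just a straight-line extrapolation: fit price against time by least squares and solve for the date at which the fitted line crosses zero. A quick sketch of that kind of calculation, using hypothetical (month, price) observations rather than his actual data:

```python
# Walkenbach-style zero-price forecast via ordinary least squares.
# The price observations below are HYPOTHETICAL, not his actual data.

prices = [  # (months since the Nov 2007 Kindle launch, price in USD)
    (0, 399), (17, 359), (20, 299), (32, 189), (35, 139),
]

n = len(prices)
mean_x = sum(x for x, _ in prices) / n
mean_y = sum(y for _, y in prices) / n

# Least-squares slope and intercept for price = intercept + slope * month
slope = (sum((x - mean_x) * (y - mean_y) for x, y in prices)
         / sum((x - mean_x) ** 2 for x, _ in prices))
intercept = mean_y - slope * mean_x

# Month at which the fitted price line hits $0
zero_month = -intercept / slope
print(f"fitted: price = {intercept:.0f} {slope:+.2f} * month; "
      f"hits $0 around month {zero_month:.0f}")
```

Whether the trend actually continues to zero is exactly the economic question: a straight line knows nothing about the subsidy-recoupment problem described above.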

Continue reading →

[Cross-posted at Truth on the Market]

Antitrust investigators continue to see smoke rising around Apple and the App Store.  From the WSJ:

For starters, subscriptions must be sold through Apple’s App Store. For instance, a magazine that wants to publish its content on an iPad cannot include a link in an iPad app that would direct readers to buy subscriptions through the magazine’s website. Apple earns a 30% share of any subscription sold through its App Store. … A federal official confirmed to The Washington Post that the government is looking at Apple’s subscription service terms for potential antitrust issues but said there is no formal investigation. Speaking on the condition of anonymity because he was not authorized to comment publicly, the official said that the government routinely tracks new commercial initiatives influencing markets.

Investigators certainly suspect Apple of myriad antitrust violations; there is even some absurd talk about breaking up Apple.  There is definitely smoke — but is there fire?

The most often discussed bar to an antitrust action against Apple is the one many regulators simply assume into existence: Apple must have market power in an antitrust-relevant market.  While Apple’s share of the smartphone market is only 16% or so, its share of the tablet computing market is much larger.  The WSJ, for example, reports that Apple accounts for about three-fourths of tablet computer sales.  I’ve noted before in the smartphone context that this requirement should not be considered a bar to an FTC suit, given the availability of Section 5; however, as the WSJ explains, market definition must be a critical issue in any Apple investigation or lawsuit:

Publishers, for example, might claim that Apple dominates the market for consumer tablet computers and that it has allegedly used that commanding position to restrict competition. Apple, in turn, might define the market to include all digital and print media, and counter that any publisher not happy with Apple’s terms is free to still reach its customers through many other print and digital outlets.
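The stakes of that dispute are easy to make concrete with back-of-the-envelope arithmetic. Using entirely hypothetical unit-sales figures (the numbers and category labels below are assumptions for illustration), the same Apple tablet volume yields wildly different “shares” depending on where the market boundary is drawn:

```python
# Hypothetical annual unit sales in millions -- illustrative only,
# not actual sales data for any of these categories.
sales = {
    "apple_tablets": 15,
    "other_tablets": 5,
    "e_readers": 12,
    "laptops_netbooks": 180,
}

def share(firm_units, market_keys):
    """Firm's percentage share of a market defined by the given categories."""
    total = sum(sales[k] for k in market_keys)
    return 100 * firm_units / total

# Narrow market: tablets only (the publishers' preferred definition)
narrow = share(sales["apple_tablets"], ["apple_tablets", "other_tablets"])
# Broad market: all portable reading/computing devices (Apple's preferred one)
broad = share(sales["apple_tablets"], list(sales))

print(f"tablets-only market: {narrow:.0f}%  vs. broader device market: {broad:.0f}%")
```

The fight over which denominator applies is, in large part, the antitrust case itself.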

One must conduct a proper, empirically-grounded analysis of the relevant data to speak with confidence; however, it suffices to say that I am skeptical that tablet sales would constitute a relevant market. Continue reading →

To believe some of the worrywarts around Washington, we find ourselves in the midst of a miserable mobile marketplace experience. Regulatory advocates like New America Foundation, Free Press, Public Knowledge and others routinely claim that the sky is falling on consumers and that far-reaching regulation of the wireless sector is needed to save the day.

I hope those folks are still willing to listen to facts, because those facts tell a very different story. Specifically, I invite critics to flip through the latest presentation by Internet market watchers Mary Meeker and Matt Murphy of Kleiner Perkins Caufield & Byers on “Top Mobile Internet Trends” and then explain to me how we can label this marketplace anything other than what it really is: One of the greatest capitalist success stories of modern times. Just about every metric illustrates the explosive growth of technological innovation in the U.S. mobile arena. I’ve embedded the entire slideshow down below, but two particular slides deserve to be showcased.

Continue reading →

My last post on the opportunities presented by “The Great Stagnation” got a bit of attention, and I’m heartened by that because I’d like to develop my conception of “opting out” a bit more in later posts. Today I’d just like to respond to my friend and colleague Tate Watkins, who reacted to my post by noting that “most people don’t want any more leisure. People don’t work 20 hours a week because they would have to make up the difference ‘playing with [their] families and reading books.'”

Tate says that spending that much time doing nothing, and doing it with their families, is likely a net minus for most people. I think he’s absolutely right, so I guess I need to define what I mean by “leisure” when I say that “the cost of leisure is going down” allowing us to consume more of it. Continue reading →

In his column on Monday, David Brooks put his finger on what I found most interesting about Tyler Cowen’s The Great Stagnation. Namely:

It could be that in an industrial economy people develop a materialist mind-set and believe that improving their income is the same thing as improving their quality of life. But in an affluent information-driven world, people embrace the postmaterialist mind-set. They realize they can improve their quality of life without actually producing more wealth.

As Tyler points out in this book, and as he catalogued at length in his other excellent book, Create Your Own Economy, recent increases in happiness come from growth in internal economies. That is, economies internal to humans. In the past, increased well-being came from not having a toilet and then having one, or from the invention of cheap air travel. Today it comes from blogging, watching Lost on Netflix, listening to a symphony from iTunes, tweeting with your friends, seeing their pictures on Facebook or Path, and learning and collaborating on Wikipedia. As a result, once one secures a certain income to cover basic needs, greater happiness and well-being can be had for virtually nothing.

The problem some see with this is that the Internet sector, while it may give us amazing innovations, produces little by way of revenue or jobs. Brooks also laments that because Americans have not come to grips with this growing distinction between wealth and standard of living, we tend to live beyond our means, which is certainly true in a personal and public fiscal sense.

But I’d like to see this seeming decoupling of wealth and well-being as an opportunity. Continue reading →

For my contribution to Berin Szoka and Adam Marcus’ (of TechFreedom fame) awesome Next Digital Decade book, I wrote about search engine “neutrality” and the implicit and explicit claims that search engines are “essential facilities.” (Check out the other essays on this topic by Frank Pasquale, Eric Goldman and James Grimmelmann, linked to here, under Chapter 7).

The scare quotes around neutrality are there because the term is at best a misnomer as applied to search engines and at worst a baseless excuse for more regulation of the Internet.  (The quotes around essential facilities are there because it is a term of art, but it is also scary).  The essay is an effort to inject some basic economic and legal reasoning into the overly-emotionalized (is that a word?) issue.

So, what is wrong with calls for search neutrality, especially those rooted in the notion of Internet search (or, more accurately, Google, the policy scolds’ bête noire of the day) as an “essential facility,” and necessitating government-mandated access? As others have noted, the basic concept of neutrality in search is, at root, farcical. The idea that a search engine, which offers its users edited access to the most relevant websites based on the search engine’s assessment of the user’s intent, should do so “neutrally” implies that the search engine’s efforts to ensure relevance should be cabined by an almost-limitless range of ancillary concerns.

Nevertheless, proponents of this view have begun to adduce increasingly detail-laden and complex arguments in favor of their positions, and the European Commission has even opened a formal investigation into Google’s practices, based largely on various claims that it has systematically denied access to its top search results (in some cases paid results, in others organic results) by competing services, especially vertical search engines. To my knowledge, no one has yet claimed that Google should offer up links to competing general search engines as a remedy for its perceived market foreclosure, but Microsoft’s experience with the “Browser Choice Screen” it has now agreed to offer as a consequence of the European Commission’s successful competition case against the company is not encouraging.

These more superficially sophisticated claims are rooted in the notion of Internet search as an “essential facility” – a bottleneck limiting effective competition. These claims, as well as the more fundamental harm-to-competitor claims, are difficult to sustain on any economically-reasonable grounds. Understanding this requires some basic understanding of the economics of essential facilities, of Internet search, and of the relevant product markets in which Internet search operates.

The essay goes into much more detail, of course, but the basic point is that Google’s search engine is not, in fact, “essential” in the economically-relevant sense.  Rather, Google’s competitors and other detractors have basically built precisely the most problematic sort of antitrust case, where success itself is penalized (in this case, Google is so good at what it does it just isn’t fair to keep it all to itself!). Continue reading →