Articles by Geoffrey Manne

Geoff is the founder and Executive Director of the International Center for Law and Economics (ICLE) in Portland, Oregon. He is also a Lecturer in Law at Lewis & Clark Law School in Portland and a Contributor to the Hoover Institution's Project on Commercializing Innovation.


Please join us at the Willard Hotel in Washington, DC on December 16th for a conference launching the year-long project, “FTC: Technology and Reform.” With complex technological issues increasingly on the FTC’s docket, we will consider what it means that the FTC is fast becoming the Federal Technology Commission.

The FTC: Technology & Reform Project brings together a unique collection of experts on the law, economics, and technology of competition and consumer protection to consider challenges facing the FTC in general, and especially regarding its regulation of technology.

For many, new technologies represent “challenges” to the agency, a continuous stream of complex threats to consumers that can be mitigated only by ongoing regulatory vigilance. We view technology differently, as an overwhelmingly positive force for consumers. To us, the FTC’s role is to promote the consumer benefits of new technology — not to “tame the beast” but to intervene only with caution, when the likely consumer benefits of regulation outweigh the risk of regulatory error. This conference is the start of a year-long project that will recommend concrete reforms to ensure that the FTC’s treatment of technology works to make consumers better off.

Over at Forbes we have a lengthy piece discussing “10 Reasons To Be More Optimistic About Broadband Than Susan Crawford Is.” Crawford has become the unofficial spokesman for a budding campaign to reshape broadband. She sees cable companies monopolizing broadband, charging too much, withholding content and keeping speeds low, all in order to suppress disruptive innovation — and argues for imposing 19th century common carriage regulation on the Internet. We begin to explain (and expect to contribute much more to this discussion in the future) both why her premises are erroneous and why her prescription is faulty. Here’s a taste:

Things in the US today are better than Crawford claims. While Crawford claims that broadband is faster and cheaper in other developed countries, her statistics are convincingly disputed. She neglects to mention the significant subsidies used to build out those networks. Crawford’s model is Europe, but as Europeans acknowledge, “beyond 100 Mbps supply will be very difficult and expensive. Western Europe may be forced into a second fibre build out earlier than expected, or will find themselves within the slow lane in 3-5 years time.” And while “blazing fast” broadband might be important for some users, broadband speeds in the US are plenty fast enough to satisfy most users. Consumers are willing to pay for speed, but, apparently, have little interest in paying for the sort of speed Crawford deems essential. This isn’t surprising. As the LSE study cited above notes, “most new activities made possible by broadband are already possible with basic or fast broadband: higher speeds mainly allow the same things to happen faster or with higher quality, while the extra costs of providing higher speeds to everyone are very significant.”


I have been a critic of the Federal Trade Commission’s investigation into Google since it was a gleam in its competitors’ eyes—skeptical that there was any basis for a case, and concerned about the effect on consumers, innovation and investment if a case were brought.

While it took the Commission more than a year and a half to finally come to the same conclusion, ultimately the FTC had no choice but to close the case that was a “square peg, round hole” problem from the start.

Now that the FTC’s investigation has concluded, an examination of the nature of the markets in which Google operates illustrates why this crusade was ill-conceived from the start. In short, the “realities on the ground” strongly challenged the logic and relevance of many of the claims put forth by Google’s critics. Nevertheless, the politics are such that their nonsensical claims continue, in different forums, with competitors continuing to hope that they can wrangle a regulatory solution to their competitive problem.

The case against Google rested on certain assumptions about the functioning of the markets in which Google operates. Because these are tech markets, constantly evolving and complex, most assumptions about the scope of these markets and competitive effects within them are imperfect at best. But there are some attributes of Google’s markets—conveniently left out of the critics’ complaints—that, properly understood, painted a picture for the FTC that undermined the basic, essential elements of an antitrust case against the company.

By Geoffrey Manne & Berin Szoka

As Democrats insist that income taxes on the 1% must go up in the name of fairness, one Democratic Senator wants to make sure that the 1% of heaviest Internet users pay the same price as the rest of us. It’s ironic how confused social justice gets when the Internet’s involved.

Senator Ron Wyden is beloved by defenders of Internet freedom, most notably for blocking the Protect IP bill—sister to the more infamous SOPA—in the Senate. He’s widely celebrated as one of the most tech-savvy members of Congress. But his latest bill, the “Data Cap Integrity Act,” is a bizarre, reverse-Robin Hood form of price control for broadband. It should offend those who defend Internet freedom just as much as SOPA did.

Wyden worries that “data caps” will discourage Internet use and allow “Internet providers to extract monopoly rents,” quoting a New York Times editorial from July that stirred up a tempest in a teapot. But his fears are straw men, based on four false premises.

First, US ISPs aren’t “capping” anyone’s broadband; they’re experimenting with usage-based pricing—service tiers. If you want more than the basic tier, your usage isn’t capped: you can always pay more for more bandwidth. But few users will actually exceed that basic tier. For example, Comcast’s basic tier, 300 GB/month, is so generous that 98.5% of users will not exceed it. That’s enough for 130 hours of HD video each month (two full-length movies a day) or between 300 and 1000 hours of standard (compressed) video streaming.
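The arithmetic behind those figures is easy to check. Here is a minimal sketch, assuming streaming rates of roughly 2.3 GB/hour for HD video and 0.3 to 1 GB/hour for compressed SD video; these per-hour rates are inferred from the numbers in the text, not taken from any published Comcast specification:

```python
# Back-of-envelope check of the data-cap figures cited above.
# The GB-per-hour rates are assumptions implied by the article's
# own numbers, not published streaming specifications.

CAP_GB = 300  # Comcast's basic tier, per the article

HD_GB_PER_HOUR = 2.3          # assumed HD streaming rate
SD_GB_PER_HOUR = (1.0, 0.3)   # assumed range for compressed SD video

hd_hours = CAP_GB / HD_GB_PER_HOUR
sd_low, sd_high = (CAP_GB / rate for rate in SD_GB_PER_HOUR)

print(f"HD hours per month: ~{hd_hours:.0f}")           # ~130 hours
print(f"SD hours per month: {sd_low:.0f}-{sd_high:.0f}")  # 300-1000 hours
```

At roughly 130 HD hours over a 30-day month, that works out to a bit over four hours of HD video per day, which is where the "two full-length movies a day" figure comes from.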

Second, Wyden sets up a false dichotomy: Caps (or tiers, more accurately) are, according to Wyden, “appropriate if they are carefully constructed to manage network congestion,” but apparently for Wyden the only alternative explanation for usage-based pricing is extraction of monopoly rents. This simply isn’t the case, and propagating that fallacy risks chilling investment in network infrastructure. In fact, usage-based pricing allows networks to charge heavy users more, thereby recovering more costs and actually reducing prices for the majority of us who don’t need more bandwidth than the basic tier permits—and whose usage is effectively subsidized by those few who do. Unfortunately, Wyden’s bill wouldn’t allow pricing structures based on cost recovery—only network congestion. So, for example, an ISP might be allowed to price usage during times of peak congestion, but couldn’t simply offer a lower price for the basic tier to light users.

That’s nuts—from the perspective of social justice as well as basic economic rationality. Even as the FCC was issuing its famous Net Neutrality regulations, the agency rejected proposals to ban usage-based pricing, explaining:

prohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks.

It is unclear why Senator Wyden thinks the FCC—no friend of broadband “monopolists”—has this wrong.

By Geoffrey Manne and Berin Szoka

A debate is brewing in Congress over whether to allow the Federal Trade Commission to sidestep decades of antitrust case law and economic theory to define, on its own, when competition becomes “unfair.” Unless Congress cancels the FTC’s blank check, uncertainty about the breadth of the agency’s power will chill innovation, especially in the tech sector. And sadly, there’s no reason to believe that such expansive power will serve consumers.

Last month, Senators and Congressmen of both parties sent a flurry of letters to the FTC warning against overstepping the authority Congress granted the agency in 1914 when it enacted Section 5 of the FTC Act. FTC Chairman Jon Leibowitz has long expressed a desire to stake out new antitrust authority under Section 5 over unfair methods of competition that would otherwise be legal under the Sherman and Clayton antitrust acts. He seems to have had Google in mind as a test case.

On Monday, Congressmen John Conyers and Mel Watt, the top two Democrats on the House Judiciary Committee, issued their own letter telling us not to worry about the larger principle at stake. The two insist that “concerns about the use of Section 5 are unfounded” because “[w]ell established legal principles set forth by the Supreme Court provide ample authority for the FTC to address potential competitive concerns in the relevant market, including search.” The second half of that sentence is certainly true: the FTC doesn’t need a “standalone” Section 5 case to protect consumers from real harms to competition. But that doesn’t mean the FTC won’t claim such authority—and, unfortunately, there’s little by way of “established legal principles” to stop the agency from overreaching.

Co-authored with Berin Szoka

In the past two weeks, Members of Congress from both parties have penned scathing letters to the FTC warning of the consequences (both to consumers and the agency itself) if the Commission sues Google not under traditional antitrust law, but instead by alleging unfair competition under Section 5 of the FTC Act. The FTC is rumored to be considering such a suit, and FTC Chairman Jon Leibowitz and Republican Commissioner Tom Rosch have expressed a desire to litigate such a so-called “pure” Section 5 antitrust case — one brought without an accompanying cause of action under the Sherman Act. Unfortunately for the Commissioners, no appellate court has upheld such an action since the 1960s.

This brewing standoff is reminiscent of a similar contest between Congress and the FTC over the Commission’s aggressive use of Section 5 in consumer protection cases in the 1970s. As Howard Beales recounts, the FTC took an expansive view of its authority and failed to produce guidelines or limiting principles to guide its growing enforcement against “unfair” practices — just as today it offers no limiting principles or guidelines for antitrust enforcement under the Act. Only under heavy pressure from Congress, including a brief shutdown of the agency (and significant public criticism for becoming the “National Nanny”), did the agency finally produce a Policy Statement on Unfairness — which Congress eventually codified by statute.

After more than a year of complaining about Google and being met with responses from me (see also here, here, here, here, and here, among others) and many others that these complaints have yet to offer up a rigorous theory of antitrust injury — let alone any evidence — FairSearch yesterday offered up its preferred remedies aimed at addressing, in its own words, “the fundamental conflict of interest driving Google’s incentive and ability to engage in anti-competitive conduct. . . . [by putting an] end [to] Google’s preferencing of its own products ahead of natural search results.”  Nothing in the post addresses the weakness of the organization’s underlying claims, and its proposed remedies would be damaging to consumers.

FairSearch’s first and core “abuse” is “[d]iscriminatory treatment favoring Google’s own vertical products in a manner that may harm competing vertical products.”  To address this it proposes prohibiting Google from preferencing its own content in search results and suggests as additional, “structural remedies” “[r]equiring Google to license data” and “[r]equiring Google to divest its vertical products that have benefited from Google’s abuses.”

Tom Barnett, former AAG for antitrust, counsel to FairSearch member Expedia, and FairSearch’s de facto spokesman, should be ashamed to be associated with claims and proposals like these. He knows better than many others that harm to competitors is not the issue under US antitrust laws. Rather, US antitrust law requires a demonstration that consumers — not just rivals — will be harmed by a challenged practice. He also knows (as economists have known for a long time) that favoring one’s own content — i.e., “vertically integrating” to produce both inputs and finished products — is generally procompetitive.

There are a lot of inaccurate claims – and bad economics – swirling around the Universal Music Group (UMG)/EMI merger, currently under review by the US Federal Trade Commission and the European Commission (and approved by regulators in several other jurisdictions including, most recently, Australia). Regulators and industry watchers should be skeptical of analyses that rely on outmoded antitrust thinking and are out of touch with the real dynamics of the music industry.

The primary claim of critics such as the American Antitrust Institute and Public Knowledge is that this merger would result in an over-concentrated music market and create a “super-major” that could constrain output, raise prices and thwart online distribution channels, thus harming consumers. But this claim, based on a stylized, theoretical economic model, is far too simplistic and ignores the market’s commercial realities, the labels’ self-interest and the merger’s manifest benefits to artists and consumers.

For market concentration to raise serious antitrust issues, products have to be substitutes. This is in fact what critics argue: that if UMG raised prices now it would be undercut by EMI and lose sales, but that if the merger goes through, EMI will no longer constrain UMG’s pricing power. However, the vast majority of EMI’s music is not a substitute for UMG’s. In the real world, there simply isn’t much price competition across music labels or among the artists and songs they distribute. Their catalogs are not interchangeable, and there is so much heterogeneity among consumers and artists (“product differentiation,” in antitrust lingo) that relative prices are a trivial factor in consumption decisions: No one decides to buy more Lady Gaga albums because the Grateful Dead’s are too expensive. The two are not substitutes, and assessing competitive effects as if they are, simply because they are both “popular music,” is not instructive.

On July 31 the FTC voted to withdraw its 2003 Policy Statement on Monetary Remedies in Competition Cases.  Commissioner Ohlhausen issued her first dissent since joining the Commission, and points out the folly and the danger in the Commission’s withdrawal of its Policy Statement.

The Commission supports its action by citing “legal thinking” in favor of heightened monetary penalties and the Policy Statement’s role in dissuading the Commission from following this thinking:

It has been our experience that the Policy Statement has chilled the pursuit of monetary remedies in the years since the statement’s issuance. At a time when Supreme Court jurisprudence has increased burdens on plaintiffs, and legal thinking has begun to encourage greater seeking of disgorgement, the FTC has sought monetary equitable remedies in only two competition cases since we issued the Policy Statement in 2003.

In this case, “legal thinking” apparently amounts to a single 2009 article by Einer Elhauge. But it turns out Einer doesn’t represent the entire current of legal thinking on this issue. As it happens, Josh Wright and Judge Ginsburg examined the evidence in 2010 and found no support for the claim that larger fines increase deterrence of price fixing:

If the best way to deter price-fixing is to increase fines, then we should expect the number of cartel cases to decrease as fines increase. At this point, however, we do not have any evidence that a still-higher corporate fine would deter price-fixing more effectively. It may simply be that corporate fines are misdirected, so that increasing the severity of sanctions along this margin is at best irrelevant and might counter-productively impose costs upon consumers in the form of higher prices as firms pass on increased monitoring and compliance expenditures.

By Geoffrey Manne and Berin Szoka

Everyone loves to hate record labels. For years, copyright-bashers have ranted about the “Big Labels” trying to thwart new models for distributing music in terms that would make JFK assassination conspiracy theorists blush. Now they’ve turned their sights on the pending merger between Universal Music Group and EMI, insisting the deal would be bad for consumers. There’s even a Senate Antitrust Subcommittee hearing tomorrow, led by Senator Herb “Big is Bad” Kohl.

But this is a merger users of Spotify, Apple’s iTunes and the wide range of other digital services ought to love. UMG has done more than any other label to support the growth of such services, cutting licensing deals with hundreds of distribution outlets—often well before other labels. Piracy has been a significant concern for the industry, and UMG seems to recognize that only “easy” can compete with “free.” The company has embraced the reality that music distribution paradigms are changing rapidly to keep up with consumer demand. So why are groups like Public Knowledge opposing the merger?

Critics contend that the merger will elevate UMG’s already substantial market share and “give it the power to distort or even determine the fate of digital distribution models.” For these critics, the only record labels that matter are the four majors, and four is simply better than three. But this assessment hews to the outmoded, “big is bad” structural analysis that has been consistently demolished by economists since the 1970s. Instead, the relevant touchstone for all merger analysis is whether the merger would give the merged firm a new incentive and ability to engage in anticompetitive conduct. But there’s nothing UMG can do with EMI’s catalogue under its control that it can’t do now. If anything, UMG’s ownership of EMI should accelerate the availability of digitally distributed music.

To see why this is so, consider what digital distributors—whether of the pay-as-you-go, iTunes type, or the all-you-can-eat, Spotify type—most want: Access to as much music as possible on terms on par with those of other distribution channels. For the all-you-can-eat distributors this is a sine qua non: their business models depend on being able to distribute as close as possible to all the music every potential customer could want. But given UMG’s current catalogue, it already has the ability, if it wanted to exercise it, to extract monopoly profits from these distributors, as they simply can’t offer a viable product without UMG’s catalogue.