Geoffrey Manne – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

FTC: Technology & Reform Project Launches 12/16 with Conference keynoted by Commissioner Wright
https://techliberation.com/2013/12/10/ftc-technology-reform-project-launches-1216-with-conference-keynoted-by-commissioner-wright/
Tue, 10 Dec 2013


Please join us at the Willard Hotel in Washington, DC on December 16th for a conference launching the year-long project, “FTC: Technology and Reform.” With complex technological issues increasingly on the FTC’s docket, we will consider what it means that the FTC is fast becoming the Federal Technology Commission.

The FTC: Technology & Reform Project brings together a unique collection of experts on the law, economics, and technology of competition and consumer protection to consider challenges facing the FTC in general, and especially regarding its regulation of technology.

For many, new technologies represent “challenges” to the agency, a continuous stream of complex threats to consumers that can be mitigated only by ongoing regulatory vigilance. We view technology differently, as an overwhelmingly positive force for consumers. To us, the FTC’s role is to promote the consumer benefits of new technology — not to “tame the beast” but to intervene only with caution, when the likely consumer benefits of regulation outweigh the risk of regulatory error. This conference is the start of a year-long project that will recommend concrete reforms to ensure that the FTC’s treatment of technology works to make consumers better off.

Convened by TechFreedom and the International Center for Law & Economics, the FTC Technology & Reform Project includes academics, practitioners, policy experts and several former FTC Commissioners and staffers. Our initial report, to be released around the December 16th event, will identify critical questions facing the agency, Congress, and the courts about the FTC’s future and will propose a framework for addressing them.

FTC Commissioner Joshua Wright will kick off the half-day conference with a luncheon keynote. Following his remarks, Project members will discuss principal aspects of our initial report. The event will conclude with a networking reception. Attendees will include a wide variety of practitioners and scholars with expertise working at the Commission or counseling businesses about it.

RSVP Today!

When: Monday, December 16, 2013
11:30 am – Registration opens
12:00 – 5:30 pm – Luncheon keynote & conference
5:30 – 6:30 pm – Reception

Where: The Willard Hotel, 1401 Pennsylvania Ave NW, Washington, DC 20004

Questions? Email contact@techfreedom.org.

Forbes commentary on Susan Crawford’s “broadband monopoly” thesis
https://techliberation.com/2013/03/04/forbes-commentary-on-susan-crawfords-broadband-monopoly-thesis/
Mon, 04 Mar 2013

Over at Forbes we have a lengthy piece discussing “10 Reasons To Be More Optimistic About Broadband Than Susan Crawford Is.” Crawford has become the unofficial spokesman for a budding campaign to reshape broadband. She sees cable companies monopolizing broadband, charging too much, withholding content and keeping speeds low, all in order to suppress disruptive innovation — and argues for imposing 19th century common carriage regulation on the Internet. We begin here (and expect to contribute much more to this discussion in the future) by explaining both why her premises are erroneous and why her prescription is faulty. Here’s a taste:

Things in the US today are better than Crawford claims. While Crawford claims that broadband is faster and cheaper in other developed countries, her statistics are convincingly disputed. She neglects to mention the significant subsidies used to build out those networks. Crawford’s model is Europe, but as Europeans acknowledge, “beyond 100 Mbps supply will be very difficult and expensive. Western Europe may be forced into a second fibre build out earlier than expected, or will find themselves within the slow lane in 3-5 years time.” And while “blazing fast” broadband might be important for some users, broadband speeds in the US are plenty fast enough to satisfy most users. Consumers are willing to pay for speed, but, apparently, have little interest in paying for the sort of speed Crawford deems essential. This isn’t surprising. As the LSE study cited above notes, “most new activities made possible by broadband are already possible with basic or fast broadband: higher speeds mainly allow the same things to happen faster or with higher quality, while the extra costs of providing higher speeds to everyone are very significant.”

Even if she’s right, she wildly exaggerates the costs. Using a back-of-the-envelope calculation, Crawford claims that slow downloads (compared to other countries) could cost the U.S. $3 trillion/year in lost productivity from wasted time spent “waiting for a link to load or an app to function on your wireless device.” This intentionally sensationalist claim, however, rests on a purely hypothetical average wait time in the U.S. of 30 seconds (vs. 2 seconds in Japan). Whatever the actual numbers might be, her methodology would still be shaky, not least because time spent waiting for laggy content isn’t necessarily simply wasted. And for most of us, the opportunity cost of waiting for Angry Birds to load on our phones isn’t counted in wages — it’s counted in beers or time on the golf course or other leisure activities. These are important, to be sure, but does anyone seriously believe our GDP would grow 20% if only apps were snappier? Meanwhile, actual econometric studies looking at the productivity effects of faster broadband on businesses have found that higher broadband speeds are not associated with higher productivity.
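To see why such a back-of-the-envelope estimate is so fragile, it helps to work one through. Every input below (lookups per day, number of users, wage rate) is a hypothetical stand-in of my own, not one of Crawford's actual figures; the point is only that the headline number is driven almost entirely by the assumed wait time and the decision to value all waiting at full wages:

```python
# Hypothetical reconstruction of a "wasted wait time" estimate.
# None of these inputs come from Crawford; they illustrate how
# sensitive the result is to the assumptions.
SECONDS_PER_HOUR = 3600

def annual_cost(wait_s, baseline_s, lookups_per_day, users, wage_per_hour):
    """Dollar 'cost' of excess waiting, valuing all wait time at a wage rate."""
    excess_s = wait_s - baseline_s                  # extra seconds per lookup
    hours_per_year = excess_s * lookups_per_day * 365 / SECONDS_PER_HOUR
    return hours_per_year * users * wage_per_hour

# 30s US wait vs. a 2s baseline, 50 lookups/day, 250M users, $25/hour:
us = annual_cost(30, 2, 50, 250e6, 25)
print(f"${us / 1e12:.2f} trillion/year")                          # → $0.89 trillion/year

# Halve the assumed excess wait and the 'cost' halves with it:
print(f"${annual_cost(16, 2, 50, 250e6, 25) / 1e12:.2f} trillion/year")  # → $0.44 trillion/year
```

Even these generous assumptions fall well short of $3 trillion, and shrinking the assumed wait time shrinks the estimate proportionally — which is exactly why a result built on a purely hypothetical 30-second average deserves little weight.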

* * *

So how do we guard against the possibility of consumer harm without making things worse? For us, it’s a mix of promoting both competition and a smarter, subtler role for government.

Despite Crawford’s assertion that the DOJ should have blocked the Comcast-NBCU merger, antitrust and consumer protection laws do operate to constrain corporate conduct, not only through government enforcement but also private rights of action. Antitrust works best in the background, discouraging harmful conduct without anyone ever suing. The same is true for using consumer protection law to punish deception and truly harmful practices (e.g., misleading billing or overstating speeds).

A range of regulatory reforms would also go a long way toward promoting competition. Most importantly, reform local franchising so competitors like Google Fiber can build their own networks. That means giving them “open access” not to existing networks but to the public rights of way under streets. Instead of requiring that franchisees build out to an entire franchise area—which often makes both new entry and service upgrades unprofitable—remove build-out requirements and craft smart subsidies to encourage competition to deliver high-quality universal service, and to deliver superfast broadband to the customers who want it. Rather than controlling prices, offer broadband vouchers to those that can’t afford it. Encourage telcos to build wireline competitors to cable by transitioning their existing telephone networks to all-IP networks, as we’ve urged the FCC to do (here and here). Let wireless reach its potential by opening up spectrum and discouraging municipalities from blocking tower construction. Clear the deadwood of rules that protect incumbents in the video marketplace—a reform with broad bipartisan appeal.

In short, there’s a lot of ground between “do nothing” and “regulate broadband like electricity—or railroads.” Crawford’s arguments simply don’t justify imposing 19th century common carriage regulation on the Internet. But that doesn’t leave us powerless to correct practices that truly harm consumers, should they actually arise.

Read the whole thing here.

FTC Deservedly Closes Google Antitrust Investigation Without Taking Action
https://techliberation.com/2013/01/03/ftc-deservedly-closes-google-antitrust-investigation-without-taking-action/
Thu, 03 Jan 2013

I have been a critic of the Federal Trade Commission’s investigation into Google since it was a gleam in its competitors’ eyes—skeptical that there was any basis for a case, and concerned about the effect on consumers, innovation and investment if a case were brought.

While it took the Commission more than a year and a half to finally come to the same conclusion, ultimately the FTC had no choice but to close the case that was a “square peg, round hole” problem from the start.

Now that the FTC’s investigation has concluded, an examination of the nature of the markets in which Google operates illustrates why this crusade was ill-conceived from the start. In short, the “realities on the ground” strongly challenged the logic and relevance of many of the claims put forth by Google’s critics. Nevertheless, the politics are such that their nonsensical claims continue, in different forums, with competitors continuing to hope that they can wrangle a regulatory solution to their competitive problem.

The case against Google rested on certain assumptions about the functioning of the markets in which Google operates. Because these are tech markets, constantly evolving and complex, most assumptions about the scope of these markets and competitive effects within them are imperfect at best. But there are some attributes of Google’s markets—conveniently left out of the critics’ complaints— that, properly understood, painted a picture for the FTC that undermined the basic, essential elements of an antitrust case against the company.

That case was seriously undermined by the nature and extent of competition in the markets the FTC was investigating. Most importantly, casual references to a “search market” and “search advertising market” aside, Google actually competes in the market for targeted eyeballs: a market for delivering targeted ads to interested users. Search offers a valuable opportunity for targeting an advertiser’s message, but it is by no means alone: there are myriad (and growing) other mechanisms to access consumers online.

Consumers use Google because they are looking for information — but there are lots of ways to do that. There are plenty of apps that circumvent Google, and consumers are increasingly going to specialized sites to find what they are looking for. The search market, if a distinct one ever existed, has evolved into an online information market that includes far more players than those who just operate traditional search engines.

We live in a world where what prevails today won’t prevail tomorrow. The tech industry is constantly changing, and it is the height of folly (and a serious threat to innovation and consumer welfare) to constrain the activities of firms competing in such an environment by pigeonholing the market. In other words, in a proper market, Google looks significantly less dominant. More important, perhaps, as search itself evolves, and as Facebook, Amazon and others get into the search advertising game, Google’s strong position even in the overly narrow “search market” is far from unassailable.

This is progress — creative destruction — not regress, and such changes should not be penalized.

Another common refrain from Google’s critics was that Google’s access to immense amounts of data used to increase the quality of its targeting presented a barrier to competition that no one else could match, thus protecting Google’s unassailable monopoly. But scale comes in lots of ways.

Even if scale doesn’t come cheaply, the fact that challenging firms might have to spend the same as (or, in this case, almost certainly less than) Google did in order to replicate its success is not a “barrier to entry” that requires an antitrust remedy. Data about consumer interests is widely available (despite efforts to reduce the availability of such data in the name of protecting “privacy”—which might actually create barriers to entry). It’s never been the case that a firm has to generate its own inputs for every product it produces — and there’s no reason to suggest search or advertising is any different.

Additionally, to defend a claim of monopolization, it is generally required to show that the alleged monopolist enjoys protection from competition through barriers to entry. In Google’s case, the barriers alleged were illusory. Bing and other recent entrants in the general search business have enjoyed success precisely because they were able to obtain the inputs (in this case, data) necessary to develop competitive offerings.

Meanwhile unanticipated competitors like Facebook, Amazon, Twitter and others continue to knock at Google’s metaphorical door, all of them entering into competition with Google using data sourced from creative sources, and all of them potentially besting Google in the process. Consider, for example, Amazon’s recent move into the targeted advertising market, competing with Google to place ads on websites across the Internet, but with the considerable advantage of being able to target ads based on searches, or purchases, a user has made on Amazon—the world’s largest product search engine.

Now that the investigation has concluded, we come away with two major findings. First, the online information market is dynamic, and it is a fool’s errand to identify the power or significance of any player in these markets based on data available today — data that is already out of date between the time it is collected and the time it is analyzed.

Second, each development in the market – whether offered by Google or its competitors and whether facilitated by technological change or shifting consumer preferences – has presented different, novel and shifting opportunities and challenges for companies interested in attracting eyeballs, selling ad space and data, earning revenue and obtaining market share. To say that Google dominates “search” or “online advertising” missed the mark precisely because there was simply nothing especially antitrust-relevant about either search or online advertising. Because of their own unique products, innovations, data sources, business models, entrepreneurship and organizations, all of these companies have challenged and will continue to challenge the dominant company — and the dominant paradigm — in a shifting and evolving range of markets.

It would be churlish not to give credit where credit is due—and credit is due the FTC. I continue to think the investigation should have ended before it began, of course, but the FTC is to be commended for reaching this result amidst an overwhelming barrage of pressure to “do something.”

But there are others in this sadly politicized mess for whom neither the facts nor the FTC’s extensive investigation process (nor the finer points of antitrust law) are enough. Like my four-year-old daughter, they just “want what they want,” and they will stamp their feet until they get it.

While competitors will be competitors—using the regulatory system to accomplish what they can’t in the market—they do a great disservice to the very customers they purport to be protecting in doing so. As Milton Friedman famously said, in decrying “The Business Community’s Suicidal Impulse“:

As a believer in the pursuit of self-interest in a competitive capitalist system, I can’t blame a businessman who goes to Washington and tries to get special privileges for his company.… Blame the rest of us for being so foolish as to let him get away with it.

I do blame businessmen when, in their political activities, individual businessmen and their organizations take positions that are not in their own self-interest and that have the effect of undermining support for free private enterprise. In that respect, businessmen tend to be schizophrenic. When it comes to their own businesses, they look a long time ahead, thinking of what the business is going to be like 5 to 10 years from now. But when they get into the public sphere and start going into the problems of politics, they tend to be very shortsighted.

Ironically, Friedman was writing about the antitrust persecution of Microsoft by its rivals back in 1999:

Is it really in the self-interest of Silicon Valley to set the government on Microsoft? Your industry, the computer industry, moves so much more rapidly than the legal process, that by the time this suit is over, who knows what the shape of the industry will be.… [Y]ou will rue the day when you called in the government.

Among Microsoft’s chief tormentors was Gary Reback. He’s spent the last few years beating the drum against Google—but singing from the same song book. Reback recently told the Washington Post, “if a settlement were to be proposed that didn’t include search, the institutional integrity of the FTC would be at issue.” Actually, no it wouldn’t. As a matter of fact, the opposite is true. It’s hard to imagine an agency under more pressure, from more quarters (including the Hill), to bring a case around search. Bringing one would at least have raised the possibility that the agency was acting out of pressure rather than on the merits of the case. But declining to bring a case in the face of such pressure? That can almost only be a function of institutional integrity.

As another of Google’s most-outspoken critics, Tom Barnett, noted:

[The FTC has] really put [itself] in the position where they are better positioned now than any other agency in the U.S. is likely to be in the immediate future to address these issues. I would encourage them to take the issues as seriously as they can. To the extent that they concur that Google has violated the law, there are very good reasons to try to address the concerns as quickly as possible.

As Barnett acknowledges, there is no question that the FTC investigated these issues more fully than anyone. The agency’s institutional culture and its committed personnel, together with political pressure, media publicity and endless competitor entreaties, virtually ensured that the FTC took the issues “as seriously as they [could]” – in fact, as seriously as anyone else in the world. There is simply no reasonable way to criticize the FTC for being insufficiently thorough in its investigation and conclusions.

Nor is there a basis for claiming that the FTC is “standing in the way” of the courts’ ability to review the issue, as Scott Cleland contends in an op-ed in the Hill. Frankly, this is absurd. Google’s competitors have spent millions pressuring the FTC to bring a case. But the FTC isn’t remotely the only path to the courts. As Commissioner Rosch admonished,

They can darn well bring [a case] as a private antitrust action if they think their ox is being gored instead of free-riding on the government to achieve the same result.

Competitors have already beaten a path to the DOJ’s door, and investigations are still pending in the EU, Argentina, several US states, and elsewhere. That the agency that has leveled the fullest and best-informed investigation has concluded that there is no “there” there should give these authorities pause, but, sadly for consumers who would benefit from an end to competitors’ rent seeking, nothing the FTC has done actually prevents courts or other regulators from having a crack at Google.

The case against Google has received more attention from the FTC than the merits of the case ever warranted. It is time for Google’s critics and competitors to move on.

[Crossposted at Forbes.com]

Tears for Tiers: Wyden’s “Data Cap” Restrictions Would Hurt, not Help, Internet Users
https://techliberation.com/2012/12/20/tears-for-tiers-wydens-data-cap-restrictions-would-hurt-not-help-internet-users/
Fri, 21 Dec 2012

By Geoffrey Manne & Berin Szoka

As Democrats insist that income taxes on the 1% must go up in the name of fairness, one Democratic Senator wants to make sure that the 1% of heaviest Internet users pay the same price as the rest of us. It’s ironic how confused social justice gets when the Internet’s involved.

Senator Ron Wyden is beloved by defenders of Internet freedom, most notably for blocking the Protect IP bill—sister to the more infamous SOPA—in the Senate. He’s widely celebrated as one of the most tech-savvy members of Congress. But his latest bill, the “Data Cap Integrity Act,” is a bizarre, reverse-Robin Hood form of price control for broadband. It should offend those who defend Internet freedom just as much as SOPA did.

Wyden worries that “data caps” will discourage Internet use and allow “Internet providers to extract monopoly rents,” quoting a New York Times editorial from July that stirred up a tempest in a teapot. But his fears are straw men, based on four false premises.

First, US ISPs aren’t “capping” anyone’s broadband; they’re experimenting with usage-based pricing—service tiers. If you want more than the basic tier, your usage isn’t capped: you can always pay more for more bandwidth. But few users will actually exceed that basic tier. For example, Comcast’s basic tier, 300 GB/month, is so generous that 98.5% of users will not exceed it. That’s enough for 130 hours of HD video each month (two full-length movies a day) or between 300 and 1000 hours of standard (compressed) video streaming.
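As a sanity check, those streaming-hours figures follow from simple arithmetic. The per-hour data rates below are my own rough assumptions for 2012-era streaming (roughly a 5 Mbps HD stream and 0.7–2.2 Mbps compressed SD), not numbers from Comcast:

```python
# Rough check of the streaming-hours figures for a 300 GB monthly tier.
# Bitrate assumptions are illustrative, not from the original post.
TIER_GB = 300

hd_gb_per_hour = 2.3          # ~5 Mbps HD stream ≈ 2.3 GB/hour
sd_gb_per_hour = (0.3, 1.0)   # compressed SD range, ~0.7-2.2 Mbps

print(f"HD: ~{TIER_GB / hd_gb_per_hour:.0f} hours/month")   # → HD: ~130 hours/month
print(f"SD: {TIER_GB / sd_gb_per_hour[1]:.0f}-{TIER_GB / sd_gb_per_hour[0]:.0f} hours/month")  # → SD: 300-1000 hours/month
```

At those rates, 300 GB comfortably yields the ~130 HD hours (about two feature films a day) and 300–1,000 compressed hours cited above.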

Second, Wyden sets up a false dichotomy: Caps (or tiers, more accurately) are, according to Wyden, “appropriate if they are carefully constructed to manage network congestion,” but apparently for Wyden the only alternative explanation for usage-based pricing is extraction of monopoly rents. This simply isn’t the case, and propagating that fallacy risks chilling investment in network infrastructure. In fact, usage-based pricing allows networks to charge heavy users more, thereby recovering more costs and actually reducing prices for the majority of us who don’t need more bandwidth than the basic tier permits—and whose usage is effectively subsidized by those few who do. Unfortunately, Wyden’s bill wouldn’t allow pricing structures based on cost recovery—only network congestion. So, for example, an ISP might be allowed to price usage during times of peak congestion, but couldn’t simply offer a lower price for the basic tier to light users.

That’s nuts—from the perspective of social justice as well as basic economic rationality. Even as the FCC was issuing its famous Net Neutrality regulations, the agency rejected proposals to ban usage-based pricing, explaining:

prohibiting tiered or usage-based pricing and requiring all subscribers to pay the same amount for broadband service, regardless of the performance or usage of the service, would force lighter end users of the network to subsidize heavier end users. It would also foreclose practices that may appropriately align incentives to encourage efficient use of networks.

It is unclear why Senator Wyden thinks the FCC—no friend of broadband “monopolists”—has this wrong.

Third, charging heavy users more isn’t just more equitable, it’s actually a solution to the very problem Wyden worries about: ensuring that ISPs have an incentive to encourage Internet use. Tiered pricing means they actually benefit from heavy use. So rather than try to slow use or discriminate against bandwidth-heavy applications—which is how the Net Neutrality fight started—ISPs will continue to build out faster networks.

Now, it’s certainly possible that, if the basic tier were set low enough or if additional data were expensive enough, cable companies could discourage their subscribers from canceling a cable subscription and switching to a competing service like Netflix. But it’s hard to see how a 300 GB basic tier deters anyone, especially when users can buy additional blocks of 50 GB for just $10/month—enough for nearly two more hours a day of streamed video. If there actually were a problem here, antitrust law could address it far better than blunt pricing restrictions. Indeed, such an investigation is already ongoing.

Finally, Wyden would require that broadband providers count content download from them against your usage—fearing that a “discriminatory cap” would harm competing video providers. But if the “cap” is high enough, who cares? Under antitrust law, such “discrimination” is illegal only if it harms consumers—and it’s hard to see how consumers suffer from being able to download more video. Would they really be better off if every hour of video they streamed from their cable company meant an hour less they could stream from Netflix? That’s what Wyden’s bill would require.

The recent kerfuffle over Comcast’s decision in October to make some of its television (pay per view) content available through Xbox without counting against Internet usage limits brought this point into stark relief. While activists like Public Knowledge decried the decision for the same reasons Wyden does now, they missed the fact that by removing some of its content from usage limits Comcast was actually freeing up users to access more content at lower prices.

If Wyden’s concern is that usage-based pricing would allow ISPs to extract “monopoly profits” from users who bump up against tiers, then “preferencing” some of their own content will reduce, not increase, that risk: Users would be able to access, say, bandwidth-heavy video content just as they do television content now—without it counting against Internet usage limits. That this might “discriminate” against other Internet-based content providers does not mean that it harms consumers—quite the opposite, in fact. Again, to the extent that it might, antitrust rules are more than sufficient to discourage such practices in the first place or punish them if they arise— without restricting firms’ ability to price their content and manage their networks to ensure a reasonable return on their investments.

Pricing structures for broadband are still evolving. Just this year, Comcast moved from its original 250 GB cap—which it never enforced—to today’s 300 GB basic tier, and other broadband providers will likely follow suit. Those plans will probably continue to evolve towards pricing structures that minimize network congestion—like offering periods of unmetered use in the middle of the night, when network use plummets. That would go a long way to allaying concerns about the effect of tiered plans on competition, since Netflix could send your favorite shows and the next movies in your queue to the device of your choice while you sleep. But pricing structures also have to allow sensible, fair recovery of costs—which the Wyden bill would simply ban.

So much for not blithely regulating the Internet, Senator!

[Cross-posted at Truth on the Market]

Time for Congress to Cancel the FTC’s Section 5 Antitrust Blank Check
https://techliberation.com/2012/12/20/time-for-congress-to-cancel-the-ftcs-section-5-antitrust-blank-check/
Thu, 20 Dec 2012

By Geoffrey Manne and Berin Szoka

A debate is brewing in Congress over whether to allow the Federal Trade Commission to sidestep decades of antitrust case law and economic theory to define, on its own, when competition becomes “unfair.” Unless Congress cancels the FTC’s blank check, uncertainty about the breadth of the agency’s power will chill innovation, especially in the tech sector. And sadly, there’s no reason to believe that such expansive power will serve consumers.

Last month, Senators and Congressmen of both parties sent a flurry of letters to the FTC warning against overstepping the authority Congress granted the agency in 1914 when it enacted Section 5 of the FTC Act. FTC Chairman Jon Leibowitz has long expressed a desire to stake out new antitrust authority under Section 5 over unfair methods of competition that would otherwise be legal under the Sherman and Clayton antitrust acts. He seems to have had Google in mind as a test case.

On Monday, Congressmen John Conyers and Mel Watt, the top two Democrats on the House Judiciary Committee, issued their own letter telling us not to worry about the larger principle at stake. The two insist that “concerns about the use of Section 5 are unfounded” because “[w]ell established legal principles set forth by the Supreme Court provide ample authority for the FTC to address potential competitive concerns in the relevant market, including search.” The second half of that sentence is certainly true: the FTC doesn’t need a “standalone” Section 5 case to protect consumers from real harms to competition. But that doesn’t mean the FTC won’t claim such authority—and, unfortunately, there’s little by way of “established legal principles” to stop the agency from overreaching.

The Conyers-Watt letter cites four Supreme Court cases (Aspen Skiing, Otter Tail Power, Lorain Journal and Indiana Federation of Dentists), the latest decided in 1986, that deal only with the Sherman Act or that reference Section 5 only as the statutory basis by which the FTC enforces, indirectly, the Sherman Act. But what conduct does Section 5 allow the FTC to prosecute beyond the Sherman Act? The fifth case cited, Sperry & Hutchinson, from 1972, was the last time the Supreme Court directly addressed this critical question, holding that the FTC “does not arrogate excessive power to itself if, in measuring a practice against the elusive, but congressionally mandated standard of fairness, it, like a court of equity, considers public values beyond simply those enshrined in the letter or encompassed in the spirit of the antitrust laws.” Yet, even there, the Court concluded the FTC would have prevailed under the Sherman Act—thus leaving unresolved what a standalone Section 5 case could cover. Fourteen years later, the Court dodged the question again in Indiana Federation of Dentists, noting that, although Section 5 covers something more than the Sherman and Clayton acts, the Sherman Act provided the sole basis for liability in that case. Of Section 5, the Court in Indiana Federation of Dentists said merely that “the standard of ‘unfairness’ under the FTC Act is, by necessity, an elusive one.”

Elusive. Try telling that to your shareholders—or investors looking for The Next Big Thing—when asked how the FTC might regulate innovative business methods!

The FTC has been down this road before—starting with the same Sperry & Hutchinson decision cited by Conyers and Watt. The FTC interpreted that 1972 decision as a blank check to use its authority over unfair trade practices (distinct from, but related to, its authority over unfair methods of competition) to regulate everything from funeral parlors to children’s advertising. But the FTC’s overreach provoked widespread outcry, causing the Washington Post to blast the agency as the “National Nanny.” The Democratic Congress briefly closed the agency, slashed its budget and, in 1980, ordered the Commission to establish legal limiting principles in the form of a formal policy statement on unfairness (followed in 1983 by one on deception). That statement bars the FTC from banning a practice as unfair simply because a majority of Commissioners decide it is “immoral” or in violation of public policy; instead the Commission must show that it violates public policy that is “widely-shared” and “clear and well-established in law,” or that causes a substantial injury to consumers without countervailing benefits and which consumers cannot reasonably avoid. Congress enshrined this doctrine into law in 1994.

But the Commission has never issued any such policy statement about Section 5’s unfair competition language—and Congress has never bothered to intervene, even though the FTC has begun exploiting this uncertainty as additional leverage in “convincing” companies to settle shaky antitrust cases. That’s precisely what happened in the Intel case where, as we’ve explained, Intel settled a questionable complaint, probably because it concluded that settling the case was less costly than litigating it. While such outcomes may bolster the agency’s power, they do nothing to protect consumers and serve instead to chill business conduct that would benefit consumers.

That dynamic is a major reason why the FTC gets away with pushing the boundaries of its authority. Litigation in court is costly enough, but the agency can always threaten companies with administrative “Part III” litigation—meaning the company would have to spend upwards of a year litigating before the FTC’s Administrative Law Judge and then the full Commission, almost certainly suffering two losses, both PR disasters, before ever getting to an independent, neutral tribunal. So it’s not surprising that most companies settle. Sure, they might win in court eventually, but if the FTC is talking to you about a standalone Section 5 case while pressuring you to settle in a consent decree… well, “you’ve got to ask yourself one question: ‘Do I feel lucky?’ Well, do ya, punk?”

High-tech companies are particularly likely to find themselves the targets of Section 5 sabre-rattling. Cutting-edge companies often become antitrust test cases because technological innovation goes hand-in-hand with innovations in business practices, from consumer pricing to “coopetition” partnerships between rivals. They’re more likely to settle than litigate because they’re terrified of squandering money, investor goodwill and management time on litigation—lest, like Microsoft, they fall behind their rivals even as they are demonized as rapacious monopolists in the press. At the same time, Internet-related cases tend to attract a unique degree of popular attention, driving antitrust regulators to show they’re “doing something” about a perceived problem. Even the best regulators all too easily fall prey to the costly tendency described by Nobel economist Ronald Coase: “if an economist finds something—a business practice of one sort or another—that he does not understand, he looks for a monopoly explanation.”

We can’t wait for the courts to fix this problem—not least because the tendency for these cases to settle out of court means it may be a long while before any court gets the chance. At a minimum, Congress should insist that the FTC convene a public workshop aimed at identifying what a valid standalone Section 5 case could cover—followed by formal guidelines, as we’ve urged. If the FTC cannot rigorously define an interpretation of Section 5 that will actually serve consumer welfare—which the Supreme Court has defined as the proper aim of antitrust law—Congress should expressly limit Section 5’s prohibition of unfair competition only to invitations to collude (which aren’t cognizable under the Sherman Act).

As the FTC’s policy statement on unfairness puts it, “[t]he Supreme Court has stated on many occasions that the definition of ‘unfairness’ is ultimately one for judicial determination.” But for the courts to play that vital role in defining the “elusive,” Congress may need to reassess how the FTC operates. That might start with requiring the agency to bring suit directly in federal court, just as the Department of Justice does. But it also means much more careful Congressional oversight of what the FTC does across the board. Otherwise, the Commission may once again, as it did in the 1970s, become a second national legislature—with three political appointees deciding what’s “fair” for the entire economy, especially the high-tech sector.

[Crossposted at Forbes.com]

Section 5 of the FTC Act and Monopolization Cases: A Brief Primer https://techliberation.com/2012/11/26/section-5-of-the-ftc-act-and-monopolization-cases-a-brief-primer/ https://techliberation.com/2012/11/26/section-5-of-the-ftc-act-and-monopolization-cases-a-brief-primer/#comments Mon, 26 Nov 2012 23:01:40 +0000 http://techliberation.com/?p=42915

Co-authored with Berin Szoka

In the past two weeks, Members of Congress from both parties have penned scathing letters to the FTC warning of the consequences (both to consumers and the agency itself) if the Commission sues Google not under traditional antitrust law, but instead by alleging unfair competition under Section 5 of the FTC Act. The FTC is rumored to be considering such a suit, and FTC Chairman Jon Leibowitz and Republican Commissioner Tom Rosch have expressed a desire to litigate such a so-called “pure” Section 5 antitrust case — one unaccompanied by a cause of action under the Sherman Act. Unfortunately for the Commissioners, no appellate court has upheld such an action since the 1960s.

This brewing standoff is reminiscent of a similar contest between Congress and the FTC over the Commission’s aggressive use of Section 5 in consumer protection cases in the 1970s. As Howard Beales recounts, the FTC took an expansive view of its authority and failed to produce guidelines or limiting principles to guide its growing enforcement against “unfair” practices — just as today it offers no limiting principles or guidelines for antitrust enforcement under the Act. Only under heavy pressure from Congress, including a brief shutdown of the agency (and significant public criticism for becoming the “National Nanny“), did the agency finally produce a Policy Statement on Unfairness — which Congress eventually codified by statute.

Given the attention being paid to the FTC’s antitrust authority under Section 5, we thought it would be helpful to offer a brief primer on the topic, highlighting why we share the skepticism expressed by the letter-writing members of Congress (along with many other critics).

The topic has come up, of course, in the context of the FTC’s case against Google. The scuttlebutt is that the Commission believes it may not be able to bring and win a traditional Section 2 antitrust action, and so may resort to Section 5 to make its case — or simply force a settlement, as the FTC did against Intel in late 2010. While it may be Google’s head on the block today, it could be anyone’s tomorrow. This isn’t remotely just about Google; it’s about broader concerns over the Commission’s use of Section 5 to prosecute monopolization cases without being subject to the rigorous economic standards of traditional antitrust law.

Background on Section 5

Section 5 has two “prongs.” The first, reflected in its prohibition of “unfair or deceptive acts or practices” (UDAP), is meant as a consumer protection provision (and, until recently, had been used only as one, as explained below). The other, prohibiting “unfair methods of competition” (UMC), has indeed been interpreted to have relevance to competition cases.

Most commonly (and in the most widely accepted interpretation), the UMC language has been viewed as authorizing the agency to bring cases that fill the gaps between clearly anticompetitive conduct and the language of the Sherman Act. Principally, this has been invoked in “invitation to collude” cases, which raise the spectre of price-fixing but nevertheless do not meet the literal requirement of an agreement “in restraint of trade” under Section 1 of the Sherman Act.

Over strenuous objections from dissenting Commissioners (and only in consent decrees, not before courts), the FTC has more recently sought to expand the reach of the UDAP language beyond the consumer protection realm to address antitrust concerns that would likely be non-starters under the Sherman Act.

In N-Data, the Commission brought and settled a case invoking both the UDAP and UMC prongs of Section 5 to reach (alleged) conduct that amounted to breach of a licensing agreement, without the requisite (Sherman Act) Section 2 claim of exclusionary conduct (which would have required the FTC to show that N-Data’s conduct had the effect of excluding its rivals without efficiency or welfare-enhancing properties). Although the FTC’s claims fell outside the ambit of Section 2, the Commission’s invocation of Section 5’s UDAP language was so broad that it could — quite improperly — be employed to encompass traditional Section 2 claims, but without the rigor Section 2 requires (as the vigorous dissents by Commissioners Kovacic and Majoras discuss). As Commissioner Kovacic wrote in his dissent:

[T]he framework that the [FTC’s] Analysis presents for analyzing the challenged conduct as an unfair act or practice would appear to encompass all behavior that could be called a UMC or a violation of the Sherman or Clayton Acts. The Commission’s discussion of the UAP [sic] liability standard accepts the view that all business enterprises – including large companies – fall within the class of consumers whose injury is a worthy subject of unfairness scrutiny. If UAP coverage extends to the full range of business-to-business transactions, it would seem that the three-factor test prescribed for UAP analysis would capture all actionable conduct within the UMC prohibition and the proscriptions of the Sherman and Clayton Acts. Well-conceived antitrust cases (or UMC cases) typically address instances of substantial actual or likely harm to consumers. The FTC ordinarily would not prosecute behavior whose adverse effects could readily be avoided by the potential victims – either business entities or natural persons. And the balancing of harm against legitimate business justifications would encompass the assessment of procompetitive rationales that is a core element of a rule of reason analysis in cases arising under competition law.

In Intel, the most notorious of the recent FTC Section 5 antitrust actions, the Commission brought (and settled) a straightforward (if unwinnable) Section 2 case as a Section 5 case (with Section 2 “tag along” claims), using the justification that it simply couldn’t win a Section 2 case under current jurisprudence. Intel presumably settled the case because the absence of judicial limits under Section 5 made its outcome far less certain — and presumably the FTC brought the case under Section 5 for the same reason.

In Intel, there was no effort to distinguish Section 5 grounds from those under Section 2. Rather, the FTC claimed that the limiting jurisprudence under Section 2 wasn’t meant to rein in agencies, but merely private plaintiffs. This claim falls flat, as one of us (Geoff) has noted:

[Chairman] Leibowitz’ continued claim that courts have reined in Sherman Act jurisprudence only out of concern with the incentives and procedures of private enforcement, and not out of a concern with a more substantive balancing of error costs—errors from which the FTC is not, unfortunately immune—seems ridiculous to me. To be sure (as I said before), the procedural background matters as do the incentives to bring cases that may prove to be inefficient. But take, for example, Twombly, mentioned by Leibowitz as one of the cases that has recently reined in Sherman Act enforcement in order to constrain overzealous private enforcement (and thus not in a way that should apply to government enforcement). . . . But the over-zealousness of private plaintiffs is not all [ Twombly] was about, as the Court made clear:

The inadequacy of showing parallel conduct or interdependence, without more, mirrors the ambiguity of the behavior: consistent with conspiracy, but just as much in line with a wide swath of rational and competitive business strategy unilaterally prompted by common perceptions of the market. Accordingly, we have previously hedged against false inferences from identical behavior at a number of points in the trial sequence.

Hence, when allegations of parallel conduct are set out in order to make a §1 claim, they must be placed in a context that raises a suggestion of a preceding agreement, not merely parallel conduct that could just as well be independent action. [Citations omitted].

The Court was appropriately concerned with the ability of decision-makers to separate pro-competitive from anticompetitive conduct. Even when the FTC brings cases, it and the court deciding the case must make these determinations. And, while the FTC may bring fewer strike suits, it isn’t limited to challenging conduct that is simple to identify as anticompetitive. Quite the opposite, in fact—the government has incentives to develop and bring suits proposing novel theories of anticompetitive conduct and of enforcement (as it is doing in the Intel case, for example).

Problems with Unleashing Section 5

It would be a serious problem — as the Members of Congress who’ve written letters seem to realize — if Section 5 were used to sidestep the important jurisprudential limitations on Section 2 by focusing on such unsupported theories as “reduction in consumer choice” instead of Section 2’s well-established consumer welfare standard. As Geoff has noted:

Following Sherman Act jurisprudence, traditionally the FTC has understood (and courts have demanded) that antitrust enforcement . . . requires demonstrable consumer harm to apply. But this latest effort reveals an agency pursuing an interpretation of Section 5 that would give it unprecedented and largely-unchecked authority. In particular, the definition of “unfair” competition wouldn’t be confined to the traditional antitrust measures — reduction in output or an output-reducing increase in price — but could expand to, well, just about whatever the agency deems improper. * * * One of the most important shifts in antitrust over the past 30 years has been the move away from indirect and unreliable proxies of consumer harm toward a more direct, effects-based analysis. Like the now archaic focus on market concentration in the structure-conduct-performance framework at the core of “old” merger analysis, the consumer choice framework [proposed by Commissioner Rosch as a cause of action under Section 5] substitutes an indirect and deeply flawed proxy for consumer welfare for assessment of economically relevant economic effects. By focusing on the number of choices, the analysis shifts attention to the wrong question. The fundamental question from an antitrust perspective is whether consumer choice is a better predictor of consumer outcomes than current tools allow. There doesn’t appear to be anything in economic theory to suggest that it would be. Instead, it reduces competitive analysis to a single attribute of market structure and appears susceptible to interpretations that would sacrifice a meaningful measure of consumer welfare (incorporating assessment of price, quality, variety, innovation and other amenities) on economically unsound grounds. It is also not the law.

Commissioner Kovacic echoed this in his dissent in N-Data:

More generally, it seems that the Commission’s view of unfairness would permit the FTC in the future to plead all of what would have been seen as competition-related infringements as constituting unfair acts or practices.

And the same concerns animate Kovacic’s belief (drawn from an article written with then-Attorney Advisor Mark Winerman) that courts will continue to look with disapproval on efforts by the FTC to expand its powers:

We believe that UMC should be a competition-based concept, in the modern sense of fostering improvements in economic performance rather than equating the health of the competitive process with the wellbeing of individual competitors, per se. It should not, moreover, rely on the assertion in [the Supreme Court’s 1972 Sperry & Hutchinson Trading Stamp case] that the Commission could use its UMC authority to reach practices outside both the letter and spirit of the antitrust laws. We think the early history is now problematic, and we view the relevant language in [Sperry & Hutchinson] with skepticism.

Representatives Eshoo and Lofgren were even more direct in their letter:

Expanding the FTC’s Section 5 powers to include antitrust matters could lead to overbroad authority that amplifies uncertainty and stifles growth. . . . If the FTC intends to litigate under this interpretation of Section 5, we strongly urge the FTC to reconsider.

But it isn’t only commentators and Congressmen who point to this limitation. The FTC Act itself contains such a limitation. Section 5(n) of the Act, the provision added by Congress in 1994 to codify the core principles of the FTC’s 1980 Unfairness Policy Statement, says that:

The Commission shall have no authority under this section or section 57a of this title to declare unlawful an act or practice on the grounds that such act or practice is unfair unless the act or practice causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition. [Emphasis added].

In other words, Congress has already said, quite clearly, that Section 5 isn’t a blank check. Yet Chairman Leibowitz seems to be banking on the dearth of direct judicial precedent saying so to turn it into one — as do those who would cheer on a Section 5 antitrust case (against Google, Intel or anyone else). Given the unique breadth of the FTC’s jurisdiction over the entire economy, the agency would again threaten to become a second national legislature, capable of regulating nearly the entire economy.

The Commission has tried — and failed — to bring such cases before the courts in recent years. But the judiciary has not been receptive to an invigoration of Section 5 for several reasons. Chief among these is that the agency simply hasn’t defined the scope of its power over unfair competition under the Act, and the courts hesitate to let the Commission set the limits of its own authority. As Kovacic and Winerman have noted:

The first [reason for judicial reluctance in Section 5 cases] is judicial concern about the apparent absence of limiting principles. The tendency of the courts has been to endorse limiting principles that bear a strong resemblance to standards familiar to them from Sherman Act and Clayton Act cases. The cost-benefit concepts devised in rule of reason cases supply the courts with natural default rules in the absence of something better. The Commission has done relatively little to inform judicial thinking, as the agency has not issued guidelines or policy statements that spell out its own view about the appropriate analytical framework. This inactivity contrasts with the FTC’s efforts to use policy statements to set boundaries for the application of its consumer protection powers under Section 5.

This concern was stressed in the letter sent by Senator DeMint and other Republican Senators to Chairman Leibowitz:

[W]e are concerned about the apparent eagerness of the Commission under your leadership to expand Section 5 actions without a clear indication of authority or a limiting principle. When a federal regulatory agency uses creative theories to expand its activities, entrepreneurs may be deterred from innovating and growing lest they be targeted by government action.

As we have explained many times (see, e.g., here, here and here), a Section 2 case against Google will be an uphill battle. As far as we have seen publicly, complainants have offered only harm to competitors — not harm to consumers — to justify such a case. It is little surprise, then, that the agency (or, more accurately, Chairman Leibowitz and Commissioner Rosch) may be seeking to use the less-limited power of Section 5 to mount such a case.

In a blog post in 2011, Geoff wrote:

Commissioner Rosch has claimed that Section Five could address conduct that has the effect of “reducing consumer choice” — an effect that a very few commentators support without requiring any evidence that the conduct actually reduces consumer welfare. Troublingly, “reducing consumer choice” seems to be a euphemism for “harm to competitors, not competition,” where the reduction in choice is the reduction of choice of competitors who may be put out of business by competitive behavior. The U.S. has a long tradition of resisting enforcement based on harm to competitors without requiring a commensurate, strong showing of harm to consumers — an economically-sensible tradition aimed squarely at minimizing the likelihood of erroneous enforcement. The FTC’s invigorated interest in Section Five contemplates just such wrong-headed enforcement, however, to the inevitable detriment of the very consumers the agency is tasked with protecting. In fact, the theoretical case against Google depends entirely on the ways it may have harmed certain competitors rather than on any evidence of actual harm to consumers (and in the face of ample evidence of significant consumer benefits). * * * In each of [the complaints against Google], the problem is that the claimed harm to competitors does not demonstrably translate into harm to consumers. For example, Google’s integration of maps into its search results unquestionably offers users an extremely helpful presentation of these results, particularly for users of mobile phones. That this integration might be harmful to MapQuest’s bottom line is not surprising — but nor is it a cause for concern if the harm flows from a strong consumer preference for Google’s improved, innovative product. The same is true of the other claims. . . .

To the extent that the FTC brings an antitrust case against Google under Section 5, using the Act to skirt the jurisprudential limitations (and associated economic rigor) that make a Section 2 case unwinnable, it would be contravening congressional intent, judicial precedent, the plain language of the FTC Act, and the collected wisdom of the antitrust commentariat that sees such an action as inappropriate. This includes not just traditional antitrust-skeptics like us, but even antitrust-enthusiasts like Allen Grunes, who has written:

The FTC, of course, has Section 5 authority. But there is well-developed case law on monopolization under Section 2 of the Sherman Act. There are no doctrinal “gaps” that need to be filled. For that reason it would be inappropriate, in my view, to use Section 5 as a crutch if the evidence is insufficient to support a case under Section 2.

As Geoff has said:

Modern antitrust analysis, both in scholarship and in the courts, quite properly rejects the reductive and unsupported sort of theories that would undergird a Section 5 case against Google. That the FTC might have a better chance of winning a Section 5 case, unmoored from the economically sound limitations of Section 2 jurisprudence, is no reason for it to pursue such a case. Quite the opposite: When consumer welfare is disregarded for the sake of the agency’s power, it ceases to further its mandate. . . . But economic substance, not self-aggrandizement by rhetoric, should guide the agency. Competition and consumers are dramatically ill-served by the latter.

Conclusion: What To Do About Unfairness?

So, what should the FTC do with Section 5? The right answer may be “nothing” (and probably is, in our opinion). But even those who think something should be done to apply the Act more broadly to allegedly anticompetitive conduct should be able to agree that the FTC ought not bring a case under Section 5’s UDAP language without first defining with analytical rigor what its limiting principles are.

Rather than attempting to do this in the course of a single litigation, the agency ought to heed Kovacic and Winerman’s advice and do more to “inform judicial thinking,” such as by “issu[ing] guidelines or policy statements that spell out its own view about the appropriate analytical framework.” The best way to start that process would be for whoever succeeds Leibowitz as chairman to convene a workshop on the topic. (As one of us (Berin) has previously suggested, the FTC is long overdue to issue guidelines explaining how it has applied its Unfairness and Deception Policy Statements in UDAP consumer protection cases. Such a workshop would dovetail nicely with this.)

The question posed should not presume that Section 5’s UDAP language ought to be used to reach conduct actionable under the antitrust statutes at all. Rather, the fundamental question to be asked is whether the use of Section 5 in antitrust cases is a relic of a bygone era before antitrust law was given analytical rigor by economics. If the FTC cannot rigorously define an interpretation of Section 5 that will actually serve consumer welfare — which the Supreme Court has defined as the proper aim of antitrust law — Congress should explicitly circumscribe it once and for all, limiting Section 5 to protecting consumers against unfair and deceptive acts and practices and, narrowly, prohibiting unfair competition in the form of invitations to collude. The FTC (along with the DOJ and the states) would still regulate competition through the existing antitrust laws. This might be the best outcome of all.

[This is cross posted from Truth on the Market]

Forget remedies – FairSearch doesn’t even have a valid statement of harm in its Google antitrust criticism https://techliberation.com/2012/11/21/forget-remedies-fairsearch-doesnt-even-have-a-valid-statement-of-harm-in-its-google-antitrust-criticism/ https://techliberation.com/2012/11/21/forget-remedies-fairsearch-doesnt-even-have-a-valid-statement-of-harm-in-its-google-antitrust-criticism/#respond Wed, 21 Nov 2012 05:36:47 +0000 http://techliberation.com/?p=42862

After more than a year of complaining about Google and being met with responses from me (see also here, here, here, here, and here, among others) and many others that these complaints have yet to offer up a rigorous theory of antitrust injury — let alone any evidence — FairSearch yesterday offered up its preferred remedies aimed at addressing, in its own words, “the fundamental conflict of interest driving Google’s incentive and ability to engage in anti-competitive conduct. . . . [by putting an] end [to] Google’s preferencing of its own products ahead of natural search results.”  Nothing in the post addresses the weakness of the organization’s underlying claims, and its proposed remedies would be damaging to consumers.

FairSearch’s first and core “abuse” is “[d]iscriminatory treatment favoring Google’s own vertical products in a manner that may harm competing vertical products.”  To address this it proposes prohibiting Google from preferencing its own content in search results and suggests as additional, “structural remedies” “[r]equiring Google to license data” and “[r]equiring Google to divest its vertical products that have benefited from Google’s abuses.”

Tom Barnett, former AAG for antitrust, counsel to FairSearch member Expedia, and FairSearch’s de facto spokesman, should be ashamed to be associated with claims and proposals like these. He knows, better than many others, that harm to competitors is not the issue under US antitrust laws. Rather, US antitrust law requires a demonstration that consumers — not just rivals — will be harmed by a challenged practice. He also knows (as economists have known for a long time) that favoring one’s own content — i.e., “vertically integrating” to produce both inputs as well as finished products — is generally procompetitive.

In fact, Barnett has said as much before:

Because a Section 2 violation hurts competitors, they are often the focus of section 2 remedial efforts.  But competitor well-being, in itself, is not the purpose of our antitrust laws.
Access remedies also raise efficiency and innovation concerns.  By forcing a firm to share the benefits of its investments and relieving its rivals of the incentive to develop comparable assets of their own, access remedies can reduce the competitive vitality of an industry.

Not only has FairSearch not actually demonstrated that Google has preferenced its own products, the organization has also failed to demonstrate either harm to consumers arising from such conduct or even antitrust-cognizable harm to competitors arising from it.

As an empirical study supported by the International Center for Law and Economics (itself, in turn, supported in part by Google, and of which I am the Executive Director) makes clear, search bias almost never occurs. And when it does, it is the non-dominant Bing that more often practices it, not Google. Moreover, and most important, the evidence marshaled in favor of the search bias claim (largely adduced by Harvard Business School professor Ben Edelman, whose work is supported by Microsoft) demonstrates that consumers do, indeed, have the ability to detect and counter allegedly biased results.

Recall what search bias means in this context. According to Edelman, looking at the top three search results, Google links to its own content (think Gmail, Google Maps, etc.) in the first search result about twice as often as Yahoo! and Bing link to Google content in this position. While the ICLE paper refutes even this finding, notice what it means: “biased” search results lead to a reshuffling of results among the top few results offered up; there is no evidence that Google simply drops users’ preferred results. While it is true that the difference in click-through rates between the top and second results can be significant, Edelman’s own findings actually demonstrate that consumers are capable of finding what they want when their preferred (more relevant) results appear in the second or third slot.

Edelman notes that Google ranks Gmail first and Yahoo! Mail second in his study, even though users seem to think Yahoo! Mail is the more relevant result:  Gmail receives only 29% of clicks while Yahoo! Mail receives 54%.  According to Edelman, this is proof that Google’s conduct forecloses access by competitors and harms consumers under the antitrust laws.

But is it?  Note that users click on the second, apparently more-relevant result nearly twice as often as they click on the first.  This demonstrates that Yahoo! is not competitively foreclosed from access to users, and that users are perfectly capable of identifying their preferred results, even when they appear lower in the results page.  This is simply not foreclosure — in fact, if anything, it demonstrates the opposite.

Among other things, foreclosure — limiting access by a competitor to a necessary input — under the antitrust laws must be substantial enough to prevent a rival from reaching sufficient scale that it can effectively compete. It is no more “foreclosure” for Google to “impair” traffic to Kayak’s site by offering its own Flight Search than it is for Safeway to refuse to allow Kroger to sell Safeway’s house brand. Rather, actionable foreclosure requires that a firm “impair[s] the ability of rivals to grow into effective competitors that erode the firm’s position.” Such quantifiable claims are noticeably absent from critics’ complaints against Google.

And what about those allegedly harmed competitors? How are they faring? As of September 2012, Google ranks 7th in visits among metasearch travel sites, with a paltry 1.4% of such visits. Residing at number one? FairSearch founding member Kayak, with a whopping 61% (up from 52% six months after Google entered the travel search business). Nextag.com, another vocal Google critic, has complained that Google’s conduct has forced it to shift its strategy from attracting traffic through Google’s organic search results to other sources, including paid ads on Google.com. And how has it fared? It has parlayed its experience with new data sources into a successful new business model, Wize Commerce, showing exactly the sort of “incentive to develop comparable assets of their own” Barnett worries will be destroyed by aggressive antitrust enforcement. And Barnett’s own Expedia.com? Currently, it’s the largest travel company in the world, and it has only grown in recent years.

Meanwhile, consumers’ interests have been absent from critics’ complaints since the beginning. Not only do critics fail to demonstrate any connection between harm to consumers and the claimed harms to competitors arising from Google’s conduct, but they also ignore the harm to consumers that may result from restricting potentially efficient business conduct — like the integration of Google Maps and other products into its search results. That Google not only produces search results but also owns some of the content that generates those results is not a problem cognizable by modern antitrust.

FairSearch and other Google critics have utterly failed to make a compelling case, and their proposed remedies would serve only to harm, not help, consumers.

Let The Music Play: Critics Of Universal-EMI Merger Are Singing Off-Key
https://techliberation.com/2012/09/20/let-the-music-play-critics-of-universal-emi-merger-are-singing-off-key/
Thu, 20 Sep 2012 20:38:58 +0000

There are a lot of inaccurate claims – and bad economics – swirling around the Universal Music Group (UMG)/EMI merger, currently under review by the US Federal Trade Commission and the European Commission (and approved by regulators in several other jurisdictions including, most recently, Australia). Regulators and industry watchers should be skeptical of analyses that rely on outmoded antitrust thinking and are out of touch with the real dynamics of the music industry.

The primary claim of critics such as the American Antitrust Institute and Public Knowledge is that this merger would result in an over-concentrated music market and create a “super-major” that could constrain output, raise prices and thwart online distribution channels, thus harming consumers. But this claim, based on a stylized, theoretical economic model, is far too simplistic and ignores the market’s commercial realities, the labels’ self-interest and the merger’s manifest benefits to artists and consumers.

For market concentration to raise serious antitrust issues, products have to be substitutes. This is in fact what critics argue: that if UMG raised prices now it would be undercut by EMI and lose sales, but that if the merger goes through, EMI will no longer constrain UMG’s pricing power. However, the vast majority of EMI’s music is not a substitute for UMG’s. In the real world, there simply isn’t much price competition across music labels or among the artists and songs they distribute. Their catalogs are not interchangeable, and there is so much heterogeneity among consumers and artists (“product differentiation,” in antitrust lingo) that relative prices are a trivial factor in consumption decisions: No one decides to buy more Lady Gaga albums because the Grateful Dead’s are too expensive. The two are not substitutes, and assessing competitive effects as if they are, simply because they are both “popular music,” is not instructive.

Given these factors, a larger catalog won’t lead to abuse of market power. This is precisely why the European Union cleared the Sony/EMI music publishing merger, concluding that “Customers usually select a song or certain musical works and not a [label] or a [label’s] catalog… In the event that a customer is wedded to a particular song…or a catalog of songs…, even a small [label] would have pricing power over these particular musical works. The merger would not affect this situation (since the size of the catalog does not matter).”

A second popular criticism is that a combined UMG/EMI would control 51 of 2011’s Billboard Hot 100 songs. But this assertion ignores the ever-changing nature of musical output and consumer tastes – not to mention that “top-selling songs of 2011” is hardly a relevant antitrust market (and neither is “top-selling songs of the last 10 years”). A label’s ownership of 51 songs that were popular in 2011 is not suggestive of its ability to price its full catalog of several million songs in negotiations with an online music service. Meanwhile, by other measures (this year independent artists garnered over 50% of Grammy nominations and won 44% of the awards), the major labels are hardly the only purveyors of valuable songs, and competition from indie labels and artists is significant.

Edgar Bronfman, a director and former CEO and chairman of Warner Music Group, recently testified in Congress against the merger, arguing that a combined UMG/EMI could decide “what digital services live and what digital services die.” But Bronfman himself has elsewhere acknowledged that labels can’t prosper if they can’t sell their music. As chairman of Universal in 2001 he told Congress that, “for us to effectively market and distribute…albums, they are going to have to be on as many different online music sites as possible…. Frankly, if we lock away our catalog, we aren’t generating value for our artists or our shareholders or our fans.” As a competitor of UMG, Bronfman may have changed his tune, but his earlier point is even more true today with digital sales exceeding 50% of the market.

Far from wanting to constrain supply or hamstring distribution channels, labels have an incentive to make music widely and easily accessible. In fact, power buyers like Apple may have greater control over the marketplace than the labels. As UMG’s CEO Lucian Grainge bluntly noted, “[i]f Apple stops selling our music, we go out of business. Apple does not.” Critics downplay the role of power buyers in disciplining prices, but the evidence cuts against that dismissal.

Dismissive attitudes about piracy as a constraint on prices also miss the mark. For many consumers, a marginal price increase will indeed induce some piracy. More happily, the opposite also holds true: increased access to inexpensive, convenient legal content reduces piracy. Given the ravages of pirated music since Napster, it’s no wonder that labels – including both Universal and EMI – are now licensing their music to so many legal digital music services like Spotify. UMG’s incentives to continue to do so can only increase following the merger.

Finally, antitrust reviews must consider the benefits of the merger. Bringing together Universal and EMI could create substantial operating efficiencies. More efficient A&R and production should benefit artists (and fans) directly. And with a larger catalog UMG’s opportunities for pairing similar artists for marketing and concert promotion would increase, helping new and less-popular artists reach larger audiences. And UMG is in a position to breathe new life into EMI’s catalog with investment in human capital and artists’ careers that EMI simply can’t muster.

Claims of this merger’s anticompetitive effects are not supported either by antitrust analysis or by the realities of this market. Regulators should let the music play.

[Crossposted from Forbes.com]

FTC sacrifices the rule of law for more flexibility; Commissioner Ohlhausen wisely dissents
https://techliberation.com/2012/08/02/ftc-sacrifices-the-rule-of-law-for-more-flexibility-commissioner-ohlhausen-wisely-dissents/
Thu, 02 Aug 2012 17:32:24 +0000

On July 31 the FTC voted to withdraw its 2003 Policy Statement on Monetary Remedies in Competition Cases. Commissioner Ohlhausen issued her first dissent since joining the Commission, pointing out the folly and the danger of the Commission’s withdrawal of its Policy Statement.

The Commission supports its action by citing “legal thinking” in favor of heightened monetary penalties and the Policy Statement’s role in dissuading the Commission from following this thinking:

It has been our experience that the Policy Statement has chilled the pursuit of monetary remedies in the years since the statement’s issuance. At a time when Supreme Court jurisprudence has increased burdens on plaintiffs, and legal thinking has begun to encourage greater seeking of disgorgement, the FTC has sought monetary equitable remedies in only two competition cases since we issued the Policy Statement in 2003.

In this case, “legal thinking” apparently amounts to a single 2009 article by Einer Elhauge. But it turns out Elhauge doesn’t represent the entire current of legal thinking on this issue. As it happens, Josh Wright and Judge Ginsburg looked at the evidence in 2010 and found no support for the claim that larger fines increase deterrence of price fixing:

If the best way to deter price-fixing is to increase fines, then we should expect the number of cartel cases to decrease as fines increase. At this point, however, we do not have any evidence that a still-higher corporate fine would deter price-fixing more effectively. It may simply be that corporate fines are misdirected, so that increasing the severity of sanctions along this margin is at best irrelevant and might counter-productively impose costs upon consumers in the form of higher prices as firms pass on increased monitoring and compliance expenditures.

Commissioner Ohlhausen points out in her dissent that there is no support for the claim that the Policy Statement has led to sub-optimal deterrence, and quite sensibly finds no reason for the Commission to withdraw it. But even more importantly, Commissioner Ohlhausen worries about what the Commission’s decision here might portend:

The guidance in the Policy Statement will be replaced by this view: “[T]he Commission withdraws the Policy Statement and will rely instead upon existing law, which provides sufficient guidance on the use of monetary equitable remedies.”  This position could be used to justify a decision to refrain from issuing any guidance whatsoever about how this agency will interpret and exercise its statutory authority on any issue. It also runs counter to the goal of transparency, which is an important factor in ensuring ongoing support for the agency’s mission and activities. In essence, we are moving from clear guidance on disgorgement to virtually no guidance on this important policy issue.

An excellent point. If the sufficiency of existing law is the standard for whether the FTC should issue policy statements, then arguably the FTC need never offer any guidance at all.

But as the FTC careens toward an ever more active role in regulating the collection, use and dissemination of data (i.e., “privacy”), this withdrawal sets an ominous precedent. Already the Commission has managed to side-step the courts in establishing its policies on this issue by, well, never going to court. As Berin Szoka noted in recent Congressional testimony:

The problem with the unfairness doctrine is that the FTC has never had to defend its application to privacy in court, nor been forced to prove harm is substantial and outweighs benefits.

This has led Berin and others to suggest — and the chorus will only grow louder — that the FTC clarify the basis for its enforcement decisions and offer clear guidance on its interpretation of the unfairness and deception standards it applies under the rubric of protecting privacy. Unfortunately, the Commission’s reasoning in this action suggests it might well not see fit to offer any such guidance.

[Cross posted at TruthontheMarket]

UMG-EMI Deal Is No Threat To Innovation In Music Distribution
https://techliberation.com/2012/06/20/umg-emi-deal-is-no-threat-to-innovation-in-music-distribution/
Wed, 20 Jun 2012 18:52:39 +0000

By Geoffrey Manne and Berin Szoka

Everyone loves to hate record labels. For years, copyright-bashers have ranted about the “Big Labels” trying to thwart new models for distributing music in terms that would make JFK assassination conspiracy theorists blush. Now they’ve turned their sights on the pending merger between Universal Music Group and EMI, insisting the deal would be bad for consumers. There’s even a Senate Antitrust Subcommittee hearing tomorrow, led by Senator Herb “Big is Bad” Kohl.

But this is a merger users of Spotify, Apple’s iTunes and the wide range of other digital services ought to love. UMG has done more than any other label to support the growth of such services, cutting licensing deals with hundreds of distribution outlets—often well before other labels. Piracy has been a significant concern for the industry, and UMG seems to recognize that only “easy” can compete with “free.” The company has embraced the reality that music distribution paradigms are changing rapidly to keep up with consumer demand. So why are groups like Public Knowledge opposing the merger?

Critics contend that the merger will elevate UMG’s already substantial market share and “give it the power to distort or even determine the fate of digital distribution models.” For these critics, the only record labels that matter are the four majors, and four is simply better than three. But this assessment hews to the outmoded, “big is bad” structural analysis that has been consistently demolished by economists since the 1970s. Instead, the relevant touchstone for all merger analysis is whether the merger would give the merged firm a new incentive and ability to engage in anticompetitive conduct. But there’s nothing UMG can do with EMI’s catalogue under its control that it can’t do now. If anything, UMG’s ownership of EMI should accelerate the availability of digitally distributed music.

To see why this is so, consider what digital distributors—whether of the pay-as-you-go, iTunes type, or the all-you-can-eat, Spotify type—most want: Access to as much music as possible on terms on par with those of other distribution channels. For the all-you-can-eat distributors this is a sine qua non: their business models depend on being able to distribute as close as possible to all the music every potential customer could want. But given UMG’s current catalogue, it already has the ability, if it wanted to exercise it, to extract monopoly profits from these distributors, as they simply can’t offer a viable product without UMG’s catalogue.

The merger with EMI—the smallest of the four major labels, with a US market share of around 9%—does nothing to increase UMG’s incentive or ability to extract monopoly rents. UMG’s ability to raise prices on Lady Gaga’s music is hardly affected by the fact that it might also own Lady Antebellum’s music, any more than it is by its current ownership of Ladyhawke’s music. But, regardless, UMG has viewed digital distribution as a friend, not a foe.

Even on their own, structural terms, the critics’ analysis is flawed. The argument against the merger is based largely on the notion that the critical, relevant antitrust market comprises album sales by the four major labels. But this makes no sense.

In fact, UMG currently distributes only about 30% of the music consumed in the US, and because, like all the majors, it distributes some music over which it has no ownership rights (including no ability to set prices), it owns only 24% of music purchased in the US. EMI’s share of distribution, as we noted, is around 9%, and it has experienced significant turmoil in recent years. Meanwhile, the independent labels that some critics seek to exclude from the market (and which, ironically, probably distribute the bulk of the music they listen to) sell 30% of the records sold in the US today and do so digitally largely through a single distributor, Merlin—essentially a fifth major record label. This is far beyond trivial.
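The structural, share-based arithmetic that critics rely on typically turns on a concentration measure like the Herfindahl-Hirschman Index (HHI). A minimal sketch, using the rough shares above and an assumed split of the remaining share between the other two majors (the Sony and Warner figures are illustrative assumptions, not numbers from this post), shows how mechanical that arithmetic is:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: the sum of squared percentage shares."""
    return sum(s ** 2 for s in shares)

# Rough US shares from the text: UMG ~30%, EMI ~9%, independents (via
# Merlin) ~30%.  The Sony and Warner figures are assumptions included
# only to complete the illustration.
pre_merger = {"UMG": 30, "EMI": 9, "Merlin/indies": 30, "Sony": 18, "Warner": 13}

post_merger = dict(pre_merger)
post_merger["UMG"] += post_merger.pop("EMI")  # fold EMI's share into UMG

pre_hhi = hhi(pre_merger.values())    # 2374
post_hhi = hhi(post_merger.values())  # 2914
delta = post_hhi - pre_hhi            # 540 (= 2 * 30 * 9)
```

Note that the result depends entirely on which sellers are counted in the market: exclude the independents and the numbers balloon; include singles, streaming, and the indie labels and they shrink. The arithmetic is only meaningful if the catalogs counted are actually substitutes, which is exactly the premise in dispute here.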

What matters for antitrust market definition is substitutability: If customers would purchase eight singles off an album in response to an increase in the 12-track album price, singles and albums are surely in the same market. Ditto consumption of singles and entire albums through streaming services in lieu of outright purchase—and it’s clear that this mode of distribution is increasingly popular. There is no principled defense of an album-only market, nor one that excludes independent labels or streaming services. And once you appreciate these market dynamics, the concerns over this merger disappear.

The reality is closer to this: EMI is effectively a failing firm. Its current owner (Citigroup) inherited the company when its previous owner defaulted, and it promptly put it up for auction. Warner and UMG both bid on EMI and UMG won. Now Warner leads the effort to stymie the deal, deploying a time-tested strategy of trying to accomplish by regulation what it couldn’t manage through genuine competition.

Critics worry that a larger UMG will stifle innovative distribution services. While that’s theoretically possible, UMG’s past practice and the industry’s changing dynamics—including the significant increase in buyer power from large retailers like Apple, Amazon and Wal-Mart—suggest the concern is speculative, at best. Albums are simply not the dominant marketing vehicles they once were for most artists, and, increasingly, consumers are content to “rent” their music through streaming and other online services rather than own it outright.

A slightly larger UMG poses no threat to the evolving distribution of music. In fact, UMG has increasingly championed digital distribution as it has grown in size. UMG’s history with digital distribution should please anyone concerned about the deal: it has been both aggressive and progressive in the digital space. UMG is often the first to license its catalogue to new services and it has financially supported the creation of some of the largest of these services. When online giant Slacker Radio added a subscription service to its Web radio offering, UMG not only licensed its catalogue for the new service but also renegotiated (and lowered) its terms for Slacker’s webcasting license in order to ease Slacker’s move into subscription services. And UMG was instrumental in getting Muve—the second largest subscription music service in the US today—off the ground. Again—the industry’s best defense against “free” is “easy,” and that doesn’t change for UMG if it gains another few percentage points of market share.

To paraphrase Timbuk 3 (from an album originally released on the famed I.R.S. label): Music’s future is so bright, it’s gotta wear shades. Music has never been cheaper, easier to access, more widely distributed, nor available in more forms and formats. And the digital distribution of music—significantly facilitated by UMG—shows no signs of slowing down. What has slowed down, thanks largely to these advances in digital and online distribution, is music piracy. Anyone looking for an explanation why UMG has been so progressive in its support for innovation in music distribution need look no further than that fact. This merger does nothing to change UMG’s critical incentives to continue to support digital distribution of its catalogue: fighting piracy and effectively distributing its music.

[Cross posted at Forbes.com]

The procompetitive story that could undermine the DOJ’s e-books antitrust case against Apple
https://techliberation.com/2012/04/12/the-procompetitive-story-that-could-undermine-the-dojs-e-books-antitrust-case-against-apple/
Thu, 12 Apr 2012 22:46:50 +0000

Did Apple conspire with e-book publishers to raise e-book prices?  That’s what DOJ argues in a lawsuit filed yesterday. But does that violate the antitrust laws?  Not necessarily—and even if it does, perhaps it shouldn’t.

Antitrust’s sole goal is maximizing consumer welfare. While that generally means antitrust regulators should focus on lower prices, the situation is more complicated when we’re talking about markets for new products, where technologies for distribution and consumption are evolving rapidly along with business models. In short, the so-called Agency pricing model Apple and publishers adopted may (or may not) mean higher e-book prices in the short run, but it also means more variability in pricing, and it might well have facilitated Apple’s entry into the market, increasing e-book retail competition and promoting innovation among e-book readers, while increasing funding for e-book content creators.

The procompetitive story goes something like the following. (As always with antitrust, the point isn’t so much that one model is better than another; it’s that no one really knows what the right model is—least of all antitrust regulators—and that the more unclear the consumer welfare effects of a practice are, as in rapidly evolving markets, the more we should err on the side of restraint.)

Apple versus Amazon

Apple—decidedly a hardware company—entered the e-book market as a device maker eager to attract consumers to its expensive iPad tablets by offering appealing media content. In this it is the very opposite of Amazon, a general retailer that naturally moved into retailing digital content, and began selling hardware (Kindle readers) only as a way of getting consumers to embrace e-books.

The Kindle is essentially a one-trick pony (the latest Kindle notwithstanding), and its focus is on e-books. By contrast, Apple’s platform (the iPad and, to a lesser degree, the iPhone) is a multi-use platform, offering Internet browsing, word processing, music, apps, and other products, of which books probably accounted—and still account—for a relatively small percentage of revenue. Importantly, unlike Amazon, Apple has many options for promoting adoption of its platform—not least, the “sex appeal” of its famously glam products. Without denigrating Amazon’s offerings, the company competes largely on the basis of its content, and its devices sell only as long as that content is attractive and attractively priced.

In essence, Apple’s iPad is a platform; Amazon’s Kindle is a book merchant wrapped up in a cool device.

What this means is that Apple, unlike Amazon, is far less interested in controlling prices for books and other content; it hardly needs to control that lever to effectively market its platform, and it can easily rely on content providers’ self-interest to ensure that enough content flows through its devices.

In other words, Apple is content to act as a typical platform would, acting as a conduit for others’ content, which the content owner controls.  Amazon surely has “platform” status in its sights, but reliant as it is on e-books, and nascent as that market is, it is not quite ready to act like a “pure” platform.  (For more on this, see my blog post from 2010).

The Agency Model

As it happens, publishers seem to prefer the Agency Model, as well, preferring to keep control over their content in this medium rather than selling it (as in the brick-and-mortar model) to a retailer like Amazon to price, market, promote and re-sell at will.  For the publishers, the Agency Model is essentially a form of resale price maintenance — ensuring that retailers who sell their products do not inefficiently discount prices.  (For a clear exposition of the procompetitive merits of RPM, see this article by Benjamin Klein).

(As a side note, I suspect that they may well be wrong to feel this way.  The inclination seems to stem from a fear of e-books’ threat to their traditional business model — a fear of technological evolution that can have catastrophic consequences (cf. Kodak, about which I wrote a few weeks ago).  But then content providers moving into digital media have been consistently woeful at understanding digital markets).

So the publishers strike a deal with Apple that gives the publishers control over pricing and Apple a cut (30%) of the profits.  Contrary to the DOJ’s claim in its complaint, this model happens to look exactly like Apple’s arrangement for apps and music, as well, right down to the same percentage Apple takes from sales.  This makes things easier for Apple, gives publishers more control over pricing, and offers Apple content and a good return sufficient to induce it to market and sell its platform.

It is worth noting here that there is no reason to think that the wholesale model wouldn’t also have generated enough content and enough return for Apple, so I don’t think the ultimate motivation here for Apple was higher prices (which could well have actually led to lower total return given fewer sales), but rather that it wasn’t interested in paying for control. So in exchange for a (possibly) larger slice of the pie, as well as consistency with its existing content-provider back-end and the avoidance of having to monitor and make pricing decisions, Apple happily relinquished decision-making over pricing and other aspects of sales.

The Most Favored Nation Clauses

Having given up this price control, Apple has one remaining problem: no guarantee of being able to offer attractive content at an attractive price if it is forced to try to sell e-books at a high price while its competitors can undercut it.  And so, as is common in this sort of distribution agreement, Apple obtains “Most Favored Nation” (MFN) clauses from publishers to ensure that if they are permitting other platforms to sell their books at a lower price, Apple will at least be able to do so, as well.  The contracts at issue in the case specify maximum resale prices for content and ensure Apple that if a publisher permits, say, Amazon to sell the same content at a lower price, it will likewise offer the content via Apple’s iBooks store for the same price.
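The interaction of the agency terms and the MFN, as described here, can be captured in a stylized sketch (the function and its parameters are illustrative inventions, not terms from the actual contracts). Note that the MFN operates only as a ceiling on Apple’s price, never a floor:

```python
def apple_retail_price(agency_price, contract_cap, lowest_rival_price):
    """Stylized reading of the agency-plus-MFN arrangement described above.

    The publisher sets the retail price (agency model), subject to the
    contractual maximum resale price; the MFN then entitles Apple to
    match any lower price the publisher permits elsewhere.
    """
    publisher_price = min(agency_price, contract_cap)  # agency price, capped
    return min(publisher_price, lowest_rival_price)    # MFN: match, never exceed

# If a publisher lets a rival sell the same title for $9.99, the MFN
# pulls Apple's price down to match; nothing here ever pushes a price up.
price = apple_retail_price(agency_price=12.99, contract_cap=14.99,
                           lowest_rival_price=9.99)
```

Under this reading, every term in the arrangement can only lower Apple’s price relative to what the publisher would otherwise set, which is the sense in which the MFN works as a ceiling rather than the price floor the DOJ alleges.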

The DOJ is fighting a war against MFNs, which is a story for another day, and it seems clear from the terms of the settlement with the three settling publishers that MFNs are indeed a big part of the target here. But there is nothing inherently problematic about MFNs, and there is plenty of scholarship explaining why they are beneficial. Important among these benefits, and relevant here: MFNs facilitate entry by offering some protection for an entrant’s up-front investment in challenging an incumbent, and they prevent subsequent entrants from undercutting the entrant’s price. In this sense MFNs are essentially an important way of inducing retailers like Apple to sign on to an RPM (no control) model by offering some protection against publishers striking a deal with a competitor that leaves Apple forced to price its e-books out of the market.

There is nothing, that I know of, in the MFNs or elsewhere in the agreements that requires the publishers to impose higher resale prices elsewhere, or prevents the publishers from selling through Apple at a lower price, if necessary. That said, it may well have been everyone’s hope that, as the DOJ alleges, the MFNs would operate like price floors instead of price ceilings, ensuring higher prices for publishers. But hoping for higher prices is not an antitrust offense, and, as I’ve discussed, it’s not even clear that, viewed more broadly in terms of the evolution of the e-book and e-reader markets, higher prices in the short run would be bad for consumers.

The Legal Standard

If book publishers don’t necessarily know what’s really in their own best interest, the DOJ is even more constrained in judging the benefits (or costs) for consumers at large from this scheme. As I’ve suggested, there is a pretty clear procompetitive story here, and a court may indeed agree that this should not be judged under a per se liability standard (as would apply in the case of naked price-fixing).

Most important, there is no allegation here that the publishers and Apple (or the publishers among themselves) agreed on price. Rather, the allegation is that they agreed to adopt a particular business model (one that, I would point out, probably resulted in greater variation in price, rather than less, compared to Amazon’s traditional $9.99-for-all pricing scheme). If the DOJ can convince a court that this nevertheless amounts to a naked price-fixing agreement among publishers, with Apple operating as the hub, then Apple and the publishers are probably sunk. But while antitrust law is suspicious of collective action among rivals in coordinating on prices, this change in business model does not alone coordinate prices. Each individual publisher can set its own price, and it’s not clear that the DOJ’s evidence points to any agreement with respect to actual pricing levels.

It does seem pretty clear that there is coordination here on the shift in business models.  But sometimes antitrust law condones such collective action to take account of various efficiencies (think standard setting or joint ventures or collective rights groups like BMI).  Here, there is a more than plausible case that coordinated action to move to a plausibly-more-efficient business model was necessary and pro-competitive.  If Apple can convince a court of that, then the DOJ has a rule of reason case on its hands and is facing a very uphill battle.

[Cross posted at Forbes.com]

Blocking Verizon/SpectrumCo Deal Would Harm, not Help, Consumers
https://techliberation.com/2012/03/27/blocking-verizonspectrumco-deal-would-harm-not-help-consumers/
Tue, 27 Mar 2012 18:51:46 +0000

Yesterday, the International Center for Law and Economics and TechFreedom jointly filed comments [pdf] with the FCC on the Verizon SpectrumCo deal.  In the comments, ICLE Executive Director Geoffrey Manne and TechFreedom President Berin Szoka counter the primary arguments against the deal:

Critics lament the concentration of spectrum in the hands of one of the industry’s biggest players, but the assumption that concentration will harm consumers is unsupported and misplaced. Concentration of spectrum has not slowed the growth of the market; rather, the problem is that growth in demand has dramatically outpaced capacity. What’s more: prices have plummeted even as the industry has become more concentrated.

While the FCC undeniably has authority to review the license transfers, the argument that the separate but related commercial agreements would reduce competition is properly the province of the Department of Justice.  That argument is best measured under the antitrust laws, not by the FCC under its vague “public interest” standard.  Indeed, if the FCC can assert jurisdiction over the commercial agreements as part of its public interest review, its authority over license transfers will become a license to regulate all aspects of business.  This is a recipe for certain mischief.

The need for all competitors, including Verizon, to obtain sufficient spectrum to meet increasing demand demonstrates that the deal is in the public interest and should be approved.

Google Isn’t ‘Leveraging Its Dominance,’ It’s Fighting To Avoid Obsolescence https://techliberation.com/2012/03/12/google-isnt-leveraging-its-dominance-its-fighting-to-avoid-obsolescence/ https://techliberation.com/2012/03/12/google-isnt-leveraging-its-dominance-its-fighting-to-avoid-obsolescence/#comments Mon, 12 Mar 2012 14:32:29 +0000 http://techliberation.com/?p=40328

Six months may not seem a great deal of time in the general business world, but in the Internet space it’s a lifetime, as new websites, tools and features are introduced every day that change where and how users get and share information. The rise of Facebook is a great example: the social networking platform, which didn’t exist at the start of 2004, filed paperwork last month to launch what is expected to be one of the largest IPOs in history. To put it in perspective, Ford Motor went public more than fifty years after it was founded.

This incredible pace of innovation is seen throughout the Internet, and since Google’s public disclosure of its Federal Trade Commission antitrust investigation just this past June, there have been many dynamic changes to the landscape of the Internet search market. And as the needs and expectations of consumers continue to evolve, Internet search must adapt – and quickly – to shifting demand.

One noteworthy development was the release of Siri by Apple, which was introduced to the world in late 2011 on the most recent iPhone. Today, many consider it the best voice recognition application in history, but its potential really lies in its ability to revolutionize the way we search the Internet, answer questions and consume information. As Eric Jackson of Forbes noted, in the future it may even be a “Google killer.”

Of this we can be certain: Siri is the latest (though certainly not the last) game changer in Internet search, and it has begun to change people’s expectations about both the process and the results of search. The search box, once needed to connect us with information on the web, is dead or dying. In its place is an application that feels intuitive and personal. Siri has become a near-indispensable entry point, and search engines are merely the back-end. And though it is still a new feature, Siri’s expansion is inevitable. In fact, it is rumored that Apple is diligently working on Siri-enabled televisions – an entirely new market for the company.

The past six months have also brought the convergence of social media and search engines, as first Bing and more recently Google have incorporated information from a social network into their search results. Again we see technology adapting and responding to the once-unimagined way individuals find, analyze and accept information. Instead of relying on traditional, mechanical search results and the opinions of strangers, this new convergence allows users to find data and receive input directly from people in their social world, offering results curated by friends and associates.

As social networks become more integrated with the Internet at large, reviews from trusted contacts will continue to change the way that users search for information. As David Worlock put it in a post titled “Decline and Fall of the Google Empire,” “Facebook and its successors become the consumer research environment. Search by asking someone you know, or at least have a connection with, and get recommendations and references which take you right to the place where you buy.” The addition of social data to search results lends a layer of novel, trusted data to users’ results. Search Engine Land’s Danny Sullivan agreed, writing, “The new system will perhaps make life much easier for some people, allowing them to find both privately shared content from friends and family plus material from across the web through a single search, rather than having to search twice using two different systems.” It only makes sense, from a competition perspective, that Google followed suit and recently merged its social and search data in an effort to make search more relevant and personal.

Inevitably, a host of Google’s critics and competitors has cried foul. In fact, as Google has adapted and evolved from its original template to offer users not only links to URLs but also maps, flight information, product pages, videos and now social media inputs, it has met with a curious resistance at every turn. And, indeed, judged against a world in which Internet search is limited to “ten blue links,” with actual content – answers to questions – residing outside of Google’s purview, it has significantly expanded its reach and brought itself (and its large user base) into direct competition with a host of new entities.

But the worldview that judges these adaptations as unwarranted extensions of Google’s platform from its initial baseline, itself merely a function of the relatively limited technology and nascent consumer demand present at the firm’s inception, is dangerously crabbed. By challenging Google’s evolution as “leveraging its dominance” into new and distinct markets, rather than celebrating its efforts (and those of Apple, Bing and Facebook, for that matter) to offer richer, more-responsive and varied forms of information, this view denies the essential reality of technological evolution and exalts outdated technology and outmoded business practices.

And while Google’s forays into the protected realms of others’ business models grab the headlines, it is also feverishly working to adapt its core technology, as well, most recently (and ambitiously) with its “Google Knowledge Graph” project, aimed squarely at transforming the algorithmic guts of its core search function into something more intelligent and refined than its current word-based index permits. In concept, this is, in fact, no different than its efforts to bootstrap social network data into its current structure: Both are efforts to improve on the mechanical process built on Google’s PageRank technology to offer more relevant search results informed by a better understanding of the mercurial way people actually think.

Expanding consumer welfare requires that Google, like its ever-shifting roster of competitors, must be able to keep up with the pace and the unanticipated twists and turns of innovation. As The Economist recently said, “Kodak was the Google of its day,” and the analogy is decidedly apt. Without the drive or ability to evolve and reinvent itself, its products and its business model, Kodak has fallen to its competitors in the marketplace. Once revered as a powerhouse of technological innovation for most of its history, Kodak now faces bankruptcy because it failed to adapt to its own success. Having invented the digital camera, Kodak radically altered the very definition of its market. But by hewing to its own metaphorical ten blue links – traditional film – instead of understanding that consumer photography had come to mean something dramatically different, Kodak consigned itself to failure.

Like Kodak and every other technology company before it, Google must be willing and able to adapt and evolve; just as for Lewis Carroll’s Red Queen, “here it takes all the running you can do, to keep in the same place.” Neither consumers nor firms are well served by regulatory policy informed by nostalgia. Even more so than Kodak, Google confronts a near-constantly evolving marketplace and fierce competition from unanticipated quarters. If regulators force it to stop running, the market will simply pass it by.

[Cross posted at Forbes]

Some Much-Needed Antitrust Skepticism on Senate Letter Urging FTC Google Investigation https://techliberation.com/2011/12/20/some-much-needed-antitrust-skepticism-on-senate-letter-urging-ftc-google-investigation/ https://techliberation.com/2011/12/20/some-much-needed-antitrust-skepticism-on-senate-letter-urging-ftc-google-investigation/#comments Tue, 20 Dec 2011 22:38:10 +0000 http://techliberation.com/?p=39545

By Geoffrey Manne and Berin Szoka

Back in September, the Senate Judiciary Committee’s Antitrust Subcommittee held a hearing on “The Power of Google: Serving Consumers or Threatening Competition?” Given the harsh questioning from the Subcommittee’s Chairman Herb Kohl (D-WI) and Ranking Member Mike Lee (R-UT), no one should have been surprised by the letter they sent yesterday to the Federal Trade Commission asking for a “thorough investigation” of the company. At least this time the danger is somewhat limited: by calling for the FTC to investigate Google, the senators are thus urging the agency to do . . . exactly what it’s already doing.

So one must wonder about the real aim of the letter. Unfortunately, the goal does not appear to be to offer an objective appraisal of the complex issues intended to be addressed at the hearing. That’s disappointing (though hardly surprising) and underscores what we noted at the time of the hearing: There’s something backward about seeing a company hauled before a hostile congressional panel and asked to defend itself, rather than its self-appointed prosecutors being asked to defend their case.

Senators Kohl and Lee insist that they take no position on the legality of Google’s actions, but their lopsided characterization of the issues in the letter—and the fact that the FTC is already doing what they purport to desire as the sole outcome of the letter!—leaves little room for doubt about their aim: to put political pressure on the FTC not merely to investigate, but to reach a particular conclusion and bring a case in court (or simply to ratchet up public pressure from its bully pulpit).

The five-page letter concludes with, literally, three sentences presenting Google’s case, one of which reads, in its entirety, “Google strongly denies the arguments of its critics.” The derision is palpable—as if only a craven monopolist would deign to actually deny the iron-clad arguments of Google’s competitors so painstakingly reproduced by Senators Kohl and Lee in the preceding four pages. This is neither rigorous analysis nor objective reporting on the contents of the Senate’s hearing.

While we worry about particularly successful companies being singled out for punishment, we hold no brief for Google in this debate. Instead, in all our writings, we’ve tried to present a consistently skeptical view about a worrisome trend in antitrust enforcement in high tech markets: error-prone and costly intervention in markets that are ill-understood and fast-moving, to the great detriment of consumers and progress generally. Although our institutions have received financial support from Google among a range of other companies, organizations and individuals, our work is focused on this broad mission; we have no obligation or intention to support any company simply because it finds value in supporting our mission.

We’ve defended (and one of us has even worked for) Microsoft in the past, and just yesterday, we lamented the fact that the Obama Justice Department and the FCC have effectively blocked Google’s arch-rival, AT&T, from buying T-Mobile. Rather than defend any particular company, our goal, to paraphrase Hayek, is to “demonstrate to [regulators] how little they really know about what they imagine they can design”—lest they undermine how competition actually works in the name of defending outdated models of how they think it should work. Unfortunately, the letter from Senators Kohl and Lee does nothing to assuage our concern and suggests instead that crass politics, rather than sensible economics, could determine the outcome of cases like this one—if not in a court of law, then in the court of public opinion and extra-legal intimidation.

To begin with, the letter asserts that “Google faces competition from only one general search engine, Bing,” suggesting that only Bing (and it, only ineffectively) could keep Google in check. In essence, the Senators are prejudging an essential question on which any case against Google would turn: market definition. But why would the market not include other tools for information retrieval? Is it not at least worth mentioning that more and more Internet users are finding information and spending time on social networks like Facebook and Twitter, while more and more advertisers are spending their money on these Google competitors? Isn’t it clear that search itself is evolving from “ten blue links” into something more social, multi-faceted and interactive?

In a remarkable leap, the senators then identify the specific alleged abuse that Google’s alleged market power leads to: search bias. That’s remarkable because, other than the breathless claims of disgruntled competitors (given plenty of air time at the September hearing), there is actually no evidence that search bias is, in fact, harmful to consumers—which is what antitrust is concerned with. (Read both sides of this debate in TechFreedom’s free ebook, The Next Digital Decade: Essays on the Future of the Internet.)

As our colleague, Josh Wright, has thoroughly demonstrated, this “own-content” bias is actually an infrequent phenomenon and is simply not consistent with an actionable claim of anticompetitive foreclosure. Moreover, among search engines, Google references its own content far less frequently than does Bing (which favors Microsoft content in the first search result, when no other search engine does, more than twice as often as Google favors its own content).

Of course, none of this is even hinted at in the Senators’ letter, which seems intended to condemn Google for “preferencing” its own content (under the pretense of withholding judgment). It’s a little like condemning Target for deigning to use its trucks to supply inventory only to its own stores instead of Wal-Mart’s, or, say, condemning a congressman for targeting earmarks for his own state or district. Earmark bias!

In Google’s case, the fundamental basis for these claims, according to the letter, is that “Google’s business model has changed dramatically in recent years.” This is a remarkably candid admission: a company that successfully advances its organization, keeping up with rapidly-shifting technology and mercurial demand, can be condemned—and its business practices adjudged illegal—simply by virtue of the fact that it has, indeed, evolved to offer products it didn’t offer before. Never mind that those products didn’t previously exist and, in some cases, were in fact invented by that company! How would punishing this serve consumers?

To add insult to injury, the story is “corroborated” by the senators’ parroting, without caveats, claims by Google’s rivals that they are harmed by Google’s favoring its own content, and that “they would not attempt to launch their companies today given GoogIe’s current practices.” As a general matter, antitrust law treats such self-interested claims of competitors with the skepticism they deserve. You wouldn’t know it from reading the letter (nor from reading the transcript from the September hearing), but harm to competitors is not the same thing as harm to consumers or competition more generally (which is what antitrust law cares about). The reason is simple: nothing harms competitors more than effective, vigorous competition. Reasoning backward from harm to competitors to infer anticompetitive conduct is the height of irresponsible antitrust enforcement.

The letter also reports, again with no caveats, claims by the CEOs of Yelp! and Nextag that “75 percent of Yelp!’s web traffic consists of consumers who find its website as a result of Google searches, and . . . 65 percent of Nextag’s traffic originates from Google searches,” and that losing this much traffic to Google preferencing its own content would be catastrophic. But the letter fails to mention that most searches for brand names on Google are “navigational” rather than “informational.” As Google competitor Expedia’s CEO recently explained:

The majority of, at least Expedia’s, and I believe Hotel.com’s traffic that comes from search to our site actually come through people searching for Expedia, for example. So in typing in Expedia in Google or so on, typing in Hotels.com in Google. So of the 25% for Expedia, for example, the majority of that traffic is someone who’s already looking for Expedia, and that person is going to find Expedia one way or the other because they are searching for something very specific. (Expedia earnings call, 10/28/10, quoted here).

Indeed, a recently published independent academic study conducted across search engines concluded that 52% of “business queries” (and 72% of organizational queries) were navigational. In other words, most of the Google traffic going to these sites was likely from users who simply typed in “Yelp” or “Nextag” as a convenient way of getting to those sites. Such searches are not diverted (and not even claimed to be diverted) to Google’s own sites, and the first search result for the search term “Expedia” will always be expedia.com. Thus, the majority of the searches claimed to make up 75% and 65% of the complaining companies’ traffic is not in any way threatened by Google’s business model, and is completely irrelevant to assessing the effect of Google preferencing its own content.
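To see how much this deflates the complaint, apply the study’s navigational share to the traffic figures above. (Using the 52% figure for Yelp’s Google traffic mix is an assumption made purely for illustration; the study measured queries generally, not Yelp’s traffic specifically.)

```python
# Percentages as cited in the post; applying the study's 52% navigational
# share to Yelp's Google-originated traffic is an illustrative assumption.
google_share = 0.75   # portion of Yelp's traffic arriving via Google searches
navigational = 0.52   # portion of "business queries" that are navigational
at_risk = google_share * (1 - navigational)  # traffic actually exposed to ranking changes
print(round(at_risk, 2))  # 0.36
```

On that rough assumption, barely a third of Yelp’s total traffic, not three quarters, is even potentially exposed to Google’s ranking decisions.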

Furthermore, the letter does not mention Yelp’s recent boast that over 40% (and growing) of its searches are now conducted on its mobile app—insulating it from whatever “power” Google might exercise over traditional searches. While generic search may be the default navigational tool for many desktop users, a great many users seem to prefer searching with apps like Yelp’s on their mobile devices, further underscoring the complexity of the markets at issue and the problem with the kind of facile market definition on display here.

Moreover, who really knows what anyone might have done in 1999 (Nextag) or 2004 (Yelp)? It is facile and meaningless for the companies to imply that Google’s conduct is stifling today the same business models that emerged 7 or 12 years ago, before the ensuing evolution of the market. It would be a shame, in fact, if those same companies were emerging only today, and one shouldn’t be surprised in a rapidly evolving marketplace to find that many once-brilliant ideas turn out to be bested by the vagaries of uncertain, innovative markets. Remember, it wasn’t so long ago that Yahoo! ruled the “portal market,” which morphed into the “search” market “controlled,” in turn, by AOL and AltaVista. A static snapshot of the market at any given moment might have inspired the sort of hand-wringing Google inspires today. But the market kept evolving—without government intervention—each time rendering today’s tech titans tomorrow’s has-beens. Nostalgia and a reflexive preference for the status quo are the worst vices of regulating any evolving market, especially high-tech ones. Estimates that Yelp’s upcoming IPO may put the company at a valuation of $1-2 billion should at least make us somewhat skeptical of such claims, anyway.

It is for this reason—the disconnect between the interests of competitors and those of “competition” and the consumers it serves—that it’s particularly disingenuous for the letter to identify Tom Barnett only as “the Assistant Attorney General for Antitrust in the administration of President George W. Bush.” This is an ostentatious attempt to appeal to Republicans normally skeptical of government meddling, giving him the last word to claim that “the ultimate result of Google’s practices will be an Internet with fewer choices for consumers and business, higher prices, and less innovation.” (Sen. Lee himself seems to have fallen prey to claims by soi-disant conservatives like Rick Rule (also, coincidentally, antitrust attorney to several of Google’s complainants) that antitrust meddling is a core part of capitalism—rather than another form of government regulation prone to capture by incumbents and politicization, precisely as Judge Bork warned in The Antitrust Paradox.)

A fairer letter would have noted the far more salient fact that Barnett is counsel for Expedia Inc., a member of the anti-Google Fairsearch coalition, for which he has served as spokesman. As Josh Wright has ably demonstrated, AAG Barnett and counsel-to-Expedia Barnett have wildly divergent views. While AAG Barnett is rightly celebrated as a thoughtful and restrained antitrust expert (indeed, he taught Berin antitrust law!), counsel-to-Expedia Barnett is a faithful and diligent advocate for his client (as well he should be). It is no disrespect to him to say that his client’s interests are not necessarily the same as those of the consumers Senators Kohl and Lee purport to represent; it is, however, questionable to hold out his views on this matter as representative of consumer interests.

The letter goes on to highlight mobile search as a particularly problematic arena. Why? Because “Google may, as a condition of access to the Android operating system, require phone manufacturers to install Google as the default search engine.” But . . . they haven’t actually done that! The mobile phone market is remarkably competitive and ever-shifting. (One can easily imagine this same letter being written to raise pressing, irreversible concerns about Apple’s iPhone a year or two ago—just before Google’s Android operating system managed to seize the 43% of smart phone operating system share about which this letter is so concerned). Nevertheless, the FTC is urged to “ensure robust competition” in a market marred only by the senators’ purely speculative story about what could conceivably happen some day in the future. Is this really a responsible use of antitrust law?

It certainly isn’t responsible analysis. The Senators’ professed concern for robust competition and protection of the free market is undermined by the letter’s uncritical repetition of attacks on Google made by its competitors. At best, this letter is a missed opportunity to fairly present both sides of this complex case. For this reason, as well as the inconvenient fact (oddly completely absent from the letter) that the FTC is, as we noted, already actually investigating Google, we urge Chairman Leibowitz to investigate nothing more pertaining to this letter than the shape of the arc it makes as it flies through the air into his office wastebasket.

A Quick Assessment of the FCC’s Appalling Staff Report on the AT&T Merger https://techliberation.com/2011/12/02/a-quick-assessment-of-the-fcc%e2%80%99s-appalling-staff-report-on-the-att-merger/ https://techliberation.com/2011/12/02/a-quick-assessment-of-the-fcc%e2%80%99s-appalling-staff-report-on-the-att-merger/#comments Fri, 02 Dec 2011 16:18:33 +0000 http://techliberation.com/?p=39223

[Cross posted at Truth on the Market]

As everyone knows by now, AT&T’s proposed merger with T-Mobile has hit a bureaucratic snag at the FCC. The remarkable decision to refer the merger to the Commission’s Administrative Law Judge (in an effort to derail the deal) and the public release of the FCC staff’s internal, draft report are problematic and poorly considered. But far worse is the content of the report on which the decision to attempt to kill the deal was based.

With this report the FCC staff joins the exalted company of AT&T’s complaining competitors (surely the least reliable judges of the desirability of the proposed merger if ever there were any) and the antitrust policy scolds and consumer “advocates” who, quite literally, have never met a merger of which they approved.

In this post I’m going to hit a few of the most glaring problems in the staff’s report, and I hope to return again soon with further analysis.

As it happens, AT&T’s own response to the report is actually very good and it effectively highlights many of the key problems with the staff’s report. While it might make sense to take AT&T’s own reply with a grain of salt, in this case the reply is, if anything, too tame. No doubt the company wants to keep in the Commission’s good graces (it is the very definition of a repeat player at the agency, after all). But I am not so constrained. Using the company’s reply as a jumping off point, let me discuss a few of the problems with the staff report.

First, as the blog post (written by Jim Cicconi, Senior Vice President of External & Legislative Affairs) notes,

We expected that the AT&T-T-Mobile transaction would receive careful, considered, and fair analysis. Unfortunately, the preliminary FCC Staff Analysis offers none of that. The document is so obviously one-sided that any fair-minded person reading it is left with the clear impression that it is an advocacy piece, and not a considered analysis. In our view, the report raises questions as to whether its authors were predisposed. The report cherry-picks facts to support its views, and ignores facts that don’t. Where facts were lacking, the report speculates, with no basis, and then treats its own speculations as if they were fact. This is clearly not the fair and objective analysis to which any party is entitled, and which we have every right to expect.

OK, maybe they aren’t pulling punches. The fact that this reply was written with such scathing language despite AT&T’s expectation to have to go right back to the FCC to get approval for this deal in some form or another itself speaks volumes about the undeniable shoddiness of the report.

Cicconi goes on to detail five areas where AT&T thinks the report went seriously awry: “Expanding LTE to 97% of the U.S. Population,” “Job Gains Versus Losses,” “Deutsche Telekom, T-Mobile’s Parent, Has Serious Investment Constraints,” “Spectrum” and “Competition.” I have dealt with a few of these issues at some length elsewhere, including most notably here (noting how the FCC’s own wireless competition report “supports what everyone already knows: falling prices, improved quality, dynamic competition and unflagging innovation have led to a golden age of mobile services”), and here (“It is troubling that critics–particularly those with little if any business experience–are so certain that even with no obvious source of additional spectrum suitable for LTE coming from the government any time soon, and even with exponential growth in broadband (including mobile) data use, AT&T’s current spectrum holdings are sufficient to satisfy its business plans”).

What is really galling about the staff report—and, frankly, the basic posture of the agency—is that its criticisms really boil down to one thing: “We believe there is another way to accomplish (something like) what AT&T wants to do here, and we’d just prefer they do it that way.” This is central planning at its most repugnant. What is both assumed and what is lacking in this basic posture is beyond the pale for an allegedly independent government agency—and as Larry Downes notes in the linked article, the agency’s hubris and its politics may have real, costly consequences for all of us.

Competition

But procedure must be followed, and the staff thus musters a technical defense to support its basic position, starting with the claim that the merger will result in too much concentration. Blinded by its new-found love for HHIs, the staff commits a few blunders. First, it claims that concentration levels like those in this case “trigger a presumption of harm” to competition, citing the DOJ/FTC Merger Guidelines. Alas, as even the report’s own footnotes reveal, the Merger Guidelines actually say that highly concentrated markets with HHI increases of 200 or more trigger a presumption that the merger will “enhance market power.” This is not, in fact, the same thing as harm to competition. Elsewhere the staff calls this—a merger that increases concentration and gives one firm an “undue” share of the market—“presumptively illegal.” Perhaps the staff could use an antitrust refresher course. I’d be happy to come teach it.
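Since we’re offering refreshers: the HHI arithmetic at issue is easy to sketch. (The market shares below are hypothetical, chosen purely for illustration; they are not the actual wireless market figures.)

```python
# Herfindahl-Hirschman Index: the sum of squared market shares (in percent).
# The shares below are hypothetical, not the actual wireless data.

def hhi(shares):
    """HHI for a market whose shares (percentages) sum to at most 100."""
    return sum(s ** 2 for s in shares)

def merger_delta(shares, a, b):
    """HHI increase if the two firms with (distinct) shares a and b merge."""
    merged = [s for s in shares if s not in (a, b)] + [a + b]
    return hhi(merged) - hhi(shares)

pre = [35, 30, 17, 10, 8]         # hypothetical five-firm market
print(hhi(pre))                   # 2578: "highly concentrated" (> 2500)
print(merger_delta(pre, 30, 17))  # 1020 = 2 * 30 * 17, far above the 200 trigger
```

Under the 2010 Guidelines, a post-merger HHI above 2,500 marks a market as highly concentrated, and an increase of 200 or more in such a market triggers the presumption that the merger will enhance market power, which, again, is not the same thing as a presumption of harm to competition.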

Not only is there no actual evidence of consumer harm resulting from the sort of increases in concentration that might result from the merger, but the staff seems to derive its negative conclusions despite the damning fact that the data shows that wireless markets have seen considerable increases in concentration along with considerable decreases in prices, rather than harm to competition, over the last decade. While high and increasing HHIs might indicate a need for further investigation, when actual evidence refutes the connection between concentration and price, they simply lose their relevance. Someone should tell the FCC staff.

This is a different Wireless Bureau than the one that wrote so much sensible material in the 15th Annual Wireless Competition Report. That Bureau described a complex, dynamic, robust mobile “ecosystem” driven not by carrier market power and industrial structure, but by rapid evolution and technological disruptors. The analysis here wishes away every important factor that every consumer knows to be the real drivers of price and innovation in the mobile marketplace, including, among other things:

  1. Local markets, where there are five, six, or more carriers to choose from;
  2. Non-contract/pre-paid providers, whose strength is rapidly growing;
  3. Technology that is making more bands of available spectrum useful for competitive offerings;
  4. The reality that LTE will make inter-modal competition a reality; and
  5. The reality that churn is rampant and consumer decision-making is driven today by devices, operating systems, applications and content – not networks.

The resulting analysis is stilted and stale, and describes a wireless industry that exists only in the agency’s collective imagination.

There is considerably more to say about the report’s tortured unilateral effects analysis, but it will have to wait for my next post. Here I want to quickly touch on two of the other issues called out by Cicconi’s blog post.

Jobs

First, although it’s not really in my bailiwick to comment on the job claims that have been such an important aspect of the public conversations surrounding this merger, some things are simple logic, and the staff’s contrary claims here are inscrutable. As Cicconi suggests, it is hard to understand how the $8 billion investment and build-out required to capitalize on AT&T’s T-Mobile purchase will fail to produce a host of jobs, how the creation of a more-robust, faster broadband network will fail to ignite even further growth in this growing sector of the economy, and, finally, how all this can fail to happen while the FCC’s own (relatively) paltry $4.5 billion broadband fund will somehow nevertheless create approximately 500,000 (!!!) jobs. Even Paul Krugman knows that private investment is better than government investment in generating stimulus – the claim is that there’s not enough of it, not that it doesn’t work as well. Here, however, the fiscal experts on the FCC’s staff have determined that massive private funding won’t create even 96,000 jobs, although the same agency claims that government funding only one half as large will create five times that many jobs. Um, really?
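Take the figures above at face value for a moment; the productivity the staff implicitly attributes to public versus private dollars is easy to compute (a purely illustrative comparison, not a jobs forecast):

```python
# Figures as cited above; the comparison is illustrative, not a forecast.
fcc_fund = 4.5e9        # FCC broadband fund
fcc_jobs = 500_000      # jobs the FCC projects that fund will create
att_invest = 8e9        # AT&T's claimed merger-related investment
att_jobs = 96_000       # jobs the staff report doubts that investment can produce

fcc_rate = fcc_jobs / (fcc_fund / 1e9)    # ~111,111 jobs per $1B of public money
att_rate = att_jobs / (att_invest / 1e9)  # 12,000 jobs per $1B of private money
print(round(fcc_rate / att_rate, 2))      # 9.26: the implied public-money multiplier
```

On the staff’s own numbers, in other words, a public dollar would have to be roughly nine times more effective at creating jobs than a private one.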

Meanwhile the agency simply dismisses AT&T’s job preservation commitments. Now, I would also normally disregard such unenforceable pronouncements as cheap talk – except given the frequency and the volume with which AT&T has made them, they would suffer pretty mightily for failing to follow through on them now. Even more important perhaps, I have to believe (again, given the vehemence with which they have made the statements and the reality of de facto, reputational enforcement) they are willing to agree to whatever is in their control in a consent decree, thus making them, in fact, legally enforceable. For the staff to so blithely disregard AT&T’s claims on jobs is unintelligible except as farce—or venality.

Spectrum

Although the report rarely misses an opportunity to fail to mention the spectrum crisis that has been at the center of the Administration’s telecom agenda and the focus of the National Broadband Plan, coincidentally authored by the FCC’s staff, the crux of the report seems to come down to a stark denial that such a spectrum crunch even exists. As I noted, much of the staff report amounts to an extended meditation on why the parties can and should run their businesses as the staff say they can and should. The report’s section assessing the parties’ claims regarding the transition to LTE (para 210, ff.) is remarkable. It begins thus:

One of the Applicants’ primary justifications for the necessity of this transaction is that, as standalone firms, AT&T and T-Mobile are, and will continue to be, spectrum and capacity constrained. Due to these constraints, we find it more plausible that a spectrum constrained firm would maximize deployment of more spectrally efficient LTE, rather than limit it. Transitioning to LTE is primarily a function of only two factors: (1) the extent of LTE capable equipment deployed on the network and (2) the penetration of LTE compatible devices in the subscriber base. Although it may make it more economical, the transition does not require “spectrum headroom” as the Applicants claim. Increased deployment could be achieved by both of the Applicants on a standalone basis by adding the more spectrally efficient LTE-capable radios and equipment to the network and then providing customers with dual mode HSPA/LTE devices. . . .

Forget the spectrum crunch! It is the very absence of spectrum that will give firms the incentive and the ability to transition to more-efficient technology. And all they have to do is run duplicate equipment on their networks and give all their customers new devices overnight. And, well, the whole business model fits in a few paragraphs, entails no new spectrum, actually creates spectrum, and meets all foreseeable demand (as long as demand never increases which, of course, the report conveniently fails to assess).

Moreover, claims the report, AT&T’s transition to LTE flows inevitably from its competition with Verizon. But, as Cicconi points out, the staff is unprincipled in its disparate treatment of the industry’s competitive conditions. Somehow, without T-Mobile in the mix, prices will skyrocket and quality will be degraded—let’s say, just for example, by not upgrading to LTE (my interpretation, not the staff’s). But 100 pages later, it turns out that AT&T doesn’t need to merge with T-Mobile to expand its LTE network because it will have to do so in response to competition from Verizon anyway. It would appear, however, that Verizon’s power over AT&T operates only if T-Mobile exists separately and AT&T has a harder time competing. Remove T-Mobile and expand AT&T’s ability to compete and, apparently, the market collapses. Such is the logic of the report.

There is much more to criticize in the report, and I hope to have a chance to do so in the next few days.

]]>
https://techliberation.com/2011/12/02/a-quick-assessment-of-the-fcc%e2%80%99s-appalling-staff-report-on-the-att-merger/feed/ 7 39223
ABA Roundtable Discussion Tomorrow on the AT&T/T-Mobile Merger https://techliberation.com/2011/09/26/aba-roundtable-discussion-tomorrow-on-the-attt-mobile-merger/ https://techliberation.com/2011/09/26/aba-roundtable-discussion-tomorrow-on-the-attt-mobile-merger/#respond Mon, 26 Sep 2011 22:23:10 +0000 http://techliberation.com/?p=38424

[Cross posted at Truthonthemarket]

As I have posted before, I was disappointed that the DOJ filed against AT&T in its bid to acquire T-Mobile.  The efficacious provision of mobile broadband service is a complicated business, made even more complicated by the government’s meddling.  Responses like this merger are both inevitable and essential.  And Sprint and Cellular South piling on doesn’t help — and, as Josh has pointed out, further suggests that the merger is actually pro-competitive.

Tomorrow, along with a great group of antitrust attorneys, I am going to pick up where I left off in that post during a roundtable discussion hosted by the American Bar Association.  If you are in the DC area you should attend in person, or you can call in to listen to the discussion–but either way, you will need to register here.  There should be a couple of people live tweeting the event, so keep up with the conversation by following #ABASAL.

Panelists:

  • Richard Brunell, Director of Legal Advocacy, American Antitrust Institute, Boston
  • Allen Grunes, Partner, Brownstein Hyatt Farber Schreck, Washington
  • Glenn Manishin, Partner, Duane Morris LLP, Washington
  • Geoffrey Manne, Lecturer in Law, Lewis & Clark Law School, Portland
  • Patrick Pascarella, Partner, Tucker Ellis & West, Cleveland

Location: Wilson Sonsini Goodrich & Rosati, P.C., 1700 K St. N.W., Fifth Floor, Washington, D.C. 20006

For more information, check out the flyer here.

]]>
https://techliberation.com/2011/09/26/aba-roundtable-discussion-tomorrow-on-the-attt-mobile-merger/feed/ 0 38424
The Spectrum Argument Lives, Debunking Letter-Gate, and Why the DOJ Is Still Wrong to Try to Stop the AT&T/T-Mobile Merger https://techliberation.com/2011/09/02/the-spectrum-argument-lives-debunking-letter-gate-and-why-the-doj-is-still-wrong-to-try-to-stop-the-attt-mobile-merger/ https://techliberation.com/2011/09/02/the-spectrum-argument-lives-debunking-letter-gate-and-why-the-doj-is-still-wrong-to-try-to-stop-the-attt-mobile-merger/#comments Fri, 02 Sep 2011 17:01:41 +0000 http://techliberation.com/?p=38230

Milton Mueller responded to my post Wednesday on the DOJ’s decision to halt the AT&T/T-Mobile merger by asserting that there was no evidence the merger would lead to “anything innovative and progressive” and claiming “[t]he spectrum argument fell apart months ago, as factual inquiries revealed that AT&T had more spectrum than Verizon and the mistakenly posted lawyer’s letter revealed that it would be much less expensive to expand its capacity than to acquire T-Mobile.”  With respect to Milton, I think he’s been suckered by the “big is bad” crowd at Public Knowledge and Free Press.  But he’s hardly alone and these claims — claims that may well have under-girded the DOJ’s decision to step in to some extent — merit thorough refutation.

To begin with, LTE is “progress” and “innovation” over 3G and other quasi-4G technologies.  AT&T is attempting to make an enormous (and risky) investment in deploying LTE technology reliably and to almost everyone in the US–something T-Mobile certainly couldn’t do on its own and something AT&T would have been able to do only partially and over a longer time horizon and, presumably, at greater expense.  Such investments are exactly the things that spur innovation across the ecosystem in the first place.  No doubt AT&T’s success here would help drive the next big thing–just as quashing it will make the next big thing merely the next medium-sized thing.

The “Spectrum Argument”

The spectrum argument that Milton claims “fell apart months ago” is the real story here, the real driver of this merger, and the reason why the DOJ’s action yesterday is, indeed, a blow to progress.  That argument, unfortunately, still stands firm.  Even more, the irony is that to a significant extent the spectrum shortfall is a product of the government’s own making–through mismanagement of spectrum by the FCC, political dithering by Congress, and local government intransigence on tower siting and co-location–and the notion of the government now intervening here to “fix” one of the most significant private efforts to make progress despite these government impediments is really troubling.

Anyway, here’s what we know about spectrum:  There isn’t enough of it in large enough blocks and in bands suitable for broadband deployment using available technology to fully satisfy  current–let alone future–demand.

Two incredibly detailed government sources for this conclusion are the FCC’s 15th Annual Wireless Competition Report and the National Broadband Plan.  Here’s FCC Chairman Julius Genachowski summarizing the current state of affairs (pdf):

The point deserves emphasis:  the clock is ticking on our mobile future. The FCC is an expert agency staffed with first-rate employees who have been working on spectrum allocation for decades – and let me tell you what the career engineers are telling me. Demand for spectrum is rapidly outstripping supply. The networks we have today won’t be able to handle consumer and business needs.

* * *

To avoid this crisis, the National Broadband Plan recommended reallocating 500 megahertz of spectrum for broadband, nearly double the amount that is currently available.

* * *

First, there are some who say that the spectrum crunch is greatly exaggerated – indeed, that there is no crunch coming. They also suggest that there are large blocks of spectrum just lying around – and that some licensees, such as cable and wireless companies, are just sitting on top of, or “hoarding,” unused spectrum that could readily solve that problem. That’s just not true.

* * *

The looming spectrum shortage is real – and it is the alleged hoarding that is illusory.

It is not hoarding if a company paid millions or billions of dollars for spectrum at auction and is complying with the FCC’s build-out rules. There is no evidence of non-compliance. . . . [T]he spectrum crunch will not be solved by the build-out of already allocated spectrum.

All of the evidence suggests that spectrum suitable for mobile broadband is scarce and growing scarcer.  Full stop.

It is troubling that critics–particularly those with little if any business experience–are so certain that even with no obvious source of additional spectrum suitable for LTE coming from the government any time soon, and even with exponential growth in broadband (including mobile) data use, AT&T’s current spectrum holdings are sufficient to satisfy its business plans (and its investors and stockholders).  You’d think AT&T would be delighted to hear this news–what we really need is a shareholder resolution to put Gigi Sohn on the board!

But seriously, put yourself in AT&T’s shoes for a moment.  Its long-term plans require the company to deploy significantly more spectrum than it currently holds in a reasonable time horizon (even granting Milton’s dubious premise that the company is squatting on scads of unused spectrum–remember that even if AT&T had all the spectrum sitting in its proverbial bank vault it would still be just about a third of the total amount of spectrum we’re predicted to need in just a few years).  Considering the various impediments of net neutrality regulation, congressional politics, presidential politics (think this had anything to do with claims about job losses from the merger, by chance?), reluctant broadcasters, the FCC, state PUCs, environmental groups and probably 10-12 others . . . the chances of being able to obtain the necessary spectrum and cell tower sitings in any other reasonable fashion were perhaps appropriately deemed . . . slim.

With the T-Mobile deal, on the other hand, “AT&T will gain cell sites equivalent to what would have taken on average five years to build without the transaction, and double that in some markets. AT&T’s network density will increase by approximately 30 percent in some of its most populated areas.” (Source).  I just don’t see how this jibes with the claim that the spectrum argument has fallen apart.

But there is a larger, “meta” point to make here, and it’s one that policy scolds and government regulators too often forget.  Even if none of that were true, we don’t know for sure what is optimal, and we do know that the DOJ is a political organization made up of human beings, operating not only under that same ignorance but with incentives that don’t necessarily translate into “maximize social welfare,” and with no actual “skin in the game.”  Under those conditions, the basic, simple, time-tested, logical and self-evident error cost principle counsels pretty firmly against intervention.  Humility, not hubris, should rule the roost.

And that’s especially true since you know what will happen if the DOJ (or the FCC) succeeds in preventing AT&T from buying T-Mobile?  T-Mobile will still disappear and we’ll still be left with (according to the DOJ’s analysis) the terrifying prospect of only 3 national wireless telecom providers.  Only, in that case, everyone’s going to think a lot harder about investing in future developments that might warrant integration or cooperation or . . . well, the DOJ will challenge anything, so add to the list patent pools, too much success, not enough sharing, etc., etc.  And you wonder why I think this might constitute an assault on innovation?

Now, as for Milton’s specific claims, reminiscent of Public Knowledge’s and Free Press’ talking points, let me quote AT&T’s Public Interest Statement discussing its own particular spectrum holdings:

Because of the high demand for broadband service, AT&T already has had to deploy four carriers (for a total of 40 MHz of spectrum) for UMTS [3G] in some areas—and it will need to deploy more in the near future, even if doing so squeezes its GSM spectrum allocation and compromises GSM service quality . . . .  AT&T expects that, given the relative infancy of the LTE ecosystem and the time needed to migrate subscribers, it will need to continue to allocate spectrum to UMTS services for a substantial number of years—indeed, even longer than AT&T needs to continue allocating spectrum for GSM services.

* * *

AT&T has begun deployment of LTE services using its AWS and 700 MHz spectrum and currently plans to cover more than 250 million people by the end of 2013

* * *

AT&T projects it will need to use its 850 MHz and 1900 MHz spectrum holdings to support GSM and UMTS services for a number of years and, in the meantime, will not be able to re-deploy them for more spectrally efficient LTE services.

* * *

AT&T’s existing WCS spectrum holdings cannot be used for this purpose either, because the technical rules for the WCS band, such as power spectral density limits, make it infeasible to use that band for broadband service.

In other words, I don’t think AT&T has been (nor could it be, given the FCC’s detailed knowledge on the subject) hiding its spectrum holdings.  Instead, the company has been making quite clear that the spectrum it has is simply insufficient to meet anticipated demand.  And, well, duh!  Anyone who uses AT&T knows its network is overloaded.  Some of that’s because of tower-siting issues, some because it simply didn’t anticipate the extent of demand it would face.  I heard somewhere that no matter how hard they try to account for their perpetual under-accounting, every estimate by every mobile provider of anticipated spectrum needs in the past two decades or so has fallen short.  I’m quite sure that AT&T didn’t anticipate in 2007 that spectrum usage would increase by 8000% (yes, that’s thousand) by 2010.

Moreover, there will always (in any sensible system) be excess capacity at times–as it happens, at (conveniently) the times when spectrum usage is often counted–in order to deal with peak loads.  It is no more sensible to deploy capacity sufficient to handle the maximum load 100% of the time than it is to deploy capacity to handle only the minimum load 100% of the time.  Does that mean the often-unused spectrum is “excess”?  Clearly not.

Moreover (again), not all spectrum is in contiguous blocks sufficient to deploy LTE.  AT&T (at least) claims that is the case with much of its existing spectrum.  Spectrum isn’t simply fungible, and un-nuanced claims that “AT&T has X megahertz of spectrum and it is plenty” are just meaningless.  Again, just because Free Press says otherwise does not make it so.  You can simply discount AT&T’s claims if you like–I’m sure it’s possible they’re just lying; but you should probably be careful whose “information” you believe instead.

But, no, Milton, the spectrum argument did not “fall apart months ago.”  Gigi Sohn, Harold Feld and Sprint just said it did.  There’s a difference.

“Letter-Gate”

As for the infamous letter alleged to show that AT&T could expand LTE service from its previously-planned 80% of the country to the 97% it promises if the merger goes through for significantly less than it would cost to buy T-Mobile:  I don’t know exactly what its import is—but no one outside AT&T and, maybe, the FCC really does, either.  But I think a little sensible skepticism is in order.

First, for those who haven’t read it, the letter says, in relevant part:

The purpose of the meeting was to discuss AT&T’s current LTE deployment plans to reach 80 percent of the U.S. population by the end of 2013…; the estimated [Begin Confidential Information] $3.8 billion [End Confidential Information] in additional capital expenditures to expand LTE coverage from 80 to 97 percent of the U.S. population; and AT&T’s commitment to expand LTE to over 97 percent of the U.S. population as a result of this transaction.

That part, “$3.8 billion,” between the words “Begin Confidential Information” and “End Confidential Information” was supposed to be redacted, but apparently wasn’t when the letter was first posted to the FCC’s website.

While Public Knowledge and other critics of the deal would have you believe that this proves AT&T could roll out nationwide LTE service for 1/10 of the cost of the T-Mobile deal, it’s basically impossible to tell what this number really means–except it certainly doesn’t mean that.

Claims about its meaning are actually largely content-less; nothing I’ve seen asks (or can possibly answer) whether the number in the letter was full cost, partial cost, annualized cost, based off of what baseline, etc., etc.  Moreover, unless I’m mistaken, nothing in the letter said anything at all about $3.8 billion being used to relieve congestion, meet future demand, increase speeds, reduce latency, expand coverage in urban areas, etc.  It seems to me that it’s referring to “additional” (additional to what?) capital expense to build infrastructure to make it even possible to offer LTE coverage to 97% of the U.S. population following the merger.  AT&T has from the outset said (bragged, more like it, because it’s supposed to bring lots of jobs and that’s what the politicians care about) that it planned to spend an “additional” $8 billion–additional to the $39 billion required to buy T-Mobile, that is–to build out its infrastructure as part of the deal.  But neither this letter nor any of AT&T’s statements (nor anyone with any familiarity with the relevant facts) has ever said it could or would have full-speed, LTE service available and up and running to 97% of the country for $3.8 billion or even $8 billion–or even merely $39 billion.  In fact, AT&T seemed to be saying that it was going to cost at least $47 billion to make that happen (and I can assure you that doesn’t begin to account for all the costs associated with integrating T-Mobile with AT&T once the $39 billion is out the door).

As I’ve alluded to above, deploying LTE service to rural areas is probably not as important for AT&T as increasing its network’s capacity in urban areas. The T-Mobile deal allows AT&T to alleviate the congestion problems experienced by its existing customers in urban areas more quickly than any other option–and because T-Mobile’s network is already up and running, that’s still true even if the federal government was somehow able to make tons of spectrum immediately available.  Moreover, with respect to the $3.8 billion, as I’ve discussed at length above, without T-Mobile’s–or someone’s!–additional spectrum and the miraculous removal of local government impediments to tower construction, pretty much no amount of money would enable AT&T to actually deliver LTE service to 97% of the country.  Is that what it would cost to build the extra pieces of hardware necessary to support such an offering?  That sounds plausible.  But actually deliver it? Hardly.

And just to play this out, let’s say the letter did mean just that — that AT&T could deliver real, fine LTE service to 97% of the country for a mere $3.8 billion direct, marginal outlay, even without T-Mobile.  It is still the case that none of us outsiders knows what such a claim would assume about where the necessary spectrum would come from and what, absent the merger, the effect would be on existing 3G coverage, congestion, pricing, etc., and what the expected ROI for such a project would be.  Elsewhere in the letter its author states that AT&T considered whether making this investment (without the T-Mobile merger) was prudent, and repeatedly rejected it.  In other words, all those armchair CEOs are organizing AT&T’s business and spending its money without the foggiest clue as to what the real consequences would be of doing so–and then claiming that, although, unlike them, actually in possession of the data relevant to such an assessment, AT&T must be lying, and could only justify spending $39 billion to buy T-Mobile as a means of securing its monopoly power.

And I think it’s important to gut check that claim, as well, as it’s what critics claim to fear (The Ma Bell from the Black Lagoon).  Unpacked, it goes something like this:

Given that:

  1.  AT&T is going to spend $39 billion to buy T-Mobile;
  2. It is going to spend $8 billion to build additional infrastructure;
  3. Having bought T-Mobile, it is going to incur some ungodly amount of expense integrating T-Mobile’s assets and employees with its own;
  4. It is going to incur huge, ongoing additional costs to govern a now-larger, more-complex organization;
  5. It is going to continue to be regulated by the FCC and watched carefully by the DOJ and its unofficial consumer watchdog minions;
  6. It will continue to face competition from its current largest and second-largest competitor;
  7. It will continue to face entry threats from the likes of Dish and LightSquared;
  8. It will continue to face competition from fixed broadband offered by the likes of Comcast and Time Warner;
  9. It will do all this quite publicly, under the watchful eyes of Congress and its union to whom it has made all manner of politically-expedient promises;

 Then it follows that:

  1. Although it can’t muster the gumption to risk $3.8 billion to legitimately (it is claimed) extend full LTE coverage to 97% of the U.S. population, it nevertheless thinks it’s a sure bet that it will be able to recoup all of these expenditures, in this competitive and regulatory environment, by virtue of having thus taken out not its largest, not even its second-largest, but its smallest “national” competitor, and thereby having converted itself into an unfettered monopolist. QED.

The mind boggles.

So.  Back to Milton and his suggestion that I was wrong to claim that the DOJ’s action here is a threat to innovation and progress and his assertion that AT&T’s claims surrounding the benefits of the transaction fail to stand up to scrutiny:  C’mon, Miltons of the world!  Where’s your normally healthy skepticism?  I know you don’t like big infrastructure providers.  I know you’re angry your iPhone isn’t as functional as it is beautiful.  I know capitalists are only slightly more trustworthy than regulators (or is it the other way around?).  But why give in so credulously to the claims of the professional critics?  Isn’t it more likely that the deal’s critics are just blowing smoke here because they don’t like any consolidation?  It doesn’t take much research to understand (to the extent anyone can understand something so complex) the current state of the U.S. broadband market and its discontents–and why something like this merger is a plausible response.  And you don’t have to like, trust, or even stand the sight of any business executive to know that, however stupid or evil, he is still constrained by powerful market forces beyond his ken.  And “Letter-Gate” is just another pseudo-scandal contrived to suit an agenda of aggressive government meddling.

We all ought to be more wary of such claims, less quick to join anyone in condemning big as bad, and far less quick to, implicitly or explicitly, substitute the known depredations of the government for the possible ones of the market without a hell of a lot better evidence to do so.

]]>
https://techliberation.com/2011/09/02/the-spectrum-argument-lives-debunking-letter-gate-and-why-the-doj-is-still-wrong-to-try-to-stop-the-attt-mobile-merger/feed/ 615 38230
A couple of quick thoughts on the DOJ’s filing to block AT&T/T-Mobile https://techliberation.com/2011/08/31/a-couple-of-quick-thoughts-on-the-doj%e2%80%99s-filing-to-block-attt-mobile/ https://techliberation.com/2011/08/31/a-couple-of-quick-thoughts-on-the-doj%e2%80%99s-filing-to-block-attt-mobile/#comments Wed, 31 Aug 2011 17:49:32 +0000 http://techliberation.com/?p=38193

[Cross posted at Truthonthemarket]

As Josh noted, the DOJ filed a complaint today to block the merger. I’m sure we’ll have much, much more to say on the topic, but here are a few things that jump out at me from perusing the complaint:

    • The DOJ distinguishes between the business (“Enterprise”) market and the consumer market. This is actually a good play on their part, on the one hand, because it is more sensible to claim a national market for business customers who may be purchasing plans for widely-geographically-dispersed employees. I would question how common this actually is, however, given that, I’m sure, most businesses that buy group cell plans are not IBM but are instead pretty small and pretty local, but still, it’s a good ploy.
    • But it has one significant problem: The DOJ also seems to be stressing a coordinated effects story, making T-Mobile out to be a disruptive maverick disciplining the bigger carriers. But–and this is, of course an empirical matter I will have to look in to–I highly doubt that T-Mobile plays anything like this role in the Enterprise market, at least for those enterprises that fit the DOJ’s overly-broad description. In fact, the DOJ admits as much in para. 43 of its Complaint. Of course, the DOJ claims this was all about to change, but that’s not a very convincing story coupled with the fact that DT, T-Mobile’s parent, was reducing its investment in the company anyway. The reality is that Enterprise was not a key part of T-Mobile’s business model–if it occupied any cognizable part of it at all– and it can hardly be considered a maverick in a market in which it doesn’t actually operate.
    • On coordinated effects, I think the claim that T-Mobile is a maverick is pretty easily refuted, and not only in the Enterprise realm. As Josh has pointed out in his Congressional testimony, a maverick is a term of art in antitrust, and it’s just not enough that a firm may be offering products at a lower price–there is nothing “maverick-y” about a firm that offers a different, less valuable product at a lower price. I have seen no evidence to suggest that T-Mobile offered the kind of pricing constraint on AT&T that would be required to make it out to be a maverick.

  • Meanwhile, I know this is just a complaint and even post-Twombly pleading standards are lower than standards of proof, but the DOJ does seem to make a lot out of its HHI numbers. In part this is a function of its adoption of a national relevant geographic market. But (as noted above, even for most Enterprise customers) this is just absurd. As the FCC itself has noted, consumers buy cell service where they “live, work and travel.” For most everyone, this is local.
  • Meanwhile, even on a national level, the blithe dismissal of a whole range of competitors is untenable. MetroPCS, Cellular South and many other companies have broad regional coverage (MetroPCS even has next-gen LTE service in something like 17 cities) and roaming agreements with each other and with the larger carriers that give them national coverage. Why they should be excluded from consideration is baffling. Moreover, Dish has just announced plans to build a national 4G network (take that, DOJ claim that entry is just impossible here!). And perhaps most important, the real competition here is not for mobile telephone service. The merger is about broadband. Mobile is one way of getting broadband. So is cable and DSL and WiMax, etc. That market includes such insignificant competitors as Time Warner, Comcast and Cox. Calling this a 4 to 3 merger strains credulity, particularly under the new merger guidelines.
  • Moreover, the DOJ already said as much! In its letter to the FCC on the FCC’s National Broadband Plan the DOJ says:
Ultimately what matters for any given consumer is the set of broadband offerings available to that consumer, including their technical characteristics and the commercial terms and conditions on which they are offered. Competitive conditions vary considerably for consumers in different geographic locales.
  • The DOJ also said this, in the same letter:
[W]ith differentiated products subject to large economies of scale (relative to the size of the market), the Department does not expect to see a large number of suppliers. . . . [Rather, the DOJ cautions the FCC against] striving for broadband markets that look like textbook markets of perfect competition, with many price-taking firms. That market structure is unsuitable for the provision of broadband services.

Quite the different tune, now that it’s the DOJ’s turn to spring into action rather than simply admonish the antitrust activities of a sister agency!
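For readers unfamiliar with the HHI screen the DOJ leans on: it is simply the sum of squared market shares, and under the 2010 Horizontal Merger Guidelines a post-merger HHI above 2,500 combined with an increase of more than 200 points triggers a presumption of enhanced market power. A minimal sketch, using purely illustrative share figures (not the actual 2011 numbers), shows how mechanically the screen fires:

```python
# Herfindahl-Hirschman Index: sum of squared market shares (shares in percent).
# The share figures below are hypothetical, for illustration only.

def hhi(shares):
    return sum(s * s for s in shares)

pre_merger = [32, 31, 17, 11, 9]    # hypothetical: AT&T, Verizon, Sprint, T-Mobile, others
post_merger = [32 + 11, 31, 17, 9]  # AT&T absorbs T-Mobile's hypothetical share

delta = hhi(post_merger) - hhi(pre_merger)  # for merging firms a, b: delta = 2*a*b
print(hhi(pre_merger), hhi(post_merger), delta)  # 2476 3180 704
```

Note how sensitive the screen is to the market definition: computed instead over a local market with five or six carriers plus regional and pre-paid players (as the posts above argue is the right frame), both the levels and the delta can look entirely different, which is exactly the complaint about the DOJ’s choice of a national market.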

I’m sure there is lots more, but I must say I’m really surprised and disappointed by this filing. Effective, efficient provision of mobile broadband service is a complicated business. It is severely hampered by constraints of the government’s own doing — both in terms of the government’s failure to make available spectrum to enable companies to build out large-scale broadband networks, and in local governments’ continued intransigence in permitting new cell towers and even co-location of cell sites on existing towers that would relieve some of the infuriating congestion we now experience.

This decision by the DOJ is an ill-conceived assault on innovation and progress in what may be the one shining segment of our bedraggled economy.

]]>
https://techliberation.com/2011/08/31/a-couple-of-quick-thoughts-on-the-doj%e2%80%99s-filing-to-block-attt-mobile/feed/ 11 38193
FairSearch’s Non-Sequitur Response https://techliberation.com/2011/07/25/fairsearchs-non-sequitur-response/ https://techliberation.com/2011/07/25/fairsearchs-non-sequitur-response/#comments Mon, 25 Jul 2011 21:04:11 +0000 http://techliberation.com/?p=37919

[By Geoffrey Manne and Joshua Wright.  Cross-posted at TOTM]

Our search neutrality paper has received some recent attention.  While the initial response from Gordon Crovitz in the Wall Street Journal was favorable, critics are now voicing their responses.  Although we appreciate FairSearch’s attempt to engage with our paper’s central claims, its response is really little more than an extended non-sequitur and fails to contribute meaningfully to the debate.

Unfortunately, FairSearch grossly misstates our arguments and, in the process, basic principles of antitrust law and economics.  Accordingly, we offer a brief reply to correct a few of the most critical flaws, point out several quotes in our paper that FairSearch must have overlooked when characterizing our argument, and set straight FairSearch's various economic and legal misunderstandings.

We want to begin by restating the simple claims that our paper does—and does not—make.

Our fundamental argument is that claims that search discrimination is anticompetitive are properly treated skeptically because: (1) discrimination (that is, presenting or ranking a search engine's own or affiliated content more prominently than its rivals' in response to search queries) arises from vertical integration in the search engine market (i.e., Google responds to a query by providing not only "10 blue links" but also perhaps a map or a video created by Google or previously organized on a Google-affiliated site (e.g., YouTube)); (2) both economic theory and evidence demonstrate that such integration is generally pro-competitive; and (3) in Google's particular market, evidence of intense competition and constant innovation abounds, while evidence of harm to consumers is entirely absent.  In other words, it is much more likely than not that search discrimination is pro-competitive rather than anticompetitive, and doctrinal error-cost concerns accordingly counsel great hesitation before any antitrust intervention, administrative or judicial.  As we will discuss, these are claims with which FairSearch's lawyers are quite familiar.

FairSearch, however, grossly mischaracterizes these basic points, asserting instead that we claim

“that even if Google does [manipulate its search results], this should be immune from antitrust enforcement due to the difficulty of identifying ‘bias’ and the risks of regulating benign conduct.”

This statement is either intentionally deceptive or betrays a shocking misunderstanding of our central claim, for at least two reasons: (1) we never advocate for complete antitrust immunity, and (2) it trivializes the very real, and universally acknowledged, difficulty of distinguishing between pro- and anticompetitive conduct.

First, we acknowledge the obvious point that, as a theoretical matter, discrimination can amount to an antitrust violation in some cases under certain specific circumstances—not the least important of which is proof of actual competitive harm.  To quote ourselves:

The key question is whether such a bias benefits consumers or inflicts competitive harm. Economic theory has long understood the competitive benefits of such vertical integration; modern economic theory also teaches that, under some conditions, vertical integration and contractual arrangements can create a potential for competitive harm that must be weighed against those benefits . . . .  From a policy perspective, the issue is whether some sort of ex ante blanket prohibition or restriction on vertical integration is appropriate instead of an ex post, fact-intensive evaluation on a case-by-case basis, such as under antitrust law. (Manne and Wright, 2011) (emphasis added).

This is not much of a concession.  While FairSearch tries to move the goalposts by focusing on a straw-man proposition that search bias is categorically immune from antitrust scrutiny, this sleight of hand doesn't accomplish much and reveals what FairSearch is missing.  After all, consider that almost every single form of business conduct can be an antitrust violation under some set of conditions!  The antitrust laws apply in principle to (that is, do not categorically immunize) horizontal mergers, vertical mergers, long-term contracts, short-term contracts, exclusive dealing, partial exclusive dealing, burning down a rival's factory, dealing with rivals, refusing to deal with rivals, boycotts, tying contracts, overlapping boards, and all manner of pricing practices.  Indeed, it is hard to find categories of business conduct that are outright immune from the antitrust laws.  So—we agree: "search bias" can conceivably be anticompetitive.  Unfortunately for FairSearch, we never said otherwise, and it's not a very interesting point to discuss.

With that point firmly established, one can return focus to the topic FairSearch painstakingly avoids throughout its response and on which we think the issue really does (and should) turn:  Where’s the proof of consumer harm?

We make the rather common-sense point that when engaging in a case-by-case factual analysis of the competitive effects of business conduct, one should take advantage of what we already know about the class of business practice generally.  Along those lines, we noted that the economic literature has extensively analyzed the competitive effects of such discrimination and concluded that in most cases it yields significant efficiencies and is therefore unlikely ex ante to amount to an antitrust violation.  Perhaps FairSearch was confused by this argument, though we explicitly frame the choice as one between ex ante categorical regulations and ex post case-by-case analyses, not as one between ex ante prohibitions and "allowing Google free rein to discriminate against competitors," as FairSearch bizarrely claims.  This is something akin to claiming that a defense attorney demands "immunity" for a client with an alibi, rather than the correct outcome under a faithful application of the law.  (Although we hasten to add that it's not clear there would, in fact, be anything at all wrong with Google "discriminating against competitors."  We're sure its shareholders would be apoplectic if it didn't.)

At any rate, even before we get to the question of whether there is any evidence of consumer harm, we do indeed think it is relevant to any analysis that bias is hard to identify, that one user’s “bias” is another user’s “relevant search result,” that a remedy would be difficult to design and harder to enforce, and that the costs of being wrong are significant.  These are not dispositive, nor do we claim they are.  But they do underscore another point that FairSearch misses:  That harmful search bias would be exceptionally difficult to detect even if it were assumed to exist.

This problem can't be brushed off, and it mirrors the well-known uncertainty at the core of the antitrust enterprise: that a given course of conduct may prove pro-competitive or anticompetitive under differing circumstances.  Former AAG Tom Barnett, now a Google critic (representing Expedia), has himself echoed this point, observing that

No institution that enforces the antitrust laws is omniscient or perfect. Among other things, antitrust enforcement agencies and courts lack perfect information about pertinent facts, including the impact of particular conduct on consumer welfare . . . .  We face the risk of condemning conduct that is not harmful to competition . . . and the risk of failing to condemn conduct that does harm competition . . .

Barnett further notes that “discerning whether a monopolist’s actions have hurt or helped competition can be extremely difficult.”

At the same time, FairSearch does not dispute that significant efficiencies can arise from vertical integration in the search engine market.  Rather, it sidesteps this substantial problem in its response, quibbling over the factual details of our shelf space analogy and arguing, incoherently, that Google's conduct violates the antitrust laws merely because it deprives rivals of scale.  But here FairSearch misses the mark in telling ways.

First, on the shelf space analogy: as we've discussed numerous times (e.g., here and here), the shelf space analogy illustrates that promotional arrangements and vertical integration can have obvious benefits; for example, such arrangements can align incentives for product promotion between up- and downstream actors, thereby increasing output and consumer welfare.  FairSearch confuses the analogy by focusing on the fact that supermarkets, unlike Google, certainly do not have monopoly power.  To borrow a quote, we do not think that means what FairSearch thinks it means.  That competitors without market power in highly competitive markets engage in the challenged conduct just as competitors with it do might suggest to the astute antitrust observer that the conduct has real competitive merit unrelated to any exclusionary effect.  Moreover, the shelf space analogy is designed to highlight the essential fact that a firm's being large (or having a large market share) does not mean its conduct cannot also generate consumer benefits!  As the court in Berkey Photo noted:

[A] large firm does not violate § 2 simply by reaping the competitive rewards attributable to its efficient size, nor does an integrated business offend the Sherman Act whenever one of its departments benefits from association with a division possessing a monopoly in its own market. So long as we allow a firm to compete in several fields, we must expect it to seek the competitive advantages of its broad-based activity – more efficient production, greater ability to develop complementary products, reduced transaction costs, and so forth. These are gains that accrue to any integrated firm, regardless of its market share, and they cannot by themselves be considered uses of monopoly power.

The point of the analogy, which FairSearch entirely misses yet does not deny, is that these arrangements yield pro-competitive efficiencies – and that these efficiencies are not a function of monopoly power but rather of efficient—even innovative—forms of business organization.

Similarly, FairSearch’s arguments about scale in the search engine market are unpersuasive and represent a serious misunderstanding of the market dynamics.  FairSearch seems to argue that increased access to ad space leads to higher profits, and that Google, as the recipient of this windfall, can accordingly out-buy its rivals, implying that, eventually, they will simply wither on the vine as Google ends up possessing all the money in the world.  Or something like that.

Of course this argument is outlandish on its face.  Microsoft, which has overwhelming resources, is one of Google's closest competitors and certainly cannot be easily out-bought.  Moreover, the initial claim itself is weak because (1) many advertisers multi-home, dissipating any efficiency that increased access would yield, and (2) advertisers pay per click, and because search engines with fewer users necessarily generate fewer clicks, advertising may be cheaper on smaller platforms.  So, while the value offered by two different-sized platforms may diverge, so do prices.  Merely asserting that one platform has more value and therefore higher prices is not sufficient to demonstrate divergence on the relevant dimension; rather, these factors could well move commensurately, and likely do.

The most glaring flaw in FairSearch’s argument, however, is its failure to present any evidence whatsoever of competitive harm.  In demonstrating a Section 2 violation, the burden is on the plaintiff in the first instance to make a showing of harm to competition.  Rather than engage in this effort itself, FairSearch summarily dismisses our argument because we “do not seek to marshal evidence that Google does not manipulate search results to harm competition.”  However, this is not our burden to bear.  We repeat: FairSearch fails to present any evidence of its own case-in-chief; it relies on an economically incoherent and premature dismissal of claims we did not make (categorical immunity for “bias”) in making its own naked assertions.

Note FairSearch's remarkable sleight of hand here: it (1) attempts to shift the burden of proof to us while (2) simultaneously equating bias with harm to consumer welfare, in an attempt to further lighten its burden.  Bias, however, does not imply harm to consumers, as FairSearch suggests; nor has harmful bias ever been convincingly demonstrated.

FairSearch asserts that research has shown that Google often ranks its own results above those of rivals "without any apparent relationship to the quality of these Google sites as compared to competing sites."  However, Professor Ben Edelman's study, upon which FairSearch relies for this assertion, is a far cry from the sort of rigorous analysis Mr. Barnett called for in Section 2 cases while Assistant Attorney General ("there seems to be consensus that we should prohibit unilateral conduct only where it is demonstrated through rigorous economic analysis to harm competition and thereby to harm consumer welfare").  Indeed, comparing 32 hand-picked search queries across search engines is hardly an adequate sample size or methodology for these purposes, and it certainly does not suggest that Google is unconcerned with the quality of its results.  In any event, so-called "bias," even if proven, may at most represent harm to rivals, and that is not the relevant metric of antitrust injury.  Furthermore, as noted above, such "bias" offers significant benefits and more often than not enhances consumer welfare.

Overall, FairSearch’s critique of our paper is weak and crucially flawed.  FairSearch relies upon a bald assertion that bias exists to equate that bias to consumer harm while conceding (but ignoring) that vertical integration may introduce significant efficiencies in the search engine market.  It tries to wish these efficiencies out of existence by erroneously claiming that monopoly power in the downstream market forecloses that possibility.  The basic economic analysis of vertical integration says otherwise.  But most essentially, FairSearch simply offers no evidence at all of consumer harm.  By alleging antitrust injury, FairSearch has the burden of demonstrating competitive harm in the first instance.  This burden is especially high in the search engine market, where, in contrast to the dearth of evidence of consumer harm, evidence of innovation abounds.

All of which is to say, our original argument bears repeating: claims of anticompetitive conduct should be viewed with serious skepticism when there is abundant evidence of consumer benefits, in the form of innovation and competition, and zero evidence of consumer harm.  If FairSearch wishes to mount a credible challenge to our analysis, it has a lot more work to do.

The FTC makes its Google investigation official, now what? https://techliberation.com/2011/06/24/the-ftc-makes-its-google-investigation-official-now-what/ https://techliberation.com/2011/06/24/the-ftc-makes-its-google-investigation-official-now-what/#comments Fri, 24 Jun 2011 17:10:32 +0000 http://techliberation.com/?p=37461

[By Geoffrey Manne & Joshua Wright.  Cross-posted at Truth on the Market]

No surprise here.  The WSJ announced it was coming yesterday, and today Google publicly acknowledged that it has received subpoenas related to the Commission’s investigation.  Amit Singhal of Google acknowledged the FTC subpoenas at the Google Public Policy Blog:

At Google, we’ve always focused on putting the user first. We aim to provide relevant answers as quickly as possible—and our product innovation and engineering talent have delivered results that users seem to like, in a world where the competition is only one click away. Still, we recognize that our success has led to greater scrutiny. Yesterday, we received formal notification from the U.S. Federal Trade Commission that it has begun a review of our business. We respect the FTC’s process and will be working with them (as we have with other agencies) over the coming months to answer questions about Google and our services. It’s still unclear exactly what the FTC’s concerns are, but we’re clear about where we stand. Since the beginning, we have been guided by the idea that, if we focus on the user, all else will follow. No matter what you’re looking for—buying a movie ticket, finding the best burger nearby, or watching a royal wedding—we want to get you the information you want as quickly as possible. Sometimes the best result is a link to another website. Other times it’s a news article, sports score, stock quote, a video or a map.

It is too early to know the precise details of the FTC's interest.  However, we've been discussing various aspects of the investigation here and at TOTM for the last year.  Indeed, we've written two articles focused on framing and evaluating a potential antitrust case against Google, as well as on the misguided attempts to use the antitrust laws to impose "search neutrality."  We've also written a number of blog posts on Google and antitrust (see here for an archive).

For now, until more details become available, it strikes us that the following points should be emphasized:

  • For several reasons, the Federal Trade Commission's investigation into Google's business practices seems misguided from the perspective of competition policy directed toward protecting consumer welfare.  We hope and expect that the agency will conclude its investigation quickly and without any enforcement action against the company.  But it is important to note that this is merely an investigation, and one that is not necessarily new at that.  More importantly, it is not a full-fledged enforcement action, much less a successful one; and although such investigations are extraordinarily costly for their targets, there is not yet (and there may never be) any allegation of liability inherent in an investigation.
  • In any such case, the focus of concern must always be on consumer harm–not harm to certain competitors.  This is a well known antitrust maxim, but it is certainly appropriately applied here.  We are skeptical that consumer harm is present in this case, and our writings have explored this issue at length.  In brief, Google of today is not the Microsoft of 1998, and the issues and circumstances that gave rise to liability in the Microsoft case are uniformly absent here.
  • Relatedly, most of the claims we have seen surrounding Google's conduct here are of the vertical sort: Google has incorporated (by merger, business development, or technological development) and developed new products or processes to evolve its basic search engine in novel ways, for instance by offering results in the form of maps or videos, or by integrating travel-related search results into its traditional offerings.  As we've written, these sorts of vertical activities are almost always pro-competitive, despite claims to the contrary by aggrieved competitors, and we should confront such claims with extreme skepticism; vertical claims instigated by rivals have historically been viewed with skepticism in antitrust circles.  Failing to subject these claims to scrutiny focused on consumer welfare would be a mistake whose costs would be borne largely by consumers.
  • The fact that Google’s rivals–including most importantly Microsoft itself–are complaining about the company is, ironically, some of the very best evidence that Google’s practices are in fact pro-consumer and pro-competitive.  It is always problematic when competitors use the regulatory system to try to hamstring their rivals, and we should be extremely wary of claims arising from such conduct.
  • We are also troubled by statements emanating from FTC Commissioners suggesting that the agency intends to pursue this case as a so-called “Section 5” case rather than the more traditional “Section 2” case.  We will have to wait to see whether any complaint is actually brought and, if so, under what statutory authority, but a Section 5 case against Google raises serious concerns about effective and efficient antitrust enforcement.  Commissioner Rosch has claimed that Section 5 could address conduct that has the effect of “reducing consumer choice”—an effect that some commentators support without requiring any evidence that the conduct actually reduces consumer welfare.  Troublingly, “reducing consumer choice” seems to be a euphemism for “harm to competitors, not competition,” where the reduction in choice is the reduction of choice of competitors who may be put out of business by pro-competitive behavior.  This would portend an extremely problematic shift in direction for US antitrust law.

Together Geoffrey Manne and Joshua Wright are the authors of two articles on the antitrust law and economics of Google and search engines more broadly, Google and the Limits of Antitrust: The Case Against the Case Against Google, and If Search Neutrality Is the Answer, What’s the Question?

Manne is also the author of “The Problem of Search Engines as Essential Facilities: An Economic & Legal Assessment,” an essay debunking arguments for regulation of search engines to preserve so-called “search neutrality” in TechFreedom’s 2011 book, The Next Digital Decade: Essays on the Future of the Internet.

Among our recent blog posts on the topic are the following:

What’s Really Motivating the Pursuit of Google

Barnett v. Barnett on Antitrust

Sacrificing Consumer Welfare in the Search Bias Debate

Type I Errors in Action, Google Edition

Google, Antitrust, and First Principles

Microsoft Comes Full Circle

Search Bias and Antitrust

The EU Tightens the Noose Around Google

When Google’s Competitors Attack

Antitrust Karma, The Microsoft-Google Wars, and a Question for Rick Rule

DOJ Gears Up to Challenge the Proposed Google ITA Merger

What’s really motivating the pursuit of Google? https://techliberation.com/2011/06/14/whats-really-motivating-the-pursuit-of-google/ https://techliberation.com/2011/06/14/whats-really-motivating-the-pursuit-of-google/#comments Tue, 14 Jun 2011 15:08:08 +0000 http://techliberation.com/?p=37360

I have an op-ed up at Main Justice on FTC Chairman Leibowitz's recent comment, in response to a question about the FTC's investigation of Google, that the FTC is looking for a "pure Section Five case."  With Main Justice's permission, the op-ed is re-printed here:

There’s been a lot of chatter around Washington about federal antitrust regulators’ interest in investigating Google, including stories about an apparent tug of war between agencies. But this interest may be motivated by expanding the agencies’ authority, rather than by any legitimate concern about Google’s behavior.

Last month, in an interview with Global Competition Review, FTC Chairman Jon Leibowitz was asked whether the agency was "investigating the online search market," and he made this startling revelation:

“What I can say is that one of the commission’s priorities is to find a pure Section Five case under unfair methods of competition. Everyone acknowledges that Congress gave us much more jurisdiction than just antitrust. And I go back to this because at some point if and when, say, a large technology company acknowledges an investigation by the FTC, we can use both our unfair or deceptive acts or practice authority and our unfair methods of competition authority to investigate the same or similar unfair competitive behavior . . . . ”

“Section Five” refers to Section Five of the Federal Trade Commission Act. Exercising its antitrust authority, the FTC can directly enforce the Clayton Act but can enforce the Sherman Act only via the FTC Act, challenging as “unfair methods of competition” conduct that would otherwise violate the Sherman Act. Following Sherman Act jurisprudence, traditionally the FTC has interpreted Section Five to require demonstrable consumer harm to apply.

But more recently the commission—and especially Commissioners Rosch and Leibowitz—has been pursuing an interpretation of Section Five that would give the agency unprecedented and largely unchecked authority. In particular, the definition of "unfair" competition wouldn't be confined to the traditional measures (reduction in output or increase in price) but could expand to, well, just about whatever the agency deems improper.

Commissioner Rosch has claimed that Section Five could address conduct that has the effect of "reducing consumer choice"—a standard that a few commentators support without requiring any evidence that the conduct actually reduces consumer welfare. Troublingly, "reducing consumer choice" seems to be a euphemism for "harm to competitors, not competition," where the reduction in choice is the reduction in choices offered by competitors who may be put out of business by competitive behavior.

The U.S. has a long tradition of resisting enforcement based on harm to competitors without requiring a commensurate, strong showing of harm to consumers–an economically-sensible tradition aimed squarely at minimizing the likelihood of erroneous enforcement. The FTC’s invigorated interest in Section Five contemplates just such wrong-headed enforcement, however, to the inevitable detriment of the very consumers the agency is tasked with protecting.

In fact, the theoretical case against Google depends entirely on the ways it may have harmed certain competitors rather than on any evidence of actual harm to consumers (and in the face of ample evidence of significant consumer benefits).

Google has faced these claims at a number of levels. Many of the complaints against Google originate from Microsoft (Bing), Google's largest competitor. Some sites have argued that Google impairs the placement of certain competing websites in its search results, thereby reducing those sites' ability to easily reach Google's users and advertise their competing products. Other sites that offer content like maps and videos complain that Google's integration of these products into its search results has impaired their attractiveness to users.

In each of these cases, the problem is that the claimed harm to competitors does not demonstrably translate into harm to consumers.

For example, Google’s integration of maps into its search results unquestionably offers users an extremely helpful presentation of these results, particularly for users of mobile phones. That this integration might be harmful to MapQuest’s bottom line is not surprising—but nor is it a cause for concern if the harm flows from a strong consumer preference for Google’s improved, innovative product. The same is true of the other claims; harm to competitors is at least as consistent with pro-competitive as with anti-competitive conduct, and simply counting the number of firms offering competing choices to consumers is no way to infer actual consumer harm.

In the absence of evidence of Google’s harm to consumers, then, Leibowitz appears more interested in using Google as a tool in his and Rosch’s efforts to expand the FTC’s footprint. Advancing the commission’s “priority” to “find a pure Section Five case” seems to be more important than the question of whether Google is actually doing anything harmful.

When economic sense takes a back seat to political aggrandizement, we should worry about the effect on markets, innovation and the overall health of the economy.

 

An update on the evolving e-book market: Kindle edition (pun intended) https://techliberation.com/2011/03/01/an-update-on-the-evolving-e-book-market-kindle-edition-pun-intended/ https://techliberation.com/2011/03/01/an-update-on-the-evolving-e-book-market-kindle-edition-pun-intended/#comments Wed, 02 Mar 2011 01:26:44 +0000 http://techliberation.com/?p=35421

[Cross-posted at Truth on the Market]

[UPDATE:  Josh links to a WSJ article telling us that EU antitrust enforcers raided several (unnamed) e-book publishers as part of an apparent antitrust investigation into the agency model and whether it is “improperly restrictive.”  Whatever that means.  Key grafs:

At issue for antitrust regulators is whether agency models are improperly restrictive. Europe, in particular, has strong anticollusion laws that limit the extent to which companies can agree on the prices consumers will eventually be charged. Amazon, in particular, has vociferously opposed the agency practice, saying it would like to set prices as it sees fit. Publishers, by contrast, resist the notion of online retailers’ deep discounting.

It is unclear whether the animating question is whether the publishers might have agreed to a particular pricing model, or to particular prices within that model.  As a legal matter that distinction probably doesn't matter at all; as an economic matter it would seem to be more complicated—to be explored further another day . . . .]

A year ago I wrote about the economics of the e-book publishing market in the context of the dispute between Amazon and some publishers (notably Macmillan) over pricing.  At the time I suggested a few things about how the future might pan out (never a good idea . . . ):

And that’s really the twist.  Amazon is not ready to be a platform in this business.  The economic conditions are not yet right and it is clearly making a lot of money selling physical books directly to its users.  The Kindle is not ubiquitous and demand for electronic versions of books is not very significant–and thus Amazon does not want to take on the full platform development and distribution risk.  Where seller control over price usually entails a distribution of inventory risk away from suppliers and toward sellers, supplier control over price correspondingly distributes platform development risk toward sellers.  Under the old system Amazon was able to encourage the distribution of the platform (the Kindle) through loss-leader pricing on e-books, ensuring that publishers shared somewhat in the costs of platform distribution (from selling correspondingly fewer physical books) and allowing Amazon to subsidize Kindle sales in a way that helped to encourage consumer familiarity with e-books.  Under the new system it does not have that ability and can only subsidize Kindle use by reducing the price of Kindles–which impedes Amazon from engaging in effective price discrimination for the Kindle, does not tie the subsidy to increased use, and will make widespread distribution of the device more expensive and more risky for Amazon.

This "agency model," if you recall, is one where, essentially, publishers, rather than Amazon, determine the price for electronic versions of their books sold via Amazon and pay Amazon a percentage.  The problem from Amazon's point of view, as I mention in the quote above, is that without the ability to control the price of the books it sells, Amazon is limited essentially to fiddling with the price of the reader (the platform) itself in order to encourage more participation on the reader side of the market.  But I surmised (again in the quote above) that fiddling with the price of the platform would be far more blunt and potentially costly than controlling the price of the books themselves, mainly because the latter correlates almost perfectly with usage and the former does not.  In the end, Amazon may subsidize lots of Kindle purchases from which it is never able to recoup its losses, because many of those purchases will have been made by people who had no interest in actually using the devices very much (either because they're sticking with paper or because Apple has leapfrogged the competition).

It appears, nevertheless, that Amazon has indeed been pursuing this pricing strategy.  According to this post from Kevin Kelly,

John Walkenbach noticed that the price of the Kindle was falling at a consistent rate, lowering almost on a schedule. By June 2010, the rate was so unwavering that he could easily forecast the date at which the Kindle would be free: November 2011.
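Walkenbach's forecast amounts to fitting a straight line to the observed price history and solving for the date at which the fitted price hits zero. A minimal sketch of that calculation (the price points below are illustrative round numbers, not Walkenbach's actual Kindle data):

```python
from datetime import date, timedelta

# Illustrative (date, price) observations -- NOT the actual Kindle price history.
obs = [
    (date(2009, 7, 1), 299.0),
    (date(2010, 1, 1), 259.0),
    (date(2010, 6, 1), 189.0),
]

# Ordinary least-squares fit of price against days elapsed since the first observation.
t0 = obs[0][0]
xs = [(d - t0).days for d, _ in obs]
ys = [p for _, p in obs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Solve price(t) = 0 for t and convert back to a calendar date.
zero_day = -intercept / slope
free_date = t0 + timedelta(days=int(zero_day))
print("Projected 'free Kindle' date:", free_date)
```

With real weekly price data the fit would be much tighter, which is what made the "so unwavering" schedule Walkenbach observed forecastable at all.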

There’s even a nice graph to go along with it:

So what about the recoupment risk?  Here’s my new theory:  Amazon, having already begun offering free streaming videos for Prime customers, will also begin offering heavily-discounted Kindles and even e-book subsidies–but will also begin rescinding its shipping subsidy and otherwise make the purchase of dead tree books relatively more costly (including by maintaining less inventory–another way to recoup).  It will still face a substantial threat from competing platforms like the iPad but Amazon is at least in a position to affect a good deal of consumer demand for Kindle’s dead tree competitors.

For a take on what’s at stake (here relating to newspapers rather than books, but I’m sure the dynamic is similar), this tidbit linked from one of the comments to Kevin Kelly’s post is eye-opening:

If newspapers switched over to being all online, the cost base would be instantly and permanently transformed. The OECD report puts the cost of printing a typical paper at 28 per cent and the cost of sales and distribution at 24 per cent: so the physical being of the paper absorbs 52 per cent of all costs. (Administration costs another 8 per cent and advertising another 16.) That figure may well be conservative. A persuasive looking analysis in the Business Insider put the cost of printing and distributing the New York Times at $644 million, and then added this: ‘a source with knowledge of the real numbers tells us we’re so low in our estimate of the Times’s printing costs that we’re not even in the ballpark.’ Taking the lower figure, that means that New York Times, if it stopped printing a physical edition of the paper, could afford to give every subscriber a free Kindle. Not the bog-standard Kindle, but the one with free global data access. And not just one Kindle, but four Kindles. And not just once, but every year. And that’s using the low estimate for the costs of printing.
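The arithmetic in that passage is easy to check with a back-of-the-envelope sketch. Note the subscriber count and Kindle price below are not in the excerpt; they are rough 2010-era figures assumed for illustration:

```python
# OECD cost shares quoted above: printing 28%, sales and distribution 24%.
physical_share = 0.28 + 0.24  # 52% of all costs tied to the physical paper

# Business Insider's low estimate of NYT printing and distribution costs.
printing_and_distribution = 644_000_000  # dollars per year

# Assumed inputs (NOT from the excerpt): rough 2010-era figures.
subscribers = 850_000      # assumed print subscriber count
kindle_3g_price = 189      # approximate 2010 price of the Kindle with free 3G

per_subscriber = printing_and_distribution / subscribers
kindles_each = per_subscriber / kindle_3g_price
print(f"{physical_share:.0%} of costs are physical; "
      f"${per_subscriber:,.0f}/subscriber/year, about {kindles_each:.1f} Kindles each")
```

With those assumptions, the saved printing and distribution budget works out to roughly four high-end Kindles per subscriber per year, consistent with the quote’s claim.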
Tim Wu to the FTC: What does it mean? https://techliberation.com/2011/02/09/tim-wu-to-the-ftc-what-does-it-mean/ https://techliberation.com/2011/02/09/tim-wu-to-the-ftc-what-does-it-mean/#comments Wed, 09 Feb 2011 23:58:38 +0000 http://techliberation.com/?p=34959

As Adam notes, Tim Wu, the Columbia lawprof with the dubious distinction of having originated the term and concept of Net Neutrality, is headed to the FTC as a senior advisor.

Curiously, his guest stint runs for only about four and a half months.  As the WSJ reports:

Mr. Wu, 38, will start his new position on Feb. 14 in the FTC’s Office of Policy Planning, and will help the agency to develop policies that affect the Internet and the market for mobile communications and services. The FTC said Mr. Wu will work in the unit until July 31. Mr. Wu, who is taking a leave from Columbia, said that to work after that date he would have to request a further leave from the university.

Mr. Wu’s claim that the source of the date constraint is Columbia doesn’t pass the smell test.  Now, it is possible that what he says is literally true–and therefore intentionally misleading.  Perhaps he asked only for leave through the end of July and would indeed have to request further leave if he wanted it.  But the implication that Columbia would have trouble granting further leave–especially during the summer!–and that this explains the short tenure seems very fishy to me.

So what else could be going on, while we’re reading inscrutable tea leaves?  Well, for one thing, it could be that Wu has already signed on for some not-yet-public role at Columbia that he prefers not to imperil.  Maybe associate dean or something like that.

But I have another, completely unsupported speculation.  I think the author of The Master Switch (commented on by Josh and me here) and one of the most capable (as far as that goes) proponents of Internet regulation in the land is being brought in to the FTC to help the agency gin up a case against Google.

I think that, with Google-ITA seemingly approaching its denouement, the FTC knows or believes that Google is either planning to abandon the merger or else to enter into a settlement with the DOJ (one the FTC would consider insufficiently restrictive).  In either case, not a full-blown investigation of and intervention into Google’s business.  So the FTC is preparing its own Section 5 case (and Section 2, but who needs that piker when you have the real deal in Section 5?  For previous TOTM takes on Section 5, see, e.g., here and here) and has brought in Wu to help.  Given the switching back and forth between the DOJ and FTC in reviewing Google mergers, it could very well be (I haven’t kept close tabs on Google’s proposed acquisitions) that there is already another merger review waiting at the FTC on which the agency is planning to build its case.

But the phase of the case requiring Wu’s full attention–the conceptual early phase–should be completed by the end of July, so no need to detain him further.

More concretely, it says a lot about the agency’s mindset that it is bringing in the likes of Wu to help with its ongoing forays into the regulation of Internet businesses.  By comparison, Chairman Majoras’ FTC brought in our own Josh Wright as the agency’s first Scholar in Residence.  Sends a very different signal, don’t you think?

The Problem of Search Engines as Essential Facilities https://techliberation.com/2011/02/04/the-problem-of-search-engines-as-essential-facilities/ https://techliberation.com/2011/02/04/the-problem-of-search-engines-as-essential-facilities/#comments Fri, 04 Feb 2011 21:58:25 +0000 http://techliberation.com/?p=34874

For my contribution to Berin Szoka and Adam Marcus’ (of TechFreedom fame) awesome Next Digital Decade book, I wrote about search engine “neutrality” and the implicit and explicit claims that search engines are “essential facilities.” (Check out the other essays on this topic by Frank Pasquale, Eric Goldman and James Grimmelmann, linked to here, under Chapter 7).

The scare quotes around neutrality are there because the term is at best a misnomer as applied to search engines and at worst a baseless excuse for more regulation of the Internet.  (The quotes around essential facilities are there because it is a term of art, but it is also scary).  The essay is an effort to inject some basic economic and legal reasoning into the overly-emotionalized (is that a word?) issue.

So, what is wrong with calls for search neutrality, especially those rooted in the notion of Internet search (or, more accurately, Google, the policy scolds’ bête noire of the day) as an “essential facility” necessitating government-mandated access? As others have noted, the basic concept of neutrality in search is, at root, farcical. The idea that a search engine, which offers its users edited access to the most relevant websites based on the search engine’s assessment of the user’s intent, should do so “neutrally” implies that the search engine’s efforts to ensure relevance should be cabined by an almost-limitless range of ancillary concerns.

Nevertheless, proponents of this view have begun to adduce increasingly detail-laden and complex arguments in favor of their positions, and the European Commission has even opened a formal investigation into Google’s practices, based largely on various claims that it has systematically denied access to its top search results (in some cases paid results, in others organic results) to competing services, especially vertical search engines. To my knowledge, no one has yet claimed that Google should offer up links to competing general search engines as a remedy for its perceived market foreclosure, but Microsoft’s experience with the “Browser Choice Screen” it has now agreed to offer as a consequence of the European Commission’s successful competition case against the company is not encouraging.

These more superficially sophisticated claims are rooted in the notion of Internet search as an “essential facility” – a bottleneck limiting effective competition. These claims, as well as the more fundamental harm-to-competitor claims, are difficult to sustain on any economically reasonable grounds. Understanding why requires some basic understanding of the economics of essential facilities, of Internet search, and of the relevant product markets in which Internet search operates.

The essay goes into much more detail, of course, but the basic point is that Google’s search engine is not, in fact, “essential” in the economically-relevant sense.  Rather, Google’s competitors and other detractors have basically built precisely the most problematic sort of antitrust case, where success itself is penalized (in this case, Google is so good at what it does it just isn’t fair to keep it all to itself!).

Search neutrality and forced access to Google’s results pages are based on the proposition that (Google’s users’ interests be damned) if Google is the easiest way competitors can get to potential users, Google must provide that access.  The essential facilities doctrine, dealt a near-death blow by the Supreme Court in Trinko, has long been on the ropes.  It should remain moribund here.  Google does not preclude, nor does it have the power to preclude, users from accessing competitors’ sites; all users need do is type “www.foundem.com” into their web browser–which works even in Google’s own Chrome browser!  To the extent that Google can and does limit competitors’ access to its search results page, it is not controlling access to an “essential facility” in any sense other than the sense in which Wal-Mart controls access to its own stores.  “Google search results generated by its proprietary algorithm and found on its own web pages” does not constitute a market to which access should be forcibly granted by the courts or legislature.

The set of claims that are adduced under the rubric of “search neutrality” or the  “essential facilities doctrine” against Internet search engines in general and, as a practical matter, Google in particular, are deeply problematic.  They risk encouraging courts and other decision makers to find antitrust violations where none actually exist, threatening to chill innovation and efficiency-enhancing conduct.  In part for this reason, the essential facilities doctrine has been relegated by most antitrust experts to the dustbin of history.

The full text of my essay is below, but you can also find it at SSRN and the book’s website.

The Problem of Search Engines as Essential Facilities (Geoffrey A. Manne)

Correcting Herb Kohl (& Kayak & Bing Travel) on Google/ITA https://techliberation.com/2010/12/02/correcting-herb-kohl-and-kayak-and-bing-travel-on-googleita/ https://techliberation.com/2010/12/02/correcting-herb-kohl-and-kayak-and-bing-travel-on-googleita/#comments Thu, 02 Dec 2010 21:10:28 +0000 http://techliberation.com/?p=33398

Today comes news that Senator Kohl has sent a letter to the DOJ urging “careful review” of the proposed Google/ITA merger. Underlying his concerns (or rather the “concerns raised by a number of industry participants and consumer advocates that I believe warrant careful review”) is this:

Many of ITA’s customers believe that access to ITA’s technology is critical to competition in online air travel search because it cannot be matched by other players in the travel search industry. They claim that ITA’s superior access to information and superior technology enables it to provide faster and better results to consumers. As a result, some of these industry participants and independent experts fear that the current high level of competition among online travel agents and metasearch providers could be undermined if Google were to acquire ITA and start its own OTA or metasearch service. If this were to happen, they argue, consumers would lose the benefits of a robustly competitive online air travel market.

For several reasons, these complaints are without merit and a challenge to the Google/ITA merger would be premature at best—and a costly mistake at worst. The high-tech market is innovative and dynamic. Goods and services that were once inconceivable are now indispensable, and competition has improved the quality of technology while driving down its costs. But as the market continues to change, antitrust interventions are stuck using a static regulatory framework. As the government develops a strategy for regulating competition in the digital marketplace, it must tread carefully—excessive intervention will stifle innovation, harm consumers, and prevent growth. And given the link between innovation and economic growth, the stakes of “getting it right” are high. The individual nature of every decision, however, makes errors in antitrust enforcement inevitable. Some conduct that is bad for competition will be allowed to go on while some conduct that is good for competition will be blocked by intervention.

But prosecuting pro-competitive conduct is almost certainly more costly than mistakenly allowing anticompetitive conduct because mechanisms are in place to mitigate the latter but not the former. The cost of erroneous intervention is the loss to consumers directly and a deterrent effect on innovation—for fear of intervention, companies may not take large risks. Meanwhile, allowing conduct to persist amidst uncertainty allows the potential benefits of conduct to materialize while maintaining checks against practices that are bad for consumers: both the competitive marketplace and future enforcers have the power to mitigate specific anticompetitive outcomes that may arise. Unfortunately, current antitrust enforcement—abetted by influential congressmen like Senator Kohl—is more, rather than less, aggressive against innovative companies in high-tech industries. This aggression threatens to stifle growth and deter future innovation in a market with incredible potential.

Google has become a primary target of this scrutiny, and the company’s proposed acquisition of ITA, a software company that compiles and processes travel data, is a good example of aggressive scrutiny threatening to stifle growth.

Google’s acquisition of ITA is a straightforward merger where one company has decided to purchase another outright (instead of merely purchasing its services through contract). There are good reasons for integration. Most notably, Google gets to exercise direct control over ITA’s talented engineers if it owns ITA—influence that it would not have if the company simply signed a contract with ITA. If Google is correct that it can manage ITA’s resources better than ITA’s current management, then integration makes sense and is valuable for consumers.

The primary concern raised over Google’s proposed acquisition of ITA is that acquisition would “leverage” Google’s alleged dominance into another market—the online travel search market—and permit Google to prevent its competitors from accessing ITA’s high-quality analysis of flights and fares. There are a few problems with this.

  • First, ITA does not provide or own the underlying data (this comes from the airlines themselves); rather it works only to analyze and process it—processing that other companies can and do undertake. It may have developed superior technology to engage in this processing, but that is precisely why it (and consumers) should not be penalized by its competitors’ efforts to hamstring it. Remember—although most of the hand-wringing surrounding this deal concerns Google, it is first and foremost the innovative entrepreneurs at ITA who would be prevented from capitalizing on their success if the deal is stopped.
  • Second, it is hard to see why, under the facts as alleged by the deal’s naysayers, consumers would be worse off if Google owns ITA than if ITA stands on its own. The claims seem to turn on ITA’s indispensability to the online travel industry. But if ITA is so indispensable (if it possesses such market power, in other words), it’s hard to see how its incentives to capitalize on that market power would change simply by virtue of a change in its management. Either ITA already possesses market power and is taking advantage of it (or else its managers are leaving money on the table, in which case it most certainly should be taken over by another set of managers), or it does not actually possess this market power, and its combination with Google, even if Google were to keep all of ITA’s technology for itself, will do little to harm the rest of the industry as competitors step up and step in to take its place.
  • Third, and related to both of the above, is the simple repugnance of hamstringing successful entrepreneurs at the exhortation of their competitors, and of the implication that a successful company’s work product (like ITA’s “superior technology”) must be rendered widely available, by government force if necessary.
  • Meanwhile, Google does not seem to have any interest in selling airline tickets or making airline reservations (just as it doesn’t sell the retail goods one can search for using its site). Instead, its interest is in providing its users easy access to airline flight and pricing data and giving online travel agencies the ability to bid on the sale of tickets to Google users looking to buy. The availability of this information via Google search will lower search costs for consumers and the expected bidding should increase competition and drive down travel costs for consumers. It is easy to see why companies like Kayak and Bing Travel and Expedia and Travelocity might be unhappy about this, but far more difficult to see how their woes should be a problem for the antitrust enforcers (or Congress, for that matter).

The point is not that we know that Google—or any other high-tech company’s—conduct is pro-competitive, but rather that the very uncertainty surrounding it counsels caution, not aggression. As the technology, usage and market structure change, so do the strategies of the various businesses that build up around them. These new strategies present unknown and unprecedented challenges to regulators, and these new challenges call for a deferential approach. New conduct is not necessarily anticompetitive conduct, and if our antitrust regulation does not accept this, we all lose.

[Cross-posted at Truth on the Market]

The EU tightens the noose around Google https://techliberation.com/2010/12/01/the-eu-tightens-the-noose-around-google/ https://techliberation.com/2010/12/01/the-eu-tightens-the-noose-around-google/#comments Wed, 01 Dec 2010 23:01:53 +0000 http://techliberation.com/?p=33371

[Cross-posted at Truth on the Market]

Here we go again.  The European Commission is after Google more formally than a few months ago (but not yet having issued a Statement of Objections).

For background on the single-firm antitrust issues surrounding Google I modestly recommend my paper with Josh Wright, Google and the Limits of Antitrust: The Case Against the Antitrust Case Against Google (forthcoming soon in the Harvard Journal of Law & Public Policy, by the way).

According to one article on the investigation (from Ars Technica):

The allegations of anticompetitive behavior come as Google has acquired a large array of online services in the last couple of years. Since the company holds around three-quarters of the online search and online advertising markets, it is relatively easy to leverage that dominance to promote its other services over the competition.

(As a not-so-irrelevant aside, I would just point out that I found that article by running a search on Google and clicking on the first item to come up.  Somehow I imagine that a truly manipulative, monopolist Google would do a better job of whitewashing the coverage if its ability to tinker with its search results were so complete.)

More to the point, these sorts of leveraging of dominance claims are premature at best and most likely woefully off-base.  As I noted in commenting on the Google/Ad-Mob merger investigation and similar claims from such antitrust luminaries as Herb Kohl:

If mobile application advertising competes with other forms of advertising offered by Google, then it represents a small fraction of a larger market and this transaction is competitively insignificant.  Moreover, acknowledging that mobile advertising competes with online search advertising does more to expand the size of the relevant market beyond the narrow boundaries it is usually claimed to occupy than it does to increase Google’s share of the combined market (although critics would doubtless argue that the relevant market is still “too concentrated”).  If it is a different market, on the other hand, then critics need to make clear how Google’s “dominance” in the “PC-based search advertising market” actually affects the prospects for competition in this one.  Merely using the words “leverage” and “dominance” to describe the transaction is hardly sufficient.  To the extent that this is just a breathless way of saying Google wants to build its business in a growing market that offers economies of scale and/or scope with its existing business, it’s identifying a feature and not a bug.  If instead it’s meant to refer to some sort of anticompetitive tying or “cross-subsidy” (see below), the claim is speculative and unsupported.

The EU press release promotes a version of the “leveraged dominance” story by suggesting that

The Commission will investigate whether Google has abused a dominant market position in online search by allegedly lowering the ranking of unpaid search results of competing services which are specialised in providing users with specific online content such as price comparisons (so-called vertical search services) and by according preferential placement to the results of its own vertical search services in order to shut out competing services.

The biggest problem I see with these claims is that, well, they make no sense.

First, if someone searches for a specific vertical search engine by typing its name into Google, that site will invariably come up as the first result.  If one searches more generally in Google for “price comparison sites,” lots of other sites top the list before Google’s own price comparison site shows up.  And if one is searching for a specific product and hoping to find price comparisons on Google, why on Earth would that person hope to find not Google’s own price comparison, built right into its search engine, but instead a link to another site and another several steps before finding the information?  As a practical matter, Google doesn’t actually do this particularly well (not as well as Bing, in any case, where the link to Bing’s own shopping site almost always comes up first; on Google I often get several manufacturer or other retailer sites before Google’s comparison-shopping link appears further down the page).

But even if it did, it’s hard to see how this could be a problem.  The primary reason for this?  Google makes no revenue (that I know of) from users clicking through to purchase anything from its shopping page.  The page has paid search results only at the bottom (rather than at the top, as on a normal search page), the information is all algorithmically generated, and retailers do not pay to have their information on the page.  If this generates something of value for Google, it does so only in the most salutary fashion: by offering additional resources for users to improve their “search experience” and thus induce them to use Google’s search engine.  Of course this should help Google’s bottom line.  Of course this makes it a better search engine than its competitors.  These are good things, and the fact that Google offers effective, well-targeted and informative search results, presented in multiple forms, demonstrates its degree of innovation and effort (and the industry’s as a whole)–the sort of effort that is typically born of vibrant competition, not the complacency of a fat, happy monopolist.  The claim that Google’s success harms its competitors should fall on deaf ears.

The same goes for claims that Google favors its own maps, by the way–to the detriment of MapQuest (paging Professor Schumpeter . . . ).  Look for the nearest McDonalds in Google and a Google Map is bound to top the list (but not be the exclusive result, of course).  But why should it be any other way?  In effect, what Google does is give you the Web’s content in as accessible and appropriate a form as it can.  By offering not only a link to McDonalds’ web site, as well as various other links, but also a map showing the locations of the nearest restaurants, Google is offering up results in different forms, hoping that one is what the user is looking for.  Why on Earth should Google be required to use someone else’s graphical presentation of the nearby McDonalds restaurants rather than its own simply because the presentation happens to be graphical rather than in a typed list?

So what’s going on?

First off, in essence, the EU is taking up the argument put forth by (the EU’s very own) Foundem in its complaint against Google.  Foundem is a UK price comparison site.  It claims that it was targeted by Google and demoted in Google’s organic search results.  Its argument is laid out here.  But Google responds that it is simply applying its algorithm to the site (along with all other sites) and finds some things lacking.  In fact, all Foundem does, in essence, is pull information from other sites and present it on its own.  While in general this is little different than what Google does (although the quality of the information and its presentation may be different), from the point of view of a user who has already searched once in Google, the prospect of Google serving up sites requiring the user to make duplicate searches in other search engines to find the information she is looking for would seem to be pretty poor.  In part for this reason Google disfavors sites in its searches that simply duplicate other sites’ content.  While Foundem may offer something more than the typical spam site that Google intends to block, this fact is not immediately obvious (and, for what it’s worth, apparently Google was eventually convinced of the difference and has lifted the “penalty” formerly imposed on Foundem).

To make an antitrust claim out of this, one has to adopt a sort of “essential facilities” stance with respect to Google, in essence claiming that (Google’s users’ interests be damned) if Google is the only way users can get to its competitors’ sites, it must provide that access.  The essential facilities doctrine, dealt a near-death blow by the Supreme Court in Trinko, has long been on the ropes.  As Areeda and Hovenkamp said of it, “the essential facility doctrine is both harmful and unnecessary and should be abandoned.”  That is true in this case, as in the others before it.  Google does not preclude, nor does it have the power to preclude, users from accessing Foundem’s site:  all they need do is type “www.foundem.com” into a web browser.  To the extent that Google can and does (or did) limit Foundem’s access to its search results page, it is not controlling access to an “essential facility” in any sense other than the sense in which Wal-Mart controls access to its own stores.  “Google search results generated by its proprietary algorithm and found on its own web pages” is not a market to which access should be forcibly granted by the courts or legislature.  While Europe takes a less critical view of the doctrine (see Microsoft), it shouldn’t.

And as Josh has pointed out, Microsoft’s fingerprints are all over these cases (see also here and here where Microsoft Deputy General Counsel, Dave Heiner, essentially lays out the unfortunate state of play in this arena–a state of play that has ensnared Microsoft in the past).  The relevance of which is just this: When the EU went after Microsoft itself, many of us decried the case in part as a witch hunt by competitors looking for advantage through regulatory means when they were unable to get it through innovation, marketing and the like.  The case against Google in the EU looks to be following the same unfortunate pattern, and even the same unfortunate case-law.  Even if it is not true that the EU actually behaves in this fashion (indeed, appearances can be deceiving, sometimes a cigar is just a cigar, etc., etc.), it is costly to everyone that it is so widely perceived to do so.  This case doesn’t help matters.  It has always been true that the Holy Grail (to its competitors) of a Section 2 (or Dominance) case against Google was a substantive stretch but a near-inevitability nonetheless.  But as Josh and I conclude our paper:

Indeed, it is our view that in light of the antitrust claims arising out of innovative contractual and pricing conduct, and the apparent lack of any concrete evidence of anticompetitive effects or harm to competition, an enforcement action against Google on these grounds creates substantial risk for a “false positive” which would chill innovation and competition currently providing immense benefits to consumers.

The cost of poorly-considered, seemingly politicized, competitor-induced antitrust cases is substantial.
