Antitrust & Competition Policy – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

America Does Not Need a Digital Consumer Protection Commission
https://techliberation.com/2023/08/10/america-does-not-need-a-digital-consumer-protection-commission/
Thu, 10 Aug 2023

The New York Times today published my response to an oped by Senators Lindsey Graham & Elizabeth Warren calling for a new “Digital Consumer Protection Commission” to micromanage the high-tech information economy. “Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution,” I argue. Here’s my full response:

Senators Lindsey Graham and Elizabeth Warren propose a new federal mega-regulator for the digital economy that threatens to undermine America’s global technology standing.

A new “licensing and policing” authority would stall the continued growth of advanced technologies like artificial intelligence in America, leaving China and others to claw back crucial geopolitical strategic ground.

America’s digital technology sector enjoyed remarkable success over the past quarter-century — and provided vast investment and job growth — because the U.S. rejected the heavy-handed regulatory model of the analog era, which stifled innovation and competition.

The tech companies that Senators Graham and Warren cite (along with countless others) came about over the past quarter-century because we opened markets and rejected the monopoly-preserving regulatory regimes that had been captured by old players.

The U.S. has plenty of federal bureaucracies, and many already oversee the issues that the senators want addressed. Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution.

Podcast: Why Ban Direct Electric Vehicle Sales?
https://techliberation.com/2022/05/23/podcast-why-ban-direct-electric-vehicle-sales/
Mon, 23 May 2022

Why is it illegal in many states to purchase an electric vehicle directly from a manufacturer? In this new Federalist Society podcast, Univ. of Michigan law school professor Daniel Crane and I examine how state protectionist barriers block choice and innovation for no good reason whatsoever. The only group that benefits from these protectionist, anti-consumer direct sales bans is local car dealers who don't want the competition.

Additional Reading:

Podcast: Remember FAANG?
https://techliberation.com/2022/05/10/podcast-remember-faang/
Tue, 10 May 2022

Corbin Barthold invited me on Tech Freedom’s “Tech Policy Podcast” to discuss the history of antitrust and competition policy over the past half century. We covered a huge range of cases and controversies, including: the DOJ’s mega cases against IBM & AT&T, Blockbuster and Hollywood Video’s derailed merger, the Sirius-XM deal, the hysteria over the AOL-Time Warner merger, the evolution of competition in mobile markets, and how we finally ended that dreaded old MySpace monopoly!

What does the future hold for Google, Facebook, Amazon, and Netflix? Do antitrust regulators at the DOJ or FTC have enough to mount a case against these firms? Which case is most likely to have legs?

Corbin and I also talked about progress more generally and the troubling rise of more and more Luddite thinking on both the left and right. I encourage you to give it a listen:

Is the FTC's Antitrust Enforcement Still Focused on Consumers?
https://techliberation.com/2021/07/12/is-the-ftcs-antitrust-enforcement-still-focused-on-consumers/
Mon, 12 Jul 2021

The Federal Trade Commission (FTC) voted on July 1 to withdraw its public affirmation of consumer welfare as the guiding principle for antitrust enforcement. While this change is symbolic at this point, it weakens the agency's public commitment to an objective, consumer-based approach to antitrust. The result opens the door to politicized and unprincipled antitrust enforcement that will ultimately hurt rather than benefit consumers.

The FTC is the nation's primary consumer protection agency, focused on ensuring a healthy market that avoids the dangers of monopolistic practices. The statement on the agency's antitrust enforcement had been uncontroversial up to this point. A bipartisan group of commissioners passed the statement in 2015, during the Obama Administration, and it primarily clarified that the agency's antitrust enforcement under Section 5 of the FTC Act, which grants the agency authority over unfair methods of competition, was guided by consumer welfare. In other words, the FTC would focus on those acts that cause or are likely to cause harm to consumers, based on objective economic analysis rather than on the effects of business moves on competition itself or on other policy standards. The statement sought to provide clarity to consumers and businesses; in fact, the sole vote against it was cast on the grounds that the statement was too abbreviated to provide meaningful guidance.

Despite these uncontroversial origins, on Thursday at a hastily announced open meeting, the current FTC voted 3-2 to withdraw this statement. The withdrawal is the latest signal that antitrust policy, particularly at the FTC, is shifting away from a focus on consumers and the consumer welfare standard. Instead, there are now real concerns that the FTC will enforce antitrust policy in a way that promotes competitors or ideology at consumers' expense.

More specifically, rejecting the consumer welfare standard signals that the FTC may apply its enforcement power in more subjective ways based on shifting political motives and policy preferences, as was seen in earlier eras of antitrust enforcement. For example, absent the consumer welfare standard, the FTC could act against some of the largest tech companies to break them up or prevent mergers even though consumers were not harmed—or were even helped—by these changes in the market. This shift would have three specific, if related, implications.

First, it would undermine consumers' confidence in the FTC's actions. It is now far less clear what standards will guide antitrust enforcement, whether those standards are truly objective, and, as a result, what the purpose behind any given enforcement action is.

Second, such expansive enforcement could diminish the options available to consumers. Without the consumer welfare standard, aggressive antitrust enforcement could lead to regulatory interventions in competitive and dynamic markets untethered from data-based, consumer-focused analysis. Such unnecessary enforcement could raise costs or eliminate products, leaving consumers without access to products they enjoy or facing higher prices, not because of unfair or anti-competitive behavior but because of political animus against a particular industry.

Finally, this shift away from the consumer welfare standard is likely to result in inefficient markets. Unprincipled or politically motivated enforcement could result in some products and services never making it to consumers. In other cases, markets may find certain “competitors” kept alive past their value, or other markets could remain with few choices because companies fear that entrance would be considered anticompetitive. Without the consumer welfare standard, misguided notions of concentration or “bigness” could result in a less beneficial market and instead benefit competitors with inferior products that would not have otherwise survived—all to the detriment of consumers.

When regulators move away from an objective, consumer-focused approach to antitrust, it is ultimately the consumers who are harmed in the form of higher prices, inferior products, and less innovation. As Commissioner Christine Wilson stated prior to the vote, “If the Commission is no longer focused on consumer welfare then consumers will be harmed.”

A Return of the Trustbusters Could Harm Consumers
https://techliberation.com/2021/04/13/a-return-of-the-trustbusters-could-harm-consumers/
Tue, 13 Apr 2021

Is it time for the return of the trustbusters? Some politicians seem to imply that today's tech giants have become modern-day robber barons taking advantage of the American consumer, and, as a result, they argue that it is time for a return of aggressive antitrust enforcement and for dramatic changes to existing antitrust interpretations to address the concerns associated with today's big business.

This criticism is not limited to one side of the aisle, with Senators Amy Klobuchar (D-MN) and Josh Hawley (R-MO) both proposing their own dramatic overhauls of antitrust laws and the House Judiciary Committee majority issuing a report that sharply criticizes the current technology market. In both cases these proposals create presumptive bans on mergers for companies of a certain size and lower the government's burden for intervening and proving its case. I have previously analyzed the potential impact of Senator Klobuchar's proposal, and Senator Hawley's proposal raises many similar concerns with its merger ban and its shift away from existing objective standards.

Proponents on both sides of the aisle argue changing current antitrust standards is needed to fight big business, but sadly these modern-day trustbusters may not be the heroes they see themselves as. In fact, such a shift would harm American consumers and small businesses well beyond the tech sector.

The Trustbusters-Era Standards Would Fail Consumers

The original trustbusters of the late 19th and early 20th century created a system that was not always clear and could be abused by regulators subjectively determining what was and was not anti-competitive behavior. The result was that, in this earlier era, businesses and consumers could never be certain what behaviors would be considered violations.

The shift to the consumer welfare standard helped fix that problem by providing an objective framework that uses economic analysis to weigh the risks and benefits of behavior and judges it based on its impact on consumers, not on specific competitors. Unfortunately, these new proposals would shift away from this objective focus and return to a presumption that big is bad. This shift would be bad news not only for big business but for smaller businesses and consumers as well. Small businesses would lose an important exit-strategy option under the presumptive ban on mergers with large companies, and consumers would miss out on benefits such as price reductions, improvements, and innovations that these mergers could bring.

While much of the debate around antitrust changes focuses on large tech firms such as Google, Apple, Facebook, and Amazon, changing antitrust laws would impact far more of the economy than just tech. Both the Hawley and Klobuchar proposals would bar mergers unless there is strong evidence proving their value (a “regulatory presumption” against mergers), but this presumption would impact industries such as pharmaceuticals, finance, and agriculture that also frequently have mergers and acquisitions that benefit consumers by helping to expand the distribution of a product or improve on an existing service. In fact, companies including L’Oreal and Nike could find any mergers or acquisitions presumptively prohibited under the limits in these proposals.

Existing Standards Can Adapt to Dynamic Markets Like Tech

Existing standards are still able to address the concerns associated with dynamic and changing markets as well as with more established ones. For example, the Antitrust Modernization Commission concluded, "There is no need to revise the antitrust laws to apply different rules to industries in which innovation, intellectual property, and technological change are central features."

Sometimes regulators' sense of a technology market is proven wrong by the evolution of a technology or by the disruption caused by a dramatic shift in the industry. For example, antitrust debates used to focus on MySpace and AOL, which have now become things of internet nostalgia. Today's tech giants face growing challenges not only from each other in many cases, but also from many newer entrants, from Clubhouse and TikTok to Zoom and Shopify. Removing the requirement that the government firmly establish the existing elements of an antitrust case would risk unnecessary intervention in the market or, more likely, could prevent actions that benefit consumers.

Some question whether this economic analysis-based standard can handle the zero-price services offered by many technology companies. While price is often the easiest focus, the standard also considers issues such as quality and innovation, making it elastic enough to address potential concerns even when the price is zero. Still, this does not mean the definition of harm under the consumer welfare standard should be expanded to cover a litany of concerns that cannot objectively be shown to cause market harm.

Trustbusters’ Concerns with Tech Are Unlikely to Be Solved by Antitrust

Antitrust is also a poor tool for addressing concerns such as data privacy or content moderation, and using it that way could invite future abuse for other political ends. There is no guarantee that smaller companies would respond to existing market demands around issues such as content moderation any differently than the current large players. Additionally, when it comes to privacy and targeted advertising, smaller platforms would have to find new revenue sources and might be forced to monetize their platforms more heavily to stay afloat without being able to rely on the revenue of a larger parent company. Finally, there is no guarantee that these smaller companies would be more innovative or dynamic, particularly as existing teams and talent are divided by breakups and walls are erected to prevent entry into certain markets.

The good news is some policymakers have realized that these problems exist and argued for preserving the existing framework and addressing these other concerns with appropriately targeted policies. For example, Sen. Mike Lee recently defended the consumer welfare standard and was critical of the negative impact “radically alter[ing] our antitrust regime” could have while still questioning some recent decisions around content moderation.

Conclusion

Many have hoped for a return of bipartisan cooperation in Washington, but unfortunately bad ideas can also emerge on both sides of the aisle. Shifting away from the consumer welfare standard would ultimately harm consumers at a time when innovation and economic recovery are especially critical.

Video: Lessons from the "Hall of Fallen Giants"
https://techliberation.com/2021/03/17/video-lessons-from-the-hall-of-fallen-giants/
Wed, 17 Mar 2021

Here’s a new animated explainer video that I narrated for the Federalist Society’s Regulatory Transparency Project. The 3-minute video discusses how earlier “tech giants” rose and fell as technological innovation and new competition sent them off to what the New York Times once appropriately called “The Hall of Fallen Giants.” It’s a continuing testament to the power of “creative destruction” to upend and reorder markets, even as many pundits insist that there’s no possibility change can happen.

This is an important lesson for us to remember today, as I noted in a recent editorial for The Hill on why "Open-ended antitrust is an innovation killer":

Those who worry about today’s largest tech giants becoming supposedly unassailable monopolies should consider how similar fears were expressed not so long ago about other tech titans, many of which we laugh about today. Just 14 years ago, headlines proclaimed that “MySpace Is a Natural Monopoly,” and asked, “Will MySpace Ever Lose Its Monopoly?” We all know how that “monopoly” ceased to exist. At the same time, pundits insisted “Apple should pull the plug on the iPhone,” since “there is no likelihood that Apple can be successful in a business this competitive.” The smartphone market of that era was viewed as completely under the control of BlackBerry, Palm, Motorola and Nokia. A few years prior to that, critics lambasted the merger of AOL and TimeWarner as a new corporate “Big Brother” that would decimate digital diversity and online competition.

Accordingly, policymakers should be humble and recognize that, "it's better to let rivalry and innovation emerge organically," and only bring in the wrecking ball of heavy-handed antitrust regulation as a last resort, I argued. Technological change and entrepreneurialism have a way of upending and reordering markets when we least expect it. Just ask all those members of the Hall of Fallen Giants.

5 Tech Policy Topics to Follow in the Biden Administration and 117th Congress
https://techliberation.com/2020/11/12/5-tech-policy-topics-to-follow-in-the-biden-administration-and-117th-congress/
Thu, 12 Nov 2020

In a five-part series at the American Action Forum published prior to the 2020 presidential election, I presented the candidates' positions on a range of tech policy topics, including the race to 5G, Section 230, antitrust, and the sharing economy. Now that the election is over, it is time to examine which tech policy topics will gain more attention and how the debate around various tech policy issues may change. In no particular order, here are five key tech policy issues to be aware of heading into a new administration and a new Congress.

The Use of Soft Law for Tech Policy

In 2021, America will likely still have a divided government, with Democrats controlling the White House and House of Representatives and Republicans expected to narrowly control the Senate. A divided government, particularly between the two houses of Congress, will likely mean that many tech policy proposals face logjams, leaving many tech policy questions without the legislation or hard-law framework that might be desired. As a result, we are likely to continue to see "soft law"—regulation by various sub-regulatory means such as guidance documents, workshops, and industry consultations—rather than formal action. While a Biden Administration will also likely bring more formal regulatory action from the administrative state, such actions require a lengthy process of comments and formal or informal rulemaking. As technology continues to accelerate, many agencies turn to soft law to avoid "pacing problems," where policy cannot react as quickly as technology and rules may be outdated by the time they go into effect.

A soft law approach can be preferable to a hard law approach because it can often better adapt to rapidly changing technologies. Policymakers in the new administration, however, should work to use this tool in a way that enables innovation, with appropriate safeguards to keep these actions from becoming a crushing regulatory burden.

Return of the Net Neutrality Debate

One key difference between President Trump's and President-elect Biden's stances on tech policy concerns whether the Federal Communications Commission (FCC) should classify internet service providers (ISPs) as Title II "common carrier services," thereby enabling regulations such as "net neutrality" that place additional requirements on how these service providers can prioritize data. President-elect Biden has been clear in the past that he favors reinstating net neutrality.

This classification and its accompanying regulations were imposed during the Obama Administration, and the FCC removed both the Title II classification and the additional "net neutrality" regulations during the Trump Administration. Critics of these changes made many hyperbolic claims at the time, such as that Netflix would be interrupted or that ISPs would use their freedom in a world without net neutrality to block abortion resources or pro-feminist groups. These concerns have proven to be misguided. If anything, the COVID-19 pandemic has shown the benefits of the robust internet infrastructure and expanded investment that a light-touch approach has yielded.

It is likely that net neutrality will once again be debated. Beyond the imposition of these restrictions themselves, repeated changes to such a key classification could create additional regulatory uncertainty and deter or delay investment and innovation in this valuable infrastructure. To overcome such concerns, congressional action could fashion certainty in a bipartisan and balanced way and avoid dramatic back-and-forth swings.

Debates Regarding Sharing Economy Providers' Classification as Independent Contractors

California voters passed Proposition 22, undoing the misguided reclassification of app-based service drivers as employees rather than independent contractors under AB5; during the campaign, however, President-elect Biden stated that he supports AB5 and called for a similar approach nationwide. Such an approach would make things more difficult for new sharing economy platforms and for a wide range of independent workers (such as freelance journalists) at a time when the country is trying to recover economically.

Changing classifications to make it more difficult to treat service providers as independent contractors makes it less likely that companies such as Fiverr or TaskRabbit could provide platforms for individuals to offer their skills. Reclassification as employees also misunderstands the ways in which many people choose to engage in gig economy work and the advantages that its flexibility offers. As my AAF colleague Isabel Soto notes, a similar national approach found in the Protecting the Right to Organize (PRO) Act "could see between $3.6 billion and $12.1 billion in additional costs to businesses" at a time when many are seeking to recover during the recession. Instead, both parties should look for solutions that preserve the flexible arrangements many seek in such work, while allowing creative solutions and opportunities for businesses that wish to provide additional benefits to workers without risking reclassification.

Shifting Conversations and Debates Around Section 230 

Section 230 has recently faced most of its criticism from Republicans regarding allegations of anti-conservative bias. President-elect Biden, however, has also called to revoke Section 230 and to set up a taskforce regarding “Online Harassment and Abuse.” While this may seem like a positive step to resolving concerns about online content, it could also open the door to government intervention in speech that is not widely agreed upon and chip away at the liability protection for content moderation. 

For example, even though the Stop Enabling Sex Trafficking Act targeted the heinous crime of sex trafficking (which was already not subject to Section 230 protection) and was aimed at companies such as Backpage, where it was known such illegal activity was being conducted, it has resulted in legitimate speech such as Craigslist personal ads being removed and in companies such as Salesforce being sued over what third parties used their products for. A carveout for hate speech or misinformation would only pose more difficulties for many businesses. These terms do not have clearly agreed-upon meanings and often require far more nuanced understanding in content moderation decisions. Enforcing limits on online speech, even on distasteful and hateful language, would dramatically depart from the First Amendment jurisprudence holding that such speech is still protected in the United States, and truly enforcing such limits would require significant government intrusion. In the UK, for example, an average of nine people a day were questioned or arrested over offensive or harassing "trolling" in online posts, messages, or forums under a law targeting online harassment and abuse of the sort the taskforce would be expected to consider.

Online speech has provided new ways to connect, and Section 230 keeps the barriers to entry low. It is fair to be concerned about the impact of negative behavior, but policymakers should also recognize the impact that online spaces have had on allowing marginalized communities to connect and be concerned about the unintended consequences changes to Section 230 could have. 

Continued Antitrust Scrutiny of “Big Tech” 

One part of the “techlash” that shows no sign of diminishing in the new administration or new Congress is using antitrust to go after “Big Tech.” While it remains to be seen if the Biden Department of Justice will continue the current case against Google, there are indications that they and congressional Democrats will continue to go after these successful companies with creative theories of harm that do not reflect the current standards in antitrust. 

Instead of assuming that a large and popular company automatically merits competition scrutiny, or attempting to use antitrust to achieve policy changes for which it is an ill-fitted tool, the next administration should return to the principled approach of the consumer welfare standard. Under such an approach, antitrust focuses on consumers, not competitors. Companies would need to be shown to be dominant in their market, to be abusing that dominance in some way, and to be harming consumers. This approach also provides an objective standard that lets companies and consumers know how actions will be judged under competition law. Based on what is publicly known, the proposed cases against the large tech companies fail at least one element of this test.

There will likely be a shift in some of the claimed harms, but unfortunately scrutiny of large tech companies and calls to change antitrust laws to go after these companies are likely to continue. 

Conclusion 

There are many other technology and innovation issues the next administration and Congress will see, including not only the issues mentioned above but also emerging technologies like 5G, the Internet of Things, and autonomous vehicles. Other issues, such as the digital divide, provide an opportunity for policymakers on both sides of the aisle to come together, craft creative and adaptable solutions, and have a beneficial impact. Hopefully, the Biden Administration and the new Congress will continue a light-touch approach that allows entrepreneurs to engage with innovative ideas and continues American leadership in the technology sector.

How Are We Ever Going to Stop the Blockbuster Video Monopoly?
https://techliberation.com/2020/07/21/how-are-we-ever-going-to-stop-the-blockbuster-video-monopoly/
Tue, 21 Jul 2020

Does anyone remember Blockbuster and Hollywood Video? I assume most of you do, but wow, doesn’t it seem like forever ago when we actually had to drive to stores to get movies to watch at home? What a drag that was!

Yet, just 15 years ago, that was the norm and those two firms were the titans of video distribution, so much so that federal regulators at the Federal Trade Commission looked to stop their hegemony through antitrust intervention. But then those firms and whatever “market power” they possessed quickly evaporated as a wave of Schumpeterian creative destruction swept through video distribution markets. Both those firms and antitrust regulators had completely failed to anticipate the tsunami of technological and marketplace changes about to hit in the form of alternative online video distribution platforms as well as the rise of smartphones and robust nationwide mobile networks.

Today, this serves as a cautionary tale of what happens when regulatory hubris triumphs over policy humility, as Trace Mitchell and I explain in this new essay for National Review Online entitled, "The Crystal Ball of Antitrust Regulators Is Cracked." As we note:

There is no discernable end point to the process of entrepreneurial-driven change. In fact, it seems to be proliferating rapidly. To survive, even the most successful companies must be willing to quickly dispense with yesterday’s successful business plans, lest they be steamrolled by the relentless pace of technological change and ever-shifting consumer demands. It is easy to understand why some people find it hard to imagine a time when Amazon, Apple, Facebook, and Google won’t be quite as dominant as they are today. But it was equally challenging 20 years ago to imagine that those same companies could disrupt the giants of that era.

Hopefully today’s policymakers will have a little more patience and trust competition and continued technological innovation to bring us still more wonderful video choices.

(Embedded chart: Blockbuster Video US store locations between 1986 and 2019, from r/dataisbeautiful)
There are good reasons to be skeptical that automation will unravel the labor market
https://techliberation.com/2019/07/08/there-are-good-reasons-to-be-skeptical-that-automation-will-unravel-the-labor-market/
Mon, 08 Jul 2019

When it comes to the threat of automation, I agree with Ryan Khurana: “From self-driving car crashes to failed workplace algorithms, many AI tools fail to perform simple tasks humans excel at, let alone far surpass us in every way.” Like myself, he is skeptical that automation will unravel the labor market, pointing out that “[The] conflation of what AI ‘may one day do’ with the much more mundane ‘what software can do today’ creates a powerful narrative around automation that accepts no refutation.”

Khurana marshals a number of examples to make this point:

Google needs to use human callers to impersonate its Duplex system on up to a quarter of calls, and Uber needs crowd-sourced labor to ensure its automated identification system remains fast, but admitting this makes them look less automated…

London-based investment firm MMC Ventures found that out of the 2,830 startups they identified as being “AI-focused” in Europe, 40% used no machine learning tools, whatsoever.

I’ve been collecting examples of the AI hype machine as well. Here are some of my favorites.

From Rodney Brooks comes this corrective:

Chris Urmson, the former leader of Google’s self-driving car project, once hoped that his son wouldn’t need a driver’s license because driverless cars would be so plentiful by 2020. Now the CEO of the self-driving startup Aurora, Urmson says that driverless cars will be slowly integrated onto our roads “over the next 30 to 50 years.”

Judea Pearl, a pioneer of Bayesian networks and causal inference, said last year that “All the impressive achievements of deep learning amount to just curve fitting,” a technique that was developed decades ago.

Earlier this year, IBM shut down its Watson AI tool for drug discovery.

Mike Mallazzo said it this way: “The investors know it’s bullshit. When venture capitalists say they are looking to add ‘A.I. companies’ to their portfolio, what they really want is a technological moat built around access to uniquely valuable data. If it’s beneficial for companies to sprinkle in a little sex appeal and brand this as ‘A.I.,’ there’s no incentive to stop them from doing so.”

And there is the problem of cost:

As I explained before, the large pecuniary costs in big data technologies don’t speak to the equally expensive task of overhauling management techniques to make the new systems work. New technologies can’t be seamlessly adopted within firms, they need management and process innovations to make the new data-driven methods profitable. And to be honest, we just aren’t there yet.

The “A La Carte” Wars Come to an End https://techliberation.com/2019/04/12/the-a-la-carte-wars-come-to-an-end/ https://techliberation.com/2019/04/12/the-a-la-carte-wars-come-to-an-end/#comments Fri, 12 Apr 2019 14:26:38 +0000 https://techliberation.com/?p=76476

A decade ago, a heated debate raged over the benefits of “a la carte” (or “unbundling”) mandates for cable and satellite TV operators. Regulatory advocates said consumers wanted to buy TV channels individually to lower costs. The FCC, under former Republican Chairman Kevin Martin, came close to mandating a la carte regulation.

But the math just didn’t add up. A la carte mandates, many economists noted, would actually cost consumers just as much (or even more) once they repurchased all the individual channels they desired. And it wasn’t clear people really wanted a completely atomized one-by-one content shopping experience anyway.

Throughout media history, bundles of all different sorts have been used across many different sectors (books, newspapers, music, etc.). This was because consumers often enjoyed getting a package of diverse content delivered to them all in one place. Bundling also helped media operators create and sustain a diversity of content using creative cross-subsidization schemes. The traditional newspaper format and business model is perhaps the greatest example of media bundling. The classifieds and sports sections helped cross-subsidize hard news (especially local reporting). See this 2008 essay by Jeff Eisenach and me for more details on the economics of a la carte.
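The revenue logic of bundling can be seen in a textbook-style numerical sketch. The viewers, channels, and valuations below are my own invented toy numbers (in the spirit of the classic Adams–Yellen bundling example), not figures from the essay: two viewers with opposite tastes make a la carte pricing leave money on the table, while a bundle serves both.

```python
# Toy bundling arithmetic (hypothetical numbers): two viewers with
# opposite tastes over two channels.
valuations = {"viewer_a": {"sports": 9, "news": 1},
              "viewer_b": {"sports": 1, "news": 9}}

def best_single_price_revenue(channel):
    """Best uniform a la carte price for one channel, trying each viewer's valuation."""
    vals = [v[channel] for v in valuations.values()]
    return max(p * sum(1 for v in vals if v >= p) for p in vals)

# A la carte: each channel sells only to its one high-value fan.
a_la_carte = sum(best_single_price_revenue(c) for c in ("sports", "news"))

# Bundle: priced at each viewer's total valuation, it sells to everyone.
bundle_vals = [sum(v.values()) for v in valuations.values()]
bundle = max(p * sum(1 for v in bundle_vals if v >= p) for p in bundle_vals)

print(a_la_carte, bundle)  # the bundle earns more AND every viewer gets both channels
```

The cross-subsidy shows up directly: under the bundle, each viewer's strong channel effectively pays for the weak one, which is how a 500-channel lineup can sustain niche programming.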

Yet, with the rise of cable and satellite television, some critics protested the use of bundles for delivering content. Even though it was clear that the incredible diversity of 500+ channels on pay TV was directly attributable to strong channels cross-subsidizing weaker ones, many regulatory advocates said we would be better off without bundles. Moreover, they said, online video markets could show us the path forward in the form of radically atomized content options and cheaper prices.

Flash-forward to today. As this Wall Street Journal article points out, online video providers are rejecting a la carte and recreating content bundles to keep a diversity of programming flowing. This happened in unregulated markets without any FCC rules. YouTube, Hulu, PlayStation, and many other online video providers are creating new bundles and monetization schemes.

It is also worth noting that this same sort of “re-bundling” of content is happening with online news sources and other digital platforms as various sites struggle to find content monetization schemes that can sustain diverse, high-quality content in the Digital Era. Content bundling and various paywall schemes are helping them do so.

The lesson here is that the economics of content creation and delivery are dynamic, challenging, and extremely hard to predict. Mandating “a la carte” unbundling of content sounded smart and well-intentioned to many people a decade ago, but it proved problematic even in highly competitive online markets. Thankfully, we did not mandate unbundling by law. We waited and watched to see how it naturally played out in various markets. We now have a better feel for how big a mistake mandatory a la carte would likely have been in practice.

Schumpeter vs. the “Techlash” https://techliberation.com/2019/04/09/schumpeter-vs-the-techlash/ https://techliberation.com/2019/04/09/schumpeter-vs-the-techlash/#comments Tue, 09 Apr 2019 14:00:37 +0000 https://techliberation.com/?p=76471

In my first essay for the American Institute for Economic Research, I discuss what lessons the great prophet of innovation Joseph Schumpeter might have for us in the midst of today’s “techlash” and rising tide of techopanics. I argue that, “[i]f Schumpeter were alive today, he’d have two important lessons to teach us about the techlash and why we should be wary of misguided interventions into the Digital Economy.” Specifically:
We can summarize Schumpeter’s first lesson in two words: Change happens. But disruptive change only happens in the right policy environment. Which gets to the second great lesson that Schumpeter can still teach us today, and which can also be summarized in two words: Incentives matter. Entrepreneurs will continuously drive dynamic, disruptive change, but only if public policy allows it.
Is There a Kill Zone in Tech? https://techliberation.com/2018/11/07/is-there-a-kill-zone-in-tech/ https://techliberation.com/2018/11/07/is-there-a-kill-zone-in-tech/#comments Wed, 07 Nov 2018 22:24:53 +0000 https://techliberation.com/?p=76409

Recently, Noah Smith explored an emerging question in tech. Is there a kill zone where new and innovative upstarts are being throttled by the biggest players? He explains,

Facebook commissioned a study by consultant Oliver Wyman that concluded that venture investment in the technology sector wasn’t lower than in other sectors, which led Wyman to conclude that there was no kill zone.

But economist Ian Hathaway noted that looking at the overall technology industry was too broad. Examining three specific industry categories — internet retail, internet software and social/platform software, corresponding to the industries dominated by Amazon, Google and Facebook, respectively — Hathaway found that initial venture-capital financings have declined by much more in the past few years than in comparable industries. That suggests the kill zone is real.

A recent paper by economists Wen Wen and Feng Zhu reaches a similar conclusion. Observing that Google has tended to follow Apple in deciding which mobile-app markets to enter, they assessed whether the threat of potential entry by Google (as measured by Apple’s actions) deters innovation by startups making apps for Google’s Android platform. They conclude that when the threat of the platform owner’s entry is higher, fewer app makers will be interested in offering a product for that particular niche. A 2014 paper by the same authors found similar results for Amazon and third-party merchants using its platform.

So, are American tech companies making it difficult for startups? Perhaps, but there are some other reasons to be skeptical.

First off, the nature of the venture capital market has changed due to the declining costs of computing. Not too long ago, much of a tech company’s Series A and B would be dedicated to buying server racks and computing power. But with the advent of Amazon Web Services (AWS) and other cloud computing technologies, this need has dried up.

What does this mean for the ecosystem? Ben Thompson explained the impact back in 2015:

In fact, angels have nearly completely replaced venture capital at the seed stage, which means they are the first to form critical relationships with founders. True, this has led to an explosion in new companies far beyond the levels seen previously, which is entirely expected — lower barriers to entry to any market means more total entries — but this has actually made it even more difficult for venture capitalists to invest in seed rounds: most aren’t capable of writing massive numbers of seed checks; the amounts are just too small to justify the effort.

Instead, venture capitalists have gone up-market: firms may claim they invest in Series’ A and B, but those come well after one or possibly two rounds of seed investment; in other words, today’s Series A is yesteryear’s Series C. This, by the way, is the key to understanding the so-called “Series A crunch”: it used to be that Series C was the make-or-break funding round, and in fact it still is — it just has a different name now. Moreover, the fact more companies can get started doesn’t mean that more companies will succeed; venture capitalists just have more companies to choose from.

Research is only now catching up with Thompson’s hunch. In a newly released NBER working paper, economists David Byrne, Carol Corrado, and Daniel E. Sichel find that prices for computing, database, and storage services offered by AWS dropped dramatically from 2009 to 2016. As they concluded, “cloud service providers are undertaking large amounts of own-account investment in IT equipment and that some of this investment may not be captured in GDP.”

Second, a decline in startups was predicted by Nobel-winning economist Robert Lucas back in 1978. Over time, Lucas surmised, productivity increases will yield wage increases, which in turn will push marginal entrepreneurs to become employees. This raises productivity at the firm, but it also increases the firm’s size. As productivity and wages inch upward, working at a firm becomes more attractive than starting a company, so entrepreneurs will decline as a portion of the economy, and industries with higher productivity rates will see bigger firms.
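Lucas’s span-of-control logic can be sketched with a toy occupational-choice model. The functional form and numbers below are my own illustrative assumptions, not Lucas’s actual 1978 specification: agents differ in managerial ability, and each compares the market wage against the profit from running a small firm.

```python
# Toy sketch of the span-of-control logic; all coefficients are
# illustrative assumptions, not Lucas's model.

def entrepreneur_share(wage, abilities):
    """Fraction of agents whose entrepreneurial profit beats the wage."""
    # Stylized profit for a manager running a four-worker firm:
    # output scales with ability, net of employee wages.
    profits = [10 * a - 4 * wage for a in abilities]
    return sum(1 for p in profits if p > wage) / len(abilities)

abilities = [a / 100 for a in range(1, 101)]  # ability grid 0.01 .. 1.00
low_wage_share = entrepreneur_share(wage=0.5, abilities=abilities)
high_wage_share = entrepreneur_share(wage=1.0, abilities=abilities)

# As productivity pushes wages up, marginal entrepreneurs switch to
# employment, so the entrepreneur share of the population shrinks.
print(low_wage_share, high_wage_share)
```

Only agents above an ability cutoff keep running firms, and that cutoff rises with the wage, which is exactly the inverse relationship between wealth and entrepreneurship rates described in the surrounding text.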

Recent analysis of 50 separate national economies confirmed the inverse relationship between entrepreneurship rates and Gross Domestic Product (GDP), a finding also supported by the World Bank Group Entrepreneurship Survey. Time series analysis also hints at this relationship. Employment within large firms tends to grow over time as a country gets wealthier. Analysis of the Census Business Dynamics Statistics (BDS) illustrates this, as does groundwork conducted in American manufacturing from 1850 to 1880. But the United States isn’t the only country where this relationship can be found. The same trend exists for Canada, Germany, Indonesia, Japan, South Korea, and Thailand.

Moreover, the distribution of firms tends to change as a country becomes wealthier. As economist Markus Poschke noted, “richer countries thus feature fewer, larger firms, with a firm size distribution that is more dispersed and more skewed.” So it is not just the United States that has large firms. Sweden, the Netherlands, and Ireland all have large firms, but they too are relatively wealthy by international standards. Productivity goes a long way toward explaining the distributional changes.

Nicholas Kozeniauskas, a recently minted economist from NYU, has also been working on research showing the skewed nature of entrepreneurship, which adds some depth to this conversation. As he found, the decline in entrepreneurship has been more pronounced at higher education levels. Overall, “an increase in fixed costs explains most of the decline in the aggregate entrepreneurship rate.”

As of right now, I think we should be unconvinced by the evidence of a kill zone. The research doesn’t point in the same direction. But as new insight comes in, we will need to update, as always.

Did The Supreme Court Get The Market Definition Correct In The Amex Case? https://techliberation.com/2018/07/06/did-the-amex-case-get-the-market-definition-correct/ https://techliberation.com/2018/07/06/did-the-amex-case-get-the-market-definition-correct/#comments Fri, 06 Jul 2018 20:22:22 +0000 https://techliberation.com/?p=76308

The Supreme Court is winding down for the year and last week put out a much-awaited decision in Ohio v. American Express. Some have sounded the alarm over this case, but I think caution is worthwhile. In short, the Court’s analysis wasn’t expansive, as some have claimed, but incomplete. There are a lot of important details in this case, and the guideposts it provides will likely be fought over in future litigation on platform regulation. To narrow the scope of this post, I am going to focus on the market definition question and the issue of two-sided platforms in light of the developments in the industrial organization (IO) literature over the past two decades.

Just to review, Amex centers on what are known as anti-steering provisions. These provisions prohibit merchants who accept Amex cards from implying a preference for non-Amex cards; dissuading customers from using Amex cards; persuading customers to use other cards; imposing any special restrictions, conditions, disadvantages, or fees on Amex cards; or promoting other cards more than Amex. Importantly, these provisions never barred merchants from steering customers toward debit cards, checks, or cash.

In October 2010, the Department of Justice (DoJ) and several states sued Amex, Visa, and Mastercard over these contract provisions, and Amex was the only one of the three to take it to court. Initially, the District Court ruled in favor of the DoJ and the states, explaining that the credit card platforms should be treated as two separate markets, one for merchants and one for cardholders. In that analysis, the court cleaved off the merchant side and declared the anti-steering provisions anticompetitive under Section 1 of the Sherman Act.

On appeal, the Court of Appeals for the Second Circuit reversed that decision because “without evidence of the [anti-steering provisions’] net effect on both merchants and cardholders, the District Court could not have properly concluded that the [provisions] unreasonably restrain trade in violation” of Section 1 of the Sherman Act. The Department of Justice petitioned the Appeals Court to reconsider the case en banc, but that petition was rejected, and the case then headed to the Supreme Court.

The Supreme Court agreed with this two-sided theory, as “credit-card networks are best understood as supplying only one product—the transaction—that is jointly consumed by a cardholder and a merchant.” Even though the DoJ was able to show that the provisions did increase merchant fees, “evidence of a price increase on one side of a two-sided transaction platform cannot, by itself, demonstrate an anticompetitive exercise of market power.” To prevail, the DoJ would have to prove that Amex increased the cost of credit-card transactions above a competitive level, reduced the number of credit-card transactions, or otherwise stifled competition in the two-sided credit-card market.

The decision only briefly mentions why this is important, so consider a platform with two sides, users and advertisers. If users experience an increase in price or a reduction in quality, they are likely to exit or use the platform less. Advertisers are on the other side because they can reach users, so in response to the decline in user quality, advertiser demand will drop even if ad prices stay constant. The result echoes back. When advertisers drop out, the total amount of content also recedes, and user demand falls because the platform is less valuable to them. Demand is tightly integrated between the two sides of the platform. Changes in user and advertiser preferences have outsized effects on platforms because each side responds to the other. In other words, small changes in price or quality tend to be far more impactful in chasing off both groups than they would be for one-sided goods. In economic parlance, these are called demand interdependencies. The demand on one side of the market is interdependent with demand on the other. Research on magazine price changes confirms this theory.
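The feedback loop described above can be made concrete with a small numerical sketch. The demand curves and every coefficient below are hypothetical, chosen only to show how a shock on one side of a platform is amplified once both sides adjust:

```python
# Hypothetical two-sided demand system: users value ad-funded content,
# advertisers value reach. All coefficients are illustrative assumptions.

def equilibrium(quality, ad_price):
    """Iterate the two interdependent demands to a fixed point."""
    users, ads = 1000.0, 100.0  # arbitrary starting point
    for _ in range(1000):
        new_users = 2000.0 * quality + 2.0 * ads              # users respond to ad-funded content
        new_ads = 500.0 - 40.0 * ad_price + 0.05 * new_users  # advertisers respond to reach
        converged = abs(new_users - users) < 1e-9 and abs(new_ads - ads) < 1e-9
        users, ads = new_users, new_ads
        if converged:
            break
    return users, ads

base_users, _ = equilibrium(quality=1.00, ad_price=5.0)
hit_users, _ = equilibrium(quality=0.95, ad_price=5.0)  # a 5% quality drop

direct_loss = 2000.0 * 0.05          # users lost if advertisers did not react
total_loss = base_users - hit_users  # users lost once the shock echoes back
print(direct_loss, total_loss)       # the equilibrium loss exceeds the direct one
```

Because each side’s demand feeds the other, the equilibrium response to the quality shock is larger than the direct response alone, which is the amplification the post describes.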

In the last two decades, economics has been adapting to the insights and the challenges of two-sided markets. In the case of a one-sided business, like a laundromat or a mining company, there is one downstream or upstream consumer, so demand is fairly straightforward. But platforms are more complex since value must be balanced across the different participants in a platform, which leads to demand interdependencies.

In an article cited in the decision, economists David Evans and Richard Schmalensee explained the importance of their integration into competition analysis, “The key point is that it is wrong as a matter of economics to ignore significant demand interdependencies among the multiple platform sides” when defining markets. If they are ignored, then the typical analytical tools will yield incorrect assessments.

While it didn’t employ the language of demand interdependencies, the Court did agree with that general assessment:

To be sure, it is not always necessary to consider both sides of a two-sided platform. A market should be treated as one sided when the impacts of indirect network effects and relative pricing in that market are minor. Newspapers that sell advertisements, for example, arguably operate a two-sided platform because the value of an advertisement increases as more people read the newspaper. But in the newspaper-advertisement market, the indirect network effects operate in only one direction; newspaper readers are largely indifferent to the amount of advertising that a newspaper contains. Because of these weak indirect network effects, the market for newspaper advertising behaves much like a one-sided market and should be analyzed as such.

Why does this bit matter?

In a piece in the New York Times in April, law scholar Lina Khan worried that this case would “effectively [shield] big tech platforms from serious antitrust scrutiny.” Law professor Tim Wu followed up with an op-ed just this past week in the Times expressing similar concern,

To reach this strained conclusion, the court deployed some advanced economics that it seemed not to fully understand, nor did it apply the economics in a manner consistent with the goals of the antitrust laws. Justice Stephen Breyer’s dissent mocks the majority’s economic reasoning, as will most economists, including the creators of the “two-sided markets” theory on which the court relied. The court used academic citations in the worst way possible — to take a pass on reality.

Respectfully, I have to disagree with Wu’s assessment and Khan’s worries. Both Google and Facebook more evidently fall into the newspaper category than the payments category under the majority’s opinion. Moreover, the opinion didn’t define what “weak indirect network effects” actually means in practice, so this case doesn’t leave Google and Facebook off the hook by any means.

How the Court reached that conclusion is worth exploring, however.

In contrast to newspapers, credit card payment platforms “cannot make a sale unless both sides of the platform simultaneously agree to use their services,” so “two-sided transaction platforms exhibit more pronounced indirect network effects and interconnected pricing and demand.” The Court seems to connect two-sidedness with the simultaneity requirement. On this front, Wu is correct: the justices didn’t seem to fully understand the economic reasoning. It isn’t the simultaneous nature of credit cards that makes them two-sided markets, but their demand interdependencies. Newspapers also have strong demand interdependencies even though they may not feature the simultaneity of credit cards. Yet the Court was correct in defining the market as a transactional one, where cardholders and merchants are intimately connected.

That being said, Breyer’s economic reasoning isn’t any sharper than the majority’s:

But while the market includes substitutes, it does not include what economists call complements: goods or services that are used together with the restrained product, but that cannot be substituted for that product. See id., ¶565a, at 429; Eastman Kodak Co. v. Image Technical Services, Inc., 504 U. S. 451, 463 (1992). An example of complements is gasoline and tires. A driver needs both gasoline and tires to drive, but they are not substitutes for each other, and so the sale price of tires does not check the ability of a gasoline firm (say a gasoline monopolist) to raise the price of gasoline above competitive levels. As a treatise on the subject states: “Grouping complementary goods into the same market” is “economic nonsense,” and would “undermin[e] the rationale for the policy against monopolization or collusion in the first place.” 2B Areeda & Hovenkamp ¶565a, at 431.

Here, the relationship between merchant-related card services and shopper-related card services is primarily that of complements, not substitutes. Like gasoline and tires, both must be purchased for either to have value. Merchants upset about a price increase for merchant related services cannot avoid that price increase by becoming cardholders, in the way that, say, a buyer of newspaper advertising can switch to television advertising or direct mail in response to a newspaper’s advertising price increase.

Breyer makes a bit of a mess of demand complementarity. It isn’t the case that “both must be purchased for either to have value.” That is perfect complementarity, which is rare. Rather, when the price of gasoline increases, the demand for tires is likely to decrease as well. However, it doesn’t need to run the other way: when the price of tires decreases, the demand for gasoline doesn’t typically inch up. This kind of asymmetric demand relationship is counter to the kind of relationship on platforms, where demand is linked on both sides.
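The asymmetry can be shown with a pair of made-up demand functions. The coefficients are purely hypothetical, invented for illustration: tire demand responds to the gasoline price, but gasoline demand does not respond to the tire price.

```python
# Hypothetical demand functions illustrating one-way complementarity;
# the coefficients are invented for illustration only.

def tire_demand(p_tires, p_gas):
    # Complements: pricier gasoline depresses tire demand.
    return 100 - 5 * p_tires - 3 * p_gas

def gas_demand(p_gas, p_tires):
    # Asymmetric: gasoline demand does not respond to tire prices.
    return 200 - 8 * p_gas + 0 * p_tires

# A gasoline price increase lowers tire demand...
print(tire_demand(10, 4), tire_demand(10, 6))
# ...but cheaper tires leave gasoline demand unchanged.
print(gas_demand(4, 10), gas_demand(4, 5))
```

On a transaction platform, by contrast, each side’s demand function would carry a meaningful coefficient on the other side’s outcome, so shocks propagate in both directions rather than one.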

Still, Breyer buries the lede. Attributing a price increase to firms in the tire market might be wrong if demand fluctuations in the adjacent gasoline market partially caused those price changes. In other words, the reason complementary demand matters in the first place is to ensure that the court’s analysis is correct. Going back to Evans and Schmalensee, “The key point is that it is wrong as a matter of economics to ignore significant demand interdependencies among the multiple platform sides” when defining markets. Ignore them, and you get the assessment wrong.

To his credit, Breyer does rightly point out the thin definition offered by the majority:

I take from that definition that there are four relevant features of such businesses on the majority’s account: they (1) offer different products or services, (2) to different groups of customers, (3) whom the “platform” connects, (4) in simultaneous transactions.

Having simultaneous transactions isn’t the defining feature of two-sidedness, and if the lower courts come to rely on this feature to define platforms, then some assessments of competitive effects are likely to be wrong.

Amex offers up a lot for the antitrust community to consider, but in key ways the decision is incomplete. Importantly, the Court didn’t address the validity of many new analytical tools that have popped up in the past decade to understand platform market power. Take a quick glance at the papers cited in the majority opinion and you will notice how many of the references date from after 2010, when this case was first brought. In other words, Amex hardly shuts the door on future litigation.

What We Learn From Past Government-Imposed Corporate Breakups Is That They Don’t Work https://techliberation.com/2018/06/28/what-we-learn-from-past-government-imposed-corporate-breakups-is-that-they-dont-work/ https://techliberation.com/2018/06/28/what-we-learn-from-past-government-imposed-corporate-breakups-is-that-they-dont-work/#respond Thu, 28 Jun 2018 18:42:13 +0000 https://techliberation.com/?p=76306

Voices from all over the political and professional spectrum have been clamoring for tech companies to be broken up. Tech investor Roger McNamee, machine learning pioneer Yoshua Bengio, NYU professor Scott Galloway, and even Marco Rubio’s 2016 presidential digital director have all suggested that tech companies should be forcibly separated. So, I took a look at some of the past efforts in a new survey of corporate breakups and found that they really weren’t all that effective at creating competitive markets.

Although many consider Standard Oil and AT&T the classic cases, I think United States v. American Tobacco Company is far more instructive.

Like Standard Oil, the American Tobacco Company was organized as a trust and came to acquire nearly 75 percent of the total market by buying both the Union Tobacco Company and the Continental Tobacco Company. But unlike Standard Oil, as soon as these companies were bought, they were integrated within American Tobacco. In 1908 the federal government filed, and eventually won, a lawsuit under the Sherman Act, which dissolved the trust into three companies that, in theory, matched the originals.

Yet the breakup wasn’t as simple as splitting the larger company back into its original three pieces, since the successor companies had intertwined processes. A single purchasing department managed the leaf purchasing. Processing plants had been assigned to specific products without any concern for their previous ownership. Over eight months of tense negotiations, the government pulled apart factories, distribution and storage facilities, and name brands. Office by office, the company was pulled apart by government fiat.

Historian Allan M. Brandt had this to say in The Cigarette Century,

It was one thing to identify monopolistic practices and activities in restraint of trade, and quite another to figure out how to return the tobacco industry to some form of regulated competition. Even those who applauded the breakup of American Tobacco soon found themselves critics of the negotiated decree restructuring the industry. This would not be the last time that the tobacco industry would successfully turn a regulatory intervention to its own advantage.

While some might think that breaking up companies would be a clean operation, American Tobacco suggests the opposite. And I’m not alone in this assessment. Here is what Robert Crandall had to say a couple of years back in a piece for the Brookings Institution:

[W]ith one exception, the breakup of AT&T in 1984, there is very little evidence that such relief is successful in increasing competition, raising industry output, and reducing prices to consumers. The exception turns out to be a case of overkill because the same results could have been obtained through a simple regulatory rule, obviating the need for vertical divestiture of AT&T.

In other words, this method simply does not achieve competitive markets.

If you’re interested in the longer piece, you can find it over at American Action Forum.

Mandating AI Fairness May Come At The Expense Of Other Types of Fairness https://techliberation.com/2018/06/21/mandating-ai-fairness-may-come-at-the-expense-of-other-types-of-fairness/ https://techliberation.com/2018/06/21/mandating-ai-fairness-may-come-at-the-expense-of-other-types-of-fairness/#respond Thu, 21 Jun 2018 18:12:25 +0000 https://techliberation.com/?p=76285

Two years ago, ProPublica initiated a conversation over the use of risk assessment algorithms when they concluded that a widely used “score proved remarkably unreliable in forecasting violent crime” in Florida. Their examination of the racial disparities in scoring has been cited countless times, often as a proxy for the power of automation and algorithms in daily life. Indeed, as the authors concluded, these scores are “part of a larger examination of the powerful, largely hidden effect of algorithms in American life.”

As this examination continues, two precepts are worth keeping in mind. First, the social significance of algorithms needs to be considered, not just their internal model significance. While the accuracy of algorithms is important, more emphasis should be placed on how they are used within institutional settings. And second, fairness is not a single idea. Mandates for certain kinds of fairness could come at the expense of other forms of fairness. As always, policymakers need to be cognizant of the trade-offs.

Statistical significance versus social significance

The ProPublica study arrived at a critical juncture in the conversation over algorithms. In the tech space, TensorFlow, Google’s artificial intelligence (AI) engine, had been released in 2015, sparking interest in algorithms and the commercial application of AI. At the same time, in the political arena, sentencing reform was gaining steam. Senators Rand Paul and Cory Booker helped bring wider attention to the need for reforms to the criminal justice system through their efforts in passing the REDEEM Act. Indeed, when the Koch Brothers’ political network announced more than $5 million in spending for criminal justice reform, the Washington Post noted that it underscored “prison and sentencing reform’s unique position as one of the nation’s most widely discussed policy proposals as well as one with some of the most broad political backing.”

Model selection is a critical component of any study, so it is no wonder that criticism of risk assessment algorithms has focused on this aspect of the process. Error bars might reflect precision, but they tell us little about a model’s applicability. More importantly, however, implementation isn’t frictionless. People have to use these tools to make decisions. Algorithms must be integrated within a set of processes that involve the messiness of human relations. Because of the variety of institutional settings, there is sure to be significant variability in how they come to be used. The impact of real decision-making processes isn’t constrained only by the accuracy of the models, but also by the purposes to which they are applied.

In other words, the social significance of these models, how they come to be used in practice, is just as pertinent a question for policymakers as their statistical significance.

Angèle Christin, a professor at Stanford who studies these topics, made the issue abundantly clear when she noted,

Yet it is unclear whether these risk scores always have the meaningful effect on criminal proceedings that their designers intended. During my observations, I realized that risk scores were often ignored. The scores were printed out and added to the heavy paper files about defendants, but prosecutors, attorneys, and judges never discussed them. The scores were not part of the plea bargaining and negotiation process. In fact, most of judges and prosecutors told me that they did not trust the risk scores at all. Why should they follow the recommendations of a model built by a for-profit company that they knew nothing about, using data they didn’t control? They didn’t see the point. For better or worse, they trusted their own expertise and experience instead. (emphasis added)

Christin’s on-the-ground experience urges scholars to consider how these algorithms have come to be implemented in practice. As she points out, institutions engage in various kinds of rituals to appear modern, chief among them the acquisition of new technological tools. Changing practices within workplaces is a much more difficult task than reformers would like to imagine. Instead, a typical reaction by those who have long worked within a system is to manipulate the tool to look compliant.

The implementation of pretrial risk assessment instruments highlights the potential variability when algorithms are deployed. These instruments help guide judges when decisions are made about what will happen to a defendant before trial: Will the defendant be released on bail, and at what cost? The most popular of these instruments is the Public Safety Assessment, or simply the PSA, which was developed by the Laura and John Arnold Foundation and has been adopted in over 30 jurisdictions in the last five years.

The adoption of the PSA across regions helps to demonstrate just how disparate implementation can be. In New Jersey, adoption of the PSA seems to have correlated with a dramatic decline in the pretrial detention rate. In Lucas County, Ohio, the pretrial detention rate increased after the PSA was put into place. In Chicago, judges seem to be simply ignoring the PSA. Indeed, there appears to be little agreement on how well the PSA’s high-risk classification corresponds to reality, as re-arrest rates can be as low as 10 percent or as high as 42 percent, depending on how the PSA is integrated in a region.

And in the most comprehensive study of its kind, George Mason University law professor Megan Stevenson looked at Kentucky after it implemented the PSA and found significant changes in bail-setting practices, but only a small increase in pretrial release. Over time these changes eroded as judges returned to their previous habits. If this tendency to revert to old habits is widespread, then why implement these pretrial risk instruments at all?

Although it was focused on pretrial risk assessments, Stevenson’s call for a broader understanding of these tools applies to the entirety of algorithm research:

Risk assessment in practice is different from risk assessment in the abstract, and its impacts depend on context and details of implementation. If indeed risk assessment is capable of producing large benefits, it will take research and experimentation to learn how to achieve them. Such a process would be evidence-based criminal justice at its best: not a flocking towards methods that bear the glossy veneer of science, but a careful and iterative evaluation of what works and what does not.

Algorithms are tools. While it is important to understand how well calibrated a tool is, researchers need to focus on how that tool impacts real people working with and within institutions with embedded cultural and historical practices.

Trade-offs in fairness determinations

Julia Angwin and her team at ProPublica helped to spark a new interest in algorithmic decision-making when they dove deeper into a commonly used post-trial sentencing tool known as COMPAS. Instead of predicting behavior before a trial takes place, COMPAS purports to predict a defendant’s risk of committing another crime in the sentencing phase, after a defendant has been found guilty. As they discovered, the risk system was biased against African-American defendants, who were more likely to be incorrectly labeled as higher-risk than they actually were. At the same time, white defendants were labeled as lower-risk than was actually the case.

Superficially, that seems like a simple problem to solve. Just add features to the algorithm that account for race and rerun the tool. If only the algorithm paid attention to this bias, the outcome could be corrected. Or so goes the thinking.

But let’s take a step back and consider what these tools really represent. The task of the COMPAS tool is to estimate the likelihood that a person poses a future risk. In this sense, the algorithm aims for calibration, one of at least three distinct ways we might understand fairness. Aiming for fairness through calibration means that people are correctly identified as having some probability of committing an act. Indeed, as subsequent research has found, the number of people who committed crimes was correctly distributed within each group. In other words, the algorithm did correctly identify a set of people as having a given probability of committing a crime.

Angwin’s criticism is of another kind, as Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan explain in “Inherent Trade-Offs in the Fair Determination of Risk Scores.” The kind of fairness Angwin invokes might be understood as balance for the positive class. To violate this notion of fairness, people who are later identified as being part of the class would have initially been assigned a lower probability by the algorithm. For example, as the ProPublica study found, white defendants who did commit crimes in the future were assigned lower risk scores. That is a violation of balance for the positive class.

Balance for the negative class is the mirror image. To violate this notion of fairness, people who are later identified as not being part of the class would have initially been assigned a higher probability of being part of it by the algorithm. Both of these conditions try to capture the idea that groups should have equal false negative and false positive rates.
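To make the two balance conditions concrete, here is a minimal Python sketch, using made-up scores and outcomes rather than actual COMPAS data: for each group, average the risk scores of the people who actually ended up in the positive class, and separately of those who did not.

```python
def balance_metrics(scores, outcomes):
    """Return (avg score of actual positives, avg score of actual negatives).

    Balance for the positive class asks that the first number be roughly
    equal across groups; balance for the negative class asks the same of
    the second number.
    """
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    return sum(pos) / len(pos), sum(neg) / len(neg)

# Hypothetical groups: scores in [0, 1], outcomes 1 = re-arrested, 0 = not.
group_a = balance_metrics([0.8, 0.7, 0.3, 0.2], [1, 1, 0, 0])
group_b = balance_metrics([0.6, 0.5, 0.5, 0.4], [1, 1, 0, 0])
print(group_a, group_b)
```

In this toy example, group B’s eventual re-offenders received a lower average score (0.55) than group A’s (0.75) even though both groups have the same base rate, which is the pattern ProPublica flagged for white defendants: a violation of balance for the positive class.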

After formalizing these three conditions for fairness, Kleinberg, Mullainathan, and Raghavan proved that it isn’t possible to satisfy all constraints simultaneously except in highly constrained special cases. These results hold regardless of how the risk assignment is computed, since “it is simply a fact about risk estimates when the base rates differ between two groups.”

What this means is that some notions of fairness might simply be incompatible with each other. Balancing for one notion of fairness is likely to come at the expense of another.
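The arithmetic behind that incompatibility can be seen in a toy sketch (the base rates below are hypothetical, not numbers from the paper). Give every member of a group a score equal to the group’s base rate: the score is perfectly calibrated by construction, yet when base rates differ between groups, the average score assigned to each group’s actual positives differs too, so balance for the positive class necessarily fails.

```python
import random

def avg_score_of_positives(base_rate, n=100_000, seed=0):
    """Toy predictor that is perfectly calibrated by construction: everyone
    in the group receives a score equal to the group's base rate. Returns
    the average score among people who actually turn out positive."""
    rng = random.Random(seed)
    outcomes = [rng.random() < base_rate for _ in range(n)]
    scores = [base_rate] * n
    positives = [s for s, y in zip(scores, outcomes) if y]
    return sum(positives) / len(positives)

# Two groups with different base rates of the predicted behavior.
# Each group's actual positives carry an average score equal to that
# group's base rate, so the averages cannot match across groups.
print(avg_score_of_positives(0.3))
print(avg_score_of_positives(0.6))
```

Because every score in a group is the same constant, the average score of that group’s positives is just the base rate, so no amount of data can make this calibrated predictor satisfy balance when base rates differ.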

This trade-off is really a subclass of a larger problem that is a central focus of data science, econometrics, and statistics. As Pedro Domingos noted:

You should be skeptical of claims that a particular technique “solves” the overfitting problem. It’s easy to avoid overfitting (variance) by falling into the opposite error of underfitting (bias). Simultaneously avoiding both requires learning a perfect classifier, and short of knowing it in advance there is no single technique that will always do best (no free lunch).
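Domingos’ warning is easy to reproduce. In the stdlib-only Python sketch below (toy data, hypothetical model names), a model that memorizes its noisy training labels gets zero training error but errs out of sample (overfitting, i.e., variance), while a model that predicts a single constant ignores the signal entirely (underfitting, i.e., bias).

```python
import random

random.seed(1)

def noisy_sample():
    # y = x plus noise; the underlying relationship the models should learn.
    return [(x / 10, x / 10 + random.gauss(0, 0.5)) for x in range(10)]

train, test = noisy_sample(), noisy_sample()

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

# Overfit: a lookup table that memorizes each noisy training label.
table = dict(train)
memorizer = lambda x: table[x]

# Underfit: always predict the mean training label, ignoring x entirely.
mean_y = sum(y for _, y in train) / len(train)
constant = lambda x: mean_y

print(mse(memorizer, train))  # exactly 0.0 on the training data
print(mse(memorizer, test))   # but it inherits the noise out of sample
print(mse(constant, train), mse(constant, test))  # poor fit everywhere
```

The memorizer’s perfect training error is a useless guide to its test error; that gap is the variance Domingos describes, while the constant model’s equally poor train and test errors are bias.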

Internalizing these lessons about fairness requires a shift in framing. For those working in the AI field and actively deploying algorithms, and especially for policymakers, fairness mandates will likely create trade-offs. If most algorithms cannot achieve multiple notions of fairness simultaneously, then every decision to balance for class attributes is likely to take away from efficiency elsewhere. This isn’t to say that we shouldn’t strive to optimize fairness. Rather, it is simply important to recognize that mandating one type of fairness may necessarily come at the expense of a different type of fairness.

Understanding the internal logic of risk assessment tools is not the end of the conversation. Without data on how they are used, these algorithms could entrench bias, uproot it, or have ambiguous effects. To have an honest conversation, we need to understand how they nudge decisions in the real world.

Some thoughts on the T-Mobile-Sprint merger https://techliberation.com/2018/04/30/some-thoughts-on-the-t-mobile-sprint-merger/ https://techliberation.com/2018/04/30/some-thoughts-on-the-t-mobile-sprint-merger/#comments Mon, 30 Apr 2018 20:20:33 +0000 https://techliberation.com/?p=76265

Mobile broadband is a tough business in the US. There are four national carriers: Verizon, AT&T, T-Mobile, and Sprint. But since about 2011, mergers have been contemplated (and attempted, but blocked). Recently, the competition has gotten fiercer. Higher data buckets and unlimited data plans have been great for consumers.

The FCC’s latest mobile competition report, citing UBS data, says that industry ARPU (basically, monthly revenue per subscriber), which had been pretty stable since 1998, declined significantly from about $46 in 2013 to about $36 in 2016. These revenue pressures seemed to fall hardest on Sprint, which in February issued $1.5 billion of “junk bonds” to help fund its network investments. Analysts pointed out in 2016 that “Sprint has not reported full-year net profits since 2006.” Further, mobile TV watching is becoming a bigger business. AT&T and Verizon both plan to offer a TV bundle to their wireless customers this year, and T-Mobile’s purchase of Layer3 indicates an interest in offering a mobile TV service.

It’s these trends that probably pushed T-Mobile and Sprint to announce yesterday their intention to merge. All eyes will be on the DOJ and the FCC as their competition divisions consider whether to approve the merger.

The Core Arguments

Merger opponents’ primary argument is what’s been raised several times since the 2011 AT&T-T-Mobile aborted merger: this “4 to 3” merger significantly raises the prospect of “tacit collusion.” After the merger, the story goes, the 3 remaining mobile carriers won’t work as hard to lower prices or improve services. While outright collusion on prices is illegal, they have a point that tacit collusion is more difficult for regulators to prove, to prevent, and to prosecute.

The counterargument, which T-Mobile and Sprint are already making, is that “mobile” is not a distinct market anymore: technologies and services are converging. Therefore, tacit collusion won’t be feasible because mobile broadband increasingly competes with landline broadband providers (like Comcast and Charter), and possibly even media companies (like Netflix and Disney). Further, they claim, T-Mobile and Sprint on their own will each struggle to deploy a capex-intensive 5G network that can compete with AT&T, Verizon, Comcast-NBCU, and the rest, while the merged company will be a formidable competitor in TV and in consumer and enterprise broadband.

Competitive Review

Any prediction about whether the deal will be approved or denied is premature. This is a horizontal merger in a highly-visible industry and it will receive an intense antitrust review. (Rachel Barkow and Peter Huber have an informative 2001 law journal article about telecom mergers at the DOJ and FCC.) The DOJ and FCC will seek years of emails and financial records from Sprint and T-Mobile executives and attempt to ascertain the “real” motivation for the merger and its likely consumer effects.

T-Mobile and Sprint will likely lean on evidence that consumers view (or soon will view) mobile broadband and TV as substitutes for landline broadband and TV. Their story seems to be that, much as phone and TV went from local markets with one or two competitors to a national market with several competitors, broadband is following a similar trajectory, and that viewing this as a 4-to-3 merger misreads industry trends.

There’s preliminary evidence that mobile broadband will put competitive pressure on conventional, landline broadband. Census surveys indicate that in 2013, 10% of Internet-using households were mobile Internet only (no landline Internet). By 2015, about 20% of households were mobile-only, and the proportion of Internet users who had landline broadband actually fell from 82% to 75%. But this is still preliminary and I haven’t seen economic evidence yet that mobile is putting pricing pressure on landline TV and broadband.

FCC Review

Antitrust review is only one step, however. The FCC transaction review process is typically longer and harder to predict. The FCC has concurrent authority with the DOJ to review telecommunications mergers under Sections 7 and 11 of the Clayton Act, but it has never used that authority. Instead, the FCC uses its spectrum transfer review authority as a hook to evaluate mergers under the Communications Act’s (vague) “public interest” standard. Unlike antitrust standards, which generally put the burden on regulators to show consumer and competitive harm, the public interest standard as currently interpreted puts the burden on merging companies to show social and competitive benefits.

Hopefully the FCC will hew to a more rigorous antitrust inquiry and reform the open-ended public interest inquiry. As Chris Koopman and I wrote in a law journal article a few years ago, these FCC “public interest” reviews are sometimes excessively long, and advocates use the vague standards to force the FCC into ancillary concerns, like TV programming decisions and “net neutrality” compliance.

Part of the public interest inquiry is a complex “spectrum screen” analysis. Basically, transacting companies can’t hold too much “good” spectrum in a single regional market. I doubt the spectrum screen analysis will be dispositive (much of the analysis in the past seemed pretty ad hoc), but I do wonder whether it will come up, since it was a major issue in the attempted AT&T-T-Mobile merger.

In any case, that’s where I see the core issues, though we’ll learn much more as the merger reviews commence.

Video: The Dangers of Regulating Information Platforms https://techliberation.com/2018/04/27/video-the-dangers-of-regulating-information-platforms/ https://techliberation.com/2018/04/27/video-the-dangers-of-regulating-information-platforms/#respond Fri, 27 Apr 2018 18:13:13 +0000 https://techliberation.com/?p=76264

On March 19th, I had the chance to debate Franklin Foer at a Patrick Henry College event focused on the question, “Is Big Tech Big Brother?” It was billed as a debate over the role of technology in American society and whether government should be regulating media and technology platforms more generally. [The full event video is here.] Foer is the author of the new book, World Without Mind: The Existential Threat of Big Tech, in which he advocates a fairly expansive regulatory regime for modern information technology platforms. He is open to building on regulatory ideas from the past, including broadcast-esque licensing regimes, “Fairness Doctrine”-like mandates for digital intermediaries, “fiduciary” responsibilities, beefed-up antitrust intervention, and other types of controls. In a review of the book for Reason, and then again during the debate at Patrick Henry College, I offered some reflections on what we can learn from history about how well ideas like those worked out in practice.

My closing statement of the debate, which lasted just a little over three minutes, offers a concise summation of what that history teaches us and why it would be so dangerous to repeat the mistakes of the past by wandering down that disastrous path again. That 3-minute clip is posted below. (The audience was polled before and after the event and asked the same question each time: “Do large tech companies wield too much power in our economy, media and personal lives and if so, should government(s) intervene?” Apparently at the beginning, the poll was roughly Yes – 70% and No – 30%, but after the debate ended it had reversed, with only 30% in favor of intervention and 70% against. Glad to turn around some minds on this one!)


How Well-Intentioned Privacy Regulation Could Boost Market Power of Facebook & Google https://techliberation.com/2018/04/25/how-well-intentioned-privacy-regulation-could-boost-market-power-of-facebook-google/ https://techliberation.com/2018/04/25/how-well-intentioned-privacy-regulation-could-boost-market-power-of-facebook-google/#respond Wed, 25 Apr 2018 14:25:08 +0000 https://techliberation.com/?p=76261


Two weeks ago, as Facebook CEO Mark Zuckerberg was getting grilled by Congress during a two-day media circus of hearings, I wrote a counterintuitive essay about how it could end up being Facebook’s greatest moment. How could that be? As I argued in the piece, with an avalanche of new rules looming, “Facebook is potentially poised to score its greatest victory ever as it begins the transition to regulated monopoly status, solidifying its market power, and limiting threats from new rivals.”

With the exception of probably only Google, no firm other than Facebook likely has enough lawyers, lobbyists, and money to deal with layers of red tape and corresponding regulatory compliance headaches that lie ahead. That’s true both here and especially abroad in Europe, which continues to pile on new privacy and “data protection” regulations. While such rules come wrapped in the very best of intentions, there’s just no getting around the fact that  regulation has costs. In this case, the unintended consequence of well-intentioned data privacy rules is that the emerging regulatory regime will likely discourage (or potentially even destroy) the chances of getting the new types of innovation and competition that we so desperately need right now.

Others now appear to be coming around to this view. On April 23, both the  New York Times and The Wall Street Journal ran feature articles with remarkably similar titles and themes. The New York Times article by Daisuke Wakabayashi and Adam Satariano was titled, “How Looming Privacy Regulations May Strengthen Facebook and Google,” and The Wall Street Journal’s piece, “Google and Facebook Likely to Benefit From Europe’s Privacy Crackdown,” was penned by Sam Schechner and Nick Kostov. “In Europe and the United States, the conventional wisdom is that regulation is needed to force Silicon Valley’s digital giants to respect people’s online privacy. But new rules may instead serve to strengthen Facebook’s and Google’s hegemony and extend their lead on the internet,” note Wakabayashi and Satariano in the  NYT essay. They continue on to note how “past attempts at privacy regulation have done little to mitigate the power of tech firms.” This includes regulations like Europe’s “right to be forgotten” requirement, which has essentially put Google in a privileged position as the “chief arbiter of what information is kept online in Europe.” Meanwhile, the  WSJ article opens with this interesting story about the epiphany EU regulator Věra Jourová had upon visiting with the supposed victims of the EU’s new General Data Protection Regulation, or GDPR:
When the European Union’s justice commissioner traveled to California to meet with Google and Facebook last fall, she was expecting to get an earful from executives worried about the Continent’s sweeping new privacy law. Instead, she realized they already had the situation under control. “They were more relaxed, and I became more nervous,” said the EU official, Věra Jourová. “They have the money, an army of lawyers, an army of technicians and so on.”
Indeed they do. And that means they are better positioned to absorb the significant costs of compliance associated with the new GDPR rules, which are somewhat ambiguous and will require a great deal of ongoing interpretation and legal wrangling. The Journal essay also cites an unnamed Brussels lobbyist for a media-measurement firm saying, “The politicians wanted to teach Google and Facebook a lesson. And yet they favor them.” Consider this paragraph from the WSJ essay about how the two firms worked diligently to come into compliance with the new GDPR regulations:
Once the law passed in spring 2016, Google and Facebook threw people at the problem. Google involved lawyers in the U.S., Ireland, Brussels and elsewhere to pore over contracts and procedures, said people close to the company. Facebook mobilized hundreds of people in what it describes as the largest interdepartmental team it has ever assembled. Facebook lawyers spent a year scrutinizing the law’s lengthy text. Designers and engineers then toiled over how to implement changes, according to Stephen Deadman, Facebook’s global deputy chief privacy officer. During the process, Facebook got frequent access to regulators across Europe. It met with Helen Dixon, the data protection commissioner in Ireland, where the company bases its European operations, and her staff to run through changes Facebook was planning. Ms. Dixon’s agency provided the firm with feedback on the wording of its consent requests, Facebook said.
Now ask yourself how many other smaller existing or new firms would be in a position to do the same thing. Answer: Not many. We’re already seeing the deleterious effects of the GDPR on market structure, the Journal reports. “Some advertisers are planning to shift money away from smaller providers and toward Google and Facebook,” Schechner and Kostov note. And they end their essay with the telling thoughts of Bill Simmons, co-founder and chief technology officer of Dataxu, a Boston-based company that helps buy targeted ads, who says, “It is paradoxical. The GDPR is actually consolidating the control of consumer data onto these tech giants.”

The NYT essay included a funny tidbit about how “Some privacy advocates also bristle at the idea that these new restrictions would help already powerful internet companies, noting that is a well-worn argument employed by tech giants to try to prevent future regulation.” That’s a highly unfortunate attitude. If privacy advocates really care about improving the situation on the ground, then the best way to do that is with more and better choices. Sadly, it seems that with each passing day they write off the idea of any new competition emerging to challenge today’s tech giants.

“Can Facebook be replaced?” asks Olivia Solon, writing in The Guardian today. Some probably think not, but as Solon notes, prominent Silicon Valley investor Jason Calacanis, an early investor in several high-profile tech companies including Uber, certainly hopes so. He has launched a competition to find a “social network that is actually good for society,” and his Openbook Challenge will offer seven “purpose-driven teams” $100,000 in investment to build a billion-user social network that could replace the technology titan while protecting consumer privacy. In a blog post announcing the Challenge, Calacanis wrote: “All community and social products on the internet have had their era, from AOL to MySpace, and typically they’re not shut down by the government — they’re slowly replaced by better products. So, let’s start the process of replacing Facebook.”

I don’t have any idea whether this Openbook Challenge will succeed. It’s hard building big, scalable digital platforms that satisfy the diverse needs of a diverse world. But this is exactly the sort of innovation that we should be encouraging. Even the very threat of new competition will keep the big dogs on their toes. Alas, all the new regulations being considered will likely just leave us with fewer choices, and with rules that probably won’t even do all that much to truly better protect our data or privacy. But hey, at least it was all well-intentioned!

Video from TPI Event on Regulating Facebook https://techliberation.com/2018/04/19/video-from-tpi-event-on-regulating-facebook/ https://techliberation.com/2018/04/19/video-from-tpi-event-on-regulating-facebook/#comments Thu, 19 Apr 2018 13:19:54 +0000 https://techliberation.com/?p=76257

On Monday, April 16th, the Technology Policy Institute hosted an event on “Facebook & Cambridge Analytica: Regulatory & Policy Implications.” I was invited to deliver some remarks on a panel that included Howard Beales of George Washington University, Stuart Ingis of Venable LLP, Josephine Wolff of the Rochester Institute of Technology, and Thomas Lenard of TPI, who moderated. I offered some thoughts about the potential trade-offs associated with treating Facebook like a regulated public utility. I wrote an essay here last week on that topic. My remarks at the event begin at the 13:45 mark of the video.

 

The Week Facebook Became a Regulated Monopoly (and Achieved Its Greatest Victory in the Process) https://techliberation.com/2018/04/10/the-week-facebook-became-a-regulated-monopoly-and-achieved-its-greatest-victory-in-the-process/ https://techliberation.com/2018/04/10/the-week-facebook-became-a-regulated-monopoly-and-achieved-its-greatest-victory-in-the-process/#comments Tue, 10 Apr 2018 20:30:45 +0000 https://techliberation.com/?p=76253

With Facebook CEO Mark Zuckerberg in town this week for a political flogging, you might think that this is the darkest hour for the social networking giant. Facebook stands at a regulatory crossroads, to be sure. But allow me to offer a cynical take, and one based on history: Facebook is potentially poised to score its greatest victory ever as it begins the transition to regulated monopoly status, solidifying its market power, and limiting threats from new rivals.

By slowly capitulating to critics (both here and abroad) who are thirsty for massive regulation of the data-driven economy, Facebook is setting itself up as a servant of the state. In the name of satisfying some amorphous political “public interest” standard and fulfilling a variety of corporate responsibility objectives, Facebook will gradually allow itself to be converted into a sort of digital public utility or electronic essential facility.

That sounds like trouble for the firm until you realize that Facebook is one of the few companies who will be able to sacrifice a pound of flesh like that and remain alive. As layers of new regulatory obligations are applied, barriers to new innovations will become formidable obstacles to the very competitors that the public so desperately needs right now to offer us better alternatives. Gradually, Facebook will recognize this and go along with the regulatory schemes. And then eventually they will become the biggest defender of all of it.

Welcome to Facebook’s broadcast industry moment. The firm is essentially in the same position the broadcast sector was about a century ago when it started cozying up to federal lawmakers. Over time, broadcasters would warmly embrace an expansive licensing regime that would allow all parties—regulatory advocates, academics, lawmakers, bureaucrats, and even the broadcasters themselves—to play out the fairy tale that broadcasters would be good “public stewards” of the “public airwaves” to serve the “public interest.”

Alas, the actual listening and viewing public got royally shafted in this deal. Broadcasters got billions of dollars’ worth of completely free beachfront spectrum along with protected geographic monopolies. Congressional lawmakers and the unelected bureaucrats at the FCC got power to tinker with broadcast content and received other special favors (like free airtime) from their cronies in the industry. People, money, and influence floated freely between the political and business realms until at some point there really wasn’t much distinction between them. Meanwhile, the public got stuck with bland fare and limited competition for their ears and eyes. The “public interest” ended up meaning many things during this time, but it rarely had much to do with what the public actually desired—namely, more and better options for a diverse citizenry.

Of course, much the same story played out in the U.S. telecommunications market a few decades prior to the broadcast industry making their deal with the devil. The early history of telecommunications in America was characterized by competition among a variety of local and regional rivals. But it was derailed by political shenanigans. Here are a few choice paragraphs about the cronyist origins of the Bell System monopoly from a law review article that Brent Skorup and I wrote back in 2013 [footnotes omitted]. As you read it, imagine how similar well-intentioned regulations might play out for Facebook:

… this intensely competitive, pro-consumer free-for-all would be derailed by AT&T’s brilliant strategy to use the government to accomplish what it could not in the free market: eliminate its rivals. In 1907, Theodore Newton Vail became AT&T’s president. He had a clear vision: achieving “universal service” (in the form of interconnected and fully integrated systems) by eliminating rivals and consolidating networks. Befriending lawmakers and regulators was a crucial component of this strategy. While many policymakers nominally supported the idea of competition, they were more preoccupied with achieving widespread, interconnected network coverage. Vail capitalized on that impulse. On December 19, 1913, the government and AT&T reached the “Kingsbury Commitment.” Named after AT&T vice president Nathan C. Kingsbury, who helped negotiate the terms, the agreement outlined a plan whereby AT&T agreed not to acquire any other independent companies while also allowing other competitors to interconnect with the Bell System. The Kingsbury Commitment was thought to be pro-competitive, yet it was hardly an altruistic agreement on AT&T’s part. Regulators did not interpret the agreement so as to restrict AT&T from acquiring any new telephone systems, but only to require that an equal number be sold to an independent buyer for each system AT&T purchased. Hence, the Kingsbury Commitment contained a built-in incentive for network swapping (trading systems and solidifying territorial monopolies) rather than continued competition.  “The government solution, in short, was not the steamy, unsettling cohabitation that marks competition but rather a sort of competitive apartheid, characterized by segregation and quarantine,” observe telecom legal experts Michael Kellogg, John Thorne, and Peter Huber.  Thus, the move toward interconnection, while appearing to assist independent operators, actually allowed AT&T to gain greater control over the industry. 
“Vail chose at this time to put AT&T squarely behind government regulation, as the quid pro quo for avoiding competition,” explains [Richard] Vietor.  “This was the only politically acceptable way for AT&T to monopolize telephony,” he notes.  AT&T’s 1917 annual report confirms this fact, stating, “[with a] combination of like activities under proper control and regulation, the service to the public would be better, more progressive, efficient, and economical than competitive systems.”

So much for “the public interest”! If the last century’s worth of communications and media regulation teaches us anything, it’s that good intentions only get you so far in this world. Many of the lawmakers and regulators who allowed themselves to be duped by big corporations asking for protection from competition probably thought they were doing the right thing. Those policymakers may even have believed that they were actually encouraging innovation and competition through some of their regulatory actions. Alas, things did not turn out that way. We the public were denied real, meaningful choices and innovations because of these misguided policies.

And so now it’s Facebook’s turn to become part of this sordid tale. Zuckerberg has already made it clear that he is open to regulation and that his firm would also start enforcing new European data rules globally. And after this week’s political circus in Congress, the floodgates will be wide open and everyone’s regulatory pet peeve will be up for political consideration, which is exactly what happened for broadcasters and communications in past decades.

Every crackpot idea under the sun will be on the table but the most extreme versions of those proposals will be beaten back just enough to ensure that Facebook can offer up its pound of sacrificial flesh each time without running the risk of killing the patient entirely. Again, this was always part of the broadcast and communications regulatory playbook as well. So long as they were guaranteed a fairly stable market return and protection from pesky new innovators, the firms were willing to go along with the deal.

The “deal” in this case between Facebook and regulators won’t be so explicitly cronyist as it was for broadcasters and communications companies, however. The days of price controls, rate-of-return regulation, and formal line of business restrictions are likely over. Everyone now recognizes that regulations creating formal barriers to innovation and entry are a bad idea and, as a result, they are usually rejected.

But laws and regulations can sometimes create informal or hidden barriers to innovation and entry, even when they are well-intentioned. And that’s what could happen here as this latest Facebook fiasco leads to calls for seemingly innocuous things like transparency and disclosure requirements, restrictions on “bad speech,” advertising and data collection regulations, “fiduciary” responsibilities, “algorithmic accountability” efforts, and so on. Facebook hasn’t wanted to adopt some of these things in the past, but policymakers and regulatory activists will now push it aggressively to do so. As Zuckerberg and Facebook cozy up to those same policymakers and activists and begin talking about a “broader view of responsibility,” the transition to the firm’s next phase as a quasi-public utility will get underway.

The rich irony of all this is that the same regulatory advocates who are cheering on this week’s developments as well as the coming regulatory avalanche will be the ones howling the loudest if and when only Facebook is left standing in the social media universe. In fact, that’s already happened in Europe, where policymakers and their burdensome top-down data protection regulations have driven most digital innovators and investors to other continents, leaving only Facebook, Google, and a handful of other (mostly U.S.-based) companies left to regulate. And then European policymakers have the audacity to cry foul about the market power of these firms! It boggles the mind how European policymakers and regulatory advocates see zero connection between their heavy-handed approach to the Digital Economy and the corresponding lack of competitors in those sectors.

But none of that will make any difference to the regulatory advocates. They want that pound of flesh, and they are going to get it. And then in Facebook they will have a regulatory plaything to toy with for years to come.

What about the public? Will we really be any better off because of any of this? How many people will want to stick with Facebook if it becomes a digital public utility or a social media version of the Post Office? That sure doesn’t sound like much fun for us. But if the new regulations imposed on Facebook end up hurting smaller rivals more and creating barriers to new entry and innovation going forward, then it’s unclear whether it makes any difference what we want because the options just won’t be there for us.

With time, Facebook will not only grow comfortable with its new regulatory status; in the name of ensuring a “level playing field,” the firm will also advocate that each and every new rule be applied to all its rivals. Again, this is how well-intentioned regulation ends up indirectly discouraging the very innovation and competitive options that we need. Broadcasters and communications companies played the “level playing field” card at every juncture to beat down new technologies and rivals.

Finally, at some point, don’t be surprised if all roads lead back to prices for digital services. Right now, social networking services like Facebook are free of charge to consumers and digital companies use advertising to support their services. Many regulatory advocates have suggested that this sort of business model is fundamentally incompatible with privacy and have wanted it strictly curtailed, if not ended altogether. Of course, if you ask the public how many of them would be willing to pay $19.95 a month for Facebook, you won’t get many takers.

I wrote a couple of law review articles talking about the “privacy paradox” and consumer “willingness to pay” for privacy more generally. All the evidence suggests that consumer willingness to pay for privacy is significantly lower than privacy advocates would prefer. But if, in the name of protecting privacy, prices get pushed up or imposed as a matter of public policy, then we will have entered a truly surreal moment in the history of regulatory policy because we will have inverted the presumption that consumer welfare is better served by lower prices. Over the past century, the purpose of most public utility regulation was lower prices, higher quality, and more choice. The modern Digital Economy has largely achieved those goals without heavy-handed regulation. But now, with the emerging regulatory regime looming for Facebook and social media more generally, we might end up with a sort of bizarro policy world in which we make people pay more in the name of making them better off!

I hope I’m wrong about everything I’ve said here. It would be troubling if we enter an era of less competition, less innovation, and lower quality information services. But to borrow a quote from my favorite sci-fi show, “all of this has happened before, and all of this will happen again.” And regulatory history tends to repeat. We shouldn’t be surprised, therefore, when some forget the ugly history of public utility-style regulation or broadcast era “public interest” mandates and we find ourselves stuck right back in the hole that we’ve been trying to dig ourselves out of for so many decades.

]]>
https://techliberation.com/2018/04/10/the-week-facebook-became-a-regulated-monopoly-and-achieved-its-greatest-victory-in-the-process/feed/ 3 76253
A welcome restructuring at the FCC https://techliberation.com/2018/01/09/a-welcome-restructuring-at-the-fcc/ https://techliberation.com/2018/01/09/a-welcome-restructuring-at-the-fcc/#comments Tue, 09 Jan 2018 20:45:33 +0000 https://techliberation.com/?p=76222

The FCC released a proposed Order today that would create an Office of Economics and Analytics. Last April, Chairman Pai proposed this data-centric office. There are about a dozen bureaus and offices within the FCC and this proposed change in the FCC’s organizational structure would consolidate a few offices and many FCC economists and experts into a single office.

This is welcome news. Several years ago when I was in law school, I was a legal clerk for the FCC Wireless Bureau and for the FCC Office of General Counsel. During that ten-month stint, I was surprised by the number of economists at the FCC, all of them excellent. I assisted several of them closely (and helped organize what one FCC official dubbed, unofficially, “The Economists’ Cage Match” for outside experts sparring over the competitive effects of the proposed AT&T-T-Mobile merger). However, my impression even during my limited time at the FCC was well-stated by Chairman Pai in April:

[E]conomists are not systematically incorporated into policy work at the FCC. Instead, their expertise is typically applied in an ad hoc fashion, often late in the process. There is no consistent approach to their use.

And since the economists are sprinkled about the agency, their work is often “siloed” within their respective bureaus. Economics as an afterthought in telecom is not good for the development of US tech industries, nor for consumers.

As Geoffrey Manne and Allen Gibby said recently, “the future of telecom regulation is antitrust,” and the creation of the OEA is a good step in line with global trends. Many nations–like the Netherlands, Denmark, Spain, Japan, South Korea, and New Zealand–are restructuring legacy telecom regulators. The days of public and private telecom monopolies and of discrete, separate communications, computer, and media industries (and thus separate bureaus) are past. Convergence, driven by IP networks and deregulation, has created these trends and resulted in sometimes dramatic restructuring of agencies.

In Denmark, for instance, as Roslyn Layton and Joe Kane have written, national parties and regulators took inspiration from the deregulatory plans of the Clinton FCC. The Social Democrats, the Radical Left, the Left, the Conservative People’s Party, the Socialist People’s Party, and the Center Democrats agreed in 1999:

The 1990s were focused on breaking down old monopoly; now it is important to make the frameworks for telecom, IT, radio, TV meld together—convergence. We believe that new technologies will create competition. It is important to ensure that regulation does not create a barrier for the possibility of new converged products; for example, telecom operators should be able to offer content if they so choose. It is also important to ensure digital signature capability, digital payment, consumer protection, and digital rights. Regulation must be technologically neutral, and technology choices are to be handled by the market. The goal is to move away from sector-specific regulation toward competition-oriented regulation. We would prefer to handle telecom with competition laws, but some special regulation may be needed in certain cases—for example, regulation for access to copper and universal service.

This agreement was followed up by the quiet shuttering of NITA, the Danish telecom agency, in 2011.

Bringing economic rigor to the FCC’s notoriously vague “public interest” standard seemed to be occurring (slowly) during the Clinton and Bush administrations. However, during the Obama years, this progress was derailed, largely by the net neutrality silliness, which not only distracted US regulators from actual problems like rural broadband expansion but also reinvigorated the media-access movement, whose followers believe the FCC should have a major role in shaping US culture, media, and technologies.

Fortunately, those days are in the rearview mirror. The proposed creation of the OEA represents another pivot toward the likely future of US telecom regulation: a focus on consumer welfare, competition, and data-driven policy.

]]>
https://techliberation.com/2018/01/09/a-welcome-restructuring-at-the-fcc/feed/ 4 76222
Who needs a telecom regulator? Denmark doesn’t. https://techliberation.com/2017/03/27/who-needs-a-telecom-regulator-denmark-doesnt/ https://techliberation.com/2017/03/27/who-needs-a-telecom-regulator-denmark-doesnt/#comments Mon, 27 Mar 2017 19:42:52 +0000 https://techliberation.com/?p=76124

US telecommunications laws are in need of updates. US law states that “the Internet and other interactive computer services” should be “unfettered by Federal or State regulation,” but regulators are increasingly imposing old laws and regulations onto new media and Internet services. Further, Federal Communications Commission actions often duplicate or displace general competition laws. Absent congressional action, old telecom laws will continue to delay and obstruct new services. A new Mercatus paper by Roslyn Layton and Joe Kane shows how governments can modernize telecom agencies and laws.

Legacy Laws

US telecom laws are codified in Title 47 of the US Code and enforced mostly by the FCC. That the first eight sections of US telecommunications law are devoted to the telegraph, the killer app of 1850, illustrates congressional inaction towards obsolete regulations.

In the last decade, therefore, several media, Internet, and telecom companies inadvertently stumbled into Communications Act quagmires. An Internet streaming company, for instance, was bankrupted for upending the TV status quo established by the FCC in the 1960s; FCC precedents mean broadcasters can be credibly threatened with license revocation for airing a documentary critical of a presidential candidate; and the thousands of Internet service providers across the US are subjected to laws designed to constrain the 1930s AT&T long-distance phone monopoly.

US telecom and tech laws, in other words, are a shining example of American “kludgeocracy”–a regime of prescriptive and dated laws whose complexity benefits special interests and harms innovators.  These anti-consumer results led progressive Harvard professor Lawrence Lessig to conclude in 2008 that “it’s time to demolish the FCC.”  While Lessig’s proposal goes too far, Congress should listen to the voices on the right and left urging them to sweep away the regulations of the past and rationalize telecom law for the 21st century.

Modern Telecom Policy in Denmark

An interesting new Mercatus working paper explains how Denmark took up that challenge. The paper, “Alternative Approaches to Broadband Policy: Lessons on Deregulation from Denmark,” is by Denmark-based scholar Roslyn Layton, who served on President Trump’s transition team for telecom policy, and Joe Kane, a master’s student in the GMU econ department.

The “Nordic model” is often caricatured by American conservatives (and progressives like Bernie Sanders) as socialist control of industry. But as AEI’s James Pethokoukis and others point out, it’s time both sides updated their 1970s talking points. “[W]hen it comes to regulatory efficiency and business freedom,” Tyler Cowen recently noted, “Denmark has a considerably higher [Heritage Foundation] score than does the U.S.”

Layton and Kane explore Denmark’s relatively free-market telecom policies. They explain how Denmark modernized its telecom laws over time as technology and competition evolved. Critically, the center-left government eliminated Denmark’s telecom regulator in 2011 in light of the “convergence” of services to the Internet. Scholars noted,

Nobody seemed to care much—except for the staff who needed to move to other authorities and a few people especially interested in IT and telecom regulation.

Even-handed, light telecom regulation performs pretty well. Denmark, along with South Korea, leads the world in terms of broadband access. The country also has a modest universal service program that depends primarily on the market. Further, similar to other Nordic countries, Denmark permitted a voluntary forum, including consumer groups, ISPs, and Google, to determine best practices and resolve “net neutrality” controversies.

Contrast Denmark’s tech-neutral, consumer-focused approach with recent proceedings in the United States. One of the Obama FCC’s major projects was attempting to regulate how TV streaming apps functioned–despite the fact that TV has never been more abundant and competitive. Countless hours of staff and industry time were wasted (Trump’s election killed the effort) because advocates saw the opportunity to regulate the streaming market with a law intended to help Circuit City (RIP) sell a few more devices in 1996. The biggest waste of government resources has been the “net neutrality” fight, which stems from prior FCC attempts to apply 1930s telecom laws to 1960s computer systems. Old rules haphazardly imposed on new technologies create a compliance mindset in our tech and telecom industries. Worse, these unwinnable fights over legal minutiae prevent FCC staff from working on issues where they can help consumers.

Americans deserve better telecom laws, but the inscrutability of FCC actions means consumers don’t know what to ask for. Layton and Kane show that alternative frameworks are available. They highlight Denmark’s political and cultural differences from the US. Nevertheless, Denmark’s telecom reforms and pro-consumer policies deserve study and emulation. The Danes have shown how tech-neutral, consumer-focused policies not only expand broadband access but also reduce government duplication and overreach.

]]>
https://techliberation.com/2017/03/27/who-needs-a-telecom-regulator-denmark-doesnt/feed/ 1 76124
No, the Telecom Act didn’t destroy phone and TV competition https://techliberation.com/2016/08/16/no-the-telecom-act-didnt-destroy-phone-and-tv-competition/ https://techliberation.com/2016/08/16/no-the-telecom-act-didnt-destroy-phone-and-tv-competition/#comments Tue, 16 Aug 2016 15:18:28 +0000 https://techliberation.com/?p=76067

I came across an article last week in the AV Club that caught my eye. The title is: “The Telecommunications Act of 1996 gave us shitty cell service, expensive cable.” The Telecom Act is the largest update to the regulatory framework set up in the 1934 Communications Act. The basic thrust of the Act was to update the telephone laws because the AT&T long-distance monopoly had been broken up for a decade. The AV Club is not a policy publication but it does feature serious reporting on media. This analysis of the Telecom Act and its effects, however, omits or obfuscates important information about dynamics in media since the 1990s.

The AV Club article offers an illustrative collection of left-of-center critiques of the Telecom Act. Similar to Glass-Steagall repeal or Citizens United, many on the left are apparently citing the Telecom Act as a kind of shorthand for deregulatory ideology run amok. And like Glass-Steagall repeal and Citizens United, most of the critics fundamentally misstate the effects and purposes of the law. Inexplicably, the AV Club article relies heavily on a Common Cause white paper from 2005. Now, Common Cause typically does careful work but the paper is hopelessly outdated today. Eleven years ago Netflix was a small DVD-by-mail service. There was no 4G LTE (2010). No iPhone or Google Android (2007). And no Pandora, IPTV, and a dozen other technologies and services that have revolutionized communications and media. None of the competitive churn since 2005, outlined below, is even hinted at in the AV Club piece. The actual data undermine the dire diagnoses about the state of communications and media from the various critics cited in the piece.

Competition in Telephone Service

Let’s consider the article’s provocative claim that the Act gave us “the continuing rise of cable, cellphone, and internet pricing.” Despite this empirical statement, no data are provided to support it. Instead, the article mostly quotes progressive platitudes about the evils of industry consolidation. I suppose platitudes are necessary because on most measures there’s been substantial, measurable improvement in phone and Internet service since the 1990s. In fact, the cost-per-minute of phone service has plummeted, in part, because of the competition unleashed by the Telecom Act. (Relatedly, there’s been a 50-fold increase in Internet bandwidth with no price increase.)

The Telecom Act undid much of the damage caused by decades of monopoly protection of telephone and cable companies by federal and state governments. For decades it was accepted that local telephone and cable TV service were natural monopolies. Regulators therefore prohibited competitive entry. The Telecom Act (mostly) repudiated that assumption and opened the door for cable companies and others to enter the telephone marketplace. The competitive results were transformative. According to FCC data, incumbent telephone companies, the ones given monopoly protection for decades, have lost over 100 million residential subscribers since 2000. Most of those households went wireless only, but new competitors (mostly cable companies) have added over 32 million residential phone customers and may soon overtake the incumbents. The chart below breaks out connections by technology (VoIP, wireless, POTS), not incumbency, but the churn between competitors is apparent.

[Chart: Phone connections by technology, Nov. 7, 2014]

Further, while the Telecom Act was mostly about local landlines, not cellular networks, we can also dispense with the AV Club claim that dominant phone companies are increasing cellphone bills. Again, no data are cited. In fact, in quality-adjusted terms, the price of cell service has plummeted. In 1999, for instance, a typical cell plan was for regional coverage and offered 200 voice minutes for about $55 per month (2015 dollars). Until about 2000, there was no texting (1999 was the first year texting between carriers worked) and no data was included. In comparison, for that same price today you can find a popular plan that includes, for all of North America, unlimited texting and voice minutes, plus 10 GB of 4G LTE data. Carriers spend tens of billions of dollars annually on maintaining and upgrading cellular networks and as a result, millions of US households are dropping landline connections (voice and broadband) for smartphones alone.

Competition in Television and Media

The critics of cable deregulation completely misunderstand and misstate the role of competition in the TV industry. Media quality is harder to measure, but it’s not a stretch to say that quality is higher than ever. Few dispute that we are in the Golden Age of Media, resulting from the proliferation of niche programming provided by Netflix, podcasts, Hulu, HBO, FX, and others. This virtual explosion in programming came about largely because there are more buyers (distributors) of programming and more cutthroat competition for eyeballs.

Again, the AV Club quotes the Common Cause report: “Roughly 98 percent of households with access to cable are served by only one cable company.” Quite simply, this is a useless stat. Why do we care how many coaxial cable companies are in a neighborhood? Consumers care about outputs–price, programming, quality, customer service–and the number of competitors, regardless of the type of transmission network, which can be cellular, satellite, coaxial cable, fiber, or copper.

Look beyond the contrived “number of coaxial competitors” measure and it’s clear that most cable companies face substantial competition. The Telecom Act is a major source of the additional competition, particularly telco TV. Since passage of the Telecom Act, cable TV’s share of the subscription TV market fell from 95% to nearly 50%.

[Chart: Pay TV market share]

The Telecom Act repealed a decades-old federal policy that largely prohibited telephone companies from competing with cable TV providers. Not much changed for telco TV until the mid-2000s, when broadband technology improved and when the FCC freed phone companies from “unbundling” rules that forced telcos to lease their networks to competitors at regulated rates. In this investment-friendly environment, telephone companies began upgrading their networks for TV service and began purchasing and distributing programming. Since 2005, telcos have attracted about 13 million households and cable TV’s market share fell from about 70% to 53%. Further, much of consumer dissatisfaction with TV is caused by legacy regulations, not the Telecom Act. If cable, satellite, and phone companies were as free as Netflix and Hulu to bundle, price, and purchase content, we’d see lower prices and smaller bundles. 

[Chart: TV regulations]

The AV Club’s focus on Clear Channel [sic] and now-broken-up media companies is puzzling and must stem from the article’s reliance on the 2005 Common Cause report. The bête noire of media access organizations circa 2005 was Clear Channel, ostensibly the sort of corporate media behemoth created by the Telecom Act. The hysteria proved unfounded.

Clear Channel broadcasting was rebranded in 2014 to iHeartRadio and its operations in the last decade do not resemble the picture described in the AV Club piece, that of a “radio giant” with “more than 1200 stations.” While still a major player in radio, since 2005 iHeartRadio’s parent company went private, sold all of its TV stations and hundreds of its radio stations, and shed thousands of employees. The firm has serious financial challenges because of the competitive nature of the radio industry, which has seen entry from the likes of Pandora, Spotify, Google, and Apple.

The nostalgia for Cold War-era radio is also strange for an article written in the age of Pandora, Spotify, iTunes, and Google Play. The piece quotes media access scholar Robert McChesney about radio in the 1960s:

Fifty years ago when you drove from New York to California, every station would have a whole different sound to it because there would be different people talking. You’d learn a lot about the local community through the radio, and that’s all gone now. They destroyed radio. It was assassinated by the FCC and corporate lobbyists.

This oblique way of assessing competition–driving across the country–is necessary because local competition was actually relatively scarce in the 1960s. There were only about 5000 commercial radio stations in the US, which sounds like a lot except when you consider the choice and competition today. Today, largely because of digital advancements and channel splitting, there are more than 10 times as many available broadcast channels, as well as hundreds of low-power stations. Combined with streaming platforms, competition and choice are much more abundant today. Everyone in the US can, with an inexpensive 3G plan and a radio, access millions of niche podcasts and radio programs featuring music, hobbies, entertainment, news, and politics.

The piece quotes the 2005 report, alarmed that “just five companies—Viacom, the parent of CBS, Disney, owner of ABC, News Corp, NBC and AOL, owner of Time Warner—now control 75 percent of all primetime viewing.” Again, I don’t understand why the article quotes decade-old articles about market share without updates. There is no mention that Viacom and CBS split up in 2005 and News Corp. and Fox split in 2013. The hysteria surrounding NBC, AOL, and Time Warner’s failed commercial relationships has been thoroughly explored and discredited by my colleague Adam Thierer and I’ll point you to his piece. As Adam has also documented, broadcast networks have been losing primetime audience share since at least the late 1970s, first to cable channels, then to streaming video. And nearly all networks, broadcast and cable, are seeing significant drops in audience as consumers turn to Internet streaming and gaming. Market power and profits in media are often short lived.

The article then decries the loss of local and state news reporting. It’s strange to blame the Telecom Act for newspaper woes since newsroom shrinkage is a global, not an American, phenomenon with well-understood causes (the loss of classifieds and increased competition from Web reporting). And, as I’ve pointed out, the greatest source of local and state reporting is local papers, but the FCC has largely prohibited papers from owning radio and TV broadcasters (which would provide papers a piece of TV’s lucrative ad and retrans revenue) for decades, even as local newspapers downsize and fail.

The article was a fascinating read if only because it reveals how many left-of-center prognostications about media aged poorly. Those on the right have their own problems with the Act, namely its vastly different regulatory regimes (“telecommunications,” “wireless,” “television”) in a world of broadband and convergence. But useful reform means diagnosing what inhibits competition and choice in media and communications markets. Many of the competitive problems in fact arise from the past enforcement of natural monopoly restrictions. Media and communications have seen huge quality improvements since 1996 because the Telecom Act rejected the natural monopoly justifications for regulation. The Telecom Act has proven unwieldy but it cannot be blamed for nonexistent problems in phone and TV.

]]>
https://techliberation.com/2016/08/16/no-the-telecom-act-didnt-destroy-phone-and-tv-competition/feed/ 3 76067
Clinton’s Tech and Telecom Agenda: Good News for Communications Act Reform? https://techliberation.com/2016/06/29/clintons-tech-and-telecom-agenda-good-news-for-communications-act-reform/ https://techliberation.com/2016/06/29/clintons-tech-and-telecom-agenda-good-news-for-communications-act-reform/#respond Wed, 29 Jun 2016 15:13:10 +0000 https://techliberation.com/?p=76047

Yesterday, Hillary Clinton’s campaign released a tech and innovation agenda. The document covers many tech subjects, including cybersecurity, copyright, and tech workforce investments, but I’ll narrow my comments to the areas I have the most expertise in: broadband infrastructure and Internet regulation. These roughly match up, respectively, to the second and fourth sections of the five-section document.

On the whole, the broadband infrastructure and Internet regulation sections list good, useful priorities. The biggest exception is Hillary’s strong endorsement of the Title II rules for the Internet, which, as I explained in the National Review last week, is a heavy-handed regulatory regime that is ripe for abuse and will be enforced by a politicized agency.

Her tech agenda doesn’t mention a Communications Act rewrite but I’d argue it’s implied in her proposed reforms. Further, her statements last year at an event suggest she supports significant telecom reforms.  In early 2015, Clinton spoke to tech journalist Kara Swisher (HT Doug Brake) and it was pretty clear Clinton viewed Title II as an imperfect and likely temporary effort to enforce neutrality norms. In fact, Clinton said she prefers “a modern, 21st-century telecom technology act” to replace Title II and the rest of the 1934 Communications Act.

It’s refreshing to see that, regarding broadband and Internet policy, there’s significant bipartisan agreement that government’s role should be primarily to provide public goods, protect consumers, and lower regulatory barriers, not micromanage providers, deploy public networks, and shape social policy. (Niskanen Center’s Ryan Hagemann similarly agrees that, with the exception of Title II, there’s a lot to like in Clinton’s tech agenda.) In fact, 85% of the text in Clinton’s broadband infrastructure and Internet policy sections could be copied-and-pasted to a free-market Republican presidential candidate’s tech platform and it would be right at home.

It’s difficult to know what to make of her pledge to defend and enforce Title II. I suspect it represents a promise that she won’t reverse the FCC’s Title II determination, not that she’s particularly enamored with Title II. Clinton (and President Bill Clinton) seems to prefer a more hands-off approach to the Internet.

The Good

The document emphasizes that all types of broadband should be encouraged, including “fiber, wireless, satellite, and other technologies.” It’s nice to see this flexibility because many advocates are pushing a fiber-only agenda that is simply infeasible and tremendously expensive. (Professor Susan Crawford has said bluntly that governments should “refuse to fund last-mile solutions that aren’t primarily fiber.”) The reality, acknowledged by Google and others, is that fixed wireless and satellite broadband are needed to affordably connect households in rural and suburban areas for the foreseeable future. A fiber-only policy, because it’s impractically expensive, would have rather regressive effects, and Clinton’s all-of-the-above strategy is commendable.

There’s also a recognition in the document that broadband networks are not natural monopolies and can be competitive, especially if the federal government works to lower entry barriers. Government policy for several decades was that telephone and cable networks were natural monopolies. Increasingly, broadband is competitive, especially as consumers go wireless only, but we’re still living with the negative side effects of past policies. The Clinton document emphasizes the need to reduce local regulatory barriers, streamline permitting, and allow nondiscriminatory access to conduits, poles, and rights-of-way controlled by local governments.

Spectrum policy is critical to any technology agenda, and it’s a priority for Clinton. She emphasizes the need for more spectrum and for identifying and reclaiming underutilized federal spectrum, a subject I’ve written about. The federal government uses spectrum worth hundreds of billions of dollars and pays very little for that asset, so there are significant consumer gains available.

Clinton’s call to reinvigorate antitrust enforcement in technology and telecommunications is also noteworthy. Though the DOJ and FTC can overreach, they are better equipped to handle broadband and tech competition issues than the FCC.

The Not So Good

In the “Close the Digital Divide” item, there are some problems. In short: the right goal with the wrong tools. The legacy broadband subsidy programs, which Clinton wishes to retain and expand, are fragmented and poorly designed. They essentially function as corporate welfare programs and should be eliminated in favor of consumer-focused subsidies.

One item says that by 2020 “100 percent of households in America will have the option of affordable broadband.” Literally connecting all American homes to the Internet is impossible today because millions of Americans simply don’t want the Internet. According to Pew, 70% of non-adopters are just not interested, and many would not subscribe no matter the price. (Relatedly, after over a century of telephone service and tens of billions in federal universal service funding, US phone subscribership has hovered around 95% for 20 years.)

To expand broadband access, Clinton promises to fund the FCC’s Connect America Fund (CAF), the Agriculture Department’s Rural Utilities Service (RUS) programs, and the Broadband Technology Opportunities Program (BTOP). They differ somewhat in purpose and strategy, but their major flaw is the same: they primarily fund and lend to broadband providers, not subscribers.

As I’ve noted before,

A direct subsidy plus a menu of options is a good way to expand access to low-income people (assuming there are effective anti-fraud procedures). A direct subsidy is more or less how the US and state governments help lower-income families afford products and services like energy, food, housing, and education. For energy bills there’s LIHEAP. For grocery bills there’s SNAP and WIC. For housing, there’s Section 8 vouchers. For higher education, there’s Pell grants.

By subsidizing providers, not consumers, there’s immense waste, corruption, and featherbedding. For instance, last year Tony Romm at Politico published an in-depth investigation of the stimulus-funded RUS broadband program. The waste in the RUS broadband program is appalling, and the program will serve only a fraction of the subscribers that were promised. As one GAO researcher said about the program, “We are left with a program that spent $3 billion and we really don’t know what became of it.” “Even more troubling,” Romm explained, “RUS can’t tell which residents its stimulus dollars served.”

Similarly, Clinton cites E-rate as a model for connecting “anchor institutions” like libraries and schools. E-rate likewise primarily benefits telecom and tech companies, not the intended recipients. As OECD researchers have found regarding government investment in education technology,

The results…show no appreciable improvements in student achievement in reading, mathematics or science in the countries that had invested heavily in ICT for education.

Rather than the E-rate model, a smarter policy is to provide block grants to schools and institutions to give them more flexibility to optimize according to their own perceived technology and education needs.  The federal government already started doing this to a limited extent with Section IV of the 2015 Every Student Succeeds Act, which allocates $1.6 billion annually in block grants to states for tech-focused education spending. Policymakers should eliminate the expensive, dysfunctional E-rate program, which is funded by regressive fees on telephone bills, and expand the block grants somewhat to make up the shortfall.

Altogether, there’s a lot to like in Clinton’s broadband infrastructure and Internet policy agenda. There are hiccups–namely Title II enforcement and retention of broken broadband and tech subsidy programs–and hopefully her advisors will reexamine those. Given Clinton’s past statements about the need for a modernized Communications Act in place of Title II, she and her advisors have developed a forward-looking telecom agenda.

]]>
https://techliberation.com/2016/06/29/clintons-tech-and-telecom-agenda-good-news-for-communications-act-reform/feed/ 0 76047
New Article at Harvard JLPP: The FCC’s Transaction Reviews May Violate the First Amendment https://techliberation.com/2016/06/08/new-article-at-harvard-jlpp-the-fccs-transaction-reviews-may-violate-the-first-amendment/ https://techliberation.com/2016/06/08/new-article-at-harvard-jlpp-the-fccs-transaction-reviews-may-violate-the-first-amendment/#comments Wed, 08 Jun 2016 19:40:07 +0000 https://techliberation.com/?p=76035

The FCC’s transaction reviews have received substantial scholarly criticism lately. The FCC has increasingly used its license transaction reviews as an opportunity to engage in ad hoc merger reviews that substitute for formal rulemaking. FCC transaction conditions since 2000 have ranged from requiring AOL-Time Warner to make future instant messaging services interoperable, to price controls on broadband for low-income families, to mandating that merging parties donate $1 million to public safety initiatives.

In the last few months alone,

  • Randy May and Seth Cooper of the Free State Foundation wrote a piece arguing that the transaction reviews contravene rule-of-law norms.
  • T. Randolph Beard et al. at the Phoenix Center published a research paper about how the FCC’s informal bargaining during mergers has become much more active and politically motivated in recent years.
  • Derek Bambauer, law professor at the University of Arizona, published a law review article that criticized the use of informal agency actions to pressure companies to act in certain ways. These secretive pressures “cloak what is in reality state action in the guise of private choice.”

This week, in the Harvard Journal of Law and Public Policy, my colleague Christopher Koopman and I added to this recent scholarship on the FCC’s controversial transaction reviews.

We echo the argument that the FCC’s merger policies undermine the rule of law. Firms have no idea which policies they’ll need to comply with to receive transaction approval. We also note that the FCC is motivated to shift from formal regulation, which is time-consuming and subject to judicial review, to “regulation by transaction,” which has fewer restraints on agency action. The FCC and the courts have put few meaningful limits on what can be coerced from merging firms. Many concessions extracted from merging firms are policies that the FCC is unwilling to pursue via formal rulemaking or, sometimes, is outright prohibited by law from imposing. And since a firm’s concessions in this coercive process are nominally voluntary, firms typically can’t sue.

We point out, further, that the FCC has a potentially damaging legal issue on its hands. Since the agency is now extracting concessions related to content distribution and TV and radio programming, its transaction review authority may be presumptively unconstitutional and subject to facial First Amendment challenges. That means many parties can challenge the law, not simply the ones burdened by conditions (who fear FCC retaliation).

Content-neutral licensing laws, like the FCC’s transaction review authority, are presumptively unconstitutional when there’s a risk that public officials will intimidate speakers about content. We cite for this proposition the Supreme Court’s decision in City of Lakewood v. Plain Dealer Publishing Co., a 1988 case striking down as unconstitutional a city requirement that newspapers seek a public interest determination from public officials before installing newsracks. As the Court said, for rules with a “nexus to expression,”

a facial [First Amendment] challenge lies whenever a licensing law gives a government official or agency substantial power to discriminate based on the content or viewpoint of speech by suppressing disfavored speech or disliked speakers.

The public officials in City of Lakewood hadn’t even pressured newspapers about content; the mere potential for intimidation was a constitutional violation. If its authority were challenged, the FCC would be in worse shape than the public officials in City of Lakewood. Unlike those local officials, the FCC has used licensing to pressure firms to add certain types of programming. So the law certainly has the nexus to expression that the Supreme Court requires for a facial challenge.

We highlight, for instance, the many concessions related to content in the 2010 Comcast-NBCU merger. Comcast-NBCU conceded to create children’s, public interest, and Spanish-language TV and video-on-demand programming, relinquish editorial control over Hulu programming, and spend millions of dollars on digital literacy and FDA nutritional TV public service announcements. In that merger and many others, the FCC conditioned approval on compliance with open access and net neutrality policies. As I and others have pointed out, net neutrality rules also threaten free speech rights.

We conclude with some policy recommendations to avoid a constitutional problem for the FCC, including congressional repeal of the FCC’s transaction review authority. We point out that the FCC actually has Clayton Act authority to review common carrier mergers, but the FCC refuses to use it, likely because the agency views traditional competition analysis as too constraining. In our view, unless or until the FCC promulgates predictable guidelines about what is relevant in a transaction review and stays away from content distribution issues, the FCC’s transaction review authority is vulnerable to legal challenge.

]]>
https://techliberation.com/2016/06/08/new-article-at-harvard-jlpp-the-fccs-transaction-reviews-may-violate-the-first-amendment/feed/ 1 76035
Apple eBooks Case: Supreme Court Refuses to Defend Permissionless Innovation https://techliberation.com/2016/03/07/apple-ebooks-case-supreme-court-refuses-to-defend-permissionless-innovation/ https://techliberation.com/2016/03/07/apple-ebooks-case-supreme-court-refuses-to-defend-permissionless-innovation/#comments Mon, 07 Mar 2016 17:59:37 +0000 https://techliberation.com/?p=76003

This article originally appeared at techfreedom.org.

Today, the Supreme Court declined to review a Second Circuit decision that held Apple violated the antitrust laws by fixing ebook prices when, in preparing to launch its own iBookstore, it negotiated a deal with publishers that would allow them to set prices above Amazon’s one-size-fits-all $9.99 price. The appeals court reached its decision by applying the strict per se rule, which ignores any procompetitive justifications of a challenged business practice. The dissent had argued that Apple “was unwilling to [enter the ebook market] on terms that would incur a loss on e-book sales (as would happen if it met Amazon’s below-cost price),” and thus that Apple’s agreement with major publishers actually benefitted consumers by facilitating competition in the ebooks market, even if it meant higher prices for some ebooks.

The Supreme Court’s refusal to hear the case means the 2013 verdict against Apple, which resulted in a $450 million class-action settlement, will stand. The case began in 2010 when Apple negotiated with five major publishers, adopting an agency pricing model in which the publishers set a book’s price and paid a sales commission to Apple. This pricing model is distinct from Amazon’s previously dominant model, under which Amazon was allowed to unilaterally set ebook prices — often below cost, as a loss-leader strategy to encourage sales of its own Kindle reader and promote the overall Amazon platform. The Justice Department claimed that Apple’s agency model amounted to an antitrust conspiracy — and the Second Circuit agreed. Meanwhile, Apple’s entry reduced Amazon’s share of the ebooks market from 90% to 60%.

“The question here wasn’t actually whether Apple should win, but whether Apple should even be allowed to argue that its arrangement could benefit consumers,” said TechFreedom President Berin Szoka. “Apple made a strong case that its deal with publishers was critical to allowing it to compete with Amazon. The Supreme Court might or might not have found those arguments convincing, but it should have at least weighed them under antitrust’s flexible rule of reason. By letting the rigid per se rule stand as the controlling legal standard, the Court has ensured that antitrust law in general will put obsolete legal precedents from the pre-digital era above consumer welfare.”

“Business model innovation is no less essential for progress than technological innovation,” concluded Szoka. “Indeed, the two usually go hand in hand. And new business models are usually essential to unseating the first mover in new markets like ebook publishing, especially when the first mover sets artificially low prices. Categorically banning deals that attempt to rebalance pricing power between distributors and publishers in multi-sided markets likely means strangling competition in its crib. Unfortunately, the real costs of today’s decision will go unseen: without an opportunity to defend new business models, innovative companies like Apple will be less likely to attempt to disrupt the dominance of entrenched incumbents. Consumers will simply never know how much today’s decision cost them.”

Read more about the argument for reversing the Second Circuit and applying a rule of reason to novel business arrangements in the amicus brief filed by the International Center for Law & Economics and eleven leading antitrust scholars. Truth on the Market, a blog dedicated to law and economics, held a blog symposium on the case last month.

]]>
https://techliberation.com/2016/03/07/apple-ebooks-case-supreme-court-refuses-to-defend-permissionless-innovation/feed/ 1 76003
UK Competition & Markets Authority on Online Platform Regulation https://techliberation.com/2015/10/30/uk-competition-markets-authority-on-online-platform-regulation/ https://techliberation.com/2015/10/30/uk-competition-markets-authority-on-online-platform-regulation/#comments Fri, 30 Oct 2015 14:00:03 +0000 http://techliberation.com/?p=75939

I wanted to draw your attention to this important address on online platform regulation by Alex Chisholm, the head of the UK’s Competition and Markets Authority. That’s the non-ministerial department in the UK responsible for competition policy issues. Chisholm delivered the address on October 27th at the Bundesnetzagentur conference in Bonn. It’s a terrific speech that other policymakers would be wise to read and mimic to ensure that antitrust and competition policy decisions don’t derail the many benefits of the Information Revolution.

“Today, as regulators, we have the responsibility but also the great historical privilege of playing an influential role in the deployment throughout the economy of the latest of these defining technological eras,” Chisholm began. “As regulators, we must try to minimise the inevitable mismatch between how we’ve done things before and the opportunities and risks of the new,” he argued.

He continued on to specify three recommendations for those crafting policy on this front:

  1. “First, blanket solutions should be avoided. Instead an evidence-based assessment of potential adverse effects of specific industry features or practices should be carried out before either ex ante regulatory or ex post enforcement tools are deployed. In either case this should be closely targeted to the specific harm identified, and every care given to avoid disproportionate actions and unwelcome side-effects. In that respect, online platforms and the digital economy do not differ from any other sector: there is no need to reinvent the regulatory wheel.
  2. Secondly, the significant risks associated with premature, broad-brush ex ante legislation or rule-making point towards a need to shift away from sector-specific regulation to ex post antitrust enforcement, which is better adapted to the period we’re in, with its fast-changing technology and evolving market reactions.
  3. Thirdly, as regulators, policymakers, businesses and consumers, we all need to adapt our practices to harvest the benefits of the new while containing its costs and risks.”

That’s an excellent framework that can and should guide future antitrust and competition policy decisions by policymakers across the globe. But Chisholm wasn’t done. Here are some of my other favorite highlights from his address:

  • On avoiding “one-size-fits-all” regulation: “[T]here is no ‘digital one size fits all’. . .  [O]penness is not necessarily always good for competition, nor are closed systems always bad.”
  • On dealing with the pace of change: “Leaving aside costs of compliance, protecting consumers by virtue of ex ante regulation is inherently difficult in digital markets where consumer preferences evolve fast and in a less predictable manner.”
  • On the difficulty of forecasting: “Where ex ante regulation is introduced, it therefore risks harming innovation by locking in existing standards and discouraging or preventing more disruptive innovations. The evolution of digital markets has been particularly difficult to predict.”
  • On how to level the playing field: “Finally, consider deregulation. If policymakers were to seek to avoid every hypothetical consumer harm through pre-emptive ex ante regulation, they would likely prevent many best-case scenarios entailing significant consumer benefits from ever coming about. Policymakers and regulators should be open to the idea that a review of existing regulation and its suitability in the context of online platforms may in certain cases actually result in a withdrawal of such regulation – creating a reasonably level playing field by ‘levelling down’ as opposed to ‘levelling up’.”

I really appreciate those last few points, and they are very much consistent with the recommendations set forth in my recent book, Permissionless Innovation. In the book, I argued that, “Trying to preemptively plan for every hypothetical worst-case scenario means that many best-case scenarios will never come about.”

I was pleased to see the book cited in Chisholm’s speech, as well as some work that Mercatus scholars had done on how to level the proverbial playing field within sectors undergoing rapid technological and regulatory change. Chris Koopman, Matt Mitchell, and I have argued that, while regulatory asymmetries represent a legitimate policy problem,

the solution is not to punish new innovations by simply rolling old regulatory regimes onto new technologies and sectors. The better alternative is to level the playing field by “deregulating down” to put everyone on equal footing, not by “regulating up” to achieve parity. Policymakers should relax old rules on incumbents as new entrants and new technologies challenge the status quo. By extension, new entrants should only face minimal regulatory requirements as more onerous and unnecessary restrictions on incumbents are relaxed.

Anyway, make sure to read Alex Chisholm’s entire speech. It’s very much worth your time. Incidentally, I think his vision is very much consistent with that of Maureen K. Ohlhausen, a Commissioner with the Federal Trade Commission (FTC). I have written extensively here and elsewhere about Commissioner Ohlhausen’s laudable vision for wise tech policy-making, most recently in this essay.

]]>
https://techliberation.com/2015/10/30/uk-competition-markets-authority-on-online-platform-regulation/feed/ 1 75939
Deregulation of Television Finally Bearing Fruit for Consumers https://techliberation.com/2015/10/14/deregulation-of-television-finally-bearing-fruit-for-consumers/ https://techliberation.com/2015/10/14/deregulation-of-television-finally-bearing-fruit-for-consumers/#comments Wed, 14 Oct 2015 21:26:46 +0000 http://techliberation.com/?p=75887

Last Friday I attended a fascinating conference hosted by the Duke Law School’s Center for Innovation Policy about television regulation and competition. It’s remarkable how quickly television competition has changed and how online video providers are putting pressure on old business models.

I’ve been working on a project about competition in technology, communications, and media, and one chart stands out: it shows increasing competition in pay television (below). Cable providers have lost nearly 15 million subscribers since 2002. Cable was essentially the only game in town for pay television in 1990 (about 100% market share), yet today cable’s market share approaches 50%. This competitive pressure accounts for some cable companies trying to merge in recent years.

Much of this subscriber churn went to satellite providers, but it’s the “telephone” companies providing TV that have really had a competitive impact in recent years. Telcos went from about 0% market share in 2005 to 13% in 2014. This new competition can be tied to Congress finally allowing telephone companies to provide TV in 1996. However, these new services didn’t really get started until a decade ago, when 1) digital and IP technology improved, and 2) the FCC, by deregulating DSL ISPs, made it clear that telephone companies could expect a market return for investing in fiber broadband nationwide.
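As a back-of-the-envelope check on the shifts described above, here is a minimal sketch that tabulates the rough market-share figures cited in this post (the round numbers are my own illustrative assumptions based on the text, not the chart's exact underlying data):

```python
# Rough U.S. pay-TV market shares cited in this post, in percent.
# Assumed round numbers for illustration: cable was "essentially the
# only game in town" in 1990 and "approaches 50%" in 2014; telcos went
# from about 0% (2005) to 13% (2014).
cable = {1990: 100, 2014: 50}
telco = {2005: 0, 2014: 13}

def share_change(series):
    """Percentage-point change from the earliest year to the latest."""
    first, last = min(series), max(series)  # min/max over the year keys
    return series[last] - series[first]

print(f"Cable: {share_change(cable):+d} points")  # prints "Cable: -50 points"
print(f"Telco: {share_change(telco):+d} points")  # prints "Telco: +13 points"
```

The gap between cable's roughly 50-point decline and the telcos' 13-point gain is the churn to satellite providers noted above.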

Pay TV Market Share TLF

UPDATE:

And below is market share data going back ten more years to 1994 using FCC data, which uses a slightly different measurement methodology (hence the kink around 2003-2004). I’ve also omitted market share of Home Satellite Dish (those large dishes you sometimes see in rural areas). Though HSD has negligible market share today, it had a few million subscribers in the mid-1990s. I may add HSD later.

Pay TV Market Share TLF 1994-2014

]]>
https://techliberation.com/2015/10/14/deregulation-of-television-finally-bearing-fruit-for-consumers/feed/ 3 75887
What Should the FTC Do about State & Local Barriers to Sharing Economy Innovation? https://techliberation.com/2015/05/12/what-should-the-ftc-do-about-state-local-barriers-to-sharing-economy-innovation/ https://techliberation.com/2015/05/12/what-should-the-ftc-do-about-state-local-barriers-to-sharing-economy-innovation/#respond Tue, 12 May 2015 20:21:02 +0000 http://techliberation.com/?p=75549

The Federal Trade Commission (FTC) is taking a more active interest in state and local barriers to entry and innovation that could threaten the continued growth of the digital economy in general and the sharing economy in particular. The agency recently announced it would be hosting a June 9th workshop “to examine competition, consumer protection, and economic issues raised by the proliferation of online and mobile peer-to-peer business platforms in certain sectors of the [sharing] economy.” Filings are due to the agency in this matter by May 26th. (Along with my Mercatus Center colleagues, I will be submitting comments and also releasing a big paper on reputational feedback mechanisms that same week. We have already released this paper on the general topic.)

Relatedly, just yesterday, the FTC sent a letter to Michigan policymakers about restricting entry by Tesla and other direct-to-consumer sellers of vehicles. Michigan passed a law in October 2014 prohibiting such direct sales. The FTC’s strongly-worded letter decries the state’s law as “protectionism for independent franchised dealers” noting that “current provisions operate as a special protection for dealers—a protection that is likely harming both competition and consumers.” The agency argues that:

consumers are the ones best situated to choose for themselves both the vehicles they want to buy and how they want to buy them. Automobile manufacturers have an economic incentive to respond to consumer preferences by choosing the most effective distribution method for their vehicle brands. Absent supportable public policy considerations, the law should permit automobile manufacturers to choose their distribution method to be responsive to the desires of motor vehicle buyers.

The agency cites the “well-developed body of research on these issues strongly suggests that government restrictions on distribution are rarely desirable for consumers” and the staff letter continues on to utterly demolish the bogus arguments set forth by defenders of the blatantly self-serving, cronyist law. (For more discussion of just how anti-competitive and anti-consumer these laws are in practice, see this January 2015 Mercatus Center study, “State Franchise Law Carjacks Auto Buyers,” by Jerry Ellig and Jesse Martinez.)

The FTC’s letter is another example of how the agency can take steps using its advocacy tools to explain to state and local policymakers how their laws may be protectionist and anti-consumer in character. Needless to say, this also has ramifications for how the agency approaches parochial restraints on entry and innovation affecting the sharing economy.

In our forthcoming Mercatus Center comments to the FTC for its June 9th sharing economy workshop, Christopher Koopman, Matt Mitchell, and I will address many issues related to the sharing economy and its regulation. Beyond addressing all five of the specific questions asked in the Commission’s workshop notice, we also include a discussion of “Federal Responses to Local Anticompetitive Regulations.” Down below I have reproduced the current rough draft of that section of our filing in the hope of getting input from others. Needless to say, the idea of the FTC aggressively using its advocacy efforts or even federal antitrust laws to address state and local barriers to trade and innovation will make some folks uncomfortable–especially on federalism grounds. But we argue that a good case can be made for the agency using both its advocacy and antitrust tools to address these issues. Let us know what you think.

The Federal Trade Commission possesses two primary tools to address public restraints of trade created by state and local authorities: advocacy and antitrust.[1]

Through its advocacy program, the Commission can provide specific comments to state and local officials regarding the effects of both proposed and existing regulations.[2] Commissioner Joshua Wright has noted that, “For many years, the FTC has used its mantle to comment on legislation and regulation that may restrain competition in a way that harms consumers.”[3] Thus, at a minimum, the Commission can and should shine light on parochial governmental efforts to restrain trade and limit innovation throughout the sharing economy.[4] By shining more light on state or local anti-competitive rules, the Commission will hopefully make governments, or their surrogate bodies (such as licensing boards), more transparent about their practices and more accountable for laws or regulations that could harm consumer welfare. However, to be successful, the Commission’s advocacy efforts depend upon the willingness of state and local legislators and regulators to heed its advice.[5]

The Commission has already used its advisory role in its recent guidance to state and local policymakers regarding the regulation of ridesharing services. The Commission noted then that “a regulatory framework should be responsive to new methods of competition,” and set forth the following vision regarding what it regards as the proper approach to parochial regulation of passenger transportation services:

Staff recommends that a regulatory framework for passenger vehicle transportation should allow for flexibility and adaptation in response to new and innovative methods of competition, while still maintaining appropriate consumer protections. [Regulators] also should proceed with caution in responding to calls for change that may have the effect of impairing new forms or methods of competition that are desirable to consumers. . . .  In general, competition should only be restricted when necessary to achieve some countervailing procompetitive virtue or other public benefit such as protecting the public from significant harm.[6]

This represents a reasonable framework for addressing concerns about parochial regulation of the sharing economy more generally.

Unfortunately, in areas relevant to the regulation of the sharing economy (e.g., taxicab regulations and rules governing home and apartment rentals), anticompetitive regulations have remained on the books—and in some instances have expanded—in spite of more than 30 years of Commission comment and advocacy.[7] In fact, as Public Citizen noted in a recent Supreme Court filing:

[M]any more occupations are regulated than ever before, and most boards doing the regulating—in both traditional and new professions—are dominated by industry members who compete in the regulated market. Those board member-competitors, in turn, commonly engage in regulation that can be seen as anticompetitive self-protection. The particular forms anticompetitive regulations take are highly varied, the possibilities seemingly limited only by the imaginations of the board members.[8]

In these instances, the Commission’s antitrust enforcement authority may need to be utilized when its advocacy efforts fall short with regard to regulations that favor incumbents by limiting competition and entry.[9] Many academics have endorsed expanded antitrust oversight of public barriers to trade and innovation.[10] As Commissioner Wright has argued, “the FTC is in a good position to use its full arsenal of tools to ensure that state and local regulators do not thwart new entrants from using technology to disrupt [the] existing marketplace.”[11] He notes specifically that he is “quite confident that a significant shift of agency resources away from enforcement efforts aimed at taming private restraints of trade and instead toward fighting public restraints would improve consumer welfare.”[12] We agree.

The Supreme Court’s recent decision in North Carolina State Board of Dental Examiners v. Federal Trade Commission made it clear that local authorities cannot claim broad immunity from federal antitrust laws.[13] This is particularly true, the Court noted, “where a State delegates control over a market to a nonsovereign actor,” such as a professional licensing board consisting primarily of members of the affected interest being regulated.[14] “Limits on state-action immunity are most essential when a State seeks to delegate its regulatory power to active market participants,” the Court held, “for dual allegiances are not always apparent to an actor and prohibitions against anticompetitive self-regulation by active market participants are an axiom of federal antitrust policy.”[15]

The touchstone of this case and the Court’s related jurisprudence in this area is political accountability.[16] State officials must (1) “clearly articulate” and (2) “actively supervise” licensing arrangements and regulatory bodies if they hope to withstand federal antitrust scrutiny.[17] The Court clarified this test in N.C. Dental, holding that “the Sherman Act confers immunity only if the State accepts political accountability for the anticompetitive conduct it permits and controls.”[18] In other words, if state and local officials want to engage in protectionist activities that restrain trade in pursuit of some other countervailing objective, then they need to own up to it by being transparent about their anticompetitive intentions and then actively oversee the process after that to ensure it is not completely captured by affected interests.[19]

Some might argue that this does not go far enough to eradicate anti-competitive barriers to trade at the state or local level that could restrain the innovative potential of the sharing economy. While that may be true, some limits on the Commission’s federal antitrust discretion are necessary to avoid impinging upon legitimate state and local priorities.

Over time, we hope that the sharing economy, by empowering the public with more options, more information, and better ways to shine a light on bad actors, will continue to make many of those old regulations unnecessary. Thus, in line with Commissioner Maureen Ohlhausen’s wise advice, the Commission should encourage state and local officials to exercise patience and humility as they confront technological changes that disrupt traditional regulatory systems.[20]

But when parochial regulators engage in blatantly anti-competitive activities that restrain trade, foster cartelization, or harm consumer welfare in other ways, the Commission can act to counter the worst of those tendencies.[21] The Commission’s standard of review going forward was appropriately articulated by Commissioner Wright recently when he noted that, “in the context of potentially disruptive forms of competition through new technologies or new business models, we should generally be skeptical of regulatory efforts that have the effect of favoring incumbent industry participants.”[22]

Such parochial protectionist barriers to trade and innovation will become even more concerning as the potential reach of so many sharing economy businesses grows larger. The boundary between intrastate and interstate commerce is sometimes difficult to determine for many sharing economy platforms. Clearly, much of the commerce in question occurs within the boundaries of a state or municipality, but sharing economy services also rely upon Internet-enabled platforms with a broader reach. To the extent state or local restrictions on sharing economy operations create negative externalities in the form of “interstate spillovers,” the case for federal intervention is strengthened.[23] It would be preferable if Congress chose to deal with such spillovers using its Commerce Clause authority (Art. 1, Sec. 8 of the Constitution),[24] but the presence of such negative externalities might also bolster the case for the Commission’s use of antitrust to address parochial restraints on trade.


[1]     See Maureen K. Ohlhausen, Reflections on the Supreme Court’s North Carolina Dental Decision and the FTC’s Campaign to Rein in State Action Immunity, before the Heritage Foundation, Washington, DC, March 31, 2015, at 19-20, https://www.ftc.gov/public-statements/2015/03/reflections-supreme-courts-north-carolina-dental-decision-ftcs-campaign.

[2]     Id., at 20. (“The primary goal of such advocacy is to convince policymakers to consider and then minimize any adverse effects on competition that may result from regulations aimed at preventing various consumer harms.”) Also see James C. Cooper and William E. Kovacic, “U.S. Convergence with International Competition Norms: Antitrust Law and Public Restraints on Competition,” Boston University Law Review, Vol. 90, No. 4, (August 2010): 1582, “Competition advocacy helps solve consumers’ collective action problem by acting within the regulatory process to advocate for regulations that do not restrict competition unless there is a compelling consumer protection rationale for imposing such costs on citizens.”).

[3]     Joshua D. Wright, “Regulation in High-Tech Markets: Public Choice, Regulatory Capture, and the FTC,” Remarks of Joshua D. Wright, Commissioner, Federal Trade Commission, at the Big Ideas about Information Lecture, Clemson University, Clemson, South Carolina, April 2, 2015, at 15, https://www.ftc.gov/public-statements/2015/04/regulation-high-tech-markets-public-choice-regulatory-capture-ftc.

[4]     Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1610, (“Competition agencies could devote greater resources to conduct research to measure the effects of public policies that restrict competition. A research program could accumulate and analyze empirical data that assesses the consumer welfare effects of specific restrictions. Such a program could also assess whether the stated public interest objectives of government restrictions are realized in practice.”)

[5]     Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1582, (“The value of competition advocacy should be measured by (1) the degree to which comments altered regulatory outcomes times (2) the value to consumers of those improved outcomes. For all practical purposes, however, both elements are difficult to measure with any degree of certainty.”).

[6]     Federal Trade Commission, Staff Comments Before the Colorado Public Utilities Commission In The Matter of The Proposed Rules Regulating Transportation By Motor Vehicle, 4 Code of Colorado Regulations, (March 6, 2013), http://ftc.gov/os/2013/03/130703coloradopublicutilities.pdf.

[7]     Marvin Ammori, “Can the FTC Save Uber,” Slate, March 12, 2013, http://www.slate.com/articles/technology/future_tense/2013/03/uber_lyft_sidecar_can_the_ftc_fight_local_taxi_commissions.html (noting that, “not only does the FTC have the authority to take these cities to impartial federal courts and end their anticompetitive actions; it also has deep expertise in taxi markets and antitrust doctrines.”) Also see, Edmund W. Kitch, “Taxi Reform—The FTC Can Hack It,” Regulation, May/June 1984, http://object.cato.org/sites/cato.org/files/serials/files/regulation/1984/5/v8n3-3.pdf.

[8]     Brief of Amici Curiae Public Citizen in Support of Respondent, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 24.

[9]     Brief of Antitrust Scholars as Amici Curiae in Support of Respondent, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 6, 2014): 24, (“Antitrust review is entirely appropriate for curbing the excesses of occupational licensing because the anticompetitive effect has a similar effect on the market—and in particular consumers—as does traditional cartel activity.”)

[10]   See Mark A. Perry, “Municipal Supervision and State Action Antitrust Immunity,” The University of Chicago Law Review, Vol. 57, (Fall 1990): 1413-1445; William J. Martin, “State Action Antitrust Immunity for Municipally Supervised Parties,” The University of Chicago Law Review, Vol. 72, (Summer 2005): 1079-1102; Jarod M. Bona, “The Antitrust Implications of Licensed Occupations Choosing Their Own Exclusive Jurisdiction,” University of St. Thomas Journal of Law & Public Policy, Vol. 5, (Spring 2011): 28-51; Ingram Weber, “The Antitrust State Action Doctrine and State Licensing Boards,” The University of Chicago Law Review, Vol. 79, (2012); Aaron Edlin and Rebecca Haw, “Cartels by Another Name: Should Licensed Occupations Face Antitrust Scrutiny?,” University of Pennsylvania Law Review, Vol. 162, (2014): 1093-1164.

[11]   Wright, “Regulation in High-Tech Markets,” at 28-9.

[12]   Wright, “Regulation in High-Tech Markets,” at 29.

[13]   North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015).

[14]   Id.

[15]   Id. Also see Edlin & Haw, “Cartels by Another Name,” at 1143, (“Who could seriously argue that an unsupervised group of competitors appointed to regulate their own profession can be counted on to neglect their selfish interests in favor of the state’s?”); Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 3, (“Antitrust immunity for private parties who act under color of state law is especially problematic, given that anticompetitive conduct is most likely to occur when private parties are in a position to exploit government’s regulatory powers.”)

[16]   See Maureen K. Ohlhausen, Reflections on the Supreme Court’s North Carolina Dental Decision and the FTC’s Campaign to Rein in State Action Immunity, before the Heritage Foundation, Washington, DC, March 31, 2015, at 16, https://www.ftc.gov/public-statements/2015/03/reflections-supreme-courts-north-carolina-dental-decision-ftcs-campaign, (“states need to be politically accountable for whatever market distortions they impose on consumers.”); Edlin & Haw, “Cartels by Another Name,” at 1137, (“political accountability is the price a state must pay for antitrust immunity.”).

[17]   See Federal Trade Commission, Office of Policy and Planning, Report of the State Action Task Force (2003): 54, (“clear articulation requires that a state enunciate an affirmative intent to displace competition and to replace it with a stated criterion. Active supervision requires the state to examine individual private conduct, pursuant to that regulatory regime, to ensure that it comports with that stated criterion. Only then can the underlying conduct accurately be deemed that of the state itself, and political responsibility for the conduct fairly placed with the state.”) This test has been developed and refined in a variety of cases over the past 35 years. See: California Retail Liquor Dealers Ass’n v. Midcal Aluminum, Inc., 445 U.S. 97 (1980); Cmty. Comm’ns Co., Inc. v. City of Boulder, 455 U.S. 40, 48-51 (1982); City of Columbia v. Omni Outdoor Advertising, Inc., 499 U.S. 365 (1991); FTC v. Ticor Title Ins. Co., 504 U.S. 621 (1992).

[18]   North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015).

[19]   Edlin & Haw, “Cartels by Another Name,” at 1156. (“Requiring that the state place its imprimatur on regulation is at least better than the status quo, in which states too often delegate self-regulation to professionals and walk away.”) See also North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015) (“[Federal antitrust] immunity requires that the anticompetitive conduct of nonsovereign actors, especially those authorized by the State to regulate their own profession, result from procedures that suffice to make it the State’s own.”).

[20]  Maureen K. Ohlhausen, Commissioner, Fed. Trade Commission, “Regulatory Humility in Practice,” Remarks of the American Enterprise Institute, Washington, D.C. (April 1, 2015).

[21]   Edlin & Haw, “Cartels by Another Name,” at 1094, (“state action doctrine should not prevent antitrust suits against state licensing boards that are comprised of private competitors deputized to regulate and to outright exclude their own competition, often with the threat of criminal sanction.”). See also Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 2, 21, http://www.americanbar.org/content/dam/aba/publications/supreme_court_preview/BriefsV4/13-534_resp_amcu_plf-cato.authcheckdam.pdf, (noting that courts “should presume strongly against granting state-action immunity in antitrust cases. It makes little sense to impose powerful civil and criminal punishments on private parties who are deemed to have engaged in anti-competitive conduct, while exempting government entities—or, worse, private parties acting under the government’s aegis—when they engage in the exact same conduct.” . . . “Whatever one’s opinion of antitrust law in general, there is no justification for allowing states broad latitude to disregard federal law and erect private cartels with only vague instructions and loose oversight.”)

[22]   Wright, “Regulation in High-Tech Markets,” at 7.

[23]   FTC, Report of the State Action Task Force, 44, (“an unfortunate gap has emerged between scholarship and case law. Although many of the leading commentators have expressed serious concern regarding problems posed by interstate spillovers, their thinking has yet to take root in the law. Such spillovers undermine both economic efficiency and some of the same political representation values thought to be protected by principles of federalism.”); Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 13, (“Allowing states expansive power to exempt private actors from antitrust laws would also disrupt national economic policy by encouraging a patchwork of state-established entities licensed to engage in cartel behavior. This would disrupt interstate investment and consumer expectations, and would have spillover effects across state lines.”); Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1598, (“When a state exports the costs attendant to its anticompetitive regulatory scheme to those who have not participated in the political process, however, there is no political backstop; arguments for immunity based on federalism concerns are severely weakened, if not wholly eviscerated, in these situations.”)

[24]   See Adam Thierer, The Delicate Balance: Federalism, Interstate Commerce, and Economic Freedom in the Technological Age (Washington, DC: The Heritage Foundation, 1998): 81-118.

The Wrong Way to End the Terrestrial Radio Exemption https://techliberation.com/2015/04/19/the-wrong-way-to-end-the-terrestrial-radio-exemption/ https://techliberation.com/2015/04/19/the-wrong-way-to-end-the-terrestrial-radio-exemption/#comments Mon, 20 Apr 2015 00:53:22 +0000 http://techliberation.com/?p=75527

A bill before Congress would for the first time require radio broadcasters to pay royalty fees to recording artists and record labels pursuant to the Copyright Act. The proposed Fair Play Fair Pay Act (H.R. 1733) would “[make] sure that all radio services play by the same rules, and all artists are fairly compensated,” according to Congressman Jerrold Nadler (D-NY).

… AM/FM radio has used whatever music it wants without paying a cent to the musicians, vocalists, and labels that created it. Satellite radio has paid below market royalties for the music it uses …

The bill would still allow for different fees for AM/FM radio, satellite radio and Internet radio, but it would mandate a “minimum fee” for each type of service for the first time.

A February report from the U.S. Copyright Office cites the promotional value of airtime as the longstanding justification for exempting terrestrial radio broadcasters from paying royalties under the Copyright Act.

In the traditional view of the market, broadcasters and labels representing copyright owners enjoy a mutually beneficial relationship whereby terrestrial radio stations exploit sound recordings to attract the listener pools that generate advertising dollars, and, in return, sound recording owners receive exposure that promotes record and other sales.

The Copyright Office now feels there are “significant questions” whether the traditional view remains credible today. But significant questions are not the same thing as clear evidence.

The problem with the proposed Fair Play Fair Pay Act is two-fold. First, notwithstanding that there is now some uncertainty around the traditional view of the AM/FM market, the bill mandates new minimum fees anyway. Second, it would empower a government panel consisting of three judges appointed by the Librarian of Congress to engage in what could become highly subjective decision-making.

The Copyright Royalty Judges shall establish rates and terms that most clearly represent the rates and terms that would have been negotiated in the marketplace between a willing buyer and a willing seller.

The most efficient way to get an accurate indicator of what a willing buyer and a willing seller would’ve negotiated in the marketplace is to call for private negotiations. The Copyright Office recommends this approach, too. Only when a music rights organization (MRO) and a licensee are unsuccessful in reaching an agreement on their own would the Copyright Royalty Board set the rates.

Each MRO would enjoy an antitrust exemption to negotiate performance and mechanical licenses collectively on behalf of its members—as would licensee groups negotiating with the MROs—with the CRB available to establish a rate in case of a dispute.

If Congress wants to end the terrestrial radio exemption, this is the better way to do it. Plainly, however, promotional value counts for something—and even the proposed Fair Play Fair Pay Act acknowledges that the value of the promotional effect qualifies as a legitimate form of compensation to recording artists and record labels.
