Economics

There is reporting suggesting that the Trump FCC may move to eliminate the FCC’s complex Title II regulations for the Internet and restore the FTC’s ability to police anticompetitive and deceptive practices online. This is obviously welcome news. These reports also suggest that FCC Chairman Pai and the FTC will require ISPs to add open Internet principles to their terms of service, that is, no unreasonable blocking or throttling of content and no paid priority. These principles have always been imprecise because federal law allows ISPs to block objectionable content if they wish (like pornography or violent websites) and because ISPs have a First Amendment right to curate their services.

Whatever the exact wording, there shouldn’t be a per se ban on paid priority. Whatever policy develops should limit anticompetitive paid priority, not all paid priority. Paid prioritization is simply a form of consideration payment, which is economists’ term for when upstream producers pay downstream retailers or distributors for special treatment. There is an economics literature on consideration payments, and they are an accepted business practice in many other industries. Further, consideration payments often benefit small providers and niche customers. Some small and large companies with interactive IP services might be willing to pay for end-to-end service reliability.

The Open Internet Order’s paid priority ban has always been short-sighted because it attempts to preserve the Internet as it existed circa 2002. It resembles the FCC’s unfounded insistence for decades that subscription TV (i.e., how the vast majority of Americans consume TV today) was against “the public interest.” Like the defunct subscription TV ban, the paid priority ban is an economics-free policy that will hinder new services.

Despite what late-night talk show hosts might say, “fast lanes” on the Internet are here and will continue. “Fast lanes” have always been permitted because, as Obama’s US CTO Aneesh Chopra noted, some emerging IP services need special treatment. Priority transmission was built into Internet protocols years ago, and the OIO doesn’t ban data prioritization; it bans broadband Internet access service (BIAS) providers from charging “edge providers” a fee for priority.
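To make the point concrete: the priority mechanism referred to here is the Differentiated Services (DiffServ) field that has been part of the IP header for decades. The sketch below (illustrative only, not drawn from the post) shows how an ordinary application can mark its packets for expedited handling. Whether any network honors the marking is up to each operator; it is a request, not a guarantee.

```python
import socket

# "Expedited Forwarding" is the standard DSCP class for
# latency-sensitive traffic such as real-time voice and video.
EF = 0x2E

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# The six DSCP bits occupy the upper portion of the legacy IP TOS byte,
# so the value is shifted left past the two ECN bits before being set.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)
```

Every outgoing datagram on this socket now carries the EF marking, which routers configured for DiffServ can use to queue it ahead of best-effort traffic. The OIO never prohibited this kind of marking; it prohibited charging edge providers for it.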

The notion that there’s a level playing field online needing preservation is a fantasy. Non-real-time services like Netflix streaming, YouTube, Facebook pages, and major websites can mostly be “cached” on servers scattered around the US. Major web companies have their own form of paid prioritization–they spend millions annually, including large payments to ISPs, on transit agreements, CDNs, and interconnection in order to avoid congested Internet links.

The problem with a blanket paid priority ban is that it biases the evolution of the Internet in favor of these cacheable services and against real-time or interactive services like teleconferencing, live TV, and gaming. Caching doesn’t work for these services because there is nothing to cache beforehand.

When would paid prioritization make sense? Most likely for a specialized service, aimed at dedicated users, that requires end-to-end reliability.

I’ll use a plausible example to illustrate the benefits of consideration payments online–a telepresence service for deaf people. As Martin Geddes described, a decade ago the government in Wales developed such a service. The service architects discovered that a well-functioning service had quality characteristics not supplied by ISPs. ISPs and video chat apps like Skype optimize their networks, video codecs, and services for non-deaf people (i.e., most customers) and prioritize consistent audio quality over video quality. While that’s useful for most people, deaf people need basically the opposite optimization because they need to perceive subtle hand and finger motions. The typical app that prioritizes audio, not video, doesn’t work for them.

But high-def real-time video quality requires upstream and downstream capacity reservation and end-to-end reliability. This is not cheap to provide. An ISP, in this illustration, has three options–charge the telepresence provider, charge deaf customers a premium, or spread the costs across all customers. The paid priority ban forecloses the first option: ISPs can recover the increased costs only from their customers. That unnecessarily limits the potential for such services, since there may be companies or nonprofits willing to subsidize them.

It’s a specialized example, but it illustrates the idiosyncratic technical requirements of many real-time services. In fact, real-time services are the next big challenge in the Internet’s evolution. As streaming media expert Dan Rayburn noted, “traditional one-way live streaming is being disrupted by the demand for interactive engagement.” Large and small edge companies are increasingly looking for low-latency video solutions. Today, a typical “live” event is broadcast online to viewers with a 15- to 45-second delay. This latency limits or kills the potential for interactive online streaming services like online talk shows, pet cams, online auctions, videogaming, and online classrooms.

If the FTC takes back oversight of ISPs and the Internet it should, as with any industry, permit any business practice that complies with competition law and consumer protection law. The agency should disregard the unfounded belief that consideration payments online (“paid priority”) are always harmful.

Federal Communications Commission (FCC) Chairman Ajit Pai today announced plans to expand the role of economic analysis at the FCC in a speech at the Hudson Institute. This is an eminently sensible idea that other regulatory agencies (both independent and executive branch) could learn from.

Pai first made the case that when the FCC listened to its economists in the past, it unlocked billions of dollars of value for consumers. The most prominent example was the switch from hearings to auctions in order to allocate spectrum licenses. He perceptively noted that the biggest effect of auctions was the massive improvement in consumer welfare, not just the more than $100 billion raised for the Treasury. Other examples of the FCC using the best ideas of its economists include:

  • Use of reverse auctions to allocate universal service funds at lower cost.
  • Incentive auctions that reward broadcasters for transferring licenses to other uses – an idea initially proposed in a 2002 working paper by Evan Kwerel and John Williams at the FCC.
  • The move from rate of return to price cap regulation for long-distance carriers.
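The reverse-auction idea in the first bullet can be sketched with a toy model (my illustration, not the FCC’s actual mechanism): providers bid the subsidy they would need to serve an area, and the fund awards support to the lowest bidder, so competition among bidders drives the subsidy down.

```python
def run_reverse_auction(bids):
    """Award the subsidy to the provider demanding the least support.

    bids maps a provider name to the subsidy that provider says it
    needs to serve the area. Returns (winner, subsidy_paid). This toy
    uses a first-price rule: the winner is paid its own bid.
    """
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

# Hypothetical bids for serving one rural area.
bids = {"CarrierA": 9_000_000, "CarrierB": 7_500_000, "CarrierC": 8_200_000}
winner, subsidy = run_reverse_auction(bids)
# CarrierB wins at $7.5 million, below what a cost-based award
# to either rival would have paid out.
```

The consumer-welfare logic is the same one Pai credits to the FCC’s economists: letting bids, rather than regulators’ cost estimates, set the payment extracts the information about who can serve an area most cheaply.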

More recently, Pai argued, the FCC has failed to use economics effectively. He identified four key problems:

  1. Economics is not systematically employed in policy decisions, and when it is employed, it often comes late in the process. The FCC has no guiding principles for the conduct and use of economic analysis.
  2. Economists work in silos. They are divided up among bureaus. Economists should be able to work together on a wide variety of issues, as they do in the Federal Trade Commission’s Bureau of Economics, the Department of Justice Antitrust Division’s economic analysis unit, and the Securities and Exchange Commission’s Division of Economic and Risk Analysis.
  3. Benefit-cost analysis is not conducted well or often, and the FCC does not take Regulatory Flexibility Act analysis (which assesses effects of regulations on small entities) seriously. The FCC should use Office of Management and Budget guidance as its guide to doing good analysis, but OMB’s 2016 draft report on the benefits and costs of federal regulations shows that the FCC has estimated neither benefits nor costs of any of its major regulations issued in the past 10 years. Yet executive orders from multiple administrations demonstrate that “Serious cost-benefit analysis is a bipartisan tradition.”
  4. Poor use of data. The FCC probably collects a lot of data that’s unnecessary, at a paperwork cost of $800 million per year, not including opportunity costs of the private sector. But even useful data are not utilized well. For example, a few years ago the FCC stopped trying to determine whether the wireless market is effectively competitive even though it collects lots of data on the wireless market.

To remedy these problems, Pai announced an initiative to establish an Office of Economics and Data that would house the FCC’s economists and data analysts. An internal working group will be established to collect input within the FCC and from the public. He hopes to have the new office up and running by the end of the year. The purpose of this change is to give economists early input into the rulemaking process, better manage the FCC’s data resources, and conduct strategic research to help find solutions to “the next set of difficult issues.”

Can this initiative significantly improve the quality and use of economic analysis at the FCC?

There’s evidence that independent regulatory agencies are capable of making some decent improvements in their economic analysis when they are sufficiently motivated to do so. For example, the Securities and Exchange Commission’s authorizing statute contains language that requires benefit-cost analysis of regulations when the commission seeks to determine whether they are in the public interest. Between 2005 and 2011, the SEC lost several major court cases due to inadequate economic analysis.

In 2012, the commission’s general counsel and chief economist issued new economic analysis guidance that pledged to assess regulations according to the principal criteria identified in executive orders, guidance from the Office of Management and Budget, and independent research. In a recent study, I found that the economic analysis accompanying a sample of major SEC regulations issued after this guidance was measurably better than the analysis accompanying regulations issued prior to the new guidance. The SEC improved on all four aspects of economic analysis it identified as critical: assessment of the need for the regulation, assessment of the baseline outcomes likely to occur in the absence of new regulation, identification of alternatives, and assessment of the benefits and costs of alternatives.

Unlike the SEC, the FCC faces no statutory benefit-cost analysis requirement for its regulations. Unlike executive branch agencies, the FCC is under no executive order requiring economic analysis of regulations. Unlike the Federal Trade Commission in the early 1980s, the FCC faces little congressional pressure for abolition.

But Congress is considering legislation that would require all regulatory agencies to conduct economic analysis of major regulations and subject that analysis to limited judicial review. Proponents of executive branch regulatory review have always contended that the president has legal authority to extend the executive orders on regulatory impact analysis to cover independent agencies, and perhaps President Trump is audacious enough to try this. Thus, it appears Chairman Pai is trying to get the FCC ahead of the curve.

The folks over at RegBlog are running a series of essays on “Rooting Out Regulatory Capture,” a problem that I’ve spent a fair amount of time discussing here and elsewhere in the past. (See, most notably, my compendium on, “Regulatory Capture: What the Experts Have Found.”) The first major contribution in the RegBlog series is from Sen. Elizabeth Warren (D-MA) and it is entitled, “Corporate Capture of the Rulemaking Process.”

Sen. Warren makes many interesting points about the dangers of regulatory capture, but the heart of her argument about how to deal with the problem can basically be summarized as ‘Let’s Build a Better Breed of Bureaucrat and Give Them More Money.’ In her own words, she says we should “limit opportunities for ‘cultural capture’” of government officials and also “give agencies the money that they need to do their jobs.”

It may sound good in theory, but I’m always a bit perplexed by that argument because the implicit claims here are that:

(a) the regulatory officials of the past were somehow less noble-minded and more open to corruption than some hypothetical better breed of bureaucrat that is out there waiting to be found and put into office; and

(b) that the regulatory agencies of the past were somehow starved for resources and lacked “the money that they need to do their jobs.”

Neither of these assumptions is true, and yet those arguments seem to animate most of the reform proposals set forth by progressive politicians and scholars for how to deal with the problem of capture.

I recently finished Learning by Doing: The Real Connection between Innovation, Wages, and Wealth, by James Bessen of the Boston University Law School. It’s a good book to check out if you are worried about whether workers will be able to weather this latest wave of technological innovation. One of the key insights of Bessen’s book is that, as with previous periods of turbulent technological change, today’s workers and businesses will obviously need to find ways to adapt to rapidly changing marketplace realities brought on by the Information Revolution, robotics, and automated systems.

That sort of adaptation takes time. For technological revolutions to take hold and have a meaningful impact on economic growth and worker conditions, large numbers of ordinary workers must acquire new knowledge and skills, Bessen notes. But “that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture.” (p. 223) That is not a reason to resist disruptive forms of technological change, however. To the contrary, Bessen says, it is crucial to allow ongoing trial-and-error experimentation and innovation to continue precisely because they represent a learning process that helps people (and workers in particular) adapt to changing circumstances and acquire new skills to deal with them. That, in a nutshell, is “learning by doing.” As he elaborates elsewhere in the book:

Major new technologies become ‘revolutionary’ only after a long process of learning by doing and incremental improvement. Having the breakthrough idea is not enough. But learning through experience and experimentation is expensive and slow. Experimentation involves a search for productive techniques: testing and eliminating bad techniques in order to find good ones. This means that workers and equipment typically operate for extended periods at low levels of productivity using poor techniques and are able to eliminate those poor practices only when they find something better. (p. 50)

Luckily, however, history also suggests that, time and time again, that process has played out and the standard of living for workers and average citizens alike has improved.

The most pressing challenge in wireless telecommunications policy is transferring spectrum from inefficient legacy operators like federal agencies to the commercial sector for consumer use.

Reflecting high consumer demand for more wireless services, in early 2015 the FCC completed an auction for a small slice of prime spectrum–currently occupied by federal agencies and non-federal incumbents–that grossed over $40 billion for the US Treasury. Increasing demand for mobile services such as Web browsing, streaming video, the Internet of Things, and gaming requires even more spectrum. Inaction means higher smartphone bills, more dropped calls, and stuttering downloads.

My latest research for the Mercatus Center, “Sweeten the Deal: Transfer of Federal Spectrum through Overlay Licenses,” was published recently and recommends the use of overlay licenses to transfer federal spectrum into commercial use. Purchasing an overlay license is like acquiring real property that contains a few tenants with unexpired leases. While those tenants have a superior possessory right to use the property, a high enough cash payment or trade will persuade them to vacate the property. The same dynamic applies to spectrum.

At the same time FilmOn, an Aereo look-alike, is seeking a compulsory license to broadcast TV content, free market advocates in Congress and officials at the Copyright Office are trying to remove this compulsory license. A compulsory license to copyrighted content gives parties like FilmOn the use of copyrighted material at a regulated rate without the consent of the copyright holder. There may be sensible objections to repealing the TV compulsory license, but transaction costs–the ostensible inability to acquire the numerous permissions to retransmit TV content–should not be one of them.

Yesterday, the White House Council of Economic Advisers released an important new report entitled, “Occupational Licensing: A Framework for Policymakers.” (PDF, 76 pgs.) The report highlighted the costs that outdated or unneeded licensing regulations can have on diverse portions of the citizenry. Specifically, the report concluded that:

the current licensing regime in the United States also creates substantial costs, and often the requirements for obtaining a license are not in sync with the skills needed for the job. There is evidence that licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines. Too often, policymakers do not carefully weigh these costs and benefits when making decisions about whether or how to regulate a profession through licensing.

The report supported these conclusions with a wealth of evidence. In that regard, I was pleased to see that research from Mercatus Center-affiliated scholars was cited in the White House report (specifically on pg. 34). Mercatus Center scholars have repeatedly documented the costs of occupational licensing and offered suggestions for how to reform or eliminate unnecessary licensing practices. Most recently, my colleagues and I have explored the costs of licensing restrictions for new sharing economy platforms and innovators. The White House report cited, for example, the recently-released Mercatus paper on “How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the ‘Lemons Problem,’” which I co-authored with Christopher Koopman, Anne Hobson, and Chris Kuiper. And it also cited a new essay by Tyler Cowen and Alex Tabarrok on “The End of Asymmetric Information.”

“Why hasn’t Europe fostered the kind of innovation that has spawned hugely successful technology companies?” asks James B. Stewart in an important new column for the New York Times (“A Fearless Culture Fuels U.S. Tech Giants“).

That’s a great question, and one that I have tried to answer in a series of recent essays. (See, for example, “Europe’s Choice on Innovation” and “Embracing a Culture of Permissionless Innovation.”) What I have suggested in those essays is that the starkly different outcomes on either side of the Atlantic in terms of recent economic growth and innovation can primarily be explained by cultural attitudes toward risk-taking and failure. “For innovation and growth to blossom, entrepreneurs need a clear green light from policymakers that signals a general acceptance of risk-taking—especially risk-taking that challenges existing business models and traditional ways of doing things,” I have argued. And the most powerful proof of this is to examine the amazing natural experiment that has played out on either side of the Atlantic over the past two decades with the Internet and the digital economy.

For example, an annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing and digital technology. None of them are based in Europe, however. Another recent survey revealed that the world’s 15 most valuable Internet companies (based on market capitalizations) have a combined market value of nearly $2.5 trillion; none of them are European, while 11 are U.S. firms. Again, it is America’s tech innovators that dominate that list.

Many European officials and business leaders are waking up to this grim reality and are wondering how to reverse this situation. In his Times essay, Stewart quotes Danish economist Jacob Kirkegaard of the Peterson Institute for International Economics, who notes that Europeans “all want a Silicon Valley. . . . But none of them can match the scale and focus on the new and truly innovative technologies you have in the United States. Europe and the rest of the world are playing catch-up, to the great frustration of policy makers there.”

OK, but why is that?

The Federal Trade Commission (FTC) is taking a more active interest in state and local barriers to entry and innovation that could threaten the continued growth of the digital economy in general and the sharing economy in particular. The agency recently announced it would be hosting a June 9th workshop “to examine competition, consumer protection, and economic issues raised by the proliferation of online and mobile peer-to-peer business platforms in certain sectors of the [sharing] economy.” Filings are due to the agency in this matter by May 26th. (Along with my Mercatus Center colleagues, I will be submitting comments and also releasing a big paper on reputational feedback mechanisms that same week. We have already released this paper on the general topic.)

Relatedly, just yesterday, the FTC sent a letter to Michigan policymakers about the state’s restrictions on entry by Tesla and other direct-to-consumer sellers of vehicles. Michigan passed a law in October 2014 prohibiting such direct sales. The FTC’s strongly-worded letter decries the state’s law as “protectionism for independent franchised dealers,” noting that “current provisions operate as a special protection for dealers—a protection that is likely harming both competition and consumers.” The agency argues that:

consumers are the ones best situated to choose for themselves both the vehicles they want to buy and how they want to buy them. Automobile manufacturers have an economic incentive to respond to consumer preferences by choosing the most effective distribution method for their vehicle brands. Absent supportable public policy considerations, the law should permit automobile manufacturers to choose their distribution method to be responsive to the desires of motor vehicle buyers.

The agency cites the “well-developed body of research on these issues strongly suggests that government restrictions on distribution are rarely desirable for consumers” and the staff letter goes on to utterly demolish the bogus arguments set forth by defenders of the blatantly self-serving, cronyist law. (For more discussion of just how anti-competitive and anti-consumer these laws are in practice, see this January 2015 Mercatus Center study, “State Franchise Law Carjacks Auto Buyers,” by Jerry Ellig and Jesse Martinez.)

Regular readers know that I can get a little feisty when it comes to the topic of “regulatory capture,” which occurs when special interests co-opt policymakers or political bodies (regulatory agencies, in particular) to further their own ends. As I noted in my big compendium, “Regulatory Capture: What the Experts Have Found“:

While capture theory cannot explain all regulatory policies or developments, it does provide an explanation for the actions of political actors with dismaying regularity. Because regulatory capture theory conflicts mightily with romanticized notions of “independent” regulatory agencies or “scientific” bureaucracy, it often evokes a visceral reaction and a fair bit of denialism.

Indeed, the more I highlight the problem of regulatory capture and offer concrete examples of it in practice, the more push-back I get from true believers in the idea of “independent” agencies. Even if I can get them to admit that history offers countless examples of capture in action, and that a huge number of scholars of all persuasions have documented this problem, they will continue to insist that, WE CAN DO BETTER! and that it is just a matter of having THE RIGHT PEOPLE! who will TRY HARDER!

Well, maybe. But I am a realist and a believer in historical evidence. And the evidence shows, again and again, that when Congress (a) delegates broad, ambiguous authority to regulatory agencies, (b) exercises very limited oversight over that agency, and then, worse yet, (c) allows that agency’s budget to grow without any meaningful constraint, then the situation is ripe for abuse. Specifically, where unchecked power exists, interests will look to exploit it for their own ends.

In any event, all I can do is to continue to document the problem of regulatory capture in action and try to bring it to the attention of pundits and policymakers in the hope that we can start the push for real agency oversight and reform. Today’s case in point comes from a field I have been covering here a lot over the past year: commercial drone innovation.