Wireless & Spectrum Policy

Federal Communications Commission (FCC) Chairman Ajit Pai today announced plans to expand the role of economic analysis at the FCC in a speech at the Hudson Institute. This is an eminently sensible idea that other regulatory agencies (both independent and executive branch) could learn from.

Pai first made the case that when the FCC listened to its economists in the past, it unlocked billions of dollars of value for consumers. The most prominent example was the switch from hearings to auctions in order to allocate spectrum licenses. He perceptively noted that the biggest effect of auctions was the massive improvement in consumer welfare, not just the more than $100 billion raised for the Treasury. Other examples of the FCC using the best ideas of its economists include:

  • Use of reverse auctions to allocate universal service funds to reduce costs.
  • Incentive auctions that reward broadcasters for transferring licenses to other uses – an idea initially proposed in a 2002 working paper by Evan Kwerel and John Williams at the FCC.
  • The move from rate of return to price cap regulation for long distance carriers.

More recently, Pai argued, the FCC has failed to use economics effectively. He identified four key problems:

  1. Economics is not systematically employed in policy decisions and is often brought in late in the process. The FCC has no guiding principles for the conduct and use of economic analysis.
  2. Economists work in silos. They are divided up among bureaus. Economists should be able to work together on a wide variety of issues, as they do in the Federal Trade Commission’s Bureau of Economics, the Department of Justice Antitrust Division’s economic analysis unit, and the Securities and Exchange Commission’s Division of Economic and Risk Analysis.
  3. Benefit-cost analysis is not conducted well or often, and the FCC does not take Regulatory Flexibility Act analysis (which assesses effects of regulations on small entities) seriously. The FCC should use Office of Management and Budget guidance as its guide to doing good analysis, but OMB’s 2016 draft report on the benefits and costs of federal regulations shows that the FCC has estimated neither benefits nor costs of any of its major regulations issued in the past 10 years. Yet executive orders from multiple administrations demonstrate that “Serious cost-benefit analysis is a bipartisan tradition.”
  4. Poor use of data. The FCC probably collects a lot of unnecessary data, at a paperwork cost of $800 million per year, not counting the private sector’s opportunity costs. But even useful data are not utilized well. For example, a few years ago the FCC stopped trying to determine whether the wireless market is effectively competitive even though it collects lots of data on the wireless market.

To remedy these problems, Pai announced an initiative to establish an Office of Economics and Data that would house the FCC’s economists and data analysts. An internal working group will be established to collect input within the FCC and from the public. He hopes to have the new office up and running by the end of the year. The purpose of this change is to give economists early input into the rulemaking process, better manage the FCC’s data resources, and conduct strategic research to help find solutions to “the next set of difficult issues.”

Can this initiative significantly improve the quality and use of economic analysis at the FCC?

There’s evidence that independent regulatory agencies are capable of making some decent improvements in their economic analysis when they are sufficiently motivated to do so. For example, the Securities and Exchange Commission’s authorizing statute contains language that requires benefit-cost analysis of regulations when the commission seeks to determine whether they are in the public interest. Between 2005 and 2011, the SEC lost several major court cases due to inadequate economic analysis.

In 2012, the commission’s general counsel and chief economist issued new economic analysis guidance that pledged to assess regulations according to the principal criteria identified in executive orders, guidance from the Office of Management and Budget, and independent research. In a recent study, I found that the economic analysis accompanying a sample of major SEC regulations issued after this guidance was measurably better than the analysis accompanying regulations issued prior to the new guidance. The SEC improved on all four aspects of economic analysis it identified as critical: assessment of the need for the regulation, assessment of the baseline outcomes that will likely occur in the absence of new regulation, identification of alternatives, and assessment of the benefits and costs of alternatives.

Unlike the SEC, the FCC faces no statutory benefit-cost analysis requirement for its regulations. Unlike the executive branch agencies, the FCC is under no executive order requiring economic analysis of regulations. Unlike the Federal Trade Commission in the early 1980s, the FCC faces little congressional pressure for abolition.

But Congress is considering legislation that would require all regulatory agencies to conduct economic analysis of major regulations and subject that analysis to limited judicial review. Proponents of executive branch regulatory review have always contended that the president has legal authority to extend the executive orders on regulatory impact analysis to cover independent agencies, and perhaps President Trump is audacious enough to try this. Thus, it appears Chairman Pai is trying to get the FCC ahead of the curve.

The Wall Street Journal reported yesterday that the White House is crafting a plan for $1 trillion in infrastructure investment. I was intrigued to learn that President Trump “inquired about the possibility of auctioning the broadcast spectrum to wireless carriers” to help fund the programs. Spectrum sales are the rare win-win-win: they stimulate infrastructure investment (cell towers, fiber networks, devices), provide new wireless services and lower prices to consumers, and generate billions in revenue for the federal government.

Broadcast TV spectrum is a good place to look for revenue, but the White House should also look at federal agencies, which hold about ten times the spectrum broadcasters do.

Large portions of spectrum are underused or misallocated because of decades of command-and-control policies. Auctioning spectrum for flexible uses, on the other hand, is a free-market policy that is often lucrative for the federal government. Since 1993, when Congress authorized spectrum auctions, wireless carriers and tech companies have spent somewhere around $120 billion for about 430 MHz of flexible-use spectrum, and the lion’s share of revenue was deposited in the US Treasury.

A few weeks ago, the FCC completed the $19 billion sale of broadcast TV spectrum, the so-called incentive auction. Though it underwhelmed many telecom experts, this was the third-largest US spectrum auction ever in terms of revenue and will transfer a respectable 70 MHz from restricted (broadcast TV) use to flexible use.

The remaining broadcast TV spectrum that President Trump is interested in totals about 210 MHz. But even more spectrum is under the President’s nose.

As Obama’s Council of Advisors on Science and Technology pointed out in 2012, federal agencies possess around 2,000 MHz of “beachfront” (sub-3.7 GHz) spectrum. I charted various spectrum uses in a December 2016 Mercatus policy brief.

This government spectrum is very valuable if portions can be cleared of federal users. Federal spectrum made up part of the frequencies the FCC auctioned in 2006 and 2015; those slivers (around 70 MHz of the federal total) sold for around $27 billion combined.
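As a rough back-of-the-envelope check, the implied per-megahertz price from those federal-band sales can be extrapolated to the roughly 2,000 MHz of beachfront federal spectrum. This is a deliberately crude linear sketch using only the dollar figures cited above; real valuations vary enormously by band, encumbrance, and clearing cost:

```python
# Back-of-the-envelope extrapolation of federal spectrum value.
# Figures from the text: ~70 MHz of federal spectrum sold for ~$27B
# across the 2006 and 2015 auctions; PCAST put federal "beachfront"
# (sub-3.7 GHz) holdings at ~2,000 MHz. Linear $/MHz scaling is a
# gross simplification -- band quality and relocation costs differ widely.

sold_mhz = 70          # federal MHz already auctioned
sold_revenue_bn = 27   # combined revenue, $ billions

price_per_mhz_bn = sold_revenue_bn / sold_mhz   # ~$0.39B per nationwide MHz

federal_beachfront_mhz = 2000
implied_value_bn = federal_beachfront_mhz * price_per_mhz_bn

print(f"Implied price: ${price_per_mhz_bn:.2f}B per nationwide MHz")
print(f"Naive value of federal beachfront holdings: ${implied_value_bn:.0f}B")
```

Even heavily discounted for clearing costs and lower-quality bands, the sketch suggests why federal holdings dwarf broadcast TV as a potential revenue source.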

The Department of Commerce has been analyzing which federal spectrum bands could be used commercially and the Mobile Now Act, a pending bill in Congress, proposes more sales of federal spectrum. These policies have moved slowly (and the vague language about unlicensed spectrum in the Mobile Now bill has problems) but the Trump administration has a chance to expedite spectrum reallocation processes and sell more federal spectrum to commercial users.

The proposed Mobile Now Act signals that spectrum policy is being prioritized by Congress, and there are some useful reforms in the bill. However, the bill encourages unlicensed spectrum allocations in ways that I believe will create major problems down the road.

Congress and the FCC need to proceed much more carefully before allocating more unlicensed spectrum. The FCC’s 2008 decision, for instance, to allow unlicensed devices in the “TV white spaces” has been disappointing. As some economists recently noted, “[s]imply stated, the FCC’s TV white space policy to date has been a flop.” Unlicensed spectrum policy is also generating costly fights (see WiFi v. LTE-U, Bluetooth v. TLPS, LightSquared v. GPS) as device makers and carriers lobby about who gains regulatory protection and how to divide this valuable resource that the FCC parcels out for free.

The unlicensed spectrum provisions in the Mobile Now Act may force the FCC to referee innumerable fights over who has access to unlicensed spectrum. Section 18 of the bill encourages unlicensed allocations. It says the FCC must

make available on an unlicensed basis radio frequency bands sufficient to meet demand for unlicensed wireless broadband operations if doing so is…reasonable…and…in the public interest.

Note the language about supply and demand here. But unlicensed spectrum is free to anyone using an approved device (that is, nearly everyone in the US). Quantity demanded will always outstrip quantity supplied when a valuable asset like spectrum or real estate is handed out at a price of zero. Removing a valuable asset from the price system in this way makes large allocation distortions likely.
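The mismatch can be illustrated with a toy linear demand curve. All numbers here are hypothetical and purely illustrative: at the market-clearing price, demand equals the fixed supply, but at a price of zero, quantity demanded jumps to the demand curve’s intercept while supply stays fixed.

```python
# Toy illustration of excess demand at a zero price (hypothetical numbers).
# Linear demand: quantity_demanded(p) = a - b * p, with fixed supply.

a, b = 100.0, 2.0      # hypothetical demand intercept and slope
supply = 40.0          # fixed quantity of unlicensed spectrum (arbitrary units)

def quantity_demanded(price: float) -> float:
    return max(a - b * price, 0.0)

clearing_price = (a - supply) / b        # price at which demand == supply
assert quantity_demanded(clearing_price) == supply

excess_at_zero = quantity_demanded(0.0) - supply
print(f"Market-clearing price: {clearing_price}")        # 30.0
print(f"Excess demand at price = 0: {excess_at_zero}")   # 60.0
```

Any statutory instruction to “meet demand” at a zero price is, in effect, an instruction to chase the intercept rather than the clearing quantity.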

Any policy originating from Congress or the FCC to satisfy “demand” for unlicensed spectrum biases the agency towards parceling out an excessive amount of unlicensed spectrum. 

The problems from unlicensed spectrum allocation could be mitigated if the FCC decided, as part of a “public interest” conclusion, to estimate the opportunity cost of any unlicensed spectrum allocated. That way, the government will have a rough idea of the market value of unlicensed spectrum being given away. There have been several auctions and there is an active secondary market for spectrum so estimates are achievable, and the UK has required the calculation of the opportunity cost of spectrum for over a decade.

With these estimates, it will be more difficult but still possible for the FCC to defend giving away spectrum for free. Economist Coleman Bazelon, for instance, estimates that the incremental value of a nationwide megahertz of licensed spectrum is more than 10x the equivalent unlicensed spectrum allocation. Significantly, unlike licensed spectrum, allocations of unlicensed bands are largely irreversible.
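Under assumptions like these, a rough opportunity-cost estimate for an unlicensed allocation is simple arithmetic: take a per-MHz value for licensed spectrum (estimable from auction results and secondary-market sales) and subtract the implied unlicensed value. In the sketch below, the per-MHz figure is a hypothetical placeholder; the ten-to-one ratio follows the Bazelon estimate cited above.

```python
# Rough opportunity-cost sketch for an unlicensed allocation.
# licensed_value_per_mhz_bn is a hypothetical placeholder; the 10x
# licensed-to-unlicensed value ratio follows Bazelon's estimate.

licensed_value_per_mhz_bn = 0.5    # hypothetical $/MHz (billions), from auction comps
licensed_to_unlicensed_ratio = 10  # licensed spectrum worth ~10x unlicensed

def opportunity_cost_bn(band_mhz: float) -> float:
    """Foregone value of allocating band_mhz as unlicensed rather than licensed."""
    licensed_value = band_mhz * licensed_value_per_mhz_bn
    unlicensed_value = licensed_value / licensed_to_unlicensed_ratio
    return licensed_value - unlicensed_value

print(f"Opportunity cost of a 100 MHz unlicensed allocation: "
      f"${opportunity_cost_bn(100):.1f}B")   # $45.0B under these assumptions
```

The point is not the precise figure but that the calculation is tractable, which is exactly what the UK’s practice demonstrates.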

People can quibble with the estimates but it is unclear that unlicensed use is the best use of additional spectrum. In any case, hopefully the FCC will attempt to bring some economic rigor to public interest determinations.

Is the incentive auction a disappointment? For consumers, no–at least, not yet.

Scott Wallsten at the Technology Policy Institute has a good rundown. My thoughts below:

By my count, this was the eighth major auction of commercial, flexible-use spectrum since auctions were authorized in 1993. On the most important question–how much spectrum was repurposed from restricted uses to flexible, licensed uses?–this auction stacks up pretty well.

At 70 MHz, this was the third largest auction in terms of total spectrum repurposed, trailing the mid-1990s PCS auction (120 MHz) and 2006 AWS-1 auction (90 MHz).

On the next most important question–how quickly will new services be deployed?–the jury is still out. Historically, repurposing spectrum like this takes six to twelve years. Depending on how you classify it, this proceeding commenced in 2010 (when the FCC proposed the incentive auction) or 2012 (when Congress authorized the auction). With the auction over, broadcasters have over three years to clear out of the spectrum, but some believe it will take longer. Right now, it looks like the process will take seven to eleven years total–not great, but pretty typical.

Some people, however, are disappointed with this auction–particularly some in the broadcasting industry, the FCC, and Congress who expected higher auction revenues.

High revenue gets nice headlines but is far less important than the amount of spectrum repurposed. It’s an underreported story, but close to 290 MHz of spectrum–nearly 45% of all liberalized, licensed spectrum–was de-zoned by the FCC, not auctioned. De-zoning generates zero auction revenue, but consumers see substantial benefits from it even though the government does not directly profit. I recently wrote a policy brief about the benefits of de-zoning spectrum.

In any case, in terms of revenue, this auction was not a failure. At around $17 billion, it’s third out of eight, trailing the 2008 700 MHz band auction (about $21 billion in 2015 dollars) and the massive haul from the 2015 AWS-3 auction (about $42 billion).
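Collecting the figures cited in this post, the incentive auction’s ranking on both metrics can be tabulated. Only the auctions and numbers mentioned above are included; the other major auctions are omitted rather than guessed at:

```python
# Ranking the auctions mentioned in this post by spectrum repurposed
# and by revenue. Missing figures are None, not guesses.

auctions = [
    # (name, MHz repurposed, approx. revenue in $ billions, as cited)
    ("PCS", 120, None),
    ("AWS-1", 90, None),
    ("700 MHz", None, 21),
    ("AWS-3", None, 42),
    ("Incentive auction", 70, 17),
]

by_mhz = sorted((a for a in auctions if a[1] is not None),
                key=lambda a: a[1], reverse=True)
by_revenue = sorted((a for a in auctions if a[2] is not None),
                    key=lambda a: a[2], reverse=True)

print("By MHz repurposed:", [name for name, _, _ in by_mhz])
print("By revenue ($B):", [name for name, _, _ in by_revenue])
```

On both lists the incentive auction lands third–solid, not a failure.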

At close, broadcasters will receive $10 billion for the 70 MHz of available licensed spectrum. Some broadcasters consider it a failure, just as a home seller is disappointed when her home sells below list price. The broadcasters initially requested $86 billion for 100 MHz of available spectrum. When the carriers’ bids didn’t match that price, some broadcasters pulled out and the remaining broadcasters lowered their price.

Were there better ways of repurposing broadcast spectrum? Broadcasters have a point that the complexity of the auction might have reduced buyer and seller participation (which means lower bids and fewer deals). As Wallsten notes, an overlay auction (like AWS-1) or simply de-zoning the spectrum might have been better (faster) alternatives. But it goes too far to deem this auction a failure (at least until we know how long the broadcaster repack takes).

Those of us with deep reservations about the push for ever more unlicensed spectrum are seeing many of our fears realized in the new resistance to novel technologies using unlicensed spectrum. By law, unlicensed spectrum users have no rights to their spectrum; unlicensed spectrum is a managed commons. In practice, however, existing users frequently act as if they own their spectrum and can exclude others. By entertaining these complaints, the FCC simply encourages NIMBYism in unlicensed spectrum.

The general idea behind unlicensed spectrum is that by providing a free spectrum commons to any device maker who complies with certain simple rules (namely, Part 15’s low power operation requirement), device makers will develop wireless services that would never have developed if the device makers had to shell out millions for licensed spectrum. For decades, unlicensed spectrum has stimulated development and sale of millions of consumer devices, including cordless phones, Bluetooth devices, wifi access points, RC cars, and microwave ovens.

Now, however, many device makers are getting nervous about new entrants. For instance, Globalstar is developing a technology, TLPS, based on wifi standards that will use some unlicensed spectrum at 2.4 GHz, and mobile carriers would like to market an unlicensed technology, LTE-U, based on 4G LTE standards, that will use spectrum at 5 GHz.

This resistance from various groups and spectrum incumbents, who fear interference in “their” spectrum if these new technologies catch on, was foreseeable, which makes these intractable conflicts even more regrettable. As Prof. Tom Hazlett wrote in a 2001 essay, long before today’s conflicts, when it comes to unlicensed devices, “economic success spells its own demise.” Hazlett noted, “Where an unlicensed firm successfully innovates, open access guarantees imitation. This not only results in competition…but may degrade wireless emissions — perhaps severely.”

On the other hand, the many technical filings about potential interference to existing unlicensed devices are red herrings. Prospective device makers in these unlicensed bands have no duty to protect existing users. Part 15 rules say that unlicensed users like wifi and Bluetooth “shall not be deemed to have any vested or recognizable right to continued use of any given frequency by virtue of prior registration or certification of equipment” and that “interference must be accepted.” These rules, however, put the FCC in a self-created double bind: the agency provides no interference protection to existing users but its open access policy makes interference conflicts likely.

The most pressing challenge in wireless telecommunications policy is transferring spectrum from inefficient legacy operators like federal agencies to the commercial sector for consumer use.

Reflecting high consumer demand for more wireless services, in early 2015 the FCC completed an auction for a small slice of prime spectrum–currently occupied by federal agencies and other non-federal incumbents–that grossed over $40 billion for the US Treasury. Increasing demand for mobile services such as Web browsing, streaming video, the Internet of Things, and gaming requires even more spectrum. Inaction means higher smartphone bills, more dropped calls, and stuttering downloads.

My latest research for the Mercatus Center, “Sweeten the Deal: Transfer of Federal Spectrum through Overlay Licenses,” was published recently and recommends the use of overlay licenses to transfer federal spectrum into commercial use. Purchasing an overlay license is like acquiring real property that contains a few tenants with unexpired leases. While those tenants have a superior possessory right to use the property, a high enough cash payment or trade will persuade them to vacate the property. The same dynamic applies for spectrum.

The FCC is being dragged–reluctantly, it appears–into disputes that resemble the infamous beauty contests of bygone years, where the agency takes on the impossible task of deciding which wireless services deliver more benefits to the public. Two novel technologies used for wireless broadband–TLPS and LTE-U–reveal the growing tensions in unlicensed spectrum. The two technologies are different and pose slightly different regulatory issues but each is an attempt to bring wireless Internet to consumers. Their advocates believe these technologies will provide better service than existing wifi technology and will also improve wifi performance. Their major similarity is that others, namely wifi advocates, object that the unlicensed bands are already too crowded and these new technologies will cause interference to existing users.

The LTE-U issue is new and developing. The TLPS proceeding, on the other hand, has been pending for a few years and there are warning signs the FCC may enter into beauty contests–choosing which technologies are entitled to free spectrum–once again.

What are FCC beauty contests and why does the FCC want to avoid them?

Many readers will recall the telecom soap opera featuring the GPS industry and LightSquared and the subsequent bankruptcy of LightSquared. Economist Thomas W. Hazlett (who is now at Clemson, after a long tenure at the GMU School of Law) and I wrote an article published in the Duke Law & Technology Review titled Tragedy of the Regulatory Commons: Lightsquared and the Missing Spectrum Rights. The piece documents LightSquared’s ambitions and dramatic collapse. Contrary to popular reporting on this story, this was not a failure of technology. We make the case that, instead, the FCC’s method of rights assignment led to the demise of LightSquared and deprived American consumers of a new nationwide wireless network. Our analysis has important implications as the FCC and Congress seek to make wide swaths of spectrum available for unlicensed devices. Namely, our paper suggests that the top-down administrative planning model is increasingly harming consumers and delaying new technologies.

Read commentary from the GPS community about LightSquared and you’ll get the impression LightSquared is run by rapacious financiers (namely Phil Falcone, whose Harbinger Capital controlled the company) who were willing to flout FCC rules and endanger thousands of American lives with their proposed LTE network. LightSquared filings, on the other hand, paint the GPS community as defense-backed dinosaurs who abused the political process to protect their deficient devices from an innovative entrant. As is often the case, it’s more complicated than these morality plays. We don’t find villains in this tale–simply destructive rent-seeking triggered by poor FCC spectrum policy.

We avoid assigning fault to either LightSquared or GPS, but we stipulate that there were serious interference problems between LightSquared’s network and GPS devices. Interference is not an intractable problem, however; it is resolved every day in other circumstances. The problem here was intractable because GPS users are dispersed and unlicensed (including government users), and could not coordinate and bargain with LightSquared when problems arose. There is no feasible way for GPS companies to track down users and compel them to adopt more efficient devices, for instance, even if LightSquared compensated them for the hassle. Knowing that GPS mitigation was infeasible, LightSquared’s only recourse after GPS users objected to the new LTE network was the political and regulatory process, a fight LightSquared lost badly. The biggest losers, however, were consumers, who were deprived of another wireless broadband network because the FCC’s spectrum assignment prevented win-win bargaining between licensees.

Congressional debates about STELA reauthorization have resurrected the notion that TV stations “must provide a free service” because they “are using public spectrum.” This notion, which is rooted in 1930s government policy, has long been used to justify the imposition of unique “public interest” regulations on TV stations. But outdated policy decisions don’t dictate future rights in perpetuity, and policymakers abandoned the “public spectrum” rationale long ago.

Adam and I recently published a Mercatus research paper titled Video Marketplace Regulation: A Primer on the History of Television Regulation And Current Legislative Proposals, now available on SSRN. I presented the paper at a Silicon Flatirons academic conference last week.

We wrote the paper for a policy audience and students who want succinct information and history about the complex world of television regulation. Television programming is delivered to consumers in several ways, including via cable, satellite, broadcast, IPTV (like Verizon FiOS), and, increasingly, over-the-top broadband services (like Netflix and Amazon Instant Video). Despite their obvious similarities–transmitting movies and shows to a screen–each distribution platform is regulated differently.

The television industry is in the news frequently because of problems exacerbated by the disparate regulatory treatment. The Time Warner Cable-CBS dispute last fall (and TWC’s ensuing loss of customers), the Aereo lawsuit, and the Comcast-TWC proposed merger were each caused at least indirectly by some of the ill-conceived and antiquated TV regulations we describe. Further, TV regulation is a “thicket of regulations,” as the Copyright Office has said, which benefits industry insiders at the expense of most everyone else.

We contend that overregulation of television resulted primarily because past FCCs, and Congress to a lesser extent, wanted to promote several social objectives through a nationwide system of local broadcasters:

1) Localism;
2) Universal Service;
3) Free (that is, ad-based) television; and
4) Competition.

These objectives can’t be accomplished simultaneously without substantial regulatory mandates. Further, these social goals may even contradict each other in some respects.

For decades, public policies constrained TV competitors to accomplish those goals. We recommend instead a reliance on markets and consumer choice through comprehensive reform of television laws, including repeal of compulsory copyright laws, must-carry, retransmission consent, and media concentration rules.

At the very least, our historical review of TV regulations provides an illustrative case study of how regulations accumulate haphazardly over time, demand additional “correction,” and damage dynamic industries. Unfortunately, Congress and the FCC focused on attaining particular competitive outcomes through industrial policy. Our paper provides support for market-based competition and regulations that put consumer choice at the forefront.