An interesting divide has opened up in recent months among right-of-center groups about what the FCC should do with the “C Band.” A few weeks ago, the FCC requested public comment on how to proceed with the band.

The C Band is 500 MHz of spectrum that the FCC, like regulators around the globe, dedicated to satellite use years ago and gave to satellite companies to share among themselves. Satellite operators typically use it to transmit cable programming to regional cable network operations centers, where it is bundled and relayed to cable subscribers. However, the C Band would work terrifically if repurposed for 5G and other cellular services. As Joe Kane explained in a white paper, the FCC and telecom companies are exploring various ways of accomplishing that.

Free-market groups disagree. Should the FCC prioritize:

The quick deployment of new wireless services? Or:

Deficit reduction and limiting FCC-granted windfalls?

This is a complex question since we’re dealing with the allocation of public property. Both sides, in my view, have a defensible free-market position. There are other non-trivial C Band issues, like interference protection and the FCC’s authority to act here, but I’ll focus on the ideological split on the right.

The case for secondary markets

The full 500 MHz of “clean” C Band in the US would be worth tens of billions of dollars to cellular companies. However, the current satellite users don’t want to part with all of it, and a group of satellite companies using the spectrum estimates it could sell 200 MHz to cellular carriers if the FCC would liberalize its rules to allow flexible uses (like 5G), not merely satellite services. The satellite providers would then be able to sell much of their spectrum on the secondary market (probably to cellular providers) at a nice premium.

Prof. Dan Lyons and Roslyn Layton wrote in support of the secondary market plan on the AEI blog and at Forbes, respectively. Joe Kane also favors the approach. As they say, the benefit of secondary market sales is that it will likely lead to a significant and fast repurposing of the C Band for mobile use. The consumer benefits of dezoned spectrum are large, and with every year of inaction, billions of dollars of consumer welfare evaporate. Hazlett and Munoz estimate that spectrum reallocated from a restricted use to flexible use generates annual consumer benefits on the same order of magnitude as the auction value of the spectrum.
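To make the stakes of delay concrete, here is a back-of-the-envelope sketch of the Hazlett-Munoz rule of thumb in Python. The dollar figures and the function are hypothetical placeholders chosen for illustration, not estimates from the FCC record.

```python
# Back-of-the-envelope illustration of the Hazlett-Munoz rule of thumb:
# annual consumer welfare from flexible-use spectrum is roughly the same
# order of magnitude as its one-time auction value. All dollar figures
# below are hypothetical, not numbers from the C Band proceeding.

def welfare_lost_to_delay(auction_value_bn: float, years_of_delay: float) -> float:
    """Treat annual consumer benefit as ~ auction value (order of magnitude),
    so each year of regulatory delay forgoes roughly that much welfare."""
    annual_consumer_benefit_bn = auction_value_bn  # same order of magnitude
    return annual_consumer_benefit_bn * years_of_delay

# Suppose 200 MHz of cleared C Band would fetch $30 billion at auction;
# two years of delay then forgoes on the order of $60 billion in welfare.
print(welfare_lost_to_delay(30.0, 2))  # -> 60.0
```

The point of the sketch is simply that, under the rule of thumb, the welfare cost of inaction compounds year over year at roughly the full auction value of the band.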

I’d add that there’s a history of the FCC de-zoning spectrum (SMR spectrum in 2004, EBS spectrum in 2004, AWS-4 in 2011, WCS spectrum in 2012). The FCC is considering doing this with some government spectrum that Ligado or others could repurpose for mobile broadband. In these cases, the FCC upzoned spectrum so that it could be put to higher-valued uses rather than the legacy uses required by previous FCCs. The circumstances and technologies vary, but some of these bands were repurposed quickly for better uses by cellular providers and are used for 4G LTE today by tens of millions of Americans.

The case for FCC auction

Liberalizing spectrum quickly gets it to higher-valued uses, but it does raise the complaint that the existing users are gaining an unfair windfall. I’m not sure when the C Band was allocated for satellite use, but many legacy assignments of spectrum were given to industries for free.

When the FCC “upzones” spectrum, it typically increases the value of the band. The “secondary market” plan is akin to the government giving away a parcel of public land to a developer to be used for a gas station, then deciding years later to upzone the land so that condo or office buildings can be built on it. It’s a better use for the land, but the gas station operator gains a big windfall when the property value increases. Not only is there a windfall, the government captures no revenue from the increase in the value of public property.

Free-market groups like Americans for Tax Reform, Taxpayers Protection Alliance, and Citizens Against Government Waste favor the FCC reclaiming the spectrum from satellite providers, perhaps via incentive auction, and collecting government revenue by re-selling it. If the FCC went the incentive auction route, the FCC would purchase the “satellite spectrum” (i.e., at a low price) from the current C Band users, upzone it, and re-sell that spectrum as “mobile spectrum” (i.e., at a high price) in an open auction. The FCC and the Treasury pocket the difference, probably several billion dollars here.
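The mechanics of the incentive-auction route reduce to simple arithmetic, sketched below in Python. The prices are hypothetical numbers of my own choosing, used only to illustrate how the Treasury ends up with the spread.

```python
# Stylized incentive-auction arithmetic: the FCC buys spectrum at its
# value in the legacy (satellite) use and resells it at its value in the
# flexible (mobile) use; the government keeps the spread. All prices are
# hypothetical, chosen only to illustrate the mechanism.

def incentive_auction_net(reverse_price_bn: float, forward_price_bn: float,
                          admin_cost_bn: float = 0.0) -> float:
    """Net government revenue = forward-auction receipts minus payments
    to incumbents minus administrative/repacking costs."""
    return forward_price_bn - reverse_price_bn - admin_cost_bn

# e.g. pay satellite incumbents $10B, resell the upzoned band as mobile
# spectrum for $18B, and spend $1B on the transition:
print(incentive_auction_net(10.0, 18.0, 1.0))  # -> 7.0 ($7B to the Treasury)
```

The same arithmetic also shows the trade-off: any transition or repacking costs the FCC incurs come straight out of the spread that would otherwise reduce the deficit.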

The FCC has only done one incentive auction, the 600 MHz auction. There, the FCC purchased “TV spectrum” from broadcasters and re-sold it to wireless carriers.

The benefits of this approach are deficit reduction and greater perceived fairness, since there’s no big, FCC-granted windfall to legacy users. The downside is that it’s a slower, more complicated process, since the FCC is deeply involved in the spectrum transfer. Arguably, however, the FCC should be deeply involved and interested in government revenue, since spectrum is public property.

My view

A few years ago I would have definitely favored speed and the secondary market plan. I still lean toward that approach, but I’m a little more on the fence after reading work by Richard Epstein and others about the “public trust doctrine.” This is a traditional governance principle that requires public actors to receive fair value when disposing of public property. It prevents public institutions from giving discounted public property to friends and cronies. Clearly, cronyism isn’t at issue here, and today’s FCC can’t undo what previous FCCs did generations ago in giving away spectrum. I think the need for speedy deployment trumps the windfall issue here, but it’s a closer call for me than in the past.

One proposal that hasn’t been contemplated for the C Band but might have merit is an overlay auction with a deadline. With such an auction, the FCC gives incumbent users a deadline to vacate a band (say, 5 years). The FCC then auctions flexible-use licenses in the band. The FCC receives the auction revenues, and the winning bidders are allowed to deploy services in the “white spaces” unoccupied by the incumbents. Winning bidders may also pay the incumbents to move out before the deadline.
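The incentive for early clearing under an overlay auction can be sketched as a simple comparison. The function and all numbers below are hypothetical illustrations of mine, not anything proposed in the docket.

```python
# Sketch of the overlay-auction bargain: a winning bidder will pay an
# incumbent to vacate before the deadline whenever the value of getting
# the band early exceeds the incumbent's asking price. Numbers are
# hypothetical.

def buyout_happens(value_per_year_bn: float, years_before_deadline: float,
                   incumbent_ask_bn: float) -> bool:
    """An early-clearing deal occurs when the overlay winner's gain from
    early access exceeds what the incumbent demands to relocate."""
    early_access_value_bn = value_per_year_bn * years_before_deadline
    return early_access_value_bn > incumbent_ask_bn

# A carrier that values the band at $2B/year, four years ahead of the
# deadline, will pay an incumbent asking $5B to move out early:
print(buyout_happens(2.0, 4, 5.0))  # -> True
# With only one year left, waiting out the deadline is cheaper:
print(buyout_happens(2.0, 1, 5.0))  # -> False
```

This is why the deadline matters: it caps what incumbents can demand, since the fallback for the winning bidder is simply to wait.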

With an overlay auction, you get fairly rapid deployment, at least in the white spaces, and the government gains revenue from the auction. This type of auction was used to deploy cellular (PCS) in the 1990s and cellular (AWS-1) in the 2000s. However, incumbents dislike it because the deadline devalues their existing spectrum holdings.

I think overlay auctions should be considered in more spectrum proceedings because they avoid the serious windfall problems while also allowing rapid deployment of new services. That doesn’t seem to be in the cards, however, and secondary markets seem like the next best option.

– Coauthored with Mercatus MA Fellow Walter Stover

The advent of artificial intelligence in dynamic pricing has given rise to fears of ‘digital market manipulation.’ Proponents of this claim argue that companies leverage artificial intelligence (AI) technology to obtain greater information about people’s biases and then exploit them for profit through personalized pricing. Those who advance these arguments often support regulation to protect consumers against information asymmetries and subsequent coercive market practices; however, such fears ignore the importance of the institutional context. These market manipulation tactics will not have a great effect precisely because they lack the coercive power to force people to open their wallets. Such coercive power is a function of social and political institutions, not of the knowledge of people’s biases and preferences that could be gathered by algorithms.

As long as companies such as Amazon operate in a competitive market setting, they are constrained in their ability to coerce customers, who can vote with their feet, regardless of how much knowledge they actually gather about those customers’ preferences through AI technology.
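A toy model makes the competition point concrete: even a seller with perfect knowledge of a customer’s willingness to pay is disciplined by a rival’s price. The function name and all numbers below are illustrative assumptions, not a model from the literature.

```python
# Toy model of the argument above: with a competitor selling an
# equivalent good, a seller cannot charge more than the competitive
# alternative no matter how much it knows about the customer, because
# the customer simply defects to the rival. Numbers are illustrative.

def personalized_price(willingness_to_pay: float, rival_price: float) -> float:
    """The best a 'manipulative' seller can do is charge the lower of the
    customer's valuation and the rival's price; anything higher loses the sale."""
    return min(willingness_to_pay, rival_price)

# Perfect knowledge that a customer would pay $100 is worthless
# when a rival offers the same good for $40:
print(personalized_price(100.0, 40.0))  # -> 40.0
```

The exploitation story, in other words, requires not better algorithms but the absence of a competitive alternative, which is an institutional condition.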

I (Eye), Robot?

May 8, 2019

[Originally published on the Mercatus Bridge blog on May 7, 2019.]

I became a little bit more of a cyborg this month with the addition of two new eyes—eye lenses, actually. Before I had even turned 50, the old lenses that Mother Nature gave me were already failing due to cataracts. But after having two operations this past month and getting artificial lenses installed, I am seeing clearly again thanks to the continuing miracles of modern medical technology.

Cataracts can be extraordinarily debilitating. One day you can see the world clearly; the next, you wake up struggling to see through a cloudy ocular soup. It is like looking through a piece of cellophane wrap or a continuously unfocused camera.

If you depend on your eyes to make a living, as most of us do, then cataracts make it a daily struggle to get even basic things done. I spend most of each workday reading and writing. Once the cataracts hit, I had to purchase a half-dozen pairs of strong reading glasses and spread them out all over the place: in my office, house, car, gym bag, and so on. Without them, I was helpless.

Reading is especially difficult in dimly lit environments, and even with strong glasses you can forget about reading the fine print on anything. Every pillbox becomes a frightening adventure. I invested in a powerful magnifying glass to make sure I didn’t end up ingesting the wrong things.

For those afflicted with particularly bad cataracts, it becomes extraordinarily risky to drive or operate machinery. More mundane things, like watching TV, tossing a ball with your kid, reading a menu at many restaurants, or looking at art in a gallery, also become frustrating.

Over at the American Institute for Economic Research blog, I recently posted two new essays discussing increasing threats to innovation and how to counter them. The first is on “The Radicalization of Modern Tech Criticism,” and the second discusses “How To Defend a Culture of Innovation During the Technopanic.”

“Technology critics have always been with us, and they have sometimes helped temper society’s occasional irrational exuberance about certain innovations,” I note in the opening of the first essay. The problem is that the “technology critics sometimes go much too far and overlook the importance of finding new and better ways of satisfying both basic and complex human needs and wants.” I go on to highlight the growing “technopanic” rhetoric we sometimes hear today, including various claims that “it’s OK to be a Luddite” and pushes for a “degrowth movement” that would slow the wheels of progress. That would be a disaster for humanity because, as I note in concluding that first essay:

Through ongoing trial-and-error tool building, we discover new and better ways of satisfying human needs and wants to better our lives and the lives of those around us. Human flourishing is dependent upon our collective willingness to embrace and defend the creativity, risk-taking, and experimentation that produces the wisdom and growth that propel us forward. By contrast, today’s neo-Luddite tech critics suggest that we should just be content with the tools of the past and slow down the pace of technological innovation to supposedly save us from any number of dystopian futures they predict. If they succeed, it will leave us in a true dystopia that will foreclose the entrepreneurialism and innovation opportunities that are paramount to raising the standard of living for billions of people across the world.

In the second essay, I attempt to sketch out a more robust vision and set of principles to counter the tech critics.

A decade ago, a heated debate raged over the benefits of “a la carte” (or “unbundling”) mandates for cable and satellite TV operators. Regulatory advocates said consumers wanted to buy all TV channels individually to lower costs. The FCC under former Republican Chairman Kevin Martin got close to mandating a la carte regulation.

But the math just didn’t add up. A la carte mandates, many economists noted, would actually cost consumers just as much (or even more) once they repurchased all the individual channels they desired. And it wasn’t clear people really wanted a completely atomized one-by-one content shopping experience anyway.

Throughout media history, bundles of all different sorts have been used across many different sectors (books, newspapers, music, etc.). This was because consumers often enjoyed the benefits of getting diverse content delivered to them in an all-in-one package. Bundling also helped media operators create and sustain a diversity of content using creative cross-subsidization schemes. The traditional newspaper format and business model is perhaps the greatest example of media bundling: the classifieds and sports sections helped cross-subsidize hard news (especially local reporting). See this 2008 essay by Jeff Eisenach and me for more details on the economics of a la carte.

Yet, with the rise of cable and satellite television, some critics protested the use of bundles for delivering content. Even though it was clear that the incredible diversity of 500+ channels on pay TV was directly attributable to strong channels cross-subsidizing weaker ones, many regulatory advocates said we would be better off without bundles. Moreover, they said, online video markets could show us the path forward in the form of radically atomized content options and cheaper prices.

Flash-forward to today.

In my first essay for the American Institute for Economic Research, I discuss what lessons the great prophet of innovation Joseph Schumpeter might have for us in the midst of today’s “techlash” and rising tide of technopanics. I argue that, “[i]f Schumpeter were alive today, he’d have two important lessons to teach us about the techlash and why we should be wary of misguided interventions into the Digital Economy.” Specifically:
We can summarize Schumpeter’s first lesson in two words: Change happens. But disruptive change only happens in the right policy environment. Which gets to the second great lesson that Schumpeter can still teach us today, and which can also be summarized in two words: Incentives matter. Entrepreneurs will continuously drive dynamic, disruptive change, but only if public policy allows it.
Schumpeter’s now-famous model of “creative destruction” explained why economies are never in a state of static equilibrium and why entrepreneurial competition comes from many (usually completely unpredictable) sources. “This kind of competition is much more effective than the other,” he argued, because the “ever-present threat” of dynamic, disruptive change “disciplines before it attacks.”
But if we want innovators to take big risks and challenge existing incumbents and their market power, then it is essential that we get policy incentives right or else this sort of creative destruction will never come about. The problem with too much of today’s “techlash” thinking is that it imagines the current players are here to stay and that their market power is unassailable. Again, that is static “snapshot” thinking that ignores the reality that new generations of entrepreneurs are in a sort of race for a prize and will make big bets on the future in the face of seemingly astronomical odds against their success. But we have to give them a chance to win that “prize” if we want to see that dynamic, disruptive change happen.
As always, we have much to learn from Schumpeter. Jump over to the AIER website to read the entire essay.

Many have likened efforts to build out rural broadband today to the accomplishments of rural electrification in the 1930s. But the two couldn’t be further apart. From the structure of the program and its underlying costs to its impact on productivity, rural electrification is drastically different from current efforts to get broadband into rural regions. My recent piece at RealClearPolicy explores some of those differences, but there is one area I wasn’t able to explore: the question of cost. If a government agency, any government agency for that matter, were able to repeat electrification’s dramatic reduction in cost for broadband, the US wouldn’t have a deployment problem.

It was my great pleasure to recently join Paul Matzko and Will Duffield on the Building Tomorrow podcast to discuss some of the themes in my last book and my forthcoming one. During our 50-minute conversation, which you can listen to here, we discussed:

  • the “pacing problem” and how it complicates technological governance efforts;
  • the steady rise of “innovation arbitrage” and medical tourism across the globe;
  • the continued growth of “evasive entrepreneurialism” (i.e., efforts to evade traditional laws & regs while innovating);
  • new forms of “technological civil disobedience;”
  • the rapid expansion of “soft law” governance mechanisms as a response to these challenges; and,
  • craft beer bootlegging tips!  (Seriously, I move a lot of beer in the underground barter markets).

Bounce over to the Building Tomorrow site and give the show a listen. Fun chat.

Over the years I have been asked to speak to colleagues and students I work with about best practices for preparing testimony, public interest comments, op-eds, speeches, etc. A few years back, I jotted down some miscellaneous thoughts and have used these notes whenever speaking on such matters. I did another session with some GMU econ students today, and someone suggested I should publish these tips online somewhere.

So, for whatever it’s worth, here are a few ideas about how to improve your content and your own brand as a public policy analyst. The first list is just some general tips I’ve learned from others after 25 years in the world of public policy. Following that, I have also included a separate set of notes I use for presentations focused specifically on how to prepare effective editorials and legislative testimony. There are many common recommendations on both lists, but I thought I would just post them both here together.


Why should we really care about technological innovation? My Mercatus Center colleague James Broughel and I have just published a paper answering that question. In “Technological Innovation and Economic Growth: A Brief Report on the Evidence,” we summarize the extensive body of evidence that discusses the relationship between innovation, growth, and human prosperity. We note that while economists, political scientists, and historians don’t agree on much, there exists widespread consensus among them that there is a symbiotic relationship between the pace of innovation and the progress of civilization. Our 27-page paper documenting the academic evidence on this issue can be downloaded on SSRN or from the Mercatus website. Here’s the abstract:

Technological innovation is a fundamental driver of economic growth and human progress. Yet some critics want to deny the vast benefits that innovation has bestowed and continues to bestow on mankind. To inform policy discussions and address the technology critics’ concerns, this paper summarizes relevant literature documenting the impact of technological innovation on economic growth and, more broadly, on living standards and human well-being. The historical record is unambiguous regarding how ongoing innovation has improved the way we live; however, the short-term disruptive aspects of technological change are real and deserve attention as well. The paper concludes with an extended discussion about the relevance of these findings for shaping cultural attitudes toward technology and the role that public policy can play in fostering innovation, growth, and ongoing improvements in the quality of life of citizens.