[This essay originally appeared on the AIER blog on May 23, 2019 under the title, “Spring Cleaning for the Regulatory State.”]

_____________________________

Spring is in full blossom, and many of us are in the midst of our annual house-cleaning ritual. A regular deep clean makes good sense because it makes our living spaces more orderly and gets rid of the gunk and grime that has amassed over the past year.

Unfortunately, governments almost never engage in their own spring-cleaning exercise. Statutes and regulations continue to accumulate, layer by layer, until they suffocate not only economic opportunity, but also the effective administration of government itself. Luckily, some states have realized this and have taken steps to help address this problem.

Mountains of Regulations

First, here are some hard facts about regulatory accumulation:

  • Red tape grows: Since the first edition of his annual publication Ten Thousand Commandments in 1993, Wayne Crews has documented how federal agencies have issued 101,380 rules. Other reports find agency staffing levels jumped from 57,109 to 277,163 employees from 1960 to 2017, while agency budgets swelled in real terms from $3 billion in 1960 to $58 billion in 2017 (2009$).
  • Nothing ever gets cleaned up: A Deloitte survey of federal regulations reveals that 68 percent have never been updated and that 17 percent have been updated only once. If a company never updated its business model, it would eventually fail. But governments get away with doing the same thing without any fear of failure. “If it were a country, U.S. regulation would be the world’s eighth-largest economy, ranking behind India and ahead of Italy,” Crews notes.
  • The burden of regulatory accumulation is getting worse: “The estimate for regulatory compliance and economic effects of federal intervention is $1.9 trillion annually,” Crews finds, which is equal to 10 percent of the U.S. gross domestic product for 2017. When regulatory costs are added to federal spending, Crews finds, the combined burden equals $4.173 trillion, or 30 percent of the entire economy. Mercatus Center research has found that “economic growth in the United States has, on average, been slowed by 0.8 percent per year since 1980 owing to the cumulative effects of regulation.” This means that “the US economy would have been about 25 percent larger than it actually was as of 2012” if regulation had been held to roughly the same aggregate level it stood at in 1980 (a rough compounding sketch follows below).
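
To make the compounding intuition concrete, here is a minimal back-of-the-envelope sketch in Python. It simply compounds the 0.8-percentage-point annual drag cited above over the 1980–2012 window; the result lands in the same ballpark as the study’s roughly 25 percent figure, which comes from a more detailed model.

```python
# Back-of-the-envelope compounding of the regulatory drag cited above.
# Illustrative only: the Mercatus study's ~25 percent figure comes from a
# more detailed model, not from this simple calculation.

annual_drag = 0.008      # 0.8 percentage points of forgone growth per year
years = 2012 - 1980      # the 1980-2012 window cited in the study

cumulative_gap = (1 + annual_drag) ** years - 1
print(f"Implied counterfactual economy: roughly {cumulative_gap:.0%} larger by 2012")
# Prints roughly 29%, the same order of magnitude as the study's ~25% estimate.
```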

In sum, the evidence shows that red tape is growing without constraint, hindering entrepreneurship and innovation, deterring new investment, raising costs for consumers, limiting worker opportunities and wages, and undermining economic growth.

Regulations accumulate in this fashion because the administrative state is on autopilot. Legislatures pass broad statutes delegating ambiguous authority to agencies. Bureaucrats are then free to roll the regulatory snowball down the hill until it has become so big that its momentum cannot be stopped.

The Death of Common Sense

Policy makers enact new rules with the best of intentions, of course, but we should not assume that the untrammeled growth of the regulatory state produces positive results. There is no free lunch, after all. Every regulation is a restriction on opportunities for experimentation with new and potentially better ways of doing things. Sometimes such restrictions make sense because regulations can pass a reasonable cost-benefit test. It would be foolish to assume that all regulations on the books do.

Spring cleaning for the regulatory state, therefore, should be viewed as an exercise in “good governance.” The goal is not to get rid of all regulations. The goal is to make sure that rules are reasonable and cost-effective so that the public can actually understand the law and get the highest value out of their government institutions.

Philip K. Howard, founder and chair of the nonprofit coalition Common Good and the author of The Death of Common Sense, has written extensively about how regulatory accumulation has become a chronic problem. “Too much law,” he argues, “can have similar effects as too little law.” “People slow down, they become defensive, they don’t initiate projects because they are surrounded by legal risks and bureaucratic hurdles,” Howard notes. “They tiptoe through the day looking over their shoulders rather than driving forward on the power of their instincts. Instead of trial and error, they focus on avoiding error.”

In such an environment, risk-taking and entrepreneurialism are more challenging and economic dynamism suffers. But regulatory accumulation also hurts the quality of government institutions and policies, which become fundamentally incomprehensible or illogical. “Society can’t function when stuck in a heap of accumulated mandates of past generations,” Howard concludes. This is why an occasional regulatory house cleaning is essential to unleash economic opportunity and improve the functioning of our democratic institutions.

Regulatory House Cleaning Begins

Reforms to address this problem are finally happening. In a series of new essays, my colleague James Broughel has documented how several states — including Idaho, Ohio, Virginia, and New Jersey — are undertaking serious efforts to get regulatory accumulation under control. They are utilizing a variety of mechanisms, including “regulatory reduction pilot programs” and “red tape review commissions.” Recently, Idaho initiated a sunset of its entire regulatory code and will now try to figure out how to clean up its 8,200 pages of regulations containing 736 chapters of state rules.

Meanwhile, other states are undertaking serious reform in one of the worst forms of regulatory accumulation: occupational licenses. The Federal Trade Commission notes that roughly 30 percent of American jobs require a license today, up from less than 5 percent in the 1950s. Research by economist Morris Kleiner and others finds that “restrictions from occupational licensing can result in up to 2.85 million fewer jobs nationwide, with an annual cost to consumers of $203 billion.” And many of the rules do not even serve their intended purpose. A major 2015 Obama administration report on the costs of occupational licensing concluded that “most research does not find that licensing improves quality or public health and safety.”

Arizona, West Virginia, and Nebraska are among the leaders in reforming occupational-licensing regimes, using a variety of approaches. In some cases, the reforms sunset licensing rules for specific professions altogether. Other proposals grant workers reciprocity to use a license they obtained in another state. Finally, some states have proposed letting most professions operate without any license at all, but then requiring them to make it clear to consumers that they are unlicensed.

The Need for a Fresh Look

Sunsets are not silver-bullet solutions, and the recent experience with sunsetting and “de-licensing” requirements at the state level has been mixed because many legislatures ignore or circumvent requirements. Nonetheless, sunsets can still help prompt much-needed discussions about which rules make sense and which ones no longer do.

Sunsets can be forward-looking, too. I have proposed that when policy makers craft new laws, especially for fast-paced tech sectors, they should incorporate what we might think of as “the Sunsetting Imperative.” It would require that any existing or newly imposed technology regulation include a provision sunsetting the law or regulation within two years. Reforms like these are also sometimes referred to as “temporary legislation” or “fresh look” requirements. Policy makers can always reenact rules that are still relevant and needed.

By forcing a periodic spring cleaning, sunsets and fresh-look requirements can help stem the tide of regulatory accumulation and ensure that only those policies that serve a pressing need remain on the books. There is no good reason for governments not to clean up their messes on occasion, just like the rest of us have to.

Congress should let the Satellite Television Extension and Localism Act Reauthorization (STELAR) of 2014 expire at the end of this year. STELAR is the most recent reincarnation of the Satellite Home Viewer Act of 1988, a law that has long since outlived its purpose.

Owners of home satellite dishes in the 1980s—who were largely concentrated in rural areas—were receiving retransmission of popular television programs via satellite carriers in apparent violation of copyright law. When copyright owners objected, Congress established a compulsory, statutory license mandating that content providers allow secondary transmission via satellite to areas unserved by either a broadcaster or a cable operator, and requiring satellite carriers to compensate copyright holders at the rate of 3 cents per subscriber per month for the retransmission of a network TV station or 12 cents for a cable superstation.

The retransmission fees were purposely set low to help the emerging satellite carriers get established in the marketplace when innovation in satellite technology still had a long way to go. Today the carriers are thriving business enterprises, and there is no need for them to continue receiving subsidies. Broadcasters, on the other hand, face unprecedented competition for advertising revenue that historically covered the entire cost of content production.

Today a broadcaster receives 28 cents per subscriber per month when a satellite carrier retransmits its local television signal. But the fair market value of that signal is actually $2.50, according to one estimate.
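
The implied per-subscriber subsidy is easy to put in rough numbers. A minimal sketch, using only the two figures above and treating the $2.50 estimate as a per-subscriber, per-month value, which is how the comparison is framed:

```python
# Rough arithmetic on the implied retransmission subsidy, using the two
# per-subscriber-per-month figures cited above. Illustrative only.

statutory_rate = 0.28          # dollars per subscriber per month under the license
estimated_market_value = 2.50  # the "fair market value" estimate cited above

monthly_gap = estimated_market_value - statutory_rate
print(f"Implied subsidy: ${monthly_gap:.2f} per subscriber per month, "
      f"about ${monthly_gap * 12:.2f} per subscriber per year")
# Prints $2.22 per subscriber per month, roughly $26.64 per subscriber per year.
```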

There is no reason retransmission fees cannot be “determined in the marketplace through negotiations among carriers, broadcasters and copyright holders,” as the Reagan administration suggested in 1988.

Aside from perpetuating an unjustified subsidy, renewal of STELAR may prevent owners of home satellite dishes in the nation’s twelve smallest Designated Market Areas from receiving programming from their own local broadcast TV stations.

Due to severe capacity constraints inherent in satellite technology in the 1980s, the statutory license originally allowed satellite carriers to retransmit a single, distant signal (e.g. from a New York or Los Angeles network affiliate) throughout their entire footprint. As the technology has improved, the statutory license has been expanded in recent years to include local-into-local retransmission. DISH Network, which already provides local-into-local retransmission throughout the nation (in all 210 DMAs), has demonstrated that a statutory license for distant signals is no longer necessary or warranted.

Although DirecTV does not yet offer nationwide local-into-local retransmission, this is a voluntary business decision that should not dictate the renewal of a statutory license based on 30-year-old technology.


An interesting divide has opened up in recent months among right-of-center groups about what the FCC should do with the “C Band.” A few weeks ago, the FCC requested public comment on how to proceed with the band.

The C Band is 500 MHz of spectrum that the FCC, like regulators around the globe, dedicated for satellite use years ago and gave to satellite companies to share among each other. Satellite operators typically use it to transmit cable programming to a regional cable network operations center, where it is bundled and relayed to cable subscribers. However, the C Band would work terrifically if repurposed for 5G and cellular services. As Joe Kane explained in a white paper, the FCC and telecom companies are exploring various ways of accomplishing that.

Free-market groups disagree. Should the FCC prioritize:

The quick deployment of new wireless services? Or:

Deficit reduction and limiting FCC-granted windfalls?

This is a complex question since we’re dealing with the allocation of public property. Both sides, in my view, have a defensible free-market position. There are other non-trivial C Band issues like interference protection and the FCC’s authority to act here, but here I’ll focus on the ideological split on the right.

The case for secondary markets

The full 500 MHz of “clean” C Band in the US would be worth tens of billions of dollars to cellular companies. However, the current satellite users don’t want to part with all of it, and a group of satellite companies using the spectrum estimates that they could sell 200 MHz to cellular carriers if the FCC would liberalize its rules to allow flexible uses (like 5G), not merely satellite services. The satellite providers would then be able to sell much of their spectrum on the secondary market (probably to cellular providers) at a nice premium.

Prof. Dan Lyons and Roslyn Layton wrote in support of the secondary market plan on the AEI blog and at Forbes, respectively. Joe Kane also favors the approach. As they say, the benefit of secondary market sales is that they will likely lead to a significant and fast repurposing of the C Band for mobile use. The consumer benefits of “upzoned” spectrum are large, and with every year of inaction, billions of dollars of consumer welfare evaporate. Hazlett and Munoz estimate that spectrum reallocated from a restricted use to flexible use generates annual consumer benefits of the same order of magnitude as the auction value of the spectrum.

I’d add that there’s a history of the FCC upzoning spectrum (SMR spectrum in 2004, EBS spectrum in 2004, AWS-4 in 2011, WCS spectrum in 2012). The FCC is considering doing this with some government spectrum that Ligado or others could repurpose for mobile broadband. In these cases, the FCC upzoned spectrum so that it could be put to higher-valued uses, not the legacy uses required by previous FCCs. The circumstances and technologies vary, but some of these bands were repurposed quickly for better uses by cellular providers and are used for 4G LTE today by tens of millions of Americans.

The case for FCC auction

Liberalizing spectrum quickly gets spectrum to higher-valued uses, but it does raise the complaint that the existing users are gaining an unfair windfall. I’m not sure when the C Band was allocated for satellite use, but many legacy spectrum assignments were given to industries for free.

When the FCC upzones spectrum, it typically increases the value of the band. The “secondary market” plan is akin to the government giving away a parcel of public land to a developer to be used for a gas station, then deciding years later to upzone the land so that condo or office buildings can be built on it. It’s a better use for the land, but the gas station operator gains a big windfall when the property value increases. Not only is there a windfall, the government captures no revenue from the increase in the value of public property.

Free-market groups like Americans for Tax Reform, Taxpayers Protection Alliance, and Citizens Against Government Waste favor the FCC reclaiming the spectrum from satellite providers, perhaps via incentive auction, and collecting government revenue by re-selling it. If the FCC went the incentive auction route, it would purchase the “satellite spectrum” (i.e., at a low price) from the current C Band users, upzone it, and re-sell that spectrum as “mobile spectrum” (i.e., at a high price) in an open auction. The FCC and the Treasury pocket the difference, probably several billion dollars here.

The FCC has only done one incentive auction, the 600 MHz auction. There, the FCC purchased “TV spectrum” from broadcasters and re-sold it to wireless carriers.
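
To make the two-sided mechanics concrete, here is a minimal sketch of the cash flows in an incentive auction. The dollar figures are invented placeholders, not estimates of actual C Band values.

```python
# Minimal sketch of incentive-auction cash flows. All dollar figures are
# hypothetical placeholders, not estimates of actual C Band values.

def incentive_auction_net_revenue(paid_to_incumbents: float,
                                  paid_by_new_licensees: float) -> float:
    """Net government revenue: forward-auction proceeds for flexible-use
    licenses minus reverse-auction payments to incumbents who relinquish
    their spectrum."""
    return paid_by_new_licensees - paid_to_incumbents

# Hypothetical example: the FCC buys "satellite spectrum" cheaply, upzones it,
# and resells it to carriers as "mobile spectrum" at a higher price.
reverse_auction_payments = 4_000_000_000    # to current C Band users (low price)
forward_auction_proceeds = 11_000_000_000   # from mobile carriers (high price)

net = incentive_auction_net_revenue(reverse_auction_payments, forward_auction_proceeds)
print(f"Net to the Treasury: ${net / 1e9:.1f} billion")   # -> $7.0 billion
```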

The benefit of this approach is deficit reduction, and there’s more perceived fairness since there’s no big, FCC-granted windfall to legacy users. The downside is that it’s a slower, more complicated process since the FCC is deeply involved in the spectrum transfer. Arguably, however, the FCC should be deeply involved and interested in government revenue since spectrum is public property.

My view

A few years ago I would have definitely favored speed and the secondary market plan. I still lean towards that approach, but I’m a little more on the fence after reading Richard Epstein’s and others’ work on the “public trust doctrine.” This is a traditional governance principle that requires public actors to receive fair value when disposing of public property. It prevents public institutions from giving discounted public property to friends and cronies. Clearly, cronyism isn’t the issue here, and today’s FCC can’t undo what FCCs did generations ago in giving away spectrum. I think the need for speedy deployment trumps the windfall issue here, but it’s a closer call for me than in the past.

One proposal that hasn’t been contemplated with the C Band but might have merit is an overlay auction with a deadline. With such an auction, the FCC gives incumbent users a deadline to vacate a band (say, 5 years). The FCC then auctions flexible-use licenses in the band. The FCC receives the auction revenues and the winning bidders are allowed to deploy services immediately in the “white spaces” unoccupied by the incumbents. The winning bidders are allowed to pay the incumbents to move out before the deadline.

With an overlay auction, you get fairly rapid deployment (at least in the white spaces) and the government gains revenue from the auction. This type of auction was used to deploy PCS cellular service in the 1990s and AWS-1 service in the 2000s. However, incumbents dislike it because the deadline devalues their existing spectrum holdings.
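
Here is a stylized comparison of who captures the upzoning gain under the three approaches discussed above. The dollar values and the overlay split are purely illustrative assumptions, since actual overlay proceeds and early-clearing side payments would be set by negotiation and auction results.

```python
# Stylized comparison of who captures the "upzoning" gain under the three
# approaches discussed above. All values, including the overlay split, are
# purely illustrative assumptions.

legacy_value = 3_000_000_000     # band's value in its current satellite use
upzoned_value = 10_000_000_000   # band's value once flexible (mobile) use is allowed
windfall = upzoned_value - legacy_value

scenarios = {
    # Incumbents sell liberalized licenses on the secondary market and keep the gain.
    "secondary market":  {"incumbents": windfall, "treasury": 0},
    # FCC buys at roughly legacy value, resells at upzoned value, keeps the spread.
    "incentive auction": {"incumbents": 0, "treasury": windfall},
    # Overlay: treasury keeps auction proceeds; incumbents may get side payments
    # from winners to clear early (the 30/70 split here is an arbitrary assumption).
    "overlay auction":   {"incumbents": windfall * 0.3, "treasury": windfall * 0.7},
}

for name, split in scenarios.items():
    print(f"{name:>17}: incumbents ${split['incumbents'] / 1e9:.1f}B, "
          f"treasury ${split['treasury'] / 1e9:.1f}B")
```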

I think overlay auctions should be considered in more spectrum proceedings because they avoid the serious windfall problems while also allowing rapid deployment of new services. That doesn’t seem to be in the cards, however, and the secondary-market approach seems like the next best option.

– Coauthored with Mercatus MA Fellow Walter Stover

The advent of artificial intelligence (AI) in dynamic pricing has given rise to fears of ‘digital market manipulation.’ Proponents of this claim argue that companies leverage AI to obtain greater information about people’s biases and then exploit them for profit through personalized pricing. Those who advance these arguments often support regulation to protect consumers against information asymmetries and subsequent coercive market practices; such fears, however, ignore the importance of the institutional context. These market manipulation tactics will have little effect precisely because they lack the coercive power to force people to open their wallets. Such coercive power is a function of social and political institutions, not of the knowledge of people’s biases and preferences that could be gathered from algorithms.

As long as companies such as Amazon operate in a competitive market setting, they are constrained in their ability to coerce customers who can vote with their feet, regardless of how much knowledge they actually gather about those customers’ preferences through AI technology. Continue reading →

I (Eye), Robot?


[Originally published on the Mercatus Bridge blog on May 7, 2019.]

I became a little bit more of a cyborg this month with the addition of two new eyes—eye lenses, actually. Before I had even turned 50, the old lenses that Mother Nature gave me were already failing due to cataracts. But after having two operations this past month and getting artificial lenses installed, I am seeing clearly again thanks to the continuing miracles of modern medical technology.

Cataracts can be extraordinarily debilitating. One day you can see the world clearly, the next you wake up struggling to see through a cloudy ocular soup. It is like looking through a piece of cellophane wrap or a continuously unfocused camera.

If you depend on your eyes to make a living, as most of us do, then cataracts make it a daily struggle to get even basic things done. I spend most of my time each workday reading and writing. Once the cataracts hit, I had to purchase a half-dozen pairs of strong reading glasses and spread them out all over the place: in my office, house, car, gym bag, and so on. Without them, I was helpless.

Reading is especially difficult in dimly lit environments, and even with strong glasses you can forget about reading the fine print on anything. Every pillbox becomes a frightening adventure. I invested in a powerful magnifying glass to make sure I didn’t end up ingesting the wrong things.

For those afflicted with particularly bad cataracts, it becomes extraordinarily risky to drive or operate machinery. More mundane things—watching TV, tossing a ball with your kid, reading a menu at many restaurants, looking at art in a gallery—also become frustrating. Continue reading →

Over at the American Institute for Economic Research blog, I recently posted two new essays discussing increasing threats to innovation and how to counter them. The first is “The Radicalization of Modern Tech Criticism,” and the second is “How To Defend a Culture of Innovation During the Technopanic.”

“Technology critics have always been with us, and they have sometimes helped temper society’s occasional irrational exuberance about certain innovations,” I note in the opening of the first essay. The problem is that “technology critics sometimes go much too far and overlook the importance of finding new and better ways of satisfying both basic and complex human needs and wants.” I go on to highlight the growing “technopanic” rhetoric we sometimes hear today, including various claims that “it’s OK to be a Luddite” and calls for a “degrowth movement” that would slow the wheels of progress. That would be a disaster for humanity because, as I note in concluding that first essay:

Through ongoing trial-and-error tool building, we discover new and better ways of satisfying human needs and wants to better our lives and the lives of those around us. Human flourishing is dependent upon our collective willingness to embrace and defend the creativity, risk-taking, and experimentation that produces the wisdom and growth that propel us forward. By contrast, today’s neo-Luddite tech critics suggest that we should just be content with the tools of the past and slow down the pace of technological innovation to supposedly save us from any number of dystopian futures they predict. If they succeed, it will leave us in a true dystopia that will foreclose the entrepreneurialism and innovation opportunities that are paramount to raising the standard of living for billions of people across the world.

In the second essay, I make an attempt to sketch out a more robust vision and set of principles to counter the tech critics. Continue reading →

A decade ago, a heated debate raged over the benefits of “a la carte” (or “unbundling”) mandates for cable and satellite TV operators. Regulatory advocates said consumers wanted to buy all TV channels individually to lower costs. The FCC under former Republican Chairman Kevin Martin got close to mandating a la carte regulation.

But the math just didn’t add up. A la carte mandates, many economists noted, would actually cost consumers just as much (or even more) once they repurchased all the individual channels they desired. And it wasn’t clear people really wanted a completely atomized one-by-one content shopping experience anyway.

Throughout media history, bundles of all different sorts had been used across many different sectors (books, newspapers, music, etc.). This was because consumers often enjoyed the benefits of getting diverse content delivered to them in an all-in-one package. Bundling also helped media operators create and sustain a diversity of content using creative cross-subsidization schemes. The traditional newspaper is perhaps the greatest example of media bundling: the classifieds and sports sections helped cross-subsidize hard news (especially local reporting). See this 2008 essay by Jeff Eisenach and me for more details on the economics of a la carte.
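
The cross-subsidy logic is easy to see with a stylized example: a channel’s programming costs are mostly fixed, so spreading them over every bundle subscriber yields a much lower per-subscriber price than charging only the minority who would pick that channel a la carte. A minimal sketch with invented numbers:

```python
# Stylized illustration of why unbundling can raise per-channel prices:
# a channel's largely fixed programming cost is spread over far fewer
# payers once only interested viewers buy it. All numbers are invented.

programming_cost = 120_000_000        # annual fixed cost of producing a channel
bundle_subscribers = 60_000_000       # everyone in the bundle helps cover it
a_la_carte_subscribers = 6_000_000    # only 10 percent actually want the channel

cost_per_sub_in_bundle = programming_cost / bundle_subscribers       # $2.00/year
cost_per_sub_a_la_carte = programming_cost / a_la_carte_subscribers  # $20.00/year

print(f"Break-even price inside the bundle: ${cost_per_sub_in_bundle:.2f}/year")
print(f"Break-even price a la carte:        ${cost_per_sub_a_la_carte:.2f}/year")
```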

Yet, with the rise of cable and satellite television, some critics protested the use of bundles for delivering content. Even though it was clear that the incredible diversity of 500+ channels on pay TV was directly attributable to strong channels cross-subsidizing weaker ones, many regulatory advocates said we would be better off without bundles. Moreover, they said, online video markets could show us the path forward in the form of radically atomized content options and cheaper prices.

Flash-forward to today. Continue reading →

In my first essay for the American Institute for Economic Research, I discuss what lessons the great prophet of innovation Joseph Schumpeter might have for us in the midst of today’s “techlash” and rising tide of technopanics. I argue that, “[i]f Schumpeter were alive today, he’d have two important lessons to teach us about the techlash and why we should be wary of misguided interventions into the Digital Economy.” Specifically:
We can summarize Schumpeter’s first lesson in two words: Change happens. But disruptive change only happens in the right policy environment. Which gets to the second great lesson that Schumpeter can still teach us today, and which can also be summarized in two words: Incentives matter. Entrepreneurs will continuously drive dynamic, disruptive change, but only if public policy allows it.
Schumpeter’s now-famous model of “creative destruction” explained that economies are never in a state of static equilibrium and that entrepreneurial competition comes from many (usually completely unpredictable) sources. “This kind of competition is much more effective than the other,” he argued, because the “ever-present threat” of dynamic, disruptive change “disciplines before it attacks.”
But if we want innovators to take big risks and challenge existing incumbents and their market power, then it is essential that we get policy incentives right or else this sort of creative destruction will never come about. The problem with too much of today’s “techlash” thinking is that it imagines the current players are here to stay and that their market power is unassailable. Again, that is static “snapshot” thinking that ignores the reality that new generations of entrepreneurs are in a sort of race for a prize and will make big bets on the future in the face of seemingly astronomical odds against their success. But we have to give them a chance to win that “prize” if we want to see that dynamic, disruptive change happen.
As always, we have much to learn from Schumpeter. Jump over to the AIER website to read the entire essay.

Many have likened efforts to build out rural broadband today to the accomplishments of rural electrification in the 1930s. But the two couldn’t be further from each other. From the structure of the program and the underlying costs to the impact on productivity, rural electrification is drastically different from current efforts to get broadband into rural regions. My recent piece at RealClearPolicy explores some of those differences, but there is one area I wasn’t able to explore: the question of cost. If a government agency, any government agency for that matter, were able to repeat that dramatic reduction in cost for broadband, the US wouldn’t have a deployment problem. Continue reading →

It was my great pleasure to recently join Paul Matzko and Will Duffield on the Building Tomorrow podcast to discuss some of the themes in my last book and my forthcoming one. During our 50-minute conversation, which you can listen to here, we discussed:

  • the “pacing problem” and how it complicates technological governance efforts;
  • the steady rise of “innovation arbitrage” and medical tourism across the globe;
  • the continued growth of “evasive entrepreneurialism” (i.e., efforts to evade traditional laws & regs while innovating);
  • new forms of “technological civil disobedience;”
  • the rapid expansion of “soft law” governance mechanisms as a response to these challenges; and,
  • craft beer bootlegging tips!  (Seriously, I move a lot of beer in the underground barter markets).

Bounce over to the Building Tomorrow site and give the show a listen. Fun chat.