Economics

A curious thing happened last week. Facebook’s stock, which had seemed to weather the 2018 controversies, took a beating.

In the Washington Post, Craig Timberg and Elizabeth Dwoskin explained that the stock market drop was representative of a larger wave:

The cost of years of privacy missteps finally caught up with Facebook this week, sending its market value down more than $100 billion Thursday in the largest single-day drop in value in Wall Street history.

Jeff Chester of the Center for Digital Democracy piled on, describing the drop as “a privacy wake-up call that the markets are delivering to Mark Zuckerberg.”

But the downward pressure was driven by more fundamental changes. Simply put, Facebook missed its earnings targets. It is important, though, to peer into why the company didn’t meet those targets. Continue reading →

The White House has announced a new effort to help prepare workers for the challenges they will face in the future. While it’s a well-intentioned effort, and one that I hope succeeds, I’m skeptical about it for a simple reason: It’s just really hard to plan for the workforce needs of the future and train people for jobs that we cannot possibly envision today.

Writing in the Wall Street Journal today, Ivanka Trump, senior adviser to the president, outlines the elements of a new Executive Order that President Trump is issuing “to prioritize and expand workforce development so that we can create and fill American jobs with American workers.” Toward that end, the Administration plans on:

  • establishing a National Council for the American Worker, “composed of senior administration officials, who will develop a national strategy for training and retraining workers for high-demand industries.” This is meant to bring more efficiency and effectiveness to the federal government’s “more than 40 workforce-training programs in more than a dozen agencies,” of which “too many have produced meager results.”
  • “facilitat[ing] the use of data to connect American businesses, workers and educational institutions.” This is meant to help workers find “what jobs are available, where they are, what skills are required to fill them, and where the best training is available.”
  • launching a nationwide campaign “to highlight the growing vocational crisis and promote careers in the skilled trades, technology and manufacturing.”

The Administration also plans to create a new advisory board of experts to address these issues, and it is “asking companies and trade groups throughout the country to sign our new Pledge to America’s Workers—a commitment to invest in the current and future workforce.” The hope is to encourage companies to take additional steps “to educate, train and reskill American students and workers.”

Perhaps some of these steps make sense, and perhaps a few will even help workers deal with the challenges of our more complex, fast-evolving, global economy. But I doubt it.

Continue reading →

Expanding rural broadband has generated significant interest in recent years. However, the current subsidy programs are often mismanaged and impose little accountability. It’s not clear what effect rural broadband subsidies have had, despite the amount of money spent on them. As economist Scott Wallsten has pointed out, the US government has spent around $100 billion on rural telecommunications and broadband since 1995 “without evidence that it has improved adoption.”

So I was pleased to hear a few months ago that the Montana Public Service Commission was making an inquiry into how to improve rural broadband subsidy programs. Montana looms large in rural broadband discussions because Montana telecommunications providers face some of the most challenging terrain in the US: mountainous, vast, and lightly populated. (In fact, “no bars on your phone” in rural Montana is a major plot element in the popular videogame Far Cry 5. HT Rob Jackson.)

I submitted comments in the Montana PSC proceeding and received an invitation to testify at a hearing on the subject. So last week I flew to Helena to discuss rural broadband programs with the PSC and panelists. I emphasized three points.

  • Federal broadband subsidy programs are facing higher costs and fewer beneficiaries.

Using FCC data, I calculated that since 1998, USF high-cost subsidies to Montana telecom companies have risen by about 40% while the number of rural customers served by those companies has decreased by over 50%. I suspect these trends are common nationally: USF subsidies are increasing while fewer people benefit.
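Combining those two trends with the round figures above gives a back-of-the-envelope sense of the problem: the subsidy per remaining rural customer has nearly tripled. With S for total subsidies and N for customers served,

    \[
    \frac{S_{\text{now}}/N_{\text{now}}}{S_{1998}/N_{1998}}
    \approx \frac{1.4\,S_{1998}/(0.5\,N_{1998})}{S_{1998}/N_{1998}}
    = \frac{1.4}{0.5} = 2.8
    \]

That is, roughly 2.8 times as many subsidy dollars per beneficiary as two decades ago.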

  • Wireless broadband is the future, especially in rural areas.

“Fiber everywhere” is not a wise use of taxpayer funds, and exurban and rural households are increasingly relying on wireless–from satellite, WISPs, and mobile. In 2016, the CDC reported that more households had wireless phone service than landline phone service. You’re starting to see “cord cutting” pick up for broadband as well. Census surveys indicate that in 2013, 10% of Internet-using households were mobile Internet only (no landline Internet). By 2015, that percentage had doubled, and about 20% of households were mobile-only. The percentage is likely even higher today now that unlimited data plans are common. Someday soon the FCC will have to conclude that mobile broadband is a substitute for fixed broadband, and subsidy programs should reflect that.

  • Consumer-focused “tech vouchers” would be a huge improvement over current broadband programs.

Current programs subsidize the construction of networks even where there’s no demand. The main reason the vast majority of non-Internet users don’t subscribe to broadband is that they are uninterested in subscribing, according to surveys from the NTIA (55% are uninterested), Pew (70% are uninterested), and FCC and Connected Nation experts (63% are uninterested). With rising costs and diminishing returns to rural fiber construction, the FCC needs to reevaluate USF and make subsidies more consumer-focused. For a couple of years, the UK has pursued another model for rural broadband: consumer broadband vouchers. Since most people who don’t subscribe to broadband don’t want it, vouchers protect taxpayers from unnecessary expense and from paying for gold-plated services.

For years, economists and the GAO have criticized the structure, complexity, and inefficiency of the USF programs, particularly the rural program. The FCC is constantly changing the programs because of real and perceived deficiencies, but this has made the USF unwieldy. Montana providers participate in at least seven different rural USF programs (and that doesn’t include the other USF programs and subprograms or other federal help, like RUS grants).

Unfortunately, most analysis and reporting on US broadband programs can be summed up as “don’t touch the existing programs–just send more money.” (There are some exceptions and scrutiny of the programs, like Tony Romm’s 2015 Politico investigation into the mismanagement of stimulus-funded Ag Department broadband projects.)

“Journalism as advocacy” is unfortunately the norm when it comes to broadband policy. Take, for instance, this article about the digital divide that omits mention of the $100 billion spent in rural areas alone, only to conclude that “small [broadband] companies and cooperatives are going it more or less alone, without much help yet from the federal government.”

(That story and another digital divide story had other problems, namely, a reliance on an academic study using faulty data purchased from a partisan campaign firm. FiveThirtyEight deserves credit for acknowledging the data’s flaws, but that should have alerted the editors to the need for still more fact-checking.)

States can’t rewrite federal statutes and regulations, but it’s to the Montana PSC’s great credit that it sensed that all is not well. Current trends will only put more stress on the programs. Hopefully other state PUCs will see that the current programs do a disservice to universal service objectives and to consumers.

Last week the FCC commissioners voted to restructure the agency and create an Office of Economics and Analytics. Hopefully the new Office will give some rigor to the “public interest standard” that guides most FCC decisions. It’s important that the FCC formally inject economics into public interest determinations, perhaps much like the Australian telecom regulator’s “total welfare standard,” which is basically a social welfare calculation plus consideration of “broader social impacts.”
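As a minimal formalization (my sketch of the idea, not the Australian regulator’s exact test), a total welfare standard asks whether a proposal raises total welfare:

    \[
    \Delta W = \Delta CS + \Delta PS + \Delta B > 0
    \]

where CS is consumer surplus, PS is producer surplus, and B stands in for the harder-to-quantify “broader social impacts.” A proposal passes when the sum is positive; the difficult, contestable part is estimating B.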

In contrast, the existing “standard” has several components and subcomponents (some of them contradictory) depending on the circumstances; that is, it’s no standard at all. As the first general counsel of the Federal Radio Commission, Louis Caldwell, said of the public interest standard, it means

as little as any phrase that the drafters of the Act could have used and still comply with the constitutional requirement that there be some standard to guide the administrative wisdom of the licensing authority.

Unfortunately, this means public interest determinations are largely shielded from serious court scrutiny. As Judge Posner said of the standard in Schurz Communications v. FCC,

So nebulous a mandate invests the Commission with an enormous discretion and correspondingly limits the practical scope of responsible judicial review.

Posner colorfully characterized FCC public interest analysis in that case:

The Commission’s majority opinion … is long, but much of it consists of boilerplate, the recitation of the multitudinous parties’ multifarious contentions, and self-congratulatory rhetoric about how careful and thoughtful and measured and balanced the majority has been in evaluating those contentions and carrying out its responsibilities. Stripped of verbiage, the opinion, like a Persian cat with its fur shaved, is alarmingly pale and thin.

Every party who does significant work before the FCC has agreed with Judge Posner’s sentiments at one time or another.

Which brings us to the Office of Economics and Analytics. Cost-benefit analysis has its limits, but economic rigor is increasingly important as the FCC turns its attention away from media regulation and towards spectrum assignment and broadband subsidies.

The worst excesses of FCC regulation are in the past. Consider that one broadcaster’s staff in 1989 “was required to review 14,000 pages of records to compile information for one [FCC] interrogatory alone out of 299.” Or that FCC staff had to sift through and consider 60,000 TV and radio “fairness” complaints in 1970. These regulatory excesses were corrected by economists (namely, by Ronald Coase’s recommendation that spectrum licenses be auctioned rather than given away for free after a broadcast “beauty contest” hearing), but history shows that FCC proceedings can spiral out of control without the agency intending it.

Since Congress provided such a nebulous standard, the FCC is always at risk of regressing. Look no further than the FCC’s meaningless “Internet conduct standard” from its 2015 Open Internet Order. This “net neutrality” regulation is a throwback to the bad old days, an unpredictable conduct standard that–like the Fairness Doctrine–would constantly draw the FCC into social policy activism and distract companies with interminable FCC investigations and unknowable compliance requirements.

In the OIO’s mercifully short life, we saw glimpses of the disputes that would’ve distracted the agency and regulated companies. For instance, prominent net neutrality supporters had wildly different views about whether T-Mobile’s “zero rating” of IP content, a common practice, violated the Internet conduct standard. Chairman Tom Wheeler initially called it “highly innovative and highly competitive,” while Harvard professor Susan Crawford said it was “dangerous” and “malignant” and should be outlawed “immediately.” The nearly year-long FCC investigations into zero rating and the equivocal report that followed sent a clear, chilling message to ISPs and app companies: 20 years of permissionless innovation for the Internet was long enough. Submit your new technologies and business plans to us or face the consequences.

Fortunately, by rescinding the 2015 Order and creating the new economics Office, Chairman Pai and his Republican colleagues are improving the outlook for the development of the Internet. Hopefully the Office will make social welfare calculations a critical part of the public interest standard.

The FCC released a proposed Order today that would create an Office of Economics and Analytics. Last April, Chairman Pai proposed this data-centric office. There are about a dozen bureaus and offices within the FCC, and this proposed change in the FCC’s organizational structure would consolidate a few offices and many FCC economists and experts into a single office.

This is welcome news. Several years ago, when I was in law school, I was a legal clerk for the FCC Wireless Bureau and for the FCC Office of General Counsel. During that ten-month stint, I was surprised at the number of economists, all excellent, at the FCC. I worked closely with several of them (and helped organize what one FCC official unofficially dubbed “The Economists’ Cage Match” for outside experts sparring over the competitive effects of the proposed AT&T-T-Mobile merger). However, my impression even during my limited time at the FCC was well stated by Chairman Pai in April:

[E]conomists are not systematically incorporated into policy work at the FCC. Instead, their expertise is typically applied in an ad hoc fashion, often late in the process. There is no consistent approach to their use.

And since the economists are sprinkled about the agency, their work is often “siloed” within their respective bureaus. Economics as an afterthought in telecom is not good for the development of US tech industries, nor for consumers.

As Geoffrey Manne and Allen Gibby said recently, “the future of telecom regulation is antitrust,” and the creation of the OEA is a good step in line with global trends. Many nations–like the Netherlands, Denmark, Spain, Japan, South Korea, and New Zealand–are restructuring legacy telecom regulators. The days of public and private telecom monopolies and of discrete, separate communications, computer, and media industries (and thus bureaus) are past. Convergence, driven by IP networks and deregulation, has created these trends and resulted in sometimes dramatic restructuring of agencies.

In Denmark, for instance, as Roslyn Layton and Joe Kane have written, national parties and regulators took inspiration from the deregulatory plans of the Clinton FCC. The Social Democrats, the Radical Left, the Left, the Conservative People’s Party, the Socialist People’s Party, and the Center Democrats agreed in 1999:

The 1990s were focused on breaking down old monopoly; now it is important to make the frameworks for telecom, IT, radio, TV meld together—convergence. We believe that new technologies will create competition.

It is important to ensure that regulation does not create a barrier for the possibility of new converged products; for example, telecom operators should be able to offer content if they so choose. It is also important to ensure digital signature capability, digital payment, consumer protection, and digital rights. Regulation must be technologically neutral, and technology choices are to be handled by the market. The goal is to move away from sector-specific regulation toward competition-oriented regulation. We would prefer to handle telecom with competition laws, but some special regulation may be needed in certain cases—for example, regulation for access to copper and universal service.

This agreement was followed up by the quiet shuttering of NITA, the Danish telecom agency, in 2011.

Bringing economic rigor to the FCC’s notoriously vague “public interest” standard seemed to be occurring (slowly) during the Clinton and Bush administrations. However, during the Obama years, this progress was derailed, largely by the net neutrality silliness, which not only distracted US regulators from actual problems, like rural broadband expansion, but also reinvigorated the media-access movement, whose followers believe the FCC should have a major role in shaping US culture, media, and technologies.

Fortunately, those days are in the rearview mirror. The proposed creation of the OEA represents another pivot toward the likely future of US telecom regulation: a focus on consumer welfare, competition, and data-driven policy.

If the techno-pessimists are right and robots are set to take all the jobs, shouldn’t employment in Amazon warehouses be plummeting right now? After all, Amazon’s sorting and fulfillment centers have been automated at a rapid pace, with robotic technologies now being integrated into almost every facet of the process. (Just watch the video below to see it all in action.)

And yet according to this Wall Street Journal story by Laura Stevens, Amazon is looking to immediately fill 50,000 new jobs, which would mean that its U.S. workforce “would swell to around 300,000, compared with 30,000 in 2011.”  According to the article, “Nearly 40,000 of the promised jobs are full-time at the company’s fulfillment centers, including some facilities that will open in the coming months. Most of the remainder are part-time positions available at Amazon’s more than 30 sorting centers.”

How can this be? Shouldn’t the robots have eaten all those jobs by now?

Continue reading →

By Brent Skorup and Melody Calkins

Tech-optimists predict that drones and small aircraft may soon crowd US skies. An FAA administrator predicted that by 2020 tens of thousands of drones would be in US airspace at any one time. Further, over a dozen companies, including Uber, are building vertical takeoff and landing (VTOL) aircraft that could one day shuttle people point-to-point in urban areas. Today, low-altitude airspace use is episodic (helicopters, ultralights, drones), and with such light use, the airspace is shared on an ad hoc basis with little air traffic management. Coordinating thousands of aircraft in low-altitude flight, however, demands a new regulatory framework.

Why not auction off low-altitude airspace for exclusive use?

There are two basic paradigms for resource use: open access and exclusive ownership. Most high-altitude airspace is lightly used and the open access regime works tolerably well because there are a small number of players (airline operators and the government) and fixed routes. Similarly, Class G airspace—which varies by geography but is generally the airspace from the surface to 700 feet above ground—is uncontrolled and virtually open access.

Valuable resources vary immensely in their character–taxi medallions, real estate, radio spectrum, intellectual property, water–and a resource use paradigm, once selected, requires iteration and modification to ensure productive use. “The trick,” Prof. Richard Epstein notes, “is to pick the right initial point to reduce the stress on making these further adjustments.” If indeed dozens of operators will be vying for variable drone and VTOL routes in hundreds of local markets, exclusive use models could create more social benefits and output than open access and regulatory management. NASA is exploring complex coordination systems in this airspace but, rather than agency permissions, lawmakers should consider using property rights and the price mechanism.

The initial allocation of airspace could be determined by auction (a rough sketch of such a mechanism follows the list below). An agency, probably the FAA, would:

  1. Identify and define geographic parcels of Class G airspace;
  2. Auction off the parcels to any party (private corporations, local governments, non-commercial stakeholders, or individual users) for a term of years with an expectation of renewal; and
  3. Permit the sale, combination, and subleasing of those parcels.
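To make steps 1–3 concrete, here is a rough sketch in Python of one plausible mechanism: a sealed-bid, second-price auction per parcel, followed by secondary-market transfers. The parcel and bidder names are hypothetical, and this is an illustration of the concept, not a proposed FAA spec.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Parcel:
        parcel_id: str                # a defined geographic cell of Class G airspace
        term_years: int               # term of years, with an expectation of renewal
        holder: Optional[str] = None  # current leaseholder, if any

    @dataclass
    class Bid:
        bidder: str                   # firm, local government, nonprofit, or individual
        amount: float

    def award(parcel: Parcel, bids: List[Bid]) -> Tuple[str, float]:
        """Award the parcel to the highest bidder at the second-highest bid price."""
        ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
        price = ranked[1].amount if len(ranked) > 1 else ranked[0].amount
        parcel.holder = ranked[0].bidder
        return parcel.holder, price

    def transfer(parcel: Parcel, seller: str, buyer: str) -> None:
        """Secondary-market sale: latecomers can buy access without an initial award."""
        assert parcel.holder == seller, "only the current holder can sell"
        parcel.holder = buyer

    # Hypothetical usage: a VTOL operator wins a parcel, then sells it on.
    parcel = Parcel("helena-grid-17-sfc-700ft", term_years=10)
    winner, price = award(parcel, [Bid("vtol-operator", 900_000.0),
                                   Bid("drone-courier", 600_000.0)])
    transfer(parcel, winner, "late-entrant")

The second-price rule is just one design choice; the essential features are the ones steps 1–3 describe: defined parcels, a term of years, and transferability.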

The likely alternative scenario–regulatory allocation and management of airspace–derives from historical precedent in aviation and spectrum policy:

  1. First movers and the politically powerful acquire de facto control of low-altitude airspace,
  2. Incumbents and regulators exclude and inhibit newcomers and innovators,
  3. The rent-seeking and resource waste becomes unendurable for lawmakers, and
  4. Market-based reforms are slowly and haphazardly introduced.

For instance, after demand for commercial flights took off in the 1960s, a command-and-control quota system was created for crowded Northeast airports. Takeoff and landing rights, called “slots,” were assigned to early airlines but regulators did not allow airlines to sell those rights. The anticompetitive concentration and hoarding of airport slots at terminals is still being slowly unraveled by Congress and the FAA to this day. There’s a similar story for government assignment of spectrum over decades, as explained in Thomas Hazlett’s excellent new book, The Political Spectrum.

The benefit of an auction, plus secondary markets, is that the resource is generally put to its highest-valued use. Secondary markets and subleasing also permit latecomers and innovators to gain resource access despite lacking an initial assignment and political power. Further, exclusive use rights would provide VTOL operators (and passengers) the added assurance that routes would be “clear” of potential collisions. (A more regulatory regime might provide that assurance, but likely via complex restrictions on airspace use.) Airspace rights would be a new cost for operators, but exclusive use means operators can economize on complex sensors, other safety devices, and lobbying costs. Operators would also possess an asset to sublease and monetize.

Another bonus (from the government’s point of view) is that the sale of Class G airspace can provide government revenue. Revenue would be slight at first but could prove lucrative once there’s substantial commercial interest. The Federal government, for instance, auctions off its usage rights for grazing, oil and gas retrieval, radio spectrum, mineral extraction, and timber harvesting. Spectrum auctions alone have raised over $100 billion for the Treasury since they began in 1994.

[originally published on Plaintext on June 21, 2017.]

This summer, we celebrate the 20th anniversary of two developments that gave us the modern Internet as we know it. One was a court case that guaranteed online speech would flow freely, without government prior restraints or censorship threats. The other was an official White House framework for digital markets that ensured the free movement of goods and services online.

The result of these two vital policy decisions was an unprecedented explosion of speech freedoms and commercial opportunities that we continue to enjoy the benefits of twenty years later.

While it is easy to take all this for granted today, it is worth remembering that, in the long arc of human history, no technology or medium has more rapidly expanded the range of human liberties — both speech and commercial liberties — than the Internet and digital technologies. But things could have turned out much differently if not for the crucially important policy choices the United States made for the Internet two decades ago. Continue reading →

There is reporting suggesting that the Trump FCC may move to eliminate the FCC’s complex Title II regulations for the Internet and restore the FTC’s ability to police anticompetitive and deceptive practices online. This is obviously welcome news. These reports also suggest that FCC Chairman Pai and the FTC will require ISPs to add open Internet principles to their terms of service, that is, no unreasonable blocking or throttling of content and no paid priority. These principles have always been imprecise because federal law allows ISPs to block objectionable content if they wish (like pornography or violent websites) and because ISPs have a First Amendment right to curate their services.

Whatever the exact wording, there shouldn’t be a per se ban on paid priority. Whatever policy develops should limit anticompetitive paid priority, not all paid priority. Paid prioritization is simply a form of consideration payment, which is economists’ term for when upstream producers pay downstream retailers or distributors for special treatment. There’s an economics literature on consideration payments, and they are an accepted business practice in many other industries. Further, consideration payments often benefit small providers and niche customers. Some small and large companies with interactive IP services might be willing to pay for end-to-end service reliability.

The Open Internet Order’s paid priority ban has always been shortsighted because it attempts to preserve the Internet as it existed circa 2002. It resembles the FCC’s unfounded insistence for decades that subscription TV (i.e., how the vast majority of Americans consume TV today) was against “the public interest.” Like the defunct subscription TV ban, the paid priority ban is an economics-free policy that will hinder new services.

Despite what late-night talk show hosts might say, “fast lanes” on the Internet are here and will continue. “Fast lanes” have always been permitted because, as Obama’s US CTO Aneesh Chopra noted, some emerging IP services need special treatment. Priority transmission was built into Internet protocols years ago and the OIO doesn’t ban data prioritization; it bans BIAS providers from charging “edge providers” a fee for priority.
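To make the “built into Internet protocols” point concrete: the IP header has carried a type-of-service/DiffServ field for decades, which applications can set to request priority handling, though networks are free to ignore the marking. A minimal sketch in Python, on a platform that exposes IP_TOS; the destination address is a placeholder from the documentation range:

    import socket

    # DSCP 46 ("Expedited Forwarding," conventionally used for latency-sensitive
    # traffic such as voice) occupies the top six bits of the IP TOS byte.
    EF_TOS = 46 << 2  # 0xB8

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_TOS)

    # Datagrams sent on this socket now carry a priority marking; routers along
    # the path may give them preferential queueing, or ignore the marking entirely.
    sock.sendto(b"latency-sensitive payload", ("192.0.2.10", 5004))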

The notion that there’s a level playing field online needing preservation is a fantasy. Non-real-time services like Netflix streaming, YouTube, Facebook pages, and major websites can mostly be “cached” on servers scattered around the US. Major web companies have their own form of paid prioritization–they spend millions annually, including large payments to ISPs, on transit agreements, CDNs, and interconnection in order to avoid congested Internet links.

The problem with a blanket paid priority ban is that it biases the evolution of the Internet in favor of these cache-able services and against real-time or interactive services like teleconferencing, live TV, and gaming. Caching doesn’t work for these services because there’s nothing to cache beforehand. 

When would paid prioritization make sense? Most likely for a specialized service, for dedicated users, that requires end-to-end reliability.

I’ll use a plausible example to illustrate the benefits of consideration payments online–a telepresence service for deaf people. As Martin Geddes described, a decade ago the government in Wales developed such a service. The service architects discovered that a well-functioning service had quality characteristics not supplied by ISPs. ISPs and video chat apps like Skype optimize their networks, video codecs, and services for non-deaf people (i.e., most customers) and prioritize consistent audio quality over video quality. While that’s useful for most people, deaf people need basically the opposite optimization because they need to perceive subtle hand and finger motions. The typical app that prioritizes audio, not video, doesn’t work for them.

But high-def, real-time video quality requires upstream and downstream capacity reservation and end-to-end reliability. This is not cheap to provide. An ISP, in this illustration, has three options: charge the telepresence provider, charge deaf customers a premium, or spread the costs across all customers. The paid priority ban means ISPs can only charge customers for the increased costs. That unnecessarily limits the potential for such services, since there may be companies or nonprofits willing to subsidize them.
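To see the three options side by side, take some loudly hypothetical numbers: suppose the end-to-end capacity reservation costs the ISP $50,000 per year, and the ISP has 200,000 subscribers, 500 of whom are deaf users of the service. Then

    \[
    \frac{\$50{,}000}{500\ \text{deaf users}} = \$100\ \text{per user per year},
    \qquad
    \frac{\$50{,}000}{200{,}000\ \text{subscribers}} = \$0.25\ \text{per subscriber per year}.
    \]

The third option, billing the telepresence provider the full $50,000, is the one the ban forecloses, even when a sponsor or nonprofit would happily pay it so that neither deaf users nor other subscribers bear the cost.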

It’s a specialized example but illustrates the idiosyncratic technical requirements needed for many real-time services. In fact, real-time services are the next big challenge in the Internet’s evolution. As streaming media expert Dan Rayburn noted, “traditional one-way live streaming is being disrupted by the demand for interactive engagement.”  Large and small edge companies are increasingly looking for low-latency video solutions. Today, a typical “live” event is broadcast online to viewers with a 15- to 45-second delay. This latency limits or kills the potential for interactive online streaming services like online talk shows, pet cams, online auctions, videogaming, and online classrooms.

If the FTC takes back oversight of ISPs and the Internet it should, as with any industry, permit any business practice that complies with competition law and consumer protection law. The agency should disregard the unfounded belief that consideration payments online (“paid priority”) are always harmful.

Federal Communications Commission (FCC) Chairman Ajit Pai today announced plans to expand the role of economic analysis at the FCC in a speech at the Hudson Institute. This is an eminently sensible idea that other regulatory agencies (both independent and executive branch) could learn from.

Pai first made the case that when the FCC listened to its economists in the past, it unlocked billions of dollars of value for consumers. The most prominent example was the switch from hearings to auctions in order to allocate spectrum licenses. He perceptively noted that the biggest effect of auctions was the massive improvement in consumer welfare, not just the more than $100 billion raised for the Treasury. Other examples of the FCC using the best ideas of its economists include:

  • Use of reverse auctions to allocate universal service funds to reduce costs.
  • Incentive auctions that reward broadcasters for transferring licenses to other uses – an idea initially proposed in a 2002 working paper by Evan Kwerel and John Williams at the FCC.
  • The move from rate of return to price cap regulation for long distance carriers.

More recently, Pai argued, the FCC has failed to use economics effectively. He identified four key problems:

  1. Economics is not systematically employed in policy decisions and often employed late in the process. The FCC has no guiding principles for conduct and use of economic analysis.
  2. Economists work in silos. They are divided up among bureaus. Economists should be able to work together on a wide variety of issues, as they do in the Federal Trade Commission’s Bureau of Economics, the Department of Justice Antitrust Division’s economic analysis unit, and the Securities and Exchange Commission’s Division of Economic and Risk Analysis.
  3. Benefit-cost analysis is not conducted well or often, and the FCC does not take Regulatory Flexibility Act analysis (which assesses effects of regulations on small entities) seriously. The FCC should use Office of Management and Budget guidance as its guide to doing good analysis, but OMB’s 2016 draft report on the benefits and costs of federal regulations shows that the FCC has estimated neither benefits nor costs of any of its major regulations issued in the past 10 years. Yet executive orders from multiple administrations demonstrate that “Serious cost-benefit analysis is a bipartisan tradition.”
  4. Poor use of data. The FCC probably collects a lot of data that’s unnecessary, at a paperwork cost of $800 million per year, not including opportunity costs of the private sector. But even useful data are not utilized well. For example, a few years ago the FCC stopped trying to determine whether the wireless market is effectively competitive even though it collects lots of data on the wireless market.

To remedy these problems, Pai announced an initiative to establish an Office of Economics and Data that would house the FCC’s economists and data analysts. An internal working group will be established to collect input within the FCC and from the public. He hopes to have the new office up and running by the end of the year. The purpose of this change is to give economists early input into the rulemaking process, better manage the FCC’s data resources, and conduct strategic research to help find solutions to “the next set of difficult issues.”

Can this initiative significantly improve the quality and use of economic analysis at the FCC?

There’s evidence that independent regulatory agencies are capable of making some decent improvements in their economic analysis when they are sufficiently motivated to do so. For example, the Securities and Exchange Commission’s authorizing statute contains language that requires benefit-cost analysis of regulations when the commission seeks to determine whether they are in the public interest. Between 2005 and 2011, the SEC lost several major court cases due to inadequate economic analysis.

In 2012, the commission’s general counsel and chief economist issued new economic analysis guidance that pledged to assess regulations according to the principal criteria identified in executive orders, guidance from the Office of Management and Budget, and independent research. In a recent study, I found that the economic analysis accompanying a sample of major SEC regulations issued after this guidance was measurably better than the analysis accompanying regulations issued prior to the new guidance. The SEC improved on all four aspects of economic analysis it identified as critical: assessment of the need for the regulation, assessment of the baseline outcomes that would likely occur in the absence of new regulation, identification of alternatives, and assessment of the benefits and costs of those alternatives.

Unlike the SEC, the FCC faces no statutory benefit-cost analysis requirement for its regulations. Unlike the executive branch agencies, the FCC is under no executive order requiring economic analysis of regulations. Unlike the Federal Trade Commission in the early 1980s, the FCC faces little congressional pressure for abolition.

But Congress is considering legislation that would require all regulatory agencies to conduct economic analysis of major regulations and subject that analysis to limited judicial review. Proponents of executive branch regulatory review have always contended that the president has legal authority to extend the executive orders on regulatory impact analysis to cover independent agencies, and perhaps President Trump is audacious enough to try this. Thus, it appears Chairman Pai is trying to get the FCC out ahead of the curve.