Economics

In our latest feature for Discourse magazine, Connor Haaland and I explore the question, “Should the U.S. Copy China’s Industrial Policy?” We begin by noting that:

Calls for revitalizing American industrial policy have multiplied in recent years, with many pundits and policymakers suggesting that the U.S. should consider taking on Europe and China by emulating their approaches to technological development. The goal would be to have Washington formulate a set of strategic innovation goals and mobilize government planning and spending around them.

We go on to argue that what most of these advocates miss is that:

China’s targeting efforts are often antithetical to both innovation and liberty, and involve plenty of red tape and bureaucracy. China has become a remarkably innovative country for many reasons, including its greater tolerance for risk-taking, even as the Chinese Communist Party continues to pump resources into strategic sectors. But most Chinese innovation is permissible only insomuch as it furthers the party’s objectives, a strategy the U.S. obviously wouldn’t want to copy.

We discuss the problems associated with some of those Chinese efforts, as well as proposed U.S. responses like the recently released 756-page report from the National Security Commission on Artificial Intelligence. The report takes an everything-and-the-kitchen-sink approach to state direction of new AI-related efforts and spending. While the report says the government now must “drive change through top-down leadership” in order to “win the AI competition that is intensifying strategic competition with China,” we argue that such top-down, high-price-tag approaches carry some serious pitfalls.

Jump over to the Discourse site to read the full essay, as well as our previous essay, which asked, “Can European-Style Industrial Policies Create Tech Supremacy?” These two essays build on the research Connor and I have been doing on global artificial intelligence policies. In a much longer forthcoming white paper, we explore both the regulatory and industrial policy approaches to AI being adopted in the U.S., China, and the EU. Stay tuned for more.

In his debut essay for the new Agglomerations blog, my former colleague Caleb Watney, now Director of Innovation Policy for the Progressive Policy Institute, seeks to better define a few important terms: technology policy, innovation policy, and industrial policy. In the end, however, he decides to basically dispense with the term “industrial policy” because, when it comes to defining these terms, “it is useful to have a limiting principle and it’s unclear what the limiting principle is for industrial policy.”

I sympathize. Debates about industrial policy are frustrating and unproductive when people cannot even agree on the parameters of sensible discussion. But I don’t think we need to dispense with the term altogether. We just need to define it somewhat more narrowly to make sure it remains useful.

First, let’s consider how this exact same issue played out three decades ago. In the 1980s, many articles and books featured raging debates about the proper scope of industrial policy. I spent my early years as a policy analyst devouring all these books and essays because I originally wanted to be a trade policy analyst. And in the late 1980s and early 1990s, you could not be a trade policy analyst without confronting industrial policy arguments.

Continue reading →

Interoperability is a topic that has long been of interest to me. How networks, platforms, and devices work with each other, or sometimes fail to, is an important engineering, business, and policy issue. Back in 2012, I spilled out over 5,000 words on the topic when reviewing John Palfrey and Urs Gasser’s excellent book, Interop: The Promise and Perils of Highly Interconnected Systems.

I’ve always struggled with interoperability issues, however, and often avoided them because of the sheer complexity of it all. Some interesting recent essays by sci-fi author and digital activist Cory Doctorow remind me that I need to get back on top of the issue. His latest essay is a call to arms in favor of what he calls “adversarial interoperability.” “[T]hat’s when you create a new product or service that plugs into the existing ones without the permission of the companies that make them,” he says. “Think of third-party printer ink, alternative app stores, or independent repair shops that use compatible parts from rival manufacturers to fix your car or your phone or your tractor.”

Doctorow is a vociferous defender of expanded digital access rights of many flavors and his latest essays on interoperability expand upon his previous advocacy for open access and a general freedom to tinker. He does much of this work with the Electronic Frontier Foundation (EFF), which shares his commitment to expanded digital access and interoperability rights in various contexts.

I’m in league with Doctorow and EFF on some of these things, but also find myself thinking they go much too far in other ways. At root, their work and advocacy raise a profound question: should there be any general right to exclude on digital platforms? Although he doesn’t always come right out and say it, Doctorow’s work often seems like an outright rejection of any sort of property rights in networks or platforms. Generally speaking, he does not want the law to recognize any right for tech platforms to exclude using digital fences of any sort. Continue reading →

Why can’t governments ever clean up their messes? Occasional spring cleanings are essential not only for keeping our own homes tidy and in good working order, but also for keeping our government systems functioning effectively. What can be done? In a new essay with my Mercatus Center colleagues Patrick McLaughlin and Matthew Mitchell, we note that Idaho Governor Brad Little has just issued a smart Executive Order that aims to clean house by bringing state rules in line with common sense. Specifically, the governor’s order addresses what to do with the 150-plus regulations that Idaho state agencies waived in response to the COVID-19 outbreak. This is a great model for other states, and it tracks a proposal that Patrick, Matt, and I floated in a white paper just a few months ago. The entire essay, which originally ran on The Bridge, is reprinted below.

_________

Idaho “Spring Cleaning” Order a Model for Other States

by Patrick McLaughlin, Matthew D. Mitchell & Adam Thierer

Regulations tend to accumulate endlessly. Today there are over 1 million restrictive words (think “shall” or “must”) in the Code of Federal Regulations. Some states, like California and New York, layer on hundreds of thousands of additional regulatory restrictions. Fewer than 1 percent of these rules have been subjected to rigorous cost-benefit analyses. And once regulations are on the books, it is fairly rare to see them subjected to any sort of retrospective review to see how they have performed. Continue reading →

[Co-authored with Walter Stover]

Artificial Intelligence (AI) systems have grown more prominent in both their use and their unintended effects. Just last month, the LAPD announced that it would end its use of a predictive policing system known as PredPol, which had drawn sustained criticism for reinforcing policing practices that disproportionately affect minorities. Such incidents of machine learning algorithms producing unintentionally biased outcomes have prompted calls for ‘ethical AI’. However, this approach focuses on technical fixes to AI and ignores two crucial components of undesired outcomes: the subjectivity of the data fed into and out of AI systems, and the interaction between the actors who must interpret that data. When considering regulation of artificial intelligence, policymakers, companies, and other organizations using AI should therefore focus less on the algorithms and more on the data and how it flows between actors, to reduce the risk of misdiagnosing AI systems. To be sure, applying an ethical AI framework is better than discounting ethics altogether, but an approach that focuses on the interaction between human and data processes is a better foundation for AI policy.

The fundamental mistake underlying the ethical AI framework is that it treats biased outcomes as a purely technical problem. If that were true, fixing the algorithm would be an effective solution, because the outcome would be fully determined by the tools applied. In the case of landing a man on the moon, for instance, we can adjust the rocket’s trajectory according to well-defined physical principles until the man is on the moon. In the case of biased social outcomes, the problem is not well defined. Who decides what an appropriate level of policing is for minorities? What sentence lengths are appropriate for which groups of individuals? What is an acceptable level of bias? An AI system is simply a tool that transforms input data into output data, but it is people who give meaning to that data at both steps, in the context of their understanding of these questions and of what appropriate measures of such outcomes are.
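
To make the point concrete, here is a minimal sketch (the neighborhoods, rates, and data are entirely hypothetical, chosen only for illustration): a perfectly standard classifier, with nothing technically wrong in its code, still reproduces whatever bias its training data encodes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two neighborhoods, A (0) and B (1), with the SAME underlying offense rate.
neighborhood = rng.integers(0, 2, n)
offense = rng.random(n) < 0.05

# Historical patrols concentrated on neighborhood B, so offenses there were
# far more likely to be recorded. The label measures policing intensity as
# much as it measures crime.
detection_rate = np.where(neighborhood == 1, 0.9, 0.3)
recorded = offense & (rng.random(n) < detection_rate)

# A completely ordinary, technically "correct" algorithm...
model = LogisticRegression().fit(neighborhood.reshape(-1, 1), recorded)

# ...still predicts roughly three times as much "crime" in neighborhood B,
# because the bias entered through the data, not the code.
print(model.predict_proba([[0], [1]])[:, 1])
```

No amount of tweaking the algorithm fixes this; the problem lies in how the data was collected and in what the people acting on the predictions take the label to mean.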

Continue reading →

[First published by AIER on April 20, 2020 as “Innovation and the Trouble with the Precautionary Principle.”]

In a much-circulated new essay (“It’s Time to Build”), Marc Andreessen has penned a powerful paean to the importance of building. He says the COVID crisis has awakened us to the reality that America is no longer the bastion of entrepreneurial creativity it once was. “Part of the problem is clearly foresight, a failure of imagination,” he argues. “But the other part of the problem is what we didn’t do in advance, and what we’re failing to do now. And that is a failure of action, and specifically our widespread inability to build.”

Andreessen suggests that, somewhere along the line, something changed in the DNA of the American people and they essentially stopped having the desire to build as they once did. “You don’t just see this smug complacency, this satisfaction with the status quo and the unwillingness to build, in the pandemic, or in healthcare generally,” he says. “You see it throughout Western life, and specifically throughout American life.” He continues:

“The problem is desire. We need to want these things. The problem is inertia. We need to want these things more than we want to prevent these things. The problem is regulatory capture. We need to want new companies to build these things, even if incumbents don’t like it, even if only to force the incumbents to build these things.”

Accordingly, Andreessen goes on to make the case to both the political right and the left to change their thinking about building more generally. “It’s time for full-throated, unapologetic, uncompromised political support from the right for aggressive investment in new products, in new industries, in new factories, in new science, in big leaps forward.”

What’s missing in Andreessen’s manifesto is a concrete connection between America’s apparently dwindling desire to build these things and the political realities on the ground that contribute to that problem. Put simply, policy influences attitudes. More specifically, policies that frown upon entrepreneurial risk-taking actively disincentivize the building of new and better things. Thus, to correct the problem Andreessen identifies, we must first remove political barriers to productive entrepreneurialism, or else we will never get back to being the builders we once were. Continue reading →

[Co-authored with Trace Mitchell and first published on The Bridge on April 21, 2020.]

Which seems like a more pressing concern at the moment: Ensuring that we get hand sanitizer onto shelves or making sure that children don’t drink it once we do? Getting face masks out to the public quickly, or waiting until they can be manufactured to precise federal regulatory specifications?

These questions are currently being raised by the Food and Drug Administration (FDA). The agency’s rules have been making it hard for distilleries to address the hand sanitizer shortage. Companies looking to supply more face masks to the nation as quickly as possible have been similarly stymied.

Sometimes government regulation is so out of touch with reality and common sense that it should force us to rethink the way business is done in Washington. As our Mercatus Center colleague Scott Sumner concludes, the past month has witnessed “a torrent of governmental incompetence that is breathtaking in scale” and “regulations so bizarre that if put in a novel no one would believe them.” Sadly, he’s right, and the strange world of FDA hand sanitizer and face mask regulation provides us with two teachable moments.

As we write, the FDA has repealed, or at least reexamined, many of the immediate barriers to the production of face masks and hand sanitizer. Still, precious time was lost waiting for the agency to clear out unneeded regulations. Going forward, regulators should be sure to review and prune their cache of rules before calamity strikes, perhaps through an independent regulatory review commission or a regulatory budget. Continue reading →

The Mercatus Center at George Mason University has just released a new paper by Patrick A. McLaughlin, Matthew D. Mitchell, and me entitled, “A Fresh Start: How to Address Regulations Suspended during the Coronavirus Crisis.” Here’s a quick summary.

As the COVID-19 crisis intensified, policymakers at the federal, state, and local levels started suspending or rescinding laws and regulations that hindered sensible, speedy responses to the pandemic. These “rule departures” raised many questions. Were the paused rules undermining public health and welfare even before the crisis? Even if the rules were well intentioned or once possibly served a compelling interest, had they grown unnecessary or counterproductive? If so, why did they persist? How will the suspended rules be dealt with after the crisis? Are there other rules on the books that might transform from merely unnecessary to actively harmful in future crises?

Once the COVID-19 crisis subsides, there is likely to be considerable momentum to review the rules that have slowed down the response. If policymakers felt the need to abandon these rules during the current crisis, those same rules should probably be permanently repealed or at least comprehensively reformed to allow for more flexible responses in the future.

Accordingly, when the pandemic subsides, policymakers at the federal and state levels should create “Fresh Start Initiatives” that would comprehensively review all suspended rules and then outline sunsetting or reform options for them. To this end, we propose an approach based on the successful experience of the Base Realignment and Closure (BRAC) Commission.

Read the entire paper here to see how it would work. This is our chance to finally do some much-needed spring cleaning for the regulatory state.

In a new essay in The Dallas Morning News (“Licensing restrictions for health care workers need to be flexible to fight coronavirus”), Trace Mitchell and I discuss recent efforts to reform occupational licensing restrictions for health care workers to help fight the coronavirus. Trace and I have written extensively about the need for licensing flexibility over the past couple of years, but it is needed now more than ever. Luckily, some positive reforms are now underway.

We highlight efforts in states like Massachusetts and Texas to reform their occupational licensing rules in response to the crisis, as well as federal reforms aimed at allowing reciprocity across state lines. We conclude by noting that:

It should not take a crisis of this magnitude for policymakers to reconsider the way we prevent fully qualified medical professionals from going where they are most needed. But that moment is now upon us. More leaders would be wise to conduct a comprehensive review of regulatory burdens that hinder sensible, speedy responses to the coronavirus crisis.

If nothing else, the relaxation of these rules should give us a better feel for how necessary strict licensing requirements truly are. Chances are, we will learn just how costly the regulations have been all along.

Read the entire piece here.

[Co-authored with Mercatus MA Fellow Jessie McBirney]

Flat standardized test scores, low college completion rates, and rising student debt have led many to question the bachelor’s degree as the universal ticket to the middle class. Now, bureaucrats are turning to the job market for new ideas. The result is a renewed enthusiasm for Career and Technical Education (CTE), which aims to “prepare students for success in the workforce.” Every high school student stands to benefit from a fun, rigorous, skills-based class, but the latest reauthorization of the Carl D. Perkins Act, which governs CTE at the federal level, betrays a faulty economic theory behind the initiative.

Modern CTE is more than a rebranding of yesterday’s vocational programs, which earned a reputation as “dumping grounds” for struggling students and, unfortunately, minorities. Today, CTE classes aim to be academically rigorous and cover career pathways ranging from manufacturing to Information Technology and STEM (science, technology, engineering, and mathematics). Most high school CTE occurs at traditional public schools, where students take a few career-specific classes alongside their core requirements.

Continue reading →