Innovation & Entrepreneurship – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

America Does Not Need a Digital Consumer Protection Commission
https://techliberation.com/2023/08/10/america-does-not-need-a-digital-consumer-protection-commission/
Thu, 10 Aug 2023

The New York Times today published my response to an op-ed by Senators Lindsey Graham and Elizabeth Warren calling for a new “Digital Consumer Protection Commission” to micromanage the high-tech information economy. “Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution,” I argue. Here’s my full response:

Senators Lindsey Graham and Elizabeth Warren propose a new federal mega-regulator for the digital economy that threatens to undermine America’s global technology standing.

A new “licensing and policing” authority would stall the continued growth of advanced technologies like artificial intelligence in America, leaving China and others to claw back crucial geopolitical strategic ground.

America’s digital technology sector enjoyed remarkable success over the past quarter-century — and provided vast investment and job growth — because the U.S. rejected the heavy-handed regulatory model of the analog era, which stifled innovation and competition.

The tech companies that Senators Graham and Warren cite (along with countless others) came about over the past quarter-century because we opened markets and rejected the monopoly-preserving regulatory regimes that had been captured by old players.

The U.S. has plenty of federal bureaucracies, and many already oversee the issues that the senators want addressed. Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution.

Is AI Really an Unregulated Wild West?
https://techliberation.com/2023/06/22/is-ai-really-an-unregulated-wild-west/
Thu, 22 Jun 2023

As I noted in a recent interview with James Pethokoukis for his Faster, Please! newsletter, “[t]he current policy debate over artificial intelligence is haunted by many mythologies and mistaken assumptions. The most problematic of these is the widespread belief that AI is completely ungoverned today.” In a recent R Street Institute report and series of other publications, I have documented just how wrong that particular assumption is.

The first thing I try to remind everyone is that the U.S. federal government is absolutely massive—2.1 million employees, 15 cabinet agencies, 50 independent federal commissions and 434 federal departments. Strangely, when policymakers and pundits deliver remarks on AI policy today, they seem to ignore all that regulatory capacity while casually tossing out proposals to add still more layers of regulation and bureaucracy on top of it. Well, I say why not first see whether the existing regulations and bureaucracies are working, and then we can have a chat about what more is needed to fill gaps.

And a lot is being done on this front. In a new blog post for R Street, I offer a brief summary of some of the most important recent efforts.

  • In January, the National Institute of Standards and Technology released its “AI Risk Management Framework,” which was created through a multi-year, multi-stakeholder process. It is intended to help developers and policymakers better understand how to identify and address various types of potential algorithmic risk.
  • The Food and Drug Administration (FDA) has been using its broad regulatory powers to review and approve AI- and ML-enabled medical devices for many years already, and the agency possesses broad recall authority that can address risks that develop from algorithmic or robotic systems. The FDA is currently refining its approach to AI/ML in a major proceeding.
  • The National Highway Traffic Safety Administration (NHTSA) has been issuing constant revisions to its driverless car policy guidelines since 2016. Like the FDA, the NHTSA also has broad recall authority, which it used in February 2023 to mandate a recall of Tesla’s Full Self-Driving system, requiring an over-the-air software update to more than 300,000 vehicles that had the software package.
  • In 2021, the Consumer Product Safety Commission issued a major report highlighting the many policy tools it already has to address AI risks. Like the FDA and NHTSA, the agency has recall authority that can address risks that develop from consumer-facing algorithmic or robotic systems.
  • In April, Securities and Exchange Commission Chairman Gary Gensler told Congress that his agency is moving to address AI and predictive data analytics in finance and investing.
  • The Federal Trade Commission (FTC) has become increasingly active on AI policy issues and has noted in a series of recent blog posts that the agency is ready to use its broad authority over “unfair and deceptive practices” to address algorithmic claims or applications.
  • The Equal Employment Opportunity Commission (EEOC) recently released a memo as part of its “ongoing effort to help ensure that the use of new technologies complies with federal [equal employment opportunity] law.” It outlines how existing employment antidiscrimination laws and policies cover algorithmic technologies.
  • In May, the Consumer Financial Protection Bureau (CFPB) issued a statement clarifying how existing federal anti-discrimination law already applies to complex algorithmic systems used for lending decisions. The agency also recently released a report on the use of chatbots in consumer finance and explained the many ways the “CFPB is actively monitoring the market” for risks associated with these new services.
  • Along with the EEOC, the FTC and the CFPB, the Civil Rights Division of the Department of Justice released an April joint statement in which the agency heads said they would be looking to take preemptive steps to address algorithmic discrimination.

“This is real-time algorithmic governance in action,” I argue. Again, additional regulatory steps may be needed later to fill gaps in current law, but policymakers should begin by acknowledging that a lot of algorithmic oversight authority exists across the federal government. Meanwhile, the courts and our common law system are also starting to address novel AI problems as cases develop. For more along these lines, see my recent essay on “The Many Ways Government Already Regulates Artificial Intelligence.”

So, next time someone suggests that AI is developing in an unregulated “Wild West,” remind them of all these existing laws, agencies, and regulatory efforts. And then also ask them a different question no one is really exploring currently: Could it be the case that many agencies are already overregulating some algorithmic and autonomous systems? (I’m looking at you, FAA!) Why is no one worried about that possibility as the global AI race with China and other countries intensifies?

Additional Reading:

My Latest Study on AI Governance
https://techliberation.com/2023/04/20/my-latest-study-on-ai-governance/
Thu, 20 Apr 2023

The R Street Institute has just released my latest study on AI governance and how to address “alignment” concerns in a bottom-up fashion. The 40-page report is entitled “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”

My report asks: Is it possible to address AI alignment without starting with the Precautionary Principle as the default governance baseline? I explain how that is indeed possible. While some critics claim that no one is seriously trying to deal with AI alignment today, my report explains how no technology in history has been more heavily scrutinized this early in its life cycle than AI, machine learning and robotics. The number of ethical frameworks out there already is astonishing. We don’t have too few alignment frameworks; we probably have too many!

We need to get serious about bringing some consistency to these efforts and figure out more concrete ways to build a culture of safety by embedding ethics-by-design. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society.

Although some safeguards will be needed to minimize certain AI risks, a more agile and iterative governance approach can address these concerns without creating overbearing, top-down mandates, which would hinder algorithmic innovations – especially at a time when America is looking to stay ahead of China and other nations in the global AI race.

My report explores the many ethical frameworks that professional associations have already formulated, as well as the various other “soft law” frameworks that have been devised. I also consider how AI auditing and algorithmic impact assessments can be used to help formalize the twin objectives of “ethics-by-design” and keeping “humans in the loop,” the two principles that drive most AI governance frameworks. But it is absolutely essential that audits and impact assessments be done right so that they do not become an overbearing, compliance-heavy and politicized nightmare that would undermine algorithmic entrepreneurialism and computational innovation.

Finally, my report reviews the extensive array of existing government agencies and policies that ALREADY govern artificial intelligence and robotics as well as the wide variety of court-based common law solutions that cover algorithmic innovations. The notion that America has no law or regulation covering artificial intelligence today is massively wrong, as my report explains in detail.

I hope you’ll take the time to check out my new report. This and my previous report on “Getting AI Innovation Culture Right” serve as the foundation of everything we have coming on AI and robotics from the R Street Institute. Next up will be a massive study on global AI “existential risks” and national security issues. Stay tuned. Much more to come!

In the meantime, you can find all my recent work here on my “Running List of My Research on AI, ML & Robotics Policy.”


Additional Reading:

What Policy Vision for Artificial Intelligence?
https://techliberation.com/2023/04/02/what-policy-vision-for-artificial-intelligence/
Sun, 02 Apr 2023

In my latest R Street Institute report, I discuss the importance of “Getting AI Innovation Culture Right.” This is the first of a trilogy of major reports on what sort of policy vision and set of governance principles should guide the development of artificial intelligence (AI), algorithmic systems, machine learning (ML), robotics, and computational science and engineering more generally. More specifically, these reports seek to answer the question: Can we achieve AI safety without innovation-crushing top-down mandates and massive new regulatory bureaucracies?

These questions are particularly pertinent as we just made it through a week in which a major open letter was issued calling for a six-month freeze on the deployment of AI technologies, while a prominent AI ethicist argued that governments should go further and consider airstrikes on data centers, even if that meant risking a nuclear exchange. On top of that, Italy became the first major nation to ban ChatGPT, the popular AI-enabled chatbot created by U.S.-based OpenAI.

My report begins from a different presumption: AI, ML and algorithmic technologies present society with enormous benefits and, while real risks exist, we can find better ways of addressing them. As I summarize:

The danger exists that policy for algorithmic systems could be formulated in such a way that innovations are treated as guilty until proven innocent—i.e., a precautionary principle approach to policy—resulting in many important AI applications never getting off the drawing board. If regulatory impediments block or slow the creation of life-enriching, and even life-saving, AI innovations, that would leave society less well-off and give rise to different types of societal risks.

I argue that it is essential we not trap AI in an “innovation cage” by establishing the wrong policy default for algorithmic governance but instead work through challenges as they come at us. The right policy default for the internet and for AI continues to be “innovation allowed.” But AI risks do require serious governance steps. Luckily, many tools exist and others are being created. While my next major report (due out April 20th) offers far more detail, this paper sketches out some of those mechanisms. 

The goal of algorithmic policy should be for policymakers and innovators to work together to find flexible, iterative, agile, bottom-up governance solutions over time. We can promote a culture of responsibility among leading AI innovators and balance safety and innovation for complex, rapidly evolving computational and computing technologies like AI. This approach is buttressed by existing laws and regulations, as well as common law and the courts.

The new Biden administration “AI Bill of Rights” unfortunately represents a fear-based model of technology policymaking that breaks from the superior Clinton-era framework for the internet and digital technology. Our nation’s policy toward AI, robotics and algorithmic innovation should instead embrace a dynamic future and the enormous possibilities that await us.

Please check out my new paper for more details. Much more to come. And you can also check out my running list of research on AI, ML & robotics policy.

Why Isn’t Everyone Already Unemployed Due to Automation?
https://techliberation.com/2023/03/11/why-isnt-everyone-already-unemployed-due-to-automation/
Sat, 11 Mar 2023

I have a new R Street Institute policy study out this week doing a deep dive into the question: “Can We Predict the Jobs and Skills Needed for the AI Era?” There’s lots of hand-wringing going on today about AI and the future of employment, but that’s really nothing new. In fact, in light of past automation panics, we might want to step back and ask: Why isn’t everyone already unemployed due to technological innovation?

To get my answers, please read the paper! In the meantime, here’s the executive summary:

To better plan for the economy of the future, many academics and policymakers regularly attempt to forecast the jobs and worker skills that will be needed going forward. Driving these efforts are fears about how technological automation might disrupt workers, skills, professions, firms and entire industrial sectors. The continued growth of artificial intelligence (AI), robotics and other computational technologies exacerbates these anxieties. Yet the limits of both our collective knowledge and our individual imaginations constrain well-intentioned efforts to plan for the workforce of the future. Past attempts to assist workers or industries have often failed for various reasons. However, dystopian predictions about mass technological unemployment persist, as do retraining or reskilling programs that typically fail to produce much of value for workers or society. As public efforts to assist or train workers move from general to more specific, the potential for policy missteps grows greater. While transitional-support mechanisms can help alleviate some of the pain associated with fast-moving technological disruption, the most important thing policymakers can do is clear away barriers to economic dynamism and new opportunities for workers.

I do discuss some things that government can do to address automation fears at the end of the paper, but it’s important that policymakers first understand all the mistakes we’ve made with past retraining and reskilling efforts. The easiest way to help in the short term is to clear away barriers to labor mobility and economic dynamism, I argue. Again, read the study for details.

For more info on other AI policy developments, check out my running list of research on AI, ML & robotics policy.

7 AI Policy Issues to Watch in 2023 and Beyond
https://techliberation.com/2023/02/10/7-ai-policy-issues-to-watch-in-2023-and-beyond/
Fri, 10 Feb 2023

In my latest R Street Institute blog post, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” I discuss the big issues confronting artificial intelligence and machine learning in the coming year and beyond. I note that AI regulatory proposals are multiplying fast and come in two general varieties: broad-based and targeted. Broad-based algorithmic regulation would address the use of these technologies in a holistic fashion across many sectors and concerns. By contrast, targeted algorithmic regulation looks to address specific AI applications or concerns. In the short term, targeted or “sectoral” regulatory proposals are more likely to be implemented.

I go on to identify seven major issues of concern that will drive these policy proposals. They include:

1) Privacy and Data Collection

2) Bias and Discrimination

3) Free Speech and Disinformation

4) Kids’ Safety

5) Physical Safety and Cybersecurity

6) Industrial Policy and Workforce Issues

7) National Security and Law Enforcement Issues

Of course, each of these issues includes many sub-issues and nuanced concerns. But I also noted that “this list only scratches the surface in terms of the universe of AI policy issues.” Algorithmic policy considerations are now being discussed in many other fields, including education, insurance, financial services, energy markets, intellectual property, retail and trade, and more. I’ll be rolling out a new series of essays examining all these issues throughout the year.

But, as I note in concluding my new essay, the danger of over-reach exists with early regulatory efforts:

AI risks deserve serious attention, but an equally serious risk exists that an avalanche of fear-driven regulatory proposals will suffocate different life-enriching algorithmic innovations. There is a compelling interest in ensuring that AI innovations are developed and made widely available to society. Policymakers should not assume that important algorithmic innovations will just magically come about; our nation must get its innovation culture right if we hope to create a better, more prosperous future.

America needs a flexible governance approach for algorithmic systems that avoids heavy-handed, top-down controls as a first-order solution. “There is no use worrying about the future if we cannot even invent it first,” I conclude.

Additional Reading

Quick Thoughts on Biden’s Tech-Bashing in the State of the Union
https://techliberation.com/2023/02/07/quick-thoughts-on-bidens-tech-bashing-in-the-state-of-the-union/
Wed, 08 Feb 2023

  • President Biden began his 2023 State of the Union remarks by saying America is defined by possibilities. Correct! Unfortunately, his tech-bashing will undermine those possibilities by discouraging technological innovation & online freedom in the United States.
  • America became THE global leader on digital tech because we rejected heavy-handed controls on innovators & speech. We shouldn’t return to the broken model of the past by layering on red tape, economic controls & speech restrictions.
  • What has the tech economy done for us lately? Here is a look at the value added to the U.S. economy by the digital sector from 2005-2021. That’s $2.4 TRILLION (with a T) added in 2021. These are astonishing numbers.
  • FACT: According to the BEA, in 2021, “the U.S. digital economy accounted for $3.70 trillion of gross output, $2.41 trillion of value added (translating to 10.3% of U.S. GDP), $1.24 trillion of compensation, and 8.0 million jobs.”

  • FACT: Globally, 49 of the top 100 digital tech firms with the most employees are U.S. companies. Here they are. Smart public policy made this list possible.

  • FACT: 18 of the world’s Top 25 tech companies by Market Cap are US-based firms.
  • It’d be a huge mistake to adopt Europe’s approach to tech regulation. As I noted recently in the Wall Street Journal, “The only thing Europe exports now on the digital-technology front is regulation.” Yet Biden would have us import the EU model to our shores.
  • My R Street colleague Josh Withrow has also noted how, “the EU’s approach appears to be, in sum, ‘If you can’t innovate, regulate.’” America should not be following the disastrous regulatory path of the European Union on digital technology policy.
  • On antitrust regulation, here is a study by my R Street colleague Wayne Brough on the dangerous approach that the Biden administration wants, which would swing a wrecking ball through the tech economy. We have to avoid this.
  • It is particularly important that the U.S. not follow the EU’s lead on artificial intelligence regulation at a time when we are in heated competition with China on the AI front, as I noted here.
  • American tech innovators flourished thanks to a positive innovation culture rooted in permissionless innovation & policies like Section 230, which allowed American firms to become global powerhouses. And we’ve moved from a world of information scarcity to one of information abundance. Let’s keep it that way.
We Need to Get All the Smart People in a Room & Have a Conversation
https://techliberation.com/2022/10/16/we-need-to-get-all-the-smart-people-in-a-room-have-a-conversation/
Sun, 16 Oct 2022

For my latest Discourse column (“We Really Need To ‘Have a Conversation’ About AI … or Do We?”), I discuss two commonly heard lines in tech policy circles.

1) “We need to have conversation about the future of AI and the risks that it poses.”

2) “We should get a bunch of smart people in a room and figure this out.”

I note that, if you’ve read enough essays, books or social media posts about artificial intelligence (AI) and robotics—among other emerging technologies—then chances are you’ve stumbled on variants of these two arguments many times over. They are almost impossible to disagree with in theory, but when you start to investigate what they actually mean in practice, they are revealed to be largely meaningless rhetorical flourishes which threaten to hold up meaningful progress on the AI front. I continue on to argue in my essay that:

I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with: (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and, (b) the fact that people who are making such statements apparently have not spent much time investigating the remarkable number of very serious conversations that have already taken place, or which are ongoing, about AI issues. In fact, it very well could be the case that we have  too many conversations going on currently about AI issues and that the bigger problem is instead one of better coordinating the important lessons and best practices that we have already learned from those conversations.

I then unpack each of those lines and explain what is wrong with them in more detail. One thing that always bugs me about the “we need to have a conversation” aphorism is that those uttering it absolutely refuse to be nailed down on the specifics, like:

  1. What is the nature or goal of that conversation?
  2. Who is the “we” in this conversation?
  3. How is this conversation to be organized and managed?
  4. How do we know when the conversation is going on, or when it is sufficiently complete such that we can get on with things?
  5. And, most importantly, aren’t you implicitly suggesting that we should ban or limit the use of that technology until you (or the royal “we”) are somehow satisfied that the conversation is over or has yielded satisfactory answers?

The other commonly heard line — “We need to get a bunch of smart people in a room and figure this out” — can be equally infuriating, due both to a lack of specifics (which people? what room? where and when?) and to the fact that we already have had tons of the Very Smartest People on these issues meeting in countless rooms across the globe for many years. In an earlier essay, I documented the astonishing growth of AI governance frameworks, ethical best practices and professional codes of conduct: “The amount of interest surrounding AI ethics and safety dwarfs all other fields and issues. I sincerely doubt that ever in human history has so much attention been devoted to any technology as early in its lifecycle as AI.”

I also note that, practically speaking, “the most important conversations society has about new technologies are those we have every day when we all interact with those new technologies and with one another. Wisdom is born from experiences, including activities and interactions involving risk and the possibility of mistakes. This is how progress happens.” And I conclude by noting how:

We won’t ever be able to “have a conversation” about a new technology that yields satisfactory answers for some critics precisely because the questions just multiply and evolve endlessly over time, and they can only be answered through ongoing societal interactions and problem-solving. But we shouldn’t stop life-enriching innovations from happening just because we don’t have all the answers beforehand.

Anyway, I invite you to head over to Discourse and read the entire essay. In the meantime, I propose we get all the smart people in a room and have a conversation about how these two lines came to dominate tech policy discussions before they end up doing real damage to human prosperity! It’s the ethical thing to do if you really care about the future.

Dispatch from JMI’s “Tech & Innovation Summit” Panel on Progress Studies
https://techliberation.com/2022/09/16/dispatch-from-jmis-tech-innovation-summit-panel-on-progress-studies/
Fri, 16 Sep 2022

It was my pleasure this week to participate in a panel discussion about the future of innovation policy at the James Madison Institute’s 2022 Tech and Innovation Summit in Coral Gables, FL. Our conversation focused on the future of Progress Studies, which is one of my favorite topics. We were asked to discuss five major questions and below I have summarized some of my answers to them, plus some other thoughts I had about what I heard at the conference from others.

  1. What is progress studies and why is it so needed today?

In a sense, Progress Studies is nothing new. Progress studies goes back at least to the days of Adam Smith and plenty of important scholars have been thinking about it ever since. Those scholars and policy advocates have long been engaged in trying to figure out what’s the secret sauce that powers economic growth and human prosperity. It’s just that we didn’t call that Progress Studies in the old days.

The reason Progress Studies is important is because technological innovation has been shown to be the fundamental driver in improvements in human well-being over time.  When we can move the needle on progress, it helps individuals extend and improve their lives, incomes, and happiness. By extension, progress helps us live lives of our choosing. As Hans Rosling brilliantly argued, the goal of expanding innovation opportunities and raising incomes “is not just bigger piles of money” or more leisure time. “The ultimate goal is to have the freedom to do what we want.”

  2. What don’t policymakers get about progress?

Policymakers often fail to appreciate the connection between innovation policy defaults and actual real-world innovation outcomes. Here is the biggest no-duh statement ever uttered: If you discourage innovation by default, you’ll get a lot less of it. In other words, incentives matter if you hope to create a positive innovation culture. Innovation culture refers to the various social and political attitudes, policies and entrepreneurial activities that, taken together, influence the innovative capacity of a particular region.

Thus, when policymakers make the Precautionary Principle the legal default for innovative activities, it means that government has put a red light in front of entrepreneurs and treated them and their innovations as guilty until proven innocent. That’s a sure-fire recipe for stagnation.

The better approach is to make Permissionless Innovation our policy default and treat entrepreneurs and innovations as innocent until proven guilty. When our policy defaults offer entrepreneurs more green lights instead of red ones, it encourages more experimentation with new and better ways of doing things. In turn, this spurs business formation, job creation, new industries and products, and broad-based economic growth.

But policymakers consistently ignore this fundamental reality about the connection between policy and progress.

  3. Can you think of any states or governments that are doing a good job of putting the insights of progress studies into practice?

This summer, I co-authored an essay, “How Arizona Is Getting Innovation Culture Right,” highlighting the many important reforms undertaken over the past eight years by Gov. Doug Ducey and the Arizona Legislature. Arizona has advanced several reforms that have helped the state get its innovation culture right, both broadly and narrowly. Broadly speaking, the state took steps to minimize red-tape burdens and streamline permitting processes and occupational licensing mandates. It also promoted “right to earn a living” and “right to try” initiatives to broaden worker and patient opportunities.

In terms of more targeted reforms, Arizona took steps to clear the way for greater broadband rollout and encouraged experimentation with commercial drones and driverless cars. The state also helped pioneer the use of “regulatory sandboxes,” which grant innovators a temporary safe space free of excessive regulatory burdens so they can experiment with new products and services.

And then there’s the city of Miami. At the JMI event, Miami Mayor Francis Suarez delivered a keynote address and he identified 3 keys to attracting talent and building opportunity: (1) Keep taxes low, (2) keep people safe, and (3) focus on innovation. He’s following that script and making Miami a hotbed of entrepreneurial opportunity.

Mayor Suarez spoke of how he is embracing emerging technologies like blockchain to compete with the traditional geographic Goliaths of tech, like San Francisco and New York. There’s been a massive inflow of companies and investors as a result. The city has become #1 in tech job growth and the inflow of tech entrepreneurs. “It turns out that if you welcome people… they come,” he said. “They want to migrate to places that are on the cutting edge of technology” and find “pathways to prosperity.”

Miami and Arizona offer great models that other cities and states could follow if they hope to improve their own innovation culture.

  1. What is the difference between progress studies and industrial organization, or industrial policy, or “government planning, but for innovation”?

Many policymakers foolishly believe there exists a precise technocratic cocktail that can immediately unlock innovation through highly targeted interventions and spending initiatives. In reality, achieving consistent growth and prosperity requires more than Big Government gimmicks. It’s a long game.

Politicians and pundits are fond of using machine-like metaphors and insisting that they have the ability to “fine-tune” innovative outcomes or “dial in” economic development according to a precise formula. This is how we end up trillions in debt without much to show for it. Most recently, we’ve witnessed an “orgy of spending” on industrial policy schemes at the federal level.

The better metaphor for thinking about a nation’s innovation culture might be a plant or garden. Two of the great Progress Studies thinkers are F. A. Hayek and Joel Mokyr. Hayek once suggested that policymakers should aim to “cultivate a growth by providing the appropriate environment, in the manner in which the gardener does this for his plants.”  And Mokyr has argued that technological innovation and economic progress must be viewed as “a fragile and vulnerable plant, whose flourishing is not only dependent on the appropriate surroundings and climate, but whose life is almost always short. It is highly sensitive to the social and economic environment and can easily be arrested by relatively small external changes.”

Thus, the technocratic industrial policy mindset is always looking for “sexy” initiatives that capture a lot of short-term media attention, but typically fail to produce meaningful innovations or lasting growth. What’s more important to long-term prosperity is that policymakers get the “boring” stuff right.

The building blocks of the “boring” approach to economic development are a mix of broadly applicable tax, spending, regulatory, and legal rules that help create a stable innovation ecosystem. Again, it’s like Mayor Suarez’s 3-prong approach of low taxes, safe communities, and a welcoming embrace of entrepreneurialism. That’s the secret sauce that fuels long-term progress and sustainable prosperity.

  1. Is there a disconnect between the theories of progress and the practice – in other words, is it a problem of governance forms?

Indeed, I already mentioned the difference between the Precautionary Principle and Permissionless Innovation, and it’s always interesting to me how many scholars ignore the importance of these governance forms when thinking about how to advance progress. There exists an unfortunate tendency among many to either ignore or repeat the mistakes of the past. Having made significant economic and societal gains thanks to past technological progress, many pundits and policymakers come to take much of it for granted. Thus, Progress Studies requires a process of constant re-education to remind each new generation of what helped raise our living standards so dramatically over the past two centuries.

The dramatic growth in incomes, life expectancy, and human welfare was not the product of sheer luck but of important policy choices. The freedom to think, to innovate, and to trade are the three freedoms that gave us our modern riches. If our governance forms limit those foundational freedoms, our current welfare and future prosperity will suffer. This is the great lesson of Progress Studies.


Additional Reading from Adam Thierer on Progress Studies

 

No Goldilocks Formula for Content Moderation in Social Media or the Metaverse, But Algorithms Still Help https://techliberation.com/2022/09/13/no-goldilocks-formula-for-content-moderation-in-social-media-or-the-metaverse-but-algorithms-still-help/ https://techliberation.com/2022/09/13/no-goldilocks-formula-for-content-moderation-in-social-media-or-the-metaverse-but-algorithms-still-help/#comments Tue, 13 Sep 2022 17:48:00 +0000 https://techliberation.com/?p=77041

[Cross-posted from Medium.]

In an age of hyper-partisanship, one issue unites the warring tribes of American politics like no other: hatred of “Big Tech.” You know, those evil bastards who gave us instantaneous access to a universe of information at little to no cost. Those treacherous villains! People are quick to forget the benefits of moving from a world of Information Poverty to one of Information Abundance, preferring to take for granted all they’ve been given and then find new things to complain about.

But what mostly unites people against large technology platforms is the feeling that they are just too big or too influential relative to other institutions, including government. I get some of that concern, even if I strongly disagree with many of the proposed solutions, such as the highly dangerous sledgehammer of antitrust breakups or sweeping speech controls. Breaking up large tech companies would not only compromise the many benefits they provide us, but would also undermine America’s global standing as a leader in information and computational technology. We don’t want that. And speech codes or meddlesome algorithmic regulations are on a collision course with the First Amendment and will just result in endless litigation in the courts.

There’s a better path forward. As President Ronald Reagan rightly said in 1987 when vetoing a bill to reestablish the Fairness Doctrine, “History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.” In other words, as I wrote in a previous essay about “The Classical Liberal Approach to Digital Media Free Speech Issues,” more innovation and competition are always superior to more regulation when it comes to encouraging speech and speech opportunities.

Can Government Get Things Just Right?

But what about the accusations we hear on both the left and right about tech companies failing to properly manage or moderate online content in some fashion? This is not only a concern for today’s most popular social media platforms, but it is a growing concern for the so-called Metaverse, where questions about content policies already surround activities and interactions on AR and VR systems.

The problem here is that different people want different things from digital platforms when it comes to content moderation. As I noted in a column for The Hill late last year:

there is considerable confusion in the complaints both parties make about “Big Tech.” Democrats want tech companies doing more to limit content they claim is hate speech, misinformation, or that incites violence. Republicans want online operators to do less, because many conservatives believe tech platforms already take down too much of their content.

Thus, large digital intermediaries are expected to make all the problems of the world go away through a Goldilocks formula whereby digital platforms will get content moderation “just right.” It’s an impossible task with billions of voices speaking. Bureaucrats won’t do a better job refereeing these disputes, and letting them do so will turn every content spat into an endless regulatory proceeding.

What Algorithms Can and Cannot Do to Help

But we should be clear on one thing: These disputes will always be with us because every media platform in history has had some sort of content moderation policies, even if we didn’t call them that until recently. Creating what used to just be called guidelines or standards for information production and dissemination has always been a tricky business. But the big difference between the old and new days comes down to three big problems:

#1- the volume problem: There’s just a ton of content online to moderate today compared to the past.

#2- the subjectivity problem: Content moderation always involves “eye of the beholder” questions, but now there are even more of them because of Problem #1.

#3- the crafty adversaries problem: There are a lot of people bound and determined to get around any rules or restrictions platforms impose, and they’ll find creative ways to do so.

These problems are nicely summarized in an excellent new AEI report by Alex Feerst, “The Use of AI in Online Content Moderation.” This is the fifth in a series of new reports from the AEI’s Digital Platforms and American Life project. The goal of the project is to highlight how the “democratization of knowledge and influence comes with incredible opportunities but also immense challenges. How should policymakers think about the digital platforms that have become embedded in our social and civic life?” Various experts have been asked to sound off on that question and address different challenges. The series kicked off in April with an essay I wrote on “Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium.” More studies are coming.

In Feerst’s new report, the focus is squarely on the issue of algorithmic content moderation policies and procedures. Feerst provides a brilliant summary of how digital media platforms currently utilize AI to assist their content moderation efforts. He notes:

The short answer to the question “why AI” is scale — the sheer never-ending vastness of online speech. Scale is the prime mover of online platforms, at least in their current, mainly ad-based form and maybe in all incarnations. It’s impossible to internalize the dynamics of running a digital platform without first spending some serious time just sitting and meditating on the dizzying, sublime amounts of speech we are talking about: 500 million tweets a day comes out to 200 billion tweets each year. More than 50 billion photos have been uploaded to Instagram. Over 700,000 hours of video are uploaded to YouTube every day. I could go on. Expression that would previously have been ephemeral or limited in reach under the existing laws of nature and pre-digital publishing economics can now proliferate and move around the world. It turns out that, given the chance, we really like to hear ourselves talk.

So that’s the scale/volume problem in a nutshell. Algorithmic systems are therefore absolutely going to be needed to help do some of the sifting and sorting.

What Do You Want to Do about Man-Boobs?

But then we immediately run into the subjectivity problem that pervades so many content moderation issues. When it comes to topics like hate speech, “There will be as many opinions as there are people. Three well-meaning civic groups will agree on four different definitions of hate speech,” Feerst notes.

Indeed, these eye-of-the-beholder judgment calls are ubiquitous and endlessly frustrating for content moderators. Let me tell you a quick story I told a Wall Street Journal reporter who asked me in 2019 why I gave up helping tech companies figure out how to handle these content moderation controversies. I had spent many years trying to help companies and trade associations figure this stuff out because I had been writing about these challenges since the late 1990s. But then finally I gave up. Why? Because of man boobs. Yes, man boobs. Here’s the summary of my story from that WSJ article:

Adam Thierer, a senior research fellow at the right-leaning Mercatus Center at George Mason University, says he used to consult with Facebook and other tech companies. The futility of trying to please all sides hit home after he heard complaints about a debate at YouTube over how much skin could be seen in breast-feeding videos.

While some argued the videos had medical purposes, other advisers wondered whether videos of shirtless men with large mammaries should be permitted as well. “I decided I don’t want to be the person who decides on whether man boobs are allowed,” says Mr. Thierer.

No, seriously. This has been one of the many crazy problems that content moderators have had to deal with. There are scumbag dudes with large mammaries who not only salaciously jiggle them around on camera for the world to see, but then even put whipped cream on their own boobs and lick it off. Now, if a woman does that and posts it on almost any mainstream platform, it’ll get quickly flagged (probably by an algorithmic filter) and probably immediately blocked. But if a dude with man boobs does the same thing, shouldn’t the policy be the same? Well, in our still very sexist world of double standards, policies can vary on that question. And I didn’t want any part of trying to figure out an answer to that question (and others like it), so I largely got out of the business of helping companies do so. Not even King Solomon could figure out a fair resolution to some of this stuff.

Algorithms can only help us so much here because, at some point, humans must tell the machines what to flag or block using some sort of subjective standard that will lead to all sorts of problems later. This is one reason why Feerst reminds us of another important rule here: “Don’t confuse a subjectivity problem for an accuracy problem, especially when you’re using automation technology.” As he notes:

If the things we’re doing are controversial among humans and it’s not even clear that humans judge them consistently, then using AI is not going to help. It’s just going to allow you to achieve the same controversial outcomes more quickly and in greater volume. In other words, if you can’t get 50 humans to agree on whether a particular post violates content rules, whether that content rule is well formulated, or whether that rule should exist, then why would automating this process help?

So Many Troublemakers (Sometimes Accidental)

The man boobs moderation story also reminds us that the crafty adversary problem will always haunt us, too. There are just so many bastards out there looking to cause trouble for whatever reason. “There will never be ‘set it and forget it’ technologies for these issues,” Feerst argues. “At best, it’s possible to imagine a state of dynamic equilibrium — eternal cops and robbers.”

That is exactly right. It’s a never-ending learning/coping process, as I noted in my earlier paper in the AEI series: “There is no Goldilocks formula that can get things just right” when it comes to many tech governance issues, especially content moderation issues. Muddling through is the new normal. And the exact same process is now unfolding for Metaverse content moderation. Algorithmic moderation helps us weed out the worst stuff and gives us a better chance of letting humans — with their limited time and resources — deal with the hardest problems (and problem-makers) out there.

Sometimes the content infractions may even be accidental. Here’s another embarrassing story involving me. I was asked last year to sit in on a VR meeting about content moderation in the Metaverse. I was wearing my headset and sitting at a virtual table with about 8 other people in the room. Back in my real-world office, I had my coffee mug sitting far to the right of me on a side table. After about 45 minutes of discussion, I realized that every time I reached way over to my right to grab my coffee mug in the real-world, my virtual self’s hand was reaching over and touching the crotch of the guy sitting next to me in the Metaverse! It looked like I was fondling the dude virtually! What a nightmare. I’m surprised someone didn’t report me for virtual harassment. I would have had to plead the coffee mug defense and throw myself on the mercy of the Meta-Court judge or jury.

Ok, so that’s a funny story, but you can imagine little mistakes like this happening all throughout the Metaverse as we slowly figure out how to interact normally in new virtual environments. We’ll have to rely on users and algorithms flagging some of the worst behaviors and then have humans evaluate the tough calls to the best of their abilities. But let’s not be fooled into thinking that humans can handle all these questions because the task at hand is too overwhelming and expensive for many platform operators. “Ten thousand employees here, ten thousand ergonomic mouse pads there, and pretty soon we’re talking about real money,” Feerst notes. “This is what the cost of running a platform looks like, once you’ve internalized the harmful and inexorable externalities we’ve learned about the hard way over the past decade.”

The Problem with “Explainability”

The key takeaway here is that content moderation at scale is messy, confusing, and unsatisfying. Do platforms need to be more transparent about how their algorithms work to do this screening? Yes, they do. But perfect transparency or “explainability” is impossible.

It’s hard to perfectly explain how algorithms work for the same reason it’s hard for your car mechanic to explain to you exactly how your car engine works. Except it’s even harder with algorithmic systems. As Feerst notes:

AI outputs can be hard to explain. In some cases, even the creators or managers of a particular product are no longer sure why it is functioning a particular way. It’s not like the formula to Coca-Cola; it’s constantly evolving. Requirements to “disclose the algorithm” may not help much if it means that companies will simply post a bunch of not especially meaningful code.

And if explainability were mandated by law, it’d instantly be gamed by still other troublemakers out there. A mandate to make AI perfectly transparent is an open invitation to every scam artist in the world to game platforms with new phishing attacks, spammy scams, and other such nonsense. Again, this is the “crafty adversaries” problem at work. Endless cat-and-mouse or, as Feerst says, “eternal cops and robbers.”

So, in sum, content moderation — including algorithmic content moderation — is a nightmarishly difficult task, and there is no Goldilocks formula available to us that will help us get things just right. It’ll always just be endless experimentation and iteration with lots and lots of failures along the way. Learning by doing and constantly refining our systems and procedures is the key to helping us muddle through.

And if you think government will somehow figure this all out through some sort of top-down regulatory regime, ask yourself how well that worked out for Analog Era efforts to create “community standards” for broadcast radio and television. And then multiply that problem by a zillion. It cannot be done without severely undermining free speech and innovation. We don’t want to go down that path.

____________

Additional Reading

· “Again, We Should Not Ban All Teens from Social Media”

· “The Classical Liberal Approach to Digital Media Free Speech Issues”

· “AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead”

· “Left and right take aim at Big Tech — and the First Amendment”

· “When It Comes to Fighting Social Media Bias, More Regulation Is Not the Answer”

· “FCC’s O’Rielly on First Amendment & Fairness Doctrine Dangers”

· “Conservatives & Common Carriage: Contradictions & Challenges”

· “The Great Deplatforming of 2021”

· “A Good Time to Re-Read Reagan’s Fairness Doctrine Veto”

· “Sen. Hawley’s Radical, Paternalistic Plan to Remake the Internet”

· “How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality”

· “Sen. Hawley’s Moral Panic Over Social Media”

· “The White House Social Media Summit and the Return of ‘Regulation by Raised Eyebrow’”

· “The Not-So-SMART Act”

· “The Surprising Ideological Origins of Trump’s Communications Collectivism”

AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead https://techliberation.com/2022/09/12/ai-eats-the-world-preparing-for-the-computational-revolution-and-the-policy-debates-ahead/ https://techliberation.com/2022/09/12/ai-eats-the-world-preparing-for-the-computational-revolution-and-the-policy-debates-ahead/#comments Mon, 12 Sep 2022 23:52:26 +0000 https://techliberation.com/?p=77039

[Cross-posted from Medium.]

The Coming Computational Revolution

Thomas Edison once spoke of how electricity was a “field of fields.” This is even more true of AI, which is poised to bring about a sweeping technological revolution. In her influential 2009 paper, “Technological Revolutions and Techno-economic Paradigms,” Carlota Perez defined a technological revolution “as a set of interrelated radical breakthroughs, forming a major constellation of interdependent technologies; a cluster of clusters or a system of systems.” To be considered a legitimate technological revolution, Perez argued, the technology or technological process must be “opening a vast innovation opportunity space and providing a new set of associated generic technologies, infrastructures and organisational principles that can significantly increase the efficiency and effectiveness of all industries and activities.” In other words, she concluded, the technology must have “the power to bring about a transformation across the board.”

Expanding Our Skillset

Thus, AI (and AI policy) is multi-dimensional, amorphous, and ever-changing. It has many layers and complexities. This will require public policy analysts and institutions to reorient their focus and develop new capabilities.

Mapping the AI Policy Terrain: Broad vs. Narrow

Beyond talent development, the other major challenge is issue coverage. How can we cover all the AI policy bases? There are two general categories of AI concerns, and supporters of free markets need to be prepared to engage on both battlefields.

Confronting the Formidable Resistance to Change

Finally, free-market analysts and organizations must prepare to defend the general concept of progress through technological change as AI becomes a central social, economic, and legal battleground — both domestically and globally. Every technological revolution involves major social and economic disruptions and gives rise to intense efforts to defend the status quo and block progress. As Perez concludes, “the profound and wide-ranging changes made possible by each technological revolution and its techno-economic paradigm are not easily assimilated; they give rise to intense resistance.”

AI Governance “on the Ground” vs “on the Books” https://techliberation.com/2022/08/24/ai-governance-on-the-ground-vs-on-the-books/ https://techliberation.com/2022/08/24/ai-governance-on-the-ground-vs-on-the-books/#respond Wed, 24 Aug 2022 15:14:56 +0000 https://techliberation.com/?p=77028

[Cross-posted from Medium.]

There are two general types of technological governance that can be used to address challenges associated with artificial intelligence (AI) and computational sciences more generally. We can think of these as “on the ground” (bottom-up, informal “soft law”) governance mechanisms versus “on the books” (top-down, formal “hard law”) governance mechanisms.

Unfortunately, heated debates about the latter type of governance often divert attention from the many ways in which the former can (or already does) help us address many of the challenges associated with emerging technologies like AI, machine learning, and robotics. It is important that we think harder about how to optimize these decentralized soft law governance mechanisms today, especially as traditional hard law methods are increasingly strained by the relentless pace of technological change and ongoing dysfunctionalism in the legislative and regulatory arenas.

On the Grounds vs. On the Books Governance

Let’s unpack these “on the ground” and “on the books” notions a bit more. I am borrowing these descriptors from an important 2011 law review article by Kenneth A. Bamberger and Deirdre K. Mulligan, which explored the distinction between what they referred to as “Privacy on the Books and on the Ground.” They identified how privacy best practices were emerging in a decentralized fashion thanks to the activities of corporate privacy officers and privacy associations who helped formulate best practices for data collection and use.

The growth of privacy professional bodies and non-profit organizations — especially the International Association of Privacy Professionals (IAPP) — helped better formalize privacy best practices by establishing and certifying internal champions to uphold key data-handling principles within organizations. By 2019, the IAPP had over 50,000 trained members globally, and its numbers keep swelling. Today, it is quite common to find Chief Privacy Officers throughout the corporate, governmental, and non-profit world.

These privacy professionals work together, and in conjunction with a wide diversity of other players, to “bake in” widely accepted information collection/use practices within all these organizations. With the help of the IAPP and other privacy advocates and academics, these professionals also look to constantly refine and improve their standards to account for changing circumstances and challenges in our fast-paced data economy. They also look to ensure that organizations live up to commitments they have made to the public, or even to governments, to abide by various data-handling best practices.

Soft Law vs. Hard Law

These “on the ground” efforts have helped usher in a variety of corporate social responsibility best practices and provide a flexible governance model that can be a complement to, or sometimes even a substitute for, formal “on the books” efforts. We can also think of this as the difference between soft law and hard law.

Soft law refers to agile, adaptable governance schemes for emerging technology that create substantive expectations and best practices for innovators without regulatory mandates. Soft law can take many forms, including guidelines, best practices, agency consultations & workshops, multistakeholder initiatives, and other experimental types of decentralized, non-binding commitments and efforts.

Soft law has become a bit of a gap-filler in the U.S. as hard law efforts fail for various reasons. The most obvious explanation for why the role of hard law governance has shrunk is that it’s just very hard for law to keep up with fast-moving technological developments today. This is known as the pacing problem. Many scholars have identified how the pacing problem gives rise to a “governance gap” or “competency trap” for policymakers because, just as quickly as they are coming to grips with new technological developments, other technologies are emerging quickly on their heels.

Think of modern technologies — especially informational and computational technologies — like a series of waves that come flowing in to shore faster and faster. As soon as one wave crests and then crashes down, another one comes right after it and soaks you again before you’ve had time to recover from the daze of the previous ones hitting you. In a world of combinatorial innovation, in which technologies build on top of one another in a symbiotic fashion, this process becomes self-reinforcing and relentless. For policymakers, this means that just when they’ve worked their way up one technological learning curve, the next wave hits and forces them to try to quickly learn about and prepare for the next one that has arrived. Lawmakers are often overwhelmed by this flood of technological change, making it harder and harder for policies to get put in place in a timely fashion — and equally hard to ensure that any new or even existing policies stay relevant as all this rapid-fire innovation continues.

Legislative dysfunctionalism doesn’t help. Congress has a hard time advancing bills on many issues, and technical matters often get pushed to the bottom of the priorities list. The end result is that Congress has increasingly become a non-actor on tech policy in the U.S. Most of the action lies elsewhere.

What’s Your Backup Plan?

This means there is a powerful pragmatic case for embracing soft law efforts that can at least provide us with some “on the ground” governance efforts and practices. Increasingly, soft law is filling the governance gap because hard law is failing for a variety of reasons already identified. Practically speaking, even if you are dead set on imposing a rigid, top-down, technocratic regulatory regime on any given sector or technology, you should at least have a backup plan in mind if you can’t accomplish that.

This is why privacy governance in the United States continues to depend heavily on such soft law efforts to fill the governance vacuum after years of failed attempts to enact a formal federal privacy law. While many academics and others continue to push for such an over-arching data handling law, bottom-up soft law efforts have played an important role in balancing privacy and innovation.

In a similar way, “on the ground” governance efforts are already flourishing for artificial intelligence and machine learning as policymakers continue to very slowly consider whether new hard law initiatives are wise or even possible. For example, congressional lawmakers have been considering a federal regulatory framework for driverless cars for the past several sessions of Congress. Many people in Congress and in academic circles agree that a federal framework is needed, if for no other reason than to preempt the much-dreaded specter of a patchwork of inconsistent state and local regulatory policies. With so much bipartisan agreement out there on driverless car legislation, it would seem like a federal bill would be a slam dunk. For that reason, year in and year out, people always predict: this is the year we’ll get driverless car legislation! And yet, it never happens due to a combination of special interest opposition from unions and trial lawyers, in addition to the pacing problem issue and Congress focusing its limited attention on other issues.

This is also already true for algorithmic regulation. We hear lots of calls to do something, but it remains unclear what that something is or whether it will get done any time soon. If we could not get a privacy bill through Congress after at least a dozen years of major efforts, chances are that broad-based AI regulation is going to be equally challenging.

Soft Law for AI is Exploding

Thus, soft law will likely fill the governance gap for AI. It already is. I’m working on a new book that documents the astonishing array of soft law mechanisms already in place or being developed to address various algorithmic concerns. I can’t seem to finish the book because there is just so much going on related to soft law governance efforts for algorithmic systems. As Mark Coeckelbergh noted in his recent book on AI Ethics, there’s been an “avalanche of initiatives and policy documents” around AI ethics and best practices in recent years. It is a bit overwhelming, but the good news is that there is a lot of consistency in these governance efforts.

To illustrate, a 2019 survey by a group of researchers based in Switzerland analyzed 84 AI ethical frameworks and found “a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy).” A more recent 2021 meta-survey by a team of Arizona State University (ASU) legal scholars reviewed an astonishing 634 soft law AI programs formulated between 2016 and 2019. Thirty-six percent of these efforts were initiated by governments, with the others led by non-profits or private sector bodies. Echoing the findings from the Swiss researchers, the ASU report found widespread consensus among these soft law frameworks on values such as transparency and explainability, ethics/rights, security, and bias. In short, most of these frameworks converge on a core set of values to embed within AI design. The UK-based Alan Turing Institute boils its list down to four “FAST Track Principles”: Fairness, Accountability, Sustainability, and Transparency.

The ASU scholars noted how ethical best practices for product design already influence developers today by creating powerful norms and expectations about responsible product design. “Once a soft law program is created, organizations may seek to enforce it by altering how their employees or representatives perform their duties through the creation and implementation of internal procedures,” they note. “Publicly committing to a course of action is a signal to society that generates expectations about an organization’s future actions.”

This is important because many major trade associations and individual companies have been formulating governance frameworks and ethical guidelines for AI development and use. For example, among large trade associations, the U.S. Chamber of Commerce, the Business Roundtable, BSA | The Software Alliance, and ACT | The App Association have all recently released major AI best practice guidelines. Notable corporate efforts to adopt guidelines for ethical AI practices include statements or frameworks by IBM, Intel, Google, Microsoft, Salesforce, SAP, and Sony, to name just a few. These companies are also creating internal champions to push AI ethics through either the appointment of Chief Ethical Officers, the creation of official departments, or both, plus additional staff to guide the process of baking in AI ethics by design.

Once again, there is remarkable consistency among these corporate statements in terms of the best practices and ethical guidelines they endorse. Each trade association or corporate set of guidelines aligns closely with the core values identified in the hundreds of other soft law frameworks the ASU scholars surveyed. These efforts go a long way toward promoting a culture of responsibility among leading AI innovators. We can think of this as the professionalization of AI best practices.

What Soft Law Critics Forget

Some will claim that “on the ground” soft law efforts are not enough, but they typically make two mistakes when saying so.

Their first mistake is thinking that hard law is practical or even optimal for fast-paced, highly mercurial AI and ML technologies. It’s not just that the pacing problem necessitates new thinking about governance. Critics also fail to understand how hard law would significantly undermine algorithmic innovation: algorithmic systems can change by the minute and, by their very nature, require a more agile and adaptive system of governance.

This is a major focus of my book. I previously published a draft chapter on “The Proper Governance Default for AI,” and another essay on “Why the Future of AI Will Not Be Invented in Europe.” These essays explain why a Precautionary Principle-oriented regulatory regime for algorithmic systems would stifle technological development, undermine entrepreneurialism, diminish competition and global competitive advantage, and even have a deleterious impact on our national security goals.

Traditional regulatory systems can be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may never come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. When innovators must seek special permission before offering a new product or service, it raises the cost of starting a new venture and discourages activities that benefit society. We need to avoid that approach if we hope to maximize the potential of AI-based technologies.

The second mistake that soft law critics make is that they fail to understand how many hard law mechanisms actually play a role in supporting soft law governance. AI applications already are regulated by a whole host of existing legal policies. If someone does something stupid or dangerous with AI systems, the Federal Trade Commission (FTC) has the power to address “unfair and deceptive practices” of any sort. And state Attorneys General and state consumer protection agencies also routinely address unfair practices and continue to advance their own privacy and data security policies, some of which are often more stringent than federal law.

Meanwhile, several existing regulatory agencies in the U.S. possess investigatory and recall authority that allows them to remove products from the market when certain unforeseen problems manifest themselves. For example, the National Highway Traffic Safety Administration (NHTSA), the Food & Drug Administration (FDA), and the Consumer Product Safety Commission (CPSC) all possess broad recall authority that could be used to address risks that develop in many algorithmic or robotic systems. Indeed, NHTSA is currently using its investigative authority to evaluate Tesla’s claims about “full self-driving” technology, and the agency has the power to take action against the company under existing regulations. Likewise, the FDA used its broad authority to crack down on genetic testing company 23andMe years ago. And the CPSC and the FTC have broad authority to investigate claims made by innovators, and they’ve already used it. It’s not as if our expansive regulatory state lacks considerable existing power to police new technology. If anything, the power of the administrative state is too broad and amorphous, and it can be abused in certain instances.

Perhaps most importantly, our common law system can address other deficiencies with AI-based systems and applications using product defects law, torts, contract law, property law, and class action lawsuits. This is a better way of addressing risks compared to preemptive regulation of general-purpose AI technology because it at least allows the technologies to first develop and then see what actual problems manifest themselves. Better to treat innovators as innocent until proven guilty than the other way around.

There are other thorny issues that deserve serious policy consideration and perhaps even some new rules. But how risks are addressed matters deeply. Before we resort to heavy-handed, legalistic solutions for possible problems, we should exhaust all other potential remedies first.

In other words, “on the ground” soft law governance mechanisms and ex post legal solutions should generally trump ex ante (preemptive, precautionary) regulatory constraints. But we should look for ways to refine and improve soft law governance tools, perhaps through better voluntary certification and auditing regimes that hold developers to a high standard on the important AI ethical practices we want them to uphold. This is the path forward to achieve responsible AI innovation without the heavy-handed baggage associated with more formalistic, inflexible regulatory approaches that are ill-suited for complicated, rapidly-evolving computational technologies.

___________________

Related Reading on AI & Robotics

Why the Future of AI Will Not Be Invented in Europe https://techliberation.com/2022/08/01/why-the-future-of-ai-will-not-be-invented-in-europe/ https://techliberation.com/2022/08/01/why-the-future-of-ai-will-not-be-invented-in-europe/#comments Mon, 01 Aug 2022 18:28:40 +0000 https://techliberation.com/?p=77016

For my latest column in The Hill, I explored the European Union’s (EU) endlessly expanding push to regulate all facets of the modern data economy. That now includes a new effort to regulate artificial intelligence (AI) using the same sort of top-down, heavy-handed, bureaucratic compliance regime that has stifled digital innovation on the continent over the past quarter century.

The European Commission (EC) is advancing a new Artificial Intelligence Act, which proposes banning some AI technologies while classifying many others under a heavily controlled “high-risk” category. A new bureaucracy, the European Artificial Intelligence Board, will be tasked with enforcing a wide variety of new rules, including “prior conformity assessments,” which are like permission slips for algorithmic innovators. Steep fines are also part of the plan. There’s a lengthy list of covered sectors and technologies, with many others that could be added in coming years. It’s no wonder, then, that the measure has been labelled “the mother of all AI laws” and that analysts have argued it will further burden innovation and investment in Europe.

As I noted in my new column, the consensus about Europe’s future on the emerging technology front is dismal, to put it mildly. The International Economy journal recently asked 11 experts from Europe and the U.S. where the EU currently stands in global tech competition. Responses were nearly unanimous and bluntly summarized by the symposium’s title: “The Biggest Loser.” Respondents said Europe is “lagging behind in the global tech race” and “unlikely to become a global hub of innovation.” “The future will not be invented in Europe,” another analyst bluntly concluded.

That’s a grim assessment, but there is no doubt that European competitiveness is suffering today and that excessive regulation plays a significant role in causing it. As I argued, “the EU’s risk-averse culture and preference for paperwork compliance over entrepreneurial freedom” has had serious consequences for continent-wide innovation. As I explain in the column:

After the continent piled on layers of data restrictions beginning in the mid-1990s, innovation and investment suffered. Regulation grew more complex with the 2018 General Data Protection Regulation (GDPR), which further limits data collection and use. As a result of all the red tape, the EU came away from the digital revolution with “the complete absence of superstar companies.” There are no serious European versions of Microsoft, Google, Facebook, Apple or Amazon. Europe’s leading providers of digital technology services today are American-based companies.

Let’s take a look at a few numbers that illustrate what’s happened in Europe’s tech sector over the past quarter century. Here’s an old KPMG breakdown of market caps for public Internet companies over an important 20-year period, from 1995 to 2015, when the digital technology marketplace was taking shape. Besides the remarkable amount of churn over that period (with only Apple appearing on both lists), the other notable thing is the complete absence of any European companies in 2015.

Next, here’s a chart I constructed using CB Insights data for global unicorns (privately held companies valued at $1 billion or more) from 2010 through early 2022. It shows the U.S. dominating fully half the list, with China holding a 16 percent share, while all of the European Union’s firms together account for just a 9 percent slice of the world’s total.

If you want to see a per capita breakdown of VC investment by country, here’s a handy Crunchbase News chart. While the U.S. is geographically much larger than Europe, a breakdown of VC funding on a per capita basis reveals that only Estonia ($915 per capita) and Sweden ($700) have startup investment on par with America ($808). No other European country has even half as much per capita VC investment as the U.S., and most don’t even have a quarter as much.

As we enter the “age of AI,” what will this same EU regulatory model mean for AI, machine learning, and robotics in Europe? We do have some early data on that, too. Here’s a breakdown of AI-related VC activity and AI unicorns in 2021 from the recent State of AI Report 2021, with European countries already trailing far behind:

Also, here’s some data on recent AI investment by region from the latest Stanford “AI Index Report 2022” which again highlights a gap that is only growing larger:

It’s important to listen to what actual AI innovators across the Atlantic have to say about the new EU regulatory efforts. Just last month, the UK-based Coalition for a Digital Economy (Coadec), an advocacy group for Britain’s technology-led startups, published a report entitled, “What do AI Startups Want from Regulation?” Coadec surveyed its members to gauge their feelings about the EU’s proposed approach to AI regulation, as well as the UK’s. 76% of those startups said that their business model would either be negatively affected or become infeasible if the UK were to echo the EU by making AI developers liable, and an equal percentage said they had concerns about whether it is technically even feasible to make their datasets “free of errors,” as the EU looks set to demand. Respondents also said they feared that the new AI Act would be particularly burdensome to small and mid-size entrepreneurs because, unlike the larger competitors they face, they cannot afford to deal with the costly compliance hassles. This would end up being a replay of the burdens they faced from GDPR, which decimated small businesses. “The experience of GDPR demonstrated how unclear, complex and expensive regulations drove many startups out of business, and disproportionately impact startups that survived–GDPR compliance cost startups significantly more than it did the Tech Giants,” the Coadec report concluded.

At least those UK-based innovators might be in a slightly better position post-Brexit with the British government now looking to chart a different–and much less burdensome–governance approach for digital technologies. In fact, the UK government recently released a major policy document on “Establishing a Pro-Innovation Approach to Regulating AI,” which makes a concerted effort to distinguish its approach from the EU’s. “We will ask that regulators focus on high risk concerns rather than hypothetical or low risks associated with AI,” the report noted. “We want to encourage innovation and avoid placing unnecessary barriers in its way.” This is consistent with what the UK government has been saying on technology governance more generally. For example, in a recent report advocating for Innovation Friendly Regulation, the UK government’s Regulatory Horizons Council argued that, when it comes to the regulation of emerging technologies like AI, “it is also necessary to consider the risk that the intervention itself poses.” “This would include the potential impact on benefits from a particular innovation that might be foregone; it would also include the potential creation of a ‘chilling effect’ on innovation more generally,” the Council concluded. Clearly, this approach to technology policy stands in stark contrast to the EU’s heavy-handed model. So, there is a chance that at least some innovators based in the UK can escape the EU’s regulatory hell.

What about AI innovators stuck on the European continent? What are they saying about the regulations they will soon face? The European DIGITAL SME Alliance, the largest network of small and medium sized enterprises (SMEs) in the European ICT sector, represents roughly 45,000 digital SMEs. In comments to the EC about the impact of the law, the Alliance highlighted how costly the AI Act’s conformity assessments and other regulations will be for smaller innovators. “This may put a burden on AI innovation,” the Alliance argued, given the “limited financial and human resources of SMEs.” “[A] regulation that requires SMEs to make these significant investments, will likely push SMEs out of the market,” the group noted. “This is exactly the opposite of the intention to support a thriving and innovative AI ecosystem in Europe.” Moreover, “SMEs will not be able to pass on these costs to their customers in the final customer end pricing,” the Alliance correctly noted, because “[t]he market is global and highly competitive. Therefore, customers will choose cheaper solutions and Europe risks to be left behind in technology development and global competition.”

In March, the Alliance also hosted a forum on “The European AI Act and Digital SMEs,” which featured comments from some operators in this space. Some speakers were quite timid, and you could sense that they might have feared pushing back too aggressively against the European Commission so as not to get on the bad side of regulators before the rules go into effect. But Mislav Malenica, Founder & CEO of Mindsmiths, didn’t pull any punches in his remarks. His company is trying to build autonomous support systems in many different fields, but its ability to innovate and compete globally will be severely curtailed by the EU AI Act, he argued.

I usually don’t spend time transcribing people’s comments from events, but I went back and watched Malenica’s remarks multiple times because they are so powerful and I wanted to make sure others hear what he was saying. [Malenica’s opening comments run from 42:29 to 49:34 of the video, and he has more to say during the Q&A beginning at 1:27:28.] Here’s a quick summary of a few of his key points (listed chronologically):

  • “I’m not sure we are doing everything we can do actually to create an environment that’s innovation friendly.”
  • “we see a lot of uncertainty. We see fear.”
  • “basically we won’t be able to get funding here.”
  • while reading through the AI Act, he notes, “I don’t see start-ups being mentioned anywhere, and startups are the main vehicles of innovation.” […] “I find it very arrogant”
  • if the AI Act becomes law, “what we’ll do in Europe is we’ll create a new market and that’s the AI markets based on fear,” with companies focused simply on building products that avoid the wrath of government or lawsuits.
  • “we are really stifling innovation” and that means Europeans will have to import autonomous products from foreign companies instead of making them there.

Later, during the Q&A period, Malenica notes how his first virtual currency startup had to use half its investment capital just dealing with regulatory compliance issues, and most venture capitalists wouldn’t get behind launching in Europe because of such legal hassles. He reflects upon what this means for other innovators going forward as the EU prepares to expand its regulatory regime for AI sectors:

  • “I don’t think we’re missing talent. That’s just a consequence” of all the regulation. “We are missing a sense that you have opportunities here. If you [have] the opportunities here, then the talent will come, the funding will come, and so on because people see that they’ll be able to make money, they’ll be able to build companies, and so on.”
  • “If we now take a look at the 10 biggest companies market capitalizations in the world, we’ll see that none of them comes actually from Europe” with U.S. tech companies dominating the list. “So, we missed that wave completely.” Why? “Because we didn’t inspire anyone to take action,” and that is about to happen for AI.
  • “We need to decide if we are going to be a land of opportunities, or will we be just consumers of other people’s tech, the same we are right now” for digital software and services.
  • “We’re already finding excuses for the loss” of the AI market, he argues.

Malenica’s comments are extraordinarily demoralizing if you care about innovation. Now, I’m an American, and one way to look at this dismal situation is that, by hobbling its own startups and existing AI innovators, Europe is doing the U.S. another favor by essentially taking itself out of the running in the next great global tech race. Europe’s actions may also mean that America gains many of its best and brightest, who may come to the U.S. to create the next great algorithmic service or application because they can’t do so in the EU. This is exactly what happened over the past few decades with Internet startups, Malenica noted.

But that’s dismal news in another sense. Europe is filled with brilliant innovators, highly-skilled talent, world-class educational institutions, and even many venture capitalists looking to invest in this arena. Unfortunately, the continent’s suffocating regulatory approach makes it nearly impossible for digital technology innovators to have a fighting chance. Through their heavy-handed policies, European officials have essentially declared their innovators “guilty until proven innocent.” And that means that Europeans and the rest of the world are being deprived of many important life-enriching and life-saving AI applications that those innovators could create. Technological innovation is not a zero-sum game that only one country can “win.” Innovation drives growth and prosperity and lifts all boats as its benefits spread throughout the world. When European innovators prosper, people all over the world prosper along with them.

Is there any chance the European Commission softens its stance toward emerging technologies and looks to adopt a more flexible governance approach that instead treats AI innovators as innocent until proven guilty? I think it is extremely unlikely that will happen because, as Malenica noted, European technology policy is too rooted in fear of disruption and extreme risk-aversion. EU officials are forgetting that the most important lesson from the history of technological innovation is there can be no progress without some risk-taking and corresponding disruption. My favorite quote about the relationship between risk-taking and human progress comes from Wilbur Wright who, along with his brother, helped pioneer human flight. “If you are looking for perfect safety,” Wright said, “you would do well to sit on a fence and watch the birds.” European policymakers are essentially forcing their best and brightest innovators to sit on the fence and watch the rest of the world fly right past them on the digital technology and AI front. The ramifications for the continent will be disastrous. Regardless, as I noted in concluding my recent Hill column, Europe’s approach to AI “shouldn’t be the model the U.S. follows if it hopes to maintain its early lead in AI and robotics. America should instead welcome European companies, workers and investors looking for a more hospitable place to launch bold new AI innovations.”

Alas, European officials appear ready to ignore the deleterious impact of their policies on innovation and competition and instead make regulation their leading export to the world. In fact, the European Commission will soon open a San Francisco office to work more closely with Silicon Valley companies affected by EU tech regulation. European leaders have basically surrendered on the idea of home-grown innovation and are now plowing all their energies into regulating the rest of the world’s largest digital technology companies, most of which are headquartered in the United States. It’s no wonder, then, that The Economist magazine concludes that, “Europe is the free-rider continent” that “has piggybacked on innovation from elsewhere, keeping up with rivals, not forging ahead.” Instead, “the cuddly form of capitalism embraced in Europe has markedly failed to create world-beating companies,” the magazine argues.

European officials want us to believe that they are somehow doing the world a favor by being its global tech regulator, when in reality they are simply solidifying the power of the largest digital tech companies, who are the only ones with enough resources–mainly in the form of massive legal compliance teams–to live under the EU’s innovation-crushing regulations. Sadly, many US policymakers hate our own home-grown tech companies so much now that they are willing to let this happen. In a better world, those American lawmakers would stand up to European officials looking to bully tech innovators, and we would reject the innovation-killing recipe that the EU is cooking up for AI markets and expects the rest of the world to eat.


Additional Reading on AI & Robotics:

3 Questions about Progress: The Profectus Progress Roundtable https://techliberation.com/2022/06/15/3-questions-about-the-progress-the-profectus-progress-roundtable/ https://techliberation.com/2022/06/15/3-questions-about-the-progress-the-profectus-progress-roundtable/#respond Wed, 15 Jun 2022 17:10:56 +0000 https://techliberation.com/?p=77002

Profectus is an excellent new online magazine featuring essays and interviews on the intersection of academic literature, public policy, civilizational progress, and human flourishing. The Spring 2022 edition of the magazine features a “Progress Roundtable” in which six different scholars were asked to contribute their thoughts on three general questions:
  1. What is progress?
  2. What are the most significant barriers holding back further progress?
  3. If those challenges can be overcome, what does the world look like in 50 years?

I was honored to be asked by Clay Routledge to contribute answers to those questions alongside several others: Steven Pinker (Harvard University), Jason Crawford (Roots of Progress), Matt Clancy (Institute for Progress), Marian Tupy (HumanProgress.org), and James Pethokoukis (AEI). I encourage you to jump over to the roundtable and read all their excellent responses. I’ve included my answers down below:

What is progress?

Progress is the advancement of human health, happiness, and general well-being. Measures of well-being can be challenging, however, so we should consider a broad range of metrics, including: life expectancy, infant mortality, poverty measures, energy production/consumption, GDP, productivity, agricultural yields/nourishment, and access to various important goods, services, and conveniences. While each of these metrics may have limitations, taken together, they stand for something meaningful that represents a rough proxy for progress.

But we should always remember what progress means at a deeper level for every individual. Innovation and economic growth are important because they allow us to live lives of our own choosing and enjoy the fruits of a prosperous, pluralistic society.  Progress “is not just bigger piles of money,” as Hans Rosling once noted. “The ultimate goal is to have the freedom to do what we want.”  Accordingly, we should aim to broaden the range of opportunities available to all people to help them flourish.

What are the most significant barriers holding back further progress?

The most significant threat to continued progress is the risk of stagnation accompanying efforts to protect the status quo. As Virginia Postrel taught us in her wonderful book The Future & Its Enemies, we should reject stasis-minded thinking and instead shoot for a world of dynamism, which cherishes and protects the freedom to think and act differently.

Progress hinges upon the growth of knowledge. Knowledge comes from experience, and the most important experiences involve trial-and-error learning. Public attitudes and policies that restrict people and ideas from intermingling freely are a recipe for intellectual, social, and economic stagnation. Accordingly, when we consider public policies toward progress, we should first seek to identify and remove legal and regulatory impediments that limit risk-taking, entrepreneurialism, and technological innovation. As science writer Matt Ridley provocatively puts it, to unlock more growth and prosperity, we must first remove obstacles to “ideas having sex.”

The free movement of people and capital is essential to this process. Openness to immigration is the easiest way for a nation to expand its potential for innovation and growth. But domestic labor skills and mobility are equally important. For entrepreneurs and workers, we need to reframe the battle for progress as “the freedom to innovate” and “the right to earn a living.”

Unfortunately, many barriers stand in the way of those goals, like occupational licensing rules and permitting processes, cronyist industrial protectionism, inefficient tax policies, and many other layers of regulatory red tape. Reforming or eliminating such rules is crucial for broadening opportunities.

Finally, we need to address cultural barriers to progress. Technology and entrepreneurs often get a bad rap in the media and popular culture. Fear and pessimism dominate their narratives. We must do a better job communicating the benefits of openness to change and give people more reasons to be optimistic about a dynamic future.

If those challenges can be overcome, what does the world look like in 50 years?

I agree with Yogi Berra that “It’s tough to make predictions, especially about the future.” Nonetheless, history shows we can achieve remarkable things when we get the prerequisites for progress right and let people tap into their inherent inquisitiveness and inventiveness. Moving the needle on innovation and growth even just a little will yield compounding returns to future generations. But we should dare to dream bigger and think what progress means for each person today and in the future.

A pro-progress agenda will help us lead longer lives and significantly expand our capabilities because that is what people have always desired most. Accordingly, I believe the most significant advance of the next 50 years will be a radical increase in life expectancy and dramatic improvements in our physical and mental capabilities while we are alive.

Today’s tech critics often claim that technological innovation somehow undermines our humanity. They couldn’t be more wrong. There are few things more human than acts of invention. When we take steps to address practical human needs and wants, we enrich our lives and the lives of countless others. The future will be wonderful, so long as we are free to make it so.

VIDEO: My London Talk about the Future of AI Governance https://techliberation.com/2022/06/13/video-my-london-talk-about-the-future-of-ai-governance/ https://techliberation.com/2022/06/13/video-my-london-talk-about-the-future-of-ai-governance/#comments Mon, 13 Jun 2022 09:29:50 +0000 https://techliberation.com/?p=76999

On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:

  • What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
  • Which AI sectors are witnessing the most exciting forms of innovation currently?
  • What are the fundamental policy fault lines in the AI policy debates today?
  • Will fears about disruption and automation lead to a new Luddite movement?
  • How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
  • How did automation affect traditional jobs and sectors?
  • Will the European Union’s AI Act become a global model for regulation and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
  • How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
  • Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
  • What do we mean by “existential risk” as it pertains to artificial intelligence?

I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!


The Proper Governance Default for AI https://techliberation.com/2022/05/26/the-proper-governance-default-for-ai/ https://techliberation.com/2022/05/26/the-proper-governance-default-for-ai/#comments Thu, 26 May 2022 20:15:21 +0000 https://techliberation.com/?p=76994

[This is a draft of a section of a forthcoming study on “A Flexible Governance Framework for Artificial Intelligence,” which I hope to complete shortly. I welcome feedback. I have also cross-posted this essay at Medium.]

Debates about how to embed ethics and best practices into AI product design are where the question of public policy defaults becomes important. To the extent AI design becomes the subject of legal or regulatory decision-making, a choice must be made between two general approaches: the precautionary principle or the proactionary principle.[1] While there are many hybrid governance approaches in between these two poles, the crucial issue is whether the initial legal default for AI technologies will be set closer to the red light of the precautionary principle (i.e., permissioned innovation) or to the green light of the proactionary principle (i.e., permissionless innovation). Each governance default will be discussed.

The Problem with the Precautionary Principle as the Policy Default for AI

The precautionary principle holds that innovations are to be curtailed or potentially even disallowed until the creators of those new technologies can prove that they will not cause any theoretical harms. The classic formulation of the precautionary principle can be found in the “Wingspread Statement,” which was formulated at an academic conference that took place at the Wingspread Conference Center in Wisconsin in 1998. It read: “Where an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”[2] There have been many reformulations of the precautionary principle over time but, as legal scholar Cass Sunstein has noted, “in all of them, the animating idea is that regulators should take steps to protect against potential harms, even if causal chains are unclear and even if we do not know that those harms will come to fruition.”[3] Put simply, under almost all varieties of the precautionary principle, innovation is treated as “guilty until proven innocent.”[4] We can also think of this as permissioned innovation.

The logic animating the precautionary principle reflects a well-intentioned desire to play it safe in the face of uncertainty. The problem lies in the way this instinct gets translated into law and regulation. Making the precautionary principle the public policy default for any given technology or sector has a strong bearing on how much innovation we can expect to flow from it. When trial-and-error experimentation is preemptively forbidden or discouraged by law, it can limit many of the positive outcomes that typically accompany efforts by people to be creative and entrepreneurial. This can, in turn, give rise to different risks for society in terms of forgone innovation, growth, and corresponding opportunities to improve human welfare in meaningful ways.

St. Thomas Aquinas once observed that if the sole goal of a captain were to preserve their ship, the captain would keep it in port forever. But that clearly is not the captain’s highest goal. Aquinas was making a simple but powerful point: There can be no reward without some effort and even some risk-taking. Ship captains brave the high seas because they are in search of a greater good, such as recognition, adventure, or income. Keeping ships in port forever would preserve their vessels, but at what cost?

Similarly, consider the wise words of Wilbur Wright, who pioneered human flight. Few people better understood the profound risks associated with entrepreneurial activities. After all, Wilbur and his brother were trying to figure out how to literally lift humans off the Earth. The dangers were real, but worth taking. “If you are looking for perfect safety,” Wright said, “you would do well to sit on a fence and watch the birds.” Humans would have never taken to the skies if the Wright brothers had not gotten off the fence and taken the risks they did. Risk-taking drives innovation and, over the long haul, improves our well-being.[5] Nothing ventured, nothing gained.

These lessons can be applied to public policy by considering what would happen if, in the name of safety, public officials told captains to never leave port or told aspiring pilots to never leave the ground. The opportunity cost of inaction can be hard to quantify, but it should be clear that if we organized our entire society around a rigid application of the precautionary principle, progress and prosperity would suffer.

Heavy-handed preemptive restraints on creative acts can have deleterious effects because they raise barriers to entry, increase compliance costs, and create more risk and uncertainty for entrepreneurs and investors. Thus, it is the unseen costs—primarily in the form of forgone innovation opportunities—that make the precautionary principle so problematic as a policy default. This is why scientist Martin Rees speaks of “the hidden cost of saying no” that is associated with the precautionary principle.[6]

The precautionary principle produces this result by derailing the so-called learning curve: it limits opportunities to learn from trial-and-error experimentation with new and better ways of doing things.[7] The learning curve refers to the way that individuals, organizations, or industries are able to learn from their mistakes, improve their designs, enhance productivity, lower costs, and then offer superior products based on the resulting knowledge.[8] In his recent book, Where Is My Flying Car?, J. Storrs Hall documents how, over the last half century, “regulation clobbered the learning curve” for many important technologies in the U.S., especially nuclear, nanotech, and advanced aviation.[9] Hall shows how society was denied many important innovations due to endless foot-dragging or outright opposition to change from special interests, anti-innovation activists, and over-zealous bureaucrats.
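The learning-curve dynamic described above can be made concrete with a small sketch. A common way to model it is Wright’s law, under which unit cost falls by a fixed fraction each time cumulative output doubles; the parameters here (a $100 first unit, an 80 percent experience curve) are illustrative assumptions of mine, not figures from Hall’s book.

```python
import math

def unit_cost(first_unit_cost: float, cumulative_units: int,
              learning_rate: float = 0.8) -> float:
    """Cost of the nth unit under Wright's law: C(n) = C1 * n**b,
    where b = log2(learning_rate). An 80% curve means each doubling
    of cumulative output cuts unit cost to 80% of its prior level."""
    b = math.log2(learning_rate)
    return first_unit_cost * cumulative_units ** b

# With a $100 first unit and an 80% curve, the 8th unit (three
# doublings) costs 100 * 0.8**3 = $51.20. Blocking the experimentation
# that generates those doublings forgoes exactly this cost decline.
print(round(unit_cost(100.0, 8), 2))
```

The point of the sketch is the compounding: each round of trial-and-error production is what moves a technology down the curve, so preemptively forbidding those rounds freezes costs near the first-unit level.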

In many cases, innovators don’t even know what they are up against because, as many scholars have noted, “the precautionary principle, in all of its forms, is fraught with vagueness and ambiguity.”[10] It creates confusion and fear about the wisdom of taking action in the face of uncertainty. Worst-case thinking paralyzes regulators who aim to “play it safe” at all costs. The result is an endless snafu of red tape as layer upon layer of mandates builds up and blocks progress. Many scholars now decry this as a culture of “vetocracy,” a term describing the many veto points within modern political systems that hold back innovation, development, and economic opportunity.[11] This endless accumulation of potential veto points in the policy process, in the form of mandates and restrictions, can greatly curtail innovation opportunities. “Like sediment in a harbor, law has steadily accumulated, mainly since the 1960s, until most productive activity requires slogging through a legal swamp,” says Philip K. Howard, chair of Common Good.[12] “Too much law,” he argues, “can have similar effects as too little law,” because:

People slow down, they become defensive, they don’t initiate projects because they are surrounded by legal risks and bureaucratic hurdles. They tiptoe through the day looking over their shoulders rather than driving forward on the power of their instincts. Instead of trial and error, they focus on avoiding error.[13]

This is exactly why it is important that policymakers not get too caught up in attempts to preemptively resolve every hypothetical worst-case scenario associated with AI technologies. The problem with that approach was succinctly summarized by the political scientist Aaron Wildavsky when he noted, “If you can do nothing without knowing first how it will turn out, you cannot do anything at all.”[14] Or, as I have stated in a book on this topic, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.”[15]

This does not mean society should dismiss all concerns about the risks surrounding AI. Some technological risks do necessitate a degree of precautionary policy, but proportionality is crucial, notes Gabrielle Bauer, a Toronto-based medical writer. “Used too liberally,” she argues, “the precautionary principle can keep us stuck in a state of extreme risk-aversion, leading to cumbersome policies that weigh down our lives. To get to the good parts of life, we need to accept some risk.”[16] It is not enough to simply hypothesize that certain AI innovations might entail some risk. The critics need to prove it using risk analysis techniques that properly weigh both the potential costs and benefits.[17] Moreover, when conducting such analyses, the full range of trade-offs associated with preemptive regulation must be evaluated. Again, where precautionary constraints might deny society life-enriching devices or services, those costs must be acknowledged.

Generally speaking, the most extreme precautionary controls should only be imposed when the potential harms in question are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion.[18] In the context of AI and ML systems, it may be the case that such a test is satisfied already for law enforcement use of certain algorithmic profiling techniques. And that test is satisfied for so-called “killer robots,” or autonomous military technology.[19] These are often described as “existential risks.” The precautionary principle is the right default in these cases because it is abundantly clear how unrestricted use would have catastrophic consequences. For similar reasons, governments have long imposed comprehensive restrictions on certain types of weapons.[20] And although nuclear and chemical technologies have many important applications, their use must also be limited to some degree even outside of militaristic applications because they can pose grave danger if misused.

But the vast majority of AI-enabled technologies are not like this. Most innovations should not be treated the same as a hand grenade or a ticking time bomb. In reality, most algorithmic failures will be more mundane and difficult to foresee in advance. By their very nature, algorithms are constantly evolving because programs and systems are being endlessly tweaked by designers to improve them. In his books on the evolution of engineering and systems design, Henry Petroski has noted that “the shortcomings of things are what drive their evolution.”[21] The normal state of things is “ubiquitous imperfection,” he notes, and it is precisely that reality that drives efforts to continuously innovate and iterate.[22]

Regulations rooted in the precautionary principle hope to preemptively find and address product imperfections before any harm comes from them. In reality, and as explained more below, it is only through ongoing experimentation that we find both the nature of failures and the knowledge to know how to correct them. As Petroski observes, “the history of engineering in general, may be told in its failures as well as in its triumphs. Success may be grand, but disappointment can often teach us more.”[23] This is particularly true for complex algorithmic systems, where rapid-fire innovation and incessant iteration are the norm.

Importantly, the problem with precautionary regulation for AI is not just that it might be over-inclusive in seeking to regulate hypothetical problems that never develop. Precautionary regulation can also be under-inclusive by missing problematic behavior or harms that no one anticipated before the fact. Only experience and experimentation reveal certain problems.

In sum, we should not presume that there is a clear preemptive regulatory solution to every problem some people raise about AI, nor should we presume we can even accurately identify all such problems that might come about in the future. Moreover, some risks will never be eliminated entirely, meaning that risk mitigation is the wiser approach. This is why a more flexible bottom-up governance strategy focused on responsiveness and resiliency makes more sense than heavy-handed, top-down strategies that would only avoid risks by making future innovations extremely difficult if not impossible.

The “Proactionary Principle” is the Better Default for AI Policy

The previous section made it clear why the precautionary principle should generally not be used as our policy default if we hope to encourage the development of AI applications and services. What we need is a policy approach that:

  • objectively evaluates the concerns raised about AI systems and applications;
  • considers whether more flexible governance approaches might be available to address them; and,
  • does so without resorting to the precautionary principle as a first-order response.

The proactionary principle is the better general policy default for AI because it satisfies these three objectives.[24] Philosopher Max More defines the proactionary principle as the idea that policymakers should, “[p]rotect the freedom to innovate and progress while thinking and planning intelligently for collateral effects.”[25] There are different names for this same concept, including the innovation principle, which Daniel Castro and Michael McLaughlin of the Information Technology and Innovation Foundation say represents the belief that “the vast majority of new innovations are beneficial and pose little risk, so government should encourage them.”[26] Permissionless innovation is another name for the same idea: the notion that experimentation with new technologies and business models should generally be permitted by default.[27]

What binds these concepts together is the belief that innovation should generally be treated as innocent until proven guilty. There will be risks and failures, of course, but the permissionless innovation mindset views them as important learning experiences. These experiences are chances for individuals, organizations, and all of society to make constant improvements through incessant experimentation with new and better ways of doing things.[28] As Virginia Postrel argued in her 1998 book, The Future and Its Enemies, progress demands “a decentralized, evolutionary process” and mindset in which mistakes are not viewed as permanent disasters but instead as “the correctable by-products of experimentation.”[29] “No one wants to learn by mistakes,” Petroski once noted, “but we cannot learn enough from successes to go beyond the state of the art.”[30] Instead we must realize, as other scholars have observed, that “[s]uccess is the culmination of many failures”[31] and understand “failure as the natural consequence of risk and complexity.”[32]

This is why the default for public policy for AI innovation should, whenever possible, be more green lights than red ones to allow for the maximum amount of trial-and-error experimentation, which encourages ongoing learning.[33] “Experimentation matters,” observes Stefan H. Thomke of the Harvard Business School, “because it fuels the discovery and creation of knowledge and thereby leads to the development and improvement of products, processes, systems, and organizations.”[34]

Obviously, risks and mistakes are “the very things regulators inherently want to avoid,”[35] but “if innovators fear they will be punished for every mistake,” Daniel Castro and Alan McQuinn argue, “then they will be much less assertive in trying to develop the next new thing.”[36] And for all the reasons already stated, that would represent the end of progress because it would foreclose the learning process that allows society to discover new, better, and safer ways of doing things. Technology author Kevin Kelly puts it this way:

technologies must be evaluated in action, by action. We test them in labs, we try them out in prototypes, we use them in pilot programs, we adapt our expectations, we monitor their alterations, we redefine their aims as they are modified, we retest them given actual behavior, we re-direct them to new jobs when we are not happy with their outcomes.[37]

In other words, the proactionary principle appreciates the benefits that flow from learning by doing. The goal is to continuously assess and prioritize risks from natural and human-made systems alike, and then formulate and reformulate our toolkit of possible responses to those risks using the most practical and effective solutions available. This should make it clear that the proactionary approach is not synonymous with anarchy. Various laws, government bodies, and especially the courts play an important role in protecting rights, health, and order. But policies need to be formulated such that innovators and innovation are given the benefit of the doubt and risks are analyzed and addressed in a more flexible fashion.

Some of the most effective ways to address potential AI risks already exist in the form of “soft law” and decentralized governance solutions. These will be discussed at greater length below. But existing legal remedies include various common law solutions (torts, class actions, contract law, etc.), recall authority possessed by many regulatory agencies, and various consumer protection policies. Ex post remedies are generally superior to ex ante prior restraints if we hope to maximize innovation opportunities. Ex ante regulatory defaults are too often set closer to the red light of the precautionary principle and then enforced through volumes of convoluted red tape.

This is what the World Economic Forum has referred to as a “regulate-and-forget” system of governance,[38] or what others call a “build-and-freeze model” of regulation.[39] In such technological governance regimes, older rules are almost never revisited, even after new social, economic, and technical realities render them obsolete or ineffective.[40] A 2017 survey of the U.S. Code of Federal Regulations by Deloitte consultants revealed that 68 percent of federal regulations have never been updated and that 17 percent have only been updated once.[41] Public policies for complex and fast-moving technologies like AI cannot be set in stone and forgotten like that if America hopes to remain on the cutting edge of this sector.

Advocates of the proactionary principle look to counter this problem not by eliminating all laws or agencies, but by bringing them in line with flexible governance principles rooted in more decentralized approaches to policy concerns.[42] As many regulatory advocates suggest, it is important to embed or “bake in” various ethical best practices into AI systems to ensure that they benefit humanity. But this, too, is a process of ongoing learning and there are many ways to accomplish such goals without derailing important technological advances. What is often referred to as “value alignment” or “ethically-aligned design” is challenged by the fact that humans regularly disagree profoundly about many moral issues.[43] “Before we can put our values into machines, we have to figure out how to make our values clear and consistent,” says Harvard University psychologist Joshua D. Greene.[44]

The “Three Laws of Robotics” famously formulated decades ago by Isaac Asimov in his science fiction stories continue to be widely discussed today as a guide to embedding ethics into machines.[45] They read:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

What is usually forgotten about these principles, as AI expert Melanie Mitchell reminds us, is the way Asimov, “often focused on the unintended consequences of programming ethical rules into robots,” and how he made it clear that, if applied too literally, “such a set of rules would inevitably fail.”[46]
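The deadlock Mitchell describes can be shown in a few lines of code. This is purely my illustrative construction, not Asimov’s logic or anything from the study: a literal, priority-ordered reading of the Three Laws leaves an agent with no permitted move whenever every available option, including inaction, violates the top rule.

```python
def permitted(harms_human: bool, disobeys_order: bool, harms_self: bool) -> bool:
    """Literal priority-ordered reading of the Three Laws."""
    # Law 1 dominates: any option that harms a human is forbidden.
    if harms_human:
        return False
    # Law 2: obey human orders, unless that conflicts with Law 1 (handled above).
    if disobeys_order:
        return False
    # Law 3: self-preservation, subordinate to Laws 1 and 2.
    return not harms_self

# Dilemma: acting harms one person, while waiting ("through inaction,
# allow a human being to come to harm") harms another. Read literally,
# both options violate Law 1, so the rule set permits nothing.
options = {
    "act": permitted(harms_human=True, disobeys_order=False, harms_self=False),
    "wait": permitted(harms_human=True, disobeys_order=False, harms_self=False),
}
print(any(options.values()))  # False: no permitted action exists
```

A toy like this is exactly the kind of unintended consequence Asimov dramatized: the failure comes not from a bug in any one rule but from applying a clean-sounding rule set literally to a messy situation.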

This is why flexibility and humility are essential virtues when thinking about AI policy. The optimal governance regime for AI can be shaped by responsible innovation practices and embed important ethical principles by design without immediately defaulting to a rigid application of the precautionary principle.[47] In other words, an innovation policy regime rooted in the proactionary principle can also be infused with the same values that animate a precautionary principle-based system.[48] The difference is that the proactionary principle-based approach will look to achieve these goals in a more flexible fashion using a variety of experimental governance approaches and ex post legal enforcement options, while also encouraging still more innovation to solve problems past innovations may have caused.

To reiterate, not every AI risk is foreseeable, and many risks and harms are more amorphous or uncertain. In this sense, the wisest governance approach for AI was recently outlined by the National Institute of Standards and Technology (NIST) in its initial draft AI Risk Management Framework, which is a multistakeholder effort “to describe how the risks from AI-based systems differ from other domains and to encourage and equip many different stakeholders in AI to address those risks purposefully.”[49] NIST notes that the goal of the Framework is:

to be responsive to new risks as they emerge rather than enumerating all known risks in advance. This flexibility is particularly important where impacts are not easily foreseeable, and applications are evolving rapidly. While AI benefits and some AI risks are well-known, the AI community is only beginning to understand and classify incidents and scenarios that result in harm.[50]

This is a sensible framework for how to address AI risks because it makes it clear that it will be difficult to preemptively identify and address all potential AI risks. At the same time, there will be a continuing need to advance AI innovation while addressing AI-related harms. The key to striking that balance will be decentralized governance approaches and soft law techniques described below.

[Note: The subsequent sections of the study will detail how decentralized governance approaches and soft law techniques already are helping to address concerns about AI risks.]

Endnotes:

[1]     Adam Thierer, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, 2nd ed. (Arlington, VA: Mercatus Center at George Mason University, 2016): 1-6, 23-38; Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 48-54.

[2]     “Wingspread Statement on the Precautionary Principle,” January 1998, https://www.gdrc.org/u-gov/precaution-3.html.

[3]     Cass R. Sunstein, Laws of Fear: Beyond the Precautionary Principle (Cambridge, UK: Cambridge University Press, 2005). (“The Precautionary Principle takes many forms. But in all of them, the animating idea is that regulators should take steps to protect against potential harms, even if causal chains are unclear and even if we do not know that those harms will come to fruition.”)

[4]     Henk van den Belt, “Debating the Precautionary Principle: ‘Guilty until Proven Innocent’ or ‘Innocent until Proven Guilty’?” Plant Physiology 132 (2003): 1124.

[5]     H.W. Lewis, Technological Risk (New York: W.W. Norton & Co., 1990): x. (“The history of the human race would be dreary indeed if none of our forebears had ever been willing to accept risk in return for potential achievement.”)

[6]     Martin Rees, On the Future: Prospects for Humanity (Princeton, NJ: Princeton University Press, 2018): 136.

[7]     Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.

[8]     Adam Thierer, “How to Get the Future We Were Promised,” Discourse, January 18, 2022, https://www.discoursemagazine.com/culture-and-society/2022/01/18/how-to-get-the-future-we-were-promised.

[9]     J. Storrs Hall, Where Is My Flying Car? (San Francisco: Stripe Press, 2021).

[10]    Derek Turner and Lauren Hartzell Nichols, “The Lack of Clarity in the Precautionary Principle,” Environmental Values, Vol 13, No. 4 (2004): 449.

[11]    William Rinehart, “Vetocracy, the Costs of Vetos and Inaction,” Center for Growth & Opportunity at Utah State University, March 24, 2022, https://www.thecgo.org/benchmark/vetocracy-the-costs-of-vetos-and-inaction; Adam Thierer, “Red Tape Reform is the Key to Building Again,” The Hill, April 28, 2022, https://thehill.com/opinion/finance/3470334-red-tape-reform-is-the-key-to-building-again.

[12]    Philip K. Howard, “Radically Simplify Law,” Cato Institute, Cato Online Forum, http://www.cato.org/publications/cato-online-forum/radically-simplify-law.

[13]    Ibid.

[14]    Aaron Wildavsky, Searching for Safety (New Brunswick, NJ: Transaction Publishers, 1989): 38.

[15]    Thierer, Permissionless Innovation, at 2.

[16]    Gabrielle Bauer, “Danger: Caution Ahead,” The New Atlantis, February 4, 2022, https://www.thenewatlantis.com/publications/danger-caution-ahead.

[17]    Richard B. Belzer, “Risk Assessment, Safety Assessment, and the Estimation of Regulatory Benefits” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, 2012), 5, http://mercatus.org/publication/risk-assessment-safety-assessment-and-estimation-regulatory-benefits; John D. Graham and Jonathan Baert Wiener, eds. Risk vs. Risk: Tradeoffs in Protecting Health and the Environment, (Cambridge, MA: Harvard University Press, 1995).

[18]    Thierer, Permissionless Innovation, at 33-8.

[19]    Adam Satariano, Nick Cumming-Bruce and Rick Gladstone, “Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing,” New York Times, December 17, 2021, https://www.nytimes.com/2021/12/17/world/robot-drone-ban.html.

[20]    Adam Thierer, “Soft Law: The Reconciliation of Permissionless & Responsible Innovation,” in Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 183-240, https://www.mercatus.org/publications/technology-and-innovation/soft-law-reconciliation-permissionless-responsible-innovation.

[21]    Henry Petroski, The Evolution of Useful Things (New York: Vintage Books, 1994): 34.

Ibid., 27.

[23]    Henry Petroski, To Engineer is Human: The Role of Failure in Successful Design (New York: Vintage, 1992): 9.

[24]    James Lawson, These Are the Droids You’re Looking For: An Optimistic Vision for Artificial Intelligence, Automation and the Future of Work (London: Adam Smith Institute, 2020): 86, https://www.adamsmith.org/research/these-are-the-droids-youre-looking-for.

[25]    Max More, “The Proactionary Principle (March 2008),” Max More’s Strategic Philosophy, March 28, 2008, http://strategicphilosophy.blogspot.com/2008/03/proactionary-principle-march-2008.html.

[26]    Daniel Castro & Michael McLaughlin, “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” Information Technology and Innovation Foundation, February 4, 2019, https://itif.org/publications/2019/02/04/ten-ways-precautionary-principle-undermines-progress-artificial-intelligence.

[27]    Thierer, Permissionless Innovation.

[28]    Thierer, “Failing Better.”

[29]    Virginia Postrel, The Future and Its Enemies (New York: The Free Press, 1998): xiv.

[30]    Henry Petroski, To Engineer is Human: The Role of Failure in Successful Design (New York: Vintage, 1992): 62.

[31]    Kevin Ashton, How to Fly a Horse: The Secret History of Creation, Invention, and Discovery (New York: Doubleday, 2015): 67.

[32]    Megan McArdle, The Up Side of Down: Why Failing Well is the Key to Success (New York: Viking, 2014), 214.

[33]    F. A. Hayek, The Constitution of Liberty (London: Routledge, 1960, 1990): 81. (“Humiliating to human pride as it may be, we must recognize that the advance and even preservation of civilization are dependent upon a maximum of opportunity for accidents to happen.”)

[34]    Stefan H. Thomke, Experimentation Matters: Unlocking the Potential of New Technologies for Innovation (Harvard Business Review Press, 2003), 1.

[35]    Daniel Castro and Alan McQuinn, “How and When Regulators Should Intervene,” Information Technology and Innovation Foundation Reports, (February 2015): 2 http://www.itif.org/publications/how-and-when-regulators-should-intervene.

[36]    Ibid.

[37]    Kevin Kelly, “The Pro-Actionary Principle,” The Technium, November 11, 2008, https://kk.org/thetechnium/the-pro-actiona.

[38]    World Economic Forum, Agile Regulation for the Fourth Industrial Revolution (Geneva: Switzerland: 2020): 4, https://www.weforum.org/projects/agile-regulation-for-the-fourth-industrial-revolution.

[39]    Jordan Reimschisel and Adam Thierer, “’Build & Freeze’ Regulation Versus Iterative Innovation,” Plain Text, November 1, 2017, https://readplaintext.com/build-freeze-regulation-versus-iterative-innovation-8d5a8802e5da.

[40]    Adam Thierer, “Spring Cleaning for the Regulatory State,” AIER, May 23, 2019, https://www.aier.org/article/spring-cleaning-for-the-regulatory-state.

[41]    Daniel Byler, Beth Flores & Jason Lewris, “Using Advanced Analytics to Drive Regulatory Reform: Understanding Presidential Orders on Regulation Reform,” Deloitte, 2017, https://www2.deloitte.com/us/en/pages/public-sector/articles/advanced-analytics-federal-regulatory-reform.html.

[42]    Adam Thierer, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, American Enterprise Institute (April 2022), https://platforms.aei.org/can-the-knowledge-gap-between-regulators-and-innovators-be-narrowed.

[43]    Brian Christian, The Alignment Problem: Machine Learning and Human Values (New York: W.W. Norton & Company, 2020).

[44]    Joshua D. Greene, “Our Driverless Dilemma,” Science (June 2016): 1515.

[45]    Susan Leigh Anderson, “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics,” AI and Society, Vol. 22, No. 4, (2008): 477-493.

[46]    Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (New York: Farrar, Straus and Giroux, 2019): 126 [Kindle edition].

[47]    Thomas A. Hemphill, “The Innovation Governance Dilemma: Alternatives to the Precautionary Principle,” Technology in Society, Vol. 63 (2020): 6, https://ideas.repec.org/a/eee/teinso/v63y2020ics0160791x2030751x.html.

[48]    Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.

[49]    The National Institute of Standards and Technology, “AI Risk Management Framework: Initial Draft,” (March 17, 2022): 1, https://www.nist.gov/itl/ai-risk-management-framework.

[50]    Ibid., at 5.

]]>
https://techliberation.com/2022/05/26/the-proper-governance-default-for-ai/feed/ 3 76994
Event Notice: “2022 Tech and Innovation Summit” https://techliberation.com/2022/05/25/event-notice-2022-tech-and-innovation-summit/ https://techliberation.com/2022/05/25/event-notice-2022-tech-and-innovation-summit/#respond Wed, 25 May 2022 14:10:18 +0000 https://techliberation.com/?p=76991

Just FYI, the James Madison Institute will be hosting its “2022 Tech and Innovation Summit” on Thursday, September 15 and Friday, September 16 in Coral Gables, Florida. I’m honored to be included among the roster of speakers announced so far, which includes:

  • Ajit Pai, Former Chairman of the Federal Communications Commission
  • Adam Thierer, the Mercatus Center at George Mason University
  • Will Duffield, Cato Institute
  • Utah State Representative Cory Maloy
  • Dane Ishihara, Director of Utah’s Office of Regulatory Relief

Registration info is here.

Podcast: Remember FAANG? https://techliberation.com/2022/05/10/podcast-remember-faang/ https://techliberation.com/2022/05/10/podcast-remember-faang/#comments Tue, 10 May 2022 15:47:16 +0000 https://techliberation.com/?p=76986

Corbin Barthold invited me on Tech Freedom’s “Tech Policy Podcast” to discuss the history of antitrust and competition policy over the past half century. We covered a huge range of cases and controversies, including: the DOJ’s mega cases against IBM & AT&T, Blockbuster and Hollywood Video’s derailed merger, the Sirius-XM deal, the hysteria over the AOL-Time Warner merger, the evolution of competition in mobile markets, and how we finally ended that dreaded old MySpace monopoly!

What does the future hold for Google, Facebook, Amazon, and Netflix? Do antitrust regulators at the DOJ or FTC have enough to mount a case against these firms? Which case is most likely to have legs?

Corbin and I also talked about progress more generally and the troubling rise of Luddite thinking on both the left and right. I encourage you to give it a listen:

New Report: “Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium” https://techliberation.com/2022/05/02/new-report-governing-emerging-technology-in-an-age-of-policy-fragmentation-and-disequilibrium/ https://techliberation.com/2022/05/02/new-report-governing-emerging-technology-in-an-age-of-policy-fragmentation-and-disequilibrium/#respond Mon, 02 May 2022 18:00:35 +0000 https://techliberation.com/?p=76982

The American Enterprise Institute (AEI) has kicked off a new project called “Digital Platforms and American Life,” which will bring together a variety of scholars to answer the question: How should policymakers think about the digital platforms that have become embedded in our social and civic life? The series, which is being edited by AEI Senior Fellow Adam J. White, highlights how the democratization of knowledge and influence in the Internet age comes with incredible opportunities but also immense challenges. The contributors to this series will approach these issues from various perspectives and also address different aspects of policy as it pertains to the future of technological governance.

It is my honor to have the lead paper in this new series. My 19-page essay is entitled, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, and it represents my effort to concisely tie together all my writing over the past 30 years on governance trends for the Internet and related technologies. The key takeaways from my essay are:

  • Traditional governance mechanisms are being strained by modern technological and political realities. Newer technologies, especially digital ones, are developing at an ever-faster rate and building on top of each other, blurring lines between sectors.
  • Congress has failed to keep up with the quickening pace of technological change. It also continues to delegate much of its constitutional authority to agencies to deal with most policy concerns. But agencies are overwhelmed too. This situation is unlikely to change, creating a governance gap.
  • Decentralized governance techniques are filling the gap. Soft law—informal, iterative, experimental, and collaborative solutions—represents the new normal for technological governance. This is particularly true for information sectors, including social media platforms, for which the First Amendment acts as a major constraint on formal regulation anyway.
  • No one-size-fits-all tool can address the many governance issues related to fast-paced science and technology developments; therefore, decentralized governance mechanisms may be better suited to address newer policy concerns.

My arguments will frustrate many people of varying political dispositions because I adopt a highly pragmatic approach to technological governance. No matter what your preferred ideal state of affairs looks like in terms of technological governance, you’re bound to be disappointed by the way high-tech policy is unfolding today. Many people desire bright-line hard law that has government(s) establishing comprehensive, precautionary regulation of various tech sectors. Others prefer a clearly defined but more light-touch policy regime for emerging technology. Alas, neither of these preferred hard law dispositions describes the world we live in today, nor will either of them likely govern the future. My essay outlines a variety of reasons why such hard law approaches are breaking down today, including general legislative dysfunction, the endless delegation of power from Congress to regulatory agencies or the states, and the intensifying “pacing problem” (i.e., the fact that technological change is happening at a much faster rate than policy change).

In light of this, I argue:

it is smart to think practically about alternative governance frameworks when traditional hard-law approaches prove slow or ineffective in addressing governance needs. It is also wise to consider alternative governance frameworks that might address the occasional downsides of disruptive technologies without completely foreclosing ongoing innovation opportunities the way many hard-law solutions would.

I also show that, whether anyone cares to admit it or not, we already live in a world of multiplying “soft law” mechanisms and decentralized governance approaches. I use the example of how these new governance trends are unfolding for autonomous vehicles, but note how we see decentralized governance approaches being utilized in many other sectors. This is equally true across the Atlantic, where the United Kingdom is increasingly experimenting with new governance approaches for emerging technologies.

What counts as “soft law” or “decentralized governance” is an open-ended and ever-changing topic of discussion. But I note that, at a minimum, it includes: multi-stakeholder processes, experimental “sandboxes,” industry best practices or codes of conduct, technical standards, private certifications, agency workshops and guidance documents, informal negotiations, and education and awareness building efforts. I unpack these ideas in more detail in the essay.

For social media, soft law approaches are the current governance norm, even as hard law regulatory proposals continue to multiply rapidly. But I note that despite all that pressure for more formal regulatory governance of social media platforms, the First Amendment presents a formidable barrier to most of those proposals. Thus, soft law will continue to be the dominant governance approach here. I also conclude by predicting that soft law will become the dominant approach for artificial intelligence, too, even as regulatory proposals multiply there as well.

I’ll have more to say about my paper and other papers in the AEI series in coming weeks and months. For now, I encourage you to jump over to the website AEI has set up for the series and take a look at my new paper.


Additional Reading:

The Future of Progress Studies https://techliberation.com/2022/05/01/the-future-of-progress-studies/ https://techliberation.com/2022/05/01/the-future-of-progress-studies/#comments Sun, 01 May 2022 19:21:03 +0000 https://techliberation.com/?p=76980

If you haven’t yet had the chance to check out the new Progress Forum, I encourage you to do so. It’s a discussion group for progress studies and all things related to it. The Forum is sponsored by The Roots of Progress. Even though the Forum is still in its pre-launch phase, there are already many interesting threads worth checking out. It was my honor to contribute one of the first, on the topic “Where is ‘Progress Studies’ Going?” It’s an effort to sort through some of the questions and challenges facing the Progress Studies movement in terms of focus and philosophical grounding. I thought I would just reproduce the essay here, but I encourage you to jump over to the Progress Forum to engage in discussion about it, or the many other excellent discussions happening there on other issues.

________________

Where is “Progress Studies” Going? by Adam Thierer

What do we mean by “Progress Studies” and how can this field of study be advanced? I’ve been thinking about that question a lot since Patrick Collison and Tyler Cowen published their 2019 manifesto in The Atlantic on why “We Need a New Science of Progress.” At present, there is no overarching “unified field theory” of what Progress Studies entails or what underpins it, and that may be holding up progress on Progress Studies. I recently attended an important conference on the “Moral Foundations of Progress Studies,” co-hosted by The Roots of Progress and the Salem Center at UT Austin, where I discovered that many others were grappling with these same issues.

While a broad range of people are interested in Progress Studies, their moral priors differ, sometimes significantly. For example, the UT Austin conference included scholars from diverse disciplines (philosophy, psychology, economics, political science, history, and others) whose thinking was rooted in different philosophical traditions (utilitarianism, effective altruism, individualism, and various hybrids). Everyone shared the goal of advancing human well-being, but participants had different conceptions of the moral foundations of well-being, and even some disagreement about what well-being meant in concrete terms. There were also differing perspectives about what the “studies” part of Progress Studies should entail. Specifically, does it include progress advocacy, including the potential for specific policy recommendations?

Comprehension vs. Advocacy

Part of the confusion over the nature and goals of Progress Studies can be traced back to Collison and Cowen’s foundational essay. On one hand, their goal was progress comprehension. “Progress itself is understudied,” Collison and Cowen argued. They lamented that “there is no broad-based intellectual movement focused on understanding the dynamics of progress.”

But Collison and Cowen went further. Their goal was not merely to inspire the development of a field of study that could give us a better understanding of the prerequisites of progress, but also to formulate a plan for advancing progress. They argued that “mere comprehension is not the goal,” and advocated for “the deeper goal of speeding it up.” They went on to say, “the implicit question is how scientists [and others] should be acting” and that Progress Studies should be viewed as “closer to medicine than biology: The goal is to treat, not merely to understand.” The presupposition here is that progress is important and that we need to take steps to get a lot more of it. Again, we can think of this part of Progress Studies as progress advocacy. And advocacy can entail both advocating for progress generally as well as specific types of policy advocacy.

This raises an interesting question we debated at the UT Austin conference: Can you study something and advocate for it at the same time? Some felt you really cannot separate them, while others believed that the broader questions about how progress has worked could be kept separate from any advocacy efforts. Of course, this same tension between comprehension and advocacy comes up in many other fields.

What Progress Studies Can Learn from STS

In this sense, Progress Studies might learn some important lessons by examining the older but loosely related field of Science and Technology Studies (STS). STS incorporates a wide variety of mostly “soft science” academic disciplines, such as law, philosophy, sociology, and anthropology. These scholars analyze the relationship between technology, society, culture, and politics.

One conclusion from studying STS is obvious: comprehension and advocacy frequently get blurred. Many of the STS scholars who engage in critical studies of the history of technology seamlessly transition into anti-technology advocates, even as many of them claim they are “just studying” the issues. As I’ve noted elsewhere:

When thinking about technology, STS scholars commonly employ words like “anxiety,” “alienation,” “degradation,” and “discrimination.” Consequently, most of them suggest that the burden of proof lies squarely on scientists, engineers, and innovators to prove that their ideas and inventions will bring worth to society before they are deployed. In other words, STS scholars generally fall in the precautionary principle camp, and their policy prescriptions have grown increasingly radical over time.

Meanwhile, as I discussed in my latest book, many STS scholars describe themselves as “humanists” while implicitly suggesting that those who promote technological progress are somehow callous oafs who only care about the cold calculus of profit-seeking and creating shiny new gadgets we don’t need.

While some STS scholars continue to do important and largely objective work, many others routinely show their more radical leanings in books, essays, and social media posts. Most worrying is their newfound love of Luddism, as they spin revisionist histories of “Why Luddites Matter,” insisting that “There’s Nothing Wrong with Being a Luddite,” and that “I’m a Luddite. You Should Be One Too.” Neil Richards, a law professor and leading STS scholar, declares bluntly on Twitter: “Less metaverse, less crypto, less disruptive innovation. More regulation, more ethics, more humanity.” In other words, public policy defaults should be set squarely to the Precautionary Principle, and anyone opposed to that is unethical and anti-human. Taken to the extreme, STS scholars marry up this Luddite revisionism with the retrograde philosophy of “degrowth” and produce book chapters with titles like, “Methodological Luddism: A Concept for Tying Degrowth to the Assessment and Regulation of Technologies.”

The Progress Studies movement might consider framing its work as a response to the growing extremism of the STS movement. STS scholars have become so remarkably hostile to the very notion that science and technology are central to human advancement that the field might today better be labeled  Anti-Science & Technology Studies. Yet, these are the scholars that dominate many academic departments where students are learning about technological progress. Progress Studies scholars can push back against that radicalism and offer level-headed, empirical responses to it.

Ensuring A Big Tent 

To improve its chances of success, the Progress Studies movement should seek to broaden its appeal by avoiding a dogmatic party line on its moral foundations while ensuring that multiple disciplines and viewpoints are incorporated into it.

In terms of philosophical underpinnings, those interested in Progress Studies can take different approaches to the moral foundations of progress and human well-being. Many philosophers get frustrated when others fail to hammer out all the detailed nuances of the metaphysics, epistemology, and ethics of these matters. I understand that urge, but I’ve now spent over 30 years covering technology policy and have been constantly surprised by how many people can come together and agree on a broad set of principles about the importance of progress without sharing a common philosophical framework.

The same is true as it pertains to policy prescriptions. We need to ensure a “big tent” in this way, too. It is already the case that many people engaged in Progress Studies have very different perspectives on issues like intellectual property and industrial policy, for example. I have many friends on different sides of these issues. Importantly, there are not even clear sides on these issues but rather a very broad spectrum of viewpoints. Progress Studies scholars will likely always disagree on the finer points of both types of “IP” policy. Nonetheless, they can remain more unified in stressing the common goal of moving the needle on progress in a positive direction and highlighting the continuing importance of flexible experimentation with policies aimed at enhancing innovation and growth.

To the extent there is any litmus test for the Progress Studies movement, that’s it: advancing opportunities for innovation and growth is paramount. Regardless of how one grounds their moral philosophy, or goes about constructing a theory of rights, many people can agree that granting humans the freedom to explore, experiment, and be entrepreneurial has important benefits for individuals, families, organizations, and entire nations. Openness to change is what unifies us. Stagnation and “steady state” thinking—and the Precautionary Principle-based policies that flow from such reasoning—are the enemy.

Thus, the Progress Studies movement can focus on both studying progress and advancing it at the same time, even if some will devote more effort to one priority than the other. And we shouldn’t forget that these two objectives are reinforcing: Comprehension informs advocacy and vice versa. Progress is a never-ending process of trial and error. It’s all about learning by doing. We try, we fail, we learn, and we try again. This is as true for the individuals attempting to make progress in the real world as it is for scholars studying it and seeking to promote it.

Let us get on with this important work, regardless of what motivates us to do it.

“Building Again” Must Be More than Just Rhetoric https://techliberation.com/2022/04/29/building-again-must-be-more-than-just-rhetoric/ https://techliberation.com/2022/04/29/building-again-must-be-more-than-just-rhetoric/#comments Fri, 29 Apr 2022 18:22:05 +0000 https://techliberation.com/?p=76978

As I note in my latest regular column for The Hill, it seems like everyone these days is talking about the importance of America “building again.” For example, take a look at this compendium of essays I put together where scholars and pundits have been making the case for “building again” in various ways and contexts. It would seem that the phrase is on everyone’s lips. “These calls include many priorities,” I note, “but what unifies them is the belief that the nation needs to develop new innovations and industries to improve worker opportunities, economic growth and U.S. global competitive standing.”

What I fear, however, is that “building again” has become more of a convenient catchphrase than anything else. Few people seem willing to spell out exactly what it will take to get that started. My new column suggests that the most important place to start is “to cut back the thicket of red tape and stifling bureaucratic procedures that limit the productiveness of the American workforce.” I cite recent reports and data documenting the enormous burden that regulatory accumulation imposes on American innovators and workers. I then discuss how to get reforms started at all levels of government to get the problem under control and help us start building again in earnest. Jump over to The Hill to read the entire essay.

Book Review: “Questioning the Entrepreneurial State” https://techliberation.com/2022/04/26/book-review-questioning-the-entrepreneurial-state/ https://techliberation.com/2022/04/26/book-review-questioning-the-entrepreneurial-state/#comments Tue, 26 Apr 2022 20:14:03 +0000 https://techliberation.com/?p=76975

An important new book launched this week in Europe on issues related to innovation policy and industrial policy. “Questioning the Entrepreneurial State: Status-quo, Pitfalls, and the Need for Credible Innovation Policy” (Springer, 2022) brings together more than 30 scholars who contribute unique chapters to this impressive volume. It was edited by Karl Wennberg of the Stockholm School of Economics and Christian Sandström of the Jönköping (Sweden) International Business School.

As the title of this book suggests, the authors are generally pushing back against the thesis found in Mariana Mazzucato’s book The Entrepreneurial State (2011). That book, like many other books and essays written recently, lays out a romantic view of industrial policy that sees government as the prime mover of markets and innovation. Mazzucato calls for “a bolder vision for the State’s dynamic role in fostering economic growth” and innovation. She wants the state fully entrenched in technological investments and decision-making throughout the economy because she believes that is the best way to expand the innovative potential of a nation.

The essays in Questioning the Entrepreneurial State offer a different perspective, rooted in the realities on the ground in Europe today. Taken together, the chapters tell a fairly consistent story: Despite the existence of many different industrial policy schemes at the continental and country level, Europe isn’t in very good shape on the tech and innovation front. The heavy-handed policies and volumes of regulations imposed by the European Union and its member states have played a role in that outcome. But these governments have simultaneously been pushing to promote innovation using a variety of technocratic policy levers and industrial policy schemes. Despite all those well-intentioned efforts, the EU has struggled to keep up with the US and China in most important modern tech sectors.

As Wennberg and Sandström note in their introductory chapter:

Grand schemes toward noble outcomes have a disappointing track record in human political and economic history. Conventional wisdom regarding authorities’ inability to selectively pinpoint certain technologies, sectors, or firms as winners, and the fact that large support structures for specific technologies are bound to distort incentives and result in opportunism, seem to have been forgotten.

In summarizing the chapters, they conclude that, “while the idea of aiming high and leveraging large portions of society’s resources to address some fundamental human challenges may sound appealing to many, such ideas have limited scientific credibility.”

Why do governments frequently fail in attempts to be entrepreneurial? Johan P. Larsson gets at the heart of the matter in his chapter when noting how, “[t]he state entrepreneur is not subject to real risk, often faces no market, and cannot be properly evaluated. It pays no price for being wrong and it struggles in assigning responsibility.” Which leads to two questions that are rarely asked, he notes: “[F]irst, how do we ensure that the state pays a price for being wrong? And second, when is that price high enough for us to know it is time to cut our losses?”

The authors of another chapter (Murtinu, Foss & Klein) concur and note how, “even well-intentioned and strongly motivated public actors lack the ability to manage the process of innovation.” “As stewards of resources owned by the public,” they note, “government bureaucrats do not exercise the ultimate responsibility that comes with ownership.” In other words, the state faces problems of misaligned incentives.

Several authors in the book highlight the various public choice problems often associated with large-scale industrial policy initiatives, including rent-seeking and capture. Wennberg and Sandström note how this results in less disruption as established players don’t seek to challenge existing market or technological status quos but instead simply seek to benefit from it. “[S]upport structures, platforms for private-public cooperation, and large volumes of technology-specific money usually end up in the hands of established interest groups,” they note. “Hence, they are not very likely to question these policies but will rather go along with the ride.”

John-Erik Bergkvist and Jerker Moodysson devote an entire chapter to this problem and offer a grim assessment of how past industrial policy schemes have exacerbated it:

Assuming that policies and programs are shaped by the interest groups that are affected by the policies, we highlight the risk that policymaking may end up as support for established interest groups rather than supporting the emergence of those who could act as institutional entrepreneurs or disruptors. Policies and programs may thus be captivated by dominant actors in the established regime, who have superior financial and relational resources. The result would then be that innovation policies sustain the established socio-technical structures of industries rather than contributing to the emergence of new structures.

Other organizations are incentivized to support the status quo when big money is on the line. One of the most interesting chapters in the book was co-authored by Wennberg and Sandström along with Elias Collin. They examine the conflicts of interest inherent in many evaluations of industrial policy programs by various third parties, including academics and consultants who receive generous state contracts:

the overwhelming majority of evaluations are positive or neutral and that very few evaluations are negative. While this is the case across all categories of evaluators, we note that consulting firms stand out as particularly inclined to provide positive evaluations. The absence of negative or critical reports can be related to the fact that most of the studies do not rely upon methods that make it possible to discuss effects. This discrepancy between so many positive evaluations on the one hand and comparatively weak evaluation methods on the other hand leads us to suspect that evaluators are not sufficiently independent. Consultants and scholars that are funded by a government agency in order to evaluate the agency’s policies and programs are put in a position where it is difficult to maintain objectivity.

This is one reason why industrial policy continues to have such currency in European policy discussions despite a long track record of failure, as documented throughout this new book. The biggest problem for Europe lies in its layers of regulatory bureaucracy and heavy-handed treatment of entrepreneurs.

Later in the book, Zoltan J. Acs offers a grim account of just how bad things have been for Europe on the digital technology front in recent decades, despite the many state-led efforts to promote the sector. “The European Union protected traditional industries and hoped that existing firms would introduce new technologies. This was a policy designed to fail,” Acs argues. “What has been the outcome of E.U. policy in limiting entrepreneurial activity over recent decades?” he asks. Acs concludes that:

It is immediately clear… that the United States and China dominate the platform landscape. Based on the market value of top companies, the United States alone represents 66% of the world’s platform economy with 41 of the top 100 companies. European platform-based companies play a marginal role, with only 3% of market value.

He says that the United Kingdom’s “Brexit” from the European Union was a logical move, “because E.U. regulations were holding back the U.K.’s strong DPE (digital platform economy).” “If the United Kingdom was to realize its economic potential, it had to extricate itself from the European Union,” Acs says, due to the “dysfunctional E.U. bureaucracy.” No amount of industrial policy support is going to allow European firms to overcome those burdens. In fact, many of Europe’s industrial policy programs create the very disincentives that retard innovation and discourage entrepreneurialism in key sectors.

Several of the authors in the collection stress how the better role for the state is usually to set the table for innovation and growth without trying to determine everything that is served on the plate. As Wennberg and Sandström summarize:

the best policies to promote innovation are those that promote productive economic activity more generally: property rights protection, open and contestable markets, a stable monetary system, and legal rules that favor competition and entrepreneurship. Policy should promote an institutional environment in which innovation and entrepreneurship can flourish without trying to anticipate the specific outcomes of those processes—an impossible task in the face of uncertainty, technological change, and a dynamic, knowledge-based economy.

That’s good advice, as is everything found throughout the book. I encourage all those interested in these issues to take a hard look at it because it is particularly relevant even here in the United States, as Congress is currently considering a massive new 3,000-page, $350 billion industrial policy bill that I’ve labelled “The Most Corporatist & Wasteful Industrial Policy Ever.” There doesn’t seem to be anything stopping the momentum of this effort, with both liberals and conservatives lining up to pass out the pork. I wish I could put a copy of Questioning the Entrepreneurial State in all their hands and ask them to read every word of it before they gamble hundreds of billions on such foolish efforts.


Additional Reading:

Slide Presentation on “The Future of Innovation Policy” https://techliberation.com/2022/04/18/slide-presentation-on-the-future-of-innovation-policy/ https://techliberation.com/2022/04/18/slide-presentation-on-the-future-of-innovation-policy/#comments Mon, 18 Apr 2022 19:24:10 +0000 https://techliberation.com/?p=76968

Here’s a slide presentation on “The Future of Innovation Policy” that I presented to some student groups recently. It builds on themes discussed in my recent books, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, and Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments. I specifically discuss the tension between permissionless innovation and the precautionary principle as competing policy defaults.

The Precautionary Principle: A Plea for Proportionality https://techliberation.com/2022/02/07/the-precautionary-principle-a-plea-for-proportionality/ https://techliberation.com/2022/02/07/the-precautionary-principle-a-plea-for-proportionality/#comments Mon, 07 Feb 2022 19:57:03 +0000 https://techliberation.com/?p=76949

Gabrielle Bauer, a Toronto-based medical writer, has just published one of the most concise explanations of what’s wrong with the precautionary principle that I have ever read. The precautionary principle, you will recall, generally refers to public policies that limit or even prohibit trial-and-error experimentation and risk-taking. Innovations are restricted until their creators can prove that they will not cause any harms or disruptions. In an essay for The New Atlantis entitled, “Danger: Caution Ahead,” Bauer uses the world’s recent experiences with COVID lockdowns as the backdrop for how society can sometimes take extreme caution too far, and create more serious dangers in the process. “The phrase ‘abundance of caution’ captures the precautionary principle in a more literary way,” Bauer notes. Indeed, another way to look at it is through the prism of the old saying, “better to be safe than sorry.” The problem, she correctly observes, is that, “extreme caution comes at a cost.” This is exactly right and it points to the profound trade-offs associated with precautionary principle thinking in practice.

In my own writing about the problems associated with the precautionary principle (see the list of essays at bottom), I often like to paraphrase an old nugget of wisdom from St. Thomas Aquinas, who noted in his Summa Theologica that, if the highest aim of a captain were merely to preserve their ship, then they would simply keep it in port forever. Of course, that is not the only goal a captain has. The safety of the vessel and the crew is essential, but captains brave the high seas because there are good reasons to take such risks. Most obviously, it might be how they make their living. But historically, captains have also taken to the seas as pioneering explorers, researchers, or even just thrill-seekers.

This was equally true when humans first decided to take to the air in balloons, blimps, airplanes, and rockets. A strict application of the precautionary principle would have told us to keep our feet on the ground. Better to be safe than sorry! Thankfully, many brave souls ignored that advice and took to the heavens in the spirit of exploration and adventure. As Wilbur Wright once famously said, “If you are looking for perfect safety, you would do well to sit on a fence and watch the birds.” Needless to say, humans would never have mastered the skies if the Wright brothers (and many others) had not gotten off the fence and taken the risks they did.

Opportunity Costs Matter

Here we get to the true danger of strict versions of the precautionary principle: It essentially becomes a crime to get off the fence and do anything risky at all. This sets up the potential for stasis and stagnation as societal learning is severely curtailed. Progress becomes harder because there can be no reward, individual or societal, without some risk. “Caution makes sense except when it doesn’t,” Bauer notes. She continues:

Used too liberally, the precautionary principle can keep us stuck in a state of extreme risk-aversion, leading to cumbersome policies that weigh down our lives. To get to the good parts of life, we need to accept some risk.

As I argued in a book on these issues, the root problem with precautionary principle thinking is that “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.” If societal attitudes and public policy will not tolerate the idea of any error resulting from experimentation with new and better ways of doing things, then we will obviously not get many new and better things! Scientist Martin Rees refers to this truism about the precautionary principle as “the hidden cost of saying no.”

The opportunity cost of inaction or stasis can be hard to quantify, but imagine if we organized our entire society around a rigid application of the precautionary principle. Bauer notes that this is basically what we did during COVID. And the results are in. “It’s far past time we ask ourselves when abundance really means excess, when our precautionary measures against Covid have gone too far, when we have ignored the costs and lost all sense of proportionality.” Unfortunately, the precautionary mindset, which is always rooted in fear of the unknown, took control. As Bauer notes:

It should have been socially acceptable to debate the merits of these tradeoffs, with nuance and without censure. But that is not what happened. Early in the pandemic, an unspoken rule — thou shalt not question the costs — sprang up and stifled discourse.

“And here’s the worst of it: the costs of excess caution can persist long after the initial danger has passed,” she notes. “It’s no different with Covid: our knee-jerk caution may have downstream effects that persist after the virus has ceased to be a threat.” She cites many compelling examples of the negative effects associated with extreme precautionary thinking during COVID, noting how, “[t]he impact of travel and trade restrictions on food security and childhood vaccination in developing countries will likely reverberate for decades.” Moreover:

The Covid-19 pandemic has laid bare the risks of extreme protection: lost businesses, lost livelihoods, lost graduations, lost loves, lost goodbyes; the loss of personal agency over life’s most intimate and meaningful moments; the loss, quite possibly, of our cherished principles of liberal democracy. A recent report by International IDEA, a democracy advocacy organization, concluded that many countries had become more authoritarian as they took steps to contain the pandemic.

This list of lockdown trade-offs goes on and the aggregate costs will be staggering once economists and others get around to better estimating them. As noted, gauging those costs will be challenging because of the many variables and values that come into play. But it remains vital that society takes risk analysis and trade-offs more seriously so that we don’t make these mistakes again and again.

Proportionality is the Key

Toward that end, Bauer makes “a plea for proportionality.” She wants society to strike a more reasonable balance when it comes to policy measures that might block actions and research that could help us better understand how to deal with risk uncertainties. Accordingly, “we must understand when to apply the precautionary principle and when to move on from it.”

“The precautionary principle doesn’t come with such checks and balances. On the contrary, it tends to perpetuate itself and acquire a life of its own,” she notes. In other words, once set in place initially for a given issue or sector, precautionary principle thinking tends to grow like bad weeds until it has taken over everything in sight. (To see the consequences of that in fields like aviation, space, nanotech, and others, please check out J. Storrs Hall’s amazing new book, Where Is My Flying Car?)

Of course, proportionality cuts both ways. As I noted in my last two books, there are some instances in which at least a light version of the precautionary principle should be preemptively applied, but they are limited to scenarios where the threat in question is tangible, immediate, irreversible, and catastrophic in nature. In such cases, I argue, society would be better served by thinking about when an “anti-catastrophe principle” is needed, which narrows the scope of the precautionary principle and focuses it more appropriately on the unambiguously worst-case scenarios that meet those criteria. Generally speaking, however, this test is not satisfied in the vast majority of cases. “Innovation allowed” should be our default principle.

Conclusion

The single most important thing that we must always remember when debating precautionary principle-based policies is that, just because someone has good intentions and claims safety as their goal, that does not automatically make the world a safer place. To repeat: Excessive safety-related measures can result in less safety overall. Or again, as Bauer says, “extreme caution comes at a cost.”

No one ever summarized this truism more clearly than the great political scientist Aaron Wildavsky, who devoted much of his life’s work to proving how efforts to create a risk-free society would instead lead to an extremely unsafe society. In his 1988 book, Searching for Safety, Wildavsky warned of the dangers of “trial without error” reasoning, and contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. He argued that wisdom is born of experience and that we can learn how to be wealthier and healthier as individuals and a society only by first being willing to embrace uncertainty and even occasional failure. Here was the crucial takeaway:

The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. . . . Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.

Trial and error is the basis of all societal learning, and without it, humanity will be less safe and less prosperous over the long run. Gabrielle Bauer’s new essay captures that insight better than anything I’ve read since Wildavsky was writing about the dangers of the precautionary principle. I beg you to jump over to The New Atlantis and read her entire article. It’s absolutely essential.


Additional reading from Adam Thierer on the precautionary principle

Podcast: An Update on Federal & State Driverless Cars Policy https://techliberation.com/2022/02/04/podcast-an-update-on-federal-state-driverless-cars-policy/ https://techliberation.com/2022/02/04/podcast-an-update-on-federal-state-driverless-cars-policy/#comments Fri, 04 Feb 2022 19:44:27 +0000 https://techliberation.com/?p=76947

This week, I hosted another installment of the “Tech Roundup” for the Federalist Society’s Regulatory Transparency Project. This latest 30-minute episode was on “Autonomous Vehicles: Where Are We Now?” I was joined by Marc Scribner, a transportation policy expert with the Reason Foundation. We provided a quick update on where federal and state policy for AVs stands as of early 2022 and offered some thoughts about what might happen next in the Biden administration’s Department of Transportation (DOT). Some experts believe that the DOT could be ready to start aggressively regulating driverless car tech or AV companies, especially Elon Musk’s Tesla. Tune in to hear what Marc and I have to say about all that and more.

Related Reading:


The Case for Innovation, Progress & Abundance: Some Readings https://techliberation.com/2022/01/25/the-case-for-innovation-progress-abundance-some-readings/ https://techliberation.com/2022/01/25/the-case-for-innovation-progress-abundance-some-readings/#comments Tue, 25 Jan 2022 20:27:31 +0000 https://techliberation.com/?p=76937

This is a compendium of readings on “progress studies,” or essays and books which generally make the case for technological innovation, dynamism, economic growth, and abundance. I will update this list as additional material of relevance is brought to my attention.

[Last update: 10/11/22]

Recent Essays

Books

Can Government Reproduce Silicon Valley Everywhere? https://techliberation.com/2021/09/12/can-government-reproduce-silicon-valley-everywhere/ https://techliberation.com/2021/09/12/can-government-reproduce-silicon-valley-everywhere/#comments Sun, 12 Sep 2021 17:36:07 +0000 https://techliberation.com/?p=76903

Wishful thinking is a dangerous drug. Some pundits and policymakers believe that, if your intentions are pure and you have the “right” people in power, all government needs to do is sprinkle a little pixie dust (in the form of billions of taxpayer dollars) and magical things will happen.

Of course, reality has a funny way of throwing a wrench into the best-laid plans. Which brings me to the question I raise in a new two-part series for Discourse magazine: Can governments replicate Silicon Valley everywhere?

In the first installment, I explore the track record of federal and state attempts to build tech clusters, science parks, and “regional innovation hubs” using state subsidies and industrial policy. This is highly relevant today because the huge new industrial policy push at the federal level is building on top of growing state and local efforts to create tech hubs, science parks, and various other types of industrial “clusters.”

At the federal level, this summer, the Senate passed a 2,300-page industrial policy bill, the “United States Innovation and Competition Act of 2021,” that included almost $10 billion over four years for a Department of Commerce-led effort to fund 20 new regional technology hubs, “in a manner that ensures geographic diversity and representation from communities of differing populations.” A similar proposal that is moving in the House, the “Regional Innovation Act of 2021,” proposes almost $7 billion over five years for 10 regional tech hubs. Meanwhile, the Biden administration also is pitching ideas for new high-tech hubs. In late July, the Commerce Department’s Economic Development Administration announced plans to allocate $1 billion in pandemic recovery funds to create or expand “regional industry clusters” as part of the administration’s new Build Back Better Regional Challenge. Among the possible ideas the agency said might win funding are an “artificial intelligence corridor,” an “agriculture-technology cluster” in rural coal counties, a “blue economy cluster” in coastal regions, and a “climate-friendly electric vehicle cluster.”

In my essay, I note that the economic literature on these efforts has been fairly negative, to put it mildly. There is no precise recipe for growing tech clusters, as most economists and business analysts note.

“Despite several attempts, Silicon Valley has not been successfully copied elsewhere,” notes Mark Zachary Taylor, author of “The Politics of Innovation: Why Some Countries Are Better Than Others at Science and Technology.” Judge Glock, a senior policy adviser with the Cicero Institute, offers a more blistering assessment of such efforts: “Almost every American state has tried to fund the creation of biotech clusters, projects that almost inevitably end with weeds growing through the parking-lot pavement and a trail of corrupt bargains.”

I then highlight the key findings from several major studies of these efforts, all of which make it clear that, as cluster scholars Aaron Chatterji, Edward Glaeser and William Kerr noted in 2014 after gathering all the research conducted on the topic, the existing evidence “suggests that the regional foundation for growth-enabling innovation is complex and that we should be cautious of single policy solutions that claim to fit all needs.” Furthermore, “even if clusters of entrepreneurship are good for local growth, it is less clear that cities or states have the ability to generate those clusters.”

I also highlight research from my Mercatus Center colleagues on “The Economics of a Targeted Economic Development Subsidy,” which documents the costs of state-level planning and includes a case study of the Foxconn fiasco. They summarize the fairly miserable track record of state and local mini-industrial policy efforts. As they note, the extensive economic literature on this matter finds that “the net effect of targeted economic development subsidies is likely to be negative” because “the taxes funding the subsidies will discourage more economic activity than will be encouraged by the subsidies themselves.” Similarly, Harvard Business School economist Josh Lerner evaluated dozens of similar targeted development efforts from around the globe in his 2009 book Boulevard of Broken Dreams: Why Public Efforts to Boost Entrepreneurship and Venture Capital Have Failed—and What to Do About It. He concluded that “for each effective government intervention, there have been dozens, even hundreds, of failures, where substantial public expenditures bore no fruit.”

In my essay, I also discuss the astonishing array of federal efforts to promote the geographic spread of high-tech sectors and jobs since 2000. Throughout the Bush, Obama, Trump and Biden administrations, there has been a lot of spending, but not a lot of success. Just lots of new laws and bureaucracies:

In 2012, the Obama administration launched the multiagency Rural Jobs and Innovation Accelerator Challenge and Advanced Manufacturing Jobs and Innovation Accelerator Challenge. This occurred at roughly the same time President Obama was launching his Startup America initiative. He also signed the JOBS Act (Jump-start Our Business Startups) in 2012. All these efforts included various measures to support the spread of advanced manufacturing and high-tech startups across the U.S. But none of these efforts have borne much fruit so far.

In the second installment of this series, I explore better ways to encourage regional tech innovation and economic development without doubling down on failed programs of the past. Specifically, I explain why, when it comes to economic development efforts, policymakers would be wise to avoid the costly, ineffective “fun stuff” and refocus on time-tested “boring” strategies:

The boring approach to economic development seeks to promote an open innovation culture that is conducive to risk-taking, investment and growth without the need to extend targeted privileges to particular firms or industries. Such a culture comes down to a classic mix of simplified and equally applied taxes, streamlined permitting processes and sensible regulations, limits on frivolous lawsuits, and clear protection of contracts and property rights. As Matt Mitchell and I argued previously, policymakers need to resist the urge to go for broke with splashy policies and programs. They need to appreciate the benefits of generalized economic development policy (a.k.a. the boring approach) as opposed to far riskier targeted development efforts.

I also highlight recent research explaining how perhaps the simplest way to strengthen existing clusters, or give rise to new ones, is to make sure America’s immigration policies are hospitable to the best and brightest minds from across the globe.

And I note how, due to the problems associated with many other forms of government-sponsored R&D assistance, many scholars and policymakers are increasingly turning to the idea of government-sponsored competitions and prizes as a superior way to distribute R&D assistance.

With competitions, governments can set broad goals to help facilitate the search for important societal needs. The prizes then create a powerful incentive for innovators to pursue those goals, not only to win money, but also to gain recognition from peers and the public. Another alternative is just using lotteries to distribute R&D money instead of having agencies target grants. That at least avoids political shenanigans and paperwork delays, although it may not be a particularly effective approach.

There is also some good news that is overlooked in today’s rush to make big industrial policy gambles: Venture capitalists and new startups are already spreading out naturally.

A 2021 study on “The State of the Startup Ecosystem” by Engine, a research and advocacy organization supporting startups, revealed that “as Series A funding grew over the last fifteen years, more of that growth has started to shift to areas located outside of the largest ecosystems.” Series A funding refers to the initial round of outside venture capital investment in startups. The report looked at Series A deals from 2003 to 2018 and found that “Series A rounds outside of the top five ecosystems grew nearly 900 percent, while the number of rounds outside of the top nine grew nearly tenfold.” Whereas Series A rounds outside of the top five ecosystems accounted for 38% of deals in 2003, they had jumped to 43% by 2018. “The increase in deal location diversity over this period reflects an increasing spread in venture capital investment across the country and less centralization of investment in areas like Silicon Valley,” the report concluded.

Meanwhile, tech innovators and investors are increasingly engaging in innovation arbitrage as they move to cities and states across the nation that are more hospitable to entrepreneurial activities. Firms and investors are voting with their feet (and dollars) by flocking to areas where tech clusters can more naturally sprout because the general policy environment is sound.

But government efforts to artificially try to create regional innovation hubs in a top-down, technocratic fashion will almost certainly persist. As they do, some will argue that this time will be different! Perhaps, but it is more likely that the past is prologue; these new hubs will likely cause federal politicians to jockey for position to have their regions named one of the winners and get a big cut of all the new high-tech pork being served up by Washington. We can do better.

Jump over to Discourse to read both installments here and here.

Also, down below I list several other things I have written recently on industrial policy efforts more generally.

Keeping Uncle Sam out of the Industrial Policy Casino https://techliberation.com/2021/07/16/keeping-uncle-sam-out-of-the-industrial-policy-casino/ https://techliberation.com/2021/07/16/keeping-uncle-sam-out-of-the-industrial-policy-casino/#comments Fri, 16 Jul 2021 19:01:32 +0000 https://techliberation.com/?p=76898

In my latest column for The Hill, I consider the dangers of government gambling our tax dollars on risky industrial policy programs. I begin by noting:

Roll the dice at a casino enough times, and you are bound to win a few games. But knowing the odds are not in your favor, how much are you willing to risk losing by continuing to gamble? This is the same issue governments confront when they gamble taxpayer dollars on industrial policy efforts, which can best be described as targeted and directed efforts to plan for specific future industrial outputs and outcomes. Throwing enough money at risky ventures might net a few wins, but at what cost? Could those resources have been better spent? And do bureaucrats really make better bets than private investors?

I go on to note that, while the U.S. is embarking on a major new industrial policy push, history does not provide us with a lot of hope regarding Uncle Sam’s betting record when he starts rolling those industrial policy dice. “How much tolerance should the public have for government industrial policy gambling?” I ask, and continue:

Generally speaking, “basic” support (broad-based funding for universities and research labs) is wiser than “applied” (targeted subsidies for specific firms or sectors). With basic R&D funding, the chances of wasting resources on risky investments can be contained, at least as compared to highly targeted investments in unproven technologies and firms.

I also argue that “The riskiest bets on new technologies and sectors are better left to private investors,” and note how, “America’s venture capital industry remains the envy of the world because it continues to power world-beating advanced technology.” Accordingly, I conclude:

While some government investments will always be necessary, policymakers engaging in casino economics means bad industrial policy bets and taxpayer money squandered on risky ventures best made by private actors. We need to keep Uncle Sam’s gambling habits in check.

Read the whole thing here. And here’s a list of more of my recent writing on industrial policy:

Remembering the ‘Japan Inc.’ Industrial Policy Scare of the 1980s & 1990s https://techliberation.com/2021/06/29/remembering-the-japan-inc-industrial-policy-scare-of-the-1980s-1990s/ https://techliberation.com/2021/06/29/remembering-the-japan-inc-industrial-policy-scare-of-the-1980s-1990s/#respond Tue, 29 Jun 2021 16:12:22 +0000 https://techliberation.com/?p=76892

Discourse magazine has just published my latest essay, “‘Japan Inc.’ and Other Tales of Industrial Policy Apocalypse.” It is a short history of the hysteria surrounding the growth of Japan in the 1980s and early 1990s and its various industrial policy efforts. I begin by noting that, “American pundits and policymakers are today raising a litany of complaints about Chinese industrial policies, trade practices, industrial espionage and military expansion. Some of these concerns have merit. In each case, however, it is easy to find identical fears that were raised about Japan a generation ago.” I then walk through many of the leading books, opeds, movies, and other things from that past era to show how that was the case.

“Hysteria” is not too strong a word to use in this case. Many pundits and politicians were panicking about the rise of Japan economically, and more specifically about the way Japan’s Ministry of International Trade and Industry (MITI) was formulating industrial policy schemes for industrial sectors in which they hoped to make advances. This resulted in a veritable “MITI mania” here in America. “U.S. officials and market analysts came to view MITI with a combination of reverence and revulsion, believing that it had concocted an industrial policy cocktail that was fueling Japan’s success at the expense of American companies and interests,” I note. Countless books and essays were published with breathless titles and predictions. I go through dozens of them in my essay. Meanwhile, the debate in policy circles and on Capitol Hill even took on an ugly racial tinge, with some lawmakers calling the Japanese “leeches” and suggesting the U.S. should have dropped more atomic bombs on Japan during World War II. At one point, several members of Congress gathered on the lawn of the U.S. Capitol in 1987 to smash Japanese electronics with sledgehammers.

All this hysteria about Japan and MITI bore little resemblance to reality. In fact, as I note in the essay, the MITI industrial planning model fell apart after it made a host of horribly bad bets and the stock market tanked in the late 1980s. Corruption also became a huge problem within many state-led efforts. A 2000 report by the Policy Research Institute within Japan’s Ministry of Finance concluded that “the Japanese model was not the source of Japanese competitiveness but the cause of our failure.” MITI was renamed the Ministry of Economy, Trade and Industry at about the same time, and its mission shifted more toward market-oriented reforms.

Industrial policy came to be viewed as a bit of a joke in America after that, but now it is back with a vengeance, thanks largely to the rise of Chinese economic power. Thus, because “we hear echoes from the Japan Inc. era debates in today’s policy discussions about China and industrial policy planning,” I end my essay with some lessons from the ‘Japan Inc.’ era for today’s industrial policy debates:

This similarity demonstrates the first lesson we can learn from the previous era: It is important to separate serious geopolitical and economic analysis from breathless fear-mongering and borderline xenophobia. The former has a serious place in policy discussions; the latter needs to be called out and shunned. After all, there are many legitimate worries about rising Chinese power, particularly when it involves Chinese Communist Party efforts to squash human rights domestically or to engage in industrial espionage, trade mercantilism and military adventurism abroad. Separating serious matters from trivial or imaginary ones is crucial, especially to help keep peace between nations. Avoiding hysteria is especially pertinent today with a wave of anti-Asian sentiment and attacks on the rise in the U.S.

A second lesson from the Japan Inc. experience relates to today’s renewed interest in industrial policy: Forecasting the future of nations and economies—and trying to plan for it—is a tricky business. A huge range of variables affects global competitiveness and technological advancement. A nonexhaustive list of some of the most important factors would include legal and political stability, physical and intellectual property rights, tax burdens, competition policy, trade and investment laws, monetary policy, research and development efforts, and even demographic factors and access to certain natural resources. Understanding how these and other factors all work together is an inexact science. When targeted industrial policy mechanisms are added to the mix, it becomes even harder to untangle which variables are making the most difference.

Both in the past and today, a less visible group of scholars has suggested that an embrace of entrepreneurialism and free trade was the fundamental factor driving Japanese economic expansion in the past and China’s amazing growth today. Openness to markets, they say, drove the enormous economic expansions—which also happened during times of much-needed catch-up modernization in both countries. But these perspectives have usually been shouted out of the room by louder voices, who either bombastically blast or praise industrial policy mechanisms as the prime mover in the economic rejuvenation of both nations. We need to tamp down on the magical thinking that governments can easily achieve technological innovation and economic growth by simply spinning a few industrial policy gauges. A few big bets may pay off, but that doesn’t justify governments engaging in casino economics regularly. History more often shows that grandiose industrial policy schemes simply result in cost overruns, cronyism and even corruption.

I also conclude by noting that:

Perhaps the most ironic indictment of industrial policy punditry lies in the way all the earlier books and essays about Japanese planning not only failed to forecast the many flops associated with it, but also did not foresee China as a potential future economic juggernaut. Korea, Singapore and Taiwan were mentioned as potential Asian challengers, but no one gave China much consideration. What might that tell us about the ability of experts to predict the future course of countries and economies? It is a reminder of the wisdom of another great Yogi Berra quote: “It’s tough to make predictions, especially about the future.”

You can read the entire piece, as well as several others listed below, over at Discourse.


Recent writing on industrial policy:
Innovation policy in Arizona https://techliberation.com/2021/06/17/innovation-policy-in-arizona/ https://techliberation.com/2021/06/17/innovation-policy-in-arizona/#comments Thu, 17 Jun 2021 14:12:05 +0000 https://techliberation.com/?p=76881

I write about telecom and tech policy and have found that lawmakers and regulators are eager to learn about new technologies. That said, I find that good tech policies usually die of neglect as lawmakers and lobbyists get busy patching up or growing “legacy” policy areas, like public pensions, income taxes, Medicare, school financing, and so forth. So it was a pleasant surprise this spring to see Arizona lawmakers prioritize and pass several laws that anticipate and encourage brand-new technologies and industries.

Flying cars, autonomous vehicles, telehealth: legislating in any one of these novel legal areas is noteworthy. Passing new laws in all of these areas, plus other tech areas, as Arizona did in 2021, is a huge achievement and an invitation to entrepreneurs and industry to build in Arizona.

On AVs and telehealth: Arizona was already a national leader in autonomous vehicles, and Gov. Ducey in 2015 created the first (to my knowledge) statewide AV task force, something that was imitated nationwide. A new law codifies some of those executive orders and establishes safety rules for testing and commercializing AVs. Another law liberalizes and mainstreams telehealth as an alternative to in-person doctor visits.

A few highlights about new Arizona laws on legal areas I’ve followed more closely:

  1. Urban air mobility and passenger drones

Arizona lawmakers passed a law (HB 2485) creating an Urban Air Mobility study committee. Its 26 members, drawn from the public and private sectors, are charged with evaluating current regulations that affect and impede the urban air mobility industry and making recommendations to lawmakers. “Urban air mobility” refers to the growing aviation industry devoted to new, small aircraft designs, including eVTOL aircraft and passenger drones, for the air taxi industry. Despite the name, urban air mobility includes intra-city aviation (say, central business district to airport) as well as regional aviation between small cities.

The law is well timed. The US Air Force is giving eVTOL aircraft companies access to military airspace and facilities this year, in part to jumpstart the US commercial eVTOL industry, and NASA recently released a new study (PDF) about regional aviation and technology. NASA and the FAA last year also endorsed the idea of urban air mobility corridors and it’s part of the national strategy for new aviation.

The federal government will be partnering with cities and state DOTs in the next few years to study air taxis and to test the corridor concept. One task for this Arizona study committee might be to identify possible UAM aerial corridors in the state and cargo missions for experimental UAM flights. The committee could also identify the regulatory and zoning obstacles to, say, constructing or retrofitting a two-story air taxi vertiport in downtown Phoenix or Tucson.

Several states have drone advisory committees, but this law makes Arizona a national trailblazer when it comes to urban air mobility. Very few states have made this a legislative priority: in May 2020, an Oklahoma law created a task force to examine autonomous vehicles and passenger drones, and this week Texas joined Oklahoma and Arizona on this front when Gov. Abbott signed a similar law creating an urban air mobility committee.

  2. Smart corridor and broadband infrastructure construction

Infrastructure companies nationwide are begging state and local officials to allow them to build along roadways. These “smart road” projects include installing 5G antennas, fiber optics, lidar, GPS nodes, and other technologies for broadband or for connected and autonomous vehicles. To respond to that trend, Arizona passed a law (HB 2596) on May 10 that allows the state DOT, either solely or via public-private partnership, to construct and lease out roadside passive infrastructure.

In particular, the new law allows the state DOT to construct, manage, and lease out passive “telecommunication facilities,” not simply conduit, which was all that existing law allowed. “Telecommunication facilities” is defined broadly:

Any cable, line, fiber, wire, conduit, innerduct, access manhole, handhole, tower, hut, pedestal, pole, box, transmitting equipment, receiving equipment or power equipment or any other equipment, system or device that is used to transmit, receive, produce or distribute by wireless, wireline, electronic or optical signal for communication purposes.

The new Section 28-7383 also allows the state to enter into an agreement with a public or private entity “for the purpose of using, managing or operating” these state-owned assets. Access to all infrastructure must be non-exclusive, in order to promote competition between telecom and smart city providers. Access to the rights-of-way and infrastructure must also be non-discriminatory, which prevents a public-private partner from favoring its affiliated or favored providers. 

Leasing revenues from private companies using the roadside infrastructure are deposited into a new Smart Corridor Trust Fund, which is used to expand the smart corridor network infrastructure. The program also makes it easier for multiple providers to access the rights-of-way and roadside infrastructure, which should speed the deployment of 5G antennas and extend fiber backhaul and Internet connectivity to rural areas.

It’s the most ambitious smart corridor and telecom infrastructure deployment program I’ve seen. There have been some smaller projects involving the competitive leasing of roadside conduit and poles, like in Lincoln, Nebraska and a proposal in Michigan, but I don’t know of any state encouraging this statewide.

For more about this topic of public-private partnerships and open-access smart corridors, you can read my law review article with Prof. Korok Ray: Smart Cities, Dumb Infrastructure.

  3. Legal protections for residents to install broadband infrastructure on their property

Finally, in May, Gov. Ducey signed a law (HB 2711), sponsored by Rep. Nutt, that resembles and supplements the FCC’s “over-the-air reception device” rules protecting homeowner installations of wireless broadband antennas. Many renters and landowners, especially in rural areas where wireless home Internet makes more sense, want to install wireless broadband antennas on their property, and this Arizona law protects them from local zoning and permitting regulations that would “unreasonably” delay or raise the cost of antenna installation. (This is sometimes called the “pizza box rule”: the antenna is protected if it is smaller than one meter in diameter.) Without this state law and the FCC rules, towns and counties could, and would, prohibit antennas or fine residents and broadband companies for installing small broadband and TV antennas on the grounds that the antennas are an unpermitted accessory structure or a zoning violation.

The FCC’s new 2021 rules are broader and protect certain types of outdoor 5G and WiFi antennas that serve multiple households. The Arizona law doesn’t extend to these “one-to-many” antennas, but its protections supplement the FCC rules and are clearer: the FCC can directly regulate antennas but not town and city officials, while the state law binds local governments directly. Between the FCC rules and the Arizona law, Arizona households and renters have substantial new freedom to install 5G and other wireless antennas on their rooftops, balconies, and yard poles. In rural areas especially, this will help get infrastructure and small broadband antennas installed quickly on private property.

Too often, policy debates among state lawmakers and agencies are dominated by incremental reforms of longstanding issues and established industries. Very few states plant the seeds, via policy and law, for new industries. Passenger drones, smart corridors, autonomous vehicles, and drone delivery are maturing technologies. Preparing for those industries signals to companies and their investors that innovation, legal clarity, and investment are priorities for the state. Hopefully other states will follow Arizona’s lead and encourage the industries and services of the future.

]]>