Articles by Adam Thierer

Adam Thierer is a Senior Fellow in Technology & Innovation at the R Street Institute in Washington, DC. He was formerly a senior research fellow at the Mercatus Center at George Mason University, President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and a Fellow in Economic Policy at the Heritage Foundation.


It was my pleasure to recently appear on the Independent Women’s Forum’s “She Thinks” podcast to discuss “Artificial Intelligence for Dummies.” In this 24-minute conversation with host Beverly Hallberg, I outline basic definitions, identify potential benefits, and then consider some of the risks associated with AI, machine learning, and algorithmic systems.

As a reminder, you can find all my relevant past work on these issues via my "Running List of My Research on AI, ML & Robotics Policy."

Here’s the video from a June 6th event, “Does the US Need a New AI Regulator?,” co-hosted by the Center for Data Innovation and the R Street Institute. We discuss algorithmic audits, AI licensing, an “FDA for algorithms” and other possible regulatory approaches, as well as various “soft law” self-regulatory efforts and targeted agency efforts. The event was hosted by Daniel Castro and included Lee Tiedrich, Shane Tews, Ben Shneiderman and me.

Continue reading →

It was my pleasure to recently join Matthew Lesh, Director of Public Policy and Communications for the London-based Institute of Economic Affairs (IEA), for the IEA podcast discussion, “Should We Regulate AI?” In our wide-ranging 30-minute conversation, we discuss how artificial intelligence policy is playing out across nations, and I explain why I feel the UK has positioned itself smartly relative to the US and EU on AI policy. I argue that the UK approach encourages a better ‘innovation culture’ than the new US model being formulated by the Biden Administration.

We also went through some of the many concerns driving calls to regulate AI today, including: fears about job dislocations, privacy and security issues, national security and existential risks, and much more.

Continue reading →

The R Street Institute has just released my latest study on AI governance and how to address “alignment” concerns in a bottom-up fashion. The 40-page report is entitled, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”

My report asks: Is it possible to address AI alignment without starting with the Precautionary Principle as the default governance baseline? I explain how that is indeed possible. While some critics claim that no one is seriously trying to deal with AI alignment today, my report explains how no technology in history has been more heavily scrutinized this early in its life cycle than AI, machine learning and robotics. The number of ethical frameworks already out there is astonishing. We don’t have too few alignment frameworks; we probably have too many!

We need to get serious about bringing some consistency to these efforts and figure out more concrete ways to build a culture of safety by embedding ethics by design. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society.

Continue reading →

Recently, the Future of Life Institute released an open letter, signed by some computer science luminaries and others, calling for a 6-month “pause” on the deployment and research of “giant” artificial intelligence (AI) technologies. Eliezer Yudkowsky, a prominent AI ethicist, then made news by arguing that the “pause” letter did not go far enough, and he proposed that governments consider “airstrikes” against data processing centers, or even be open to the use of nuclear weapons. This is, of course, quite insane. Yet this is the state of things today, as an AI technopanic seems to be growing faster than any of the technopanics I’ve covered in my 31 years in the field of tech policy—and I have covered a lot of them.

In a new joint essay co-authored with Brent Orrell of the American Enterprise Institute and Chris Messerole of Brookings, we argue that “the ‘pause’ we are most in need of is one on dystopian AI thinking.” The three of us recently served on a blue-ribbon Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, an independent effort assembled by the U.S. Chamber of Commerce. In our essay, we note how:

Many of these breakthroughs and applications will already take years to work their way through the traditional lifecycle of development, deployment, and adoption and can likely be managed through legal and regulatory systems that are already in place. Civil rights laws, consumer protection regulations, agency recall authority for defective products, and targeted sectoral regulations already govern algorithmic systems, creating enforcement avenues through our courts and by common law standards allowing for development of new regulatory tools that can be developed as actual, rather than anticipated, problems arise.

“Instead of freezing AI we should leverage the legal, regulatory, and informal tools at hand to manage existing and emerging risks while fashioning new tools to respond to new vulnerabilities,” we conclude. Also on the pause idea, it’s worth checking out this excellent essay from Bloomberg Opinion editors on why “An AI ‘Pause’ Would Be a Disaster for Innovation.”

The problem is not with the “pause” per se. Even if the signatories could somehow enforce a worldwide stop-work order, six months probably wouldn’t do much to halt advances in AI. If a brief and partial moratorium draws attention to the need to think seriously about AI safety, it’s hard to see much harm. Unfortunately, a pause seems likely to evolve into a more generalized opposition to progress.

The editors continue on to rightly note:

This is a formula for outright stagnation. No one can ever be fully confident that a given technology or application will only have positive effects. The history of innovation is one of trial and error, risk and reward. One reason why the US leads the world in digital technology — why it’s home to virtually all the biggest tech platforms — is that it did not preemptively constrain the industry with well-meaning but dubious regulation. It’s no accident that all the leading AI efforts are American too.

That is 100% right, and I appreciate the Bloomberg editors linking to my latest study on AI governance when they made this point. In this new R Street Institute study, I explain why “Getting AI Innovation Culture Right” is essential to ensuring that we can enjoy the many benefits algorithmic systems offer while staying competitive in the global race for advantage in this space. Continue reading →

In my latest R Street Institute report, I discuss the importance of “Getting AI Innovation Culture Right.” This is the first of a trilogy of major reports on what sort of policy vision and set of governance principles should guide the development of artificial intelligence (AI), algorithmic systems, machine learning (ML), robotics, and computational science and engineering more generally. More specifically, these reports seek to answer the question: Can we achieve AI safety without innovation-crushing top-down mandates and massive new regulatory bureaucracies?

These questions are particularly pertinent, as we just made it through a week in which a major open letter was issued calling for a 6-month freeze on the deployment of AI technologies, while a prominent AI ethicist argued that governments should go further and consider airstrikes on data processing centers, even if that meant being open to an exchange of nuclear weapons! On top of that, Italy became the first major nation to ban ChatGPT, the popular AI-enabled chatbot created by U.S.-based OpenAI.

My report begins from a different presumption: AI, ML and algorithmic technologies present society with enormous benefits, and while real risks exist, we can find better ways of addressing them. Continue reading →

I have a new R Street Institute policy study out this week doing a deep dive into the question: “Can We Predict the Jobs and Skills Needed for the AI Era?” There’s lots of hand-wringing going on today about AI and the future of employment, but that’s really nothing new. In fact, in light of past automation panics, we might want to step back and ask: Why isn’t everyone already unemployed due to technological innovation?

To get my answers, please read the paper! In the meantime, here’s the executive summary:

To better plan for the economy of the future, many academics and policymakers regularly attempt to forecast the jobs and worker skills that will be needed going forward. Driving these efforts are fears about how technological automation might disrupt workers, skills, professions, firms and entire industrial sectors. The continued growth of artificial intelligence (AI), robotics and other computational technologies exacerbate these anxieties.

Yet the limits of both our collective knowledge and our individual imaginations constrain well-intentioned efforts to plan for the workforce of the future. Past attempts to assist workers or industries have often failed for various reasons. However, dystopian predictions about mass technological unemployment persist, as do retraining or reskilling programs that typically fail to produce much of value for workers or society. As public efforts to assist or train workers move from general to more specific, the potential for policy missteps grows greater. While transitional-support mechanisms can help alleviate some of the pain associated with fast-moving technological disruption, the most important thing policymakers can do is clear away barriers to economic dynamism and new opportunities for workers.

I do discuss some things that government can do to address automation fears at the end of the paper, but it’s important that policymakers first understand all the mistakes we’ve made with past retraining and reskilling efforts. The easiest way to help in the short term, I argue, is to clear away barriers to labor mobility and economic dynamism. Again, read the study for details.

For more info on other AI policy developments, check out my running list of research on AI, ML and robotics policy.

This week, the U.S. Chamber of Commerce Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation (AI Commission) released a major report on the policy considerations surrounding AI, machine learning (ML) and algorithmic systems. The 120-page report concluded that “AI technology offers great hope for increasing economic opportunity, boosting incomes, speeding life science research at reduced costs, and simplifying the lives of consumers.” It was my honor to serve as one of the commissioners on the AI Commission and contribute to the report.

Over at the R Street Institute blog, I offer a quick summary of the major findings and recommendations from the report and argue that, along with the National Institute of Standards and Technology (NIST)’s recently released AI Risk Management Framework, the AI Commission report offers “a constructive, consensus-driven framework for algorithmic governance rooted in flexibility, collaboration and iterative policymaking. This represents the uniquely American approach to AI policy that avoids the more heavy-handed regulatory approaches seen in other countries and it can help the United States again be a global leader in an important new technological field,” I conclude. Check out the blog post and the full AI Commission report if you are following debates over algorithmic policy issues. There’s a lot of important material in there.

For more info on AI policy developments, check out my running list of research on AI, ML and robotics policy.

In my latest R Street Institute blog post, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” I discuss the big issues confronting artificial intelligence and machine learning in the coming year and beyond. I note that AI regulatory proposals are multiplying fast and come in two general varieties: broad-based and targeted. Broad-based algorithmic regulation would address the use of these technologies in a holistic fashion across many sectors and concerns. By contrast, targeted algorithmic regulation looks to address specific AI applications or concerns. In the short term, targeted or “sectoral” regulatory proposals are the more likely to be implemented.

I go on to identify seven major issues of concern that will drive these policy proposals. They include:

1) Privacy and Data Collection

2) Bias and Discrimination

3) Free Speech and Disinformation

4) Kids’ Safety

5) Physical Safety and Cybersecurity

6) Industrial Policy and Workforce Issues

7) National Security and Law Enforcement Issues

Continue reading →

[Originally published on Medium on 2/5/2022]

In an earlier essay, I explored “Why the Future of AI Will Not Be Invented in Europe” and argued that, “there is no doubt that European competitiveness is suffering today and that excessive regulation plays a fairly significant role in causing it.” This essay summarizes some of the major academic literature that leads to that conclusion.

Since the mid-1990s, the European Union has been layering on highly restrictive policies governing online data collection and use. The most significant of the E.U.’s recent mandates is the 2018 General Data Protection Regulation (GDPR). This regulation established even more stringent rules related to the protection of personal data and its movement, and it limits what organizations can do with data. Data minimization is the major priority of this system, but there are many different types of restrictions and reporting requirements involved in the regulatory scheme. This policy framework also has ramifications for the future of next-generation technologies, especially artificial intelligence and machine learning systems, which rely on high-quality data sets to improve their efficacy.

Whether or not the E.U.’s complicated regulatory regime has actually resulted in truly meaningful privacy protections for European citizens relative to people in other countries remains open to debate. It is very difficult to measure and compare highly subjective values like privacy across countries and cultures. This makes benefit-cost analysis for privacy regulation extremely challenging — especially on the benefits side of the equation.

What is no longer up for debate, however, is the cost side of the equation and the question of what sort of consequences the GDPR has had on business formation, competition, investment, and so on. On these matters, standardized metrics exist and the economic evidence is abundantly clear: the GDPR has been a disaster for Europe. Continue reading →