Articles by Adam Thierer

Adam Thierer: Technology policy analyst. Formerly a senior research fellow at the Mercatus Center at George Mason University, President of the Progress & Freedom Foundation, Director of Telecommunications Studies at the Cato Institute, and Fellow in Economic Policy at the Heritage Foundation.


My colleague Wayne Brough and I recently went on the “Kibbe on Liberty” show to discuss the state of free speech on the internet. We explained how censorship is a Big Government problem, not a Big Tech problem. Here’s the show’s full description; the link to the complete episode is below.

“With Elon Musk’s purchase of Twitter, we are in the middle of a national debate about the tension between censorship and free expression online. On the Right, many people are calling for government to rein in what they perceive as the excesses of Big Tech companies, while the Left wants the government to crack down on speech they deem dangerous. Both approaches make the same mistake of giving politicians authority over what we are allowed to say and hear. And with recent revelations about government agents leaning on social media companies to censor speech, it’s clear that when it comes to the online conversation, there’s no such thing as a purely private company.”

For more on these issues, please see: “The Classical Liberal Approach to Digital Media Free Speech Issues.”

We are entering a new era for technology policy in which many pundits and policymakers will use “algorithmic fairness” as a universal Get Out of Jail Free card when they push for new regulations on digital speech and innovation. Proposals to regulate “online safety,” “hate speech,” “disinformation,” and “bias,” among other things, often raise thorny definitional questions because of their highly subjective nature. In the United States, government efforts to control these things will often trigger judicial scrutiny, too, because restraints on speech violate the First Amendment. Proponents of prior restraint or even ex post punishments understand this reality and want to get around it. Thus, in an effort to avoid constitutional scrutiny and lengthy court battles, they are engaged in a rebranding effort, seeking to push their regulatory agendas through a techno-panicky prism of “algorithmic fairness” or “algorithmic justice.”

Hey, who could possibly be against FAIRNESS and JUSTICE? Of course, the devil is always in the details, as Neil Chilson and I discuss in our new paper for The Federalist Society and Regulatory Transparency Project, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations.” We document how federal and state policymakers from both parties are currently considering a variety of new mandates for artificial intelligence (AI), machine learning, and automated systems that, if imposed, “would thunder through our economy with one of the most significant expansions of economic and social regulation – and the power of the administrative state – in recent history.” Continue reading →

For my latest Discourse column (“We Really Need To ‘Have a Conversation’ About AI … or Do We?”), I discuss two commonly heard lines in tech policy circles.

1) “We need to have a conversation about the future of AI and the risks that it poses.”

2) “We should get a bunch of smart people in a room and figure this out.”

I note that, if you’ve read enough essays, books, or social media posts about artificial intelligence (AI) and robotics—among other emerging technologies—then chances are you’ve stumbled on variants of these two arguments many times over. They are almost impossible to disagree with in theory, but when you start to investigate what they actually mean in practice, they are revealed to be largely meaningless rhetorical flourishes that threaten to hold up meaningful progress on the AI front. I continue on to argue in my essay that:

I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with: (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and, (b) the fact that people who are making such statements apparently have not spent much time investigating the remarkable number of very serious conversations that have already taken place, or which are ongoing, about AI issues.

In fact, it very well could be the case that we have too many conversations going on currently about AI issues and that the bigger problem is instead one of better coordinating the important lessons and best practices that we have already learned from those conversations.

I then unpack each of those lines and explain what is wrong with them in more detail. Continue reading →

I’ve been floating around in conservative policy circles for 30 years and I have spent much of that time covering media policy and child safety issues. My time in conservative circles began in 1992 with a 9-year stint at the Heritage Foundation, where I launched the organization’s policy efforts on media regulation, the Internet, and digital technology. Meanwhile, my work on child safety has spanned 4 think tanks, multiple blue ribbon child safety commissions, countless essays, dozens of filings and testimonies, and even a multi-edition book.

During this three-decade run, I’ve tried my hardest to find balanced ways of addressing some of the legitimate concerns that many conservatives have about kids, media content, and online safety issues. Raising kids is the hardest job in the world. My daughter and son are now off at college, but the last twenty years of helping them figure out how to navigate the world and all the challenges it poses was filled with difficulties. This was especially true because my daughter and son faced completely different challenges when it came to media content and online interactions. Simply put, there is no one-size-fits-all playbook when it comes to raising kids or addressing concerns about healthy media interactions. Continue reading →

It was my pleasure this week to participate in a panel discussion about the future of innovation policy at the James Madison Institute’s 2022 Tech and Innovation Summit in Coral Gables, FL. Our conversation focused on the future of Progress Studies, which is one of my favorite topics. We were asked to discuss five major questions and below I have summarized some of my answers to them, plus some other thoughts I had about what I heard at the conference from others.

  1. What is Progress Studies and why is it so needed today?

In a sense, Progress Studies is nothing new. It goes back at least to the days of Adam Smith, and plenty of important scholars have been thinking about it ever since. Those scholars and policy advocates have long been engaged in trying to figure out the secret sauce that powers economic growth and human prosperity. We just didn’t call it Progress Studies in the old days.

Progress Studies is important because technological innovation has been shown to be the fundamental driver of improvements in human well-being over time. When we can move the needle on progress, it helps individuals extend and improve their lives, incomes, and happiness. By extension, progress helps us live lives of our choosing. As Hans Rosling brilliantly argued, the goal of expanding innovation opportunities and raising incomes “is not just bigger piles of money” or more leisure time. “The ultimate goal is to have the freedom to do what we want.” Continue reading →

[Cross-posted from Medium.]

In an age of hyper-partisanship, one issue unites the warring tribes of American politics like no other: hatred of “Big Tech.” You know, those evil bastards who gave us instantaneous access to a universe of information at little to no cost. Those treacherous villains! People are quick to forget the benefits of moving from a world of Information Poverty to one of Information Abundance, preferring to take for granted all they’ve been given and then find new things to complain about.

But what mostly unites people against large technology platforms is the feeling that these companies are just too big or too influential relative to other institutions, including government. I get some of that concern, even if I strongly disagree with many of the critics’ proposed solutions, such as the highly dangerous sledgehammer of antitrust breakups or sweeping speech controls. Breaking up large tech companies would not only compromise the many benefits they provide us, but it would also undermine America’s global standing as a leader in information and computational technology. We don’t want that. And speech codes or meddlesome algorithmic regulations are on a collision course with the First Amendment and will just result in endless litigation in the courts.

There’s a better path forward. As President Ronald Reagan rightly said in 1987 when vetoing a bill to reestablish the Fairness Doctrine, “History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.” In other words, as I wrote in a previous essay about “The Classical Liberal Approach to Digital Media Free Speech Issues,” more innovation and competition are always superior to more regulation when it comes to encouraging speech and speech opportunities.

Continue reading →

[Cross-posted from Medium.]

James Pethokoukis of AEI interviews me about the current miserable state of modern science fiction, which is just dripping with dystopian dread in every movie, show, and book plot. How does all the techno-apocalyptica affect societal and political attitudes about innovation broadly and emerging technologies in particular? Our discussion builds on a recent Discourse article of mine, “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics.” [Pasted down below.] Swing on over to Jim’s “Faster, Please” newsletter and hear what Jim and I have to say. And, as a bonus question, Jim asked me if we are doing a good job of inspiring kids to have a sense of wonder and to take risks. I have some serious concerns that we are falling short on that front.

Continue reading →

[Cross-posted from Medium]

There are two general types of technological governance that can be used to address challenges associated with artificial intelligence (AI) and computational sciences more generally. We can think of these as “on the ground” (bottom-up, informal “soft law”) governance mechanisms versus “on the books” (top-down, formal “hard law”) governance mechanisms.

Unfortunately, heated debates about the latter type of governance often divert attention from the many ways in which the former can (or already does) help us address many of the challenges associated with emerging technologies like AI, machine learning, and robotics. It is important that we think harder about how to optimize these decentralized soft law governance mechanisms today, especially as traditional hard law methods are increasingly strained by the relentless pace of technological change and ongoing dysfunctionalism in the legislative and regulatory arenas.

Continue reading →

For my latest column in The Hill, I explored the European Union’s (EU) endlessly expanding push to regulate all facets of the modern data economy. That now includes a new effort to regulate artificial intelligence (AI) using the same sort of top-down, heavy-handed, bureaucratic compliance regime that has stifled digital innovation on the continent over the past quarter century.

The European Commission (EC) is advancing a new Artificial Intelligence Act, which proposes banning some AI technologies while classifying many others under a heavily controlled “high-risk” category. A new bureaucracy, the European Artificial Intelligence Board, will be tasked with enforcing a wide variety of new rules, including “prior conformity assessments,” which are like permission slips for algorithmic innovators. Steep fines are also part of the plan. There’s a lengthy list of covered sectors and technologies, with many others that could be added in coming years. It’s no wonder, then, that the measure has been labelled “the mother of all AI laws” and that analysts have argued it will further burden innovation and investment in Europe.

As I noted in my new column, the consensus about Europe’s future on the emerging technology front is dismal, to put it mildly. The International Economy journal recently asked 11 experts from Europe and the U.S. where the EU currently stands in global tech competition. Responses were nearly unanimous and bluntly summarized by the symposium’s title: “The Biggest Loser.” Respondents said Europe is “lagging behind in the global tech race” and “unlikely to become a global hub of innovation.” “The future will not be invented in Europe,” another analyst bluntly concluded. Continue reading →