Technopanics & the Precautionary Principle

Recently, the Future of Life Institute released an open letter, signed by some computer science luminaries and others, calling for a 6-month “pause” on the deployment and research of “giant” artificial intelligence (AI) technologies. Eliezer Yudkowsky, a prominent AI ethicist, then made news by arguing that the “pause” letter did not go far enough; he proposed that governments consider “airstrikes” against data processing centers, or even be open to the use of nuclear weapons. This is, of course, quite insane. Yet this is the state of things today, as an AI technopanic seems to be growing faster than any of the technopanics I’ve covered in my 31 years in the field of tech policy—and I have covered a lot of them.

In a new joint essay co-authored with Brent Orrell of the American Enterprise Institute and Chris Meserole of Brookings, we argue that “the ‘pause’ we are most in need of is one on dystopian AI thinking.” The three of us recently served on a blue-ribbon Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, an independent effort assembled by the U.S. Chamber of Commerce. In our essay, we note how:

Many of these breakthroughs and applications will already take years to work their way through the traditional lifecycle of development, deployment, and adoption and can likely be managed through legal and regulatory systems that are already in place. Civil rights laws, consumer protection regulations, agency recall authority for defective products, and targeted sectoral regulations already govern algorithmic systems, creating enforcement avenues through our courts and common law standards, and allowing for the development of new regulatory tools as actual, rather than anticipated, problems arise.

“Instead of freezing AI we should leverage the legal, regulatory, and informal tools at hand to manage existing and emerging risks while fashioning new tools to respond to new vulnerabilities,” we conclude. Also on the pause idea, it’s worth checking out this excellent essay from the Bloomberg Opinion editors on why “An AI ‘Pause’ Would Be a Disaster for Innovation.”

The problem is not with the “pause” per se. Even if the signatories could somehow enforce a worldwide stop-work order, six months probably wouldn’t do much to halt advances in AI. If a brief and partial moratorium draws attention to the need to think seriously about AI safety, it’s hard to see much harm. Unfortunately, a pause seems likely to evolve into a more generalized opposition to progress.

The editors go on to rightly note:

This is a formula for outright stagnation. No one can ever be fully confident that a given technology or application will only have positive effects. The history of innovation is one of trial and error, risk and reward. One reason why the US leads the world in digital technology — why it’s home to virtually all the biggest tech platforms — is that it did not preemptively constrain the industry with well-meaning but dubious regulation. It’s no accident that all the leading AI efforts are American too.

That is 100% right, and I appreciate the Bloomberg editors linking to my latest study on AI governance when they made this point. In this new R Street Institute study, I explain why “Getting AI Innovation Culture Right” is essential to making sure we can enjoy the many benefits that algorithmic systems offer while staying competitive in the global race for advantage in this space. Continue reading →

In my latest R Street Institute report, I discuss the importance of “Getting AI Innovation Culture Right.” This is the first of a trilogy of major reports on what sort of policy vision and set of governance principles should guide the development of artificial intelligence (AI), algorithmic systems, machine learning (ML), robotics, and computational science and engineering more generally. More specifically, these reports seek to answer the question: Can we achieve AI safety without innovation-crushing top-down mandates and massive new regulatory bureaucracies?

These questions are particularly pertinent, as we have just come through a week in which a major open letter was issued calling for a 6-month freeze on the deployment of AI technologies, while a prominent AI ethicist argued that governments should go further and consider airstrikes on data processing centers, even if that meant risking an exchange of nuclear weapons! On top of that, Italy became the first major nation to ban ChatGPT, the popular AI-enabled chatbot created by U.S.-based OpenAI.

My report begins from a different presumption: AI, ML and algorithmic technologies present society with enormous benefits, and while real risks exist, we can find better ways of addressing them. Continue reading →

Over at Discourse magazine this week, my R Street colleague Jonathan Cannon and I have posted a new essay on how it has been “Quite a Fall for Digital Tech.” We mean that in two senses: the last few months have witnessed serious market turmoil for some of America’s leading tech companies, and the political situation for digital tech more generally has become perilous. Plenty of people on the Left and the Right now want a pound of flesh from the info-tech sector, and the starting cut at the body involves Section 230, the 1996 law that shields digital platforms from liability for content posted by third parties.

With the Supreme Court recently announcing it will hear Gonzalez v. Google, a case that could significantly narrow the scope of Section 230, the stakes have grown higher. Federal and state lawmakers were already looking to chip away at Sec. 230’s protections through an endless variety of regulatory measures. But if the Court guts Sec. 230 in Gonzalez, then it will really be open season on tech companies, as lawsuits will fly whenever someone does not like a particular content moderation decision. Cannon and I note in our new essay that, Continue reading →

[Cross-posted from Medium.]

James Pethokoukis of AEI interviews me about the current miserable state of modern science fiction, which is just dripping with dystopian dread in every movie, show, and book plot. How does all this techno-apocalyptica affect societal and political attitudes about innovation broadly and emerging technologies in particular? Our discussion builds on my recent Discourse article, “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics.” [Pasted down below.] Swing on over to Jim’s “Faster, Please” newsletter and hear what Jim and I have to say. And, for a bonus question, Jim asked me if we are doing a good job of inspiring kids to have a sense of wonder and to take risks. I have some serious concerns that we are falling short on that front.

Continue reading →

[last updated 4/24/2024]

This is a running list of all the essays and reports I’ve already rolled out on the governance of artificial intelligence (AI), machine learning (ML), and robotics. Why have I decided to spend so much time on this issue? Because this will become the most important technological revolution of our lifetimes. Every segment of the economy will be touched in some fashion by AI, ML, robotics, and the power of computational science. It should be equally clear that public policy will be radically transformed along the way.

Eventually, all policy will involve AI policy and computational considerations. As AI “eats the world,” it eats the world of public policy along with it. The stakes here are profound for individuals, economies, and nations. As a result, AI policy will be the most important technology policy fight of the next decade, and perhaps the next quarter century. Those who are passionate about the freedom to innovate need to prepare to meet the challenge as proposals to regulate AI proliferate.

There are many socio-technical concerns surrounding algorithmic systems that deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions. And that’s the case that I’ll be dedicating my life to making in coming years.

Here’s the list of what I’ve done so far. I will continue to update this as new material is released: Continue reading →

Here’s a slide presentation on “The Future of Innovation Policy” that I presented to some student groups recently. It builds on themes discussed in my recent books, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom and Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments. I specifically discuss the tension between permissionless innovation and the precautionary principle as competing policy defaults.

Continue reading →

Gabrielle Bauer, a Toronto-based medical writer, has just published one of the most concise explanations of what’s wrong with the precautionary principle that I have ever read. The precautionary principle, you will recall, generally refers to public policies that limit or even prohibit trial-and-error experimentation and risk-taking. Innovations are restricted until their creators can prove that they will not cause any harms or disruptions. In an essay for The New Atlantis entitled “Danger: Caution Ahead,” Bauer uses the world’s recent experience with COVID lockdowns as the backdrop for how society can sometimes take caution too far and create more serious dangers in the process. “The phrase ‘abundance of caution’ captures the precautionary principle in a more literary way,” Bauer notes. Indeed, another way to look at it is through the prism of the old saying, “better to be safe than sorry.” The problem, she correctly observes, is that “extreme caution comes at a cost.” This is exactly right, and it points to the profound trade-offs associated with precautionary principle thinking in practice.

In my own writing about the problems associated with the precautionary principle (see the list of essays at bottom), I often like to paraphrase an ancient nugget of wisdom from St. Thomas Aquinas, who once noted in his Summa Theologica that, if the highest aim of a captain were merely to preserve their ship, then they would simply keep it in port forever. Of course, that is not the only goal a captain has. The safety of the vessel and the crew is essential, but captains brave the high seas because there are good reasons to take such risks. Most obviously, it might be how they make their living. But historically, captains have also taken to the seas as pioneering explorers, researchers, or even just thrill-seekers.

This was equally true when humans first decided to take to the air in balloons, blimps, airplanes, and rockets. A strict application of the precautionary principle would instead have told us to keep our feet on the ground. Better to be safe than sorry! Thankfully, many brave souls ignored that advice and took to the heavens in the spirit of exploration and adventure. As Wilbur Wright once famously said, “If you are looking for perfect safety, you would do well to sit on a fence and watch the birds.” Needless to say, humans would never have mastered the skies if the Wright brothers (and many others) had not gotten off the fence and taken the risks they did. Continue reading →

Discourse magazine has just published my latest essay, “‘Japan Inc.’ and Other Tales of Industrial Policy Apocalypse.” It is a short history of the hysteria surrounding the growth of Japan in the 1980s and early 1990s and its various industrial policy efforts. I begin by noting that, “American pundits and policymakers are today raising a litany of complaints about Chinese industrial policies, trade practices, industrial espionage and military expansion. Some of these concerns have merit. In each case, however, it is easy to find identical fears that were raised about Japan a generation ago.” I then walk through many of the leading books, op-eds, movies, and other works from that era to show how that was the case.

“Hysteria” is not too strong a word to use in this case. Many pundits and politicians were panicking about the rise of Japan economically, and more specifically about the way Japan’s Ministry of International Trade and Industry (MITI) was formulating industrial policy schemes for sectors in which it hoped to make advances. This resulted in veritable “MITI mania” here in America. “U.S. officials and market analysts came to view MITI with a combination of reverence and revulsion, believing that it had concocted an industrial policy cocktail that was fueling Japan’s success at the expense of American companies and interests,” I note. Countless books and essays were published with breathless titles and predictions; I go through dozens of them in my essay. Meanwhile, the debate in policy circles and on Capitol Hill even took on an ugly racial tinge, with some lawmakers calling the Japanese “leeches” and suggesting the U.S. should have dropped more atomic bombs on Japan during World War II. At one point in 1987, several members of Congress gathered on the lawn of the U.S. Capitol to smash Japanese electronics with sledgehammers. Continue reading →

Time magazine recently declared 2020 “The Worst Year Ever.” By historical standards that may be a bit of hyperbole. For America’s digital technology sector, however, that headline rings true. After a remarkable 25-year run that saw an explosion of innovation and the rapid ascent of a group of U.S. companies that became household names across the globe, politicians and pundits in 2020 declared the party over.

“We now are on the cusp of a new era of tech policy, one in which the policy catches up with the technology,” says Darrell M. West of the Brookings Institution in a recent essay, “The End of Permissionless Innovation.” West cites the House Judiciary Antitrust Subcommittee’s October report on competition in digital markets—where it equates large tech firms with the “oil barons and railroad tycoons” of the Gilded Age—as the clearest sign that politicization of the internet and digital technology is accelerating.

It is hardly the only indication that America is set to abandon permissionless innovation and revisit the era of heavy-handed regulation for information and communication technology (ICT) markets. Equally significant is the growing bipartisan crusade against Section 230, the provision of the 1996 Telecommunications Act that shields “interactive computer services” from liability for information posted or published on their systems by users. No single policy has been more important to the flourishing of online speech or commerce than Sec. 230 because, without it, online platforms would be overwhelmed by regulation and lawsuits.

But now, long knives are coming out for the law, with plenty of politicians and academics calling for it to be gutted. Calls to reform or repeal Sec. 230 were once exclusively the province of left-leaning academics or policymakers, but this year it was conservatives in the White House, on Capitol Hill and at the Federal Communications Commission (FCC) who became the leading cheerleaders for scaling back or eliminating the law. President Trump railed against Sec. 230 repeatedly on Twitter, and most recently vetoed the annual National Defense Authorization Act in part because Congress did not include a repeal of the law in the measure.

Meanwhile, conservative lawmakers in Congress such as Sens. Josh Hawley and Ted Cruz have used subpoenas, angry letters and heated hearings to hammer digital tech executives about their content moderation practices. Allegations of anti-conservative bias have motivated many of these efforts. Even Supreme Court Justice Clarence Thomas questioned the law in a recent opinion.

Other proposed regulatory interventions include calls for new national privacy laws, an “Algorithmic Accountability Act” to regulate artificial intelligence technologies, and a growing variety of industrial policy measures that would open the door to widespread meddling with various tech sectors. Some officials in the Trump administration even pushed for a nationalized 5G communications network in the name of competing with China.

This growing “techlash” signals a bipartisan “Back to the Future” moment, with the possibility of the U.S. reviving a regulatory playbook that many believed had been discarded in history’s dustbin. Although plenty of politicians and pundits are taking victory laps and giving each other high-fives over the impending end of the permissionless innovation era, it is worth considering what America will be losing if we once again apply old top-down, permission slip-oriented policies to the technology sector. Continue reading →