Can we advance AI safety without new international regulatory bureaucracies, licensing schemes or global surveillance systems? I explore that question in my latest R Street Institute study, “Existential Risks & Global Governance Issues around AI & Robotics.” (31 pgs) My report rejects extremist thinking about AI arms control and stresses that the “realpolitik” of international AI governance means these problems cannot and should not be addressed through silver-bullet gimmicks or grandiose global regulatory regimes.
The report uses Nick Bostrom’s “vulnerable world hypothesis” as a launching point and discusses how his five specific control mechanisms for addressing AI risks are beginning to have real-world influence, with extreme regulatory proposals now being floated. My report also does a deep dive into the debate over a proposed global ban on “killer robots” and examines how past treaties and arms control efforts might apply, or what we can learn from them about what won’t work.
I argue that proposals to impose global controls on AI through a worldwide regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI developments are largely futile because many nations will not agree to them. As with nuclear and chemical weapons, treaties, accords, sanctions and other multilateral agreements can help address some threats of malicious uses of AI or robotics. But trade-offs are inevitable, and addressing one type of existential risk sometimes can give rise to other risks.
A culture of AI safety by design is critical. But there is an equally compelling interest in ensuring algorithmic innovations are developed and made widely available to society. The most effective solution to technological problems usually lies in more innovation, not less. Many other multistakeholder and multilateral efforts can help advance AI safety, and the final third of my study is devoted to discussing them. Continuous communication, coordination, and cooperation—among countries, developers, professional bodies and other stakeholders—will be essential.
My new report concludes with a plea to reject fatalism and fanaticism when discussing global AI risks. It’s worth recalling what Bertrand Russell said in 1951 about how only global government could save humanity. He predicted, “[t]he end of human life, perhaps of all life on our planet,” before the end
of the century unless the world unified under “a single government, possessing a monopoly of all the major weapons of war.” He was very wrong, of course, and thank God he did not get his wish, because an effort to unite the world under one global government would have entailed different existential risks that he never seriously considered. We need to reject extremist global government solutions as the basis for controlling technological risk.
Three quick notes.
First, this new report is the third in a trilogy of major R Street Institute studies on bottom-up, polycentric AI governance. If you only read one, make it this: “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”
Second, I wrapped up this latest report a few months ago, before Microsoft and OpenAI floated new comprehensive AI regulatory controls. So, for an important follow-up to this report, please read: “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control.”
Finally, if you’d like to hear me discuss many of the findings from these new reports and essays at greater length, check out my recent appearance on TechFreedom’s “Tech Policy Podcast,” with Corbin K. Barthold. We do a deep dive on all these AI governance trends and regulatory proposals.
As always, all my writing on AI, ML and robotics can be found here, and my most recent pieces are listed below.
Additional Reading:
- INTERVIEW: “5 Quick Questions for AI policy analyst Adam Thierer,” interview for the Faster Please! newsletter with James Pethokoukis, June 12, 2024.
- PODCAST: “Who’s Afraid of Artificial Intelligence?” TechFreedom Tech Policy Podcast, June 12, 2023.
- FILING: Comments of Adam Thierer, R Street Institute to the National Telecommunications and Information Administration (NTIA) on “AI Accountability Policy,” June 12, 2023.
- PODCAST: Adam Thierer: “Artificial Intelligence For Dummies,” SheThinks (Independent Women’s Forum) podcast, June 9, 2023.
- EVENT: “Does the US Need a New AI Regulator?” Center for Data Innovation & R Street Institute, June 6, 2023.
- Neil Chilson & Adam Thierer, “The Problem with AI Licensing & an ‘FDA for Algorithms,’” Federalist Society Blog, June 5, 2023.
- Adam Thierer, “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control,” Medium, May 29, 2023.
- PODCAST: Neil Chilson & Adam Thierer, “The Future of AI Regulation: Examining Risks and Rewards,” Federalist Society Regulatory Transparency Project podcast, May 26, 2023.
- Adam Thierer, “Here Come the Code Cops: Senate Hearing Opens Door to FDA for Algorithms & AI Occupational Licensing,” Medium, May 16, 2023.
- Adam Thierer, “What OpenAI’s Sam Altman Should Say at the Senate AI Hearing,” R Street Institute Blog, May 15, 2023.
- PODCAST: “Should we regulate AI?” Adam Thierer and Matthew Lesh discuss artificial intelligence policy on the Institute for Economic Affairs podcast, May 6, 2023.
- Adam Thierer, “The Biden Administration’s Plan to Regulate AI without Waiting for Congress,” Medium, May 4, 2023.
- Adam Thierer, “NEPA for AI? The Problem with Mandating Algorithmic Audits & Impact Assessments,” Medium, April 23, 2023.
- Adam Thierer, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence,” R Street Institute Policy Study No. 283 (April 2023).
- Adam Thierer, “A balanced AI governance vision for America,” The Hill, April 16, 2023.
- Adam Thierer, Brent Orrell, & Chris Meserole, “Stop the AI Pause,” AEI Ideas, April 6, 2023.
- Adam Thierer, “Getting AI Innovation Culture Right,” R Street Institute Policy Study No. 281 (March 2023).
- Adam Thierer, “Can We Predict the Jobs and Skills Needed for the AI Era?,” R Street Institute Policy Study No. 278 (March 2023).
- Adam Thierer, “U.S. Chamber AI Commission Report Offers Constructive Path Forward,” R Street Blog, March 9, 2023.
- Adam Thierer, “Statement for the Record on ‘Artificial Intelligence: Risks and Opportunities,’” U.S. Senate Homeland Security and Governmental Affairs Committee, March 8, 2023.
- Adam Thierer, “What If Everything You’ve Heard about AI Policy is Wrong?” Medium, February 20, 2023.
- Adam Thierer, “Policy Ramifications of the ChatGPT Moment: AI Ethics Meets Evasive Entrepreneurialism,” Medium, February 14, 2023.
- Adam Thierer, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” R Street Blog, February 9, 2023.
- Adam Thierer, “Artificial Intelligence Primer: Definitions, Benefits & Policy Challenges,” Medium, December 2, 2022.
- Neil Chilson & Adam Thierer, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations,” Regulatory Transparency Project of the Federalist Society, November 2, 2022.
- Adam Thierer, “We Really Need To ‘Have a Conversation’ About AI … or Do We?” Discourse, October 6, 2022.