New Report: Do We Need Global Government to Address AI Risk?

June 16, 2023

Can we advance AI safety without new international regulatory bureaucracies, licensing schemes, or global surveillance systems? I explore that question in my latest R Street Institute study, “Existential Risks & Global Governance Issues around AI & Robotics.” (31 pgs) My report rejects extremist thinking about AI arms control and stresses that the “realpolitik” of international AI governance means these issues cannot, and should not, be addressed through silver-bullet gimmicks or grandiose global government regulatory regimes.

The report uses Nick Bostrom’s “vulnerable world hypothesis” as a launching point and discusses how his five specific control mechanisms for addressing AI risks are beginning to have real-world influence, with extreme regulatory proposals now being floated. My report also does a deep dive into the debate over a proposed global ban on “killer robots” and considers how past treaties and arms control efforts might apply, as well as what they teach us about what won’t work.

I argue that proposals to impose global controls on AI through a worldwide regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI development are largely futile because many nations will not agree to them. As with nuclear and chemical weapons, treaties, accords, sanctions and other multilateral agreements can help address some threats of malicious uses of AI or robotics. But trade-offs are inevitable, and addressing one type of existential risk can sometimes give rise to others.

A culture of AI safety by design is critical. But there is an equally compelling interest in ensuring algorithmic innovations are developed and made widely available to society. The most effective solution to technological problems usually lies in more innovation, not less. Many other multistakeholder and multilateral efforts can also advance AI safety, and the final third of my study is devoted to discussing them. Continuous communication, coordination, and cooperation—among countries, developers, professional bodies and other stakeholders—will be essential.

My new report concludes with a plea to reject fatalism and fanaticism when discussing global AI risks. It’s worth recalling what Bertrand Russell said in 1951 about how only global government could save humanity. He predicted “[t]he end of human life, perhaps of all life on our planet,” before the end of the century unless the world unified under “a single government, possessing a monopoly of all the major weapons of war.” He was very wrong, of course, and it is fortunate he did not get his wish: an effort to unite the world under one global government would have entailed different existential risks that he never bothered to seriously consider. We need to reject extremist global government solutions as the basis for controlling technological risk.

Three quick notes.

First, this new report is the third in a trilogy of major R Street Institute studies on bottom-up, polycentric AI governance. If you only read one, make it this: “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.” 

Second, I wrapped up this latest report a few months ago, before Microsoft and OpenAI floated new comprehensive AI regulatory controls. So, for an important follow-up to this report, please read: “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control.”

Finally, if you’d like to hear me discuss many of the findings from these new reports and essays at greater length, check out my recent appearance on TechFreedom’s “Tech Policy Podcast,” with Corbin K. Barthold. We do a deep dive on all these AI governance trends and regulatory proposals.

As always, all my writing on AI, ML and robotics can be found here, and my most recent pieces are listed below.
