As I noted in a recent interview with James Pethokoukis for his Faster, Please! newsletter, “[t]he current policy debate over artificial intelligence is haunted by many mythologies and mistaken assumptions. The most problematic of these is the widespread belief that AI is completely ungoverned today.” In a recent R Street Institute report and a series of other publications, I have documented just how wrong that particular assumption is.
The first thing I try to remind everyone is that the U.S. federal government is absolutely massive: 2.1 million employees, 15 cabinet agencies, 50 independent federal commissions and 434 federal departments. Strangely, when policymakers and pundits deliver remarks on AI policy today, they seem to ignore all that existing regulatory capacity while casually tossing out proposals to pile ever more layers of regulation and bureaucracy on top of it. Why not first see whether the existing regulations and bureaucracies are working, and then have a conversation about what more is needed to fill the gaps?
Can we advance AI safety without new international regulatory bureaucracies, licensing schemes or global surveillance systems? I explore that question in my latest R Street Institute study, “Existential Risks & Global Governance Issues around AI & Robotics” (31 pgs). The report rejects extremist thinking about AI arms control and stresses the “realpolitik” of international AI governance: these problems cannot, and should not, be addressed through silver-bullet gimmicks or grandiose global regulatory regimes.
The report uses Nick Bostrom’s “vulnerable world hypothesis” as a launching point and discusses how his five specific control mechanisms for addressing AI risks have begun to exert real-world influence, with extreme regulatory proposals now being floated. It also takes a deep dive into the debate over a proposed global ban on “killer robots” and examines how past treaties and arms control efforts might apply, and what they teach us about what won’t work.
I argue that proposals to impose global controls on AI through a worldwide regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI development are largely futile because many nations will not agree to them. As with nuclear and chemical weapons, treaties, accords, sanctions and other multilateral agreements can help address some threats of malicious uses of AI or robotics. But trade-offs are inevitable, and addressing one type of existential risk can sometimes give rise to others.
A culture of AI safety by design is critical, but there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society. The most effective solution to technological problems usually lies in more innovation, not less. Many other multistakeholder and multilateral efforts can also advance AI safety; the final third of my study is devoted to discussing them. Continuous communication, coordination and cooperation among countries, developers, professional bodies and other stakeholders will be essential.
This week, I appeared on TechFreedom’s Tech Policy Podcast to discuss “Who’s Afraid of Artificial Intelligence?” It’s an in-depth, wide-ranging conversation about all things AI-related. Here’s a summary of what host Corbin Barthold and I discussed:
1. The “little miracles happening every day” thanks to AI
2. Is AI a “born free” technology?
3. Potential anti-competitive effects of AI regulation
4. The flurry of joint letters
5. The political realities of a new AI regulatory agency
6. The EU’s Precautionary Principle tech policy disaster
7. The looming “war on computation” & open source
8. The role of common law for AI
9. Is Sam Altman breaking the very laws he proposes?
10. Do we need an IAEA for AI or an “AI Island”?
11. Nick Bostrom’s global control & surveillance model
12. Why “doom porn” dominates in academic circles
13. Will AI take all the jobs?
14. Smart regulation of algorithmic technology
15. How the “pacing problem” is sometimes the “pacing benefit”
It was my pleasure to recently appear on the Independent Women’s Forum’s “She Thinks” podcast to discuss “Artificial Intelligence for Dummies.” In this 24-minute conversation with host Beverly Hallberg, I outline basic definitions, identify potential benefits, and then consider some of the risks associated with AI, machine learning, and algorithmic systems.
Here’s the video from a June 6th event, “Does the US Need a New AI Regulator?”, which was co-hosted by the Center for Data Innovation and the R Street Institute. We discuss algorithmic audits, AI licensing, an “FDA for algorithms” and other possible regulatory approaches, as well as various “soft law” self-regulatory efforts and targeted agency efforts. The event was hosted by Daniel Castro and included Lee Tiedrich, Shane Tews, Ben Shneiderman and me.