I’m finishing up my next book, which is tentatively titled, “A Flexible Governance Framework for Artificial Intelligence.” I thought I’d offer a brief preview here in the hope of connecting with others who care about innovation in this space and are also interested in helping to address these policy issues going forward.
The goal of my book is to highlight the ways in which artificial intelligence (AI), machine learning (ML), robotics, and the power of computational science are set to transform the world—and the world of public policy—in profound ways. As with my previous books and research, this book has both empirical and normative components. The first objective is to highlight the tensions between emerging technologies and the public policies that govern them. The second is to offer a defense of a specific governance stance toward emerging technologies intended to ensure we can enjoy the fruits of algorithmic innovation.
AI is a transformational technology that is general-purpose and dual-use. AI and ML also build on top of other important technologies—computing, microprocessors, the internet, high-speed broadband networks, and data storage/processing systems—and they will become the building blocks for a great many other innovations going forward. This means that, eventually, all policy will involve AI policy and computational considerations at some level. It will become the most important technology policy issue here and abroad going forward.
The global race for AI supremacy has important implications for competitive advantage and other geopolitical issues. This is why nations are focusing increasing attention on what they need to do to ensure they are prepared for this next major technological revolution. Public policy attitudes and defaults toward innovative activities will have an important influence on these outcomes.
In my book, I argue that, if the United States hopes to maintain a global leadership position in AI, ML, and robotics, public policy should be guided by two objectives:
- Maximize the potential for innovation, entrepreneurialism, investment, and worker opportunities by seeking to ensure that firms and other organizations are prepared to compete at a global scale for talent and capital and that the domestic workforce is properly prepared to meet the same global challenges.
- Develop a flexible governance framework that addresses ethical concerns about AI development and use and ensures these technologies benefit humanity, but accomplish this without undermining the goals set forth in the first objective.
The book primarily addresses the second of these priorities because getting the governance framework for AI right significantly improves the chances of successfully accomplishing the first goal of ensuring that the United States remains a leading global AI innovator.
I do a deep dive into the many different governance challenges and policy proposals floating around today—both domestically and internationally. The most contentious of these issues involve the so-called “socio-algorithmic” concerns that are driving calls for comprehensive regulation. These include the safety, security, privacy, and discrimination risks that AI/ML technologies could pose for individuals and society.
These concerns deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions.
Getting the balance right requires agile governance strategies and decentralized, polycentric approaches. There are many different values and complex trade-offs in play in these debates, all of which demand tailored responses. But those responses should not take the form of complicated, inflexible, time-consuming regulatory mandates that preemptively curtail or completely constrain innovation opportunities. There’s no need to worry about the future if we can’t even build it first. AI innovation must not be treated as guilty until proven innocent.
The more agile and adaptive governance approach I outline in my book builds on the core principles typically recommended by those favoring precautionary principle-based regulation. That is, it is similarly focused on (1) “baking in” best practices and aligning AI design with widely shared goals and values; and (2) keeping humans “in the loop” at critical stages of this process to ensure that they can continue to guide and occasionally realign those values and best practices as needed. However, a decentralized governance approach to AI focuses on accomplishing these objectives in a more flexible, evolutionary fashion without the costly baggage associated with precautionary principle-based regulatory regimes.
The key to the decentralized approach is a diverse toolkit of so-called soft law governance solutions. Soft law refers to agile, adaptable governance schemes for emerging technology that create substantive expectations and best practices for innovators without regulatory mandates. Precautionary regulatory restraints will be necessary in some limited circumstances—particularly for certain types of very serious existential risk—but most AI innovations should be treated as innocent until proven guilty.
When things do go wrong, many existing remedies are available, including a wide variety of common law solutions (torts, class actions, contract law, etc.), the recall authority possessed by many regulatory agencies, and various consumer protection policies and other existing laws. Moreover, the most effective solution to technological problems usually lies in more innovation, not less of it. It is only through constant trial and error that humanity discovers better and safer ways of satisfying important wants and needs.
The book has six chapters currently, although I am toying with adding back in two other chapters (on labor market issues and industrial policy proposals) that I finished but then cut to keep the theme of the book more tightly focused on social and ethical considerations surrounding AI and robotics.
Here are the summaries of the current six chapters in the manuscript:
- Chapter 1: Understanding AI & Its Potential Benefits – Defining the nature and scope of artificial intelligence and its many components and related subsectors is complicated, and that complexity creates many governance challenges. But getting AI governance right is vital because these technologies offer individuals and society meaningful improvements in living standards across multiple dimensions.
- Chapter 2: The Importance of Policy Defaults for Innovation Culture – Every technology policy debate involves a choice between two general defaults: the precautionary principle and the proactionary principle or “permissionless innovation.” Setting the initial legal default for AI technologies closer to the green light of permissionless innovation will enable greater entrepreneurialism, investment, and global competitiveness.
- Chapter 3: Decentralized Governance for AI: A Framework – The process of embedding ethics in AI design is an ongoing, iterative process influenced by many forces and factors. There will be much trial and error when devising ethical guidelines for AI and hammering out better ways of keeping these systems aligned with human values. A top-down, one-size-fits-all regulatory framework for AI is unwise. A more decentralized, polycentric governance approach is needed—nationally and globally. [This chapter is the meat of the book and several derivative articles will be spun out of it beginning with a report on algorithmic auditing and AI impact assessments.]
- Chapter 4: The US Governance Model for AI So Far – U.S. digital technology and ecommerce sectors have enjoyed a generally “permissionless” policy environment since the early days of the Internet, and this has greatly benefited our innovation and global competitiveness. While AI has thus far been governed by a similar “light-touch” approach, many academics and policymakers are now calling for aggressive regulation of AI rooted in a precautionary principle-oriented mindset, which threatens to derail a great deal of AI innovation.
- Chapter 5: The European Regulatory Model & the Costs of Precaution by Default – Over the past quarter century, the European Union has taken a more aggressive approach to digital technology and data regulation, and is now advancing several new comprehensive regulatory frameworks, including an AI Act. The E.U.’s heavy-handed regulatory regime, which is rooted in the precautionary principle, has discouraged innovation and investment across the continent and will continue to do so as it grows to encompass AI technologies. The U.S. should reject this model and welcome European innovators looking to escape it.
- Chapter 6: Existential Risks & Global Governance Issues around AI & Robotics – AI and robotics could give rise to certain global risks that warrant greater attention and action. But policymakers must be careful to define existential risk properly and understand that the most important solution to such risks is often more technological innovation to overcome those problems. The greatest existential risk of all would be to block further technological innovation and scientific progress. Proposals to impose global bans or create global regulatory agencies are both unwise and unworkable. Other approaches, including soft law efforts, will continue to play a role in addressing global AI risks and concerns.
This book, which I hope to have out some time later this year, grows out of a large body of research I’ve done over the past decade. [Some of that work is listed below.] AI, ML, robotics, and algorithmic policy issues will dominate my research focus and outputs over the next few years.
I look forward to doing my small part to help ensure that America builds on the track record of success it has enjoyed with the Internet, ecommerce, and digital technologies. Again, that stunning success story was built on wise policy choices that promoted a culture of creativity and innovation and rejected calls to hold on to past technological, economic, or legal status quos.
Will America rise to the challenge once again by adopting wise policies to facilitate the next great technological revolution? I’m ready for that fight. I hope you are, too, because it will be the most important technology policy battle of our lifetimes.
___________
Recent Essays & Papers on AI & Robotics Policy
- Adam Thierer, “Why is the US Following the EU’s Lead on Artificial Intelligence Regulation?” The Hill, July 21, 2022.
- Adam Thierer, “Algorithmic Auditing and AI Impact Assessments: The Need for Balance,” Medium, July 13, 2022.
- Adam Thierer, “What I Learned about the Power of AI at the Cleveland Clinic,” Medium, May 6, 2022.
- Adam Thierer, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, American Enterprise Institute (April 2022).
- Adam Thierer, “A Global Clash of Visions: The Future of AI Policy,” The Hill, May 4, 2021.
- Adam Thierer, “A Brief History of Soft Law in ICT Sectors: Four Case Studies,” Jurimetrics, Vol. 61 (Fall 2021): 79-119.
- Adam Thierer, “U.S. Artificial Intelligence Governance in the Obama–Trump Years,” IEEE Transactions on Technology and Society, Vol. 2, Issue 4 (2021).
- Adam Thierer, “The Worst Regulation Ever Proposed,” The Bridge, September 2019.
- Ryan Hagemann, Jennifer Huddleston Skees & Adam Thierer, “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future,” Colorado Technology Law Journal, Vol. 17 (2018).
- Adam Thierer & Trace Mitchell, “No New Tech Bureaucracy,” Real Clear Policy, September 10, 2020.
- Adam Thierer, “OMB’s AI Guidance Embodies Wise Tech Governance,” Mercatus Center Public Comment, March 13, 2020.
- Adam Thierer, “Europe’s New AI Industrial Policy,” Medium, February 20, 2020.
- Adam Thierer, “Trump’s AI Framework & the Future of Emerging Tech Governance,” Medium, January 8, 2020.
- Adam Thierer, “Soft Law: The Reconciliation of Permissionless & Responsible Innovation,” Chapter 7 in Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 183-240.
- Andrea O’Sullivan & Adam Thierer, “Counterpoint: Regulators Should Allow the Greatest Space for AI Innovation,” Communications of the ACM, Vol. 61, Issue 12 (December 2018): 33-35.
- Adam Thierer, Andrea O’Sullivan & Raymond Russell, “Artificial Intelligence and Public Policy,” Mercatus Research, Mercatus Center at George Mason University, Arlington, VA (2017).
- Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017.
- Adam Thierer, “The Growing AI Technopanic,” Medium, April 27, 2017.
- Adam Thierer, “The Day the Machines Took Over,” Medium, May 11, 2017.
- Adam Thierer, “When the Trial Lawyers Come for the Robot Cars,” Slate, June 10, 2016.
- Adam Thierer, “Problems with Precautionary Principle-Minded Tech Regulation & a Federal Robotics Commission,” Medium, September 22, 2014.
- Adam Thierer, “On the Line between Technology Ethics vs. Technology Policy,” Technology Liberation Front, August 1, 2013.