I’m finishing up my next book, which is tentatively titled, “A Flexible Governance Framework for Artificial Intelligence.” I thought I’d offer a brief preview here in the hope of connecting with others who care about innovation in this space and are also interested in helping to address these policy issues going forward.
The goal of my book is to highlight the ways in which artificial intelligence (AI), machine learning (ML), robotics, and the power of computational science are set to transform the world—and the world of public policy—in profound ways. As with all my previous books and research, my goal in this book includes both empirical and normative components. The first objective is to highlight the tensions between emerging technologies and the public policies that govern them. The second is to offer a defense of a specific governance stance toward emerging technologies intended to ensure we can enjoy the fruits of algorithmic innovation.
AI is a transformational technology that is general-purpose and dual-use. AI and ML also build on top of other important technologies—computing, microprocessors, the internet, high-speed broadband networks, and data storage/processing systems—and they will become the building blocks for a great many other innovations going forward. This means that, eventually, all policy will involve AI policy and computational considerations at some level. It will become the most important technology policy issue here and abroad going forward.
The global race for AI supremacy has important implications for competitive advantage and other geopolitical issues. This is why nations are focusing increasing attention on what they need to do to ensure they are prepared for this next major technological revolution. Public policy attitudes and defaults toward innovative activities will have an important influence on these outcomes.
In my book, I argue that, if the United States hopes to maintain a global leadership position in AI, ML, and robotics, public policy should be guided by two objectives:
Maximize the potential for innovation, entrepreneurialism, investment, and worker opportunities by seeking to ensure that firms and other organizations are prepared to compete at a global scale for talent and capital and that the domestic workforce is properly prepared to meet the same global challenges.
Develop a flexible governance framework to address various ethical concerns about AI development or use to ensure these technologies benefit humanity, but work to accomplish this goal without undermining the goals set forth in the first objective.
The book primarily addresses the second of these priorities because getting the governance framework for AI right significantly improves the chances of successfully accomplishing the first goal of ensuring that the United States remains a leading global AI innovator.
For my latest regular column in The Hill, I took a look at the trade-offs associated with the EU's AI Act. This is derived from a much longer chapter on European AI policy that is in my forthcoming book, and I also plan on turning it into a free-standing paper at some point soon. My op-ed begins as follows:
In the intensifying race for global competitiveness in artificial intelligence (AI), the United States, China and the European Union are vying to be the home of what could be the most important technological revolution of our lifetimes. AI governance proposals are also developing rapidly, with the EU proposing an aggressive regulatory approach to add to its already-onerous regulatory regime.
It would be imprudent for the U.S. to adopt Europe's more top-down regulatory model, however, which has already decimated digital technology innovation and will now do the same for AI. The key to competitive advantage in AI will be openness to entrepreneurialism, investment and talent, plus a flexible governance framework to address risks.
Jump over to The Hill to read the entire thing. And down below you will find all my recent writing on AI and robotics. This will be my primary research focus in coming years.
On July 12, I participated in a Bipartisan Policy Center event on “Civil Society Perspectives on Artificial Intelligence Impact Assessments.” It was an hour-long discussion moderated by Michele Nellenbach, Vice President of Strategic Initiatives at the Bipartisan Policy Center, and which also featured Miriam Vogel, President and CEO of EqualAI. We discussed the ins and outs of algorithmic auditing and impact assessments for artificial intelligence. This is one of the hottest topics in the field of AI governance today, with proposals multiplying rapidly in academic and public policy circles. Several governments are already considering mandating AI auditing and impact assessments.
You can watch the entire discussion here, and down below I have included some of my key talking points from the session. I am currently finishing up my next book, which is on how to craft a flexible governance framework for AI and algorithmic technologies. It includes a lengthy chapter on this issue and I also plan on eventually publishing a stand-alone study on this topic.
A growing number of conservatives are calling for Big Government censorship of social media speech platforms. Censorship proposals are to conservatives what price controls are to radical leftists: completely outlandish, unworkable, and usually unconstitutional fantasies of controlling things that are ultimately much harder to control than they realize. And the costs of even trying to impose and enforce such extremist controls are always enormous.
I’ll offer a few more thoughts here in addition to what I’ve already said elsewhere. First, here is my response to the Rosen essay. National Review gave me 250 words to respond to her proposal:
While admitting that “law is a blunt instrument for solving complicated social problems,” Christine Rosen (“Keep Them Offline,” June 27) nonetheless downplays the radicalness of her proposal to make all teenagers criminals for accessing the primary media platforms of their generation. She wants us to believe that allowing teens to use social media is the equivalent of letting them operate a vehicle, smoke tobacco, or drink alcohol. This is false equivalence. Being on a social-media site is not the same as operating two tons of steel and glass at speed or using mind-altering substances.
Teens certainly face challenges and risks in any new media environment, but to believe that complex social pathologies did not exist before the Internet is folly. Echoing the same “lost generation” claims made by past critics who panicked over comic books and video games, Rosen asks, “Can we afford to lose another generation of children?” and suggests that only sweeping nanny-state controls can save the day. This cycle is apparently endless: Those “lost generations” grow up fine, only to claim it’s the next generation that is doomed!
Rosen casually dismisses free-speech concerns associated with mass-media criminalization, saying that her plan “would not require censorship.” Nothing could be further from the truth. Rosen’s prohibitionist proposal would deny teens the many routine and mostly beneficial interactions they have with their peers online every day. While she belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to be a better response than the repressive regulatory regime she would have Big Government impose on society.
I have a few more things to say beyond these brief comments.
Profectus is an excellent new online magazine featuring essays and interviews on the intersection of academic literature, public policy, civilizational progress, and human flourishing. The Spring 2022 edition of the magazine features a “Progress Roundtable” in which six different scholars were asked to contribute their thoughts on three general questions:
What is progress?
What are the most significant barriers holding back further progress?
If those challenges can be overcome, what does the world look like in 50 years?
I was honored to be asked by Clay Routledge to contribute answers to those questions alongside others, including: Steven Pinker (Harvard University), Jason Crawford (Roots of Progress), Matt Clancy (Institute for Progress), Marian Tupy (HumanProgress.org), and James Pethokoukis (AEI). I encourage you to jump over to the roundtable and read all their excellent responses. I've included my answers down below:
On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:
What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
Which AI sectors are witnessing the most exciting forms of innovation currently?
What are the fundamental policy fault lines in the AI policy debates today?
Will fears about disruption and automation lead to a new Luddite movement?
How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
How did automation affect traditional jobs and sectors?
Will the European Union’s AI Act become a global model for regulation and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
What do we mean by “existential risk” as it pertains to artificial intelligence?
I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!
[This is a draft of a section of a forthcoming study on “A Flexible Governance Framework for Artificial Intelligence,” which I hope to complete shortly. I welcome feedback. I have also cross-posted this essay at Medium.]
Debates about how to embed ethics and best practices into AI product design are where the question of public policy defaults becomes important. To the extent AI design becomes the subject of legal or regulatory decision-making, a choice must be made between two general approaches: the precautionary principle or the proactionary principle.[1] While there are many hybrid governance approaches in between these two poles, the crucial issue is whether the initial legal default for AI technologies will be set closer to the red light of the precautionary principle (i.e., permissioned innovation) or to the green light of the proactionary principle (i.e., permissionless innovation). Each governance default will be discussed.
Just FYI, the James Madison Institute will be hosting its “2022 Tech and Innovation Summit” on Thursday, September 15 and Friday, September 16 in Coral Gables, Florida. I’m honored to be included among the roster of speakers announced so far, which includes:
Ajit Pai, Former Chairman of the Federal Communications Commission
Adam Thierer, the Mercatus Center at George Mason University
Will Duffield, Cato Institute
Utah State Representative Cory Maloy
Dane Ishihara, Director of Utah’s Office of Regulatory Relief
Why is it illegal in many states to purchase an electric vehicle directly from a manufacturer? In this new Federalist Society podcast, Univ. of Michigan law school professor Daniel Crane and I examine how state protectionist barriers block choice and innovation for no good reason whatsoever. The only group that benefits from these protectionist, anti-consumer direct-sales bans is local car dealers who don't want the competition.
Corbin Barthold invited me on Tech Freedom’s “Tech Policy Podcast” to discuss the history of antitrust and competition policy over the past half century. We covered a huge range of cases and controversies, including: the DOJ’s mega cases against IBM & AT&T, Blockbuster and Hollywood Video’s derailed merger, the Sirius-XM deal, the hysteria over the AOL-Time Warner merger, the evolution of competition in mobile markets, and how we finally ended that dreaded old MySpace monopoly!
What does the future hold for Google, Facebook, Amazon, and Netflix? Do antitrust regulators at the DOJ or FTC have enough to mount a case against these firms? Which case is most likely to have legs?
Corbin and I also talked about progress more generally and the troubling rise of more and more Luddite thinking on both the left and right. I encourage you to give it a listen:
The Technology Liberation Front is the tech policy blog dedicated to keeping politicians' hands off the 'net and everything else related to technology.