For my latest regular column in The Hill, I took a look at the trade-offs associated with the EU’s AI Act. This is derived from a much longer chapter on European AI policy that is in my forthcoming book, and I also plan on turning it into a free-standing paper at some point soon. My oped begins as follows:
In the intensifying race for global competitiveness in artificial intelligence (AI), the United States, China and the European Union are vying to be the home of what could be the most important technological revolution of our lifetimes. AI governance proposals are also developing rapidly, with the EU proposing an aggressive regulatory approach to add to its already-onerous regulatory regime.
It would be imprudent for the U.S. to adopt Europe’s more top-down regulatory model, however — a model that has already decimated digital technology innovation in Europe and now threatens to do the same for AI. The key to competitive advantage in AI will be openness to entrepreneurialism, investment and talent, plus a flexible governance framework to address risks.
Jump over to The Hill to read the entire thing. And down below you will find all my recent writing on AI and robotics. This will be my primary research focus in coming years.
On July 12, I participated in a Bipartisan Policy Center event on “Civil Society Perspectives on Artificial Intelligence Impact Assessments.” It was an hour-long discussion moderated by Michele Nellenbach, Vice President of Strategic Initiatives at the Bipartisan Policy Center, and it also featured Miriam Vogel, President and CEO of EqualAI. We discussed the ins and outs of algorithmic auditing and impact assessments for artificial intelligence. This is one of the hottest topics in the field of AI governance today, with proposals multiplying rapidly in academic and public policy circles. Several governments are already considering mandating AI auditing and impact assessments.
You can watch the entire discussion here, and down below I have included some of my key talking points from the session. I am currently finishing up my next book, which is on how to craft a flexible governance framework for AI and algorithmic technologies. It includes a lengthy chapter on this issue and I also plan on eventually publishing a stand-alone study on this topic.
A growing number of conservatives are calling for Big Government censorship of social media speech platforms. Censorship proposals are to conservatives what price controls are to radical leftists: completely outlandish, unworkable, and usually unconstitutional fantasies of controlling things that are ultimately much harder to control than they realize. And the costs of even trying to impose and enforce such extremist controls are always enormous.
I’ll offer a few more thoughts here in addition to what I’ve already said elsewhere. First, here is my response to the Rosen essay. National Review gave me 250 words to respond to her proposal:
While admitting that “law is a blunt instrument for solving complicated social problems,” Christine Rosen (“Keep Them Offline,” June 27) nonetheless downplays the radicalness of her proposal to make all teenagers criminals for accessing the primary media platforms of their generation. She wants us to believe that allowing teens to use social media is the equivalent of letting them operate a vehicle, smoke tobacco, or drink alcohol. This is false equivalence. Being on a social-media site is not the same as operating two tons of steel and glass at speed or using mind-altering substances.
Teens certainly face challenges and risks in any new media environment, but to believe that complex social pathologies did not exist before the Internet is folly. Echoing the same “lost generation” claims made by past critics who panicked over comic books and video games, Rosen asks, “Can we afford to lose another generation of children?” and suggests that only sweeping nanny-state controls can save the day. This cycle is apparently endless: Those “lost generations” grow up fine, only to claim it’s the next generation that is doomed!
Rosen casually dismisses free-speech concerns associated with mass-media criminalization, saying that her plan “would not require censorship.” Nothing could be further from the truth. Rosen’s prohibitionist proposal would deny teens the many routine and mostly beneficial interactions they have with their peers online every day. While she belittles media literacy and other educational and empowerment-based solutions to online problems, those approaches continue to be a better response than the repressive regulatory regime she would have Big Government impose on society.
I have a few more things to say beyond these brief comments.
Profectus is an excellent new online magazine featuring essays and interviews on the intersection of academic literature, public policy, civilizational progress, and human flourishing. The Spring 2022 edition of the magazine features a “Progress Roundtable” in which six different scholars were asked to contribute their thoughts on three general questions:
What is progress?
What are the most significant barriers holding back further progress?
If those challenges can be overcome, what does the world look like in 50 years?
I was honored to be asked by Clay Routledge to contribute answers to those questions alongside other scholars, including Steven Pinker (Harvard University), Jason Crawford (Roots of Progress), Matt Clancy (Institute for Progress), Marian Tupy (HumanProgress.org), and James Pethokoukis (AEI). I encourage you to jump over to the roundtable and read all their excellent responses. I’ve included my answers down below:
On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:
What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
Which AI sectors are witnessing the most exciting forms of innovation currently?
What are the fundamental policy fault lines in the AI policy debates today?
Will fears about disruption and automation lead to a new Luddite movement?
How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
How did automation affect traditional jobs and sectors?
Will the European Union’s AI Act become a global model for regulation and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
What do we mean by “existential risk” as it pertains to artificial intelligence?
I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!
[This is a draft of a section of a forthcoming study on “A Flexible Governance Framework for Artificial Intelligence,” which I hope to complete shortly. I welcome feedback. I have also cross-posted this essay at Medium.]
Debates about how to embed ethics and best practices into AI product design are where the question of public policy defaults becomes important. To the extent AI design becomes the subject of legal or regulatory decision-making, a choice must be made between two general approaches: the precautionary principle or the proactionary principle.[1] While there are many hybrid governance approaches in between these two poles, the crucial issue is whether the initial legal default for AI technologies will be set closer to the red light of the precautionary principle (i.e., permissioned innovation) or to the green light of the proactionary principle (i.e., permissionless innovation). Each governance default will be discussed.
Just FYI, the James Madison Institute will be hosting its “2022 Tech and Innovation Summit” on Thursday, September 15 and Friday, September 16 in Coral Gables, Florida. I’m honored to be included among the roster of speakers announced so far, which includes:
Ajit Pai, Former Chairman of the Federal Communications Commission
Adam Thierer, the Mercatus Center at George Mason University
Will Duffield, Cato Institute
Utah State Representative Cory Maloy
Dane Ishihara, Director of Utah’s Office of Regulatory Relief
Why is it illegal in many states to purchase an electric vehicle directly from a manufacturer? In this new Federalist Society podcast, Univ. of Michigan law school professor Daniel Crane and I examine how state protectionist barriers block choice and innovation for no good reason whatsoever. The only group that benefits from these protectionist, anti-consumer direct sales bans is local car dealers who don’t want the competition.
Corbin Barthold invited me on Tech Freedom’s “Tech Policy Podcast” to discuss the history of antitrust and competition policy over the past half century. We covered a huge range of cases and controversies, including: the DOJ’s mega cases against IBM & AT&T, Blockbuster and Hollywood Video’s derailed merger, the Sirius-XM deal, the hysteria over the AOL-Time Warner merger, the evolution of competition in mobile markets, and how we finally ended that dreaded old MySpace monopoly!
What does the future hold for Google, Facebook, Amazon, and Netflix? Do antitrust regulators at the DOJ or FTC have enough to mount a case against these firms? Which case is most likely to have legs?
Corbin and I also talked about progress more generally and the troubling rise of more and more Luddite thinking on both the left and right. I encourage you to give it a listen:
The American Enterprise Institute (AEI) has kicked off a new project called “Digital Platforms and American Life,” which will bring together a variety of scholars to answer the question: How should policymakers think about the digital platforms that have become embedded in our social and civic life? The series, which is being edited by AEI Senior Fellow Adam J. White, highlights how the democratization of knowledge and influence in the Internet age comes with incredible opportunities but also immense challenges. The contributors to this series will approach these issues from various perspectives and also address different aspects of policy as it pertains to the future of technological governance.
It is my honor to have the lead paper in this new series. My 19-page essay is entitled “Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium,” and it represents my effort to concisely tie together all my writing over the past 30 years on governance trends for the Internet and related technologies. The key takeaways from my essay are:
Traditional governance mechanisms are being strained by modern technological and political realities. Newer technologies, especially digital ones, are developing at an ever-faster rate and building on top of each other, blurring lines between sectors.
Congress has failed to keep up with the quickening pace of technological change. It also continues to delegate much of its constitutional authority to agencies to deal with most policy concerns. But agencies are overwhelmed too. This situation is unlikely to change, creating a governance gap.
Decentralized governance techniques are filling the gap. Soft law—informal, iterative, experimental, and collaborative solutions—represents the new normal for technological governance. This is particularly true for information sectors, including social media platforms, for which the First Amendment acts as a major constraint on formal regulation anyway.
No one-size-fits-all tool can address the many governance issues related to fast-paced science and technology developments; therefore, decentralized governance mechanisms may be better suited to address newer policy concerns.
My arguments will frustrate many people of varying political dispositions because I adopt a highly pragmatic approach to technological governance.