Profectus is an excellent new online magazine featuring essays and interviews on the intersection of academic literature, public policy, civilizational progress, and human flourishing. The Spring 2022 edition of the magazine features a “Progress Roundtable” in which six different scholars were asked to contribute their thoughts on three general questions:
What is progress?
What are the most significant barriers holding back further progress?
If those challenges can be overcome, what does the world look like in 50 years?
I was honored to be asked by Clay Routledge to contribute answers to those questions alongside other scholars, including Steven Pinker (Harvard University), Jason Crawford (Roots of Progress), Matt Clancy (Institute for Progress), Marian Tupy (HumanProgress.org), and James Pethokoukis (AEI). I encourage you to jump over to the roundtable and read all of their excellent responses. I've included my answers below:
On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:
Which governance vision should guide the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
Which AI sectors are currently witnessing the most exciting forms of innovation?
What are the fundamental policy fault lines in the AI policy debates today?
Will fears about disruption and automation lead to a new Luddite movement?
How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
How did automation affect traditional jobs and sectors?
Will the European Union’s AI Act become a global model for regulation, and will it have a “Brussels Effect” that forces innovators across the world to come into compliance with EU regulatory mandates?
How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
What do we mean by “existential risk” as it pertains to artificial intelligence?
I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!