The Technology Policy Institute has posted the video of my talk at the 2024 Aspen Forum panel on “How Should we Regulate the Digital World?” My remarks run from 33:33–44:12 of the video. I also elaborate briefly during Q&A.
1) “We need to have conversation about the future of AI and the risks that it poses.”
2) “We should get a bunch of smart people in a room and figure this out.”
I note that, if you’ve read enough essays, books, or social media posts about artificial intelligence (AI) and robotics—among other emerging technologies—then chances are you’ve stumbled on variants of these two arguments many times over. They are almost impossible to disagree with in theory, but when you start to investigate what they actually mean in practice, they are revealed to be largely meaningless rhetorical flourishes that threaten to hold up meaningful progress on the AI front. I go on to argue in my essay that:
I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with: (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and, (b) the fact that people who are making such statements apparently have not spent much time investigating the remarkable number of very serious conversations that have already taken place, or which are ongoing, about AI issues.
In fact, it very well could be the case that we have too many conversations going on currently about AI issues and that the bigger problem is instead one of better coordinating the important lessons and best practices that we have already learned from those conversations.
I then unpack each of those lines and explain what is wrong with them in more detail.
I’ve been floating around in conservative policy circles for 30 years and I have spent much of that time covering media policy and child safety issues. My time in conservative circles began in 1992 with a nine-year stint at the Heritage Foundation, where I launched the organization’s policy efforts on media regulation, the Internet, and digital technology. Meanwhile, my work on child safety has spanned four think tanks, multiple blue ribbon child safety commissions, countless essays, dozens of filings and testimonies, and even a multi-edition book.
During this three-decade run, I’ve tried my hardest to find balanced ways of addressing some of the legitimate concerns that many conservatives have about kids, media content, and online safety issues. Raising kids is the hardest job in the world. My daughter and son are now off at college, but the last twenty years of helping them figure out how to navigate the world and all the challenges it poses were filled with difficulties. This was especially true because my daughter and son faced completely different challenges when it came to media content and online interactions. Simply put, there is no one-size-fits-all playbook when it comes to raising kids or addressing concerns about healthy media interactions.
On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:
What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
Which AI sectors are witnessing the most exciting forms of innovation currently?
What are the fundamental policy fault lines in the AI policy debates today?
Will fears about disruption and automation lead to a new Luddite movement?
How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
How did automation affect traditional jobs and sectors?
Will the European Union’s AI Act become a global model for regulation and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
What do we mean by “existential risk” as it pertains to artificial intelligence?
I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!
If you haven’t yet had the chance to check out the new Progress Forum, I encourage you to do so. It’s a discussion group for progress studies and all things related to it. The Forum is sponsored by The Roots of Progress. Even though the Forum is still in pre-launch phase, there are already many interesting threads worth checking out. It was my honor to contribute one of the first on the topic, “Where is ‘Progress Studies’ Going?” It’s an effort to sort through some of the questions and challenges facing the Progress Studies movement in terms of focus and philosophical grounding. I thought I would just reproduce the essay here, but I encourage you to jump over to the Progress Forum to engage in discussion about it, or the many other excellent discussions happening there on other issues.
Almost every argument against technological innovation and progress that we hear today was identified and debunked by Samuel C. Florman a half century ago. Few others since him have mounted a more powerful case for the importance of innovation to human flourishing than Florman did throughout his lifetime.
Chances are you’ve never heard of him, however. As prolific as he was, Florman did not command as much attention as the endless parade of tech critics whose apocalyptic predictions grabbed all the headlines. An engineer by training, Florman became concerned about the growing criticism of his profession throughout the 1960s and 70s. He pushed back against that impulse in a series of books over the next two decades, including most notably: The Existential Pleasures of Engineering (1976), Blaming Technology: The Irrational Search for Scapegoats (1981), and The Civilized Engineer (1987). He was also a prolific essayist, penning hundreds of articles for a wide variety of journals, magazines, and newspapers beginning in 1959, and writing a regular column for MIT Technology Review for sixteen years.
Florman’s primary mission in his books and many of those essays was to defend the engineering profession against attacks emanating from various corners. More broadly, as he noted in a short autobiography on his personal website, Florman was interested in discussing “the relationship of technology to the general culture.”
Florman could be considered a “rational optimist,” to borrow Matt Ridley’s notable term[1] for those of us who believe, as I have summarized elsewhere, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment.[2] Rational optimists are highly pragmatic and base their optimism on facts and historical analysis, not on dogmatism or blind faith in any particular viewpoint, ideology, or gut feeling. But they are unified in the belief that technological change is a crucial component of moving the needle on progress and prosperity.
Florman’s unique contribution to advancing rational optimism came in the way he itemized the various claims made by tech critics and then powerfully debunked each one of them.
In his debut essay for the new Agglomerations blog, my former colleague Caleb Watney, now Director of Innovation Policy for the Progressive Policy Institute, seeks to better define a few important terms, including: technology policy, innovation policy, and industrial policy. In the end, however, he decides to basically dispense with the term “industrial policy” because, when it comes to defining these terms, “it is useful to have a limiting principle and it’s unclear what the limiting principle is for industrial policy.”
I sympathize. Debates about industrial policy are frustrating and unproductive when people cannot even agree on the parameters of a sensible discussion. But I don’t think we need to dispense with the term altogether. We just need to define it somewhat more narrowly to make sure it remains useful.
First, let’s consider how this exact same issue played out three decades ago. In the 1980s, many articles and books featured raging debates about the proper scope of industrial policy. I spent my early years as a policy analyst devouring all these books and essays because I originally wanted to be a trade policy analyst. And in the late 1980s and early 1990s, you could not be a trade policy analyst without confronting industrial policy arguments.
Interoperability is a topic that has long been of interest to me. How networks, platforms, and devices work with each other–or sometimes fail to–is an important engineering, business, and policy issue. Back in 2012, I spilled out over 5,000 words on the topic when reviewing John Palfrey and Urs Gasser’s excellent book, Interop: The Promise and Perils of Highly Interconnected Systems.
I’ve always struggled with interoperability issues, however, and often avoided them because of the sheer complexity of it all. Some interesting recent essays by sci-fi author and digital activist Cory Doctorow remind me that I need to get back on top of the issue. His latest essay is a call-to-arms in favor of what he calls “adversarial interoperability.” “[T]hat’s when you create a new product or service that plugs into the existing ones without the permission of the companies that make them,” he says. “Think of third-party printer ink, alternative app stores, or independent repair shops that use compatible parts from rival manufacturers to fix your car or your phone or your tractor.”
Doctorow is a vociferous defender of expanded digital access rights of many flavors and his latest essays on interoperability expand upon his previous advocacy for open access and a general freedom to tinker. He does much of this work with the Electronic Frontier Foundation (EFF), which shares his commitment to expanded digital access and interoperability rights in various contexts.
I’m in league with Doctorow and EFF on some of these things, but also find myself thinking they go much too far in other ways. At root, their work and advocacy raise a profound question: should there be any general right to exclude on digital platforms? Although he doesn’t always come right out and say it, Doctorow’s work often seems like an outright rejection of any sort of property rights in networks or platforms. Generally speaking, he does not want the law to recognize any right for tech platforms to exclude using digital fences of any sort.
This month’s Cato Unbound symposium features a conversation about the continuing relevance of Albert Hirschman’s Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States, fifty years after its publication. It was a slender but important book that has influenced scholars in many different fields over the past five decades. The Cato symposium features a discussion between me and three other scholars who have attempted to use Hirschman’s framework when thinking about modern social, political, and technological developments.
My lead essay considers how we might use Hirschman’s insights to consider how entrepreneurialism and innovative activities might be reconceptualized as types of voice and exit. Response essays by Mikayla Novak, Ilya Somin, and Max Borders broaden the discussion to highlight how to think about Hirschman’s framework in various contexts. And then I returned to the discussion this week with a response essay of my own attempting to tie those essays together and extend the discussion about how technological innovation might provide us with greater voice and exit options going forward. Each contributor offers important insights and illustrates the continuing importance of Hirschman’s book.
I encourage you to jump over to Cato Unbound to read the essays and join the conversations in the comments.
After reading LM Sacasas’ recent piece on moral communities, I couldn’t help but wonder if the piece was written in the esoteric mode.
Let me explain by some meandering.
Now, I am surely going to butcher his argument, so take a read of it yourself, but there is an interesting call-and-response structure to the piece. He begins with commentary on the “frequent deployment of the rhetorical we” in discussions over the morality of technology. Then, channeling Langdon Winner, he notes approvingly that “What matters here is that this lovely ‘we’ suggests the presence of a moral community that may not, in fact, exist at all, at least not in any coherent, self-conscious form.”