September 2023

The Brookings Institution hosted an excellent event on frontier AI regulation this week, featuring a panel discussion I joined following opening remarks from Rep. Ted Lieu (D-CA). I come in around the 51-minute mark of the event video and explain why I worry that AI policy now threatens to devolve into an all-out war on computation, and on open source innovation in particular.

I argue that some pundits and policymakers appear to be on the way to substituting a very real existential risk (authoritarian government control over computation and science) for a hypothetical existential risk posed by powerful AGI. I explain that there are better, less destructive ways to address frontier AI concerns than the highly repressive approaches currently under consideration.

I have developed these themes and arguments at much greater length in a series of essays on Medium over the past few months. If you care to read more, the four key articles to begin with are:

In June, I also released this longer R Street Institute report on “Existential Risks & Global Governance Issues around AI & Robotics,” and then spent an hour discussing these issues on the TechPolicyPodcast episode “Who’s Afraid of Artificial Intelligence?” All of my past writing and speaking on AI, ML, and robotics policy can be found here, and that list is updated every month.

As always, I’ll have much more to say on this topic as the war on computation expands. This is quickly becoming the most epic technology policy battle of modern times.

It was my pleasure to participate in this Cato Institute event today on “Who’s Leading on AI Policy? Examining EU and U.S. Policy Proposals and the Future of AI.” Cato’s Jennifer Huddleston hosted, and Boniface de Champris, Policy Manager with the Computer and Communications Industry Association, also participated. Here’s a brief outline of some of the issues we discussed:

  • What are the 7 leading concerns driving AI policy today?
  • What is the difference between horizontal vs. vertical AI regulation?
  • Which agencies are currently moving to extend their reach and regulate AI technologies?
  • What’s going on at the state, local, and municipal level in the US on AI policy?
  • How will the so-called “Brussels Effect” influence the course of AI policy in the US?
  • What have the results been of the EU’s experience with the GDPR?
  • How will the EU AI Act work in practice?
  • Can we make algorithmic systems perfectly transparent / “explainable”?
  • Should AI innovators be treated as “guilty until proven innocent” of certain risks?
  • How will existing legal concepts and standards (like civil rights law and unfair and deceptive practices regulation) be applied to algorithmic technologies?
  • Do we have a fear-based model of AI governance currently? What role has science fiction played in fueling that?
  • What role will open source AI play going forward?
  • Is AI licensing a good idea? How would it even work?
  • Can AI help us identify and address societal bias and discrimination?

Again, you can watch the entire video here and, as always, here’s my “Running List of My Research on AI, ML & Robotics Policy.”