My Latest Study on AI Governance

April 20, 2023

The R Street Institute has just released my latest study on AI governance and how to address “alignment” concerns in a bottom-up fashion. The 40-page report is entitled, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”

My report asks: is it possible to address AI alignment without adopting the Precautionary Principle as the default governance baseline? I explain how that is indeed possible. While some critics claim that no one is seriously trying to deal with AI alignment today, my report explains how no technology in history has been scrutinized more heavily this early in its life cycle than AI, machine learning and robotics. The number of ethical frameworks out there already is astonishing. We don’t have too few alignment frameworks; we probably have too many!

We need to get serious about bringing some consistency to these efforts and finding more concrete ways to build a culture of safety by embedding ethics-by-design. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society.

Although some safeguards will be needed to minimize certain AI risks, a more agile and iterative governance approach can address these concerns without creating overbearing, top-down mandates that would hinder algorithmic innovation, especially at a time when America is looking to stay ahead of China and other nations in the global AI race.

My report explores the many ethical frameworks that professional associations have already formulated, as well as the various other “soft law” frameworks that have been devised. I also consider how AI auditing and algorithmic impact assessments can be used to help formalize the twin objectives of “ethics-by-design” and keeping “humans in the loop,” which are the two principles that drive most AI governance frameworks. But it is absolutely essential that audits and impact assessments be done right, to ensure they do not become an overbearing, compliance-heavy, and politicized nightmare that would undermine algorithmic entrepreneurialism and computational innovation.

Finally, my report reviews the extensive array of existing government agencies and policies that already govern artificial intelligence and robotics, as well as the wide variety of court-based common law remedies that cover algorithmic innovations. The notion that America has no law or regulation covering artificial intelligence today is massively wrong, as my report explains in detail.

I hope you’ll take the time to check out my new report. This and my previous report on “Getting AI Innovation Culture Right” serve as the foundation of everything we have coming on AI and robotics from the R Street Institute. Next up will be a massive study on global AI “existential risks” and national security issues. Stay tuned. Much more to come!

In the meantime, you can find all my recent work here on my “Running List of My Research on AI, ML & Robotics Policy.”

