7 AI Policy Issues to Watch in 2023 and Beyond

February 10, 2023

In my latest R Street Institute blog post, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” I discuss the big issues confronting artificial intelligence and machine learning in the coming year and beyond. I note that AI regulatory proposals are multiplying fast and come in two general varieties: broad-based and targeted. Broad-based algorithmic regulation would address the use of these technologies holistically, across many sectors and concerns. By contrast, targeted algorithmic regulation aims to address specific AI applications or concerns. In the short term, targeted or “sectoral” regulatory proposals stand a better chance of being implemented.

I go on to identify seven major issues of concern that will drive these policy proposals. They include:

1) Privacy and Data Collection

2) Bias and Discrimination

3) Free Speech and Disinformation

4) Kids’ Safety

5) Physical Safety and Cybersecurity

6) Industrial Policy and Workforce Issues

7) National Security and Law Enforcement Issues

Of course, each of these issues includes many sub-issues and nuanced concerns. But I also note that “this list only scratches the surface in terms of the universe of AI policy issues.” Algorithmic policy considerations are now being discussed in many other fields, including education, insurance, financial services, energy markets, intellectual property, retail and trade, and more. I’ll be rolling out a new series of essays examining all these issues throughout the year.

But, as I note in the essay’s conclusion, the danger of overreach exists with early regulatory efforts:

AI risks deserve serious attention, but an equally serious risk exists that an avalanche of fear-driven regulatory proposals will suffocate different life-enriching algorithmic innovations. There is a compelling interest in ensuring that AI innovations are developed and made widely available to society. Policymakers should not assume that important algorithmic innovations will just magically come about; our nation must get its innovation culture right if we hope to create a better, more prosperous future.

America needs a flexible governance approach for algorithmic systems that avoids heavy-handed, top-down controls as a first-order solution. “There is no use worrying about the future if we cannot even invent it first,” I conclude.

