Is AI Really an Unregulated Wild West?

June 22, 2023

As I noted in a recent interview with James Pethokoukis for his Faster, Please! newsletter, “[t]he current policy debate over artificial intelligence is haunted by many mythologies and mistaken assumptions. The most problematic of these is the widespread belief that AI is completely ungoverned today.” In a recent R Street Institute report and series of other publications, I have documented just how wrong that particular assumption is.

The first thing I try to remind everyone is that the U.S. federal government is absolutely massive—2.1 million employees, 15 cabinet agencies, 50 independent federal commissions and 434 federal departments. Strangely, when policymakers and pundits deliver remarks on AI policy today, they seem to completely ignore all that regulatory capacity while simultaneously casually tossing out proposals to just add more and more layers of regulation and bureaucracy to it. Well, I say why not see if the existing regulations and bureaucracy are working first, and then we can have a chat about what more is needed to fill gaps.

And a lot is being done on this front. In a new blog post for R Street, I offer a brief summary of some of the most important recent efforts.

  • In January, the National Institute of Standards and Technology released its “AI Risk Management Framework,” which was created through a multi-year, multi-stakeholder process. It is intended to help developers and policymakers better understand how to identify and address various types of potential algorithmic risk.
  • The Food and Drug Administration (FDA) has been using its broad regulatory powers to review and approve AI and ML-enabled medical devices for many years already, and the agency possesses broad recall authority that can address risks that develop from algorithmic or robotic systems. The FDA is currently refining its approach to AI/ML in a major proceeding.
  • The National Highway Traffic Safety Administration (NHTSA) has issued repeated revisions to its driverless car policy guidelines since 2016. Like the FDA, the NHTSA also has broad recall authority, which it used in February 2023 to mandate a recall of Tesla’s Full Self-Driving system, requiring an over-the-air software update to over 300,000 vehicles equipped with the software package.
  • In 2021, the Consumer Product Safety Commission issued a major report highlighting the many policy tools it already has to address AI risks. Like the FDA and NHTSA, the agency has recall authority that can address risks that develop from consumer-facing algorithmic or robotic systems.
  • In April, Securities and Exchange Commission Chairman Gary Gensler told Congress that his agency is moving to address AI and predictive data analytics in finance and investing.
  • The Federal Trade Commission (FTC) has become increasingly active on AI policy issues and has noted in a series of recent blog posts that the agency is ready to use its broad authority over “unfair and deceptive practices” to police algorithmic claims or applications.
  • The Equal Employment Opportunity Commission (EEOC) recently released a memo as part of its “ongoing effort to help ensure that the use of new technologies complies with federal [equal employment opportunity] law.” It outlines how existing employment antidiscrimination laws and policies cover algorithmic technologies.
  • In May, the Consumer Financial Protection Bureau (CFPB) issued a statement clarifying how existing federal anti-discrimination law already applies to complex algorithmic systems used for lending decisions. The agency also recently released a report on the use of chatbots in consumer finance, explaining the many ways that the “CFPB is actively monitoring the market” for risks associated with these new services.
  • Along with the EEOC, the FTC and the CFPB, the Civil Rights Division of the Department of Justice released a joint statement in April in which the agency heads said they would be looking to take preemptive steps to address algorithmic discrimination.

“This is real-time algorithmic governance in action,” I argue. Again, additional regulatory steps may be needed later to fill gaps in current law, but policymakers should begin by acknowledging that a lot of algorithmic oversight authority exists across the federal government. Meanwhile, the courts and our common law system are also starting to address novel AI problems as cases develop. For more along these lines, see my recent essay on “The Many Ways Government Already Regulates Artificial Intelligence.”

So, next time someone suggests that AI is developing in an unregulated “Wild West,” remind them of all these existing laws, agencies, and regulatory efforts. And then also ask them a different question no one is really exploring currently: Could it be the case that many agencies are already overregulating some algorithmic and autonomous systems? (I’m looking at you, FAA!) Why is no one worried about that possibility as the global AI race with China and other countries intensifies?
