Here’s a new DC EKG podcast I recently appeared on to discuss the current state of policy development surrounding artificial intelligence. In our wide-ranging chat, we discussed:

* why a sectoral approach to AI policy is superior to general purpose licensing
* why comprehensive AI legislation will not pass in Congress
* the best way to deal with algorithmic deception
* why Europe lost its tech sector
* how a global AI regulator threatens our safety
* the problem with Biden’s AI executive order
* will AI policy follow the same path as nuclear policy?
* global innovation arbitrage & the innovation cage
* AI, health care & FDA regulation
* AI regulation vs trade secrets
* is AI transparency / auditing the solution?

Listen to the full show here or here. To read more about current AI policy developments, check out my “Running List of My Research on AI, ML & Robotics Policy.”

 

My latest dispatch from the frontlines of the artificial intelligence policy wars in Washington looks at the major proposals to regulate AI. In my new essay, “Artificial Intelligence Legislative Outlook: Fall 2023 Update,” I argue that there are three major impediments to getting major AI legislation over the finish line in Congress: (1) the breadth and complexity of the issue; (2) the multiplicity of concerns and special interests; and (3) the dominance of extreme rhetoric and proposals in the discussion.

If Congress wants to get something done in this session, they’ll need to do two things: (1) set aside the most radical regulatory proposals (like big new AI agencies or licensing schemes); and (2) break AI policy down into its smaller subcomponents and then prioritize among them where policy gaps might exist.

Prediction: Congress will not pass any AI-related legislation this session due to the factors identified in my essay. The temptation to “go big” with everything-and-the-kitchen-sink approaches to AI regulation (especially with extreme ideas like new agencies and licenses) will doom AI legislation. It’s also worth noting that Washington’s swelling interest in AI policy is having a crowding-out effect on other important legislative proposals that might have advanced otherwise, such as the baseline privacy bill (ADPPA) and driverless car legislation. Many want to advance those efforts first, but the AI focus makes that hard.

Read the entire essay here.

The Brookings Institution hosted this excellent event on frontier AI regulation this week, featuring a panel discussion I was on that followed opening remarks from Rep. Ted Lieu (D-CA). I come in around the 51-minute mark of the event video and explain why I worry that AI policy now threatens to devolve into an all-out war on computation, and on open source innovation in particular.

I argue that some pundits and policymakers appear to be on the way to substituting a very real existential risk (authoritarian government control over computation and science) for a hypothetical existential risk of powerful AGI. I explain how there are better, less destructive ways to address frontier AI concerns than the highly repressive approaches currently being considered.

I have developed these themes and arguments at much greater length in a series of essays on Medium over the past few months. If you care to read more, the four key articles to begin with are:

In June, I also released this longer R Street Institute report on “Existential Risks & Global Governance Issues around AI & Robotics,” and then spent an hour talking about these issues on the TechPolicyPodcast episode “Who’s Afraid of Artificial Intelligence?” All of my past writing and speaking on AI, ML, and robotics policy can be found here, and that list is updated every month.

As always, I’ll have much more to say on this topic as the war on computation expands. This is quickly becoming the most epic technology policy battle of modern times.

It was my pleasure to participate in this Cato Institute event today on “Who’s Leading on AI Policy? Examining EU and U.S. Policy Proposals and the Future of AI.” Cato’s Jennifer Huddleston hosted, and also participating was Boniface de Champris, Policy Manager with the Computer and Communications Industry Association. Here’s a brief outline of some of the issues we discussed:

  • What are the 7 leading concerns driving AI policy today?
  • What is the difference between horizontal vs. vertical AI regulation?
  • Which agencies are currently moving to extend their reach and regulate AI tech?
  • What’s going on at the state, local, and municipal level in the US on AI policy?
  • How will the so-called “Brussels Effect” influence the course of AI policy in the US?
  • What have the results been of the EU’s experience with the GDPR?
  • How will the EU AI Act work in practice?
  • Can we make algorithmic systems perfectly transparent / “explainable”?
  • Should AI innovators be treated as ‘guilty until proven innocent’ of certain risks?
  • How will existing legal concepts and standards (like civil rights law and unfair and deceptive practices regulation) be applied to algorithmic technologies?
  • Do we have a fear-based model of AI governance currently? What role has science fiction played in fueling that?
  • What role will open source AI play going forward?
  • Is AI licensing a good idea? How would it even work?
  • Can AI help us identify and address societal bias and discrimination?

Again, you can watch the entire video here and, as always, here’s my “Running List of My Research on AI, ML & Robotics Policy.”

The New York Times today published my response to an op-ed by Senators Lindsey Graham & Elizabeth Warren calling for a new “Digital Consumer Protection Commission” to micromanage the high-tech information economy. “Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution,” I argue. Here’s my full response:

Senators Lindsey Graham and Elizabeth Warren propose a new federal mega-regulator for the digital economy that threatens to undermine America’s global technology standing.

A new “licensing and policing” authority would stall the continued growth of advanced technologies like artificial intelligence in America, leaving China and others to claw back crucial geopolitical strategic ground.

America’s digital technology sector enjoyed remarkable success over the past quarter-century — and provided vast investment and job growth — because the U.S. rejected the heavy-handed regulatory model of the analog era, which stifled innovation and competition.

The tech companies that Senators Graham and Warren cite (along with countless others) came about over the past quarter-century because we opened markets and rejected the monopoly-preserving regulatory regimes that had been captured by old players.

The U.S. has plenty of federal bureaucracies, and many already oversee the issues that the senators want addressed. Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution.

There’s been exciting progress in US drone policy in the past few months. First, in April the FAA announced surprising new guidance in its Aeronautical Information Manual regarding drone airspace access. As I noted in an article for the State Aviation Journal, the updated Manual notes:

There can be certain local restrictions to airspace. While the FAA is designated by federal law to be the regulator of the NAS [national airspace system], some state and local authorities may also restrict access to local airspace. UAS pilots should be aware of these local rules.

That April update has been followed by a bigger drone policy update from the FAA. On July 14, the FAA went further than the April guidance and updated and replaced its 2015 guidance to states and localities about drone regulation and airspace policy.

In this July 2023 guidance, I was pleasantly surprised to see the FAA recognize some state and local authority in the “immediate reaches” airspace. Notably, in the new guidance the FAA expressly notes that state laws that “prohibit [or] restrict . . . operations by UAS in the immediate reaches of property” are an example of laws not subject to conflict preemption.

A handful of legal scholars–like ASU Law Professor Troy Rule and me–have urged federal officials for years to recognize that states, localities, and landowners have a significant say in what happens in very low-altitude airspace–the “immediate reaches” above land. That’s because the US Supreme Court in US v. Causby recognized that the “immediate reaches” above land are real property owned by the landowner:

[I]t is obvious that, if the landowner is to have full enjoyment of the land, he must have exclusive control of the immediate reaches of the enveloping atmosphere. …As we have said, the flight of airplanes, which skim the surface but do not touch it, is as much an appropriation of the use of the land as a more conventional entry upon it.

Prior to these recent updates, the FAA’s position on which rules apply in very low-altitude airspace–FAA rules or state property rules–was confusing. The agency informally asserts authority to regulate drone operations down to “the grass tips”; however, many landowners don’t want drones entering the airspace immediately above their land without permission and would sue to protect their property rights. This is not a purely academic concern: the lack of clarity about whether and when drones can fly in very low-altitude airspace has been damaging for the industry. As the Government Accountability Office told Congress in 2020:

The legal uncertainty surrounding these [low-altitude airspace] issues is presenting challenges to integration of UAS [unmanned aircraft systems] into the national airspace system.


With this July update, the FAA helps clarify matters. To my knowledge, this is the first mention of “immediate reaches,” and implicit reference to Causby, by the FAA. The update helpfully protects, in my view, property rights and federalism. It also represents a win for the drone industry, which finally has some federal clarity after a decade of uncertainty about how low drones can fly. Drone operators now know they can sometimes be subject to local rules about aerial trespass. States and cities now know that they can create certain, limited prohibitions, which will be helpful for protecting sensitive locations like neighborhoods, stadiums, prisons, and state parks and conservation areas.

As an aside: It seems possible that one motivation for the FAA adding this language is to foreclose future takings litigation (a la Cedar Point Nursery v. Hassid) against the FAA. With this new guidance, the FAA can now point out in future takings litigation that it does not authorize drone operations in the immediate reaches of airspace; the guidance indicates that operations in the immediate reaches are largely a question of state property and trespass laws.

On the whole, I think this new FAA guidance is strong, especially the first formal FAA recognition of some state authority over the “immediate reaches.” That said, as a USDOT Inspector General report to Congress pointed out last year, the FAA has not been responsive when state officials have questions about creating drone rules to complement federal rules. In 2018, for instance, a lead State “participant [in an FAA drone program] requested a clarification as to whether particular State laws regarding UAS conflicted with Federal regulations. According to FAA, as of February 2022 . . . FAA has not yet provided an opinion in response to that request.”

Four-plus years of silence from the FAA is a long time for a state official to wait, and it’s a lifetime for a drone startup looking for legal clarity. I do worry about agency non-answers on preemption questions from states, and about how other provisions in this new guidance will be interpreted. Hopefully this new guidance means FAA employees can be more responsive to inquiries from state officials. With the April and July airspace policy updates, the FAA, state aviation offices, the drone industry, and local officials are in a better position to create commercial drone networks nationwide, while protecting the property and privacy expectations of residents.

Further Reading

See my July report on drones and airspace policy for state officials, including state rankings: “2023 State Drone Commerce Rankings: How prepared is your state for drone commerce?”

As I noted in a recent interview with James Pethokoukis for his Faster, Please! newsletter, “[t]he current policy debate over artificial intelligence is haunted by many mythologies and mistaken assumptions. The most problematic of these is the widespread belief that AI is completely ungoverned today.” In a recent R Street Institute report and series of other publications, I have documented just how wrong that particular assumption is.

The first thing I try to remind everyone is that the U.S. federal government is absolutely massive—2.1 million employees, 15 cabinet agencies, 50 independent federal commissions and 434 federal departments. Strangely, when policymakers and pundits deliver remarks on AI policy today, they seem to completely ignore all that regulatory capacity while casually tossing out proposals to add more and more layers of regulation and bureaucracy on top of it. Well, I say why not see if the existing regulations and bureaucracy are working first, and then we can have a chat about what more is needed to fill gaps.

And a lot is being done on this front. In a new blog post for R Street, I offer a brief summary of some of the most important recent efforts.

Can we advance AI safety without new international regulatory bureaucracies, licensing schemes or global surveillance systems? I explore that question in my latest R Street Institute study, “Existential Risks & Global Governance Issues around AI & Robotics” (31 pages). My report rejects extremist thinking about AI arms control and stresses how the “realpolitik” of international AI governance is such that things cannot and must not be solved through silver-bullet gimmicks and grandiose global government regulatory regimes.

The report uses Nick Bostrom’s “vulnerable world hypothesis” as a launching point and discusses how his five specific control mechanisms for addressing AI risks have started having real-world influence with extreme regulatory proposals now being floated. My report also does a deep dive into the debate about a proposed global ban on “killer robots” and looks at how past treaties and arms control efforts might apply, or what we can learn from them about what won’t work.

I argue that proposals to impose global controls on AI through a worldwide regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI developments are largely futile because many nations will not agree to them. As with nuclear and chemical weapons, treaties, accords, sanctions and other multilateral agreements can help address some threats of malicious uses of AI or robotics. But trade-offs are inevitable, and addressing one type of existential risk sometimes can give rise to other risks.

A culture of AI safety by design is critical. But there is an equally compelling interest in ensuring algorithmic innovations are developed and made widely available to society. The most effective solution to technological problems usually lies in more innovation, not less. Many other multistakeholder and multilateral efforts can help advance AI safety; the final third of my study is devoted to a discussion of them. Continuous communication, coordination, and cooperation—among countries, developers, professional bodies and other stakeholders—will be essential.

This week, I appeared on the TechFreedom Tech Policy Podcast to discuss “Who’s Afraid of Artificial Intelligence?” It’s an in-depth, wide-ranging conversation about all things AI-related. Here’s a summary of what host Corbin Barthold and I discussed:

1. The “little miracles happening every day” thanks to AI

2. Is AI a “born free” technology?

3. Potential anti-competitive effects of AI regulation

4. The flurry of joint letters

5. The political realities of a new AI regulatory agency

6. The EU’s Precautionary Principle tech policy disaster

7. The looming “war on computation” & open source

8. The role of common law for AI

9. Is Sam Altman breaking the very laws he proposes?

10. Do we need an IAEA for AI, or an “AI Island”?

11. Nick Bostrom’s global control & surveillance model

12. Why “doom porn” dominates in academic circles

13. Will AI take all the jobs?

14. Smart regulation of algorithmic technology

15. How the “pacing problem” is sometimes the “pacing benefit”

 

It was my pleasure to recently appear on the Independent Women’s Forum’s “She Thinks” podcast to discuss “Artificial Intelligence for Dummies.” In this 24-minute conversation with host Beverly Hallberg, I outline basic definitions, identify potential benefits, and then consider some of the risks associated with AI, machine learning, and algorithmic systems.

Reminder: you can find all my relevant past work on these issues via my “Running List of My Research on AI, ML & Robotics Policy.”