Artificial Intelligence & Robotics – Technology Liberation Front
https://techliberation.com – Keeping politicians' hands off the Net & everything else related to technology

Event video: “AI Policy in President Trump’s Second Term”
https://techliberation.com/2024/12/19/event-video-ai-policy-in-president-trumps-second-term/ (Thu, 19 Dec 2024)

Here is the video from a December 10th Federalist Society event on “AI Policy in President Trump’s Second Term.” It features my comments alongside:

  • Neil Chilson, Head of AI Policy, Abundance Institute
  • Satya Thallam, Senior Vice President, Americans for Responsible Innovation
  • Prof. Kevin Frazier, Assistant Professor of Law, St. Thomas University Benjamin L. Crump College of Law

As always, all my recent essays, podcasts, and event videos about AI policy can be found here.

Panel Video: How Should We Regulate the Digital World & AI?
https://techliberation.com/2024/09/06/panel-video-how-should-we-regulate-the-digital-world-ai/ (Fri, 06 Sep 2024)

The Technology Policy Institute has posted the video of my talk on the 2024 Aspen Forum panel, “How Should We Regulate the Digital World?” My remarks run from 33:33–44:12 of the video, and I also elaborate briefly during Q&A.

My remarks at this year’s TPI Aspen Forum panel were derived from my R Street Institute essay, “The Policy Origins of the Digital Revolution & the Continuing Case for the Freedom to Innovate,” which sketches out a pro-freedom vision for the Computational Revolution.

 

We Need Federal Preemption of State & Local AI Regulation
https://techliberation.com/2024/01/22/we-need-federal-preemption-of-state-local-ai-regulation/ (Mon, 22 Jan 2024)

In my latest column for The Hill, I explore how “State and Local Meddling Threatens to Undermine the AI Revolution” in America as mountains of parochial tech mandates accumulate. We need a federal response, but we’re not likely to get the right one, I argue.

I specifically highlight the danger of new measures from big states like New York and California, but it is the full patchwork of state and local regulations that will produce a ‘death by a thousand cuts’ for AI innovation as the red tape grows and hinders capital formation.

What we need is the same sort of principled, pro-innovation federal framework for AI that we adopted for the Internet a generation ago. Specifically, we need some sort of preemption of most state and local constraints on what is inherently national (and even global) commerce and speech.

Alas, Congress appears incapable of getting even basic things done on tech policy these days. As far as I can tell, not a single AI bill in front of Congress today would preempt most of this state and local AI regulatory activity.

Worse yet, if Congress did somehow pass anything on AI right now, it’d probably just include even more anti-innovation mandates and agencies without preempting any of the state and local ones. Thus, America would just be piling bad mandates on top of bad mandates until we basically become like Europe, where innovation goes to die under piles of bureaucratic red tape.

It’s a miserable state of affairs with horrible consequences for the U.S. as global competition from China heats up on the AI front. America is sacrificing its competitive advantage on digital technology because fear-based thinking and partisan politics continue to prevent the adoption of a principled, bipartisan vision for artificial intelligence policy.

See my new Hill column for more discussion, and also make sure to check out my earlier Hill essay on “A balanced AI governance vision for America,” as well as two big R Street Institute reports from last year on how Congress can craft sensible, pro-innovation AI policy for America.

There is also a dangerous regulatory dynamic at work today in which over-regulation of artificial intelligence treats innovators as guilty until proven innocent. America is about to shoot itself in the foot just as the global race begins for the most important technological revolution of our lifetime.

Podcast: “AI – DC Policymakers Face a Crossroads”
https://techliberation.com/2023/12/12/podcast-ai-dc-policymakers-face-a-crossroads/ (Tue, 12 Dec 2023)

Here’s a new DC EKG podcast I recently appeared on to discuss the current state of policy development surrounding artificial intelligence. In our wide-ranging chat, we discussed:

  • why a sectoral approach to AI policy is superior to general purpose licensing
  • why comprehensive AI legislation will not pass in Congress
  • the best way to deal with algorithmic deception
  • why Europe lost its tech sector
  • how a global AI regulator threatens our safety
  • the problem with Biden’s AI executive order
  • will AI policy follow the same path as nuclear policy?
  • global innovation arbitrage & the innovation cage
  • AI, health care & FDA regulation
  • AI regulation vs trade secrets
  • is AI transparency / auditing the solution?

Listen to the full show here or here. To read more about current AI policy developments, check out my “Running List of My Research on AI, ML & Robotics Policy.”

 

Can Any AI Legislation Pass Congress This Session?
https://techliberation.com/2023/10/17/can-any-ai-legislation-pass-congress-this-session/ (Tue, 17 Oct 2023)

My latest dispatch from the frontlines of the artificial intelligence policy wars in Washington looks at the major proposals to regulate AI. In my new essay, “Artificial Intelligence Legislative Outlook: Fall 2023 Update,” I argue that there are 3 major impediments to getting major AI legislation over the finish line in Congress: (1) Breadth and complexity of the issue; (2) Multiplicity of concerns & special interests; & (3) Extreme rhetoric / proposals are dominating the discussion.

If Congress wants to get something done in this session, they’ll need to do two things: (1) set aside the most radical regulatory proposals (like big new AI agencies or licensing schemes); and (2) break AI policy down into its smaller subcomponents and then prioritize among them where policy gaps might exist.

Prediction: Congress will not pass any AI-related legislation this session due to the factors identified in my essay. The temptation to “go big” with everything-and-the-kitchen-sink approaches to AI regulation (especially extreme ideas like new agencies and licensing schemes) will doom AI legislation. It is also worth noting that Washington’s swelling interest in AI policy is having a crowding-out effect on other important legislative proposals that might otherwise have advanced, such as the baseline privacy bill (ADPPA) and driverless car legislation. Many want to advance those efforts first, but the AI focus makes that hard.

Read the entire essay here.

Event Video: Debating Frontier AI Regulation
https://techliberation.com/2023/09/15/event-video-debating-frontier-ai-regulation/ (Fri, 15 Sep 2023)

The Brookings Institution hosted an excellent event on frontier AI regulation this week, featuring a panel discussion I joined following opening remarks from Rep. Ted Lieu (D-CA). I come in around the 51-minute mark of the event video and explain why I worry that AI policy now threatens to devolve into an all-out war on computation, and on open source innovation in particular.

I argue that some pundits and policymakers appear to be on their way to substituting a very real existential risk (authoritarian government control over computation and science) for a hypothetical existential risk from powerful AGI. I explain how there are better, less destructive ways to address frontier AI concerns than the highly repressive approaches currently being considered.

I have developed these themes and arguments at much greater length in a series of essays on Medium over the past few months. If you care to read more, begin with the four key articles collected there.

In June, I also released a longer R Street Institute report on “Existential Risks & Global Governance Issues around AI & Robotics,” and then spent an hour talking about these issues on the TechPolicyPodcast episode “Who’s Afraid of Artificial Intelligence?” All of my past writing and speaking on AI, ML, and robotics policy can be found here, and that list is updated every month.

As always, I’ll have much more to say on this topic as the war on computation expands. This is quickly becoming the most epic technology policy battle of modern times.

Good FAA Update on State and Local Rules for Drone Airspace
https://techliberation.com/2023/08/07/good-faa-update-on-state-and-local-rules-for-drone-airspace/ (Mon, 07 Aug 2023)

There’s been exciting progress in US drone policy in the past few months. First, in April the FAA announced surprising new guidance on drone airspace access in its Aeronautical Information Manual. As I noted in an article for the State Aviation Journal, the Manual now notes:

There can be certain local restrictions to airspace. While the FAA is designated by federal law to be the regulator of the NAS [national airspace system], some state and local authorities may also restrict access to local airspace. UAS pilots should be aware of these local rules.

That April update has been followed by a bigger drone policy update from the FAA. On July 14, the FAA went further than the April guidance and updated and replaced its 2015 guidance to states and localities on drone regulation and airspace policy.

In this July 2023 guidance, I was pleasantly surprised to see the FAA recognize some state and local authority in the “immediate reaches” airspace. Notably, the new guidance expressly notes that state laws that “prohibit [or] restrict . . . operations by UAS in the immediate reaches of property” are an example of laws not subject to conflict preemption.

A handful of legal scholars, including ASU Law Professor Troy Rule and me, have urged federal officials for years to recognize that states, localities, and landowners have a significant say in what happens in very low-altitude airspace, the “immediate reaches” above land. That is because the US Supreme Court in US v. Causby recognized that the “immediate reaches” above land are real property owned by the landowner:

[I]t is obvious that, if the landowner is to have full enjoyment of the land, he must have exclusive control of the immediate reaches of the enveloping atmosphere. …As we have said, the flight of airplanes, which skim the surface but do not touch it, is as much an appropriation of the use of the land as a more conventional entry upon it.

Prior to these recent updates, the FAA’s position on which rules apply in very low-altitude airspace (FAA rules or state property rules) was confusing. The agency informally asserts authority to regulate drone operations down to “the grass tips”; however, many landowners don’t want drones entering the airspace immediately above their land without permission and would sue to protect their property rights. This is not a purely academic concern: uncertainty about whether and when drones can fly in very low-altitude airspace has been damaging for the industry. As the Government Accountability Office told Congress in 2020:

The legal uncertainty surrounding these [low-altitude airspace] issues is presenting challenges to integration of UAS [unmanned aircraft systems] into the national airspace system.


With this July update, the FAA helps clarify matters. To my knowledge, this is the first mention of “immediate reaches,” and implicit reference to Causby, by the FAA. The update helpfully protects, in my view, property rights and federalism. It also represents a win for the drone industry, which finally has some federal clarity on this after a decade of uncertainty about how low they can fly. Drone operators now know they can sometimes be subject to local rules about aerial trespass. States and cities now know that they can create certain, limited prohibitions, which will be helpful to protect sensitive locations like neighborhoods, stadiums, prisons, and state parks and conservation areas.

As an aside: it seems possible that one motivation for the FAA adding this language is to foreclose future takings litigation (a la Cedar Point Nursery v. Hassid) against the FAA. With this new guidance, the FAA can point out in future takings litigation that it does not authorize drone operations in the immediate reaches of airspace; the guidance indicates that operations in the immediate reaches are largely a question of state property and trespass laws.

On the whole, I think this new FAA guidance is strong, especially the first formal FAA recognition of some state authority over the “immediate reaches.” That said, as a USDOT Inspector General report to Congress pointed out last year, the FAA has not been responsive when state officials have questions about creating drone rules to complement federal rules. In 2018, for instance, a lead State “participant [in an FAA drone program] requested a clarification as to whether particular State laws regarding UAS conflicted with Federal regulations. According to FAA, as of February 2022 . . . FAA has not yet provided an opinion in response to that request.”

Four-plus years of silence from the FAA is a long time for a state official to wait, and it’s a lifetime for a drone startup looking for legal clarity. I do worry about agency non-answers on preemption questions from states, and about how other provisions in this new guidance will be interpreted. Hopefully the new guidance means FAA employees can be more responsive to inquiries from state officials. With the April and July airspace policy updates, the FAA, state aviation offices, the drone industry, and local officials are in a better position to create commercial drone networks nationwide while protecting the property and privacy expectations of residents.

Further Reading

See my July report on drones and airspace policy for state officials, including state rankings: “2023 State Drone Commerce Rankings: How prepared is your state for drone commerce?”.

Is AI Really an Unregulated Wild West?
https://techliberation.com/2023/06/22/is-ai-really-an-unregulated-wild-west/ (Thu, 22 Jun 2023)

As I noted in a recent interview with James Pethokoukis for his Faster, Please! newsletter, “[t]he current policy debate over artificial intelligence is haunted by many mythologies and mistaken assumptions. The most problematic of these is the widespread belief that AI is completely ungoverned today.” In a recent R Street Institute report and series of other publications, I have documented just how wrong that particular assumption is.

The first thing I try to remind everyone of is that the U.S. federal government is absolutely massive—2.1 million employees, 15 cabinet agencies, 50 independent federal commissions, and 434 federal departments. Strangely, when policymakers and pundits deliver remarks on AI policy today, they seem to completely ignore all that regulatory capacity while casually tossing out proposals to add more and more layers of regulation and bureaucracy on top of it. Well, I say why not first see whether the existing regulations and bureaucracies are working, and then we can have a chat about what more is needed to fill gaps.

And a lot is being done on this front. In a new blog post for R Street, I offer a brief summary of some of the most important recent efforts.

  • In January, the National Institute of Standards and Technology released its “AI Risk Management Framework,” which was created through a multi-year, multi-stakeholder process. It is intended to help developers and policymakers better understand how to identify and address various types of potential algorithmic risk.
  • The Food and Drug Administration (FDA) has been using its broad regulatory powers to review and approve AI- and ML-enabled medical devices for many years already, and the agency possesses broad recall authority that can address risks that develop from algorithmic or robotic systems. The FDA is currently refining its approach to AI/ML in a major proceeding.
  • The National Highway Traffic Safety Administration (NHTSA) has been issuing constant revisions to its driverless car policy guidelines since 2016. Like the FDA, the NHTSA also has broad recall authority, which it used in February 2023 to mandate a recall of Tesla’s full self-driving autonomous driving system, requiring an over-the-air software update to over 300,000 vehicles that had the software package.
  • In 2021, the Consumer Product Safety Commission issued a major report highlighting the many policy tools it already has to address AI risks. Like the FDA and NHTSA, the agency has recall authority that can address risks that develop from consumer-facing algorithmic or robotic systems.
  • In April, Securities and Exchange Commission Chairman Gary Gensler told Congress that his agency is moving to address AI and predictive data analytics in finance and investing.
  • The Federal Trade Commission (FTC) has become increasingly active on AI policy issues and has noted in a series of recent blog posts that the agency is ready to use its broad authority over “unfair and deceptive practices” involving algorithmic claims or applications.
  • The Equal Employment Opportunity Commission (EEOC) recently released a memo as part of its “ongoing effort to help ensure that the use of new technologies complies with federal [equal employment opportunity] law.” It outlines how existing employment antidiscrimination laws and policies cover algorithmic technologies.
  • In May, the Consumer Financial Protection Bureau (CFPB) issued a statement clarifying how existing federal anti-discrimination law already applies to complex algorithmic systems used for lending decisions. The agency also recently released a report on the use of chatbots in consumer finance, explaining the many ways that the “CFPB is actively monitoring the market” for risks associated with these new services.
  • Along with the EEOC, the FTC, and the CFPB, the Civil Rights Division of the Department of Justice released an April joint statement indicating that the agencies will look to take preemptive steps to address algorithmic discrimination.

“This is real-time algorithmic governance in action,” I argue. Again, additional regulatory steps may be needed later to fill gaps in current law, but policymakers should begin by acknowledging that a lot of algorithmic oversight authority exists across the federal government. Meanwhile, the courts and our common law system are also starting to address novel AI problems as cases develop. For more along these lines, see my recent essay on “The Many Ways Government Already Regulates Artificial Intelligence.”

So, next time someone suggests that AI is developing in an unregulated “Wild West,” remind them of all these existing laws, agencies, and regulatory efforts. And then also ask them a different question no one is really exploring currently: Could it be the case that many agencies are already overregulating some algorithmic and autonomous systems? (I’m looking at you, FAA!) Why is no one worried about that possibility as the global AI race with China and other countries intensifies?

Additional Reading:

New Report: Do We Need Global Government to Address AI Risk?
https://techliberation.com/2023/06/16/new-report-do-we-need-global-government-to-address-ai-risk/ (Fri, 16 Jun 2023)

Can we advance AI safety without new international regulatory bureaucracies, licensing schemes, or global surveillance systems? I explore that question in my latest R Street Institute study, “Existential Risks & Global Governance Issues around AI & Robotics” (31 pages). My report rejects extremist thinking about AI arms control and stresses that the “realpolitik” of international AI governance is such that these problems cannot and must not be solved through silver-bullet gimmicks and grandiose global government regulatory regimes.

The report uses Nick Bostrom’s “vulnerable world hypothesis” as a launching point and discusses how his five specific control mechanisms for addressing AI risks have started having real-world influence with extreme regulatory proposals now being floated. My report also does a deep dive into the debate about a proposed global ban on “killer robots” and looks at how past treaties and arms control efforts might apply, or what we can learn from them about what won’t work.

I argue that proposals to impose global controls on AI through a worldwide regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI developments are largely futile because many nations will not agree to them. As with nuclear and chemical weapons, treaties, accords, sanctions and other multilateral agreements can help address some threats of malicious uses of AI or robotics. But trade-offs are inevitable, and addressing one type of existential risk sometimes can give rise to other risks.

A culture of AI safety by design is critical. But there is an equally compelling interest in ensuring algorithmic innovations are developed and made widely available to society. The most effective solution to technological problems usually lies in more innovation, not less. Many other multistakeholder and multilateral efforts can also help AI safety; the final third of my study is devoted to a discussion of them. Continuous communication, coordination, and cooperation—among countries, developers, professional bodies and other stakeholders—will be essential.

My new report concludes with a plea to reject fatalism and fanaticism when discussing global AI risks. It’s worth recalling what Bertrand Russell said in 1951 about how only global government could save humanity. He predicted “[t]he end of human life, perhaps of all life on our planet,” before the end of the century unless the world unified under “a single government, possessing a monopoly of all the major weapons of war.” He was very wrong, of course, and thank God he did not get his wish, because an effort to unite the world under one global government would have entailed different existential risks that he never seriously considered. We need to reject extremist global government solutions as the basis for controlling technological risk.

Three quick notes.

First, this new report is the third in a trilogy of major R Street Institute studies on bottom-up, polycentric AI governance. If you only read one, make it this: “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”

Second, I wrapped up this latest report a few months ago, before Microsoft and OpenAI floated new comprehensive AI regulatory controls. So, for an important follow-up to this report, please read “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control.”

Finally, if you’d like to hear me discuss many of the findings from these new reports and essays at greater length, check out my recent appearance on TechFreedom’s “Tech Policy Podcast,” with Corbin K. Barthold. We do a deep dive on all these AI governance trends and regulatory proposals.

As always, all my writing on AI, ML, and robotics can be found here, and my most recent pieces are listed below.

Additional Reading:

Podcast: “Who’s Afraid of Artificial Intelligence?”
https://techliberation.com/2023/06/12/podcast-whos-afraid-of-artificial-intelligence/ (Mon, 12 Jun 2023)

This week, I appeared on the TechFreedom “Tech Policy Podcast” to discuss “Who’s Afraid of Artificial Intelligence?” It’s an in-depth, wide-ranging conversation about all things AI. Here’s a summary of what host Corbin Barthold and I discussed:

  1. The “little miracles happening every day” thanks to AI
  2. Is AI a “born free” technology?
  3. Potential anti-competitive effects of AI regulation
  4. The flurry of joint letters
  5. The political realities of a new AI regulatory agency
  6. The EU’s Precautionary Principle tech policy disaster
  7. The looming “war on computation” & open source
  8. The role of common law for AI
  9. Is Sam Altman breaking the very laws he proposes?
  10. Do we need an IAEA for AI or an “AI Island”?
  11. Nick Bostrom’s global control & surveillance model
  12. Why “doom porn” dominates in academic circles
  13. Will AI take all the jobs?
  14. Smart regulation of algorithmic technology
  15. How the “pacing problem” is sometimes the “pacing benefit”

 

Podcast: “Artificial Intelligence for Dummies”
https://techliberation.com/2023/06/12/podcast-artificial-intelligence-for-dummies/ (Mon, 12 Jun 2023)

It was my pleasure to recently appear on the Independent Women’s Forum’s “She Thinks” podcast to discuss “Artificial Intelligence for Dummies.” In this 24-minute conversation with host Beverly Hallberg, I outline basic definitions, identify potential benefits, and then consider some of the risks associated with AI, machine learning, and algorithmic systems.

Reminder: you can find all my relevant past work on these issues via my “Running List of My Research on AI, ML & Robotics Policy.”

Podcast: Should We Regulate AI?
https://techliberation.com/2023/05/08/podcast-should-we-regulate-ai/ (Mon, 08 May 2023)

It was my pleasure to recently join Matthew Lesh, Director of Public Policy and Communications for the London-based Institute of Economic Affairs (IEA), for the IEA podcast discussion, “Should We Regulate AI?” In our wide-ranging 30-minute conversation, we discuss how artificial intelligence policy is playing out across nations, and I explain why I feel the UK has positioned itself smartly relative to the US and EU on AI policy. I argue that the UK approach encourages a better ‘innovation culture’ than the new US model being formulated by the Biden Administration.

We also went through some of the many concerns driving calls to regulate AI today, including: fears about job dislocations, privacy and security issues, national security and existential risks, and much more.

Additional reading:

My Latest Study on AI Governance
https://techliberation.com/2023/04/20/my-latest-study-on-ai-governance/ (Thu, 20 Apr 2023)

The R Street Institute has just released my latest study on AI governance and how to address “alignment” concerns in a bottom-up fashion. The 40-page report is entitled, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”

My report asks: is it possible to address AI alignment without starting from the Precautionary Principle as the default governance baseline? I explain how that is indeed possible. While some critics claim that no one is seriously trying to deal with AI alignment today, my report explains how no technology in history has been more heavily scrutinized this early in its life-cycle than AI, machine learning, and robotics. The number of ethical frameworks out there already is astonishing. We don’t have too few alignment frameworks; we probably have too many!

We need to get serious about bringing some consistency to these efforts and figure out more concrete ways to build a culture of safety by embedding ethics-by-design. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society.

Although some safeguards will be needed to minimize certain AI risks, a more agile and iterative governance approach can address these concerns without creating overbearing, top-down mandates, which would hinder algorithmic innovations – especially at a time when America is looking to stay ahead of China and other nations in the global AI race.

My report explores the many ethical frameworks that professional associations have already formulated, as well as the various other “soft law” frameworks that have been devised. I also consider how AI auditing and algorithmic impact assessments can be used to help formalize the twin objectives of “ethics-by-design” and keeping “humans in the loop,” which are the two principles that drive most AI governance frameworks. But it is absolutely essential that audits and impact assessments be done right, to ensure they do not become an overbearing, compliance-heavy, and politicized nightmare that would undermine algorithmic entrepreneurialism and computational innovation.

Finally, my report reviews the extensive array of existing government agencies and policies that ALREADY govern artificial intelligence and robotics as well as the wide variety of court-based common law solutions that cover algorithmic innovations. The notion that America has no law or regulation covering artificial intelligence today is massively wrong, as my report explains in detail.

I hope you’ll take the time to check out my new report. This and my previous report on “Getting AI Innovation Culture Right” serve as the foundation of everything we have coming on AI and robotics from the R Street Institute. Next up will be a massive study on global AI “existential risks” and national security issues. Stay tuned. Much more to come!

In the meantime, you can find all my recent work here on my “Running List of My Research on AI, ML & Robotics Policy.”


Additional Reading:

]]>
https://techliberation.com/2023/04/20/my-latest-study-on-ai-governance/feed/ 4 77114
On “Pausing” AI https://techliberation.com/2023/04/07/on-pausing-ai/ https://techliberation.com/2023/04/07/on-pausing-ai/#comments Fri, 07 Apr 2023 17:36:05 +0000 https://techliberation.com/?p=77111

Recently, the Future of Life Institute released an open letter, signed by some computer science luminaries and others, calling for a 6-month “pause” on the deployment and research of “giant” artificial intelligence (AI) technologies. Eliezer Yudkowsky, a prominent AI ethicist, then made news by arguing that the “pause” letter did not go far enough, and he proposed that governments consider “airstrikes” against data processing centers, or even be open to the use of nuclear weapons. This is, of course, quite insane. Yet this is the state of things today, as an AI technopanic seems to be growing faster than any technopanic I’ve covered in my 31 years in the field of tech policy—and I have covered a lot of them.

In a new joint essay co-authored with Brent Orrell of the American Enterprise Institute and Chris Messerole of Brookings, we argue that “the ‘pause’ we are most in need of is one on dystopian AI thinking.” The three of us recently served on a blue-ribbon Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, an independent effort assembled by the U.S. Chamber of Commerce. In our essay, we note how:

Many of these breakthroughs and applications will already take years to work their way through the traditional lifecycle of development, deployment, and adoption and can likely be managed through legal and regulatory systems that are already in place. Civil rights laws, consumer protection regulations, agency recall authority for defective products, and targeted sectoral regulations already govern algorithmic systems, creating enforcement avenues through our courts and by common law standards allowing for development of new regulatory tools that can be developed as actual, rather than anticipated, problems arise.

“Instead of freezing AI we should leverage the legal, regulatory, and informal tools at hand to manage existing and emerging risks while fashioning new tools to respond to new vulnerabilities,” we conclude. Also on the pause idea, it’s worth checking out this excellent essay from Bloomberg Opinion editors on why “An AI ‘Pause’ Would Be a Disaster for Innovation.”

The problem is not with the “pause” per se. Even if the signatories could somehow enforce a worldwide stop-work order, six months probably wouldn’t do much to halt advances in AI. If a brief and partial moratorium draws attention to the need to think seriously about AI safety, it’s hard to see much harm. Unfortunately, a pause seems likely to evolve into a more generalized opposition to progress.

The editors continue on to rightly note:

This is a formula for outright stagnation. No one can ever be fully confident that a given technology or application will only have positive effects. The history of innovation is one of trial and error, risk and reward. One reason why the US leads the world in digital technology — why it’s home to virtually all the biggest tech platforms — is that it did not preemptively constrain the industry with well-meaning but dubious regulation. It’s no accident that all the leading AI efforts are American too.

That is 100% right, and I appreciate the Bloomberg editors linking to my latest study on AI governance when they made this point. In this new R Street Institute study, I explain why “Getting AI Innovation Culture Right” is essential to make sure we can enjoy the many benefits that algorithmic systems offer while also staying ahead in the global race for competitive advantage in this space.

That report is the first in a trilogy of big studies on decentralized, flexible governance of artificial intelligence. We can achieve AI safety without crushing top-down bans or unworkable “pauses,” I argue. My next two papers are, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence” (due out April 20th) and “Existential Risks & Global Governance Issues Surrounding AI & Robotics” (due out late May or early June). I’m also working on a co-authored essay taking a deep dive into the idea of AI impact assessments / auditing (late Spring / early Summer).

Relatedly, on April 7th, DeepLearningAI held an event on “Why a 6-Month AI Pause is a Bad Idea” featuring leading AI scientists Andrew Ng and Yann LeCun discussing the trade-offs associated with the proposal. A crucial point made in the discussion is that a pause, especially a pause in the form of a governmental ban, would be a misguided innovation policy decision. They stressed that there will be policy interventions to address targeted risks from specific algorithmic applications, but that it would be a serious mistake to stop the overall development of the underlying technological capabilities. It’s worth watching.

For more on AI policy, here’s a list of some of my latest reports and essays. Much more to come. AI policy will be the biggest tech policy fight of our lifetimes.

]]>
https://techliberation.com/2023/04/07/on-pausing-ai/feed/ 2 77111
What Policy Vision for Artificial Intelligence? https://techliberation.com/2023/04/02/what-policy-vision-for-artificial-intelligence/ https://techliberation.com/2023/04/02/what-policy-vision-for-artificial-intelligence/#comments Sun, 02 Apr 2023 21:32:49 +0000 https://techliberation.com/?p=77103

In my latest R Street Institute report, I discuss the importance of “Getting AI Innovation Culture Right.” This is the first of a trilogy of major reports on what sort of policy vision and set of governance principles should guide the development of artificial intelligence (AI), algorithmic systems, machine learning (ML), robotics, and computational science and engineering more generally. More specifically, these reports seek to answer the question: Can we achieve AI safety without innovation-crushing top-down mandates and massive new regulatory bureaucracies?

These questions are particularly pertinent, as we just made it through a week in which a major open letter was issued calling for a 6-month freeze on the deployment of AI technologies, while a prominent AI ethicist argued that governments should go further and consider airstrikes on data processing centers, even if that meant being open to an exchange of nuclear weapons! On top of that, Italy became the first major nation to ban ChatGPT, the popular AI-enabled chatbot created by U.S.-based OpenAI.

My report begins from a different presumption: AI, ML and algorithmic technologies present society with enormous benefits and, while real risks exist, we can find better ways of addressing them. As I summarize:

The danger exists that policy for algorithmic systems could be formulated in such a way that innovations are treated as guilty until proven innocent—i.e., a precautionary principle approach to policy—resulting in many important AI applications never getting off the drawing board. If regulatory impediments block or slow the creation of life-enriching, and even life-saving, AI innovations, that would leave society less well-off and give rise to different types of societal risks.

I argue that it is essential we not trap AI in an “innovation cage” by establishing the wrong policy default for algorithmic governance but instead work through challenges as they come at us. The right policy default for the internet and for AI continues to be “innovation allowed.” But AI risks do require serious governance steps. Luckily, many tools exist and others are being created. While my next major report (due out April 20th) offers far more detail, this paper sketches out some of those mechanisms. 

The goal of algorithmic policy should be for policymakers and innovators to work together to find flexible, iterative, agile, bottom-up governance solutions over time. We can promote a culture of responsibility among leading AI innovators and balance safety and innovation for complex, rapidly evolving computational and computing technologies like AI. This approach is buttressed by existing laws and regulations, as well as common law and the courts.

The new Biden Admin “AI Bill of Rights” unfortunately represents a fear-based model of technology policymaking that breaks from the superior Clinton framework for the internet & digital technology. Our nation’s policy toward AI, robotics & algorithmic innovation should instead embrace a dynamic future and the enormous possibilities that await us.

Please check out my new paper for more details. Much more to come. You can also check out my running list of research on AI, ML & robotics policy.

]]>
https://techliberation.com/2023/04/02/what-policy-vision-for-artificial-intelligence/feed/ 2 77103
Why Isn’t Everyone Already Unemployed Due to Automation? https://techliberation.com/2023/03/11/why-isnt-everyone-already-unemployed-due-to-automation/ https://techliberation.com/2023/03/11/why-isnt-everyone-already-unemployed-due-to-automation/#comments Sat, 11 Mar 2023 14:16:41 +0000 https://techliberation.com/?p=77099

I have a new R Street Institute policy study out this week doing a deep dive into the question: “Can We Predict the Jobs and Skills Needed for the AI Era?” There’s lots of hand-wringing going on today about AI and the future of employment, but that’s really nothing new. In fact, in light of past automation panics, we might want to step back and ask: Why isn’t everyone already unemployed due to technological innovation?

To get my answers, please read the paper! In the meantime, here’s the executive summary:

To better plan for the economy of the future, many academics and policymakers regularly attempt to forecast the jobs and worker skills that will be needed going forward. Driving these efforts are fears about how technological automation might disrupt workers, skills, professions, firms and entire industrial sectors. The continued growth of artificial intelligence (AI), robotics and other computational technologies exacerbate these anxieties. Yet the limits of both our collective knowledge and our individual imaginations constrain well-intentioned efforts to plan for the workforce of the future. Past attempts to assist workers or industries have often failed for various reasons. However, dystopian predictions about mass technological unemployment persist, as do retraining or reskilling programs that typically fail to produce much of value for workers or society. As public efforts to assist or train workers move from general to more specific, the potential for policy missteps grows greater. While transitional-support mechanisms can help alleviate some of the pain associated with fast-moving technological disruption, the most important thing policymakers can do is clear away barriers to economic dynamism and new opportunities for workers.

I do discuss some things government can do to address automation fears at the end of the paper, but it’s important that policymakers first understand all the mistakes we’ve made with past retraining and reskilling efforts. The easiest way to help in the short term, I argue, is to clear away barriers to labor mobility and economic dynamism. Again, read the study for details.

For more info on other AI policy developments, check out my running list of research on AI, ML & robotics policy.

]]>
https://techliberation.com/2023/03/11/why-isnt-everyone-already-unemployed-due-to-automation/feed/ 3 77099
US Chamber AI Commission Launches https://techliberation.com/2023/03/11/us-chamber-ai-commission-launches/ https://techliberation.com/2023/03/11/us-chamber-ai-commission-launches/#respond Sat, 11 Mar 2023 13:54:14 +0000 https://techliberation.com/?p=77094

This week, the U.S. Chamber of Commerce Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation (AI Commission) released a major report on the policy considerations surrounding AI, machine learning (ML) and algorithmic systems. The 120-page report concluded that “AI technology offers great hope for increasing economic opportunity, boosting incomes, speeding life science research at reduced costs, and simplifying the lives of consumers.” It was my honor to serve as one of the commissioners on the AI Commission and contribute to the report.

Over at the R Street Institute blog, I offer a quick summary of the major findings and recommendations from the report and argue that, along with the National Institute of Standards and Technology (NIST)’s recently released AI Risk Management Framework, the AI Commission report offers “a constructive, consensus-driven framework for algorithmic governance rooted in flexibility, collaboration and iterative policymaking. This represents the uniquely American approach to AI policy that avoids the more heavy-handed regulatory approaches seen in other countries and it can help the United States again be a global leader in an important new technological field,” I conclude. Check out the blog post and the full AI Commission report if you are following debates over algorithmic policy issues. There’s a lot of important material in there.

For more info on AI policy developments, check out my running list of research on AI, ML & robotics policy.

]]>
https://techliberation.com/2023/03/11/us-chamber-ai-commission-launches/feed/ 0 77094
7 AI Policy Issues to Watch in 2023 and Beyond https://techliberation.com/2023/02/10/7-ai-policy-issues-to-watch-in-2023-and-beyond/ https://techliberation.com/2023/02/10/7-ai-policy-issues-to-watch-in-2023-and-beyond/#respond Fri, 10 Feb 2023 13:33:58 +0000 https://techliberation.com/?p=77088

In my latest R Street Institute blog post, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” I discuss the big issues confronting artificial intelligence and machine learning in the coming year and beyond. I note that AI regulatory proposals are multiplying fast and come in two general varieties: broad-based and targeted. Broad-based algorithmic regulation would address the use of these technologies in a holistic fashion across many sectors and concerns. By contrast, targeted algorithmic regulation looks to address specific AI applications or concerns. In the short term, targeted or “sectoral” regulatory proposals are the more likely to be implemented.

I go on to identify seven major issues of concern that will drive these policy proposals. They include:

1) Privacy and Data Collection

2) Bias and Discrimination

3) Free Speech and Disinformation

4) Kids’ Safety

5) Physical Safety and Cybersecurity

6) Industrial Policy and Workforce Issues

7) National Security and Law Enforcement Issues

Of course, each of these issues includes many sub-issues and nuanced concerns. But I also noted that “this list only scratches the surface in terms of the universe of AI policy issues.” Algorithmic policy considerations are now being discussed in many other fields, including education, insurance, financial services, energy markets, intellectual property, retail and trade, and more. I’ll be rolling out a new series of essays examining all these issues throughout the year.

But, as I note in concluding my new essay, the danger of over-reach exists with early regulatory efforts:

AI risks deserve serious attention, but an equally serious risk exists that an avalanche of fear-driven regulatory proposals will suffocate different life-enriching algorithmic innovations. There is a compelling interest in ensuring that AI innovations are developed and made widely available to society. Policymakers should not assume that important algorithmic innovations will just magically come about; our nation must get its innovation culture right if we hope to create a better, more prosperous future.

America needs a flexible governance approach for algorithmic systems that avoids heavy-handed, top-down controls as a first-order solution. “There is no use worrying about the future if we cannot even invent it first,” I conclude.

Additional Reading

]]>
https://techliberation.com/2023/02/10/7-ai-policy-issues-to-watch-in-2023-and-beyond/feed/ 0 77088
AI Policy Research: My Year in Review https://techliberation.com/2022/12/26/ai-policy-research-my-year-in-review/ https://techliberation.com/2022/12/26/ai-policy-research-my-year-in-review/#comments Mon, 26 Dec 2022 20:07:40 +0000 https://techliberation.com/?p=77073

I spent much of 2022 writing about the growing policy debate over artificial intelligence, machine learning, robotics, and the Computational Revolution more generally. Here are some of the major highlights of my work on this front.

All these essays + dozens more can be found on my: “Running List of My Research on AI, ML & Robotics Policy.” I have several lengthy studies and many shorter essays coming in the first half of 2023.

Finally, here is a Federalist Society podcast discussion about AI policy hosted by Jennifer Huddleston in which Hodan Omaar of ITIF and I offer a big picture overview of where things are headed next.

]]>
https://techliberation.com/2022/12/26/ai-policy-research-my-year-in-review/feed/ 1 77073
Tech Regulation Will Increasingly Be Driven Through the Prism of “Algorithmic Fairness” https://techliberation.com/2022/11/06/tech-regulation-will-increasingly-be-driven-through-the-prism-of-algorithmic-fairness/ https://techliberation.com/2022/11/06/tech-regulation-will-increasingly-be-driven-through-the-prism-of-algorithmic-fairness/#comments Sun, 06 Nov 2022 18:51:21 +0000 https://techliberation.com/?p=77056

We are entering a new era for technology policy in which many pundits and policymakers will use “algorithmic fairness” as a universal Get Out of Jail Free card when they push for new regulations on digital speech and innovation. Proposals to regulate things like “online safety,” “hate speech,” “disinformation,” and “bias” among other things often raise thorny definitional questions because of their highly subjective nature. In the United States, efforts by government to control these things will often trigger judicial scrutiny, too, because restraints on speech violate the First Amendment. Proponents of prior restraint or even ex post punishments understand this reality and want to get around it. Thus, in an effort to avoid constitutional scrutiny and lengthy court battles, they are engaged in a rebranding effort and seeking to push their regulatory agendas through a techno-panicky prism of “algorithmic fairness” or “algorithmic justice.”

Hey, who could possibly be against FAIRNESS and JUSTICE? Of course, the devil is always in the details, as Neil Chilson and I discuss in our new paper for The Federalist Society and Regulatory Transparency Project, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations.” We document how federal and state policymakers from both parties are currently considering a variety of new mandates for artificial intelligence (AI), machine learning, and automated systems that, if imposed, “would thunder through our economy with one of the most significant expansions of economic and social regulation – and the power of the administrative state – in recent history.”

We note how, at the federal level, bills are being floated with titles like the “Algorithmic Justice and Online Platform Transparency Act” and the “Protecting Americans from Dangerous Algorithms Act,” which would introduce far-reaching regulations requiring AI innovators to reveal more about how their algorithms work or even hold them liable if their algorithms are thought to be amplifying hateful or extremist content. Other proposed measures like the “Platform Accountability and Consumer Transparency Act” and the “Online Consumer Protection Act” would demand greater algorithmic transparency as it relates to social media content moderation policies and procedures. Finally, measures like the “Kids Online Safety Act” would require audits of algorithmic recommendation systems that supposedly target or harm children. Algorithmic regulation is also creeping into proposed privacy regulations, such as the “American Data Protection and Privacy Act of 2022.”

And then there are all the state laws–many of which have been pushed by conservatives–that would mandate “algorithmic transparency” for social media content moderation in the name of countering supposed viewpoint bias. Bills in Florida and Texas take this approach. Meanwhile, conservatives in Congress like Senator Josh Hawley (R-MO) push for bills like the “Ending Support for Internet Censorship Act,” which would require large tech companies to undergo external audits proving that their algorithms and content-moderation techniques are politically unbiased. It’s an open invitation to regulators and trial lawyers to massively regulate technology and speech under the guise of “algorithmic fairness.” Countless left-leaning law professors and European officials have already proposed a comprehensive algorithmic audit apparatus to regulate innovators in every sector.

It’s the rise of the Code Cops. If we continue down this path, it ends with a complete rejection of the permissionless innovation ethos that made America’s information technology sector a global powerhouse. Instead, we’ll be stuck with the very worst type of “Mother, May I” precautionary principle-based regulatory regime, one that will impose the equivalent of occupational licensing requirements for coders.

If code is speech, algorithms are as well. Defenders of innovation freedom need to step up and prepare for the fight to come. [See my earlier essay, “AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead.”] Chilson and I outline the broad contours of the brewing battle over freedom of speech and the freedom to innovate. It will be the most important technology policy issue of the next ten years. I hope you take the time to read our new essay and understand why. And below you will find a few dozen more essays on the same topic if you’d like to dig even deeper.

Additional Reading:


]]>
https://techliberation.com/2022/11/06/tech-regulation-will-increasingly-be-driven-through-the-prism-of-algorithmic-fairness/feed/ 4 77056
We Need to Get All the Smart People in a Room & Have a Conversation https://techliberation.com/2022/10/16/we-need-to-get-all-the-smart-people-in-a-room-have-a-conversation/ https://techliberation.com/2022/10/16/we-need-to-get-all-the-smart-people-in-a-room-have-a-conversation/#comments Sun, 16 Oct 2022 12:51:13 +0000 https://techliberation.com/?p=77052

For my latest Discourse column (“We Really Need To ‘Have a Conversation’ About AI … or Do We?”), I discuss two commonly heard lines in tech policy circles.

1) “We need to have conversation about the future of AI and the risks that it poses.”

2) “We should get a bunch of smart people in a room and figure this out.”

I note that, if you’ve read enough essays, books or social media posts about artificial intelligence (AI) and robotics—among other emerging technologies—then chances are you’ve stumbled on variants of these two arguments many times over. They are almost impossible to disagree with in theory, but when you start to investigate what they actually mean in practice, they are revealed to be largely meaningless rhetorical flourishes which threaten to hold up meaningful progress on the AI front. I continue on to argue in my essay that:

I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with: (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and, (b) the fact that people who are making such statements apparently have not spent much time investigating the remarkable number of very serious conversations that have already taken place, or which are ongoing, about AI issues. In fact, it very well could be the case that we have too many conversations going on currently about AI issues and that the bigger problem is instead one of better coordinating the important lessons and best practices that we have already learned from those conversations.

I then unpack each of those lines and explain what is wrong with them in more detail. One thing that always bugs me about the “we need to have a conversation” aphorism is that those uttering it absolutely refuse to be nailed down on the specifics, like:

  1. What is the nature or goal of that conversation?
  2. Who is the “we” in this conversation?
  3. How is this conversation to be organized and managed?
  4. How do we know when the conversation is going on, or when it is sufficiently complete such that we can get on with things?
  5. And, most importantly, aren’t you implicitly suggesting that we should ban or limit the use of that technology until you (or the royal “we”) are somehow satisfied that the conversation is over or has yielded satisfactory answers?

The other commonly heard line — “We need to get a bunch of smart people in a room and figure this out” — can be equally infuriating due both to a lack of specifics (which people? what room? where and when? etc.) and to the fact that we have already had tons of the Very Smartest People on these issues meeting in countless rooms across the globe for many years. In an earlier essay, I documented the astonishing growth of AI governance frameworks, ethical best practices and professional codes of conduct: “The amount of interest surrounding AI ethics and safety dwarfs all other fields and issues. I sincerely doubt that ever in human history has so much attention been devoted to any technology as early in its lifecycle as AI.”

I also note that, practically speaking, “the most important conversations society has about new technologies are those we have every day when we all interact with those new technologies and with one another. Wisdom is born from experiences, including activities and interactions involving risk and the possibility of mistakes. This is how progress happens.” And I conclude by noting how:

We won’t ever be able to “have a conversation” about a new technology that yields satisfactory answers for some critics precisely because the questions just multiply and evolve endlessly over time, and they can only be answered through ongoing societal interactions and problem-solving. But we shouldn’t stop life-enriching innovations from happening just because we don’t have all the answers beforehand.

Anyway, I invite you to head over to Discourse and read the entire essay. In the meantime, I propose we get all the smart people in a room and have a conversation about how these two lines came to dominate tech policy discussions before they end up doing real damage to human prosperity! It’s the ethical thing to do if you really care about the future.

]]>
https://techliberation.com/2022/10/16/we-need-to-get-all-the-smart-people-in-a-room-have-a-conversation/feed/ 2 77052
No Goldilocks Formula for Content Moderation in Social Media or the Metaverse, But Algorithms Still Help https://techliberation.com/2022/09/13/no-goldilocks-formula-for-content-moderation-in-social-media-or-the-metaverse-but-algorithms-still-help/ https://techliberation.com/2022/09/13/no-goldilocks-formula-for-content-moderation-in-social-media-or-the-metaverse-but-algorithms-still-help/#comments Tue, 13 Sep 2022 17:48:00 +0000 https://techliberation.com/?p=77041

[Cross-posted from Medium.]

In an age of hyper-partisanship, one issue unites the warring tribes of American politics like no other: hatred of “Big Tech.” You know, those evil bastards who gave us instantaneous access to a universe of information at little to no cost. Those treacherous villains! People are quick to forget the benefits of moving from a world of Information Poverty to one of Information Abundance, preferring to take for granted all they’ve been given and then find new things to complain about.

But what mostly unites people against large technology platforms is the feeling that they are just too big or too influential relative to other institutions, including government. I get some of that concern, even if I strongly disagree with many of their proposed solutions, such as the highly dangerous sledgehammer of antitrust breakups or sweeping speech controls. Breaking up large tech companies would not only compromise the many benefits they provide us with, but it would undermine America’s global standing as a leader in information and computational technology. We don’t want that. And speech codes or meddlesome algorithmic regulations are on a collision course with the First Amendment and will just result in endless litigation in the courts.

There’s a better path forward. As President Ronald Reagan rightly said in 1987 when vetoing a bill to reestablish the Fairness Doctrine, “History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.” In other words, as I wrote in a previous essay about “The Classical Liberal Approach to Digital Media Free Speech Issues,” more innovation and competition are always superior to more regulation when it comes to encouraging speech and speech opportunities.

Can Government Get Things Just Right?

But what about the accusations we hear on both the left and right about tech companies failing to properly manage or moderate online content in some fashion? This is not only a concern for today’s most popular social media platforms, but it is a growing concern for the so-called Metaverse, where questions about content policies already surround activities and interactions on AR and VR systems.

The problem here is that different people want different things from digital platforms when it comes to content moderation. As I noted in a column for The Hill late last year:

there is considerable confusion in the complaints both parties make about “Big Tech.” Democrats want tech companies doing more to limit content they claim is hate speech, misinformation, or that incites violence. Republicans want online operators to do less, because many conservatives believe tech platforms already take down too much of their content.

Thus, large digital intermediaries are expected to make all the problems of the world go away through a Goldilocks formula whereby digital platforms will get content moderation “just right.” It’s an impossible task with billions of voices speaking. Bureaucrats won’t do a better job refereeing these disputes, and letting them do so will turn every content spat into an endless regulatory proceeding.

What Algorithms Can and Cannot Do to Help

But we should be clear on one thing: These disputes will always be with us because every media platform in history has had some sort of content moderation policies, even if we didn’t call them that until recently. Creating what used to just be called guidelines or standards for information production and dissemination has always been a tricky business. But the big difference between the old and new days comes down to three big problems:

#1- the volume problem: There’s just a ton of content online to moderate today compared to the past.

#2- the subjectivity problem: Content moderation always involves “eye of the beholder” questions, but now there’s even more of those problems because of Problem #1.

#3- the crafty adversaries problem: There are a lot of people bound and determined to get around any rules or restrictions platforms impose, and they’ll find creative ways to do so.
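A toy sketch can make these three problems concrete (purely illustrative; `BLOCKLIST` and `naive_moderate` are hypothetical names I've invented, and no real platform's system is remotely this simple):

```python
# Toy illustration of why a naive keyword blocklist runs into all
# three moderation problems at once. Not any platform's actual system.
BLOCKLIST = {"spamword"}  # hypothetical banned term

def naive_moderate(post: str) -> bool:
    """Flag a post if any word exactly matches the blocklist."""
    return any(word in BLOCKLIST for word in post.lower().split())

# Subjectivity: an exact match flags quotation and satire along with abuse.
print(naive_moderate("buy spamword now"))   # True  (flagged)
# Crafty adversaries: trivial obfuscation slips past exact matching.
print(naive_moderate("buy sp4mword now"))   # False (evades the filter)
# Volume: at hundreds of millions of posts a day, even a tiny error
# rate produces hundreds of thousands of mistaken calls daily.
```

Machine learning systems exist precisely because rules this brittle cannot keep up, yet as the discussion below notes, the subjectivity and adversary problems follow the algorithms too.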

These problems are nicely summarized in an excellent new AEI report by Alex Feerst, “The Use of AI in Online Content Moderation.” This is the fifth in a series of new reports from the AEI’s Digital Platforms and American Life project. The goal of the project is to highlight how the “democratization of knowledge and influence comes with incredible opportunities but also immense challenges. How should policymakers think about the digital platforms that have become embedded in our social and civic life?” Various experts have been asked to sound off on that question and address different challenges. The series kicked off in April with an essay I wrote on “Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium.” More studies are coming.

In Feerst’s new report, the focus is squarely on the issue of algorithmic content moderation policies and procedures. Feerst provides a brilliant summary of how digital media platforms currently utilize AI to assist their content moderation efforts. He notes:

The short answer to the question “why AI” is scale — the sheer never-ending vastness of online speech. Scale is the prime mover of online platforms, at least in their current, mainly ad-based form and maybe in all incarnations. It’s impossible to internalize the dynamics of running a digital platform without first spending some serious time just sitting and meditating on the dizzying, sublime amounts of speech we are talking about: 500 million tweets a day comes out to 200 billion tweets each year. More than 50 billion photos have been uploaded to Instagram. Over 700,000 hours of video are uploaded to YouTube every day. I could go on. Expression that would previously have been ephemeral or limited in reach under the existing laws of nature and pre-digital publishing economics can now proliferate and move around the world. It turns out that, given the chance, we really like to hear ourselves talk.

So that’s the scale/volume problem in a nutshell. Algorithmic systems are absolutely going to be needed to help do some sifting and sorting, therefore.

What Do You Want to Do about Man-Boobs?

But then we immediately run into the subjectivity problem that pervades so many content moderation issues. When it comes to topics like hate speech, “There will be as many opinions as there are people. Three well-meaning civic groups will agree on four different definitions of hate speech,” Feerst notes.

Indeed, these eye-of-the-beholder judgment calls are ubiquitous and endlessly frustrating for content moderators. Let me tell you a quick story I told a Wall Street Journal reporter who asked me in 2019 why I gave up helping tech companies figure out how to handle these content moderation controversies. I had spent many years trying to help companies and trade associations figure this stuff out because I had been writing about these challenges since the late 1990s. But then finally I gave up. Why? Because of man boobs. Yes, man boobs. Here’s the summary of my story from that WSJ article:

Adam Thierer, a senior research fellow at the right-leaning Mercatus Center at George Mason University, says he used to consult with Facebook and other tech companies. The futility of trying to please all sides hit home after he heard complaints about a debate at YouTube over how much skin could be seen in breast-feeding videos.

While some argued the videos had medical purposes, other advisers wondered whether videos of shirtless men with large mammaries should be permitted as well. “I decided I don’t want to be the person who decides on whether man boobs are allowed,” says Mr. Thierer.

No, seriously. This has been one of the many crazy problems that content moderators have had to deal with. There are scumbag dudes with large mammaries who not only salaciously jiggle them around on camera for the world to see, but then even put whipped cream on their own boobs and lick it off. Now, if a woman does that and posts it on almost any mainstream platform, it’ll get quickly flagged (probably by an algorithmic filter) and probably immediately blocked. But if a dude with man boobs does the same thing, shouldn’t the policy be the same? Well, in our still very sexist world of double standards, policies can vary on that question. And I didn’t want any part of trying to figure out an answer to that question (and others like it), so I largely got out of the business of helping companies do so. Not even King Solomon could figure out a fair resolution to some of this stuff.

Algorithms can only help us so much here because, at some point, humans must tell the machines what to flag or block using some sort of subjective standard that will lead to all sorts of problems later. This is one reason why Feerst reminds us of another important rule here: “Don’t confuse a subjectivity problem for an accuracy problem, especially when you’re using automation technology.” As he notes:

If the things we’re doing are controversial among humans and it’s not even clear that humans judge them consistently, then using AI is not going to help. It’s just going to allow you to achieve the same controversial outcomes more quickly and in greater volume. In other words, if you can’t get 50 humans to agree on whether a particular post violates content rules, whether that content rule is well formulated, or whether that rule should exist, then why would automating this process help?

So Many Troublemakers (Sometimes Accidental)

The man boobs moderation story also reminds us that the crafty adversary problem will always haunt us, too. There are just so many bastards out there looking to cause trouble for whatever reason. “There will never be ‘set it and forget it’ technologies for these issues,” Feerst argues. “At best, it’s possible to imagine a state of dynamic equilibrium — eternal cops and robbers.”

That is exactly right. It’s a never-ending learning/coping process, as I noted in my earlier paper in the AEI series: “There is no Goldilocks formula that can get things just right” when it comes to many tech governance issues, especially content moderation issues. Muddling through is the new normal. And the exact same process is now unfolding for Metaverse content moderation. Algorithmic moderation helps us weed out the worst stuff and gives us a better chance of letting humans — with their limited time and resources — deal with the hardest problems (and problem-makers) out there.

Sometimes the content infractions may even be accidental. Here’s another embarrassing story involving me. I was asked last year to sit in on a VR meeting about content moderation in the Metaverse. I was wearing my headset and sitting at a virtual table with about 8 other people in the room. Back in my real-world office, I had my coffee mug sitting far to the right of me on a side table. After about 45 minutes of discussion, I realized that every time I reached way over to my right to grab my coffee mug in the real-world, my virtual self’s hand was reaching over and touching the crotch of the guy sitting next to me in the Metaverse! It looked like I was fondling the dude virtually! What a nightmare. I’m surprised someone didn’t report me for virtual harassment. I would have had to plead the coffee mug defense and throw myself on the mercy of the Meta-Court judge or jury.

Ok, so that’s a funny story, but you can imagine little mistakes like this happening all throughout the Metaverse as we slowly figure out how to interact normally in new virtual environments. We’ll have to rely on users and algorithms flagging some of the worst behaviors and then have humans evaluate the tough calls to the best of their abilities. But let’s not be fooled into thinking that humans can handle all these questions because the task at hand is too overwhelming and expensive for many platform operators. “Ten thousand employees here, ten thousand ergonomic mouse pads there, and pretty soon we’re talking about real money,” Feerst notes. “This is what the cost of running a platform looks like, once you’ve internalized the harmful and inexorable externalities we’ve learned about the hard way over the past decade.”

The Problem with “Explainability”

The key takeaway here is that content moderation at scale is messy, confusing, and unsatisfying. Do platforms need to be more transparent about how their algorithms work to do this screening? Yes, they do. But perfect transparency or “explainability” is impossible.

It’s hard to perfectly explain how algorithms work for the same reason it’s hard for your car mechanic to explain to you exactly how your car engine works. Except it’s even harder with algorithmic systems. As Feerst notes:

AI outputs can be hard to explain. In some cases, even the creators or managers of a particular product are no longer sure why it is functioning a particular way. It’s not like the formula to Coca-Cola; it’s constantly evolving. Requirements to “disclose the algorithm” may not help much if it means that companies will simply post a bunch of not especially meaningful code.

And if explainability were mandated by law, it’d instantly be gamed by still other troublemakers out there. A mandate to make AI perfectly transparent is an open invitation to every scam artist in the world to game platforms with new phishing attacks, spammy scams, and other such nonsense. Again, this is the “crafty adversaries” problem at work. Endless cat-and-mouse or, as Feerst says, “eternal cops and robbers.”

So, in sum, content moderation — including algorithmic content moderation — is a nightmarishly difficult task, and there is no Goldilocks formula available to us that will help us get things just right. It’ll always just be endless experimentation and iteration with lots and lots of failures along the way. Learning by doing and constantly refining our systems and procedures is the key to helping us muddle through.

And if you think government will somehow figure this all out through some sort of top-down regulatory regime, ask yourself how well that worked out for Analog Era efforts to create “community standards” for broadcast radio and television. And then multiply that problem by a zillion. It cannot be done without severely undermining free speech and innovation. We don’t want to go down that path.

____________

Additional Reading

· “Again, We Should Not Ban All Teens from Social Media”

· “The Classical Liberal Approach to Digital Media Free Speech Issues”

· “AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead”

· “Left and right take aim at Big Tech — and the First Amendment”

· “When It Comes to Fighting Social Media Bias, More Regulation Is Not the Answer”

· “FCC’s O’Rielly on First Amendment & Fairness Doctrine Dangers”

· “Conservatives & Common Carriage: Contradictions & Challenges”

· “The Great Deplatforming of 2021”

· “A Good Time to Re-Read Reagan’s Fairness Doctrine Veto”

· “Sen. Hawley’s Radical, Paternalistic Plan to Remake the Internet”

· “How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality”

· “Sen. Hawley’s Moral Panic Over Social Media”

· “The White House Social Media Summit and the Return of ‘Regulation by Raised Eyebrow’”

· “The Not-So-SMART Act”

· “The Surprising Ideological Origins of Trump’s Communications Collectivism”

AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead
https://techliberation.com/2022/09/12/ai-eats-the-world-preparing-for-the-computational-revolution-and-the-policy-debates-ahead/
Mon, 12 Sep 2022

[Cross-posted from Medium.]

The Coming Computational Revolution

Thomas Edison once spoke of how electricity was a “field of fields.” This is even more true of AI, which is ready to bring about a sweeping technological revolution. In Carlota Perez’s influential 2009 paper on “Technological Revolutions and Techno-economic Paradigms,” she defined a technological revolution “as a set of interrelated radical breakthroughs, forming a major constellation of interdependent technologies; a cluster of clusters or a system of systems.” To be considered a legitimate technological revolution, Perez argued, the technology or technological process must be “opening a vast innovation opportunity space and providing a new set of associated generic technologies, infrastructures and organisational principles that can significantly increase the efficiency and effectiveness of all industries and activities.” In other words, she concluded, the technology must have “the power to bring about a transformation across the board.”

Expanding Our Skillset

Thus, AI (and AI policy) is multi-dimensional, amorphous, and ever-changing. It has many layers and complexities. This will require public policy analysts and institutions to reorient their focus and develop new capabilities.

Mapping the AI Policy Terrain: Broad vs. Narrow

Beyond talent development, the other major challenge is issue coverage. How can we cover all the AI policy bases? There are two general categories of AI concerns, and supporters of free markets need to be prepared to engage on both battlefields.

Confronting the Formidable Resistance to Change

Finally, free-market analysts and organizations must prepare to defend the general concept of progress through technological change as AI becomes a central social, economic, and legal battleground — both domestically and globally. Every technological revolution involves major social and economic disruptions and gives rise to intense efforts to defend the status quo and block progress. As Perez concludes, “the profound and wide-ranging changes made possible by each technological revolution and its techno-economic paradigm are not easily assimilated; they give rise to intense resistance.”

AI Governance “on the Ground” vs “on the Books”
https://techliberation.com/2022/08/24/ai-governance-on-the-ground-vs-on-the-books/
Wed, 24 Aug 2022

[Cross-posted from Medium]

There are two general types of technological governance that can be used to address challenges associated with artificial intelligence (AI) and computational sciences more generally. We can think of these as “on the ground” (bottom-up, informal “soft law”) governance mechanisms versus “on the books” (top-down, formal “hard law”) governance mechanisms.

Unfortunately, heated debates about the latter type of governance often divert attention from the many ways in which the former can (or already does) help us address many of the challenges associated with emerging technologies like AI, machine learning, and robotics. It is important that we think harder about how to optimize these decentralized soft law governance mechanisms today, especially as traditional hard law methods are increasingly strained by the relentless pace of technological change and ongoing dysfunctionalism in the legislative and regulatory arenas.

On the Ground vs. On the Books Governance

Let’s unpack these “on the ground” and “on the books” notions a bit more. I am borrowing these descriptors from an important 2011 law review article by Kenneth A. Bamberger and Deirdre K. Mulligan, which explored the distinction between what they referred to as “Privacy on the Books and on the Ground.” They identified how privacy best practices were emerging in a decentralized fashion thanks to the activities of corporate privacy officers and privacy associations who helped formulate best practices for data collection and use.

The growth of privacy professional bodies and nonprofit organizations — especially the International Association of Privacy Professionals (IAPP) — helped better formalize privacy best practices by establishing and certifying internal champions to uphold key data-handling principles within organizations. By 2019, the IAPP had over 50,000 trained members globally, and its numbers keep swelling. Today, it is quite common to find Chief Privacy Officers throughout the corporate, governmental, and non-profit world.

These privacy professionals work together and in conjunction with a wide diversity of other players to “bake in” widely accepted information collection/use practices within all these organizations. With the help of IAPP and other privacy advocates and academics, these professionals also look to constantly refine and improve their standards to account for changing circumstances and challenges in our fast-paced data economy. They also look to ensure that organizations live up to commitments they have made to the public or even governments to abide by various data-handling best practices.

Soft Law vs. Hard Law

These “on the ground” efforts have helped usher in a variety of corporate social responsibility best practices and provide a flexible governance model that can be a complement to, or sometimes even a substitute for, formal “on the books” efforts. We can also think of this as the difference between soft law and hard law.

Soft law refers to agile, adaptable governance schemes for emerging technology that create substantive expectations and best practices for innovators without regulatory mandates. Soft law can take many forms, including guidelines, best practices, agency consultations & workshops, multistakeholder initiatives, and other experimental types of decentralized, non-binding commitments and efforts.

Soft law has become a bit of a gap-filler in the U.S. as hard law efforts fail for various reasons. The most obvious explanation for why the role of hard law governance has shrunk is that it’s just very hard for law to keep up with fast-moving technological developments today. This is known as the pacing problem. Many scholars have identified how the pacing problem gives rise to a “governance gap” or “competency trap” for policymakers because, just as quickly as they are coming to grips with new technological developments, other technologies are emerging quickly on their heels.

Think of modern technologies — especially informational and computational technologies — like a series of waves that come flowing in to shore faster and faster. As soon as one wave crests and then crashes down, another one comes right after it and soaks you again before you’ve had time to recover from the daze of the previous ones hitting you. In a world of combinatorial innovation, in which technologies build on top of one another in a symbiotic fashion, this process becomes self-reinforcing and relentless. For policymakers, this means that just when they’ve worked their way up one technological learning curve, the next wave hits and forces them to try to quickly learn about and prepare for the next one that has arrived. Lawmakers are often overwhelmed by this flood of technological change, making it harder and harder for policies to get put in place in a timely fashion — and equally hard to ensure that any new or even existing policies stay relevant as all this rapid-fire innovation continues.

Legislative dysfunctionalism doesn’t help. Congress has a hard time advancing bills on many issues, and technical matters often get pushed to the bottom of the priorities list. The end result is that Congress has increasingly become a non-actor on tech policy in the U.S. Most of the action lies elsewhere.

What’s Your Backup Plan?

This means there is a powerful pragmatic case for embracing soft law efforts that can at least provide us with some “on the ground” governance efforts and practices. Increasingly, soft law is filling the governance gap because hard law is failing for a variety of reasons already identified. Practically speaking, even if you are dead set on imposing a rigid, top-down, technocratic regulatory regime on any given sector or technology, you should at least have a backup plan in mind if you can’t accomplish that.

This is why privacy governance in the United States continues to depend heavily on such soft law efforts to fill the governance vacuum after years of failed attempts to enact a formal federal privacy law. While many academics and others continue to push for such an over-arching data handling law, bottom-up soft law efforts have played an important role in balancing privacy and innovation.

In a similar way, “on the ground” governance efforts are already flourishing for artificial intelligence and machine learning as policymakers continue to very slowly consider whether new hard law initiatives are wise or even possible. For example, congressional lawmakers have been considering a federal regulatory framework for driverless cars for the past several sessions of Congress. Many people in Congress and in academic circles agree that a federal framework is needed, if for no other reason than to preempt the much-dreaded specter of a patchwork of inconsistent state and local regulatory policies. With so much bipartisan agreement out there on driverless car legislation, it would seem like a federal bill would be a slam dunk. For that reason, year in and year out, people always predict: this is the year we’ll get driverless car legislation! And yet, it never happens due to a combination of special interest opposition from unions and trial lawyers, in addition to the pacing problem issue and Congress focusing its limited attention on other issues.

This is also already true for algorithmic regulation. We hear lots of calls to do something, but it remains unclear what that something is or whether it will get done any time soon. If we could not get a privacy bill through Congress after at least a dozen years of major efforts, chances are that broad-based AI regulation is going to be equally challenging.

Soft Law for AI is Exploding

Thus, soft law will likely fill the governance gap for AI. It already is. I’m working on a new book that documents the astonishing array of soft law mechanisms already in place or being developed to address various algorithmic concerns. I can’t seem to finish the book because there is just so much going on related to soft law governance efforts for algorithmic systems. As Mark Coeckelbergh noted in his recent book on AI Ethics, there’s been an “avalanche of​ initiatives and policy documents” around AI ethics and best practices in recent years. It is a bit overwhelming, but the good news is that there is a lot of consistency in these governance efforts.

To illustrate, a 2019 survey by a group of researchers based in Switzerland analyzed 84 AI ethical frameworks and found “a global convergence emerging around five ethical principles (transparency, justice and fairness, non-maleficence, responsibility and privacy).” A more recent 2021 meta-survey by a team of Arizona State University (ASU) legal scholars reviewed an astonishing 634 soft law AI programs that were formulated between 2016 and 2019. Thirty-six percent of these efforts were initiated by governments, with the others being led by non-profits or private sector bodies. Echoing the findings from the Swiss researchers, the ASU report found widespread consensus among these soft law frameworks on values such as transparency and explainability, ethics/rights, security, and bias. In short, most of these frameworks converge on a core set of values to embed within AI design. The UK-based Alan Turing Institute boils its list down to four “FAST Track Principles”: Fairness, Accountability, Sustainability, and Transparency.

The ASU scholars noted how ethical best practices for product design already influence developers today by creating powerful norms and expectations about responsible product design. “Once a soft law program is created, organizations may seek to enforce it by altering how their employees or representatives perform their duties through the creation and implementation of internal procedures,” they note. “Publicly committing to a course of action is a signal to society that generates expectations about an organization’s future actions.”

This is important because many major trade associations and individual companies have been formulating governance frameworks and ethical guidelines for AI development and use. For example, among large trade associations, the U.S. Chamber of Commerce, the Business Roundtable, the BSA | The Software Alliance, and ACT (The App Association) have all recently released major AI best practice guidelines. Notable corporate efforts to adopt guidelines for ethical AI practices include statements or frameworks by IBM, Intel, Google, Microsoft, Salesforce, SAP, and Sony, to name just a few. They are also creating internal champions to push AI ethics through either the appointment of Chief Ethics Officers, the creation of official departments, or both, plus additional staff to guide the process of baking in AI ethics by design.

Once again, there is remarkable consistency among these corporate statements in terms of the best practices and ethical guidelines they endorse. Each trade association or corporate set of guidelines aligns closely with the core values identified in the hundreds of other soft law frameworks that ASU scholars surveyed. These efforts go a long way toward helping to promote a culture of responsibility among leading AI innovators. We can think of this as the professionalization of AI best practices.

What Soft Law Critics Forget

Some will claim that “on the ground” soft law efforts are not enough, but they typically make two mistakes when saying so.

Their first mistake is thinking that hard law is practical or even optimal for fast-paced, highly mercurial AI and ML technologies. It’s not just that the pacing problem necessitates new thinking about governance. Critics fail to understand how hard law would likely significantly undermine algorithmic innovation because algorithmic systems can change by the minute and require a more agile and adaptive system of governance by their very nature.

This is a major focus of my book. I previously published a draft chapter from it on “The Proper Governance Default for AI,” and another essay on “Why the Future of AI Will Not Be Invented in Europe.” These essays explain why a Precautionary Principle-oriented regulatory regime for algorithmic systems would stifle technological development, undermine entrepreneurialism, diminish competition and global competitive advantage, and even have a deleterious impact on our national security goals.

Traditional regulatory systems can be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may not ever come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. When innovators must seek special permission before they offer a new product or service, it raises the cost of starting a new venture and discourages activities that benefit society. We need to avoid that approach if we hope to maximize the potential of AI-based technologies.

The second mistake that soft law critics make is that they fail to understand how many hard law mechanisms actually play a role in supporting soft law governance. AI applications already are regulated by a whole host of existing legal policies. If someone does something stupid or dangerous with AI systems, the Federal Trade Commission (FTC) has the power to address “unfair and deceptive practices” of any sort. And state Attorneys General and state consumer protection agencies also routinely address unfair practices and continue to advance their own privacy and data security policies, some of which are often more stringent than federal law.

Meanwhile, several existing regulatory agencies in the U.S. possess investigatory and recall authority that allows them to remove products from the market when certain unforeseen problems manifest themselves. For example, the National Highway Traffic Safety Administration (NHTSA), the Food & Drug Administration (FDA), and Consumer Product Safety Commission (CPSC) all possess broad recall authority that could be used to address risks that develop for many algorithmic or robotic systems. Indeed, NHTSA is currently using its investigative authority to evaluate Tesla’s claims about “full self-driving” technology, and the agency has the power to take action against the company under existing regulations. Likewise, the FDA used its broad authority to crack down on genetic testing company 23andMe many years ago. And CPSC and the FTC have broad authority to investigate claims made by innovators, and they’ve already used it. It’s not like our expansive regulatory state lacks considerable existing power to police new technology. If anything, the power of the administrative state is too broad and amorphous, and it can be abused in certain instances.

Perhaps most importantly, our common law system can address other deficiencies with AI-based systems and applications using product defects law, torts, contract law, property law, and class action lawsuits. This is a better way of addressing risks than preemptive regulation of general-purpose AI technology because it at least allows the technologies to develop first and lets us see what actual problems manifest themselves. Better to treat innovators as innocent until proven guilty than the other way around.

There are other thorny issues that deserve serious policy consideration and perhaps even some new rules. But how risks are addressed matters deeply. Before we resort to heavy-handed, legalistic solutions for possible problems, we should exhaust all other potential remedies first.

In other words, “on the ground” soft law governance mechanisms and ex post legal solutions should generally trump ex ante (preemptive, precautionary) regulatory constraints. But we should look for ways to refine and improve soft law governance tools, perhaps through better voluntary certification and auditing regimes to hold developers to a high standard as it pertains to the important AI ethical practices we want them to uphold. This is the path forward to achieve responsible AI innovation without the heavy-handed baggage associated with more formalistic, inflexible regulatory approaches that are ill-suited for complicated, rapidly-evolving computational and computing technologies.

___________________

Related Reading on AI & Robotics

Why the Future of AI Will Not Be Invented in Europe
https://techliberation.com/2022/08/01/why-the-future-of-ai-will-not-be-invented-in-europe/
Mon, 01 Aug 2022

For my latest column in The Hill, I explored the European Union’s (EU) endlessly expanding push to regulate all facets of the modern data economy. That now includes a new effort to regulate artificial intelligence (AI) using the same sort of top-down, heavy-handed, bureaucratic compliance regime that has stifled digital innovation on the continent over the past quarter century.

The European Commission (EC) is advancing a new Artificial Intelligence Act, which proposes banning some AI technologies while classifying many others under a heavily controlled “high-risk” category. A new bureaucracy, the European Artificial Intelligence Board, will be tasked with enforcing a wide variety of new rules, including “prior conformity assessments,” which are like permission slips for algorithmic innovators. Steep fines are also part of the plan. There’s a lengthy list of covered sectors and technologies, with many others that could be added in coming years. It’s no wonder, then, that the measure has been labelled “the mother of all AI laws” and that analysts have argued it will further burden innovation and investment in Europe.

As I noted in my new column, the consensus about Europe’s future on the emerging technology front is dismal, to put it mildly. The International Economy journal recently asked 11 experts from Europe and the U.S. where the EU currently stood in global tech competition. Responses were nearly unanimous and bluntly summarized by the symposium’s title: “The Biggest Loser.” Respondents said Europe is “lagging behind in the global tech race” and “unlikely to become a global hub of innovation.” “The future will not be invented in Europe,” another analyst flatly concluded.

That’s a grim assessment, but there is no doubt that European competitiveness is suffering today and that excessive regulation plays a significant role in causing it. As I noted in my column, “the EU’s risk-averse culture and preference for paperwork compliance over entrepreneurial freedom” has had serious consequences for continent-wide innovation:

After the continent piled on layers of data restrictions beginning in the mid-1990s, innovation and investment suffered. Regulation grew more complex with the 2018 General Data Protection Regulation (GDPR), which further limits data collection and use. As a result of all the red tape, the EU came away from the digital revolution with “the complete absence of superstar companies.” There are no serious European versions of Microsoft, Google, Facebook, Apple or Amazon. Europe’s leading providers of digital technology services today are American-based companies.

Let’s take a look at a few numbers that illustrate what’s happened in Europe’s tech sector over the past quarter century. Here’s an old KPMG breakdown of market caps for public Internet companies over an important 20-year period, from 1995 to 2015, when the digital technology marketplace was taking shape. Besides the remarkable amount of churn over that period (with only Apple appearing on both lists), the other notable thing is the complete absence of any European companies in 2015.

Next, here’s a chart I constructed using CB Insights data for global unicorns (companies valued at $1 billion or more) from 2010 through early 2022. It shows that the U.S. dominates fully half the list and China holds a 16 percent share, while all of the European Union’s firms together equal just a 9 percent slice of the world’s share.

If you want to see a per capita breakdown of VC investment by country, here’s a handy Crunchbase News chart. While the U.S. is much larger than any individual European country, a breakdown of VC funding on a per capita basis reveals that only Estonia ($915) and Sweden ($700) have startup investment on par with America ($808). No other European country has even half as much per capita VC investment as the U.S., and most don’t even have a quarter as much.

As we enter the “age of AI,” what will this same EU regulatory model mean for AI, machine learning, and robotics in Europe? We do have some early data on that, too. Here’s a breakdown of AI-related VC activity and AI unicorns in 2021 from the recent State of AI Report 2021, with European countries already trailing far behind:

Also, here’s some data on recent AI investment by region from the latest Stanford “AI Index Report 2022,” which again highlights a gap that is only growing larger:

It’s important to listen to what actual AI innovators across the Atlantic have to say about the new EU regulatory efforts. Just last month, the UK-based Coalition for a Digital Economy (Coadec), an advocacy group for Britain’s technology-led startups, published a report entitled “What do AI Startups Want from Regulation?” Coadec surveyed its members to gauge their feelings about the EU’s proposed approach to AI regulation, as well as the UK’s. 76 percent of those startups said that their business model would either be negatively affected or become infeasible if the UK were to echo the EU by making AI developers liable, and an equal percentage said they had varying concerns about whether it is even technically feasible to make their datasets “free of errors,” as the EU looks set to demand. Respondents also said they feared that the new AI Act would be particularly burdensome to small and mid-size entrepreneurs because, unlike their larger competitors, they cannot afford the costly compliance hassles. This would end up being a replay of the burdens they faced from the GDPR, which decimated small businesses. “The experience of GDPR demonstrated how unclear, complex and expensive regulations drove many startups out of business, and disproportionately impact startups that survived–GDPR compliance cost startups significantly more than it did the Tech Giants,” the Coadec report concluded.

At least those UK-based innovators might be in a slightly better position post-Brexit, with the British government now looking to chart a different–and much less burdensome–governance approach for digital technologies. In fact, the UK government recently released a major policy document on “Establishing a Pro-Innovation Approach to Regulating AI,” which makes a concerted effort to distinguish its approach from the EU’s. “We will ask that regulators focus on high risk concerns rather than hypothetical or low risks associated with AI,” the report noted. “We want to encourage innovation and avoid placing unnecessary barriers in its way.” This is consistent with what the UK government has been saying about technology governance more generally. For example, in a recent report advocating for innovation-friendly regulation, the UK government’s Regulatory Horizons Council argued that, when it comes to the regulation of emerging technologies like AI, “it is also necessary to consider the risk that the intervention itself poses.” “This would include the potential impact on benefits from a particular innovation that might be foregone; it would also include the potential creation of a ‘chilling effect’ on innovation more generally,” the Council concluded. Clearly, this approach to technology policy stands in stark contrast to the EU’s heavy-handed model. So, there is a chance that at least some innovators based in the UK can escape the EU’s regulatory hell.

What about AI innovators stuck on the European continent? What are they saying about the regulations they will soon face? The European DIGITAL SME Alliance, the largest network of small and medium-sized enterprises (SMEs) in the European ICT sector, represents roughly 45,000 digital SMEs. In comments to the EC about the impact of the law, the Alliance highlighted how costly the AI Act’s conformity assessments and other regulations will be for smaller innovators. “This may put a burden on AI innovation,” the Alliance argued, because of the “limited financial and human resources of SMEs.” “[A] regulation that requires SMEs to make these significant investments, will likely push SMEs out of the market,” the group noted. “This is exactly the opposite of the intention to support a thriving and innovative AI ecosystem in Europe.” Moreover, “SMEs will not be able to pass on these costs to their customers in the final customer end pricing,” the Alliance correctly noted, because “[t]he market is global and highly competitive. Therefore, customers will choose cheaper solutions and Europe risks to be left behind in technology development and global competition.”

In March, the Alliance also hosted a forum on “The European AI Act and Digital SMEs,” which featured comments from some operators in this space. Some speakers were quite timid, and you could sense that they feared pushing back too aggressively against the European Commission so as not to get on the bad side of regulators before the rules go into effect. But Mislav Malenica, Founder & CEO of Mindsmiths, didn’t pull any punches in his remarks. His company is trying to build autonomous support systems in many different fields, but its ability to innovate and compete globally will be severely curtailed by the EU AI Act, he argued.

I usually don’t spend time transcribing people’s comments from events, but I went back and watched Malenica’s multiple times because his remarks are so powerful and I wanted to make sure others heard what he was saying. [Malenica’s opening comments during the event run from 42:29 to 49:34 of the video, and he has more to say during Q&A beginning at 1:27:28.] Here’s a quick summary of a few of Malenica’s key points (listed chronologically):

  • “I’m not sure we are doing everything we can do actually to create an environment that’s innovation friendly.”
  • “we see a lot of uncertainty. We see fear.”
  • “basically we won’t be able to get funding here.”
  • while reading through the AI Act, he notes, “I don’t see start-ups being mentioned anywhere, and startups are the main vehicles of innovation.” […] “I find it very arrogant”
  • if the AI Act becomes law, “what we’ll do in Europe is we’ll create a new market and that’s the AI markets based on fear,” with firms focused on how to build products that avoid the wrath of government or lawsuits.
  • “we are really stifling innovation” and that means Europeans will have to import autonomous products from foreign companies instead of making them there.

Later, during the Q&A period, Malenica notes how his first virtual currency startup had to use half its investment capital just dealing with regulatory compliance issues, and most venture capitalists wouldn’t get behind launching in Europe because of such legal hassles. He reflects upon what this means for other innovators going forward as the EU prepares to expand its regulatory regime for AI sectors:

  • “I don’t think we’re missing talent. That’s just a consequence” of all the regulation. “We are missing a sense that you have opportunities here. If you [have] the opportunities here, then the talent will come, the funding will come, and so on because people see that they’ll be able to make money, they’ll be able to build companies, and so on.”
  • “If we now take a look at the 10 biggest companies market capitalizations in the world, we’ll see that none of them comes actually from Europe” with U.S. tech companies dominating the list. “So, we missed that wave completely.” Why? “Because we didn’t inspire anyone to take action,” and that is about to happen for AI.
  • “We need to decide if we are going to be a land of opportunities, or will we be just consumers of other people’s tech, the same we are right now” for digital software and services.
  • “We’re already finding excuses for the loss” of the AI market, he argues.

Malenica’s comments are extraordinarily demoralizing if you care about innovation. Now, I’m an American, and one way to look at this dismal situation is that, by hobbling its own startups and existing AI innovators, Europe is doing the U.S. another favor by essentially taking itself out of the running in the next great global tech race. Europe’s actions may also mean that America gains many of Europe’s best and brightest, as they come to the U.S. to create the next great algorithmic service or application because they can’t do so in the EU. This is exactly what happened over the past few decades with Internet startups, Malenica noted.

But that’s dismal news in another sense. Europe is filled with brilliant innovators, highly-skilled talent, world-class educational institutions, and even many venture capitalists looking to invest in this arena. Unfortunately, the continent’s suffocating regulatory approach makes it nearly impossible for digital technology innovators to have a fighting chance. Through their heavy-handed policies, European officials have essentially declared their innovators “guilty until proven innocent.” And that means that Europeans and the rest of the world are being deprived of many important life-enriching and life-saving AI applications that those innovators could create. Technological innovation is not a zero-sum game that only one country can “win.” Innovation drives growth and prosperity and lifts all boats as its benefits spread throughout the world. When European innovators prosper, people all over the world prosper along with them.

Is there any chance the European Commission softens its stance toward emerging technologies and looks to adopt a more flexible governance approach that instead treats AI innovators as innocent until proven guilty? I think it is extremely unlikely that will happen because, as Malenica noted, European technology policy is too rooted in fear of disruption and extreme risk-aversion. EU officials are forgetting that the most important lesson from the history of technological innovation is there can be no progress without some risk-taking and corresponding disruption. My favorite quote about the relationship between risk-taking and human progress comes from Wilbur Wright who, along with his brother, helped pioneer human flight. “If you are looking for perfect safety,” Wright said, “you would do well to sit on a fence and watch the birds.” European policymakers are essentially forcing their best and brightest innovators to sit on the fence and watch the rest of the world fly right past them on the digital technology and AI front. The ramifications for the continent will be disastrous. Regardless, as I noted in concluding my recent Hill column, Europe’s approach to AI “shouldn’t be the model the U.S. follows if it hopes to maintain its early lead in AI and robotics. America should instead welcome European companies, workers and investors looking for a more hospitable place to launch bold new AI innovations.”

Alas, European officials appear ready to ignore the deleterious impact of their policies on innovation and competition and instead make regulation their leading export to the world. In fact, the European Commission will soon open a San Francisco office to work more closely with Silicon Valley companies affected by EU tech regulation. European leaders have basically surrendered on the idea of home-grown innovation and are now plowing all their energies into regulating the rest of the world’s largest digital technology companies, most of which are headquartered in the United States. It’s no wonder, then, that The Economist magazine concludes that, “Europe is the free-rider continent” that “has piggybacked on innovation from elsewhere, keeping up with rivals, not forging ahead.” Instead, “the cuddly form of capitalism embraced in Europe has markedly failed to create world-beating companies,” the magazine argues.

European officials want us to believe that they are somehow doing the world a favor by serving as its global tech regulator, when in fact they are simply solidifying the power of the largest digital tech companies, which are the only ones with enough resources–mainly in the form of massive legal compliance teams–to live under the EU’s innovation-crushing regulations. Sadly, many US policymakers hate our own home-grown tech companies so much now that they are willing to let this happen. In a better world, those American lawmakers would stand up to European officials looking to bully tech innovators, and we would reject the innovation-killing recipe that the EU is cooking up for AI markets and expects the rest of the world to eat.


Additional Reading on AI & Robotics:

]]>
https://techliberation.com/2022/08/01/why-the-future-of-ai-will-not-be-invented-in-europe/feed/ 3 77016
Running List of My Research on AI, ML & Robotics Policy https://techliberation.com/2022/07/29/running-list-of-my-research-on-ai-ml-robotics-policy/ https://techliberation.com/2022/07/29/running-list-of-my-research-on-ai-ml-robotics-policy/#respond Fri, 29 Jul 2022 12:51:54 +0000 https://techliberation.com/?p=77020

[last updated 4/3/2025 – Check my Medium page for latest posts]

This is a running list of all the essays and reports I’ve already rolled out on the governance of artificial intelligence (AI), machine learning (ML), and robotics. Why have I decided to spend so much time on this issue? Because this will become the most important technological revolution of our lifetimes. Every segment of the economy will be touched in some fashion by AI, ML, robotics, and the power of computational science. It should be equally clear that public policy will be radically transformed along the way.

Eventually, all policy will involve AI policy and computational considerations. As AI “eats the world,” it eats the world of public policy along with it. The stakes here are profound for individuals, economies, and nations. As a result, AI policy will be the most important technology policy fight of the next decade, and perhaps next quarter century. Those who are passionate about the freedom to innovate need to prepare to meet the challenge as proposals to regulate AI proliferate.

There are many socio-technical concerns surrounding algorithmic systems that deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions. And that’s the case that I’ll be dedicating my life to making in coming years.

Here’s the list of what I’ve done so far. I will continue to update this as new material is released:

2025

2024

2023

2022

2021 (and earlier)

]]>
https://techliberation.com/2022/07/29/running-list-of-my-research-on-ai-ml-robotics-policy/feed/ 0 77020
America Shouldn’t Follow EU’s Lead on AI Regulation https://techliberation.com/2022/07/22/america-shouldnt-follow-eus-lead-on-ai-regulation/ https://techliberation.com/2022/07/22/america-shouldnt-follow-eus-lead-on-ai-regulation/#comments Fri, 22 Jul 2022 15:42:08 +0000 https://techliberation.com/?p=77012

For my latest regular column in The Hill, I took a look at the trade-offs associated with the EU’s AI Act. This is derived from a much longer chapter on European AI policy that is in my forthcoming book, and I also plan on turning it into a free-standing paper at some point soon. My oped begins as follows:

In the intensifying race for global competitiveness in artificial intelligence (AI), the United States, China and the European Union are vying to be the home of what could be the most important technological revolution of our lifetimes. AI governance proposals are also developing rapidly, with the EU proposing an aggressive regulatory approach to add to its already-onerous regulatory regime. It would be imprudent for the U.S. to adopt Europe’s more top-down regulatory model, however, which already decimated digital technology innovation in the past and now will do the same for AI. The key to competitive advantage in AI will be openness to entrepreneurialism, investment and talent, plus a flexible governance framework to address risks.

Jump over to The Hill to read the entire thing. And down below you will find all my recent writing on AI and robotics. This will be my primary research focus in coming years.

Additional Reading:

]]>
https://techliberation.com/2022/07/22/america-shouldnt-follow-eus-lead-on-ai-regulation/feed/ 3 77012
Event Video on Algorithmic Auditing and AI Impact Assessments https://techliberation.com/2022/07/13/event-video-on-algorithmic-auditing-and-ai-impact-assessments/ https://techliberation.com/2022/07/13/event-video-on-algorithmic-auditing-and-ai-impact-assessments/#comments Wed, 13 Jul 2022 18:10:03 +0000 https://techliberation.com/?p=77008

Upsides:

  • Audits and impact assessments can help ensure organizations live up to their promises as it pertains to “baking in” ethical best practices (on issues like safety, security, privacy, and non-discrimination).
  • Audits and impact assessments are already utilized in other fields to address safety practices, financial accountability, labor practices and human rights issues, supply chain practices, and various environmental concerns.
  • Internal auditing / Institute of Internal Auditors (IIA) efforts could expand to include AI risks
  • Eventually, more and more organizations will expand their internal auditing efforts to incorporate AI risks because it makes good business sense to stay on top of these issues and avoid liability, negative publicity, or other customer backlash.
  • the International Association of Privacy Professionals (IAPP) trains and certifies privacy professionals through formal credentialing programs, supplemented by regular meetings, annual awards, and a variety of outreach and educational initiatives.
  • We should use a similar model for AI and start by supplementing Chief Privacy Officers with Chief Ethical Officers.
  • This is how we formalize the ethical frameworks and best practices that have been formulated by various professional associations such as IEEE, ISO, ACM and others.
  • OECD — Framework for the Classification of AI Systems with the twin goals of helping “to develop a common framework for reporting about AI incidents that facilitates global consistency and interoperability in incident reporting,” and advancing “related work on mitigation, compliance and enforcement along the AI system lifecycle, including as it pertains to corporate governance.”
  • NIST — AI Risk Management Framework “to better manage risks to individuals, organizations, and society associated with artificial intelligence.”
  • These frameworks are being developed through a consensus-driven, open, transparent, and collaborative process, not through top-down regulation.
  • Many AI developers and business groups have endorsed the use of such audits and assessments. BSA|The Software Alliance has said that, “By establishing a process for personnel to document key design choices and their underlying rationale, impact assessments enable organizations that develop or deploy high-risk AI to identify and mitigate risks that can emerge throughout a system’s lifecycle.”
  • Developers can still be held accountable for violations of certain ethical norms and best practices, both through private sanctions and potentially even through formal sanctions by consumer protection agencies (the Federal Trade Commission, comparable state offices, or state AGs).
  • EqualAI / WEF — “Badge Program for Responsible AI Governance”
  • field of algorithmic consulting continues to expand (ex: O’Neil Risk Consulting)

Downsides:

  • What constitutes a harm or impact in any given context will often be a contentious matter.
  • Auditing algorithms is nothing like auditing an accounting ledger, where the numbers either add up or they don’t.
  • With algorithms there are no binary metrics that can quantify the correct amount of privacy, safety, or security in any given system.
  • The EU AI Act will be a disaster for AI innovation and investment.
  • Proposed U.S. Algorithmic Accountability Act of 2022 would require that developers perform impact assessments and file them with the Federal Trade Commission. A new Bureau of Technology would be created inside the agency to oversee the process.
  • If enforced through a rigid regulatory regime and another federal bureaucracy, compliance with algorithmic auditing mandates would likely become a convoluted, time-consuming bureaucratic process. That would likely slow the pace of AI development significantly.
  • Academic literature on AI auditing / impact assessment ignores potential costs; mandatory auditing and assessments are treated as a sort of frictionless nirvana when we already know that such a process would entail significant costs.
  • Some AI scholars suggest that NEPA should be the model for AI impact assessments / audits.
  • NEPA assessments were initially quite short (sometimes less than 10 pages), but today the average length of these statements is more than 600 pages, with appendices that average over 1,000 pages on top of that.
  • NEPA assessments take an average of 4.5 years to complete, and between 2010 and 2017, four assessments took at least 17 years to complete.
  • Many important public projects never get done or take far too long to complete at considerably higher expenditure than originally predicted.
  • Mandates would create a number of veto points that opponents of AI could use to stop much progress in the field. This is the “vetocracy” problem.
  • We cannot wait years or even months for bureaucracies to eventually get around to formally signing off on audits or assessments, many of which would be obsolete before they were even done.
  • “global innovation arbitrage” problem would kick in: Innovators and investors increasingly relocate to the jurisdictions where they are treated most hospitably.
  • Both parties already accuse digital technology companies of manipulating their algorithms to censor their views.
  • Whichever party is in power at any given time could use the process to politicize terms like “safety,” “security,” and “non-discrimination” to nudge or even force private AI developers to alter their algorithms to satisfy the desires of partisan politicians or bureaucrats.
  • The FCC abused its ambiguous authority to regulate “in the public interest” and indirectly censored broadcasters through intimidation via jawboning tactics and other “agency threats” or “regulation by raised eyebrow.”
  • There are potentially profound First Amendment issues in play with the regulation of algorithms that have not been explored here but which could become a major part of AI regulatory efforts going forward.

Summary:

  • Auditing and impact assessments can be a part of a more decentralized, polycentric governance framework.
  • Even in the absence of any sort of hard law mandates, algorithmic auditing and impact reviews represent an important way to encourage responsible AI development.
  • But we should be careful about mandating such things due to the many unanticipated costs and consequences of converting this into a top-down, bureaucratic regulatory regime.
  • The process should evolve gradually and organically, as it has in many other fields and sectors.
]]>
https://techliberation.com/2022/07/13/event-video-on-algorithmic-auditing-and-ai-impact-assessments/feed/ 2 77008
VIDEO: My London Talk about the Future of AI Governance https://techliberation.com/2022/06/13/video-my-london-talk-about-the-future-of-ai-governance/ https://techliberation.com/2022/06/13/video-my-london-talk-about-the-future-of-ai-governance/#comments Mon, 13 Jun 2022 09:29:50 +0000 https://techliberation.com/?p=76999

On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:

  • What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
  • Which AI sectors are witnessing the most exciting forms of innovation currently?
  • What are the fundamental policy fault lines in the AI policy debates today?
  • Will fears about disruption and automation lead to a new Luddite movement?
  • How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
  • How did automation affect traditional jobs and sectors?
  • Will the European Union’s AI Act become a global model for regulation and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
  • How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
  • Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
  • What do we mean by “existential risk” as it pertains to artificial intelligence?

I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!

Additional Reading:


]]>
https://techliberation.com/2022/06/13/video-my-london-talk-about-the-future-of-ai-governance/feed/ 4 76999
The Proper Governance Default for AI https://techliberation.com/2022/05/26/the-proper-governance-default-for-ai/ https://techliberation.com/2022/05/26/the-proper-governance-default-for-ai/#comments Thu, 26 May 2022 20:15:21 +0000 https://techliberation.com/?p=76994

[This is a draft of a section of a forthcoming study on “A Flexible Governance Framework for Artificial Intelligence,” which I hope to complete shortly. I welcome feedback. I have also cross-posted this essay at Medium.]

Debates about how to embed ethics and best practices into AI product design are where the question of public policy defaults becomes important. To the extent AI design becomes the subject of legal or regulatory decision-making, a choice must be made between two general approaches: the precautionary principle or the proactionary principle.[1] While there are many hybrid governance approaches in between these two poles, the crucial issue is whether the initial legal default for AI technologies will be set closer to the red light of the precautionary principle (i.e., permissioned innovation) or to the green light of the proactionary principle (i.e., permissionless innovation). Each governance default will be discussed.

The Problem with the Precautionary Principle as the Policy Default for AI

The precautionary principle holds that innovations are to be curtailed or potentially even disallowed until the creators of those new technologies can prove that they will not cause any theoretical harms. The classic formulation of the precautionary principle can be found in the “Wingspread Statement,” which was formulated at an academic conference that took place at the Wingspread Conference Center in Wisconsin in 1998. It read: “Where an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”[2] There have been many reformulations of the precautionary principle over time but, as legal scholar Cass Sunstein has noted, “in all of them, the animating idea is that regulators should take steps to protect against potential harms, even if causal chains are unclear and even if we do not know that those harms will come to fruition.”[3] Put simply, under almost all varieties of the precautionary principle, innovation is treated as “guilty until proven innocent.”[4] We can also think of this as permissioned innovation.

The logic animating the precautionary principle reflects a well-intentioned desire to play it safe in the face of uncertainty. The problem lies in the way this instinct gets translated into law and regulation. Making the precautionary principle the public policy default for any given technology or sector has a strong bearing on how much innovation we can expect to flow from it. When trial-and-error experimentation is preemptively forbidden or discouraged by law, it can limit many of the positive outcomes that typically accompany efforts by people to be creative and entrepreneurial. This can, in turn, give rise to different risks for society in terms of forgone innovation, growth, and corresponding opportunities to improve human welfare in meaningful ways.

St. Thomas Aquinas once observed that if the sole goal of a captain were to preserve their ship, the captain would keep it in port forever. But that clearly is not the captain’s highest goal. Aquinas was making a simple but powerful point: There can be no reward without some effort and even some risk-taking. Ship captains brave the high seas because they are in search of a greater good, such as recognition, adventure, or income. Keeping ships in port forever would preserve their vessels, but at what cost?

Similarly, consider the wise words of Wilbur Wright, who pioneered human flight. Few people better understood the profound risks associated with entrepreneurial activities. After all, Wilbur and his brother were trying to figure out how to literally lift humans off the Earth. The dangers were real, but worth taking. “If you are looking for perfect safety,” Wright said, “you would do well to sit on a fence and watch the birds.” Humans would have never taken to the skies if the Wright brothers had not gotten off the fence and taken the risks they did. Risk-taking drives innovation and, over the long-haul, improves our well-being.[5] Nothing ventured, nothing gained.

These lessons can be applied to public policy by considering what would happen if, in the name of safety, public officials told captains to never leave port or told aspiring pilots to never leave the ground. The opportunity cost of inaction can be hard to quantify, but it should be clear that if we organized our entire society around a rigid application of the precautionary principle, progress and prosperity would suffer.

Heavy-handed preemptive restraints on creative acts can have deleterious effects because they raise barriers to entry, increase compliance costs, and create more risk and uncertainty for entrepreneurs and investors. Thus, it is the unseen costs—primarily in the form of forgone innovation opportunities—that make the precautionary principle so problematic as a policy default. This is why scientist Martin Rees speaks of "the hidden cost of saying no" that is associated with the precautionary principle.[6]

The precise way the precautionary principle leads to this result is that it derails the so-called learning curve by limiting opportunities to learn from trial-and-error experimentation with new and better ways of doing things.[7] The learning curve refers to the way that individuals, organizations, or industries are able to learn from their mistakes, improve their designs, enhance productivity, lower costs, and then offer superior products based on the resulting knowledge.[8] In his recent book, Where Is My Flying Car?, J. Storrs Hall documents how, over the last half century, “regulation clobbered the learning curve” for many important technologies in the U.S., especially nuclear, nanotech, and advanced aviation.[9] Hall shows how society was denied many important innovations due to endless foot-dragging or outright opposition to change from special interests, anti-innovation activists, and over-zealous bureaucrats.

In many cases, innovators don't even know what they are up against because, as many scholars have noted, "the precautionary principle, in all of its forms, is fraught with vagueness and ambiguity."[10] It creates confusion and fear about the wisdom of taking action in the face of uncertainty. Worst-case thinking paralyzes regulators who aim to "play it safe" at all costs. The result is an endless snafu of red tape as layer upon layer of mandates builds up and blocks progress. The outcome is what many scholars now decry as a culture of "vetocracy," a term describing the many veto points within modern political systems that hold back innovation, development, and economic opportunity.[11] This accumulation of veto points in the policy process, in the form of mandates and restrictions, can greatly curtail innovation opportunities. "Like sediment in a harbor, law has steadily accumulated, mainly since the 1960s, until most productive activity requires slogging through a legal swamp," says Philip K. Howard, chair of Common Good.[12] "Too much law," he argues, "can have similar effects as too little law," because:

People slow down, they become defensive, they don’t initiate projects because they are surrounded by legal risks and bureaucratic hurdles. They tiptoe through the day looking over their shoulders rather than driving forward on the power of their instincts. Instead of trial and error, they focus on avoiding error.[13]

This is exactly why it is important that policymakers not get too caught up in attempts to preemptively resolve every hypothetical worst-case scenario associated with AI technologies. The problem with that approach was succinctly summarized by the political scientist Aaron Wildavsky when he noted, "If you can do nothing without knowing first how it will turn out, you cannot do anything at all."[14] Or, as I have stated in a book on this topic, "living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about."[15]

This does not mean society should dismiss all concerns about the risks surrounding AI. Some technological risks do necessitate a degree of precautionary policy, but proportionality is crucial, notes Gabrielle Bauer, a Toronto-based medical writer. “Used too liberally,” she argues, “the precautionary principle can keep us stuck in a state of extreme risk-aversion, leading to cumbersome policies that weigh down our lives. To get to the good parts of life, we need to accept some risk.”[16] It is not enough to simply hypothesize that certain AI innovations might entail some risk. The critics need to prove it using risk analysis techniques that properly weigh both the potential costs and benefits.[17] Moreover, when conducting such analyses, the full range of trade-offs associated with preemptive regulation must be evaluated. Again, where precautionary constraints might deny society life-enriching devices or services, those costs must be acknowledged.

Generally speaking, the most extreme precautionary controls should only be imposed when the potential harms in question are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion.[18] In the context of AI and ML systems, it may be the case that such a test is satisfied already for law enforcement use of certain algorithmic profiling techniques. And that test is satisfied for so-called “killer robots,” or autonomous military technology.[19] These are often described as “existential risks.” The precautionary principle is the right default in these cases because it is abundantly clear how unrestricted use would have catastrophic consequences. For similar reasons, governments have long imposed comprehensive restrictions on certain types of weapons.[20] And although nuclear and chemical technologies have many important applications, their use must also be limited to some degree even outside of militaristic applications because they can pose grave danger if misused.

But the vast majority of AI-enabled technologies are not like this. Most innovations should not be treated the same as a hand grenade or a ticking time bomb. In reality, most algorithmic failures will be more mundane and difficult to foresee in advance. By their very nature, algorithms are constantly evolving because programs and systems are being endlessly tweaked by designers to improve them. In his books on the evolution of engineering and systems design, Henry Petroski has noted that "the shortcomings of things are what drive their evolution."[21] The normal state of things is "ubiquitous imperfection," he notes, and it is precisely that reality that drives efforts to continuously innovate and iterate.[22]

Regulations rooted in the precautionary principle seek to preemptively find and address product imperfections before any harm comes from them. In reality, and as explained more below, it is only through ongoing experimentation that we discover both the nature of failures and the knowledge needed to correct them. As Petroski observes, "the history of engineering in general, may be told in its failures as well as in its triumphs. Success may be grand, but disappointment can often teach us more."[23] This is particularly true for complex algorithmic systems, where rapid-fire innovation and incessant iteration are the norm.

Importantly, the problem with precautionary regulation for AI is not just that it might be over-inclusive in seeking to regulate hypothetical problems that never develop. Precautionary regulation can also be under-inclusive by missing problematic behavior or harms that no one anticipated before the fact. Only experience and experimentation reveal certain problems.

In sum, we should not presume that there is a clear preemptive regulatory solution to every problem some people raise about AI, nor should we presume we can even accurately identify all such problems that might come about in the future. Moreover, some risks will never be eliminated entirely, meaning that risk mitigation is the wiser approach. This is why a more flexible bottom-up governance strategy focused on responsiveness and resiliency makes more sense than heavy-handed, top-down strategies that would only avoid risks by making future innovations extremely difficult if not impossible.

The "Proactionary Principle" Is the Better Default for AI Policy

The previous section made it clear why the precautionary principle should generally not be used as our policy default if we hope to encourage the development of AI applications and services. What we need is a policy approach that:

  • objectively evaluates the concerns raised about AI systems and applications;
  • considers whether more flexible governance approaches might be available to address them; and,
  • does so without resorting to the precautionary principle as a first-order response.

The proactionary principle is the better general policy default for AI because it satisfies these three objectives.[24] Philosopher Max More defines the proactionary principle as the idea that policymakers should, "[p]rotect the freedom to innovate and progress while thinking and planning intelligently for collateral effects."[25] The same concept goes by other names, including the innovation principle, which Daniel Castro and Michael McLaughlin of the Information Technology and Innovation Foundation say represents the belief that "the vast majority of new innovations are beneficial and pose little risk, so government should encourage them."[26] Permissionless innovation is another name for the same idea: the notion that experimentation with new technologies and business models should generally be permitted by default.[27]

What binds these concepts together is the belief that innovation should generally be treated as innocent until proven guilty. There will be risks and failures, of course, but the permissionless innovation mindset views them as important learning experiences. These experiences are chances for individuals, organizations, and all of society to make constant improvements through incessant experimentation with new and better ways of doing things.[28] As Virginia Postrel argued in her 1998 book, The Future and Its Enemies, progress demands “a decentralized, evolutionary process” and mindset in which mistakes are not viewed as permanent disasters but instead as “the correctable by-products of experimentation.”[29] “No one wants to learn by mistakes,” Petroski once noted, “but we cannot learn enough from successes to go beyond the state of the art.”[30] Instead we must realize, as other scholars have observed, that “[s]uccess is the culmination of many failures”[31] and understand “failure as the natural consequence of risk and complexity.”[32]

This is why the default for public policy for AI innovation should, whenever possible, be more green lights than red ones to allow for the maximum amount of trial-and-error experimentation, which encourages ongoing learning.[33] “Experimentation matters,” observes Stefan H. Thomke of the Harvard Business School, “because it fuels the discovery and creation of knowledge and thereby leads to the development and improvement of products, processes, systems, and organizations.”[34]

Obviously, risks and mistakes are “the very things regulators inherently want to avoid,”[35] but “if innovators fear they will be punished for every mistake,” Daniel Castro and Alan McQuinn argue, “then they will be much less assertive in trying to develop the next new thing.”[36] And for all the reasons already stated, that would represent the end of progress because it would foreclose the learning process that allows society to discover new, better, and safer ways of doing things. Technology author Kevin Kelly puts it this way:

technologies must be evaluated in action, by action. We test them in labs, we try them out in prototypes, we use them in pilot programs, we adapt our expectations, we monitor their alterations, we redefine their aims as they are modified, we retest them given actual behavior, we re-direct them to new jobs when we are not happy with their outcomes.[37]

In other words, the proactionary principle appreciates the benefits that flow from learning by doing. The goal is to continuously assess and prioritize risks from natural and human-made systems alike, and then formulate and reformulate our toolkit of possible responses to those risks using the most practical and effective solutions available. This should make it clear that the proactionary approach is not synonymous with anarchy. Various laws, government bodies, and especially the courts play an important role in protecting rights, health, and order. But policies need to be formulated such that innovators and innovation are given the benefit of the doubt and risks are analyzed and addressed in a more flexible fashion.

Some of the most effective ways to address potential AI risks already exist in the form of "soft law" and decentralized governance solutions. These will be discussed at greater length below. Existing legal remedies include various common law solutions (torts, class actions, contract law, etc.), the recall authority possessed by many regulatory agencies, and various consumer protection policies. Ex post remedies are generally superior to ex ante prior restraints if we hope to maximize innovation opportunities. Ex ante regulatory defaults are too often set closer to the red light of the precautionary principle and then enforced through volumes of convoluted red tape.

This is what the World Economic Forum has referred to as a "regulate-and-forget" system of governance,[38] or what others call a "build-and-freeze" model of regulation.[39] In such technological governance regimes, older rules are almost never revisited, even after new social, economic, and technical realities render them obsolete or ineffective.[40] A 2017 Deloitte survey of the U.S. Code of Federal Regulations revealed that 68 percent of federal regulations have never been updated and that 17 percent have been updated only once.[41] Public policies for complex and fast-moving technologies like AI cannot be set in stone and forgotten like that if America hopes to remain on the cutting edge of this sector.

Advocates of the proactionary principle look to counter this problem not by eliminating all laws or agencies, but by bringing them in line with flexible governance principles rooted in more decentralized approaches to policy concerns.[42] As many regulatory advocates suggest, it is important to embed or “bake in” various ethical best practices into AI systems to ensure that they benefit humanity. But this, too, is a process of ongoing learning and there are many ways to accomplish such goals without derailing important technological advances. What is often referred to as “value alignment” or “ethically-aligned design” is challenged by the fact that humans regularly disagree profoundly about many moral issues.[43] “Before we can put our values into machines, we have to figure out how to make our values clear and consistent,” says Harvard University psychologist Joshua D. Greene.[44]

The “Three Laws of Robotics” famously formulated decades ago by Isaac Asimov in his science fiction stories continue to be widely discussed today as a guide to embedding ethics into machines.[45] They read:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

What is usually forgotten about these principles, as AI expert Melanie Mitchell reminds us, is the way Asimov "often focused on the unintended consequences of programming ethical rules into robots," and how he made it clear that, if applied too literally, "such a set of rules would inevitably fail."[46]

This is why flexibility and humility are essential virtues when thinking about AI policy. The optimal governance regime for AI can be shaped by responsible innovation practices and embed important ethical principles by design without immediately defaulting to a rigid application of the precautionary principle.[47] In other words, an innovation policy regime rooted in the proactionary principle can also be infused with the same values that animate a precautionary principle-based system.[48] The difference is that the proactionary principle-based approach will look to achieve these goals in a more flexible fashion using a variety of experimental governance approaches and ex post legal enforcement options, while also encouraging still more innovation to solve problems past innovations may have caused.

To reiterate, not every AI risk is foreseeable, and many risks and harms are more amorphous or uncertain. In this sense, the wisest governance approach for AI was recently outlined by the National Institute of Standards and Technology (NIST) in its initial draft AI Risk Management Framework, which is a multistakeholder effort “to describe how the risks from AI-based systems differ from other domains and to encourage and equip many different stakeholders in AI to address those risks purposefully.”[49] NIST notes that the goal of the Framework is:

to be responsive to new risks as they emerge rather than enumerating all known risks in advance. This flexibility is particularly important where impacts are not easily foreseeable, and applications are evolving rapidly. While AI benefits and some AI risks are well-known, the AI community is only beginning to understand and classify incidents and scenarios that result in harm.[50]

This is a sensible framework for how to address AI risks because it makes it clear that it will be difficult to preemptively identify and address all potential AI risks. At the same time, there will be a continuing need to advance AI innovation while addressing AI-related harms. The key to striking that balance will be decentralized governance approaches and soft law techniques described below.

[Note: The subsequent sections of the study will detail how decentralized governance approaches and soft law techniques already are helping to address concerns about AI risks.]

Endnotes:

[1]     Adam Thierer, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, 2nd ed. (Arlington, VA: Mercatus Center at George Mason University, 2016): 1-6, 23-38; Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 48-54.

[2]     “Wingspread Statement on the Precautionary Principle,” January 1998, https://www.gdrc.org/u-gov/precaution-3.html.

[3]     Cass R. Sunstein, Laws of Fear: Beyond the Precautionary Principle (Cambridge, UK: Cambridge University Press, 2005). (“The Precautionary Principle takes many forms. But in all of them, the animating idea is that regulators should take steps to protect against potential harms, even if causal chains are unclear and even if we do not know that those harms will come to fruition.”)

[4]     Henk van den Belt, “Debating the Precautionary Principle: ‘Guilty until Proven Innocent’ or ‘Innocent until Proven Guilty’?” Plant Physiology 132 (2003): 1124.

[5]     H.W. Lewis, Technological Risk (New York: WW. Norton & Co., 1990): x. (“The history of the human race would be dreary indeed if none of our forebears had ever been willing to accept risk in return for potential achievement.”)

[6]     Martin Rees, On the Future: Prospects for Humanity (Princeton, NJ: Princeton University Press, 2018): 136.

[7]     Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.

[8]     Adam Thierer, “How to Get the Future We Were Promised,” Discourse, January 18, 2022, https://www.discoursemagazine.com/culture-and-society/2022/01/18/how-to-get-the-future-we-were-promised.

[9]     J. Storrs Hall, Where Is My Flying Car? (San Francisco: Stripe Press, 2021).

[10]    Derek Turner and Lauren Hartzell Nichols, “The Lack of Clarity in the Precautionary Principle,” Environmental Values, Vol 13, No. 4 (2004): 449.

[11]    William Rinehart, “Vetocracy, the Costs of Vetos and Inaction,” Center for Growth & Opportunity at Utah State University, March 24, 2022, https://www.thecgo.org/benchmark/vetocracy-the-costs-of-vetos-and-inaction; Adam Thierer, “Red Tape Reform is the Key to Building Again,” The Hill, April 28, 2022, https://thehill.com/opinion/finance/3470334-red-tape-reform-is-the-key-to-building-again.

[12]    Philip K. Howard, “Radically Simplify Law,” Cato Institute, Cato Online Forum, http://www.cato.org/publications/cato-online-forum/radically-simplify-law.

[13]    Ibid.

[14]    Aaron Wildavsky, Searching for Safety (New Brunswick, NJ: Transaction Publishers, 1989): 38.

[15]    Thierer, Permissionless Innovation, at 2.

[16]    Gabrielle Bauer, “Danger: Caution Ahead,” The New Atlantis, February 4, 2022, https://www.thenewatlantis.com/publications/danger-caution-ahead.

[17]    Richard B. Belzer, “Risk Assessment, Safety Assessment, and the Estimation of Regulatory Benefits” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, 2012), 5, http://mercatus.org/publication/risk-assessment-safety-assessment-and-estimation-regulatory-benefits; John D. Graham and Jonathan Baert Wiener, eds. Risk vs. Risk: Tradeoffs in Protecting Health and the Environment, (Cambridge, MA: Harvard University Press, 1995).

[18]    Thierer, Permissionless Innovation, at 33-8.

[19]    Adam Satariano, Nick Cumming-Bruce and Rick Gladstone, “Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing,” New York Times, December 17, 2021, https://www.nytimes.com/2021/12/17/world/robot-drone-ban.html.

[20]    Adam Thierer, “Soft Law: The Reconciliation of Permissionless & Responsible Innovation,” in Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 183-240, https://www.mercatus.org/publications/technology-and-innovation/soft-law-reconciliation-permissionless-responsible-innovation.

[21]    Henry Petroski, The Evolution of Useful Things (New York: Vintage Books, 1994): 34.

[22]    Ibid., 27.

[23]    Henry Petroski, To Engineer is Human: The Role of Failure in Successful Design (New York: Vintage, 1992): 9.

[24]    James Lawson, These Are the Droids You’re Looking For: An Optimistic Vision for Artificial Intelligence, Automation and the Future of Work (London: Adam Smith Institute, 2020): 86, https://www.adamsmith.org/research/these-are-the-droids-youre-looking-for.

[25]    Max More, “The Proactionary Principle (March 2008),” Max More’s Strategic Philosophy, March 28, 2008, http://strategicphilosophy.blogspot.com/2008/03/proactionary-principle-march-2008.html.

[26]    Daniel Castro & Michael McLaughlin, “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” Information Technology and Innovation Foundation, February 4, 2019, https://itif.org/publications/2019/02/04/ten-ways-precautionary-principle-undermines-progress-artificial-intelligence.

[27]    Thierer, Permissionless Innovation.

[28]    Thierer, “Failing Better.”

[29]    Virginia Postrel, The Future and Its Enemies (New York: The Free Press, 1998): xiv.

[30]    Henry Petroski, To Engineer is Human: The Role of Failure in Successful Design (New York: Vintage, 1992): 62.

[31]    Kevin Ashton, How to Fly a Horse: The Secret History of Creation, Invention, and Discovery (New York: Doubleday, 2015): 67.

[32]    Megan McArdle, The Up Side of Down: Why Failing Well is the Key to Success (New York: Viking, 2014), 214.

[33]    F. A. Hayek, The Constitution of Liberty (London: Routledge, 1960, 1990): 81. (“Humiliating to human pride as it may be, we must recognize that the advance and even preservation of civilization are dependent upon a maximum of opportunity for accidents to happen.”)

[34]    Stefan H. Thomke, Experimentation Matters: Unlocking the Potential of New Technologies for Innovation (Harvard Business Review Press, 2003), 1.

[35]    Daniel Castro and Alan McQuinn, “How and When Regulators Should Intervene,” Information Technology and Innovation Foundation Reports, (February 2015): 2 http://www.itif.org/publications/how-and-when-regulators-should-intervene.

[36]    Ibid.

[37]    Kevin Kelly, “The Pro-Actionary Principle,” The Technium, November 11, 2008, https://kk.org/thetechnium/the-pro-actiona.

[38]    World Economic Forum, Agile Regulation for the Fourth Industrial Revolution (Geneva: Switzerland: 2020): 4, https://www.weforum.org/projects/agile-regulation-for-the-fourth-industrial-revolution.

[39]    Jordan Reimschisel and Adam Thierer, “’Build & Freeze’ Regulation Versus Iterative Innovation,” Plain Text, November 1, 2017, https://readplaintext.com/build-freeze-regulation-versus-iterative-innovation-8d5a8802e5da.

[40]    Adam Thierer, “Spring Cleaning for the Regulatory State,” AIER, May 23, 2019, https://www.aier.org/article/spring-cleaning-for-the-regulatory-state.

[41]    Daniel Byler, Beth Flores & Jason Lewris, “Using Advanced Analytics to Drive Regulatory Reform: Understanding Presidential Orders on Regulation Reform,” Deloitte, 2017, https://www2.deloitte.com/us/en/pages/public-sector/articles/advanced-analytics-federal-regulatory-reform.html.

[42]    Adam Thierer, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, American Enterprise Institute (April 2022), https://platforms.aei.org/can-the-knowledge-gap-between-regulators-and-innovators-be-narrowed.

[43]    Brian Christian, The Alignment Problem: Machine Learning and Human Values (New York: W.W. Norton & Company, 2020).

[44]    Joshua D. Greene, “Our Driverless Dilemma,” Science (June 2016): 1515.

[45]    Susan Leigh Anderson, “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics,” AI and Society, Vol. 22, No. 4, (2008): 477-493.

[46]    Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (New York: Farrar, Straus and Giroux, 2019): 126 [Kindle edition].

[47]    Thomas A. Hemphill, “The Innovation Governance Dilemma: Alternatives to the Precautionary Principle,” Technology in Society, Vol. 63 (2020): 6, https://ideas.repec.org/a/eee/teinso/v63y2020ics0160791x2030751x.html.

[48]    Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.

[49]    The National Institute of Standards and Technology, “AI Risk Management Framework: Initial Draft,” (March 17, 2022): 1, https://www.nist.gov/itl/ai-risk-management-framework.

[50]    Ibid., at 5.
