Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

We Need Federal Preemption of State & Local AI Regulation
https://techliberation.com/2024/01/22/we-need-federal-preemption-of-state-local-ai-regulation/
Mon, 22 Jan 2024 15:51:44 +0000

In my latest column for The Hill, I explore how “State and Local Meddling Threatens to Undermine the AI Revolution” in America as mountains of parochial tech mandates accumulate. We need a federal response, but we’re not likely to get the right one, I argue.

I specifically highlight the danger of new measures from big states like New York and California, but it is the patchwork of all the state and local regulations that will produce a sort of 'death-by-a-thousand-cuts' for AI innovation, as accumulating red tape hinders experimentation and capital formation.

What we need is the same sort of principled, pro-innovation federal framework for AI that we adopted for the Internet a generation ago. Specifically, we need some sort of preemption of most state and local constraints on what is inherently national (and even global) commerce and speech.

Alas, Congress appears incapable of getting even basic things done on tech policy these days. As far as I can tell, not a single AI bill in front of Congress today would preempt most of this state and local AI regulatory activity.

Worse yet, if Congress did somehow pass anything on AI right now, it’d probably just include even more anti-innovation mandates and agencies without preempting any of the state and local ones. Thus, America would just be piling bad mandates on top of bad mandates until we basically become like Europe, where innovation goes to die under piles of bureaucratic red tape.

It’s a miserable state of affairs with horrible consequences for the U.S. as global competition from China heats up on the AI front. America is sacrificing its competitive advantage on digital technology because fear-based thinking and partisan politics continue to prevent the adoption of a principled, bipartisan vision for artificial intelligence policy.

See my new Hill column for more discussion, and also make sure to check out my earlier Hill essay on “A balanced AI governance vision for America,” as well as these two big R Street Institute reports from last year about how Congress can craft sensible, pro-innovation AI policy for America:

And here is some additional reading on the dangerous regulatory situation we face today, in which artificial intelligence is being over-regulated by treating innovators as guilty until proven innocent. America is about to shoot itself in the foot just as the global race begins for the most important technological revolution of our lifetime:

Podcast: “AI – DC Policymakers Face a Crossroads”
https://techliberation.com/2023/12/12/podcast-ai-dc-policymakers-face-a-crossroads/
Tue, 12 Dec 2023 13:06:14 +0000

Here’s a new DC EKG podcast I recently appeared on to discuss the current state of policy development surrounding artificial intelligence. In our wide-ranging chat, we discussed:

* why a sectoral approach to AI policy is superior to general purpose licensing
* why comprehensive AI legislation will not pass in Congress
* the best way to deal with algorithmic deception
* why Europe lost its tech sector
* how a global AI regulator threatens our safety
* the problem with Biden’s AI executive order
* will AI policy follow the same path as nuclear policy?
* global innovation arbitrage & the innovation cage
* AI, health care & FDA regulation
* AI regulation vs trade secrets
* is AI transparency / auditing the solution?

Listen to the full show here or here. To read more about current AI policy developments, check out my “Running List of My Research on AI, ML & Robotics Policy.”

 

Can Any AI Legislation Pass Congress This Session?
https://techliberation.com/2023/10/17/can-any-ai-legislation-pass-congress-this-session/
Tue, 17 Oct 2023 17:49:49 +0000

My latest dispatch from the frontlines of the artificial intelligence policy wars in Washington looks at the major proposals to regulate AI. In my new essay, “Artificial Intelligence Legislative Outlook: Fall 2023 Update,” I argue that there are three major impediments to getting AI legislation over the finish line in Congress: (1) the breadth and complexity of the issue; (2) the multiplicity of concerns and special interests; and (3) extreme rhetoric and proposals dominating the discussion.

If Congress wants to get something done this session, it will need to do two things: (1) set aside the most radical regulatory proposals (like big new AI agencies or licensing schemes); and (2) break AI policy down into its smaller subcomponents and then prioritize among them where policy gaps might exist.

Prediction: Congress will not pass any AI-related legislation this session due to the factors identified in my essay. The temptation to “go big” with everything-and-the-kitchen-sink approaches to AI regulation (especially extreme ideas like new agencies and licensing schemes) will doom AI legislation. It’s also worth noting that Washington’s swelling interest in AI policy is having a crowding-out effect on other important legislative proposals that might otherwise have advanced, such as the baseline privacy bill (ADPPA) and driverless car legislation. Many want to advance those efforts first, but the AI focus makes that hard.

Read the entire essay here.

Event Video: Debating Frontier AI Regulation
https://techliberation.com/2023/09/15/event-video-debating-frontier-ai-regulation/
Fri, 15 Sep 2023 14:39:59 +0000

The Brookings Institution hosted an excellent event on frontier AI regulation this week, featuring a panel discussion I joined following opening remarks from Rep. Ted Lieu (D-CA). I come in around the 51-minute mark of the event video and explain why I worry that AI policy now threatens to devolve into an all-out war on computation, and on open source innovation in particular.

I argue that some pundits and policymakers appear to be on the way to substituting a very real existential risk (authoritarian government control over computation and science) for a hypothetical existential risk of powerful AGI. I explain that there are better, less destructive ways to address frontier AI concerns than the highly repressive approaches currently being considered.

I have developed these themes and arguments at much greater length in a series of essays over on Medium over the past few months. If you care to read more, the four key articles to begin with are:

In June, I also released a longer R Street Institute report on “Existential Risks & Global Governance Issues around AI & Robotics,” and then spent an hour talking about these issues on the TechPolicyPodcast about “Who’s Afraid of Artificial Intelligence?” All of my past writing and speaking on AI, ML, and robotics policy can be found here, and that list is updated every month.

As always, I’ll have much more to say on this topic as the war on computation expands. This is quickly becoming the most epic technology policy battle of modern times.

EVENT VIDEO: “Who’s Leading on AI Policy?”
https://techliberation.com/2023/09/12/event-video-whos-leading-on-ai-policy/
Tue, 12 Sep 2023 18:55:30 +0000

It was my pleasure to participate in this Cato Institute event today on “Who’s Leading on AI Policy? Examining EU and U.S. Policy Proposals and the Future of AI.” Cato’s Jennifer Huddleston hosted, and also participating was Boniface de Champris, policy manager with the Computer and Communications Industry Association. Here’s a brief outline of some of the issues we discussed:

  • What are the 7 leading concerns driving AI policy today?
  • What is the difference between horizontal vs. vertical AI regulation?
  • Which agencies are moving currently to extend their reach and regulate AI tech?
  • What’s going on at the state, local, and municipal level in the US on AI policy?
  • How will the so-called “Brussels Effect” influence the course of AI policy in the US?
  • What have the results been of the EU’s experience with the GDPR?
  • How will the EU AI Act work in practice?
  • Can we make algorithmic systems perfectly transparent / “explainable”?
  • Should AI innovators be treated as ‘guilty until proven innocent’ of certain risks?
  • How will existing legal concepts and standards (like civil rights law and unfair and deceptive practices regulation) be applied to algorithmic technologies?
  • Do we have a fear-based model of AI governance currently? What role has science fiction played in fueling that?
  • What role will open source AI play going forward?
  • Is AI licensing a good idea? How would it even work?
  • Can AI help us identify and address societal bias and discrimination?

Again, you can watch the entire video here and, as always, here’s my “Running List of My Research on AI, ML & Robotics Policy.”

America Does Not Need a Digital Consumer Protection Commission
https://techliberation.com/2023/08/10/america-does-not-need-a-digital-consumer-protection-commission/
Thu, 10 Aug 2023 15:25:01 +0000

The New York Times today published my response to an op-ed by Senators Lindsey Graham and Elizabeth Warren calling for a new “Digital Consumer Protection Commission” to micromanage the high-tech information economy. “Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution,” I argue. Here’s my full response:

Senators Lindsey Graham and Elizabeth Warren propose a new federal mega-regulator for the digital economy that threatens to undermine America’s global technology standing.

A new “licensing and policing” authority would stall the continued growth of advanced technologies like artificial intelligence in America, leaving China and others to claw back crucial geopolitical strategic ground.

America’s digital technology sector enjoyed remarkable success over the past quarter-century — and provided vast investment and job growth — because the U.S. rejected the heavy-handed regulatory model of the analog era, which stifled innovation and competition.

The tech companies that Senators Graham and Warren cite (along with countless others) came about over the past quarter-century because we opened markets and rejected the monopoly-preserving regulatory regimes that had been captured by old players.

The U.S. has plenty of federal bureaucracies, and many already oversee the issues that the senators want addressed. Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution.

Good FAA Update on State and Local Rules for Drone Airspace
https://techliberation.com/2023/08/07/good-faa-update-on-state-and-local-rules-for-drone-airspace/
Mon, 07 Aug 2023 14:36:02 +0000

There’s been exciting progress in US drone policy in the past few months. First, in April the FAA announced surprising new guidance regarding drone airspace access in its Aeronautical Information Manual. As I noted in an article for the State Aviation Journal, the new Manual states:

There can be certain local restrictions to airspace. While the FAA is designated by federal law to be the regulator of the NAS [national airspace system], some state and local authorities may also restrict access to local airspace. UAS pilots should be aware of these local rules.

That April update has been followed by a bigger drone policy update from the FAA. On July 14, the FAA went further than the April guidance, updating and replacing its 2015 guidance to states and localities about drone regulation and airspace policy.

In this July 2023 guidance, I was pleasantly surprised to see the FAA recognize some state and local authority in the “immediate reaches” airspace. Notably, in the new guidance the FAA expressly notes that state laws that “prohibit [or] restrict . . . operations by UAS in the immediate reaches of property” are an example of laws not subject to conflict preemption.

A handful of legal scholars, including ASU Law Professor Troy Rule and me, have urged federal officials for years to recognize that states, localities, and landowners have a significant say in what happens in very low-altitude airspace: the “immediate reaches” above land. That’s because the US Supreme Court in US v. Causby recognized that the “immediate reaches” above land are real property owned by the landowner:

[I]t is obvious that, if the landowner is to have full enjoyment of the land, he must have exclusive control of the immediate reaches of the enveloping atmosphere. …As we have said, the flight of airplanes, which skim the surface but do not touch it, is as much an appropriation of the use of the land as a more conventional entry upon it.

Prior to these recent updates, the FAA’s position on which rules apply in very low-altitude airspace, FAA rules or state property rules, was confusing. The agency informally asserts authority to regulate drone operations down to “the grass tips”; however, many landowners don’t want drones to enter the airspace immediately above their land without permission and would sue to protect their property rights. This is not a purely academic concern: the open question of whether and when drones can fly in very low-altitude airspace has created damaging uncertainty for the industry. As the Government Accountability Office told Congress in 2020:

The legal uncertainty surrounding these [low-altitude airspace] issues is presenting challenges to integration of UAS [unmanned aircraft systems] into the national airspace system.


With this July update, the FAA helps clarify matters. To my knowledge, this is the FAA’s first mention of “immediate reaches,” and its first implicit reference to Causby. The update helpfully protects, in my view, property rights and federalism. It also represents a win for the drone industry, which finally has some federal clarity after a decade of uncertainty about how low drones can fly. Drone operators now know they can sometimes be subject to local rules about aerial trespass. States and cities now know that they can create certain, limited prohibitions, which will be helpful for protecting sensitive locations like neighborhoods, stadiums, prisons, and state parks and conservation areas.

As an aside: It seems possible that one motivation for the FAA adding this language is to foreclose future takings litigation (a la Cedar Point Nursery v. Hassid) against the FAA. With this new guidance, the FAA can now point out in future takings litigation that it does not authorize drone operations in the immediate reaches of airspace; the guidance indicates that operations in the immediate reaches are largely a question of state property and trespass law.

On the whole, I think this new FAA guidance is strong, especially the first formal FAA recognition of some state authority over the “immediate reaches.” That said, as a USDOT Inspector General report to Congress pointed out last year, the FAA has not been responsive when state officials have questions about creating drone rules to complement federal rules. In 2018, for instance, a lead State “participant [in an FAA drone program] requested a clarification as to whether particular State laws regarding UAS conflicted with Federal regulations. According to FAA, as of February 2022 . . . FAA has not yet provided an opinion in response to that request.”

More than four years of silence from the FAA is a long time for a state official to wait, and it’s a lifetime for a drone startup looking for legal clarity. I do worry about agency non-answers on preemption questions from states, and about how other provisions in this new guidance will be interpreted. Hopefully this new guidance means FAA employees can be more responsive to inquiries from state officials. With the April and July airspace policy updates, the FAA, state aviation offices, the drone industry, and local officials are in a better position to create commercial drone networks nationwide, while protecting the property and privacy expectations of residents.

Further Reading

See my July report on drones and airspace policy for state officials, including state rankings: “2023 State Drone Commerce Rankings: How prepared is your state for drone commerce?”.

Is AI Really an Unregulated Wild West?
https://techliberation.com/2023/06/22/is-ai-really-an-unregulated-wild-west/
Thu, 22 Jun 2023 15:04:44 +0000

As I noted in a recent interview with James Pethokoukis for his Faster, Please! newsletter, “[t]he current policy debate over artificial intelligence is haunted by many mythologies and mistaken assumptions. The most problematic of these is the widespread belief that AI is completely ungoverned today.” In a recent R Street Institute report and series of other publications, I have documented just how wrong that particular assumption is.

The first thing I try to remind everyone of is that the U.S. federal government is absolutely massive: 2.1 million employees, 15 cabinet agencies, 50 independent federal commissions and 434 federal departments. Strangely, when policymakers and pundits deliver remarks on AI policy today, they seem to ignore all that regulatory capacity while casually tossing out proposals to add more and more layers of regulation and bureaucracy on top of it. Well, I say why not first see whether the existing regulations and bureaucracies are working, and then we can have a chat about what more is needed to fill gaps.

And a lot is being done on this front. In a new blog post for R Street, I offer a brief summary of some of the most important recent efforts.

  • In January, the National Institute of Standards and Technology released its “AI Risk Management Framework,” which was created through a multi-year, multi-stakeholder process. It is intended to help developers and policymakers better understand how to identify and address various types of potential algorithmic risk.
  • The Food and Drug Administration (FDA) has been using its broad regulatory powers to review and approve AI and ML-enabled medical devices for many years already, and the agency possesses broad recall authority that can address risks that develop from algorithmic or robotic systems. The FDA is currently refining its approach to AI/ML in a major proceeding.
  • The National Highway Traffic Safety Administration (NHTSA) has been issuing constant revisions to its driverless car policy guidelines since 2016. Like the FDA, the NHTSA also has broad recall authority, which it used in February 2023 to mandate a recall of Tesla’s “Full Self-Driving” system, requiring an over-the-air software update to more than 300,000 vehicles that had the software package.
  • In 2021, the Consumer Product Safety Commission issued a major report highlighting the many policy tools it already has to address AI risks. Like the FDA and NHTSA, the agency has recall authority that can address risks arising from consumer-facing algorithmic or robotic systems.
  • In April, Securities and Exchange Commission Chairman Gary Gensler told Congress that his agency is moving to address AI and predictive data analytics in finance and investing.
  • The Federal Trade Commission (FTC) has become increasingly active on AI policy issues and has noted in a series of recent blog posts that the agency is ready to use its broad authority over “unfair and deceptive practices” involving algorithmic claims or applications.
  • The Equal Employment Opportunity Commission (EEOC) recently released a memo as part of its “ongoing effort to help ensure that the use of new technologies complies with federal [equal employment opportunity] law.” It outlines how existing employment antidiscrimination laws and policies cover algorithmic technologies.
  • In May, the Consumer Financial Protection Bureau (CFPB) issued a statement clarifying how existing federal anti-discrimination law already applies to complex algorithmic systems used for lending decisions. The agency also recently released a report on the use of chatbots in consumer finance, explaining the many ways the “CFPB is actively monitoring the market” for risks associated with these new services.
  • Along with the EEOC, the FTC and the CFPB, the Civil Rights Division of the Department of Justice released an April joint statement in which the agency heads said they would be looking to take preemptive steps to address algorithmic discrimination.

“This is real-time algorithmic governance in action,” I argue. Again, additional regulatory steps may be needed later to fill gaps in current law, but policymakers should begin by acknowledging that a lot of algorithmic oversight authority exists across the federal government. Meanwhile, the courts and our common law system are also starting to address novel AI problems as cases develop. For more along these lines, see my recent essay on “The Many Ways Government Already Regulates Artificial Intelligence.”

So, next time someone suggests that AI is developing in an unregulated “Wild West,” remind them of all these existing laws, agencies, and regulatory efforts. And then also ask them a different question no one is really exploring currently: Could it be the case that many agencies are already overregulating some algorithmic and autonomous systems? (I’m looking at you, FAA!) Why is no one worried about that possibility as the global AI race with China and other countries intensifies?

Additional Reading:

New Report: Do We Need Global Government to Address AI Risk?
https://techliberation.com/2023/06/16/new-report-do-we-need-global-government-to-address-ai-risk/
Fri, 16 Jun 2023 13:27:15 +0000

Can we advance AI safety without new international regulatory bureaucracies, licensing schemes or global surveillance systems? I explore that question in my latest R Street Institute study, “Existential Risks & Global Governance Issues around AI & Robotics” (31 pgs). My report rejects extremist thinking about AI arms control and stresses that the “realpolitik” of international AI governance is such that these problems cannot, and must not, be solved through silver-bullet gimmicks and grandiose global government regulatory regimes.

The report uses Nick Bostrom’s “vulnerable world hypothesis” as a launching point and discusses how his five specific control mechanisms for addressing AI risks have started having real-world influence with extreme regulatory proposals now being floated. My report also does a deep dive into the debate about a proposed global ban on “killer robots” and looks at how past treaties and arms control efforts might apply, or what we can learn from them about what won’t work.

I argue that proposals to impose global controls on AI through a worldwide regulatory authority are both unwise and unlikely to work. Calls for bans or “pauses” on AI developments are largely futile because many nations will not agree to them. As with nuclear and chemical weapons, treaties, accords, sanctions and other multilateral agreements can help address some threats of malicious uses of AI or robotics. But trade-offs are inevitable, and addressing one type of existential risk sometimes can give rise to other risks.

A culture of AI safety by design is critical. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society. The most effective solution to technological problems usually lies in more innovation, not less. Many other multistakeholder and multilateral efforts can also advance AI safety; the final third of my study is devoted to a discussion of them. Continuous communication, coordination and cooperation—among countries, developers, professional bodies and other stakeholders—will be essential.

My new report concludes with a plea to reject fatalism and fanaticism when discussing global AI risks. It’s worth recalling what Bertrand Russell said in 1951 about how only global government could save humanity. He predicted “[t]he end of human life, perhaps of all life on our planet,” before the end of the century unless the world unified under “a single government, possessing a monopoly of all the major weapons of war.” He was very wrong, of course, and thank God he did not get his wish, because an effort to unite the world under one global government would have entailed different existential risks that he never seriously considered. We need to reject extremist global government solutions as the basis for controlling technological risk.

Three quick notes.

First, this new report is the third in a trilogy of major R Street Institute studies on bottom-up, polycentric AI governance. If you only read one, make it this: “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.” 

Second, I wrapped up this latest report a few months ago, before Microsoft and OpenAI floated new comprehensive AI regulatory controls. So, for an important follow-up to this report, please read: “Microsoft’s New AI Regulatory Framework & the Coming Battle over Computational Control.”

Finally, if you’d like to hear me discuss many of the findings from these new reports and essays at greater length, check out my recent appearance on TechFreedom’s “Tech Policy Podcast,” with Corbin K. Barthold. We do a deep dive on all these AI governance trends and regulatory proposals.

As always, all my writing on AI, ML and robotics can be found here and my most recent things are found below.

Additional Reading:

Podcast: “Who’s Afraid of Artificial Intelligence?”
https://techliberation.com/2023/06/12/podcast-whos-afraid-of-artificial-intelligence/
Mon, 12 Jun 2023 17:30:32 +0000

This week, I appeared on the TechFreedom Tech Policy Podcast to discuss “Who’s Afraid of Artificial Intelligence?” It’s an in-depth, wide-ranging conversation about all things AI. Here’s a summary of what host Corbin Barthold and I discussed:

1. The “little miracles happening every day” thanks to AI
2. Is AI a “born free” technology?
3. Potential anti-competitive effects of AI regulation
4. The flurry of joint letters
5. The political realities of a new AI regulatory agency
6. The EU’s precautionary-principle tech policy disaster
7. The looming “war on computation” & open source
8. The role of common law for AI
9. Is Sam Altman breaking the very laws he proposes?
10. Do we need an IAEA for AI or an “AI Island”?
11. Nick Bostrom’s global control & surveillance model
12. Why “doom porn” dominates in academic circles
13. Will AI take all the jobs?
14. Smart regulation of algorithmic technology
15. How the “pacing problem” is sometimes the “pacing benefit”

Podcast: “Artificial Intelligence for Dummies”
https://techliberation.com/2023/06/12/podcast-artificial-intelligence-for-dummies/
Mon, 12 Jun 2023 12:29:49 +0000

It was my pleasure to recently appear on the Independent Women’s Forum’s “She Thinks” podcast to discuss “Artificial Intelligence for Dummies.” In this 24-minute conversation with host Beverly Hallberg, I outline basic definitions, identify potential benefits, and then consider some of the risks associated with AI, machine learning, and algorithmic systems.

Reminder: you can find all my relevant past work on these issues via my “Running List of My Research on AI, ML & Robotics Policy.”

Event Video: “Does the US Need a New AI Regulator?”
https://techliberation.com/2023/06/07/event-video-does-the-us-need-a-new-ai-regulator/
Wed, 07 Jun 2023 12:41:49 +0000

Here’s the video from a June 6 event, “Does the US Need a New AI Regulator?,” co-hosted by the Center for Data Innovation and the R Street Institute. We discuss algorithmic audits, AI licensing, an “FDA for algorithms” and other possible regulatory approaches, as well as various “soft law” self-regulatory efforts and targeted agency efforts. The event was hosted by Daniel Castro and included Lee Tiedrich, Shane Tews, Ben Shneiderman and me.

Additional Reading:

Podcast: Should We Regulate AI?
https://techliberation.com/2023/05/08/podcast-should-we-regulate-ai/
Mon, 08 May 2023 12:15:12 +0000

It was my pleasure to recently join Matthew Lesh, director of public policy and communications for the London-based Institute of Economic Affairs (IEA), for the IEA podcast discussion, “Should We Regulate AI?” In our wide-ranging 30-minute conversation, we discuss how artificial intelligence policy is playing out across nations, and I explain why I feel the UK has positioned itself smartly relative to the US and EU on AI policy. I argue that the UK approach encourages a better ‘innovation culture’ than the new US model being formulated by the Biden administration.

We also went through some of the many concerns driving calls to regulate AI today, including: fears about job dislocations, privacy and security issues, national security and existential risks, and much more.

Additional reading:

Finally: Clearer FAA Guidance on State and Local Airspace Restrictions
https://techliberation.com/2023/05/07/finally-clearer-faa-guidance-on-state-and-local-airspace-restrictions/
Mon, 08 May 2023 03:17:10 +0000

I stumbled across a surprising drone policy update in the FAA’s Aeronautical Information Manual (Manual) last week. The Manual contains official guidance and best practices for US airspace users. (My friend Marc Scribner reminds me that the Manual is not formally regulatory, though it often restates or summarizes regulations.) The Manual has an (apparently) new section: “Airspace Access for UAS.” In the subsection “Airspace Restrictions To Flight” (11-4-6), it notes:

There can be certain local restrictions to airspace. While the FAA is designated by federal law to be the regulator of the NAS [national airspace system], some state and local authorities may also restrict access to local airspace. UAS pilots should be aware of these local rules.

Legally speaking, the FAA is recognizing that there is no “field preemption” when it comes to low-altitude airspace restrictions. When I shared this provision with aviation and drone experts, each agreed it was new and surprising policy guidance. The drone provisions appear to have been part of updates made on April 20, 2023. In my view, it’s very welcome guidance.

Some background: In 2015, the FAA released a helpful “fact sheet” for state and local officials about drone regulations, as state legislatures began regulating drone operations in earnest. The FAA identified several drone-related areas, including aviation safety, where federal aviation rules are extensive. The agency noted:

Laws traditionally related to state and local police power – including land use, zoning, privacy, trespass, and law enforcement operations – generally are not subject to federal regulation.

To ensure state and federal drone laws were not in conflict, the FAA recommended that state and local officials consult with the FAA before creating “operational UAS restrictions on flight altitude, flight paths; operational bans; any regulation of the navigable airspace.”

That guidance is still current and still useful. Around 2017, however, it seems some within the FAA began publicly and privately taking a rather harder line regarding state and local rules about drone operations. For instance, in July 2018, someone at the FAA posted a confusing and brief new statement on the FAA website about state and local drone rules that is hard to reconcile with the 2015 guidance.

Others noticed the change at the FAA and reported it to Congress, along with the legal uncertainty it created as companies wanted to deploy and states and cities wanted reasonable rules on operations to protect their residents. Last year, the USDOT Inspector General told Congress that in 2018 a lead state participant in an FAA drone program requested clarification as to whether particular state laws regarding drones conflicted with FAA rules. When the Inspector General asked the FAA for an update four years later, the “FAA has not yet provided an opinion in response to that request.” The GAO likewise told Congress a few years ago that an unsettled question has plagued the drone industry and state lawmakers for years: Can states enforce local restrictions on surface airspace? GAO reported that the federal government had not taken a formal position on whether local restrictions were enforceable.

Finally, the FAA has made it clear: Yes, in some circumstances, state and local officials may restrict access to local airspace.

Unfortunately, the drone industry and aviation regulators nationwide have lost several years (and many companies) waiting for a clear federal position.

Courts on Field Preemption

Many drone advocates, even recently, assert that state and local regulators can’t restrict surface airspace. Some incorrectly claim, among other things, that only the FAA can regulate airspace and that state and local airspace rules are subject to “field preemption.” Courts have ruled against drone advocates in the three cases I’m aware of where field preemption was raised: Singer v. City of Newton, NPPA v. McCraw, and Xizmo v. New York City. As the court said in Singer:

the FAA explicitly contemplates state or local regulation of pilotless aircraft, defeating Singer’s argument that the whole field is exclusive to the federal government.

Legal Scholarship on Drone Regulation

Likewise, it was clear to many legal scholars that some state and local airspace rules would apply to drones. Around 2016, I set out to write a policy research paper on the need for clear and uniform federal rules for the low-altitude airspace that small drones use (“surface airspace”). I ran into a problem with my thesis: surface airspace policy is not a straightforward exercise of federal regulation. Analysis by legal scholars like Prof. Troy Rule (ASU Law), Prof. Laura Donohue (Georgetown Law), and Prof. Henry Smith (Harvard Law) convinced me that any federal aviation rules purporting to authorize drone flights into surface airspace (say, below 200 feet altitude or so) would run into a buzzsaw of legal challenges from state governments and landowners concerning state authority, trespass, and private property takings.

That’s because it is black-letter law that “real property” in the US has a three-dimensional aspect that includes surface airspace. Further, landowners’ property rights and entitlements are typically determined by common law and state law, not by federal aviation officials.

My original thesis scrapped, my paper went in a new direction. My research on drone policy took me through the history of surface airspace propertization, back to 19th-century Anglo-American legal treatises and court decisions, which I explored in a working paper published by the Mercatus Center in 2020 (and edited and republished by the Akron Law Review). To accelerate commercial drone deployments nationwide, I proposed a “cooperative federalism” approach, not an FAA-only approach, to permitting drone operations in surface airspace.

So: courts have been clear about this, legal scholars have been clear about this, and now, finally, the FAA has been clear about this in the updated Manual: “Some state and local authorities may also restrict access to local airspace. UAS pilots should be aware of these local rules.” 

With that long-awaited clear statement in April 2023, the major stakeholders–including FAA, state aviation offices, the drone industry, and local officials–can begin the hard work of building world-class commercial drone operations nationwide while protecting the property and privacy expectations of residents.

]]>
https://techliberation.com/2023/05/07/finally-clearer-faa-guidance-on-state-and-local-airspace-restrictions/feed/ 0
My Latest Study on AI Governance https://techliberation.com/2023/04/20/my-latest-study-on-ai-governance/ https://techliberation.com/2023/04/20/my-latest-study-on-ai-governance/#respond Thu, 20 Apr 2023 18:25:29 +0000 https://techliberation.com/?p=77114

The R Street Institute has just released my latest study on AI governance and how to address “alignment” concerns in a bottom-up fashion. The 40-page report is entitled, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”

My report asks: Is it possible to address AI alignment without starting with the precautionary principle as the default governance baseline? I explain how it is indeed possible. While some critics claim that no one is seriously trying to deal with AI alignment today, my report explains how no technology in history has been more heavily scrutinized this early in its life cycle than AI, machine learning and robotics. The number of ethical frameworks out there already is astonishing. We don’t have too few alignment frameworks; we probably have too many!

We need to get serious about bringing some consistency to these efforts and figure out more concrete ways to build a culture of safety by embedding ethics-by-design. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society.

Although some safeguards will be needed to minimize certain AI risks, a more agile and iterative governance approach can address these concerns without creating overbearing, top-down mandates, which would hinder algorithmic innovations – especially at a time when America is looking to stay ahead of China and other nations in the global AI race.

My report explores the many ethical frameworks that professional associations have already formulated, as well as the various other “soft law” frameworks that have been devised. I also consider how AI auditing and algorithmic impact assessments can help formalize the twin objectives of “ethics-by-design” and keeping “humans in the loop,” the two principles that drive most AI governance frameworks. But it is absolutely essential that audits and impact assessments are done right, to ensure they do not become an overbearing, compliance-heavy and politicized nightmare that would undermine algorithmic entrepreneurialism and computational innovation.

Finally, my report reviews the extensive array of existing government agencies and policies that ALREADY govern artificial intelligence and robotics as well as the wide variety of court-based common law solutions that cover algorithmic innovations. The notion that America has no law or regulation covering artificial intelligence today is massively wrong, as my report explains in detail.

I hope you’ll take the time to check out my new report. This and my previous report on “Getting AI Innovation Culture Right” serve as the foundation of everything we have coming on AI and robotics from the R Street Institute. Next up will be a massive study on global AI “existential risks” and national security issues. Stay tuned. Much more to come!

In the meantime, you can find all my recent work here on my “Running List of My Research on AI, ML & Robotics Policy.”

______________

Additional Reading:

]]>
https://techliberation.com/2023/04/20/my-latest-study-on-ai-governance/feed/ 0
On “Pausing” AI https://techliberation.com/2023/04/07/on-pausing-ai/ https://techliberation.com/2023/04/07/on-pausing-ai/#respond Fri, 07 Apr 2023 17:36:05 +0000 https://techliberation.com/?p=77111

Recently, the Future of Life Institute released an open letter, signed by some computer science luminaries and others, calling for a 6-month “pause” on the deployment and research of “giant” artificial intelligence (AI) technologies. Eliezer Yudkowsky, a prominent AI ethicist, then made news by arguing that the “pause” letter did not go far enough; he proposed that governments consider “airstrikes” against data processing centers, or even be open to the use of nuclear weapons. This is, of course, quite insane. Yet this is the state of things today, as an AI technopanic seems to be growing faster than any technopanic I’ve covered in my 31 years in the field of tech policy—and I have covered a lot of them.

In a new joint essay co-authored with Brent Orrell of the American Enterprise Institute and Chris Messerole of Brookings, we argue that “the ‘pause’ we are most in need of is one on dystopian AI thinking.” The three of us recently served on a blue-ribbon Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, an independent effort assembled by the U.S. Chamber of Commerce. In our essay, we note how:

Many of these breakthroughs and applications will already take years to work their way through the traditional lifecycle of development, deployment, and adoption and can likely be managed through legal and regulatory systems that are already in place. Civil rights laws, consumer protection regulations, agency recall authority for defective products, and targeted sectoral regulations already govern algorithmic systems, creating enforcement avenues through our courts and by common law standards allowing for development of new regulatory tools that can be developed as actual, rather than anticipated, problems arise.

“Instead of freezing AI we should leverage the legal, regulatory, and informal tools at hand to manage existing and emerging risks while fashioning new tools to respond to new vulnerabilities,” we conclude. Also on the pause idea, it’s worth checking out this excellent essay from Bloomberg Opinion editors on why “An AI ‘Pause’ Would Be a Disaster for Innovation.”

The problem is not with the “pause” per se. Even if the signatories could somehow enforce a worldwide stop-work order, six months probably wouldn’t do much to halt advances in AI. If a brief and partial moratorium draws attention to the need to think seriously about AI safety, it’s hard to see much harm. Unfortunately, a pause seems likely to evolve into a more generalized opposition to progress.

The editors go on to rightly note:

This is a formula for outright stagnation. No one can ever be fully confident that a given technology or application will only have positive effects. The history of innovation is one of trial and error, risk and reward. One reason why the US leads the world in digital technology — why it’s home to virtually all the biggest tech platforms — is that it did not preemptively constrain the industry with well-meaning but dubious regulation. It’s no accident that all the leading AI efforts are American too.

That is 100% right, and I appreciate the Bloomberg editors linking to my latest study on AI governance when they made this point. In this new R Street Institute study, I explain why “Getting AI Innovation Culture Right” is essential to ensuring we can enjoy the many benefits that algorithmic systems offer while also staying competitive in the global race for advantage in this space.

That report is the first in a trilogy of big studies on decentralized, flexible governance of artificial intelligence. We can achieve AI safety without crushing top-down bans or unworkable “pauses,” I argue. My next two papers are, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence” (due out April 20th) and “Existential Risks & Global Governance Issues Surrounding AI & Robotics” (due out late May or early June). I’m also working on a co-authored essay taking a deep dive into the idea of AI impact assessments / auditing (late Spring / early Summer).

Relatedly, on April 7th, DeepLearningAI held an event on “Why a 6-Month AI Pause is a Bad Idea,” featuring leading AI scientists Andrew Ng and Yann LeCun discussing the trade-offs associated with the proposal. A crucial point made in the discussion is that a pause, especially a pause in the form of a governmental ban, would be a misguided innovation policy decision. They stressed that there will be policy interventions to address targeted risks from specific algorithmic applications, but that it would be a serious mistake to stop the overall development of the underlying technological capabilities. It’s worth watching.

For more on AI policy, here’s a list of some of my latest reports and essays. Much more to come. AI policy will be the biggest tech policy fight of our lifetimes.

]]>
https://techliberation.com/2023/04/07/on-pausing-ai/feed/ 0
What Policy Vision for Artificial Intelligence? https://techliberation.com/2023/04/02/what-policy-vision-for-artificial-intelligence/ https://techliberation.com/2023/04/02/what-policy-vision-for-artificial-intelligence/#respond Sun, 02 Apr 2023 21:32:49 +0000 https://techliberation.com/?p=77103

In my latest R Street Institute report, I discuss the importance of “Getting AI Innovation Culture Right.” This is the first of a trilogy of major reports on the policy vision and set of governance principles that should guide the development of artificial intelligence (AI), algorithmic systems, machine learning (ML), robotics, and computational science and engineering more generally. More specifically, these reports seek to answer the question: Can we achieve AI safety without innovation-crushing top-down mandates and massive new regulatory bureaucracies?

These questions are particularly pertinent, as we just made it through a week in which a major open letter was issued calling for a 6-month freeze on the deployment of AI technologies, while a prominent AI ethicist argued that governments should go further and consider airstrikes on data processing centers, even if an exchange of nuclear weapons needed to be considered! On top of that, Italy became the first major nation to ban ChatGPT, the popular AI-enabled chatbot created by U.S.-based OpenAI.

My report begins from a different presumption: AI, ML and algorithmic technologies present society with enormous benefits, and while real risks exist, we can find better ways of addressing them. As I summarize:

The danger exists that policy for algorithmic systems could be formulated in such a way that innovations are treated as guilty until proven innocent—i.e., a precautionary principle approach to policy—resulting in many important AI applications never getting off the drawing board. If regulatory impediments block or slow the creation of life-enriching, and even life-saving, AI innovations, that would leave society less well-off and give rise to different types of societal risks.

I argue that it is essential we not trap AI in an “innovation cage” by establishing the wrong policy default for algorithmic governance but instead work through challenges as they come at us. The right policy default for the internet and for AI continues to be “innovation allowed.” But AI risks do require serious governance steps. Luckily, many tools exist and others are being created. While my next major report (due out April 20th) offers far more detail, this paper sketches out some of those mechanisms. 

The goal of algorithmic policy should be for policymakers and innovators to work together to find flexible, iterative, agile, bottom-up governance solutions over time. We can promote a culture of responsibility among leading AI innovators and balance safety and innovation for complex, rapidly evolving computational and computing technologies like AI. This approach is buttressed by existing laws and regulations, as well as common law and the courts.

The new Biden Admin “AI Bill of Rights” unfortunately represents a fear-based model of technology policymaking that breaks from the superior Clinton framework for the internet & digital technology. Our nation’s policy toward AI, robotics & algorithmic innovation should instead embrace a dynamic future and the enormous possibilities that await us.

Please check out my new paper for more details. Much more to come. You can also check out my running list of research on AI, ML & robotics policy.

Spectrum of Technological Governance Options

]]>
https://techliberation.com/2023/04/02/what-policy-vision-for-artificial-intelligence/feed/ 0
Why Isn’t Everyone Already Unemployed Due to Automation? https://techliberation.com/2023/03/11/why-isnt-everyone-already-unemployed-due-to-automation/ https://techliberation.com/2023/03/11/why-isnt-everyone-already-unemployed-due-to-automation/#respond Sat, 11 Mar 2023 14:16:41 +0000 https://techliberation.com/?p=77099

I have a new R Street Institute policy study out this week doing a deep dive into the question: “Can We Predict the Jobs and Skills Needed for the AI Era?” There’s lots of hand-wringing going on today about AI and the future of employment, but that’s really nothing new. In fact, in light of past automation panics, we might want to step back and ask: Why isn’t everyone already unemployed due to technological innovation?

To get my answers, please read the paper! In the meantime, here’s the executive summary:

To better plan for the economy of the future, many academics and policymakers regularly attempt to forecast the jobs and worker skills that will be needed going forward. Driving these efforts are fears about how technological automation might disrupt workers, skills, professions, firms and entire industrial sectors. The continued growth of artificial intelligence (AI), robotics and other computational technologies exacerbate these anxieties.

Yet the limits of both our collective knowledge and our individual imaginations constrain well-intentioned efforts to plan for the workforce of the future. Past attempts to assist workers or industries have often failed for various reasons. However, dystopian predictions about mass technological unemployment persist, as do retraining or reskilling programs that typically fail to produce much of value for workers or society. As public efforts to assist or train workers move from general to more specific, the potential for policy missteps grows greater. While transitional-support mechanisms can help alleviate some of the pain associated with fast-moving technological disruption, the most important thing policymakers can do is clear away barriers to economic dynamism and new opportunities for workers.

I do discuss some things that government can do to address automation fears at the end of the paper, but it’s important that policymakers first understand all the mistakes we’ve made with past retraining and reskilling efforts. The easiest thing to do to help in the short-term is clear away barriers to labor mobility and economic dynamism, I argue. Again, read the study for details.

For more info on other AI policy developments, check out my running list of research on AI, ML & robotics policy.

]]>
https://techliberation.com/2023/03/11/why-isnt-everyone-already-unemployed-due-to-automation/feed/ 0
US Chamber AI Commission Launches https://techliberation.com/2023/03/11/us-chamber-ai-commission-launches/ https://techliberation.com/2023/03/11/us-chamber-ai-commission-launches/#respond Sat, 11 Mar 2023 13:54:14 +0000 https://techliberation.com/?p=77094

This week, the U.S. Chamber of Commerce Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation (AI Commission) released a major report on the policy considerations surrounding AI, machine learning (ML) and algorithmic systems. The 120-page report concluded that “AI technology offers great hope for increasing economic opportunity, boosting incomes, speeding life science research at reduced costs, and simplifying the lives of consumers.” It was my honor to serve as one of the commissioners on the AI Commission and contribute to the report.

Over at the R Street Institute blog, I offer a quick summary of the report’s major findings and recommendations and argue that, along with the recently released AI Risk Management Framework from the National Institute of Standards and Technology (NIST), the AI Commission report offers “a constructive, consensus-driven framework for algorithmic governance rooted in flexibility, collaboration and iterative policymaking. This represents the uniquely American approach to AI policy that avoids the more heavy-handed regulatory approaches seen in other countries and it can help the United States again be a global leader in an important new technological field,” I conclude. Check out the blog post and the full AI Commission report if you are following debates over algorithmic policy issues. There’s a lot of important material in there.

For more info on AI policy developments, check out my running list of research on AI, ML & robotics policy.

]]>
https://techliberation.com/2023/03/11/us-chamber-ai-commission-launches/feed/ 0
7 AI Policy Issues to Watch in 2023 and Beyond https://techliberation.com/2023/02/10/7-ai-policy-issues-to-watch-in-2023-and-beyond/ https://techliberation.com/2023/02/10/7-ai-policy-issues-to-watch-in-2023-and-beyond/#respond Fri, 10 Feb 2023 13:33:58 +0000 https://techliberation.com/?p=77088

In my latest R Street Institute blog post, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” I discuss the big issues confronting artificial intelligence and machine learning in the coming year and beyond. I note that AI regulatory proposals are multiplying fast and come in two general varieties: broad-based and targeted. Broad-based algorithmic regulation would address the use of these technologies holistically, across many sectors and concerns. By contrast, targeted algorithmic regulation looks to address specific AI applications or concerns. In the short term, targeted or “sectoral” regulatory proposals are more likely to be implemented.

I go on to identify seven major issues of concern that will drive these policy proposals. They include:

1) Privacy and Data Collection

2) Bias and Discrimination

3) Free Speech and Disinformation

4) Kids’ Safety

5) Physical Safety and Cybersecurity

6) Industrial Policy and Workforce Issues

7) National Security and Law Enforcement Issues

Of course, each of these issues includes many sub-issues and nuanced concerns. But I also noted that “this list only scratches the surface in terms of the universe of AI policy issues.” Algorithmic policy considerations are now being discussed in many other fields, including education, insurance, financial services, energy markets, intellectual property, retail and trade, and more. I’ll be rolling out a new series of essays examining all these issues throughout the year.

But, as I note in concluding my new essay, the danger of over-reach exists with early regulatory efforts:

AI risks deserve serious attention, but an equally serious risk exists that an avalanche of fear-driven regulatory proposals will suffocate different life-enriching algorithmic innovations. There is a compelling interest in ensuring that AI innovations are developed and made widely available to society. Policymakers should not assume that important algorithmic innovations will just magically come about; our nation must get its innovation culture right if we hope to create a better, more prosperous future.

America needs a flexible governance approach for algorithmic systems that avoids heavy-handed, top-down controls as a first-order solution. “There is no use worrying about the future if we cannot even invent it first,” I conclude.

Additional Reading

]]>
https://techliberation.com/2023/02/10/7-ai-policy-issues-to-watch-in-2023-and-beyond/feed/ 0
Studies Document Growing Cost of EU Privacy Regulations https://techliberation.com/2023/02/09/studies-document-growing-cost-of-eu-privacy-regulations/ https://techliberation.com/2023/02/09/studies-document-growing-cost-of-eu-privacy-regulations/#respond Thu, 09 Feb 2023 16:22:47 +0000 https://techliberation.com/?p=77086

[Originally published on Medium on 2/5/2022]

In an earlier essay, I explored “Why the Future of AI Will Not Be Invented in Europe” and argued that, “there is no doubt that European competitiveness is suffering today and that excessive regulation plays a fairly significant role in causing it.” This essay summarizes some of the major academic literature that leads to that conclusion.

Since the mid-1990s, the European Union has been layering on highly restrictive policies governing online data collection and use. The most significant of the E.U.’s recent mandates is the 2018 General Data Protection Regulation (GDPR). This regulation established even more stringent rules related to the protection of personal data and the movement thereof, and it limits what organizations can do with data. Data minimization is the major priority of this system, but there are many other types of restrictions and reporting requirements involved in the regulatory scheme. This policy framework also has ramifications for the future of next-generation technologies, especially artificial intelligence and machine learning systems, which rely on high-quality data sets to improve their efficacy.

Whether or not the E.U.’s complicated regulatory regime has actually resulted in truly meaningful privacy protections for European citizens relative to people in other countries remains open to debate. It is very difficult to measure and compare highly subjective values like privacy across countries and cultures. This makes benefit-cost analysis for privacy regulation extremely challenging — especially on the benefits side of the equation.

What is no longer up for debate, however, is the cost side of the equation and the question of what sort of consequences the GDPR has had on business formation, competition, investment, and so on. On these matters, standardized metrics exist and the economic evidence is abundantly clear: the GDPR has been a disaster for Europe.

Summary of Major Studies on Impact of EU Data Regulation

Consider the impact of E.U. data controls on business startups and market structure. The GDPR and other regulations greatly limit the flow of data to the innovative upstarts who need it most to compete, leaving the largest companies, which can afford to comply, in control of most of the market. Benjamin Mueller of ITIF notes that just “two of the world’s 30 largest technology firms by market capitalization are from the EU,” and only “5 of the 100 most promising AI startups are based in Europe,” while private funding of AI startups in Europe in 2020 ($4 billion) was dwarfed by that in the US ($36 billion) and China ($25 billion). These issues are even more pressing as the E.U. looks to advance a new AI Act, which would layer on still more regulatory restrictions.

In concrete terms, this has meant that the E.U. came away from the digital revolution with “the complete absence of superstar companies,” argue competition policy experts Nicolas Petit and David Teece. There are no European versions of Microsoft, Google, or Apple, even though Europeans clearly demand the sort of products and services those US-based companies provide. Entrepreneurialism scholar Zoltan Acs asks: “What has been the outcome of E.U. policy in limiting entrepreneurial activity over recent decades?” His conclusion:

It is immediately clear… that the United States and China dominate the platform landscape. Based on the market value of top companies, the United States alone represents 66% of the world’s platform economy with 41 of the top 100 companies. European platform-based companies play a marginal role, with only 3% of market value.

Several recent studies have documented the costs associated with the GDPR and the E.U.’s heavy-handed approach to data flows more generally. Here is a rundown of some of the academic evidence and a summary of the major findings from these studies.

“There is a growing body of economic literature and commentary showing that the costs of implementing the GDPR benefit large online platforms, and that consent-based data collection gives a competitive advantage to firms offering a range of consumer-facing products compared to smaller market actors. This in turn increases concentration in a number of digital markets where access to data is important, by creating barriers to entry or encouraging market exit.” (p. 2–3)

“this paper examines how privacy regulation shaped firm performance in a large sample of companies across 61 countries and 34 industries. Controlling for firm and country-industry-year unobserved characteristics, we compare the outcomes of firms at different levels of exposure to EU markets, before and after the enforcement of the GDPR in 2018. We find that enhanced data protection had the unintended consequence of reducing the financial performance of companies targeting European consumers. Across our full sample, firms exposed to the regulation experienced a 8% decline in profits, and a 2% reduction in sales. An exception is large technology companies, which were relatively unaffected by the regulation on both performance measures. Meanwhile, we find the negative impact on profits among small technology companies to be almost double the average effect across our full sample. Following several robustness tests and placebo regressions, we conclude that the GDPR has had significant negative impacts on firm performance in general, and on small companies in particular.” (p. 1)

“We show that websites’ vendor use falls after the European Union’s General Data Protection Regulation (GDPR), but that market concentration also increases among technology vendors that provide support services to websites. We collect panel data on the web technology vendors selected by more than 27,000 top websites internationally. The week after the GDPR’s enforcement, website use of web technology vendors falls by 15% for EU residents. Websites are more likely to drop smaller vendors, which increases the relative concentration of the vendor market by 17%. Increased concentration predominantly arises among vendors that use personal data such as cookies, and from the increased relative shares of Facebook and Google-owned vendors, but not from website consent requests. Though the aggregate changes in vendor use and vendor concentration dissipate by the end of 2018, we find that the GDPR impact persists in the advertising vendor category most scrutinized by regulators. Our findings shed light on potential explanations for the sudden drop and subsequent rebound in vendor usage.” (p. 1)

“GDPR creates inherent tradeoffs between data protection and other dimensions of welfare, including competition and innovation. While some of these effects were acknowledged when constructing the legal data regime, many were disregarded. Furthermore, the magnitude and breadth of such effects may well constitute an unintended and unheeded welfare-reducing consequence. As this article shows, the GDPR limits competition and increases concentration in data and data-related markets, and potentially strengthens large data controllers. It also further reinforces the already existing barriers to data sharing in the EU, thereby potentially reducing data synergies that might result from combining different datasets controlled by separate entities.” (pp. 3–4)

“Using data on 4.1 million apps at the Google Play Store from 2016 to 2019, we document that GDPR induced the exit of about a third of available apps; and in the quarters following implementation, entry of new apps fell by half. We estimate a structural model of demand and entry in the app market. Comparing long-run equilibria with and without GDPR, we find that GDPR reduces consumer surplus and aggregate app usage by about a third. Whatever the privacy benefits of GDPR, they come at substantial costs in foregone innovation.”

“this paper empirically quantifies the effects of the enforcement of the EU’s General Data Protection Regulation (GDPR) on online user behavior over time, analyzing data from 6,286 websites spanning 24 industries during the 10 months before and 18 months after the GDPR’s enforcement in 2018. A panel differences estimator, with a synthetic control group approach, isolates the short- and long-term effects of the GDPR on user behavior. The results show that, on average, the GDPR’s effects on user quantity and usage intensity are negative; e.g., the numbers of total visits to a website decrease by 4.9% and 10% due to GDPR in respectively the short- and long-term. These effects could translate into average revenue losses of $7 million for e-commerce websites and almost $2.5 million for ad-based websites 18 months after GDPR. The GDPR’s effects vary across websites, with some industries even benefiting from it; moreover, more-popular websites suffer less, suggesting that the GDPR increased market concentration.”

“This paper investigates the impact of the General Data Protection Regulation (GDPR for short) on consumers’ online browsing and search behavior using consumer panels from four countries, United Kingdom, Spain, United States, and Brazil. We find that after GDPR, a panelist exposed to GDPR submits 21.6% more search terms to access information and browses 16.3% more pages to access consumer goods and services compared to a non-exposed panelist, indicating higher friction in online search. The implications of increased friction are heterogeneous across firms: Bigger e-commerce firms see an increase in consumer traffic and more online transactions. The increase in the number of transactions at large websites is about 6 times the increase experienced by smaller firms. Overall, the post-GDPR online environment may be less competitive for online retailers and may be more difficult for EU consumers to navigate through.”

“Privacy regulations should increase trust because they provide laws that increase transparency and allow for punishment in cases in which the trustee violates trust. […] We collected survey panel data in Germany around the implementation date and ran a survey experiment with a GDPR information treatment. Our observational and experimental evidence does not support the hypothesis that the GDPR has positively affected trust. This finding and our discussion of the underlying reasons are relevant for the wider research field of trust, privacy, and big data.”

“We follow more than 110,000 websites and their third-party HTTP requests for 12 months before and 6 months after the GDPR became effective and show that websites substantially reduced their interactions with web technology providers. Importantly, this also holds for websites not legally bound by the GDPR. These changes are especially pronounced among less popular websites and regarding the collection of personal data. We document an increase in market concentration in web technology services after the introduction of the GDPR: Although all firms suffer losses, the largest vendor — Google — loses relatively less and significantly increases market share in important markets such as advertising and analytics. Our findings contribute to the discussion on how regulating privacy, artificial intelligence and other areas of data governance relate to data minimization, regulatory competition, and market structure.”

William Rinehart of the Center for Growth and Opportunity has compiled and summarized many additional studies that document the costs associated with restrictions on data, including many state privacy laws imposed in the United States.

“The Biggest Loser”: Innovation Culture Gone Wrong

Taken together, this evidence makes it clear that “well-meaning privacy laws can have the unintended consequence of penalizing smaller companies within technology markets.” Such laws can also have broader geopolitical ramifications for continental competitive advantage and engagement between countries. Some have argued that the United Kingdom’s so-called “Brexit” from the EU can be viewed as not only an effort to reclaim its sovereignty but more specifically “to escape its crippling regulatory structure.” The E.U.’s approach to emerging technology regulation likely had some bearing on this. Zoltan Acs argues that Britain’s move was logical, “because E.U. regulations were holding back the U.K.’s strong DPE (digital platform economy).” “If the United Kingdom was to realize its economic potential,” he says, “it had to extricate itself from the European Union,” due to the growing “dysfunctional E.U. bureaucracy.”

Can Europe turn things around? Most market watchers do not believe that the E.U. will be willing to change its regulatory course in such a way that the continent would suddenly become more open to data-driven innovation. As part of a Spring 2022 journal symposium, The International Economy asked 11 experts from Europe and the U.S. to consider where the European Union currently stood in “the global tech race.” The responses were nearly unanimous and bluntly summarized in the symposium’s title: “The Biggest Loser.” Several respondents observed how “Europe is considered to be lagging behind in the global tech race,” and “is unlikely to become a global hub of innovation.” “The future will not be invented in Europe,” another respondent concluded. Europe’s risk-averse culture and preference for meticulously detailed and highly precautionary regulatory regimes were repeatedly cited as factors.

Europe has become the biggest loser on the digital technology front not because of its people but because of its policies. Europe is home to some of the most important advanced education and engineering programs in the world, and countless brilliant minds there could be building world-leading digital technology companies to rival those of the U.S., China, and the rest of the world. But Europe’s current “innovation culture” simply will not allow it.

Innovation culture refers to “the various social and political attitudes and pronouncements towards innovation, technology, and entrepreneurial activities that, taken together, influence the innovative capacity of a culture or nation.” A positive innovation culture depends upon a dynamic, open economy that encourages new entry, entrepreneurialism, continuous investment, and the free movement of goods, ideas, and talent.

At this point in time, it is clear that — at least for data-driven sectors — the E.U. has created the equivalent of an anti-innovation culture, and the GDPR has clearly played a major role in that outcome. This regulatory regime has also had devastating consequences for venture capital formation and investment more generally in Europe. “Public policy and attitudes explain the relative technological decline and lack of economic dynamism,” Petit and Teece argue, and it has resulted in “weak venture capital markets, fragmented research capabilities, low worker mobility and frustrated entrepreneurs.”

Industrial Policy Won’t Save Europe

While the E.U. is aggressively regulating data-driven sectors, it is simultaneously trying to use industrial policy programs to advance new technological capabilities and innovations. European policymakers would obviously like to avoid a repeat of the past quarter century and the lack of digital technology competition and innovation they witnessed.

But past European industrial policy efforts on the digital technology front have largely failed, as Connor Haaland and I documented earlier. Zoltan Acs notes that, despite many state efforts to promote digital innovation across the continent in recent decades, the E.U.’s regulatory policies have resulted in the opposite. “The European Union protected traditional industries and hoped that existing firms would introduce new technologies. This was a policy designed to fail,” he argues. A major recent book, Questioning the Entrepreneurial State: Status-quo, Pitfalls, and the Need for Credible Innovation Policy (Springer, 2022), offers additional evidence of the failure of European industrial policy efforts. No amount of industrial policy planning and spending is going to be able to overcome a negative innovation culture that suffocates entrepreneurialism and investment from the start.

These findings have lessons for policymakers in the United States, too, especially with President Biden and even many Republicans now calling for heavy-handed, top-down regulation of digital technology companies. Basically, “President Biden Wants America to Become Europe on Tech Regulation,” I argued in a recent R Street Institute blog post. In a letter to the Wall Street Journal, I responded to recent opeds by President Biden and former Trump Administration Attorney General William Barr in which they both advocated regulations that would take us down the disastrous path that the European Union has already charted.

“The only thing Europe exports now on the digital-technology front is regulation,” I noted in my response, and that makes it all the more mind-boggling that Biden and Barr want to go down that same path. “Overregulation by EU bureaucrats led Europe’s best entrepreneurs and investors to flee to the U.S. or elsewhere in search of the freedom to innovate.” This is the wrong innovation culture for the United States if we hope to be a leader in the Computational Revolution that is unfolding — and match expanding efforts by the Chinese to top us at it.

In closing, policymakers should never lose sight of the most fundamental lesson of innovation policy, which can be stated quite simply: You only get as much innovation as you allow to begin with. If the public policy defaults are all set to be maximally restrictive and limit entrepreneurialism and experimentation by design, then it should be no surprise when the country or continent fails to generate meaningful innovation, investment, new companies, and global competitive advantage. The European model is no model for America.

Additional reading:

Quick Thoughts on Biden’s Tech-Bashing in the State of the Union (Wed, 08 Feb 2023 03:43:49 +0000) https://techliberation.com/2023/02/07/quick-thoughts-on-bidens-tech-bashing-in-the-state-of-the-union/
  • President Biden began his 2023 State of the Union remarks by saying America is defined by possibilities. Correct! Unfortunately, his tech-bashing will undermine those possibilities by discouraging technological innovation & online freedom in the United States.
  • America became THE global leader on digital tech because we rejected heavy-handed controls on innovators & speech. We shouldn’t return to the broken model of the past by layering on red tape, economic controls & speech restrictions.
  • What has the tech economy done for us lately? Here is a look at the value added to the U.S. economy by the digital sector from 2005-2021. That’s $2.4 TRILLION (with a T) added in 2021. These are astonishing numbers.
  • FACT: According to the BEA, in 2021, “the U.S. digital economy accounted for $3.70 trillion of gross output, $2.41 trillion of value added (translating to 10.3 % of U.S. GDP), $1.24 trillion of compensation + 8.0 million jobs.”

In 2021…

  • $3.70 trillion of gross output
  • $2.41 trillion of value added (=10.3% of GDP)
  • $1.24 trillion of compensation
  • 8.0 million jobs

FACT: Globally, 49 of the top 100 digital tech firms with the most employees are US companies. Here they are. Smart public policy made this list possible.

  • FACT: 18 of the world’s Top 25 tech companies by Market Cap are US-based firms.
  • It’d be a huge mistake to adopt Europe’s approach to tech regulation. As I noted recently in the Wall Street Journal, “The only thing Europe exports now on the digital-technology front is regulation.”  Yet, Biden would have us import the EU model to our shores.
  • My R Street colleague Josh Withrow has also noted how, “the EU’s approach appears to be, in sum, ‘If you can’t innovate, regulate.’” America should not be following the disastrous regulatory path of the European Union on digital technology policy.
  • On antitrust regulation, here is a study by my R Street colleague Wayne Brough on the dangerous approach that the Biden administration wants, which would swing a wrecking ball through the tech economy. We have to avoid this.
  • It is particularly important that the US not follow the EU’s lead on artificial intelligence regulation at a time when we are in heated competition with China on the AI front, as I noted here.
  • American tech innovators flourished thanks to a positive innovation culture rooted in permissionless innovation & policies like Section 230, which allowed American firms to become global powerhouses. And we’ve moved from a world of information scarcity to one of information abundance. Let’s keep it that way.
Self-Inflicted Technological Suicide (Fri, 27 Jan 2023 00:26:11 +0000) https://techliberation.com/2023/01/26/self-inflicted-technological-suicide/

The Wall Street Journal has run my response to troubling recent opeds by President Biden (“Republicans and Democrats, Unite Against Big Tech Abuses“) and former Trump Administration Attorney General William Barr (“Congress Must Halt Big Tech’s Power Grab“) in which they both called for European-style regulation of U.S. digital technology markets.

“The only thing Europe exports now on the digital-technology front is regulation,” I noted in my response, and that makes it all the more mind-boggling that Biden and Barr want to go down that same path. Just consider the results of “the EU’s big-government regulatory crusade against digital tech: Stagnant markets, limited innovation and a dearth of major players. Overregulation by EU bureaucrats led Europe’s best entrepreneurs and investors to flee to the U.S. or elsewhere in search of the freedom to innovate.”

Thus, the Biden and Barr plans for importing European-style tech mandates “would be a stake through the heart of the ‘permissionless innovation’ that made America’s info-tech economy a global powerhouse.” In a longer response to the Biden oped that I published on the R Street blog, I note that:

“It is remarkable to think that after years of everyone complaining about the lack of bipartisanship in Washington, we might get the one type of bipartisanship America absolutely does not need: the single most destructive technological suicide in U.S. history, with mandates being substituted for markets, and permission slips for entrepreneurial freedom.”

What makes all this even more remarkable is that these calls for hyper-regulation come at a time when China is challenging America’s dominance in technology and AI. Thus, “new mandates could compromise America’s lead,” I conclude. “Shackling our tech sectors with regulatory chains will hobble our nation’s ability to meet global competition and undermine innovation and consumer choice domestically.”

Jump over to the WSJ to read my entire response (“EU-Style Regulation Begets EU-Style Stagnation“) and to the R Street blog for my longer essay (“President Biden Wants America to Become Europe on Tech Regulation“).

AI Policy Research: My Year in Review (Mon, 26 Dec 2022 20:07:40 +0000) https://techliberation.com/2022/12/26/ai-policy-research-my-year-in-review/

I spent much of 2022 writing about the growing policy debate over artificial intelligence, machine learning, robotics, and the Computational Revolution more generally. Here are some of the major highlights of my work on this front.

All these essays, plus dozens more, can be found in my “Running List of My Research on AI, ML & Robotics Policy.” I have several lengthy studies and many shorter essays coming in the first half of 2023.

Finally, here is a Federalist Society podcast discussion about AI policy hosted by Jennifer Huddleston in which Hodan Omaar of ITIF and I offer a big picture overview of where things are headed next.

Revisionist Histories of America’s Digital Revolution (Sun, 11 Dec 2022 16:15:09 +0000) https://techliberation.com/2022/12/11/revisionist-histories-of-americas-digital-revolution/

Everywhere you look in tech policy land these days, people decry China as a threat to America’s technological supremacy or our national security. Many of these claims are well-founded, while others are somewhat overblown. Regardless, as I argue in a new piece for National Review this week, “America Won’t Beat China by Becoming China.” Many pundits and policymakers seem to think that only a massive dose of central planning and Big Government technocratic bureaucracy can counter the Chinese threat. It’s a recipe for a great deal of policy mischief.

Some of these advocates for a ‘let’s-be-more-like-China’ approach to tech policy also engage in revisionist histories about America’s recent success stories in the personal computing revolution and internet revolution. As I note in my essay, “[t]he revisionists instead prefer to believe that someone high up in government was carefully guiding this decentralized innovation. In the new telling of this story, deregulation had almost nothing to do with it.” In fact, I was asked by National Review to write this piece in response to a recent essay by Wells King of American Compass, who has penned some rather remarkable revisionist tales of government basically being responsible for all the innovation in digital tech sectors over the past quarter century. Markets and venture capital had nothing to do with it, by his reasoning. It’s what science writer Matt Ridley correctly labels “innovation creationism,” or the notion that it basically takes a village to raise an innovator.

Perhaps the best example of this sort of twisted logic was President Barack Obama’s infamous 2012 “you didn’t build that” speech, which was widely mocked by many conservatives at the time as being completely off the mark. The conservative critics rightly lambasted Obama for underplaying the role of markets, entrepreneurs, and private investors as the primary engine of America’s remarkably innovative economy. Unfortunately, however, many of today’s “national conservatives” are borrowing Obama’s twisted revisionist vision and, worse yet, fabricating entirely new nonsensical ‘it-takes-a-village’ narratives that go well beyond it.

In my essay, I explain why innovation creationism about the internet and the Digital Revolution gets the story of the past quarter century horribly wrong. The tech revisionists misidentify and overplay the role government played in this arena, and they also ignore the many mistakes our government and other governments (especially in Europe) have made when trying to technocratically plan tech systems. As I conclude,

America’s world-leading digital-technology companies and technologies were not the product of intentional design or bureaucratic initiatives. Corporatism and central planning should be rejected as the basis for U.S. technology policy. And regardless of whether they happen to be trendy right now, economically illiterate arguments like King’s should be relegated to the ash heap of history.

Jump over to National Review to read the entire essay.  And here’s a list of some of my other recent writing on industrial policy:

Gonzalez v Google, Section 230 & the Future of Permissionless Innovation (Fri, 09 Dec 2022 13:15:15 +0000) https://techliberation.com/2022/12/09/gonzalez-v-google-section-230-the-future-of-permissionless-innovation/

Over at Discourse magazine this week, my R Street colleague Jonathan Cannon and I have posted a new essay on how it has been “Quite a Fall for Digital Tech.” We mean that both in the sense that the last few months have witnessed serious market turmoil for some of America’s leading tech companies, but also that the political situation for digital tech more generally has become perilous. Plenty of people on the Left and the Right now want a pound of flesh from the info-tech sector, and the starting cut at the body involves Section 230, the 1996 law that shields digital platforms from liability for content posted by third parties.

With the Supreme Court recently announcing it will hear Gonzalez v. Google, a case that could significantly narrow the scope of Section 230, the stakes have grown higher. It was already the case that federal and state lawmakers were looking to chip away at Sec. 230’s protections through an endless variety of regulatory measures. But if the Court guts Sec. 230 in Gonzalez, then it will really be open season on tech companies, as lawsuits will fly at every juncture whenever someone does not like a particular content moderation decision. Cannon and I note in our new essay that,

if the court moves to weaken liability protections for digital platforms, the ramifications will be profoundly negative. While many critics today complain that the law’s liability protections have been too generous, the reality is that Section 230 has been the legal linchpin supporting the permissionless innovation model that fueled America’s commanding lead in the digital information revolution. Thanks to the law, digital entrepreneurs have been free to launch bold new ideas without fear of punishing lawsuits or regulatory shenanigans. This has boosted economic growth and dramatically broadened consumer information and communications options.

Many critics of Sec. 230 claim that reforms are needed to “rein in Big Tech.” But, ironically, gutting Sec. 230 would probably only make big tech companies even bigger because the smaller players in the market would struggle to deal with the mountains of regulations and lawsuits that would come about in its absence. Cannon and I continue on to explore what it means for the next generation of online innovators if these court cases go badly and Section 230 is scaled back or gutted:

Section 230 has been a legal cornerstone of the entire ecosystem. All the large-scale platforms we depend on for our online experience would never have gotten off the ground without its protection. […] More importantly, these platforms have relied on being able to host third-party content without fear of opening a Pandora’s box of private litigation and endless challenges from governments. By removing these protections, platforms will be forced to significantly increase their moderation practices to reduce risk of suits from zealous litigants. Besides the chilling effect this will have on speech, it also will put up a cost-prohibitive barrier for smaller entrants who lack the resources to have an army of content moderators to find and eliminate undesirable content.

The broader effect on market dynamism and the nation’s technological competitiveness will be profound as permissionless innovation is replaced by mountains of top-down permission slips. “If America’s digital sector gets kneecapped by the Supreme Court, or if new regulations or legislative proposals scale back Section 230 protections, it will be significantly more difficult for U.S. firms to continue to lead in the development and commercialization of new technologies,” we conclude.

Jump over to Discourse to read the entire piece.

Sunsets & Sandboxes Can Help Slay ‘Zombie Government’ (Thu, 08 Dec 2022 16:18:40 +0000) https://techliberation.com/2022/12/08/sunsets-sandboxes-can-help-slay-zombie-government/

I have a new oped in the Orange County Register discussing reforms that can help address the growing problem of “zombie government,” or old government policies and programs that just seem to never die even though they have long outlived their usefulness. While there is no single solution to this sort of “set-it-and-forget-it” approach to government that locks in old policies and programs, I note that:

sunsets and sandboxes are two policy innovations that can help liberate California from old and cumbersome government regulations and rules. Sunsets pause or end rules or programs regularly to ensure they don’t grow stale. Sandboxes are policy experiments that allow for the temporary relaxation of regulations to see what approaches might work better.

When California, other states, and the federal government fail to do occasional spring cleanings of unneeded old rules and programs, the result is chronic regulatory accumulation that has real costs and consequences for the efficient operation of markets and important government programs.

Jump over to the OCR site to read the entire oped.

Video: Censorship is a Big Government Problem, Not a Big Tech Problem (Wed, 07 Dec 2022 00:54:38 +0000) https://techliberation.com/2022/12/06/video-censorship-is-a-big-government-problem-not-a-big-tech-problem/

My colleague Wayne Brough and I recently went on the “Kibbe on Liberty” show to discuss the state of free speech on the internet. We explained how censorship is a Big Government problem, not a Big Tech problem. Here’s the complete description of the show; the link to the full episode is below.

“With Elon Musk’s purchase of Twitter, we are in the middle of a national debate about the tension between censorship and free expression online. On the Right, many people are calling for government to rein in what they perceive as the excesses of Big Tech companies, while the Left wants the government to crack down on speech they deem dangerous. Both approaches make the same mistake of giving politicians authority over what we are allowed to say and hear. And with recent revelations about government agents leaning on social media companies to censor speech, it’s clear that when it comes to the online conversation, there’s no such thing as a purely private company.”

For more on these issues, please see: “The Classical Liberal Approach to Digital Media Free Speech Issues.”

Tech Regulation Will Increasingly Be Driven Through the Prism of “Algorithmic Fairness” (Sun, 06 Nov 2022 18:51:21 +0000) https://techliberation.com/2022/11/06/tech-regulation-will-increasingly-be-driven-through-the-prism-of-algorithmic-fairness/

We are entering a new era for technology policy in which many pundits and policymakers will use “algorithmic fairness” as a universal Get Out of Jail Free card when they push for new regulations on digital speech and innovation. Proposals to regulate things like “online safety,” “hate speech,” “disinformation,” and “bias” often raise thorny definitional questions because of their highly subjective nature. In the United States, efforts by government to control these things will often trigger judicial scrutiny, too, because restraints on speech violate the First Amendment. Proponents of prior restraint or even ex post punishments understand this reality and want to get around it. Thus, in an effort to avoid constitutional scrutiny and lengthy court battles, they are engaged in a rebranding effort and seeking to push their regulatory agendas through a techno-panicky prism of “algorithmic fairness” or “algorithmic justice.”

Hey, who could possibly be against FAIRNESS and JUSTICE? Of course, the devil is always in the details, as Neil Chilson and I discuss in our new paper for The Federalist Society and Regulatory Transparency Project, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations.” We document how federal and state policymakers from both parties are currently considering a variety of new mandates for artificial intelligence (AI), machine learning, and automated systems that, if imposed, “would thunder through our economy with one of the most significant expansions of economic and social regulation – and the power of the administrative state – in recent history.”

We note how, at the federal level, bills are being floated with titles like the “Algorithmic Justice and Online Platform Transparency Act” and the “Protecting Americans from Dangerous Algorithms Act,” which would introduce far-reaching regulations requiring AI innovators to reveal more about how their algorithms work, or even holding them liable if their algorithms are thought to be amplifying hateful or extremist content. Other proposed measures like the “Platform Accountability and Consumer Transparency Act” and the “Online Consumer Protection Act” would demand greater algorithmic transparency as it relates to social media content moderation policies and procedures. Finally, measures like the “Kids Online Safety Act” would require audits of algorithmic recommendation systems that have supposedly targeted or harmed children. Algorithmic regulation is also creeping into proposed privacy regulations, such as the “American Data Protection and Privacy Act of 2022.”

And then there are all the state laws–many of which have been pushed by conservatives–that would mandate “algorithmic transparency” for social media content moderation in the name of countering supposed viewpoint bias. Bills in Florida and Texas take this approach. Meanwhile, conservatives in Congress like Senator Josh Hawley (R-MO) push for bills like the “Ending Support for Internet Censorship Act,” which would require large tech companies to undergo external audits proving that their algorithms and content-moderation techniques are politically unbiased. It’s an open invitation to regulators and trial lawyers to massively regulate technology and speech under the guise of “algorithmic fairness.” Countless left-leaning law professors and European officials have already proposed a comprehensive algorithmic audit apparatus to regulate innovators in every sector.

It’s the rise of the Code Cops. If we continue down this path, it ends with a complete rejection of the permissionless innovation ethos that made America’s information technology sector a global powerhouse. Instead, we’ll be stuck with the very worst type of “Mother, May I” precautionary-principle-based regulatory regime, one that imposes the equivalent of occupational licensing requirements on coders.

If code is speech, algorithms are as well. Defenders of innovation freedom need to step up and prepare for the fight to come. [See my earlier essay, “AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead.”] Chilson and I outline the broad contours of the battle for freedom of speech and the freedom to innovate that is brewing. It will be the most important technology policy issue of the next ten years. I hope you take the time to read our new essay and understand why. And below you will find a few dozen more essays on the same topic if you’d like to dig even deeper.

Additional Reading:


We Need to Get All the Smart People in a Room & Have a Conversation https://techliberation.com/2022/10/16/we-need-to-get-all-the-smart-people-in-a-room-have-a-conversation/ https://techliberation.com/2022/10/16/we-need-to-get-all-the-smart-people-in-a-room-have-a-conversation/#respond Sun, 16 Oct 2022 12:51:13 +0000 https://techliberation.com/?p=77052

For my latest Discourse column (“We Really Need To ‘Have a Conversation’ About AI … or Do We?”), I discuss two commonly heard lines in tech policy circles.

1) “We need to have conversation about the future of AI and the risks that it poses.”

2) “We should get a bunch of smart people in a room and figure this out.”

I note that, if you’ve read enough essays, books or social media posts about artificial intelligence (AI) and robotics—among other emerging technologies—then chances are you’ve stumbled on variants of these two arguments many times over. They are almost impossible to disagree with in theory, but when you start to investigate what they actually mean in practice, they are revealed to be largely meaningless rhetorical flourishes that threaten to hold up meaningful progress on the AI front. I continue on to argue in my essay that:

I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with: (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and, (b) the fact that people who are making such statements apparently have not spent much time investigating the remarkable number of very serious conversations that have already taken place, or which are ongoing, about AI issues.

In fact, it very well could be the case that we have too many conversations going on currently about AI issues and that the bigger problem is instead one of better coordinating the important lessons and best practices that we have already learned from those conversations.

I then unpack each of those lines and explain what is wrong with them in more detail. One thing that always bugs me about the “we need to have a conversation” aphorism is that those uttering it absolutely refuse to be nailed down on the specifics, like:

  1. What is the nature or goal of that conversation?
  2. Who is the “we” in this conversation?
  3. How is this conversation to be organized and managed?
  4. How do we know when the conversation is going on, or when it is sufficiently complete such that we can get on with things?
  5. And, most importantly, aren’t you implicitly suggesting that we should ban or limit the use of that technology until you (or the royal “we”) are somehow satisfied that the conversation is over or has yielded satisfactory answers?

The other commonly heard line — “We need to get a bunch of smart people in a room and figure this out” — can be equally infuriating, due both to a lack of specifics (which people? what room? where and when? etc.) and to the fact that tons of the Very Smartest People on these issues have already been meeting in countless rooms across the globe for many years. In an earlier essay, I documented the astonishing growth of AI governance frameworks, ethical best practices and professional codes of conduct: “The amount of interest surrounding AI ethics and safety dwarfs all other fields and issues. I sincerely doubt that ever in human history has so much attention been devoted to any technology as early in its lifecycle as AI.”

I also note that, practically speaking, “the most important conversations society has about new technologies are those we have every day when we all interact with those new technologies and with one another. Wisdom is born from experiences, including activities and interactions involving risk and the possibility of mistakes. This is how progress happens.” And I conclude by noting how:

We won’t ever be able to “have a conversation” about a new technology that yields satisfactory answers for some critics precisely because the questions just multiply and evolve endlessly over time, and they can only be answered through ongoing societal interactions and problem-solving. But we shouldn’t stop life-enriching innovations from happening just because we don’t have all the answers beforehand.

Anyway, I invite you to head over to Discourse and read the entire essay. In the meantime, I propose we get all the smart people in a room and have a conversation about how these two lines came to dominate tech policy discussions before they end up doing real damage to human prosperity! It’s the ethical thing to do if you really care about the future.
