Miscellaneous

It was my pleasure to participate in this Cato Institute event today on “Who’s Leading on AI Policy?
Examining EU and U.S. Policy Proposals and the Future of AI.” Cato’s Jennifer Huddleston hosted, and Boniface de Champris, Policy Manager with the Computer and Communications Industry Association, also participated. Here’s a brief outline of some of the issues we discussed:

  • What are the 7 leading concerns driving AI policy today?
  • What is the difference between horizontal vs. vertical AI regulation?
  • Which agencies are moving currently to extend their reach and regulate AI tech?
  • What’s going on at the state, local, and municipal level in the US on AI policy?
  • How will the so-called “Brussels Effect” influence the course of AI policy in the US?
  • What have the results been of the EU’s experience with the GDPR?
  • How will the EU AI Act work in practice?
  • Can we make algorithmic systems perfectly transparent / “explainable”?
  • Should AI innovators be treated as ‘guilty until proven innocent’ of certain risks?
  • How will existing legal concepts and standards (like civil rights law and unfair and deceptive practices regulation) be applied to algorithmic technologies?
  • Do we have a fear-based model of AI governance currently? What role has science fiction played in fueling that?
  • What role will open source AI play going forward?
  • Is AI licensing a good idea? How would it even work?
  • Can AI help us identify and address societal bias and discrimination?

Again, you can watch the entire video here and, as always, here’s my “Running List of My Research on AI, ML & Robotics Policy.”

Here’s the video from a June 6th event on, “Does the US Need a New AI Regulator?” which was co-hosted by the Center for Data Innovation and the R Street Institute. We discuss algorithmic audits, AI licensing, an “FDA for algorithms” and other possible regulatory approaches, as well as various “soft law” self-regulatory efforts and targeted agency efforts. The event was hosted by Daniel Castro and included Lee Tiedrich, Shane Tews, Ben Shneiderman and me.

Continue reading →

I stumbled across a surprising drone policy update in the FAA’s Aeronautical Information Manual (Manual) last week. The Manual contains official guidance and best practices for US airspace users. (My friend Marc Scribner reminds me that the Manual is not formally regulatory, though it often restates or summarizes regulations.) The Manual has an (apparently) new section: “Airspace Access for UAS.” In subsection “Airspace Restrictions To Flight” (11-4-6) it notes:

There can be certain local restrictions to airspace. While the FAA is designated by federal law to be the regulator of the NAS [national airspace system], some state and local authorities may also restrict access to local airspace. UAS pilots should be aware of these local rules.

Legally speaking, the FAA is recognizing there is no “field preemption” when it comes to low-altitude airspace restrictions. When I shared this provision with aviation and drone experts, each agreed it was new and surprising policy guidance. The drone provisions appear to have been part of updates made on April 20, 2023. In my view, it’s very welcome guidance.

Some background: In 2015, the FAA released a helpful “fact sheet” for state and local officials about drone regulations, as state legislatures began regulating drone operations in earnest. The fact sheet identified several drone-related areas, including aviation safety, where federal aviation rules are extensive, and noted:

Laws traditionally related to state and local police power – including land use, zoning, privacy, trespass, and law enforcement operations – generally are not subject to federal regulation.

To ensure state and federal drone laws were not in conflict, the FAA recommended that state and local officials consult with the FAA before creating “operational UAS restrictions on flight altitude, flight paths; operational bans; any regulation of the navigable airspace.”

That guidance is still current and still useful. Around 2017, however, it seems some within the FAA began publicly and privately taking a rather harder line regarding state and local rules about drone operations. For instance, in July 2018, someone at the FAA posted a confusing and brief new statement on the FAA website about state and local drone rules that is hard to reconcile with the 2015 guidance. Continue reading →

[Originally published on Medium on 2/5/2022]

In an earlier essay, I explored “Why the Future of AI Will Not Be Invented in Europe” and argued that, “there is no doubt that European competitiveness is suffering today and that excessive regulation plays a fairly significant role in causing it.” This essay summarizes some of the major academic literature that leads to that conclusion.

Since the mid-1990s, the European Union has been layering on highly restrictive policies governing online data collection and use. The most significant of the E.U.’s recent mandates is the 2018 General Data Protection Regulation (GDPR). This regulation established even more stringent rules governing the protection and movement of personal data and limited what organizations can do with it. Data minimization is the major priority of this system, but there are many different types of restrictions and reporting requirements involved in the regulatory scheme. This policy framework also has ramifications for the future of next-generation technologies, especially artificial intelligence and machine learning systems, which rely on high-quality data sets to improve their efficacy.

Whether or not the E.U.’s complicated regulatory regime has actually resulted in truly meaningful privacy protections for European citizens relative to people in other countries remains open to debate. It is very difficult to measure and compare highly subjective values like privacy across countries and cultures. This makes benefit-cost analysis for privacy regulation extremely challenging — especially on the benefits side of the equation.

What is no longer up for debate, however, is the cost side of the equation and the question of what sort of consequences the GDPR has had on business formation, competition, investment, and so on. On these matters, standardized metrics exist and the economic evidence is abundantly clear: the GDPR has been a disaster for Europe. Continue reading →

The Wall Street Journal has run my response to troubling recent opeds by President Biden (“Republicans and Democrats, Unite Against Big Tech Abuses“) and former Trump Administration Attorney General William Barr (“Congress Must Halt Big Tech’s Power Grab“) in which they both called for European-style regulation of U.S. digital technology markets.

“The only thing Europe exports now on the digital-technology front is regulation,” I noted in my response, and that makes it all the more mind-boggling that Biden and Barr want to go down that same path. I pointed to the results of “the EU’s big-government regulatory crusade against digital tech: Stagnant markets, limited innovation and a dearth of major players. Overregulation by EU bureaucrats led Europe’s best entrepreneurs and investors to flee to the U.S. or elsewhere in search of the freedom to innovate.”

Thus, the Biden and Barr plans for importing European-style tech mandates, “would be a stake through the heart of the ‘permissionless innovation’ that made America’s info-tech economy a global powerhouse.” In a longer response to the Biden oped that I published on the R Street blog, I note that:

“It is remarkable to think that after years of everyone complaining about the lack of bipartisanship in Washington, we might get the one type of bipartisanship America absolutely does not need: the single most destructive technological suicide in U.S. history, with mandates being substituted for markets, and permission slips for entrepreneurial freedom.”

What makes all this even more remarkable is that these calls for hyper-regulation come at a time when China is challenging America’s dominance in technology and AI. Thus, “new mandates could compromise America’s lead,” I conclude. “Shackling our tech sectors with regulatory chains will hobble our nation’s ability to meet global competition and undermine innovation and consumer choice domestically.”

Jump over to the WSJ to read my entire response (“EU-Style Regulation Begets EU-Style Stagnation“) and to the R Street blog for my longer essay (“President Biden Wants America to Become Europe on Tech Regulation“).

Everywhere you look in tech policy land these days, people decry China as a threat to America’s technological supremacy or our national security. Many of these claims are well-founded, while others are somewhat overblown. Regardless, as I argue in a new piece for National Review this week, “America Won’t Beat China by Becoming China.” Many pundits and policymakers seem to think that only a massive dose of central planning and Big Government technocratic bureaucracy can counter the Chinese threat. It’s a recipe for a great deal of policy mischief.

Some of these advocates for a ‘let’s-be-more-like-China’ approach to tech policy also engage in revisionist histories about America’s recent success stories in the personal computing revolution and internet revolution. As I note in my essay, “[t]he revisionists instead prefer to believe that someone high up in government was carefully guiding this decentralized innovation. In the new telling of this story, deregulation had almost nothing to do with it.” In fact, I was asked by National Review to write this piece in response to a recent essay by Wells King of American Compass, who has penned some rather remarkable revisionist tales of government basically being responsible for all the innovation in digital tech sectors over the past quarter century. Markets and venture capital had nothing to do with it by his reasoning. It’s what Science writer Matt Ridley correctly labels “innovation creationism,” or the notion that it basically takes a village to raise an innovator. Continue reading →

I have a new oped in the Orange County Register discussing reforms that can help address the growing problem of “zombie government,” or old government policies and programs that just seem to never die even though they have long outlived their usefulness. There is no single solution to this sort of “set-it-and-forget-it” approach to government that locks in old policies and programs, but I note that:

sunsets and sandboxes are two policy innovations that can help liberate California from old and cumbersome government regulations and rules. Sunsets pause or end rules or programs regularly to ensure they don’t grow stale. Sandboxes are policy experiments that allow for the temporary relaxation of regulations to see what approaches might work better.

When California, other states, and the federal government fail to do occasional spring cleaning of unneeded old rules and programs, the result is chronic regulatory accumulation with real costs and consequences for the efficient operation of markets and important government programs.

Jump over to the OCR site to read the entire oped.

I’m finishing up my next book, which is tentatively titled, “A Flexible Governance Framework for Artificial Intelligence.” I thought I’d offer a brief preview here in the hope of connecting with others who care about innovation in this space and are also interested in helping to address these policy issues going forward.

The goal of my book is to highlight the ways in which artificial intelligence (AI), machine learning (ML), robotics, and the power of computational science are set to transform the world—and the world of public policy—in profound ways. As with all my previous books and research products, my goal in this book includes both empirical and normative components. The first objective is to highlight the tensions between emerging technologies and the public policies that govern them. The second is to offer a defense of a specific governance stance toward emerging technologies intended to ensure we can enjoy the fruits of algorithmic innovation.

AI is a transformational technology that is general-purpose and dual-use. AI and ML also build on top of other important technologies—computing, microprocessors, the internet, high-speed broadband networks, and data storage/processing systems—and they will become the building blocks for a great many other innovations going forward. This means that, eventually, all policy will involve AI policy and computational considerations at some level. It will become the most important technology policy issue here and abroad going forward.

The global race for AI supremacy has important implications for competitive advantage and other geopolitical issues. This is why nations are focusing increasing attention on what they need to do to ensure they are prepared for this next major technological revolution. Public policy attitudes and defaults toward innovative activities will have an important influence on these outcomes.

In my book, I argue that, if the United States hopes to maintain a global leadership position in AI, ML, and robotics, public policy should be guided by two objectives:

  1. Maximize the potential for innovation, entrepreneurialism, investment, and worker opportunities by seeking to ensure that firms and other organizations are prepared to compete at a global scale for talent and capital and that the domestic workforce is properly prepared to meet the same global challenges.
  2. Develop a flexible governance framework to address various ethical concerns about AI development or use to ensure these technologies benefit humanity, but work to accomplish this goal without undermining the goals set forth in the first objective.

The book primarily addresses the second of these priorities because getting the governance framework for AI right significantly improves the chances of successfully accomplishing the first goal of ensuring that the United States remains a leading global AI innovator. Continue reading →

Are you a student or young scholar looking for opportunities to advance your studies and future career? The Mercatus Center at George Mason University can help. I’ve been with Mercatus for 12 years now, and the most rewarding part of my job has always been the chance to interact with students and up-and-coming scholars who are hungry to learn more and make their mark on the world. Of course, learning and researching take time and money. Mercatus works with students and scholars in many different fields to help them advance their careers by offering them financial assistance to make their dreams easier to achieve.

The Mercatus Center’s Academic & Student Programs team (ASP) are the ones that make all this happen. ASP is currently accepting applications for various fellowships running through the 2022-2023 academic year (for students) and 2023 calendar year (for our early-career scholars).  ASP recruits, trains, and supports graduate students who have gone on to pursue careers in academia, government, and public policy. Additionally, ASP supports scholars pursuing research on the cutting edge of academia. Mercatus fellows have an opportunity to learn from and interact with an impressive collection of Mercatus faculty, affiliated scholars, and visitors.

ASP offers several different fellowship programs to suit every need. Our fellows explore and discuss the foundations of political economy and public policy and pursue research on pressing issues. For graduate students who follow this blog and are generally interested in the big questions surrounding innovation, we especially encourage you to consider the Frédéric Bastiat Fellowship, which will be premiering its innovation study track for the 2022-2023 academic year. I am usually an instructor at the session on tech and innovation policy.

Here are more details on all the academic fellowships that Mercatus currently offers. Please pass along this information to any students or early-career scholars who might be interested.

Continue reading →

A short presentation I do for Mercatus Center graduate students every couple of years offering advice to aspiring policy scholars looking to develop their personal brand & be more effective public policy analysts.