The New York Times today published my response to an op-ed by Senators Lindsey Graham & Elizabeth Warren calling for a new “Digital Consumer Protection Commission” to micromanage the high-tech information economy. “Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution,” I argue. Here’s my full response:
Senators Lindsey Graham and Elizabeth Warren propose a new federal mega-regulator for the digital economy that threatens to undermine America’s global technology standing.
A new “licensing and policing” authority would stall the continued growth of advanced technologies like artificial intelligence in America, leaving China and others to claw back crucial strategic ground.
America’s digital technology sector enjoyed remarkable success over the past quarter-century — and provided vast investment and job growth — because the U.S. rejected the heavy-handed regulatory model of the analog era, which stifled innovation and competition.
The tech companies that Senators Graham and Warren cite (along with countless others) came about over the past quarter-century because we opened markets and rejected the monopoly-preserving regulatory regimes that had been captured by old players.
The U.S. has plenty of federal bureaucracies, and many already oversee the issues that the senators want addressed. Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution.
As I noted in a recent interview with James Pethokoukis for his Faster, Please! newsletter, “[t]he current policy debate over artificial intelligence is haunted by many mythologies and mistaken assumptions. The most problematic of these is the widespread belief that AI is completely ungoverned today.” In a recent R Street Institute report and series of other publications, I have documented just how wrong that particular assumption is.
The first thing I try to remind everyone is that the U.S. federal government is absolutely massive—2.1 million employees, 15 cabinet agencies, 50 independent federal commissions and 434 federal departments. Strangely, when policymakers and pundits deliver remarks on AI policy today, they seem to completely ignore all that regulatory capacity while casually tossing out proposals to add ever more layers of regulation and bureaucracy on top of it. Why not first see whether the existing regulations and bureaucracies are working, and then have a chat about what more is needed to fill gaps?
And a lot is being done on this front. In a new blog post for R Street, I offer a brief summary of some of the most important recent efforts. Continue reading →
The R Street Institute has just released my latest study on AI governance and how to address “alignment” concerns in a bottom-up fashion. The 40-page report is entitled, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”
My report asks, is it possible to address AI alignment without starting with the Precautionary Principle as the default governance baseline? I explain how that is indeed possible. While some critics claim that no one is seriously trying to deal with AI alignment today, my report explains how no technology in history has been more heavily scrutinized this early in its life cycle than AI, machine learning and robotics. The number of ethical frameworks out there already is astonishing. We don’t have too few alignment frameworks; we probably have too many!
We need to get serious about bringing some consistency to these efforts and figure out more concrete ways to build a culture of safety by embedding ethics-by-design. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society.
Continue reading →
In my latest R Street Institute report, I discuss the importance of “Getting AI Innovation Culture Right.” This is the first of a trilogy of major reports on what sort of policy vision and set of governance principles should guide the development of artificial intelligence (AI), algorithmic systems, machine learning (ML), robotics, and computational science and engineering more generally. More specifically, these reports seek to answer the question, Can we achieve AI safety without innovation-crushing top-down mandates and massive new regulatory bureaucracies?
These questions are particularly pertinent, as we just made it through a week in which a major open letter was issued calling for a six-month freeze on the deployment of AI technologies, while a prominent AI ethicist argued that governments should go further and consider airstrikes on data processing centers, even at the risk of a nuclear exchange! On top of that, Italy became the first major nation to ban ChatGPT, the popular AI-enabled chatbot created by U.S.-based OpenAI.
My report begins from a different presumption: AI, ML and algorithmic technologies present society with enormous benefits and, while real risks exist, we can find better ways of addressing them. Continue reading →
I have a new R Street Institute policy study out this week doing a deep dive into the question: “Can We Predict the Jobs and Skills Needed for the AI Era?” There’s lots of hand-wringing going on today about AI and the future of employment, but that’s really nothing new. In fact, in light of past automation panics, we might want to step back and ask: Why isn’t everyone already unemployed due to technological innovation?
To get my answers, please read the paper! In the meantime, here’s the executive summary:
To better plan for the economy of the future, many academics and policymakers regularly attempt to forecast the jobs and worker skills that will be needed going forward. Driving these efforts are fears about how technological automation might disrupt workers, skills, professions, firms and entire industrial sectors. The continued growth of artificial intelligence (AI), robotics and other computational technologies exacerbates these anxieties.
Yet the limits of both our collective knowledge and our individual imaginations constrain well-intentioned efforts to plan for the workforce of the future. Past attempts to assist workers or industries have often failed for various reasons. However, dystopian predictions about mass technological unemployment persist, as do retraining or reskilling programs that typically fail to produce much of value for workers or society. As public efforts to assist or train workers move from general to more specific, the potential for policy missteps grows greater. While transitional-support mechanisms can help alleviate some of the pain associated with fast-moving technological disruption, the most important thing policymakers can do is clear away barriers to economic dynamism and new opportunities for workers.
I do discuss some things that government can do to address automation fears at the end of the paper, but it’s important that policymakers first understand all the mistakes we’ve made with past retraining and reskilling efforts. In the short term, I argue, the easiest way to help is to clear away barriers to labor mobility and economic dynamism. Again, read the study for details.
For more info on other AI policy developments, check out my running list of research on AI, ML and robotics policy.
In my latest R Street Institute blog post, “Mapping the AI Policy Landscape Circa 2023: Seven Major Fault Lines,” I discuss the big issues confronting artificial intelligence and machine learning in the coming year and beyond. I note that AI regulatory proposals are multiplying fast and coming in two general varieties: broad-based and targeted. Broad-based algorithmic regulation would address the use of these technologies in a holistic fashion across many sectors and concerns. By contrast, targeted algorithmic regulation looks to address specific AI applications or concerns. In the short term, targeted or “sectoral” regulatory proposals are more likely to be implemented.
I go on to identify seven major issues of concern that will drive these policy proposals. They include:
1) Privacy and Data Collection
2) Bias and Discrimination
3) Free Speech and Disinformation
4) Kids’ Safety
5) Physical Safety and Cybersecurity
6) Industrial Policy and Workforce Issues
7) National Security and Law Enforcement Issues
Continue reading →
- President Biden began his 2023 State of the Union remarks by saying America is defined by possibilities. Correct! Unfortunately, his tech-bashing will undermine those possibilities by discouraging technological innovation & online freedom in the United States.
- America became THE global leader on digital tech because we rejected heavy-handed controls on innovators & speech. We shouldn’t return to the broken model of the past by layering on red tape, economic controls & speech restrictions.
- What has the tech economy done for us lately? Here is a look at the value added to the U.S. economy by the digital sector from 2005-2021. That’s $2.4 TRILLION (with a T) added in 2021. These are astonishing numbers.
- FACT: According to the BEA, in 2021, “the U.S. digital economy accounted for $3.70 trillion of gross output, $2.41 trillion of value added (translating to 10.3% of U.S. GDP), $1.24 trillion of compensation + 8.0 million jobs.”
In 2021…
- $3.70 trillion of gross output
- $2.41 trillion of value added (10.3% of GDP)
- $1.24 trillion of compensation
- 8.0 million jobs
- FACT: Globally, 49 of the top 100 digital tech firms with the most employees are US companies. Here they are. Smart public policy made this list possible.
- FACT: 18 of the world’s Top 25 tech companies by Market Cap are US-based firms.
- It’d be a huge mistake to adopt Europe’s approach to tech regulation. As I noted recently in the Wall Street Journal, “The only thing Europe exports now on the digital-technology front is regulation.” Yet, Biden would have us import the EU model to our shores.
- My R Street colleague Josh Withrow has also noted how, “the EU’s approach appears to be, in sum, ‘If you can’t innovate, regulate.’” America should not be following the disastrous regulatory path of the European Union on digital technology policy.
- On antitrust regulation, here is a study by my R Street colleague Wayne Brough on the dangerous approach that the Biden administration wants, which would swing a wrecking ball through the tech economy. We have to avoid this.
- It is particularly important that the US not follow the EU’s lead on artificial intelligence regulation at a time when we are in heated competition with China on the AI front, as I noted here.
- American tech innovators flourished thanks to a positive innovation culture rooted in permissionless innovation & policies like Section 230, which allowed American firms to become global powerhouses. And we’ve moved from a world of information scarcity to one of information abundance. Let’s keep it that way.
For my latest Discourse column (“We Really Need To ‘Have a Conversation’ About AI … or Do We?”), I discuss two commonly heard lines in tech policy circles.
1) “We need to have a conversation about the future of AI and the risks that it poses.”
2) “We should get a bunch of smart people in a room and figure this out.”
I note that, if you’ve read enough essays, books or social media posts about artificial intelligence (AI) and robotics—among other emerging technologies—then chances are you’ve stumbled on variants of these two arguments many times over. They are almost impossible to disagree with in theory, but when you start to investigate what they actually mean in practice, they are revealed to be largely meaningless rhetorical flourishes that threaten to hold up meaningful progress on the AI front. I go on to argue in my essay that:
I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with: (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and, (b) the fact that people who are making such statements apparently have not spent much time investigating the remarkable number of very serious conversations that have already taken place, or which are ongoing, about AI issues.
In fact, it very well could be the case that we have too many conversations going on currently about AI issues and that the bigger problem is instead one of better coordinating the important lessons and best practices that we have already learned from those conversations.
I then unpack each of those lines and explain what is wrong with them in more detail. Continue reading →
It was my pleasure this week to participate in a panel discussion about the future of innovation policy at the James Madison Institute’s 2022 Tech and Innovation Summit in Coral Gables, FL. Our conversation focused on the future of Progress Studies, which is one of my favorite topics. We were asked to discuss five major questions and below I have summarized some of my answers to them, plus some other thoughts I had about what I heard at the conference from others.
- What is progress studies and why is it so needed today?
In a sense, Progress Studies is nothing new. It goes back at least to the days of Adam Smith, and plenty of important scholars have been thinking about it ever since. Those scholars and policy advocates have long been engaged in trying to figure out the secret sauce that powers economic growth and human prosperity. It’s just that we didn’t call it Progress Studies in the old days.
The reason Progress Studies is important is that technological innovation has been shown to be the fundamental driver of improvements in human well-being over time. When we can move the needle on progress, it helps individuals extend and improve their lives, incomes and happiness. By extension, progress helps us live lives of our choosing. As Hans Rosling brilliantly argued, the goal of expanding innovation opportunities and raising incomes “is not just bigger piles of money” or more leisure time. “The ultimate goal is to have the freedom to do what we want.” Continue reading →
[Cross-posted from Medium.]
In an age of hyper-partisanship, one issue unites the warring tribes of American politics like no other: hatred of “Big Tech.” You know, those evil bastards who gave us instantaneous access to a universe of information at little to no cost. Those treacherous villains! People are quick to forget the benefits of moving from a world of Information Poverty to one of Information Abundance, preferring to take for granted all they’ve been given and then find new things to complain about.
But what mostly unites people against large technology platforms is the feeling that they are just too big or too influential relative to other institutions, including government. I get some of that concern, even if I strongly disagree with many of the proposed solutions, such as the highly dangerous sledgehammer of antitrust breakups or sweeping speech controls. Breaking up large tech companies would not only compromise the many benefits they provide us; it would also undermine America’s global standing as a leader in information and computational technology. We don’t want that. And speech codes or meddlesome algorithmic regulations are on a collision course with the First Amendment and will just result in endless litigation in the courts.
There’s a better path forward. As President Ronald Reagan rightly said in 1987 when vetoing a bill to reestablish the Fairness Doctrine, “History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee.” In other words, as I wrote in a previous essay about “The Classical Liberal Approach to Digital Media Free Speech Issues,” more innovation and competition are always superior to more regulation when it comes to encouraging speech and speech opportunities.
Continue reading →