The Wall Street Journal has run my response to troubling recent op-eds by President Biden (“Republicans and Democrats, Unite Against Big Tech Abuses”) and former Trump administration Attorney General William Barr (“Congress Must Halt Big Tech’s Power Grab”), in which they both called for European-style regulation of U.S. digital technology markets.

“The only thing Europe exports now on the digital-technology front is regulation,” I noted in my response, and that makes it all the more mind-boggling that Biden and Barr want to go down that same path. The results of “the EU’s big-government regulatory crusade against digital tech” speak for themselves: “[s]tagnant markets, limited innovation and a dearth of major players. Overregulation by EU bureaucrats led Europe’s best entrepreneurs and investors to flee to the U.S. or elsewhere in search of the freedom to innovate.”

Thus, the Biden and Barr plans for importing European-style tech mandates “would be a stake through the heart of the ‘permissionless innovation’ that made America’s info-tech economy a global powerhouse.” In a longer response to the Biden op-ed that I published on the R Street blog, I note that:

“It is remarkable to think that after years of everyone complaining about the lack of bipartisanship in Washington, we might get the one type of bipartisanship America absolutely does not need: the single most destructive technological suicide in U.S. history, with mandates being substituted for markets, and permission slips for entrepreneurial freedom.”

What makes all this even more remarkable is that these calls for hyper-regulation come at a time when China is challenging America’s dominance in technology and AI. Thus, “new mandates could compromise America’s lead,” I conclude. “Shackling our tech sectors with regulatory chains will hobble our nation’s ability to meet global competition and undermine innovation and consumer choice domestically.”

Jump over to the WSJ to read my entire response (“EU-Style Regulation Begets EU-Style Stagnation”) and to the R Street blog for my longer essay (“President Biden Wants America to Become Europe on Tech Regulation”).

I spent much of 2022 writing about the growing policy debate over artificial intelligence, machine learning, robotics, and the Computational Revolution more generally. Here are some of the major highlights of my work on this front.

All these essays, plus dozens more, can be found in my “Running List of My Research on AI, ML & Robotics Policy.” I have several lengthy studies and many shorter essays coming in the first half of 2023.

Finally, here is a Federalist Society podcast discussion about AI policy hosted by Jennifer Huddleston in which Hodan Omaar of ITIF and I offer a big picture overview of where things are headed next.

Everywhere you look in tech policy land these days, people decry China as a threat to America’s technological supremacy or our national security. Many of these claims are well-founded, while others are somewhat overblown. Regardless, as I argue in a new piece for National Review this week, “America Won’t Beat China by Becoming China.” Many pundits and policymakers seem to think that only a massive dose of central planning and Big Government technocratic bureaucracy can counter the Chinese threat. It’s a recipe for a great deal of policy mischief.

Some of these advocates for a ‘let’s-be-more-like-China’ approach to tech policy also engage in revisionist histories about America’s recent success stories in the personal computing revolution and internet revolution. As I note in my essay, “[t]he revisionists instead prefer to believe that someone high up in government was carefully guiding this decentralized innovation. In the new telling of this story, deregulation had almost nothing to do with it.” In fact, I was asked by National Review to write this piece in response to a recent essay by Wells King of American Compass, who has penned some rather remarkable revisionist tales of government basically being responsible for all the innovation in digital tech sectors over the past quarter century. Markets and venture capital had nothing to do with it, by his reasoning. It’s what science writer Matt Ridley correctly labels “innovation creationism,” or the notion that it basically takes a village to raise an innovator. Continue reading →

Over at Discourse magazine this week, my R Street colleague Jonathan Cannon and I have posted a new essay on how it has been “Quite a Fall for Digital Tech.” We mean that in two senses: the last few months have witnessed serious market turmoil for some of America’s leading tech companies, and the political situation for digital tech more generally has become perilous. Plenty of people on the Left and the Right now want a pound of flesh from the info-tech sector, and the first cut involves Section 230, the 1996 law that shields digital platforms from liability for content posted by third parties.

With the Supreme Court recently announcing it will hear Gonzalez v. Google, a case that could significantly narrow the scope of Section 230, the stakes have grown higher. It was already the case that federal and state lawmakers were looking to chip away at Sec. 230’s protections through an endless variety of regulatory measures. But if the Court guts Sec. 230 in Gonzalez, then it will really be open season on tech companies, as lawsuits will fly at every juncture whenever someone does not like a particular content moderation decision. Cannon and I note in our new essay that, Continue reading →

I have a new op-ed in the Orange County Register discussing reforms that can help address the growing problem of “zombie government”: old government policies and programs that never seem to die even though they have long outlived their usefulness. There is no single solution to this sort of “set-it-and-forget-it” approach to government that locks in old policies and programs, but I note that:

sunsets and sandboxes are two policy innovations that can help liberate California from old and cumbersome government regulations and rules. Sunsets pause or end rules or programs regularly to ensure they don’t grow stale. Sandboxes are policy experiments that allow for the temporary relaxation of regulations to see what approaches might work better.

When California, other states, and the federal government fail to do occasional spring cleaning of unneeded old rules and programs, it creates chronic regulatory accumulation that has real costs and consequences for the efficient operation of markets and important government programs.

Jump over to the OCR site to read the entire op-ed.

My colleague Wayne Brough and I recently went on the “Kibbe on Liberty” show to discuss the state of free speech on the internet. We explained how censorship is a Big Government problem, not a Big Tech problem. Here’s the complete description of the show; the link to the full episode is below.

“With Elon Musk’s purchase of Twitter, we are in the middle of a national debate about the tension between censorship and free expression online. On the Right, many people are calling for government to rein in what they perceive as the excesses of Big Tech companies, while the Left wants the government to crack down on speech they deem dangerous. Both approaches make the same mistake of giving politicians authority over what we are allowed to say and hear. And with recent revelations about government agents leaning on social media companies to censor speech, it’s clear that when it comes to the online conversation, there’s no such thing as a purely private company.”

For more on these issues, please see “The Classical Liberal Approach to Digital Media Free Speech Issues.”

We are entering a new era for technology policy in which many pundits and policymakers will use “algorithmic fairness” as a universal Get Out of Jail Free card when they push for new regulations on digital speech and innovation. Proposals to regulate things like “online safety,” “hate speech,” “disinformation,” and “bias” among other things often raise thorny definitional questions because of their highly subjective nature. In the United States, efforts by government to control these things will often trigger judicial scrutiny, too, because restraints on speech violate the First Amendment. Proponents of prior restraint or even ex post punishments understand this reality and want to get around it. Thus, in an effort to avoid constitutional scrutiny and lengthy court battles, they are engaged in a rebranding effort and seeking to push their regulatory agendas through a techno-panicky prism of “algorithmic fairness” or “algorithmic justice.”

Hey, who could possibly be against FAIRNESS and JUSTICE? Of course, the devil is always in the details, as Neil Chilson and I discuss in our new paper for The Federalist Society and Regulatory Transparency Project, “The Coming Onslaught of ‘Algorithmic Fairness’ Regulations.” We document how federal and state policymakers from both parties are currently considering a variety of new mandates for artificial intelligence (AI), machine learning, and automated systems that, if imposed, “would thunder through our economy with one of the most significant expansions of economic and social regulation – and the power of the administrative state – in recent history.” Continue reading →

For my latest Discourse column (“We Really Need To ‘Have a Conversation’ About AI … or Do We?”), I discuss two commonly heard lines in tech policy circles.

1) “We need to have a conversation about the future of AI and the risks that it poses.”

2) “We should get a bunch of smart people in a room and figure this out.”

I note that if you’ve read enough essays, books or social media posts about artificial intelligence (AI) and robotics—among other emerging technologies—chances are you’ve stumbled on variants of these two arguments many times over. They are almost impossible to disagree with in theory, but when you start to investigate what they actually mean in practice, they are revealed to be largely meaningless rhetorical flourishes that threaten to hold up meaningful progress on the AI front. I go on to argue in my essay that:

I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with: (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and, (b) the fact that people who are making such statements apparently have not spent much time investigating the remarkable number of very serious conversations that have already taken place, or which are ongoing, about AI issues.

In fact, it very well could be the case that we have too many conversations going on currently about AI issues and that the bigger problem is instead one of better coordinating the important lessons and best practices that we have already learned from those conversations.

I then unpack each of those lines and explain what is wrong with them in more detail. Continue reading →

I’ve been floating around in conservative policy circles for 30 years, and I have spent much of that time covering media policy and child safety issues. My time in conservative circles began in 1992 with a nine-year stint at the Heritage Foundation, where I launched the organization’s policy efforts on media regulation, the Internet, and digital technology. Meanwhile, my work on child safety has spanned four think tanks, multiple blue ribbon child safety commissions, countless essays, dozens of filings and testimonies, and even a multi-edition book.

During this three-decade run, I’ve tried my hardest to find balanced ways of addressing some of the legitimate concerns that many conservatives have about kids, media content, and online safety issues. Raising kids is the hardest job in the world. My daughter and son are now off at college, but the last twenty years of helping them figure out how to navigate the world and all the challenges it poses was filled with difficulties. This was especially true because my daughter and son faced completely different challenges when it came to media content and online interactions. Simply put, there is no one-size-fits-all playbook when it comes to raising kids or addressing concerns about healthy media interactions. Continue reading →

It was my pleasure this week to participate in a panel discussion about the future of innovation policy at the James Madison Institute’s 2022 Tech and Innovation Summit in Coral Gables, FL. Our conversation focused on the future of Progress Studies, which is one of my favorite topics. We were asked to discuss five major questions and below I have summarized some of my answers to them, plus some other thoughts I had about what I heard at the conference from others.

  1. What is Progress Studies and why is it so needed today?

In a sense, Progress Studies is nothing new. It goes back at least to the days of Adam Smith, and plenty of important scholars have been thinking about it ever since. Those scholars and policy advocates have long been engaged in trying to figure out the secret sauce that powers economic growth and human prosperity. It’s just that we didn’t call it Progress Studies in the old days.

The reason Progress Studies is important is that technological innovation has been shown to be the fundamental driver of improvements in human well-being over time. When we can move the needle on progress, it helps individuals extend and improve their lives, incomes, and happiness. By extension, progress helps us live lives of our choosing. As Hans Rosling brilliantly argued, the goal of expanding innovation opportunities and raising incomes “is not just bigger piles of money” or more leisure time. “The ultimate goal is to have the freedom to do what we want.” Continue reading →