Technopanics & the Precautionary Principle – Technology Liberation Front (https://techliberation.com)
Keeping politicians’ hands off the Net & everything else related to technology

On “Pausing” AI (April 7, 2023)
https://techliberation.com/2023/04/07/on-pausing-ai/

Recently, the Future of Life Institute released an open letter, signed by some computer science luminaries and others, calling for a 6-month “pause” on the deployment and research of “giant” artificial intelligence (AI) technologies. Eliezer Yudkowsky, a prominent AI ethicist, then made news by arguing that the “pause” letter did not go far enough; he proposed that governments consider “airstrikes” against data processing centers, or even be open to the use of nuclear weapons. This is, of course, quite insane. Yet this is the state of things today, as an AI technopanic seems to be growing faster than any of the technopanics I’ve covered in my 31 years in the field of tech policy—and I have covered a lot of them.

In a new joint essay co-authored with Brent Orrell of the American Enterprise Institute and Chris Meserole of Brookings, we argue that “the ‘pause’ we are most in need of is one on dystopian AI thinking.” The three of us recently served on a blue-ribbon Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, an independent effort assembled by the U.S. Chamber of Commerce. In our essay, we note how:

Many of these breakthroughs and applications will already take years to work their way through the traditional lifecycle of development, deployment, and adoption and can likely be managed through legal and regulatory systems that are already in place. Civil rights laws, consumer protection regulations, agency recall authority for defective products, and targeted sectoral regulations already govern algorithmic systems, creating enforcement avenues through our courts and by common law standards allowing for development of new regulatory tools that can be developed as actual, rather than anticipated, problems arise.

“Instead of freezing AI we should leverage the legal, regulatory, and informal tools at hand to manage existing and emerging risks while fashioning new tools to respond to new vulnerabilities,” we conclude. Also on the pause idea, it’s worth checking out this excellent essay from Bloomberg Opinion editors on why “An AI ‘Pause’ Would Be a Disaster for Innovation.”

The problem is not with the “pause” per se. Even if the signatories could somehow enforce a worldwide stop-work order, six months probably wouldn’t do much to halt advances in AI. If a brief and partial moratorium draws attention to the need to think seriously about AI safety, it’s hard to see much harm. Unfortunately, a pause seems likely to evolve into a more generalized opposition to progress.

The editors go on to rightly note:

This is a formula for outright stagnation. No one can ever be fully confident that a given technology or application will only have positive effects. The history of innovation is one of trial and error, risk and reward. One reason why the US leads the world in digital technology — why it’s home to virtually all the biggest tech platforms — is that it did not preemptively constrain the industry with well-meaning but dubious regulation. It’s no accident that all the leading AI efforts are American too.

That is 100% right, and I appreciate the Bloomberg editors linking to my latest study on AI governance when they made this point. In this new R Street Institute study, I explain why “Getting AI Innovation Culture Right” is essential to ensuring we can enjoy the many benefits that algorithmic systems offer while staying ahead in the global race for competitive advantage in this space.

That report is the first in a trilogy of big studies on decentralized, flexible governance of artificial intelligence. We can achieve AI safety without crushing top-down bans or unworkable “pauses,” I argue. My next two papers are, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence” (due out April 20th) and “Existential Risks & Global Governance Issues Surrounding AI & Robotics” (due out late May or early June). I’m also working on a co-authored essay taking a deep dive into the idea of AI impact assessments / auditing (late Spring / early Summer).

Relatedly, on April 7th, DeepLearningAI held an event on “Why a 6-Month AI Pause is a Bad Idea” featuring leading AI scientists Andrew Ng and Yann LeCun discussing the trade-offs associated with the proposal. A crucial point made in the discussion is that a pause, especially a pause in the form of a governmental ban, would be a misguided innovation policy decision. They stressed that there will be policy interventions to address targeted risks from specific algorithmic applications, but that it would be a serious mistake to stop the overall development of the underlying technological capabilities. It’s worth watching.

For more on AI policy, here’s a list of some of my latest reports and essays. Much more to come. AI policy will be the biggest tech policy fight of our lifetimes.

 

 

What Policy Vision for Artificial Intelligence? (April 2, 2023)
https://techliberation.com/2023/04/02/what-policy-vision-for-artificial-intelligence/

In my latest R Street Institute report, I discuss the importance of “Getting AI Innovation Culture Right.” This is the first of a trilogy of major reports on what sort of policy vision and set of governance principles should guide the development of artificial intelligence (AI), algorithmic systems, machine learning (ML), robotics, and computational science and engineering more generally. More specifically, these reports seek to answer the question: Can we achieve AI safety without innovation-crushing top-down mandates and massive new regulatory bureaucracies?

These questions are particularly pertinent, as we just made it through a week in which a major open letter was issued calling for a 6-month freeze on the deployment of AI technologies, while a prominent AI ethicist argued that governments should go further and consider airstrikes on data processing centers, even if that meant risking an exchange of nuclear weapons! On top of that, Italy became the first major nation to ban ChatGPT, the popular AI-enabled chatbot created by U.S.-based OpenAI.

My report begins from a different presumption: AI, ML and algorithmic technologies present society with enormous benefits and, while real risks exist, we can find better ways of addressing them. As I summarize:

The danger exists that policy for algorithmic systems could be formulated in such a way that innovations are treated as guilty until proven innocent—i.e., a precautionary principle approach to policy—resulting in many important AI applications never getting off the drawing board. If regulatory impediments block or slow the creation of life-enriching, and even life-saving, AI innovations, that would leave society less well-off and give rise to different types of societal risks.

I argue that it is essential we not trap AI in an “innovation cage” by establishing the wrong policy default for algorithmic governance but instead work through challenges as they come at us. The right policy default for the internet and for AI continues to be “innovation allowed.” But AI risks do require serious governance steps. Luckily, many tools exist and others are being created. While my next major report (due out April 20th) offers far more detail, this paper sketches out some of those mechanisms. 

The goal of algorithmic policy should be for policymakers and innovators to work together to find flexible, iterative, agile, bottom-up governance solutions over time. We can promote a culture of responsibility among leading AI innovators and balance safety and innovation for complex, rapidly evolving computational and computing technologies like AI. This approach is buttressed by existing laws and regulations, as well as common law and the courts.

The new Biden Admin “AI Bill of Rights” unfortunately represents a fear-based model of technology policymaking that breaks from the superior Clinton framework for the internet & digital technology. Our nation’s policy toward AI, robotics & algorithmic innovation should instead embrace a dynamic future and the enormous possibilities that await us.

Please check out my new paper for more details. Much more to come. And you can also check out my running list of research on AI, ML and robotics policy.

Gonzalez v Google, Section 230 & the Future of Permissionless Innovation (December 9, 2022)
https://techliberation.com/2022/12/09/gonzalez-v-google-section-230-the-future-of-permissionless-innovation/

Over at Discourse magazine this week, my R Street colleague Jonathan Cannon and I have posted a new essay on how it has been “Quite a Fall for Digital Tech.” We mean that both in the sense that the last few months have witnessed serious market turmoil for some of America’s leading tech companies, and in the sense that the political situation for digital tech more generally has become perilous. Plenty of people on the Left and the Right now want a pound of flesh from the info-tech sector, and the starting cut at the body involves Section 230, the 1996 law that shields digital platforms from liability for content posted by third parties.

With the Supreme Court recently announcing it will hear Gonzalez v. Google, a case that could significantly narrow the scope of Section 230, the stakes have grown higher. It was already the case that federal and state lawmakers were looking to chip away at Sec. 230’s protections through an endless variety of regulatory measures. But if the Court guts Sec. 230 in Gonzalez, then it will really be open season on tech companies, as lawsuits will fly at every juncture whenever someone does not like a particular content moderation decision. Cannon and I note in our new essay that,

if the court moves to weaken liability protections for digital platforms, the ramifications will be profoundly negative. While many critics today complain that the law’s liability protections have been too generous, the reality is that Section 230 has been the legal linchpin supporting the permissionless innovation model that fueled America’s commanding lead in the digital information revolution. Thanks to the law, digital entrepreneurs have been free to launch bold new ideas without fear of punishing lawsuits or regulatory shenanigans. This has boosted economic growth and dramatically broadened consumer information and communications options.

Many critics of Sec. 230 claim that reforms are needed to “rein in Big Tech.” But, ironically, gutting Sec. 230 would probably only make big tech companies even bigger because the smaller players in the market would struggle to deal with the mountains of regulations and lawsuits that would come about in its absence. Cannon and I continue on to explore what it means for the next generation of online innovators if these court cases go badly and Section 230 is scaled back or gutted:

Section 230 has been a legal cornerstone of the entire ecosystem. All the large-scale platforms we depend on for our online experience would never have gotten off the ground without its protection. […] More importantly, these platforms have relied on being able to host third-party content without fear of opening a Pandora’s box of private litigation and endless challenges from governments. By removing these protections, platforms will be forced to significantly increase their moderation practices to reduce risk of suits from zealous litigants. Besides the chilling effect this will have on speech, it also will put up a cost-prohibitive barrier for smaller entrants who lack the resources to have an army of content moderators to find and eliminate undesirable content.

The broader effect on market dynamism and the nation’s technological competitiveness will be profound as permissionless innovation is replaced by mountains of top-down permission slips. “If America’s digital sector gets kneecapped by the Supreme Court, or if new regulations or legislative proposals scale back Section 230 protections, it will be significantly more difficult for U.S. firms to continue to lead in the development and commercialization of new technologies,” we conclude.

Jump over to Discourse to read the entire piece.

AI Eats the World: Preparing for the Computational Revolution and the Policy Debates Ahead (September 12, 2022)
https://techliberation.com/2022/09/12/ai-eats-the-world-preparing-for-the-computational-revolution-and-the-policy-debates-ahead/

[Cross-posted from Medium.]

The Coming Computational Revolution

Thomas Edison once spoke of how electricity was a “field of fields.” This is even more true of AI, which is ready to bring about a sweeping technological revolution. In Carlota Perez’s influential 2009 paper on “Technological Revolutions and Techno-economic Paradigms,” she defined a technological revolution “as a set of interrelated radical breakthroughs, forming a major constellation of interdependent technologies; a cluster of clusters or a system of systems.” To be considered a legitimate technological revolution, Perez argued, the technology or technological process must be “opening a vast innovation opportunity space and providing a new set of associated generic technologies, infrastructures and organisational principles that can significantly increase the efficiency and effectiveness of all industries and activities.” In other words, she concluded, the technology must have “the power to bring about a transformation across the board.”

Expanding Our Skillset

Thus, AI (and AI policy) is multi-dimensional, amorphous, and ever-changing. It has many layers and complexities. This will require public policy analysts and institutions to reorient their focus and develop new capabilities.

Mapping the AI Policy Terrain: Broad vs. Narrow

Beyond talent development, the other major challenge is issue coverage. How can we cover all the AI policy bases? There are two general categories of AI concerns, and supporters of free markets need to be prepared to engage on both battlefields.

Confronting the Formidable Resistance to Change

Finally, free-market analysts and organizations must prepare to defend the general concept of progress through technological change as AI becomes a central social, economic, and legal battleground — both domestically and globally. Every technological revolution involves major social and economic disruptions and gives rise to intense efforts to defend the status quo and block progress. As Perez concludes, “the profound and wide-ranging changes made possible by each technological revolution and its techno-economic paradigm are not easily assimilated; they give rise to intense resistance.”

Why the Endless Techno-Apocalyptica in Modern Sci-Fi? (September 2, 2022)
https://techliberation.com/2022/09/02/why-the-endless-techno-apocalyptica-in-modern-sci-fi/

James Pethokoukis of AEI interviews me about the current miserable state of modern science fiction, which is dripping with dystopian dread in every movie, show and book plot. How does all this techno-apocalyptica affect societal and political attitudes about innovation broadly and emerging technologies in particular? Our discussion builds on a recent Discourse article, “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics.” [Pasted down below.] Swing on over to Jim’s “Faster, Please” newsletter and hear what Jim and I have to say. And, for a bonus question, Jim asked me if we are doing a good job of inspiring kids to have a sense of wonder and to take risks. I have some serious concerns that we are falling short on that front.

How Science Fiction Dystopianism Shapes the Debate over AI & Robotics

[Originally ran on Discourse on July 26, 2022.]

George Jetson will be born this year. We don’t know the exact date of this fictional cartoon character’s birth, but thanks to some skillful Hanna-Barbera hermeneutics the consensus seems to be sometime in 2022.

In the same episode that we learn George’s approximate age, we’re also told the good news that his life expectancy in the future is 150 years. It was one of the many ways The Jetsons, though a cartoon for children, depicted a better future for humanity thanks to exciting innovations. Another was a helpful robot named Rosie, along with a host of other automated technologies—including a flying car—that made George and his family’s life easier.

 

Most fictional portrayals of technology today are not as optimistic as The Jetsons, however. Indeed, public and political conceptions about artificial intelligence (AI) and robotics in particular are being strongly shaped by the relentless dystopianism of modern science fiction novels, movies and television shows. And we are worse off for it.

AI, machine learning, robotics and the power of computational science hold the potential to drive explosive economic growth and profoundly transform a diverse array of sectors, while providing humanity with countless technological improvements in medicine and healthcare, financial services, transportation, retail, agriculture, entertainment, energy, aviation, the automotive industry and many others. Indeed, these technologies are already deeply embedded in these and other industries and making a huge difference.

But that progress could be slowed and in many cases even halted if public policy is shaped by a precautionary-principle-based mindset that imposes heavy-handed regulation based on hypothetical worst-case scenarios. Unfortunately, the persistent dystopianism found in science fiction portrayals of AI and robotics conditions the ground for public policy debates, while also directing attention away from some of the more real and immediate issues surrounding these technologies.

Incessant Dystopianism Untethered from Reality

In his recent book Robots, Penn State business professor John Jordan observes how over the last century “science fiction set the boundaries of the conceptual playing field before the engineers did.” Pointing to the plethora of literature and film that depicts robots, he notes: “No technology has ever been so widely described and explored before its commercial introduction.” Not the internet, cell phones, atomic energy or any others.

Indeed, public conceptions of these technologies, and even the very vocabulary of the field, have been shaped heavily by sci-fi plots beginning a hundred years ago with the 1920 play R.U.R. (Rossum’s Universal Robots), which gave us the term “robot,” and Fritz Lang’s 1927 silent film Metropolis, with its memorable Maschinenmensch, or “machine-human.” There has been a deep and rich imagination surrounding AI and robotics since then, but it has tended to be mostly negative and has grown more hostile over time.

The result has been a public and policy dialogue about AI and robotics that is focused on an endless parade of horribles about these technologies. Not surprisingly, popular culture also affects journalistic framings of AI and robotics. Headlines breathlessly scream of how “Robots May Shatter the Global Economic Order Within a Decade,” but only if we’re not dead already because… “If Robots Kill Us, It’s Because It’s Their Job.”

Dark depictions of AI and robotics are ever-present in popular modern sci-fi movies and television shows. A short list includes: 2001: A Space Odyssey, Avengers: Age of Ultron, Battlestar Galactica (both the 1978 original and the 2004 reboot), Black Mirror, Blade Runner, Ex Machina, Her, The Matrix, Robocop, The Stepford Wives, Terminator, Transcendence, Tron, WALL-E, Wargames and Westworld, among countless others. The least nefarious plots among these films and television shows rest on the idea that AI and robotics are going to drive us to a life of distraction, addiction or sloth. In more extreme cases, we’re warned about a future in which we are either going to be enslaved or destroyed by our new robotic or algorithmic overlords.

Don’t get me wrong; the movies and shows on the above list are some of my favorites. 2001 and Blade Runner are both in my top 5 all-time flicks, and the reboot of Battlestar is one of my favorite TV shows. The plots of all these movies and shows are terrifically entertaining and raise many interesting issues that make for fun discussions.

But they are not representative of reality. In fact, the vast majority of computer scientists and academic experts on AI and robotics agree that claims about machine “superintelligence” are wildly overplayed and that there is no possibility of machines gaining human-equivalent knowledge any time soon—or perhaps ever. “In any ranking of near-term worries about AI, superintelligence should be far down the list,” argues Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans.

Contra the Terminator-esque nightmares envisioned in so many sci-fi plots, MIT roboticist Rodney Brooks says that “fears of runaway AI systems either conquering humans or making them irrelevant aren’t even remotely well grounded.” John Jordan agrees, noting: “The fear and uncertainty generated by fictional representations far exceed human reactions to real robots, which are often reported to be ‘underwhelming.’”

The same is true for AI more generally. “A close inspection of AI reveals an embarrassing gap between actual progress by computer scientists working on AI and the futuristic visions they and others like to describe,” says Erik Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Larson refers to this extreme thinking about superintelligent AI as “technological kitsch,” or exaggerated sentimentality and melodrama that is untethered from reality. Yet, the public imagination remains captivated by tales of impending doom.

Seeding the Ground with Misery and Misguided Policy

But isn’t it all just harmless fun? After all, it’s just make believe. Moreover, can’t science fiction—no matter how full of techno-misery—help us think through morally weighty issues and potential ethical conundrums involving AI and robotics?

Yes and no. Titillating fiction has always had a cathartic element to it and helped us cope with the unknown and mysterious. Most historians believe it was Aristotle in his Poetics who first used the term katharsis when discussing how Greek tragedies helped the audience “through pity and fear effecting the proper purgation of these emotions.”

But are modern science fiction depictions of AI and robotics helping us cope with technological change, or instead just stoking a constant fear of it? Modern sci-fi isn’t so much purging negative emotion about the topic at hand as it is endlessly adding to the sense of dread surrounding these technologies. What are the societal and political ramifications of a cultural frame of reference that suggests an entire new class of computational technologies will undermine rather than enrich our human experiences and, possibly, our very existence?

The New Yorker’s Jill Lepore says we live in “A Golden Age for Dystopian Fiction,” but she worries that this body of work “cannot imagine a better future, and it doesn’t ask anyone to bother to make one.” She argues this “fiction of helplessness and hopelessness” instead “nurses grievances and indulges resentments” and that “[i]ts only admonition is: Despair more.” Lepore goes so far as to claim that, because “the radical pessimism of an unremitting dystopianism” has appeal to many on both the left and right, it “has itself contributed to the unravelling of the liberal state and the weakening of a commitment to political pluralism.”

I’m not sure dystopian fiction is driving the unravelling of pluralism, but Lepore is on to something when she notes how a fiction rooted in misery about the future will likely have political consequences at some point.

Techno-panic Thinking Shapes Policy Discussions

The ultimate question is whether public policy toward new AI and robotic technologies will be shaped by this hyperpessimistic thinking in the form of precautionary principle regulation, which essentially treats innovations as “guilty until proven innocent” and seeks to intentionally slow or retard their development.

If the extreme fears surrounding AI and robotics do inspire precautionary controls—as they already have in the European Union—then we need to ask how the preservation of the technological status quo could undermine human well-being by denying society important new life-enriching and life-saving goods and services. Technological stasis does not provide a safer or healthier society, but instead holds back our collective ability to innovate, prosper and better our lives in meaningful ways.

Louis Anslow, curator of the Pessimists Archive, calls this “the Black Mirror fallacy,” referencing the British television show that has enjoyed great success peddling tales of impending techno-disasters. Anslow defines the fallacy as follows: “When new technologies are treated as much more threatening and risky than old technologies with proven risks/harms. When technological progress is seen as a bigger threat than technological stagnation.”

Anslow’s Pessimists Archive collects real-world case studies of how moral panic and techno-panics have accompanied the introduction of new inventions throughout history. He notes, “Science fiction has conditioned us to be hypervigilant about avoiding dystopias born of technological acceleration and totally indifferent to avoiding dystopias born of technological stagnation.”

Techno-panics can have real-world consequences when they come to influence policymaking. Robert Atkinson, president of the Information Technology & Innovation Foundation (ITIF), has documented the many ways that “the social and political commentary [about AI] has been hype, bordering on urban myth, and even apocalyptic.” The more these attitudes and arguments come to shape policy considerations, the more likely it is precautionary principle-based recommendations will drive AI and robotics policy, preemptively limiting their potential. ITIF has published a report documenting “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” identifying how it will slow algorithmic advances in key sectors.

Similarly, in his important recent book Where Is My Flying Car?, scientist J. Storrs Hall documents how “regulation clobbered the learning curve” for many important technologies in the U.S. over the last half century, especially nuclear, nanotech and advanced aviation. Society lost out on many important innovations due to endless bureaucratic delays, often thanks to opposition from special interests, anti-innovation activists, overzealous trial lawyers and a hostile media. Hall explained how this also sent a powerful signal to talented young people who might have been considering careers in those sectors. Why go into a field demonized by so many and where your creative abilities will be hamstrung by precautionary constraints?

Disincentivizing Talent

Hall argues that in those crucial sectors, this sort of mass talent migration “took our best and brightest away from improving our lives,” and he warns that those who still hope to make a career in such fields should be prepared to be “misconstrued and misrepresented by activists, demonized by ignorant journalists, and strangled by regulation.”

Is this what the future holds for AI and robotics? Hopefully not, and America continues to generate world-class talent on this front today in a diverse array of businesses and university programs. But if the waves of negativism about AI and robotics persist, we shouldn’t be surprised if it results in a talent shift away from building these technologies and toward fields that instead look to restrict them.

For example, Hall documents how, following the sudden shift in public attitudes surrounding nuclear power 50 years ago, “interests, and career prospects, in nuclear physics imploded” and “major discoveries stopped coming.” Meanwhile, enrollment in law schools and other soft sciences typically critical of technological innovation enjoyed greater success. Nobody writes any sci-fi stories about what a disaster that development has been for innovation in the energy sphere, even though it is now abundantly clear how precautionary principle policies have undermined environmental goals and human welfare, with major geopolitical consequences for many nations.

If America loses the talent race on the AI front, it has ramifications for global competitive advantage going forward, especially as China races to catch up. In a world of global innovation arbitrage, talent and venture capital will flow to wherever it is treated most hospitably. Demonizing AI and robotics won’t help recruit or retain the next generation of talent and investors America needs to remain on top.

Flipping the Script

Some folks have had enough of the relentless pessimism surrounding technology and progress in modern science fiction and are trying to do something to reverse it. In a 2011 Wired essay decrying the dangers of “Innovation Starvation,” the acclaimed novelist Neal Stephenson lamented the fact that “the techno-optimism of the Golden Age of [science fiction] has given way to fiction written in a generally darker, more skeptical and ambiguous tone.” While good science fiction “supplies a plausible, fully thought-out picture of an alternate reality in which some sort of compelling innovation has taken place,” Stephenson said modern sci-fi was almost entirely focused on its potential downsides.

To help reverse this trend, Stephenson worked with the Center for Science and the Imagination at Arizona State University to launch Project Hieroglyph, an effort to support authors willing to take a more optimistic view of the future. It yielded a 2014 book, Hieroglyph: Stories and Visions for a Better Future, which included almost 20 contributors. Later, in 2018, The Verge launched the “Better Worlds” project to support 10 writers of “stories that inspire hope” about innovation and the future. “Contemporary science fiction often feels fixated on a sort of pessimism that peers into the world of tomorrow and sees the apocalypse looming more often than not,” said Verge culture editor Laura Hudson when announcing the project.

Unfortunately, these efforts have not captured much public attention, and that’s hardly surprising. “Pessimism has always been big box office,” says science writer Matt Ridley, primarily because it really is more entertaining. Even though many of the great sci-fi writers of the past, including Isaac Asimov, Arthur C. Clarke, and Robert Heinlein, wrote positively about technology, they ultimately had more success selling stories with darker themes. It’s just the nature of things more generally, from the best of Greek tragedy to Shakespeare and on down the line. There’s a reason they’re still rebooting Beowulf all these years later, after all.

So, There’s Star Trek and What Else?

While technological innovation will never enjoy the respect it deserves for being the driving force behind human progress, one can at least hope that more pop culture treatments of it might give it a fair shake. When I ask crowds of people to name a popular movie or television show that includes mostly positive depictions of technology, Star Trek is usually the first (and sometimes the only) thing people mention. It’s true that, on balance, technology was treated as a positive force in the original series, although “V’Ger”—a defunct space probe that attains a level of consciousness—was the prime antagonist in Star Trek: The Motion Picture. Later, Star Trek: The Next Generation gave us the always helpful android Data, but also created the lasting mental image of the Borg, a terrifying race of cyborgs hell-bent on assimilating everyone into their hive mind.

The Borg provided some of The Next Generation's most thrilling moments, but also created a new cultural meme, with tech critics often worrying about how today's humans are being assimilated into the hive mind of modern information systems. Philosopher Michael Sacasas even coined the term "the Borg Complex" to refer to a supposed tendency "exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile." After years of a friendly back-and-forth with Sacasas, I even felt compelled to wrap up my book Permissionless Innovation with a warning to other techno-optimists not to fall prey to this deterministic trap when defending technological change. Regardless of where one falls on that issue, the fact that Sacasas and I were having a serious philosophical discussion premised on a famous TV plotline serves as another indication of how much science fiction shapes public and intellectual debate over progress and innovation.

And, truth be told, some movies know how to excite the senses without resorting to dystopianism. Interstellar and The Martian are two recent examples that come to mind. Interestingly, space exploration technologies themselves usually get a fair shake in many sci-fi plots, often only to be undermined by onboard AIs or androids, as occurred not only in 2001 with the eerie HAL 9000, but also in Alien.

There are some positive (and sometimes humorous) depictions of robots, as in Robot & Frank, or touching ones, as in Bicentennial Man. Beyond The Jetsons, other cartoons like Iron Giant and Big Hero 6 offer more kindly visions of robots. KITT, a super-intelligent robot car, was Michael Knight's dependable ally in NBC's Knight Rider. And R2-D2 is always a friendly helper throughout the Star Wars franchise. But generally speaking, modern sci-fi continues to churn out far more negativism about AI and robotics.

What If We Took It All Seriously?

So long as the public and political imagination is spellbound by the machine machinations that dystopian sci-fi produces, we'll be at risk of being stuck with absurd debates that have no meaningful solution other than "Stop the clock!" or "Ban it all!" Are we really being assimilated into the Borg hive mind, or just buying time until a coming robopocalypse grinds us into dust (or dinner)?

If there were a kernel of truth to any of this, then we should adopt some of the extreme solutions Nick Bostrom of Oxford suggests in his writing on these issues. Those radical steps include worldwide surveillance and enforcement mechanisms for scientists and researchers developing algorithmic and robotic systems, as well as some sort of global censorship of information about these capabilities to ensure the technology is not used by bad actors.

To Bostrom's great credit, he is at least willing to tell us how far he'd go. Most of today's tech critics prefer to just spread a gospel of gloom and doom and suggest that something must be done, without getting into the ugly details about what a global control regime for computational science and robotic engineering would look like. We should reject such extremist hypothesizing and understand that silly sci-fi plots, bombastic headlines and kooky academic writing should not be our baseline for serious discussions about the governance of artificial intelligence and robotics.

At the same time, we absolutely should consider what downsides any technology poses for individuals and society. And, yes, some precautions of a regulatory nature will be needed. But most of the problems envisioned by sci-fi writers are not what we should be concerned with. There are far more specific and nuanced problems that AI and robotics confront us with today, and they deserve more serious consideration and governance steps. How to program safer drones and driverless cars, improve the accuracy of algorithmic medical and financial technologies, and ensure better transparency for government uses of AI are all more mundane but very important issues that require reasoned discussion and balanced solutions today. Dystopian thinking gives us no roadmap to get there other than extreme solutions.

Imagining a Better Future

The way forward here is neither to indulge in apocalyptic fantasies nor pollyannaish techno-optimism, but to approach these technologies with reasoned risk analysis, sensible industry best practices, educational efforts and other agile governance steps. In a forthcoming book on flexible governance strategies for AI and robotics, I outline how these and other strategies are already being formulated to address real-world challenges in fields as diverse as driverless cars, drones, machine learning in medicine and much more.

A wide variety of ethical frameworks, offered by professional associations, academic groups and others, already exists to “bake in” best practices and align AI design with widely shared goals and values while also “keeping humans in the loop” at critical stages of the design process to ensure that they can continue to guide and occasionally realign those values and best practices as needed.

When things do go wrong, many existing remedies are available, including a wide variety of common law solutions (torts, class actions, contract law, etc.), recall authority possessed by many regulatory agencies, and various consumer protection policies and other existing laws. Moreover, the most effective solution to technological problems usually lies in more innovation, not less. It is only through constant trial and error that humanity discovers better and safer ways of satisfying important wants and needs.

These are complicated and nuanced issues that demand tailored and iterative governance responses. But this should not be done using inflexible, innovation-limiting mandates. Concerns about AI dangers deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions.

So, enjoy your next dopamine hit of sci-fi hysteria—I know I will, too. But don’t let that be your guide to the world that awaits us. Even if most sci-fi writers can’t imagine a better future, the rest of us can.

]]>
https://techliberation.com/2022/09/02/why-the-endless-techno-apocalyptica-in-modern-sci-fi/feed/ 2 77033
Running List of My Research on AI, ML & Robotics Policy https://techliberation.com/2022/07/29/running-list-of-my-research-on-ai-ml-robotics-policy/ https://techliberation.com/2022/07/29/running-list-of-my-research-on-ai-ml-robotics-policy/#respond Fri, 29 Jul 2022 12:51:54 +0000 https://techliberation.com/?p=77020

[last updated 4/3/2025 – Check my Medium page for latest posts]

This is a running list of all the essays and reports I've already rolled out on the governance of artificial intelligence (AI), machine learning (ML), and robotics. Why have I decided to spend so much time on this issue? Because this will become the most important technological revolution of our lifetimes. Every segment of the economy will be touched in some fashion by AI, ML, robotics, and the power of computational science. It should be equally clear that public policy will be radically transformed along the way.

Eventually, all policy will involve AI policy and computational considerations. As AI "eats the world," it eats the world of public policy along with it. The stakes here are profound for individuals, economies, and nations. As a result, AI policy will be the most important technology policy fight of the next decade, and perhaps the next quarter century. Those who are passionate about the freedom to innovate need to prepare to meet the challenge as proposals to regulate AI proliferate.

There are many socio-technical concerns surrounding algorithmic systems that deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions. And that’s the case that I’ll be dedicating my life to making in coming years.

Here’s the list of what I’ve done so far. I will continue to update this as new material is released:

2025

2024

2023

2022

2021 (and earlier)

]]>
https://techliberation.com/2022/07/29/running-list-of-my-research-on-ai-ml-robotics-policy/feed/ 0 77020
Slide Presentation on “The Future of Innovation Policy” https://techliberation.com/2022/04/18/slide-presentation-on-the-future-of-innovation-policy/ https://techliberation.com/2022/04/18/slide-presentation-on-the-future-of-innovation-policy/#comments Mon, 18 Apr 2022 19:24:10 +0000 https://techliberation.com/?p=76968

Here's a slide presentation on "The Future of Innovation Policy" that I presented to some student groups recently. It builds on themes discussed in my recent books, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom and Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments. I specifically discuss the tension between permissionless innovation and the precautionary principle as competing policy defaults.

]]>
https://techliberation.com/2022/04/18/slide-presentation-on-the-future-of-innovation-policy/feed/ 1 76968
The Precautionary Principle: A Plea for Proportionality https://techliberation.com/2022/02/07/the-precautionary-principle-a-plea-for-proportionality/ https://techliberation.com/2022/02/07/the-precautionary-principle-a-plea-for-proportionality/#comments Mon, 07 Feb 2022 19:57:03 +0000 https://techliberation.com/?p=76949

Gabrielle Bauer, a Toronto-based medical writer, has just published one of the most concise explanations of what’s wrong with the precautionary principle that I have ever read. The precautionary principle, you will recall, generally refers to public policies that limit or even prohibit trial-and-error experimentation and risk-taking. Innovations are restricted until their creators can prove that they will not cause any harms or disruptions. In an essay for The New Atlantis entitled, “Danger: Caution Ahead,” Bauer uses the world’s recent experiences with COVID lockdowns as the backdrop for how society can sometimes take extreme caution too far, and create more serious dangers in the process. “The phrase ‘abundance of caution’ captures the precautionary principle in a more literary way,” Bauer notes. Indeed, another way to look at it is through the prism of the old saying, “better to be safe than sorry.” The problem, she correctly observes, is that, “extreme caution comes at a cost.” This is exactly right and it points to the profound trade-offs associated with precautionary principle thinking in practice.

In my own writing about the problems associated with the precautionary principle (see the list of essays at bottom), I often like to paraphrase an ancient nugget of wisdom from St. Thomas Aquinas, who once noted in his Summa Theologica that, if the highest aim of a captain were merely to preserve their ship, then they would simply keep it in port forever. Of course, that is not the only goal a captain has. The safety of the vessel and the crew is essential, but captains brave the high seas because there are good reasons to take such risks. Most obviously, it might be how they make their living. But historically, captains have also taken to the seas as pioneering explorers, researchers, or even just thrill-seekers.

This was equally true when humans first decided to take to the air in balloons, blimps, airplanes, and rockets. A strict application of the precautionary principle would have instead told us we should keep our feet on the ground. Better to be safe than sorry! Thankfully, many brave souls ignored that advice and took to the heavens in the spirit of exploration and adventure. As Wilbur Wright once famously said, "If you are looking for perfect safety, you would do well to sit on a fence and watch the birds." Needless to say, humans would have never mastered the skies if the Wright brothers (and many others) had not gotten off the fence and taken the risks they did.

Opportunity Costs Matter

Here we get to the true danger of strict versions of the precautionary principle: It essentially becomes a crime to get off the fence and do anything risky at all. This sets up the potential for stasis and stagnation as societal learning is severely curtailed. Progress becomes harder because there can be no reward, individual or societal, without some risk. "Caution makes sense except when it doesn't," Bauer notes. She continues:

Used too liberally, the precautionary principle can keep us stuck in a state of extreme risk-aversion, leading to cumbersome policies that weigh down our lives. To get to the good parts of life, we need to accept some risk.

As I argued in a book on these issues, the root problem with precautionary principle thinking is that “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.” If societal attitudes and public policy will not tolerate the idea of any error resulting from experimentation with new and better ways of doing things, then we will obviously not get many new and better things! Scientist Martin Rees refers to this truism about the precautionary principle as “the hidden cost of saying no.”

The opportunity cost of inaction or stasis can be hard to quantify, but imagine if we organized our entire society around a rigid application of the precautionary principle. Bauer notes that this is basically what we did during COVID. And the results are in. "It's far past time we ask ourselves when abundance really means excess, when our precautionary measures against Covid have gone too far, when we have ignored the costs and lost all sense of proportionality." Unfortunately, the precautionary mindset, which is always rooted in fear of the unknown, took control. As Bauer notes:

It should have been socially acceptable to debate the merits of these tradeoffs, with nuance and without censure. But that is not what happened. Early in the pandemic, an unspoken rule — thou shalt not question the costs — sprang up and stifled discourse.

“And here’s the worst of it: the costs of excess caution can persist long after the initial danger has passed,” she notes. “It’s no different with Covid: our knee-jerk caution may have downstream effects that persist after the virus has ceased to be a threat.” She cites many compelling examples of the negative effects associated with extreme precautionary thinking during COVID, noting how, “[t]he impact of travel and trade restrictions on food security and childhood vaccination in developing countries will likely reverberate for decades.” Moreover:

The Covid-19 pandemic has laid bare the risks of extreme protection: lost businesses, lost livelihoods, lost graduations, lost loves, lost goodbyes; the loss of personal agency over life’s most intimate and meaningful moments; the loss, quite possibly, of our cherished principles of liberal democracy. A recent report by International IDEA, a democracy advocacy organization, concluded that many countries had become more authoritarian as they took steps to contain the pandemic.

This list of lockdown trade-offs goes on and the aggregate costs will be staggering once economists and others get around to better estimating them. As noted, gauging those costs will be challenging because of the many variables and values that come into play. But it remains vital that society takes risk analysis and trade-offs more seriously so that we don’t make these mistakes again and again.

Proportionality is the Key

Toward that end, Bauer makes “a plea for proportionality.” She wants society to strike a more reasonable balance when it comes to policy measures that might block actions and research that could help us better understand how to deal with risk uncertainties. Accordingly, “we must understand when to apply the precautionary principle and when to move on from it.”

“The precautionary principle doesn’t come with such checks and balances. On the contrary, it tends to perpetuate itself and acquire a life of its own,” she notes. In other words, once set in place initially for a given issue or sector, precautionary principle thinking tends to grow like bad weeds until it has taken over everything in sight. (To see the consequences of that in fields like aviation, space, nanotech, and others, please check out J. Storrs Hall’s amazing new book, Where Is My Flying Car?)

Of course, proportionality cuts both ways. As I noted in my last two books, there are some instances in which at least a light version of the precautionary principle should be preemptively applied, but they are limited to scenarios where the threat in question is tangible, immediate, irreversible, and catastrophic in nature. In such cases, I argue, society might be better served by thinking about when an "anti-catastrophe principle" is needed, which narrows the scope of the precautionary principle and focuses it more appropriately on the most unambiguously worst-case scenarios that meet those criteria. Generally speaking, however, this test is not satisfied in the vast majority of cases. "Innovation Allowed" should be our default principle.

Conclusion

The single most important thing that we must always remember when debating precautionary principle-based policies is that, just because someone has good intentions and claims safety as their goal, that does not automatically make the world a safer place. To repeat: Excessive safety-related measures can result in less safety overall. Or again, as Bauer says, "extreme caution comes at a cost."

No one ever summarized this truism more clearly than the great political scientist Aaron Wildavsky, who devoted much of his life's work to proving how efforts to create a risk-free society would instead lead to an extremely unsafe society. In his 1988 book, Searching for Safety, Wildavsky warned of the dangers of "trial without error" reasoning, and contrasted it with the trial-and-error method of evaluating risk and seeking wise solutions to it. He argued that wisdom is born of experience and that we can learn how to be wealthier and healthier, as individuals and as a society, only by first being willing to embrace uncertainty and even occasional failure. Here was the crucial takeaway:

The direct implication of trial without error is obvious: If you can do nothing without knowing first how it will turn out, you cannot do anything at all. An indirect implication of trial without error is that if trying new things is made more costly, there will be fewer departures from past practice; this very lack of change may itself be dangerous in forgoing chances to reduce existing hazards. . . . Existing hazards will continue to cause harm if we fail to reduce them by taking advantage of the opportunity to benefit from repeated trials.

Trial and error is the basis of all societal learning, and without it, humanity will be less safe and less prosperous over the long run. Gabrielle Bauer’s new essay captures that insight better than anything I’ve read since Wildavsky was writing about the dangers of the precautionary principle. I beg you to jump over to New Atlantis and read her entire article. It’s absolutely essential.


Additional reading from Adam Thierer on the precautionary principle

]]>
https://techliberation.com/2022/02/07/the-precautionary-principle-a-plea-for-proportionality/feed/ 5 76949
Remembering the ‘Japan Inc.’ Industrial Policy Scare of the 1980s & 1990s https://techliberation.com/2021/06/29/remembering-the-japan-inc-industrial-policy-scare-of-the-1980s-1990s/ https://techliberation.com/2021/06/29/remembering-the-japan-inc-industrial-policy-scare-of-the-1980s-1990s/#respond Tue, 29 Jun 2021 16:12:22 +0000 https://techliberation.com/?p=76892

Discourse magazine has just published my latest essay, "'Japan Inc.' and Other Tales of Industrial Policy Apocalypse." It is a short history of the hysteria surrounding the growth of Japan in the 1980s and early 1990s and its various industrial policy efforts. I begin by noting that, "American pundits and policymakers are today raising a litany of complaints about Chinese industrial policies, trade practices, industrial espionage and military expansion. Some of these concerns have merit. In each case, however, it is easy to find identical fears that were raised about Japan a generation ago." I then walk through many of the leading books, op-eds, movies, and other things from that past era to show how that was the case.

"Hysteria" is not too strong a word to use in this case. Many pundits and politicians were panicking about the rise of Japan economically and more specifically about the way Japan's Ministry of International Trade and Industry (MITI) was formulating industrial policy schemes for industrial sectors in which they hoped to make advances. This resulted in veritable "MITI mania" here in America. "U.S. officials and market analysts came to view MITI with a combination of reverence and revulsion, believing that it had concocted an industrial policy cocktail that was fueling Japan's success at the expense of American companies and interests," I note. Countless books and essays were being published with breathless titles and predictions. I go through dozens of them in my essay. Meanwhile, the debate in policy circles and on Capitol Hill even took on an ugly racial tinge, with some lawmakers calling the Japanese "leeches" and suggesting the U.S. should have dropped more atomic bombs on Japan during World War II. At one point, several members of Congress gathered on the lawn of the U.S. Capitol in 1987 to smash Japanese electronics with sledgehammers.

All this hysteria about Japan and MITI bore little semblance to reality. In fact, as I note in the essay, the MITI industrial planning model fell apart after it made a host of horrible bets and the stock market tanked in the late 1980s. Corruption also became a huge problem within many state-led efforts. A 2000 report by the Policy Research Institute within Japan's Ministry of Finance concluded that "the Japanese model was not the source of Japanese competitiveness but the cause of our failure." MITI was renamed the Ministry of Economy, Trade and Industry at about the same time, and its mission shifted more toward market-oriented reforms.

Industrial policy came to be viewed as a bit of a joke in America after that, but now it is back with a vengeance, thanks largely to the rise of Chinese economic power. Thus, because “we hear echoes from the Japan Inc. era debates in today’s policy discussions about China and industrial policy planning,” I end my essay with some lessons from the ‘Japan Inc.’ era for today’s industrial policy debates:

This similarity demonstrates the first lesson we can learn from the previous era: It is important to separate serious geopolitical and economic analysis from breathless fear-mongering and borderline xenophobia. The former has a serious place in policy discussions; the latter needs to be called out and shunned. After all, there are many legitimate worries about rising Chinese power, particularly when it involves Chinese Communist Party efforts to squash human rights domestically or to engage in industrial espionage, trade mercantilism and military adventurism abroad. Separating serious matters from trivial or imaginary ones is crucial, especially to help keep peace between nations. Avoiding hysteria is especially pertinent today with a wave of anti-Asian sentiment and attacks on the rise in the U.S.

A second lesson from the Japan Inc. experience relates to today’s renewed interest in industrial policy: Forecasting the future of nations and economies—and trying to plan for it—is a tricky business. A huge range of variables affects global competitiveness and technological advancement. A nonexhaustive list of some of the most important factors would include legal and political stability, physical and intellectual property rights, tax burdens, competition policy, trade and investment laws, monetary policy, research and development efforts, and even demographic factors and access to certain natural resources. Understanding how these and other factors all work together is an inexact science. When targeted industrial policy mechanisms are added to the mix, it becomes even harder to untangle which variables are making the most difference.

Both in the past and today, a less visible group of scholars has suggested that an embrace of entrepreneurialism and free trade was the fundamental factor driving Japanese economic expansion in the past and China’s amazing growth today. Openness to markets, they say, drove the enormous economic expansions—which also happened during times of much-needed catch-up modernization in both countries. But these perspectives have usually been shouted out of the room by louder voices, who either bombastically blast or praise industrial policy mechanisms as the prime mover in the economic rejuvenation of both nations.

We need to tamp down on the magical thinking that governments can easily achieve technological innovation and economic growth by simply spinning a few industrial policy gauges. A few big bets may pay off, but that doesn’t justify governments engaging in casino economics regularly. History more often shows that grandiose industrial policy schemes simply result in cost overruns, cronyism and even corruption.

I also conclude by noting that:

Perhaps the most ironic indictment of industrial policy punditry lies in the way all the earlier books and essays about Japanese planning not only failed to forecast the many flops associated with it, but also did not foresee China as a potential future economic juggernaut. Korea, Singapore and Taiwan were mentioned as potential Asian challengers, but no one gave China much consideration. What might that tell us about the ability of experts to predict the future course of countries and economies? It is a reminder of the wisdom of another great Yogi Berra quote: “It’s tough to make predictions, especially about the future.”

You can read the entire piece, as well as several others listed below, over at Discourse.


Recent writing on industrial policy:
]]>
https://techliberation.com/2021/06/29/remembering-the-japan-inc-industrial-policy-scare-of-the-1980s-1990s/feed/ 0 76892
The End of Permissionless Innovation? https://techliberation.com/2021/01/10/the-end-of-permissionless-innovation/ https://techliberation.com/2021/01/10/the-end-of-permissionless-innovation/#comments Sun, 10 Jan 2021 21:24:12 +0000 https://techliberation.com/?p=76823

Time magazine recently declared 2020 “The Worst Year Ever.” By historical standards that may be a bit of hyperbole. For America’s digital technology sector, however, that headline rings true. After a remarkable 25-year run that saw an explosion of innovation and the rapid ascent of a group of U.S. companies that became household names across the globe, politicians and pundits in 2020 declared the party over.

“We now are on the cusp of a new era of tech policy, one in which the policy catches up with the technology,” says Darrell M. West of the Brookings Institution in a recent essay, “The End of Permissionless Innovation.” West cites the House Judiciary Antitrust Subcommittee’s October report on competition in digital markets—where it equates large tech firms with the “oil barons and railroad tycoons” of the Gilded Age—as the clearest sign that politicization of the internet and digital technology is accelerating. It is hardly the only indication that America is set to abandon permissionless innovation and revisit the era of heavy-handed regulation for information and communication technology (ICT) markets.

Equally significant is the growing bipartisan crusade against Section 230, the provision of the 1996 Telecommunications Act that shields “interactive computer services” from liability for information posted or published on their systems by users. No single policy has been more important to the flourishing of online speech or commerce than Sec. 230 because, without it, online platforms would be overwhelmed by regulation and lawsuits. But now, long knives are coming out for the law, with plenty of politicians and academics calling for it to be gutted. Calls to reform or repeal Sec. 230 were once exclusively the province of left-leaning academics or policymakers, but this year it was conservatives in the White House, on Capitol Hill and at the Federal Communications Commission (FCC) who became the leading cheerleaders for scaling back or eliminating the law.
President Trump railed against Sec. 230 repeatedly on Twitter, and most recently vetoed the annual National Defense Authorization Act in part because Congress did not include a repeal of the law in the measure. Meanwhile, conservative lawmakers in Congress such as Sens. Josh Hawley and Ted Cruz have used subpoenas, angry letters and heated hearings to hammer digital tech executives about their content moderation practices. Allegations of anti-conservative bias have motivated many of these efforts. Even Supreme Court Justice Clarence Thomas questioned the law in a recent opinion.

Other proposed regulatory interventions include calls for new national privacy laws, an “Algorithmic Accountability Act” to regulate artificial intelligence technologies, and a growing variety of industrial policy measures that would open the door to widespread meddling with various tech sectors. Some officials in the Trump administration even pushed for a nationalized 5G communications network in the name of competing with China.

This growing “techlash” signals a bipartisan “Back to the Future” moment, with the possibility of the U.S. reviving a regulatory playbook that many believed had been discarded in history’s dustbin. Although plenty of politicians and pundits are taking victory laps and giving each other high-fives over the impending end of the permissionless innovation era, it is worth considering what America will be losing if we once again apply old top-down, permission slip-oriented policies to the technology sector.

Permissionless Innovation: The Basics

As an engineering principle, permissionless innovation represents the general freedom to tinker and develop new ideas and products in a relatively unconstrained fashion. As I noted in a recent book on the topic, permissionless innovation can also describe a governance disposition or regulatory default toward entrepreneurial activities. In this sense, permissionless innovation refers to the idea that experimentation with new technologies and innovations should generally be permitted by default and that prior restraints on creative activities should be avoided except in those cases where clear and immediate harm is evident.

There is an obvious relationship between the narrow and broad definitions of permissionless innovation. When governments lean toward permissionless innovation as a policy default, it is likely to encourage freewheeling experimentation more generally. But permissionless innovation can sometimes occur in the wild, even when public policy instead tends toward its antithesis—the precautionary principle. As I noted in my latest book, tinkerers and innovators sometimes behave evasively and act to make permissionless innovation a reality even when public policy discourages it through precautionary restraints.

To be clear, permissionless innovation as a policy default has not meant anarchy. Quite the opposite, in fact. In the United States, over the past 25 years, no major federal agencies or laws that regulate technology were eliminated. Indeed, most agencies grew bigger. But in spite of this, entrepreneurs during this period got more green lights than red ones, and innovation was treated as innocent until proven guilty. This is how and why social media and the sharing economy developed and prospered here and not in other countries, where layers of permission slips prevented such innovations from ever getting off the drawing board.

The question now is, how will the shift to end permissionless innovation as a policy default in the U.S. affect innovative activity here more generally? Economic historians Deirdre McCloskey and Joel Mokyr teach us that societal and political attitudes toward growth, risk-taking and entrepreneurialism have a powerful connection with the competitive standing of nations and the possibility of long-term prosperity. If America’s innovation culture sours on the idea of permissionless-ness and moves toward a precautionary principle-based model, creative minds will find it harder to experiment with bold new ideas that could help enrich the nation and improve the well-being of the citizenry—which is exactly why America discarded its old top-down regulatory model in the first place.

Why America Junked the Old Model

Perhaps the easiest way to put some rough bookends on the beginning and end of America’s permissionless innovation era is to date it to the birth and impending death of Sec. 230 itself. The enactment in 1996 of the Telecommunications Act was important not only because it included Sec. 230, but also because the law created a sort of policy firewall between the old and new worlds of ICT regulation.

The old ICT regime was rooted in a complex maze of federal, state and local regulatory permission slips. If you wanted to do anything truly innovative in the old days, you typically needed to get some regulator’s blessing first—sometimes multiple blessings. The exception was the print sector, which enjoyed robust First Amendment protection from the time of the nation’s founding. Newspapers, magazines and book publishers were left largely free of prior restraints regarding what they published or how they innovated. The electronic media of the 20th century were not so lucky. Telephony, radio, television, cable, satellite and other technologies were quickly encumbered with a crazy quilt of federal and state regulations. Those restraints included price controls, entry restrictions, speech restrictions and endless agency threats.

ICT policy started turning the corner in the late 1980s after the old regulatory model failed to achieve its mission of more choice, higher quality and lower prices for media and communications. Almost everyone accepted that change was needed, and it came fast. The 1990s became a whirlwind of policy and technological change. In the mid-1990s, the Clinton administration decided to allow open commercialization of the internet, which, until then, had mostly been a plaything for government agencies and university researchers. But it was the enactment of the 1996 telecommunications law that sealed the deal. Not only did the new law largely avoid regulating the internet like analog-era ICT, but, more importantly, it included Sec. 230, which helped ensure that future regulators or overzealous tort lawyers would not undermine this wonderful new resource. A year later, the Clinton administration put a cherry on top with the release of its Framework for Global Electronic Commerce. This bold policy statement announced a clean break from the past, arguing that “the private sector should lead [and] the internet should develop as a market-driven arena, not a regulated industry.” Permissionless innovation had become the foundation of American tech policy.

The Results

Ideas have consequences, as they say, and that includes ramifications for domestic business formation and global competitiveness. While the U.S. was allowing the private sector to largely determine the shape of the internet, Europe was embarking on a very different policy path, one that would hobble its tech sector.

America’s more flexible policy ecosystem proved to be fertile ground for digital startups. Consider the rise of “unicorns,” shorthand for companies valued at $1 billion or more. “In terms of the global distribution of startup success,” notes the State of the Venture Capital Industry in 2019, “the number of private unicorns has grown from an initial list of 82 in 2015 to 356 in Q2 2019,” and fully half of them are U.S.-based.

The United States is also home to the most innovative tech firms. Over the past decade, Strategy& (PricewaterhouseCoopers’ strategy consulting business) has compiled a list of the world’s most innovative companies, based on R&D efforts and revenue. Each year that list is dominated by American tech companies. In 2013, 9 of the top 10 most innovative companies were based in the U.S., and most of them were involved in computing, software and digital technology. Global competition is intensifying, but in the most recent 2018 list, 15 of the top 25 companies are still U.S.-based giants, with Amazon, Google, Intel, Microsoft, Apple, Facebook, Oracle and Cisco leading the way. Meanwhile, European digital tech companies cannot be found on any such list. While America’s tech companies are household names across the European continent, most people struggle to name a single digital innovator headquartered in the EU. Permissionless innovation crushed the precautionary principle in the trans-Atlantic policy wars.

European policymakers have responded to the continent’s digital stagnation by doubling down on their aggressive regulatory efforts. The EU closed out 2020 with two comprehensive new measures (the Digital Services Act and the Digital Markets Act), while the U.K. simultaneously pursued a new “online harms” law. Taken together, these proposals represent “the biggest potential expansion of global tech regulation in years,” according to The Wall Street Journal. The measures will greatly expand extraterritorial control over American tech companies. Having decimated their domestic technology base and driven away innovators and investors, EU officials are now resorting to plugging budget shortfalls with future antitrust fines on U.S.-based tech companies. It has essentially been a lost quarter century for Europe on the information technology front, and now American companies are expected to pay for it.

Republicans Revive ‘Regulation-By-Raised-Eyebrow’

In light of the failure of Europe’s precautionary principle-based policy paradigm, and considering the threat now posed by the growing importance of various Chinese tech companies, one might think U.S. policymakers would be celebrating the competitive advantages created by a quarter century of American tech dominance and contemplating how to apply this winning vision to other sectors of the economy. Alas, despite permissionless innovation’s amazing run, business and political leaders are now turning against it as America’s policy lodestar. What is most surprising is how this reversal is now being championed by conservative Republicans, who traditionally support deregulation.

President Trump also called for tightening the screws on Big Tech. For example, in a May 2020 Executive Order on “Preventing Online Censorship,” he accused online platforms of “selective censorship that is harming our national discourse” and suggested that “these platforms function in many ways as a 21st century equivalent of the public square.” Trump and his supporters put Google, Facebook, Twitter and Amazon in their crosshairs, accusing them of discriminating against conservative viewpoints or values.

The irony here is that no politician owes more to modern social media platforms than Donald Trump, who effectively used them to communicate his ideas directly to the American people. Moreover, conservative pundits now enjoy unparalleled opportunity to get their views out to the wider world thanks to all the digital soapboxes they can now stand on. YouTube and Twitter are chock-full of conservative punditry, and the daily list of top 10 search terms on Facebook is consistently dominated by conservative voices, a space where “the right wing has a massive advantage,” according to Politico. Nonetheless, conservatives insist they still don’t get a fair shake from the cornucopia of new communications platforms that earlier generations of conservatives could only have dreamed of having at their disposal. They think the deck is stacked against them by Silicon Valley liberals.

This growing backlash culminated in a remarkable Senate Commerce Committee hearing on Oct. 28 in which congressional Republicans hounded tech CEOs, called for more favorable treatment of conservatives, and threatened social media companies with regulation if conservative content was taken down. Liberal lawmakers, by contrast, uniformly demanded the companies do more to remove content they felt was harmful or deceptive in some fashion. In many cases, lawmakers on both sides of the aisle were talking about the exact same content, putting the companies in the impossible position of having to devise a Goldilocks formula that gets the content balance just right, even though it would be impossible to make both sides happy.

In the broadcast era, this sort of political harassment was known as the “regulation-by-raised-eyebrow” approach, which allowed officials to get around First Amendment limitations on government content control. Congressional lawmakers and regulators at the FCC would set up show trial hearings and use political intimidation to gain programming concessions from licensed radio and television operators. These shakedown tactics didn’t always work, but they often resulted in forms of soft censorship, with media outlets editing content to make politicians happy.

The same dynamic is at work today. Thus, when a firebrand politician like Sen. Josh Hawley suggests “we’d be better off if Facebook disappeared,” or when Sohrab Ahmari, the conservative op-ed editor at the New York Post, calls for the nationalization of Twitter, they likely understand these extreme proposals won’t happen. But such jawboning represents an easy way to whip up your base while also indirectly putting intense pressure on companies to tweak their policies. Make us happy, or else! It is not always clear what that “or else” entails, but the accumulated threats probably have some effect on content decisions made by these firms.

Whether all this means that Sec. 230 gets scrapped or not shouldn’t distract from the more pertinent fact: few on the political right are preaching the gospel of permissionless innovation anymore. Even tech companies and Silicon Valley-backed organizations now actively distance themselves from the term. Zachary Graves, head of policy at Lincoln Network, a tech advocacy organization, worries that permissionless innovation is little more than a “legitimizing facade for anarcho-capitalists, tech bros, and cynical corporate flacks.” He lines up with the growing cast of commentators on both the left and right who endorse a “Tech New Deal” without getting concrete about what that means in practice. What it likely means is a return to a well-worn regulatory playbook of the past that resulted in innovation stagnation and crony capitalism.

A More Political Future

Indeed, as was the case during past eras of permission slip-based policy, our new regulatory era will be a great boon to the largest tech companies. Many people advocate greater regulation in the name of promoting competition, choice, quality and lower prices. But merely because someone proclaims that they are looking to serve the public interest doesn’t mean the regulatory policies they implement will achieve those well-intentioned goals. The means to the end—new rules, regulations and bureaucracies—are messy, imprecise and often counterproductive.

Fifty years ago, the Nobel prize-winning economist George Stigler taught us that, “as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefits.” In other words, new regulations often help to entrench existing players rather than fostering greater competition. Countless experts since then have documented the problem of regulatory capture in various contexts.

If the past is prologue, we can expect many large tech firms to openly embrace regulation as they come to see it as a useful way of preserving market share and fending off pesky new rivals, most of whom will not be able to shoulder the compliance burdens and liability threats associated with permission slip-based regulatory regimes. True to form, in recent congressional hearings, Facebook head Mark Zuckerberg called on lawmakers to begin regulating social media markets. The company then rolled out a slick new website and advertising campaign inviting new rules on various matters. It is always easy for the king of the hill to call for more regulation when that hill is a mound of red tape of the company’s own making, one that few others can ascend. It is a lesson we should have learned in the AT&T era, when a decidedly unnatural monopoly was formed through a partnership between company officials and the government.

Image Credit: Infrogmation/Wikimedia Commons

Many independent telephone companies existed across America before AT&T’s leaders cut sweetheart deals with policymakers that tilted the playing field in its favor and undermined competition. With rivals hobbled by entry restrictions and other rules, Ma Bell went on to enjoy more than a half century of stable market share and guaranteed rates of return.

Consumers, by contrast, were expected to be content with plain-vanilla telephone services that barely changed. Some of us are old enough to remember when the biggest “innovation” in telephony involved the move from rotary-dial phones to the push-button Princess phone, which, we were thrilled to discover, came in multiple colors and had a longer cord.

In a similar way, the impending close of the permissionless innovation era signals the twilight of technological creative destruction and its replacement by a new regime of political favor-seeking and logrolling, which could lead to innovation stagnation. The CEOs of the remaining large tech companies will be expected to make regular visits to the halls of Congress and regulatory agencies (and to all those fundraising parties, too) to get their marching orders, just as large telecom and broadcaster players did in the past. We will revert to the old historical trajectory, which saw communications and media companies securing marketplace advantages more through political machinations than marketplace merit.

Will Politics Really Catch Up?

While permissionless innovation may be falling out of favor with elites, America’s entrepreneurial spirit will be hard to snuff out, even when layers of red tape make it riskier to be creative. If nothing else, permissionless innovation still has a fighting chance so long as Congress struggles to enact comprehensive technology measures. General legislative dysfunction and profound technological ignorance are two reasons that Congress has largely become a non-actor on tech policy in recent years. But the primary limitation on legislative meddling is the so-called pacing problem, which refers to the way technological innovation often outpaces the ability of laws and regulations to keep up. “I have said more than once that innovation moves at the speed of imagination and that government has traditionally moved at, well, the speed of government,” observed former Federal Aviation Administration head Michael Huerta in a 2016 speech.

DNA sequencing machine. Image Credit: Assembly/Getty Images

The same factors that drove the rise of the internet revolution—digitization, miniaturization, ubiquitous mobile connectivity and constantly increasing processing power—are spreading to many other sectors and challenging precautionary policies in the process. For example, just as “Moore’s Law” relentlessly powers the pace of change in ICT sectors, the “Carlson curve” now fuels genetic innovation. The curve refers to the fact that, over the past two decades, the cost of sequencing a human genome has plummeted from over $100 million to under $1,000, a rate nearly three times faster than Moore’s Law.

Speed isn’t the only factor driving the pacing problem. Policymakers also struggle with metaphysical considerations about how to define the things they seek to regulate. It used to be easy to agree what a phone, television or medical tracking device was for regulatory purposes. But what do those terms really mean in the age of the smartphone, which incorporates all of them and much more? “‘Tech’ is a very diverse, widely-spread industry that touches on all sorts of different issues,” notes tech analyst Benedict Evans. “These issues generally need detailed analysis to understand, and they tend to change in months, not decades.” This makes regulating the industry significantly more challenging than it was in the past.

It doesn’t mean the end of regulation—especially for sectors already encumbered by many layers of preexisting rules. But these new realities lead to a more interesting game of regulatory whack-a-mole: pushing down technological innovation in one way often means it simply pops up somewhere else. The continued rapid growth of what some call “the new technologies of freedom”—artificial intelligence, blockchain, the Internet of Things, etc.—should give us some reasons for optimism. It’s hard to put these genies back in their bottles now that they’re out.

This is even more true thanks to the growth of innovation arbitrage—both globally and domestically. Creators and capital now move fluidly across borders in pursuit of more hospitable innovation and investment climates. Recently, some high-profile tech CEOs like Elon Musk and Joe Lonsdale have relocated from California to Texas, citing tax and regulatory burdens as key factors in their decisions. Oracle, America’s second-largest software company, also just announced it is moving its corporate headquarters from Silicon Valley to Austin, just over a week after Hewlett Packard Enterprise said it too is moving its headquarters from California to Texas—in this case, Houston. “Voting with your feet” might actually still mean something, especially when it is major tech companies and venture capitalists abandoning high-tax, over-regulated jurisdictions.
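The pace implied by those Carlson curve figures can be made concrete with a quick back-of-the-envelope calculation. The sketch below uses the round endpoints cited above ($100 million down to $1,000 over two decades) as illustrative inputs, not exact sequencing-cost data:

```python
import math

# Illustrative endpoints from the rough figures cited above (not exact data):
# sequencing a human genome cost over $100 million two decades ago
# and under $1,000 today.
cost_start, cost_end = 100_000_000, 1_000
years = 20

halvings = math.log2(cost_start / cost_end)   # how many times the cost halved
halving_months = years * 12 / halvings        # average months per halving

print(f"cost fell by a factor of {cost_start // cost_end:,}")      # 100,000
print(f"~{halvings:.1f} halvings, one every ~{halving_months:.1f} months")
# -> ~16.6 halvings, roughly one every 14.4 months on average.
# For reference, Moore's Law is usually quoted as a doubling every 18-24 months.
# The real sequencing-cost curve is not this smooth: it fell far faster than
# this 20-year average during the shift to next-generation sequencing.
```

The point of the arithmetic is simply that a five-orders-of-magnitude price decline compresses into a halving time measured in months, which is the kind of speed that leaves slow-moving regulatory definitions behind.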

Advocacy Remains Essential

But we shouldn’t imagine that technological change is inevitable or fall into the trap of thinking of it as a sort of liberation theology that will magically free us from repressive government controls. Policy advocacy still matters. Innovation defenders will need to continue to push back against the most burdensome precautionary policies, while also promoting reforms that protect entrepreneurial endeavors.

The courts offer us great hope. Groups like the Institute for Justice, the Goldwater Institute, the Pacific Legal Foundation and others continue to litigate successfully in defense of the freedom to innovate. While the best we can hope for in the legislative arena may be perpetual stalemate, these and other public interest law firms are netting major victories in courtrooms across America.

Sometimes court victories force positive legislative changes, too. For example, in 2015, the Supreme Court handed down North Carolina State Board of Dental Examiners v. Federal Trade Commission, which held that local government cannot claim broad immunity from federal antitrust laws when it delegates power to nongovernmental bodies, such as licensing boards. This decision made much-needed occupational licensing reform an agenda item across America. Many states introduced or adopted bipartisan legislation aimed at reforming or sunsetting occupational licensing rules that undermine entrepreneurship.

Even more exciting are proposals that would protect citizens’ “right to earn a living.” This right would allow individuals to bring suit if they believe a regulatory scheme or decision has unnecessarily infringed upon their ability to earn a living within a legally permissible line of work. Meanwhile, there have been ongoing state efforts to advance “right to try” legislation that would expand medical treatment options for Americans tired of overly paternalistic health regulations.

Perhaps, then, it is too early to close the book on the permissionless innovation era. While dark political clouds loom over America’s technological landscape, there are still reasons to believe the entrepreneurial spirit can prevail.
]]>
https://techliberation.com/2021/01/10/the-end-of-permissionless-innovation/feed/ 2 76823
Existential Risk & Emerging Technology Governance https://techliberation.com/2020/08/05/existential-risk-emerging-technology-governance/ https://techliberation.com/2020/08/05/existential-risk-emerging-technology-governance/#comments Wed, 05 Aug 2020 16:51:39 +0000 https://techliberation.com/?p=76795

“The world should think better about catastrophic and existential risks.” So says a new feature essay in The Economist. Indeed it should, and that includes existential risks associated with emerging technologies.

The primary focus of my research these days revolves around broad-based governance trends for emerging technologies. In particular, I have spent the last few years attempting to better understand how and why “soft law” techniques have been tapped to fill governance gaps. As I noted in a recent post compiling my writing on the topic:

soft law refers to informal, collaborative, and constantly evolving governance mechanisms that differ from hard law in that they lack the same degree of enforceability. Soft law builds upon and operates in the shadow of hard law. But soft law lacks the same degree of formality that hard law possesses. Despite many shortcomings and criticisms, compared with hard law, soft law can be more rapidly and flexibly adapted to suit new circumstances and address complex technological governance challenges. This is why many regulatory agencies are tapping soft law methods to address shortcomings in the traditional hard law governance systems.

As I argued in recent law review articles as well as in my latest book, I believe that, despite its imperfections, soft law has an important role to play in filling governance gaps that hard law struggles to address. But there are some instances where soft law simply will not cut it. As I noted in Chapter 7 of my new book, there may be very legitimate existential threats out there that we should be spending more time addressing because the scope, severity, and probability of severe risk are all present. Hard law solutions will still be needed in such instances, even if they may be challenged by many of the same factors that are fueling the shift toward soft law for other sectors or issues.

Of course, we are immediately confronted with a definitional challenge: What exactly counts as an “existential risk”? I argue that it is important that we spend more time discussing this question because far too many people today throw around the term “existential risk” when referencing risks that are nothing of the sort. For example, increased social media use may indeed be a threat to data security and personal privacy, but those risks are not “existential” in the same way chemical or nuclear weapons proliferation are threats to our existence. This gets to the heart of the matter: the root of “existential” is existence. By definition, an existential risk needs to have some direct bearing on the future of humanity’s ability to survive. Efforts to conflate lesser risks into existential ones cheapen the very meaning of the term.

This shouldn’t be controversial, but somehow it is. Countless pundits today want to suggest that almost every new technological development might somehow pose an existential threat to humanity. But it just isn’t the case. That does not mean their concerns are not important, or potentially deserving of some government attention. It simply means that we need to take risk prioritization more seriously. If everything is an existential risk, then nothing is an existential risk. We must have some sort of ranking of risks if we hope to have a rational conversation about how to use scarce societal resources to address matters of public concern.

These issues are discussed at far greater length in the sections of my book (pgs. 228-240) that you will find embedded down below. How should society deal with “killer robots” or the accelerated development of genetic editing capabilities? What kind of coordinated compliance regime might help address rogue actors who seek to use new technological capabilities for nefarious purposes? What can we learn from past global enforcement efforts for chemical and nuclear weapons? These are just some of the questions I take on in this section of the book and plan to spend more time addressing in coming years. Scan these pages from the book to see my initial thoughts on these matters. But I am really just scratching the surface here. I’ll have much more to say on these matters in coming months and years. It’s a massively complicated topic.

]]>
https://techliberation.com/2020/08/05/existential-risk-emerging-technology-governance/feed/ 2 76795
Some Recent Essays on the Importance of Innovation & the Fight over Technological Progress https://techliberation.com/2020/07/28/some-recent-essays-on-the-importance-of-innovation-the-fight-over-technological-progress/ https://techliberation.com/2020/07/28/some-recent-essays-on-the-importance-of-innovation-the-fight-over-technological-progress/#comments Tue, 28 Jul 2020 15:35:34 +0000 https://techliberation.com/?p=76778

[Updated: March 2022]

I was speaking at a conference recently and discussing my life’s work, which for 30 years has been focused on the importance of innovation and the intellectual battles over what we mean by progress. I put together a short list of some things I have written over the last few years on this topic and thought I would re-post them here. I will try to keep this regularly updated, at least for a few years.

UNDERSTANDING THE CHALLENGE WE FACE:

HOW WE MUST RESPOND = “Rational Optimism” / Right to Earn a Living / Permissionless Innovation

ADDITIONAL READING:

NEW BOOK (tying together all the essays and papers listed above):

 

]]>
https://techliberation.com/2020/07/28/some-recent-essays-on-the-importance-of-innovation-the-fight-over-technological-progress/feed/ 1 76778
Matt Ridley on the Freedom to Experiment and Try New Things https://techliberation.com/2020/05/17/matt-ridley-on-the-freedom-to-experiment-and-try-new-things/ https://techliberation.com/2020/05/17/matt-ridley-on-the-freedom-to-experiment-and-try-new-things/#respond Sun, 17 May 2020 18:35:34 +0000 https://techliberation.com/?p=76732

There are few things more exciting to innovation policy geeks than the week a new Matt Ridley book drops. Thankfully, that time is upon us once again. This week, Ridley’s latest book, How Innovation Works: And Why It Flourishes in Freedom, is being released. I can’t wait to dig in.

This weekend, the Wall Street Journal published an essay condensed from the book entitled, “Innovation Can’t Be Forced, but It Can Be Quashed.” Here are some of the highlights from Ridley’s piece:

Innovation relies upon freedom to experiment and try new things, which requires sensible regulation that is permissive, encouraging and quick to give decisions. By far the surest way to rediscover rapid economic growth when the pandemic is over will be to study the regulatory delays and hurdles that have now been hastily swept aside to help innovators in medical devices and therapies, and to see whether such reforms could be applied to other parts of the economy too. … Dealing with Covid-19 has forcibly reminded governments of the value of innovation. But if we are to get faster vaccines and treatments—and better still, more innovation across all fields in the future—then innovators need to be freed from the shackles that hold them back.

These are crucial points, and ones I discuss in the launch essay and the afterword of my new book, Evasive Entrepreneurs and the Future of Governance. Alas, as I pointed out in that launch essay and in my last book on Permissionless Innovation, a great many barriers stand in the way of the freedom to experiment and try new things. As Ridley points out:

There is nothing new about resistance to innovation. […] Incumbent vested interests, overcautious regulators, opportunistic activists and rent-seeking patent holders combine to oppose or delay almost every innovation.

And that’s a real shame because, Ridley correctly concludes, “It turns out that continuous tinkering to develop and refine a better product is much more important than protecting what you’ve already created.”

Spot on. Head over to the Wall Street Journal to read the entire thing and then go order a copy of Ridley’s new book. He’s one of the most important living defenders of technological innovation and human progress. His work has had a huge influence on my way of thinking about innovation, science, and technology. Thank you, Matt!

 

 

]]>
https://techliberation.com/2020/05/17/matt-ridley-on-the-freedom-to-experiment-and-try-new-things/feed/ 0 76732
Panicking About 5G is a Celebrity Trend You Shouldn’t Follow https://techliberation.com/2020/05/13/panicking-about-5g-is-a-celebrity-trend-you-shouldnt-follow/ https://techliberation.com/2020/05/13/panicking-about-5g-is-a-celebrity-trend-you-shouldnt-follow/#respond Wed, 13 May 2020 14:00:03 +0000 https://techliberation.com/?p=76728

The COVID-19 pandemic has shown how important technology is for enabling social distancing measures while staying connected to friends, family, school, and work. But for some, including a number of celebrities, it has also heightened fears of emerging technologies that could further improve our connectivity. The latest technopanic should not make us fear technology that has added so much to our lives and that promises to help us even more.

Celebrities such as Keri Hilson, John Cusack, and Woody Harrelson have repeated concerns about 5G—from claims that it weakens our immune systems to claims that it caused this pandemic. These claims about 5G have gotten serious enough that Google banned ads with misleading health information regarding 5G, and Twitter has stated it will remove tweets with 5G and health misinformation that could potentially cause harm in light of the COVID-19 pandemic. 5G is not causing the current pandemic, nor has it been linked to other health concerns. As the director of the American Public Health Association, Dr. Georges C. Benjamin, has stated, “COVID-19 is caused by a virus that came through a natural animal source and has no relation to 5G, or any radiation linked to technology.” As the New York Times has pointed out, many of the non-COVID-19 health concerns about 5G originated from the Russian propaganda news source RT or trace back to a single decades-old flawed study. In short, there is no evidence to support many of the outrageous health claims regarding 5G.

New technologies have often faced unfounded concerns about their potential risks. In the late 19th and early 20th centuries, many people feared electricity in the home was making people tired and weak (similar to the health claims about 5G today). More recently, many were concerned that technologies such as microwave ovens and cell phones might cause cancer or other health issues, but studies have shown that these worst fears have little grounding in science.

Some of these fears are based on misunderstandings of how technology works or confusion over similar but distinct technologies. For example, in the case of concerns about cell phones and cancer, the fears may stem from misunderstandings about the differences between ionizing and non-ionizing radiation. In a time of uncertainty, we may want to rush to maintain the status quo. But any number of innovations such as the radio, trains, or cars that were once feared have themselves become part of the status quo.

Why does it matter if some people are afraid of new technologies? While it is completely rational to want to avoid catastrophic and irreversible harms, unfounded fears risk delaying important and beneficial technologies. For example, work by Linda Simon suggests that exaggerated claims and fears about electricity’s impact on health may have slowed its adoption. While all technologies carry some risks, can we imagine all that might have been lost if we had listened to those trying to convince us to avoid electricity out of an abundance of caution? And while we may laugh about fears of electricity and the failure to understand its benefits, we still see extreme reactions driven by fear of new technology, such as recent attempts to burn 5G towers in the United Kingdom because of misinformation about the health risks.

The recent pandemic should remind us why constantly improving connectivity and internet infrastructure has been beneficial. As more of us are working from home and have an increased number of connected devices, 5G will increase network capacity and enable faster download speeds. These improvements also play a key role in the development of a number of emerging technologies, from smart home devices and virtual reality to driverless cars and remote surgery.

The problem is not in individual choices to avoid a specific technology, but rather in how such technopanics can impact broader adoption of beneficial technologies and innovation-friendly public policies. The good news is that policymakers recognize the importance of policies that enable 5G and are also informing the public on the facts about wireless technology and health. During the COVID-19 pandemic, the Federal Communications Commission has continued to pursue policies that can improve connectivity, including advancements toward 5G.

While we may want to follow celebrity trends when it comes to the latest fashion or TikTok dances, we should only let celebrities scare us in the movies, not when it comes to 5G. If we focus only on the most outrageous and unfounded claims, our fear might distract us from seeing 5G’s benefits.

]]>
https://techliberation.com/2020/05/13/panicking-about-5g-is-a-celebrity-trend-you-shouldnt-follow/feed/ 0 76728
Introductory Chapter: “Evasive Entrepreneurs and the Future of Governance” https://techliberation.com/2020/05/11/introductory-chapter-evasive-entrepreneurs-and-the-future-of-governance/ https://techliberation.com/2020/05/11/introductory-chapter-evasive-entrepreneurs-and-the-future-of-governance/#comments Mon, 11 May 2020 21:01:05 +0000 https://techliberation.com/?p=76726

I’m making the opening chapter of my new book, Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments, available here. Also here’s the launch essay and the event launch video, which discuss how the themes discussed throughout the book have become even more visible during the coronavirus crisis.

Also, here are some lists of 10 major themes from the book, 13 key terms found in the book, and 5 innovation policy scholars who inspired my thinking. Reminder: this book is a sequel to my previous book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.

I hope you will consider buying Evasive Entrepreneurs after reading this opening chapter.

]]>
https://techliberation.com/2020/05/11/introductory-chapter-evasive-entrepreneurs-and-the-future-of-governance/feed/ 1 76726
“Human Needs Are Breaking Down Yesterday’s Precautionary Approaches” https://techliberation.com/2020/05/06/human-needs-are-breaking-down-yesterdays-precautionary-approaches/ https://techliberation.com/2020/05/06/human-needs-are-breaking-down-yesterdays-precautionary-approaches/#respond Wed, 06 May 2020 18:02:15 +0000 https://techliberation.com/?p=76709

I really liked this new essay, “Innovation is thriving in the fight against Covid-19,” by Norman Lewis over at Spiked, a UK-based publication. In it, he makes several important points similar to themes discussed in my book launch essay last week (“Evasive Entrepreneurialism and Technological Civil Disobedience in the Midst of a Pandemic.”) Lewis begins by noting that:

There is nothing like a crisis to concentrate the mind. And the Covid-19 catastrophe has certainly done this. It has speeded up latent trends and posed new questions. The issue of our technologically informed capacity to solve problems is just one example.

He continues on to argue:

a crisis like Covid-19 will necessarily pose new urgent questions that could not have been anticipated. New initiatives will rise to meet these. Pre-existing skills, knowledge, technologies and attitudes will always be the starting point of new problem-solving quests. Where and how we focus attention will, in part, be based on prior cultural assumptions and existing technologies, and also on the novelty of the problem to be solved.

Lewis discusses how innovative minds are pushing back against archaic regulatory barriers, business models and government regulations. As he nicely summarizes:

Unimagined solutions are being pushed while a more open attitude towards experimentation, risk-taking and side-stepping onerous and costly regulation is starting to emerge. Human needs are breaking down yesterday’s precautionary approaches.

That last line really resonated with me because it’s a major theme that runs throughout my new book, “Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments.” As I summarized in my book launch essay:

Eventually, people take notice of how regulators and their rules encumber entrepreneurial activities, and they act to evade them when public welfare is undermined. Working around the system becomes inevitable when the permission society becomes completely dysfunctional and counterproductive.

This was happening before the coronavirus outbreak, but the crisis has supercharged the phenomenon. Evasive entrepreneurs are taking advantage of the growth of new devices and platforms that let citizens circumvent (or perhaps just ignore) public policies that limit innovative efforts. These can include common tools like smartphones, computers, and various new interactive platforms, as well as more specialized technologies like cryptocurrencies, private drones, immersive technologies (like virtual reality), 3D printers, the “Internet of Things,” and sharing economy platforms and services. But that list just scratches the surface. The public is increasingly using these new technological capabilities to assert themselves and push back against laws and regulations that defy common sense and hold back progress.

Lawmakers and regulators need to consider a balanced response to evasive entrepreneurialism that is rooted in the realization that technology creators and users are less likely to seek to evade laws and regulations when public policies are more in line with common sense. Yesterday’s heavy-handed approaches that are rooted in the Precautionary Principle will need to be reformed to make sure progress can happen. 

Read my book to find out more!

 

]]>
https://techliberation.com/2020/05/06/human-needs-are-breaking-down-yesterdays-precautionary-approaches/feed/ 0 76709
Barriers to a Builder’s Movement: Thoughts on Andreessen’s Manifesto https://techliberation.com/2020/04/21/barriers-to-a-builders-movement-thoughts-on-andreessens-manifesto/ https://techliberation.com/2020/04/21/barriers-to-a-builders-movement-thoughts-on-andreessens-manifesto/#comments Tue, 21 Apr 2020 16:48:50 +0000 https://techliberation.com/?p=76692

[First published by AIER on April 20, 2020 as “Innovation and the Trouble with the Precautionary Principle.”]

In a much-circulated new essay (“It’s Time to Build”), Marc Andreessen has penned a powerful paean to the importance of building. He says the COVID crisis has awakened us to the reality that America is no longer the bastion of entrepreneurial creativity it once was. “Part of the problem is clearly foresight, a failure of imagination,” he argues. “But the other part of the problem is what we didn’t do in advance, and what we’re failing to do now. And that is a failure of action, and specifically our widespread inability to build.”

Andreessen suggests that, somewhere along the line, something changed in the DNA of the American people and they essentially stopped having the desire to build as they once did. “You don’t just see this smug complacency, this satisfaction with the status quo and the unwillingness to build, in the pandemic, or in healthcare generally,” he says. “You see it throughout Western life, and specifically throughout American life.” He continues:

“The problem is desire. We need to want these things. The problem is inertia. We need to want these things more than we want to prevent these things. The problem is regulatory capture. We need to want new companies to build these things, even if incumbents don’t like it, even if only to force the incumbents to build these things.”

Accordingly, Andreessen continues on to make the case to both the political right and left to change their thinking about building more generally. “It’s time for full-throated, unapologetic, uncompromised political support from the right for aggressive investment in new products, in new industries, in new factories, in new science, in big leaps forward.”

What’s missing in Andreessen’s manifesto is a concrete connection between America’s apparently dwindling desire to build these things and the political realities on the ground that contribute to that problem. Put simply, policy influences attitudes. More specifically, policies that frown upon entrepreneurial risk-taking actively disincentivize the building of new and better things. Thus, to correct the problem Andreessen identifies, we must first remove political barriers to productive entrepreneurialism, or else we will never get back to being the builders we once were.

Attitudes about Progress Matter 

The economic historian Joel Mokyr has noted how “technological progress requires above all tolerance toward the unfamiliar and the eccentric,” and that the innovation that undergirds economic growth is best viewed as “a fragile and vulnerable plant” that “is highly sensitive to the social and economic environment and can easily be arrested by relatively small external changes.” Specifically, societal and political attitudes toward growth, risk-taking, and entrepreneurial activities (and failures) are important to the competitive standing of nations and the possibility of long-term prosperity. “How the citizens of any country think about economic growth, and what actions they take in consequence, are,” Benjamin Friedman observes, “a matter of far broader importance than we conventionally assume.”

Former Federal Reserve chairman Alan Greenspan and co-author Adrian Wooldridge have observed that “[t]he key to America’s success lies in its unique toleration for ‘creative destruction,’” and an “enduring preference for change over stability.” This is consistent with the findings of Deirdre McCloskey’s recent three-volume history of modern economic growth. McCloskey meticulously documents how an embrace of “bourgeois virtues” (i.e., positive attitudes about markets and innovation) was the crucial factor propelling the invention and economic growth that resulted in the Industrial Revolution. Positive attitudes toward innovation and risk-taking were equally important for the more recent Information Revolution. In turn, that also helps explain why so many US-based tech innovators became global powerhouses, while firms from other countries tended to flounder because their innovation culture was more precautionary in orientation.

There are limits to how much policymakers can do to influence the attitudes among citizens toward innovation, entrepreneurialism, and economic growth. When policymakers set the right tone with a positive attitude toward innovation, however, it inevitably infuses various institutions and creates powerful incentives for entrepreneurial efforts to be undertaken. This, in turn, influences broader societal attitudes and institutions toward innovation and creates a positive feedback loop. “If we learn anything from the history of economic development,” argued David Landes in his magisterial The Wealth and Poverty of Nations: Why Some Are So Rich and Some Are So Poor, “it is that culture makes all the difference.” Research by other scholars finds that, “existing cultural conditions determine whether, when, how and in what form a new innovation will be adopted.”

Economists like Mancur Olson speak of the importance of a “structure of incentives” that helps explain why “the great differences in the wealth of nations are mainly due to differences in the quality of their institutions and economic policies.” In this sense, “institutions” include what Elhanan Helpman defines as “systems of rules, beliefs, and organizations,” including the rule of law and court systems, property rights, contracts, free trade policies and institutions, light-touch regulations and regulatory regimes, freedom to travel, and various other incentives to invest.

It is the freedom to invest, the freedom to work, and the freedom to build that particularly concerns Marc Andreessen. But he needs to draw the connection with the specific public policies that hold back our ability to exercise those freedoms. 

Policy Defaults toward Innovation Matter Even More

Unfortunately, a great many barriers exist to entrepreneurial efforts. Those barriers to building include inflexible health and safety regulation, occupational licensing rules, cronyist industrial protectionist schemes, inefficient (industry-rigged) tax schemes, rigid zoning ordinances, and many other layers of regulatory red tape at the federal, state, and local level.  

What unifies all these policies is risk aversion and the precautionary principle. As I argued in my last book, we have a choice when it comes to setting defaults for innovation policy. We can choose to set innovation defaults closer to the green light of “permissionless innovation,” generally allowing entrepreneurial acts unless a compelling case can be made not to. Alternatively, we can set our default closer to the red light of the precautionary principle, which disallows risk-taking or entrepreneurialism until some authority gives us permission to proceed. 

My book made the case for permissionless innovation as the superior default regime. My argument for rejecting the precautionary principle as the default came down to belief that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning,” I argued, “it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”  


Heavy-handed preemptive restraints on innovative acts have such deleterious effects because they raise barriers to entry, increase compliance costs, and create more risk and uncertainty for entrepreneurs and investors. Progress is impossible without constant trial-and-error experimentation and entrepreneurial risk-taking. Thus, it is the unseen costs of forgone innovation opportunities that make the precautionary principle so troubling as a policy default. Without risk, there can be no reward. Scientist Martin Rees refers to this truism about the precautionary principle as “the hidden cost of saying no.”  

More generally, risk analysts have noted that the precautionary principle “lacks a firm logical foundation” and is “literally incoherent” because it fails to specify a clear standard by which to judge which risks are most serious and worthy of preemptive control. Moreover, regulatory policy experts have criticized the fact that the precautionary principle, “may be misused for protectionist ends; it tends to undermine international regulatory cooperation; and it may have highly undesirable distributive consequences.” Specifically, large incumbent firms are almost always more likely able to deal with rigid, expensive regulatory regimes or, worse yet, can game those systems by “capturing” policymakers and using regulatory regimes to exclude new rivals.  

Precaution Suffocates Productive Entrepreneurialism 

The problem today is that a massive volume of precautionary policies exist that discourage “productive entrepreneurship” (i.e., building) and instead actively encourage “unproductive entrepreneurship” (i.e., preservation of the status quo). Andreessen identifies this problem when he speaks of “smug complacency, this satisfaction with the status quo and the unwillingness to build.” But he doesn’t fully connect the dots between how the attitudes came about and the public policy incentives that actively encourage such thinking. 

Why try to build when all the incentives are aligned against you? Andreessen wants to know: “Where are the supersonic aircraft? Where are the millions of delivery drones? Where are the high speed trains, the soaring monorails, the hyperloops, and yes, the flying cars?” Well, I’ll tell you where they are. They are trapped in the minds of inventive people who cannot bring them to fruition so long as an endless string of barriers makes it costly or impossible to realize those dreams.

Read Eli Dourado’s important essay on “How Do We Move the Needle on Progress?” to get a more concrete feel for the specific barriers to building in the fields where productive entrepreneurialism is most needed: health, housing, energy, and transportation.

The bottom line, as Dustin Chambers and Jonathan Munemo noted in a 2017 Mercatus Center report on the impact of regulation on entrepreneurial activity, is that “If a nation wishes to promote higher levels of domestic entrepreneurship in both the short and long run, top priority should be given to reducing barriers to entry for new firms and to improving overall institutional quality (especially political stability, regulatory quality, and voice and accountability).” 

This doesn’t mean there is no role for government in helping to promote “building” and entrepreneurialism. A healthy debate continues to rage about “state capacity” as it pertains to government investments in research and development, for example. While I am skeptical, there may very well be some steps governments can take to encourage more and better investments in the sectors and technologies we desperately need. But all the “state capacity” in the world isn’t going to help until we first clear away the barriers that hold back the productive spirit of the people. 

Oiling the Wheels of Novelty

My new book, which is due out next week, discusses how innovation improves economies and government institutions. It builds on the fundamental insight of Calestous Juma, who concluded his masterwork Innovation and Its Enemies by reminding us of the continued importance of “oiling the wheels of novelty” to constantly replenish the well of important ideas and innovations. “The biggest risk that society faces by adopting approaches that suppress innovation,” Juma said, “is that they amplify the activities of those who want to preserve the status quo by silencing those arguing for a more open future.”

The openness Juma had in mind represents a tolerance of new ideas, inventions, and unknown futures. It can and should also represent an openness to new, more flexible methods of governance. For, if it doesn’t, the builder movement that Andreessen and others long for will remain just a distant dream, incapable of ever being realized so long as the wheels of novelty are gummed up by decades of inefficient, archaic, counterproductive public policies.

_________

P.S. I highly recommend this excellent essay by Jerry Brito, “We don’t want to build? Maybe we should build anyway.” It touches on many of the same themes I discuss in my response essay as well as in my new book, Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments.

]]>
https://techliberation.com/2020/04/21/barriers-to-a-builders-movement-thoughts-on-andreessens-manifesto/feed/ 2 76692
The APA’s Welcome New Statement on Video Game Violence https://techliberation.com/2020/03/06/the-apas-welcome-new-statement-on-video-game-violence/ https://techliberation.com/2020/03/06/the-apas-welcome-new-statement-on-video-game-violence/#respond Fri, 06 Mar 2020 14:52:13 +0000 https://techliberation.com/?p=76676

I was pleased to see the American Psychological Association’s new statement slowly reversing course on misguided past statements about video games and acts of real-world violence. As Kyle Orland reports in Ars Technica, the APA has clarified its earlier statement on the relationship between video game depictions of violence and actual youth behavior. The APA’s old statement said that evidence “confirms [the] link between playing violent video games and aggression.” But the APA has come around and now says that “there is insufficient scientific evidence to support a causal link between violent video games and violent behavior.” More specifically, the APA says:

The following resolution should not be misinterpreted or misused by attributing violence, such as mass shootings, to violent video game use. Violence is a complex social problem that likely stems from many factors that warrant attention from researchers, policy makers and the public. Attributing violence to violent video gaming is not scientifically sound and draws attention away from other factors.

This is a welcome change of course because the APA’s earlier statements were being used by politicians and media activists who favored censorship of video games. Hopefully that will no longer happen.

“Monkey see, monkey do” theories of media exposure leading to acts of real-world violence have long been among the most outrageously flawed theories in the fields of psychology and media studies.  All the evidence points the opposite way, as I documented a decade ago in a variety of studies. (For a summary, see my 2010 essay, “More on Monkey See-Monkey Do Theories about Media Violence & Real-World Crime.”)

In fact, there might even be something to the “cathartic effect hypothesis,” or the idea first articulated by Aristotle (“katharsis”) that watching dramatic portrayals of violence could lead to “the proper purgation of these emotions.” (See my 2010 essay on this, “Video Games, Media Violence & the Cathartic Effect Hypothesis.”)

Of course, this doesn’t mean that endless exposure to video game or TV and movie violence is a good thing. Prudence and good parenting are still essential. Some limits are smart. But the idea that a kid playing or watching violent acts will automatically become violent was always nonsense. It’s time we put that theory to rest. Thanks to the new APA statement, we are one step closer.

P.S. I recently penned an essay about my long love affair with video games that you might find entertaining: “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics.”

]]>
https://techliberation.com/2020/03/06/the-apas-welcome-new-statement-on-video-game-violence/feed/ 0 76676
Podcast: Problems with the Precautionary Principle https://techliberation.com/2020/02/20/podcast-problems-with-the-precautionary-principle/ https://techliberation.com/2020/02/20/podcast-problems-with-the-precautionary-principle/#comments Thu, 20 Feb 2020 20:02:13 +0000 https://techliberation.com/?p=76669

On the latest Institute for Energy Research podcast, I joined Paige Lambermont to discuss:

  • the precautionary principle vs. permissionless innovation;
  • risk analysis trade-offs;
  • the future of nuclear power;
  • the “pacing problem”;
  • regulatory capture;
  • evasive entrepreneurialism;
  • “soft law”;
  • … and why I’m still bitter about losing the 6th grade science fair!

Our discussion was inspired by my recent essay, “How Many Lives Are Lost Due to the Precautionary Principle?”

]]>
https://techliberation.com/2020/02/20/podcast-problems-with-the-precautionary-principle/feed/ 2 76669
Europe’s New AI Industrial Policy https://techliberation.com/2020/02/20/europes-new-ai-industrial-policy/ https://techliberation.com/2020/02/20/europes-new-ai-industrial-policy/#comments Thu, 20 Feb 2020 19:37:48 +0000 https://techliberation.com/?p=76667

The race for artificial intelligence (AI) supremacy is on with governments across the globe looking to take the lead in the next great technological revolution. As they did before during the internet era, the US and Europe are once again squaring off with competing policy frameworks.

In early January, the Trump Administration announced a new light-touch regulatory framework and then followed up with a proposed doubling of federal R&D spending on AI and quantum computing. This week, the European Union Commission issued a major policy framework for AI technologies and billed it as “a European approach to excellence and trust.”

It seems the EU basically wants to have its cake and eat it too by marrying an ambitious industrial policy to a precautionary regulatory regime. We’ve seen this show before. Europe is doubling down on the same policy regime it used for the internet and digital commerce. It did not work out well for the continent then, and there are reasons to think it will backfire again for AI technologies.

An Ambitious Industrial Policy Vision

The new EU framework includes a lot of catchphrases and proposals that are an industrial policy lover’s dream. In an attempt to create “an ecosystem of excellence” and ensure the “human-centric development of AI,” it identifies a variety of existing or new industrial planning efforts, including: Digital Innovation Hubs, Enterprise Resource Planning, the Digital Europe Programme, the Key Digital Technology Joint Undertaking, and broad-based public-private partnerships. This is all part of an official “Coordinated Plan” prepared together with the Member States “to foster the development and use of AI in Europe.”

To accomplish that, the Commission says it will “facilitate the creation of excellence and testing centres” that will “concentrate in sectors where Europe has the potential to become a global champion.” The Commission also wants to give special consideration to growing small and mid-size enterprises (SMEs) in establishing these plans.

Again, it’s an ambitious industrial policy vision, and one that will be accompanied by a wide variety of (yet-to-be-determined) regulatory enactments to shape the development and use of AI. But if that approach really works, why aren’t European digital companies global leaders today? Instead, firms based mostly in the US have risen to become household names across the globe. Regulation had an influence on that result because American firms enjoyed a policy regime rooted in “permissionless innovation,” which generally allows experimentation by default and addresses concerns using more flexible, ex post remedies. By contrast, Europe’s internet policy approach was rooted in the precautionary principle, or the notion that innovation is essentially guilty until proven innocent. New technologies are to be subjected to prior constraints—or what the new European Commission white paper calls “prior conformity assessments”—before being allowed into the wild.

Precautionary Regulation Dominates

Despite losing the last round of the innovation wars, the new EU white paper makes it clear that Europe will keep using a precautionary approach. What does that mean for AI regulation? The problem begins with defining what counts as a “high-risk” AI application requiring prior restraints. The white paper defines it in a somewhat circular fashion, saying that “an AI application should be considered high-risk where…(it) is employed in a sector where, given the characteristics of the activities typically undertaken, significant risks can be expected to occur” and is “used in such a manner that significant risks are likely to arise.” Instead of providing legal certainty, this definition clarifies almost nothing and will require future regulatory inquiries to determine the full scope and nature of AI controls.

There’s also a lot of talk in the proposal about preemptively addressing “risks for fundamental rights,” which is understandable. AI innovations can raise various safety, security, and privacy concerns that deserve to be taken seriously. But what about the risk of not having access to important AI innovations at all? What about the risk of losing out on life-enriching—and in many cases life-saving—innovations because, instead of “building trust,” the regulatory regime builds the exact opposite: fear of innovating.

Entrepreneurs and investors respond to incentives. Before building or investing in a new technology, they want to know how long it will take to get that good or service launched—assuming they can get approval at all. Every innovator and investor factors such political risk into their business plans. When the potential costs of product launch overwhelm the likely benefits, they will abandon innovative efforts or look to engage in them elsewhere.

The EU says “the race for global leadership is ongoing,” and claims that, “Europe offers significant potential, knowledge and expertise” through its efforts to make the continent an AI innovation hub. Indeed, some of the best AI researchers are in Europe, and there are plenty of brilliant people brimming with entrepreneurial enthusiasm about creating world-class AI applications. But all that knowledge and enthusiasm do not matter much if the regulatory deck is stacked against innovation from the start.

And Even More Expansive Regulation Down the Road

Beyond the precautionary approach in that document, the EU’s accompanying white paper on safety and liability implications of AI leaves open the possibility of an expansion in preemptive regulatory requirements. “Additional obligations may be needed for manufacturers to ensure that they provide features to prevent the upload of software having an impact on safety during the lifetime of the AI products,” the document notes. Moreover, if an ongoing AI software update “modifies substantially the product in which it is downloaded, the entire product might be considered as a new product and compliance with the relevant safety product legislation must be reassessed at the time the modification is performed.”

That sort of regulatory regime may sound quite sensible at first blush. In practice, however, it means that every conceivable tweak to an algorithm requires costly and complex regulatory approval. If traditional computer software had required regulatory approval before any new modifications could be made, most consumers would still be stuck with an aol.com email address and Windows 95 as an operating system.

What the European Commission proves with its new AI policy framework is that it is easy to talk a big game about planning for an innovative future, but it is an entirely different thing to actually bring one about. The European approach will have clear competitive effects, or more specifically, anti-competitive effects. As is already the case with the EU’s regulatory approach to the data economy and GDPR in particular, regulatory compliance costs continue to skyrocket and small and mid-size enterprises struggle to cope. This means that only firms operating the largest digital platforms are able to shoulder these burdens, leaving consumers without as many competitive, low-cost choices as they might otherwise enjoy. Not even generous government support for SMEs will be able to counter-balance the costly entry barriers associated with over-regulation.

Solidifying Market Power of Existing Giants?

This is why it is so ironic how worried the EU is about the market power of Google, Facebook, and other US-based tech giants: the regulatory burden now helps those firms maintain their market dominance. Over-regulation by the EU undermined both home-grown and international investment and competition that might have challenged those existing players. With each additional layer of AI regulation that now gets piled on top of Europe’s existing regulatory burden, the prospects for creative destruction decrease, as do the chances for life-enriching innovations to ever make it to consumers.

While the European Commission will, no doubt, insist that they are implementing this new AI regime with the very best of intentions in mind, there is no escaping the fact that regulation involves complex trade-offs and unforeseeable consequences. The consequences in this case are likely a bit easier to predict, however: By smothering new AI applications in layers of red tape, we can expect fewer innovations and less competition.

Despite all the talk of boosting SMEs, perhaps the EU will eventually become more like China and unabashedly support larger home-grown firms to make sure they are part of the global AI race. China has already made waves on this front with its 2017 “New Generation Artificial Intelligence Development Plan,” an audacious industrial policy plan which seeks “to build China’s first-mover advantage in the development of AI [and] to accelerate the construction of an innovative nation and global power in science and technology.” The document is as much a manifesto about geopolitical power as it is about technological governance. And it does not try to hide China’s authoritarian impulse to meticulously plan every facet of daily life under the auspices of promoting global technological leadership. China’s AI manifesto even concludes with a section on “public opinion guidance” that creepily insists the country will, “Fully use all kinds of traditional media and new media to quickly propagate new progress and new achievements in AI, to let the healthy development of AI become a consensus in all of society, and muster the vigor of all of society to participate in and support the development of AI.”

The new European AI industrial policy framework does not go as far as China’s, not only because the continent is obviously more open and democratic by nature, but also because the EU is a collection of many countries and cultures that will never be able to speak as coherently and forcefully with one voice on all technological governance matters. In fact, the EU’s new governance framework explicitly leaves room for more tailored AI regulation by individual member states.

Conclusion

This leaves Europe stuck between the polar opposites of China and the US when it comes to AI governance. China’s meticulously detailed, highly centralized, state-driven approach stands in stark contrast to the more bottom-up, adaptive American approach which insists that regulators, “must avoid a precautionary approach that holds AI systems to such an impossibly high standard that society cannot enjoy their benefits.”

The US approach also leans heavily on "soft law," or informal governance mechanisms that are not as burdensome as precautionary regulatory controls. Soft law can include a wide variety of tools and methods for addressing policy concerns, including multistakeholder initiatives, best practices and standards, agency workshops and guidance documents, educational efforts, and much more. These are the governance tools that have dominated for the internet and digital platforms over the past twenty years in the US, and they will likely continue to be the primary governance mechanisms for artificial intelligence, robotics, the internet of things, and other emerging tech sectors.

The EU probably thinks it has found the Goldilocks formula and gotten AI policy just right by falling between China and the US on the governance spectrum. It is more likely, however, that European policymakers will be unable to resist the urge to over-plan and micro-manage AI markets until they are once again left wondering how they got stuck trying to regulate market leaders that are headquartered oceans away from them. With the US once again adopting a more flexible approach, we could see a replay of the Web Wars, with innovators and investors putting their efforts behind AI launches in the US instead of Europe. Meanwhile, China will likely attract far more global venture capital for AI and robotics launches than it did for digital platforms. This could really put the squeeze on Europe.

Only time will tell. But, to paraphrase Yoda, when it comes to global artificial intelligence governance, one thing is clear: Begun the AI war has.

Podcast on Driverless Cars, AI & “Soft Law” Governance https://techliberation.com/2020/01/21/podcast-on-driverless-cars-ai-soft-law-governance/ https://techliberation.com/2020/01/21/podcast-on-driverless-cars-ai-soft-law-governance/#comments Tue, 21 Jan 2020 16:55:06 +0000 https://techliberation.com/?p=76652

Here’s a new Federalist Society Regulatory Transparency “Tech Roundup” podcast about driverless cars, artificial intelligence and the growth of “soft law” governance for both. The 34-minute podcast features a conversation between Caleb Watney and me about new Trump Administration AI guidelines as well as the Department of Transportation’s new “Version 4.0” guidance for automated vehicles.

This podcast builds on my recent essay, “Trump’s AI Framework & the Future of Emerging Tech Governance” as well as an earlier law review article, “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future.”

How Much Precaution is Wise? https://techliberation.com/2019/11/01/how-much-precaution-is-wise/ https://techliberation.com/2019/11/01/how-much-precaution-is-wise/#comments Fri, 01 Nov 2019 14:29:11 +0000 https://techliberation.com/?p=76634

In a new essay for the Mercatus Bridge, I ask, “How Many Lives Are Lost Due to the Precautionary Principle?” The essay builds on two recent case studies of how the precautionary principle can result in unnecessary suffering and deaths. The first case study involves the Japanese government’s decision in 2011 to entirely abandon nuclear energy following the Fukushima Daiichi nuclear accident. The second involves Golden Rice, a form of rice that was genetically engineered to contain beta-carotene, which helps combat vitamin A deficiency. Anti-GMO resistance among environmental activists and regulatory officials held up the diffusion of this miracle food. New reports and books now document how these precautionary decisions diminished human welfare instead of improving it. I encourage you to jump over to the Bridge and read the entire story.

I concluded the essay by noting that, "It is time to reject the simplistic logic of the precautionary principle and move toward a more rational, balanced approach to the governance of technologies. Our lives and well-being depend upon it." Some read that as a complete rejection of all preemptive regulation. I certainly was not arguing that, so let me clarify a few things.

There are, of course, "hard" and "soft" variants of the precautionary principle (PP). In my new essay, I am mostly focused on the very hardest variety (of a prohibitionary nature). They are the most concerning because they completely foreclose all future experimentation with new and better ways of doing things. In a section of my last book entitled, "When Does Precaution Make Sense?" I noted that outright bans on new goods and services are justified when the risk being evaluated can be shown to be highly probable, tangible, immediate, irreversible, and potentially catastrophic in nature. [See this essay for more on this point, including that entire section of my book reprinted as an appendix.]

However, "existential" risks are open to interpretation and far rarer than some suggest. Governments justly restrict the possession of uranium and bazookas on such grounds, but it would be imprudent to ban the development of all new AI technologies on the theory that one day we might get a Terminator scenario if we don't.

Softer PP varieties of a permitting nature (such as FAA and FDA permitting regimes) are somewhat easier to justify because they at least leave the door open for some innovation, albeit after significant delay. It is impossible in advance to determine exactly how many lives are saved or lost because of long regulatory review processes, but some new products (such as large aircraft or pharmaceuticals) obviously deserve greater scrutiny because of the potential for adverse and catastrophic outcomes without some degree of initial oversight.

However, taken to the extreme and applied in too rigid of a fashion, even softer varieties of the PP can result in unnecessary suffering and deaths. Slowing experiments with potentially new and better ways of doing things means we are stuck with a status quo that can be sub-optimal, even deadly in its own right.

All roads lead back to improved benefit-cost analysis, better risk modeling, constant retrospective review, and stepped-up risk education/communication efforts. But the over-zealous and unthinking application of the PP shuts down that process almost entirely and forecloses any sort of policy or market experimentation. Flexibility, adaptability, and humility in policymaking are crucial to avoid policy errors.

Toward that end, as I noted in my last law review article, newer “soft law” governance tools offer us the chance to craft superior governance frameworks for existing and emerging technologies. Multistakeholder processes, agency guidances, collaborative best practices, and various other informal governance mechanisms are often better suited to address fast-moving sectors and technologies. In my next book, I argue that this is even true for many “existential risk” scenarios that people fear today. Preemptive controls – including some of a precautionary nature – will still be needed in many circumstances. (Genetic editing will be one such candidate). But we must still guard against overreaction and excessive control of technologies that have the potential to fundamentally improve human well-being.

In sum, trial-and-error is valuable both in the marketplace and in government policymaking settings. The fundamental problem with the precautionary principle is that it ends all such trial-and-error experimentation, including within regulatory regimes themselves! Greater flexibility is needed to ensure that public policy can more accurately balance risks and benefits and improve human well-being as a result. But the precautionary principle will almost never achieve that. We need more open, adaptive, and entrepreneurial governance mechanisms to achieve superior public health outcomes.

My next book, due out in April 2020, does a deeper dive into these issues. Stay tuned for more.

Why Apocalyptic Rhetoric Dominates Tech Policy Debates https://techliberation.com/2019/10/02/why-apocalyptic-rhetoric-dominates-tech-policy-debates/ https://techliberation.com/2019/10/02/why-apocalyptic-rhetoric-dominates-tech-policy-debates/#comments Wed, 02 Oct 2019 15:20:32 +0000 https://techliberation.com/?p=76603

The endless apocalyptic rhetoric surrounding Net Neutrality and many other tech policy debates proves there's no downside to gloom-and-doomism as a rhetorical strategy. Being a techno-Jeremiah nets one enormous media exposure, and even when such a person has been shown to be laughably wrong, the press comes back for more. Not only is there no penalty for hyper-pessimistic punditry, but the press actually furthers the cause of such "fear entrepreneurs" by repeatedly showering them with attention and letting them double-down on their doomsday-ism. Bad news sells, for both the pundit and the press.

But what is most remarkable is that the press continues to label these preachers of the techno-apocalypse as “experts” despite a track record of failed predictions. I suppose it’s because, despite all the failed predictions, they are viewed as thoughtful & well-intentioned. It is another reminder that John Stuart Mill’s 1828 observation still holds true today: “I have observed that not the man who hopes when others despair, but the man who despairs when others hope, is admired by a large class of persons as a sage.”

Additional Reading:

Black Mirror Episodes from Medieval Times https://techliberation.com/2019/07/02/black-mirror-episodes-from-medieval-times/ https://techliberation.com/2019/07/02/black-mirror-episodes-from-medieval-times/#comments Tue, 02 Jul 2019 18:28:22 +0000 https://techliberation.com/?p=76516

CollegeHumor has created this amazing video, “Black Mirror Episodes from Medieval Times,” which is a fun parody of the relentless dystopianism of the Netflix show “Black Mirror.” If you haven’t watched Black Mirror, I encourage you to do so. It’s both great fun and ridiculously bleak and over-the-top in how it depicts modern or future technology destroying all that is good on God’s green earth.

The CollegeHumor team picks up on that and rewinds the clock about 1,000 years to imagine how Black Mirror might have played out on a stage during the medieval period. The actors do quick skits showing how books become sentient, plows dig holes to Hell and unleash the devil, crossbows destroy the dexterity of archers, and labor-saving yokes divert people from godly pursuits. As one of the audience members says after watching all the episodes, "technology will truly be the ruin of us all!" That's generally the message of not only Black Mirror, but the vast majority of modern science fiction writing about technology (and also a huge chunk of popular non-fiction writing, too.)

If you go far enough back in the history of technology and technological criticism, you actually can find plenty of people insisting that the latest and greatest tech of the day would be the ruin of us all. As I noted here before, you can trace tech criticism at least back to Plato’s Phaedrus, which warned about the dangers of the written word. My colleague Tyler Cowen argues you can trace it even further back to the Bible and the Book of Genesis, especially the story of the Tower of Babel.

One can almost imagine how scorn was heaped on the first person to fashion a blade or a wheel out of stone. Before his untimely passing a few years ago, the great Calestous Juma used to occasionally tweet this hilarious cartoon that depicted just that moment in time. The people that carry those “NO” signs are still all around us today. Technopanics and fear cycles just repeat endlessly, as I have noted in dozens of essays and papers through the years.

How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality https://techliberation.com/2019/06/19/how-conservatives-came-to-favor-the-fairness-doctrine-net-neutrality/ https://techliberation.com/2019/06/19/how-conservatives-came-to-favor-the-fairness-doctrine-net-neutrality/#comments Thu, 20 Jun 2019 01:09:52 +0000 https://techliberation.com/?p=76507

I have been covering telecom and Internet policy for almost 30 years now. During much of that time – which included a nine year stint at the Heritage Foundation — I have interacted with conservatives on various policy issues and often worked very closely with them to advance certain reforms.

If I divided my time in Tech Policy Land into two big chunks of time, I’d say the biggest tech-related policy issue for conservatives during the first 15 years I was in the business (roughly 1990 – 2005) was preventing the resurrection of the so-called Fairness Doctrine. And the biggest issue during the second 15-year period (roughly 2005 – present) was stopping the imposition of “Net neutrality” mandates on the Internet. In both cases, conservatives vociferously blasted the notion that unelected government bureaucrats should sit in judgment of what constituted “fairness” in media or “neutrality” online.

Many conservatives are suddenly changing their tune, however. President Trump and Sen. Ted Cruz, for example, have been increasingly critical of both traditional media and new tech companies in various public statements and suggested an openness to increased regulation. The President has gone after old and new media outlets alike, while Sen. Cruz (along with others like Sen. Lindsay Graham) has suggested during congressional hearings that increased oversight of social media platforms is needed, including potential antitrust action.

Meanwhile, during his short time in office, Sen. Josh Hawley (R-Mo.) has become one of the most vocal Internet critics on the Right. In a shockingly-worded USA Today editorial in late May, Hawley said "social media wastes our time and resources" and is "a field of little productive value" that has only "given us an addiction economy." He even referred to these sites as "parasites" and blamed them for a long list of social problems, leading him to suggest that, "we'd be better off if Facebook disappeared" along with various other sites and services.

Hawley’s moral panic over social media has now bubbled over into a regulatory crusade that would unleash federal bureaucrats on the Internet in an attempt to dictate “fair” speech on the Internet. He has introduced an astonishing piece of legislation aimed at undoing the liability protections that Internet providers rely upon to provide open platforms for speech and commerce. If Hawley’s absurdly misnamed new “Ending Support for Internet Censorship Act” is implemented, it would essentially combine the core elements of the Fairness Doctrine and Net Neutrality to create a massive new regulatory regime for the Internet.

The bill would gut the immunities Internet companies enjoy under 47 USC 230 (“Section 230”) of the Communications Decency Act. Eric Goldman of the Santa Clara University School of Law has described Section 230 as the “best Internet law” and “a big part of the reason why the Internet has been such a massive success.” Indeed, as I pointed out in a Forbes column on the occasion of its 15th anniversary, Section 230 is “the foundation of our Internet freedoms” because it gives online intermediaries generous leeway to determine what content and commerce travels over their systems without the fear that they will be overwhelmed by lawsuits if other parties object to some of that content.

The Hawley bill would overturn this important legal framework for Internet freedom and instead replace it with a new “permissioned” approach. In true “Mother-May-I” style, Internet companies would need to apply for an “immunity certification” from the FTC, which would undertake investigations to determine if the petitioning platform satisfied a “requirement of politically unbiased content moderation.”

The vague language of the measure is an open invitation to massive political abuse. The entirety of the bill hinges upon the ability of Federal Trade Commission officials to define and enforce “political neutrality” online. Let’s consider what this will mean in practice.

Under the bill, the FTC must evaluate whether platforms have engaged in "politically biased moderation," which is defined as moderation practices that are supposedly "designed to negatively affect" or "disproportionately restricts or promote access to … a political party, political candidate, or political viewpoint." As Blake Reid of the University of Colorado Law School rightly asks, "How, exactly, is the FTC supposed to figure out what the baseline is for 'disproportionately restricting or promoting'? How much access or availability to information about political parties, candidates, or viewpoints is enough, or not enough, or too much?"

There is no Goldilocks formula for getting things just right when it comes to content moderation. It's a trial-and-error process that is nightmarishly difficult because of the endless eye-of-the-beholder problems associated with constructing acceptable use policies for large speech platforms. We struggled with the same issues in the broadcast and cable era, but they have been magnified a million-fold in the era of the global Internet with the endless tsunami of new content that hits our screens and devices every day. "Do we want less moderation?" asks Sec. 230 guru Jeff Kosseff. "I think we need to look at that question hard. Because we're seeing two competing criticisms of Section 230," he notes. "Some argue that there is too much moderation, others argue that there is not enough."

The Hawley bill seems to imagine that a handful of FTC officials will magically be able to strike the right balance through regulatory investigations. That’s a pipe dream, of course, but let’s imagine for a moment that regulators could somehow sort through all the content on message boards, tweets, video clips, live streams, gaming sites, and whatever else, and then somehow figure out what constituted a violation of “political neutrality” in any given context. That would actually be a horrible result because let’s be perfectly clear about what that would really be: It would be a censorship board. By empowering unelected bureaucrats to make decisions about what constitutes “neutral” or “fair” speech, the Hawley measure would, as Elizabeth Nolan Brown of Reason summarizes, “put Washington in charge of Internet speech.” Or, as Sen. Ron Wyden argues more bluntly, the bill “will turn the federal government into Speech Police.” “Perhaps a more accurate title for this bill would be ‘Creating Internet Censorship Act,'” Eric Goldman is forced to conclude.

The measure is creating other strange bedfellows. You won't see Berin Szoka of TechFreedom and Harold Feld of Public Knowledge ever agreeing on much, but they both quickly and correctly labelled Hawley's bill a "Fairness Doctrine for the Internet." That is quite right, and much like the old Fairness Doctrine, Hawley's new Internet speech control regime would be open to endless political shenanigans as parties, policymakers, companies, and the various complainants line up to have their various political beefs heard and acted upon. "That's the kind of thing Republicans said was unconstitutional (and subject to FCC agency capture and political manipulation) for decades," says Daphne Keller of the Stanford Center for Internet & Society. Moreover, during the Net Neutrality holy wars, GOP conservatives endlessly blasted the notion that bureaucrats should be determining what constitutes "neutrality" online because it, too, would result in abuses of the regulatory process. Yet, Sen. Hawley's bill would now mandate that exact same thing.

What is even worse is that, as law professor Josh Blackman observes, "the bill also makes it exceedingly difficult to obtain a certification" because applicants need a supermajority of 4 of the 5 FTC Commissioners. This is a public choice fiasco waiting to happen. Anyone who has studied the long, sordid history of broadcast radio and television licensing understands the danger associated with politicizing certification processes. The lawyers and lobbyists in the DC "swamp" will benefit from all the petitioning and paperwork, but it is not clear how creating a regulatory certification regime for Internet speech really benefits the general public (or even conservatives, for that matter).

Former FTC Commissioner Josh Wright identifies another obvious problem with the Hawley Bill: it “offers the choice of death by bureaucratic board or the plaintiffs’ bar.” That’s because by weakening Sec. 230’s protections, Hawley’s bill could open the floodgates to waves of frivolous legal claims in the courts if companies can’t get (or lose) certification. The irony of that result, of course, is that this bill could become a massive gift to the tort bar that Republicans love to hate!

Of course, if the law ever gets to court, it might be ruled unconstitutional. “The terms ‘politically biased’ and ‘moderation’ would have vagueness and overbreadth problems, as they can chill protected speech,” Josh Blackman argues. So it could, perhaps, be thrown out like earlier online censorship efforts. But a lot of harm could be done—both to online speech and competition—in the years leading up to a final determination about the law’s constitutionality by higher courts.

What is most outrageous about all this is that the core rationale behind Hawley’s effort—the idea that conservatives are somehow uniquely disadvantaged by large social media platforms—is utterly preposterous. In May, the Trump Administration launched a “tech bias” portal which “asked Americans to share their stories of suspected political bias.” The portal is already closed and it is unclear what, if anything, will come out of this effort. But this move and Hawley’s proposal point to the broader trend of conservatives getting more comfortable asking Big Government to redress imaginary grievances about supposed “bias” or “exclusion.”

In reality, today’s social media tools and platforms have been the greatest thing that ever happened to conservatives. Mr. Trump owes his presidency to his unparalleled ability to directly reach his audience through Twitter and other platforms. As recently as June 12, President Trump tweeted, “The Fake News has never been more dishonest than it is today. Thank goodness we can fight back on Social Media.” Well, there you have it!

Beyond the President, one need only peruse any social media site for a few minutes to find an endless stream of conservative perspectives on display. This isn’t exclusion; it’s amplification on steroids. Conservatives have more soapboxes to stand on and preach than ever before in the history of this nation.

Finally, if they were true to their philosophical priors, then conservatives also would not be insisting that they have any sort of "right" to be on any platform. These are private platforms, after all, and it is outrageous to suggest that conservatives (or any other person or group) are entitled to a spot on any of them.

Some conservatives are fond of ridiculing liberals for being "snowflakes" when it comes to other free speech matters, such as free speech on college campuses. Many times they are right. But one has to ask who the real snowflakes are when conservative lawmakers are calling on regulatory bureaucracies to reorder speech on private platforms based on the mythical fear of not getting "fair" treatment. One also cannot help but wonder if those conservatives have thought through how this new Internet regulatory regime will play out once a more liberal administration takes back the reins of power. Conservatives will only have themselves to blame when the Speech Police come for them.


Addendum: Several folks have pointed out that another irony of Hawley's bill is that it would greatly expand the powers of the administrative state, which conservatives already (correctly) feel has too much broad, unaccountable power. I should have said more on that point, but here's a nice comment from David French of National Review, which alludes to that problem and then ties it back to my closing argument above: i.e., that this proposal will come back to haunt conservatives in the long-run:

when coercion locks in — especially when that coercion is tied to constitutionally suspect broad and vague policies that delegate immense powers to the federal government — conservatives should sound the alarm. One of the best ways to evaluate the merits of legislation is to ask yourself whether the bill would still seem wise if the power you give the government were to end up in the hands of your political opponents. Is Hawley striking a blow for freedom if he ends up handing oversight of Facebook’s political content to Bernie Sanders? I think not.

Additional thoughts on the Hawley bill:

Josh Wright

Daphne Keller

Blake Reid

TechFreedom

Josh Blackman

Sen. Ron Wyden

Jeff Kosseff

Eric Goldman

CCIA

NetChoice

Internet Association

David French at National Review

John Samples

An Epic Moral Panic Over Social Media https://techliberation.com/2019/05/30/an-epic-moral-panic-over-social-media/ https://techliberation.com/2019/05/30/an-epic-moral-panic-over-social-media/#comments Thu, 30 May 2019 17:36:14 +0000 https://techliberation.com/?p=76493

[This essay originally appeared on the AIER blog on May 28, 2019. The USA TODAY also ran a shorter version of this essay as a letter to the editor on June 2, 2019.]

In a hotly-worded USA Today op-ed last week, Senator Josh Hawley (R-Missouri) railed against social media sites Facebook, Instagram, and Twitter. He argued that "social media wastes our time and resources," and is "a field of little productive value" that has only "given us an addiction economy." Sen. Hawley refers to these sites as "parasites" and blames them for a litany of social problems (including an unproven link to increased suicide), leading him to declare that, "we'd be better off if Facebook disappeared."

As far as moral panics go, Sen. Hawley’s will go down as one for the ages. Politicians have always castigated new technologies, media platforms, and content for supposedly corrupting the youth of their generation. But Sen. Hawley’s inflammatory rhetoric and proposals are something we haven’t seen in quite some time.

He sounds like those fire-breathing politicians and pundits of the past century who vociferously protested everything from comic books to cable television, the waltz to the Walkman, and rock-and-roll to rap music. In order to save the youth of America, many past critics said, we must destroy the media or media platforms they are supposedly addicted to. That is exactly what Sen. Hawley would have us do to today’s leading media platforms because, in his opinion, they “do our country more harm than good.”

We have to hope that Sen. Hawley is no more successful than past critics and politicians who wanted to take these choices away from the public. Paternalistic politicians should not be dictating content choices for the rest of us or destroying technologies and platforms that millions of people benefit from.

Addiction Panics: We’ve Been Here Before

Ironically, Sen. Hawley isn’t even right about what the youth of America are apparently obsessed with. Most kids view Facebook and Twitter as places where old people hang out. My teenage kids laugh when I ask them about those sites. Pew Research polling finds that many younger users are increasingly deleting Facebook (if they used it at all) or flocking to other platforms, such as Snapchat or YouTube.

But shouldn’t we be concerned with kids overusing social media more generally? Yes, of course we should—but that’s no reason to call for their outright elimination, as Sen. Hawley recommends. Such rhetoric is particularly concerning at a time when critics are proposing a “break up” of tech companies. Sen. Hawley sits on the U.S. Senate Judiciary Committee’s Subcommittee on Antitrust, Competition Policy and Consumer Rights. It is likely he and others will employ these arguments to fan the flames of regulatory intervention or antitrust action against at least Facebook.

Forcing social media sites to “disappear” or be broken up is one of the worst ways to deal with these concerns. It is always wise to mentor our youth and teach them how to achieve a balanced media diet. Many youths—and many adults—are probably overusing certain technologies (smartphones, in particular) and over-consuming some types of media. For those truly suffering from addiction, it is worth considering targeted strategies to address that problem. However, that is not what antitrust law is meant to address.

Moreover, concerns about addiction and distraction have popped up repeatedly during past moral panics and we should take such claims with a big grain of salt. Sociologist Frank Furedi has documented how, “inattention has served as a sublimated focus for apprehensions about moral authority” going back to at least the early 1700s. With each new form of media or means of communication, the older generation taps into the same “kids-these-days!” fears about how the younger generation has apparently lost the ability to concentrate or reason effectively.

For example, in the past century, critics said the same thing about radio and television broadcasting, comparing them to tobacco in terms of addiction and suggesting that media companies were “manipulating” us into listening or watching. Rock-and-roll and rap music got the same treatment, and similar panics about video games are still with us today.

Strangely, many elites, politicians, and parents forget that they, too, were once kids and that their generation was probably also considered hopelessly lost in the “vast wasteland” of whatever the popular technology or content of the day was. The Pessimists Archive podcast has documented dozens of examples of this recurring phenomenon. Each generation makes it through the panic du jour, only to turn around and start lambasting newer media or technologies that they worry might be rotting their kids to the core. While these panics come and go, the real danger is that they sometimes result in concrete policy actions that censor content or eliminate choices that the public enjoys. Such regulatory actions can also discourage the emergence of new choices.

Missed Opportunity, or Marvelous Achievement?

Sen. Hawley makes another audacious assertion in his essay when he suggests that social media has not provided any real benefit to American workers or consumers. He says the rise of the Digital Economy has “encouraged a generation of our brightest engineers to enter a field of little productive value,” which he regards as “an opportunity missed for the nation.”

This is an astonishing statement, made more troubling by Hawley’s claim that all these digital innovators could have done far more good by choosing other professions. “What marvels might these bright minds have produced,” Hawley asks, “had they been oriented toward the common good?”

Why is it that Sen. Hawley gets to decide which professions are in “the common good”? This logic is insulting to all those who make a living in these sectors, but there is a deeper hubris in Sen. Hawley’s argument that social media does not serve “the common good.” Had some benevolent philosopher kings in Washington stopped the digital economy from developing over the past quarter century, would all those tech workers really have chosen more noble-minded and worthwhile professions? Could he or others in Congress really have had the foresight to steer us in a better direction?

In reality, U.S. tech companies produce high-quality jobs and affordable, collaborative communications platforms that are popular across the globe. In response to Sen. Hawley’s screed, the Internet Association, which represents America’s leading digital technology companies, noted that, in Sen. Hawley’s home state of Missouri alone, the Internet supports 63,000 jobs at 3,400 companies and contributed $17 billion in GDP to the state’s economy. Presumably, Sen. Hawley would not want to see those benefits “disappear” along with the social media sites that helped give rise to them.

But the Internet and social media have an equally profound impact on the entire U.S. economy, adding over 9,000 jobs and nearly 570 businesses to each metropolitan statistical area. The Digital Economy is a great American success story that is the envy of the world, not something to be lamented and disparaged as Sen. Hawley has.

For someone who believes that Facebook is a “drug” and a “parasite,” it is curious how active Sen. Hawley is on Facebook, as well as on Twitter. If he really believes that “we’d be better off if Facebook disappeared,” then he should lead by example and get off the sites. But that is a decision he will have to make for himself. He should not, however, make it for the rest of us.

I (Eye), Robot? https://techliberation.com/2019/05/08/i-eye-robot/ Wed, 08 May 2019 14:24:57 +0000

[Originally published on the Mercatus Bridge blog on May 7, 2019.]

I became a little bit more of a cyborg this month with the addition of two new eyes—eye lenses, actually. Before I had even turned 50, the old lenses that Mother Nature gave me were already failing due to cataracts. But after having two operations this past month and getting artificial lenses installed, I am seeing clearly again thanks to the continuing miracles of modern medical technology.

Cataracts can be extraordinarily debilitating. One day you can see the world clearly, the next you wake up struggling to see through a cloudy ocular soup. It is like looking through a piece of cellophane wrap or a continuously unfocused camera.

If you depend on your eyes to make a living as most of us do, then cataracts make it a daily struggle to get even basic things done. I spend most of my time reading and writing each workday. Once the cataracts hit, I had to purchase a half-dozen pairs of strong reading glasses and spread them out all over the place: in my office, house, car, gym bag, and so on. Without them, I was helpless.

Reading is especially difficult in dimly lit environments, and even with strong glasses you can forget about reading the fine print on anything. Every pillbox becomes a frightening adventure. I invested in a powerful magnifying glass to make sure I didn’t end up ingesting the wrong things.

For those afflicted with particularly bad cataracts, it becomes extraordinarily risky to drive or operate machinery. More mundane things—watching TV, tossing a ball with your kid, reading a menu at many restaurants, looking at art in a gallery—also become frustrating.

Open Your Eyes to the Wonders of Innovation

In the past, there was very little that could be done about cataracts unless one was willing to undergo extremely dangerous procedures. The oldest type of cataract surgery (“couching”) involved the use of sharp instruments such as thorns and needles to rip the cloudy lens out of the eye. Unsurprisingly, blindness was a common result of this primitive practice. As medical techniques and instruments improved, doctors were able to perform more sophisticated and successful surgeries, albeit still with some risks because human hands were still doing much of the work.

Today, thanks to remarkable advances in medicine, all this is done in a few minutes with the assistance of laser technology. Better yet, patients get to choose exactly what sort of replacement lens they will have installed. I chose “multifocal intraocular” replacement lenses, which let me see near and far equally well.

When you have cataracts in both eyes, they usually perform the surgeries a few weeks apart to make sure one eye comes out alright before getting the other done. Both my outpatient procedures were quick, painless, and remarkably effective. Astonishingly, within 24 hours of having both surgeries, I tested at better than 20/15 vision, which is close to perfect. It was like regaining a lost superpower.

Am I a Cyborg?

My first-hand experience with the miracles of modern medical technology makes me feel even more strongly about what I do for a living. I have spent my life covering emerging technology policy and responding to tech critics, who have a litany of grievances about modern inventions. One common complaint is that today’s technologies are “dehumanizing,” or threaten to turn us all into some sort of cyborgs.

To be sure, my eye surgeries did indeed make me just a little bit less human. After all, I am walking around today with artificial lenses affixed to my eyeballs. Moreover, I previously had eye surgery to correct strabismus, which is basically a form of crossed eyes. Had I remained perfectly “human” or “natural,” I would still be trying to look at the world through two crossed eyes covered with cloudy lenses. No thanks, Mother Nature!

Incidentally, I also have a metal plate and six pins in my ankle from a nasty compound fracture I sustained in the late 1990s. So, my foot isn’t completely “natural” either. But without those implants, I would not likely have walked properly again. Also, due to a combination of bad genes and poor dietary habits, my mouth is full of so many replacement teeth and crowns that I can’t even count them all. Without them, I probably would have needed dentures by age 40, just as my poor grandmother did once her teeth failed her for similar reasons.

Meanwhile, my left knee and right hip have been acting up in recent years, making me wonder if replacements may be needed down the road. Finally, my hearing isn’t so great either after years of abusing my ears at concerts and with speakers played at unhealthy volumes. (Turn down those headphones, kids!) I suspect some sort of hearing supplement awaits me in the future so I can continue to hear properly.

Enhancing Our Humanity

Given the medical procedures I’ve had done or might do, it’s fair to say that the critics are correct: I really am becoming more of a cyborg—part biological, part technological. But what of it? Certainly, my life and the lives of countless other people have been improved thanks to “artificial” improvements to our bodies.

As Joel Garreau noted in his brilliant 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies—And What It Means to Be Human, the history of our species is one of constant improvements to our health and capabilities through technological means. We have augmented our senses and abilities through the use of spectacles, hearing aids, artificial limbs, implants, and various other specialized medicines and treatments. We are living longer, healthier, less painful lives because of it.

Some critics respond by saying that certain “basic” technological improvements to human health are fine, or perhaps should even be subsidized and available to all. One era’s “radical” enhancements become the next generation’s human rights! We have seen that story unfold in the realm of reproductive health, for example. As Jordan Reimschisel and I have documented, in vitro fertilization (IVF) was originally met with hostility in the 1970s, with various authorities objecting to the idea of being able to “play God.” Opposition subsided quickly, however, as public acceptance and demand grew. Today, IVF is often covered by insurance plans.

Still, critics of newer technological capabilities tend to frown upon more sophisticated technological enhancements that could radically enhance our capabilities in ways that supposedly “dehumanize” us. There are always risks associated with new technological capabilities, but through ongoing trial and error experimentation, we find new ways to counter adversity and ailments—and yes, even overcome some of our inherent human limitations. We are not destined to become mindless automatons just because technology enhances our humanity in these ways. Indeed, there is nothing more human than building new and better tools to improve the quality of the lives of people across the globe.

We Can Cope with Change

Critics are fond of falling back on worst-case “technopanic” scenarios ripped from sci-fi novels, movies, and shows to explain how, if we are not careful, we are all just one modification away from creating (or becoming) Frankenstein monsters. We should heed those warnings to some extent, but not to the extent those critics suggest.

There are legitimate ethical issues associated with certain medical treatments and human enhancements. Genetic editing, for example, holds both promise and peril for our species. By modifying our genetic code, we can counter or even defeat debilitating or deadly diseases or ailments before they hobble us or our children. Of course, genetic modification could also be used in unsettling ways by parents or governments to create “designer babies” that have no choice in how their genetic code is altered before birth.

Ethical guidelines, and even some public policies, will need to be crafted and continuously updated to keep pace with these challenges. But we must not let worst-case thinking determine the future of all forms of human modification such that the many possible best-case outcomes are discouraged in the process. That would represent a massive setback for the millions of humans, including the unborn ones, who might be threatened by debilitating ailments.

Just as technological innovation gave me (quite literally) a new outlook on the world, so too can it open up new possibilities for countless others. Each day brings inspiring news about how innovation is helping us overcome whatever ails us. The Wall Street Journal reported recently that, “[s]cientists have harnessed artificial intelligence to translate brain signals into speech, in a step toward brain implants that one day could let people with impaired abilities speak their minds.”

More modern miracles like that await us—so long as critics and regulators don’t hold back important innovations in medical technology. In the meantime, thanks to my new cyborg eyes, I have seven old pairs of reading glasses I no longer need, in case anyone wants them.

Podcast about the Future of Emerging Tech Innovation & Entrepreneurialism https://techliberation.com/2019/04/08/podcast-about-the-future-of-emerging-tech-innovation-entrepreneurialism/ Mon, 08 Apr 2019 19:24:33 +0000

It was my great pleasure to recently join Paul Matzko and Will Duffield on the Building Tomorrow podcast to discuss some of the themes in my last book and my forthcoming one. During our 50-minute conversation, which you can listen to here, we discussed:

  • the “pacing problem” and how it complicates technological governance efforts;
  • the steady rise of “innovation arbitrage” and medical tourism across the globe;
  • the continued growth of “evasive entrepreneurialism” (i.e., efforts to evade traditional laws & regs while innovating);
  • new forms of “technological civil disobedience;”
  • the rapid expansion of “soft law” governance mechanisms as a response to these challenges; and,
  • craft beer bootlegging tips!  (Seriously, I move a lot of beer in the underground barter markets).

Bounce over to the Building Tomorrow site and give the show a listen. Fun chat.

Three Short Responses To The Pacing Problem https://techliberation.com/2018/11/27/three-short-responses-to-the-pacing-problem/ Tue, 27 Nov 2018 17:16:38 +0000

Contemporary tech criticism displays an anti-nostalgia: instead of reverence for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote that old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today.

The distance between the now and the future finds its hook in tech policy in the pacing problem, a term describing the mismatch between advancing technologies and society’s efforts to cope with them. Vivek Wadhwa explained that, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” In The Laws of Disruption, Larry Downes explained the pacing problem like this: “technology changes exponentially, but social, economic, and legal systems change incrementally.” Or, as Adam Thierer wondered, “What happens when technological innovation outpaces the ability of laws and regulations to keep up?”

Here are three short responses.

Technological Determinism

Part of what drives the worry about a pacing problem is rooted in a belief in technological determinism. Determinism aligns human actors and technological objects in a causal relationship. Technology acts on society as an outside force. In this view of the world, technology is separate from society and thus can advance by leaps and bounds before society and regulation can catch up. In other words, technology is made an independent variable that acts upon us all.

Yet, that doesn’t describe the world in which technological objects are created and sustained. The iPhone was created by Apple following the success of the iPod in melding the hardware platform with the content of the mobile web, ultimately for the purpose of boosting sales. And people became enamored with it, lining up days before its release to grab one. Technologies aren’t alien objects. They are molded by particular interests and institutional goals, and rooted in society, especially the bourgeois virtues.

Technologies exist within human ecology, just as economic systems do. To make technology an outside force misplaces the role of human values in the creation and adoption of innovation. As separated from society, determinism allows for technology to be both mythologized and demonized. Technologies cannot outpace our ability to adapt. Rather, the speed of change, of innovation, is rate-limited by society’s ability to adapt. As Robin Hanson explained, “society’s ability to adapt is the primary constraint on how fast we adopt new technologies.”

The Technological Accident

The pacing problem also gains purchase because new technologies create the possibility for new accidents. As philosopher Paul Virilio wrote,

To invent the sailing ship or the steamer is to invent the shipwreck. To invent the train is to invent the rail accident of derailment. To invent the family automobile is to produce the pile-up on the highway.

Every newly created technology comes with the potential for problems. So the possibility set for accidents increases dramatically when a new technology comes onto the scene. But it isn’t the case that all of those risks will be manifested. Only a subset of potential problems will ever become realized. As such, social and regulatory systems don’t need to have all the answers in advance. Rather, there need to be flexible systems in place to deal with issues as they actually arise.

Regulation as a Real Option

Perhaps, however, we have been thinking about the pacing problem incorrectly. Maybe the pacing problem isn’t a problem as much as it is a reflection of uncertainty. Again, Vivek Wadhwa pithily explained this problem, saying, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” Consider that phrase: “what the laws should be.” There is little agreement as to how we should regulate social media. In other words, there is regulatory uncertainty. The concept of a real option might help make sense of this.

Real options are the investment choices that a company’s management makes in order “to expand, change or curtail projects based on changing economic, technological or market conditions.” While originally used in strictly financial terms, economists Avinash Dixit and Robert Pindyck have adapted this concept to understand how firms invest, or not, in the face of regulatory uncertainty. As you read this paragraph from the first chapter of their book on the subject, replace the term investment with regulation and see what you think:

Most investment decisions share three important characteristics in varying degrees. First, the investment is partially or completely irreversible. In other words, the initial cost of investment is at least partially sunk; you cannot recover it all should you change your mind. Second, there is uncertainty over the future rewards from the investment. The best you can do is to assess the probabilities of the alternative outcomes that can mean greater or smaller profit (or loss) for your venture. Third, you have some leeway about the timing of your investment. You can postpone action to get more information (but never, of course, complete certainty) about the future.

There are strong corollaries. First, most regulatory decisions are difficult to reverse. It is rare for regulations to be stricken from the books, and even if they are, the affected industries are often still impacted in more subtle ways. Second, the potential benefits from a regulatory action are uncertain, as Wadhwa pointed out. And finally, government bodies have some leeway about the timing of their regulatory actions. Putting all of this together, regulation might be thought of as a real option.
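The option value of waiting can be made concrete with the classic widget example from the opening chapter of Dixit and Pindyck's book. The numbers below are illustrative, adapted from that example rather than drawn from this post:

```python
# Option value of waiting, adapted from Dixit and Pindyck's widget
# example (illustrative numbers; not from this post).
# Invest $1,600 in a factory that earns the widget price each year,
# forever. The price is $200 today; next year it rises to $300 or falls
# to $100 with equal odds, then stays there. The discount rate is 10%.
cost, p0, p_high, p_low, prob_high, r = 1600.0, 200.0, 300.0, 100.0, 0.5, 0.10

# Invest now: collect $200 today, then the expected price forever after.
expected_price = prob_high * p_high + (1 - prob_high) * p_low
npv_invest_now = -cost + p0 + expected_price / r   # -1600 + 200 + 2000

# Wait a year and invest only if the price turns out to be high.
value_if_high = -cost + p_high + p_high / r        # value seen at year 1
npv_wait = prob_high * value_if_high / (1 + r)     # discounted to today

print(round(npv_invest_now), round(npv_wait))  # prints: 600 773
```

Even though investing immediately has a positive expected value, the ability to defer and learn is worth more. By analogy, a regulator facing uncertainty may rationally defer a rule to gather information rather than face a once-and-for-all choice between regulating and never regulating.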

As economists Bronwyn H. Hall and Beethika Khan explained:

The most important thing to observe about this kind of [investment] decision is that at any point in time the choice being made is not a choice between adopting and not adopting but a choice between adopting now or deferring the decision until later.

In the same way, government regulation isn’t about regulating now or not regulating at all, but about regulating now or deferring the decision until later. That sounds a lot to me like the pacing problem.  

Book Review: Cathy O’Neil’s “Weapons of Math Destruction” https://techliberation.com/2018/11/07/book-review-cathy-oneils-weapons-of-math-destruction/ Wed, 07 Nov 2018 17:01:28 +0000

To read Cathy O’Neil’s Weapons of Math Destruction (2016) is to experience another in a line of progressive pugilists of the technological age. Where Tim Wu took on the future of the Internet and Evgeny Morozov chided online slacktivism, O’Neil takes on algorithms, or what she has dubbed weapons of math destruction (WMD).

O’Neil’s book came at just the right moment in 2016. It sounded the alarm about big data just as it was becoming a topic for public discussion. And now, two years later, her worries seem prescient. As she explains in the introduction,

Big Data has plenty of evangelists, but I’m not one of them. This book will focus sharply in the other direction, on the damage inflicted by WMDs and the injustice they perpetuate. We will explore harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job. All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.

O’Neil is explicit in laying the blame at the feet of the WMDs: “You cannot appeal to a WMD. That’s part of their fearsome power. They do not listen.” Yet, these models aren’t deployed and adopted in a frictionless environment. Instead, they “reflect goals and ideology,” as O’Neil readily admits. Where Weapons of Math Destruction falters is that it ascribes too much agency to algorithms in places, and in doing so misses the broader politics behind algorithmic decision making.

For example, O’Neil begins her book with a story about Sarah Wysocki, a teacher who got fired from the D.C. public school system because of how the teacher evaluation system ranked her abilities. O’Neil writes,

Yet at the end of the 2010-11 school year, Wysocki received a miserable score on her IMPACT evaluation. Her problem was a new scoring system known as value-added modeling, which purported to measure her effectiveness in teaching math and language skills. That score, generated by an algorithm, represented half of her overall evaluation, and it outweighed the positive reviews from school administrators and the community. This left the district with no choice but to fire her, along with 205 other teachers who had IMPACT scores below the minimal threshold.
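The weighting arithmetic in that passage can be sketched in a few lines. This is a minimal illustration, not D.C.'s actual formula: the 50 percent value-added weight comes from the passage above, while the component names, the 100-point scales, and the dismissal threshold are hypothetical placeholders.

```python
# Hypothetical sketch of an IMPACT-style weighted evaluation. The 50%
# value-added weight is from the passage quoted above; the component
# names, scales, and threshold are invented for illustration.

def overall_score(value_added, human_reviews, weight_va=0.5):
    """Blend an algorithmic value-added score with reviews from
    administrators and the community, each on a 0-100 scale."""
    return weight_va * value_added + (1 - weight_va) * human_reviews

def retention_decision(score, minimal_threshold=50.0):
    """A teacher scoring below the threshold is subject to dismissal."""
    return "retain" if score >= minimal_threshold else "dismiss"

# Glowing human reviews cannot offset a very low value-added score
# when the algorithm carries half the weight:
score = overall_score(value_added=10, human_reviews=85)
print(score, retention_decision(score))  # prints: 47.5 dismiss
```

Note that the outcome turns entirely on the weight, which was an institutional choice rather than a property of the algorithm itself, which is precisely why the politics behind the weighting matter.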

In the ensuing pages, O’Neil describes the scoring system, how it was designed, and how it affected Wysocki. But the broader politics behind the scoring system that ousted Wysocki are just as important.

Why, for example, was the value-added score such a prominent feature in the teacher evaluation as compared to administrative and parent input? Well, research from the Bill & Melinda Gates Foundation found that a teacher’s value-added track record is among the strongest predictors of student achievement gains. So, the school district changed its evaluations to make it a central feature. As Jason Kamras, chief of human capital for D.C. schools, told the Washington Post, “We put a lot of stock in it.” But that decision wasn’t without its critics, including Washington Teachers’ Union President Nathan Saunders, who said, “You can get me to walk down the road with you to say value-added is relevant, but 50 percent is too weighted.”

Moreover, the weights changed in 2009 because the Chancellor of D.C. public schools, Michelle Rhee, had negotiated a new deal with the teachers union. In exchange for 20 percent pay raises and bonuses of $20,000 to $30,000 for effective teachers, the district was given more leeway to fire teachers for poor performance, which it did using the IMPACT system. In part, this fight was spurred on because Obama-era Education Secretary Arne Duncan was doling out $3.4 billion in Race to the Top grants that focused on teacher effectiveness measures. Rhee, for her part, was Chancellor because D.C. Mayor Adrian Fenty had secured legislation that bypassed the Board of Education and gave him control of the schools.

Yes, Wysocki might have been a false positive, but what about all of the poorly performing teachers that the previous system hadn’t let go? By focusing on the teachers, O’Neil steers the conversation away from what should be the central question: did the change actually help students learn and achieve?

Truth be told, my quibbles with Weapons of Math Destruction fit into two types. The first class relates to questions of emphasis and scope, which become important when the reader tallies off the costs and benefits of algorithms. Perhaps it is the case that “The U.S. News college ranking has great scale, inflicts widespread damage, and generates an almost endless spiral of destructive feedback loops.” But on the other hand, lower-ranked colleges have decreased their net tuition and accepted a larger share of applicants. Yes, credit scores “open doors for some of us, while slamming them in the face of others,” but in what proportion? In Chile, for example, credit bureaus were forced to stop reporting defaults in 2012. The change was found to reduce the costs for most of the poorer defaulters, but raised the costs for non-defaulters, leading to a 3.5 percent decrease in lending and a reduction in aggregate welfare. It could be the case that “the payday loan industry operates WMDs,” but it is unclear where low-income Americans will find short-term loans if they are outlawed.

Second, Weapons of Math Destruction continuously toys with important questions regarding the moral agency of technologies but never explicitly lays them out. How much value should be ascribed to technologies? To what degree are technologies value-neutral or value-laden? All technologies, including the algorithms that O’Neil describes, are designed and implemented for certain kinds of instrumental outcomes by companies and government agencies. An institution has to take on the task of adopting an algorithm for decision-making purposes, and thus the algorithm reflects the institutional goals.

Should the algorithm be blamed, or the institutional structures that put it into place, or some combination of both? Reading with a careful eye, one will easily see that this is the fundamental question of the book, especially since O’Neil wonders whether “we’ve eliminated human bias or simply camouflaged it with technology.” But the real answer isn’t in this binary. Algorithmic problems are pluralist.
