Search Results for “regulatory capture” – Technology Liberation Front
https://techliberation.com — Keeping politicians’ hands off the Net & everything else related to technology

America Does Not Need a Digital Consumer Protection Commission
https://techliberation.com/2023/08/10/america-does-not-need-a-digital-consumer-protection-commission/
Thu, 10 Aug 2023 15:25:01 +0000

The New York Times today published my response to an op-ed by Senators Lindsey Graham and Elizabeth Warren calling for a new “Digital Consumer Protection Commission” to micromanage the high-tech information economy. “Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution,” I argue. Here’s my full response:

Senators Lindsey Graham and Elizabeth Warren propose a new federal mega-regulator for the digital economy that threatens to undermine America’s global technology standing.

A new “licensing and policing” authority would stall the continued growth of advanced technologies like artificial intelligence in America, leaving China and others to claw back crucial geopolitical strategic ground.

America’s digital technology sector enjoyed remarkable success over the past quarter-century — and provided vast investment and job growth — because the U.S. rejected the heavy-handed regulatory model of the analog era, which stifled innovation and competition.

The tech companies that Senators Graham and Warren cite (along with countless others) came about over the past quarter-century because we opened markets and rejected the monopoly-preserving regulatory regimes that had been captured by old players.

The U.S. has plenty of federal bureaucracies, and many already oversee the issues that the senators want addressed. Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution.

Dispatch from JMI’s “Tech & Innovation Summit” Panel on Progress Studies
https://techliberation.com/2022/09/16/dispatch-from-jmis-tech-innovation-summit-panel-on-progress-studies/
Fri, 16 Sep 2022 13:59:12 +0000

It was my pleasure this week to participate in a panel discussion about the future of innovation policy at the James Madison Institute’s 2022 Tech and Innovation Summit in Coral Gables, FL. Our conversation focused on the future of Progress Studies, which is one of my favorite topics. We were asked to discuss five major questions, and below I have summarized some of my answers, plus some other thoughts about what I heard from others at the conference.

  1. What is progress studies and why is it so needed today?

In a sense, Progress Studies is nothing new. The field goes back at least to the days of Adam Smith, and plenty of important scholars have been thinking about it ever since. Those scholars and policy advocates have long been engaged in trying to figure out the secret sauce that powers economic growth and human prosperity. We just didn’t call it Progress Studies in the old days.

The reason Progress Studies is important is that technological innovation has been shown to be the fundamental driver of improvements in human well-being over time. When we can move the needle on progress, it helps individuals extend and improve their lives, incomes, and happiness. By extension, progress helps us live lives of our choosing. As Hans Rosling brilliantly argued, the goal of expanding innovation opportunities and raising incomes “is not just bigger piles of money” or more leisure time. “The ultimate goal is to have the freedom to do what we want.”

  2. What don’t policymakers get about progress?

Policymakers often fail to appreciate the connection between innovation policy defaults and actual real-world innovation outcomes. Here is the biggest no-duh statement ever uttered: If you discourage innovation by default, you’ll get a lot less of it. In other words, incentives matter if you hope to create a positive innovation culture. Innovation culture refers to the various social and political attitudes, policies and entrepreneurial activities that, taken together, influence the innovative capacity of a particular region.

Thus, when policymakers make the Precautionary Principle the legal default for innovative activities, it means that government has put a red light in front of entrepreneurs and treated them and their innovations as guilty until proven innocent.  That’s a sure-fire recipe for stagnation.

The better approach is to make Permissionless Innovation our policy default and treat entrepreneurs and innovations as innocent until proven guilty. When our policy defaults offer entrepreneurs more green lights instead of red ones, it encourages more experimentation with new and better ways of doing things. In turn, this spurs business formation, job creation, new industries and products, and broad-based economic growth.

But policymakers consistently ignore this fundamental reality about the connection between policy and progress.

  3. Can you think of any states or governments that are doing a good job of putting the insights of progress studies into practice?

This summer, I co-authored an essay, “How Arizona Is Getting Innovation Culture Right,” highlighting the many important reforms undertaken over the past eight years by Gov. Doug Ducey and the Arizona Legislature. Arizona has advanced several reforms that have helped the state get its innovation culture right, both broadly and narrowly. Broadly speaking, the state took steps to minimize red tape burdens and streamline permitting processes and occupational licensing mandates. It also promoted “right to earn a living” and “right to try” initiatives to broaden worker and patient opportunities.

In terms of more targeted reforms, Arizona took steps to clear the way for greater broadband rollout and encouraged experimentation with commercial drones and driverless cars. The state also helped pioneer the use of “regulatory sandboxes,” which grant innovators a temporary safe space free of excessive regulatory burdens so they can experiment with new products and services.

And then there’s the city of Miami. At the JMI event, Miami Mayor Francis Suarez delivered a keynote address and identified three keys to attracting talent and building opportunity: (1) keep taxes low, (2) keep people safe, and (3) focus on innovation. He’s following that script and making Miami a hotbed of entrepreneurial opportunity.

Mayor Suarez spoke of how he is embracing emerging technologies like blockchain to compete with the traditional geographic Goliaths of tech, like San Francisco and New York. There’s been a massive inflow of companies and investors as a result. The city has become #1 in tech job growth and the inflow of tech entrepreneurs. “It turns out that if you welcome people… they come,” he said. “They want to migrate to places that are on the cutting edge of technology” and find “pathways to prosperity.”

Miami and Arizona offer great models that other cities and states could follow if they hope to improve their own innovation culture.

  4. What is the difference between progress studies and industrial organization, or industrial policy, or “government planning, but for innovation”?

Many policymakers foolishly believe there exists a precise technocratic cocktail that can immediately unlock innovation through highly targeted interventions and spending initiatives. In reality, achieving consistent growth and prosperity requires more than Big Government gimmicks. It’s a long game.

Politicians and pundits are fond of using machine-like metaphors and insisting that they have the ability to “fine-tune” innovative outcomes or “dial in” economic development according to a precise formula. This is how we end up trillions in debt without much to show for it. Most recently, we’ve witnessed an “orgy of spending” on industrial policy schemes at the federal level.

The better metaphor for thinking about a nation’s innovation culture might be a plant or garden. Two of the great Progress Studies thinkers are F. A. Hayek and Joel Mokyr. Hayek once suggested that policymakers should aim to “cultivate a growth by providing the appropriate environment, in the manner in which the gardener does this for his plants.”  And Mokyr has argued that technological innovation and economic progress must be viewed as “a fragile and vulnerable plant, whose flourishing is not only dependent on the appropriate surroundings and climate, but whose life is almost always short. It is highly sensitive to the social and economic environment and can easily be arrested by relatively small external changes.”

Thus, the technocratic industrial policy mindset is always looking for “sexy” initiatives that capture a lot of short-term media attention, but typically fail to produce meaningful innovations or lasting growth. What’s more important to long-term prosperity is that policymakers get the “boring” stuff right.

The building blocks of this “boring” general approach to economic development are a mix of broadly applicable tax, spending, regulatory and legal rules that help create a stable innovation ecosystem. Again, it’s like Mayor Suarez’s three-prong approach of low taxes, safe communities, and a welcoming embrace of entrepreneurialism. That’s the secret sauce that fuels long-term progress and sustainable prosperity.

  5. Is there a disconnect between the theories of progress and the practice – in other words, is it a problem of governance forms?

Indeed, I already mentioned the difference between the Precautionary Principle and Permissionless Innovation, and it’s always interesting to me how many scholars ignore the importance of these governance forms when thinking about how to advance progress. There exists an unfortunate tendency among many to either ignore or repeat the mistakes of the past. Having made significant economic and societal gains thanks to past technological progress, many pundits and policymakers come to take much of it for granted. Thus, Progress Studies requires a process of constant re-education to remind each new generation of what helped raise our living standards so dramatically over the past two centuries.

The dramatic growth in incomes, life expectancy, and human welfare was not the product of sheer luck but of important policy choices. The freedom to think, to innovate, and to trade are the three freedoms that gave us our modern riches. If our governance forms limit those foundational freedoms, our current welfare and future prosperity will suffer. This is the great lesson of Progress Studies.


Additional Reading from Adam Thierer on Progress Studies

 

Why the Endless Techno-Apocalyptica in Modern Sci-Fi?
https://techliberation.com/2022/09/02/why-the-endless-techno-apocalyptica-in-modern-sci-fi/
Fri, 02 Sep 2022 15:06:06 +0000

James Pethokoukis of AEI interviews me about the current miserable state of modern science fiction, which is just dripping with dystopian dread in every movie, show, and book plot. How does all the techno-apocalyptica affect societal and political attitudes about innovation broadly and emerging technologies in particular? Our discussion builds on a recent Discourse article of mine, “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics.” [Pasted below.] Swing on over to Jim’s “Faster, Please” newsletter and hear what Jim and I have to say. And, as a bonus question, Jim asked me whether we are doing a good job of inspiring kids to have a sense of wonder and to take risks. I have some serious concerns that we are falling short on that front.

How Science Fiction Dystopianism Shapes the Debate over AI & Robotics

[Originally ran on Discourse on July 26, 2022.]

George Jetson will be born this year. We don’t know the exact date of this fictional cartoon character’s birth, but thanks to some skillful Hanna-Barbera hermeneutics, the consensus seems to be sometime in 2022.

In the same episode in which we learn George’s approximate age, we’re also told the good news that his life expectancy in the future is 150 years. It was one of the many ways The Jetsons, though a cartoon for children, depicted a better future for humanity thanks to exciting innovations. Another was a helpful robot named Rosie, along with a host of other automated technologies—including a flying car—that made George and his family’s life easier.

 

Most fictional portrayals of technology today are not as optimistic as The Jetsons, however. Indeed, public and political conceptions about artificial intelligence (AI) and robotics in particular are being strongly shaped by the relentless dystopianism of modern science fiction novels, movies and television shows. And we are worse off for it.

AI, machine learning, robotics and the power of computational science hold the potential to drive explosive economic growth and profoundly transform a diverse array of sectors, while providing humanity with countless technological improvements in medicine and healthcare, financial services, transportation, retail, agriculture, entertainment, energy, aviation, the automotive industry and many others. Indeed, these technologies are already deeply embedded in these and other industries and making a huge difference.

But that progress could be slowed and in many cases even halted if public policy is shaped by a precautionary-principle-based mindset that imposes heavy-handed regulation based on hypothetical worst-case scenarios. Unfortunately, the persistent dystopianism found in science fiction portrayals of AI and robotics conditions the ground for public policy debates, while also directing attention away from some of the more real and immediate issues surrounding these technologies.

Incessant Dystopianism Untethered from Reality

In his recent book Robots, Penn State business professor John Jordan observes how over the last century “science fiction set the boundaries of the conceptual playing field before the engineers did.” Pointing to the plethora of literature and film that depicts robots, he notes: “No technology has ever been so widely described and explored before its commercial introduction.” Not the internet, cell phones, atomic energy or any others.

Indeed, public conceptions of these technologies, and even the very vocabulary of the field, have been shaped heavily by sci-fi plots, beginning a hundred years ago with the 1920 play R.U.R. (Rossum’s Universal Robots), which gave us the term “robot,” and Fritz Lang’s 1927 silent film Metropolis, with its memorable Maschinenmensch, or “machine-human.” There has been a deep and rich imagination surrounding AI and robotics since then, but it has tended to be mostly negative and has grown more hostile over time.

The result has been a public and policy dialogue about AI and robotics that is focused on an endless parade of horribles about these technologies. Not surprisingly, popular culture also affects journalistic framings of AI and robotics. Headlines breathlessly scream of how “Robots May Shatter the Global Economic Order Within a Decade,” but only if we’re not dead already because… “If Robots Kill Us, It’s Because It’s Their Job.”

Dark depictions of AI and robotics are ever-present in popular modern sci-fi movies and television shows. A short list includes: 2001: A Space Odyssey, Avengers: Age of Ultron, Battlestar Galactica (both the 1978 original and the 2004 reboot), Black Mirror, Blade Runner, Ex Machina, Her, The Matrix, Robocop, The Stepford Wives, Terminator, Transcendence, Tron, WALL-E, Wargames and Westworld, among countless others. The least nefarious plots among these films and television shows rest on the idea that AI and robotics are going to drive us to a life of distraction, addiction or sloth. In more extreme cases, we’re warned about a future in which we are either going to be enslaved or destroyed by our new robotic or algorithmic overlords.

Don’t get me wrong; the movies and shows on the above list are some of my favorites. 2001 and Blade Runner are both in my top 5 all-time flicks, and the reboot of Battlestar is one of my favorite TV shows. The plots of all these movies and shows are terrifically entertaining and raise many interesting issues that make for fun discussions.

But they are not representative of reality. In fact, the vast majority of computer scientists and academic experts on AI and robotics agree that claims about machine “superintelligence” are wildly overplayed and that there is no possibility of machines gaining human-equivalent knowledge any time soon—or perhaps ever. “In any ranking of near-term worries about AI, superintelligence should be far down the list,” argues Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans.

Contra the Terminator-esque nightmares envisioned in so many sci-fi plots, MIT roboticist Rodney Brooks says that “fears of runaway AI systems either conquering humans or making them irrelevant aren’t even remotely well grounded.” John Jordan agrees, noting: “The fear and uncertainty generated by fictional representations far exceed human reactions to real robots, which are often reported to be ‘underwhelming.’”

The same is true for AI more generally. “A close inspection of AI reveals an embarrassing gap between actual progress by computer scientists working on AI and the futuristic visions they and others like to describe,” says Erik Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Larson refers to this extreme thinking about superintelligent AI as “technological kitsch,” or exaggerated sentimentality and melodrama that is untethered from reality. Yet the public imagination remains captivated by tales of impending doom.

Seeding the Ground with Misery and Misguided Policy

But isn’t it all just harmless fun? After all, it’s just make-believe. Moreover, can’t science fiction—no matter how full of techno-misery—help us think through morally weighty issues and potential ethical conundrums involving AI and robotics?

Yes and no. Titillating fiction has always had a cathartic element to it and helped us cope with the unknown and mysterious. Most historians believe it was Aristotle in his Poetics who first used the term katharsis when discussing how Greek tragedies helped the audience “through pity and fear effecting the proper purgation of these emotions.”

But are modern science fiction depictions of AI and robotics helping us cope with technological change, or instead just stoking a constant fear of it? Modern sci-fi isn’t so much purging negative emotion about the topic at hand as it is endlessly adding to the sense of dread surrounding these technologies. What are the societal and political ramifications of a cultural frame of reference that suggests an entire new class of computational technologies will undermine rather than enrich our human experiences and, possibly, our very existence?

The New Yorker’s Jill Lepore says we live in “A Golden Age for Dystopian Fiction,” but she worries that this body of work “cannot imagine a better future, and it doesn’t ask anyone to bother to make one.” She argues this “fiction of helplessness and hopelessness” instead “nurses grievances and indulges resentments” and that “[i]ts only admonition is: Despair more.” Lepore goes so far as to claim that, because “the radical pessimism of an unremitting dystopianism” has appeal to many on both the left and right, it “has itself contributed to the unravelling of the liberal state and the weakening of a commitment to political pluralism.”

I’m not sure dystopian fiction is driving the unravelling of pluralism, but Lepore is on to something when she notes how a fiction rooted in misery about the future will likely have political consequences at some point.

Techno-panic Thinking Shapes Policy Discussions

The ultimate question is whether public policy toward new AI and robotic technologies will be shaped by this hyperpessimistic thinking in the form of precautionary principle regulation, which essentially treats innovations as “guilty until proven innocent” and seeks to intentionally slow or retard their development.

If the extreme fears surrounding AI and robotics do inspire precautionary controls—as they already have in the European Union—then we need to ask how the preservation of the technological status quo could undermine human well-being by denying society important new life-enriching and life-saving goods and services. Technological stasis does not provide a safer or healthier society, but instead holds back our collective ability to innovate, prosper and better our lives in meaningful ways.

Louis Anslow, curator of the Pessimists Archive, calls this “the Black Mirror fallacy,” referencing the British television show that has enjoyed great success peddling tales of impending techno-disasters. Anslow defines the fallacy as follows: “When new technologies are treated as much more threatening and risky than old technologies with proven risks/harms. When technological progress is seen as a bigger threat than technological stagnation.”

Anslow’s Pessimists Archive collects real-world case studies of how moral panic and techno-panics have accompanied the introduction of new inventions throughout history. He notes, “Science fiction has conditioned us to be hypervigilant about avoiding dystopias born of technological acceleration and totally indifferent to avoiding dystopias born of technological stagnation.”

Techno-panics can have real-world consequences when they come to influence policymaking. Robert Atkinson, president of the Information Technology & Innovation Foundation (ITIF), has documented the many ways that “the social and political commentary [about AI] has been hype, bordering on urban myth, and even apocalyptic.” The more these attitudes and arguments come to shape policy considerations, the more likely it is precautionary principle-based recommendations will drive AI and robotics policy, preemptively limiting their potential. ITIF has published a report documenting “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” identifying how it will slow algorithmic advances in key sectors.

Similarly, in his important recent book Where Is My Flying Car?, scientist J. Storrs Hall documents how “regulation clobbered the learning curve” for many important technologies in the U.S. over the last half century, especially nuclear, nanotech and advanced aviation. Society lost out on many important innovations due to endless bureaucratic delays, often thanks to opposition from special interests, anti-innovation activists, overzealous trial lawyers and a hostile media. Hall explained how this also sent a powerful signal to talented young people who might have been considering careers in those sectors. Why go into a field demonized by so many and where your creative abilities will be hamstrung by precautionary constraints?

Disincentivizing Talent

Hall argues that in those crucial sectors, this sort of mass talent migration “took our best and brightest away from improving our lives,” and he warns that those who still hope to make a career in such fields should be prepared to be “misconstrued and misrepresented by activists, demonized by ignorant journalists, and strangled by regulation.”

Is this what the future holds for AI and robotics? Hopefully not. America continues to generate world-class talent on this front today in a diverse array of businesses and university programs. But if the waves of negativism about AI and robotics persist, we shouldn’t be surprised if it results in a talent shift away from building these technologies and toward fields that instead look to restrict them.

For example, Hall documents how, following the sudden shift in public attitudes surrounding nuclear power 50 years ago, “interests, and career prospects, in nuclear physics imploded” and “major discoveries stopped coming.” Meanwhile, enrollment in law schools and other soft sciences typically critical of technological innovation enjoyed greater success. Nobody writes any sci-fi stories about what a disaster that development has been for innovation in the energy sphere, even though it is now abundantly clear how precautionary principle policies have undermined environmental goals and human welfare, with major geopolitical consequences for many nations.

If America loses the talent race on the AI front, it has ramifications for global competitive advantage going forward, especially as China races to catch up. In a world of global innovation arbitrage, talent and venture capital will flow to wherever it is treated most hospitably. Demonizing AI and robotics won’t help recruit or retain the next generation of talent and investors America needs to remain on top.

Flipping the Script

Some folks have had enough of the relentless pessimism surrounding technology and progress in modern science fiction and are trying to do something to reverse it. In a 2011 Wired essay on the dangers of “Innovation Starvation,” the acclaimed novelist Neal Stephenson decried the fact that “the techno-optimism of the Golden Age of [science fiction] has given way to fiction written in a generally darker, more skeptical and ambiguous tone.” While good science fiction “supplies a plausible, fully thought-out picture of an alternate reality in which some sort of compelling innovation has taken place,” Stephenson said modern sci-fi was almost entirely focused on its potential downsides.

To help reverse this trend, Stephenson worked with the Center for Science and the Imagination at Arizona State University to launch Project Hieroglyph, an effort to support authors willing to take a more optimistic view of the future. It yielded a 2014 book, Hieroglyph: Stories and Visions for a Better Future, which included almost 20 contributors. Later, in 2018, The Verge launched the “Better Worlds” project to support 10 writers of “stories that inspire hope” about innovation and the future. “Contemporary science fiction often feels fixated on a sort of pessimism that peers into the world of tomorrow and sees the apocalypse looming more often than not,” said Verge culture editor Laura Hudson when announcing the project.

Unfortunately, these efforts have not captured much public attention, and that’s hardly surprising. “Pessimism has always been big box office,” says science writer Matt Ridley, primarily because it really is more entertaining. Even though many of the great sci-fi writers of the past, including Isaac Asimov, Arthur C. Clarke, and Robert Heinlein, wrote positively about technology, they ultimately had more success selling stories with darker themes. It’s just the nature of things more generally, from the best of Greek tragedy to Shakespeare and on down the line. There’s a reason they’re still rebooting Beowulf all these years later, after all.

So, There’s Star Trek and What Else?

While technological innovation will never enjoy the respect it deserves for being the driving force behind human progress, one can at least hope that more pop culture treatments of it might give it a fair shake. When I ask crowds of people to name a popular movie or television show that includes mostly positive depictions of technology, Star Trek is usually the first (and sometimes the only) thing people mention. It’s true that, on balance, technology was treated as a positive force in the original series, although “V’Ger”—a defunct space probe that attains a level of consciousness—was the prime antagonist in Star Trek: The Motion Picture. Later, Star Trek: The Next Generation gave us the always helpful android Data, but also created the lasting mental image of the Borg, a terrifying race of cyborgs hell-bent on assimilating everyone into their hive mind.

The Borg provided some of The Next Generation’s most thrilling moments, but also created a new cultural meme, with tech critics often worrying about how today’s humans are being assimilated into the hive mind of modern information systems. Philosopher Michael Sacasas even coined the term “the Borg Complex” to refer to a supposed tendency “exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile.” After years of a friendly back-and-forth with Sacasas, I even felt compelled to wrap up my book Permissionless Innovation with a warning to other techno-optimists not to fall prey to this deterministic trap when defending technological change. Regardless of where one falls on that issue, the fact that Sacasas and I were having a serious philosophical discussion premised on a famous TV plotline serves as another indication of how much science fiction shapes public and intellectual debate over progress and innovation.

And, truth be told, some movies know how to excite the senses without resorting to dystopianism. Interstellar and The Martian are two recent examples that come to mind. Interestingly, space exploration technologies themselves usually get a fair shake in many sci-fi plots, often only to be undermined by onboard AIs or androids, as occurred not only in 2001 with the eerie HAL 9000, but also in Alien.

There are some positive (and sometimes humorous) depictions of robots, as in Robot & Frank, or touching ones, as in Bicentennial Man. Beyond The Jetsons, other cartoons like The Iron Giant and Big Hero 6 offer kinder visions of robots. KITT, a super-intelligent robot car, was Michael Knight’s dependable ally in NBC’s Knight Rider. And R2-D2 is always a friendly helper throughout the Star Wars franchise. But generally speaking, modern sci-fi continues to churn out far more negativism about AI and robotics.

What If We Took It All Seriously?

So long as the public and political imagination is spellbound by the machine machinations that dystopian sci-fi produces, we’ll be at risk of being stuck with absurd debates that have no meaningful solution other than “Stop the clock!” or “Ban it all!” Are we really being assimilated into the Borg hive mind, or just buying time until a coming robopocalypse grinds us into dust (or dinner)?

If there were a kernel of truth to any of this, then we should adopt some of the extreme solutions Nick Bostrom of Oxford suggests in his writing on these issues. Those radical steps include worldwide surveillance and enforcement mechanisms for scientists and researchers developing algorithmic and robotic systems, as well as some sort of global censorship of information about these capabilities to ensure the technology is not used by bad actors.

To Bostrom’s great credit, he is at least willing to tell us how far he’d go. Most of today’s tech critics prefer to just spread a gospel of gloom and doom and suggest something must be done, without getting into the ugly details about what a global control regime for computational science and robotic engineering looks like. We should reject such extremist hypothesizing and understand that silly sci-fi plots, bombastic headlines and kooky academic writing should not be our baseline for serious discussions about the governance of artificial intelligence and robotics.

At the same time, we absolutely should consider what downsides any technology poses for individuals and society. And, yes, some precautions of a regulatory nature will be needed. But most of the problems envisioned by sci-fi writers are not what we should be concerned with. There are far more specific and nuanced problems that AI and robotics confront us with today, and they deserve more serious consideration and governance steps. How to program safer drones and driverless cars, improve the accuracy of algorithmic medical and financial technologies, and ensure better transparency for government uses of AI are all more mundane but very important issues that require reasoned discussion and balanced solutions today. Dystopian thinking gives us no roadmap to get there other than extreme solutions.

Imagining a Better Future

The way forward here is neither to indulge in apocalyptic fantasies nor pollyannaish techno-optimism, but to approach these technologies with reasoned risk analysis, sensible industry best practices, educational efforts and other agile governance steps. In a forthcoming book on flexible governance strategies for AI and robotics, I outline how these and other strategies are already being formulated to address real-world challenges in fields as diverse as driverless cars, drones, machine learning in medicine and much more.

A wide variety of ethical frameworks, offered by professional associations, academic groups and others, already exists to “bake in” best practices and align AI design with widely shared goals and values while also “keeping humans in the loop” at critical stages of the design process to ensure that they can continue to guide and occasionally realign those values and best practices as needed.

When things do go wrong, many existing remedies are available, including a wide variety of common law solutions (torts, class actions, contract law, etc.), recall authority possessed by many regulatory agencies, and various consumer protection policies and other existing laws. Moreover, the most effective solution to technological problems usually lies in more innovation, not less. It is only through constant trial and error that humanity discovers better and safer ways of satisfying important wants and needs.

These are complicated and nuanced issues that demand tailored and iterative governance responses. But this should not be done using inflexible, innovation-limiting mandates. Concerns about AI dangers deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions.

So, enjoy your next dopamine hit of sci-fi hysteria—I know I will, too. But don’t let that be your guide to the world that awaits us. Even if most sci-fi writers can’t imagine a better future, the rest of us can.

The Proper Governance Default for AI https://techliberation.com/2022/05/26/the-proper-governance-default-for-ai/ https://techliberation.com/2022/05/26/the-proper-governance-default-for-ai/#comments Thu, 26 May 2022 20:15:21 +0000 https://techliberation.com/?p=76994

[This is a draft of a section of a forthcoming study on “A Flexible Governance Framework for Artificial Intelligence,” which I hope to complete shortly. I welcome feedback. I have also cross-posted this essay at Medium.]

Debates about how to embed ethics and best practices into AI product design are where the question of public policy defaults becomes important. To the extent AI design becomes the subject of legal or regulatory decision-making, a choice must be made between two general approaches: the precautionary principle or the proactionary principle.[1] While there are many hybrid governance approaches in between these two poles, the crucial issue is whether the initial legal default for AI technologies will be set closer to the red light of the precautionary principle (i.e., permissioned innovation) or to the green light of the proactionary principle (i.e., permissionless innovation). Each governance default will be discussed.

The Problem with the Precautionary Principle as the Policy Default for AI

The precautionary principle holds that innovations are to be curtailed or potentially even disallowed until the creators of those new technologies can prove that they will not cause any theoretical harms. The classic formulation of the precautionary principle can be found in the “Wingspread Statement,” which was formulated at an academic conference that took place at the Wingspread Conference Center in Wisconsin in 1998. It read: “Where an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically.”[2] There have been many reformulations of the precautionary principle over time but, as legal scholar Cass Sunstein has noted, “in all of them, the animating idea is that regulators should take steps to protect against potential harms, even if causal chains are unclear and even if we do not know that those harms will come to fruition.”[3] Put simply, under almost all varieties of the precautionary principle, innovation is treated as “guilty until proven innocent.”[4] We can also think of this as permissioned innovation.

The logic animating the precautionary principle reflects a well-intentioned desire to play it safe in the face of uncertainty. The problem lies in the way this instinct gets translated into law and regulation. Making the precautionary principle the public policy default for any given technology or sector has a strong bearing on how much innovation we can expect to flow from it. When trial-and-error experimentation is preemptively forbidden or discouraged by law, it can limit many of the positive outcomes that typically accompany efforts by people to be creative and entrepreneurial. This can, in turn, give rise to different risks for society in terms of forgone innovation, growth, and corresponding opportunities to improve human welfare in meaningful ways.

St. Thomas Aquinas once observed that if the sole goal of a captain were to preserve their ship, the captain would keep it in port forever. But that clearly is not the captain’s highest goal. Aquinas was making a simple but powerful point: There can be no reward without some effort and even some risk-taking. Ship captains brave the high seas because they are in search of a greater good, such as recognition, adventure, or income. Keeping ships in port forever would preserve their vessels, but at what cost?

Similarly, consider the wise words of Wilbur Wright, who pioneered human flight. Few people better understood the profound risks associated with entrepreneurial activities. After all, Wilbur and his brother were trying to figure out how to literally lift humans off the Earth. The dangers were real, but worth taking. “If you are looking for perfect safety,” Wright said, “you would do well to sit on a fence and watch the birds.” Humans would have never taken to the skies if the Wright brothers had not gotten off the fence and taken the risks they did. Risk-taking drives innovation and, over the long haul, improves our well-being.[5] Nothing ventured, nothing gained.

These lessons can be applied to public policy by considering what would happen if, in the name of safety, public officials told captains to never leave port or told aspiring pilots to never leave the ground. The opportunity cost of inaction can be hard to quantify, but it should be clear that if we organized our entire society around a rigid application of the precautionary principle, progress and prosperity would suffer.

Heavy-handed preemptive restraints on creative acts can have deleterious effects because they raise barriers to entry, increase compliance costs, and create more risk and uncertainty for entrepreneurs and investors. Thus, it is the unseen costs—primarily in the form of forgone innovation opportunities—that make the precautionary principle so problematic as a policy default. This is why scientist Martin Rees speaks of “the hidden cost of saying no” that is associated with the precautionary principle.[6]

The precise way the precautionary principle leads to this result is that it derails the so-called learning curve by limiting opportunities to learn from trial-and-error experimentation with new and better ways of doing things.[7] The learning curve refers to the way that individuals, organizations, or industries are able to learn from their mistakes, improve their designs, enhance productivity, lower costs, and then offer superior products based on the resulting knowledge.[8] In his recent book, Where Is My Flying Car?, J. Storrs Hall documents how, over the last half century, “regulation clobbered the learning curve” for many important technologies in the U.S., especially nuclear, nanotech, and advanced aviation.[9] Hall shows how society was denied many important innovations due to endless foot-dragging or outright opposition to change from special interests, anti-innovation activists, and over-zealous bureaucrats.

In many cases, innovators don’t even know what they are up against because, as many scholars have noted, “the precautionary principle, in all of its forms, is fraught with vagueness and ambiguity.”[10] It creates confusion and fear about the wisdom of taking action in the face of uncertainty. Worst-case thinking paralyzes regulators who aim to “play it safe” at all costs. The result is an endless snafu of red tape as layer upon layer of mandates builds up and blocks progress. Many scholars now decry the resulting culture of “vetocracy,” a term describing the many veto points within modern political systems that hold back innovation, development, and economic opportunity.[11] This endless accumulation of potential veto points in the policy process in the form of mandates and restrictions can greatly curtail innovation opportunities. “Like sediment in a harbor, law has steadily accumulated, mainly since the 1960s, until most productive activity requires slogging through a legal swamp,” says Philip K. Howard, chair of Common Good.[12] “Too much law,” he argues, “can have similar effects as too little law,” because:

People slow down, they become defensive, they don’t initiate projects because they are surrounded by legal risks and bureaucratic hurdles. They tiptoe through the day looking over their shoulders rather than driving forward on the power of their instincts. Instead of trial and error, they focus on avoiding error.[13]

This is exactly why it is important that policymakers not get too caught up in attempts to preemptively resolve every hypothetical worst-case scenario associated with AI technologies. The problem with that approach was succinctly summarized by the political scientist Aaron Wildavsky when he noted, “If you can do nothing without knowing first how it will turn out, you cannot do anything at all.”[14] Or, as I have stated in a book on this topic, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about.”[15]

This does not mean society should dismiss all concerns about the risks surrounding AI. Some technological risks do necessitate a degree of precautionary policy, but proportionality is crucial, notes Gabrielle Bauer, a Toronto-based medical writer. “Used too liberally,” she argues, “the precautionary principle can keep us stuck in a state of extreme risk-aversion, leading to cumbersome policies that weigh down our lives. To get to the good parts of life, we need to accept some risk.”[16] It is not enough to simply hypothesize that certain AI innovations might entail some risk. The critics need to prove it using risk analysis techniques that properly weigh both the potential costs and benefits.[17] Moreover, when conducting such analyses, the full range of trade-offs associated with preemptive regulation must be evaluated. Again, where precautionary constraints might deny society life-enriching devices or services, those costs must be acknowledged.

Generally speaking, the most extreme precautionary controls should only be imposed when the potential harms in question are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion.[18] In the context of AI and ML systems, it may be the case that such a test is satisfied already for law enforcement use of certain algorithmic profiling techniques. And that test is satisfied for so-called “killer robots,” or autonomous military technology.[19] These are often described as “existential risks.” The precautionary principle is the right default in these cases because it is abundantly clear how unrestricted use would have catastrophic consequences. For similar reasons, governments have long imposed comprehensive restrictions on certain types of weapons.[20] And although nuclear and chemical technologies have many important applications, their use must also be limited to some degree even outside of militaristic applications because they can pose grave danger if misused.

But the vast majority of AI-enabled technologies are not like this. Most innovations should not be treated the same as a hand grenade or a ticking time bomb. In reality, most algorithmic failures will be more mundane and difficult to foresee in advance. By their very nature, algorithms are constantly evolving because programs and systems are being endlessly tweaked by designers to improve them. In his books on the evolution of engineering and systems design, Henry Petroski has noted that “the shortcomings of things are what drive their evolution.”[21] The normal state of things is “ubiquitous imperfection,” he notes, and it is precisely that reality that drives efforts to continuously innovate and iterate.[22]

Regulations rooted in the precautionary principle hope to preemptively find and address product imperfections before any harm comes from them. In reality, and as explained more below, it is only through ongoing experimentation that we find both the nature of failures and the knowledge to know how to correct them. As Petroski observes, “the history of engineering in general, may be told in its failures as well as in its triumphs. Success may be grand, but disappointment can often teach us more.”[23] This is particularly true for complex algorithmic systems, where rapid-fire innovation and incessant iteration are the norm.

Importantly, the problem with precautionary regulation for AI is not just that it might be over-inclusive in seeking to regulate hypothetical problems that never develop. Precautionary regulation can also be under-inclusive by missing problematic behavior or harms that no one anticipated before the fact. Only experience and experimentation reveal certain problems.

In sum, we should not presume that there is a clear preemptive regulatory solution to every problem some people raise about AI, nor should we presume we can even accurately identify all such problems that might come about in the future. Moreover, some risks will never be eliminated entirely, meaning that risk mitigation is the wiser approach. This is why a more flexible bottom-up governance strategy focused on responsiveness and resiliency makes more sense than heavy-handed, top-down strategies that would only avoid risks by making future innovations extremely difficult if not impossible.

The “Proactionary Principle” is the Better Default for AI Policy

The previous section made it clear why the precautionary principle should generally not be used as our policy default if we hope to encourage the development of AI applications and services. What we need is a policy approach that:

  • objectively evaluates the concerns raised about AI systems and applications;
  • considers whether more flexible governance approaches might be available to address them; and,
  • does so without resorting to the precautionary principle as a first-order response.

The proactionary principle is the better general policy default for AI because it satisfies these three objectives.[24] Philosopher Max More defines the proactionary principle as the idea that policymakers should, “[p]rotect the freedom to innovate and progress while thinking and planning intelligently for collateral effects.”[25] There are different names for this same concept, including the innovation principle, which Daniel Castro and Michael McLaughlin of the Information Technology and Innovation Foundation say represents the belief that “the vast majority of new innovations are beneficial and pose little risk, so government should encourage them.”[26] Permissionless innovation is another name for the same idea: the notion that experimentation with new technologies and business models should generally be permitted by default.[27]

What binds these concepts together is the belief that innovation should generally be treated as innocent until proven guilty. There will be risks and failures, of course, but the permissionless innovation mindset views them as important learning experiences. These experiences are chances for individuals, organizations, and all of society to make constant improvements through incessant experimentation with new and better ways of doing things.[28] As Virginia Postrel argued in her 1998 book, The Future and Its Enemies, progress demands “a decentralized, evolutionary process” and mindset in which mistakes are not viewed as permanent disasters but instead as “the correctable by-products of experimentation.”[29] “No one wants to learn by mistakes,” Petroski once noted, “but we cannot learn enough from successes to go beyond the state of the art.”[30] Instead we must realize, as other scholars have observed, that “[s]uccess is the culmination of many failures”[31] and understand “failure as the natural consequence of risk and complexity.”[32]

This is why the default for public policy for AI innovation should, whenever possible, be more green lights than red ones to allow for the maximum amount of trial-and-error experimentation, which encourages ongoing learning.[33] “Experimentation matters,” observes Stefan H. Thomke of the Harvard Business School, “because it fuels the discovery and creation of knowledge and thereby leads to the development and improvement of products, processes, systems, and organizations.”[34]

Obviously, risks and mistakes are “the very things regulators inherently want to avoid,”[35] but “if innovators fear they will be punished for every mistake,” Daniel Castro and Alan McQuinn argue, “then they will be much less assertive in trying to develop the next new thing.”[36] And for all the reasons already stated, that would represent the end of progress because it would foreclose the learning process that allows society to discover new, better, and safer ways of doing things. Technology author Kevin Kelly puts it this way:

technologies must be evaluated in action, by action. We test them in labs, we try them out in prototypes, we use them in pilot programs, we adapt our expectations, we monitor their alterations, we redefine their aims as they are modified, we retest them given actual behavior, we re-direct them to new jobs when we are not happy with their outcomes.[37]

In other words, the proactionary principle appreciates the benefits that flow from learning by doing. The goal is to continuously assess and prioritize risks from natural and human-made systems alike, and then formulate and reformulate our toolkit of possible responses to those risks using the most practical and effective solutions available. This should make it clear that the proactionary approach is not synonymous with anarchy. Various laws, government bodies, and especially the courts play an important role in protecting rights, health, and order. But policies need to be formulated such that innovators and innovation are given the benefit of the doubt and risks are analyzed and addressed in a more flexible fashion.

Some of the most effective ways to address potential AI risks already exist in the form of “soft law” and decentralized governance solutions. These will be discussed at greater length below. But existing legal remedies include various common law solutions (torts, class actions, contract law, etc.), recall authority possessed by many regulatory agencies, and various consumer protection policies. Ex post remedies are generally superior to ex ante prior restraints if we hope to maximize innovation opportunities. Ex ante regulatory defaults are too often set closer to the red light of the precautionary principle and then enforced through volumes of convoluted red tape.

This is what the World Economic Forum has referred to as a “regulate-and-forget” system of governance,[38] or what others call a “build-and-freeze” model of regulation.[39] In such technological governance regimes, older rules are almost never revisited, even after new social, economic, and technical realities render them obsolete or ineffective.[40] A 2017 survey of the U.S. Code of Federal Regulations by Deloitte consultants revealed that 68 percent of federal regulations have never been updated and that 17 percent have only been updated once.[41] Public policies for complex and fast-moving technologies like AI cannot be set in stone and forgotten like that if America hopes to remain on the cutting edge of this sector.

Advocates of the proactionary principle look to counter this problem not by eliminating all laws or agencies, but by bringing them in line with flexible governance principles rooted in more decentralized approaches to policy concerns.[42] As many regulatory advocates suggest, it is important to embed or “bake in” various ethical best practices into AI systems to ensure that they benefit humanity. But this, too, is a process of ongoing learning and there are many ways to accomplish such goals without derailing important technological advances. What is often referred to as “value alignment” or “ethically-aligned design” is challenged by the fact that humans regularly disagree profoundly about many moral issues.[43] “Before we can put our values into machines, we have to figure out how to make our values clear and consistent,” says Harvard University psychologist Joshua D. Greene.[44]

The “Three Laws of Robotics” famously formulated decades ago by Isaac Asimov in his science fiction stories continue to be widely discussed today as a guide to embedding ethics into machines.[45] They read:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

What is usually forgotten about these principles, as AI expert Melanie Mitchell reminds us, is the way Asimov “often focused on the unintended consequences of programming ethical rules into robots,” and how he made it clear that, if applied too literally, “such a set of rules would inevitably fail.”[46]

This is why flexibility and humility are essential virtues when thinking about AI policy. The optimal governance regime for AI can be shaped by responsible innovation practices and embed important ethical principles by design without immediately defaulting to a rigid application of the precautionary principle.[47] In other words, an innovation policy regime rooted in the proactionary principle can also be infused with the same values that animate a precautionary principle-based system.[48] The difference is that the proactionary principle-based approach will look to achieve these goals in a more flexible fashion using a variety of experimental governance approaches and ex post legal enforcement options, while also encouraging still more innovation to solve problems past innovations may have caused.

To reiterate, not every AI risk is foreseeable, and many risks and harms are more amorphous or uncertain. In this sense, the wisest governance approach for AI was recently outlined by the National Institute of Standards and Technology (NIST) in its initial draft AI Risk Management Framework, which is a multistakeholder effort “to describe how the risks from AI-based systems differ from other domains and to encourage and equip many different stakeholders in AI to address those risks purposefully.”[49] NIST notes that the goal of the Framework is:

to be responsive to new risks as they emerge rather than enumerating all known risks in advance. This flexibility is particularly important where impacts are not easily foreseeable, and applications are evolving rapidly. While AI benefits and some AI risks are well-known, the AI community is only beginning to understand and classify incidents and scenarios that result in harm.[50]

This is a sensible framework for how to address AI risks because it makes it clear that it will be difficult to preemptively identify and address all potential AI risks. At the same time, there will be a continuing need to advance AI innovation while addressing AI-related harms. The key to striking that balance will be decentralized governance approaches and soft law techniques described below.

[Note: The subsequent sections of the study will detail how decentralized governance approaches and soft law techniques already are helping to address concerns about AI risks.]

Endnotes:

[1]     Adam Thierer, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, 2nd ed. (Arlington, VA: Mercatus Center at George Mason University, 2016): 1-6, 23-38; Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 48-54.

[2]     “Wingspread Statement on the Precautionary Principle,” January 1998, https://www.gdrc.org/u-gov/precaution-3.html.

[3]     Cass R. Sunstein, Laws of Fear: Beyond the Precautionary Principle (Cambridge, UK: Cambridge University Press, 2005). (“The Precautionary Principle takes many forms. But in all of them, the animating idea is that regulators should take steps to protect against potential harms, even if causal chains are unclear and even if we do not know that those harms will come to fruition.”)

[4]     Henk van den Belt, “Debating the Precautionary Principle: ‘Guilty until Proven Innocent’ or ‘Innocent until Proven Guilty’?” Plant Physiology 132 (2003): 1124.

[5]     H.W. Lewis, Technological Risk (New York: W.W. Norton & Co., 1990): x. (“The history of the human race would be dreary indeed if none of our forebears had ever been willing to accept risk in return for potential achievement.”)

[6]     Martin Rees, On the Future: Prospects for Humanity (Princeton, NJ: Princeton University Press, 2018): 136.

[7]     Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.

[8]     Adam Thierer, “How to Get the Future We Were Promised,” Discourse, January 18, 2022, https://www.discoursemagazine.com/culture-and-society/2022/01/18/how-to-get-the-future-we-were-promised.

[9]     J. Storrs Hall, Where Is My Flying Car? (San Francisco: Stripe Press, 2021).

[10]    Derek Turner and Lauren Hartzell Nichols, “The Lack of Clarity in the Precautionary Principle,” Environmental Values, Vol. 13, No. 4 (2004): 449.

[11]    William Rinehart, “Vetocracy, the Costs of Vetos and Inaction,” Center for Growth & Opportunity at Utah State University, March 24, 2022, https://www.thecgo.org/benchmark/vetocracy-the-costs-of-vetos-and-inaction; Adam Thierer, “Red Tape Reform is the Key to Building Again,” The Hill, April 28, 2022, https://thehill.com/opinion/finance/3470334-red-tape-reform-is-the-key-to-building-again.

[12]    Philip K. Howard, “Radically Simplify Law,” Cato Institute, Cato Online Forum, http://www.cato.org/publications/cato-online-forum/radically-simplify-law.

[13]    Ibid.

[14]    Aaron Wildavsky, Searching for Safety (New Brunswick, NJ: Transaction Publishers, 1989): 38.

[15]    Thierer, Permissionless Innovation, at 2.

[16]    Gabrielle Bauer, “Danger: Caution Ahead,” The New Atlantis, February 4, 2022, https://www.thenewatlantis.com/publications/danger-caution-ahead.

[17]    Richard B. Belzer, “Risk Assessment, Safety Assessment, and the Estimation of Regulatory Benefits” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, 2012), 5, http://mercatus.org/publication/risk-assessment-safety-assessment-and-estimation-regulatory-benefits; John D. Graham and Jonathan Baert Wiener, eds. Risk vs. Risk: Tradeoffs in Protecting Health and the Environment, (Cambridge, MA: Harvard University Press, 1995).

[18]    Thierer, Permissionless Innovation, at 33-8.

[19]    Adam Satariano, Nick Cumming-Bruce and Rick Gladstone, “Killer Robots Aren’t Science Fiction. A Push to Ban Them Is Growing,” New York Times, December 17, 2021, https://www.nytimes.com/2021/12/17/world/robot-drone-ban.html.

[20]    Adam Thierer, “Soft Law: The Reconciliation of Permissionless & Responsible Innovation,” in Adam Thierer, Evasive Entrepreneurs & the Future of Governance (Washington, DC: Cato Institute, 2020): 183-240, https://www.mercatus.org/publications/technology-and-innovation/soft-law-reconciliation-permissionless-responsible-innovation.

[21]    Henry Petroski, The Evolution of Useful Things (New York: Vintage Books, 1994): 34.

[22]    Ibid., 27.

[23]    Henry Petroski, To Engineer is Human: The Role of Failure in Successful Design (New York: Vintage, 1992): 9.

[24]    James Lawson, These Are the Droids You’re Looking For: An Optimistic Vision for Artificial Intelligence, Automation and the Future of Work (London: Adam Smith Institute, 2020): 86, https://www.adamsmith.org/research/these-are-the-droids-youre-looking-for.

[25]    Max More, “The Proactionary Principle (March 2008),” Max More’s Strategic Philosophy, March 28, 2008, http://strategicphilosophy.blogspot.com/2008/03/proactionary-principle-march-2008.html.

[26]    Daniel Castro & Michael McLaughlin, “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” Information Technology and Innovation Foundation, February 4, 2019, https://itif.org/publications/2019/02/04/ten-ways-precautionary-principle-undermines-progress-artificial-intelligence.

[27]    Thierer, Permissionless Innovation.

[28]    Thierer, “Failing Better.”

[29]    Virginia Postrel, The Future and Its Enemies (New York: The Free Press, 1998): xiv.

[30]    Henry Petroski, To Engineer is Human: The Role of Failure in Successful Design (New York: Vintage, 1992): 62.

[31]    Kevin Ashton, How to Fly a Horse: The Secret History of Creation, Invention, and Discovery (New York: Doubleday, 2015): 67.

[32]    Megan McArdle, The Up Side of Down: Why Failing Well is the Key to Success (New York: Viking, 2014), 214.

[33]    F. A. Hayek, The Constitution of Liberty (London: Routledge, 1960, 1990): 81. (“Humiliating to human pride as it may be, we must recognize that the advance and even preservation of civilization are dependent upon a maximum of opportunity for accidents to happen.”)

[34]    Stefan H. Thomke, Experimentation Matters: Unlocking the Potential of New Technologies for Innovation (Harvard Business Review Press, 2003), 1.

[35]    Daniel Castro and Alan McQuinn, “How and When Regulators Should Intervene,” Information Technology and Innovation Foundation Reports, (February 2015): 2 http://www.itif.org/publications/how-and-when-regulators-should-intervene.

[36]    Ibid.

[37]    Kevin Kelly, “The Pro-Actionary Principle,” The Technium, November 11, 2008, https://kk.org/thetechnium/the-pro-actiona.

[38]    World Economic Forum, Agile Regulation for the Fourth Industrial Revolution (Geneva, Switzerland: 2020): 4, https://www.weforum.org/projects/agile-regulation-for-the-fourth-industrial-revolution.

[39]    Jordan Reimschisel and Adam Thierer, “’Build & Freeze’ Regulation Versus Iterative Innovation,” Plain Text, November 1, 2017, https://readplaintext.com/build-freeze-regulation-versus-iterative-innovation-8d5a8802e5da.

[40]    Adam Thierer, “Spring Cleaning for the Regulatory State,” AIER, May 23, 2019, https://www.aier.org/article/spring-cleaning-for-the-regulatory-state.

[41]    Daniel Byler, Beth Flores & Jason Lewris, “Using Advanced Analytics to Drive Regulatory Reform: Understanding Presidential Orders on Regulation Reform,” Deloitte, 2017, https://www2.deloitte.com/us/en/pages/public-sector/articles/advanced-analytics-federal-regulatory-reform.html.

[42]    Adam Thierer, Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium, American Enterprise Institute (April 2022), https://platforms.aei.org/can-the-knowledge-gap-between-regulators-and-innovators-be-narrowed.

[43]    Brian Christian, The Alignment Problem: Machine Learning and Human Values (New York: W.W. Norton & Company, 2020).

[44]    Joshua D. Greene, “Our Driverless Dilemma,” Science (June 2016): 1515.

[45]    Susan Leigh Anderson, “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics,” AI and Society, Vol. 22, No. 4, (2008): 477-493.

[46]    Melanie Mitchell, Artificial Intelligence: A Guide for Thinking Humans (New York: Farrar, Straus and Giroux, 2019): 126 [Kindle edition.]

[47]    Thomas A. Hemphill, “The Innovation Governance Dilemma: Alternatives to the Precautionary Principle,” Technology in Society, Vol. 63 (2020): 6, https://ideas.repec.org/a/eee/teinso/v63y2020ics0160791x2030751x.html.

[48]    Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.

[49]    The National Institute of Standards and Technology, “AI Risk Management Framework: Initial Draft,” (March 17, 2022): 1, https://www.nist.gov/itl/ai-risk-management-framework.

[50]    Ibid., at 5.

Book Review: “Questioning the Entrepreneurial State” https://techliberation.com/2022/04/26/book-review-questioning-the-entrepreneurial-state/ https://techliberation.com/2022/04/26/book-review-questioning-the-entrepreneurial-state/#respond Tue, 26 Apr 2022 20:14:03 +0000 https://techliberation.com/?p=76975

An important new book launched this week in Europe on issues related to innovation policy and industrial policy. “Questioning the Entrepreneurial State: Status-quo, Pitfalls, and the Need for Credible Innovation Policy” (Springer, 2022) brings together more than 30 scholars who contribute unique chapters to this impressive volume. It was edited by Karl Wennberg of the Stockholm School of Economics and Christian Sandström of the Jönköping (Sweden) International Business School.

As the title of this book suggests, the authors are generally pushing back against the thesis found in Mariana Mazzucato’s book The Entrepreneurial State (2011). That book, like many other books and essays written recently, lays out a romantic view of industrial policy that sees government as the prime mover of markets and innovation. Mazzucato calls for “a bolder vision for the State’s dynamic role in fostering economic growth” and innovation. She wants the state fully entrenched in technological investments and decision-making throughout the economy because she believes that is the best way to expand the innovative potential of a nation.

The essays in Questioning the Entrepreneurial State offer a different perspective, rooted in the realities on the ground in Europe today. Taken together, the chapters tell a fairly consistent story: Despite the existence of many different industrial policy schemes at the continental and country level, Europe isn’t in very good shape on the tech and innovation front. The heavy-handed policies and volumes of regulations imposed by the European Union and its member states have played a role in that outcome. But these governments have simultaneously been pushing to promote innovation using a variety of technocratic policy levers and industrial policy schemes. Despite all those well-intentioned efforts, the EU has struggled to keep up with the US and China in most important modern tech sectors.

As Wennberg and Sandström note in their introductory chapter:

Grand schemes toward noble outcomes have a disappointing track record in human political and economic history. Conventional wisdom regarding authorities’ inability to selectively pinpoint certain technologies, sectors, or firms as winners, and the fact that large support structures for specific technologies are bound to distort incentives and result in opportunism, seem to have been forgotten.

In summarizing the chapters, they conclude that, “while the idea of aiming high and leveraging large portions of society’s resources to address some fundamental human challenges may sound appealing to many, such ideas have limited scientific credibility.”

Why do governments frequently fail in attempts to be entrepreneurial? Johan P. Larsson gets at the heart of the matter in his chapter when noting how, “[t]he state entrepreneur is not subject to real risk, often faces no market, and cannot be properly evaluated. It pays no price for being wrong and it struggles in assigning responsibility.” Which leads to two questions that are rarely asked, he notes: “[F]irst, how do we ensure that the state pays a price for being wrong? And second, when is that price high enough for us to know it is time to cut our losses?”

The authors of another chapter (Murtinu, Foss & Klein) concur and note how, “even well-intentioned and strongly motivated public actors lack the ability to manage the process of innovation.” “As stewards of resources owned by the public,” they note, “government bureaucrats do not exercise the ultimate responsibility that comes with ownership.” In other words, the state faces problems of misaligned incentives.

Several authors in the book highlight the various public choice problems often associated with large-scale industrial policy initiatives, including rent-seeking and capture. Wennberg and Sandström note how this results in less disruption, as established players do not seek to challenge the existing market or technological status quo but instead simply seek to benefit from it. “[S]upport structures, platforms for private-public cooperation, and large volumes of technology-specific money usually end up in the hands of established interest groups,” they note. “Hence, they are not very likely to question these policies but will rather go along with the ride.”

John-Erik Bergkvist and Jerker Moodysson devote an entire chapter to this problem and offer a grim assessment of how past industrial policy schemes have exacerbated it:

Assuming that policies and programs are shaped by the interest groups that are affected by the policies, we highlight the risk that policymaking may end up as support for established interest groups rather than supporting the emergence of those who could act as institutional entrepreneurs or disruptors. Policies and programs may thus be captivated by dominant actors in the established regime, who have superior financial and relational resources. The result would then be that innovation policies sustain the established socio-technical structures of industries rather than contributing to the emergence of new structures.

Other organizations are incentivized to support the status quo when big money is on the line. One of the most interesting chapters in the book was co-authored by Wennberg and Sandström along with Elias Collin. They examine the conflicts of interest inherent in many evaluations of industrial policy programs by various third parties, including academics and consultants who receive generous state contracts:

the overwhelming majority of evaluations are positive or neutral and that very few evaluations are negative. While this is the case across all categories of evaluators, we note that consulting firms stand out as particularly inclined to provide positive evaluations. The absence of negative or critical reports can be related to the fact that most of the studies do not rely upon methods that make it possible to discuss effects. This discrepancy between so many positive evaluations on the one hand and comparatively weak evaluation methods on the other hand leads us to suspect that evaluators are not sufficiently independent. Consultants and scholars that are funded by a government agency in order to evaluate the agency’s policies and programs are put in a position where it is difficult to maintain objectivity.

This is one reason why industrial policy continues to have such currency in European policy discussions despite a long track record of failure, as documented throughout this new book. The biggest problem for Europe lies in its layers of regulatory bureaucracy and heavy-handed treatment of entrepreneurs.

Later in the book, Zoltan J. Acs offers a grim account of just how bad things have been for Europe on the digital technology front in recent decades, despite the many state-led efforts to promote the sector. “The European Union protected traditional industries and hoped that existing firms would introduce new technologies. This was a policy designed to fail,” Acs argues. “What has been the outcome of E.U. policy in limiting entrepreneurial activity over recent decades?” he asks. Acs concludes that:

It is immediately clear… that the United States and China dominate the platform landscape. Based on the market value of top companies, the United States alone represents 66% of the world’s platform economy with 41 of the top 100 companies. European platform-based companies play a marginal role, with only 3% of market value.

He says that the United Kingdom’s “Brexit” from the European Union was a logical move, “because E.U. regulations were holding back the U.K.’s strong DPE (digital platform economy).” “If the United Kingdom was to realize its economic potential, it had to extricate itself from the European Union,” Acs says, due to the “dysfunctional E.U. bureaucracy.” No amount of industrial policy support is going to allow European firms to overcome those burdens. In fact, many of Europe’s industrial policy programs create the very disincentives that retard innovation and discourage entrepreneurialism in key sectors.

Several of the authors in the collection stress how the better role for the state is usually to set the table for innovation and growth without trying to determine everything that is served on the plate. As Wennberg and Sandström summarize:

the best policies to promote innovation are those that promote productive economic activity more generally: property rights protection, open and contestable markets, a stable monetary system, and legal rules that favor competition and entrepreneurship. Policy should promote an institutional environment in which innovation and entrepreneurship can flourish without trying to anticipate the specific outcomes of those processes—an impossible task in the face of uncertainty, technological change, and a dynamic, knowledge-based economy.

That’s good advice, as is everything found throughout the book. I encourage all those interested in these issues to take a hard look at it because it is particularly relevant even here in the United States, as Congress is currently considering a massive new 3,000-page, $350 billion industrial policy bill that I’ve labelled “The Most Corporatist & Wasteful Industrial Policy Ever.” Nothing seems to be stopping the momentum of this effort, with both liberals and conservatives lining up to pass out the pork. I wish I could put a copy of Questioning the Entrepreneurial State in all their hands and ask them to read every word of it before they gamble hundreds of billions on such foolish efforts.



Samuel Florman & the Continuing Battle over Technological Progress https://techliberation.com/2022/04/06/samuel-florman-the-continuing-battle-over-technological-progress/ https://techliberation.com/2022/04/06/samuel-florman-the-continuing-battle-over-technological-progress/#respond Wed, 06 Apr 2022 18:37:45 +0000 https://techliberation.com/?p=76961

Almost every argument against technological innovation and progress that we hear today was identified and debunked by Samuel C. Florman a half century ago. Few others since him have mounted a more powerful case for the importance of innovation to human flourishing than Florman did throughout his lifetime.

Chances are you’ve never heard of him, however. As prolific as he was, Florman did not command as much attention as the endless parade of tech critics whose apocalyptic predictions grabbed all the headlines. An engineer by training, Florman became concerned about the growing criticism of his profession throughout the 1960s and 70s. He pushed back against those critics in a series of books over the next two decades, most notably The Existential Pleasures of Engineering (1976), Blaming Technology: The Irrational Search for Scapegoats (1981), and The Civilized Engineer (1987). He was also a prolific essayist, penning hundreds of articles for a wide variety of journals, magazines, and newspapers beginning in 1959, and he wrote a regular column for MIT Technology Review for sixteen years.

Florman’s primary mission in his books and many of those essays was to defend the engineering profession against attacks emanating from various corners. More broadly, as he noted in a short autobiography on his personal website, Florman was interested in discussing, “the relationship of technology to the general culture.”

Florman could be considered a “rational optimist,” to borrow Matt Ridley’s notable term[1] for those of us who believe, as I have summarized elsewhere, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment.[2] Rational optimists are highly pragmatic and base their optimism on facts and historical analysis, not on dogmatism or blind faith in any particular viewpoint, ideology, or gut feeling. But they are unified in the belief that technological change is a crucial component of moving the needle on progress and prosperity.

Florman’s unique contribution to advancing rational optimism came in the way he itemized the various claims made by tech critics and then powerfully debunked each one of them. He provided other rational optimists with a blueprint for how to defend technological innovation against its many critics and criticisms. As he argued in The Civilized Engineer, we need to “broaden our conception of engineering to include all technological creativity.”[3] And then we need to defend it with vigor.

In 1982, the American Society of Mechanical Engineers appropriately awarded Florman the distinguished Ralph Coats Roe Medal for his “outstanding contribution toward a better public understanding and appreciation of the engineer’s worth to contemporary society.” Carl Sagan had won the award the previous year. Alas, Florman never attained the same degree of renown as Sagan. That is a shame, because Florman was as much a philosopher and a historian as he was an engineer, and his robust thinking on technology and society deserves far greater attention. More generally, his plain-spoken style and straight-forward defense of technological progress continues to be a model for how to counter today’s techno-pessimists.

This essay highlights some of the most important themes and arguments found in Florman’s writing and explains its continuing relevance to the ongoing battles over technology and progress.

What Motivates The “Antitechnologists”?

Florman was interested in answering questions about what motivates both engineers and their critics. He dug deep into psychology and history to figure out what makes these people tick. Who are engineers, and why do they do what they do? That was his primary question, and we will turn to his answers momentarily. But he also wanted to know what drove the technology critics to oppose innovation so vociferously.

Florman’s most important contribution to the history of ideas lies in his six-part explanation of “the main themes that run through the works of the antitechnologists.”[4] Florman used the term “antitechnologists” to describe the many different critics of engineering and innovation. He recognized that the term wasn’t perfect and that some people he labelled as such would object to it. Nevertheless, because these critics offer no umbrella label for their movement or way of thinking, and because opposition to, or general discomfort with, technology is what motivates them, the label “antitechnologists” fits.

Florman surveyed a wide swath of technological critics from many different disciplines—philosophy, sociology, law, and other fields. He condensed their main criticisms into six general points:

  • Technology is a “thing” or a force that has escaped from human control and is spoiling our lives.
  • Technology forces man to do work that is tedious and degrading.
  • Technology forces man to consume things that he does not really desire.
  • Technology creates an elite class of technocrats, and so disenfranchises the masses.
  • Technology cripples man by cutting him off from the natural world in which he evolved.
  • Technology provides man with technical diversions which destroy his existential sense of his own being.[5]

No one before him had crafted such a taxonomy of the complaints of tech critics, and no one has done it better since Florman published his in 1976. In fact, it is astonishing how well Florman’s list continues to identify what motivates modern technology critics. New technologies have come and gone, but the same concerns tend to be raised again and again. Florman’s books addressed and debunked each of these concerns in powerful fashion.

The Relentless Pessimism & Elitism of the Antitechnologists

Florman identified the way a persistent pessimism unifies antitechnologists. “Our intellectual journals are full of gloomy tracts that depict a society debased by technology,” he noted.[6] What motivated such gloom and doom? “It is fear. They are terrified by the scene unfolding before their eyes.”[7] He elaborated:

“The antitechnologists are frightened; they counsel halt and retreat. They tell the people that Satan (technology) is leading them astray, but the people have heard that story before. They will not stand still for vague promises of a psychic contentment that is to follow in the wake of voluntary temperance.”[8]

The antitechnologist’s worldview isn’t just relentlessly pessimistic but also highly elitist and paternalistic, Florman argued. He referred to it as “Platonic snobbery.”[9] The economist and political scientist Thomas Sowell would later call that snobbish attitude “the vision of the anointed.”[10] Like Sowell, Florman was angered by the way critics looked down their noses at average folk and disregarded their values and choices:

“The antitechnologists have every right to be gloomy, and have a bounden duty to express their doubts about the direction our lives are taking. But their persistent disregard of the average person’s sentiments is a crucial weakness in their argument—particularly when they then ask us to consider the ‘real’ satisfactions that they claim ordinary people experienced in other cultures of other times.”[11]

Florman noted that critics commonly complain about “too many people wanting too many things,” but he responded that “[t]his is not caused by technology; it is a consequence of the type of creature that man is.”[12] One can moralize all one wants about supposed over-consumption or “conspicuous consumption,” but in the end, most of us strive to better our lives in various ways—including by working to attain things that may be out of our reach or even superfluous in the eyes of others.

For many antitechnologists and other social critics, only the noble search for truth and wisdom will suffice. Basically, everybody should just get back to studying philosophy, sociology, and other soft sciences. Modern tech critics, Florman said, fashion themselves as the intellectual descendants of Greek philosophers who believed that “[t]he ideal of the new Athenian citizen was to care for his body in the gymnasium, reason his way to Truth in the academy, gossip in the agora, and debate in the senate. Technology was not deemed worthy of a free man’s time.”[13]

“It is not surprising to find philosophers recommending the study of philosophy as a way of life,” Florman noted amusingly.[14] But that does not mean all of us want (or even need) to devote our lives to such things. Nonetheless, critics often sneer at the choices made by the rest of us—especially when they involve the fruits of science and technology. “The most effective weapon in the arsenal of the antitechnologists is self-righteousness,” he noted,[15] and, “[a]s seen by the antitechnologists, engineers and scientists are half-men whose analysis and manipulation of the world deprives them of the emotional experiences that are the essence of the good life.”[16]

Indeed, it is not uncommon (both in the past and today) to see tech critics anoint themselves “humanists” and then suggest that anyone who thinks differently from them (namely, those who are pro-innovation) is the equivalent of anti-humanistic. I wrote about this in my 2018 essay, “Is It ‘Techno-Chauvinist’ & ‘Anti-Humanist’ to Believe in the Transformative Potential of Technology?” I argued that “[p]roperly understood, ‘technology’ and technological innovation are simply extensions of our humanity and represent efforts to continuously improve the human condition. In that sense, humanism and technology are complements, not opposites.”

But the critics remain fundamentally hostile to that notion, and they often suggest that there is something suspicious about those who believe, along with Florman, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment. We rational optimists, the critics suggest, are simply too focused on crass, materialistic measures of happiness and human flourishing.

Florman observed this when noting how much grief he and fellow engineers and scientists got when engaging with critics. “Anyone who has attempted to defend technology against the reproaches of an avowed humanist soon discovers that beneath all the layers of reasoning—political, environmental, aesthetic, or moral—lies a deep-seated disdain for ‘the scientific view.’”[17]

Everywhere you look in the world of Science & Technology Studies (STS) today, you find this attitude at work. In fact, the field is perhaps better labelled Anti-Science & Technology Studies, or at least Science & Technology Skeptical Studies. For most STSers, the burden of proof lies squarely on scientists, engineers, and innovators, who must prove to some (often undefined) higher authority that their ideas and inventions will bring worth to society (though how the critics measure worth and value is rarely made clear). Until then, just go slow, the critics say. Better yet, consult your local philosophy department for a proper course of action!

The critics will retort that they are just looking out for society’s best interests and trying to counter that selfish, materialist side of humanity. Florman countered by noting how, “most people are in search of the good life—not ‘the goods life’ as [Lewis] Mumford puts it, although some goods are entailed—and most human desires are for good things in moderate amounts.”[18] Trying to better our lives through the creation and acquisition of new and better goods and services is just a natural and quite healthy human instinct, one that helps us attain some ever-changing definition of whatever each of us considers “the good life.” “Something other than technology is responsible for people wanting to live in a house on a grassy plot beyond walking distance to job, market, neighbor, and school,” Florman responded.[19] We all want to “get ahead” and improve our lot in life. That urge is not forced upon us by technology; it comes quite naturally.

The Power of Nostalgia

I have spent a fair amount of time in my own writing documenting the central role that nostalgia plays in motivating technological criticism.[20] Florman’s books repeatedly highlighted this reality. “The antitechnologists romanticize the work of earlier times in an attempt to make it seem more appealing than work in a technological age,” he noted. “But their idyllic descriptions of peasant life do not ring true.”[21]

The funny thing is, it is hard to pin down the critics on exactly when the “golden era” or “good ol’ days” occurred. But if there is one thing that they all agree on, it’s that those days have long passed us by. In a 2019 essay on “Four Flavors of Doom: A Taxonomy of Contemporary Pessimism,” philosopher Maarten Boudry noted:

“In the good old days, everything was better. Where once the world was whole and beautiful, now everything has gone to ruin. Different nostalgic thinkers locate their favorite Golden Age in different historical periods. Some yearn for a past that they were lucky enough to experience in their youth, while others locate utopia at a point farther back in time…”

Not all nostalgia is bad. Clay Routledge has written eloquently about how “nostalgia serves important psychological functions,” and can sometimes possess a positive character that strengthens individuals and society. But the nostalgia found in the works of tech critics is usually a different thing altogether. It is rooted in misery about the present and dread of the future—all because technology has apparently stolen away or destroyed all that was supposedly great about the past. Florman noted how, “the current pessimism about technology is a renewed manifestation of pastoralism,” that is typically rooted in historical revisionism about bygone eras.[22] Many critics engage in what rhetoricians call “appeals to nature” and wax poetic about the joys of life for Pre-Technological Man, who apparently enjoyed an idyllic life free of the annoying intrusions created by modern contrivances.

Such “good ol’ days” romanticism is largely untethered from reality. “For most of recorded history humanity lived on the brink of starvation,” Wall Street Journal columnist Greg Ip noted in a column in early 2019. Even a cursory review of history offers voluminous, unambiguous proof that the old days were, in reality, eras of abject misery. Widespread poverty, mass hunger, poor hygiene, disease, short lifespans, and so on were the norm. What lifted humanity up and improved our lot as a species is that we learned how to apply knowledge to tasks in a better way through incessant trial-and-error experimentation. Recent books by Hans Rosling,[23] Steven Pinker,[24] and many others[25] have thoroughly documented these improvements to human well-being over time.

The critics are unmoved by such evidence, preferring to just jump around in time and cherry-pick moments when they feel life was better than it is now. “Fond as they are of tribal and peasant life, the antitechnologists become positively euphoric over the Middle Ages,” Florman quipped.[26] Why? Mostly because the Middle Ages lacked the technological advances of modern times, which the critics loathe. But facts are pesky things, and as Florman insisted, “it is fair to go on to ask whether or not life was ‘better’ in these earlier cultures than it is in our own.”[27] “We all are moved to reverie by talk of an arcadian golden age,” he noted. “But when we awaken from this reverie, we realize that the antitechnologists have diverted us with half-truths and distortions.”[28]

The critics’ reverence for the old days would be humorous if it wasn’t rooted in an arrogant and dangerous belief that society can be somehow reshaped to resemble whatever preferred past the critics desire. “Recognizing that we cannot return to earlier times, the antitechnologists nevertheless would have us attempt to recapture the satisfactions of these vanished cultures,” Florman noted. “In order to do this, what is required is nothing less than a change in the nature of man.”[29] That is, the critics will insist that, “something must be done” (namely be forced from above via some grand design) to remake humans and discourage their inner homo faber desire to be an incessant tool-builder. But this is madness, Florman argued in one of the best passages from his work:

“we are beginning to realize that for mankind there will never be a time to rest at the top of the mountain. There will be no new arcadian age. There will always be new burdens, new problems, new failures, new beginnings. And the glory of man is to respond to his harsh fate with zest and ever-renewed effort.”[30]

If the critics had their way, however, that zest would be dampened and those efforts restrained in the name of recapturing some mythical lost age. This sort of “rosy retrospection bias” is all the more shocking coming, as it does, from learned people who should know a lot more about the actual history of our species and the long struggle to escape utter despair and destitution. Alas, as the great Scottish philosopher David Hume observed in a 1777 essay, “The humour of blaming the present, and admiring the past, is strongly rooted in human nature, and has an influence even on persons endued with the profoundest judgment and most extensive learning.”[31]

Why Invent? Homo Faber is our Nature

While taking on the critics and debunking their misplaced nostalgia about the past, Florman mounted a defense of engineers and innovators by noting that the need to tinker and create is in our blood. He began by noting how “the nature of engineering has been misconceived”[32] because, in a sense, we are all engineers and innovators to some degree.

Florman’s thinking was very much in line with Benjamin Franklin, who once noted, “man is a tool-making animal.” “Both genetically and culturally the engineering instinct has been nurtured within us,” Florman argued, and this instinct “was as old as the human race.”[33] “To be human is to be technological. When we are being technological we are being human—we are expressing the age-old desire of the tribe to survive and prosper.”[34] In fact, he claimed, it was no exaggeration to say that humans, “are driven to technological creativity because of instincts hardly less basic than hunger and sex.”[35] Had our past situation been as rosy as the critics sometimes suggest, perhaps we would have never bothered to fashion tools to escape those eras! It was precisely because humans wanted to improve their lives and the lives of their loved ones that we started crafting more and better tools. Flint and firewood were never going to suffice.

But our engineering instincts do not end with basic needs. “Engineering responds to impulses that go beyond mere survival: a craving for variety and new possibilities, a feeling for proportion—for beauty—that we share with the artist,” Florman argued.[36] In essence, engineering and innovation respond to both basic human needs and higher ones at every stage of “Maslow’s pyramid,” which describes a five-level hierarchy of human needs. This same theme is developed in Arthur Diamond’s recent book, Openness to Creative Destruction: Sustaining Innovative Dynamism. As Diamond argues, one of the most unheralded features of technological innovation is that, “by providing goods that are especially useful in pursuing a life plan full of challenging, worthwhile creative projects,” it allows each of us to pursue different conceptions of what we consider a good life.[37] But we are only able to do so by first satisfying our basic physiological needs, which innovation also handles for us.

Florman was frustrated that critics failed to understand this point and equally concerned that engineers and innovators had been cast as uncaring gadget-worshipers who did not see beauty and truth in higher arts and other more worldly goals and human values. That’s hogwash, he argued:

“What an ironic turn of events! For if ever there was a group dedicated to—obsessed with—morality, conscience, and social responsibility, it has been the engineering profession. Practically every description of the practice of engineering has stressed the concept of service to humanity.[38] [. . .] Even in an age of global affluence, the main existential pleasure of the engineer will always be to contribute to the well-being of his fellow man.”[39]

Engineers and innovators do not always set out with some grandiose design to change the world, although some aspire to do so. Rather, the “existential pleasures of engineering” that Florman described in the title of his most notable book come about by solving practical day-to-day problems:

“The engineer does not find existential pleasure by seeking it frontally. It comes to him gratuitously, seeping into him unawares. He does not arise in the morning and say, ‘Today I shall find happiness.’ Quite the contrary. He arises and says, ‘Today I will do the work that needs to be done, the work for which I have been trained, the work which I want to do because in doing it I feel challenged and alive.’ Then happiness arrives mysteriously as a byproduct of his effort.”[40]

And this pleasure of getting practical work done is something that engineers and innovators enjoy collectively by coming together and using specialized skills in new and unique combinations. “[T]echnological progress depends upon a variety of skills and knowledge that are far beyond the capacity of any one individual,” he insisted. “High civilization requires a high degree of specialization, and it was toward high civilization that the human journey appears always to have been directed.”[41] Adam Smith could not have said it any better.

“Muddling Through”: Why Trial-and-Error is the Key to Progress

My favorite insights from Florman’s work relate to the way humans have repeatedly faced up to adversity and found ways to “muddle through.” This was the focus of an old essay of mine— “Muddling Through: How We Learn to Cope with Technological Change”—which argued that humans are a remarkably resilient species and that we regularly find creative ways to deal with major changes through constant trial-and-error experimentation and the learning that results from it.[42]

Florman made this same point far more eloquently long ago:

“We have been attempting to muddle along, acknowledging that we are selfish and foolish, and proceeding by means of trial and error. We call ourselves pragmatists. Mistakes are made, of course. Also, tastes change, so that what seemed desirable to one generation appears disagreeable to the next. But our overriding concern has been to make sure that matters of taste do not become matters of dogma, for that is the way toward violent conflict and tyranny. Trial and error, however, is exactly what the antitechnologists cannot abide.”[43]

It is the error part of trial-and-error that is so vital to societal learning. “Even the most cautious engineer recognizes that risk is inherent in what he or she does,” Florman noted. “Over the long haul the improbable becomes the inevitable, and accidents will happen. The unanticipated will occur.”[44] But “[s]ometimes the only way to gain knowledge is by experiencing failure,” he correctly observed.[45] “To be willing to learn through failure—failure that cannot be hidden—requires tenacity and courage.”[46]

I’ve argued that this represents the central dividing line between innovation supporters and technology critics. The critics are so focused on risk-averse, precautionary-principle-based thinking that they simply cannot tolerate the idea that society can learn more through trial-and-error than through preemptive planning. They imagine it is possible to override that process and predetermine the proper course of action to create a safer, more stable society. In this mindset, failure is to be avoided at all costs through prescriptions and prohibitions. Innovation is to be treated as guilty until proven innocent in the hope of eliminating the error (or risk, or failure) associated with trial-and-error experiments. To reiterate, this logic misses the fact that the entire point of trial-and-error is to learn from our mistakes and “fail better” next time, until we’ve solved the problem at hand entirely.[47]

Florman noted that “sensible people have agreed that there is no free lunch; there are only difficult choices, options, and trade-offs.”[48] In other words, precautionary controls come at a cost. “All we can do is do the best we can, plan where we can, agree where we can, and compromise where we must,” he said.[49] But, again, the antitechnologists absolutely cannot accept this worldview. They are fundamentally hostile to it because they either believe that a precautionary approach will do a better job of improving public welfare, or they believe that trial-and-error fails to safeguard any number of other values or institutions that they regard as sacrosanct. This shuts down the learning process from which wisdom is generated. As the old adage goes, “nothing ventured, nothing gained.” There can be no reward without some risk, and there can be no human advance unless we are free to learn from the error portion of trial-and-error.

The Costs of Precautionary Regulation

Florman did not spend much time in his writing mulling over the finer points of public policy, but he did express skepticism about our collective ability to define and enforce “the public interest” in various contexts. A great many regulatory regimes—and their underlying statutes—rest on the notion of “protecting the public interest.” It is impossible to be against that notion, but it is often equally impossible to define what it even means.[50]

This leads to what Florman called “the search for virtues that nobody can define.”[51] “As engineers we are agreed that the public interest is very important; but it is folly to think that we can agree on what the public interest is. We cannot even agree on the scientific facts!”[52] This is especially true today in debates over what constitutes “responsible innovation” or “ethical innovation.”[53] What Florman noted about such conversations three decades ago is equally true today:

“Whenever engineering ethics is on the agenda, emotions come quickly to a boil. […] It is oh so easy to mouth clichés, for example to pledge to protect the public interest, as the various codes of engineering ethics do. But such a pledge is only a beginning and hardly that. The real questions remain: What is the public interest, and how is it to be served?”[54]

That reality makes it extremely difficult to formulate consensus regarding public policies for emerging technologies. And it makes it particularly difficult to define and enforce a “precautionary principle” for emerging technologies that will somehow strike the Goldilocks balance of getting things just right. This was the focus of my 2016 book Permissionless Innovation, which argued that the precautionary principle should be the last resort when contemplating innovation policy. Experimentation with new technologies and business models should generally be permitted by default because, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about,” I argued. The precautionary principle should only be tapped when the harms alleged to be associated with a new technology are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion.

For his part, Florman did not want to get his defense of engineering mixed up with politics and regulatory considerations. Engineers and technologists, he noted, come in many flavors and support many different causes. Generally speaking, they tend to be quite pragmatic and shun strong ideological leanings and political pronouncements.

Of course, at some point, there is no avoiding this fight; one must comment on how to strike the right balance when politics enters the picture and threatens to stifle technological creativity. Florman’s perspectives on regulatory policy were somewhat jumbled, however. On one hand, he expressed concern about excessive and misguided regulations; on the other, he saw government playing an important role both in supporting various types of engineering projects and in regulating certain technological developments:

“The regulatory impulse, running wild, wreaks havoc, first of all by stifling creative and productive forces that are vital to national survival. But it does harm also—and perhaps more ominously—by fomenting a counter-revolution among outraged industrialists, the intensity of which threatens to sweep away many of the very regulations we most need.”[55]

In his 1987 book, The Civilized Engineer, Florman even expressed surprise and regret about growing pushback against regulation during the Reagan years. He also expressed skepticism about “the deceptive allure” of benefit-cost analysis, which was on the rise at the time, saying that the “attempt to apply mathematical consistency to the regulatory process was deplorably simplistic.”[56] I have always been a big believer in the importance of benefit-cost analysis (BCA), so I was surprised to read of Florman’s skepticism of it. But he was writing in the early days of BCA, and it was not entirely clear then how well it would work in practice. Four decades on, BCA has become far more rigorous, academically respected, and well-established throughout government. It has widespread and bipartisan support as a policy evaluation tool.

Florman adamantly opposed any sort of “technocracy”—or administration of government by technically skilled elites. He thought it was silly that so many tech critics believed that such a thing already existed. “The myth of the technocratic elite is an expression of fear, like a fairy tale about ogres,” he argued. “It springs from an understandable apprehension, but since it has no basis in reality, it has no place in serious discourse.”[57] Nor did he believe that there was any real chance a technocracy would ever take hold. “No matter how complex technology becomes, and no matter how important it turns out to be in human affairs, we are not likely to see authority vested in a class of technocrats.”[58]

Florman hoped for wiser administration of law and regulations that affected engineering endeavors and innovation more generally. Like so many others, he did not necessarily want more law, just better law. One cannot fault that instinct, but Florman was not really interested in fleshing out the finer details of policy about how to accomplish that objective. He preferred instead to use history as a rough guide for policy. From the fall of the Roman Empire to the decline of Britain’s economic might in more recent times, Florman observed the ways in which societal and governmental attitudes toward innovation influenced the relative growth of science, technology, and national economies. In essence, he was explaining how “innovation culture” and “innovation arbitrage” had been realities for far longer than most people realize.[59]

“Where the entrepreneurial spirit cannot be rewarded, and where non-productive workers cannot be discharged, stagnation will set in,” Florman concluded.[60] This is very much in line with the thinking of economic historians like Joel Mokyr[61] and Deirdre McCloskey,[62] who have identified how attitudes toward creativity and entrepreneurialism affect the aggregate innovative capacity of nations, and thus their competitive advantage and relative prosperity in the world.

Debunking Determinism, Anxiety & Alienation Concerns

One of the ironies of modern technological criticism is the way many critics can’t seem to get their story straight when it comes to “technological determinism” versus social determinism. In the extreme view, technological determinism is the idea that technology drives history and almost has a will of its own. It is like an autonomous force that is practically unstoppable. By contrast, social determinism means that society (individuals, institutions, etc.) guide and control the development of technology.

In the field of Science and Technology Studies, technological determinism is a hotly contested topic. Academic and social critics are fond of painting innovation advocates as rigid tech determinists who are little better than uncaring, anti-humanistic gadget-worshipers. The critics have employed a variety of other creative labels to describe tech determinism, including “techno-fundamentalism,” “technological solutionism,” and even “techno-chauvinism.”

Engineers and other innovators often get hit with such labels and accused of being rigid technological determinists who just want to see tech plow over people and politics. But this was, and remains, a ridiculous argument. Sure, there will always be some wild-eyed futurists and extropian extremists who make preposterous claims about how “there is no stopping technology.” “Even now the salvation-through-technology doctrine has some adherents whose absurdities have helped to inspire the antitechnological movement,” Florman said.[63] But that hardly represents the majority of innovation supporters, who well understand that society and politics play a crucial role in shaping the future course of technological development.

As Florman noted, we can dismiss extreme deterministic perspectives for a rather simple reason: technologies fail all the time! “If promising technologies can suffer fatal blows from unexpected circumstances,” Florman correctly argued, then “[t]his means that we are still—however precariously—in control of our own destiny.”[64] He believed that, “technology is not an independent force, much less a thing, but merely one of the types of activities in which people engage.”[65] The rigid view of tech determinism can be dismissed, he said, because “it can be shown that technology is still very much under society’s control, that it is in fact an expression of our very human desires, fancies, and fears.”[66]

But what is amazing about this debate is that some of the most rigid technological determinists are the technology critics themselves! Recall how Florman began his 6-part taxonomy of common complaints from tech critics. “A primary characteristic of the antitechnologists,” Florman argued, “is the way in which they refer to ‘technology’ as a thing, or at least a force, as if it had an existence of its own” and which “has escaped from human control and is spoiling our lives.”[67]

He noted that many of the leading tech critics of the post-war era often spoke in remarkably deterministic ways. “The idea that a man of the masses has no thoughts of his own, but is something on the order of a programmed machine, owes part of its popularity with the antitechnologists to the influential writings of Herbert Marcuse,” he believed.[68] But then such thinking accelerated and gained greater favor with the popularity of critics like French philosopher Jacques Ellul, American historian Lewis Mumford, and American cultural critic Neil Postman.

Their books painted a dismal portrait of a future in which humans were subjugated to the evils of “technique” (Ellul), “technics” (Mumford), or “technopoly” (Postman). The narrative of their works reads like dystopian science fiction. Essentially, there was no escaping the iron grip that technology had on us. Postman claimed, for example, that technology was destined to destroy “the vital sources of our humanity” and lead to “a culture without a moral foundation” by undermining “certain mental processes and social relations that make human life worth living.”

Which gets us to commonly heard concerns about how technology leads to “anxiety” and “alienation.” “Having established the view of technology as an evil force, the antitechnologists then proceed to depict the average citizen as a helpless slave, driven by this force to perform work he detests,” Florman noted.[69] “Anxiety and alienation are the watchwords of the day, as if material comforts made life worse, rather than better.”[70]

These concerns about anxiety, alienation, and “dehumanization” are omnipresent in the work of modern tech critics, and they are also tied up with traditional worries about “conspicuous consumption.” It’s all part of the “false consciousness” narrative they also peddle, which basically views humans as too ignorant to look out for their own good. In this worldview, people are sheep being led to the slaughter by conniving capitalists and tech innovators, who are just trying to sell them things they don’t really need.

Florman pointed out how preposterous this line of thinking is when he noted how critics seem to always forget that, “a basic human impulse precedes and underlies each technological development”:[71]

“Very often this impulse, or desire, is directly responsible for the new invention. But even when this is not the case, even when the invention is not a response to any particular consumer demand, the impulse is alive and at the ready, sniffing about like a mouse in a maze, seeking its fulfillment. We may regret having some of these impulses. We certainly regret giving expression to some of them. But this hardly gives us the right to blame our misfortunes on a devil external to ourselves.”[72]

Consider the automobile, for example. Industrial era critics often focused on it and lambasted the way they thought industrialists pushed auto culture and technologies on the masses. Did we really need all those cars? All those colors? All those options? Did we really even need cars? The critics wanted us to believe that all these things were just imposed upon us. We were being force-fed options we really didn’t even need or want. “Choice” in this worldview is just a fiction; a front for the nefarious ends of our corporate overlords.

Florman demolished this reasoning throughout his books. “However much we deplore the growth of our automobile culture, clearly it has been created by people making choices, not by a runaway technology,” he argued.[73] Consumer demand and choice are not fictions fabricated and forced upon us, as the antitechnologists suggest. We make decisions. “Those who would blame all of life’s problems on an amorphous technology, inevitably reject the concept of individual responsibility,” Florman retorted. “This is not humanism. It is a perversion of the humanistic impulse.”[74]

A modern tweak on the conspicuous consumption and false consciousness arguments is found in the work of leading tech critics like Evgeny Morozov, who pens attention-grabbing screeds decrying what he regards as “the folly of technological solutionism.” Morozov bluntly states that “our enemy is the romantic and revolutionary problem solver who resides within” all of us, but most especially within engineers and technologists.[75]

But would the world really be a better place if tinkerers didn’t try to scratch that itch?[76] In 2021, the Wall Street Journal profiled JoeBen Bevirt, an engineer and serial entrepreneur who has been working to bring flying cars from sci-fi to reality. Channeling Florman’s defense of the existential pleasures associated with engineering, Bevirt spoke passionately about the way innovators can help “move our species forward” through their constant tinkering to find solutions to hard problems. “That’s kind of the ethos of who we are,” he said. “We see problems, we’re engineers, we work to try to fix them.”[77]

When tech critics like Morozov decry “solutionism,” they are essentially saying that innovators like Bevirt need to just shut up and sit down. Don’t try to improve the world through tinkering; just settle for the status quo, the critics basically state. That’s the kiss of death for human progress, however, because it is only through incessant experimentation with new and different approaches to hard problems that we can advance human well-being. “Solutionism” isn’t about just creating some shiny new toy; it’s about expanding the universe of potentially life-enriching and life-saving technologies available to humanity.

Conclusion

This review of Samuel Florman’s work may seem comprehensive, but it only scratches the surface of his wide-ranging writing. Florman was troubled that engineering lacked public support, or at least public understanding. Perhaps that was because, he reasoned, “[t]here is no single truth that embodies the practice of engineering, no patron saint, no motto or simple credo. There is no unique methodology that has been distilled from millennia of technological effort.” Or, more simply, it may also be the case that the profession lacked articulate defenders. “The engineer may merely be waiting for his Shakespeare,” he suggested.[78]

Through his life’s work, however, Samuel Florman became that Shakespeare; the great bard of engineering and passionate defender of technological innovation and rational optimism more generally. In looking for a quote or two to close out my latest book, I ended with this one from Florman:

“By turning our backs on technological change, we would be expressing our satisfaction with current world levels of hunger, disease, and privation. Further, we must press ahead in the name of the human adventure. Without experimentation and change our existence would be a dull business.”[79]

Let us resolve to make sure that Florman’s greatest fear does not come to pass. Let us resolve to make sure that the great human adventure never ends. And let us resolve to counter the antitechnologists and their fundamentally anti-humanist worldview, which would most assuredly make our existence the “dull business” that Florman dreaded.

We can do better when we put our minds and hands to work innovating in an attempt to build a better future for humanity. Samuel Florman, the great prophet of progress, showed us the way forward.

Endnotes:

[1]    Matt Ridley, The Rational Optimist: How Prosperity Evolves (New York: Harper Collins, 2010).

[2]    Adam Thierer, “Defending Innovation Against Attacks from All Sides,” Discourse, November 9, 2021, https://www.discoursemagazine.com/ideas/2021/11/09/defending-innovation-against-attacks-from-all-sides.

[3]    Samuel C. Florman, The Civilized Engineer (New York: St. Martin’s Griffin, 1987), p. 26.

[4]    Samuel C. Florman, The Existential Pleasures of Engineering (New York: St. Martin’s Griffin, 2nd ed., 1994), p. 53-4.

[5]    Existential Pleasures of Engineering, p. 53-4.

[6]    Samuel C. Florman, Blaming Technology: The Irrational Search for Scapegoats (New York: St. Martin’s Press, 1981), p. 186.

[7]    Existential Pleasures of Engineering, p. 76.

[8]    Existential Pleasures of Engineering, p. 77.

[9]    The Civilized Engineer, p. 38.

[10]   Thomas Sowell, The Vision of the Anointed: Self-Congratulation as a Basis for Social Policy (New York: Basic Books, 1995).

[11]   Existential Pleasures of Engineering, p. 72.

[12]   Existential Pleasures of Engineering, p. 76.

[13]   The Civilized Engineer, p. 35.

[14]   Existential Pleasures of Engineering, p. 102.

[15]   Blaming Technology, p. 162.

[16]   Existential Pleasures of Engineering, p. 55.

[17]   Blaming Technology, p. 70.

[18]   Existential Pleasures of Engineering, p. 77.

[19]   Existential Pleasures of Engineering, p. 60.

[20]   Adam Thierer, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology 14, no. 1 (2013), p. 312–50, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2012494.

[21]   Existential Pleasures of Engineering, p. 62.

[22]   Blaming Technology, p. 9.

[23]   Hans Rosling, Factfulness: Ten Reasons We’re Wrong about the World—and Why Things Are Better Than You Think (New York: Flatiron Books, 2018).

[24]   Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking, 2018).

[25]   Gregg Easterbrook, It’s Better than It Looks: Reasons for Optimism in an Age of Fear (New York: Public Affairs, 2018); Michael A. Cohen & Micah Zenko, Clear and Present Safety: The World Has Never Been Better and Why That Matters to Americans (New Haven, CT: Yale University Press, 2019).

[26]   Existential Pleasures of Engineering, p. 54.

[27]   Existential Pleasures of Engineering, p. 72.

[28]   Existential Pleasures of Engineering, p. 72.

[29]   Existential Pleasures of Engineering, p. 55.

[30]   Existential Pleasures of Engineering, p. 117.

[31]   David Hume, “Of the Populousness of Ancient Nations,” (1777), https://oll.libertyfund.org/titles/hume-essays-moral-political-literary-lf-ed.

[32]   The Civilized Engineer, p. 20.

[33]   Existential Pleasures of Engineering, p. 6.

[34]   The Civilized Engineer, p. 20.

[35]   Existential Pleasures of Engineering, p. 115.

[36]   The Civilized Engineer, p. 20.

[37]   Arthur Diamond, Openness to Creative Destruction: Sustaining Innovative Dynamism (Oxford: Oxford University Press, 2019).

[38]   Existential Pleasures of Engineering, p. 19.

[39]   Existential Pleasures of Engineering, p. 147.

[40]   Existential Pleasures of Engineering, p. 148.

[41]   The Civilized Engineer, p. 30.

[42]   Adam Thierer, “Muddling Through: How We Learn to Cope with Technological Change,” Medium, June 30, 2014, https://medium.com/tech-liberation/muddling-through-how-we-learn-to-cope-with-technological-change-6282d0d342a6.

[43]   Existential Pleasures of Engineering, p. 84.

[44]   The Civilized Engineer, p. 71.

[45]   The Civilized Engineer, p. 72.

[46]   The Civilized Engineer, p. 72.

[47]   Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.

[48]   The Civilized Engineer, p. xi.

[49]   Existential Pleasures of Engineering, p. 85.

[50]   Adam Thierer, “Is the Public Served by the Public Interest Standard?” The Freeman, September 1, 1996,  https://fee.org/articles/is-the-public-served-by-the-public-interest-standard.

[51]   The Civilized Engineer, p. 84.

[52]   The Existential Pleasures of Engineering, p. 22.

[53]   Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.

[54]   The Civilized Engineer, p. 79.

[55]   Blaming Technology, p. 106.

[56]   The Civilized Engineer, p. 158.

[57]   Blaming Technology, p. 41.

[58]   Blaming Technology, p. 40-1.

[59]   Adam Thierer, “Embracing a Culture of Permissionless Innovation,” Cato Online Forum, November 17, 2014, https://www.cato.org/publications/cato-online-forum/embracing-culture-permissionless-innovation; Christopher Koopman, “Creating an Environment for Permissionless Innovation,” Testimony before the US Congress Joint Economic Committee, May 22, 2018, https://www.mercatus.org/publications/creating-environment-permissionless-innovation.

[60]   The Civilized Engineer, p. 117.

[61]   Joel Mokyr, Lever of Riches: Technological Creativity and Economic Progress (New York: Oxford University Press, 1990).

[62]   Deirdre N. McCloskey, The Bourgeois Virtues: Ethics for an Age of Commerce (Chicago: The University of Chicago Press, 2006); Deirdre N. McCloskey, Bourgeois Dignity: Why Economics Can’t Explain the Modern World (Chicago: The University of Chicago Press. 2010).

[63]   Existential Pleasures of Engineering, p. 57.

[64]   Blaming Technology, p. 22.

[65]   The Existential Pleasures of Engineering, p. 58.

[66]   Blaming Technology, p. 10.

[67]   The Existential Pleasures of Engineering, p. 48, 53.

[68]   Existential Pleasures of Engineering, p. 70.

[69]   Existential Pleasures of Engineering, p. 49.

[70]   Existential Pleasures of Engineering, p. 16.

[71]   Existential Pleasures of Engineering, p. 61.

[72]   Existential Pleasures of Engineering, p. 61.

[73]   Existential Pleasures of Engineering, p. 60.

[74]   Blaming Technology, p. 104.

[75]   Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (New York: Public Affairs, 2013).

[76]   Adam Thierer, “A Net Skeptic’s Conservative Manifesto,” Reason, April 27, 2013, https://reason.com/2013/04/27/a-net-skeptics-conservative-manifesto-2/.

[77]   Emily Bobrow, “JoeBen Bevirt Is Bringing Flying Taxis from Sci-Fi to Reality,” Wall Street Journal, July 9, 2021, https://www.wsj.com/articles/joeben-bevirt-is-bringing-flying-taxis-from-sci-fi-to-reality-11625848177.

[78]   Existential Pleasures of Engineering, p. 96.

[79]   Blaming Technology, p. 193.

The End of Permissionless Innovation?
https://techliberation.com/2021/01/10/the-end-of-permissionless-innovation/
Sun, 10 Jan 2021

Time magazine recently declared 2020 “The Worst Year Ever.” By historical standards that may be a bit of hyperbole. For America’s digital technology sector, however, that headline rings true. After a remarkable 25-year run that saw an explosion of innovation and the rapid ascent of a group of U.S. companies that became household names across the globe, politicians and pundits in 2020 declared the party over.

“We now are on the cusp of a new era of tech policy, one in which the policy catches up with the technology,” says Darrell M. West of the Brookings Institution in a recent essay, “The End of Permissionless Innovation.” West cites the House Judiciary Antitrust Subcommittee’s October report on competition in digital markets—where it equates large tech firms with the “oil barons and railroad tycoons” of the Gilded Age—as the clearest sign that politicization of the internet and digital technology is accelerating.

It is hardly the only indication that America is set to abandon permissionless innovation and revisit the era of heavy-handed regulation for information and communication technology (ICT) markets. Equally significant is the growing bipartisan crusade against Section 230, the provision of the 1996 Telecommunications Act that shields “interactive computer services” from liability for information posted or published on their systems by users. No single policy has been more important to the flourishing of online speech or commerce than Sec. 230 because, without it, online platforms would be overwhelmed by regulation and lawsuits. But now, long knives are coming out for the law, with plenty of politicians and academics calling for it to be gutted.

Calls to reform or repeal Sec. 230 were once exclusively the province of left-leaning academics or policymakers, but this year it was conservatives in the White House, on Capitol Hill, and at the Federal Communications Commission (FCC) who became the leading cheerleaders for scaling back or eliminating the law. President Trump railed against Sec. 230 repeatedly on Twitter, and most recently vetoed the annual National Defense Authorization Act in part because Congress did not include a repeal of the law in the measure. Meanwhile, conservative lawmakers in Congress such as Sens. Josh Hawley and Ted Cruz have used subpoenas, angry letters, and heated hearings to hammer digital tech executives about their content moderation practices. Allegations of anti-conservative bias have motivated many of these efforts. Even Supreme Court Justice Clarence Thomas questioned the law in a recent opinion.

Other proposed regulatory interventions include calls for new national privacy laws, an “Algorithmic Accountability Act” to regulate artificial intelligence technologies, and a growing variety of industrial policy measures that would open the door to widespread meddling with various tech sectors. Some officials in the Trump administration even pushed for a nationalized 5G communications network in the name of competing with China.

This growing “techlash” signals a bipartisan “Back to the Future” moment, with the possibility of the U.S. reviving a regulatory playbook that many believed had been discarded in history’s dustbin. Although plenty of politicians and pundits are taking victory laps and giving each other high-fives over the impending end of the permissionless innovation era, it is worth considering what America will be losing if we once again apply old top-down, permission-slip-oriented policies to the technology sector.

Permissionless Innovation: The Basics

As an engineering principle, permissionless innovation represents the general freedom to tinker and develop new ideas and products in a relatively unconstrained fashion. As I noted in a recent book on the topic, permissionless innovation can also describe a governance disposition or regulatory default toward entrepreneurial activities. In this sense, permissionless innovation refers to the idea that experimentation with new technologies and innovations should generally be permitted by default and that prior restraints on creative activities should be avoided except in those cases where clear and immediate harm is evident.

There is an obvious relationship between the narrow and broad definitions of permissionless innovation. When governments lean toward permissionless innovation as a policy default, it is likely to encourage freewheeling experimentation more generally. But permissionless innovation can sometimes occur in the wild, even when public policy instead tends toward its antithesis—the precautionary principle. As I noted in my latest book, tinkerers and innovators sometimes behave evasively and act to make permissionless innovation a reality even when public policy discourages it through precautionary restraints.

To be clear, permissionless innovation as a policy default has not meant anarchy. Quite the opposite, in fact. In the United States, over the past 25 years, no major federal technology-regulating agencies or laws were eliminated. Indeed, most agencies grew bigger. But in spite of this, entrepreneurs during this period got more green lights than red ones, and innovation was treated as innocent until proven guilty. This is how and why social media and the sharing economy developed and prospered here and not in other countries, where layers of permission slips prevented such innovations from ever getting off the drawing board.

The question now is, how will the shift to end permissionless innovation as a policy default in the U.S. affect innovative activity here more generally? Economic historians Deirdre McCloskey and Joel Mokyr teach us that societal and political attitudes toward growth, risk-taking and entrepreneurialism have a powerful connection with the competitive standing of nations and the possibility of long-term prosperity. If America’s innovation culture sours on the idea of permissionless-ness and moves toward a precautionary principle-based model, creative minds will find it harder to experiment with bold new ideas that could help enrich the nation and improve the well-being of the citizenry—which is exactly why America discarded its old top-down regulatory model in the first place.

Why America Junked the Old Model

Perhaps the easiest way to put some rough bookends on the beginning and end of America’s permissionless innovation era is to date it to the birth and impending death of Sec. 230 itself. The enactment in 1996 of the Telecommunications Act was important, not only because it included Sec. 230, but also because the law created a sort of policy firewall between the old and new worlds of ICT regulation.

The old ICT regime was rooted in a complex maze of federal, state and local regulatory permission slips. If you wanted to do anything truly innovative in the old days, you typically needed to get some regulator’s blessing first—sometimes multiple blessings. The exception was the print sector, which enjoyed robust First Amendment protection from the time of the nation’s founding. Newspapers, magazines and book publishers were left largely free of prior restraints regarding what they published or how they innovated. The electronic media of the 20th century were not so lucky. Telephony, radio, television, cable, satellite and other technologies were quickly encumbered with a crazy quilt of federal and state regulations. Those restraints included price controls, entry restrictions, speech restrictions and endless agency threats.

ICT policy started turning the corner in the late 1980s after the old regulatory model failed to achieve its mission of more choice, higher quality and lower prices for media and communications. Almost everyone accepted that change was needed, and it came fast. The 1990s became a whirlwind of policy and technological change. In the mid-1990s, the Clinton administration decided to allow open commercialization of the internet, which, until then, had mostly been a plaything for government agencies and university researchers. But it was the enactment of the 1996 telecommunications law that sealed the deal. Not only did the new law largely avoid regulating the internet like analog-era ICT, but, more importantly, it included Sec.
230, which helped ensure that future regulators or overzealous tort lawyers would not undermine this wonderful new resource. A year later, the Clinton administration put a cherry on top with the release of its Framework for Global Electronic Commerce. This bold policy statement announced a clean break from the past, arguing that “the private sector should lead [and] the internet should develop as a market-driven arena, not a regulated industry.” Permissionless innovation had become the foundation of American tech policy.

The Results

Ideas have consequences, as they say, and that includes ramifications for domestic business formation and global competitiveness. While the U.S. was allowing the private sector to largely determine the shape of the internet, Europe was embarking on a very different policy path, one that would hobble its tech sector. America’s more flexible policy ecosystem proved to be fertile ground for digital startups.

Consider the rise of “unicorns,” shorthand for companies valued at $1+ billion. “In terms of the global distribution of startup success,” notes the State of the Venture Capital Industry in 2019, “the number of private unicorns has grown from an initial list of 82 in 2015 to 356 in Q2 2019,” and fully half of them are U.S.-based.

The United States is also home to the most innovative tech firms. Over the past decade, Strategy& (PricewaterhouseCoopers’ strategy consulting business) has compiled a list of the world’s most innovative companies, based on R&D efforts and revenue. Each year that list is dominated by American tech companies. In 2013, 9 of the top 10 most innovative companies were based in the U.S., and most of them were involved in computing, software and digital technology. Global competition is intensifying, but in the most recent 2018 list, 15 of the top 25 companies are still U.S.-based giants, with Amazon, Google, Intel, Microsoft, Apple, Facebook, Oracle and Cisco leading the way. Meanwhile, European digital tech companies cannot be found on any such list. While America’s tech companies are household names across the European continent, most people struggle to name a single digital innovator headquartered in the EU. Permissionless innovation crushed the precautionary principle in the trans-Atlantic policy wars.

European policymakers have responded to the continent’s digital stagnation by doubling down on their aggressive regulatory efforts.
The EU closed out 2020 with two comprehensive new measures (the Digital Services Act and the Digital Markets Act), while the U.K. simultaneously pursued a new “online harms” law. Taken together, these proposals represent “the biggest potential expansion of global tech regulation in years,” according to The Wall Street Journal. The measures will greatly expand extraterritorial control over American tech companies. Having decimated their domestic technology base and driven away innovators and investors, EU officials are now resorting to plugging budget shortfalls with future antitrust fines on U.S.-based tech companies. It has essentially been a lost quarter century for Europe on the information technology front, and now American companies are expected to pay for it.

Republicans Revive ‘Regulation-By-Raised-Eyebrow’

In light of the failure of Europe’s precautionary principle-based policy paradigm, and considering the threat now posed by the growing importance of various Chinese tech companies, one might think U.S. policymakers would be celebrating the competitive advantages created by a quarter century of American tech dominance and contemplating how to apply this winning vision to other sectors of the economy. Alas, despite its amazing run, business and political leaders are now turning against permissionless innovation as America’s policy lodestar. What is most surprising is how this reversal is now being championed by conservative Republicans, who traditionally support deregulation. President Trump also called for tightening the screws on Big Tech. For example, in a May 2020 Executive Order on “Preventing Online Censorship,” he accused online platforms of “selective censorship that is harming our national discourse” and suggested that “these platforms function in many ways as a 21st century equivalent of the public square.” Trump and his supporters put Google, Facebook, Twitter and Amazon in their crosshairs, accusing them of discriminating against conservative viewpoints or values. The irony here is that no politician owes more to modern social media platforms than Donald Trump, who effectively used them to communicate his ideas directly to the American people. Moreover, conservative pundits now enjoy unparalleled opportunity to get their views out to the wider world thanks to all the digital soapboxes they now can stand on. YouTube and Twitter are chock-full of conservative punditry, and the daily list of top 10 search terms on Facebook is dominated consistently by conservative voices, where “the right wing has a massive advantage,” according to Politico. Nonetheless, conservatives insist they still don’t get a fair shake from the cornucopia of new communications platforms that earlier generations of conservatives could have only dreamed about having at their disposal. 
They think the deck is stacked against them by Silicon Valley liberals. This growing backlash culminated in a remarkable Senate Commerce Committee hearing on Oct. 28 in which congressional Republicans hounded tech CEOs, called for more favorable treatment of conservatives, and threatened social media companies with regulation if conservative content was taken down. Liberal lawmakers, by contrast, uniformly demanded the companies do more to remove content they felt was harmful or deceptive in some fashion. In many cases, lawmakers on both sides of the aisle were talking about the exact same content, putting the companies in the impossible position of having to devise a Goldilocks formula to get the content balance just right, even though it would be impossible to make both sides happy.

In the broadcast era, this sort of political harassment was known as the “regulation-by-raised-eyebrow” approach, which allowed officials to get around First Amendment limitations on government content control. Congressional lawmakers and regulators at the FCC would set up show trial hearings and use political intimidation to gain programming concessions from licensed radio and television operators. These shakedown tactics didn’t always work, but they often resulted in forms of soft censorship, with media outlets editing content to make politicians happy.

The same dynamic is at work today. Thus, when a firebrand politician like Sen. Josh Hawley suggests “we’d be better off if Facebook disappeared,” or when Sohrab Ahmari, the conservative op-ed editor at the New York Post, calls for the nationalization of Twitter, they likely understand these extreme proposals won’t happen. But such jawboning represents an easy way to whip up your base while also indirectly putting intense pressure on companies to tweak their policies. Make us happy, or else!
It is not always clear what that “or else” entails, but the accumulated threats probably have some effect on content decisions made by these firms. Whether all this means that Sec. 230 gets scrapped or not shouldn’t distract from the more pertinent fact: few on the political right are preaching the gospel of permissionless innovation anymore. Even tech companies and Silicon Valley-backed organizations now actively distance themselves from the term. Zachary Graves, head of policy at Lincoln Network, a tech advocacy organization, worries that permissionless innovation is little more than a “legitimizing facade for anarcho-capitalists, tech bros, and cynical corporate flacks.” He lines up with the growing cast of commentators on both the left and right who endorse a “Tech New Deal” without getting concrete about what that means in practice. What it likely means is a return to a well-worn regulatory playbook of the past that resulted in innovation stagnation and crony capitalism.

A More Political Future

Indeed, as was the case during past eras of permission slip-based policy, our new regulatory era will be a great boon to the largest tech companies. Many people advocate greater regulation in the name of promoting competition, choice, quality and lower prices. But merely because someone proclaims that they are looking to serve the public interest doesn’t mean the regulatory policies they implement will achieve those well-intentioned goals. The means to the end—new rules, regulations and bureaucracies—are messy, imprecise and often counterproductive. Fifty years ago, the Nobel prize-winning economist George Stigler taught us that, “as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefits.” In other words, new regulations often help to entrench existing players rather than fostering greater competition. Countless experts since then have documented the problem of regulatory capture in various contexts. If the past is prologue, we can expect many large tech firms to openly embrace regulation as they come to see it as a useful way of preserving market share and fending off pesky new rivals, most of whom will not be able to shoulder the compliance burdens and liability threats associated with permission slip-based regulatory regimes. True to form, in recent congressional hearings, Facebook head Mark Zuckerberg called on lawmakers to begin regulating social media markets. The company then rolled out a slick new website and advertising campaign inviting new rules on various matters. It is always easy for the king of the hill to call for more regulation when that hill is a mound of red tape of their own making—and which few others can ascend. It is a lesson we should have learned in the AT&T era, when a decidedly unnatural monopoly was formed through a partnership between company officials and the government.

Image Credit: Infrogmation/Wikimedia Commons

Many independent telephone companies existed across America before AT&T’s leaders cut sweetheart deals with policymakers that tilted the playing field in its favor and undermined competition. With rivals hobbled by entry restrictions and other rules, Ma Bell went on to enjoy more than a half century of stable market share and guaranteed rates of return. Consumers, by contrast, were expected to be content with plain-vanilla telephone services that barely changed. Some of us are old enough to remember when the biggest “innovation” in telephony involved the move from rotary-dial phones to the push-button Princess phone, which, we were thrilled to discover, came in multiple colors and had a longer cord. In a similar way, the impending close of the permissionless innovation era signals the twilight of technological creative destruction and its replacement by a new regime of political favor-seeking and logrolling, which could lead to innovation stagnation. The CEOs of the remaining large tech companies will be expected to make regular visits to the halls of Congress and regulatory agencies (and to all those fundraising parties, too) to get their marching orders, just as large telecom and broadcaster players did in the past. We will revert to the old historical trajectory, which saw communications and media companies securing marketplace advantages more through political machinations than marketplace merit.

Will Politics Really Catch Up?

While permissionless innovation may be falling out of favor with elites, America’s entrepreneurial spirit will be hard to snuff out, even when layers of red tape make it riskier to be creative. If for no other reason, permissionless innovation still has a fighting chance so long as Congress struggles to enact comprehensive technology measures. General legislative dysfunction and profound technological ignorance are two reasons that Congress has largely become a non-actor on tech policy in recent years. But the primary limitation on legislative meddling is the so-called pacing problem, which refers to the way technological innovation often outpaces the ability of laws and regulations to keep up. “I have said more than once that innovation moves at the speed of imagination and that government has traditionally moved at, well, the speed of government,” observed former Federal Aviation Administration head Michael Huerta in a 2016 speech.

DNA sequencing machine. Image Credit: Assembly/Getty Images

The same factors that drove the rise of the internet revolution—digitization, miniaturization, ubiquitous mobile connectivity and constantly increasing processing power—are spreading to many other sectors and challenging precautionary policies in the process. For example, just as “Moore’s Law” relentlessly powers the pace of change in ICT sectors, the “Carlson curve” now fuels genetic innovation. The curve refers to the fact that, over the past two decades, the cost of sequencing a human genome has plummeted from over $100 million to under $1,000, a rate nearly three times faster than Moore’s Law. Speed isn’t the only factor driving the pacing problem. Policymakers also struggle with metaphysical considerations about how to define the things they seek to regulate. It used to be easy to agree what a phone, television or medical tracking device was for regulatory purposes. But what do those terms really mean in the age of the smartphone, which incorporates all of them and much more? “‘Tech’ is a very diverse, widely-spread industry that touches on all sorts of different issues,” notes tech analyst Benedict Evans. “These issues generally need detailed analysis to understand, and they tend to change in months, not decades.” This makes regulating the industry significantly more challenging than it was in the past. It doesn’t mean the end of regulation—especially for sectors already encumbered by many layers of preexisting rules. But these new realities lead to a more interesting game of regulatory whack-a-mole: pushing down technological innovation in one way often means it simply pops up somewhere else. The continued rapid growth of what some call “the new technologies of freedom”—artificial intelligence, blockchain, the Internet of Things, etc.—should give us some reasons for optimism. It’s hard to put these genies back in their bottles now that they’re out. This is even more true thanks to the growth of innovation arbitrage—both globally and domestically. 
Creators and capital now move fluidly across borders in pursuit of more hospitable innovation and investment climates. Recently, some high-profile tech CEOs like Elon Musk and Joe Lonsdale have relocated from California to Texas, citing tax and regulatory burdens as key factors in their decisions. Oracle, America’s second-largest software company, also just announced it is moving its corporate headquarters from Silicon Valley to Austin, just over a week after Hewlett Packard Enterprise said it too is moving its headquarters from California to Texas—in this case, Houston. “Voting with your feet” might actually still mean something, especially when it is major tech companies and venture capitalists abandoning high-tax, over-regulated jurisdictions.
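The Carlson curve figures cited above are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses round numbers assumed for illustration (roughly $100 million per genome circa 2001, under $1,000 circa 2020), not data from this essay. Averaged over the full two decades, that works out to sequencing cost halving roughly every 14 months, against the canonical ~24-month Moore’s Law cadence; the steepest stretch, after next-generation sequencers arrived around 2008, was faster still.

```python
import math

# Round figures assumed for illustration (not data from this essay):
start_cost = 100_000_000  # dollars per genome, circa 2001
end_cost = 1_000          # dollars per genome, circa 2020
years = 20

fold = start_cost / end_cost                # 100,000-fold cost reduction
halvings = math.log2(fold)                  # ~16.6 halvings of cost
months_per_halving = years * 12 / halvings  # ~14.4 months per halving

print(f"{fold:,.0f}-fold drop: cost halved every {months_per_halving:.1f} "
      f"months, vs. roughly 24 months for a Moore's Law doubling")
```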

Advocacy Remains Essential

But we shouldn’t imagine that technological change is inevitable or fall into the trap of thinking of it as a sort of liberation theology that will magically free us from repressive government controls. Policy advocacy still matters. Innovation defenders will need to continue to push back against the most burdensome precautionary policies, while also promoting reforms that protect entrepreneurial endeavors.

The courts offer us great hope. Groups like the Institute for Justice, the Goldwater Institute, the Pacific Legal Foundation and others continue to litigate successfully in defense of the freedom to innovate. While the best we can hope for in the legislative arena may be perpetual stalemate, these and other public interest law firms are netting major victories in courtrooms across America. Sometimes court victories force positive legislative changes, too. For example, in 2015, the Supreme Court handed down North Carolina State Board of Dental Examiners v. Federal Trade Commission, which held that state licensing boards controlled by active market participants cannot claim broad immunity from federal antitrust laws unless they are actively supervised by the state. This decision made much-needed occupational licensing reform an agenda item across America. Many states introduced or adopted bipartisan legislation aimed at reforming or sunsetting occupational licensing rules that undermine entrepreneurship.

Even more exciting are proposals that would protect citizens’ “right to earn a living.” This right would allow individuals to bring suit if they believe a regulatory scheme or decision has unnecessarily infringed upon their ability to earn a living within a legally permissible line of work. Meanwhile, there have been ongoing state efforts to advance “right to try” legislation that would expand medical treatment options for Americans tired of overly paternalistic health regulations. Perhaps, then, it is too early to close the book on the permissionless innovation era.
While dark political clouds loom over America’s technological landscape, there are still reasons to believe the entrepreneurial spirit can prevail.
https://techliberation.com/2021/01/10/the-end-of-permissionless-innovation/feed/ 1 76823
Latest Soft Law Development: DoT’s NETT Council Report https://techliberation.com/2020/07/31/latest-soft-law-development-dots-nett-council-report/ https://techliberation.com/2020/07/31/latest-soft-law-development-dots-nett-council-report/#comments Fri, 31 Jul 2020 18:13:26 +0000 https://techliberation.com/?p=76780

Cover of the Pathways Document

On July 23rd, the U.S. Department of Transportation (DoT) released Pathways to the Future of Transportation, which was billed as “a policy document that is intended to serve as a roadmap for innovators of new cross modal technologies to engage with the Department.” This guidance document was created by a new body called the Non-Traditional and Emerging Transportation Technology (NETT) Council, which was formed by U.S. Transportation Secretary Elaine L. Chao last year. The NETT Council is described as “an internal deliberative body to identify and resolve jurisdictional and regulatory gaps that may impede the deployment of new technologies.”

The creation of NETT Council and the issuance of its first major report highlight the continued growth of “soft law” as a major governance trend for emerging technology in the US. Soft law refers to informal, collaborative, and constantly evolving governance mechanisms that differ from hard law in that they lack the same degree of enforceability. A partial inventory of soft law methods includes: multistakeholder processes, industry best practices or codes of conduct, technical standards, private certifications, agency workshops and guidance documents, informal negotiations, and education and awareness efforts. But this list of soft law mechanisms is amorphous and ever-changing.

Soft law systems and processes are multiplying at every level of government today: federal, state, local, and even globally. Such mechanisms are being tapped by government bodies today to deal with fast-moving technologies that are evolving faster than the law’s ability to keep up.

The US Department of Transportation has become a leading candidate for Soft Law Central at the federal level. The agency has been tapping a variety of soft law mechanisms and approaches to deal with driverless cars and drone policy issues in particular. (See the essays listed down below for more details).

The NETT Council represents the next wave of this governance trend. We might consider it an effort to bring a greater degree of formality and coordination to the agency’s soft law efforts. The DoT’s overview of the NETT Council explains its purpose as follows:

Inventors and investors approach USDOT to obtain necessary safety authorizations, permits, and funding and often face uncertainty about how to coordinate with the Department. The NETT Council will address these challenges by ensuring that the traditional modal silos at DOT do not impede the safe deployment of new technology. Furthermore, it will give project sponsors a single point of access to discuss plans and proposals.

In its new guidance document, the NETT Council seeks to outline how it will work to develop “the principles informing the [DoT] policies in transformative technologies,” as well as “the overarching regulatory framework for non-traditional and emerging transportation technologies.” A lot of stress is placed on “how the Council will engage with innovators and entrepreneurs” to strike the balance between continued safety and increased innovation.

Although much of the document simply discusses existing agency regulatory authority, the Council also identifies how the agency and its subdivisions will seek a more flexible governance approach going forward. A premium is placed on expanding dialogue among affected parties. The section discussing environmental review requirements is indicative of this, noting: “The Department encourages innovators, project sponsors or proponents to engage in a dialogue with the NETT Council when the proponent anticipates seeking Federal financial assistance or an authorization.”

“Any innovator can approach the NETT Council with its ideas,” the document says in another section, although engagement level may vary by issue and department. It continues on to note that, “during the formation stage, the NETT Council would likely be willing to have an informational meeting and establish a point of contact to maintain a level of awareness for Department staff regarding the new project.” “Successful collaboration tends to be characterized by industry initiation and leadership with a limited and defined federal role,” it notes. Several examples are highlighted.

In addition to the importance of early dialogue between innovators and regulators, the document stresses the dangers associated with regulatory uncertainty. It also includes some discussion about the problems associated with a lack of regulatory flexibility in some instances “and the potential deterrent to innovation caused by attempting to ‘shoehorn’ a particular technology into a regulatory regime that does not fit.” There is also some discussion of how international or private sector standards might help provide governance solutions in some instances.

Again, these are all examples of soft law mechanisms. To be clear, the NETT Council is not proposing the abandonment of hard law enforcement efforts. To the contrary, the document repeatedly reiterates what those powers are and how they might be used. But it is equally clear that the DoT realizes that the old regulatory systems are being severely strained by the “pacing problem,” or the notion that technological developments are often moving considerably faster than traditional regulatory processes.

The NETT Council report is a welcome effort to broaden the dialogue about what sort of governance systems might make the most sense going forward for emerging technologies. This is a pressing problem for the DoT because of the convergence of digital and analog sectors and technologies. AI and machine-learning technologies are invading the crusty old world of transportation networks and regulations. Momentous changes are happening. Law will need to adapt. Soft law systems will increasingly be tapped to help out, if for no other reason than there isn’t a better backup plan. If America hopes to be a leader in transportation innovation, new governance approaches will be essential.

Below you will find some additional essays on the growing soft law-ization of technological governance in the US. Many of them are about transportation technologies and recent developments at the federal and state levels. I also recommend this new essay by John Villasenor over at Brookings on “Soft law as a complement to AI regulation.” Finally, if you want to do a deep dive into the nature of soft law and the full range of governance issues associated with it, then you absolutely must follow the work being done by Gary Marchant and his impressive team of colleagues at Arizona State University. Begin with this essay on “Soft Law Governance Of Artificial Intelligence,” and then get your hands on this huge book on the topic that Marchant co-edited. It’s the best thing I have read on soft law and alternative governance systems for emerging technologies.

In the meantime, give the new DoT NETT Council report a glance because, for better or worse, this is what the future of technological governance looks like.


https://techliberation.com/2020/07/31/latest-soft-law-development-dots-nett-council-report/feed/ 1 76780
Video: Launch Event for “Evasive Entrepreneurs” Book https://techliberation.com/2020/04/29/video-launch-event-for-evasive-entrepreneurs-book/ https://techliberation.com/2020/04/29/video-launch-event-for-evasive-entrepreneurs-book/#respond Wed, 29 Apr 2020 15:22:06 +0000 https://techliberation.com/?p=76706

Here’s yesterday’s full launch event video for the release of my new book, Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments. My thanks to Matthew Feeney, Director of the Project on Emerging Technologies at the Cato Institute, for hosting the discussion and sorting through audience questions. The video is below, along with a list of some of the topics we discussed:

* innovation culture
* charter cities, innovation hubs & competitive federalism
* the pacing problem
* technological determinism
* innovation arbitrage
* existential risk
* the Precautionary Principle vs. Permissionless Innovation
* responsible innovation
* drones, facial recognition & surveillance tech
* why privacy & cybersecurity bills never pass
* regulatory accumulation
* applying Moore’s Law to government
* technological civil disobedience
* 3D printing
* biohacking & the “Right to Try” movement
* technologies of resistance
* “born free” technologies vs. “born in captivity” tech
* regulatory capture
* agency threats & “regulation by raised eyebrow”
* soft law vs. hard law
* autonomous systems & “killer robots”!
https://techliberation.com/2020/04/29/video-launch-event-for-evasive-entrepreneurs-book/feed/ 0 76706
Barriers to a Builder’s Movement: Thoughts on Andreessen’s Manifesto https://techliberation.com/2020/04/21/barriers-to-a-builders-movement-thoughts-on-andreessens-manifesto/ https://techliberation.com/2020/04/21/barriers-to-a-builders-movement-thoughts-on-andreessens-manifesto/#comments Tue, 21 Apr 2020 16:48:50 +0000 https://techliberation.com/?p=76692

[First published by AIER on April 20, 2020 as “Innovation and the Trouble with the Precautionary Principle.”]

In a much-circulated new essay (“It’s Time to Build”), Marc Andreessen has penned a powerful paean to the importance of building. He says the COVID crisis has awakened us to the reality that America is no longer the bastion of entrepreneurial creativity it once was. “Part of the problem is clearly foresight, a failure of imagination,” he argues. “But the other part of the problem is what we didn’t do in advance, and what we’re failing to do now. And that is a failure of action, and specifically our widespread inability to build.”

Andreessen suggests that, somewhere along the line, something changed in the DNA of the American people and they essentially stopped having the desire to build as they once did. “You don’t just see this smug complacency, this satisfaction with the status quo and the unwillingness to build, in the pandemic, or in healthcare generally,” he says. “You see it throughout Western life, and specifically throughout American life.” He continues:

“The problem is desire. We need to want these things. The problem is inertia. We need to want these things more than we want to prevent these things. The problem is regulatory capture. We need to want new companies to build these things, even if incumbents don’t like it, even if only to force the incumbents to build these things.”

Accordingly, Andreessen continues on to make the case to both the political right and left to change their thinking about building more generally. “It’s time for full-throated, unapologetic, uncompromised political support from the right for aggressive investment in new products, in new industries, in new factories, in new science, in big leaps forward.”

What’s missing in Andreessen’s manifesto is a concrete connection between America’s apparently dwindling desire to build these things and the political realities on the ground that contribute to that problem. Put simply, policy influences attitudes. More specifically, policies that frown upon entrepreneurial risk-taking actively disincentivize the building of new and better things. Thus, to correct the problem Andreessen identifies, we must first remove the political barriers to productive entrepreneurialism, or else we will never get back to being the builders we once were.

Attitudes about Progress Matter 

The economic historian Joel Mokyr has noted that “technological progress requires above all tolerance toward the unfamiliar and the eccentric” and that the innovation that undergirds economic growth is best viewed as “a fragile and vulnerable plant” that “is highly sensitive to the social and economic environment and can easily be arrested by relatively small external changes.” Specifically, societal and political attitudes toward growth, risk-taking, and entrepreneurial activities (and failures) are important to the competitive standing of nations and the possibility of long-term prosperity. “How the citizens of any country think about economic growth, and what actions they take in consequence, are,” Benjamin Friedman observes, “a matter of far broader importance than we conventionally assume.”

Former Federal Reserve chairman Alan Greenspan and co-author Adrian Wooldridge have observed that “[t]he key to America’s success lies in its unique toleration for ‘creative destruction,’” and an “enduring preference for change over stability.” This is consistent with the findings of Deirdre McCloskey’s recent three-volume work on the history of modern economic growth, which meticulously documents how an embrace of “bourgeois virtues” (i.e., positive attitudes about markets and innovation) was the crucial factor propelling the invention and economic growth that resulted in the Industrial Revolution. Positive attitudes toward innovation and risk-taking were equally important for the more recent Information Revolution. That, in turn, helps explain why so many US-based tech innovators became global powerhouses, while firms from other countries tended to flounder because their innovation cultures were more precautionary in orientation.

There are limits to how much policymakers can do to influence the attitudes among citizens toward innovation, entrepreneurialism, and economic growth. When policymakers set the right tone with a positive attitude toward innovation, however, it inevitably infuses various institutions and creates powerful incentives for entrepreneurial efforts to be undertaken. This, in turn, influences broader societal attitudes and institutions toward innovation and creates a positive feedback loop. “If we learn anything from the history of economic development,” argued David Landes in his magisterial The Wealth and Poverty of Nations: Why Some Are So Rich and Some Are So Poor, “it is that culture makes all the difference.” Research by other scholars finds that, “existing cultural conditions determine whether, when, how and in what form a new innovation will be adopted.”

Economists like Mancur Olson speak of the importance of a “structure of incentives” that helps explain why “the great differences in the wealth of nations are mainly due to differences in the quality of their institutions and economic policies.” In this sense, “institutions” include what Elhanan Helpman defines as “systems of rules, beliefs, and organizations,” including the rule of law and court systems, property rights, contracts, free trade policies and institutions, light-touch regulations and regulatory regimes, freedom to travel, and various other incentives to invest.

It is the freedom to invest, the freedom to work, and the freedom to build that particularly concerns Marc Andreessen. But he needs to draw the connection to the specific public policies that hold back our ability to exercise those freedoms.

Policy Defaults toward Innovation Matter Even More

Unfortunately, a great many barriers exist to entrepreneurial efforts. Those barriers to building include inflexible health and safety regulation, occupational licensing rules, cronyist industrial protectionist schemes, inefficient (industry-rigged) tax schemes, rigid zoning ordinances, and many other layers of regulatory red tape at the federal, state, and local level.  

What unifies all these policies is risk aversion and the precautionary principle. As I argued in my last book, we have a choice when it comes to setting defaults for innovation policy. We can choose to set innovation defaults closer to the green light of “permissionless innovation,” generally allowing entrepreneurial acts unless a compelling case can be made not to. Alternatively, we can set our default closer to the red light of the precautionary principle, which disallows risk-taking or entrepreneurialism until some authority gives us permission to proceed. 

My book made the case for permissionless innovation as the superior default regime. My argument for rejecting the precautionary principle as the default came down to belief that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning,” I argued, “it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”  


Heavy-handed preemptive restraints on innovative acts have such deleterious effects because they raise barriers to entry, increase compliance costs, and create more risk and uncertainty for entrepreneurs and investors. Progress is impossible without constant trial-and-error experimentation and entrepreneurial risk-taking. Thus, it is the unseen costs of forgone innovation opportunities that make the precautionary principle so troubling as a policy default. Without risk, there can be no reward. Scientist Martin Rees refers to this truism about the precautionary principle as “the hidden cost of saying no.”  

More generally, risk analysts have noted that the precautionary principle “lacks a firm logical foundation” and is “literally incoherent” because it fails to specify a clear standard by which to judge which risks are most serious and worthy of preemptive control. Moreover, regulatory policy experts have criticized the fact that the precautionary principle “may be misused for protectionist ends; it tends to undermine international regulatory cooperation; and it may have highly undesirable distributive consequences.” Specifically, large incumbent firms are almost always better able to cope with rigid, expensive regulatory regimes or, worse yet, can game those systems by “capturing” policymakers and using regulatory regimes to exclude new rivals.

Precaution Suffocates Productive Entrepreneurialism 

The problem today is that a massive volume of precautionary policies exists, discouraging “productive entrepreneurship” (i.e., building) and instead actively encouraging “unproductive entrepreneurship” (i.e., preservation of the status quo). Andreessen identifies this problem when he speaks of “smug complacency, this satisfaction with the status quo and the unwillingness to build.” But he doesn’t fully connect the dots between how those attitudes came about and the public policy incentives that actively encourage such thinking.

Why try to build when all the incentives are aligned against you? Andreessen wants to know: “Where are the supersonic aircraft? Where are the millions of delivery drones? Where are the high speed trains, the soaring monorails, the hyperloops, and yes, the flying cars?” Well, I’ll tell you where they are. They are trapped in the minds of inventive people who cannot bring them to fruition so long as an endless string of barriers makes it costly or impossible for them to realize those dreams.

Read Eli Dourado’s important essay, “How Do We Move the Needle on Progress?,” to get a more concrete feel for the specific barriers to building in the fields where productive entrepreneurialism is most needed: health, housing, energy, and transportation.

The bottom line, as Dustin Chambers and Jonathan Munemo noted in a 2017 Mercatus Center report on the impact of regulation on entrepreneurial activity, is that “If a nation wishes to promote higher levels of domestic entrepreneurship in both the short and long run, top priority should be given to reducing barriers to entry for new firms and to improving overall institutional quality (especially political stability, regulatory quality, and voice and accountability).” 

This doesn’t mean there is no role for government in helping to promote “building” and entrepreneurialism. A healthy debate continues to rage about “state capacity” as it pertains to government investments in research and development, for example. While I am skeptical, there may very well be some steps governments can take to encourage more and better investments in the sectors and technologies we desperately need. But all the “state capacity” in the world isn’t going to help until we first clear away the barriers that hold back the productive spirit of the people. 

Oiling the Wheels of Novelty

My new book, which is due out next week, discusses how innovation improves economies and government institutions. It builds on the fundamental insight of Calestous Juma, who concluded his masterwork Innovation and Its Enemies by reminding us of the continued importance of “oiling the wheels of novelty” to constantly replenish the well of important ideas and innovations. “The biggest risk that society faces by adopting approaches that suppress innovation,” Juma said, “is that they amplify the activities of those who want to preserve the status quo by silencing those arguing for a more open future.”

The openness Juma had in mind represents a tolerance of new ideas, inventions, and unknown futures. It can and should also represent an openness to new, more flexible methods of governance. For, if it doesn’t, the builder movement that Andreessen and others long for will remain just a distant dream, incapable of ever being realized so long as the wheels of novelty are gummed up by decades of inefficient, archaic, counterproductive public policies.

_________

P.S. I highly recommend this excellent essay by Jerry Brito, “We don’t want to build? Maybe we should build anyway.” It touches on many of the same themes I discuss in my response essay as well as in my new book, Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments.

Podcast: Problems with the Precautionary Principle
https://techliberation.com/2020/02/20/podcast-problems-with-the-precautionary-principle/
Thu, 20 Feb 2020 20:02:13 +0000

On the latest Institute for Energy Research podcast, I joined Paige Lambermont to discuss:

  • the precautionary principle vs. permissionless innovation;
  • risk analysis trade-offs;
  • the future of nuclear power;
  • the “pacing problem”;
  • regulatory capture;
  • evasive entrepreneurialism;
  • “soft law”;
  • … and why I’m still bitter about losing the 6th grade science fair!

Our discussion was inspired by my recent essay, “How Many Lives Are Lost Due to the Precautionary Principle?”

Locast and deteriorating TV laws
https://techliberation.com/2019/10/15/locast-and-deteriorating-tv-laws/
Tue, 15 Oct 2019 18:55:54 +0000

In the US there is a tangle of communications laws that were added over decades by Congress as, one by one, broadcast, cable, and satellite technologies transformed the TV marketplace. The primary TV laws date from 1976, 1984, and 1992, though Congress creates minor patches when the marketplace changes and commercial negotiations start to unravel.

Congress, to its great credit, has largely left alone Internet-based TV (namely, IPTV and vMVPDs), which has created a novel “problem”: too much TV. For years, however, Internet-based TV has put stress on the kludge-y legacy legal system we have, particularly the impenetrable mix of communications and copyright laws that regulates broadcast TV distribution.

Internet-based TV does two things: it undermines the current system through regulatory arbitrage, but it also shows how tons of diverse TV programming can be distributed to millions of households without Congress (and the FCC and the Copyright Office) injecting politics into the TV marketplace.

Locast TV is the latest Internet-based TV distributor to threaten to unravel parts of the current system. In July, broadcast programmers sued Locast (and its founder, David Goodfriend), and in September, Locast filed its own suit against the broadcast programmers.

A portion of US TV regulations.

Many readers will remember the 2014 Aereo decision from the Supreme Court. Much like Aereo, Locast TV captures free broadcast TV signals in the markets where it operates and transmits the programming via the Internet to viewers in those markets. That said, Locast isn’t Aereo.

Aereo’s position was that it could relay broadcast signals without paying broadcasters because it wasn’t a “cable company” (a critical category in copyright law). The majority of the Supreme Court disagreed; Aereo closed up shop.

Locast has a different position: it says it can relay broadcast signals without paying because it is a nonprofit.

It’s a plausible argument. Federal copyright law has a carveout allowing “nonprofit organizations” to relay broadcast signals without payment so long as the nonprofit operates “without any purpose of direct or indirect commercial advantage.”

The broadcasters are focusing on this latter provision, that any nonprofit taking advantage of the carveout mustn’t have commercial purpose. David Goodfriend, the Locast founder, is a lawyer and professor who, apparently, sought to abide by the law. However, the broadcasters argue, his past employment and commercial ties to pay-TV companies mean that the nonprofit is operating for commercial advantage.

It’s hard to say how a court will rule. Assuming a court takes up the major issues, judges will have to decide what “indirect commercial advantage” means. That’s a fact-intensive inquiry. The broadcasters will likely search for hot docs or other evidence that Locast is not a “real” nonprofit. Whatever the facts are, Locast’s arbitrage of the existing regulations is one that could be replicated.

Nobody likes the existing legacy TV regulation system: broadcasters dislike being subject to compulsory licenses; cable and satellite operators dislike being forced to carry some broadcast TV and to pay for a bizarre “retransmission” right. Copyright holders are largely sidelined in these artificial commercial negotiations. Wholesale reform, so that programming negotiations look more like the free-market world of Netflix and Hulu programming, would mean every party has to give up something it likes in order to improve the overall system.

The Internet’s effect on traditional providers’ market share has been modest to date, but hopefully Congress will anticipate the changing marketplace before regulatory distortions become intolerable.

Additional reading: Adam Thierer & Brent Skorup, Video Marketplace Regulation: A Primer on the History of Television Regulation and Current Legislative Proposals (2014).

New Report: “Raising Rivals’ Costs Using the GDPR” (Just $1999!)
https://techliberation.com/2019/10/10/new-report-raising-rivals-costs-using-the-gdrp-just-1999/
Thu, 10 Oct 2019 19:19:12 +0000

“Rent-Seeking Consultants, Inc.,” a subsidiary of the Strategies and Tactics to Annoy Neighbors (SATAN) Group, is pleased to announce its latest product for clients looking to exploit well-intentioned regulation to serve their own ends. Our new report, “Raising Rivals’ Costs Using the GDPR: A Strategic Guide to Thwarting Competition, Expanding Market Share & Enhancing Profits with Minimal Effort,” is available for immediate download for just $1,999 (discounted to just $999 for our loyal “Dante’s Ninth Circle” club members).

Over the last three decades, our experts at Rent-Seeking Consultants have dedicated themselves to the mission of advancing narrow interests at the expense of public welfare. We have done so by creatively exploiting laws and regulations that — while often implemented with the very best of intentions in mind — we recognized could be converted into a tool to advantage the few at the expense of the many.

Our motto: Where others see good intentions, we see good opportunities!

Our “Raising Rivals’ Costs Using the GDPR” report continues our latest line of new products, which aim to take Europe’s bold new privacy regulatory regime and convert it into a rent-seeker’s paradise. Our previous report outlined, “How to Pretend Compliance Costs Will Destroy Your Big Company, While Also Letting Your Shareholders Know It is Actually an Amazing Way to Crush the Competition.”

In our new report, we discuss how to weaponize the GDPR complaint process to your advantage. In this regard, some crowd-sourced efforts already exist, such as the “Ship Your Enemies GDPR” website. The site helps you take advantage of GDPR’s legal requirements by forcing rival firms to respond to as many frivolous claims as you can send their way. “We’ll help you send them a GDPR Data Access Request designed to waste as much of their time as possible,” the site notes.

More recently, angry gamers took to Reddit to devise a plan to use GDPR to harass gaming giant Blizzard. Fans were mad that Blizzard had kowtowed to the Chinese government by suspending a professional gamer who had voiced support for Hong Kong protestors. In essence, the Reddit protestors hope to use the GDPR to generate the equivalent of a DDOS attack on a company through massive, coordinated data requests. Brilliant!

We admire the spirit of these ingenious initiatives, but we aim to more fully capture the value associated with them for our clients through concerted manipulation of whatever political levers we can help you pull. How? Weaponizing complaint processes is a tactic that Rent-Seeking Consultants, Inc. has used effectively in the past. When a small handful of censorial-minded folks wanted to get the Federal Communications Commission to beef up fines and penalties for broadcast “indecency,” we helped them stuff the ballot box at the agency with form letters and fake complaints to make regulators believe the public was clamoring for greater censorship, when in reality it was just serving a very small group of people who wanted a heckler’s veto over broadcast programming. We tied those broadcasters up in court for years with these tactics! Meanwhile, the new media operators we also represented were able to race ahead with whatever content they wanted to post on their platforms. Victory!

This led to the creation of our Scaring Consumers Really Effectively While Earning Money (SCREWEM™) initiative, which eventually won the prestigious Lobbying Award for Manipulating Effectively (LAME) Award in the “Creating Needless Panic” category. Our latest report highlights how we can use that same SCREWEM™ system to whip up serious privacy-related troubles for your rivals using the GDPR complaint process — all while pretending that this is all in the public interest.

We hope you will consider ordering our new report, and please let us know what we can do to help our trusted clients take advantage of well-intentioned regulation to undermine the public good on an ongoing basis. Finally, with California set to impose costly new privacy mandates extraterritorially on the entire nation, you can count on us being in touch again soon about exciting new opportunities for raising rivals’ costs using the machinery of the State.

Sincerely,

I.M. Prehensile
Director of Strategic Political Exploits, S.A.T.A.N.


[This has been an act of satire, but the unintended consequences of the GDPR are quite real. For some hard facts about what the GDPR has meant in practice, see: Alec Stapp, “GDPR after One Year: Costs and Unintended Consequences,” and Eline Chivot and Daniel Castro, “What the Evidence Shows About the Impact of the GDPR After One Year.” More generally, see: “Tech Policy, Unintended Consequences & the Failure of Good Intentions.”]

15 Years of the Tech Liberation Front: The Greatest Hits
https://techliberation.com/2019/08/15/15-years-of-the-tech-liberation-front-the-greatest-hits/
Thu, 15 Aug 2019 14:34:51 +0000

The Technology Liberation Front just marked its 15th year in existence. That’s a long time in the blogosphere. (I’ve only been writing at TLF since 2012 so I’m still the new guy.)

Everything from Bitcoin to net neutrality to long-form pieces about technology and society was featured and debated here years before these topics hit the political mainstream.

Thank you to our contributors and our regular readers. Here are the most-read tech policy posts from TLF in the past 15 years (I’ve omitted some popular but non-tech policy posts).

No. 15: Bitcoin is going mainstream. Here is why cypherpunks shouldn’t worry. by Jerry Brito, October 2013

Today is a bit of a banner day for Bitcoin. It was five years ago today that Bitcoin was first described in a paper by Satoshi Nakamoto. And today the New York Times has finally run a profile of the cryptocurrency in its “paper of record” pages. In addition, TIME’s cover story this week is about the “deep web” and how Tor and Bitcoin facilitate it.

The fact is that Bitcoin is inching its way into the mainstream.

No. 14: Is fiber to the home (FTTH) the network of the future, or are there competing technologies? by Roslyn Layton, August 2013

There is no doubt that FTTH is a cool technology, but the love of a particular technology should not blind one to look at the economics.  After some brief background, this blog post will investigate fiber from three perspectives (1) the bandwidth requirements of web applications (2) cost of deployment and (3) substitutes and alternatives. Finally it discusses the notion of fiber as future proof.

No. 13: So You Want to Be an Internet Policy Analyst? by Adam Thierer, December 2012

Each year I am contacted by dozens of people who are looking to break into the field of information technology policy as a think tank analyst, a research fellow at an academic institution, or even as an activist. Some of the people who contact me I already know; most of them I don’t. Some are free-marketeers, but a surprising number of them are independent analysts or even activist-minded Lefties. Some of them are students; others are current professionals looking to change fields (usually because they are stuck in boring job that doesn’t let them channel their intellectual energies in a positive way). Some are lawyers; others are economists, and a growing number are computer science or engineering grads. In sum, it’s a crazy assortment of inquiries I get from people, unified only by their shared desire to move into this exciting field of public policy.

. . . Unfortunately, there’s only so much time in the day and I am sometimes not able to get back to all of them. I always feel bad about that, so, this essay is an effort to gather my thoughts and advice and put it all one place . . . .

No. 12: Violent Video Games & Youth Violence: What Does Real-World Evidence Suggest? by Adam Thierer, February 2010

So, how can we determine whether watching depictions of violence will turn us all into killing machines, rapists, robbers, or just plain ol’ desensitized thugs? Well, how about looking at the real world! Whatever lab experiments might suggest, the evidence of a link between depictions of violence in media and the real-world equivalent just does not show up in the data. The FBI produces ongoing Crime in the United States reports that document violent crimes trends. Here’s what the data tells us about overall violent crime, forcible rape, and juvenile violent crime rates over the past two decades: They have all fallen. Perhaps most impressively, the juvenile crime rate has fallen an astonishing 36% since 1995 (and the juvenile murder rate has plummeted by 62%).

No. 11: Wedding Photography and Copyright Release by Tim Lee, September 2008

I’m getting married next Spring, and I’m currently negotiating the contract with our photographer. The photography business is weird because even though customers typically pay hundreds, if not thousands, of dollars up front to have photos taken at their weddings, the copyright in the photographs is typically retained by the photographer, and customers have to go hat in hand to the photographer and pay still more money for the privilege of getting copies of their photographs.

This seems absurd to us . . . .

No. 10: Why would anyone use Bitcoin when PayPal or Visa work perfectly well? by Jerry Brito, December 2013

A common question among smart Bitcoin skeptics is, “Why would one use Bitcoin when you can use dollars or euros, which are more common and more widely accepted?” It’s a fair question, and one I’ve tried to answer by pointing out that if Bitcoin were just a currency (except new and untested), then yes, there would be little reason why one should prefer it to dollars. The fact, however, is that Bitcoin is more than money, as I recently explained in Reason. Bitcoin is better thought of as a payments system, or as a distributed ledger, that (for technical reasons) happens to use a new currency called the bitcoin as the unit of account. As Tim Lee has pointed out, Bitcoin is therefore a platform for innovation, and it is this potential that makes it so valuable.

No. 9: The Hidden Benefactor: How Advertising Informs, Educates & Benefits Consumers by Adam Thierer & Berin Szoka, February 2010

Advertising is increasingly under attack in Washington. . . . This regulatory tsunami could not come at a worse time, of course, since an attack on advertising is tantamount to an attack on media itself, and media is at a critical point of technological change. As we have pointed out repeatedly, the vast majority of media and content in this country is supported by commercial advertising in one way or another-particularly in the era of “free” content and services.

No. 8: Reverse Engineering and Innovation: Some Examples by Tim Lee, June 2006

Reverse engineering the CSS encryption scheme, by itself, isn’t an especially innovative activity. However, what I think Prof. Picker is missing is how important such reverse engineering can be as a pre-condition for subsequent innovation. To illustrate the point, I’d like to offer three examples of companies or open source projects that have forcibly opened a company’s closed architecture, and trace how these have enabled subsequent innovation . . . .

No. 7: Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society by Adam Thierer, January 2010

The cycle goes something like this. A new technology appears. Those who fear the sweeping changes brought about by this technology see a sky that is about to fall. These “techno-pessimists” predict the death of the old order (which, ironically, is often a previous generation’s hotly-debated technology that others wanted slowed or stopped). Embracing this new technology, they fear, will result in the overthrow of traditions, beliefs, values, institutions, business models, and much else they hold sacred.

The pollyannas, by contrast, look out at the unfolding landscape and see mostly rainbows in the air. Theirs is a rose-colored world in which the technological revolution du jour is seen as improving the general lot of mankind and bringing about a better order. If something has to give, then the old ways be damned! For such “techno-optimists,” progress means some norms and institutions must adapt—perhaps even disappear—for society to continue its march forward.

No. 6: Copyright Duration and the Mickey Mouse Curve by Tom Bell, August 2009

Given the rough-and-tumble of real world lawmaking, does the rhetoric of “delicate balancing” merit any place in copyright jurisprudence? The Copyright Act does reflect compromises struck between the various parties that lobby congress and the administration for changes to federal law. A truce among special interests does not and cannot delicately balance all the interests affected by copyright law, however. Not even poetry can license the metaphor, which aggravates copyright’s public choice affliction by endowing the legislative process with more legitimacy than it deserves. To claim that copyright policy strikes a “delicate balance” commits not only legal fiction; it aids and abets a statutory tragedy.

No. 5: Cyber-Libertarianism: The Case for Real Internet Freedom by Adam Thierer & Berin Szoka, August 2009

Generally speaking, the cyber-libertarian’s motto is “Live & Let Live” and “Hands Off the Internet!” The cyber-libertarian aims to minimize the scope of state coercion in solving social and economic problems and looks instead to voluntary solutions and mutual consent-based arrangements.

Cyber-libertarians believe true “Internet freedom” is freedom from state action; not freedom for the State to reorder our affairs to supposedly make certain people or groups better off or to improve some amorphous “public interest”—an all-too-convenient facade behind which unaccountable elites can impose their will on the rest of us.

No. 4: Here’s why the Obama FCC Internet regulations don’t protect net neutrality by Brent Skorup, July 2017

It’s becoming clearer why, for six years out of eight, Obama’s appointed FCC chairmen resisted regulating the Internet with Title II of the 1934 Communications Act. Chairman Wheeler famously did not want to go that legal route. It was only after President Obama and the White House called on the FCC in late 2014 to use Title II that Chairman Wheeler relented. If anything, the hastily-drafted 2015 Open Internet rules provide a new incentive to ISPs to curate the Internet in ways they didn’t want to before.

No. 3: 10 Years Ago Today… (Thinking About Technological Progress) by Adam Thierer, February 2009

As I am getting ready to watch the Super Bowl tonight on my amazing 100-inch screen via a Sanyo high-def projector that only cost me $1,600 on eBay, I started thinking back about how much things have evolved (technologically speaking) over just the past decade. I thought to myself, what sort of technology did I have at my disposal exactly 10 years ago today, on February 1st, 1999? Here’s the miserable snapshot I came up with . . . .

No. 2: Regulatory Capture: What the Experts Have Found by Adam Thierer, December 2010

While capture theory cannot explain all regulatory policies or developments, it does provide an explanation for the actions of political actors with dismaying regularity. Because regulatory capture theory conflicts mightily with romanticized notions of “independent” regulatory agencies or “scientific” bureaucracy, it often evokes a visceral reaction and a fair bit of denialism. . . . Yet, countless studies have shown that regulatory capture has been at work in various arenas: transportation and telecommunications; energy and environmental policy; farming and financial services; and many others.

No. 1: Defining “Technology” by Adam Thierer, April 2014

I spend a lot of time reading books and essays about technology; more specifically, books and essays about technology history and criticism. Yet, I am often struck by how few of the authors of these works even bother defining what they mean by “technology.” . . . Anyway, for what it’s worth, I figured I would create this post to list some of the more interesting definitions of “technology” that I have uncovered in my own research.

Did You Need Another Reason to Hate Lobbyists & Cronyism? (Mon, 15 Jul 2019) https://techliberation.com/2019/07/15/did-you-need-another-reason-to-hate-lobbyists-cronyism/

My latest AIER column examines the impact increased lobbying and regulatory accumulation have on entrepreneurialism and innovation more generally. Unsurprisingly, it’s not a healthy relationship. A growing body of economic evidence concludes that increases in the former lead to much less of the latter.

This is a topic that my Mercatus Center colleagues and I have done a lot of work on through the years. But what got me thinking about the topic again was a new NBER working paper by economists Germán Gutiérrez and Thomas Philippon entitled, “The Failure of Free Entry.” Their new study finds that “regulations and lobbying explain rather well the decline in the allocation of entry” that we have seen in recent years.

Many economists have documented how business dynamism (new firm creation, entry, churn, etc.) appears to have slowed in the US. Explanations for why vary, but Gutiérrez and Philippon show that “regulations have a negative impact on small firms, especially in industries with high lobbying expenditures.” Their results also document how regulations “have a first order impact on incumbent profits and suggest that the regulatory capture may have increased in recent years.”

In other words, lobbying and cronyism breed a culture of rent-seeking, over-regulation, and rule accumulation that directly limit new startup activity and innovation more generally. This is a recipe for economic stagnation if left unchecked.

I cite almost a dozen sources in my essays which document this problem in far greater detail and which propose a variety of reforms. In a previous essay for AIER, I argued that a periodic “spring cleaning for the regulatory state” was essential if we hope to address regulatory accumulation. I continued:

For starters, we need to get the problem of over-licensing and over-permitting under control at the federal, state, and local level. While sometimes justified, licenses are a direct restriction on entry and entrepreneurialism and should only be employed for the riskiest activities and professions. “Permissionless innovation” should be the default.

Other regulatory reforms will be needed, and I continue to be a big fan of “sunsets” as one way of cleaning out the stables. Sunsets are not silver-bullet solutions, but they can help create a periodic reset button of sorts for government programs or regulations that have outlived their usefulness, or did not make sense to begin with.

Some people will push back against such regulatory reforms, suggesting they will undermine public health or welfare. But that’s nonsense. Getting regulatory accumulation under control isn’t just about improving opportunities for innovation, entrepreneurialism, and workers. It’s also about ensuring that government functions in a more efficient and effective fashion.

When regulations accumulate without any rhyme or reason, it makes it more difficult for public officials to do their jobs effectively. Streamlining rules and cleaning up old, outdated regs will help public officials better serve the public interest and give economic dynamism a boost at the same time.

We need to get serious about getting this problem under control. Again, read my latest and previous AIER columns for more detail about how we can start that process.

 

How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality (Thu, 20 Jun 2019) https://techliberation.com/2019/06/19/how-conservatives-came-to-favor-the-fairness-doctrine-net-neutrality/

I have been covering telecom and Internet policy for almost 30 years now. During much of that time (which included a nine-year stint at the Heritage Foundation) I have interacted with conservatives on various policy issues and often worked very closely with them to advance certain reforms.

If I divided my time in Tech Policy Land into two big chunks, I’d say the biggest tech-related policy issue for conservatives during the first 15 years I was in the business (roughly 1990–2005) was preventing the resurrection of the so-called Fairness Doctrine. And the biggest issue during the second 15-year period (roughly 2005–present) was stopping the imposition of “Net neutrality” mandates on the Internet. In both cases, conservatives vociferously blasted the notion that unelected government bureaucrats should sit in judgment of what constituted “fairness” in media or “neutrality” online.

Many conservatives are suddenly changing their tune, however. President Trump and Sen. Ted Cruz, for example, have been increasingly critical of both traditional media and new tech companies in various public statements and suggested an openness to increased regulation. The President has gone after old and new media outlets alike, while Sen. Cruz (along with others like Sen. Lindsey Graham) has suggested during congressional hearings that increased oversight of social media platforms is needed, including potential antitrust action.

Meanwhile, during his short time in office, Sen. Josh Hawley (R-Mo.) has become one of the most vocal Internet critics on the Right. In a shockingly worded USA Today editorial in late May, Hawley said, “social media wastes our time and resources” and is “a field of little productive value” that has only “given us an addiction economy.” He even referred to these sites as “parasites” and blamed them for a long list of social problems, leading him to suggest that “we’d be better off if Facebook disappeared” along with various other sites and services.

Hawley’s moral panic over social media has now bubbled over into a regulatory crusade that would unleash federal bureaucrats on the Internet in an attempt to dictate “fair” speech online. He has introduced an astonishing piece of legislation aimed at undoing the liability protections that Internet providers rely upon to provide open platforms for speech and commerce. If Hawley’s absurdly misnamed new “Ending Support for Internet Censorship Act” is implemented, it would essentially combine the core elements of the Fairness Doctrine and Net Neutrality to create a massive new regulatory regime for the Internet.

The bill would gut the immunities Internet companies enjoy under 47 USC 230 (“Section 230”) of the Communications Decency Act. Eric Goldman of the Santa Clara University School of Law has described Section 230 as the “best Internet law” and “a big part of the reason why the Internet has been such a massive success.” Indeed, as I pointed out in a Forbes column on the occasion of its 15th anniversary, Section 230 is “the foundation of our Internet freedoms” because it gives online intermediaries generous leeway to determine what content and commerce travels over their systems without the fear that they will be overwhelmed by lawsuits if other parties object to some of that content.

The Hawley bill would overturn this important legal framework for Internet freedom and instead replace it with a new “permissioned” approach. In true “Mother-May-I” style, Internet companies would need to apply for an “immunity certification” from the FTC, which would undertake investigations to determine if the petitioning platform satisfied a “requirement of politically unbiased content moderation.”

The vague language of the measure is an open invitation to massive political abuse. The entirety of the bill hinges upon the ability of Federal Trade Commission officials to define and enforce “political neutrality” online. Let’s consider what this will mean in practice.

Under the bill, the FTC must evaluate whether platforms have engaged in “politically biased moderation,” which is defined as moderation practices that are supposedly “designed to negatively affect” or “disproportionately restricts or promote access to … a political party, political candidate, or political viewpoint.” As Blake Reid of the University of Colorado Law School rightly asks, “How, exactly, is the FTC supposed to figure out what the baseline is for ‘disproportionately restricting or promoting’? How much access or availability to information about political parties, candidates, or viewpoints is enough, or not enough, or too much?”

There is no Goldilocks formula for getting things just right when it comes to content moderation. It’s a trial-and-error process that is nightmarishly difficult because of the endless eye-of-the-beholder problems associated with constructing acceptable use policies for large speech platforms. We struggled with the same issues in the broadcast and cable era, but they have been magnified a million-fold in the era of the global Internet with the endless tsunami of new content that hits our screens and devices every day. “Do we want less moderation?” asks Sec. 230 guru Jeff Kosseff. “I think we need to look at that question hard. Because we’re seeing two competing criticisms of Section 230,” he notes. “Some argue that there is too much moderation, others argue that there is not enough.”

The Hawley bill seems to imagine that a handful of FTC officials will magically be able to strike the right balance through regulatory investigations. That’s a pipe dream, of course, but let’s imagine for a moment that regulators could somehow sort through all the content on message boards, tweets, video clips, live streams, gaming sites, and whatever else, and then somehow figure out what constituted a violation of “political neutrality” in any given context. That would actually be a horrible result because let’s be perfectly clear about what that would really be: It would be a censorship board. By empowering unelected bureaucrats to make decisions about what constitutes “neutral” or “fair” speech, the Hawley measure would, as Elizabeth Nolan Brown of Reason summarizes, “put Washington in charge of Internet speech.” Or, as Sen. Ron Wyden argues more bluntly, the bill “will turn the federal government into Speech Police.” “Perhaps a more accurate title for this bill would be ‘Creating Internet Censorship Act,'” Eric Goldman is forced to conclude.

The measure is creating other strange bedfellows. You won’t see Berin Szoka of TechFreedom and Harold Feld of Public Knowledge ever agreeing on much, but they both quickly and correctly labelled Hawley’s bill a “Fairness Doctrine for the Internet.” That is quite right, and much like the old Fairness Doctrine, Hawley’s new Internet speech control regime would be open to endless political shenanigans as parties, policymakers, companies, and the various complainants line up to have their various political beefs heard and acted upon. “That’s the kind of thing Republicans said was unconstitutional (and subject to FCC agency capture and political manipulation) for decades,” says Daphne Keller of the Stanford Center for Internet & Society. Moreover, during the Net Neutrality holy wars, GOP conservatives endlessly blasted the notion that bureaucrats should be determining what constitutes “neutrality” online because it, too, would result in abuses of the regulatory process. Yet, Sen. Hawley’s bill would now mandate that exact same thing.

What is even worse is that, as law professor Josh Blackman observes, “the bill also makes it exceedingly difficult to obtain a certification” because applicants need a supermajority of 4 of the 5 FTC Commissioners. This is a public choice fiasco waiting to happen. Anyone who has studied the long, sordid history of broadcast radio and television licensing understands the danger associated with politicizing certification processes. The lawyers and lobbyists in the DC “swamp” will benefit from all the petitioning and paperwork, but it is not clear how creating a regulatory certification regime for Internet speech really benefits the general public (or even conservatives, for that matter).

Former FTC Commissioner Josh Wright identifies another obvious problem with the Hawley Bill: it “offers the choice of death by bureaucratic board or the plaintiffs’ bar.” That’s because by weakening Sec. 230’s protections, Hawley’s bill could open the floodgates to waves of frivolous legal claims in the courts if companies can’t get (or lose) certification. The irony of that result, of course, is that this bill could become a massive gift to the tort bar that Republicans love to hate!

Of course, if the law ever gets to court, it might be ruled unconstitutional. “The terms ‘politically biased’ and ‘moderation’ would have vagueness and overbreadth problems, as they can chill protected speech,” Josh Blackman argues. So it could, perhaps, be thrown out like earlier online censorship efforts. But a lot of harm could be done—both to online speech and competition—in the years leading up to a final determination about the law’s constitutionality by higher courts.

What is most outrageous about all this is that the core rationale behind Hawley’s effort—the idea that conservatives are somehow uniquely disadvantaged by large social media platforms—is utterly preposterous. In May, the Trump Administration launched a “tech bias” portal which “asked Americans to share their stories of suspected political bias.” The portal is already closed and it is unclear what, if anything, will come out of this effort. But this move and Hawley’s proposal point to the broader trend of conservatives getting more comfortable asking Big Government to redress imaginary grievances about supposed “bias” or “exclusion.”

In reality, today’s social media tools and platforms have been the greatest thing that ever happened to conservatives. Mr. Trump owes his presidency to his unparalleled ability to directly reach his audience through Twitter and other platforms. As recently as June 12, President Trump tweeted, “The Fake News has never been more dishonest than it is today. Thank goodness we can fight back on Social Media.” Well, there you have it!

Beyond the President, one need only peruse any social media site for a few minutes to find an endless stream of conservative perspectives on display. This isn’t exclusion; it’s amplification on steroids. Conservatives have more soapboxes to stand on and preach than ever before in the history of this nation.

Finally, if they were true to their philosophical priors, then conservatives also would not be insisting that they have any sort of “right” to be on any platform. These are private platforms, after all, and it is outrageous to suggest that conservatives (or any other person or group) are entitled to have a spot on any of them.

Some conservatives are fond of ridiculing liberals for being “snowflakes” when it comes to other free speech matters, such as free speech on college campuses. Many times they are right. But one has to ask who the real snowflakes are when conservative lawmakers are calling on regulatory bureaucracies to reorder speech on private platforms based on the mythical fear of not getting “fair” treatment. One also cannot help but wonder if those conservatives have thought through how this new Internet regulatory regime will play out once a more liberal administration takes back the reins of power. Conservatives will only have themselves to blame when the Speech Police come for them.


Addendum: Several folks have pointed out another irony associated with Hawley’s bill: it would greatly expand the powers of the administrative state, which conservatives already (correctly) feel has too much broad, unaccountable power. I should have said more on that point, but here’s a nice comment from David French of National Review, which alludes to that problem and then ties it back to my closing argument above: i.e., that this proposal will come back to haunt conservatives in the long-run:

when coercion locks in — especially when that coercion is tied to constitutionally suspect broad and vague policies that delegate immense powers to the federal government — conservatives should sound the alarm. One of the best ways to evaluate the merits of legislation is to ask yourself whether the bill would still seem wise if the power you give the government were to end up in the hands of your political opponents. Is Hawley striking a blow for freedom if he ends up handing oversight of Facebook’s political content to Bernie Sanders? I think not.

Additional thoughts on the Hawley bill: Josh Wright, Daphne Keller, Blake Reid, TechFreedom, Josh Blackman, Sen. Ron Wyden, Jeff Kosseff, Eric Goldman, CCIA, NetChoice, Internet Association, David French at National Review, and John Samples.

Debating the Future of Artificial Intelligence: G7 Multistakeholder Conference (Tue, 04 Dec 2018) https://techliberation.com/2018/12/04/debating-the-future-of-artificial-intelligence-g7-multistakeholder-conference/

This week I will be traveling to Montreal to participate in the 2018 G7 Multistakeholder Conference on Artificial Intelligence. This conference follows the G7’s recent Ministerial Meeting on “Preparing for the Jobs of the Future” and will also build upon the G7 Innovation Ministers’ Statement on Artificial Intelligence. The goal of Thursday’s conference is to “focus on how to enable environments that foster societal trust and the responsible adoption of AI, and build upon a common vision of human-centric AI.” About 150 participants selected by G7 partners are expected to participate, and I was invited to attend as a U.S. expert, which is a great honor.

I look forward to hearing and learning from other experts and policymakers who are attending this week’s conference. I’ve been spending a lot of time thinking about the future of AI policy in recent books, working papers, essays, and debates. My most recent essay concerning a vision for the future of AI policy was co-authored with Andrea O’Sullivan and it appeared as part of a point/counterpoint debate in the latest edition of the Communications of the ACM. The ACM is the Association for Computing Machinery, the world’s largest computing society, which “brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges.” The latest edition of the magazine features about a dozen different essays on “Designing Emotionally Sentient Agents” and the future of AI and machine-learning more generally.

In our portion of the debate in the new issue, Andrea and I argue that “Regulators Should Allow the Greatest Space for AI Innovation.” “While AI-enabled technologies can pose some risks that should be taken seriously,” we note, “it is important that public policy not freeze the development of life-enriching innovations in this space based on speculative fears of an uncertain future.” We contrast two different policy worldviews — the precautionary principle versus permissionless innovation — and argue that:

artificial intelligence technologies should largely be governed by a policy regime of permissionless innovation so that humanity can best extract all of the opportunities and benefits they promise. A precautionary approach could, alternatively, rob us of these life-saving benefits and leave us all much worse off.

That’s not to say that AI won’t pose some serious policy challenges for us going forward that deserve serious attention. Rather, we are warning against the dangers of allowing worst-case thinking to be the default position in these discussions.

But what about some of the policy concerns regarding AI, including privacy, “algorithmic accountability,” or more traditional fears about automation leading to job displacement or industrial disruption? Some of these issues deserve greater scrutiny, but as Andrea and I pointed out in a much longer paper with Raymond Russell, there are often better ways of dealing with such issues before resorting to preemptive, top-down controls on fast-moving, hard-to-predict technologies.

“Soft law” options will often serve us better than old hard law approaches. Soft law mechanisms, as I write in my latest law review article with Jennifer Skees and Ryan Hagemann, are a useful way to bring diverse parties together to address pressing policy concerns without destroying the innovative promise of important new technologies. Among other things, soft law includes multistakeholder processes and ongoing efforts to craft flexible “best practices.” It can also include important collaborative efforts such as this recent IEEE “Global Initiative on Ethics of Autonomous and Intelligent Systems,” which serves as “an incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for ethical implementation of intelligent technologies.” This approach brings together diverse voices from across the globe to develop rough consensus on what “ethically-aligned design” looks like for AI and aims to establish a framework and set of best practices for the development of these technologies over time.

Others have developed similar frameworks, including the ACM itself. The ACM developed a Code of Ethics and Professional Conduct in the early 1970s and then refined it in the early 1990s and then again just recently in 2018. Each iteration of the ACM Code reflected ongoing technological developments from the mainframe era to the PC and Internet revolution and on through today’s machine-learning and AI era. The latest version of the Code “affirms an obligation of computing professionals, both individually and collectively, to use their skills for the benefit of society, its members, and the environment surrounding them,” and insists that computing professionals “should consider whether the results of their efforts will respect diversity, will be used in socially responsible ways, will meet social needs, and will be broadly accessible.” The document also stresses how “[a]n essential aim of computing professionals is to minimize negative consequences of computing, including threats to health, safety, personal security, and privacy. When the interests of multiple groups conflict, the needs of those less advantaged should be given increased attention and priority.”

Of course, over time, more targeted or applied best practices and codes of conduct will be formulated as new technological developments make them necessary. It is impossible to perfectly anticipate and plan for all the challenges that we may face down the line. But we can establish some rough best practices and ethical guidelines to help us deal with some of them. As we do so, we need to think hard about how to craft those principles and policies in such a way so as not to undermine the potentially amazing, life-enriching, and potentially even life-saving benefits that AI technologies could bring about.

You can hear more about these and other issues surrounding the future of AI in this 6-minute video that Communications of the ACM put together to coincide with my debate with Oren Etzioni of the Allen Institute for Artificial Intelligence. As you will probably notice, there’s actually a lot more common ground between us in this discussion than you might initially suspect. For example, we agree that it would be a serious mistake to regulate AI at the general-purpose level and that it instead makes more sense to zero-in on specific AI applications to determine where policy interventions might be needed.

Of course, things get more contentious when we consider what kind of policy interventions we might want for specific AI applications, and also the much more challenging question about how to define and measure “harm” in this context. And this all assumes we can even come to some general consensus about how to first define what we even mean by “artificial intelligence” or “robotics” in general. That’s harder than many realize, and it is important because it has a bearing on the overall scope and practicality of regulation in various contexts.

Another thing that seems to be the source of serious ongoing debate between people in this field concerns the wisdom of creating an entirely new agency or centralized authority of some sort to oversee or guide the development of AI or robotics. I’ve debated that question many times with Ryan Calo, who first pitched the idea a few years back in a working paper for Brookings. In response, I noted that we already have quite a few “robot regulators” in existence today in the form of technocratic agencies that oversee the specific development of various types of robotic and AI-oriented applications. For example, NHTSA already oversees driverless cars, the FAA regulates drones, and the FDA handles AI-based medical devices and applications. Will adding another big, over-arching Robotics Commission really add much value to the process? Or will it simply add another bureaucratic layer of red tape to the process of getting life-enriching services out to the public? I doubt, for example, that the Digital Revolution would have been somehow improved much had America created a Federal Computer Commission or Federal Internet Commission 25 years ago.

Moreover, had we adopted such entities, I would worry about how the tech companies of an earlier generation might have utilized that process to keep new players and technologies from emerging. As I noted this week in a tweet that got a lot of attention, I used to have the adjoining poster from PC Computing magazine on my office wall over 20 years ago. It was entitled “Roadmap to Top Online Services,” and showed how the powerful Big 4 online service providers — America Online, Prodigy, Compuserve, and Microsoft — were spreading their tentacles. People used to see this poster on my wall and ask me whether there was any hope of disrupting the perceived choke-hold that these companies had on the market at the time.

Of course, we now look back and laugh at the idea that these firms could have bottled up innovation and kept competition at bay. But ask yourself: When disruptive innovations appeared on the scene, what would those incumbent firms have done if they had regulators to run to for help down at a Federal Computer Commission or Federal Internet Commission? I think we know exactly what they would have done because the lamentable history of so much Federal Communications Commission regulation shows us that the powerful will grab for the levers of power wherever they exist. Some critics don’t accept the idea that “rent-seeking” and regulatory capture are real problems, or they believe that we can find creative ways to avoid those problems. But history shows this has been a recurring problem in countless sectors and one that we should try to avoid as much as possible by not establishing mechanisms that could exclude beneficial forms of competition and innovation from coming about to begin with.

That could certainly happen right now with the regulatory mechanisms already in place. For example, just this week, Jennifer Huddleston Skees and I wrote about the dangers of “Emerging Tech Export Controls Run Amok,” as the Trump Administration ponders a potentially massive expansion of export restrictions on a wide variety of technologies. More than a dozen different AI or autonomous system technologies appear on the list for consideration. That could pose real trouble not just for commercial innovators in this space, but also for non-commercial research and collaborative open source efforts involving these technologies.

Again, that doesn’t mean AI and robotics should develop in a complete policy vacuum. We need “governance” but we don’t need the sort of heavy-handed, top-down, competition-killing, innovation-restricting sort of regulatory regimes of the past. I continue to believe that more flexible, adaptive “soft law” mechanisms provide the reasonable path forward for most of the concerns we hear about AI and robotics today. These are challenging issues, however, and I look forward to learning more from other experts in the field when I visit Montreal for this week’s G7 discussion.


Don’t game EPA regulations to help DSRC car technology (Thu, 01 Nov 2018) https://techliberation.com/2018/11/01/dont-game-epa-regulations-to-help-dsrc-car-technology/

By Brent Skorup and Michael Kotrous

In 1999, the FCC completed one of its last spectrum “beauty contests.” A sizable segment of spectrum was set aside for free for the US Department of Transportation (DOT) and DOT-selected device companies to develop DSRC, a standard for wireless automotive communications like vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I). The government’s grand plans for DSRC never materialized, and in the intervening 20 years, new tech—like lidar, radar, and cellular systems—advanced and now does most of what regulators planned for DSRC.

Too often, however, government technology plans linger, kept alive by interest groups that rely on the new regulatory privilege, even when the market moves on. At the eleventh hour of the Obama administration, NHTSA proposed mandating DSRC devices in all new vehicles, an unprecedented move that Brent and other free-market groups opposed in public interest comment filings. As Brent wrote last year,

In the fast-moving connected car marketplace, there is no reason to force products with reliability problems [like DSRC] on consumers. Any government-designed technology that is “so good it must be mandated” warrants extreme skepticism….

Further,

Rather than compel automakers to add costly DSRC systems to cars, NHTSA should consider a certification or emblem system for vehicle-to-vehicle safety technologies, similar to its five-star crash safety ratings. Light-touch regulatory treatment would empower consumer choice and allow time for connected car innovations to develop.

Fortunately, the Trump administration put the brakes on the mandate, which would have added cost and complexity to cars for uncertain and unlikely benefits.

However, some regulators and companies are trying to revive the DSRC device industry while NHTSA’s proposed DSRC mandate is on life support. Marc Scribner at CEI uncovered a sneaky attempt to create DSRC technology sales via an EPA proceeding. The stalking horse DSRC boosters have chosen is the Corporate Average Fuel Economy (CAFE) regulations—specifically the EPA’s off-cycle program. EPA and NHTSA jointly manage these regulations. That program rewards manufacturers who adopt new technologies that reduce a vehicle’s emissions in ways not captured by conventional measures like highway fuel economy.

Under the proposed rules, automakers that install V2V or V2I capabilities can receive credit for having reduced emissions. The EPA proposal doesn’t say “DSRC,” but it singles out only one technology standard that would be favored in this scheme: a standard underlying DSRC.

This proposal comes as a bit of a surprise to those who have followed auto technology; we’re aware of no studies showing DSRC improves emissions. (DSRC’s primary use case today is collision warnings to the driver.) But the EPA proposes a helpful end-run around that problem: simply waiving the requirement that manufacturers provide data showing a reduction in harmful emissions. Instead of requiring emissions data, the EPA proposes a much lower bar: that automakers show these devices merely “have some connection to overall environmental benefits.” Unless the agency applies credits in a tech-neutral way and requires more rigor in the final rules, which is highly unlikely, this looks like a backdoor subsidy to DSRC via gaming of emission-reduction regulations.

Hopefully EPA regulators will discover the ruse and drop the proposal. It was a pleasant surprise last week when a DOT spokesman committed that the agency favored a tech-neutral approach for this “talking car” band. But after 20 years, this 75 MHz of spectrum gifted to DSRC device makers should be repurposed by the FCC for flexible use. Fortunately, the FCC has started thinking about alternative uses for the DSRC spectrum. In 2015, Commissioners O’Rielly and Rosenworcel said the agency should consider flexible-use alternatives to this DSRC-only band.

The FCC would be wise to follow through and push even further. Until the gifted spectrum that powers DSRC is reallocated to flexible use, interest groups will continue to pull every regulatory lever they have to subsidize or mandate adoption of talking-car technology. If DSRC is the best V2V technology available, device makers should win market share by convincing auto companies, not by convincing regulators.

Senate Should Either Fix Or Get Off The Pot On Copyright Office Bill
https://techliberation.com/2017/05/26/senate-should-either-fix-or-get-off-the-pot-on-copyright-office-bill/
Fri, 26 May 2017 18:03:55 +0000

Guest post from Sasha Moss, R Street Institute (originally published on TechDirt on 5/24/17)

The U.S. Senate is about to consider mostly pointless legislation that would make the nation’s register of copyrights—the individual who heads the U.S. Copyright Office, officially a part of the Library of Congress—a presidential appointment that would be subject to Senate confirmation.

While the measure has earned praise from some in the content industry, including the Motion Picture Association of America, unless senators can find better ways to modernize our copyright system, they really should just go back to the drawing board.

The Register of Copyrights Selection and Accountability Act of 2017 already cleared the U.S. House in April by a 378-48 margin. Under the bill and its identical Senate companion, the power to select the register would be taken away from Librarian of Congress Dr. Carla Hayden. Instead, the president would select an appointee from among three names put forward by a panel that includes the librarian, the speaker of the House, and the majority and minority leaders of both the House and Senate. And the register would now be subject to a 10-year term with the option of multiple reappointments, like the Librarian of Congress.

The legislation is ostensibly the product of the House Judiciary Committee’s multiyear series of roundtables and comments on modernizing the U.S. Copyright Office. In addition to changes to the process of selecting the register, the committee had recommended creating a stakeholder advisory board, adding a chief economist and a chief technology officer, making information technology upgrades at the office, creating a searchable digital database of ownership information to lower transaction costs in licensing and royalty payments, and creating a small claims court for relatively minor copyright disputes.

Alas, while it’s billed as a “first step,” the current legislation gives up most of those more substantive reforms and instead amounts largely to a partisan battle over who will have the power to select the next register: Hayden, who was appointed by Barack Obama, or President Donald Trump.

Opponents argue the bill will make the register and the Copyright Office more politicized and vulnerable to capture by special interests, while ceding more power to the executive. They argue that vetting the register through the nomination process could delay modernization efforts. Hayden needs the position to be filled expeditiously to implement her modernization program, and Trump already faces a sizable confirmation backlog.

Meanwhile, proponents argue a more independent register, less tethered to the will of the Library of Congress, will make USCO more accountable. They say it will make the office run more efficiently and allow it to modernize. They also believe it will address important constitutional questions, such as the separation of powers and oversight by the president.

At the heart of these constitutional questions is the fact the Library of Congress has both significant legislative and executive functions. Housed within the legislative branch, it also sets royalty rates and rules on exemptions from the Digital Millennium Copyright Act. Critics have derided the Copyright Office for being slippery about whether it is serving a legislative or executive role, depending on who’s asking. The contention is that this unusual arrangement renders USCO a “constitutional chameleon.”

Of course, it is not uncommon for entities in one branch to perform the functions of another. The president has a role in the legislative process through his veto power. The International Trade Commission performs judicial functions, but is an independent agency housed within the executive branch. The federal government’s separation of powers is not absolute. But there does come a point where those lines become so blurred as to call the original classification into question. In that respect, Congress should consider taking certain functions—such as the Copyright Royalty Board or the Triennial Section 1201 Proceeding—out of the Copyright Office.

Some would propose moving the entire Copyright Office out of the Library of Congress and rendering it a standalone agency, which would elevate the register’s position to one of an officer of the United States. Under that highly controversial scenario, the Constitution’s Appointments Clause definitely would require the job be filled by the president. But for now, since the librarian still has ultimate authority over the substantive regulatory powers surrounding copyright, changing who appoints the register won’t change anything outside of a short-term political calculation of who the next register is.

The bottom line is that the current bill simply doesn’t do that much, good or bad. Making the position a presidential appointment is unlikely to speed up IT modernization efforts, at a time when the office has faced numerous setbacks and problems getting that IT infrastructure in place. The original policy proposal drafted by the House Judiciary Committee was a more comprehensive and substantial approach to modernization and many of its provisions were supported broadly. First step or not, this is a feeble try.

As the Senate considers the bill in the coming weeks, senators should either amend the legislation so that it will do something to modernize copyright, or just jettison it entirely. As currently written, the bill serves no purpose, and Congress shouldn’t waste its time on it.

Title II, Broadcast Regulation, and the First Amendment
https://techliberation.com/2016/10/27/title-ii-broadcast-regulation-and-the-first-amendment/
Thu, 27 Oct 2016 19:23:14 +0000

Title II allows the FCC to determine what content and media Internet access providers must transmit on their own private networks, so the First Amendment has constantly dogged the FCC’s “net neutrality” proceedings. If the Supreme Court agrees to take up an appeal from the DC Circuit Court of Appeals, which rejected a First Amendment challenge this summer, it will likely be because of Title II’s First Amendment deficiencies.

Title II has always been about handicapping ISPs qua speakers and preventing ISPs from offering curated Internet content. As former FCC commissioner Copps said, absent the Title II rules, “a big cable company could block access to an investigative report about its less-than-stellar customer service.” Tim Wu told members of Congress that net neutrality was intended to prevent ISPs from favoring, say, particular news sources or sports teams.

But just as a cable company chooses to offer some channels and not others, and a search engine chooses to promote some pages and not others, choosing to offer a curated Internet to, say, children, religious families, or sports fans involves editorial decisions. As communications scholar Stuart Benjamin said about Title II’s problem, under current precedent, ISPs “can say they want to engage in substantive editing, and that’s enough for First Amendment purposes.”

Title II – Bringing Broadcast Regulation to the Internet

Title II regulation of the Internet is frequently compared to the Fairness Doctrine, which activists used for decades to drive conservatives out of broadcast radio and TV. As a pro-net neutrality media professor explained in The Atlantic last year, the motivation for the Fairness Doctrine and Title II Internet regulation is the same: to “rescue a potentially democratic medium from commercial capture.” This is why there is almost perfect overlap between the organizations and advocates who support the Fairness Doctrine and those who lobbied for Title II regulation of the Internet.

These advocates know that FCC regulation of media has proceeded in similar ways for decades. Apply the expansive “gatekeeper” label to a media distributor and then the FCC will regulate distributor operations, including the content transmitted. Today, all electronic media distributors–broadcast TV and radio, satellite TV and radio, cable TV, and ISPs–whether serving 100 customers or 100 million customers, are considered “gatekeepers” and their services and content are subject to FCC intervention.

With broadband convergence, however, the FCC risked losing the ability to regulate mass media. Title II gives the FCC direct and indirect authority to shape Internet media like it shapes broadcast media. In fact, Chairman Wheeler called the Title II rules “must carry–updated for the 21st century.”

The comparison is apt and suggests why the FCC can’t escape the First Amendment challenges to Title II. Must-carry rules require cable TV companies to transmit all local broadcast stations to their cable TV subscribers. Because the must-carry rules deprive cable operators of editorial discretion over their own networks, the Supreme Court held in Turner I that the rules interfered with the First Amendment rights of cable operators.

But the Communications Act Allows Internet Filtering

Internet regulation advocates faced a huge problem, though. Unlike other expansions of FCC authority into media, Congress was not silent about regulation of the Internet. In Section 230 of the 1996 update to the Communications Act, Congress announced a policy that Internet access providers should remain “unfettered by State and Federal regulation.”

Regulation advocates dislike Section 230 because of its deregulatory message and because it expressly allows Internet access providers to filter the Internet.

Professor Yochai Benkler, in agreement with Lawrence Lessig, noted that Section 230 gives Internet access providers editorial discretion. Benkler warned that because of 230, “ISPs…will interject themselves between producers and users of information.” Further, these “intermediaries will be reintroduced not because of any necessity created by the technology, or because the medium requires a clearly defined editor. Intermediaries will be reintroduced solely to acquire their utility as censors of morally unpalatable materials.”  

Professor Jack Balkin noted likewise that “…§ 230(c)(2) immunizes [ISPs] when they censor the speech of others, which may actually encourage business models that limit media access in some circumstances.” 

Even the FCC acknowledges the consumer need for curated services and says in the Open Internet Order that Title II providers can offer “a service limited to offering ‘family friendly’ materials to end users who desire only such content.”

While that concession represents a half-hearted effort to bring the Order into compliance with Section 230, it simply exposes the FCC to court scrutiny. Allowing “family friendly” offerings but not other curated offerings is a content-based distinction. Under the Supreme Court’s decision in R.A.V. v. City of St. Paul, “[c]ontent-based regulations are presumptively invalid.” Further, the Supreme Court said in US v. Playboy that content-based burdens must satisfy the same scrutiny as content-based bans.

Circuit Split over the First Amendment Rights of Common Carriers

Hopefully the content-based nature of the Title II regulations is reason enough for the Supreme Court to take up an appeal. Another reason is that there is now a circuit split regarding the extent of First Amendment protections for common carriers.

The DC Circuit said that the FCC can prohibit content blocking because ISPs have been labeled common carriers.

In contrast, other courts have held that common carriers are permitted to block content on common carrier lines. In Information Providers Coalition v. FCC, the 9th Circuit held that common carriers “are private companies, not state actors…and accordingly are not obliged to continue…services of particular subscribers.” As such, regulated common carriers are “free under the Constitution to terminate service” to providers of offensive content. The Court relied on its decision a few years earlier in Carlin Communications v. Mountain States Telephone and Telegraph Company that when a common carrier phone company is connecting thousands of subscribers simultaneously to the same content, the “phone company resembles less a common carrier than it does a small radio station” with First Amendment rights to block content.

Similarly, the 4th Circuit in Chesapeake & Potomac Telephone Co. v. US held that common carrier phone companies are First Amendment speakers when they bundle and distribute TV programming, and that a law preventing such distribution “impairs the telephone companies’ ability to engage in a form of protected speech.”

The full DC Circuit will be deciding whether to take up the Title II challenges. If the judges decline review, the Supreme Court would be the final opportunity for a rehearing. If appeal is granted, the First Amendment could play a major role. The Court will be faced with a choice: Should the Internet remain “unfettered” from federal regulation as Congress intended? Or is the FCC permitted to perpetuate itself by bringing legacy media regulations to the online world?

Why is the FCC Doubling Down on Regulating the TV Industry and Set Top Boxes?
https://techliberation.com/2016/09/21/why-is-the-fcc-doubling-down-on-regulating-the-tv-industry-and-set-top-boxes/
Wed, 21 Sep 2016 20:32:30 +0000

The FCC appears to be dragging the TV industry, which is increasingly app- and Internet-based, into years of rulemakings, unnecessary standards development and oversight, and drawn-out lawsuits. The FCC hasn’t made a final decision, but the general outline is pretty clear. The FCC wants to use a 20-year-old piece of corporate welfare, calculated to help a now-dead electronics retailer, as authority to regulate today’s TV apps and their licensing terms. Perhaps it will succeed in expanding its authority over set top boxes and TV apps. But as TV is being revolutionized by the Internet and legacy providers try to stay ahead of new players (Netflix, Amazon, Layer 3), regulating TV apps and boxes will likely impede the competitive process and distract the FCC from more pressing matters, like spectrum and infrastructure.

In the 1996 Telecom Act, a provision was added about set top boxes sold by cable and satellite companies. In the FCC’s words, Section 629 charges the FCC “to assure the commercial availability of devices that consumers use to access multichannel video programming.”  The law adds that such devices, boxes, and equipment must be from “manufacturers, retailers, and other vendors not affiliated with any multichannel video programming distributor.” In English: Congress wants to ensure that consumers can gain access to TV programming via devices sold by parties other than cable and satellite TV companies.

The FCC’s major effort to implement this law did not end well. To create a market for “non-affiliated equipment,” the FCC created rules in 1998 that established the CableCARD technology, a module designed to the FCC’s specifications that could be inserted into “nonaffiliated” set top boxes.

CableCARD was developed and released to consumers, but after years of complex lawsuits and technology dead ends, cable technology had advanced and few consumers demanded CableCARD devices. The results reveal the limits of lawmaker-designed “competition.” In 2010, 14 years after passage of the law and all those years of agency resources, fewer than 1% of pay-TV customers had “unaffiliated” set top boxes.

It’s a strangely specific statute with no analogues for other technology devices. Why was this law created? Multichannel News reporting in 1998, representative of other reports at the time, has some clues.

[Rep.] Bliley, whose district includes the headquarters of electronics retailer Circuit City, sponsored the provision that requires the FCC to adopt rules to promote the retail sale of cable set-top boxes and navigation devices. 

So it was a small addition to the Act, presumably added at the behest of Circuit City, so that electronics retailers and device companies could sell more consumer devices.

[Chart: TV regulations]

The good news is that by the law’s straightforward terms and intent, mission: accomplished. Despite CableCARD’s failure, electronics retailers today are selling devices that give consumers access to TV programming. That’s because, increasingly, TV providers are letting their apps do much of the work that set top boxes do. Today, many consumers can watch TV programming by installing a provider’s streaming TV app on a device of their choice, manufactured and sold by dozens of companies, like Samsung, Apple, and Google, and retailers. Unfortunately, Circuit City shuttered its last stores in 2009 and wasn’t around to benefit.

But the new FCC proposal says, no, mission: not accomplished. There’s some interpretative gymnastics to reach this conclusion. The FCC says “devices” and “equipment” should be interpreted broadly in order to capture apps made by pay-TV providers. Yet, while “devices and equipment” is broad enough to capture software like apps, it is not broad enough to capture actual devices and equipment, like smartphones, smart TVs, tablets, computers, and Chromecasts that consumers use to access pay-TV programming.

This strained reading of statutory language will create a regulatory mess out of the evolving pay-TV industry, which already has labyrinthine regulations.

But if you look at the history of FCC regulation, and TV regulation in particular, it’s pretty unexceptional. Advocates for FCC regulation have long seen a competitive and vibrant TV marketplace as a threat to the agency’s authority.

As former FCC chairman Newton Minow warned in his 1995 book, Abandoned in the Wasteland, the FCC would lose its ability to regulate TV if it didn’t find new justifications:

A television system with hundreds or thousands of channels—especially channels that people pay to watch—not only destroys the notion of channel scarcity upon which the public-trustee theory rests but simultaneously breathes life and logic into the libertarian model.

Minow advocated, therefore, that the FCC needed to find alternative reasons to retain some control of the TV industry, including affordability, social inclusiveness, education of youth, and elimination of violence. Special interests have manufactured a crisis in TV–“monopoly control” [sic] of set top boxes by TV distributors. As Scott Wallsten and others have suggested, bundling a set top box with a TV subscription is likely not a competitive problem and the FCC’s remedies are unlikely to work. 

The FCC’s blinkered view of the TV industry is necessary because the US TV and media marketplace is blossoming. Consumers have never had more access to programming on more devices. More than 100 standalone streaming video-on-demand products launched in 2015 alone. The major TV providers are going where consumers are and launching their own streaming apps. The market won’t develop perfectly to the Commissioners’ liking and there will be hiccups, but competition is vigorous, output and quality are high, and consumers are benefiting.

The FCC decision to devote its highly-educated agency staff and resources (which will balloon when challenged in court or during the app specification proceedings) to an arcane consumer issue with such cynical origins is a lamentable waste of agency resources.

This is an agency that for decades has done a hundred things poorly. In an increasingly competitive telecom and media marketplace, it should instead do a handful of things well. (Commissioner Pai has proposed useful infrastructure reforms, and Commissioner Rosenworcel has an interesting proposal, which I’ve written about, to deploy federal spectrum into commercial markets.) Let’s hope the agency leadership reassesses the necessity of this proceeding before dragging the TV industry into another wild goose chase.


Related research: This week Mercatus released a paper by MA Economics Fellow Joe Kane and me about the FCC’s reinvention as a social and cultural regulator: “The FCC and Quasi–Common Carriage: A Case Study of Agency Survival.”

Global Innovation Arbitrage: Drone Delivery Edition
https://techliberation.com/2016/08/25/global-innovation-arbitrage-drone-delivery-edition/
Thu, 25 Aug 2016 15:46:01 +0000

[Photo: Domino’s pizza delivery drone]
Just three days ago I penned another installment in my ongoing series about the growing phenomenon of “global innovation arbitrage” — or the idea that “innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.” And now it’s already time for another entry in the series!

My previous column focused on driverless car innovation moving overseas, and earlier installments discussed genetic testing, drones, and the sharing economy. Now another drone-related example has come to my attention, this time from New Zealand. According to the New Zealand Herald:

Aerial pizza delivery may sound futuristic but Domino’s has been given the green light to test New Zealand pizza delivery via drones. The fast food chain has partnered with drone business Flirtey to launch the first commercial drone delivery service in the world, starting later this year.

Importantly, according to the story, “If it is successful the company plans to extend the delivery method to six other markets – Australia, Belgium, France, The Netherlands, Japan and Germany.” That’s right, America is not on the list. In other words, a popular American pizza delivery chain is looking overseas to find the freedom to experiment with new delivery methods. And the reason they are doing so is because of the seemingly endless bureaucratic foot-dragging by federal regulators at the FAA.

Some may scoff and say, ‘Who cares? It’s just pizza!’ Well, even if you don’t care about innovation in the field of food delivery, how do you feel about getting medicines or vital supplies delivered on a more timely and efficient basis in the future? What may start as a seemingly mundane or uninteresting experiment with pizza delivery through the sky could quickly expand to include a wide range of far more important things. But it will never happen unless you give innovators a little breathing room–i.e., “permissionless innovation”–to try new and different ways of doing things.

Incidentally, Flirtey, the drone delivery company that Domino’s partnered with in New Zealand, is also an American-based company. On the company’s website, the firm notes that: “Drones can be operated commercially in a growing number of countries. We’re in discussions with regulators all around the world, and we’re helping to shape the regulations and systems that will make drone delivery the most effective, personal and frictionless delivery method in the market.”

That’s just another indication of the reality that global innovation arbitrage is at work today. If the U.S. puts its head in the sand and lets bureaucrats continue to slow the pace of progress, America’s next generation of great innovators will increasingly look offshore in search of patches of freedom across the planet where they can try out their exciting new products and services.

BTW, I wrote all about this in Chapter 3 of my Permissionless Innovation book. And here’s some additional Mercatus research on the topic.


Book Review: Calestous Juma’s “Innovation and Its Enemies”
https://techliberation.com/2016/07/29/book-review-calestous-jumas-innovation-and-its-enemies/
Fri, 29 Jul 2016 15:32:42 +0000

[Image: cover of Innovation and Its Enemies]

“The quickest way to find out who your enemies are is to try doing something new.” Thus begins Innovation and Its Enemies, an ambitious new book by Calestous Juma that will go down as one of the decade’s most important works on innovation policy.

Juma, who is affiliated with the Harvard Kennedy School’s Belfer Center for Science and International Affairs, has written a book that is rich in history and insights about the social and economic forces and factors that have, again and again, led various groups and individuals to oppose technological change. Juma’s extensive research documents how “technological controversies often arise from tensions between the need to innovate and the pressure to maintain continuity, social order, and stability” (p. 5) and how this tension is “one of today’s biggest policy challenges.” (p. 8)

What Juma does better than any other technology policy scholar to date is that he identifies how these tensions develop out of deep-seated psychological biases that eventually come to affect attitudes about innovations among individuals, groups, corporations, and governments. “Public perceptions about the benefits and risks of new technologies cannot be fully understood without paying attention to intuitive aspects of human psychology,” he correctly observes. (p. 24)

Opposition to Change: It’s All in Your Head

Juma documents, for example, how “status quo bias,” loss aversion, and other psychological tendencies encourage resistance to technological change. [Note: I discussed these and other “root-cause” explanations of opposition to technological change in Chapter 2 of my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, as well as in my 2012 law review article on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.”] Juma notes, for example, that “society is most likely to oppose a new technology if it perceives that the risks are likely to occur in the short run and the benefits will only accrue in the long run.” (p. 5) Moreover, “much of the concern is driven by perception of loss, not necessarily by concrete evidence of loss.” (p. 11)

Juma’s approach to innovation policy studies is strongly influenced by the path-breaking work of Austrian economist Joseph Schumpeter, who long ago documented how entrepreneurial activity and the “perennial gales of creative destruction” were the prime forces that spurred innovation and propelled society forward. But Schumpeter was also one of the first scholars to realize that psychological fears about such turbulent change were what ultimately led to much of the short-term opposition to new technologies that, in due time, we eventually come to see as life-enriching or even life-essential innovations. Juma uses Schumpeter’s insight as the launching point for his exploration, and he successfully verifies it using meticulously detailed case studies.

Case Study-Driven Analysis

Short-term opposition to change is particularly acute among incumbent industries and interest groups, who often feel they have the most to lose. In this regard, Innovation and Its Enemies contains some spectacular histories of how special interests have resisted new technologies and developments throughout the centuries. Those case studies include: coffee and coffeehouses, the printing press, margarine, farm machinery, electricity, mechanical refrigeration, recorded music, transgenic crops, and genetically engineered salmon. These case studies are remarkably detailed histories that offer engaging and enlightening accounts of “the tensions between innovation and incumbency.”

My favorite case study in the book discusses how the dairy industry fought the creation and spread of margarine (excuse the pun!). I had no idea how ugly that situation got, but Juma provides all the gory details in what I consider one of the very best crony capitalist case studies ever penned.

In particular, in a subsection of that chapter entitled “The Laws against Margarine,” he provides a litany of examples of how effective the dairy industry was in convincing lawmakers to enact ridiculous anti-consumer regulations to stop margarine, even though the product offered the public a much-needed, and much more affordable, substitute for traditional butter. At one point, the dairy industry successfully lobbied five states to adopt rules mandating that any imitation butter product had to be dyed pink! Other states enacted labeling laws that required butter substitutes to come in ominous-looking black packaging. Again, all this was done at the request of the incumbent dairy industry and the National Dairy Council, which would resort to almost any sort of deceptive tactic to keep a cheaper competing product out of the hands of consumers.

And so it goes in chapter after chapter of Juma’s book. The amount of detail in each of these unique case studies is absolutely stunning, but they nonetheless remain highly readable accounts of sectoral protectionism, special interest rent-seeking, and regulatory capture. In this way, Juma is plowing some familiar ground already covered by other economic historians and political scientists, such as Joel Mokyr and Mancur Olson, both of whom are mentioned in the book, as well as a long line of public choice scholars who are, somewhat surprisingly, not discussed in the text. Nonetheless, Juma’s approach is still fresh, unique, and highly informative. In fact, I don’t think I’ve ever seen so many distinct and highly detailed case studies assembled in one place by a single scholar.  What Juma has done here is truly impressive.

Related Innovation Policy Paradigms

Beyond Schumpeter’s clear influence, Juma’s approach to studying innovation policy also shares a great deal in common with two other unmentioned innovation policy scholars, Virginia Postrel and Robert D. Atkinson.

Postrel’s 1998 book, The Future and Its Enemies, contrasted the conflicting worldviews of “dynamism” and “stasis” and showed how the tensions between these two visions would affect the course of human affairs. She made the case for embracing dynamism — “a world of constant creation, discovery, and competition” — over the “regulated, engineered world” of the stasis mentality. Similarly, in his 2004 book, The Past and Future of America’s Economy, Atkinson documented how “American history is rife with resistance to change,” and in recounting some of the heated battles over previous technological revolutions he showed how two camps were always evident: “preservationists” and “modernizers.”

When Juma repeatedly recounts the fight between “innovation and incumbency” in his case studies, he is essentially describing the same paradigmatic divide that Postrel and Atkinson highlight in their works when they discuss “dynamist” vs. “stasis” tensions and the “modernizers” vs. “preservationists” battles that we have seen throughout history. [Note: In my 2014 essay on, “Thinking about Innovation Policy Debates: 4 Related Paradigms,” I discussed Postrel and Atkinson’s books and other approaches to understanding tech policy divisions and then related them to the paradigms I contrast in my work: the so-called “precautionary principle” vs. “permissionless innovation” mindsets.]

Finally, Juma’s book could also be compared to another freshly released book, The Politics of Innovation, by Mark Zachary Taylor. Taylor’s book is also essential reading on this lamentable history of industrial protectionism and the resulting political opposition to change we have seen over time. [Note: Brent Skorup and I provided many other high-tech cronyist case studies like these in our 2013 law review article, “A History of Cronyism and Capture in the Information Technology Sector.”]

To counter the prevalence of special interest influence and poor policymaking more generally, Juma stresses the need for evidence-based analysis and a corresponding rejection of fear-mongering and deceptive tactics by public officials and activist groups. He’s particularly concerned with “the use of demonization and false analogies to amplify the perception of risks associated with a new product.”

Accordingly, he would like to see improved educational and risk communication efforts aimed at better informing the public about risk trade-offs and the many potential future benefits of emerging technologies. “Learning how to communicate to the general public is an important aspect of reducing distrust [in new technologies],” Juma argues. (p. 312)

On the Pacing Problem

But Juma never really adequately squares that recommendation with another point he makes throughout the text about how “the pace of technological innovation is discernibly fast,” (p. 5) and how it is accelerating in an exponential fashion. “The implications of exponential growth will continue to elude political leaders if they persist in operating with linear worldviews.” (p. 14) But if it is indeed the case that things are moving that fast, then are we not potentially doomed to live in never-ending cycles of technopanics and misinformation campaigns about new technologies no matter how much education we try to do?

Regardless, Juma’s argument about the speed of modern technological change is quite valid and shared by many other scholars. He is essentially making the same case that Larry Downes did in his excellent 2009 book, The Laws of Disruption: Harnessing the New Forces That Govern Life and Business in the Digital Age. Downes argued that lawmaking in the information age is inexorably governed by the “law of disruption” or the fact that “technology changes exponentially, but social, economic, and legal systems change incrementally.”  This law, Downes said, is “a simple but unavoidable principle of modern life,” and it will have profound implications for the way businesses, government, and culture evolve going forward.  “As the gap between the old world and the new gets wider,” he argued, “conflicts between social, economic, political, and legal systems” will intensify and “nothing can stop the chaos that will follow.”

Again, Juma makes that same point repeatedly throughout the chapters of his book. This is also a restatement of the so-called “pacing problem,” as it is called in the field of the philosophy of technology. I discussed the pacing problem at length in my recent review of Wendell Wallach’s important new book, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. Wallach nicely defined the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” “There has always been a pacing problem,” he noted but, like Juma, Wallach believes that modern technological innovation is occurring at an unprecedented pace, making it harder than ever to “govern” using traditional legal and regulatory mechanisms.

New Approaches to Technological Governance Needed

Both Wallach in A Dangerous Master and Juma in Innovation and Its Enemies struggle with how to solve this problem. Wallach advocates “soft law” mechanisms or even informal “Governance Coordinating Committees,” which would oversee the development of new technology policies and advise existing governmental institutions. Juma is somewhat ambiguous regarding potential solutions, but he does stress the general need for a flexible approach to policy, as he notes on pg. 252:

It is important to make clear distinctions between hazards and risks. It is necessary to find a legal framework for addressing hazards. But such a framework should not take the form of rigid laws whose adoption needs to be guided by evidence of harm. More flexible standards that allow continuous assessment of emerging safety issues related to a new product are another way to address hazards. This approach would allow for evidence-based regulation.

Beyond that Juma wants to see “entrepreneurialism exercised in the public arena” (p. 282) and calls for “decisive leaders to champion the application of new technologies.” (p. 283) He argues such leadership is needed to ensure that life-enriching technologies are not derailed by opponents of change.

On the other hand, Juma sees a broader role for policymakers in helping to counter some of the potential side effects associated with many emerging technologies. He highlights three primary areas of concern. First, he suggests political leaders might need to find ways “to help balance the benefits and risks of automation” due to the rapid rise of robotics and artificial intelligence. Second, he notes that synthetic biology and gene-editing will give rise to many thorny issues that require policymakers to balance “potentially extraordinary benefits and the risk of catastrophic consequences.” (p. 284)  Finally, he points out that medicine and healthcare are set to be radically transformed by emerging technologies, but they are also threatened by archaic policies and practices in many countries.

In each case, Juma hopes that “decisive,” “adaptive” and “flexible” leaders will steer a sensible policy course with an eye toward limiting “the spread of political unrest and resentment toward technological innovation.” (p. 284) That’s a noble goal, but Juma remains a bit vague on the steps needed to accomplish that balancing act without tipping public policy in favor of a full-blown precautionary principle-based regime for new technologies. Juma clearly wants to avoid that result, but it remains unclear how or where he would draw clear lines in the sand to prevent it from occurring while at the same time achieving “decisive leadership” aimed at balancing potential risks and benefits.

Similarly, his repeated calls in the closing chapter for “inclusive innovation” efforts and strategies sound sensible in theory, but Juma speaks in abstract generalities about what the term means and doesn’t provide a clear vision for how it would translate into concrete actions that would not end up giving vested interests a veto over new forms of technological innovation that they disfavor.

[Cartoon: Consider Every Risk Except]

Nothing Ventured, Nothing Gained

Generally speaking, however, Juma wants this balance struck in favor of greater openness to change and an ongoing freedom to experiment with new technological capabilities. As he notes in his concluding chapter:

The biggest risk that society faces by adopting approaches that suppress innovation is that they amplify the activities of those who want to preserve the status quo by silencing those arguing for a more open future. […] Keeping the future open and experimenting in an inclusive and transparent way is more rewarding than imposing the dictum of old patterns. (pgs. 289, 316)

In that regard, the thing I liked most about Innovation and Its Enemies is the way throughout the text that Juma stressed the symbiotic relationship between risk-taking and progress. One of the ways he does so is by kicking off every chapter with a fun quote on that theme from some notable figure. He includes gems like these:

  • “Nothing will ever be attempted if all possible objections must be first overcome.” – Samuel Johnson
  • “Only those who will risk going too far can possibly find out how far one can go.” – T.S. Eliot
  • “If you risk nothing, then you risk everything.” – Geena Davis
  • “Test fast, fail fast, adjust fast.” – Tom Peters

Of course, I was bound to enjoy his repeated discussion of this theme because that was the central thesis of my latest book, in which I made the argument that, “if we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon such fears—then many best-case scenarios will never come about.” Or more simply, as the old saying goes: “nothing ventured, nothing gained.”

[Cartoon: Protesting Against New Technology - the Early Days]

On Pastoral Myths

I also liked the way that Juma used his case studies to remind us how “the topics may have changed, but the tactics have not.” (p. 143) For example, much of the fear-mongering and deceptive tactics we have seen through the years are based on “pastoral ideals,” i.e., appeals to nature, farm life, old traditions, or just the proverbial “good old days,” whenever those supposedly were! “Demonizing innovation is often associated with campaigns to romanticize past products and practices,” Juma notes. “Opponents of innovation hark back to traditions as if traditions themselves were not inventions at some point in the past.” (p. 309) So very true!

That was especially the case in battles over new farming methods and technologies, when opponents of change were frequently “championing a moral cause to preserve a way of life,” as Juma discusses in several chapters. (p. 129) New products or methods of production were repeatedly but wrongly characterized as dangerous simply because they were not supposedly “natural” or “traditional” enough in character.

Of course, if all farming and other work were to remain frozen in some past “natural” state, we’d all still be hunters and gatherers struggling to find the next meal to put in our bellies. Or, if we were all still on the farms of the “good old days,” then we’d still be stuck using an ox and plow in the name of preserving the “traditional” ways of doing things.

Humanity has made amazing strides—including being able to feed more people more easily and cheaply than ever before—precisely because we broke with those old, “natural” traditions. Alas, many vested interests and even quite a few academics today still employ these same pastoral appeals and myths to oppose new forms of technological change. Juma’s case studies powerfully illustrate why that dynamic continues to be a driving force in innovation policy debates and how it has delayed the diffusion of many important new goods and services throughout history. When the opponents of change rest their case on pastoral myths and nostalgic arguments about the good old days we should remind them that the good old days weren’t really that great after all.

Conclusion

In closing, Innovation and Its Enemies earns my highest recommendation. Even though 2016 is only half done as I write this, Professor Juma’s book is probably already a shoo-in as my choice for best innovation policy book of the year. And I am certain that it will also go down as one of the decade’s most important innovation policy books. Buy the book now and read every word of it. It is well worth your time.



Additional material related to Juma’s book:

Other Related Books

In addition to the books that I already mentioned throughout this review, readers who find Juma’s book and the issues he discusses in it of interest should also consider reading these other books on innovation policy, technological governance, and regulatory capture.  Although many of them are more squarely focused on the information technology sector or other emerging technology fields, they all relate to the general subject matter and approach found throughout Juma’s book. [NOTE: Links, where provided, are to my reviews of these books.]


Elizabeth Warren on Regulatory Capture & Simple Rules (June 15, 2016)
https://techliberation.com/2016/06/15/elizabeth-warren-on-regulatory-capture-simple-rules/

The folks over at RegBlog are running a series of essays on “Rooting Out Regulatory Capture,” a problem that I’ve spent a fair amount of time discussing here and elsewhere in the past. (See, most notably, my compendium on, “Regulatory Capture: What the Experts Have Found.”) The first major contribution in the RegBlog series is from Sen. Elizabeth Warren (D-MA) and it is entitled, “Corporate Capture of the Rulemaking Process.”

Sen. Warren makes many interesting points about the dangers of regulatory capture, but the heart of her argument about how to deal with the problem can basically be summarized as ‘Let’s Build a Better Breed of Bureaucrat and Give Them More Money.’ In her own words, she says we should “limit opportunities for ‘cultural capture’” of government officials and also “give agencies the money that they need to do their jobs.”

It may sound good in theory, but I’m always a bit perplexed by that argument because the implicit claims here are that:

(a) the regulatory officials of the past were somehow less noble-minded and more open to corruption than some hypothetical better breed of bureaucrat that is out there waiting to be found and put into office; and

(b) that the regulatory agencies of the past were somehow starved for resources and lacked “the money that they need to do their jobs.”

Neither of these assumptions is true and yet those arguments seem to animate most of the reform proposals set forth by progressive politicians and scholars for how to deal with the problem of capture.

I think it’s wishful thinking at best and willful ignorance of history at worst. First, people–including regulators–were no different in the past than they are today. We are not magically going to find a more noble lot who will walk into office and be immune from these pressures. If anything, you could make the argument that the regulators of the early Progressive Era were less susceptible to this sort of influence because they were riding a wave of impassioned regulatory zeal that accompanied that period. I don’t buy it, but it’s a more believable tale than the opposite story.

Secondly, if you think that the problem of regulatory capture is solved by simply giving agencies more money, you’ve got it exactly backwards. Regulated interests go to where the power and money is. They find it and influence it. You can deny it all you want, but that’s what history shows us. So long as we are delegating broad administrative powers to administrative agencies and then sending them big bags of enforcement money at the same time, special interests will seek and find ways to influence that process.

Is that too grim of a statement on the modern administrative state? No, it’s simply a perspective informed by history; a history that has best been told, incidentally, by progressive scholars and critics! And yet they all too often don’t seem willing to learn the lessons of that history.

The cycle of influence doesn’t end just because you try to erect more firewalls to keep the special interests out. Where power exists, they will always find a way to flex their muscle. The only real question is whether you want this activity to take place over or under the table. The whole “get-all-the-money-out-of-politics” fiction is, well, just that–a fiction. It’s a fine-sounding fairy tale that we continue to repeat again and again, and yet nothing much ever changes. And yet a whole hell of a lot of smart people continue to believe in that fairy tale, if for no other reason than they can’t possibly live with the idea that perhaps the only way to get this problem under control is to limit the underlying discretion and power of regulatory agencies to begin with.

On a better, more optimistic note, I want to highlight one argument Sen. Warren made in her essay with which I find myself in wholehearted agreement: we need more simple rules. As she correctly notes:

Complex rules take longer to finalize, are harder for the public to understand, and inevitably contain more special interest carve-outs that favor big business interests over small businesses and individuals. Complex rules are also more reliant on industry itself to provide additional detail and expertise—and that means more opportunities for capture. Simple works better.

Amen to all that! This is an issue I address in Chapter 6 of my recent book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. In subsection F, beginning on pg. 140, I explain why policymakers should “Rely on ‘Simple Rules for a Complex World’ When Regulation Is Needed.” I build that section around the insights of Philip K. Howard and Richard Epstein. Howard, who is chair of Common Good and the author of The Rule of Nobody, notes:

Too much law . . . can have similar effects as too little law. People slow down, they become defensive, they don’t initiate projects because they are surrounded by legal risks and bureaucratic hurdles. They tiptoe through the day looking over their shoulders rather than driving forward on the power of their instincts. Instead of trial and error, they focus on avoiding error. Modern America is the land of too much law. Like sediment in a harbor, law has steadily accumulated, mainly since the 1960s, until most productive activity requires slogging through a legal swamp. It’s degenerative. Law is denser now than it was 10 years ago, and will be denser still in the next decade. This growing legal burden impedes economic growth.

That’s exactly why we need, to borrow the title of Richard Epstein’s 1995 book of the same name, “simple rules for a complex world.” As I argue in my book:

This is why flexible, bottom-up approaches to solving complex problems. . .  are almost always superior to top-down laws and regulations. For example, we have already identified how social norms and pressure from the public, media, or activist groups can “regulate” behavior and curb potential abuses. And we have seen how education, awareness-building, transparency, and empowerment-based efforts can often help alleviate the problems associated with new forms of technological change. But there are other useful approaches that can be tapped to address or alleviate concerns or harms associated with new innovations. To the extent that other public policies are needed to guide technological developments, simple legal principles are greatly preferable to technology-specific, micromanaged regulatory regimes. Ex ante (preemptive and precautionary) regulation is often highly inefficient, even dangerous. Prospective regulation based on hypothesizing about future harms that may never materialize is likely to come at the expense of innovation and growth opportunities. To the extent that any corrective action is needed to address harms, ex post measures, especially via the common law, are typically superior.

I itemized those “simple rules” and solutions in another recent piece (“What 20 Years of Internet Law Teaches Us about Innovation Policy“). They include both formal mechanisms (property and contract law, torts, class action activity, and other common law tools) and informal strategies (ongoing voluntary negotiations, multistakeholder agreements, industry self-regulatory best practices and codes of conduct, education and transparency efforts, and so on). We should exhaust those sorts of solutions first before turning to administrative regulation. And then we should subject such regulatory proposals to a strict benefit-cost analysis (BCA). As I note in my Permissionless Innovation book,

All new proposed regulatory enactments should be subjected to strict BCA and, if they are formally enacted, they should also be retroactively reviewed to gauge their cost-effectiveness. Better yet, the sunsetting guidelines recommended above should be applied to make sure outdated regulations are periodically removed from the books so that innovation is not discouraged.

If Sen. Warren is serious about crafting more sensible “simple” rules and working to end the problem of regulatory capture, this is a better approach than simply trying, yet again, to build a better breed of bureaucrat.

Wendell Wallach on the Challenge of Engineering Better Technology Ethics (April 20, 2016)
https://techliberation.com/2016/04/20/wendell-wallach-on-the-challenge-of-engineering-better-technology-ethics/

On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.

Wallach’s latest book is entitled, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. And, as I’ve noted here recently, the greatly expanded second edition of my latest book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, has just been released.

Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!— A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.

Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are some moments in the book when he resorts to some hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.

Many Questions, Few Clear Answers

Wallach does a particularly good job framing the major questions about emerging technologies and their effect on society. “Navigating the future of technological possibilities is a hazardous venture,” he observes. “It begins with learning to ask the right questions—questions that reveal the pitfalls of inaction, and more importantly, the passageways available for plotting a course to a safe harbor.” (p. 7) Wallach then embarks on a 260+ page inquiry that bombards the reader with an astonishing litany of questions about the wisdom of various forms of technological innovation—both large and small. While I wasn’t about to start an exact count, I would say that the number of questions Wallach poses in the book runs well into the hundreds. In fact, many paragraphs of the book are nothing but an endless string of questions.

Thus, if there is a primary weakness with A Dangerous Master, it’s that Wallach spends so much time formulating such a long list of smart and nuanced questions that some readers may come away disappointed when they do not find equally satisfying answers. On the other hand, the lack of clear answers is also completely understandable because, as Wallach notes, there really are no simple answers to most of these questions.

Just Slow Down!

Moving on to substance, let me make clear where Wallach and I generally see eye-to-eye and where we part ways.

Generally speaking, we agree about the need to come up with better “soft governance” systems for emerging technologies, which might include multistakeholder processes, developer codes of conduct, sectoral self-regulation, sensible liability rules, and so on. (More on those strategies in a moment.)

But while we both believe it is wise to consider how we might “bake in” better ethics and norms into the process of technological development, Wallach seems much more inclined than I am to expect that we will be able to pre-ordain (or potentially require?) that all of this happens before much of this experimentation and innovation actually moves forward. Wallach opens by asking:

Determining when to bow to the judgment of experts and whether to intervene in the deployment of a new technology is certainly not easy. How can government leaders or informed citizens effectively discern which fields of research are truly promising and which pose serious risks? Do we have the intelligence and means to mitigate the serious risks that can be anticipated? How should we prepare for unanticipated risks? (p. 6)

Again, many good questions here! But this really gets to the primary difference between Wallach’s preferred approach and my own: I tend to believe that many of these things can only be worked out through ongoing trial and error, the constant reformulation of the various norms that govern the process of innovation, and the development of sensible ex post solutions to some of the most difficult problems posed by turbulent technological change.

By contrast, Wallach’s general attitude toward technological evolution is probably best summarized by the phrases “Slow down!” and “Let’s have a conversation about it first!” As he puts it in his own words: “Slowing down the accelerating adoption of technology should be done as a responsible means to ensure basic human safety and to support broadly shared values.” (p. 13)

But I tend to believe that it’s not always possible to preemptively determine which innovations to slow down, or even how to determine what those “shared values” are that will help us make this determination. More importantly, I worry that there are very serious potential risks and unintended consequences associated with slowing down many forms of technological innovation, which could improve human welfare in important ways. There can be no prosperity, after all, without a certain degree of risk-taking and disruption.

Getting Out Ahead of the Pacing Problem

It’s not that Wallach is completely hostile to new forms of technological innovation or blind to the many ways those innovations might improve our lives. To the contrary, he does a nice job throughout the book highlighting the many benefits associated with various new technologies, or he is at least willing to acknowledge that there can be many downsides associated with efforts aimed at limiting research and experimentation with new technological capabilities.

Yet, what concerns Wallach most is the much-discussed issue from the field of the philosophy of technology, the so-called “pacing problem.” Wallach concisely defines the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” (p. 251) “There has always been a pacing problem,” he notes, but he is concerned that technological innovation—especially highly disruptive and potentially uncontrollable forms of innovation—is now accelerating at an absolutely unprecedented pace.

(Just as an aside for all the philosophy nerds out there…  Such a rigid belief in the “pacing problem” represents a techno-deterministic viewpoint that is, ironically, sometimes shared by technological skeptics like Wallach as well as technological optimists like Larry Downes and even many in the middle of this debate, like Vivek Wadhwa. See, for example, The Laws of Disruption by Downes and “Laws and Ethics Can’t Keep Pace with Technology” by Wadhwa. Although these scholars approach technology ethics and politics quite differently, they all seem to believe that the pace of modern technological change is so relentless as to almost be an unstoppable force of nature. I guess the moral of the story is that, to some extent, we’re all technological determinists now!)

Despite his repeated assertions that modern technologies are accelerating at such a potentially uncontrollable pace, Wallach nonetheless hopes we can achieve some semblance of control over emerging technologies before they reach a critical “inflection point.” In the study of history and science, an inflection point generally represents a moment when a situation or trend suddenly changes in a significant way and things begin moving rapidly in a new direction. These inflection points can sometimes develop quite abruptly, ushering in major changes by creating new social, economic, or political paradigms. As it relates to technology in particular, inflection points can refer to the moment when a particular technology achieves critical mass in terms of adoption or, more generally, to the time when that technology begins to profoundly transform the way individuals and institutions act.

Another related concept that Wallach discusses is the so-called “Collingridge dilemma,” which refers to the notion that it is difficult to put the genie back in the bottle once a given technology has reached a critical mass of public adoption or acceptance. The concept is named after David Collingridge, who wrote about this in his 1980 book, The Social Control of Technology. “The social consequences of a technology cannot be predicted early in the life of the technology,” Collingridge argued. “By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult.”

On “Having a Discussion” & Coming Up with “a Broad Plan”

These related concepts of inflection points and the Collingridge dilemma constitute the operational baseline of Wallach’s worldview. “In weighing speedy development against long-term risks, speedy development wins,” he worries. “This is particularly true when the risks are uncertain and the perceived benefits great.” (p. 85)

Consequently, throughout his book, Wallach pleads with us to take what I will call Technological Time Outs. He says we need to pause at times so that we can have “a full public discussion” (p. 13) and ensure there is a “broad plan in place to manage our deployment of new technologies” (p. 19), so that innovation happens only at “a humanly manageable pace” (p. 261) and so as “to fortify the safety of people affected by unpredictable disruptions.” (p. 262) Wallach’s call for Technological Time Outs is rooted in his belief that “the accelerating pace [of modern technological innovation] undermines the quality of each of our lives.” (p. 263)

That is Wallach’s weakest assertion in the book, and he doesn’t really offer much evidence to prove that the velocity of modern technological change is hurting us rather than helping us, as many of us believe. Rather, he treats it as a widely accepted truism that necessitates some sort of collective effort to slow things down if the proverbial genie is about to exit the bottle, or to make sure those genies don’t get out of their bottles without a lot of preemptive planning regarding how they are to be released into the world. In the following passage on p. 72, Wallach very succinctly summarizes the approach recommended throughout A Dangerous Master:

this book will champion the need for more upstream governance: more control over the way that potentially harmful technologies are developed or introduced into the larger society. Upstream management is certainly better than introducing regulations downstream, after a technology is deeply entrenched or something major has already gone wrong. Yet, even when we can assess risks, there remain difficulties in recognizing when or determining how much control should be introduced. When does being precautionary make sense, and when is precaution an over-reaction to the risks? (p. 72)

Those who have read my Permissionless Innovation book will recall that I open by framing innovation policy debates in almost exactly the same way as Wallach suggests in that last line above. I argue in the first lines of my book that:

The central fault line in innovation policy debates today can be thought of as ‘the permission question.’  The permission question asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions and risk-taking, more generally.  Two conflicting attitudes are evident. One disposition is known as the ‘precautionary principle.’ Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions. The other vision can be labeled ‘permissionless innovation.’ It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.

So, by contrasting these passages, you can see that what I am setting up here is a clash of visions between what appears to be Wallach’s precautionary principle–based approach and my own permissionless innovation–focused worldview.

How Much Formal Precaution?

But that would be a tad too simplistic, because just a few paragraphs after Wallach makes the statement above about “upstream management” being superior to ex post solutions formulated “after a technology is deeply entrenched,” he begins slowly backing away from an overly rigid approach to precautionary principle–based governance of technological processes and systems.

He admits, for example, that “precautionary measures in the form of regulations and governmental oversight can slow the development of research whose overall societal impact will be beneficial” (p. 26), and that such measures can “be costly” and “slow innovation.” For countries, Wallach admits, this can have real consequences because “Countries with more stringent precautionary policies are at a competitive disadvantage to being the first to introduce a new tool or process.” (p. 74)

So, he’s willing to admit that what we might call a hard precautionary principle usually won’t be sensible or effective in practice, but he is far more open to soft precaution. But this is where real problems begin to develop with Wallach’s approach, and it presents us with a chance to turn the tables on him a bit and begin posing some serious questions about his vision for governing technology.

Much of what follows below are my miscellaneous ramblings about the current state of the intellectual dialogue about tech ethics and technological control efforts. I have discussed these issues at greater length in my new book as well as in a series of essays here in past years, most notably: “On the Line between Technology Ethics vs. Technology Policy”; “What Does It Mean to ‘Have a Conversation’ about a New Technology?”; and “Making Sure the ‘Trolley Problem’ Doesn’t Derail Life-Saving Innovation.”

As I’ve argued in those and other essays, my biggest problem with modern technological criticism is that specifics are in scandalously short supply in this field! Indeed, I often find the lack of details in this arena to be utterly exasperating. Most modern technological criticism follows a simple formula:

TECHNOLOGY → POTENTIAL PROBLEMS → DO SOMETHING!

But almost all the details come in the discussion about the nature of the technology in question and the many problems apparently associated with it. Far, far less thought goes into the “DO SOMETHING!” part of the critics’ work. One reason for that is probably self-evident: There are no easy solutions. Wallach admits as much at many junctures throughout the book. But that doesn’t excuse critics from the need to give us a more concrete blueprint for identifying and then potentially rectifying the supposed problems.

Of course, the other reason that many critics are short on specifics is that what they really mean when they say we need to “have a conversation” about a new disruptive technology is that we need to have a conversation about stopping that technology.

Where Shall We Draw the Line between Hard and Soft Law?

But this is what I found most peculiar about Wallach’s book: He never really gives us a good standard by which to determine when we should look to hard governance (traditional top-down regulation) versus soft governance (more informal, bottom-up and non-regulatory approaches).

On one hand, he very much wants society to exercise great restraint and precaution when it comes to many of the technologies he and others worry about today. Again, he’s particularly concerned about the potential runaway development and use of drones, genetic editing, nanotech, robotics, and artificial intelligence. For at least one class of robotics—autonomous military robots—Wallach does call for immediate policy action in the form of an Executive Order to ban “killer” autonomous systems. (Incidentally, there’s also a major effort underway called the “Campaign to Stop Killer Robots” that aims to make such a ban part of international law through a multinational treaty.)

But Wallach also acknowledges the many trade-offs associated with efforts to impose preemptive controls on robotics and other technologies. Perhaps for that reason, Wallach doesn’t develop a clear test for when the Precautionary Principle should be applied to new forms of innovation.

Clearly there are times when it is appropriate, although I believe it is only in an extremely narrow subset of cases. In the 2nd edition of my Permissionless Innovation book, I tried to offer a rough framework for when formal precautionary regulation (i.e., highly restrictive policy defaults such as operational restrictions, licensing requirements, research limitations, or even formal bans) might be necessary. I do not want to interrupt the flow of this review of Wallach’s book too much, so I have decided to just cut-and-paste that portion of Chapter 3 of my book (“When Does Precaution Make Sense?”) down below as an appendix to this essay.

The key takeaway of that passage from my book is that all of us who study innovation policy and the philosophy of technology—Wallach, myself, the whole darn movement—have done a remarkably poor job being specific about precisely when formal policy precaution is warranted. What is the test? All too often, we get lazy and apply what we might call an “I-Know-It-When-I-See-It” standard. Consider the possession of bazookas, tanks, and uranium. Almost all of us would agree that citizens should not be allowed to possess or use such things. Why? Well, it seems obvious, right? They just shouldn’t! But what is the exact standard we use to make that determination?

In coming years, I plan on spending a lot more time articulating a better test by which Precautionary Principle–based policies could be reasonably applied. Those who know me may be taken aback by what I just said. After all, I’ve spent many years explaining why Precautionary Principle–based thinking threatens human prosperity and should be rejected in the vast majority of cases. But that doesn’t excuse the lack of a serious and detailed exploration of the exact standard by which we determine when we should impose some limits on technological innovation.

Generally speaking, while I strongly believe that “permissionless innovation” should remain the policy default for most technologies, there certainly exist some scenarios where the threat of harm associated with a new innovation might be highly probable, tangible, immediate, irreversible, and catastrophic in nature. If so, that could qualify it for at least a light version of the Precautionary Principle. In a future paper or book chapter I’m just now starting to research, I hope to more fully develop those qualifiers and formulate a more robust test around them.

I would have very much liked to see Wallach articulate and defend a test of his own for when formal precaution would make sense and, by extension, for when we should default to soft precaution, or soft law and informal governance mechanisms, for emerging technologies.

We turn to that issue next.

Toward Soft Governance & the Engineering of Better Technological Ethics

Even though Wallach doesn’t provide us with a test for determining when precaution makes sense or when we should instead default to soft governance, he does a much better job explaining the various models of soft law or informal governance that might help us deal with the potential negative ramifications of highly disruptive forms of technological change.

What Wallach proposes, in essence, is that we bake a dose of precaution directly into the innovation process through a wide variety of informal governance/oversight mechanisms. “By embedding shared values in the very design of new tools and techniques, engineers improve the prospect of a positive outcome,” he claims. “The upstream embedding of shared values during the design process can ease the need for major course adjustments when it’s often too late.” (p. 261)

Wallach’s favored instrument of soft governance is what he refers to as “Governance Coordinating Committees” (GCCs). These Committees would coordinate “the separate initiatives by the various government agencies, advocacy groups, and representatives of industry” who would serve as “issue managers for the comprehensive oversight of each field of research.” (p. 250) He elaborates and details the function of GCCs as follows:

These committees, led by accomplished elders who have already achieved wide respect, are meant to work together with all the interested stakeholders to monitor technological development and formulate solutions to perceived problems. Rather than overlap with or function as a regulatory body, the committee would work together with existing institutions. (p. 250-51)

Wallach discussed the GCC idea in much greater detail in a 2013 book chapter he penned with Gary E. Marchant for a collected volume of essays on Innovative Governance Models for Emerging Technologies. (I highly recommend you pick up that book if you can afford it! It contains many terrific essays on these issues.) In their chapter, Marchant and Wallach specify some of the soft law mechanisms we might use to instill a bit of precaution preemptively. These mechanisms include: “codes of conduct, statements of principles, partnership programs, voluntary programs and standards, certification programs and private industry initiatives.”

If done properly, GCCs could provide exactly the sort of wise counsel and smart recommendations that Wallach desires. In my book and many law review articles on various disruptive technologies, I have endorsed many of the ideas and strategies Wallach identifies. I’ve also stressed the importance of many other mechanisms, such as education and empowerment-based strategies that could help the public learn to cope with new innovations or use them appropriately. In addition, I’ve highlighted the many flexible, adaptive ex post remedies that can help when things go wrong. Those mechanisms include common law remedies such as product defects law, various torts, contract law, property law, and even class action lawsuits. Finally, I have written extensively about the very active role played by the Federal Trade Commission (FTC) and other consumer protection agencies, which have broad discretion to police “unfair and deceptive practices” by innovators.

Moreover, we already have a quasi-GCC model developing today with the so-called “multistakeholder governance” model that is often used in both informal and formal ways to handle many emerging technology policy issues.  The Department of Commerce (the National Telecommunications and Information Administration in particular) and the FTC have already developed many industry codes of conduct and best practices for technologies such as biometrics, big data, the Internet of Things, online advertising, and much more. Those agencies and others (such as the FDA and FAA) are continuing to investigate other codes or guidelines for things like advanced medical devices and drones, respectively. Meanwhile, I’ve heard other policymakers and academics float the idea of “digital ombudsmen,” “data ethicists,” and “private IRBs” (institutional review boards) as other potential soft law solutions that technology companies might consider. Perhaps going forward, many tech firms will have Chief Ethical Officers just as many of them today have Chief Privacy Officers or Chief Security Officers.

In other words, there’s already a lot of “soft law” activity going on in this space. And I haven’t even begun an inventory of the many other bodies or groups in each sector that have set forth their own industry self-regulatory codes; they exist in almost every field that Wallach worries about.

So, I’m not sure how much his GCC idea will add to this existing mix, but I would not be opposed to them playing the sort of coordinating “issue manager” role he describes. But I still have many questions about GCCs, including:

  • How many of them are needed, and how will we know which one is the definitive GCC for each sector or technology?
  • If they are overly formal in character and dominated by the most vociferous opponents of any particular technology, a real danger exists that a GCC could end up granting a small cabal a “heckler’s veto” over particular forms of innovation.
  • Alternatively, the possibility of “regulatory capture” could be a problem for some GCCs if incumbent companies come to dominate their membership.
  • Even if everything went fairly smoothly and the GCCs produced balanced reports and recommendations, future developers might wonder if and why they are to be bound by older guidelines.
  • And if those future developers choose not to play by the same set of guidelines, what’s the penalty for non-compliance?
  • And how are such guidelines enforced in a world where what I’ve called “global innovation arbitrage” is an increasing reality?

Challenging Questions for Both Hard and Soft Law

To summarize, whether we are speaking of “hard” or “soft” law approaches to technological governance, I am just not nearly as optimistic as Wallach seems to be that we will be able to find consensus on these three things:

(1) what constitutes “harm” in many of these circumstances;

(2) which “shared values” should prevail when “society” debates the shaping of ethics or guiding norms for emerging technologies but has highly contradictory opinions about those values (consider online privacy as a good example, where many people enjoy hyper-sharing while others demand hyper-privacy); and,

(3) that we can create a legitimate “governing body” (or bodies) that will be responsible for formulating these guidelines in a fair way without completely derailing the benefits of innovation in new fields and also remaining relevant for very long.

Nonetheless, as he and others have suggested, the benefit of adopting a soft law/informal governance approach to these issues is that it at least seeks to address these questions in a more flexible and adaptive fashion. As I noted in my book, traditional regulatory systems “tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may not ever come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things.” (Permissionless Innovation, p. 120)

So, despite the questions I have raised here, I welcome the more flexible soft law approach that Wallach sets forth in his book. I think it represents a far more constructive way forward when compared to the opposite “top-down” or “command-and-control” regulatory systems of the past. But I very much want to make sure that even these new and more flexible soft law approaches leave plenty of breathing room for ongoing trial-and-error experimentation with new technologies and systems.

Conclusion

In closing, I want to reiterate that not only did I appreciate the excellent questions raised by Wendell Wallach in A Dangerous Master, but I take them very seriously. When I sat down to revise and expand my Permissionless Innovation book last year, I decided to include this warning from Wallach in my revised preface: “The promoters of new technologies need to speak directly to the disquiet over the trajectory of emerging fields of research. They should not ignore, avoid, or superficially dampen criticism to protect scientific research.” (p. 28–9)

As I noted, in response to Wallach: “I take this charge seriously, as should others who herald the benefits of permissionless innovation as the optimal default for technology policy. We must be willing to take on the hard questions raised by critics and then also offer constructive strategies for dealing with a world of turbulent technological change.”

Serious questions deserve serious answers. Of course, sometimes those posing those questions fail to provide many answers of their own! Perhaps it is because they believe the questions answer themselves. Other times, it’s because they are willing to admit that easy answers to these questions typically prove quite elusive. In Wallach’s case, I believe it’s more the latter.

To wrap up, I’ll just reiterate that both Wallach and I share a common desire to find solutions to the hard questions about technological innovation. But the crucial question that probably separates his worldview from my own is this: Whether we are talking about hard or soft governance, how much faith should we place in preemptive planning versus ongoing trial-and-error experimentation to solve technological challenges? Wallach is more inclined to believe we can divine these things with the sagacious foresight of “accomplished elders” and technocratic “issue managers,” who will help us slow things down until we figure out how to properly ease a new technology into society (if at all). But I believe that the only way we will find many of the answers we are searching for is by allowing still more experimentation with the very technologies whose development he and others seek to control. We humans are outstanding problem-solvers and have an uncanny ability, unmatched among mammals, to adapt to changing circumstances. We roll with the punches, learn from them, and become more resilient in the process. As I noted in my 2014 essay, “Muddling Through: How We Learn to Cope with Technological Change”:

we modern pragmatic optimists must continuously point to the unappreciated but unambiguous benefits of technological innovation and dynamic change. But we should also continue to remind the skeptics of the amazing adaptability of the human species in the face of adversity. [. . .] Humans have consistently responded to technological change in creative, and sometimes completely unexpected ways. There’s no reason to think we can’t get through modern technological disruptions using similar coping and adaptation strategies.

Will the technologies that Wallach fears bring about a “techstorm” that overwhelms our culture, our economy, and even our very humanity? It’s certainly possible, and we should continue to seriously discuss the issues that he and other skeptics raise about our expanding technological capabilities and the potential for many of them to do great harm. Because some of them truly could.

But it is equally plausible—in fact, some of us would say, highly probable—that instead of being overwhelmed, we will learn how to bend these new technological capabilities to our will and make them work for our collective benefit. Instead of technology becoming “a dangerous master,” we will make it our helpful servant, just as we have so many times before.


APPENDIX: When Does Precaution Make Sense?

[excerpt from chapter 3 of Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Footnotes omitted. See book for all references.]

But aren’t there times when a certain degree of precautionary policymaking makes good sense? Indeed, there are, and it is important to not dismiss every argument in favor of precautionary principle–based policymaking, even though it should not be the default policy rule in debates over technological innovation.

The challenge of determining when precautionary policies make sense comes down to weighing the (often limited) evidence about any given technology and its impact and then deciding whether the potential downsides of unrestricted use are so potentially catastrophic that trial-and-error experimentation simply cannot be allowed to continue. There certainly are some circumstances when such a precautionary rule might make sense. Governments restrict the possession of uranium and bazookas, to name just two obvious examples.

Generally speaking, permissionless innovation should remain the norm in the vast majority of cases, but there will be some scenarios where the threat of tangible, immediate, irreversible, catastrophic harm associated with new innovations could require at least a light version of the precautionary principle to be applied.  In these cases, we might be better suited to think about when an “anti-catastrophe principle” is needed, which narrows the scope of the precautionary principle and focuses it more appropriately on the most unambiguously worst-case scenarios that meet those criteria.

Precaution might make sense when harm is… | Precaution generally doesn’t make sense for asserted harms that are…
Highly probable | Highly improbable
Tangible (physical) | Intangible (psychic)
Immediate | Distant / unclear timeline
Irreversible | Reversible / changeable
Catastrophic | Mundane / trivial
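To make the narrowness of this “anti-catastrophe principle” concrete, the five criteria in the left column can be read as a conjunctive screen: formal precaution is on the table only when every one of them is satisfied. Here is a minimal sketch of that reading, entirely my own illustration rather than anything from either book; all of the names are hypothetical:

```python
# Purely illustrative sketch of the five-part screen as a conjunctive
# test; not an official framework from either book.
from dataclasses import dataclass

@dataclass
class HarmProfile:
    """An asserted harm from a new technology, scored on the five criteria."""
    highly_probable: bool
    tangible: bool        # physical rather than psychic
    immediate: bool       # not distant or on an unclear timeline
    irreversible: bool
    catastrophic: bool

def formal_precaution_warranted(h: HarmProfile) -> bool:
    # The anti-catastrophe principle is narrow: all five criteria must
    # hold.  Failing any one of them leaves permissionless innovation
    # as the policy default.
    return (h.highly_probable and h.tangible and h.immediate
            and h.irreversible and h.catastrophic)

# A hypothetical hobbyist-drone risk fails several of the tests,
# so the default of permissionless innovation holds:
drone_hobbyist = HarmProfile(False, True, False, True, False)
print(formal_precaution_warranted(drone_hobbyist))  # False
```

The conjunctive structure is the whole point: scoring high on one or two criteria (say, tangible and irreversible) is not enough to trigger formal precaution under this reading.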


But most cases don’t fall into this category. Instead, we generally allow innovators and consumers to freely experiment with technologies, and even engage in risky behaviors, unless a compelling case can be made that precautionary regulation is absolutely necessary.  How is the determination made regarding when precaution makes sense? This is where the role of benefit-cost analysis (BCA) and regulatory impact analysis is essential to getting policy right.  BCA represents an effort to formally identify the tradeoffs associated with regulatory proposals and, to the maximum extent feasible, quantify those benefits and costs.  BCA generally cautions against preemptive, precautionary regulation unless all other options have been exhausted—thus allowing trial-and-error experimentation and “learning by doing” to continue. (The mechanics of BCA are discussed in more detail in section VII.)
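The core BCA tradeoff described above can be sketched in a few lines. This is my own toy illustration of the logic, not a formula drawn from any actual regulatory impact analysis; the figures and names are invented:

```python
# Hypothetical sketch of the basic benefit-cost comparison;
# all numbers and names are invented for illustration only.
def net_benefit(expected_benefits: float,
                compliance_costs: float,
                unintended_costs: float = 0.0) -> float:
    """Weigh a rule's expected benefits against its direct compliance
    costs plus unintended costs, such as foregone trial-and-error
    innovation."""
    return expected_benefits - (compliance_costs + unintended_costs)

# A rule whose costs (direct plus foregone innovation) exceed its
# benefits fails the test, counseling against preemptive regulation:
result = net_benefit(expected_benefits=10.0,
                     compliance_costs=6.0,
                     unintended_costs=7.0)
print(result)  # -3.0
```

The `unintended_costs` term is where precautionary regulation most often loses on this accounting: the benefits of the experimentation it forecloses are real costs, even though they never appear on any balance sheet.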

This is not the end of the evaluation, however. Policymakers also need to consider the complexities associated with traditional regulatory remedies in a world where technological control is increasingly challenging and quite costly. It is not feasible to throw unlimited resources at every problem, because society’s resources are finite.  We must balance risk probabilities and carefully weigh the likelihood that any given intervention has a chance of creating positive change in a cost-effective fashion.  And it is also essential to take into account the potential unintended consequences and long-term costs of any given solution because, as Harvard law professor Cass Sunstein notes, “it makes no sense to take steps to avert catastrophe if those very steps would create catastrophic risks of their own.”  “The precautionary principle rests upon an illusion that actions have no consequences beyond their intended ends,” observes Frank B. Cross of the University of Texas. But “there is no such thing as a risk-free lunch. Efforts to eliminate any given risk will create some new risks,” he says.

Oftentimes, after working through all these considerations about whether to regulate new technologies or technological processes, the best solution will be to do nothing because, as noted throughout this book, we should never underestimate the amazing ingenuity and resiliency of humans to find creative solutions to the problems posed by technological change.  (Section V discusses the importance of individual and social adaptation and resiliency in greater detail.) Other times we might find that, while some solutions are needed to address the potential risks associated with new technologies, nonregulatory alternatives are also available and should be given a chance before top-down precautionary regulations are imposed. (Section VII considers those alternative solutions in more detail.)

Finally, it is again essential to reiterate that we are talking here about the dangers of precautionary thinking as a public policy prerogative—that is, precautionary regulations that are mandated and enforced by government officials. By contrast, precautionary steps may be far more wise when undertaken in a more decentralized manner by individuals, families, businesses, groups, and other organizations. In other words, as I have noted elsewhere in much longer articles on the topic, “there is a different choice architecture at work when risk is managed in a localized manner as opposed to a society-wide fashion,” and risk-mitigation strategies that might make a great deal of sense for individuals, households, or organizations, might not be nearly as effective if imposed on the entire population as a legal or regulatory directive.

Beyond that, at times, more morally significant issues may exist that demand an even more exhaustive exploration of the impact of technological change on humanity. Perhaps the most notable examples arise in the fields of advanced medical treatment and biotechnology. Genetic experimentation and human cloning, for example, raise profound questions about altering human nature or abilities as well as the relationship between generations.

The case for policy prudence in these matters is easier to make because we are quite literally talking about the future of what it means to be human.  Controversies have raged for decades over the question of when life begins and how it should end. But these debates will be greatly magnified and extended in coming years to include equally thorny philosophical questions.  Should parents be allowed to use advanced genetic technologies to select the specific attributes they desire in their children? Or should parents at least be able to take advantage of genetic screening and genome modification technologies that ensure their children won’t suffer from specific diseases or ailments once born?

Outside the realm of technologically enhanced procreation, profound questions are already being raised about the sort of technological enhancements adults might make to their own bodies. How much of the human body can be replaced with robotic or bionic technologies before we cease to be human and become cyborgs? As another example, “biohacking”—efforts by average citizens working together to enhance various human capabilities, typically by experimenting on their own bodies—could become more prevalent in coming years. Collaborative forums, such as Biohack.Me, already exist where individuals can share information and collaborate on various projects of this sort. Advocates of such amateur biohacking sometimes refer to themselves as “grinders,” which Ben Popper of the Verge defines as “homebrew biohackers [who are] obsessed with the idea of human enhancement [and] who are looking for new ways to put machines into their bodies.”

These technologies and capabilities will raise thorny ethical and legal issues as they advance. Ethically, they will raise questions of what it means to be human and the limits of what people should be allowed to do to their own bodies. In the field of law, they will challenge existing health and safety regulations imposed by the FDA and other government bodies.

Again, most innovation policy debates—including most of the technologies discussed throughout this book—do not involve such morally weighty questions. In the abstract, of course, philosophers might argue that every debate about technological innovation has an impact on the future of humanity and “what it means to be human.” But few have much of a direct influence on that question, and even fewer involve the sort of potentially immediate, irreversible, or catastrophic outcomes that should concern policymakers.

In most cases, therefore, we should let trial-and-error experimentation continue because “experimentation is part and parcel of innovation” and the key to social learning and economic prosperity.  If we froze all forms of technological innovation in place while we sorted through every possible outcome, no progress would ever occur. “Experimentation matters,” notes Harvard Business School professor Stefan H. Thomke, “because it fuels the discovery and creation of knowledge and thereby leads to the development and improvement of products, processes, systems, and organizations.”

Of course, ongoing experimentation with new technologies always entails certain risks and potential downsides, but the central argument of this book is that (a) the upsides of technological innovation almost always outweigh those downsides and that (b) humans have proven remarkably resilient in the face of uncertain, ever-changing futures.

In sum, when it comes to managing or coping with the risks associated with technological change, flexibility and patience are essential. One size most certainly does not fit all. And one-size-fits-all approaches to regulating technological risk are particularly misguided when the benefits associated with technological change are so profound. Indeed, “[t]echnology is widely considered the main source of economic progress”; therefore, nothing could be more important for raising long-term living standards than creating a policy environment conducive to ongoing technological change and the freedom to innovate.

Important New White House Report Documents Costs of Occupational Licensing https://techliberation.com/2015/07/29/important-new-white-house-report-documents-costs-of-occupational-licensing/ Wed, 29 Jul 2015 22:25:37 +0000

Yesterday, the White House Council of Economic Advisers released an important new report entitled “Occupational Licensing: A Framework for Policymakers” (PDF, 76 pages). The report highlighted the costs that outdated or unneeded licensing regulations can impose on diverse portions of the citizenry. Specifically, the report concluded that:

the current licensing regime in the United States also creates substantial costs, and often the requirements for obtaining a license are not in sync with the skills needed for the job. There is evidence that licensing requirements raise the price of goods and services, restrict employment opportunities, and make it more difficult for workers to take their skills across State lines. Too often, policymakers do not carefully weigh these costs and benefits when making decisions about whether or how to regulate a profession through licensing.

The report supported these conclusions with a wealth of evidence. In that regard, I was pleased to see that research from Mercatus Center-affiliated scholars was cited in the White House report (specifically on pg. 34). Mercatus Center scholars have repeatedly documented the costs of occupational licensing and offered suggestions for how to reform or eliminate unnecessary licensing practices. Most recently, my colleagues and I have explored the costs of licensing restrictions for new sharing economy platforms and innovators. The White House report cited, for example, the recently released Mercatus paper on “How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the ‘Lemons Problem,’” which I co-authored with Christopher Koopman, Anne Hobson, and Chris Kuiper. It also cited a new essay by Tyler Cowen and Alex Tabarrok on “The End of Asymmetric Information.”

Moreover, along with Christopher Koopman and Matt Mitchell, I recently submitted comments to the Federal Trade Commission for a sharing economy workshop. In those comments, as well as a recent paper on the same subject, we documented how occupational licensing rules are often “captured” by affected interests and then used to discourage new forms of competition and innovation. This harms both consumers and workers by depriving them of new and better options. Many sharing economy operations are having great success in breaking down these barriers and proving that consumers and workers do better in an environment free of unnecessary and costly licensing restrictions. This suggests that consumer welfare would be improved even more by reforming other licensing regimes.

Mercatus has published dozens of other studies and essays related to this issue, many of which I have listed below. Just recently, in fact, we published a new paper on “Breaking Down the Barriers: Three Ways State and Local Governments Can Improve the Lives of the Poor,” by economist Steven Horwitz. The report begins by documenting how “occupational licensure laws disproportionately burden the poor by requiring them to spend significant resources just to enter a market.” This is consistent with the findings of other Mercatus reports and other academic publications.

Anyway, check out the new White House report and, if you are serious about studying occupational licensing in more detail, take a closer look at some of the other Mercatus Center publications listed below. The case for occupational licensing reform is strong and nonpartisan, as the release of this White House report makes clear.


Mercatus Center publications and related material on occupational licensing & barriers to entry 

What Should the FTC Do about State & Local Barriers to Sharing Economy Innovation? https://techliberation.com/2015/05/12/what-should-the-ftc-do-about-state-local-barriers-to-sharing-economy-innovation/ Tue, 12 May 2015 20:21:02 +0000

The Federal Trade Commission (FTC) is taking a more active interest in state and local barriers to entry and innovation that could threaten the continued growth of the digital economy in general and the sharing economy in particular. The agency recently announced it would be hosting a June 9th workshop “to examine competition, consumer protection, and economic issues raised by the proliferation of online and mobile peer-to-peer business platforms in certain sectors of the [sharing] economy.” Filings are due to the agency in this matter by May 26th. (Along with my Mercatus Center colleagues, I will be submitting comments and also releasing a big paper on reputational feedback mechanisms that same week. We have already released this paper on the general topic.)

Relatedly, just yesterday, the FTC sent a letter to Michigan policymakers about restricting entry by Tesla and other direct-to-consumer sellers of vehicles. Michigan passed a law in October 2014 prohibiting such direct sales. The FTC’s strongly worded letter decries the state’s law as “protectionism for independent franchised dealers,” noting that “current provisions operate as a special protection for dealers—a protection that is likely harming both competition and consumers.” The agency argues that:

consumers are the ones best situated to choose for themselves both the vehicles they want to buy and how they want to buy them. Automobile manufacturers have an economic incentive to respond to consumer preferences by choosing the most effective distribution method for their vehicle brands. Absent supportable public policy considerations, the law should permit automobile manufacturers to choose their distribution method to be responsive to the desires of motor vehicle buyers.

The agency notes that a “well-developed body of research on these issues strongly suggests that government restrictions on distribution are rarely desirable for consumers,” and the staff letter goes on to demolish the bogus arguments set forth by defenders of the blatantly self-serving, cronyist law. (For more discussion of just how anti-competitive and anti-consumer these laws are in practice, see this January 2015 Mercatus Center study, “State Franchise Law Carjacks Auto Buyers,” by Jerry Ellig and Jesse Martinez.)

The FTC’s letter is another example of how the agency can take steps using its advocacy tools to explain to state and local policymakers how their laws may be protectionist and anti-consumer in character. Needless to say, this also has ramifications for how the agency approaches parochial restraints on entry and innovation affecting the sharing economy.

In our forthcoming Mercatus Center comments to the FTC for its June 9th sharing economy workshop, Christopher Koopman, Matt Mitchell, and I will address many issues related to the sharing economy and its regulation. Beyond addressing all five of the specific questions asked in the Commission’s workshop notice, we also include a discussion of “Federal Responses to Local Anticompetitive Regulations.” Down below I have reproduced the current rough draft of that section of our filing in the hope of getting input from others. Needless to say, the idea of the FTC aggressively using its advocacy efforts or even federal antitrust laws to address state and local barriers to trade and innovation will make some folks uncomfortable, especially on federalism grounds. But we argue that a good case can be made for the agency using both its advocacy and antitrust tools to address these issues. Let us know what you think.

The Federal Trade Commission possesses two primary tools to address public restraints of trade created by state and local authorities: advocacy and antitrust.[1]

Through its advocacy program, the Commission can provide specific comments to state and local officials regarding the effects of both proposed and existing regulations.[2] Commissioner Joshua Wright has noted, “For many years, the FTC has used its mantle to comment on legislation and regulation that may restrain competition in a way that harms consumers.”[3] Thus, at a minimum, the Commission can and should shine light on parochial governmental efforts to restrain trade and limit innovation throughout the sharing economy.[4] By shining more light on state or local anticompetitive rules, the Commission will, we hope, make governments, or their surrogate bodies (such as licensing boards), more transparent about their practices and more accountable for laws or regulations that could harm consumer welfare. However, to be successful, the Commission’s advocacy efforts depend upon the willingness of state and local legislators and regulators to heed its advice.[5]

The Commission has already used its advisory role in its recent guidance to state and local policymakers regarding the regulation of ridesharing services. The Commission noted then that “a regulatory framework should be responsive to new methods of competition,” and set forth the following vision regarding what it regards as the proper approach to parochial regulation of passenger transportation services:

Staff recommends that a regulatory framework for passenger vehicle transportation should allow for flexibility and adaptation in response to new and innovative methods of competition, while still maintaining appropriate consumer protections. [Regulators] also should proceed with caution in responding to calls for change that may have the effect of impairing new forms or methods of competition that are desirable to consumers. . . .  In general, competition should only be restricted when necessary to achieve some countervailing procompetitive virtue or other public benefit such as protecting the public from significant harm.[6]

This represents a reasonable framework for addressing concerns about parochial regulation of the sharing economy more generally.

Unfortunately, in areas relevant to the regulation of the sharing economy (e.g., taxicab regulations and rules governing home and apartment rentals), anticompetitive regulations have remained on the books—and in some instances have expanded—in spite of more than 30 years of Commission comment and advocacy.[7] In fact, as Public Citizen noted in a recent Supreme Court filing:

[M]any more occupations are regulated than ever before, and most boards doing the regulating—in both traditional and new professions—are dominated by industry members who compete in the regulated market. Those board member-competitors, in turn, commonly engage in regulation that can be seen as anticompetitive self-protection. The particular forms anticompetitive regulations take are highly varied, the possibilities seemingly limited only by the imaginations of the board members.[8]

In these instances, the Commission’s antitrust enforcement authority may need to be utilized when its advocacy efforts fall short with regard to regulations that favor incumbents by limiting competition and entry.[9] Many academics have endorsed expanded antitrust oversight of public barriers to trade and innovation.[10] As Commissioner Wright has argued, “the FTC is in a good position to use its full arsenal of tools to ensure that state and local regulators do not thwart new entrants from using technology to disrupt [the] existing marketplace.”[11] He notes specifically that he is “quite confident that a significant shift of agency resources away from enforcement efforts aimed at taming private restraints of trade and instead toward fighting public restraints would improve consumer welfare.”[12] We agree.

The Supreme Court’s recent decision in North Carolina State Board of Dental Examiners v. Federal Trade Commission made it clear that local authorities cannot claim broad immunity from federal antitrust laws.[13] This is particularly true, the Court noted, “where a State delegates control over a market to a nonsovereign actor,” such as a professional licensing board consisting primarily of members of the affected interest being regulated.[14] “Limits on state-action immunity are most essential when a State seeks to delegate its regulatory power to active market participants,” the Court held, “for dual allegiances are not always apparent to an actor and prohibitions against anticompetitive self-regulation by active market participants are an axiom of federal antitrust policy.”[15]

The touchstone of this case and the Court’s related jurisprudence in this area is political accountability.[16] State officials must (1) “clearly articulate” and (2) “actively supervise” licensing arrangements and regulatory bodies if they hope to withstand federal antitrust scrutiny.[17] The Court clarified this test in N.C. Dental, holding that “the Sherman Act confers immunity only if the State accepts political accountability for the anticompetitive conduct it permits and controls.”[18] In other words, if state and local officials want to engage in protectionist activities that restrain trade in pursuit of some other countervailing objective, then they need to own up to it by being transparent about their anticompetitive intentions and then actively oversee the process to ensure it is not captured by affected interests.[19]

Some might argue that this does not go far enough to eradicate anti-competitive barriers to trade at the state or local level that could restrain the innovative potential of the sharing economy. While that may be true, some limits on the Commission’s federal antitrust discretion are necessary to avoid impinging upon legitimate state and local priorities.

Over time, it is our hope that, by empowering the public with more options, more information, and better ways to shine light on bad actors, the sharing economy will continue to make many of those old regulations unnecessary. Thus, in line with Commissioner Maureen Ohlhausen’s wise advice, the Commission should encourage state and local officials to exercise patience and humility as they confront technological changes that disrupt traditional regulatory systems.[20]

But when parochial regulators engage in blatantly anticompetitive activities that restrain trade, foster cartelization, or harm consumer welfare in other ways, the Commission can act to counter the worst of those tendencies.[21] The Commission’s standard of review going forward was appropriately articulated by Commissioner Wright, who recently noted that “in the context of potentially disruptive forms of competition through new technologies or new business models, we should generally be skeptical of regulatory efforts that have the effect of favoring incumbent industry participants.”[22]

Such parochial protectionist barriers to trade and innovation will become even more concerning as the potential reach of so many sharing economy businesses grows larger. The boundary between intrastate and interstate commerce is sometimes difficult to determine for many sharing economy platforms. Clearly, much of the commerce in question occurs within the boundaries of a state or municipality, but sharing economy services also rely upon Internet-enabled platforms with a broader reach. To the extent state or local restrictions on sharing economy operations create negative externalities in the form of “interstate spillovers,” the case for federal intervention is strengthened.[23] It would be preferable if Congress chose to deal with such spillovers using its Commerce Clause authority (Art. 1, Sec. 8 of the Constitution),[24] but the presence of such negative externalities might also bolster the case for the Commission’s use of antitrust to address parochial restraints on trade.


[1]     See Maureen K. Ohlhausen, Reflections on the Supreme Court’s North Carolina Dental Decision and the FTC’s Campaign to Rein in State Action Immunity, before the Heritage Foundation, Washington, DC, March 31, 2015, at 19-20.

[2]     Id., at 20. (“The primary goal of such advocacy is to convince policymakers to consider and then minimize any adverse effects on competition that may result from regulations aimed at preventing various consumer harms.”) Also see James C. Cooper and William E. Kovacic, “U.S. Convergence with International Competition Norms: Antitrust Law and Public Restraints on Competition,” Boston University Law Review, Vol. 90, No. 4, (August 2010): 1582, “Competition advocacy helps solve consumers’ collective action problem by acting within the regulatory process to advocate for regulations that do not restrict competition unless there is a compelling consumer protection rationale for imposing such costs on citizens.”).

[3]     Joshua D. Wright, “Regulation in High-Tech Markets:  Public Choice, Regulatory Capture, and the FTC,” Remarks of Joshua D. Wright Commissioner, Federal Trade Commission at the Big Ideas about Information Lecture Clemson University, Clemson, South Carolina, April 2, 2015, at 15, https://www.ftc.gov/public-statements/2015/04/regulation-high-tech-markets-public-choice-regulatory-capture-ftc.

[4]     Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1610, (“Competition agencies could devote greater resources to conduct research to measure the effects of public policies that restrict competition. A research program could accumulate and analyze empirical data that assesses the consumer welfare effects of specific restrictions. Such a program could also assess whether the stated public interest objectives of government restrictions are realized in practice.”)

[5]     Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1582, (“The value of competition advocacy should be measured by (1) the degree to which comments altered regulatory outcomes times (2) the value to consumers of those improved outcomes. For all practical purposes, however, both elements are difficult to measure with any degree of certainty.”).

[6]     Federal Trade Commission, Staff Comments Before the Colorado Public Utilities Commission In The Matter of The Proposed Rules Regulating Transportation By Motor Vehicle, 4 Code of Colorado Regulations, (March 6, 2013), http://ftc.gov/os/2013/03/130703coloradopublicutilities.pdf.

[7]     Marvin Ammori, “Can the FTC Save Uber,” Slate, March 12, 2013, http://www.slate.com/articles/technology/future_tense/2013/03/uber_lyft_sidecar_can_the_ftc_fight_local_taxi_commissions.html (noting that “not only does the FTC have the authority to take these cities to impartial federal courts and end their anticompetitive actions; it also has deep expertise in taxi markets and antitrust doctrines.”) Also see Edmund W. Kitch, “Taxi Reform—The FTC Can Hack It,” Regulation, May/June 1984, http://object.cato.org/sites/cato.org/files/serials/files/regulation/1984/5/v8n3-3.pdf.

[8]     Brief of Amici Curiae Public Citizen in Support of Respondent, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 24.

[9]     Brief of Antitrust Scholars as Amici Curiae in Support of Respondent, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 6, 2014): 24, (“Antitrust review is entirely appropriate for curbing the excesses of occupational licensing because the anticompetitive effect has a similar effect on the market—and in particular consumers—as does traditional cartel activity.”)

[10]   See Mark A. Perry, “Municipal Supervision and State Action Antitrust Immunity,” The University of Chicago Law Review, Vol. 57, (Fall 1990): 1413-1445; William J. Martin, “State Action Antitrust Immunity for Municipally Supervised Parties,” The University of Chicago Law Review, Vol. 72, (Summer 2005): 1079-1102; Jarod M. Bona, “The Antitrust Implications of Licensed Occupations Choosing Their Own Exclusive Jurisdiction,” University of St. Thomas Journal of Law & Public Policy, Vol. 5, (Spring 2011): 28-51; Ingram Weber, “The Antitrust State Action Doctrine and State Licensing Boards,” The University of Chicago Law Review, Vol. 79, (2012); Aaron Edlin and Rebecca Haw, “Cartels by Another Name: Should Licensed Occupations Face Antitrust Scrutiny?,” University of Pennsylvania Law Review, Vol. 162, (2014): 1093-1164.

[11]   Wright, “Regulation in High-Tech Markets,” at 28-9.

[12]   Wright, “Regulation in High-Tech Markets,” at 29.

[13]   North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015).

[14]   Id.

[15]   Id. Also see Edlin & Haw, “Cartels by Another Name,” at 1143, (“Who could seriously argue that an unsupervised group of competitors appointed to regulate their own profession can be counted on to neglect their selfish interests in favor of the state’s?”); Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 3, (“Antitrust immunity for private parties who act under color of state law is especially problematic, given that anticompetitive conduct is most likely to occur when private parties are in a position to exploit government’s regulatory powers.”)

[16]   See Maureen K. Ohlhausen, Reflections on the Supreme Court’s North Carolina Dental Decision and the FTC’s Campaign to Rein in State Action Immunity, before the Heritage Foundation, Washington, DC, March 31, 2015, at 16, https://www.ftc.gov/public-statements/2015/03/reflections-supreme-courts-north-carolina-dental-decision-ftcs-campaign, (“states need to be politically accountable for whatever market distortions they impose on consumers.”); Edlin & Haw, “Cartels by Another Name,” at 1137, (“political accountability is the price a state must pay for antitrust immunity.”)

[17]   See Federal Trade Commission, Office of Policy and Planning, Report of the State Action Task Force (2003): 54, (“clear articulation requires that a state enunciate an affirmative intent to displace competition and to replace it with a stated criterion. Active supervision requires the state to examine individual private conduct, pursuant to that regulatory regime, to ensure that it comports with that stated criterion. Only then can the underlying conduct accurately be deemed that of the state itself, and political responsibility for the conduct fairly placed with the state.”) This test has been developed and refined in a variety of cases over the past 35 years. See: California Retail Liquor Dealers Ass’n v. Midcal Aluminum, Inc., 445 U.S. 97 (1980); Cmty. Comm’ns Co., Inc. v. City of Boulder, 455 U.S. 40, 48-51 (1982); City of Columbia v. Omni Outdoor Advertising, Inc., 499 U.S. 365 (1991); FTC v. Ticor Title Ins. Co., 504 U.S. 621 (1992).

[18]   North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015).

[19]   Edlin & Haw, “Cartels by Another Name,” at 1156. (“Requiring that the state place its imprimatur on regulation is at least better than the status quo, in which states too often delegate self-regulation to professionals and walk away.”) See also North Carolina State Bd. of Dental Exam’rs v. FTC, 135 S. Ct. 1101 (2015) (“[Federal antitrust] immunity requires that the anticompetitive conduct of nonsovereign actors, especially those authorized by the State to regulate their own profession, result from procedures that suffice to make it the State’s own.”).

[20]  Maureen K. Ohlhausen, Commissioner, Fed. Trade Commission, “Regulatory Humility in Practice,” Remarks of the American Enterprise Institute, Washington, D.C. (April 1, 2015).

[21]   Edlin & Haw, “Cartels by Another Name,” at 1094, (“state action doctrine should not prevent antitrust suits against state licensing boards that are comprised of private competitors deputized to regulate and to outright exclude their own competition, often with the threat of criminal sanction.”). See also Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 2, 21, http://www.americanbar.org/content/dam/aba/publications/supreme_court_preview/BriefsV4/13-534_resp_amcu_plf-cato.authcheckdam.pdf, (noting that courts “should presume strongly against granting state-action immunity in antitrust cases. It makes little sense to impose powerful civil and criminal punishments on private parties who are deemed to have engaged in anti-competitive conduct, while exempting government entities—or, worse, private parties acting under the government’s aegis—when they engage in the exact same conduct. . . . Whatever one’s opinion of antitrust law in general, there is no justification for allowing states broad latitude to disregard federal law and erect private cartels with only vague instructions and loose oversight.”)

[22]   Wright, “Regulation in High-Tech Markets,” at 7.

[23]   FTC, Report of the State Action Task Force, 44, (“an unfortunate gap has emerged between scholarship and case law. Although many of the leading commentators have expressed serious concern regarding problems posed by interstate spillovers, their thinking has yet to take root in the law. Such spillovers undermine both economic efficiency and some of the same political representation values thought to be protected by principles of federalism.”); Brief Amicus of the Pacific Legal Foundation and Cato Institute, North Carolina State Bd. of Dental Exam’rs v. FTC, (August 2014): 13, (“Allowing states expansive power to exempt private actors from antitrust laws would also disrupt national economic policy by encouraging a patchwork of state-established entities licensed to engage in cartel behavior. This would disrupt interstate investment and consumer expectations, and would have spillover effects across state lines.”); Cooper and Kovacic, “U.S. Convergence with International Competition Norms,” at 1598, (“When a state exports the costs attendant to its anticompetitive regulatory scheme to those who have not participated in the political process, however, there is no political backstop; arguments for immunity based on federalism concerns are severely weakened, if not wholly eviscerated, in these situations.”)

[24]   See Adam Thierer, The Delicate Balance: Federalism, Interstate Commerce, and Economic Freedom in the Technological Age (Washington, DC: The Heritage Foundation, 1998): 81-118.

Mercatus Filing to FAA on Small Drones https://techliberation.com/2015/04/24/mercatus-filing-to-faa-on-small-drones/ Fri, 24 Apr 2015 18:46:09 +0000

Today, Eli Dourado, Ryan Hagemann, and I filed comments with the Federal Aviation Administration (FAA) in its proceeding on the “Operation and Certification of Small Unmanned Aircraft Systems” (i.e., small private drones). In this filing, we begin by arguing that, just as “permissionless innovation” has been the primary driver of entrepreneurialism and economic growth in many sectors of the economy over the past decade, the same model can and should guide policy decisions in other sectors, including the nation’s airspace. “While safety-related considerations can merit some precautionary policies,” we argue, “it is important that those regulations leave ample space for unpredictable innovation opportunities.”

We continue in our filing by noting that “while the FAA’s NPRM is accompanied by a regulatory evaluation that includes benefit-cost analysis, the analysis does not meet the standard required by Executive Order 12866. In particular, it fails to consider all costs and benefits of available regulatory alternatives.” After that, we itemize the good and the bad of what the FAA proposes, with an eye toward how the agency can maximize innovation opportunities. We conclude by noting:

 The FAA must carefully consider the potential effect of UASs on the US economy. If it does not, innovation and technological advancement in the commercial UAS space will find a home elsewhere in the world. Many of the most innovative UAS advances are already happening abroad, not in the United States. If the United States is to be a leader in the development of UAS technologies, the FAA must open the American skies to innovation.

You can read our entire 9-page filing here.

___________________________

Additional Reading

Initial Thoughts on New FAA Drone Rules https://techliberation.com/2015/02/16/initial-thoughts-on-new-faa-drone-rules/ Mon, 16 Feb 2015 20:08:55 +0000

Yesterday afternoon, the Federal Aviation Administration (FAA) finally released its much-delayed rules for private drone operations. As The Wall Street Journal points out, the rules “are about four years behind schedule,” but now the agency is asking for expedited public comments over the next 60 days on the whopping 200-page order. (You have to love the irony in that!) I’m still going through all the details in the FAA’s new order (here’s a summary of the major provisions), but here are some high-level thoughts about what the agency has proposed.

Opening the Skies…

  • The good news is that, after a long delay, the FAA is finally taking some baby steps toward freeing up the market for private drone operations.
  • Innovators will no longer have to operate entirely outside the law in a sort of drone black market. There’s now a path to legal operation. Specifically, operators of small unmanned aircraft systems (UAS), meaning drones under 55 lbs., will be able to go through a formal certification process and, after passing a test, operate their systems legally.

… but Not Without Some Serious Constraints

  • The problem is that the rules only open the skies incrementally for drone innovation.
  • You can’t read through these 200 pages of regulations without getting the sense that the FAA still wishes private drones would just go away.
  • For example, the FAA still wants to keep drones on a short leash by (1) limiting flights to daylight hours, (2) requiring that drones remain within the operator’s visual line of sight at all times, and (3) prohibiting flights over people.
  • Those three limitations will hinder some obvious innovations, such as same-day drone delivery of small packages, which Amazon has said it is interested in pursuing. (Amazon isn’t happy about these restrictions.)

Impact on Small Innovators?

  • But what I worry about more are all the small ‘Mom-and-Pop’ drone entrepreneurs who want to use airspace as a platform for open, creative innovation. These folks are out there, but they don’t have the name recognition or the resources to weather these restrictions the way Amazon can. After all, if Amazon has to abandon same-day drone delivery because of the FAA rules, the company will still have a thriving commercial operation to fall back on. But all those small, nameless drone innovators currently experimenting with new, unforeseeable innovations may not be so lucky.
  • As a result, there’s a real threat here of drone entrepreneurs bolting the U.S. and offering their services in more hospitable environments if the FAA doesn’t take a more flexible approach.
  • [For more discussion of this problem, see my recent essay on “global innovation arbitrage.”]

Impact on News-Gathering?

  • It’s also worth asking how these rules might limit legitimate news-gathering operations by both journalistic enterprises and average citizens. If we can never fly a drone over a crowd of people, as the rules stipulate, that places some serious constraints on our ability to capture real-time images and video from events of societal importance, such as political protests, sporting events, or concerts.
  • [For more discussion about this, see this September 2014 Mercatus Center working paper, “News from Above: First Amendment Implications of the Federal Aviation Administration Ban on Commercial Drones.”]

Still Time to Reconsider More Flexible Rules

  • Of course, these aren’t final rules and the agency still has time to relax some of these restrictions to free the skies for less fettered private drone operation.
  • I suspect that drone innovators will protest the three specific limitations I identified above and ask for a more flexible approach to enforcing those rules.
  • But it’s good that the FAA has finally taken the first step toward decriminalizing private drone operations in the United States.


Permissionless Innovation & Commercial Drones https://techliberation.com/2015/02/04/permissionless-innovation-commercial-drones/ https://techliberation.com/2015/02/04/permissionless-innovation-commercial-drones/#comments Wed, 04 Feb 2015 23:20:57 +0000 http://techliberation.com/?p=75392

Farhad Manjoo’s latest New York Times column, “Giving the Drone Industry the Leeway to Innovate,” discusses how the Federal Aviation Administration’s (FAA) current regulatory morass continues to thwart many potentially beneficial drone innovations. I particularly appreciated this point:

But perhaps the most interesting applications for drones are the ones we can’t predict. Imposing broad limitations on drone use now would be squashing a promising new area of innovation just as it’s getting started, and before we’ve seen many of the potential uses. “In the 1980s, the Internet was good for some specific military applications, but some of the most important things haven’t really come about until the last decade,” said Michael Perry, a spokesman for DJI [maker of Phantom drones]. . . . He added, “Opening the technology to more people allows for the kind of innovation that nobody can predict.”

That is exactly right and it reflects the general notion of “permissionless innovation” that I have written about extensively here in recent years. As I summarized in a recent essay: “Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention or business model will bring serious harm to individuals, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.”

The reason that permissionless innovation is so important is that innovation is more likely in political systems that maximize breathing room for ongoing economic and social experimentation, evolution, and adaptation. We don’t know what the future holds. Only incessant experimentation and trial-and-error can help us achieve new heights of greatness. If, however, we adopt the opposite approach of “precautionary principle”-based reasoning and regulation, then these chances for serendipitous discovery evaporate. As I put it in my recent book, “living in constant fear of worst-case scenarios—and premising public policy upon them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

In this regard, the unprecedented growth of the Internet is a good example of how permissionless innovation can significantly improve consumer welfare and our nation’s competitive status relative to the rest of the world. And this also holds lessons for how we treat commercial drone technologies, as Jerry Brito, Eli Dourado, and I noted when filing comments with the FAA back in April 2013. We argued:

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators. We therefore urge the FAA not to impose any prospective restrictions on the use of commercial UASs without clear evidence of actual, not merely hypothesized, harm.

Manjoo builds on that same point in his new Times essay when he notes:

[drone] enthusiasts see almost limitless potential for flying robots. When they fantasize about our drone-addled future, they picture not a single gadget, but a platform — a new class of general-purpose computer, as important as the PC or the smartphone, that may be put to use in a wide variety of ways. They talk about applications in construction, firefighting, monitoring and repairing infrastructure, agriculture, search and rescue, Internet and communications services, logistics and delivery, filmmaking and wildlife preservation, among other uses.

If only the folks at the FAA and in Congress saw things this way. We need to open up the skies to the amazing innovative potential of commercial drone technology, especially before the rest of the world seizes the opportunity to jump into the lead on this front.

