Technology Liberation Front – https://techliberation.com
Keeping politicians’ hands off the Net & everything else related to technology

America Does Not Need a Digital Consumer Protection Commission
Thu, 10 Aug 2023 – https://techliberation.com/2023/08/10/america-does-not-need-a-digital-consumer-protection-commission/

The New York Times today published my response to an op-ed by Senators Lindsey Graham & Elizabeth Warren calling for a new “Digital Consumer Protection Commission” to micromanage the high-tech information economy. “Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution,” I argue. Here’s my full response:

Senators Lindsey Graham and Elizabeth Warren propose a new federal mega-regulator for the digital economy that threatens to undermine America’s global technology standing.

A new “licensing and policing” authority would stall the continued growth of advanced technologies like artificial intelligence in America, leaving China and others to claw back crucial geopolitical strategic ground.

America’s digital technology sector enjoyed remarkable success over the past quarter-century — and provided vast investment and job growth — because the U.S. rejected the heavy-handed regulatory model of the analog era, which stifled innovation and competition.

The tech companies that Senators Graham and Warren cite (along with countless others) came about over the past quarter-century because we opened markets and rejected the monopoly-preserving regulatory regimes that had been captured by old players.

The U.S. has plenty of federal bureaucracies, and many already oversee the issues that the senators want addressed. Their new technocratic digital regulator would do nothing but hobble America as we prepare for the next great global technological revolution.

My Latest Study on AI Governance
Thu, 20 Apr 2023 – https://techliberation.com/2023/04/20/my-latest-study-on-ai-governance/

The R Street Institute has just released my latest study on AI governance and how to address “alignment” concerns in a bottom-up fashion. The 40-page report is entitled, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence.”

My report asks: Is it possible to address AI alignment without starting with the Precautionary Principle as the governance baseline default? I explain how that is indeed possible. While some critics claim that no one is seriously trying to deal with AI alignment today, my report explains how no technology in history has been more heavily scrutinized this early in its life-cycle than AI, machine learning and robotics. The number of ethical frameworks out there already is astonishing. We don’t have too few alignment frameworks; we probably have too many!

We need to get serious about bringing some consistency to these efforts and figure out more concrete ways to build a culture of safety by embedding ethics-by-design. But there is an equally compelling interest in ensuring that algorithmic innovations are developed and made widely available to society.

Although some safeguards will be needed to minimize certain AI risks, a more agile and iterative governance approach can address these concerns without creating overbearing, top-down mandates, which would hinder algorithmic innovations – especially at a time when America is looking to stay ahead of China and other nations in the global AI race.

My report explores the many ethical frameworks that professional associations have already formulated as well as the various other “soft law” frameworks that have been devised. I also consider how AI auditing and algorithmic impact assessments can be used to help formalize the twin objectives of “ethics-by-design” and keeping “humans in the loop,” which are the two principles that drive most AI governance frameworks. But it is absolutely essential that audits and impact assessments are done right, to ensure they do not become an overbearing, compliance-heavy and politicized nightmare that would undermine algorithmic entrepreneurialism and computational innovation.

Finally, my report reviews the extensive array of existing government agencies and policies that ALREADY govern artificial intelligence and robotics as well as the wide variety of court-based common law solutions that cover algorithmic innovations. The notion that America has no law or regulation covering artificial intelligence today is massively wrong, as my report explains in detail.

I hope you’ll take the time to check out my new report. This and my previous report on “Getting AI Innovation Culture Right” serve as the foundation of everything we have coming on AI and robotics from the R Street Institute. Next up will be a massive study on global AI “existential risks” and national security issues. Stay tuned. Much more to come!

In the meantime, you can find all my recent work here on my “Running List of My Research on AI, ML & Robotics Policy.”



On “Pausing” AI
Fri, 07 Apr 2023 – https://techliberation.com/2023/04/07/on-pausing-ai/

Recently, the Future of Life Institute released an open letter, signed by some computer science luminaries and others, calling for a 6-month “pause” on the deployment and research of “giant” artificial intelligence (AI) technologies. Eliezer Yudkowsky, a prominent AI ethicist, then made news by arguing that the “pause” letter did not go far enough; he proposed that governments consider “airstrikes” against data processing centers, or even be open to the use of nuclear weapons. This is, of course, quite insane. Yet this is the state of things today, as an AI technopanic seems to be growing faster than any of the technopanics that I’ve covered in my 31 years in the field of tech policy—and I have covered a lot of them.

In a new joint essay co-authored with Brent Orrell of the American Enterprise Institute and Chris Meserole of Brookings, we argue that “the ‘pause’ we are most in need of is one on dystopian AI thinking.” The three of us recently served on a blue-ribbon Commission on Artificial Intelligence Competitiveness, Inclusion, and Innovation, an independent effort assembled by the U.S. Chamber of Commerce. In our essay, we note how:

Many of these breakthroughs and applications will already take years to work their way through the traditional lifecycle of development, deployment, and adoption and can likely be managed through legal and regulatory systems that are already in place. Civil rights laws, consumer protection regulations, agency recall authority for defective products, and targeted sectoral regulations already govern algorithmic systems, creating enforcement avenues through our courts and by common law standards allowing for development of new regulatory tools that can be developed as actual, rather than anticipated, problems arise.

“Instead of freezing AI we should leverage the legal, regulatory, and informal tools at hand to manage existing and emerging risks while fashioning new tools to respond to new vulnerabilities,” we conclude. Also on the pause idea, it’s worth checking out this excellent essay from Bloomberg Opinion editors on why “An AI ‘Pause’ Would Be a Disaster for Innovation.”

The problem is not with the “pause” per se. Even if the signatories could somehow enforce a worldwide stop-work order, six months probably wouldn’t do much to halt advances in AI. If a brief and partial moratorium draws attention to the need to think seriously about AI safety, it’s hard to see much harm. Unfortunately, a pause seems likely to evolve into a more generalized opposition to progress.

The editors go on to rightly note:

This is a formula for outright stagnation. No one can ever be fully confident that a given technology or application will only have positive effects. The history of innovation is one of trial and error, risk and reward. One reason why the US leads the world in digital technology — why it’s home to virtually all the biggest tech platforms — is that it did not preemptively constrain the industry with well-meaning but dubious regulation. It’s no accident that all the leading AI efforts are American too.

That is 100% right, and I appreciate the Bloomberg editors linking to my latest study on AI governance when they made this point. In this new R Street Institute study, I explain why “Getting AI Innovation Culture Right” is essential if we are to enjoy the many benefits that algorithmic systems offer while also staying ahead in the global race for competitive advantage in this space.

That report is the first in a trilogy of big studies on decentralized, flexible governance of artificial intelligence. We can achieve AI safety without crushing top-down bans or unworkable “pauses,” I argue. My next two papers are, “Flexible, Pro-Innovation Governance Strategies for Artificial Intelligence” (due out April 20th) and “Existential Risks & Global Governance Issues Surrounding AI & Robotics” (due out late May or early June). I’m also working on a co-authored essay taking a deep dive into the idea of AI impact assessments / auditing (late Spring / early Summer).

Relatedly, on April 7th, DeepLearning.AI held an event on “Why a 6-Month AI Pause is a Bad Idea” featuring leading AI scientists Andrew Ng and Yann LeCun discussing the trade-offs associated with the proposal. A crucial point made in the discussion is that a pause, especially a pause in the form of a governmental ban, would be a misguided innovation policy decision. They stressed that there will be policy interventions to address targeted risks from specific algorithmic applications, but that it would be a serious mistake to stop the overall development of the underlying technological capabilities. It’s worth watching.

For more on AI policy, here’s a list of some of my latest reports and essays. Much more to come. AI policy will be the biggest tech policy fight of our lifetimes.

Why the Endless Techno-Apocalyptica in Modern Sci-Fi?
Fri, 02 Sep 2022 – https://techliberation.com/2022/09/02/why-the-endless-techno-apocalyptica-in-modern-sci-fi/

James Pethokoukis of AEI interviews me about the current miserable state of modern science fiction, which is dripping with dystopian dread in every movie, show and book plot. How does all this techno-apocalyptica affect societal and political attitudes about innovation broadly and emerging technologies in particular? Our discussion builds on my recent Discourse article, “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics.” [Pasted down below.] Swing on over to Jim’s “Faster, Please” newsletter and hear what Jim and I have to say. And, for a bonus question, Jim asked me if we are doing a good job of inspiring kids to have a sense of wonder and to take risks. I have some serious concerns that we are falling short on that front.

How Science Fiction Dystopianism Shapes the Debate over AI & Robotics

[Originally ran on Discourse on July 26, 2022.]

George Jetson will be born this year. We don’t know the exact date of this fictional cartoon character’s birth, but thanks to some skillful Hanna-Barbera hermeneutics the consensus seems to be sometime in 2022.

In the same episode that we learn George’s approximate age, we’re also told the good news that his life expectancy in the future is 150 years. It was one of the many ways The Jetsons, though a cartoon for children, depicted a better future for humanity thanks to exciting innovations. Another was a helpful robot named Rosie, along with a host of other automated technologies—including a flying car—that made life easier for George and his family.

Most fictional portrayals of technology today are not as optimistic as  The Jetsons, however. Indeed, public and political conceptions about artificial intelligence (AI) and robotics in particular are being strongly shaped by the relentless dystopianism of modern science fiction novels, movies and television shows. And we are worse off for it.

AI, machine learning, robotics and the power of computational science hold the potential to drive explosive economic growth and profoundly transform a diverse array of sectors, while providing humanity with countless technological improvements in medicine and healthcare, financial services, transportation, retail, agriculture, entertainment, energy, aviation, the automotive industry and many others. Indeed, these technologies are already deeply embedded in these and other industries and are making a huge difference.

But that progress could be slowed and in many cases even halted if public policy is shaped by a precautionary-principle-based mindset that imposes heavy-handed regulation based on hypothetical worst-case scenarios. Unfortunately, the persistent dystopianism found in science fiction portrayals of AI and robotics conditions the ground for public policy debates, while also directing attention away from some of the more real and immediate issues surrounding these technologies.

Incessant Dystopianism Untethered from Reality

In his recent book Robots, Penn State business professor John Jordan observes how over the last century “science fiction set the boundaries of the conceptual playing field before the engineers did.” Pointing to the plethora of literature and film that depicts robots, he notes: “No technology has ever been so widely described and explored before its commercial introduction.” Not the internet, cell phones, atomic energy or any others.

Indeed, public conceptions of these technologies, and even the very vocabulary of the field, have been shaped heavily by sci-fi plots, beginning a hundred years ago with the 1920 play R.U.R. (Rossum’s Universal Robots), which gave us the term “robot,” and Fritz Lang’s 1927 silent film Metropolis, with its memorable Maschinenmensch, or “machine-human.” There has been a deep and rich imagination surrounding AI and robotics since then, but it has tended to be mostly negative and has grown more hostile over time.

The result has been a public and policy dialogue about AI and robotics that is focused on an endless parade of horribles about these technologies. Not surprisingly, popular culture also affects journalistic framings of AI and robotics. Headlines breathlessly scream of how “Robots May Shatter the Global Economic Order Within a Decade,” but only if we’re not dead already because… “If Robots Kill Us, It’s Because It’s Their Job.”

Dark depictions of AI and robotics are ever-present in popular modern sci-fi movies and television shows. A short list includes:  2001: A Space Odyssey, Avengers: Age of Ultron, Battlestar Galactica (both the 1978 original and the 2004 reboot), Black Mirror, Blade Runner, Ex Machina, Her, The Matrix, Robocop, The Stepford Wives, Terminator, Transcendence, Tron, WALL-E, Wargames and Westworld, among countless others. The least nefarious plots among these films and television shows rest on the idea that AI and robotics are going to drive us to a life of distraction, addiction or sloth. In more extreme cases, we’re warned about a future in which we are either going to be enslaved or destroyed by our new robotic or algorithmic overlords.

Don’t get me wrong; the movies and shows on the above list are some of my favorites.  2001 and Blade Runner are both in my top 5 all-time flicks, and the reboot of Battlestar is one of my favorite TV shows. The plots of all these movies and shows are terrifically entertaining and raise many interesting issues that make for fun discussions.

But they are not representative of reality. In fact, the vast majority of computer scientists and academic experts on AI and robotics agree that claims about machine “superintelligence” are wildly overplayed and that there is no possibility of machines gaining human-equivalent knowledge any time soon—or perhaps ever. “In any ranking of near-term worries about AI, superintelligence should be far down the list,” argues Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans.

Contra the  Terminator-esque nightmares envisioned in so many sci-fi plots, MIT roboticist Rodney Brooks says that “fears of runaway AI systems either conquering humans or making them irrelevant aren’t even remotely well grounded.” John Jordan agrees, noting: “The fear and uncertainty generated by fictional representations far exceed human reactions to real robots, which are often reported to be ‘underwhelming.’”

The same is true for AI more generally. “A close inspection of AI reveals an embarrassing gap between actual progress by computer scientists working on AI and the futuristic visions they and others like to describe,” says Erik Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Larson refers to this extreme thinking about superintelligent AI as “technological kitsch,” or exaggerated sentimentality and melodrama that is untethered from reality. Yet the public imagination remains captivated by tales of impending doom.

Seeding the Ground with Misery and Misguided Policy

But isn’t it all just harmless fun? After all, it’s just make believe. Moreover, can’t science fiction—no matter how full of techno-misery—help us think through morally weighty issues and potential ethical conundrums involving AI and robotics?

Yes and no. Titillating fiction has always had a cathartic element to it and helped us cope with the unknown and mysterious. Most historians believe it was Aristotle in his Poetics who first used the term katharsis when discussing how Greek tragedies helped the audience “through pity and fear effecting the proper purgation of these emotions.”

But are modern science fiction depictions of AI and robotics helping us cope with technological change, or instead just stoking a constant fear of it? Modern sci-fi isn’t so much purging negative emotion about the topic at hand as it is endlessly adding to the sense of dread surrounding these technologies. What are the societal and political ramifications of a cultural frame of reference that suggests an entire new class of computational technologies will undermine rather than enrich our human experiences and, possibly, our very existence?

The New Yorker’s Jill Lepore says we live in “A Golden Age for Dystopian Fiction,” but she worries that this body of work “cannot imagine a better future, and it doesn’t ask anyone to bother to make one.” She argues this “fiction of helplessness and hopelessness” instead “nurses grievances and indulges resentments” and that “[i]ts only admonition is: Despair more.” Lepore goes so far as to claim that, because “the radical pessimism of an unremitting dystopianism” has appeal to many on both the left and right, it “has itself contributed to the unravelling of the liberal state and the weakening of a commitment to political pluralism.”

I’m not sure dystopian fiction is driving the unravelling of pluralism, but Lepore is on to something when she notes how a fiction rooted in misery about the future will likely have political consequences at some point.

Techno-panic Thinking Shapes Policy Discussions

The ultimate question is whether public policy toward new AI and robotic technologies will be shaped by this hyperpessimistic thinking in the form of precautionary principle regulation, which essentially treats innovations as “guilty until proven innocent” and seeks to intentionally slow or retard their development.

If the extreme fears surrounding AI and robotics  do inspire precautionary controls—as they already have in the European Union—then we need to ask how the preservation of the technological status quo could undermine human well-being by denying society important new life-enriching and life-saving goods and services. Technological stasis does not provide a safer or healthier society, but instead holds back our collective ability to innovate, prosper and better our lives in meaningful ways.

Louis Anslow, curator of Pessimists Archive, calls this “the Black Mirror fallacy,” referencing the British television show that has enjoyed great success peddling tales of impending techno-disasters. Anslow defines the fallacy as follows: “When new technologies are treated as much more threatening and risky than old technologies with proven risks/harms. When technological progress is seen as a bigger threat than technological stagnation.”

Anslow’s Pessimists Archive collects real-world case studies of how moral panic and techno-panics have accompanied the introduction of new inventions throughout history. He notes, “Science fiction has conditioned us to be hypervigilant about avoiding dystopias born of technological acceleration and totally indifferent to avoiding dystopias born of technological stagnation.”

Techno-panics can have real-world consequences when they come to influence policymaking. Robert Atkinson, president of the Information Technology & Innovation Foundation (ITIF), has documented the many ways that “the social and political commentary [about AI] has been hype, bordering on urban myth, and even apocalyptic.” The more these attitudes and arguments come to shape policy considerations, the more likely it is precautionary principle-based recommendations will drive AI and robotics policy, preemptively limiting their potential. ITIF has published a report documenting “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” identifying how it will slow algorithmic advances in key sectors.

Similarly, in his important recent book Where Is My Flying Car?, scientist J. Storrs Hall documents how “regulation clobbered the learning curve” for many important technologies in the U.S. over the last half century, especially nuclear, nanotech and advanced aviation. Society lost out on many important innovations due to endless bureaucratic delays, often thanks to opposition from special interests, anti-innovation activists, overzealous trial lawyers and a hostile media. Hall explained how this also sent a powerful signal to talented young people who might have been considering careers in those sectors. Why go into a field demonized by so many and where your creative abilities will be hamstrung by precautionary constraints?

Disincentivizing Talent

Hall argues that in those crucial sectors, this sort of mass talent migration “took our best and brightest away from improving our lives,” and he warns that those who still hope to make a career in such fields should be prepared to be “misconstrued and misrepresented by activists, demonized by ignorant journalists, and strangled by regulation.”

Is this what the future holds for AI and robotics? Hopefully not, and America continues to generate world-class talent on this front today in a diverse array of businesses and university programs. But if the waves of negativism about AI and robotics persist, we shouldn’t be surprised if it results in a talent shift away from building these technologies and toward fields that instead look to restrict them.

For example, Hall documents how, following the sudden shift in public attitudes surrounding nuclear power 50 years ago, “interests, and career prospects, in nuclear physics imploded” and “major discoveries stopped coming.” Meanwhile, enrollment in law schools and other soft sciences typically critical of technological innovation enjoyed greater success. Nobody writes any sci-fi stories about what a disaster that development has been for innovation in the energy sphere, even though it is now abundantly clear how precautionary principle policies have undermined environmental goals and human welfare, with major geopolitical consequences for many nations.

If America loses the talent race on the AI front, it has ramifications for global competitive advantage going forward, especially as China races to catch up. In a world of global innovation arbitrage, talent and venture capital will flow to wherever it is treated most hospitably. Demonizing AI and robotics won’t help recruit or retain the next generation of talent and investors America needs to remain on top.

Flipping the Script

Some folks have had enough of the relentless pessimism surrounding technology and progress in modern science fiction and are trying to do something to reverse it. In a 2011 Wired essay on the dangers of “Innovation Starvation,” the acclaimed novelist Neal Stephenson decried the fact that “the techno-optimism of the Golden Age of [science fiction] has given way to fiction written in a generally darker, more skeptical and ambiguous tone.” While good science fiction “supplies a plausible, fully thought-out picture of an alternate reality in which some sort of compelling innovation has taken place,” Stephenson said modern sci-fi was almost entirely focused on its potential downsides.

To help reverse this trend, Stephenson worked with the Center for Science and the Imagination at Arizona State University to launch Project Hieroglyph, an effort to support authors willing to take a more optimistic view of the future. It yielded a 2014 book, Hieroglyph: Stories and Visions for a Better Future, which included almost 20 contributors. Later, in 2018, The Verge launched the “Better Worlds” project to support 10 writers of “stories that inspire hope” about innovation and the future. “Contemporary science fiction often feels fixated on a sort of pessimism that peers into the world of tomorrow and sees the apocalypse looming more often than not,” said Verge culture editor Laura Hudson when announcing the project.

Unfortunately, these efforts have not captured much public attention, and that’s hardly surprising. “Pessimism has always been big box office,” says science writer Matt Ridley, primarily because it really is more entertaining. Even though many of the great sci-fi writers of the past, including Isaac Asimov, Arthur C. Clarke and Robert Heinlein, wrote positively about technology, they ultimately had more success selling stories with darker themes. It’s just the nature of things more generally, from the best of Greek tragedy to Shakespeare and on down the line. There’s a reason they’re still rebooting Beowulf all these years later, after all.

So, There’s Star Trek and What Else?

While technological innovation will never enjoy the respect it deserves for being the driving force behind human progress, one can at least hope that more pop culture treatments of it might give it a fair shake. When I ask crowds of people to name a popular movie or television show that includes mostly positive depictions of technology, Star Trek is usually the first (and sometimes the only) thing people mention. It’s true that, on balance, technology was treated as a positive force in the original series, although “V’Ger”—a defunct space probe that attains a level of consciousness—was the prime antagonist in Star Trek: The Motion Picture. Later, Star Trek: The Next Generation gave us the always helpful android Data, but also created the lasting mental image of the Borg, a terrifying race of cyborgs hell-bent on assimilating everyone into their hive mind.

The Borg provided some of The Next Generation’s most thrilling moments, but also created a new cultural meme, with tech critics often worrying about how today’s humans are being assimilated into the hive mind of modern information systems. Philosopher Michael Sacasas even coined the term “the Borg Complex” to refer to a supposed tendency “exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile.” After years of a friendly back-and-forth with Sacasas, I even felt compelled to wrap up my book Permissionless Innovation with a warning to other techno-optimists not to fall prey to this deterministic trap when defending technological change. Regardless of where one falls on that issue, the fact that Sacasas and I were having a serious philosophical discussion premised on a famous TV plotline serves as another indication of how much science fiction shapes public and intellectual debate over progress and innovation.

And, truth be told, some movies know how to excite the senses without resorting to dystopianism. Interstellar and The Martian are two recent examples that come to mind. Interestingly, space exploration technologies themselves usually get a fair shake in many sci-fi plots, often only to be undermined by onboard AIs or androids, as occurred not only in 2001 with the eerie HAL 9000, but also in Alien.

There are some positive (and sometimes humorous) depictions of robots as in  Robot & Frank, or touching ones as in Bicentennial Man. Beyond The Jetsons, other cartoons like Iron Giant and Big Hero 6 offer more kindly visions of robots. KITT, a super-intelligent robot car, was Michael Knight’s dependable ally in NBC’s Knight Rider. And R2-D2 is always a friendly helper throughout the Star Wars franchise. But generally speaking, modern sci-fi continues to churn out far more negativism about AI and robotics.

What If We Took It All Seriously?

So long as the public and political imagination is spellbound by machine machinations that dystopian sci-fi produces, we’ll be at risk of being stuck with absurd debates that have no meaningful solution other than “Stop the clock!” or “Ban it all!” Are we really being assimilated into the Borg hive mind, or just buying time until a coming robopocalypse grinds us into dust (or dinner)?

If there were a kernel of truth to any of this, then we should adopt some of the extreme solutions Nick Bostrom of Oxford suggests in his writing on these issues. Those radical steps include worldwide surveillance and enforcement mechanisms for scientists and researchers developing algorithmic and robotic systems, as well as some sort of global censorship of information about these capabilities to ensure the technology is not used by bad actors.

To Bostrom’s great credit, he is at least willing to tell us how far he’d go. Most of today’s tech critics prefer to just spread a gospel of gloom and doom and suggest  something must be done, without getting into the ugly details about what a global control regime for computational science and robotic engineering looks like. We should reject such extremist hypothesizing and understand that silly sci-fi plots, bombastic headlines and kooky academic writing should not be our baseline for serious discussions about the governance of artificial intelligence and robotics.

At the same time, we absolutely should consider what downsides any technology poses for individuals and society. And, yes, some precautions of a regulatory nature will be needed. But most of the problems envisioned by sci-fi writers are not what we should be concerned with. There are far more specific and nuanced problems that AI and robotics confront us with today, and these deserve more serious consideration and governance steps. How to program safer drones and driverless cars, improve the accuracy of algorithmic medical and financial technologies, and ensure better transparency for government uses of AI are all more mundane but very important issues that require reasoned discussion and balanced solutions today. Dystopian thinking gives us no roadmap to get there other than extreme solutions.

Imagining a Better Future

The way forward here is neither to indulge in apocalyptic fantasies nor pollyannaish techno-optimism, but to approach these technologies with reasoned risk analysis, sensible industry best practices, educational efforts and other agile governance steps. In a forthcoming book on flexible governance strategies for AI and robotics, I outline how these and other strategies are already being formulated to address real-world challenges in fields as diverse as driverless cars, drones, machine learning in medicine and much more.

A wide variety of ethical frameworks, offered by professional associations, academic groups and others, already exists to “bake in” best practices and align AI design with widely shared goals and values. These frameworks also stress “keeping humans in the loop” at critical stages of the design process so that people can continue to guide and occasionally realign those values and best practices as needed.

When things do go wrong, many existing remedies are available, including a wide variety of common law solutions (torts, class actions, contract law, etc.), recall authority possessed by many regulatory agencies, and various consumer protection policies and other existing laws. Moreover, the most effective solution to technological problems usually lies in more innovation, not less. It is only through constant trial and error that humanity discovers better and safer ways of satisfying important wants and needs.

These are complicated and nuanced issues that demand tailored and iterative governance responses. But this should not be done using inflexible, innovation-limiting mandates. Concerns about AI dangers deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions.

So, enjoy your next dopamine hit of sci-fi hysteria—I know I will, too. But don’t let that be your guide to the world that awaits us. Even if most sci-fi writers can’t imagine a better future, the rest of us can.

]]>
https://techliberation.com/2022/09/02/why-the-endless-techno-apocalyptica-in-modern-sci-fi/feed/ 2 77033
3 Questions about Progress: The Profectus Progress Roundtable https://techliberation.com/2022/06/15/3-questions-about-the-progress-the-profectus-progress-roundtable/ https://techliberation.com/2022/06/15/3-questions-about-the-progress-the-profectus-progress-roundtable/#respond Wed, 15 Jun 2022 17:10:56 +0000 https://techliberation.com/?p=77002

Profectus is an excellent new online magazine featuring essays and interviews on the intersection of academic literature, public policy, civilizational progress, and human flourishing. The Spring 2022 edition of the magazine features a “Progress Roundtable” in which six different scholars were asked to contribute their thoughts on three general questions:
  1. What is progress?
  2. What are the most significant barriers holding back further progress?
  3. If those challenges can be overcome, what does the world look like in 50 years?

I was honored to be asked by Clay Routledge to contribute answers to those questions alongside others, including Steven Pinker (Harvard University), Jason Crawford (Roots of Progress), Matt Clancy (Institute for Progress), Marian Tupy (HumanProgress.org), and James Pethokoukis (AEI). I encourage you to jump over to the roundtable and read all their excellent responses. I’ve included my answers down below:

What is progress?

Progress is the advancement of human health, happiness, and general well-being. Measuring well-being is challenging, however, so we should consider a broad range of metrics, including: life expectancy, infant mortality, poverty measures, energy production/consumption, GDP, productivity, agricultural yields/nourishment, and access to various important goods, services, and conveniences. While each of these metrics may have limitations, taken together, they stand for something meaningful that represents a rough proxy for progress.

But we should always remember what progress means at a deeper level for every individual. Innovation and economic growth are important because they allow us to live lives of our own choosing and enjoy the fruits of a prosperous, pluralistic society. Progress “is not just bigger piles of money,” as Hans Rosling once noted. “The ultimate goal is to have the freedom to do what we want.” Accordingly, we should aim to broaden the range of opportunities available to all people to help them flourish.

What are the most significant barriers holding back further progress?

The most significant threat to continued progress is the risk of stagnation accompanying efforts to protect the status quo. As Virginia Postrel taught us in her wonderful book The Future & Its Enemies, we should reject stasis-minded thinking and instead shoot for a world of dynamism, which cherishes and protects the freedom to think and act differently.

Progress hinges upon the growth of knowledge. Knowledge comes from experience, and the most important experiences involve trial-and-error learning. Public attitudes and policies that restrict people and ideas from intermingling freely are a recipe for intellectual, social, and economic stagnation. Accordingly, when we consider public policies toward progress, we should first seek to identify and remove legal and regulatory impediments that limit risk-taking, entrepreneurialism, and technological innovation. As science writer Matt Ridley provocatively puts it, to unlock more growth and prosperity, we must first remove obstacles to “ideas having sex.”

The free movement of people and capital is essential to this process. Openness to immigration is the easiest way for a nation to expand its potential for innovation and growth. But domestic labor skills and mobility are equally important. For entrepreneurs and workers, we need to reframe the battle for progress as “the freedom to innovate” and “the right to earn a living.”

Unfortunately, many barriers stand in the way of those goals, including occupational licensing rules and permitting processes, cronyist industrial protectionism, inefficient tax schemes, and many other layers of regulatory red tape. Reforming or eliminating such rules is crucial for broadening opportunities.

Finally, we need to address cultural barriers to progress. Technology and entrepreneurs often get a bad rap in the media and popular culture. Fear and pessimism dominate their narratives. We must do a better job communicating the benefits of openness to change and give people more reasons to be optimistic about a dynamic future.

If those challenges can be overcome, what does the world look like in 50 years?

I agree with Yogi Berra that “It’s tough to make predictions, especially about the future.” Nonetheless, history shows we can achieve remarkable things when we get the prerequisites for progress right and let people tap into their inherent inquisitiveness and inventiveness. Moving the needle on innovation and growth even just a little will yield compounding returns to future generations. But we should dare to dream bigger and think what progress means for each person today and in the future.

A pro-progress agenda will help us lead longer lives and significantly expand our capabilities because that is what people have always desired most. Accordingly, I believe the most significant advance of the next 50 years will be a radical increase in life expectancy and dramatic improvements in our physical and mental capabilities while we are alive.

Today’s tech critics often claim that technological innovation somehow undermines our humanity. They couldn’t be more wrong. There are few things more human than acts of invention. When we take steps to address practical human needs and wants, we enrich our lives and the lives of countless others. The future will be wonderful, so long as we are free to make it so.

]]>
https://techliberation.com/2022/06/15/3-questions-about-the-progress-the-profectus-progress-roundtable/feed/ 0 77002
VIDEO: My London Talk about the Future of AI Governance https://techliberation.com/2022/06/13/video-my-london-talk-about-the-future-of-ai-governance/ https://techliberation.com/2022/06/13/video-my-london-talk-about-the-future-of-ai-governance/#comments Mon, 13 Jun 2022 09:29:50 +0000 https://techliberation.com/?p=76999

On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:

  • What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
  • Which AI sectors are witnessing the most exciting forms of innovation currently?
  • What are the fundamental policy fault lines in the AI policy debates today?
  • Will fears about disruption and automation lead to a new Luddite movement?
  • How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
  • How did automation affect traditional jobs and sectors?
  • Will the European Union’s AI Act become a global model for regulation and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
  • How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
  • Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
  • What do we mean by “existential risk” as it pertains to artificial intelligence?

I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!


]]>
https://techliberation.com/2022/06/13/video-my-london-talk-about-the-future-of-ai-governance/feed/ 5 76999
Samuel Florman & the Continuing Battle over Technological Progress https://techliberation.com/2022/04/06/samuel-florman-the-continuing-battle-over-technological-progress/ https://techliberation.com/2022/04/06/samuel-florman-the-continuing-battle-over-technological-progress/#comments Wed, 06 Apr 2022 18:37:45 +0000 https://techliberation.com/?p=76961

Almost every argument against technological innovation and progress that we hear today was identified and debunked by Samuel C. Florman a half century ago. Few others since him have mounted a more powerful case for the importance of innovation to human flourishing than Florman did throughout his lifetime.

Chances are you’ve never heard of him, however. As prolific as he was, Florman did not command as much attention as the endless parade of tech critics whose apocalyptic predictions grabbed all the headlines. An engineer by training, Florman became concerned about the growing criticism of his profession throughout the 1960s and 70s. He pushed back against that criticism in a series of books over the next two decades, most notably The Existential Pleasures of Engineering (1976), Blaming Technology: The Irrational Search for Scapegoats (1981), and The Civilized Engineer (1987). He was also a prolific essayist, penning hundreds of articles for a wide variety of journals, magazines, and newspapers beginning in 1959, and he wrote a regular column for MIT Technology Review for sixteen years.

Florman’s primary mission in his books and many of those essays was to defend the engineering profession against attacks emanating from various corners. More broadly, as he noted in a short autobiography on his personal website, Florman was interested in discussing, “the relationship of technology to the general culture.”

Florman could be considered a “rational optimist,” to borrow Matt Ridley’s notable term [1] for those of us who believe, as I have summarized elsewhere, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment.[2] Rational optimists are highly pragmatic and base their optimism on facts and historical analysis, not on dogmatism or blind faith in any particular viewpoint, ideology, or gut feeling. But they are unified in the belief that technological change is a crucial component of moving the needle on progress and prosperity.

Florman’s unique contribution to advancing rational optimism came in the way he itemized the various claims made by tech critics and then powerfully debunked each one of them. He was providing other rational optimists with a blueprint for how to defend technological innovation against its many critics and criticisms. As he argued in The Civilized Engineer, we need to “broaden our conception of engineering to include all technological creativity.”[3] And then we need to defend it with vigor.

In 1982, the American Society of Mechanical Engineers appropriately awarded Florman the distinguished Ralph Coats Roe Medal for his “outstanding contribution toward a better public understanding and appreciation of the engineer’s worth to contemporary society.” Carl Sagan had won the award the previous year. Alas, Florman never attained the same degree of renown as Sagan. That is a shame because Florman was as much a philosopher and a historian as he was an engineer, and his robust thinking on technology and society deserves far greater attention. More generally, his plain-spoken style and straightforward defense of technological progress continues to be a model for how to counter today’s techno-pessimists.

This essay highlights some of the most important themes and arguments found in Florman’s writing and explains its continuing relevance to the ongoing battles over technology and progress.

What Motivates The “Antitechnologists”?

Florman was interested in answering questions about what motivates both engineers and their critics. He dug deep into psychology and history to figure out what makes these people tick. Who are engineers, and why do they do what they do? That was his primary question, and we will turn to his answers momentarily. But he also wanted to know what drove the technology critics to oppose innovation so vociferously.

Florman’s most important contribution to the history of ideas lies in his 6-part explanation of “the main themes that run through the works of the antitechnologists.”[4] Florman used the term “antitechnologists” to describe the many different critics of engineering and innovation. He recognized that the term wasn’t perfect and that some people he labelled as such would object to it. Nevertheless, because the critics offered no umbrella label for their movement or way of thinking, Florman fell back on what motivated them: opposition to, or general discomfort with, technology. Hence, the label “antitechnologists.”

Florman surveyed a wide swath of technological critics from many different disciplines—philosophy, sociology, law, and other fields. He condensed their main criticisms into six general points:

  • Technology is a “thing” or a force that has escaped from human control and is spoiling our lives.
  • Technology forces man to do work that is tedious and degrading.
  • Technology forces man to consume things that he does not really desire.
  • Technology creates an elite class of technocrats, and so disenfranchises the masses.
  • Technology cripples man by cutting him off from the natural world in which he evolved.
  • Technology provides man with technical diversions which destroy his existential sense of his own being.[5]

No one before had crafted such a taxonomy of complaints from tech critics, and no one has done it better since Florman did so in 1976. In fact, it is astonishing how well Florman’s list continues to identify what motivates modern technology critics. New technologies have come and gone, but these same concerns tend to be brought up again and again. Florman’s books addressed and debunked each of these concerns in powerful fashion.

The Relentless Pessimism & Elitism of the Antitechnologists

Florman identified the way a persistent pessimism unifies antitechnologists. “Our intellectual journals are full of gloomy tracts that depict a society debased by technology,” he noted.[6] What motivated such gloom and doom? “It is fear. They are terrified by the scene unfolding before their eyes.”[7] He elaborated:

“The antitechnologists are frightened; they counsel halt and retreat. They tell the people that Satan (technology) is leading them astray, but the people have heard that story before. They will not stand still for vague promises of a psychic contentment that is to follow in the wake of voluntary temperance.”[8]

The antitechnologists’ worldview isn’t just relentlessly pessimistic but also highly elitist and paternalistic, Florman argued. He referred to it as “Platonic snobbery.”[9] The economist and political scientist Thomas Sowell would later call that snobbish attitude, “the vision of the anointed.”[10] Like Sowell, Florman was angered by the way critics looked down their noses at average folk and disregarded their values and choices:

“The antitechnologists have every right to be gloomy, and have a bounden duty to express their doubts about the direction our lives are taking. But their persistent disregard of the average person’s sentiments is a crucial weakness in their argument—particularly when they then ask us to consider the ‘real’ satisfactions that they claim ordinary people experienced in other cultures of other times.”[11]

Florman noted that critics commonly complain about “too many people wanting too many things,” but he noted that, “[t]his is not caused by technology; it is a consequence of the type of creature that man is.”[12] One can moralize all they want about supposed over-consumption or “conspicuous consumption,” but in the end, most of us strive to better our lives in various ways—including by working to attain things that may be out of our reach or even superfluous in the eyes of others.

For many antitechnologists and other social critics, only the noble search for truth and wisdom will suffice. Basically, everybody should just get back to studying philosophy, sociology, and other soft sciences. Modern tech critics, Florman said, fashion themselves as the intellectual descendants of Greek philosophers who believed that, “[t]he ideal of the new Athenian citizen was to care for his body in the gymnasium, reason his way to Truth in the academy, gossip in the agora, and debate in the senate. Technology was not deemed worthy of a free man’s time.”[13]

“It is not surprising to find philosophers recommending the study of philosophy as a way of life,” Florman noted amusingly.[14] But that does not mean all of us want (or even need) to devote our lives to such things. Nonetheless, critics often sneer at the choices made by the rest of us—especially when they involve the fruits of science and technology. “The most effective weapon in the arsenal of the antitechnologists is self-righteousness,” he noted,[15] and, “[a]s seen by the antitechnologists, engineers and scientists are half-men whose analysis and manipulation of the world deprives them of the emotional experiences that are the essence of the good life.”[16]

Indeed, it is not uncommon (both in the past and today) to see tech critics self-anoint themselves “humanists” and then suggest that anyone who thinks differently from them (namely, those who are pro-innovation) are the equivalent of anti-humanistic. I wrote about this in my 2018 essay, “Is It ‘Techno-Chauvinist’ & ‘Anti-Humanist’ to Believe in the Transformative Potential of Technology?” I argued that, “[p]roperly understood, ‘technology’ and technological innovation are simply extensions of our humanity and represent efforts to continuously improve the human condition. In that sense, humanism and technology are complements, not opposites.”

But the critics remain fundamentally hostile to that notion and they often suggest that there is something suspicious about those who believe, along with Florman, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment. We rational optimists, the critics suggest, are simply too focused on crass, materialistic measures of happiness and human flourishing.

Florman observed this when noting how much grief he and fellow engineers and scientists got when engaging with critics. “Anyone who has attempted to defend technology against the reproaches of an avowed humanist soon discovers that beneath all the layers of reasoning—political, environmental, aesthetic, or moral—lies a deep-seated disdain for ‘the scientific view.’”[17]

Everywhere you look in the world of Science & Technology Studies (STS) today, you find this attitude at work. In fact, the field is perhaps better labelled Anti-Science & Technology Studies, or at least Science & Technology Skeptical Studies. For most STSers, the burden of proof lies squarely on scientists, engineers, and innovators who must prove to some (often undefined) higher authorities that their ideas and inventions will bring worth to society (however the critics measure worth and value, which is often very unclear). Until then, just go slow, the critics say. Better yet, consult your local philosophy department for a proper course of action!

The critics will retort that they are just looking out for society’s best interests and trying to counter that selfish, materialist side of humanity. Florman countered by noting how, “most people are in search of the good life—not ‘the goods life’ as [Lewis] Mumford puts it, although some goods are entailed—and most human desires are for good things in moderate amounts.”[18] Trying to better our lives through the creation and acquisition of new and better goods and services is just a natural and quite healthy human instinct to help us attain some ever-changing definition of whatever each of us considers “the good life.” “Something other than technology is responsible for people wanting to live in a house on a grassy plot beyond walking distance to job, market, neighbor, and school,” Florman responded.[19] We all want to “get ahead” and better our circumstances. That urge is not forced upon us by technology; it comes quite naturally as part of the desire to improve our lot in life.

The Power of Nostalgia

I have spent a fair amount of time in my own writing documenting the central role that nostalgia plays in motivating technological criticism.[20] Florman’s books repeatedly highlighted this reality. “The antitechnologists romanticize the work of earlier times in an attempt to make it seem more appealing than work in a technological age,” he noted. “But their idyllic descriptions of peasant life do not ring true.”[21]

The funny thing is, it is hard to pin down the critics regarding exactly when the “golden era” or “good ol’ days” were. But if there is one thing that they all agree on, it’s that those days have long passed us by. In a 2019 essay on “Four Flavors of Doom: A Taxonomy of Contemporary Pessimism,” philosopher Maarten Boudry noted:

“In the good old days, everything was better. Where once the world was whole and beautiful, now everything has gone to ruin. Different nostalgic thinkers locate their favorite Golden Age in different historical periods. Some yearn for a past that they were lucky enough to experience in their youth, while others locate utopia at a point farther back in time…”

Not all nostalgia is bad. Clay Routledge has written eloquently about how “nostalgia serves important psychological functions,” and can sometimes possess a positive character that strengthens individuals and society. But the nostalgia found in the works of tech critics is usually a different thing altogether. It is rooted in misery about the present and dread of the future—all because technology has apparently stolen away or destroyed all that was supposedly great about the past. Florman noted how “the current pessimism about technology is a renewed manifestation of pastoralism” that is typically rooted in historical revisionism about bygone eras.[22] Many critics engage in what rhetoricians call “appeals to nature” and wax poetic about the joys of life for Pre-Technological Man, who apparently enjoyed an idyllic life free of the annoying intrusions created by modern contrivances.

Such “good ol’ days” romanticism is largely untethered from reality. “For most of recorded history humanity lived on the brink of starvation,” Wall Street Journal columnist Greg Ip noted in a column in early 2019. Even a cursory review of history offers voluminous, unambiguous proof that the old days were, in reality, eras of abject misery. Widespread poverty, mass hunger, poor hygiene, disease, short lifespans, and so on were the norm. What lifted humanity up and improved our lot as a species is that we learned how to apply knowledge to tasks in a better way through incessant trial-and-error experimentation. Recent books by Hans Rosling,[23] Steven Pinker,[24] and many others[25] have thoroughly documented these improvements to human well-being over time.

The critics are unmoved by such evidence, preferring to just jump around in time and cherry-pick moments when they feel life was better than it is now. “Fond as they are of tribal and peasant life, the antitechnologists become positively euphoric over the Middle Ages,” Florman quipped.[26] Why? Mostly because the Middle Ages lacked the technological advances of modern times, which the critics loathe. But facts are pesky things, and as Florman insisted, “it is fair to go on to ask whether or not life was ‘better’ in these earlier cultures than it is in our own.”[27] “We all are moved to reverie by talk of an arcadian golden age,” he noted. “But when we awaken from this reverie, we realize that the antitechnologists have diverted us with half-truths and distortions.”[28]

The critics’ reverence for the old days would be humorous if it wasn’t rooted in an arrogant and dangerous belief that society can be somehow reshaped to resemble whatever preferred past the critics desire. “Recognizing that we cannot return to earlier times, the antitechnologists nevertheless would have us attempt to recapture the satisfactions of these vanished cultures,” Florman noted. “In order to do this, what is required is nothing less than a change in the nature of man.”[29] That is, the critics will insist that, “something must be done” (namely, something imposed from above via some grand design) to remake humans and discourage their inner homo faber desire to be an incessant tool-builder. But this is madness, Florman argued in one of the best passages from his work:

“we are beginning to realize that for mankind there will never be a time to rest at the top of the mountain. There will be no new arcadian age. There will always be new burdens, new problems, new failures, new beginnings. And the glory of man is to respond to his harsh fate with zest and ever-renewed effort.”[30]

If the critics had their way, however, that zest would be dampened and those efforts restrained in the name of recapturing some mythical lost age. This sort of “rosy retrospection bias” is all the more shocking coming, as it does, from learned people who should know a lot more about the actual history of our species and the long struggle to escape utter despair and destitution. Alas, as the great Scottish philosopher David Hume observed in a 1777 essay, “The humour of blaming the present, and admiring the past, is strongly rooted in human nature, and has an influence even on persons endued with the profoundest judgment and most extensive learning.”[31]

Why Invent? Homo Faber is our Nature

While taking on the critics and debunking their misplaced nostalgia about the past, Florman mounted a defense of engineers and innovators by noting that the need to tinker and create is in our blood. He began by noting how “the nature of engineering has been misconceived”[32] because, in a sense, we are all engineers and innovators to some degree.

Florman’s thinking was very much in line with Benjamin Franklin, who once noted, “man is a tool-making animal.” “Both genetically and culturally the engineering instinct has been nurtured within us,” Florman argued, and this instinct “was as old as the human race.”[33] “To be human is to be technological. When we are being technological we are being human—we are expressing the age-old desire of the tribe to survive and prosper.”[34] In fact, he claimed, it was no exaggeration to say that humans, “are driven to technological creativity because of instincts hardly less basic than hunger and sex.”[35] Had our past situation been as rosy as the critics sometimes suggest, perhaps we would have never bothered to fashion tools to escape those eras! It was precisely because humans wanted to improve their lives and the lives of their loved ones that we started crafting more and better tools. Flint and firewood were never going to suffice.

But our engineering instincts do not end with basic needs. “Engineering responds to impulses that go beyond mere survival: a craving for variety and new possibilities, a feeling for proportion—for beauty—that we share with the artist,” Florman argued.[36] In essence, engineering and innovation respond to both basic human needs and higher ones at every stage of “Maslow’s pyramid,” which describes a five-level hierarchy of human needs. This same theme is developed in Arthur Diamond’s recent book, Openness to Creative Destruction: Sustaining Innovative Dynamism. As Diamond argues, one of the most unheralded features of technological innovation is that, “by providing goods that are especially useful in pursuing a life plan full of challenging, worthwhile creative projects,” it allows each of us to pursue different conceptions of what we consider a good life.[37] But we are only able to do so by first satisfying our basic physiological needs, which innovation also handles for us.

Florman was frustrated that critics failed to understand this point and equally concerned that engineers and innovators had been cast as uncaring gadget-worshipers who did not see beauty and truth in higher arts and other more worldly goals and human values. That’s hogwash, he argued:

“What an ironic turn of events! For if ever there was a group dedicated to—obsessed with—morality, conscience, and social responsibility, it has been the engineering profession. Practically every description of the practice of engineering has stressed the concept of service to humanity.[38] [. . .] Even in an age of global affluence, the main existential pleasure of the engineer will always be to contribute to the well-being of his fellow man.”[39]

Engineers and innovators do not always set out with some grandiose design to change the world, although some aspire to do so. Rather, the “existential pleasures of engineering” that Florman described in the title of his most notable book come about by solving practical day-to-day problems:

“The engineer does not find existential pleasure by seeking it frontally. It comes to him gratuitously, seeping into him unawares. He does not arise in the morning and say, ‘Today I shall find happiness.’ Quite the contrary. He arises and says, ‘Today I will do the work that needs to be done, the work for which I have been trained, the work which I want to do because in doing it I feel challenged and alive.’ Then happiness arrives mysteriously as a byproduct of his effort.”[40]

And this pleasure of getting practical work done is something that engineers and innovators enjoy collectively by coming together and using specialized skills in new and unique combinations. “[T]echnological progress depends upon a variety of skills and knowledge that are far beyond the capacity of any one individual,” he insisted. “High civilization requires a high degree of specialization, and it was toward high civilization that the human journey appears always to have been directed.”[41] Adam Smith could not have said it any better.

“Muddling Through”: Why Trial-and-Error is the Key to Progress

My favorite insights from Florman’s work relate to the way humans have repeatedly faced up to adversity and found ways to “muddle through.” This was the focus of an old essay of mine— “Muddling Through: How We Learn to Cope with Technological Change”—which argued that humans are a remarkably resilient species and that we regularly find creative ways to deal with major changes through constant trial-and-error experimentation and the learning that results from it.[42]

Florman made this same point far more eloquently long ago:

“We have been attempting to muddle along, acknowledging that we are selfish and foolish, and proceeding by means of trial and error. We call ourselves pragmatists. Mistakes are made, of course. Also, tastes change, so that what seemed desirable to one generation appears disagreeable to the next. But our overriding concern has been to make sure that matters of taste do not become matters of dogma, for that is the way toward violent conflict and tyranny. Trial and error, however, is exactly what the antitechnologists cannot abide.”[43]

It is the error part of trial-and-error that is so vital to societal learning. “Even the most cautious engineer recognizes that risk is inherent in what he or she does,” Florman noted. “Over the long haul the improbable becomes the inevitable, and accidents will happen. The unanticipated will occur.”[44] But “[s]ometimes the only way to gain knowledge is by experiencing failure,” he correctly observed.[45] “To be willing to learn through failure—failure that cannot be hidden—requires tenacity and courage.”[46]

I’ve argued that this represents the central dividing line between innovation supporters and technology critics. The critics are so focused on risk-averse, precautionary principle-based thinking that they simply cannot tolerate the idea that society can learn more through trial-and-error than through preemptive planning. They imagine it is possible to override that process and predetermine the proper course of action to create a safer, more stable society. In this mindset, failure is to be avoided at all costs through prescriptions and prohibitions. Innovation is to be treated as guilty until proven innocent in the hope of eliminating the error (or risk / failure) associated with trial-and-error experiments. To reiterate, this logic misses the fact that the entire point of trial-and-error is to learn from our mistakes and “fail better” next time, until we’ve solved the problem at hand entirely.[47]

Florman noted that “sensible people have agreed that there is no free lunch; there are only difficult choices, options, and trade-offs.”[48] In other words, precautionary controls come at a cost. “All we can do is do the best we can, plan where we can, agree where we can, and compromise where we must,” he said.[49] But, again, the antitechnologists absolutely cannot accept this worldview. They are fundamentally hostile to it because they either believe that a precautionary approach will do a better job improving public welfare, or they believe that trial-and-error fails to safeguard any number of other values or institutions that they regard as sacrosanct. This shuts down the learning process from which wisdom is generated. As the old adage goes, “nothing ventured, nothing gained.” There can be no reward without some risk, and there can be no human advance unless we are free to learn from the error portion of trial-and-error.

The Costs of Precautionary Regulation

Florman did not spend much time in his writing mulling over the finer points of public policy, but he did express skepticism about our collective ability to define and enforce “the public interest” in various contexts. A great many regulatory regimes—and their underlying statutes—rest on the notion of “protecting the public interest.” It is impossible to be against that notion, but it is often equally impossible to define what it even means.[50]

This leads to what Florman called “the search for virtues that nobody can define.”[51] “As engineers we are agreed that the public interest is very important; but it is folly to think that we can agree on what the public interest is. We cannot even agree on the scientific facts!”[52] This is especially true today in debates over what constitutes “responsible innovation” or “ethical innovation.”[53] What Florman noted about such conversations three decades ago is equally true today:

“Whenever engineering ethics is on the agenda, emotions come quickly to a boil. […] It is oh so easy to mouth clichés, for example to pledge to protect the public interest, as the various codes of engineering ethics do. But such a pledge is only a beginning and hardly that. The real questions remain: What is the public interest, and how is it to be served?”[54]

That reality makes it extremely difficult to formulate consensus regarding public policies for emerging technologies. And it makes it particularly difficult to define and enforce a “precautionary principle” for emerging technologies that will somehow strike the Goldilocks balance of getting things just right. This was the focus of my 2016 book Permissionless Innovation, which argued that the precautionary principle should be the last resort when contemplating innovation policy. Experimentation with new technologies and business models should generally be permitted by default because, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about,” I argued. The precautionary principle should only be tapped when the harms alleged to be associated with a new technology are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion.

For his part, Florman did not want to get his defense of engineering mixed up with politics and regulatory considerations. Engineers and technologists, he noted, come in many flavors and supported many different causes. Generally speaking, they tend to be quite pragmatic and shun strong ideological leanings and political pronouncements.

Of course, at some point, there is no avoiding this fight; one must comment on how to strike the right balance when politics enters the picture and threatens to stifle technological creativity. Florman’s perspectives on regulatory policy were somewhat jumbled, however. On one hand, he expressed concern about excessive and misguided regulations, but he also saw government playing an important role both in supporting various types of engineering projects and in regulating certain technological developments:

“The regulatory impulse, running wild, wreaks havoc, first of all by stifling creative and productive forces that are vital to national survival. But it does harm also—and perhaps more ominously—by fomenting a counter-revolution among outraged industrialists, the intensity of which threatens to sweep away many of the very regulations we most need.”[55]

In his 1987 book, The Civilized Engineer, Florman even expressed surprise and regret about growing pushback against regulation during the Reagan years. He also expressed skepticism about “the deceptive allure” of benefit-cost analysis, which was on the rise at the time, saying that the “attempt to apply mathematical consistency to the regulatory process was deplorably simplistic.”[56] I have always been a big believer in the importance of benefit-cost analysis (BCA), so I was surprised to read of Florman’s skepticism of it. But he was writing in the early days of BCA, and it was not entirely clear how well it would work in practice. Four decades on, BCA has become far more rigorous, academically respected, and well-established throughout government. It has widespread and bipartisan support as a policy evaluation tool.

Florman adamantly opposed any sort of “technocracy”—or administration of government by technically-skilled elites. He thought it was silly that so many tech critics believe that such a thing already existed. “The myth of the technocratic elite is an expression of fear, like a fairy tale about ogres,” he argued. “It springs from an understandable apprehension, but since it has no basis in reality, it has no place in serious discourse.”[57] Nor did he believe that there was any real chance a technocracy would ever take hold. “No matter how complex technology becomes, and no matter how important it turns out to be in human affairs, we are not likely to see authority vested in a class of technocrats.”[58]

Florman hoped for wiser administration of law and regulations that affected engineering endeavors and innovation more generally. Like so many others, he did not necessarily want more law, just better law. One cannot fault that instinct, but Florman was not really interested in fleshing out the finer details of policy about how to accomplish that objective. He preferred instead to use history as a rough guide for policy. From the fall of the Roman Empire to the decline of Britain’s economic might in more recent times, Florman observed the ways in which societal and governmental attitudes toward innovation influenced the relative growth of science, technology, and national economies. In essence, he was explaining how “innovation culture” and “innovation arbitrage” had been realities for far longer than most people realize.[59]

“Where the entrepreneurial spirit cannot be rewarded, and where non-productive workers cannot be discharged, stagnation will set in,” Florman concluded.[60] This is very much in line with the thinking of economic historians like Joel Mokyr[61] and Deirdre McCloskey,[62] who have identified how attitudes toward creativity and entrepreneurialism affect the aggregate innovative capacity of nations, and thus their competitive advantage and relative prosperity in the world.

Debunking Determinism, Anxiety & Alienation Concerns

One of the ironies of modern technological criticism is the way many critics can’t seem to get their story straight when it comes to “technological determinism” versus social determinism. In the extreme view, technological determinism is the idea that technology drives history and almost has a will of its own. It is like an autonomous force that is practically unstoppable. By contrast, social determinism means that society (individuals, institutions, etc.) guides and controls the development of technology.

In the field of Science and Technology Studies, technological determinism is a hotly debated topic. Academic and social critics are fond of painting innovation advocates as rigid tech determinists who are little better than uncaring, anti-humanistic gadget-worshipers. The critics have employed a variety of other creative labels to describe tech determinism, including “techno-fundamentalism,” “technological solutionism,” and even “techno-chauvinism.”

Engineers and other innovators often get hit with such labels and accused of being rigid technological determinists who just want to see tech plow over people and politics. But this was, and remains, a ridiculous argument. Sure, there will always be some wild-eyed futurists and extropian extremists who make preposterous claims about how “there is no stopping technology.” “Even now the salvation-through-technology doctrine has some adherents whose absurdities have helped to inspire the antitechnological movement,” Florman said.[63] But that hardly represents the majority of innovation supporters, who well understand that society and politics play a crucial role in shaping the future course of technological development.

As Florman noted, we can dismiss extreme deterministic perspectives for a rather simple reason: technologies fail all the time! “If promising technologies can suffer fatal blows from unexpected circumstances,” Florman correctly argued, then “[t]his means that we are still—however precariously—in control of our own destiny.”[64] He believed that, “technology is not an independent force, much less a thing, but merely one of the types of activities in which people engage.”[65] The rigid view of tech determinism can be dismissed, he said, because “it can be shown that technology is still very much under society’s control, that it is in fact an expression of our very human desires, fancies, and fears.”[66]

But what is amazing about this debate is that some of the most rigid technological determinists are the technology critics themselves! Recall how Florman began his 6-part taxonomy of common complaints from tech critics. “A primary characteristic of the antitechnologists,” Florman argued, “is the way in which they refer to ‘technology’ as a thing, or at least a force, as if it had an existence of its own” and which “has escaped from human control and is spoiling our lives.”[67]

He noted that many of the leading tech critics of the post-war era often spoke in remarkably deterministic ways. “The idea that a man of the masses has no thoughts of his own, but is something on the order of a programmed machine, owes part of its popularity with the antitechnologists to the influential writings of Herbert Marcuse,” he believed.[68] But then such thinking accelerated and gained greater favor with the popularity of critics like French philosopher Jacques Ellul, American historian Lewis Mumford, and American cultural critic Neil Postman.

Their books painted a dismal portrait of a future in which humans were subjugated to the evils of “technique” (Ellul), “technics” (Mumford), or “technopoly” (Postman).  The narrative of their works read like dystopian science fiction. Essentially, there was no escaping the iron grip that technology had on us. Postman claimed, for example, that technology was destined to destroy “the vital sources of our humanity” and lead to “a culture without a moral foundation” by undermining “certain mental processes and social relations that make human life worth living.”

Which gets us to commonly heard concerns about how technology leads to “anxiety” and “alienation.” “Having established the view of technology as an evil force, the antitechnologists then proceed to depict the average citizen as a helpless slave, driven by this force to perform work he detests,” Florman noted.[69] “Anxiety and alienation are the watchwords of the day, as if material comforts made life worse, rather than better.”[70]

These concerns about anxiety, alienation, and “dehumanization” are omnipresent in the work of modern tech critics, and they are also tied up with traditional worries about “conspicuous consumption.” It’s all part of the “false consciousness” narrative they also peddle, which basically views humans as too ignorant to look out for their own good. In this worldview, people are sheep being led to the slaughter by conniving capitalists and tech innovators, who are just trying to sell them things they don’t really need.

Florman pointed out how preposterous this line of thinking is when he noted how critics seem to always forget that, “a basic human impulse precedes and underlies each technological development”:[71]

“Very often this impulse, or desire, is directly responsible for the new invention. But even when this is not the case, even when the invention is not a response to any particular consumer demand, the impulse is alive and at the ready, sniffing about like a mouse in a maze, seeking its fulfillment. We may regret having some of these impulses. We certainly regret giving expression to some of them. But this hardly gives us the right to blame our misfortunes on a devil external to ourselves.”[72]

Consider the automobile, for example. Industrial era critics often focused on it and lambasted the way they thought industrialists pushed auto culture and technologies on the masses. Did we really need all those cars? All those colors? All those options? Did we really even need cars? The critics wanted us to believe that all these things were just imposed upon us. We were being force-fed options we really didn’t even need or want. “Choice” in this worldview is just a fiction; a front for the nefarious ends of our corporate overlords.

Florman demolished this reasoning throughout his books. “However much we deplore the growth of our automobile culture, clearly it has been created by people making choices, not by a runaway technology,” he argued.[73] Consumer demand and choice is not some fiction fabricated and forced upon us, as the antitechnologists suggest. We make decisions. “Those who would blame all of life’s problems on an amorphous technology, inevitably reject the concept of individual responsibility,” Florman retorted. “This is not humanism. It is a perversion of the humanistic impulse.”[74]

A modern tweak on the conspicuous consumption and false consciousness arguments is found in the work of leading tech critics like Evgeny Morozov, who pens attention-grabbing screeds decrying what he regards as “the folly of technological solutionism.” Morozov bluntly states that “our enemy is the romantic and revolutionary problem solver who resides within” all of us, but most specifically within the engineers and technologists.[75]

But would the world really be a better place if tinkerers didn’t try to scratch that itch?[76] In 2021, the Wall Street Journal profiled JoeBen Bevirt, an engineer and serial entrepreneur who has been working to bring flying cars from sci-fi to reality. Channeling Florman’s defense of the existential pleasures associated with engineering, Bevirt spoke passionately about the way innovators can help “move our species forward” through their constant tinkering to find solutions to hard problems. “That’s kind of the ethos of who we are,” he said. “We see problems, we’re engineers, we work to try to fix them.”[77]

When tech critics like Morozov decry “solutionism,” they are essentially saying that innovators like Bevirt need to just shut up and sit down. Don’t try to improve the world through tinkering; just settle for the status quo, the critics basically state. That’s the kiss of death for human progress, however, because it is only through incessant experimentation with new and different approaches to hard problems that we can advance human well-being. “Solutionism” isn’t about just creating some shiny new toy; it’s about expanding the universe of potentially life-enriching and life-saving technologies available to humanity.

Conclusion

This review of Samuel Florman’s work may seem comprehensive, but it only scratches the surface of his wide-ranging writing. Florman was troubled that engineering lacked broader support, or at least understanding. Perhaps that was because, he reasoned, “[t]here is no single truth that embodies the practice of engineering, no patron saint, no motto or simple credo. There is no unique methodology that has been distilled from millennia of technological effort.” Or, more simply, it may be that the profession lacked articulate defenders. “The engineer may merely be waiting for his Shakespeare,” he suggested.[78]

Through his life’s work, however, Samuel Florman became that Shakespeare; the great bard of engineering and passionate defender of technological innovation and rational optimism more generally. In looking for a quote or two to close out my latest book, I ended with this one from Florman:

“By turning our backs on technological change, we would be expressing our satisfaction with current world levels of hunger, disease, and privation. Further, we must press ahead in the name of the human adventure. Without experimentation and change our existence would be a dull business.”[79]

Let us resolve to make sure that Florman’s greatest fear does not come to pass. Let us resolve to make sure that the great human adventure never ends. And let us resolve to counter the antitechnologists and their fundamentally anti-humanist worldview, which would most assuredly make our existence the “dull business” that Florman dreaded.

We can do better when we put our minds and hands to work innovating in an attempt to build a better future for humanity. Samuel Florman, the great prophet of progress, showed us the way forward.

 


Endnotes:

[1]    Matt Ridley, The Rational Optimist: How Prosperity Evolves (New York: Harper Collins, 2010).

[2]    Adam Thierer, “Defending Innovation Against Attacks from All Sides,” Discourse, November 9, 2021, https://www.discoursemagazine.com/ideas/2021/11/09/defending-innovation-against-attacks-from-all-sides.

[3]    Samuel C. Florman, The Civilized Engineer (New York: St. Martin’s Griffin, 1987), p. 26.

[4]    Samuel C. Florman, The Existential Pleasures of Engineering (New York: St. Martin’s Griffin, 2nd Edition, 1994), p. 53-4.

[5]    Existential Pleasures of Engineering, p. 53-4.

[6]    Samuel C. Florman, Blaming Technology: The Irrational Search for Scapegoats (New York: St. Martin’s Press, 1981), p. 186.

[7]    Existential Pleasures of Engineering, p. 76.

[8]    Existential Pleasures of Engineering, p. 77.

[9]    The Civilized Engineer, p. 38.

[10]   Thomas Sowell, The Vision of the Anointed: Self-Congratulation as a Basis for Social Policy (New York: Basic Books, 1995).

[11]   Existential Pleasures of Engineering, p. 72.

[12]   Existential Pleasures of Engineering, p. 76.

[13]   The Civilized Engineer, p. 35.

[14]   Existential Pleasures of Engineering, p. 102.

[15]   Blaming Technology, p. 162.

[16]   Existential Pleasures of Engineering, p. 55.

[17]   Blaming Technology, p. 70.

[18]   Existential Pleasures of Engineering, p. 77.

[19]   Existential Pleasures of Engineering, p. 60.

[20]   Adam Thierer, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology 14, no. 1 (2013), p. 312–50, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2012494.

[21]   Existential Pleasures of Engineering, p. 62.

[22]   Blaming Technology, p. 9.

[23]   Hans Rosling, Factfulness: Ten Reasons We’re Wrong about the World—and Why Things Are Better Than You Think (New York: Flatiron Books, 2018).

[24]   Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking, 2018).

[25]   Gregg Easterbrook, It’s Better than It Looks: Reasons for Optimism in an Age of Fear (New York: Public Affairs, 2018); Michael A. Cohen & Micah Zenko, Clear and Present Safety: The World Has Never Been Better and Why That Matters to Americans (New Haven, CT: Yale University Press, 2019).

[26]   Existential Pleasures of Engineering, p. 54.

[27]   Existential Pleasures of Engineering, p. 72.

[28]   Existential Pleasures of Engineering, p. 72.

[29]   Existential Pleasures of Engineering, p. 55.

[30]   Existential Pleasures of Engineering, p. 117.

[31]   David Hume, “Of the Populousness of Ancient Nations,” (1777), https://oll.libertyfund.org/titles/hume-essays-moral-political-literary-lf-ed.

[32]   The Civilized Engineer, p. 20.

[33]   Existential Pleasures of Engineering, p. 6.

[34]   The Civilized Engineer, p. 20.

[35]   Existential Pleasures of Engineering, p. 115.

[36]   The Civilized Engineer, p. 20.

[37]   Arthur Diamond, Openness to Creative Destruction: Sustaining Innovative Dynamism (Oxford: Oxford University Press, 2019).

[38]   Existential Pleasures of Engineering, p. 19.

[39]   Existential Pleasures of Engineering, p. 147.

[40]   Existential Pleasures of Engineering, p. 148.

[41]   The Civilized Engineer, p. 30.

[42]   Adam Thierer, “Muddling Through: How We Learn to Cope with Technological Change,” Medium, June 30, 2014, https://medium.com/tech-liberation/muddling-through-how-we-learn-to-cope-with-technological-change-6282d0d342a6.

[43]   Existential Pleasures of Engineering, p. 84.

[44]   The Civilized Engineer, p. 71.

[45]   The Civilized Engineer, p. 72.

[46]   The Civilized Engineer, p. 72.

[47]   Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.

[48]   The Civilized Engineer, p. xi.

[49]   Existential Pleasures of Engineering, p. 85.

[50]   Adam Thierer, “Is the Public Served by the Public Interest Standard?” The Freeman, September 1, 1996,  https://fee.org/articles/is-the-public-served-by-the-public-interest-standard.

[51]   The Civilized Engineer, p. 84.

[52]   Existential Pleasures of Engineering, p. 22.

[53]   Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.

[54]   The Civilized Engineer, p. 79.

[55]   Blaming Technology, p. 106.

[56]   The Civilized Engineer, p. 158.

[57]   Blaming Technology, p. 41.

[58]   Blaming Technology, p. 40-1.

[59]   Adam Thierer, “Embracing a Culture of Permissionless Innovation,” Cato Online Forum, November 17, 2014, https://www.cato.org/publications/cato-online-forum/embracing-culture-permissionless-innovation; Christopher Koopman, “Creating an Environment for Permissionless Innovation,” Testimony before the US Congress Joint Economic Committee, May 22, 2018, https://www.mercatus.org/publications/creating-environment-permissionless-innovation.

[60]   The Civilized Engineer, p. 117.

[61]   Joel Mokyr, Lever of Riches: Technological Creativity and Economic Progress (New York: Oxford University Press, 1990).

[62]   Deirdre N. McCloskey, The Bourgeois Virtues: Ethics for an Age of Commerce (Chicago: The University of Chicago Press, 2006); Deirdre N. McCloskey, Bourgeois Dignity: Why Economics Can’t Explain the Modern World (Chicago: The University of Chicago Press. 2010).

[63]   Existential Pleasures of Engineering, p. 57.

[64]   Blaming Technology, p. 22.

[65]   Existential Pleasures of Engineering, p. 58.

[66]   Blaming Technology, p. 10.

[67]   Existential Pleasures of Engineering, p. 48, 53.

[68]   Existential Pleasures of Engineering, p. 70.

[69]   Existential Pleasures of Engineering, p. 49.

[70]   Existential Pleasures of Engineering, p. 16.

[71]   Existential Pleasures of Engineering, p. 61.

[72]   Existential Pleasures of Engineering, p. 61.

[73]   Existential Pleasures of Engineering, p. 60.

[74]   Blaming Technology, p. 104.

[75]   Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (New York: Public Affairs, 2013).

[76]   Adam Thierer, “A Net Skeptic’s Conservative Manifesto,” Reason, April 27, 2013, https://reason.com/2013/04/27/a-net-skeptics-conservative-manifesto-2/.

[77]   Emily Bobrow, “JoeBen Bevirt Is Bringing Flying Taxis from Sci-Fi to Reality,” Wall Street Journal, July 9, 2021, https://www.wsj.com/articles/joeben-bevirt-is-bringing-flying-taxis-from-sci-fi-to-reality-11625848177.

[78]   Existential Pleasures of Engineering, p. 96.

[79]   Blaming Technology, p. 193.

New Mercatus Center Report on Industrial Policy https://techliberation.com/2021/11/17/new-mercatus-center-report-on-industrial-policy/ https://techliberation.com/2021/11/17/new-mercatus-center-report-on-industrial-policy/#comments Wed, 17 Nov 2021 21:21:29 +0000 https://techliberation.com/?p=76921

The Mercatus Center has just released a new special study that I co-authored with Connor Haaland entitled, “Does the United States Need a More Targeted Industrial Policy for High Tech?” With industrial policy reemerging as a major issue — and with Congress still debating a $250 billion, 2,400-page industrial policy bill — our report does a deep dive into the history of various industrial policy efforts both here and abroad over the past half century. Our 64-page survey of the historical record leads us to conclude that “targeted industrial policy programs cannot magically bring about innovation or economic growth, and government efforts to plan economies from the top down have never had an encouraging track record.”

We zero in on the distinction between general versus targeted economic development efforts and argue that:

whether we are referring to federal, state, or local planning efforts—the more highly tar­geted development efforts typically involve many tradeoffs that are often not taken into consider­ation by industrial policy advocates. Downsides include government steering of public resources into unproductive endeavors, as well as more serious problems, such as cronyism and even corruption.

We also stress the need to more tightly define the term “industrial policy” to ensure rational evaluation is even possible. We argue that “industrial policy has intentionality and directionality, which distinguishes it from science policy, innovation policy, and economic policy more generally.” We like the focused definition used by economist Nathaniel Lane, who defines industrial policy as “intentional political action meant to shift the industrial structure of an economy.”

Our report examines the so-called “Japan model” of industrial policy that was all the rage in intellectual circles a generation ago and then compares it to the Chinese and European industrial policy efforts of today, which many pundits claim that the US needs to mimic. We find problems with those models and argue that:

America’s goal should not be to “imitate China” or “copy its playbook” when it comes to targeted industrial policy and technological governance of AI and other high-tech sectors. Europe’s approach, although not as heavy-handed, is also not a good model. Not only would the Chinese and European approaches potentially undermine the permissionless innovation ethos that made America’s tech companies become global powerhouses, but expanded industrial policy efforts would entail massive state bets on risky ventures using taxpayer resources.

We discuss the public choice dynamics surrounding many industrial development efforts and note that “what is often described as ‘industrial policy’ is in reality nothing more than industrial politics.” We highlight how many of the largest industrial policy programs have been prone to highly inefficient contracting procedures and massive cost overruns. Sometimes outright corruption even becomes a problem with some of the largest programs. But that’s not the only cost. Sometimes, in their effort to promote specific industrial outputs or outcomes, governments undermine the very innovation they hope to spur.

When governments repress the entrepreneurial spirit of their most innovative creators and companies, this is bound to have negative ramifications for long-term competitiveness and economic growth. Heavy-handed industrial policy schemes can contribute to this sort of repression as the state gains more levers of control over private companies.

We note how that has certainly been the case in the European Union, where “countries have adopted a highly precautionary regulatory model for new digital sectors that shuns risk-taking and focuses on maximizing other values at the expense of disruptive change. This approach has resulted in fewer national champions, and it has cost Europe in terms of global competitive advantage,” we note. We also highlight the long string of failed European industrial policy programs.

Ours is not a doctrinaire analysis; we take a pragmatic approach to the evaluation of industrial policy programs and proposals. Some of them may succeed based simply on the reality that “if government officials roll the proverbial industrial policy dice enough times, some bets are bound to pay off, at least indirectly.” But any serious analysis of these efforts, we argue, must fully weigh the trade-offs associated with the potential tax and compliance burdens associated with funding them to begin with.

But we admit that, “industrial policy will always be with us to some extent, given the sheer size of government and the many existing programs already devoted to economic development or high-tech initiatives.” Toward that end, we wrap up the paper with a variety of high-level recommendations about industrial policy. We highlight how:

The priority should be generalized economic development over targeted development efforts. The most important thing that policymakers can do to boost economic opportunities is to create a legal and regulatory environment that is conducive to entrepreneurship, investment, innovation, and free trade. [. . .] Government should focus on setting the table for entrepreneurial activity instead of trying to determine everything on the plate. To put this differently, policymakers need to avoid the "fun stuff" and focus on "boring" issues that often get neglected.

We apply these insights to the ongoing debate over regional economic development and the specific effort currently underway at the federal level to encourage “regional innovation hubs,” as federal and state lawmakers look to create “the next Silicon Valley” elsewhere.

In terms of our nation's overall investment in R&D, we note that "[t]he United States has the most vibrant venture capital (VC) market in the world, and this market helps support risky ventures without gambling with taxpayer dollars." While some bemoan the fact that private enterprise provides the bulk of R&D expenditures in the US – and that amount is increasing relative to governmental sources – this is actually something that should be celebrated. The strength of privately funded R&D helps set the US apart and make investment markets nimbler and more responsive to real-world needs. Moreover, global unicorn growth in the US continues at a healthy clip: from 2010 to mid-2021, the US created 53 percent of global unicorns, compared with 20 percent for China. These facts are often overlooked in industrial policy debates.

While our paper is comprehensive, admittedly there are some things we leave out of the analysis or do not spend as much time discussing. For example, there is a never-ending debate about the relationship between national security and industrial policy that raises many hard questions. A nation needs military hardware to defend itself, and almost every program to provide weapons and military equipment in the US involves private contracting. These are the biggest industrial policy programs of all, but we don't spend a lot of time focusing on them in our paper because that would have taken us far afield.

We have a short section on these issues that notes how "defense-related programs have also been prone to highly inefficient contracting procedures and massive cost overruns." Many of these programs remain vital, however, and we must find a way to make them more efficient and cost-effective. But there are still other issues related to national security and industrial policy that raise hard questions, including export and import controls, trade restrictions, and more. These continue to be challenging issues, and I personally hope to revisit some of them in upcoming essays.

With Congress still trying to finalize its mega industrial policy bill, our paper is relevant to the short-term debate over these issues. But our hope is that this paper offers a big-picture, long-term framework for thinking through the challenges associated with industrial policy issues both here and abroad.

Here is the outline of the paper and, again, you can find it at this link. (The report can also be found on SSRN & Research Gate).

  1. Introduction: Definitional Challenges
  2. Calls for Expanding Industrial Policy to Boost High-Tech Innovation
  3. Some (Quickly Forgotten) Recent History
  4. The Romantic View of Industrial Policy vs. Reality
  5. The Challenge of Creating “National Champions”: Europe’s Failures
  6. Adverse Effects of State-Led Promotion: The China Model Examined
  7. Where Does Real Competitive Advantage Come From?
  8. Industrial Policy Did Not Give Us the Internet and the iPhone
  9. Evaluating Other Industrial Policy Efforts
  10. Using Competitions and Prizes to Encourage Innovation More Efficiently
  11. Conclusion: Generality Is Better Than Targeting

Lavoie’s Lessons for Industrial Policy Planners https://techliberation.com/2021/11/09/lavoies-lessons-for-industrial-policy-planners/ https://techliberation.com/2021/11/09/lavoies-lessons-for-industrial-policy-planners/#comments Tue, 09 Nov 2021 15:55:23 +0000 https://techliberation.com/?p=76917

Discourse magazine recently published my essay on what “Industrial Policy Advocates Should Learn from Don Lavoie.” With industrial policy enjoying a major revival in the U.S. — several major federal proposals are pending or already set to go into effect — I argue that Lavoie’s work is worth revisiting, especially as this weekend was the 20th anniversary of his untimely passing. Jump over to Discourse to read the entire thing.

But one thing I wanted to just briefly highlight here is the useful tool Lavoie created that helped us think about the “planning spectrum,” or the range of different industrial policy planning motivations and proposals. On one axis, he plotted “futurist” versus “preservationist” advocates and proposals, with the futurists wanting to invest in new skills and technologies, while the preservationists seek to prop up existing sectors. On the other axis, he contrasted “left-wing or pro-labor” and “right-wing or pro-business” advocates and proposals.

Lavoie used this tool to help highlight the remarkable intellectual schizophrenia among industrial policy planners, who all claimed to have the One Big Plan to save the economy. The problem, Lavoie noted, was that all their plans differed greatly. For example, he did a deep dive into the work of Robert Reich and Felix Rohatyn, who were both outspoken industrial policy advocates during the 1980s. Reich was affiliated with Harvard’s Kennedy School of Government at that time, and Rohatyn was a well-known Wall Street financier. The industrial policy proposals set forth by Reich and Rohatyn received enormous media and academic attention at the time, yet no one except Lavoie seriously explored the many ways in which their proposals differed so fundamentally. Rohatyn was slotted in the lower right quadrant because of his desire to prop up old sectors and ensure the health of various private businesses. Reich fell into the upper quadrant as more of a futurist in his desire to have the government promote newer skills, sectors, and technologies.

After identifying the many inconsistencies among these planners and their proposed schemes, Lavoie pointed out that these differences raised some obvious questions: Whose plan are we supposed to follow when proposed plans conflict? And how much stock should we place in the wisdom of industrial policy when the leading advocates cannot even agree on which sectors and technologies are worth preserving or promoting? It was a simple but powerful insight that should lead us to question anyone who pretends to have all the answers when it comes to industrial policy planning. And, as I argue in my new essay, this insight helps us identify the continuing intellectual schizophrenia among industrial policy planners and schemes today. If you jump over to my longer piece, you’ll see my breakdown of all this.

In the end, I conclude that:

The limitations of industrial policy exist regardless of the policymaker’s intentions. There are no “good guys” versus “bad guys” when it comes to industrial policy efforts; there are just many people with many different technocratic plans, all of which are constrained by limited knowledge and resources.

Moreover, Lavoie’s most important piece of relevant advice is the simple adage that if you find yourself in a hole, it is wise to stop digging. Constantly doubling down on planning efforts is not going to help governments escape the problems created by their earlier interventions. Unfortunately, this is exactly what many industrial policy advocates do: They insist that America already has an industrial policy, but that it lacks the sort of conscious design or coherent form or direction they desire. But that is the typical sort of hubris and folly we’ve always heard from planners. They always think there’s a proverbial “better path” out there and want us to imagine that they can lead us down it with wiser planning that avoids the problems of all those past failed planning efforts.

As Lavoie taught us long ago, we’d be wise to reject their various schemes and recommendations. “In light of the inherent deficiencies of central planning, it might be argued that the U.S. should instead try to reduce current government interference with the competitive process to the absolute minimum consistent with other political goals,” he concluded. It remains wise advice for today’s policymakers.


What Explains the Rebirth of Analog Era Media? https://techliberation.com/2021/10/01/76908/ https://techliberation.com/2021/10/01/76908/#comments Fri, 01 Oct 2021 15:37:36 +0000 https://techliberation.com/?p=76908

What explains the rebirth of analog era media? Many people (including me!) predicted that vinyl records, turntables, broadcast TV antennas, and even printed books were destined for the dustbin of technological history. We were so wrong, as I note in this new oped that has gone out through the Tribune Wire Service.

“Many of us threw away our record collections and antennas and began migrating from physical books to digital ones,” I note. “Now, these older technologies are enjoying a revival. What explains their resurgence, and what’s the lesson?”

I offer some data about the rebirth of analog era media as well as some possible explanations for their resurgence. “With vinyl records and printed books, people enjoy making a physical connection with the art they love. They want to hold it in their hands, display it on their wall and show it off to their friends. Digital music or books don’t satisfy that desire, no matter how much more convenient and affordable they might be. The mediums still matter.”

Read more here. Meanwhile, my own personal vinyl collection continues to grow without constraint! …

Existential Risk & Emerging Technology Governance https://techliberation.com/2020/08/05/existential-risk-emerging-technology-governance/ https://techliberation.com/2020/08/05/existential-risk-emerging-technology-governance/#comments Wed, 05 Aug 2020 16:51:39 +0000 https://techliberation.com/?p=76795

“The world should think better about catastrophic and existential risks.” So says a new feature essay in The Economist. Indeed it should, and that includes existential risks associated with emerging technologies.

The primary focus of my research these days revolves around broad-based governance trends for emerging technologies. In particular, I have spent the last few years attempting to better understand how and why “soft law” techniques have been tapped to fill governance gaps. As I noted in a recent post compiling my writing on the topic:

soft law refers to informal, collaborative, and constantly evolving governance mechanisms that differ from hard law in that they lack the same degree of enforceability. Soft law builds upon and operates in the shadow of hard law. But soft law lacks the same degree of formality that hard law possesses. Despite many shortcomings and criticisms, compared with hard law, soft law can be more rapidly and flexibly adapted to suit new circumstances and address complex technological governance challenges. This is why many regulatory agencies are tapping soft law methods to address shortcomings in the traditional hard law governance systems.

As I argued in recent law review articles as well as my latest book, I believe that, despite its imperfections, soft law has an important role to play in filling governance gaps that hard law struggles to address. But there are some instances where soft law simply will not cut it. As I noted in Chapter 7 of my new book, there may be very legitimate existential threats out there that we should be spending more time addressing because the scope, severity, and probability of severe risk are all present. Hard law solutions will still be needed in such instances, even if they may be challenged by many of the same factors that are fueling the shift toward soft law in other sectors or issues.

Of course, we are immediately confronted with a definitional challenge: What exactly counts as an “existential risk”? I argue that it is important that we spend more time discussing this question because far too many people today throw around the term “existential risk” when referencing risks that are nothing of the sort. For example, increased social media use may indeed be a threat to data security and personal privacy, but those risks are not “existential” in the same way chemical or nuclear weapons proliferation are threats to our existence. This gets to the heart of the matter: the root of “existential” is existence. By definition, an existential risk needs to have some direct bearing on the future of humanity’s ability to survive. Efforts to conflate lesser risks into existential ones cheapen the very meaning of the term.

This shouldn’t be controversial, but somehow it is. Countless pundits today want to suggest that almost every new technological development might somehow pose an existential threat to humanity. But it just isn’t the case. That does not mean their concerns are not important, or potentially deserving of some government attention. It simply means that we need to take risk prioritization more seriously. If everything is an existential risk, then nothing is an existential risk. We must have some sort of ranking of risks if we hope to have a rational conversation about how to use scarce societal resources to address matters of public concern.

These issues are discussed at far greater length in the sections of my book (pgs. 228-240) that you will find embedded down below. How should society deal with “killer robots” or the accelerated development of genetic editing capabilities? What kind of coordinated compliance regime might help address rogue actors who seek to use new technological capabilities for nefarious purposes? What can we learn from past global enforcement efforts for chemical and nuclear weapons? These are just some of the questions I take on in this section of the book and plan to spend more time addressing in coming years. Scan these pages from the book to see my initial thoughts on these matters. But I am really just scratching the surface here. I’ll have much more to say on these matters in coming months and years. It’s a massively complicated topic.

How Are We Ever Going to Stop the Blockbuster Video Monopoly? https://techliberation.com/2020/07/21/how-are-we-ever-going-to-stop-the-blockbuster-video-monopoly/ https://techliberation.com/2020/07/21/how-are-we-ever-going-to-stop-the-blockbuster-video-monopoly/#respond Tue, 21 Jul 2020 14:15:58 +0000 https://techliberation.com/?p=76771

Does anyone remember Blockbuster and Hollywood Video? I assume most of you do, but wow, doesn’t it seem like forever ago when we actually had to drive to stores to get movies to watch at home? What a drag that was!

Yet, just 15 years ago, that was the norm and those two firms were the titans of video distribution, so much so that federal regulators at the Federal Trade Commission looked to stop their hegemony through antitrust intervention. But then those firms and whatever “market power” they possessed quickly evaporated as a wave of Schumpeterian creative destruction swept through video distribution markets. Both those firms and antitrust regulators had completely failed to anticipate the tsunami of technological and marketplace changes about to hit in the form of alternative online video distribution platforms as well as the rise of smartphones and robust nationwide mobile networks.

Today, this serves as a cautionary tale of what happens when regulatory hubris triumphs over policy humility, as Trace Mitchell and I explain in this new essay for National Review Online entitled, “The Crystal Ball of Antitrust Regulators Is Cracked.” As we note:

There is no discernable end point to the process of entrepreneurial-driven change. In fact, it seems to be proliferating rapidly. To survive, even the most successful companies must be willing to quickly dispense with yesterday’s successful business plans, lest they be steamrolled by the relentless pace of technological change and ever-shifting consumer demands. It is easy to understand why some people find it hard to imagine a time when Amazon, Apple, Facebook, and Google won’t be quite as dominant as they are today. But it was equally challenging 20 years ago to imagine that those same companies could disrupt the giants of that era.

Hopefully today’s policymakers will have a little more patience and trust in competition and continued technological innovation to bring us still more wonderful video choices.

[OC] Blockbuster Video US store locations between 1986 and 2019 from r/dataisbeautiful
DIY-Bio, Biohacking & Evasive Entrepreneurialism https://techliberation.com/2020/05/26/diy-bio-biohacking-evasive-entrepreneurialism/ https://techliberation.com/2020/05/26/diy-bio-biohacking-evasive-entrepreneurialism/#respond Tue, 26 May 2020 15:08:28 +0000 https://techliberation.com/?p=76740

Margaret Talbot has written an excellent New Yorker essay entitled, “The Rogue Experimenters,” which documents the growth of the D.I.Y.-bio movement. This refers to the organic, bottom-up, citizen science movement, or “leaderless do-ocracy” of tinkerers, as she notes. I highly recommend you check it out.

As I noted in my new book on Evasive Entrepreneurs and the Future of Governance, “DIY health services and medical devices are on the rise thanks to the combined power of open-source software, 3D printers, cloud computing, and digital platforms that allow information sharing between individuals with specific health needs. Average citizens are using these new technologies to modify their bodies and abilities, often beyond the confines of the law.”

Talbot discusses many of the same examples I discuss in my book, including:

  • the Four Thieves Vinegar collective, which devised instructions for building its own version of the EpiPen;
  • e-nable, an international collective of thirty thousand volunteers that designs and 3-D-prints prosthetic hands and arms (and which has, more recently, distributed more than fifty thousand face shields in more than twenty-five countries);
  • GenSpace and other community biohacking labs; and
  • Open Insulin and Open Artificial Pancreas System.

I like the way Talbot compares these movements to the hacker and start-up culture of the Digital Revolution:

“The D.I.Y.-bio movement, which emerged in the early two-thousands, seems almost evolutionarily adapted to its historical moment,” she argues. “It echoes aspects of startup culture, especially the early days of personal computing, with its garage-based origin stories. First came the hardware, then the software; now even the wetware of life can be created in people’s homes. D.I.Y. bio reflects popular skepticism about professional authority and gatekeeping, but it is not skeptical about learning or expertise.”

She also quotes others on this point, like John Wilbanks, a health technologist at the research nonprofit Sage Bionetworks:

when new biotech companies fail, they tend to sell off their equipment for a discount, and community labs and biohackers scoop it up. Wilbanks told me, “D.I.Y. bio is very similar to the home-brew, hacker-club culture of the late seventies in Silicon Valley. If you’ve not gone on eBay to shop for a DNA sequencer that they can ship to you in twenty-four hours, check it out—there’s a massive secondary market.”

Perhaps the most interesting thing about this bottom-up citizen-science movement is that it runs the political gamut. It includes everyone from “anarcho-libertarians” to those “steeped in social-justice activism,” Talbot says. But they are all generally unified by a commitment to the widespread dissemination of scientific knowledge and transparency in health-related matters. “D.I.Y. biologists often have a greater commitment than their professional counterparts do to making their work open to scrutiny—and available for free on the Internet,” Talbot notes.

“The D.I.Y.-bio ecosystem includes a lot of do-gooders, and many of them have been galvanized by the covid-19 crisis,” she also observes. Quite right. I discussed that fact in the launch essay for my book, “Evasive Entrepreneurialism and Technological Civil Disobedience in the Midst of a Pandemic.” I documented dozens of examples of various individuals and organizations rising up to meet the challenges posed by the pandemic. “Eventually, people take notice of how regulators and their rules encumber entrepreneurial activities, and they act to evade them when public welfare is undermined,” I argued. “Working around the system becomes inevitable when the permission society becomes so completely dysfunctional and counterproductive.” DIY health innovation has gone mainstream out of necessity.

Importantly, Talbot notes that when it comes to what counts as success for DIY health and biohacking, sometimes good enough is, well, good enough. On this point, she quotes Jon Schull, an e-nable (non-commercial 3D-printed prosthetics) co-founder, who says, “it doesn’t matter that e-nable hands aren’t state-of-the-art. The job of professional prostheses-makers, he said, is “to produce something really good, and if it’s merely better than nothing it’s not good enough”—but, in some circumstances, something is better than nothing.”

That is a crucial point for understanding why this movement is so important: Working together in a spontaneous, bottom-up fashion, citizen scientists and tinkerers can act quickly to fill pressing public health needs. Of course, that is exactly what makes these same innovations potentially risky and has some people wondering about the wisdom of such efforts—and the potential need for more regulation.

I wish Talbot had spent a bit more time diving into these ethical and legal questions. I really struggled with them when writing about all this in my new book on evasive entrepreneurialism and technological civil disobedience. She does briefly discuss how some FDA regulations might affect the DIY-bio movement, including efforts like Open Insulin. “Even if Open Insulin begins producing a consistent product, it will have to overcome all kinds of regulatory obstacles to demonstrate safety and purity before taking it to market,” she notes. “Manufacturers of pharmacy-grade medications must provide the F.D.A. with reams of evidence that they can produce the substances with complete consistency, in sterile environments. Proving this level of proficiency can cost millions of dollars.” But Talbot does not spend much more time exploring what might happen next on this front if DIY efforts continue to expand.

“But what should the law say about people… who are creating their own specialized medical devices in an open-source, noncommercial fashion?” I ask in my new book.

I outlined three potential future scenarios for the movement:

  1. DIY technologies go mainstream and become more commercialized.
  2. Biohacking remains decentralized but becomes more mainstream and professional without becoming fully commercial.
  3. Biohacking turns even more rogue or underground in nature, as a form of guerrilla innovation that sometimes borders on neo-anarchism.

Regardless of the outcome, the ethical and regulatory issues will persist and grow as technological capabilities continue to grow more sophisticated, decentralized, and inexpensive. I argue in the book that it would be foolish for policymakers to think they can (or should) bottle up this movement altogether:

biohacking and decentralized medicine will expand for a simple reason: People care deeply about improving their health and abilities. They will take advantage of new technological capabilities that let them do so—especially when those capabilities are significantly cheaper than other options. To reiterate, that does not make these technologies safe or smart, but it does mean we will need a new approach to governance as evasive entrepreneurialism expands in this arena.

And then I continue on to note how improved risk education and awareness efforts might be one solution to the growing DIY bio movement.

Anyway, for more discussion on this, see pages 79-87 of my new book. I’ve also listed a few other essays down below that you might find interesting, including several penned by my former colleague Jordan Reimschisel.


I (Eye), Robot? https://techliberation.com/2019/05/08/i-eye-robot/ https://techliberation.com/2019/05/08/i-eye-robot/#comments Wed, 08 May 2019 14:24:57 +0000 https://techliberation.com/?p=76482

[Originally published on the Mercatus Bridge blog on May 7, 2019.]

I became a little bit more of a cyborg this month with the addition of two new eyes—eye lenses, actually. Before I had even turned 50, the old lenses that Mother Nature gave me were already failing due to cataracts. But after having two operations this past month and getting artificial lenses installed, I am seeing clearly again thanks to the continuing miracles of modern medical technology.

Cataracts can be extraordinarily debilitating. One day you can see the world clearly, the next you wake up struggling to see through a cloudy ocular soup. It is like looking through a piece of cellophane wrap or a continuously unfocused camera.

If you depend on your eyes to make a living as most of us do, then cataracts make it a daily struggle to get even basic things done. I spend most of my time reading and writing each workday. Once the cataracts hit, I had to purchase a half-dozen pair of strong reading glasses and spread them out all over the place: in my office, house, car, gym bag, and so on. Without them, I was helpless.

Reading is especially difficult in dimly lit environments, and even with strong glasses you can forget about reading the fine print on anything. Every pillbox becomes a frightening adventure. I invested in a powerful magnifying glass to make sure I didn’t end up ingesting the wrong things.

For those afflicted with particularly bad cataracts, it becomes extraordinarily risky to drive or operate machinery. More mundane things—watching TV, tossing a ball with your kid, reading a menu at many restaurants, looking at art in a gallery—also become frustrating.

Open Your Eyes to the Wonders of Innovation

In the past, there was very little that could be done about cataracts unless one was willing to undergo extremely dangerous procedures. The oldest type of cataract surgery (“couching”) involved the use of sharp instruments such as thorns and needles to rip the cloudy lens out of the eye. Unsurprisingly, blindness was a common result of this primitive practice. As medical techniques and instruments improved, doctors were able to perform more sophisticated and successful surgeries, albeit still with some risks because human hands were still doing much of the work.

Today, thanks to remarkable advances in medicine, all this is done in a few minutes with the assistance of laser technology. Better yet, patients get to choose exactly what sort of replacement lens they will have installed. I chose “multifocal intraocular” replacement lenses, which let me see near and far equally well.

When you have cataracts in both eyes, they usually perform the surgeries a few weeks apart to make sure one eye comes out alright before getting the other done. Both my outpatient procedures were quick, painless, and remarkably effective. Astonishingly, within 24 hours of having both surgeries, I tested at better than 20/15 vision, which is close to perfect. It was like regaining a lost superpower.

Am I a Cyborg?

My first-hand experience with the miracles of modern medical technology makes me feel even more strongly about what I do for a living. I have spent my life covering emerging technology policy and responding to tech critics, who have a litany of grievances about modern inventions. One common complaint is that today’s technologies are “dehumanizing,” or threaten to turn us all into some sort of cyborgs.

To be sure, my eye surgeries did indeed make me just a little bit less human. After all, I am walking around today with artificial lenses affixed to my eyeballs. Moreover, I previously had eye surgery to correct strabismus, which is basically a form of crossed eyes. Had I remained perfectly “human” or “natural,” I would still be trying to look at the world through two crossed eyes covered with cloudy lenses. No thanks, Mother Nature!

Incidentally, I also have a metal plate and six pins in my ankle from a nasty compound fracture I sustained in the late 1990s. So, my foot isn’t completely “natural” either. But without those implants, I would not likely have walked properly again. Also, due to a combination of bad genes and poor dietary habits, my mouth is full of so many replacement teeth and crowns that I can’t even count them all. Without them, I probably would have needed dentures by age 40, just as my poor grandmother did once her teeth failed her for similar reasons.

Meanwhile, my left knee and right hip have been acting up in recent years, making me wonder if replacements may be needed down the road. Finally, my hearing isn’t so great either after years of abusing my ears at concerts and with speakers played at unhealthy volumes. (Turn down those headphones, kids!) I suspect some sort of hearing supplement awaits me in the future so I can continue to hear properly.

Enhancing Our Humanity

Given the medical procedures I’ve had done or might do, it’s fair to say that the critics are correct: I really am becoming more of a cyborg—part biological, part technological. But what of it? Certainly, my life and the lives of countless other people have been improved thanks to “artificial” improvements to our bodies.

As Joel Garreau noted in his brilliant 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies—And What It Means to Be Human, the history of our species is one of constant improvement to our health and capabilities through technological means. We have augmented our senses and abilities through the use of spectacles, hearing aids, artificial limbs, implants, and various other specialized medicines and treatments. We are living longer, healthier, less painful lives because of it.

Some critics respond by saying that certain “basic” technological improvements to human health are fine, or perhaps should even be subsidized and available to all. One era’s “radical” enhancements become the next generation’s human rights! We have seen that story unfold in the realm of reproductive health, for example. As Jordan Reimschisel and I have documented, in vitro fertilization (IVF) was originally met with hostility in the 1970s, with various authorities objecting to the idea of being able to “play God.” Opposition subsided quickly, however, as public acceptance and demand grew. Today, IVF is often covered by insurance plans.

Still, critics of newer technological capabilities tend to frown upon more sophisticated technological enhancements that could radically enhance our capabilities in ways that supposedly “dehumanize” us. There are always risks associated with new technological capabilities, but through ongoing trial and error experimentation, we find new ways to counter adversity and ailments—and yes, even overcome some of our inherent human limitations. We are not destined to become mindless automatons just because technology enhances our humanity in these ways. Indeed, there is nothing more human than building new and better tools to improve the quality of the lives of people across the globe.

We Can Cope with Change

Critics are fond of falling back on worst-case “technopanic” scenarios ripped from sci-fi novels, movies, and shows to explain how, if we are not careful, we are all just one modification away from creating (or becoming) Frankenstein monsters. We should heed those warnings to some extent, but not to the extent those critics suggest.

There are legitimate ethical issues associated with certain medical treatments and human enhancements. Genetic editing, for example, holds both promise and peril for our species. By modifying our genetic code, we can counter or even defeat debilitating or deadly diseases or ailments before they hobble us or our children. Of course, genetic modification could also be used in unsettling ways by parents or governments to create “designer babies” that have no choice in how their genetic code is altered before birth.

Ethical guidelines, and even some public policies, will need to be crafted and continuously updated to keep pace with these challenges. But we must not let worst-case thinking determine the future of all forms of human modification such that the many possible best-case outcomes are discouraged in the process. That would represent a massive setback for the millions of humans, including the unborn ones, who might be threatened by debilitating ailments.

Just as technological innovation gave me (quite literally) a new outlook on the world, so too can it open up new possibilities for countless others. Each day brings inspiring news about how innovation is helping us overcome whatever ails us. The Wall Street Journal reported recently that, “[s]cientists have harnessed artificial intelligence to translate brain signals into speech, in a step toward brain implants that one day could let people with impaired abilities speak their minds.”

More modern miracles like that await us—so long as critics and regulators don’t hold back important innovations in medical technology. In the meantime, thanks to my new cyborg eyes, I have seven old pairs of reading glasses I no longer need, in case anyone wants them.

Technological Innovation, Economic Growth & Human Flourishing https://techliberation.com/2019/03/13/technological-innovation-economic-growth-human-flourishing/ https://techliberation.com/2019/03/13/technological-innovation-economic-growth-human-flourishing/#comments Wed, 13 Mar 2019 13:04:46 +0000 https://techliberation.com/?p=76461

Why should we really care about technological innovation? My Mercatus Center colleague James Broughel and I have just published a paper answering that question. In “Technological Innovation and Economic Growth: A Brief Report on the Evidence,” we summarize the extensive body of evidence that discusses the relationship between innovation, growth, and human prosperity. We note that while economists, political scientists, and historians don’t agree on much, there exists widespread consensus among them that there is a symbiotic relationship between the pace of innovation and the progress of civilization. Our 27-page paper documenting the academic evidence on this issue can be downloaded on SSRN or from the Mercatus website. Here’s the abstract:

Technological innovation is a fundamental driver of economic growth and human progress. Yet some critics want to deny the vast benefits that innovation has bestowed and continues to bestow on mankind. To inform policy discussions and address the technology critics’ concerns, this paper summarizes relevant literature documenting the impact of technological innovation on economic growth and, more broadly, on living standards and human well-being. The historical record is unambiguous regarding how ongoing innovation has improved the way we live; however, the short-term disruptive aspects of technological change are real and deserve attention as well. The paper concludes with an extended discussion about the relevance of these findings for shaping cultural attitudes toward technology and the role that public policy can play in fostering innovation, growth, and ongoing improvements in the quality of life of citizens.

Debating the Future of Artificial Intelligence: G7 Multistakeholder Conference https://techliberation.com/2018/12/04/debating-the-future-of-artificial-intelligence-g7-multistakeholder-conference/ https://techliberation.com/2018/12/04/debating-the-future-of-artificial-intelligence-g7-multistakeholder-conference/#comments Tue, 04 Dec 2018 15:27:40 +0000 https://techliberation.com/?p=76423

This week I will be traveling to Montreal to participate in the 2018 G7 Multistakeholder Conference on Artificial Intelligence. This conference follows the G7’s recent Ministerial Meeting on “Preparing for the Jobs of the Future” and will also build upon the G7 Innovation Ministers’ Statement on Artificial Intelligence. The goal of Thursday’s conference is to “focus on how to enable environments that foster societal trust and the responsible adoption of AI, and build upon a common vision of human-centric AI.” About 150 participants selected by G7 partners are expected to participate, and I was invited to attend as a U.S. expert, which is a great honor.

I look forward to hearing and learning from other experts and policymakers who are attending this week’s conference. I’ve been spending a lot of time thinking about the future of AI policy in recent books, working papers, essays, and debates. My most recent essay concerning a vision for the future of AI policy was co-authored with Andrea O’Sullivan and it appeared as part of a point/counterpoint debate in the latest edition of the Communications of the ACM. The ACM is the Association for Computing Machinery, the world’s largest computing society, which “brings together computing educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges.” The latest edition of the magazine features about a dozen different essays on “Designing Emotionally Sentient Agents” and the future of AI and machine-learning more generally.

In our portion of the debate in the new issue, Andrea and I argue that “Regulators Should Allow the Greatest Space for AI Innovation.” “While AI-enabled technologies can pose some risks that should be taken seriously,” we note, “it is important that public policy not freeze the development of life-enriching innovations in this space based on speculative fears of an uncertain future.” We contrast two different policy worldviews — the precautionary principle versus permissionless innovation — and argue that:

artificial intelligence technologies should largely be governed by a policy regime of permissionless innovation so that humanity can best extract all of the opportunities and benefits they promise. A precautionary approach could, alternatively, rob us of these life-saving benefits and leave us all much worse off.

That’s not to say that AI won’t pose some serious policy challenges for us going forward that deserve serious attention. Rather, we are warning against the dangers of allowing worst-case thinking to be the default position in these discussions.

But what about some of the policy concerns regarding AI, including privacy, “algorithmic accountability,” or more traditional fears about automation leading to job displacement or industrial disruption? Some of these issues deserve greater scrutiny, but as Andrea and I pointed out in a much longer paper with Raymond Russell, there are often better ways of dealing with such issues than resorting to preemptive, top-down controls on fast-moving, hard-to-predict technologies.

“Soft law” options will often serve us better than old hard law approaches. Soft law mechanisms, as I write in my latest law review article with Jennifer Skees and Ryan Hagemann, are a useful way to bring diverse parties together to address pressing policy concerns without destroying the innovative promise of important new technologies. Among other things, soft law includes multistakeholder processes and ongoing efforts to craft flexible “best practices.” It can also include important collaborative efforts such as this recent IEEE “Global Initiative on Ethics of Autonomous and Intelligent Systems,” which serves as “an incubation space for new standards and solutions, certifications and codes of conduct, and consensus building for ethical implementation of intelligent technologies.” This approach brings together diverse voices from across the globe to develop rough consensus on what “ethically-aligned design” looks like for AI and aims to establish a framework and set of best practices for the development of these technologies over time.

Others have developed similar frameworks, including the ACM itself. The ACM developed a Code of Ethics and Professional Conduct in the early 1970s and then refined it in the early 1990s and then again just recently in 2018. Each iteration of the ACM Code reflected ongoing technological developments from the mainframe era to the PC and Internet revolution and on through today’s machine-learning and AI era. The latest version of the Code “affirms an obligation of computing professionals, both individually and collectively, to use their skills for the benefit of society, its members, and the environment surrounding them,” and insists that computing professionals “should consider whether the results of their efforts will respect diversity, will be used in socially responsible ways, will meet social needs, and will be broadly accessible.” The document also stresses how, “[a]n essential aim of computing professionals is to minimize negative consequences of computing, including threats to health, safety, personal security, and privacy. When the interests of multiple groups conflict, the needs of those less advantaged should be given increased attention and priority.”

Of course, over time, more targeted or applied best practices and codes of conduct will be formulated as new technological developments make them necessary. It is impossible to perfectly anticipate and plan for all the challenges that we may face down the line. But we can establish some rough best practices and ethical guidelines to help us deal with some of them. As we do so, we need to think hard about how to craft those principles and policies in such a way that they do not undermine the potentially amazing, life-enriching — and potentially even life-saving — benefits that AI technologies could bring about.

You can hear more about these and other issues surrounding the future of AI in this 6-minute video that Communications of the ACM put together to coincide with my debate with Oren Etzioni of the Allen Institute for Artificial Intelligence. As you will probably notice, there’s actually a lot more common ground between us in this discussion than you might initially suspect. For example, we agree that it would be a serious mistake to regulate AI at the general-purpose level and that it instead makes more sense to zero in on specific AI applications to determine where policy interventions might be needed.

Of course, things get more contentious when we consider what kind of policy interventions we might want for specific AI applications, and also the much more challenging question about how to define and measure “harm” in this context. And this all assumes we can even come to some general consensus about how to first define what we mean by “artificial intelligence” or “robotics” in general. That’s harder than many realize, and it is important because it has a bearing on the overall scope and practicality of regulation in various contexts.

Another thing that seems to be the source of serious ongoing debate between people in this field concerns the wisdom of creating an entirely new agency or centralized authority of some sort to oversee or guide the development of AI or robotics. I’ve debated that question many times with Ryan Calo, who first pitched the idea a few years back in a working paper for Brookings. In response, I noted that we already have quite a few “robot regulators” in existence today in the form of technocratic agencies that oversee the specific development of various types of robotic and AI-oriented applications. For example, NHTSA already oversees driverless cars, FAA regulates drones, and the FDA handles AI-based medical devices and applications. Will adding another big, over-arching Robotics Commission really add much value to the process? Or will it simply add another bureaucratic layer of red tape to the process of getting life-enriching services out to the public? I doubt, for example, that the Digital Revolution would have been much improved had America created a Federal Computer Commission or Federal Internet Commission 25 years ago.

Moreover, had we adopted such entities, I worry about how the tech companies of an earlier generation might have utilized that process to keep new players and technologies from emerging. As I noted this week in a tweet that got a lot of attention, I used to have the adjoining poster from PC Computing magazine on my office wall over 20 years ago. It was entitled, “Roadmap to Top Online Services,” and showed how the powerful Big 4 online service providers — America Online, Prodigy, Compuserve, and Microsoft — were spreading their tentacles. People used to see this poster on my wall and ask me whether there was any hope of disrupting the perceived choke-hold that these companies had on the market at the time.

Of course, we now look back and laugh at the idea that these firms could have bottled up innovation and kept competition at bay. But ask yourself: When disruptive innovations appeared on the scene, what would those incumbent firms have done if they had regulators to run to for help down at a Federal Computer Commission or Federal Internet Commission? I think we know exactly what they would have done because the lamentable history of so much Federal Communications Commission regulation shows us that the powerful will grab for the levers of power wherever they exist. Some critics don’t accept the idea that “rent-seeking” and regulatory capture are real problems, or they believe that we can find creative ways to avoid those problems. But history shows this has been a recurring problem in countless sectors and one that we should try to avoid as much as possible by not establishing mechanisms that could exclude beneficial forms of competition and innovation from coming about to begin with.

That could certainly happen right now with the regulatory mechanisms already in place. For example, just this week, Jennifer Huddleston Skees and I wrote about the dangers of “Emerging Tech Export Controls Run Amok,” as the Trump Administration ponders a potentially massive expansion of export restrictions on a wide variety of technologies. More than a dozen different AI or autonomous system technologies appear on the list for consideration. That could pose real trouble not just for commercial innovators in this space, but also for non-commercial research and collaborative open source efforts involving these technologies.

Again, that doesn’t mean AI and robotics should develop in a complete policy vacuum. We need “governance” but we don’t need the sort of heavy-handed, top-down, competition-killing, innovation-restricting sort of regulatory regimes of the past. I continue to believe that more flexible, adaptive “soft law” mechanisms provide the reasonable path forward for most of the concerns we hear about AI and robotics today. These are challenging issues, however, and I look forward to learning more from other experts in the field when I visit Montreal for this week’s G7 discussion.


Infrastructure Control as Innovation Regulation https://techliberation.com/2018/08/10/infrastructure-control-as-innovation-regulation/ https://techliberation.com/2018/08/10/infrastructure-control-as-innovation-regulation/#comments Fri, 10 Aug 2018 20:28:51 +0000 https://techliberation.com/?p=76343

The ongoing ride-sharing wars in New York City are interesting to watch because they signal the potential move by state and local officials to use infrastructure management as an indirect form of innovation control or competition suppression. It is getting harder for state and local officials to defend barriers to entry and innovation using traditional regulatory rationales and methods, which are usually little more than a front for cronyist protectionism schemes. Now that the public has increasingly enjoyed new choices and better services in this and other fields thanks to technological innovation, it is very hard to convince citizens they would be better off without more of the same.

If, however, policymakers claim that they are limiting entry or innovation based on concerns about how disruptive actors supposedly negatively affect local infrastructure (in the form of traffic or sidewalk congestion, aesthetic nuisance, deteriorating infrastructure, etc.), that narrative can perhaps make it easier to sell the resulting regulations to the public or, more importantly, the courts. Going forward, I suspect that this will become a commonly-used playbook for many state and local officials looking to limit the reach of new technologies, including ride-sharing companies, electric scooters, driverless cars, drones, and many others.

To be clear, infrastructure control is both (a) a legitimate state and local prerogative; and (b) something that has been used in the past to control innovation and entry in other sectors. But I suspect that this approach is about to become far more prevalent because a full-frontal defense of barriers to innovation is far more likely to face serious public and legal challenges. For example, limiting ride-sharing competition in NYC on the grounds that it hurts local taxi cartels is unappealing to citizens and the courts alike. So, NYC is now making it all about traffic congestion. Even if that regulatory rationale is bunk, it is a much harder narrative to counter in the court of public opinion or the courts of law. For that reason, we can expect more and more state and local governments to just flip the narrative about innovation regulation going forward in this fashion.

How should defenders of innovation and competition respond to state and local efforts to use infrastructure control as an indirect form of innovation regulation? First, call them out on it if it really is just naked protectionism by another name. Second, to the extent there may be something to their asserted concerns about infrastructure problems, propose alternative solutions that do not freeze innovation and new entry outright. The best approach is to borrow a page out of Coase’s playbook and use smarter pricing and property rights solutions. Or perhaps use unique funding mechanisms for new and better infrastructure that could accommodate ongoing entry and innovation.

For example, my Mercatus colleague Salim Furth recently penned a column (“Let Private Companies Pay for More Bike Lanes”) in which he noted how the electric scooter company Bird has offered cities a dollar a day per scooter to help build protected bike lanes. In doing so, Furth notes, Bird is:

offering to enter the long tradition of private provision of public goods. The original subway lines were private. Private institutions have frequently built or maintained public parks. Radio broadcasts, a textbook example of a public good, are largely private in the US. Companies often provide public entertainment because they benefit from the attraction.

In a similar way, Uber has already supported usage-based road pricing to alleviate congestion. We could imagine still other examples like this for emerging technology companies. Drone manufacturers could help create or pay for “aerial sidewalks” or easements so they can deliver goods more efficiently. Scooter and dockless bike companies could help pay for bike and scooter paths either directly or through promotional efforts. Driverless car fleet providers could help build or cover the cost of new parking garages or for road improvements that would help make autonomous systems work better in local communities.

That is the pro-consumer, pro-innovation path forward. Hopefully, state and local officials will embrace such forward-looking reform ideas instead of seeking to indirectly control new entry and competition under the guise of infrastructure management.

3D Printers, Evasive Entrepreneurs and the Future of Tech Regulation https://techliberation.com/2018/08/03/3d-printers-evasive-entrepreneurs-and-the-future-of-tech-regulation/ https://techliberation.com/2018/08/03/3d-printers-evasive-entrepreneurs-and-the-future-of-tech-regulation/#respond Fri, 03 Aug 2018 13:06:52 +0000 https://techliberation.com/?p=76334

By Andrea O’Sullivan and Adam Thierer (First published at The Bridge on August 1, 2018.)

Technology is changing the ways that entrepreneurs interact with, and increasingly get away from, existing government regulations. The ongoing legal battles surrounding 3D-printed weapons provide yet another timely example.

For years, a consortium of techies called Defense Distributed has sought to secure more protections for gun owners by making the code allowing someone to print their own guns available online. Rather than taking their fight to Capitol Hill and spending billions of dollars lobbying in potentially fruitless pursuits of marginal legislative victories, Defense Distributed ties their fortunes to the mast of technological determinism and blurs the lines between regulated physical reality and the open world of cyberspace.

The federal government moved fast, with gun control advocates like Senator Chuck Schumer (D-NY) and former Representative Steve Israel (D-NY) proposing legislation to criminalize Defense Distributed’s activities. They failed.

Plan B in the efforts to quash these acts of 3D-printing disobedience was to classify the computer-aided design (CAD) files that Defense Distributed posted online as a kind of internationally controlled munition. The US State Department engaged in a years-long legal brawl over whether or not Defense Distributed violated established International Traffic in Arms Regulations (ITAR). The group pulled down the files while the issue was examined in court, but the code had long since been uploaded to sharing sites like The Pirate Bay. The files have also been available on the Internet Archive for many years. The CAD, if you will excuse the pun, is out of the bag.

In a surprising move, the Department of Justice dropped the suit and settled with Defense Distributed last month. It agreed to cover the group’s legal fees and cease its attempt to regulate code already easily accessible online. While no legal precedent was set, since this was merely a settlement, it is likely that the government realized its case would be unwinnable.

Gun control advocates did not react well to this legal retreat. This week, a group of eight state attorneys general (AGs) filed a lawsuit against the Trump administration and Defense Distributed to undo the group’s freedom to distribute their code online. Part of their argument is that the administration violated the Administrative Procedure Act as well as the Tenth Amendment by “infringing on states’ rights to regulate firearms.” But the move looks more like a last-ditch effort by the AGs to exert control. Yesterday, a federal judge issued an injunction against Defense Distributed to prevent the files from being uploaded online. But as we mentioned, the files are and have been available across the internet for years now.

The case faces long odds. After all, they are essentially trying to regulate speech, which raises some clear First Amendment flags. This is precisely why the Department of Justice backed away from the case against Defense Distributed, and it echoes the federal government’s previous attempts to crack down on strong encryption practices more than two decades ago. Then, like now, a group of security-minded technologists wanted to bring defense technologies that were still controlled by ITAR regulations to the masses. And then, like now, activists correctly argued that any attempt to stop their online exchanges amounted to an illegal barrier to free speech in the United States. Besides, there wasn’t much that the government could do to turn back the tide of information that had already dispersed across the wide expanse of the web.

As Cody Wilson, the founder of Defense Distributed, put it: “This has been a continuous process of different levels of authority figures trying to stop it from happening and thus allowing it to happen… Of course we are going to succeed—because you all are trying to stop me. That seemed natural and ended up being true.”

Cody Wilson and Defense Distributed are not the only ones using additive manufacturing to change the world and challenge public policy in the process. The “maker” revolution is a phenomenon that is widespread and growing. A 2016 Mercatus journal article on “Guns, Limbs, and Toys: What Future for 3D Printing?” discussed several examples of how additive manufacturing is making the governance of various emerging technologies quite challenging.

For example, “e-NABLE,” which is short for “Enabling the Future,” is a volunteer effort that brings together individuals from across the globe who design 3D-printed prosthetics for individuals (especially children) with limb deficiencies. Volunteers share open source blueprints and other information on various websites with others across the world. Then, they use their own printers to fabricate the limbs. Other entrepreneurs are creating custom 3D-printed orthoses to help children with cerebral palsy walk comfortably and without the aid of crutches. Off-the-shelf solutions were often ineffective and uncomfortable for many kids, which led some parents to craft custom-made orthoses for their own children to help them walk.

These “amateur” prosthetics are already being widely distributed today and are saving many individuals and families significant amounts of money, assuming they could have afforded “professional” prosthetics at all. While prosthetics are medical devices in a traditional regulatory sense, no one making their own is going to the FDA to ask for permission or a “right to try” new 3D-printed limbs. Instead, they are just going ahead and making new prosthetics for people in need. How should we regulate all this bottom-up innovation by average citizens (especially considering how much of it is non-commercial in character)?

Another interesting example from 2016 involved Amos Dudley, a 23-year-old college student with no prior dentistry experience who used a 3D printer and laser scanner at his university to make his own orthodontics for just $60. Dudley’s DIY plastic braces were a dangerous experiment that could have put him, or others, at risk if they followed his lead. But what should the law say about people like Dudley or the eNable innovators who are creating their own specialized medical devices in an open source, non-commercial fashion?

For a more radical example, we can look to the Four Thieves Vinegar Collective, a self-styled techno-anarchist collective dedicated to open sourcing and manufacturing alternatives to costly pharmaceutical medicines. Four Thieves harnesses the combined research output of distributed volunteer chemists, physicists, and programmers to compile and publish step-by-step instructions on how to reverse engineer treatments for maladies like AIDS and anaphylaxis. The group offers downloadable instructions on how to create what it calls the Apothecary MicroLab, a kind of hacked-together at-home compounding kit. The FDA is aware of, and unamused by, Four Thieves’ activities; yet it finds its hands tied by the fact that they haven’t actually done anything illegal in merely exercising their free speech rights.

These are examples of what MIT economist Eric von Hippel calls “free innovation,” or “innovations developed and given away by consumers as a ‘free good.’” Another term for this is “social entrepreneurialism.” As the name implies, an underlying social goal or mission drives social entrepreneurship.

For example, our Mercatus Center colleagues have written about how social entrepreneurs help others in need in their community following disasters. Social entrepreneurial activities are not typically in pursuit of compensation or profit, but that need not always be the case, and the distinction between social and economic entrepreneurialism is sometimes quite blurry.

A great deal of additive manufacturing innovation today springs from a multitude of such “grassroots” or “household” efforts. As this sort of “evasive entrepreneurialism” spreads, it will challenge regulatory regimes that are not equipped to cope with the astonishing pace of change occurring in many technology markets today.

This does not necessarily mean that governments will be completely powerless to stop highly decentralized, bottom-up innovation of this sort. For example, with firearms regulation, a gun is still a gun, regardless of how it is manufactured. Laws governing how and where firearms are carried and used will still be in effect. But “point-of-sale” type regulatory prohibitions will not work as well, obviously.

Likewise, efforts to limit the free flow of information about 3D-printed designs will be almost impossible to enforce once blueprints are available on the internet through peer-to-peer distribution mechanisms and platforms. Finally, it would not make sense for policymakers to affix liability on the makers or distributors of 3D printers because this is a general purpose technology with many other non-controversial uses.

This means that regulation should remain focused on the user and uses of firearms or other 3D-printed devices, regardless of how they are manufactured. There may also be some other steps that governments can take to educate the public about the potential risks associated with these and other examples of free innovation and social entrepreneurship.

But policymakers should also understand that many of these bottom-up innovations are being created or used by average citizens because they fill a public need that many felt was going unmet. Entrepreneurial efforts tend to be hard to bottle up when enough demand exists for action, and the tools are becoming increasingly decentralized, low-cost, and easy to use. Instead of trying to put those technological genies back in their bottles, we are going to need to figure out how to coexist with them.

We Need More Driverless Cars on Public Roads, Not Fewer https://techliberation.com/2018/03/20/we-need-more-driverless-cars-on-public-roads-not-fewer/ https://techliberation.com/2018/03/20/we-need-more-driverless-cars-on-public-roads-not-fewer/#comments Tue, 20 Mar 2018 16:13:11 +0000 https://techliberation.com/?p=76248

By Adam Thierer and Jennifer Huddleston Skees

There was horrible news from Tempe, Arizona this week as a pedestrian was struck and killed by a driverless car owned by Uber. This is the first fatality of its type and is drawing widespread media attention as a result. According to both police statements and Uber itself, the investigation into the accident is ongoing and Uber is assisting in the investigation. While this certainly is a tragic event, we cannot let it cost us the life-saving potential of autonomous vehicles.

While any fatal traffic accident involving a driverless car is certainly sad, we can’t ignore the fact that, each and every day in the United States, letting human beings drive on public roads proves far more dangerous. This single event has led some critics to wonder why we were allowing driverless cars to be tested on public roads at all before they are proven 100% safe. Driverless cars can help reverse a public health disaster decades in the making, but only if policymakers allow real-world experimentation to continue.

Let’s be more concrete about this: Each day, Americans take 1.1 billion trips, driving 11 billion miles in vehicles that weigh, on average, between 1.5 and 2 tons. Sadly, about 100 people die and over 6,000 are injured each day in car accidents. Some 94% of these accidents are attributable to human error, and this deadly trend has been worsening as we become more distracted while driving. Moreover, according to the Centers for Disease Control and Prevention, almost 6,000 pedestrians were killed in traffic accidents in 2016, which works out to roughly one crash-related pedestrian death every 1.5 hours. In Arizona, the issue is even more pronounced, with the state ranked 6th worst for pedestrians and the Phoenix area ranked the 16th worst metro for such accidents nationally.
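The rates cited above follow from simple arithmetic on the rounded figures in the text (treating “almost 6,000” pedestrian deaths as exactly 6,000 is, of course, an approximation):

```python
# Back-of-the-envelope check of the crash statistics cited above.
# All inputs are the rounded figures from the text, not precise official counts.
daily_deaths = 100
daily_injuries = 6_000
annual_deaths = daily_deaths * 365            # roughly 36,500 road deaths per year

pedestrian_deaths_2016 = 6_000                # "almost 6,000" per the CDC figure cited
hours_per_year = 365 * 24                     # 8,760 hours
hours_per_pedestrian_death = hours_per_year / pedestrian_deaths_2016

print(f"~{annual_deaths:,} road deaths per year")
print(f"one pedestrian death every ~{hours_per_pedestrian_death:.1f} hours")
```

Running this yields roughly one pedestrian death every hour and a half, consistent with the rate above.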

No matter how concerned the public is about the idea of autonomous vehicles on our roadways, one thing should be abundantly clear: Automated technologies can be part of the solution to the harms of our almost 100-year experiment with human drivers behind the wheel. The algorithms behind self-driving cars don’t get drunk, drowsy, or distracted. Unfortunately, humans do those things with great regularity, and the only way for autonomous vehicles to truly learn how to deal with the idiosyncrasies and irrationalities of human drivers is to interact with them in the “real world.” Every time a human driver gets behind the wheel, therefore, an “experiment” of sorts is underway, and the results of our human-driven “experiments” on public roads are far too often catastrophic.

Because these human-caused accidents are so common, they don’t make headlines. While as many as 83% of people admit they are concerned about safety when driving, the aggregate death toll is so large that the numbers are hard to “humanize” unless crashes involve people or places we know. As a result, we don’t heed the warnings and continue to engage in risky behavior by choosing to drive every day. But precisely because this week’s driverless car-related death in Arizona is so rare, it is making major news. If we turn a blind eye to all the lives lost to human error while fixating on this one driverless car fatality, we risk many more lives in the long run.

But what should be done when accidents or deaths occur and autonomous cars are involved?

First, we can dispense with the notion that driverless cars are completely unregulated. Anytime these vehicles are operating on public roadways, they still must comply with traffic and safety laws. Driverless cars are programmed to operate in compliance with those laws and will be far more likely to do so than human operators. In fact, the concern is not that the cars won’t follow the traffic laws, but how they will interact with humans’ lawlessness and our misguided reactions to them.

Second, when accidents like the one in Arizona this week do occur, courts are equipped to handle legal claims. This is how we have handled human-caused accidents for decades, and there is no reason to believe that the common law and the courts can’t evolve to handle new technology-created problems, too. The courts have an existing toolkit for handling both defective products and the liability of individual bad actors. Some manufacturers have even publicly stated they will accept liability if it is shown that the technology behind the autonomous vehicle caused the accident. Throughout history, courts have been able to apportion fault and deal with the specifics of particular cases without completely overhauling the common law for each new technology. It would be misguided to assume the courts could not determine the true cause of an accident involving an autonomous vehicle when they have been dealing with increasingly sophisticated products in a variety of fields for years.

Third, driverless car innovators are currently working together, and with government officials, to address the safety and security of these technologies. In both the Obama and Trump administrations, an open, collaborative effort has been underway to sketch out sensible safety and security policies while making sure to keep innovation moving forward in this field. These conversations have resulted in guidance from the Department of Transportation that is flexible enough to adapt to the emerging technology while still promoting safe development and deployment. This flexible approach is the smart path forward, ensuring that we don’t let overly precautionary concerns prevent technology that could save many, many more lives.

The most effective way to achieve significant auto safety gains is to make sure experimentation with new and better automotive technologies continues. That cannot all happen in a closed lab setting that is stifled by heavy-handed regulation at every juncture. We need driverless cars on the roadways now more than ever precisely because those machines will need to learn to anticipate and correct for the many real-world scenarios that human drivers struggle with every day.

Any loss of human life is a tragedy. But we cannot let a rare incident cost us the long-term life-saving potential of autonomous vehicles. We also must not rush to conclude that the technology was at fault before knowing all the facts of any particular situation. While Uber has temporarily halted its technology trials, this tragic accident should be looked at as a rarity we can learn from rather than a reason to stop moving forward.

]]>
https://techliberation.com/2018/03/20/we-need-more-driverless-cars-on-public-roads-not-fewer/feed/ 1 76248
What Do We Mean by Technological “Moonshots”? And Why Should We Care about Them? https://techliberation.com/2018/02/06/what-do-we-mean-by-technological-moonshots-and-why-should-we-care-about-them/ https://techliberation.com/2018/02/06/what-do-we-mean-by-technological-moonshots-and-why-should-we-care-about-them/#comments Tue, 06 Feb 2018 20:24:26 +0000 https://techliberation.com/?p=76232

We hear a lot these days about “technological moonshots.” It’s an interesting phrase because the meanings of both words in it are often left undefined. I won’t belabor the point about how people define (or, rather, fail to define) “technology” when they use it. I’ve already spent a lot of time writing about that problem. See, for example, this constantly updated essay about “Defining ‘Technology.'” It’s a compendium I began curating years ago that collects what dozens of others have had to say on the matter. I’m always struck by how many different definitions I keep unearthing.

The term “moonshot” has a similar problem. The first meaning is the literal one that hearkens back to President Kennedy’s famous 1962 “we choose to go to the moon” speech. That use of the term implies large government programs and agencies, centralized control, and top-down planning with a very specific political objective in mind. Increasingly, however, the term “moonshot” is used more generally, as I note in this new Mercatus essay about “Making the World Safe for More Moonshots.” My Mercatus Center colleague Donald Boudreaux has referred to moonshots as “radical but feasible solutions to important problems,” and Mike Cushing of Enterprise Innovation defines a moonshot as an “innovation that achieves the previously unthinkable.” I like that more generic use of the term and think it could be applied appropriately when discussing the big innovations many of us hope to see in fields as diverse as quantum computing, genetic editing, AI and autonomous systems, supersonic transport, and much more. I still have some reservations about the term, but it’s definitely better than “disruptive innovation,” which is also used differently by various scholars and pundits.

Regardless of what we call them, “We Need Large Innovations,” as entrepreneurship zealot Vinod Khosla argues in a recent essay. Why? Because, as I point out in my new essay:

we should push for more moonshots because there is a profoundly positive correlation between innovation and human prosperity. Countless economic studies and historical surveys have documented the symbiotic relationship among technological progress, economic growth, and improvement of overall social welfare. Big innovations spawn big gains for society in the form of more choices, greater mobility, increased wealth, better health, and longer lifespans.

I hope to build on this point in a forthcoming paper and eventually in a new book. Big innovations–whether we call them “moonshots” or whatever else–pay big dividends for society.

Consequently, getting innovation policy right is essential because, as the great economic historian Joel Mokyr has shown, technological innovation and economic progress must be viewed as “a fragile and vulnerable plant, whose flourishing is not only dependent on the appropriate surroundings and climate, but whose life is almost always short. It is highly sensitive to the social and economic environment and can easily be arrested by relatively small external changes.” Thus, like a plant we wish to grow, we must constantly nurture our innovation policy environment if we hope to grow and prosper as a society. We cannot rest on our past successes. “What matters is the successful striving for what at each moment seems unattainable,” said F. A. Hayek in The Constitution of Liberty. “It is not the fruits of past success but the living in and for the future in which human intelligence proves itself,” he rightly concluded.

]]>
https://techliberation.com/2018/02/06/what-do-we-mean-by-technological-moonshots-and-why-should-we-care-about-them/feed/ 2 76232
new Mercatus paper on “Artificial Intelligence and Public Policy” https://techliberation.com/2017/08/23/new-mercatus-paper-on-artificial-intelligence-and-public-policy/ https://techliberation.com/2017/08/23/new-mercatus-paper-on-artificial-intelligence-and-public-policy/#comments Wed, 23 Aug 2017 15:03:10 +0000 https://techliberation.com/?p=76180

The Mercatus Center at George Mason University has just released a new paper on, “Artificial Intelligence and Public Policy,” which I co-authored with Andrea Castillo O’Sullivan and Raymond Russell. This 54-page paper can be downloaded via the Mercatus website, SSRN, or ResearchGate. Here is the abstract:

There is growing interest in the market potential of artificial intelligence (AI) technologies and applications as well as in the potential risks that these technologies might pose. As a result, questions are being raised about the legal and regulatory governance of AI, machine learning, “autonomous” systems, and related robotic and data technologies. Fearing concerns about labor market effects, social inequality, and even physical harm, some have called for precautionary regulations that could have the effect of limiting AI development and deployment. In this paper, we recommend a different policy framework for AI technologies. At this nascent stage of AI technology development, we think a better case can be made for prudence, patience, and a continuing embrace of “permissionless innovation” as it pertains to modern digital technologies. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated, and problems, if they develop at all, can be addressed later.

]]>
https://techliberation.com/2017/08/23/new-mercatus-paper-on-artificial-intelligence-and-public-policy/feed/ 2 76180
Survey of Studies on Life-Saving Potential of Driverless Cars https://techliberation.com/2017/06/30/survey-of-studies-on-life-saving-potential-of-driverless-cars/ https://techliberation.com/2017/06/30/survey-of-studies-on-life-saving-potential-of-driverless-cars/#respond Fri, 30 Jun 2017 17:52:35 +0000 https://techliberation.com/?p=76158

Whatever you want to call them–autonomous vehicles, driverless cars, automated systems, unmanned systems, connected cars, pilotless vehicles, etc.–the life-saving potential of this new class of technologies appears to be enormous. I’ve spent a lot of time researching and writing about these issues, and I have yet to see any study forecast the opposite (i.e., a net loss of lives due to these technologies). While the estimated life savings vary, the numbers are uniformly positive across the board, and not just in terms of lives saved, but also in reductions in other injuries, property damage, and the aggregate social costs associated with vehicular accidents more generally.

To highlight these important and consistent findings, I asked my research assistant Melody Calkins to help me compile a list of recent studies on this issue and summarize each one’s key takeaways regarding the potential for lives saved. The studies and findings are listed below in reverse chronological order of publication. I may add to this list over time, so please feel free to send me suggested updates as they become available.

Needless to say, these findings should have some bearing on public policy toward these technologies. Namely, we should be taking steps to accelerate this transition and remove roadblocks to the driverless car revolution, because we could be talking about the biggest public health success story of our lifetime if we get policy right here. Every day matters, because each day we delay this transition is another day during which 90 people die in car crashes and more than 6,500 are injured. And sadly, those numbers are going up, not down. According to the National Highway Traffic Safety Administration (NHTSA), auto crashes and the roadway death toll are climbing for the first time in decades. Meanwhile, the agency estimates that 94 percent of all crashes are attributable to human error. We have the potential to do something about this tragedy, but we have to get public policy right. Delay is not an option.


Accelerating the Future: The Economic Impact of the Emerging Passenger Economy (June 2017)

an Intel Report

  • p. 23: “If we conservatively assume that just 5 percent of these accidents are avoided in the decade from 2035 to 2045 due to pilotless vehicles, 585,000 lives will be saved during that time.”

Implications of connected and Automated vehicles on the Safety and Operations of Roadway Networks: A Final Report (Oct 2016)

By The University of Texas at Austin Center for Transportation Research

Chapter 4, Safety Benefits of CAVs

See Tables 4.7, 4.8, and 4.9 (pp. 95-97): annual economic cost and functional-years-lost savings estimates from safety benefits of CAV technologies

  • p. 78: The most recently available U.S. crash database (the 2013 National Automotive Sampling System (NASS) General Estimates System (GES)) was used, and results suggest that advanced CAV technologies may reduce… functional human-years lost by nearly 2 million (per year, assuming a market penetration rate of 100%)
  • p. 80: Lane Departure Warning (LDW) systems can reduce 47% of all lane-departure-related crashes, corresponding to 85,000 crashes annually
  • p. 80: Backing-crash countermeasures (like backup collision intervention via automated braking) could prevent almost 65,000 backup crashes a year.
  • p. 80: With an assumption of 100% deployment and 100% device availability for road departure crash warning (RDCW) technology, an annual reduction of 9,400 to 74,800 U.S. road-departure crashes was predicted.
  • p. 81: V2V systems, such as FCW, blind spot warning (BSW), and lane change warning (LCW), can serve as primary crash countermeasures, reducing U.S. light-duty vehicle-involved crashes by 76%. They further estimated that V2I systems, such as curve speed warning (CSW), red light violation warning system (RLVW), and stop sign violation warning (SSVW), if deployed anywhere they could be useful, could address 25% of all light-duty-vehicle crashes in the U.S. 

Automated Vehicle Crash Rate Comparison using Naturalistic Data (Jan. 2016)

Commissioned by Google, Performed by the Virginia Tech Transportation Institute (Data adjusts for unreported crashes)

  • Estimated crash rates for the Self-Driving Car Project were lower for all three crash levels… Additionally, the rate of less-severe crashes (Level 3) for the Self-Driving Car was lower at a statistically significant level (p. 39)
  • See Table 10 p.41 “Current data suggest that self-driving cars may have low rates of more-severe crashes (Level 1 and Level 2 crashes) when compared to national rates or to rates from naturalistic data sets.”
  • “The data also suggest that less-severe events (Level 3 crashes) may happen at a significantly lower rate for self-driving cars… none of the vehicles operating in autonomous mode were deemed at fault” (p.41)

The Future of Motor Insurance: How Car Connectivity and ADAS are Impacting the Market (2016)

HERE and Swiss Re

  • See p.15, Figure 9: Accident Reduction Rate by Selected Features
  • Advanced ADAS (highway pilot) would reduce accidents on motorways by 45.4% and on other roads by 27.5%
  • Sophisticated ADAS (lane keeping assistant, emergency braking assistant, night vision) would reduce accidents on motorways by 25.7% and on other roads by 27.5%

A Preliminary Analysis of Real-World Crashes Involving Self Driving Vehicles (Oct. 2015)

University of Michigan’s Transportation Research Institute

  • p. 14: The most common outcome of crashes for both vehicle types was property damage only, but self-driving vehicles had this outcome 10% more often than conventional vehicles. Consequently, self-driving vehicles experienced injury-related crashes 10% less often than conventional vehicles. The overall severity of crashes involving self-driving vehicles was also lower than for conventional vehicles.
  • p. 18: Four main findings:
  1. The current best estimate is that self-driving vehicles have a higher crash rate per million miles traveled than conventional vehicles, and similar patterns were evident for injuries per million miles traveled and for injuries per crash.
  2. The corresponding 95% confidence intervals overlap. Therefore, we currently cannot rule out, with a reasonable level of confidence, the possibility that the actual rates for self-driving vehicles are lower than for conventional vehicles.      
  3. Self-driving vehicles were not at fault in any crashes they were involved in.
  4. The overall severity of crash-related injuries involving self-driving vehicles has been lower than for conventional vehicles.
  • The limitations of this study (which found higher crash rates for self-driving vehicles than for conventional vehicles) are corrected for in the more recent 2016 study commissioned by Google (see above), which finds that self-driving vehicles actually crash less often.

Ten Ways Autonomous Driving Could Redefine the Automotive World (June 2015)

McKinsey Report

  • Suggests that advanced ADAS and AVs could reduce accidents by up to 90%

Connected and Autonomous Vehicles: The UK Economic Opportunity (Mar 2015)

KPMG

  • p.2 & p.12: By 2030, connected and autonomous vehicles could save over 2,500 lives and prevent more than 25,000 serious accidents in the UK.

Preparing a Nation for Autonomous Vehicles (Oct. 2013)

Eno Center for Transportation

  • p. 8, Table 2: Estimates of Annual Economic Benefits from AVs in the United States
  • 10% market-penetration would mean 1,100 lives saved; 50% would be 9,600 lives; 90% would be 21,700 lives

 

]]>
https://techliberation.com/2017/06/30/survey-of-studies-on-life-saving-potential-of-driverless-cars/feed/ 0 76158
Why not auction off low-altitude airspace for exclusive use? https://techliberation.com/2017/06/27/why-not-auction-off-low-altitude-airspace-for-exclusive-use/ https://techliberation.com/2017/06/27/why-not-auction-off-low-altitude-airspace-for-exclusive-use/#respond Tue, 27 Jun 2017 21:26:14 +0000 https://techliberation.com/?p=76154

By Brent Skorup and Melody Calkins

Tech-optimists predict that drones and small aircraft may soon crowd US skies. An FAA administrator predicted that by 2020 tens of thousands of drones would be in US airspace at any one time. Further, over a dozen companies, including Uber, are building vertical takeoff and landing (VTOL) aircraft that could one day shuttle people point-to-point in urban areas. Today, low-altitude airspace use is episodic (helicopters, ultralights, drones), and with such light use the airspace is shared on an ad hoc basis with little air traffic management. Coordinating thousands of aircraft in low-altitude flight, however, demands a new regulatory framework.

Why not auction off low-altitude airspace for exclusive use?

There are two basic paradigms for resource use: open access and exclusive ownership. Most high-altitude airspace is lightly used and the open access regime works tolerably well because there are a small number of players (airline operators and the government) and fixed routes. Similarly, Class G airspace—which varies by geography but is generally the airspace from the surface to 700 feet above ground—is uncontrolled and virtually open access.

Valuable resources vary immensely in their character–taxi medallions, real estate, radio spectrum, intellectual property, water–and a resource use paradigm, once selected, requires iteration and modification to ensure productive use. “The trick,” Prof. Richard Epstein notes, “is to pick the right initial point to reduce the stress on making these further adjustments.” If indeed dozens of operators will be vying for variable drone and VTOL routes in hundreds of local markets, exclusive use models could create more social benefits and output than open access and regulatory management. NASA is exploring complex coordination systems in this airspace, but rather than agency permissions, lawmakers should consider using property rights and the price mechanism.

The initial allocation of airspace could be determined by auction. An agency, probably the FAA, would:

  1. Identify and define geographic parcels of Class G airspace;
  2. Auction off the parcels to any party (private corporations, local governments, non-commercial stakeholders, or individual users) for a term of years with an expectation of renewal; and
  3. Permit the sale, combination, and subleasing of those parcels.
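To make the three steps above concrete, here is a minimal simulation of an initial auction plus a secondary market. Everything in it is a hypothetical sketch: the parcel names, bidders, prices, and the second-price (Vickrey) award rule are illustrative assumptions, not an actual FAA mechanism.

```python
# Sketch of steps 1-3: auction hypothetical Class G airspace parcels to the
# highest bidder, then allow subleasing on a secondary market. All identifiers
# and dollar figures are invented for illustration.

def auction_parcel(bids):
    """Award a parcel to the highest bidder at the second-highest bid price
    (a simple Vickrey rule, chosen here because it encourages truthful bidding)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Step 1: identify and define geographic parcels of Class G airspace.
parcels = ["PHX-Downtown-0-700ft", "PHX-Airpark-0-700ft"]

# Step 2: auction each parcel to any party that bids.
bids = {
    "PHX-Downtown-0-700ft": {"DroneCo": 90_000, "VTOL Inc": 120_000, "City of Phoenix": 60_000},
    "PHX-Airpark-0-700ft": {"DroneCo": 40_000, "VTOL Inc": 30_000},
}
registry = {parcel: auction_parcel(bids[parcel]) for parcel in parcels}

# Step 3: permit sale and subleasing -- a latecomer without an initial
# assignment buys access from the winner on the secondary market.
owner, _ = registry["PHX-Downtown-0-700ft"]
subleases = {("PHX-Downtown-0-700ft", "NewEntrant LLC"): {"lessor": owner, "annual_rent": 5_000}}

for parcel, (winner, price) in registry.items():
    print(f"{parcel}: won by {winner} at ${price:,}")
```

The sublease entry illustrates the key point of step 3: latecomers and innovators can still gain resource access through the price mechanism, without an initial assignment or political power.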

The likely alternative scenario (regulatory allocation and management of airspace) derives from historical precedent in aviation and spectrum policy:

  1. First movers and the politically powerful acquire de facto control of low-altitude airspace,
  2. Incumbents and regulators exclude and inhibit newcomers and innovators,
  3. The rent-seeking and resource waste becomes unendurable for lawmakers, and
  4. Market-based reforms are slowly and haphazardly introduced.

For instance, after demand for commercial flights took off in the 1960s, a command-and-control quota system was created for crowded Northeast airports. Takeoff and landing rights, called “slots,” were assigned to early airlines but regulators did not allow airlines to sell those rights. The anticompetitive concentration and hoarding of airport slots at terminals is still being slowly unraveled by Congress and the FAA to this day. There’s a similar story for government assignment of spectrum over decades, as explained in Thomas Hazlett’s excellent new book, The Political Spectrum.

The benefit of an auction, plus secondary markets, is that the resource is generally put to its highest-valued use. Secondary markets and subleasing also permit latecomers and innovators to gain resource access despite lacking an initial assignment and political power. Further, exclusive use rights would also provide VTOL operators (and passengers) the added assurance that routes would be “clear” of potential collisions. (A more regulatory regime might provide that assurance but likely via complex restrictions on airspace use.) Airspace rights would be a new cost for operators but exclusive use means operators can economize on complex sensors, other safety devices, and lobbying costs. Operators would also possess an asset to sublease and monetize.

Another bonus (from the government’s point of view) is that the sale of Class G airspace can provide government revenue. Revenue would be slight at first but could prove lucrative once there’s substantial commercial interest. The federal government, for instance, auctions off usage rights for grazing, oil and gas retrieval, radio spectrum, mineral extraction, and timber harvesting. Spectrum auctions alone have raised over $100 billion for the Treasury since they began in 1994.

]]>
https://techliberation.com/2017/06/27/why-not-auction-off-low-altitude-airspace-for-exclusive-use/feed/ 0 76154
What a 1911 Silent Movie Tells Us about the Technopanic Mentality https://techliberation.com/2017/06/21/what-a-1911-silent-movie-tells-us-about-the-technopanic-mentality/ https://techliberation.com/2017/06/21/what-a-1911-silent-movie-tells-us-about-the-technopanic-mentality/#comments Wed, 21 Jun 2017 20:36:35 +0000 https://techliberation.com/?p=76148

I’ve written here before about the problems associated with the “technopanic mentality,” especially how technopanics sometimes come to shape public policy decisions and restrict important new, life-enriching innovations. As I argued in a recent book, the problem with this sort of Chicken Little thinking is that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

Perhaps the worst thing about worst-case thinking is how short-sighted and hubristic it can be. The technopanic crowd often has an air of snootiness about it, asking us to largely ignore the generally positive long-run trends associated with technological innovation and instead focus on hypothetical fears about an uncertain future that apparently only they can foresee. This is the case whether they are predicting the destruction of jobs, the economy, lifestyles, or culture. Techno-dystopia lies just around the corner, they say, but the rest of us are ignorant sheep who just can’t see it coming!

In his wonderful 2013 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson correctly noted that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.”

Stated differently, the doomsayers are guilty of a type of social and technical arrogance. They are almost always wrong on history, wrong on culture, and wrong on facts. Again and again, humans have proven remarkably resilient in the face of technological change and have overcome short-term adversity. Yet, the technopanic pundits are almost never called out for their elitist attitudes later when their prognostications are proven wildly off-base. And even more concerning is the fact that their Chicken Little antics lead them and others to ignore the more serious risks that could exist out there and which are worthy of our attention.

Here’s a nice example of that last point that comes from a silent film made all the way back in 1911! (Ironically, it was a tweet by Clive Thompson that brought this clip to my attention.) The short film is called The Automatic Motorist and here’s how Michael Waters summarizes the plot in a post over at Atlas Obscura: “In it, a robot chauffeur is developed to drive a newly wedded couple to their honeymoon destination. But this robot malfunctions, and all of a sudden the couple is marooned in outer space (and then sinking underwater, and then flying through the sky—it’s complicated).” In sum: don’t trust robots or autonomous systems or you will probably die.

Regardless of how silly the plot sounds or the film looks, what I really found interesting was the way the film jumped right into the classic sci-fi dystopian scenario of ROBOTS GONE WILD. Countless other books, stories, movies, and TV shows would follow the same predictable plot line in subsequent decades. In one sense, it’s entirely logical why authors and screenwriters do this. Simply put, bad news sells, and that is especially true when the bad news is delivered in the form of robotic systems running amok and threatening the future of humanity.

But I wonder… did the creators of The Automatic Motorist ever consider the far more risky scenario surrounding automobiles? Specifically, isn’t it a shame that they didn’t foresee the millions upon millions of deaths that would occur due to human error behind the wheel?

The tale of automation-gone-wrong always makes for better box office and book sales, but fear-mongering about technologies can condition people (and policymakers) to think in fearful terms about those products and systems. Robotic cars would have been impossible in 1911, obviously, so perhaps this concern seems meaningless in context. But it is indicative of the bigger problem of the technopanic crowd focusing on hypothetical worst-case scenarios while avoiding the more mundane, but ultimately far more concerning, real-world risks that might occur in the absence of ongoing technological innovation.

And in many ways this is still the debate we are having in 2017, as the discussion about robotic “driverless” cars has finally ripened. We stand on the brink of what may become one of the great public health success stories of our lifetime. With the roadway death toll climbing for the first time in decades (around 40,000 deaths last year, or over 100 people dying on the roads every day), and with 94 percent of accidents attributable to human error, those facts alone should constitute the most powerful reason to give autonomous technology a chance to prove itself. If policymakers fail to do so, the result could be countless injuries and deaths that driverless cars could have prevented.

These “unseen” unintended consequences of misguided policies constitute a sort of hidden tax on humanity’s future. When the technopanic crowd tells us we must live in fear of each and every new innovation, it is creating the riskiest future scenario of them all: one that is stagnant and backwards-looking. The burden of proof is on the critics to explain why we should be denied the benefits that accompany ongoing trial-and-error experimentation with new and better ways of doing things that could ensure a safer and more prosperous future.

]]>
https://techliberation.com/2017/06/21/what-a-1911-silent-movie-tells-us-about-the-technopanic-mentality/feed/ 2 76148
Innovation Policy at the Mercatus Center: The Shape of Things to Come https://techliberation.com/2017/04/11/innovation-policy-at-the-mercatus-center-the-shape-of-things-to-come/ https://techliberation.com/2017/04/11/innovation-policy-at-the-mercatus-center-the-shape-of-things-to-come/#respond Tue, 11 Apr 2017 15:11:40 +0000 https://techliberation.com/?p=76133

Written with Christopher Koopman and Brent Skorup (originally published on Medium on 4/10/17)

Innovation isn’t just about the latest gee-whiz gizmos and gadgets. That’s all nice, but something far more profound is at stake: Innovation is the single most important determinant of long-term human well-being. There exists widespread consensus among historians, economists, political scientists and other scholars that technological innovation is the linchpin of expanded economic growth, opportunity, choice, mobility, and human flourishing more generally. It is the ongoing search for new and better ways of doing things that drives human learning and prosperity in every sense — economic, social, and cultural.

As the Industrial Revolution revealed, leaps in economic and human growth cannot be planned. They arise from societies that reward risk takers and legal systems that accommodate change. Our ability to achieve progress is directly proportional to our willingness to embrace and benefit from technological innovation, and it is a direct result of getting public policies right.

The United States is uniquely positioned to lead the world into the next era of global technological advancement and wealth creation. That’s why we and our colleagues at the Technology Policy Program at the Mercatus Center at George Mason University devote so much time and energy to defending the importance of innovation and countering threats to it. Unfortunately, those threats continue to multiply as fast as new technologies emerge.

Indeed, it isn’t easy keeping on top of all of these issues and threats because the only constant in the world of innovation policy — the study of technological change and its impact on social, economic, and political systems — is constant change. You go to sleep one night thinking you’ve got the world figured out, only to awake the next morning to see that another tectonic shift has reshaped the landscape.

In the industrial era, it was hard enough mapping the contours of this field of academic study. This task has grown far more challenging. Computing and Internet-enabled innovations have fundamentally reshaped society and have also helped spawn other technological revolutions in diverse fields such as robotics, autonomous systems, artificial intelligence, big data, the Sharing Economy, 3D printing, virtual reality, aviation, advanced medical technology, blockchain and Bitcoin, and the so-called Internet of Things.

The short-term social and economic disruptions caused by these and other new technologies often lead to backlashes and even occasional “techno-panics.” When those panics bubble over into the political arena, the risk is that misguided regulatory policies will short-circuit opportunities for creators and entrepreneurs to pursue life-enriching innovations.

At the Mercatus Center, where we study these and other topics, our goal is to bring greater focus to these emerging technologies and the many different facets of innovation policy surrounding them. How we accomplish these goals is as challenging as it is exciting. As more and more industries and businesses are affected by these emerging technologies, the decisions that policymakers make about them will have profound effects on large parts of our economy and society.

Specifically, as we place ourselves at the forefront of these debates, our aim is to:

  • Explore how innovation policy affects economic growth and mobility, consumer welfare, and global competitive advantage;
  • Identify barriers to entrepreneurial endeavors and devise a roadmap for how to remove them;
  • Push back against technopanics and overly-broad theories of “technological harm” that could limit innovation opportunities and greater consumer choice; and
  • Confront the legal and ethical concerns surrounding emerging technologies and craft constructive remedies to those problems that avoid top-down, “command-and-control” solutions.

Overall, our vision is simple: Permissionless innovation must become the norm rather than the exception. This means innovation and innovators are protected against efforts to preemptively control ongoing trial-and-error experimentation. We should let creative minds and empowered entrepreneurs experiment with new and better ways of doing things. It also means that the future of public policy should be rooted in fact-based analysis and not shaped by outlandish fears of hypothetical worst-case scenarios.

Going forward, you will continue to see Mercatus producing research applying permissionless innovation across a host of areas. You can also expect us to begin pursuing big questions about the future.

What if we could reduce the number of deaths on US roadways from 96 people per day to zero? What if we could double life expectancy? Triple it? Wouldn’t it be nice if we could travel from New York to London in three hours? New York to Los Angeles in 2.5 hours? What if we welcomed automation instead of fearing its effects on the workforce? What if we could remove the technical and political barriers keeping us from going to Mars and then beyond it? And so on.

We pose these questions not merely because they are intellectually interesting and important, but also because we hope to make the case for embracing the future with a sense of wonder and optimism about how technological advancement can radically improve human well-being in both the short- and long-run.

It isn’t enough to simply point out where innovators and entrepreneurs are being hindered. It isn’t enough to simply tell people that the future will be bright. We must explain, in real terms, how hindering innovation opportunities undermines our collective ability to constantly improve the human condition.

And because there is a symbiotic relationship between freedom and progress, we must defend our collective ability as a society to achieve very concrete, widely-shared advances in well-being through a general freedom to experiment with new technologies and better ways of doing things.

That is our vision for the Technology Policy Program at the Mercatus Center and we hope it is one that the public and public policymakers will embrace going forward.

]]>
https://techliberation.com/2017/04/11/innovation-policy-at-the-mercatus-center-the-shape-of-things-to-come/feed/ 0 76133
Remember What the Experts Said about the Apple iPhone 10 Years Ago? https://techliberation.com/2017/01/09/remember-what-the-experts-said-about-the-apple-iphone-10-years-ago/ https://techliberation.com/2017/01/09/remember-what-the-experts-said-about-the-apple-iphone-10-years-ago/#respond Mon, 09 Jan 2017 17:15:10 +0000 https://techliberation.com/?p=76106

Today marks the 10th anniversary of the launch of the Apple iPhone. With all the headlines being written today about how the device changed the world forever, it is easy to forget that before its launch, plenty of experts scoffed at the idea that Steve Jobs and Apple had any chance of successfully breaking into the seemingly mature mobile phone market.

After all, those were the days when BlackBerry, Palm, Motorola, and Microsoft were on everyone’s minds. Perhaps, then, it wasn’t so surprising to hear predictions like these leading up to and following the launch of the iPhone:

  • In December 2006, Palm CEO Ed Colligan summarily dismissed the idea that a traditional personal computing company could compete in the smartphone business. “We’ve learned and struggled for a few years here figuring out how to make a decent phone,” he said. “PC guys are not going to just figure this out. They’re not going to just walk in.”
  • In January 2007, Microsoft CEO Steve Ballmer laughed off the prospect of an expensive smartphone without a keyboard having a chance in the marketplace as follows: “Five hundred dollars? Fully subsidized? With a plan? I said that’s the most expensive phone in the world and it doesn’t appeal to business customers because it doesn’t have a keyboard, which makes it not a very good e-mail machine.”
  • In March 2007, computing industry pundit John C. Dvorak argued that “Apple should pull the plug on the iPhone” since “There is no likelihood that Apple can be successful in a business this competitive.” Dvorak believed the mobile handset business was already locked up by the era’s major players. “This is not an emerging business. In fact it’s gone so far that it’s in the process of consolidation with probably two players dominating everything, Nokia Corp. and Motorola Inc.”

A decade after these predictions were made, Motorola, Nokia, Palm, and BlackBerry have been decimated by the rise of Apple as well as Google (which actually purchased Motorola in the midst of it all). And Microsoft still struggles in mobile, even though it remains a player in the field. Rarely have Joseph Schumpeter’s “perennial gales of creative destruction” blown harder than they have in the mobile sector over this 10-year period.

The lesson here is pretty clear. As Yogi Berra once quipped: “It’s tough to make predictions, especially about the future.” But there’s more to it than just that. These mistaken predictions serve as a classic example of those with a static snapshot mentality disregarding the potential for new entry and technological disruption to shake things up. “In dealing with disruptive technologies leading to new markets,” says Clayton M. Christensen, author of The Innovator’s Dilemma, “researchers and business planners have consistently dismal records.”

This has implications not only for business forecasting but also for public policy, which is notoriously shortsighted when it comes to the potential for new technological innovations to shake up existing markets. Just because you think a particular firm or sector is the proverbial “King of the Hill” one day, it doesn’t mean it will be able to sit on that lofty perch forever. Likewise, policymakers cannot neatly “plan progress” by incessantly intervening in the hope of directing markets and technologies toward some supposedly better end. Picking winners and losers–or even just trying to stimulate more “winners”–will likely end very badly.

In his book, The Year 2000: A Framework for Speculation on the Next Thirty-three Years, the futurist Herman Kahn wisely noted that:

History is likely to write scenarios that most observers would find implausible not only prospectively but sometimes, even in retrospect. Many sequences of events seem plausible now only because they have actually occurred; a man who knew no history might not believe any. Future events may not be drawn from the restricted list of those we have learned are possible; we should expect to go on being surprised.

But we can only “expect to go on being surprised” by leaving plenty of breathing room for the evolution of markets and technology. While all social and economic experiments are accompanied by a great deal of unpredictability and disruption, history indicates that most of those experiments will result in greater progress and prosperity–just as the iPhone did. But developments such as these are almost impossible to predict or plan beforehand. We have to get the environment for innovation right and then let creative minds work their magic.

]]>
https://techliberation.com/2017/01/09/remember-what-the-experts-said-about-the-apple-iphone-10-years-ago/feed/ 0 76106
Mercatus Center Filing on Governance of Artificial Intelligence https://techliberation.com/2016/07/24/mercatus-center-filing-on-governance-of-artificial-intelligence/ https://techliberation.com/2016/07/24/mercatus-center-filing-on-governance-of-artificial-intelligence/#comments Sun, 24 Jul 2016 20:00:04 +0000 https://techliberation.com/?p=76051

This week, my Mercatus Center colleague Andrea Castillo and I filed comments with the White House Office of Science and Technology Policy (OSTP) in a proceeding entitled, “Preparing for the Future of Artificial Intelligence.” For more background on this proceeding and the accompanying workshops that OSTP has hosted on this issue, see this White House site.

In our comments, Andrea and I make the case for prudence, patience, and a continuing embrace of “permissionless innovation” as the appropriate policy framework for artificial intelligence (AI) technologies at this nascent stage of their development. Down below, I have pasted our full comments, which were limited to just 2,000 words as required by the OSTP. But we plan on releasing a much longer report on these issues in the coming months. You can find the full version of the filing, which includes footnotes, here.


The Office of Science and Technology Policy (OSTP) has requested comments pertaining to the governance of artificial intelligence (AI) technologies.

The Technology Policy Program of the Mercatus Center at George Mason University is dedicated to advancing knowledge of the impact of regulation on society. It conducts careful and independent analyses employing contemporary economic scholarship to assess policy issues from the perspective of the public interest.

We write here to comment on the appropriate policy framework for artificial intelligence (AI) technologies at this nascent stage of their development and to make the case for prudence, patience, and a continuing embrace of “permissionless innovation.” Permissionless innovation refers to the idea that “experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.”

Policymakers may be tempted to preemptively restrict AI technologies out of an abundance of caution for the perceived risks these new innovations might seem to pose. However, an examination of the history of US technology policy demonstrates that these concerns can be adequately addressed without quashing a potentially revolutionary new industry.

Specifically, as policymakers consider the governance of AI, they would be wise to consider the lessons that can be drawn from our recent experience with the Internet. The United States made permissionless innovation the basis of Internet policy beginning in the early 1990s, and it soon became the “secret sauce” that propelled the rise of the modern digital revolution.

If policymakers wish to replicate America’s success with the Internet, they need to adopt a similar “light-touch” approach for the governance of AI technologies. To highlight the benefits of permissionless innovation, the Mercatus Center at George Mason University has recently published a book, a series of law review articles, and several agency filings that explain what this policy vision entails for different technologies and sectors. A summary of the major insights from these studies can be found in a recent Mercatus Center paper called “Permissionless Innovation and Public Policy: A 10-Point Blueprint.”

If one’s sole conception of a technology comes from Hollywood depictions of dystopian science fiction or killer robotic systems run amok, it is understandable that one might want to use the force of regulation to clamp down decisively on these “threats.” But these fictional representations are just that: fictional. AI technologies are both much more benign and fantastic in reality.

The economic benefits of AI are projected to be enormous. One recent study used benchmarks derived from methodologically conservative studies of broadband Internet, mobile phones, and industrial robotics to estimate that the economic impact of AI could be between $1.49 trillion and $2.95 trillion over the next ten years. With less strict assumptions, the economic benefits could be greater still.

However, some skeptics are already making the case for a preemptive regulation of AI technologies. The rationales for control are varied, including concerns ranging from deindustrialization to dehumanization, as well as worries about the “fairness” of the algorithms behind AI systems.

Due to these anxieties associated with AI, some academics argue that policymakers should “legislate early and often” to “get ahead of” these hypothetical problems. Specifics are often in short supply, with some critics simply hinting that “something must be done” to address amorphous concerns.

Other scholars have provided more concrete regulatory blueprints, however. They propose, among other things, the passage of broad-based legislation such as an “Artificial Intelligence Development Act,” as well as the creation of a federal AI agency or possibly a “Federal Robotics Commission” or “National Algorithmic Technology Safety Administration.” These proposed laws and agencies would establish a certification process requiring innovators to subject their technologies to regulatory review to “ensure the safety and security of their A.I.” Or, at a minimum, such agencies would advise other federal, state, and local officials and organizations on how to craft policy for AI and robotics.

Such proposals are based on “precautionary principle” reasoning. The precautionary principle refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

It is certainly true that AI technologies might give rise to some of the problems that critics suggest. And we should continue to look for constructive solutions to the potentially thorny problems that some of these critics discuss. That does not mean that top-down, technocratic regulation is sensible, however.

Traditional administrative regulatory systems have a tendency to be overly rigid, bureaucratic, and slow to adapt to new realities. This is particularly problematic as it pertains to the governance of new, fast-moving technologies.

Prior restraints on innovative activities are a recipe for stagnation. By focusing on preemptive remedies that aim to predict hypothetical problems that may not ever come about, regulators run the risk of making bad bets based on overconfidence in their ability to predict the future. Worse yet, by preempting beneficial experiments that yield new and better ways of doing things, administrative regulation stifles the sort of creative, organic, bottom-up solutions that will be needed to solve problems that may be unforeseeable today.

This risk is perhaps more pronounced when dealing with AI technologies. Debating how “artificial intelligence” should be regulated makes little sense until policymakers define what it actually entails. The boundaries of AI are amorphous and ever changing. AI technologies are already all around us—examples include voice-recognition software, automated fraud detection systems, and medical diagnostic technologies—and new systems are constantly emerging and evolving rapidly. Policymakers should keep in mind the rich and distinct variety of opportunities presented by AI technologies, lest regulations more appropriate for one kind of application inadvertently stymie the development of another.

Toward that end, we suggest that a different policy approach for AI is needed, one that is rooted in humility and a recognition that we possess limited knowledge about the future.

This does not mean there is no role for government as it pertains to AI technologies. But it does mean that policymakers should first seek out less restrictive remedies to complex social and economic problems before resorting to top-down proposals that are preemptive and proscriptive.

Policymakers must carefully ensure they have a full understanding of the boundaries and promises of all of the technologies they address. Many AI technologies pose little or no risks to safety, fair market competition, or consumer welfare. These applications should not be stymied due to an inappropriate regulatory scheme that seeks to address an entirely separate technology. They should be distinguished and exempted from regulations as appropriate.

Other AI technologies may warrant more regulatory consideration if they generate substantial risks to public welfare. Still, regulators should proceed cautiously.

To the extent that policymakers wish to spur the development of a wide array of new life-enriching technologies, while also looking to devise sensible solutions to complex challenges, policymakers should consider a more flexible, bottom-up, permissionless innovation approach as the basis of America’s policy regime for AI technologies.

]]>
https://techliberation.com/2016/07/24/mercatus-center-filing-on-governance-of-artificial-intelligence/feed/ 1 76051
Updated Slides: “Permissionless Innovation” & the Clash of Visions over Emerging Technologies https://techliberation.com/2015/09/18/updated-slides-permissionless-innovation-the-clash-of-visions-over-emerging-technologies/ https://techliberation.com/2015/09/18/updated-slides-permissionless-innovation-the-clash-of-visions-over-emerging-technologies/#comments Fri, 18 Sep 2015 13:36:04 +0000 http://techliberation.com/?p=75731

Since the release of my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, it has been my pleasure to be invited to speak to dozens of groups about the future of technology policy debates. In the process, I have developed and continuously refined a slide show entitled “Permissionless Innovation” & the Clash of Visions over Emerging Technologies. After delivering this talk again twice last week, I figured I would post the latest slide deck I’m using for the presentation. It’s embedded below or it can be found at the link above.

]]>
https://techliberation.com/2015/09/18/updated-slides-permissionless-innovation-the-clash-of-visions-over-emerging-technologies/feed/ 1 75731
New Filing & Working Paper on the Regulation of the Sharing Economy https://techliberation.com/2015/05/26/new-filing-working-paper-on-the-regulation-of-the-sharing-economy/ https://techliberation.com/2015/05/26/new-filing-working-paper-on-the-regulation-of-the-sharing-economy/#comments Tue, 26 May 2015 17:41:04 +0000 http://techliberation.com/?p=75562

Along with colleagues at the Mercatus Center at George Mason University, I am releasing two major new reports today dealing with the regulation of the sharing economy. The first report is a 20-page filing to the Federal Trade Commission that we are submitting to the agency for its upcoming June 9th workshop on “The ‘Sharing’ Economy: Issues Facing Platforms, Participants, and Regulators.” We have been invited to participate in that event and I will be speaking on the fourth panel of the workshop. The filing I am submitting today for that workshop was co-authored with my Mercatus colleagues Christopher Koopman and Matt Mitchell.

The second report we are releasing today is a new 47-page working paper entitled, “How the Internet, the Sharing Economy, and Reputational Feedback Mechanisms Solve the ‘Lemons Problem.'” This study was co-authored with my Mercatus colleagues Christopher Koopman, Anne Hobson, and Chris Kuiper.

I will summarize each report briefly here.

In our new filing to the FTC, we address the five questions the Commission set forth in its workshop announcement. Those five questions are as follows:

  • How can state and local regulators meet legitimate regulatory goals (such as protecting consumers, and promoting public health and safety) in connection with their oversight of sharing economy platforms and business models, without also restraining competition or hindering innovation?
  • How have sharing economy platforms affected competition, innovation, consumer choice, and platform participants in the sectors in which they operate? How might they in the future?
  • What consumer protection issues—including privacy and data security, online reviews and disclosures, and claims about earnings and costs—do these platforms raise, and who is responsible for addressing these issues?
  • What particular concerns or issues do sharing economy transactions raise regarding the protection of platform participants? What responsibility does a sharing economy platform bear for consumer injury arising from transactions undertaken through the platform?
  • How effective are reputation systems and other trust mechanisms, such as the vetting of sellers, insurance coverage, or complaint procedures, in encouraging consumers and suppliers to do business on sharing economy platforms?

We provide detailed answers to each of these questions as well as one additional major question that was not posed by the Commission in its workshop notice but which is, no doubt, on the minds of many at the agency and outside it: What should the FTC do about state and local barriers to entry and innovation that might be thwarting the growth of the sharing economy? (I blogged about that issue here a couple of weeks ago and our filing includes that discussion.)

Please take a look at our filing for detailed answers to each of these questions. (Incidentally, our filing is an extension of an earlier working paper that Koopman, Mitchell, and I released late last year on “The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change.”) But, to briefly highlight the thrust of our argument, here’s a passage from our new filing:

As the debate surrounding the sharing economy moves forward, policymakers must keep in mind that merely because regulations were once justified on the grounds of consumer protection does not mean they accomplished those goals or that they are still needed today. Even well-intentioned policies must be judged against real-world evidence. Unfortunately, the evidence shows that many traditional consumer protection regulations hurt consumers; in the words of New York Attorney General Eric Schneiderman, they are often “cumbersome, and some are just plain protectionist.” Markets, competition, reputational systems, and ongoing innovation often solve problems better than regulation when they are given a chance to do so. There are two reasons for this. First, market imperfections create powerful profit opportunities for entrepreneurs who are able to find ways to correct them. Second, regulatory solutions too often undermine competition and lock in inefficient business models.

We continue on to explain exactly why that is the case, while also offering some constructive solutions to other issues that are on the minds of regulators.

Meanwhile, the new working paper we are releasing today provides much greater detail on the fifth of the five questions the FTC posed in its workshop notice regarding reputation systems and other trust mechanisms. Here is the abstract from the paper:

This paper argues that the sharing economy—through the use of the Internet and real time reputational feedback mechanisms—is providing a solution to the lemons problem that many regulators have spent decades attempting to overcome. Section I provides an overview of the sharing economy and traces its rapid growth. Section II revisits the lemons theory as well as the various regulatory solutions proposed to deal with the problem of asymmetric information. Section III discusses the relationship between reputation and trust and analyzes how reputational incentives affect commercial interactions. Section IV discusses how information asymmetries were addressed in the pre-Internet era. It also discusses how the evolution of both the Internet and information systems (especially the reputational feedback mechanisms of the sharing economy) addresses the lemons problem. Section V explains how these new realities affect public policy and concludes that asymmetric information is not a legitimate rationale for policy intervention in light of technological changes. We also argue that continued use of this rationale to regulate in the name of consumer protection might, in fact, make consumers worse off. This has ramifications for the current debate over regulation of the sharing economy.

We believe that our research makes it clear “how the sharing economy relies upon—and has helped spur the growth of—sophisticated reputational feedback mechanisms that facilitate online trust and commerce, overcoming many of the information asymmetries that seemed intractable… just a generation ago. In combination with online review services and other information-sharing technologies enabled by the Internet,” we conclude, “these reputational tools can help create more effective, and largely self-regulating, markets that provide more information to more individuals than ever before.”

We look forward to continuing engagement with officials at the FTC and other policymakers at the federal, state, and even international level on these issues. We hope our research will help legislators and regulators find sensible ways to adjust policy for the sharing economy so as not to derail the sort of “permissionless innovation” that has thus far powered this exciting sector and produced the many pro-consumer benefits flowing from it. Check out our filing and new paper for more details.

]]>
https://techliberation.com/2015/05/26/new-filing-working-paper-on-the-regulation-of-the-sharing-economy/feed/ 1 75562
Don’t Hit the (Techno-)Panic Button on Connected Car Hacking & IoT Security https://techliberation.com/2015/02/10/dont-hit-the-techno-panic-button-on-connected-car-hacking-iot-security/ https://techliberation.com/2015/02/10/dont-hit-the-techno-panic-button-on-connected-car-hacking-iot-security/#comments Tue, 10 Feb 2015 20:15:02 +0000 http://techliberation.com/?p=75425

On Sunday night, 60 Minutes aired a feature with the ominous title, “Nobody’s Safe on the Internet,” that focused on connected car hacking and Internet of Things (IoT) device security. It was followed yesterday morning by the release of a new report from the office of Senator Edward J. Markey (D-Mass) called Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk, which focused on connected car security and privacy issues. Employing more than a bit of techno-panic flair, these reports basically suggest that we’re all doomed.

On 60 Minutes, we meet former game developer turned Department of Defense “cyber warrior” Dan (“call me DARPA Dan”) Kaufman–and learn his fears of the future: “Today, all the devices that are on the Internet [and] the ‘Internet of Things’ are fundamentally insecure. There is no real security going on. Connected homes could be hacked and taken over.”

60 Minutes reporter Lesley Stahl, for her part, is aghast. “So if somebody got into my refrigerator,” she ventures, “through the internet, then they would be able to get into everything, right?” Replies DARPA Dan, “Yeah, that’s the fear.” Prankish hackers could make your milk go bad, or hack into your garage door opener, or even your car.

This segues to a humorous segment wherein Stahl takes a networked car for a spin. DARPA Dan and his multiple research teams have been hard at work remotely programming this vehicle for years. A “hacker” on DARPA Dan’s team proceeded to torment poor Lesley with automatic windshield wiping, rude and random beeps, and other hijinks. “Oh my word!” exclaims Stahl.

Never mind that we are told that the “hackers” who “hacked” into this car had been directly working on its systems for years—a luxury scarcely available to the shadowy malicious hackers about whom DARPA Dan and his team so hoped to frighten us. The careful setup, editing, and Lesley Stahl’s squeals made for convincing theater.

Then there’s the Markey report. On the surface, the findings appear grim. For instance, we are warned that “Nearly 100% of cars on the market include wireless technologies that could pose vulnerabilities to hacking or privacy intrusions.” Nearly 100%? We’re practically naked out there! But digging through the report, we learn that the basis for this claim is that most of the 16 manufacturers surveyed responded that 100% of their vehicles are equipped with wireless entry points (WEPs)—like Bluetooth, Wi-Fi, navigation, and anti-theft features. Because these features “could pose vulnerabilities,” they are listed as a threat—one that lurks in nearly 100% of the cars on the market, at that.

Much of the report is similarly panicky and sometimes humorous (complaint #3: “many manufacturers did not seem to understand the questions posed by Senator Markey.”) The report concludes that the “alarmingly inconsistent and incomplete state of industry security and privacy practice,” warrants recommendations that federal regulators — led by the National Highway Traffic Safety Administration (NHTSA) and the Federal Trade Commission (FTC) — “promulgate new standards that will protect the data, security and privacy of drivers in the modern age of increasingly connected vehicles.”

Take a Deep Breath

As we face an uncertain future full of rapidly-evolving technologies, it’s only natural that some might feel a little anxiety about how these new machines and devices operate. Despite the exaggerated and sometimes silly nature of techno-panic reports like these, they reflect many people’s real and understandable concerns about new technologies.

But the problem with these reports is that they embody a “panic-first” approach to digital security and privacy issues. It is certainly true that our cars are becoming rolling computers, complete with an arsenal of sensors and networking technologies, and the rise of the Internet of Things means almost everything we own or come into contact with will possess networking capabilities. Consequently, just as our current generation of computing and communications technologies is vulnerable to some forms of hacking, our cars and IoT devices likely will be as well.

But don’t you think that automakers and IoT developers know that? Are we really to believe that journalists, congressmen, and DARPA Dan have a greater incentive to understand these issues than the manufacturers whose companies and livelihoods are on the line? And wouldn’t these manufacturers only take on these risks if consumer demand and expected value supported them? Watching the 60 Minutes spot and reading through the Markey report, one is led to think that innovators in this space are completely oblivious to these threats, simply don’t care enough to address them, and don’t have any plans in motion. But that is lunacy.

No Mention of Liability?

To begin, neither the 60 Minutes segment nor the Markey report even mentions the possibility of massive liability for future hacking attacks on connected cars or IoT devices. That is amazing considering how much litigation activity the auto industry already attracts. (Ambulance-chasing is a full-time legal profession, after all.) Thus, to the extent that some automakers don’t want to talk about everything they are doing to address security issues, it’s likely because they are still figuring out how to address the various vulnerabilities out there without attracting the attention of either enterprising hackers or trial lawyers.

Nonetheless, contrary to the absurd statement by Mr. Kaufman that “There is no real security going on” for connected cars or the Internet of Things, the reality is that these are issues that developers are actively studying and trying to address. Manufacturers of connected devices know that: (1) nobody wants to own or use devices that are fundamentally insecure or dangerous; and (2) if they sell such devices to the public, they are in for a world of hurt once the trial lawyers see the first headlines about it.

It is also still quite unclear how big the threat here really is. Writing over at Forbes yesterday, Doug Newcomb notes that “the threat of car hacking has largely been overblown by the media – there’s been only one case of a malicious car hack, and that was an inside job by a disgruntled former car dealer employee. But it’s a surefire way to get the attention of the public and policymakers,” he correctly observes. Newcomb also interviewed Damon McCoy, an assistant professor of computer science at George Mason University and a car security researcher, who noted that car hacking hasn’t become prevalent and that “Given the [monetary] motivation of most hackers, the chance of [automotive hacking] is very low.”

Security is a Dynamic, Evolving Process

Regardless, the notion that we can just clean this whole device security situation up with a single set of federal standards, as the Markey report suggests, is appealing but fanciful. “Security threats are constantly changing and can never be holistically accounted for through even the most sophisticated flowcharts,” observed my Mercatus Center colleagues Eli Dourado and Andrea Castillo in their recent white paper on “Why the Cybersecurity Framework Will Make Us Less Secure.” “By prioritizing a set of rigid, centrally designed standards, policymakers are neglecting potent threats that are not yet on their radar,” Dourado and Castillo note elsewhere.

We are at the beginning of a long process. There is no final destination when it comes to security; it’s a never-ending process of devising and refining policies to address vulnerabilities on the fly. The complex problem of cybersecurity readiness requires dynamic solutions that properly align incentives, improve communication and collaboration, and encourage good personal and organizational stewardship of connected systems. Implementing the brittle bureaucratic standards that Markey and others propose could have the tragic unintended consequence of rendering our devices even less secure.

Standards Are Developing Rapidly

Meanwhile, the auto industry has already come up with privacy standards that go above and beyond what most other digital innovators apply to their own products today. Here are the Auto Alliance’s “Consumer Privacy Protection Principles: Privacy Principles for Vehicle Technologies and Services,” which 23 major automobile manufacturers agreed to abide by. And, according to a press release yesterday, “automakers are currently working to establish an Information Sharing Analysis Center (or “Auto-ISAC”) for sharing vehicle cybersecurity information among industry stakeholders.”

Again, progress continues and standards are evolving. This needs to be a flexible, evolutionary process, instead of a static, top-down, one-size-fits-all bureaucratic political proceeding.

There is another reason we can’t set security and privacy standards in stone for fast-moving technologies like these, one I constantly stress in my work on “Why Permissionless Innovation Matters.” If we spend all our time worrying about hypothetical worst-case scenarios — and basing our policy interventions on a parade of hypothetical horribles — then we run the risk that best-case scenarios will never come about. As analysts at the Center for Data Innovation correctly argue, policymakers should only intervene to address specific, demonstrated harms: “Attempting to erect precautionary regulatory barriers for purely speculative concerns is not only unproductive, but it can discourage future beneficial applications of the Internet of Things.” The same is true for connected cars.

Trade-Offs Matter

Technopanic indulgence isn’t always merely silly or annoying—it can be deadly.

“During the four deadliest wars the United States fought in the 20th century, 39 percent more Americans were dying in motor vehicles” than on the battlefield. So writes Washington Post reporter Matt McFarland in a powerful new post today. The ongoing toll associated with human error behind the wheel is falling but remains absolutely staggering, with almost 100 people losing their lives and almost 6,500 people injured every day.
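Annualizing those daily figures makes the scale plain. This is a rough back-of-the-envelope sketch using the approximate daily numbers cited above, not official NHTSA totals:

```python
# Back-of-the-envelope annualization of the daily road-toll figures cited above.
# Both inputs are the article's approximations, not official statistics.
deaths_per_day = 100
injuries_per_day = 6_500

deaths_per_year = deaths_per_day * 365      # roughly 36,500 deaths
injuries_per_year = injuries_per_day * 365  # roughly 2,372,500 injuries

print(f"~{deaths_per_year:,} deaths and ~{injuries_per_year:,} injuries per year")
```

Set against that annual toll, the single documented malicious car hack to date puts the relative magnitude of the two risks in perspective.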

We must never fail to appreciate the trade-offs at work when we are pondering precautionary regulation. Ryan Hagemann and I wrote about these issues in our recent Mercatus Center working paper, “Removing Roadblocks to Intelligent Vehicles and Driverless Cars.” That paper, which has been accepted for publication in a forthcoming edition of the Wake Forest Journal of Law & Policy, outlines the many benefits of autonomous or semi-autonomous systems and discusses the potential cost of delaying their widespread adoption.

When it comes to the various security, privacy, and ethical considerations related to intelligent vehicles, Hagemann and I argue that they “need to be evaluated against the backdrop of the current state of affairs, in which tens of thousands of people die each year in auto-related accidents due to human error.” We continue on later in the paper:

Autonomous vehicles are unlikely to create 100 percent safe, crash-free roadways, but if they significantly decrease the number of people killed or injured as a result of human error, then we can comfortably suggest that the implications of the technology, as a whole, are a boon to society. The ethical underpinnings of what makes for good software design and computer-generated responses are a difficult and philosophically robust space for discussion. Given the abstract nature of the intersection of ethics and robotics, a more detailed consideration and analysis of this space must be left for future research. Important work is currently being done on this subject. But those ethical considerations must not derail ongoing experimentation with intelligent-vehicle technology, which could save many lives and have many other benefits, as already noted. Only through ongoing experimentation and feedback mechanisms can we expect to see constant improvement in how autonomous vehicles respond in these situations to further minimize the potential for accidents and harms. (p. 42-3)

As I noted here in another recent essay, “anything we can do to reduce it significantly is something we need to be pursuing with great vigor, even while we continue to sort through some of those challenging ethical issues associated with automated systems and algorithms.”

No Mention of Alternative Solutions

Finally, it is troubling that neither the 60 Minutes segment nor the Markey report spend any time on alternative solutions to these problems. In my forthcoming law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” I devote the second half of the 90-page paper to constructive solutions to the sort of complex challenges raised in the 60 Minutes segment and the Markey report.

Many of the solutions I discuss in that paper — such as education and awareness-building efforts, empowerment solutions, the development of new social norms, and so on — aren’t even touched on by the reports. That’s a real shame because those methods could go a long way toward helping to alleviate many of the issues the reports identify.

We need a better public dialogue than this about the future of connected cars and Internet of Things security. Political scare tactics and techno-panic journalism are not going to help make the world a safer place. In fact, by whipping up a panic and potentially discouraging innovation, reports such as these can actually serve to prevent critical, life-saving technologies that could change society for the better.
