Search Results for “global innovation arbitrage” – Technology Liberation Front (https://techliberation.com)

Podcast: “AI – DC Policymakers Face a Crossroads” (December 12, 2023)

Here’s a new DC EKG podcast I recently appeared on to discuss the current state of policy development surrounding artificial intelligence. In our wide-ranging chat, we discussed:

  • why a sectoral approach to AI policy is superior to general purpose licensing
  • why comprehensive AI legislation will not pass in Congress
  • the best way to deal with algorithmic deception
  • why Europe lost its tech sector
  • how a global AI regulator threatens our safety
  • the problem with Biden’s AI executive order
  • will AI policy follow the same path as nuclear policy?
  • global innovation arbitrage & the innovation cage
  • AI, health care & FDA regulation
  • AI regulation vs trade secrets
  • is AI transparency / auditing the solution?

Listen to the full show here or here. To read more about current AI policy developments, check out my “Running List of My Research on AI, ML & Robotics Policy.”

 

Why the Endless Techno-Apocalyptica in Modern Sci-Fi? (September 2, 2022)

James Pethokoukis of AEI interviews me about the current miserable state of modern science fiction, which is just dripping with dystopian dread in every movie, show, and book plot. How does all the techno-apocalyptica affect societal and political attitudes about innovation broadly and emerging technologies in particular? Our discussion builds on a recent Discourse article of mine, “How Science Fiction Dystopianism Shapes the Debate over AI & Robotics.” [Pasted down below.] Swing on over to Jim’s “Faster, Please” newsletter and hear what Jim and I have to say. And, for a bonus question, Jim asked me if we are doing a good job of inspiring kids to have a sense of wonder and to take risks. I have some serious concerns that we are falling short on that front.

How Science Fiction Dystopianism Shapes the Debate over AI & Robotics

[Originally ran on Discourse on July 26, 2022.]

George Jetson will be born this year. We don’t know the exact date of this fictional cartoon character’s birth, but thanks to some skillful Hanna-Barbera hermeneutics the consensus seems to be sometime in 2022.

In the same episode that we learn George’s approximate age, we’re also told the good news that his life expectancy in the future is 150 years. It was one of the many ways The Jetsons, though a cartoon for children, depicted a better future for humanity thanks to exciting innovations. Another was a helpful robot named Rosie, along with a host of other automated technologies—including a flying car—that made George and his family’s life easier.

 

Most fictional portrayals of technology today are not as optimistic as The Jetsons, however. Indeed, public and political conceptions about artificial intelligence (AI) and robotics in particular are being strongly shaped by the relentless dystopianism of modern science fiction novels, movies and television shows. And we are worse off for it.

AI, machine learning, robotics and the power of computational science hold the potential to drive explosive economic growth and profoundly transform a diverse array of sectors, while providing humanity with countless technological improvements in medicine and health care, financial services, transportation, retail, agriculture, entertainment, energy, aviation, the automotive industry and many others. Indeed, these technologies are already deeply embedded in these and other industries and making a huge difference.

But that progress could be slowed and in many cases even halted if public policy is shaped by a precautionary-principle-based mindset that imposes heavy-handed regulation based on hypothetical worst-case scenarios. Unfortunately, the persistent dystopianism found in science fiction portrayals of AI and robotics conditions the ground for public policy debates, while also directing attention away from some of the more real and immediate issues surrounding these technologies.

Incessant Dystopianism Untethered from Reality

In his recent book Robots, Penn State business professor John Jordan observes how over the last century “science fiction set the boundaries of the conceptual playing field before the engineers did.” Pointing to the plethora of literature and film that depicts robots, he notes: “No technology has ever been so widely described and explored before its commercial introduction.” Not the internet, cell phones, atomic energy or any others.

Indeed, public conceptions of these technologies, and even the very vocabulary of the field, have been shaped heavily by sci-fi plots beginning a hundred years ago with the 1920 play R.U.R. (Rossum’s Universal Robots), which gave us the term “robot,” and Fritz Lang’s 1927 silent film Metropolis, with its memorable Maschinenmensch, or “machine-human.” There has been a deep and rich imagination surrounding AI and robotics since then, but it has tended to be mostly negative and has grown more hostile over time.

The result has been a public and policy dialogue about AI and robotics that is focused on an endless parade of horribles about these technologies. Not surprisingly, popular culture also affects journalistic framings of AI and robotics. Headlines breathlessly scream of how “Robots May Shatter the Global Economic Order Within a Decade,” but only if we’re not dead already because… “If Robots Kill Us, It’s Because It’s Their Job.”

Dark depictions of AI and robotics are ever-present in popular modern sci-fi movies and television shows. A short list includes: 2001: A Space Odyssey, Avengers: Age of Ultron, Battlestar Galactica (both the 1978 original and the 2004 reboot), Black Mirror, Blade Runner, Ex Machina, Her, The Matrix, Robocop, The Stepford Wives, Terminator, Transcendence, Tron, WALL-E, WarGames and Westworld, among countless others. The least nefarious plots among these films and television shows rest on the idea that AI and robotics are going to drive us to a life of distraction, addiction or sloth. In more extreme cases, we’re warned about a future in which we are either going to be enslaved or destroyed by our new robotic or algorithmic overlords.

Don’t get me wrong; the movies and shows on the above list are some of my favorites. 2001 and Blade Runner are both in my top 5 all-time flicks, and the reboot of Battlestar is one of my favorite TV shows. The plots of all these movies and shows are terrifically entertaining and raise many interesting issues that make for fun discussions.

But they are not representative of reality. In fact, the vast majority of computer scientists and academic experts on AI and robotics agree that claims about machine “superintelligence” are wildly overplayed and that there is no possibility of machines gaining human-equivalent knowledge any time soon—or perhaps ever. “In any ranking of near-term worries about AI, superintelligence should be far down the list,” argues Melanie Mitchell, author of Artificial Intelligence: A Guide for Thinking Humans.

Contra the Terminator-esque nightmares envisioned in so many sci-fi plots, MIT roboticist Rodney Brooks says that “fears of runaway AI systems either conquering humans or making them irrelevant aren’t even remotely well grounded.” John Jordan agrees, noting: “The fear and uncertainty generated by fictional representations far exceed human reactions to real robots, which are often reported to be ‘underwhelming.’”

The same is true for AI more generally. “A close inspection of AI reveals an embarrassing gap between actual progress by computer scientists working on AI and the futuristic visions they and others like to describe,” says Erik Larson, author of The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. Larson refers to this extreme thinking about superintelligent AI as “technological kitsch,” or exaggerated sentimentality and melodrama that is untethered from reality. Yet the public imagination remains captivated by tales of impending doom.

Seeding the Ground with Misery and Misguided Policy

But isn’t it all just harmless fun? After all, it’s just make-believe. Moreover, can’t science fiction—no matter how full of techno-misery—help us think through morally weighty issues and potential ethical conundrums involving AI and robotics?

Yes and no. Titillating fiction has always had a cathartic element to it and helped us cope with the unknown and mysterious. Most historians believe it was Aristotle in his Poetics who first used the term katharsis when discussing how Greek tragedies helped the audience “through pity and fear effecting the proper purgation of these emotions.”

But are modern science fiction depictions of AI and robotics helping us cope with technological change, or instead just stoking a constant fear of it? Modern sci-fi isn’t so much purging negative emotion about the topic at hand as it is endlessly adding to the sense of dread surrounding these technologies. What are the societal and political ramifications of a cultural frame of reference that suggests an entire new class of computational technologies will undermine rather than enrich our human experiences and, possibly, our very existence?

The New Yorker’s Jill Lepore says we live in “A Golden Age for Dystopian Fiction,” but she worries that this body of work “cannot imagine a better future, and it doesn’t ask anyone to bother to make one.” She argues this “fiction of helplessness and hopelessness” instead “nurses grievances and indulges resentments” and that “[i]ts only admonition is: Despair more.” Lepore goes so far as to claim that, because “the radical pessimism of an unremitting dystopianism” has appeal to many on both the left and right, it “has itself contributed to the unravelling of the liberal state and the weakening of a commitment to political pluralism.”

I’m not sure dystopian fiction is driving the unravelling of pluralism, but Lepore is on to something when she notes how a fiction rooted in misery about the future will likely have political consequences at some point.

Techno-panic Thinking Shapes Policy Discussions

The ultimate question is whether public policy toward new AI and robotic technologies will be shaped by this hyperpessimistic thinking in the form of precautionary principle regulation, which essentially treats innovations as “guilty until proven innocent” and seeks to intentionally slow or retard their development.

If the extreme fears surrounding AI and robotics do inspire precautionary controls—as they already have in the European Union—then we need to ask how the preservation of the technological status quo could undermine human well-being by denying society important new life-enriching and life-saving goods and services. Technological stasis does not provide a safer or healthier society, but instead holds back our collective ability to innovate, prosper and better our lives in meaningful ways.

Louis Anslow, curator of Pessimists Archive, calls this “the Black Mirror fallacy,” referencing the British television show that has enjoyed great success peddling tales of impending techno-disasters. Anslow defines the fallacy as follows: “When new technologies are treated as much more threatening and risky than old technologies with proven risks/harms. When technological progress is seen as a bigger threat than technological stagnation.”

Anslow’s Pessimists Archive collects real-world case studies of how moral panic and techno-panics have accompanied the introduction of new inventions throughout history. He notes, “Science fiction has conditioned us to be hypervigilant about avoiding dystopias born of technological acceleration and totally indifferent to avoiding dystopias born of technological stagnation.”

Techno-panics can have real-world consequences when they come to influence policymaking. Robert Atkinson, president of the Information Technology & Innovation Foundation (ITIF), has documented the many ways that “the social and political commentary [about AI] has been hype, bordering on urban myth, and even apocalyptic.” The more these attitudes and arguments come to shape policy considerations, the more likely it is that precautionary principle-based recommendations will drive AI and robotics policy, preemptively limiting their potential. ITIF has published a report documenting “Ten Ways the Precautionary Principle Undermines Progress in Artificial Intelligence,” identifying how it will slow algorithmic advances in key sectors.

Similarly, in his important recent book Where Is My Flying Car?, scientist J. Storrs Hall documents how “regulation clobbered the learning curve” for many important technologies in the U.S. over the last half century, especially nuclear, nanotech and advanced aviation. Society lost out on many important innovations due to endless bureaucratic delays, often thanks to opposition from special interests, anti-innovation activists, overzealous trial lawyers and a hostile media. Hall explains how this also sent a powerful signal to talented young people who might have been considering careers in those sectors. Why go into a field demonized by so many and where your creative abilities will be hamstrung by precautionary constraints?

Disincentivizing Talent

Hall argues that in those crucial sectors, this sort of mass talent migration “took our best and brightest away from improving our lives,” and he warns that those who still hope to make a career in such fields should be prepared to be “misconstrued and misrepresented by activists, demonized by ignorant journalists, and strangled by regulation.”

Is this what the future holds for AI and robotics? Hopefully not. America continues to generate world-class talent on this front today in a diverse array of businesses and university programs. But if the waves of negativism about AI and robotics persist, we shouldn’t be surprised if they result in a talent shift away from building these technologies and toward fields that instead look to restrict them.

For example, Hall documents how, following the sudden shift in public attitudes surrounding nuclear power 50 years ago, “interests, and career prospects, in nuclear physics imploded” and “major discoveries stopped coming.” Meanwhile, enrollment in law schools and other soft sciences typically critical of technological innovation enjoyed greater success. Nobody writes any sci-fi stories about what a disaster that development has been for innovation in the energy sphere, even though it is now abundantly clear how precautionary principle policies have undermined environmental goals and human welfare, with major geopolitical consequences for many nations.

If America loses the talent race on the AI front, it has ramifications for global competitive advantage going forward, especially as China races to catch up. In a world of global innovation arbitrage, talent and venture capital will flow to wherever it is treated most hospitably. Demonizing AI and robotics won’t help recruit or retain the next generation of talent and investors America needs to remain on top.

Flipping the Script

Some folks have had enough of the relentless pessimism surrounding technology and progress in modern science fiction and are trying to do something to reverse it. In a 2011 Wired essay decrying the dangers of “Innovation Starvation,” the acclaimed novelist Neal Stephenson lamented that “the techno-optimism of the Golden Age of [science fiction] has given way to fiction written in a generally darker, more skeptical and ambiguous tone.” While good science fiction “supplies a plausible, fully thought-out picture of an alternate reality in which some sort of compelling innovation has taken place,” Stephenson said modern sci-fi was almost entirely focused on its potential downsides.

To help reverse this trend, Stephenson worked with the Center for Science and the Imagination at Arizona State University to launch Project Hieroglyph, an effort to support authors willing to take a more optimistic view of the future. It yielded a 2014 book, Hieroglyph: Stories and Visions for a Better Future, which included almost 20 contributors. Later, in 2018, The Verge launched the “Better Worlds” project to support 10 writers of “stories that inspire hope” about innovation and the future. “Contemporary science fiction often feels fixated on a sort of pessimism that peers into the world of tomorrow and sees the apocalypse looming more often than not,” said Verge culture editor Laura Hudson when announcing the project.

Unfortunately, these efforts have not captured much public attention, and that’s hardly surprising. “Pessimism has always been big box office,” says science writer Matt Ridley, primarily because it really is more entertaining. Even though many of the great sci-fi writers of the past, including Isaac Asimov, Arthur C. Clarke, and Robert Heinlein, wrote positively about technology, they ultimately had more success selling stories with darker themes. It’s just the nature of things more generally, from the best of Greek tragedy to Shakespeare and on down the line. There’s a reason they’re still rebooting Beowulf all these years later, after all.

So, There’s Star Trek and What Else?

While technological innovation will never enjoy the respect it deserves for being the driving force behind human progress, one can at least hope that more pop culture treatments of it might give it a fair shake. When I ask crowds of people to name a popular movie or television show that includes mostly positive depictions of technology, Star Trek is usually the first (and sometimes the only) thing people mention. It’s true that, on balance, technology was treated as a positive force in the original series, although “V’Ger”—a defunct space probe that attains a level of consciousness—was the prime antagonist in Star Trek: The Motion Picture. Later, Star Trek: The Next Generation gave us the always helpful android Data, but also created the lasting mental image of the Borg, a terrifying race of cyborgs hell-bent on assimilating everyone into their hive mind.

The Borg provided some of The Next Generation’s most thrilling moments, but also created a new cultural meme, with tech critics often worrying about how today’s humans are being assimilated into the hive mind of modern information systems. Philosopher Michael Sacasas even coined the term “the Borg Complex” to refer to a supposed tendency “exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile.” After years of a friendly back-and-forth with Sacasas, I even felt compelled to wrap up my book Permissionless Innovation with a warning to other techno-optimists not to fall prey to this deterministic trap when defending technological change. Regardless of where one falls on that issue, the fact that Sacasas and I were having a serious philosophical discussion premised on a famous TV plotline serves as another indication of how much science fiction shapes public and intellectual debate over progress and innovation.

And, truth be told, some movies know how to excite the senses without resorting to dystopianism. Interstellar and The Martian are two recent examples that come to mind. Interestingly, space exploration technologies themselves usually get a fair shake in many sci-fi plots, often only to be undermined by onboard AIs or androids, as occurred not only in 2001 with the eerie HAL 9000, but also in Alien.

There are some positive (and sometimes humorous) depictions of robots, as in Robot & Frank, and touching ones, as in Bicentennial Man. Beyond The Jetsons, other cartoons like The Iron Giant and Big Hero 6 offer kinder visions of robots. KITT, a super-intelligent robot car, was Michael Knight’s dependable ally in NBC’s Knight Rider. And R2-D2 is always a friendly helper throughout the Star Wars franchise. But generally speaking, modern sci-fi continues to churn out far more negativism about AI and robotics.

What If We Took It All Seriously?

So long as the public and political imagination is spellbound by machine machinations that dystopian sci-fi produces, we’ll be at risk of being stuck with absurd debates that have no meaningful solution other than “Stop the clock!” or “Ban it all!” Are we really being assimilated into the Borg hive mind, or just buying time until a coming robopocalypse grinds us into dust (or dinner)?

If there were a kernel of truth to any of this, then we should adopt some of the extreme solutions Nick Bostrom of Oxford suggests in his writing on these issues. Those radical steps include worldwide surveillance and enforcement mechanisms for scientists and researchers developing algorithmic and robotic systems, as well as some sort of global censorship of information about these capabilities to ensure the technology is not used by bad actors.

To Bostrom’s great credit, he is at least willing to tell us how far he’d go. Most of today’s tech critics prefer to just spread a gospel of gloom and doom and suggest something must be done, without getting into the ugly details about what a global control regime for computational science and robotic engineering would look like. We should reject such extremist hypothesizing and understand that silly sci-fi plots, bombastic headlines and kooky academic writing should not be our baseline for serious discussions about the governance of artificial intelligence and robotics.

At the same time, we absolutely should consider what downsides any technology poses for individuals and society. And, yes, some precautions of a regulatory nature will be needed. But most of the problems envisioned by sci-fi writers are not what we should be concerned with. There are far more specific and nuanced problems that AI and robotics confront us with today, and these deserve more serious consideration and governance steps. How to program safer drones and driverless cars, improve the accuracy of algorithmic medical and financial technologies, and ensure better transparency for government uses of AI are all more mundane but very important issues that require reasoned discussion and balanced solutions today. Dystopian thinking gives us no roadmap to get there other than extreme solutions.

Imagining a Better Future

The way forward here is neither to indulge in apocalyptic fantasies nor to embrace pollyannaish techno-optimism, but to approach these technologies with reasoned risk analysis, sensible industry best practices, educational efforts and other agile governance steps. In a forthcoming book on flexible governance strategies for AI and robotics, I outline how these and other strategies are already being formulated to address real-world challenges in fields as diverse as driverless cars, drones, machine learning in medicine and much more.

A wide variety of ethical frameworks, offered by professional associations, academic groups and others, already exists to “bake in” best practices and align AI design with widely shared goals and values while also “keeping humans in the loop” at critical stages of the design process to ensure that they can continue to guide and occasionally realign those values and best practices as needed.

When things do go wrong, many existing remedies are available, including a wide variety of common law solutions (torts, class actions, contract law, etc.), recall authority possessed by many regulatory agencies, and various consumer protection policies and other existing laws. Moreover, the most effective solution to technological problems usually lies in more innovation, not less. It is only through constant trial and error that humanity discovers better and safer ways of satisfying important wants and needs.

These are complicated and nuanced issues that demand tailored and iterative governance responses. But this should not be done using inflexible, innovation-limiting mandates. Concerns about AI dangers deserve serious consideration and appropriate governance steps to ensure that these systems are beneficial to society. However, there is an equally compelling public interest in ensuring that AI innovations are developed and made widely available to help improve human well-being across many dimensions.

So, enjoy your next dopamine hit of sci-fi hysteria—I know I will, too. But don’t let that be your guide to the world that awaits us. Even if most sci-fi writers can’t imagine a better future, the rest of us can.

Event Video on Algorithmic Auditing and AI Impact Assessments (July 13, 2022)

Upsides:

  • Audits and impact assessments can help ensure organizations live up to their promises as it pertains to “baking in” ethical best practices (on issues like safety, security, privacy, and non-discrimination).
  • Audits and impact assessments are already utilized in other fields to address safety practices, financial accountability, labor practices and human rights issues, supply chain practices, and various environmental concerns.
  • Internal auditing / Institute of Internal Auditors (IIA) efforts could expand to include AI risks
  • Eventually, more and more organizations will expand their internal auditing efforts to incorporate AI risks because it makes good business sense to stay on top of these issues and avoid liability, negative publicity, or other customer backlash.
  • the International Association of Privacy Professionals (IAPP) trains and certifies privacy professionals through formal credentialing programs, supplemented by regular meetings, annual awards, and a variety of outreach and educational initiatives.
  • We should use a similar model for AI and start by supplementing Chief Privacy Officers with Chief Ethical Officers.
  • This is how we formalize the ethical frameworks and best practices that have been formulated by various professional associations such as IEEE, ISO, ACM and others.
  • OECD — Framework for the Classification of AI Systems with the twin goals of helping “to develop a common framework for reporting about AI incidents that facilitates global consistency and interoperability in incident reporting,” and advancing “related work on mitigation, compliance and enforcement along the AI system lifecycle, including as it pertains to corporate governance.”
  • NIST — AI Risk Management Framework “to better manage risks to individuals, organizations, and society associated with artificial intelligence.”
  • These frameworks are being developed through a consensus-driven, open, transparent, and collaborative process, not through top-down regulation.
  • Many AI developers and business groups have endorsed the use of such audits and assessments. BSA|The Software Alliance has said that, “By establishing a process for personnel to document key design choices and their underlying rationale, impact assessments enable organizations that develop or deploy high-risk AI to identify and mitigate risks that can emerge throughout a system’s lifecycle.”
  • Developers can still be held accountable for violations of certain ethical norms and best practices, both through private means and potentially even through formal sanctions by consumer protection agencies (Federal Trade Commission / comparable state offices / state AGs).
  • EqualAI / WEF — “Badge Program for Responsible AI Governance”
  • field of algorithmic consulting continues to expand (ex: O’Neil Risk Consulting)

Downsides:

  • What constitutes a harm or impact in any given context will often be a contentious matter.
  • Auditing algorithms is nothing like auditing an accounting ledger, where the numbers either add up or they don’t.
  • With algorithms there are no binary metrics that can quantify the correct amount of privacy, safety, or security in any given system.
  • The E.U. AI Act will be a disaster for AI innovation and investment.
  • Proposed U.S. Algorithmic Accountability Act of 2022 would require that developers perform impact assessments and file them with the Federal Trade Commission. A new Bureau of Technology would be created inside the agency to oversee the process.
  • If enforced through a rigid regulatory regime and another federal bureaucracy, compliance with algorithmic auditing mandates would likely become a convoluted, time-consuming bureaucratic process. That would likely slow the pace of AI development significantly.
  • The academic literature on AI auditing / impact assessment ignores potential costs; mandatory auditing and assessments are treated as a sort of frictionless nirvana when we already know that such a process would entail significant costs.
  • Some AI scholars suggest that NEPA (the National Environmental Policy Act) should be the model for AI impact assessments / audits.
  • NEPA assessments were initially quite short (sometimes less than 10 pages), but today the average length of these statements is more than 600 pages, with appendices that average over 1,000 pages on top of that.
  • NEPA assessments take an average of 4.5 years to complete; between 2010 and 2017, four assessments took at least 17 years to complete.
  • Many important public projects never get done or take far too long to complete at considerably higher expenditure than originally predicted.
  • Such mandates would create a number of veto points that opponents of AI could use to stop much progress in the field. This is the “vetocracy” problem.
  • We cannot wait years or even months for bureaucracies to eventually get around to formally signing off on audits or assessments, many of which would be obsolete before they were even done.
  • “global innovation arbitrage” problem would kick in: Innovators and investors increasingly relocate to the jurisdictions where they are treated most hospitably.
  • Both parties already accuse digital technology companies of manipulating their algorithms to censor their views.
  • Whichever party is in power at any given time could use the process to politicize terms like “safety,” “security,” and “non-discrimination” to nudge or even force private AI developers to alter their algorithms to satisfy the desires of partisan politicians or bureaucrats.
  • The FCC abused its ambiguous authority to regulate “in the public interest” to indirectly censor broadcasters through intimidation via jawboning tactics and other “agency threats,” or “regulation by raised eyebrow.”
  • There are potentially profound First Amendment issues in play with the regulation of algorithms that have not been explored here but which could become a major part of AI regulatory efforts going forward.

Summary:

  • Auditing and impact assessments can be a part of a more decentralized, polycentric governance framework.
  • Even in the absence of any sort of hard law mandates, algorithmic auditing and impact reviews represent an important way to encourage responsible AI development.
  • But we should be careful about mandating such things due to the many unanticipated costs and consequences of converting this into a top-down, bureaucratic regulatory regime.
  • The process should evolve gradually and organically, as it has in many other fields and sectors.
VIDEO: My London Talk about the Future of AI Governance (June 13, 2022)

On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:

  • What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
  • Which AI sectors are witnessing the most exciting forms of innovation currently?
  • What are the fundamental policy fault lines in the AI policy debates today?
  • Will fears about disruption and automation lead to a new Luddite movement?
  • How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
  • How did automation affect traditional jobs and sectors?
  • Will the European Union’s AI Act become a global model for regulation and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
  • How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
  • Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
  • What do we mean by “existential risk” as it pertains to artificial intelligence?

I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!

Samuel Florman & the Continuing Battle over Technological Progress (April 6, 2022)

Almost every argument against technological innovation and progress that we hear today was identified and debunked by Samuel C. Florman a half century ago. Few others since him have mounted a more powerful case for the importance of innovation to human flourishing than Florman did throughout his lifetime.

Chances are you’ve never heard of him, however. As prolific as he was, Florman did not command as much attention as the endless parade of tech critics whose apocalyptic predictions grabbed all the headlines. An engineer by training, Florman became concerned about the growing criticism of his profession throughout the 1960s and 70s. He pushed back against that impulse in a series of books over the next two decades, most notably The Existential Pleasures of Engineering (1976), Blaming Technology: The Irrational Search for Scapegoats (1981), and The Civilized Engineer (1987). He was also a prolific essayist, penning hundreds of articles for a wide variety of journals, magazines, and newspapers beginning in 1959, and was a regular columnist for MIT Technology Review for sixteen years.

Florman’s primary mission in his books and many of those essays was to defend the engineering profession against attacks emanating from various corners. More broadly, as he noted in a short autobiography on his personal website, Florman was interested in discussing “the relationship of technology to the general culture.”

Florman could be considered a “rational optimist,” to borrow Matt Ridley’s notable term [1] for those of us who believe, as I have summarized elsewhere, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment.[2] Rational optimists are highly pragmatic and base their optimism on facts and historical analysis, not on dogmatism or blind faith in any particular viewpoint, ideology, or gut feeling. But they are unified in the belief that technological change is a crucial component of moving the needle on progress and prosperity.

Florman’s unique contribution to advancing rational optimism came in the way he itemized the various claims made by tech critics and then powerfully debunked each one of them. He was providing other rational optimists with a blueprint for how to defend technological innovation against its many critics and criticisms. As he argued in The Civilized Engineer, we need to “broaden our conception of engineering to include all technological creativity.”[3] And then we need to defend it with vigor.

In 1982, the American Society of Mechanical Engineers appropriately awarded Florman the distinguished Ralph Coats Roe Medal for his “outstanding contribution toward a better public understanding and appreciation of the engineer’s worth to contemporary society.” Carl Sagan had won the award the previous year. Alas, Florman never attained the same degree of notoriety as Sagan. That is a shame because Florman was as much a philosopher and a historian as he was an engineer, and his robust thinking on technology and society deserves far greater attention. More generally, his plain-spoken style and straightforward defense of technological progress continues to be a model for how to counter today’s techno-pessimists.

This essay highlights some of the most important themes and arguments found in Florman’s writing and explains its continuing relevance to the ongoing battles over technology and progress.

What Motivates The “Antitechnologists”?

Florman was interested in answering questions about what motivates both engineers and their critics. He dug deep into psychology and history to figure out what makes these people tick. Who are engineers, and why do they do what they do? That was his primary question, and we will turn to his answers momentarily. But he also wanted to know what drove the technology critics to oppose innovation so vociferously.

Florman’s most important contribution to the history of ideas lies in his 6-part explanation of “the main themes that run through the works of the antitechnologists.”[4] Florman used the term “antitechnologists” to describe the many different critics of engineering and innovation. He recognized that the term wasn’t perfect and that some people he labelled as such would object to it. Nevertheless, because these critics offered no umbrella label for their movement or way of thinking, and because opposition to, or general discomfort with, technology was what motivated them, Florman settled on the label “antitechnologists.”

Florman surveyed a wide swath of technological critics from many different disciplines—philosophy, sociology, law, and other fields. He condensed their main criticisms into six general points:

  • Technology is a “thing” or a force that has escaped from human control and is spoiling our lives.
  • Technology forces man to do work that is tedious and degrading.
  • Technology forces man to consume things that he does not really desire.
  • Technology creates an elite class of technocrats, and so disenfranchises the masses.
  • Technology cripples man by cutting him off from the natural world in which he evolved.
  • Technology provides man with technical diversions which destroy his existential sense of his own being.[5]

No one else before this had ever crafted such a taxonomy of complaints from tech critics, and no one has done it better since Florman did so in 1976. In fact, it is astonishing how well Florman’s list continues to identify what motivates modern technology critics. New technologies have come and gone, but these same concerns tend to be brought up again and again. Florman’s books addressed and debunked each of these concerns in powerful fashion.

The Relentless Pessimism & Elitism of the Antitechnologists

Florman identified the way a persistent pessimism unifies antitechnologists. “Our intellectual journals are full of gloomy tracts that depict a society debased by technology,” he noted.[6] What motivated such gloom and doom? “It is fear. They are terrified by the scene unfolding before their eyes.”[7] He elaborated:

“The antitechnologists are frightened; they counsel halt and retreat. They tell the people that Satan (technology) is leading them astray, but the people have heard that story before. They will not stand still for vague promises of a psychic contentment that is to follow in the wake of voluntary temperance.”[8]

The antitechnologist’s worldview isn’t just relentlessly pessimistic but also highly elitist and paternalistic, Florman argued. He referred to it as “Platonic snobbery.”[9] The economist and political scientist Thomas Sowell would later call that snobbish attitude “the vision of the anointed.”[10] Like Sowell, Florman was angered at the way critics stared down their noses at average folk and disregarded their values and choices:

“The antitechnologists have every right to be gloomy, and have a bounden duty to express their doubts about the direction our lives are taking. But their persistent disregard of the average person’s sentiments is a crucial weakness in their argument—particularly when they then ask us to consider the ‘real’ satisfactions that they claim ordinary people experienced in other cultures of other times.”[11]

Florman noted that critics commonly complain about “too many people wanting too many things,” but he countered that “[t]his is not caused by technology; it is a consequence of the type of creature that man is.”[12] Critics can moralize all they want about supposed over-consumption or “conspicuous consumption,” but in the end, most of us strive to better our lives in various ways—including by working to attain things that may be out of our reach or even superfluous in the eyes of others.

For many antitechnologists and other social critics, only the noble search for truth and wisdom will suffice. Basically, everybody should just get back to studying philosophy, sociology, and other soft sciences. Modern tech critics, Florman said, fashion themselves as the intellectual descendants of Greek philosophers who believed that “[t]he ideal of the new Athenian citizen was to care for his body in the gymnasium, reason his way to Truth in the academy, gossip in the agora, and debate in the senate. Technology was not deemed worthy of a free man’s time.”[13]

“It is not surprising to find philosophers recommending the study of philosophy as a way of life,” Florman noted amusingly.[14] But that does not mean all of us want (or even need) to devote our lives to such things. Nonetheless, critics often sneer at the choices made by the rest of us—especially when they involve the fruits of science and technology. “The most effective weapon in the arsenal of the antitechnologists is self-righteousness,” he noted,[15] and, “[a]s seen by the antitechnologists, engineers and scientists are half-men whose analysis and manipulation of the world deprives them of the emotional experiences that are the essence of the good life.”[16]

Indeed, it is not uncommon (both in the past and today) to see tech critics anoint themselves “humanists” and then suggest that anyone who thinks differently from them (namely, those who are pro-innovation) is the equivalent of anti-humanistic. I wrote about this in my 2018 essay, “Is It ‘Techno-Chauvinist’ & ‘Anti-Humanist’ to Believe in the Transformative Potential of Technology?” I argued that “[p]roperly understood, ‘technology’ and technological innovation are simply extensions of our humanity and represent efforts to continuously improve the human condition. In that sense, humanism and technology are complements, not opposites.”

But the critics remain fundamentally hostile to that notion and they often suggest that there is something suspicious about those who believe, along with Florman, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment. We rational optimists, the critics suggest, are simply too focused on crass, materialistic measures of happiness and human flourishing.

Florman observed this when noting how much grief he and fellow engineers and scientists got when engaging with critics. “Anyone who has attempted to defend technology against the reproaches of an avowed humanist soon discovers that beneath all the layers of reasoning—political, environmental, aesthetic, or moral—lies a deep-seated disdain for ‘the scientific view.’”[17]

Everywhere you look in the world of Science & Technology Studies (STS) today, you find this attitude at work. In fact, the field is perhaps better labelled Anti-Science & Technology Studies, or at least Science & Technology Skeptical Studies. For most STSers, the burden of proof lies squarely on scientists, engineers, and innovators who must prove to some (often undefined) higher authorities that their ideas and inventions will bring worth to society (however the critics measure worth and value, which is often very unclear). Until then, just go slow, the critics say. Better yet, consult your local philosophy department for a proper course of action!

The critics will retort that they are just looking out for society’s best interests and trying to counter that selfish, materialist side of humanity. Florman countered by noting how “most people are in search of the good life—not ‘the goods life’ as [Lewis] Mumford puts it, although some goods are entailed—and most human desires are for good things in moderate amounts.”[18] Trying to better our lives through the creation and acquisition of new and better goods and services is just a natural and quite healthy human instinct that helps us attain some ever-changing definition of whatever each of us considers “the good life.” “Something other than technology is responsible for people wanting to live in a house on a grassy plot beyond walking distance to job, market, neighbor, and school,” Florman responded.[19] We all want to “get ahead” and improve our lot in life. That urge is not forced upon us by technology; it comes quite naturally.

The Power of Nostalgia

I have spent a fair amount of time in my own writing documenting the central role that nostalgia plays in motivating technological criticism.[20] Florman’s books repeatedly highlighted this reality. “The antitechnologists romanticize the work of earlier times in an attempt to make it seem more appealing than work in a technological age,” he noted. “But their idyllic descriptions of peasant life do not ring true.”[21]

The funny thing is, it is hard to pin down the critics regarding exactly when the “golden era” or “good ol’ days” were. But if there is one thing that they all agree on, it’s that those days have long passed us by. In a 2019 essay on “Four Flavors of Doom: A Taxonomy of Contemporary Pessimism,” philosopher Maarten Boudry noted:

“In the good old days, everything was better. Where once the world was whole and beautiful, now everything has gone to ruin. Different nostalgic thinkers locate their favorite Golden Age in different historical periods. Some yearn for a past that they were lucky enough to experience in their youth, while others locate utopia at a point farther back in time…”

Not all nostalgia is bad. Clay Routledge has written eloquently about how “nostalgia serves important psychological functions,” and can sometimes possess a positive character that strengthens individuals and society. But the nostalgia found in the works of tech critics is usually a different thing altogether. It is rooted in misery about the present and dread of the future—all because technology has apparently stolen away or destroyed all that was supposedly great about the past. Florman noted how “the current pessimism about technology is a renewed manifestation of pastoralism” that is typically rooted in historical revisionism about bygone eras.[22] Many critics engage in what rhetoricians call “appeals to nature” and wax poetic about the joys of life for Pre-Technological Man, who apparently enjoyed an idyllic life free of the annoying intrusions created by modern contrivances.

Such “good ol’ days” romanticism is largely untethered from reality. “For most of recorded history humanity lived on the brink of starvation,” Wall Street Journal columnist Greg Ip noted in a column in early 2019. Even a cursory review of history offers voluminous, unambiguous proof that the old days were, in reality, eras of abject misery. Widespread poverty, mass hunger, poor hygiene, disease, short lifespans, and so on were the norm. What lifted humanity up and improved our lot as a species is that we learned how to apply knowledge to tasks in a better way through incessant trial-and-error experimentation. Recent books by Hans Rosling,[23] Steven Pinker,[24] and many others[25] have thoroughly documented these improvements to human well-being over time.

The critics are unmoved by such evidence, preferring to just jump around in time and cherry-pick moments when they feel life was better than it is now. “Fond as they are of tribal and peasant life, the antitechnologists become positively euphoric over the Middle Ages,” Florman quipped.[26] Why? Mostly because the Middle Ages lacked the technological advances of modern times, which the critics loathe. But facts are pesky things, and as Florman insisted, “it is fair to go on to ask whether or not life was ‘better’ in these earlier cultures than it is in our own.”[27] “We all are moved to reverie by talk of an arcadian golden age,” he noted. “But when we awaken from this reverie, we realize that the antitechnologists have diverted us with half-truths and distortions.”[28]

The critics’ reverence for the old days would be humorous if it wasn’t rooted in an arrogant and dangerous belief that society can somehow be reshaped to resemble whatever preferred past the critics desire. “Recognizing that we cannot return to earlier times, the antitechnologists nevertheless would have us attempt to recapture the satisfactions of these vanished cultures,” Florman noted. “In order to do this, what is required is nothing less than a change in the nature of man.”[29] That is, the critics will insist that “something must be done” (namely, imposed from above via some grand design) to remake humans and discourage their inner homo faber desire to be an incessant tool-builder. But this is madness, Florman argued in one of the best passages from his work:

“we are beginning to realize that for mankind there will never be a time to rest at the top of the mountain. There will be no new arcadian age. There will always be new burdens, new problems, new failures, new beginnings. And the glory of man is to respond to his harsh fate with zest and ever-renewed effort.”[30]

If the critics had their way, however, that zest would be dampened and those efforts restrained in the name of recapturing some mythical lost age. This sort of “rosy retrospection bias” is all the more shocking coming, as it does, from learned people who should know a lot more about the actual history of our species and the long struggle to escape utter despair and destitution. Alas, as the great Scottish philosopher David Hume observed in a 1777 essay, “The humour of blaming the present, and admiring the past, is strongly rooted in human nature, and has an influence even on persons endued with the profoundest judgment and most extensive learning.”[31]

Why Invent? Homo Faber is our Nature

While taking on the critics and debunking their misplaced nostalgia about the past, Florman mounted a defense of engineers and innovators by noting that the need to tinker and create is in our blood. He began by noting how “the nature of engineering has been misconceived”[32] because, in a sense, we are all engineers and innovators to some degree.

Florman’s thinking was very much in line with Benjamin Franklin, who once noted, “man is a tool-making animal.” “Both genetically and culturally the engineering instinct has been nurtured within us,” Florman argued, and this instinct “was as old as the human race.”[33] “To be human is to be technological. When we are being technological we are being human—we are expressing the age-old desire of the tribe to survive and prosper.”[34] In fact, he claimed, it was no exaggeration to say that humans, “are driven to technological creativity because of instincts hardly less basic than hunger and sex.”[35] Had our past situation been as rosy as the critics sometimes suggest, perhaps we would have never bothered to fashion tools to escape those eras! It was precisely because humans wanted to improve their lives and the lives of their loved ones that we started crafting more and better tools. Flint and firewood were never going to suffice.

But our engineering instincts do not end with basic needs. “Engineering responds to impulses that go beyond mere survival: a craving for variety and new possibilities, a feeling for proportion—for beauty—that we share with the artist,” Florman argued.[36] In essence, engineering and innovation respond to both basic human needs and higher ones at every stage of “Maslow’s pyramid,” which describes a five-level hierarchy of human needs. This same theme is developed in Arthur Diamond’s recent book, Openness to Creative Destruction: Sustaining Innovative Dynamism. As Diamond argues, one of the most unheralded features of technological innovation is that, “by providing goods that are especially useful in pursuing a life plan full of challenging, worthwhile creative projects,” it allows each of us to pursue different conceptions of what we consider a good life.[37] But we are only able to do so by first satisfying our basic physiological needs, which innovation also handles for us.

Florman was frustrated that critics failed to understand this point and equally concerned that engineers and innovators had been cast as uncaring gadget-worshipers who did not see beauty and truth in higher arts and other more worldly goals and human values. That’s hogwash, he argued:

“What an ironic turn of events! For if ever there was a group dedicated to—obsessed with—morality, conscience, and social responsibility, it has been the engineering profession. Practically every description of the practice of engineering has stressed the concept of service to humanity.[38] [. . .] Even in an age of global affluence, the main existential pleasure of the engineer will always be to contribute to the well-being of his fellow man.”[39]

Engineers and innovators do not always set out with some grandiose design to change the world, although some aspire to do so. Rather, the “existential pleasures of engineering” that Florman described in the title of his most notable book comes about by solving practical day-to-day problems:

“The engineer does not find existential pleasure by seeking it frontally. It comes to him gratuitously, seeping into him unawares. He does not arise in the morning and say, ‘Today I shall find happiness.’ Quite the contrary. He arises and says, ‘Today I will do the work that needs to be done, the work for which I have been trained, the work which I want to do because in doing it I feel challenged and alive.’ Then happiness arrives mysteriously as a byproduct of his effort.”[40]

And this pleasure of getting practical work done is something that engineers and innovators enjoy collectively by coming together and using specialized skills in new and unique combinations. “[T]echnological progress depends upon a variety of skills and knowledge that are far beyond the capacity of any one individual,” he insisted. “High civilization requires a high degree of specialization, and it was toward high civilization that the human journey appears always to have been directed.”[41] Adam Smith could not have said it any better.

“Muddling Through”: Why Trial-and-Error is the Key to Progress

My favorite insights from Florman’s work relate to the way humans have repeatedly faced up to adversity and found ways to “muddle through.” This was the focus of an old essay of mine— “Muddling Through: How We Learn to Cope with Technological Change”—which argued that humans are a remarkably resilient species and that we regularly find creative ways to deal with major changes through constant trial-and-error experimentation and the learning that results from it.[42]

Florman made this same point far more eloquently long ago:

“We have been attempting to muddle along, acknowledging that we are selfish and foolish, and proceeding by means of trial and error. We call ourselves pragmatists. Mistakes are made, of course. Also, tastes change, so that what seemed desirable to one generation appears disagreeable to the next. But our overriding concern has been to make sure that matters of taste do not become matters of dogma, for that is the way toward violent conflict and tyranny. Trial and error, however, is exactly what the antitechnologists cannot abide.”[43]

It is the error part of trial-and-error that is so vital to societal learning. “Even the most cautious engineer recognizes that risk is inherent in what he or she does,” Florman noted. “Over the long haul the improbable becomes the inevitable, and accidents will happen. The unanticipated will occur.”[44] But “[s]ometimes the only way to gain knowledge is by experiencing failure,” he correctly observed.[45] “To be willing to learn through failure—failure that cannot be hidden—requires tenacity and courage.”[46]

I’ve argued that this represents the central dividing line between innovation supporters and technology critics. The critics are so focused on risk-averse, precautionary principle-based thinking that they simply cannot tolerate the idea that society can learn more through trial-and-error than through preemptive planning. They imagine it is possible to override that process and predetermine the proper course of action to create a safer, more stable society. In this mindset, failure is to be avoided at all costs through prescriptions and prohibitions. Innovation is to be treated as guilty until proven innocent in the hope of eliminating the error (or risk/failure) associated with trial-and-error experiments. To reiterate, this logic misses the fact that the entire point of trial-and-error is to learn from our mistakes and “fail better” next time, until we’ve solved the problem at hand entirely.[47]

Florman noted that “sensible people have agreed that there is no free lunch; there are only difficult choices, options, and trade-offs.”[48] In other words, precautionary controls come at a cost. “All we can do is do the best we can, plan where we can, agree where we can, and compromise where we must,” he said.[49] But, again, the antitechnologists absolutely cannot accept this worldview. They are fundamentally hostile to it because they either believe that a precautionary approach will do a better job improving public welfare, or they believe that trial-and-error fails to safeguard any number of other values or institutions that they regard as sacrosanct. This shuts down the learning process from which wisdom is generated. As the old adage goes, “nothing ventured, nothing gained.” There can be no reward without some risk, and there can be no human advances unless we are free to learn from the error portion of trial-and-error.

The Costs of Precautionary Regulation

Florman did not spend much time in his writing mulling over the finer points of public policy, but he did express skepticism about our collective ability to define and enforce “the public interest” in various contexts. A great many regulatory regimes—and their underlying statutes—rest on the notion of “protecting the public interest.” It is impossible to be against that notion, but it is often equally impossible to define what it even means.[50]

This leads to what Florman called “the search for virtues that nobody can define.”[51] “As engineers we are agreed that the public interest is very important; but it is folly to think that we can agree on what the public interest is. We cannot even agree on the scientific facts!”[52] This is especially true today in debates over what constitutes “responsible innovation” or “ethical innovation.”[53] What Florman noted about such conversations three decades ago is equally true today:

“Whenever engineering ethics is on the agenda, emotions come quickly to a boil. […] It is oh so easy to mouth clichés, for example to pledge to protect the public interest, as the various codes of engineering ethics do. But such a pledge is only a beginning and hardly that. The real questions remain: What is the public interest, and how is it to be served?”[54]

That reality makes it extremely difficult to formulate consensus regarding public policies for emerging technologies. And it makes it particularly difficult to define and enforce a “precautionary principle” for emerging technologies that will somehow strike the Goldilocks balance of getting things just right. This was the focus of my 2016 book Permissionless Innovation, which argued that the precautionary principle should be the last resort when contemplating innovation policy. Experimentation with new technologies and business models should generally be permitted by default because, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about,” I argued. The precautionary principle should only be tapped when the harms alleged to be associated with a new technology are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion.

For his part, Florman did not want to get his defense of engineering mixed up with politics and regulatory considerations. Engineers and technologists, he noted, come in many flavors and support many different causes. Generally speaking, they tend to be quite pragmatic and shun strong ideological leanings and political pronouncements.

Of course, at some point, there is no avoiding this fight; one must comment on how to strike the right balance when politics enters the picture and threatens to stifle technological creativity. Florman’s perspectives on regulatory policy were somewhat jumbled, however. On one hand, he expressed concern about excessive and misguided regulations, but he also saw government playing an important role both in supporting various types of engineering projects and regulating certain technological developments:

“The regulatory impulse, running wild, wreaks havoc, first of all by stifling creative and productive forces that are vital to national survival. But it does harm also—and perhaps more ominously—by fomenting a counter-revolution among outraged industrialists, the intensity of which threatens to sweep away many of the very regulations we most need.”[55]

In his 1987 book, The Civilized Engineer, Florman even expressed surprise and regret about growing pushback against regulation during the Reagan years. He also expressed skepticism about “the deceptive allure” of benefit-cost analysis, which was on the rise at the time, saying that the “attempt to apply mathematical consistency to the regulatory process was deplorably simplistic.”[56] I have always been a big believer in the importance of benefit-cost analysis (BCA), so I was surprised to read of Florman’s skepticism of it. But he was writing in the early days of BCA, and it was not entirely clear then how well it would work in practice. Four decades on, BCA has become far more rigorous, academically respected, and well-established throughout government. It has widespread and bipartisan support as a policy evaluation tool.

Florman adamantly opposed any sort of “technocracy”—or administration of government by technically skilled elites. He thought it was silly that so many tech critics believed that such a thing already existed. “The myth of the technocratic elite is an expression of fear, like a fairy tale about ogres,” he argued. “It springs from an understandable apprehension, but since it has no basis in reality, it has no place in serious discourse.”[57] Nor did he believe that there was any real chance a technocracy would ever take hold. “No matter how complex technology becomes, and no matter how important it turns out to be in human affairs, we are not likely to see authority vested in a class of technocrats.”[58]

Florman hoped for wiser administration of the laws and regulations that affected engineering endeavors and innovation more generally. Like so many others, he did not necessarily want more law, just better law. One cannot fault that instinct, but Florman was not really interested in fleshing out the finer policy details of how to accomplish that objective. He preferred instead to use history as a rough guide for policy. From the fall of the Roman Empire to the decline of Britain’s economic might in more recent times, Florman observed the ways in which societal and governmental attitudes toward innovation influenced the relative growth of science, technology, and national economies. In essence, he was explaining how “innovation culture” and “innovation arbitrage” had been realities for far longer than most people realize.[59]

“Where the entrepreneurial spirit cannot be rewarded, and where non-productive workers cannot be discharged, stagnation will set in,” Florman concluded.[60] This is very much in line with the thinking of economic historians like Joel Mokyr[61] and Deirdre McCloskey,[62] who have identified how attitudes toward creativity and entrepreneurialism affect the aggregate innovative capacity of nations, and thus their competitive advantage and relative prosperity in the world.

Debunking Determinism, Anxiety & Alienation Concerns

One of the ironies of modern technological criticism is the way many critics can’t seem to get their story straight when it comes to “technological determinism” versus social determinism. In the extreme view, technological determinism is the idea that technology drives history and almost has a will of its own. It is like an autonomous force that is practically unstoppable. By contrast, social determinism means that society (individuals, institutions, etc.) guide and control the development of technology.

In the field of Science and Technology Studies, technological determinism is a hotly debated subject. Academic and social critics are fond of painting innovation advocates as rigid tech determinists who are little better than uncaring, anti-humanistic gadget-worshipers. The critics have employed a variety of other creative labels to describe tech determinism, including “techno-fundamentalism,” “technological solutionism,” and even “techno-chauvinism.”

Engineers and other innovators often get hit with such labels and accused of being rigid technological determinists who just want to see tech plow over people and politics. But this was, and remains, a ridiculous argument. Sure, there will always be some wild-eyed futurists and extropian extremists who make preposterous claims about how “there is no stopping technology.” “Even now the salvation-through-technology doctrine has some adherents whose absurdities have helped to inspire the antitechnological movement,” Florman said.[63] But that hardly represents the majority of innovation supporters, who well understand that society and politics play a crucial role in shaping the future course of technological development.

As Florman noted, we can dismiss extreme deterministic perspectives for a rather simple reason: technologies fail all the time! “If promising technologies can suffer fatal blows from unexpected circumstances,” Florman correctly argued, then “[t]his means that we are still—however precariously—in control of our own destiny.”[64] He believed that, “technology is not an independent force, much less a thing, but merely one of the types of activities in which people engage.”[65] The rigid view of tech determinism can be dismissed, he said, because “it can be shown that technology is still very much under society’s control, that it is in fact an expression of our very human desires, fancies, and fears.”[66]

But what is amazing about this debate is that some of the most rigid technological determinists are the technology critics themselves! Recall how Florman began his six-part taxonomy of common complaints from tech critics. “A primary characteristic of the antitechnologists,” Florman argued, “is the way in which they refer to ‘technology’ as a thing, or at least a force, as if it had an existence of its own” and “has escaped from human control and is spoiling our lives.”[67]

He noted that many of the leading tech critics of the post-war era often spoke in remarkably deterministic ways. “The idea that a man of the masses has no thoughts of his own, but is something on the order of a programmed machine, owes part of its popularity with the antitechnologists to the influential writings of Herbert Marcuse,” he believed.[68] But then such thinking accelerated and gained greater favor with the popularity of critics like French philosopher Jacques Ellul, American historian Lewis Mumford, and American cultural critic Neil Postman.

Their books painted a dismal portrait of a future in which humans were subjugated to the evils of “technique” (Ellul), “technics” (Mumford), or “technopoly” (Postman). The narrative of their works reads like dystopian science fiction. Essentially, there was no escaping the iron grip that technology had on us. Postman claimed, for example, that technology was destined to destroy “the vital sources of our humanity” and lead to “a culture without a moral foundation” by undermining “certain mental processes and social relations that make human life worth living.”

This brings us to the commonly heard concerns about how technology leads to “anxiety” and “alienation.” “Having established the view of technology as an evil force, the antitechnologists then proceed to depict the average citizen as a helpless slave, driven by this force to perform work he detests,” Florman noted.[69] “Anxiety and alienation are the watchwords of the day, as if material comforts made life worse, rather than better.”[70]

These concerns about anxiety, alienation, and “dehumanization” are omnipresent in the work of modern tech critics, and they are also tied up with traditional worries about “conspicuous consumption.” It’s all part of the “false consciousness” narrative they also peddle, which basically views humans as too ignorant to look out for their own good. In this worldview, people are sheep being led to the slaughter by conniving capitalists and tech innovators, who are just trying to sell them things they don’t really need.

Florman pointed out how preposterous this line of thinking is when he noted how critics seem to always forget that, “a basic human impulse precedes and underlies each technological development”:[71]

“Very often this impulse, or desire, is directly responsible for the new invention. But even when this is not the case, even when the invention is not a response to any particular consumer demand, the impulse is alive and at the ready, sniffing about like a mouse in a maze, seeking its fulfillment. We may regret having some of these impulses. We certainly regret giving expression to some of them. But this hardly gives us the right to blame our misfortunes on a devil external to ourselves.”[72]

Consider the automobile, for example. Industrial-era critics often focused on it, lambasting the way they thought industrialists pushed auto culture and technologies on the masses. Did we really need all those cars? All those colors? All those options? Did we really even need cars? The critics wanted us to believe that all these things were just imposed upon us. We were being force-fed options we really didn’t even need or want. “Choice” in this worldview is just a fiction, a front for the nefarious ends of our corporate overlords.

Florman demolished this reasoning throughout his books. “However much we deplore the growth of our automobile culture, clearly it has been created by people making choices, not by a runaway technology,” he argued.[73] Consumer demand and choice are not some fiction fabricated and forced upon us, as the antitechnologists suggest. We make decisions. “Those who would blame all of life’s problems on an amorphous technology, inevitably reject the concept of individual responsibility,” Florman retorted. “This is not humanism. It is a perversion of the humanistic impulse.”[74]

A modern tweak on the conspicuous consumption and false consciousness arguments is found in the work of leading tech critics like Evgeny Morozov, who pens attention-grabbing screeds decrying what he regards as “the folly of technological solutionism.” Morozov bluntly states that “our enemy is the romantic and revolutionary problem solver who resides within” all of us, but most specifically within the engineers and technologists.[75]

But would the world really be a better place if tinkerers didn’t try to scratch that itch?[76] In 2021, the Wall Street Journal profiled JoeBen Bevirt, an engineer and serial entrepreneur who has been working to bring flying cars from sci-fi to reality. Channeling Florman’s defense of the existential pleasures associated with engineering, Bevirt spoke passionately about the way innovators can help “move our species forward” through their constant tinkering to find solutions to hard problems. “That’s kind of the ethos of who we are,” he said. “We see problems, we’re engineers, we work to try to fix them.”[77]

When tech critics like Morozov decry “solutionism,” they are essentially saying that innovators like Bevirt need to just shut up and sit down. Don’t try to improve the world through tinkering; just settle for the status quo, the critics basically state. That’s the kiss of death for human progress, however, because it is only through incessant experimentation with new and different approaches to hard problems that we can advance human well-being. “Solutionism” isn’t about just creating some shiny new toy; it’s about expanding the universe of potentially life-enriching and life-saving technologies available to humanity.

Conclusion

This review of Samuel Florman’s work may seem comprehensive, but it only scratches the surface of his wide-ranging writing. Florman was troubled that engineering lacked public support, or at least understanding. Perhaps that was because, he reasoned, “[t]here is no single truth that embodies the practice of engineering, no patron saint, no motto or simple credo. There is no unique methodology that has been distilled from millennia of technological effort.” Or, more simply, it may also be the case that the profession lacked articulate defenders. “The engineer may merely be waiting for his Shakespeare,” he suggested.[78]

Through his life’s work, however, Samuel Florman became that Shakespeare: the great bard of engineering and passionate defender of technological innovation and rational optimism more generally. In looking for a quote or two to close out my latest book, I ended with this one from Florman:

“By turning our backs on technological change, we would be expressing our satisfaction with current world levels of hunger, disease, and privation. Further, we must press ahead in the name of the human adventure. Without experimentation and change our existence would be a dull business.”[79]

Let us resolve to make sure that Florman’s greatest fear does not come to pass. Let us resolve to make sure that the great human adventure never ends. And let us resolve to counter the antitechnologists and their fundamentally anti-humanist worldview, which would most assuredly make our existence the “dull business” that Florman dreaded.

We can do better when we put our minds and hands to work innovating in an attempt to build a better future for humanity. Samuel Florman, the great prophet of progress, showed us the way forward.

 


 

Endnotes:

[1]    Matt Ridley, The Rational Optimist: How Prosperity Evolves (New York: Harper Collins, 2010).

[2]    Adam Thierer, “Defending Innovation Against Attacks from All Sides,” Discourse, November 9, 2021, https://www.discoursemagazine.com/ideas/2021/11/09/defending-innovation-against-attacks-from-all-sides.

[3]    Samuel C. Florman, The Civilized Engineer (New York: St. Martin’s Griffin, 1987), p. 26.

[4]    Samuel C. Florman, The Existential Pleasures of Engineering (New York: St. Martin’s Griffin, 2nd Edition, 1994), p. 53-4.

[5]    Existential Pleasures of Engineering, p. 53-4.

[6]    Samuel C. Florman, Blaming Technology: The Irrational Search for Scapegoats (New York: St. Martin’s Press, 1981), p. 186.

[7]    Existential Pleasures of Engineering, p. 76.

[8]    Existential Pleasures of Engineering, p. 77.

[9]    The Civilized Engineer, p. 38.

[10]   Thomas Sowell, The Vision of the Anointed: Self-Congratulation as a Basis for Social Policy (New York: Basic Books, 1995).

[11]   Existential Pleasures of Engineering, p. 72.

[12]   Existential Pleasures of Engineering, p. 76.

[13]   The Civilized Engineer, p. 35.

[14]   Existential Pleasures of Engineering, p. 102.

[15]   Blaming Technology, p. 162.

[16]   Existential Pleasures of Engineering, p. 55.

[17]   Blaming Technology, p. 70.

[18]   Existential Pleasures of Engineering, p. 77.

[19]   Existential Pleasures of Engineering, p. 60.

[20]   Adam Thierer, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology 14, no. 1 (2013), p. 312–50, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2012494.

[21]   Existential Pleasures of Engineering, p. 62.

[22]   Blaming Technology, p. 9.

[23]   Hans Rosling, Factfulness: Ten Reasons We’re Wrong about the World—and Why Things Are Better Than You Think (New York: Flatiron Books, 2018).

[24]   Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking, 2018).

[25]   Gregg Easterbrook, It’s Better than It Looks: Reasons for Optimism in an Age of Fear (New York: Public Affairs, 2018); Michael A. Cohen & Micah Zenko, Clear and Present Safety: The World Has Never Been Better and Why That Matters to Americans (New Haven, CT: Yale University Press, 2019).

[26]   Existential Pleasures of Engineering, p. 54.

[27]   Existential Pleasures of Engineering, p. 72.

[28]   Existential Pleasures of Engineering, p. 72.

[29]   Existential Pleasures of Engineering, p. 55.

[30]   Existential Pleasures of Engineering, p. 117.

[31]   David Hume, “Of the Populousness of Ancient Nations,” (1777), https://oll.libertyfund.org/titles/hume-essays-moral-political-literary-lf-ed.

[32]   The Civilized Engineer, p. 20.

[33]   Existential Pleasures of Engineering, p. 6.

[34]   The Civilized Engineer, p. 20.

[35]   Existential Pleasures of Engineering, p. 115.

[36]   The Civilized Engineer, p. 20.

[37]   Arthur Diamond, Openness to Creative Destruction: Sustaining Innovative Dynamism (Oxford: Oxford University Press, 2019).

[38]   Existential Pleasures of Engineering, p. 19.

[39]   Existential Pleasures of Engineering, p. 147.

[40]   Existential Pleasures of Engineering, p. 148.

[41]   The Civilized Engineer, p. 30.

[42]   Adam Thierer, “Muddling Through: How We Learn to Cope with Technological Change,” Medium, June 30, 2014, https://medium.com/tech-liberation/muddling-through-how-we-learn-to-cope-with-technological-change-6282d0d342a6.

[43]   Existential Pleasures of Engineering, p. 84.

[44]   The Civilized Engineer, p. 71.

[45]   The Civilized Engineer, p. 72.

[46]   The Civilized Engineer, p. 72.

[47]   Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.

[48]   The Civilized Engineer, p. xi.

[49]   Existential Pleasures of Engineering, p. 85.

[50]   Adam Thierer, “Is the Public Served by the Public Interest Standard?” The Freeman, September 1, 1996,  https://fee.org/articles/is-the-public-served-by-the-public-interest-standard.

[51]   The Civilized Engineer, p. 84.

[52]   Existential Pleasures of Engineering, p. 22.

[53]   Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.

[54]   The Civilized Engineer, p. 79.

[55]   Blaming Technology, p. 106.

[56]   The Civilized Engineer, p. 158.

[57]   Blaming Technology, p. 41.

[58]   Blaming Technology, p. 40-1.

[59]   Adam Thierer, “Embracing a Culture of Permissionless Innovation,” Cato Online Forum, November 17, 2014, https://www.cato.org/publications/cato-online-forum/embracing-culture-permissionless-innovation; Christopher Koopman, “Creating an Environment for Permissionless Innovation,” Testimony before the US Congress Joint Economic Committee, May 22, 2018, https://www.mercatus.org/publications/creating-environment-permissionless-innovation.

[60]   The Civilized Engineer, p. 117.

[61]   Joel Mokyr, Lever of Riches: Technological Creativity and Economic Progress (New York: Oxford University Press, 1990).

[62]   Deirdre N. McCloskey, The Bourgeois Virtues: Ethics for an Age of Commerce (Chicago: The University of Chicago Press, 2006); Deirdre N. McCloskey, Bourgeois Dignity: Why Economics Can’t Explain the Modern World (Chicago: The University of Chicago Press. 2010).

[63]   Existential Pleasures of Engineering, p. 57.

[64]   Blaming Technology, p. 22.

[65]   Existential Pleasures of Engineering, p. 58.

[66]   Blaming Technology, p. 10.

[67]   Existential Pleasures of Engineering, p. 48, 53.

[68]   Existential Pleasures of Engineering, p. 70.

[69]   Existential Pleasures of Engineering, p. 49.

[70]   Existential Pleasures of Engineering, p. 16.

[71]   Existential Pleasures of Engineering, p. 61.

[72]   Existential Pleasures of Engineering, p. 61.

[73]   Existential Pleasures of Engineering, p. 60.

[74]   Blaming Technology, p. 104.

[75]   Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (New York: Public Affairs, 2013).

[76]   Adam Thierer, “A Net Skeptic’s Conservative Manifesto,” Reason, April 27, 2013, https://reason.com/2013/04/27/a-net-skeptics-conservative-manifesto-2/.

[77]   Emily Bobrow, “JoeBen Bevirt Is Bringing Flying Taxis from Sci-Fi to Reality,” Wall Street Journal, July 9, 2021, https://www.wsj.com/articles/joeben-bevirt-is-bringing-flying-taxis-from-sci-fi-to-reality-11625848177.

[78]   Existential Pleasures of Engineering, p. 96.

[79]   Blaming Technology, p. 193.

]]>
https://techliberation.com/2022/04/06/samuel-florman-the-continuing-battle-over-technological-progress/feed/ 0 76961
The End of Permissionless Innovation? https://techliberation.com/2021/01/10/the-end-of-permissionless-innovation/ https://techliberation.com/2021/01/10/the-end-of-permissionless-innovation/#comments Sun, 10 Jan 2021 21:24:12 +0000 https://techliberation.com/?p=76823

Time magazine recently declared 2020 “The Worst Year Ever.” By historical standards that may be a bit of hyperbole. For America’s digital technology sector, however, that headline rings true. After a remarkable 25-year run that saw an explosion of innovation and the rapid ascent of a group of U.S. companies that became household names across the globe, politicians and pundits in 2020 declared the party over.

“We now are on the cusp of a new era of tech policy, one in which the policy catches up with the technology,” says Darrell M. West of the Brookings Institution in a recent essay, “The End of Permissionless Innovation.” West cites the House Judiciary Antitrust Subcommittee’s October report on competition in digital markets—which equates large tech firms with the “oil barons and railroad tycoons” of the Gilded Age—as the clearest sign that politicization of the internet and digital technology is accelerating.

It is hardly the only indication that America is set to abandon permissionless innovation and revisit the era of heavy-handed regulation for information and communication technology (ICT) markets. Equally significant is the growing bipartisan crusade against Section 230, the provision of the 1996 Telecommunications Act that shields “interactive computer services” from liability for information posted or published on their systems by users. No single policy has been more important to the flourishing of online speech or commerce than Sec. 230 because, without it, online platforms would be overwhelmed by regulation and lawsuits. But now, long knives are coming out for the law, with plenty of politicians and academics calling for it to be gutted.

Calls to reform or repeal Sec. 230 were once exclusively the province of left-leaning academics or policymakers, but this year it was conservatives in the White House, on Capitol Hill and at the Federal Communications Commission (FCC) who became the leading cheerleaders for scaling back or eliminating the law. President Trump railed against Sec. 230 repeatedly on Twitter, and most recently vetoed the annual National Defense Authorization Act in part because Congress did not include a repeal of the law in the measure. Meanwhile, conservative lawmakers in Congress such as Sens. Josh Hawley and Ted Cruz have used subpoenas, angry letters and heated hearings to hammer digital tech executives about their content moderation practices. Allegations of anti-conservative bias have motivated many of these efforts. Even Supreme Court Justice Clarence Thomas questioned the law in a recent opinion.

Other proposed regulatory interventions include calls for new national privacy laws, an “Algorithmic Accountability Act” to regulate artificial intelligence technologies, and a growing variety of industrial policy measures that would open the door to widespread meddling with various tech sectors. Some officials in the Trump administration even pushed for a nationalized 5G communications network in the name of competing with China.

This growing “techlash” signals a bipartisan “Back to the Future” moment, with the possibility of the U.S. reviving a regulatory playbook that many believed had been discarded in history’s dustbin. Although plenty of politicians and pundits are taking victory laps and giving each other high-fives over the impending end of the permissionless innovation era, it is worth considering what America will be losing if we once again apply old top-down, permission slip-oriented policies to the technology sector.

Permissionless Innovation: The Basics

As an engineering principle, permissionless innovation represents the general freedom to tinker and develop new ideas and products in a relatively unconstrained fashion. As I noted in a recent book on the topic, permissionless innovation can also describe a governance disposition or regulatory default toward entrepreneurial activities. In this sense, permissionless innovation refers to the idea that experimentation with new technologies and innovations should generally be permitted by default and that prior restraints on creative activities should be avoided except in those cases where clear and immediate harm is evident.

There is an obvious relationship between the narrow and broad definitions of permissionless innovation. When governments lean toward permissionless innovation as a policy default, it is likely to encourage freewheeling experimentation more generally. But permissionless innovation can sometimes occur in the wild, even when public policy instead tends toward its antithesis—the precautionary principle. As I noted in my latest book, tinkerers and innovators sometimes behave evasively and act to make permissionless innovation a reality even when public policy discourages it through precautionary restraints.

To be clear, permissionless innovation as a policy default has not meant anarchy. Quite the opposite, in fact. In the United States over the past 25 years, no major federal agency or law regulating technology was eliminated. Indeed, most agencies grew bigger. But in spite of this, entrepreneurs during this period got more green lights than red ones, and innovation was treated as innocent until proven guilty. This is how and why social media and the sharing economy developed and prospered here and not in other countries, where layers of permission slips prevented such innovations from ever getting off the drawing board.

The question now is, how will the shift to end permissionless innovation as a policy default in the U.S. affect innovative activity here more generally? Economic historians Deirdre McCloskey and Joel Mokyr teach us that societal and political attitudes toward growth, risk-taking and entrepreneurialism have a powerful connection with the competitive standing of nations and the possibility of long-term prosperity. If America’s innovation culture sours on the idea of permissionless-ness and moves toward a precautionary principle-based model, creative minds will find it harder to experiment with bold new ideas that could help enrich the nation and improve the well-being of the citizenry—which is exactly why America discarded its old top-down regulatory model in the first place.

Why America Junked the Old Model

Perhaps the easiest way to put some rough bookends on the beginning and end of America’s permissionless innovation era is to date it to the birth and impending death of Sec. 230 itself. The enactment in 1996 of the Telecommunications Act was important, not only because it included Sec. 230, but also because the law created a sort of policy firewall between the old and new worlds of ICT regulation.

The old ICT regime was rooted in a complex maze of federal, state and local regulatory permission slips. If you wanted to do anything truly innovative in the old days, you typically needed to get some regulator’s blessing first—sometimes multiple blessings. The exception was the print sector, which enjoyed robust First Amendment protection from the time of the nation’s founding. Newspapers, magazines and book publishers were left largely free of prior restraints regarding what they published or how they innovated.

The electronic media of the 20th century were not so lucky. Telephony, radio, television, cable, satellite and other technologies were quickly encumbered with a crazy quilt of federal and state regulations. Those restraints included price controls, entry restrictions, speech restrictions and endless agency threats.

ICT policy started turning the corner in the late 1980s after the old regulatory model failed to achieve its mission of more choice, higher quality and lower prices for media and communications. Almost everyone accepted that change was needed, and it came fast. The 1990s became a whirlwind of policy and technological change. In the mid-1990s, the Clinton administration decided to allow open commercialization of the internet, which, until then, had mostly been a plaything for government agencies and university researchers. But it was the enactment of the 1996 telecommunications law that sealed the deal. Not only did the new law largely avoid regulating the internet like analog-era ICT, but, more importantly, it included Sec. 230, which helped ensure that future regulators or overzealous tort lawyers would not undermine this wonderful new resource.

A year later, the Clinton administration put a cherry on top with the release of its Framework for Global Electronic Commerce. This bold policy statement announced a clean break from the past, arguing that “the private sector should lead [and] the internet should develop as a market-driven arena, not a regulated industry.” Permissionless innovation had become the foundation of American tech policy.

The Results

Ideas have consequences, as they say, and that includes ramifications for domestic business formation and global competitiveness. While the U.S. was allowing the private sector to largely determine the shape of the internet, Europe was embarking on a very different policy path, one that would hobble its tech sector.

America’s more flexible policy ecosystem proved to be fertile ground for digital startups. Consider the rise of “unicorns,” shorthand for companies valued at $1+ billion. “In terms of the global distribution of startup success,” notes the State of the Venture Capital Industry in 2019, “the number of private unicorns has grown from an initial list of 82 in 2015 to 356 in Q2 2019,” and fully half of them are U.S.-based.

The United States is also home to the most innovative tech firms. Over the past decade, Strategy& (PricewaterhouseCoopers’ strategy consulting business) has compiled a list of the world’s most innovative companies, based on R&D efforts and revenue. Each year that list is dominated by American tech companies. In 2013, 9 of the top 10 most innovative companies were based in the U.S., and most of them were involved in computing, software and digital technology. Global competition is intensifying, but in the most recent 2018 list, 15 of the top 25 companies are still U.S.-based giants, with Amazon, Google, Intel, Microsoft, Apple, Facebook, Oracle and Cisco leading the way.

Meanwhile, European digital tech companies cannot be found on any such list. While America’s tech companies are household names across the European continent, most people struggle to name a single digital innovator headquartered in the EU. Permissionless innovation crushed the precautionary principle in the trans-Atlantic policy wars.

European policymakers have responded to the continent’s digital stagnation by doubling down on their aggressive regulatory efforts. The EU closed out 2020 with two comprehensive new measures (the Digital Services Act and the Digital Markets Act), while the U.K. simultaneously pursued a new “online harms” law. Taken together, these proposals represent “the biggest potential expansion of global tech regulation in years,” according to The Wall Street Journal. The measures will greatly expand extraterritorial control over American tech companies. Having decimated their domestic technology base and driven away innovators and investors, EU officials are now resorting to plugging budget shortfalls with future antitrust fines on U.S.-based tech companies. It has essentially been a lost quarter century for Europe on the information technology front, and now American companies are expected to pay for it.

Republicans Revive ‘Regulation-By-Raised-Eyebrow’

In light of the failure of Europe’s precautionary principle-based policy paradigm, and considering the threat now posed by the growing importance of various Chinese tech companies, one might think U.S. policymakers would be celebrating the competitive advantages created by a quarter century of American tech dominance and contemplating how to apply this winning vision to other sectors of the economy. Alas, despite its amazing run, business and political leaders are now turning against permissionless innovation as America’s policy lodestar. What is most surprising is how this reversal is now being championed by conservative Republicans, who traditionally support deregulation.

President Trump also called for tightening the screws on Big Tech. For example, in a May 2020 Executive Order on “Preventing Online Censorship,” he accused online platforms of “selective censorship that is harming our national discourse” and suggested that “these platforms function in many ways as a 21st century equivalent of the public square.” Trump and his supporters put Google, Facebook, Twitter and Amazon in their crosshairs, accusing them of discriminating against conservative viewpoints or values.

The irony here is that no politician owes more to modern social media platforms than Donald Trump, who effectively used them to communicate his ideas directly to the American people. Moreover, conservative pundits now enjoy unparalleled opportunity to get their views out to the wider world thanks to all the digital soapboxes they now can stand on. YouTube and Twitter are chock-full of conservative punditry, and the daily list of top 10 search terms on Facebook is dominated consistently by conservative voices, where “the right wing has a massive advantage,” according to Politico. Nonetheless, conservatives insist they still don’t get a fair shake from the cornucopia of new communications platforms that earlier generations of conservatives could have only dreamed about having at their disposal. They think the deck is stacked against them by Silicon Valley liberals.

This growing backlash culminated in a remarkable Senate Commerce Committee hearing on Oct. 28 in which congressional Republicans hounded tech CEOs, called for more favorable treatment of conservatives, and threatened social media companies with regulation if conservative content was taken down. Liberal lawmakers, by contrast, uniformly demanded the companies do more to remove content they felt was harmful or deceptive in some fashion. In many cases, lawmakers on both sides of the aisle were talking about the exact same content, putting the companies in the impossible position of having to devise a Goldilocks formula to get the content balance just right, even though it would be impossible to make both sides happy.

In the broadcast era, this sort of political harassment was known as the “regulation-by-raised-eyebrow” approach, which allowed officials to get around First Amendment limitations on government content control. Congressional lawmakers and regulators at the FCC would set up show trial hearings and use political intimidation to gain programming concessions from licensed radio and television operators. These shakedown tactics didn’t always work, but they often resulted in forms of soft censorship, with media outlets editing content to make politicians happy.

The same dynamic is at work today. Thus, when a firebrand politician like Sen. Josh Hawley suggests “we’d be better off if Facebook disappeared,” or when Sohrab Ahmari, the conservative op-ed editor at the New York Post, calls for the nationalization of Twitter, they likely understand these extreme proposals won’t happen. But such jawboning represents an easy way to whip up your base while also indirectly putting intense pressure on companies to tweak their policies. Make us happy, or else! It is not always clear what that “or else” entails, but the accumulated threats probably have some effect on content decisions made by these firms.

Whether all this means that Sec. 230 gets scrapped or not shouldn’t distract from the more pertinent fact: few on the political right are preaching the gospel of permissionless innovation anymore. Even tech companies and Silicon Valley-backed organizations now actively distance themselves from the term. Zachary Graves, head of policy at Lincoln Network, a tech advocacy organization, worries that permissionless innovation is little more than a “legitimizing facade for anarcho-capitalists, tech bros, and cynical corporate flacks.” He lines up with the growing cast of commentators on both the left and right who endorse a “Tech New Deal” without getting concrete about what that means in practice. What it likely means is a return to a well-worn regulatory playbook of the past that resulted in innovation stagnation and crony capitalism.

A More Political Future

Indeed, as was the case during past eras of permission slip-based policy, our new regulatory era will be a great boon to the largest tech companies.

Many people advocate greater regulation in the name of promoting competition, choice, quality and lower prices. But merely because someone proclaims that they are looking to serve the public interest doesn’t mean the regulatory policies they implement will achieve those well-intentioned goals. The means to the end—new rules, regulations and bureaucracies—are messy, imprecise and often counterproductive.

Fifty years ago, the Nobel prize-winning economist George Stigler taught us that, “as a rule, regulation is acquired by the industry and is designed and operated primarily for its benefits.” In other words, new regulations often help to entrench existing players rather than fostering greater competition. Countless experts since then have documented the problem of regulatory capture in various contexts.

If the past is prologue, we can expect many large tech firms to openly embrace regulation as they come to see it as a useful way of preserving market share and fending off pesky new rivals, most of whom will not be able to shoulder the compliance burdens and liability threats associated with permission slip-based regulatory regimes. True to form, in recent congressional hearings, Facebook head Mark Zuckerberg called on lawmakers to begin regulating social media markets. The company then rolled out a slick new website and advertising campaign inviting new rules on various matters. It is always easy for the king of the hill to call for more regulation when that hill is a mound of red tape of its own making—and one which few others can ascend.

It is a lesson we should have learned in the AT&T era, when a decidedly unnatural monopoly was formed through a partnership between company officials and the government.


Many independent telephone companies existed across America before AT&T’s leaders cut sweetheart deals with policymakers that tilted the playing field in its favor and undermined competition. With rivals hobbled by entry restrictions and other rules, Ma Bell went on to enjoy more than a half century of stable market share and guaranteed rates of return. Consumers, by contrast, were expected to be content with plain-vanilla telephone services that barely changed. Some of us are old enough to remember when the biggest “innovation” in telephony involved the move from rotary-dial phones to the push-button Princess phone, which, we were thrilled to discover, came in multiple colors and had a longer cord.

In a similar way, the impending close of the permissionless innovation era signals the twilight of technological creative destruction and its replacement by a new regime of political favor-seeking and logrolling, which could lead to innovation stagnation. The CEOs of the remaining large tech companies will be expected to make regular visits to the halls of Congress and regulatory agencies (and to all those fundraising parties, too) to get their marching orders, just as large telecom and broadcaster players did in the past. We will revert to the old historical trajectory, which saw communications and media companies securing marketplace advantages more through political machinations than marketplace merit.

Will Politics Really Catch Up?

While permissionless innovation may be falling out of favor with elites, America’s entrepreneurial spirit will be hard to snuff out, even when layers of red tape make it riskier to be creative. If for no other reason, permissionless innovation still has a fighting chance so long as Congress struggles to enact comprehensive technology measures. General legislative dysfunction and profound technological ignorance are two reasons that Congress has largely become a non-actor on tech policy in recent years.

But the primary limitation on legislative meddling is the so-called pacing problem, which refers to the way technological innovation often outpaces the ability of laws and regulations to keep up. “I have said more than once that innovation moves at the speed of imagination and that government has traditionally moved at, well, the speed of government,” observed former Federal Aviation Administration head Michael Huerta in a 2016 speech.

[Image: DNA sequencing machine. Credit: Assembly/Getty Images]

The same factors that drove the rise of the internet revolution—digitization, miniaturization, ubiquitous mobile connectivity and constantly increasing processing power—are spreading to many other sectors and challenging precautionary policies in the process. For example, just as “Moore’s Law” relentlessly powers the pace of change in ICT sectors, the “Carlson curve” now fuels genetic innovation. The curve refers to the fact that, over the past two decades, the cost of sequencing a human genome has plummeted from over $100 million to under $1,000, a rate nearly three times faster than Moore’s Law.

Speed isn’t the only factor driving the pacing problem. Policymakers also struggle with metaphysical considerations about how to define the things they seek to regulate. It used to be easy to agree what a phone, television or medical tracking device was for regulatory purposes. But what do those terms really mean in the age of the smartphone, which incorporates all of them and much more? “‘Tech’ is a very diverse, widely-spread industry that touches on all sorts of different issues,” notes tech analyst Benedict Evans. “These issues generally need detailed analysis to understand, and they tend to change in months, not decades.” This makes regulating the industry significantly more challenging than it was in the past.

It doesn’t mean the end of regulation—especially for sectors already encumbered by many layers of preexisting rules. But these new realities lead to a more interesting game of regulatory whack-a-mole: pushing down technological innovation in one way often means it simply pops up somewhere else.

The continued rapid growth of what some call “the new technologies of freedom”—artificial intelligence, blockchain, the Internet of Things, etc.—should give us some reasons for optimism. It’s hard to put these genies back in their bottles now that they’re out. This is even more true thanks to the growth of innovation arbitrage—both globally and domestically. Creators and capital now move fluidly across borders in pursuit of more hospitable innovation and investment climates.

Recently, some high-profile tech CEOs like Elon Musk and Joe Lonsdale have relocated from California to Texas, citing tax and regulatory burdens as key factors in their decisions. Oracle, America’s second-largest software company, also just announced it is moving its corporate headquarters from Silicon Valley to Austin, just over a week after Hewlett Packard Enterprise said it too is moving its headquarters from California to Texas—in this case, Houston. “Voting with your feet” might actually still mean something, especially when it is major tech companies and venture capitalists abandoning high-tax, over-regulated jurisdictions.
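As a back-of-the-envelope check on the “Carlson curve” comparison above, here is a minimal Python sketch that computes the implied cost-halving time from two endpoint cost estimates and stacks it against the roughly two-year halving cadence commonly attributed to Moore’s Law. The dollar figures and the 14-year window are rounded assumptions chosen for illustration, not data from this post, and the multiple the script prints depends heavily on the window chosen (steeper sub-periods push it toward or beyond three times Moore’s pace).

```python
from math import log2

def implied_halving_time(cost_start: float, cost_end: float, years: float) -> float:
    """Years it takes for a cost to halve, given two endpoint costs over a span."""
    halvings = log2(cost_start / cost_end)
    return years / halvings

# Assumed, rounded endpoints: roughly $100 million circa 2001 down to about
# $1,000 some 14 years later. These inputs are illustrative assumptions.
genome_halving = implied_halving_time(100_000_000, 1_000, 14)
moore_halving = 2.0  # Moore's Law cadence: roughly one halving every two years

print(f"Sequencing cost halves every ~{genome_halving:.2f} years")
print(f"Moore's Law halves cost every ~{moore_halving:.0f} years")
print(f"Implied pace: ~{moore_halving / genome_halving:.1f}x Moore's Law")
```

Whatever the exact multiple, the point stands: when the cost curve of a regulated activity collapses this quickly, rulemaking cycles measured in years cannot keep up.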

Advocacy Remains Essential

But we shouldn’t imagine that technological change is inevitable or fall into the trap of thinking of it as a sort of liberation theology that will magically free us from repressive government controls. Policy advocacy still matters. Innovation defenders will need to continue to push back against the most burdensome precautionary policies, while also promoting reforms that protect entrepreneurial endeavors.

The courts offer us great hope. Groups like the Institute for Justice, the Goldwater Institute, the Pacific Legal Foundation and others continue to litigate successfully in defense of the freedom to innovate. While the best we can hope for in the legislative arena may be perpetual stalemate, these and other public interest law firms are netting major victories in courtrooms across America. Sometimes court victories force positive legislative changes, too.

For example, in 2015, the Supreme Court handed down North Carolina State Board of Dental Examiners v. Federal Trade Commission, which held that local government cannot claim broad immunity from federal antitrust laws when it delegates power to nongovernmental bodies, such as licensing boards. This decision made much-needed occupational licensing reform an agenda item across America. Many states introduced or adopted bipartisan legislation aimed at reforming or sunsetting occupational licensing rules that undermine entrepreneurship.

Even more exciting are proposals that would protect citizens’ “right to earn a living.” This right would allow individuals to bring suit if they believe a regulatory scheme or decision has unnecessarily infringed upon their ability to earn a living within a legally permissible line of work. Meanwhile, there have been ongoing state efforts to advance “right to try” legislation that would expand medical treatment options for Americans tired of overly paternalistic health regulations.

Perhaps, then, it is too early to close the book on the permissionless innovation era. While dark political clouds loom over America’s technological landscape, there are still reasons to believe the entrepreneurial spirit can prevail.
]]>
https://techliberation.com/2021/01/10/the-end-of-permissionless-innovation/feed/ 1 76823
Future Aviation, Drones, and Airspace Markets https://techliberation.com/2020/07/22/future-aviation-drones-and-airspace-markets/ https://techliberation.com/2020/07/22/future-aviation-drones-and-airspace-markets/#respond Wed, 22 Jul 2020 13:55:40 +0000 https://techliberation.com/?p=76767

My research focus lately has been studying and encouraging markets in airspace. Aviation airspace is valuable, but to date it has been assigned by regulatory mechanisms, custom, and rationing by industry agreement. This rationing was tolerable decades ago when airspace use was relatively light. Today, regulators need to consider markets in airspace–allowing the demarcation, purchase, and transfer of aerial corridors–in order to give later innovators airspace access, to avoid anticompetitive “route squatting,” and to serve as a revenue stream for governments, much like spectrum auctions and offshore oil leases.

Last month, the FAA came out in favor of “urban air mobility corridors”–point-to-point aerial highways that new eVTOL, helicopter, and passenger drones will use. It’s a great proposal, but the FAA’s plan for allocating and sharing those corridors is largely to let the industry negotiate them among themselves (the “Community Business Rules”):

Operations within UAM Corridors will also be supported by CBRs collaboratively developed by the stakeholder community based on industry standards or FAA guidelines and approved by the FAA.

This won’t end well, much as it didn’t end well when Congress and the Postmaster General let the nascent airlines divvy up air routes in the 1930s–we’re still living with the effects of those anticompetitive decisions. Decades later the FAA is still refereeing industry fights over routes and airport access.

Rather, regulators should create airspace markets because otherwise, as McKinsey analysts noted last year about urban air mobility:

first movers will have an advantage by securing the most attractive sites along high-traffic routes.

Airspace today is a common-pool resource rationed via regulation and custom. But with drones, eVTOL, and urban air mobility, congestion will increase and centralized air traffic control will need to give way to a more federated and privately managed airspace system. As happened with spectrum, a demand shock to an Ostrom-ian common-pool resource should lead to enclosure and “propertization.”

Markets in airspace probably should have been created decades ago, once airline routes became fixed and airports became congested. Instead, centralized regulatory rationing led to large economic distortions:

For example, in 1968, nearly one-third of peak-time New York City air traffic–the busiest region in the US–was general aviation (that is, small, personal) aircraft. To combat severe congestion, local authorities raised minimum landing fees by a mere $20 (1968 dollars) on sub 25-seat aircraft. General aviation traffic at peak times immediately fell over 30%—suggesting that a massive amount of pre-July 1968 air traffic in the region was low-value. The share of aircraft delayed by 30 or more minutes fell from 17% to about 8%.

This pricing of airspace and airport access was half-hearted and resisted by incumbents. Regulators fell back on rationing via the creation of “slots” at busy airports, which were given mostly to dominant airlines. Slots have the attributes of property–they can be defined, valued, sold, transferred, borrowed against. But the federal government refuses to call them property, partly because of the embarrassing implications. The GAO said in 2008:

[the] argument that slots are property proves too much—it suggests that the agency [FAA] has been improperly giving away potentially millions of dollars of federal property, for no compensation, since it created the slot system in 1968.

It may be too late to have airspace and route markets for traditional airlines–but it’s not too late for drones and urban air mobility. Demarcating aerial corridors should proceed quickly to bring the drone industry and services to the US. As Adam has pointed out, this is a global race of “innovation arbitrage”–drone firms will go where regulators are responsive and flexible. Federal and state aviation officials should not give away valuable drone routes, which would end up going to first movers and the politically powerful. Airspace markets, in contrast, avoid anticompetitive lock-in effects and give drone innovators a chance to gain access to valuable routes in the future.
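To make the corridor-market idea concrete before turning to the research below, here is a minimal Python sketch of one textbook allocation mechanism: a sealed-bid second-price (Vickrey) auction run separately for each corridor. This is purely illustrative; the corridor names, firms, and bid amounts are invented, and nothing here suggests the FAA or anyone else has settled on this format over leasing or other auction designs.

```python
# Illustrative sketch only: a sealed-bid second-price (Vickrey) auction
# assigning an exclusive lease for each corridor. All names and figures
# below are hypothetical.
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: str
    amount: float  # dollars offered for the corridor lease

def vickrey_winner(bids: list[Bid]) -> tuple[str, float]:
    """Highest bidder wins but pays the second-highest bid (or its own if unopposed)."""
    ranked = sorted(bids, key=lambda b: b.amount, reverse=True)
    winner = ranked[0]
    price = ranked[1].amount if len(ranked) > 1 else winner.amount
    return winner.bidder, price

corridor_bids = {
    "downtown-to-airport": [Bid("DroneCo", 2.0e6), Bid("AirTaxi", 3.5e6)],
    "hospital-to-suburb": [Bid("MedFly", 1.2e6), Bid("DroneCo", 0.9e6)],
}

for corridor, bids in corridor_bids.items():
    who, price = vickrey_winner(bids)
    print(f"{corridor}: {who} wins, pays ${price:,.0f}")
```

A second-price rule is a common starting point in the auction literature because it encourages bidders to reveal their true valuations, which is one way a regulator could discover what a corridor is actually worth rather than guessing or giving it away.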

Research and Commentary on Airspace Markets

Law journal article. The North Carolina JOLT published my article, “Auctioning Airspace,” in October 2019. I argued for the FAA to demarcate and auction urban air mobility corridors (SSRN).

Mercatus white paper. In March 2020 Connor Haaland and I explained that federal and state transportation officials could demarcate and lease airspace to drone operators above public roads because many state laws allow local and state authorities to lease such airspace.

Law journal article. A student note in a 2020 Indiana Law Journal issue discusses airspace leasing for drone operations (pdf).

FAA report. The FAA’s Drone Advisory Committee in March 2018 took up the idea of auctioning or leasing airspace to drone operators as a way to finance the increased costs of drone regulations (pdf).

GAO report. The GAO reviewed the idea of auctioning or leasing airspace to drone operators in a December 2019 report (pdf).

Airbus UTM white paper. The Airbus UTM team reviewed the idea of auctioning or leasing airspace to UAM operators in a March 2020 report, “Fairness in Decentralized Strategic Deconfliction in UTM” (pdf).

Federalist Society video. I narrated a video for the Federalist Society in July 2020 about airspace design and drone federalism (YouTube).

Mercatus Center essay. Adam Thierer, Michael Kotrous, and Connor Haaland wrote about drone industry red tape, why the US can’t rely on “innovation by regulatory waiver,” and how to accelerate widespread drone services.

I’ve discussed the idea in several outlets and events, including:

Podcast Episodes about Drones and Airspace Markets

  • In a Federalist Society podcast episode, Adam Thierer and I discussed airspace markets and drone regulation with US Sen. Mike Lee. (Sen. Lee has introduced a bill to draw a line in the sky at 200 feet in order to clarify and formalize federal, state, and local powers over low-altitude airspace.)
  • Tech Policy Institute podcast episode with Sarah Oh, Eli Dourado, and Tom Lenard.
  • Macro Musings podcast episode with David Beckworth.
  • Drone Radio Show podcast episode with Randy Goers.
  • Drones in America podcast episode with Grant Guillot.
  • Uncommon Knowledge podcast episode with Juliette Sellgren.
  • Building Tomorrow podcast episode with Paul Matzko and Matthew Feeney.
  • sUAS News podcast episode and interview.
Global Innovation Arbitrage: Export Controls Edition (Wed, 02 Jan 2019) https://techliberation.com/2019/01/02/global-innovation-arbitrage-export-controls-edition/

Policy incentives matter and have a profound effect on the innovative capacity of a nation. If policymakers erect more obstacles to innovation, it will encourage entrepreneurs to look elsewhere when considering the most hospitable place to undertake their innovative activities. This is “global innovation arbitrage,” a topic we’ve discussed many times here in the past. I’ve defined it as “the idea that innovators can, and will with increasing regularity, move to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity.” We see innovation arbitrage happening in high-tech fields as far-ranging as drones, driverless cars, and genetics, among others.

US policymakers might want to consider this danger before the nation loses its competitive advantage in various high-tech fields. Today’s most pressing example arrives in the form of potentially burdensome new export control regulations. In late 2018, the US Department of Commerce’s Bureau of Industry and Security announced a “Review of Controls for Certain Emerging Technologies,” which launched an inquiry into whether to greatly expand the list of technologies subjected to America’s complex export control regulations. Most of the long list of technologies under consideration (such as artificial intelligence, robotics, 3D printing, and advanced computing technologies) are “dual-use” in nature, meaning that they have many peaceful applications.

Nonetheless, the Trump Administration is plowing forward with the inquiry following the passage last summer of the Export Control Reform Act of 2018, which required that the President formulate an interagency process to coordinate export control rules with the goal of creating “a regular and robust process to identify the emerging and other types of critical technologies of concern, as defined in United States foreign direct investment laws, and regulate their release to foreign persons as warranted regardless of the nature of the underlying transaction.” As part of this process, the Commerce Department is to create a list “of foreign persons and end-uses that are determined to be a threat to the national security and foreign policy of the United States . . . and to whom exports, reexports, and transfers of items are controlled.”

As Jennifer Skees and I wrote at the time, imposing restrictive export controls on a broad class of dual-use emerging technologies would likely undermine US innovation and competitiveness. More people are waking up to that reality, as well as to the specter of global innovation arbitrage kicking in if such heavy-handed regulations are imposed.

Commenting on the impact these new export controls might have, Cade Metz of the New York Times suggested this week that “[o]verly restrictive rules that prevent foreign nationals from working on certain technologies in the United States could also push researchers and companies into other countries.” Metz also quoted international trade lawyer Jason Waite of the firm Alston & Bird, who said that, if controls were imposed by the US, “It might be easier for people to just do this stuff in Europe.”

That, in a nutshell, is how global innovation arbitrage works in practice. Anti-innovation policies create incentives for entrepreneurs to behave more “evasively” and shop around for better places to engage in creative endeavors. You can be certain that innovators, and especially investors, are watching these developments closely. When policymakers are debating the imposition of burdensome new rules, it sends a clear signal to markets about where to put their money. As venture capitalist Marc Andreessen explained back in 2014:

Think of it as a sort of “global arbitrage” around permissionless innovation — the freedom to create new technologies without having to ask the powers that be for their blessing. Entrepreneurs can take advantage of the difference between opportunities in different regions, where innovation in a particular domain of interest may be restricted in one region, allowed and encouraged in another, or completely legal in still another.

Investors like Andreessen will place their bets on the technologies and innovators that have the best hope of thriving in such an open environment, wherever that may be on the planet. Let’s hope that continues to be the US. If burdensome export control regulations are imposed on America’s best and brightest entrepreneurs, that will not likely be the case.

Emerging Tech Export Controls Run Amok (Wed, 28 Nov 2018) https://techliberation.com/2018/11/28/emerging-tech-export-controls-run-amok/

By Adam Thierer & Jennifer Huddleston Skees

“He’s making a list and checking it twice. Gonna find out who’s naughty and nice.”

With the Christmas season approaching, apparently it’s not just Santa who is making a list. The Trump Administration has just asked whether a long list of emerging technologies is naughty or nice, as in whether these technologies should be heavily regulated or allowed to be developed and traded freely.

If they land on the naughty list, these technologies could be subjected to complex export control regulations, which would limit research and development efforts in many emerging tech fields and inadvertently undermine U.S. innovation and competitiveness. Worse yet, it isn’t even clear there would be any national security benefit associated with such restrictions.  

From Light-Touch to a Long List

Generally speaking, the Trump Administration has adopted a “light-touch” approach to the regulation of emerging technology and relied on more flexible “soft law” approaches to high-tech policy matters. That’s what makes the move to impose restrictions on the trade and usage of these emerging technologies somewhat counterintuitive. On November 19, the Department of Commerce’s Bureau of Industry and Security launched a “Review of Controls for Certain Emerging Technologies.” The notice seeks public comment on “criteria for identifying emerging technologies that are essential to U.S. national security, for example because they have potential conventional weapons, intelligence collection, weapons of mass destruction, or terrorist applications or could provide the United States with a qualitative military or intelligence advantage.”

The Commerce Department has long sought to control the use of such technologies through a combination of methods, including formal export controls. The process for establishing such controls was clumsily cobbled together over time, so Congress passed the Export Control Reform Act of 2018 (ECRA) to formalize these regulations. ECRA requires that the President formulate an interagency process to coordinate these rules with the goal of creating “a regular and robust process to identify the emerging and other types of critical technologies of concern, as defined in United States foreign direct investment laws, and regulate their release to foreign persons as warranted regardless of the nature of the underlying transaction.” As part of this process, the Commerce Department is to create a list “of foreign persons and end-uses that are determined to be a threat to the national security and foreign policy of the United States . . . and to whom exports, reexports, and transfers of items are controlled.”

Sweeping Breadth

That is what prompted the Trump Administration’s recent Emerging Technologies notice, which includes a remarkably sweeping list of technologies that the Commerce Department is considering for the export controls list. The list has 14 major categories:

(1) Biotechnology

(2) Artificial intelligence

(3) Position, Navigation, and Timing (PNT) technology

(4) Microprocessor technology

(5) Advanced computing technology

(6) Data analytics technology

(7) Quantum information and sensing technology

(8) Logistics technology

(9) Additive manufacturing / 3D printing

(10) Robotics

(11) Brain-computer interfaces

(12) Hypersonics

(13) Advanced materials

(14) Advanced surveillance technologies

The Department’s 14-category list also includes over 40 itemized examples of specific applications. For example, the “artificial intelligence” category alone includes a list of 11 applied types of AI, from AI cloud technologies and chipsets to neural networks to speech and audio processing.

The breadth of this list is remarkable in that it touches almost every emerging technology sector imaginable. It might have been easier for the Commerce Department to simply list those emerging technologies that will not be subject to review for potential export controls. It is an “everything-but-the-kitchen-sink” approach to emerging technology policy oversight and regulation that could clearly have far-reaching consequences beyond national security.

There are some obvious dangers with such an open-ended review, and it is important to remember that these technologies have many beneficial applications alongside any potential risks.

Threatening Beneficial Uses

First, the potential export regulations create the danger of negative spillover effects that could undermine beneficial uses of each technology listed. All of these technologies are already used in many ways that benefit both consumers and businesses. Export limitations could reduce their availability or deter improvements if developers fear that broad interpretations of the restrictions will shrink their markets.

For example, the regulation of AI mentioned above would not only address concerns about how AI might be used in weapons, but could even undermine the export of technology that has become a part of our everyday lives, such as Siri in iPhones and Amazon’s Alexa. While the department claims that it seeks to “avoid negatively impacting U.S. leadership in the science, technology, engineering, and manufacturing sectors,” it is unlikely that any but the most narrowly tailored rules could avoid having a negative impact on innovation in the named technologies.

The more general-purpose a technology, the more difficult it is to control the impact on its beneficial uses as well as its harmful ones. In fact, in some cases, such as AI and robotics, it can be difficult even to define what the technology is, because it is typically the applications, not the technology more generally, that are discussed and regulated. In many cases, export restrictions would, or at least could, signal to entrepreneurial innovators that their time is better spent on other technologies or that their work should be taken elsewhere, risking the U.S. falling behind other countries in these important innovative areas.

Undermining International Competitiveness

Second, the inquiry could undermine U.S. competitiveness by encouraging more offshoring in a world of innovation arbitrage opportunities. In an increasingly connected global economy, and given the mobile nature of many emerging technologies, it is becoming easier for innovators who find themselves subjected to onerous regulations in one country to move their research and development efforts to another. This is sometimes referred to as “innovation arbitrage.”

While the U.S. remains a leader in attracting innovators, this scenario has already played out several times. For example, Amazon moved its drone testing program to the UK rather than test in the US, due in large part to FAA regulations regarding drones. Similarly, 23andMe initially took its direct-to-consumer genetic testing abroad after the FDA threatened to shut down its product.

Heavily regulating the export of general applications of these technologies could actually backfire and encourage innovators to take their research to countries like China, where they do not face such regulations. R. David Edelman, the director of the Project on Technology, the Economy, and National Security at MIT, has noted that while the inquiry might be “intended to help US companies be more competitive,” the reality is that “it would almost certainly give Chinese companies that don’t face those same restrictions a sizable advantage in the playing field.”

Moreover, if export controls undermine domestic innovation and competitiveness in this fashion and benefit developers in other countries, it means the U.S. will have less of a say over the ethical development of many important technologies. Bloomberg contributor Noah Smith observes that, when it comes to the global race for hegemony in genetic sciences, China is poised to take the lead. “If the U.S. shies away from developing genetic-engineering technology, these riches will flow to China, or to whatever other countries seize the technological edge,” he notes. That would be problematic not just from a competitive perspective, but also from an ethical perspective, because America would have less of a say in guiding the development of these important but controversial technologies. “Dystopian outcomes are also less likely with the U.S. at the helm,” Smith believes.

Limiting or Ending Technologies Consumers Already Enjoy

Third, the inquiry could pose a threat to everyday consumer technologies that are already widely distributed. The most interesting thing about the technologies listed in the notice is that many of them have moved well beyond the “emerging” phase of development. They are already out in the wild and being used by people every day.

For example, among the AI technologies listed in the notice are “speech and audio processing (e.g., speech recognition and production)” as well as “natural language processing (e.g., machine translation).” We already enjoy a great many such services today, including Siri and Alexa. Meanwhile, there are technologies already on the market that use AI and robotics to help disabled and autistic children communicate and interact with their peers.

For example, the KASPAR robot helps children with such disabilities learn social and conversational skills for interacting with their peers. Similarly, apps that translate apparently nonverbal sounds and other methods of communication into speech that others can understand could be subject to development-ending regulations, or be unable to help children in other countries, if the proposed export restrictions are phrased too broadly. New restrictions might not only limit the development of new technologies; they could limit or eliminate technologies that we have already embraced and that have improved the lives of many.

Risk to Research & Open-Source Efforts

Fourth, the expansion of export controls for many of the technologies listed in the inquiry opens the door to widespread policing of open source coding and communications, but offers no explanation of how that would even work. A large number of the technologies on the Commerce Department list have both commercial and non-commercial applications. Innovation scholars use terms like “free innovation” and “social entrepreneurialism” to describe innovative efforts that are undertaken by individuals or groups of people to pursue a broader array of social goals or values beyond just profit-seeking.

A prominent example of social entrepreneurs engaging in free innovation involves the use of 3D printers and open source designs to voluntarily create prosthetics for children with limb deficiencies. What happens to collaborative, non-commercial innovations like that if export controls are suddenly imposed on additive manufacturing technologies by the Department of Commerce? If one participant is based outside the US, is that sufficient to subject such collaboration to export controls? What, exactly, would be subjected to controls? The 3D printers? The open source blueprints? The website hosting such information? It is difficult to imagine how such regulation would work in practice, but it is easy to imagine the effect it would have if pursued: it would create a massive chilling effect on many beneficial forms of innovation and simultaneously threaten freedom of speech and academic research.

The same problem could play out in many other technology fields listed in the Commerce Department notice, including robotics, speech recognition, biotechnology, and genetic engineering, where developers often engage in open, cross-border collaboration on open source projects. Free innovation and social entrepreneurialism are expanding rapidly in these and other emerging technology arenas. Thus, export control regulation can no longer hinge on going after “deep-pocketed” corporations looking to sell physical systems. To be truly effective, regulations will need to cover bottom-up, “grassroots” innovation. But that move will have profound ramifications for the freedom to tinker with, or even research, important technologies and technological processes.

Dubious National Security Benefits

There’s a final danger associated with this effort: it might not help advance America’s national security objectives, and could even hinder them.

To the extent that ECRA and this new Department of Commerce effort lead to heightened scrutiny for the many dozens of technologies identified, it could undermine research and development efforts in many of those fields. It could do so directly (by formally limiting or forbidding domestic R&D efforts) or indirectly (by incentivizing many domestic emerging tech innovators to move their operations offshore, or discouraging foreign developers from setting up shop here). Not only would such actions risk the US losing its lead in innovation, they could actually result in such regulations backfiring from a national security perspective.

At the end of the day, the problem here is that Congress is failing to clearly identify what is “essential to the national security of the United States.” ECRA just passes the buck on that thorny question to the Commerce Department for a laundry list of emerging technologies. By soliciting public input, the best hope here is that experts in these various emerging technology sectors will step forward and identify the trade-offs associated with inclusion of most of these technologies on the export controls list. Hopefully, the list would then be narrowed to the much smaller class of applied technologies that have a very real, immediate, and clearly catastrophic potential for harm to the national security interests of the nation. That would have been the better way to begin this process, but Congress and the Administration have instead adopted the opposite approach, and now we must hope that they are willing to significantly pare back the list of technologies even being considered for inclusion.

Back to the Crypto Wars?

In a sense, this debate was foreshadowed by the debate in the late 1990s over export controls for encryption technologies. As encryption emerged, law enforcement and national security agencies were concerned about its potential use by bad actors to hide or destroy evidence or information by using encrypted devices or services, and they sought to require backdoors for access to encrypted data and to restrict the export of certain types of encryption and certain encrypted devices. Such requirements, as the Information Technology & Innovation Foundation’s Daniel Castro and Alan McQuinn pointed out, would actually weaken everyday Americans’ security against cyberattacks, negatively impact U.S. businesses’ global competitiveness, and reduce the competitiveness and innovation of the technology sector not only in encryption but in related fields as well.

Luckily, many of these concerns were avoided, and encryption restrictions have been narrowly tailored. Still, recent tensions between the FBI and tech companies like Apple illustrate that this debate is far from settled. Now it seems that the Commerce Department’s proposed restrictions could create the same vulnerabilities more broadly for a great number of emerging technologies.

“Soft Law” & Next Steps

In some ways this move to regulate technologies via export restrictions shows the dark side of the growing trend of “soft law.” Soft law, as we discuss in more detail in our forthcoming paper, includes regulatory actions such as guidance documents, working groups, sandboxing, and many other informal regulatory mechanisms. Such mechanisms are often used to regulate emerging technologies in the absence of formal actions or because the traditional policymaking apparatus cannot keep pace with the rapid evolution of technology. In many cases soft law has been used to accelerate technological development that otherwise might have been limited by traditional hard law.

But where soft law thrives in the vacuum left by a lack of formal delegation and regulation, this inaction also poses risks. Agencies like the Commerce Department could extend amorphous powers over emerging technologies without the expertise to fully understand the way such regulations might negatively affect beneficial technological developments, which are typically hard to predict in advance.

A smarter approach to export controls for emerging technologies begins with:

  1. a more robust evaluation of what really constitutes a tangible, immediate, irreversible, and catastrophic harm to the national security interests of the United States;
  2. the practicality of proposed controls for any emerging technologies considered for inclusion on the list;
  3. the wisdom of placing technologies on the list that have already been developed or marketed overseas (or appear poised to be);
  4. the potential unintended consequences that any new export controls might have on the innovative potential of American creators and companies, the future of research in important sectors, the free flow of knowledge regarding peaceful applications, and the competitive standing of the United States relative to other countries; and
  5. whether catastrophic concerns about emerging technologies might be better addressed through multilateral accords or agreements aimed at achieving global consensus regarding inappropriate use and applications (as has been done in chemical weapon treaties and nuclear non-proliferation efforts).

Several specific technologies may still qualify for inclusion on the export controls list after such an evaluation, but the list will start from a more limited approach and then expand as necessary. Such an approach assumes that, in general, a technology is not a threat until proven otherwise. By inverting the process in this fashion, the Administration wouldn’t be treating every emerging technology under the sun as guilty until proven innocent; innovations would be allowed to flourish naturally until the potential for harm is well-documented.

Unfortunately, the Commerce Department’s proposed approach does just the opposite and risks minimizing the benefits of these emerging technologies while doing little to advance national security interests in a meaningful way.

Q&A about Evasive Entrepreneurialism & the Freedom to Innovate (Thu, 13 Sep 2018) https://techliberation.com/2018/09/13/qa-about-evasive-entrepreneurialism-the-freedom-to-innovate/

Over at the Mercatus Center’s Bridge blog, Chad Reese interviewed me about my forthcoming book and continuing research on “evasive entrepreneurialism” and the freedom to innovate. I provide a quick summary of the issues and concepts that I am exploring with my colleagues currently. Those issues include:

  • free innovation
  • evasive entrepreneurialism & social entrepreneurialism
  • technological civil disobedience
  • the freedom to tinker / freedom to try / freedom to innovate
  • the right to earn a living
  • “moonshots” / deep technologies / disruptive innovation / transformative tech
  • innovation culture
  • global innovation arbitrage
  • the pacing problem & the Collingridge dilemma
  • “soft law” solutions for technological governance

You can read the entire Q&A over at The Bridge, or I have pasted it down below.


Your research and next book project are focused on “evasive entrepreneurialism” and the freedom to innovate. Tell us a bit more about this work.

Evasive entrepreneurs are innovators who don’t always conform to social or legal norms. Various scholars have documented how entrepreneurs are increasingly using new technological capabilities to circumvent traditional regulatory systems or put pressure on lawmakers or regulators to alter policy in some fashion. Evasive entrepreneurs rely on a strategy of “permissionless innovation” in both the business world and the political arena.

Some evasive behavior could even be considered “technological civil disobedience” in the sense that many innovators behave in this fashion because they find many rules to be offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant. In that sense, they could also be referred to as “regulatory entrepreneurs” who push back against what Tim Sandefur labels “The Permission Society.”

My book documents “evasive” behavior of this sort and explains why it is happening with increasing regularity. I also make the normative case for embracing the freedom to innovate more generally because of the many benefits society derives from technological innovations and especially “moonshots”: game-changing, transformative technologies.

You mentioned “permissionless innovation.” That was the topic of your last book. Could you explain what that means and how it relates to your new book?

The term “permissionless innovation” is of uncertain origin but generally refers to trying new things without asking for the prior blessing of various authorities. The phrase is sometimes attributed to Grace M. Hopper, a computer scientist who was a rear admiral in the United States Navy. “It’s easier to ask forgiveness than it is to get permission,” she once noted famously.

In my last book, I used the term more broadly to describe a governance philosophy for a variety of emerging technologies and contrasted it with its opposite—the “precautionary principle.” Permissionless innovation, I argued, refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue and problems, if any develop, can be addressed later.

By contrast, the precautionary principle generally recommends disallowing or slowing innovations until their creators can prove that new products and services are “safe,” however that is defined. The problem with making precaution the basis of all technology policy is that it means a great deal of life-enriching (and even life-saving) innovation will never come about if we base policy on hypothetical worst-case scenarios.

The tension between these visions is on display in every major technology field today: drones, driverless cars, cryptocurrency, genetics, mobile medicine, 3D printing, virtual reality, the sharing economy, and many others. That’s why we have made these sectors the focus of ongoing Mercatus research.

Could you give us a few examples of how entrepreneurs behave in an “evasive” fashion or how innovators engage in technological civil disobedience?

Many scholars and tech analysts have highlighted the ways in which sharing economy innovators like Uber and Airbnb engaged in regulatory entrepreneurialism, but that’s hardly the only example. Using 3D printers and open source designs, for example, many creative people are pushing up against legal norms when they fabricate prosthetic hands for children with limb deficiencies or create their own firearms for self-defense.

One of my favorite examples is the open source, do-it-yourself Nightscout Project, a non-profit founded by parents of diabetic children. These parents came together and shared knowledge and code to create better insulin remote monitoring and delivery devices for their kids. Their motto is “WeAreNotWaiting.” Specifically, these parents got tired of waiting for the development of new “professional” devices to be approved by the Food and Drug Administration (FDA), which can take many years to get through the regulatory process. Through voluntary collaboration, these parents have created reliable devices that are much less expensive than those FDA-approved devices, which can cost many thousands of dollars.

When average citizens engage in this sort of “biohacking” to create better and cheaper insulin pumps or 3D-printed prosthetic limbs but do not charge anything for it, their actions are of ambiguous legality. But even if they are breaking some laws or bending some rules, it isn’t stopping them from working together to make the world a better place. That’s technological civil disobedience in a nutshell.

So evasive entrepreneurialism can be both commercial and non-commercial in character?

Yes. A broad range of “evasive” actors exists, with large commercial players on one end of the spectrum and purely non-commercial “grassroots” or “household” innovators on the other. MIT economist Eric von Hippel calls the latter activity “free innovation,” which includes things like the 3D-printed creations I already mentioned.

Social entrepreneurialism is a closely related concept. Several of my Mercatus colleagues have documented how social entrepreneurs were instrumental in helping community recovery efforts following hurricanes and other disasters. Entrepreneurs aim to create social value through innovative acts that can assist their communities, while also potentially helping them create new business opportunities later down the road.

What’s interesting about “free innovation” and social entrepreneurialism is that much of this activity happens at the boundaries of what is technically legal. These innovators just want to help others. When laws stand in the way of that, they sometimes creatively evade them to get things done. That’s clearly the case with the open source DIY insulin pumps or 3D-printed prosthetic limbs.

Another example involves drone enthusiasts who often help out in search-and-rescue missions for missing people and pets even though they could be running afoul of various aviation regulations in the process. Even something as routine as children setting up free lemonade stands without local permits serves as an example of how people can behave in an evasive fashion to serve others.

The so-called “pacing problem” figures prominently in your work. Could you explain what it is and why it is important to the future of innovation policy?

As I noted in a recent Bridge essay, the pacing problem refers to the notion that technological change increasingly outpaces the ability of laws and regulations to keep up. The power of “combinatorial innovation,” which is driven by “Moore’s Law,” fuels a constant expansion of technological capabilities. Meanwhile, citizens quickly assimilate new tools into their daily lives and then expect that even more and better tools will be delivered tomorrow.
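As a rough, purely illustrative calculation (the two-year doubling period and four-year rulemaking timeline below are stylized assumptions, not figures from the essay), compounding shows why even diligent regulators fall behind:

```python
# Stylized pacing-problem arithmetic: a capability that doubles every
# `doubling_years` grows many-fold during a single multi-year rulemaking.
doubling_years = 2.0      # stylized Moore's-Law-like doubling period
rulemaking_years = 4.0    # stylized length of one regulatory cycle

growth = 2 ** (rulemaking_years / doubling_years)
print(f"Capability grows {growth:.0f}x during one "
      f"{rulemaking_years:.0f}-year rulemaking")
# After a decade: 2 ** (10 / 2) = 32x the original capability.
```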

This makes it difficult for government officials and organizations to keep policy in line with fast-moving marketplace and social developments. That is especially true because of how increasingly dysfunctional and unable to adapt many government bodies and processes have become. This is why I argue that the pacing problem is becoming the great equalizer in debates over technological governance; policymakers are being forced to rethink their approach to the regulation of many sectors and technologies. This is especially the case because the pacing problem can be exploited by evasive entrepreneurs who are looking to do an end-run around slower regulatory processes.

Will “evasive” tactics work for entrepreneurs in every context? It seems like this would be more challenging in some regulatory contexts than others, right?

Evasive techniques are obviously more likely to succeed for technologies and sectors that are “born free” as opposed to “born captive.”  Technologies that are “born free” are not confronted with old laws and regulatory regimes that require permission before new products and services are offered. For example, there is no Federal Robotics Commission, 3D Printing Safety Act, or Virtual Reality Agency. It’s obviously easier to innovate as you wish in those fields, at least currently.

If, however, you want to put a driverless car on the road or a drone in the sky, preemptive approval is required, making evasive acts far riskier. Of course, it is exactly those sectors where evasive acts are potentially most needed! Too many old sectors are immune from new entry and consumer choice due to cronyism and industrial protectionism. As we saw with the ride-sharing services and now electric scooter sharing, sometimes evasive techniques can work for a time and then give innovators more leverage at the bargaining table.

In some cases, like space policy, supersonic transportation, or new FinTech offerings, evasive strategies are largely impossible because of the stifling morass of overlapping laws and regulations. Agencies will not tolerate much (if any) departure from regulatory norms in those instances. The Federal Aviation Administration (FAA), Federal Communications Commission (FCC), and FDA are particularly notorious for stifling entrepreneurial efforts.

But I am sometimes surprised to find evasive efforts happening even in those sectors. While the FAA is quite heavy-handed about strictly regulating airspace, the agency isn’t doing much to enforce its current drone registration requirements. Countless Americans fly their drones every day without a care about what the feds say. And while 23andMe got a cease-and-desist order from the FDA due to its evasive efforts with home genetic test kits, the creators of many mobile medical devices and 3D-printed medical objects are currently being allowed to push up against the boundaries of legality under traditional FDA rules. The agency has bent its rules to accommodate that activity. When agencies take a pass on enforcing their own regulations, that is called “rule departure,” and it seems to be happening with greater regularity, probably due to the combined influence of both the pacing problem and evasive entrepreneurialism.

What’s at stake if policymakers push back too aggressively against evasive innovators?

Technological innovation is the fundamental driver of human well-being. When we let people experiment with new and better ways of doing things, we not only allow for the constant expansion of new goods and services, but we grow opportunities, incomes, and knowledge. This is how countries raise their overall standard of living and achieve prosperity over the long haul.

Entrepreneurs are the key to this process because by taking risks and exploring new opportunities, they continuously replenish the well of important ideas and innovations. If, therefore, we punish creative people for seeking creative solutions to hard problems (even those sometimes behaving “evasively”), we will be denied the fruits of those creative efforts. We will also be denying them the right to earn a living and enjoy the fruits of their labors. In this sense, the freedom to innovate is closely linked with individual autonomy and self-worth and deserves greater protection. It is about being free to pursue happiness however we each see fit.

Policymakers should, therefore, give innovators greater freedom to experiment, even when those efforts prove to be highly disruptive. Moonshots may not happen unless public policy supports a culture of experimentation and risk-taking. This is also crucial to the competitive advantage of nations. Scholars from many different fields have observed how a nation’s attitudes toward entrepreneurialism create a sort of “innovation culture,” which sends signals to individuals and investors about where they should spend their time and money. Unsurprisingly, where public policy frowns upon entrepreneurial effort, you get a lot less of it. Like a plant, innovation must be nurtured to help it and the economy grow.

In today’s highly integrated global economy, you either innovate or perish thanks to the increasing prevalence of “innovation arbitrage.” This refers to the fact that ideas and innovations will often flock to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. We see it happening today with drones, driverless cars, and genetic testing, to name just three prominent examples.

Don’t you think that policymakers will bring down the regulatory hammer on evasive entrepreneurs? Should they?

Humility, patience, and flexibility are the key virtues for policymakers in this regard. If policymakers can come to appreciate the ways in which evasive entrepreneurialism can help advance economic and social opportunities, then they should consider giving innovative acts a wide berth—even when entrepreneurs are not in strict compliance with all laws and regulations.

Evasive acts are not usually undertaken to completely defy the law. Instead, they often represent the beginning of a negotiation. Many innovators have grown frustrated with public policies that block new entry or just defy common sense. Evading anti-competitive or illogical restrictions is a way to gain some degree of leverage in political negotiations. Sometimes it works; sometimes it doesn’t. But traditional reform avenues are often foreclosed because incumbents and other defenders of the regulatory status quo don’t like change.

Policymakers should see evasive entrepreneurialism as a signal that politics sometimes fails to serve the public when change is needed most. And once they sit down with innovators to discuss a better way of crafting policy, they need to be willing to adapt and devise more flexible governance frameworks, mostly of a “soft law” variety. As my colleagues and I explain in a recent law review article, soft law refers to a hodge-podge of informal governance tools for emerging tech, such as multistakeholder processes, industry best practices, agency guidance and consultation, and so on. Such informal governance mechanisms will need to fill the governance gap left by the gradual erosion of hard law thanks to the growth of the pacing problem and the expansion of evasive entrepreneurialism.

But what about the worst-case scenarios some fear, like the proverbial mad scientist who concocts a horrific virus in their basement? Even if they are still just hypothetical, aren’t some serious risk worth addressing preemptively?

Indeed, there are some extremely serious harms that are worth addressing preemptively, but that’s all the better reason not to get obsessed with lesser concerns. Over-regulating entrepreneurial activity is foolish in a world where policymakers are both knowledge- and resource-constrained.

My Mercatus colleagues have documented the astonishing growth and cost of regulatory accumulation. But forget about the burden excessive regulation poses to entrepreneurs and the economy for a moment, and instead consider how all those enforcement activities divert the time and attention of regulators themselves away from bigger problems. When policymakers get lost in a convoluted compliance maze of their own making, they lose the ability to address big risks in a sensible, timely fashion. That’s why we need a new governance vision for the technological age that is more flexible and adaptive than the heavy-handed regulatory regimes of the Industrial Era.

Evasive Entrepreneurialism and Technological Civil Disobedience: Basic Definitions (Tue, 10 Jul 2018) https://techliberation.com/2018/07/10/evasive-entrepreneurialism-and-technological-civil-disobedience-basic-definitions/

I’ve been working on a new book that explores the rise of evasive entrepreneurialism and technological civil disobedience in our modern world. Following the publication of my last book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, people started bringing examples of evasive entrepreneurialism and technological civil disobedience to my attention and asked how they were related to the concept of permissionless innovation. As I started exploring and cataloging these cases studies, I realized I could probably write an entire book about these developments and their consequences.

Hopefully that book will be wrapped up shortly. In the meantime, I am going to start rolling out some short essays based on content from the book. To begin, I will state the general purpose of the book and define the key concepts discussed therein. In coming weeks and months, I’ll build on these themes, explain why they are on the rise, explore the effect they are having on society and technological governance efforts, and more fully develop some relevant case studies.

Key Concepts Defined

  • Evasive entrepreneurs – Innovators who don’t always conform to social or legal norms.
  • Regulatory entrepreneurs – Innovators who “are in the business of trying to change or shape the law” and are “strategically operating in a zone of questionable legality or breaking the law until they can (hopefully) change it.” (Pollman & Barry)
  • Technologies of freedom – Devices and platforms that let citizens openly defy (or perhaps just ignore) public policies that limit their liberty or freedom to innovate.
  • The “pacing problem” – The gap between the ever-expanding frontier of technological possibilities and the ability of governments to keep up with the pace of those changes.
  • Technological civil disobedience – The technologically-enabled refusal of individuals, groups, or businesses to obey certain laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant.
  • Innovation arbitrage – The movement of ideas, innovations, or operations to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. It can also be thought of as a form of “jurisdictional shopping” and can be facilitated by “competitive federalism.”
  • Permissionless innovation – As a general concept, it refers to Rear Admiral Grace Hopper’s notion that quite often, “It’s easier to ask forgiveness than it is to get permission.” As a policy vision, it refers to the idea that experimentation with new technologies and business models should generally be permitted by default. Permissionless innovation comes down to a general acceptance of change and risk-taking.

Themes of the Book

The book documents how evasive entrepreneurs are using new technological capabilities to circumvent traditional regulatory systems, or at least put pressure on public policymakers to reform or selectively enforce laws and regulations that are outmoded, inefficient, or illogical. Evasive entrepreneurs pursue a strategy of “permissionless innovation” in both the business world and the political arena. In essence, they live out the adage that “it is easier to ask forgiveness than it is to get permission” by creating new products and services without necessarily receiving the blessing of public officials before doing so.

Evasive entrepreneurs are taking advantage of the growth of various technologies of freedom and the corresponding “pacing problem” to create new goods and services or just decide how to live a life of their own choosing. We can think of this phenomenon as “technological civil disobedience.” The technologies of freedom that facilitate this sort of civil disobedience include common tools like smartphones, ubiquitous computing, and various new media platforms, as well as more specialized technologies like cryptocurrencies and blockchain-based services, private drones, immersive tech (like virtual reality), 3D printers, the “Internet of Things,” and sharing economy platforms and services. But that list just scratches the surface.

When innovators and consumers use new tools and technological capabilities to pursue a living, enjoy new experiences, or enhance their lives and the lives of others, they often disrupt legal or social norms in the process. While that can raise serious legal and ethical concerns, evasive entrepreneurialism and technological civil disobedience can have positive upsides for society by:

  • expanding the range of life-enriching—and even life-saving—innovations available to society;
  • helping citizens pursue a life of their own choosing—both as creators looking for the freedom to earn a living, and as consumers looking to discover and enjoy important new goods and services; and,
  • providing a meaningful, ongoing check on government policies and programs that all too often have outlived their usefulness or simply defy common sense.

For those reasons, my book will argue that we should accept—and often even embrace—a certain amount of evasive entrepreneurialism and technological civil disobedience. I am particularly excited by the last point. In an age when many of the constitutional limitations on government power are being ignored or unenforced, innovation itself can act as a powerful check on the power of the state and help serve as a protector of important human liberties. Over the past century, both legislative and judicial “checks and balances” in the United States have been eroded to the point where they now exist mostly in name only. While we should never abandon efforts to use democratic and constitutional means of limiting state power—especially in the courts, where meaningful reforms are still possible—the ongoing evolution of technology can provide another way of keeping governments in line by forcing public officials to constrain their worse tendencies and undo past mistakes. If they fail to, they risk losing the allegiance of their more technologically-empowered citizenry.

But evasive entrepreneurialism and technological civil disobedience can have serious downsides, too. We should explore how to address the challenges associated with this more turbulent and sometimes dangerous world. In doing so, however, technological critics and public policymakers should also appreciate how once any particular innovation genie is out of its bottle, it will be increasingly difficult to stuff it back in. Worse yet, attempts to do so can often result in a “compliance paradox,” in which tighter rules lead to increased legal evasion and intractable enforcement challenges. Thus, more flexible and adaptive technological governance mechanisms will be needed.

In coming essays, I will discuss some prominent examples of these trends that are developed at length in my book. I will also do a deeper dive into some of the interesting ways governments are responding to these developments using what Phil Weiser refers to as “entrepreneurial administration,” or what others call “soft law” mechanisms. As Weiser notes, “[t]he traditional model of regulation is coming under strain in the face of increasing globalization and technological change,” and, therefore, governments must think and act differently than they did in the past. And they are already doing so. Even in an age of expanding evasive entrepreneurialism and technological civil disobedience, governments can shape the evolution of technology. But that cannot be done using the previous era’s technocratic, overly bureaucratic, and top-down regulatory playbook. New policies and procedures will be needed for a new era.

A Guide on Breaking Into Technology Policy (Tue, 10 Oct 2017) https://techliberation.com/2017/10/10/a-guide-on-breaking-into-technology-policy/

In recent months, I’ve come across a growing pool of young professionals looking to enter the technology policy field. Although I was lucky enough to find a willing and capable mentor to guide me through a lot of the nitty-gritty, many of these would-be policy entrepreneurs haven’t been as fortunate. Most of them are keen on shifting out of their current policy area, or are newcomers to Washington, D.C. looking to break into a technology policy career track. This is a town where there’s no shortage of sage wisdom, and while much of it remains relevant to new up-and-comers, I figured I would pen these thoughts based on my own experiences as a relative newcomer to the D.C. tech policy community.

I came to D.C. in 2013, originally spurred by the then-recent revelations of mass government surveillance revealed by Edward Snowden’s NSA leaks. That event led me to the realization that the Internet was fragile, and that engaging in the battle of ideas in D.C. might be a career calling. So I packed up and moved to the nation’s capital, intent on joining the technology policy fray. When I arrived, however, I was immediately struck by the almost complete lack of jobs in, and focus on, technology issues in libertarian circles.

Through a series of serendipitous circumstances, I managed to break into what was still a small and relatively under-appreciated field. What we lacked in numbers and support we had to make up for in quality and determined effort. Although the tech policy community has grown considerably in recent years, this is still a niche vocation relative to other policy tracks. That means there’s a lot of potential for rapid professional growth, if you can manage to get your foot in the door.

So if you’re interested in breaking into technology policy, here are some thoughts that might be of help.

Adapting to the Shifting Sands

My own mentor, Mercatus Senior Fellow Adam Thierer, wrote what I consider the defining guide to breaking into the technology policy arena. Before jumping into the depths of policy, I used his insights in that article to help wrap my head around the ins and outs of this field. The broad takeaway is that you should learn from those who came before you. Intellectual humility is important in any profession, and tech policy is no different. Even in this still-young and growing field, there’s an exceptionally robust body of work that is worth parsing through. That means, first and foremost, reading. A lot.

Many of these pieces are going to touch on a broad range of disciplines. Law review articles, technical analyses, regulatory comments, and economic research play an important role in informing the many and varied debates in the tech policy field. While a degree in law or economics isn’t a prerequisite for working in this space, you’ll definitely need to do your homework. Having an understanding of the interdisciplinary work being done in tech policy can be the difference between a good analyst and a great analyst.

Distinguishing yourself in the field also requires embracing the inherent dynamism of this issue space. Things can change a lot, and quickly. The rate of technological change in the modern era is rapid and unceasing, and those changes are reflected in the policy arena. If you’re going to keep up with the pace, you’ll not only have to consistently read (a lot), you’ll have to be passionate about the learning. For some, that may be daunting; for those who live for perpetual motion in policy, it can be exciting and energizing. If you’re uncomfortable with that level of dynamism and prefer something a bit more certain and steady, then this probably isn’t the career track for you.

If you yearn for the constantly shifting sands, however, then you’re going to have to read, read, read, and then read some more.

Once you’ve done the reading, you’ll have to start thinking about how, or whether, you want to specialize. Adam notes this explicitly in his piece: specialization matters. I tend to agree. However, what you decide to specialize in is less straightforward. Because this field is ever-changing, the opportunities for specialization are also changing, with a lot of issues intermingling with one another and blurring the lines of previously distinct areas.

Telecommunications, for example, is technically an area of specialization for tech policy. However, even that category has become quite broad and now very often overlaps with newer emerging technology issues. As an example, working on spectrum issues—previously the purview of analysts looking at the traditional media marketplace (television, radio, etc.)—now involves a host of other non-telecommunications issues, such as autonomous and connected vehicles, small microcube satellite constellations delivering Internet service, low-altitude commercial drone traffic management, and much more. Specialization just isn’t what it used to be, and as the policy landscape continues to change relative to the emergence of new technologies, would-be tech policy analysts will need to be flexible and adaptive in considering what issues merit engagement.

In short, read with an eye towards specializing, but be prepared to adapt when things change; and when they inevitably do, get ready to read some more and specialize anew.

Understanding the Political Landscape

You may already have strongly held political opinions. Then again, maybe not. Either way, it’s important to understand the who’s who of this space, where they come down on their philosophical approaches to technology governance, and how each ideological tribe thinks about the issues. Because tech policy doesn’t elicit the same type of partisanship more commonly associated with traditional issues like health policy and labor policy, you may be surprised to discover who your bedfellows are.

There are some issue-specific exceptions to this. The debate over net neutrality comes to mind as a particularly controversial flashpoint, largely divided down partisan lines. In general, however, there’s relatively little hyper-partisanship in technology policy debates. Technological progress and innovation are generally viewed positively across the political spectrum. As a result, the discussions surrounding issues like AI, autonomous vehicles, and other emerging technologies seldom involve disagreement over whether such advances should be permitted—though again, there are exceptions—and instead boil down to issues related to the specific regulations that will govern their deployment. Ultimately, the discourse tends to gravitate towards the political center, and disagreements are largely confined to issues of regulatory governance: the variety (what types of rules), source (who governs), and magnitude (how restrictive or permissive) of regulations. To figure out where your sympathies lie, you’ll first need to make sense of the political terrain by identifying the major players in technology policy circles.

To that end, I definitely suggest you take a look at this great landscape analysis from Rob Atkinson, the president of the Information Technology and Innovation Foundation. Rob classifies the tech policy crowd into 8 camps:

  • Cyber-Libertarians believe the Internet can get along just fine without the nations, institutions, and other “weary giants of flesh and steel” of the pre-Internet world;
  • Social Engineers are proponents of the Internet’s promise as an educational and communications tool, but tend to downplay its economic benefits;
  • Free Marketers believe in the Internet’s power as a liberating force for markets and individuals, and are generally skeptical of government involvement;
  • Moderates are “staunchly and unabashedly” in favor of technological developments, but are supportive of government involvement in promoting and accelerating these developments;
  • Moral Conservatives tend to view the Internet and emerging technologies as nefarious dens of vice that are accelerating the decline of traditional cultural norms and etiquette, and are supportive of government efforts to reverse that decline; and
  • Old Economy Regulators don’t believe there is anything unique about these new technological tools, and believe restrictive pre-Internet regulatory frameworks can work just as well when applied to these new digital technologies.

Rob also ropes in the “Tech Companies and Trade Associations” and “Bricks and Mortars” groups, but I leave these aside as they tend to fall slightly outside the traditional policy analysis space associated with nonprofits, academic institutions, and advocacy groups. Going by Rob’s classification, I used to oscillate between associating with the “Cyber-Libertarian” and “Free Marketer” tribes. In recent years, however, I’ve moved quite solidly into the “Moderate” camp.

Wherever you think you fall, be sure not to ignore the work of “non-aligned” organizations and individuals—the best tech policy analysts are those who know both sides of a debate inside and out. Getting to know the major dividing lines between these groups is key to understanding the nuances involved in tech policy debates, and Rob’s piece is an excellent starting point for newcomers to get a sense of where these disagreements rest.

Framing the Issues

As discussed previously, one of the defining characteristics of this policy field is its dynamic nature. An issue you thought you had nailed down on Monday could be completely flipped on its head by Friday. That’s why it’s so important to consider how you think about these issues. A general framework or taxonomy will help, and different analysts think about these issues differently.

For example, some people look at technology issues through the lens of privacy; others, through the lens of cybersecurity. Personally, I think that single-issue lenses tend to miss the fundamentally multi-faceted nature of this issue space. That’s why I look at tech policy through not a lens, but a kaleidoscope, with each emerging technology presenting unique privacy, cybersecurity, safety, regulatory, and economic challenges and benefits.

All emerging technologies require balancing these equities against one another. Autonomous vehicles will undoubtedly save lives, but may present greater concerns for privacy and cybersecurity. Commercial drones could decrease the cost of delivering goods or open up a renaissance in air transportation, but regulatory barriers and safety concerns present formidable obstacles to adoption. In short, I don’t think there’s any one “lens” through which it’s best to see these technologies. How you decide to approach an issue should ultimately be governed by how you balance the many tradeoffs associated with a new technology, and whether you prefer to use a “lens” or a “kaleidoscope.”

At the Niskanen Center, that “kaleidoscope” approach involves employing a framework that touches on four general issue “buckets”: Regulatory Governance, Emerging Technologies, the Digital Economy, and Cyber Society.

“Regulatory Governance” focuses on an examination of how rules and regulations can manage new emerging technologies. This bucket informs our basic principles and overarching perspective on technology policy (best encapsulated as support for a “soft law” regime), and directly informs our engagement on specific “Emerging Technologies,” such as genomics, AI, and autonomous vehicles.

The other two buckets—“The Digital Economy” and “Cyber Society”—involve areas in which there is a much greater degree of overlap and intermingling (copyright, “Future of Work” issues, online free speech, digital due process, government surveillance, etc.). These are areas where the lines between tech policy and other, more traditional policy work are much “fuzzier.” This leads us to an important point worth addressing if you’re thinking about jumping into this field: what is, and is not, tech policy?

Thinking About What Isn’t Tech Policy

Different analysts and scholars will disagree about the contours here, so I’ll caveat my thoughts on the “not-tech policy” space by noting that these are purely my own biases. What I consider “tech policy” will probably differ from what other individuals and organizations would group under that header. A lot can be said here, so I’ll just focus on one particular area that is often grouped under the tech policy banner, but which I would not consider tech policy proper: the gig economy.

Take Uber. Uber is a smartphone app. In that sense, it’s technology. However, the policy issues raised by its use are more relevant to labor, tax, welfare, and traditional regulatory policy analysis—the role of contract work in society, tax classification for part-time laborers, portability of benefits, and barriers to market entry, for example. Although the regulatory component is definitely related to tech policy, it’s not clear that the regulatory issues are technology-specific. This makes for reasonable disagreement about whether gig economy issues, which would also include services like Airbnb and TaskRabbit, are appropriately classified as primarily technology policy.

Ultimately, I see the gig economy as an area that is fundamentally about connecting unused or under-utilized capital to higher-value uses (in the case of Uber, connecting vehicles that would otherwise remain idle with passengers looking for transportation services). While the underlying technology that makes much of the gig economy possible (smartphone apps and digital communications technology) gives the appearance that these issues are actually about technology, the real policy implications are less technology-specific than other areas of tech policy, such as AI, the Internet of Things, autonomous vehicles, and commercial drones.

That said, there are plenty of cases to be made for including the gig economy within tech policy. The takeaway here, however, is that technology is eating the modern world, and pretty much all traditional policy spaces are now, in some respect, intertwined with tech policy. As such, we have to draw a dividing line somewhere; otherwise, “technology policy” loses any substantive meaning as a distinct field of study.

So if you’re thinking about a career in tech policy broadly, but have a particular interest in, e.g., gig economy issues, it’s worth asking what precisely draws you to the issue. If you’re primarily interested in its impact on labor markets, taxes, or regulatory barriers, then tech policy might not be what you had in mind.

Next Steps

So after you’ve read a bit, focused on an area of interest, developed a sense of the political landscape, and put some thought into framing your analytical approach, what next? Eli Dourado, formerly the director of the Technology Policy Program at Mercatus and now the head of global policy and communications at Boom, offered some succinct thoughts on actually getting involved in this field.

“First, get started now.”

Just start doing technology policy.
Write about it every day. Say unexpected things; don’t just take a familiar side in a drawn-out debate. Do something new. What is going to be the big tech policy issue two years from now? Write about that. Let your passion show.
The tech policy world is small enough — and new ideas rare enough — that doing this will get you a following in our community.

“Second, get in touch.”

These are both great pieces of advice. If you’re really interested in jumping into tech policy, then you’re going to want to start writing. Read as much as you can and get up to speed on the issues that interest you. Then start blogging and editorializing your thoughts. These days, the costs of starting your own blog are primarily just your time and effort, and there are plenty of easy-to-use and free services out there that you can take advantage of.

Once you’ve started writing, start connecting with a wider audience via Twitter, Facebook, and other social media platforms. But don’t limit yourself to online forums. Reach out to established analysts by email and get their thoughts and feedback. Networking is key, and if you’re not doing it, you’re not doing half the work. You might have the greatest tech policy thoughts since Marc Andreessen wrote “Why Software Is Eating the World” (which, incidentally, you should also add to your reading list), but if no one is reading your work, it doesn’t really matter. Just as you need to read, read, read, so too should you network, network, network, and then network some more.

Reach out, and get in touch with people in the field—especially those of us in D.C. If you’re serious about your craft and you’re putting in the time and effort to position yourself as a young tech policy professional, there are plenty of us who are more than happy to have a conversation with you. Indeed, like a lot of people in this field, I couldn’t have made it to where I am if not for the willingness of more established professionals like Adam to take the time to chat with me.

So reach out, network, and engage with those scholars and analysts whose work you follow.  A casual conversation could very easily be the beginning of a new career in tech policy.

Concluding Thoughts

So if after reading all that you’re still considering a career in tech policy, here are some final thoughts for consideration.

First, be open to the possibility that you may be wrong.

Tech policy debates involve a lot of nuance, but there’s also a lot of surprising agreement. Given the constant evolution of technology, at some point you’ll undoubtedly be confronted with a scenario in which you need to reassess your priors. (I’ve had to learn this lesson the hard way on the issue of surveillance. Just take a look at some of my writings earlier in my career and compare them with more recent pieces.) You shouldn’t constantly sway with the winds of compromise, but neither should you see every policy battle as a hill worth dying on.

Second, there’s no such thing as too much reading or networking.

This is worth reiterating, over and over, because it’s important, and there’s no shortcut here. There’s always more to read to get up to speed on tech issues, and chances are you’ll never know it all. So read, read, read, and when you’ve had enough of reading, try switching it up with some outreach and networking. There are a fair number of people working in tech policy, but it’s still a relatively small, close-knit community. Once you meet a handful of people, it’s easy enough to parlay those connections into introductions to the rest of us. Jobs in tech policy, especially in D.C., are still tough to come by, but it’s a growing field, and the more people you know, the better positioned you’ll be to take advantage of opportunities.

Finally, have something to say.

This point is worth an anthology all its own, and cannot be over-emphasized: don’t be a policy parrot. Have something to say—not just something to say, but something new and unique. That counts doubly for having actual policy solutions. There are plenty of people who default to the “let’s have a conversation” school of thought—don’t be one of them. Your job as an analyst is to parse the details of a contentious issue and apply your expertise to provide real, actionable recommendations on the appropriate course of action. Have real recommendations and actual solutions and you’ll set yourself apart from the run-of-the-mill tech policy analyst. Always remember: doing something half-assed is worse than doing nothing at all. Don’t be the half-assed tech policy parrot.

Don’t get discouraged; establishing your brand takes time. But if you’re serious about giving tech policy a go and you put in the effort, there will be opportunities to make a name for yourself. So read, write, reach out, and offer something unique to the discussion. If you can do that, the sky’s the limit.

Book Review: Garry Kasparov’s “Deep Thinking” https://techliberation.com/2017/05/11/book-review-garry-kasparovs-deep-thinking/ https://techliberation.com/2017/05/11/book-review-garry-kasparovs-deep-thinking/#respond Thu, 11 May 2017 22:58:17 +0000 https://techliberation.com/?p=76140

[Originally posted on Medium.]

Today is the anniversary of the day the machines took over.

Exactly twenty years ago today, on May 11, 1997, the great chess grandmaster Garry Kasparov became the first chess world champion to lose a match to a supercomputer. His battle with IBM’s “Deep Blue” was a highly-publicized media spectacle, and when he lost Game 6 of his match against the machine, it shocked the world.

At the time, Kasparov was bitter about the loss and even expressed suspicions about how Deep Blue’s team of human programmers and chess consultants might have tipped the match in favor of machine over man. Although he still wonders about what went on behind the scenes during the match, Kasparov is no longer as sore as he once was about losing to Deep Blue. Instead, he has built on his experience from that fateful week in 1997 and learned how he and others can benefit from it.

The result of this evolution in his thinking is Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, a book which serves as a paean to human resiliency and our collective ability as a species to adapt in the face of technological disruption, no matter how turbulent.

Kasparov’s book serves as the perfect antidote to the prevailing gloom-and-doom narrative in modern writing about artificial intelligence (AI) and smart machines. His message is one of hope and rational optimism about a future in which we won’t be racing against the machines but rather running alongside them and benefiting in the process.

Overcoming the Technopanic Mentality

There is certainly no shortage of books and articles being written today about AI, robotics, and intelligent machines. The tone of most of these tracts is extraordinarily pessimistic. Each page drips with dystopian dread, depicting a future in which humanity is essentially doomed.

As I noted in a recent essay about “The Growing AI Technopanic,” after reading through most of these books and articles, one is left to believe that in the future: “Either nefarious-minded robots enslave us or kill us, or AI systems treacherously trick us, or at a minimum turn our brains to mush.” These pessimistic perspectives are clearly on display within the realm of fiction, where every sci-fi book, movie, or TV show depicts humanity as certain losers in the proverbial “race” against machines. But such lugubrious lamentations are equally prevalent within the pages of many non-fiction books, academic papers, editorials, and journalistic articles.

Given the predominantly panicky narrative surrounding the age of smart machines, Kasparov’s Deep Thinking serves as a welcome breath of fresh air. The aim of his book is finding ways of “doing a smarter job of humans and machines working together” to improve well-being.

Chess fans will enjoy Kasparov’s overview of the history of the game as well as his discussion of how the development of computing and smart machines has been intermingled with chess for many decades now. They will also appreciate his detailed postmortem of his losing battle with Deep Blue, which makes up the meat of the middle of the book. But what is important about the book is the way Kasparov draws out lessons about how the game of chess and chess players themselves have adapted to the rise of smart machines over time — just as he had to following his historic loss to Deep Blue.

Kasparov begins by noting that the growing panic over machine-learning and AI is unwarranted, but in another sense entirely unsurprising. He correctly observes that, “doomsaying has always been a popular pastime when it comes to new technology” and that, “With every new encroachment of machines, the voices of panic and doubt are heard, and they are only getting louder today.”

Fears of sectoral disruptions and job displacements are nothing new, of course, and many of them have even proven legitimate, Kasparov notes. He discusses “a pattern that has repeated over and over for centuries,” in which humans initially scoffed at the idea of machines being able to compete with them. “Eventually we have had to concede that there is no physical labor that couldn’t be replicated, or mechanically surpassed.” That includes the game of chess, where smart machines are now superior to the world’s best players.

But that doesn’t mean we can or should stop the progression of machine intelligence, he says, because the history of humanity is fundamentally tied up with the never-ending process of technological improvements and the gradual assimilation of new tools into our lives, jobs, and economy. He argues:

“Every profession will eventually feel this pressure, and it must, or else it will mean humanity has ceased to make progress. We can either see these changes as a robotic hand closing around our necks or one that can lift us up higher than we can reach on our own, as has always been the case. Romanticizing the loss of jobs to technology is little better than complaining that antibiotics put too many grave diggers out of work.”

That is why it is essential, Kasparov argues, that we not waste time trying to avoid these changes altogether. He regards the very idea of it as an exercise in futility. “Fighting to thwart the impact of machine intelligence is like lobbying against electricity or rockets,” he says. Instead, he argues, we must look to adapt, and do so quickly.

Adaptation, Resiliency & Risk-Taking

In that sense, Kasparov suggests that there are lessons for us in the history of chess as well as from his own experience competing against Deep Blue. He notes that his match against IBM’s supercomputer, “was symbolic of how we are in a strange competition both with and against our creation in more ways every day.”

Instead of just throwing our hands up in the air in frustration, we must be willing to embrace the new and unknown — especially AI and machine-learning. “Each of us has a choice to make: to embrace these new challenges, or to resist them.” His consistent plea throughout the book is not to give in to our worst fears, but instead to embrace these new technological challenges with a willingness to try new ways of doing things. “No matter how many people are worried about jobs, or the social structure, or killer machines, we can never go back,” he concludes.

On that point, my favorite passage in his book comes early in a short chapter about the history of chess. Kasparov’s sagacious advice is worth quoting at length:

“The willingness to keep trying new things — different methods, uncomfortable tasks — when you are already an expert at something is what separates good from great. Focusing on your strengths is required for peak performance, but improving your weaknesses has the potential for the greatest gains. This is true for athletes, executives, and entire companies. Leaving your comfort zone involves risk, however, and when you are already doing well the temptation to stick with the status quo can be overwhelming, leading to stagnation.”

Societal attitudes toward risk-taking and disruption matter profoundly in this regard because “our perspective on disruption affects how well prepared for it we will be.” Again, the lessons from the world of chess are clear: “How professional chess changed when computers and databases arrived is a useful metaphor for how new technology is adopted across industries and societies in general.” For modern chess players, “it was a matter of adapting to survive,” he argues. “Those who quickly mastered the new methods thrived; the few who didn’t mostly dropped down the rating lists.”

 

Disrupting Education

Kasparov is particularly concerned about how a deep underlying conservatism and resistance to experimentation has become a chronic problem within the traditional educational system. “The prevailing attitude is that education is too important to take risks. My response is that education is too important not to take risks,” he says.

He again returns to the world of chess and he speaks with excitement about the ways in which young chess prodigies are tapping computers and sophisticated programs to supplement their skill-building. They do this, Kasparov says, even though they often receive little encouragement from the older guard, who often still resist the new methods of learning. “We need to find out what works and the only way to do that is to experiment,” he argues. “The kids can handle it. They are already doing it on their own. It’s the adults who are afraid.”

He’s also bullish on the globalization of these trends and the way in which “technology will enable people from all over the world to become entrepreneurs, or scientists, or anything they want despite where they live.” Kasparov believes this is already happening within the global chess community as new computing technologies help players everywhere raise the level of their skills. “Kids are capable of learning far more, far faster, than traditional educational methods allow for,” he argues. “They are already doing it mostly on their own, living and playing in a far more complex environment than the one their parents grew up in.”

Problems Ahead

Kasparov isn’t blind to the potential problems associated with new technologies, including AI and algorithmic systems. The potential for privacy violations represents one of the major concerns related to our powerful new technological capabilities. “There are countless privacy issues to be negotiated anytime [personal] data is accessed, of course, and that trade-off will continue to be one of the main battlefields of the AI revolution.”

Kasparov says he is “glad privacy advocates are on the job, especially regarding the powers of the government,” yet he also senses that we are our own worst enemies because new digital technologies and AI-enabled systems “will continue to make the benefits of sharing our data practically irresistible.” “Utility always wins,” he argues, and even if one country seeks to clamp down on innovation, others will welcome it. “When the results come back and show that the economic and health benefits are tremendous, the floodgates will open everywhere.”

He is probably right. After all, as I have noted in recent essays, we increasingly live in a world where “global innovation arbitrage” — i.e., the movement of innovations to jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity — is easier and more frictionless than ever. We already know how challenging it is to control data flows in the age of the Internet, smartphones, and social media. But the combination of more sophisticated forms of machine-learning and the rise of innovation arbitrage opportunities means that formidable challenges lie ahead in terms of digital privacy and cybersecurity.

Other ethical issues will need to be worked out over time, but it is important not to imbue new AI technologies or automated systems with too much moral weight right out of the gates. “Our technology is not concerned about good or evil. It is agnostic,” Kasparov correctly notes. The real question, he says, is how we ourselves put our tools to use. “The ethics are in how we humans use it, not whether we should build it.”

Humility about the Future

Despite some concerns such as these, Kasparov is generally quite bullish about the future of humanity in an age of smart machines. Again, his core message is that, “going backwards isn’t an option” and that “it is almost always better to start looking for alternatives and how to advance the change into something better instead of trying to fight it and hold on to the dying status quo.”

He agrees with many other pundits that new skills and jobs will be needed going forward, but admits they aren’t always easy to plan for in advance. As Yogi Berra once famously said, “It’s tough to make predictions, especially about the future.” Indeed, as I pointed out in the most recent edition of my book Permissionless Innovation, when you look back at official government labor market studies and forecasts from the 1970s and 1980s, you are struck by the way in which policymakers didn’t even have a vocabulary to describe the jobs and skills of the present. For example, you find no mention in past reports of some of today’s hottest jobs, such as software engineers and architects, UX designers, database scientists and administrators, and so on.

On one hand, therefore, pessimistic pundits and policymakers regularly underestimate the adaptability of workers and the evolution of new skills and professions. On the other hand, they make an equally egregious mistake when they overestimate the impact of technological change on many sectors and professions, or suggest that mass unemployment is just around the corner unless we slow automation down.

Just this week, the Information Technology and Innovation Foundation released a new report on the impact of technological disruption in the U.S. labor market from 1850 to the present and decried the “false alarmism” often on display in debates about current and future skills and professions. “Labor market disruption is not abnormally high,” conclude authors Robert D. Atkinson and John Wu; instead, “it’s occurring at its lowest rate since the Civil War.”

We’ve been through more turbulent labor market disruptions in the past and weathered the storm. Chances are we will do so again, so long as we embrace the potential for that change to improve our lives and economy in the long-term. “In fact,” conclude Atkinson and Wu, “the single biggest economic challenge facing advanced economies today is not too much labor market churn, but too little, and thus too little productivity growth.” This is consistent with Kasparov’s repeated call in Deep Thinking for us not to give in to our fears about a highly uncertain future but to instead embrace its potential. “Our machines will continue to make us healthier and richer as we use them wisely,” he says, while adding, “They will also make us smarter.”

Learning by Doing

What Kasparov is really doing throughout the book is making the case for building human and institutional resiliency through a constant willingness to experiment and learn through trial and error. It is certainly true that many of today’s skillsets, professions, and business models will be challenged by the rise of smarter machines and algorithmic learning. Defeatism in the face of that prospect, however, isn’t the answer; adaptation is.

Boston University economist James Bessen wrote about this process in his new book, Learning by Doing. Bessen argued that periods of profound technological change require a willingness by workers, businesses, and other institutions to adjust to new marketplace realities. For progress to occur, large numbers of ordinary workers must acquire new knowledge and skills. However, “that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture,” Bessen notes.

Luckily, history also suggests that we have been through this process many times before and can get through it again — and raise the standard of living for workers and average citizens alike over the long run. The crucial part of that process is a general willingness to continue experimenting with new ways of doing things — i.e., learning by doing — and understanding that new skills and professions will emerge from all that experimentation.

That is essentially the same point Kasparov makes in Deep Thinking. As he summarized in a new podcast conversation with Tyler Cowen:

“There will be redistribution of jobs. Many jobs today — like drone operators or 3D printer managers or social media managers — they didn’t exist 10 years ago, 15 years ago. No doubt in 10, 15 years, there will be many jobs, maybe the best-paid jobs, that don’t exist today, and we don’t even know how these jobs will look. I think that’s natural. All we have to do is realize that this process is inevitable, and we have to prepare us mentally, but also to have some sort of safety cushions to help people that will have great difficulty in adjusting.”

What about more specific public policy solutions? Considering the unclear future that lies ahead, flexibility and plenty of policy experimentation will be crucial to finding and unlocking new methods that could help us cope and adapt in the new world. “The problem comes when the government is inhibiting innovation with overregulation and short-sighted policy,” Kasparov says. Trade wars and restrictive immigration policies won’t help matters either, he argues, because they “will limit America’s ability to attract the best and brightest minds.” Hopefully the Trump Administration is listening to his advice in this regard.

AI skeptics and other technology critics will lament Kasparov’s lack of greater detail and the absence of a more precise blueprint for helping workers and institutions navigate an uncertain future. But, again, the entire point of Kasparov’s book is that there is enormous value in the very act of confronting those new challenges, learning through trial and error (including the many accompanying failures), and “muddling through” over time.

Much like looking out over the chessboard and pondering the wisdom of our next move, we cannot be frozen into inaction because of fear. We must be willing to make that next move. And then another, and another. And then we must learn from our experiences, and especially our mistakes, if we hope to prosper. “To keep ahead of the machines, we must not try to slow them down because that slows us down as well,” Kasparov concludes in his closing chapter. “We must speed them up. We must give them, and ourselves, plenty of room to grow. We must go forward, outward, and upward.”

Wise advice from the greatest of all grandmasters.

Innovation Arbitrage, Technological Civil Disobedience & Spontaneous Deregulation https://techliberation.com/2016/12/05/innovation-arbitrage-technological-civil-disobedience-spontaneous-deregulation/ https://techliberation.com/2016/12/05/innovation-arbitrage-technological-civil-disobedience-spontaneous-deregulation/#comments Mon, 05 Dec 2016 20:06:53 +0000 https://techliberation.com/?p=76096

The future of emerging technology policy will be influenced increasingly by the interplay of three interrelated trends: “innovation arbitrage,” “technological civil disobedience,” and “spontaneous private deregulation.” Those terms can be briefly defined as follows:

  • “Innovation arbitrage” refers to the idea that innovators can, and will with increasing regularity, move to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. Just as capital now fluidly moves around the globe seeking out more friendly regulatory treatment, the same is increasingly true for innovations. And this will also play out domestically as innovators seek to play state and local governments off each other in search of some sort of competitive advantage.
  • “Technological civil disobedience” represents the refusal of innovators (individuals, groups, or even corporations) or consumers to obey technology-specific laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant. New technological devices and platforms are making it easier than ever for the public to openly defy (or perhaps just ignore) rules that limit their freedom to create or use modern technologies.
  • “Spontaneous private deregulation” can be thought of as the de facto, rather than de jure, elimination of traditional laws and regulations owing to a combination of rapid technological change as well as the potential threat of innovation arbitrage and technological civil disobedience. In other words, many laws and regulations aren’t being formally removed from the books, but they are being made largely irrelevant by some combination of those factors. “Benign or otherwise, spontaneous deregulation is happening increasingly rapidly and in ever more industries,” noted Benjamin Edelman and Damien Geradin in a Harvard Business Review article on the phenomenon.[1]

I have previously documented examples of these trends in action for technology sectors as varied as drones, driverless cars, genetic testing, Bitcoin, and the sharing economy. (For example, on the theme of global innovation arbitrage, see all these various essays. And on the growth of technological civil disobedience, see, “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” and “Quick Thoughts on FAA’s Proposed Drone Registration System.” I also discuss some of these issues in the second edition of my Permissionless Innovation book.)

In this essay, I want to briefly highlight how, over the course of just the past month, a single company has offered us a powerful example of how both global innovation arbitrage and technological civil disobedience—or at least the threat thereof—might become a more prevalent feature of discussions about the governance of emerging technologies. And, in the process, that could lead to at least the partial spontaneous deregulation of certain sectors or technologies. Finally, I will discuss how this might affect technological governance more generally and accelerate the movement toward so-called “soft law” governance mechanisms as an alternative to traditional regulatory approaches.

Comma.ai Case Study, Part 1: The Innovation Arbitrage Threat

The company I want to highlight is Comma.ai, a start-up that had hoped to sell a $999 after-market kit for vehicles called the “Comma One,” which “would give average, everyday cars autonomous functionality.”[2] Created by famed hacker George Hotz, who as a teenager gained notoriety for being the first person to unlock an iPhone in 2007, the Comma One represents an attempt to create autonomous vehicle tech “on the cheap” by using off-the-shelf cameras and GPS technology combined with a healthy dose of artificial intelligence technology.


But regulators at the National Highway Traffic Safety Administration (NHTSA), the federal agency responsible for road safety and automobile regulation, were none too happy to hear about Hotz’s plan to unleash his technology into the wild without first getting their blessing. On October 27, the agency fired off a nastygram to Hotz saying: “We are concerned that your product would put the safety of your customers and other road users at risk. We strongly encourage you to delay selling or deploying your product on the public roadways unless and until you can ensure it is safe.”

Hotz responded on Twitter promptly and angrily. After posting the full NHTSA letter, he said, “First time I hear from them and they open with threats. No attempt at a dialog.” In a follow-up tweet, he said, “Would much rather spend my life building amazing tech than dealing with regulators and lawyers. It isn’t worth it.” And then he announced that, “The comma one is cancelled. comma.ai will be exploring other products and markets. Hello from Shenzhen, China.” A flood of news articles followed about Hotz’s threat to engage in this sort of global innovation arbitrage by bolting from US shores.[3]

Incidentally, what Hotz and Comma.ai were proposing to do with Comma One—i.e., deploy autonomous vehicle tech into the wild without prior regulatory approval—was recently done by Otto, a developer of autonomous trucking technology. As Mark Harris reported on Backchannel:

When Otto performed its test drive — the one shown in the May video — it did so despite a clear warning from Nevada’s Department of Motor Vehicles (DMV) that it would be violating the state’s autonomous vehicle regulations. When the DMV realized that Otto had gone ahead anyway, one official called the drive “illegal” and even threatened to shut down the agency’s autonomous vehicle program.[4]

While Nevada regulators were busy firing off angry letters, Otto was busy doing even more testing in other states (like Ohio), which are eager to make their jurisdictions a testbed for autonomous vehicle innovation.[5] In fact, just recently, Ohio Gov. John Kasich announced the creation of the “Smart Mobility Corridor,” which, according to the Dayton Daily News, will be “a 35-mile stretch of U.S. 33 in central Ohio that runs through Logan County. Officials say that section of U.S. 33 will become a corridor where technologies can be safely tested in real-life traffic, aided by a fiber-optic cable network and sensor systems slated for installation next year.”[6]


This is an example of how innovation arbitrage will increasingly take root domestically as well as abroad, as some states (or countries) use inducements in an effort to lure innovators to their jurisdictions.

Anyway, let’s get back to the Comma One case study. I don’t want to get too sidetracked regarding the merits of the concerns raised by NHTSA in its letter to Hotz and the implications of the agency’s threats for innovation in this space. But EFF board member Brad Templeton did a nice job addressing that issue in an essay about NHTSA’s letter that threatened Comma. As Templeton observed:

I will presume the regulators will say, “We only want to scare away dangerous innovation” but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. It’s all there trying to take the car — the 2nd most dangerous legal consumer product — and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.[7]

This gets to the very real trade-offs in play in the debate over driverless car technology and its regulation. In fact, my Mercatus Center colleague Caleb Watney and I recently filed comments[8] with NHTSA addressing the agency’s recently proposed “Federal Automated Vehicles Policy.”[9] We stressed the potentially deleterious implications of prior regulatory restraints on autonomous vehicle innovation by highlighting the horrific real-world baseline we live with today: over 35,000 people died on US roadways in 2015 (roughly 96 people per day), and 94 percent of those crashes were attributable to human error.

Caleb and I noted that, by imposing new preemptive constraints on the coding of superior autonomous driving technology, “NHTSA’s proposed policy for automated vehicles may inadvertently increase the number of total automobile fatalities by delaying the rapid development and diffusion of this life-saving technology.” Needless to say, if that comes to pass, it would be a disaster because “automation on the roads could be the great public-health achievement of the 21st century.”[10]

In our filing, Caleb and I estimated that, “If NHTSA’s proposed premarket approval process slows the deployment of HAVs by 5 percent, we project an additional 15,500 fatalities over the course of the next 31 years. At 10 percent regulatory delay, we project an additional 34,600 fatalities over 33 years. And at 25 percent regulatory delay, we project an additional 112,400 fatalities over 40 years.”[11]

So, needless to say, this is a very big deal.
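
For readers curious about the mechanics behind projections of this sort, here is a minimal back-of-the-envelope sketch in Python. To be clear, this is not the actual model from our Mercatus filing; the logistic adoption curve, its parameter values, and the single 40-year horizon are illustrative assumptions chosen here purely to show the shape of the calculation.

import math

BASELINE_DEATHS = 35_000   # approximate annual US road fatalities (2015)
HUMAN_ERROR_SHARE = 0.94   # share of crashes attributed to human error

def adoption(year, midpoint=20, steepness=0.3):
    """Hypothetical logistic HAV adoption share, `year` years from today."""
    return 1 / (1 + math.exp(-steepness * (year - midpoint)))

def annual_deaths(year, delay_factor=1.0):
    """Projected deaths in a given year; a delay_factor > 1 stretches adoption."""
    share = adoption(year / delay_factor)
    return BASELINE_DEATHS * (1 - HUMAN_ERROR_SHARE * share)

def excess_deaths(delay_pct, horizon=40):
    """Cumulative extra deaths from slowing HAV deployment by delay_pct percent."""
    factor = 1 + delay_pct / 100
    return sum(annual_deaths(t, factor) - annual_deaths(t) for t in range(horizon))

for pct in (5, 10, 25):
    print(f"{pct}% delay -> roughly {excess_deaths(pct):,.0f} excess fatalities")

Swap in different adoption assumptions and the totals will move around, but the qualitative lesson will not: even a modest regulatory delay in diffusing a life-saving technology compounds into a very large human cost.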

But let’s ignore all those potential foregone benefits for the moment and just stick with the question of whether Hotz’s threat to engage in a bit of global innovation arbitrage (by moving to China or somewhere else) could work, or at least affect policy in some fashion. I think it absolutely could be an effective threat both because (a) policymakers really do want to do everything they can to achieve greater road safety, and (b) the auto sector remains a hugely important industry for the United States, and one that policymakers will want to do everything in their power to retain on our shores.

Moreover, as Templeton observes, “Comma is not the only company trying to build a system with pure neural networks doing the actual steering decisions.” Even if NHTSA succeeds in bringing Comma to heel, there will be others who will follow in its footsteps. It might be a firm like Otto, but there are many other players in this space today, including big dogs like Tesla and Google. If ever there was a truly global technology industry, it is the automotive sector. Autonomous vehicle innovation could take root and blossom in almost any country in the world, and many countries will be waiting with open arms if America screws up its regulatory process.

As Templeton concludes:

The USA and California led the way in robocars in part because it was unregulated. In the USA, everything is permitted unless it was explicitly forbidden and nobody thought to write “no robots” in the laws. Progress in other countries where everything is forbidden unless it is permitted was much slower. The USA is moving in the wrong direction.[12]

Comma.ai Case Study, Part 2: The Technological Civil Disobedience Threat

But an interesting thing happened on the way to Comma’s threatened exodus. On November 30, the firm announced that it would now be open sourcing the code for its autonomous vehicle technology. Reporters at The Verge noted that, during a press conference:

Hotz said that Comma.ai decided to go open source in an effort to sidestep NHTSA as well as the California DMV, the latter of which he said showed up to his house on three separate occasions. “NHTSA only regulates physical products that are sold,” Hotz said. “They do not regulate open source software, which is a whole lot more like speech.” He went on to say that “if the US government doesn’t like this [project], I’m sure there are plenty of countries that will.”[13]

So here we see Hotz combining the threat of still potentially taking the project offshore (i.e., global innovation arbitrage) with the suggestion that by open-sourcing the code for Comma One he might be able to get around the law altogether. We might consider that an indirect form of technological civil disobedience.


Incidentally, Hotz may not be aware of the fact that NHTSA is in the process of making a power-play to become a driverless car code cop. While Hotz is technically correct that, under current law, NHTSA officials “do not regulate open source software, which is a whole lot more like speech,” NHTSA’s recent Federal Automated Vehicles Policy claimed that the agency “has authority to regulate the safety of software changes provided by manufacturers after a vehicle’s first sale to a consumer” while also suggesting that the agency “may need to develop additional regulatory tools and rules to regulate the certification and compliance verification of such post-sale software updates.”[14]

Needless to say, this proposal has important ramifications not only for Comma, but for all other firms in this sector. Consider the implications for Tesla’s “Autopilot” mode, which is really little more than a string of constantly evolving code the company pushes out to offer greater and greater autonomous driving functionality. How would that iterative process work if, every time Tesla wanted to make a little tweak to its code, it had to run to Washington and file paperwork with NHTSA petitioning for permission to experiment and improve its systems? And then think about all the smaller innovators out there who want to be the next Elon Musk or George Hotz but do not yet have the resources or political connections in Washington to even go through this complex and costly process.

In any event, I have no idea whether Hotz or Comma.ai will follow through with any of these threats or be successful in doing so. It may be that he is just blowing smoke and that he and his firm will end up staying in the U.S., perhaps even later reversing course on the decision to open source the Comma code. But to the extent that innovators like Hotz even hint that they might leave the country or open source their code to avoid burdensome regulatory regimes, it can have an influence on future policy decisions. Or at least it should.

New Tech Realities & Their Policy Implications

Indeed, the increasing prevalence of global innovation arbitrage and technological civil disobedience raises some interesting issues for the governance of emerging technologies going forward. The traditional regulatory stance toward many existing sectors and technologies will be challenged by these realities. That’s because most of those traditional regulatory systems are highly precautionary, preemptive, and prophylactic in character. They generally opt for policy solutions that are top-down, overly rigid, and bureaucratic.

This results in a slow-moving and sometimes completely stagnant regulatory approval process that can stop innovation dead in its tracks, or at least delay it for many years. Such systems send innovators a clear message: You are guilty until proven innocent and must receive some bureaucrat’s blessing before you can move forward.

Of course, in the past, many innovators (especially smaller scale entrepreneurs) really couldn’t do much to avoid similar regulatory systems where they existed. You either fell into line, or else! It wasn’t always clear what “or else!” would entail, but it could range from being denied a permit/license to operate, waiting months or years for rules to emerge, dealing with fines or other penalties, or some combination of all those things. Or perhaps you would just give up on your innovative idea altogether and exit the market.

But the world has changed in some important ways in recent years. Many of the underlying drivers of the digital revolution—massive increases in processing power, exploding storage capacity, steady miniaturization of computing, ubiquitous communications and networking capabilities, the digitization of all data, and more—are beginning to have a profound impact beyond the confines of cyberspace.[15] As venture capitalist Marc Andreessen explained in a widely read 2011 essay about how “software is eating the world”:

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not. Why is this happening now? Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.[16]

We can add to this list of new realities the more general problem of technology accelerating at an unprecedented pace. This is what philosophers of technology call the “pacing problem.” In his new book, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control, Wendell Wallach concisely defined the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” “There has always been a pacing problem,” Wallach correctly observed, but like other philosophers, he believes that modern technological innovation is accelerating much faster than it was in the past.[17]

What are the ramifications of all this for policy? As technology lawyer and consultant Larry Downes has noted, lawmaking in the information age is now inexorably governed by the “law of disruption” or the fact that “technology changes exponentially, but social, economic, and legal systems change incrementally.”[18] This law is “a simple but unavoidable principle of modern life,” he said, and it will have profound implications for the way businesses, government, and culture evolve. “As the gap between the old world and the new gets wider,” he argues, “conflicts between social, economic, political, and legal systems” will intensify and “nothing can stop the chaos that will follow.”[19]
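
To make Downes’s point concrete, consider a toy calculation. The numbers below are arbitrary illustrations of my own, not measurements: assume technological capability compounds (doubling every two years) while legal and social systems improve by a fixed increment each year. The gap between the two then widens without bound.

tech, law = 1.0, 1.0
for year in range(21):
    if year % 4 == 0:
        print(f"year {year:2d}: tech = {tech:7.1f}, law = {law:5.1f}, gap = {tech - law:7.1f}")
    tech *= 2 ** 0.5   # exponential growth: capability doubles every two years
    law += 0.5         # incremental growth: a fixed improvement per year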


The end result of the “law of disruption” and a world relentlessly governed by the ever-accelerating “pacing problem” is that it will be harder than ever to effectively control emerging technologies using traditional legal and regulatory systems and mechanisms. And this makes it even more likely that the related threats of global innovation arbitrage and various forms of technological civil disobedience will become more regular fixtures in debates about many emerging technologies.

New Governance Models

How one reacts to these new realities will depend upon one’s philosophical disposition toward innovative activities more generally.

Consider first those adhering to a more “precautionary principle” mindset, which I have defined in my recent book as those who believe “that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.”[20]

Needless to say, the precautionary principle crowd will be dismayed by these new trends and will perhaps even decry them as “lawlessness.” Some of these folks seem to be in denial about these new realities and pretend that nothing much has changed. Yet, I have found that most precautionary-principle-oriented advocates, and even many regulatory agencies themselves, tend to acknowledge these new realities. But they remain very uncertain about how best to respond to them, often just suggesting that we’ll all need to try harder to impose new and better regulations on a more expedited or streamlined basis.

Of course, those of us who generally embrace the alternative policy vision for technological governance—“permissionless innovation”—are going to be more accepting of the new technological realities I have described, and we will perhaps even work to defend and encourage them. But while I count myself among this crowd, we cannot ignore the fact that many serious challenges will arise when innovation outpaces law or can easily evade it.

There is some middle ground here, although it is very messy middle ground.

The era of technocratic, top-down, one-size-fits-all regulatory regimes is fading, or at least being severely strained. We will instead need to craft policies going forward that are bottom-up, flexible, adaptive, and evolutionary in character.

What that means in practice is that a lot more “soft law” and informal governance mechanisms will become the new norm. I wrote about this new policy environment in my recent essay, “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” as well as this lengthy review of Wendell Wallach’s latest book about technology ethics. Along with Gary Marchant of the Arizona State University law school, Wallach recently published an excellent book chapter on “Governing the Governance of Emerging Technologies,” which discusses soft law mechanisms including “codes of conduct, statements of principles, partnership programs, voluntary programs and standards, certifications programs and private industry initiatives.”[21]

Their chapter appears in an important collection of essays that Gary Marchant edited with Kenneth W. Abbott and Braden Allenby entitled, Innovative Governance Models for Emerging Technologies.


What is interesting about the chapters in that book is that a seemingly widespread consensus now exists among experts in this field that some combination of these soft law mechanisms is likely to become the primary mode of technological governance for the indefinite future. This is because, as Marc A. Saner points out in a different chapter of that book, “the control paradigm is too limited to address all the issues that arise in the context of emerging technologies.”[22] By the control paradigm, he generally means traditional administrative regulatory agencies and processes. He and the other contributors to the book all seem to agree that the control paradigm “has its limits when diffusion, pacing and ethical issues associated with emerging technologies become significant, as is often the case.”[23]

And so the traditional command-and-control ways will gradually give way to a new paradigm for emerging technology governance. In fact, as I noted in my recent essay on driverless cars, we see this happening quite a bit already. “Multistakeholder processes” are already all the rage in the world of emerging technologies and their governance. In recent years, we have seen the White House and various agencies (such as the FTC, NTIA, FDA, and others) craft multistakeholder agreements or best practice guidance documents for technologies as far ranging as:

  • Drones & privacy
  • Sharing economy
  • Internet of Things
  • Driverless cars
  • Big data
  • Artificial intelligence
  • Cross-device tracking
  • Native advertising
  • Online data collection
  • Mobile app transparency and security
  • Mobile apps for kids
  • Mobile medical apps
  • Online health advertising
  • 3D printing
  • Facial recognition

And that list is not comprehensive. I know I am missing other multistakeholder efforts, best practices, or industry guidance documents that have been crafted in recent years.

Of course, many challenging issues need to be sorted out here, most notably: how transparent and accountable will these soft law systems be in practice? How will they be enforced? And what will happen to all those existing laws, regs, and agencies that will continue to exist? More generally, it is worth asking whether we can more closely study these various multistakeholder arrangements and soft law governance mechanisms and determine if there are certain principles or strategies that could be applicable across a wide class of technologies and sectors. In other words, can we do a better job of “formalizing the informal” without falling right back into the trap of trying to impose rules in a rigid, top-down, one-size-fits-all fashion?

Conclusion

Those are just a few of the hard questions we will need to consider going forward. For now, however, I think it is safe to conclude that we will no longer see much “law” being made for emerging technologies, at least not in the traditional sense of the term. Thanks to the new technological realities I have described here—and the relentless reality of the “pacing problem” more generally—I believe we are witnessing a wide-ranging and quite profound transformation in how technology is governed in our modern world. And I believe this movement away from traditional “hard law” and toward “soft law” governance mechanisms is likely to accelerate due to the increasing prevalence of innovation arbitrage, technological civil disobedience, and spontaneous private deregulation.

The ramifications of this transformation will be studied by philosophers, legal theorists, and political scientists for many decades to come. But we are still in the early years of this momentous transformation in technological governance and we will continue to struggle to figure out how to make it all work, as messy as it all may be.


[ Note: This essay is condensed from a manuscript I have been working on about The Rise of Technological Civil Disobedience. I’m not sure I will ever get around to finishing it, however, so I thought I would at least post this piece for now. In a subsequent essay, which is also part of that draft manuscript, I hope to discuss how this process might play out for technologies that are “born free” versus those that are “born in captivity.” That is, how likely is it that the trends I discuss here will take hold for technologies that have no pre-existing laws or agencies, while other technologies that are born into a regulatory environment are potentially doomed to be pigeonholed into those old regulatory regimes? What are the chances that the latter technologies can escape captivity and gain the freedom the other technologies already enjoy? How might technology-enabled “spontaneous private deregulation” be accelerated for those sectors? Is that always desirable? Again, I will leave these questions for another day. Scholars and students who are interested in these topics should feel free to contact me to discuss them, as well as potential paper ideas. Regardless of how you feel about these trends, these issues are ripe for intellectual exploration.]

[1]     Benjamin Edelman and Damien Geradin, “Spontaneous Deregulation,” Harvard Business Review, April 2016, https://hbr.org/2016/04/spontaneous-deregulation.

[2]     Megan Geuss, “After mothballing Comma One, George Hotz releases free autonomous car software,” Ars Technica, November 30, 2016, http://arstechnica.com/cars/2016/11/after-mothballing-comma-one-george-hotz-releases-free-autonomous-car-software.

[3]     See: “NHTSA Scared This Self-Driving Entrepreneur Off the Road,” Bloomberg Technology, October 28, 2016, https://www.bloomberg.com/news/articles/2016-10-28/nhtsa-scared-this-self-driving-entrepreneur-off-the-road; Sean O’Kane, “George Hotz cancels his self-driving car project after NHTSA expresses concern,” The Verge, October 28, 2016, http://www.theverge.com/2016/10/28/13453344/comma-ai-self-driving-car-comma-one-kit-canceled; Brad Templeton, “Comma.ai cancels comma-one add-on box after threats from NHTSA,” Robohub, October 31, 2016, http://robohub.org/comma-ai-cancels-comma-one-add-on-box-after-threats-from-nhtsa.

[4]     Mark Harris, “How Otto Defied Nevada and Scored a $680 Million Payout from Uber,” Backchannel, November 28, 2016, https://backchannel.com/how-otto-defied-nevada-and-scored-a-680-million-payout-from-uber-496aa07f5ba2#.9rmtb29bl.

[5]     Larry E. Hall, “Otto Self-Driving Truck Tests in Ohio; Violated Nevada Regulations,” Hybrid Cars, November 29, 2016, http://www.hybridcars.com/otto-self-driving-truck-tests-in-ohio-violated-nevada-regulations.

[6]     Kara Driscoll, “Ohio to create ‘smart’ road for driverless trucks,” Dayton Daily News, November 30, 2016, http://www.daytondailynews.com/business/ohio-create-smart-road-for-driverless-trucks/25qC7uYjz9rE96q6YFVUUK.

[7]     Brad Templeton, “Comma.ai cancels comma-one add-on box after threats from NHTSA,” Robohub, October 31, 2016, http://robohub.org/comma-ai-cancels-comma-one-add-on-box-after-threats-from-nhtsa/.

[8]     Adam Thierer and Caleb Watney, “Comment on the Federal Automated Vehicles Policy,” November 22, 2016, https://www.researchgate.net/publication/311065194_Comment_on_the_Federal_Automated_Vehicles_Policy.

[9]     National Highway Traffic Safety Administration (NHTSA), Federal Automated Vehicles Policy, September 2016.

[10]   Adrienne LaFrance, “Self-Driving Cars Could Save 300,000 Lives per Decade in America,” Atlantic, September 29, 2015.

[11]   Adam Thierer and Caleb Watney, “Comment on the Federal Automated Vehicles Policy,” November 22, 2016, https://www.researchgate.net/publication/311065194_Comment_on_the_Federal_Automated_Vehicles_Policy.

[12]   Templeton.

[13]   Sean O’Kane and Lauren Goode, “George Hotz is giving away the code behind his self-driving car project,” The Verge, November 30, 2016, http://www.theverge.com/2016/11/30/13779336/comma-ai-autopilot-canceled-autonomous-car-software-free.

[14]   NHTSA, Federal Automated Vehicles Policy, 76.

[15]   Adam Thierer, Jerry Brito, and Eli Dourado, “Technology Policy: A Look Ahead,” Technology Liberation Front, May 12, 2014, http://techliberation.com/2014/05/12/technology-policy-a-look-ahead.

[16]   Marc Andreessen, “Why Software Is Eating the World,” Wall Street Journal, August 20, 2011, http://www.wsj.com/articles/SB10001424053111903480904576512250915629460.

[17]   Wendell Wallach, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control (New York: Basic Books, 2015), 60.

[18]   Larry Downes, The Laws of Disruption: Harnessing the New Forces That Govern Life and Business in the Digital Age 2 (2009).

[19]   Ibid.

[20]   Thierer, Permissionless Innovation, at 1.

[21]   Gary E. Marchant and Wendell Wallach, “Governing the Governance of Emerging Technologies,” in Gary E. Marchant, Kenneth W. Abbott & Braden Allenby (eds.), Innovative Governance Models for Emerging Technologies (Cheltenham, UK: Edward Elgar, 2013), 136.

[22]   Marc A. Saner, “The Role of Adaptation in the Governance of Emerging Technologies,” in Gary E. Marchant, Kenneth W. Abbott & Braden Allenby (eds.), Innovative Governance Models for Emerging Technologies (Cheltenham, UK: Edward Elgar, 2013), 106.

[23]   Ibid., at 94.

]]>
https://techliberation.com/2016/12/05/innovation-arbitrage-technological-civil-disobedience-spontaneous-deregulation/feed/ 2 76096
Global Innovation Arbitrage: Drone Delivery Edition https://techliberation.com/2016/08/25/global-innovation-arbitrage-drone-delivery-edition/ https://techliberation.com/2016/08/25/global-innovation-arbitrage-drone-delivery-edition/#respond Thu, 25 Aug 2016 15:46:01 +0000 https://techliberation.com/?p=76076

Just three days ago I penned another installment in my ongoing series about the growing phenomenon of “global innovation arbitrage” — or the idea that “innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.” And now it’s already time for another entry in the series!

My previous column focused on driverless car innovation moving overseas, and earlier installments discussed genetic testing, drones, and the sharing economy. Now another drone-related example has come to my attention, this time from New Zealand. According to the New Zealand Herald:

Aerial pizza delivery may sound futuristic but Domino’s has been given the green light to test New Zealand pizza delivery via drones. The fast food chain has partnered with drone business Flirtey to launch the first commercial drone delivery service in the world, starting later this year.

Importantly, according to the story, “If it is successful the company plans to extend the delivery method to six other markets – Australia, Belgium, France, The Netherlands, Japan and Germany.” That’s right, America is not on the list. In other words, a popular American pizza delivery chain is looking overseas to find the freedom to experiment with new delivery methods. And the reason they are doing so is because of the seemingly endless bureaucratic foot-dragging by federal regulators at the FAA.

Some may scoff and say, ‘Who cares? It’s just pizza!’ Well, even if you don’t care about innovation in the field of food delivery, how do you feel about getting medicines or vital supplies delivered on a more timely and efficient basis in the future? What may start as a seemingly mundane or uninteresting experiment with pizza delivery through the sky could quickly expand to include a wide range of far more important things. But it will never happen unless you give innovators a little breathing room–i.e., “permissionless innovation”–to try new and different ways of doing things.

Incidentally, Flirtey, the drone delivery company that Domino’s partnered with in New Zealand, is also an American-based company. On the company’s website, the firm notes that: “Drones can be operated commercially in a growing number of countries. We’re in discussions with regulators all around the world, and we’re helping to shape the regulations and systems that will make drone delivery the most effective, personal and frictionless delivery method in the market.”

That’s just another indication of the reality that global innovation arbitrage is at work today. If the U.S. puts its head in the sand and lets bureaucrats continue to slow the pace of progress, America’s next generation of great innovators will increasingly look offshore in search of patches of freedom across the planet where they can try out their exciting new products and services.

BTW, I wrote all about this in Chapter 3 of my Permissionless Innovation book. And here’s some additional Mercatus research on the topic.


Additional Reading

]]>
https://techliberation.com/2016/08/25/global-innovation-arbitrage-drone-delivery-edition/feed/ 0 76076
Global Innovation Arbitrage: Driverless Cars Edition https://techliberation.com/2016/08/22/global-innovation-arbitrage-driverless-cars-edition/ https://techliberation.com/2016/08/22/global-innovation-arbitrage-driverless-cars-edition/#respond Mon, 22 Aug 2016 19:34:42 +0000 https://techliberation.com/?p=76074

In previous essays here I have discussed the rise of “global innovation arbitrage” for genetic testing, drones, and the sharing economy. I argued that: “Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.” I’ve been working on a longer paper about this with Samuel Hammond, and in doing research on the issue, we keep finding interesting examples of this phenomenon.

The latest example comes from a terrific new essay (“Humans: Unsafe at Any Speed“) about driverless car technology by Wall Street Journal technology columnist L. Gordon Crovitz. He cites some important recent efforts by Ford and Google and he notes that they and other innovators will need to be given more flexible regulatory treatment if we want these life-saving technologies on the road as soon as possible. “The prospect of mass-producing cars without steering wheels or pedals means U.S. regulators will either allow these innovations on American roads or cede to Europe and Asia the testing grounds for self-driving technologies,” Crovitz observes. “By investing in autonomous vehicles, Ford and Google are presuming regulators will have to allow the new technologies, which are developing faster even than optimists imagined when Google started working on self-driving cars in 2009.” 

Alas, regulators at the National Highway Traffic Safety Administration are more likely to continue to embrace a heavy-handed and highly precautionary regulatory approach instead of the sort of “permissionless innovation” approach to policy that could help make driverless cars a reality sooner rather than later. If regulators continue to take that path, it could influence the competitive standing of the U.S. in the race for global supremacy in this arena.

Crovitz cites a recent essay by innovation consultant Chunka Mui on this point: “The appropriate first-mover unit of innovation is not the car, or even the car company. It is the nation.” Mui uses the example of Singapore, where “the lead government agency [is] working to enhance Singapore’s position as a global business center” and has been inviting self-driving car developers to work with the island nation to avoid what Mui describes as “the tangled web of competition, policy fights, regulatory hurdles and entrenched interests governing the pace of driverless-car development and deployment in the U.S.”

That’s global innovation arbitrage in a nutshell and it would be a real shame if America was on the losing end of this competition. To make sure we’re not, Crovitz notes that U.S. policymakers need to avoid overly-precautionary “pre-market-approval steps” that “would give bureaucrats the power to pick which technologies can develop and which are banned. If that happens,” he notes, “the winner in the race to the next revolution in transportation is likelier to be Singapore than Detroit or Silicon Valley.”

Too true. Let’s hope that policymakers are listening before it’s too late.


 

Additional Reading:

]]>
https://techliberation.com/2016/08/22/global-innovation-arbitrage-driverless-cars-edition/feed/ 0 76074
Wendell Wallach on the Challenge of Engineering Better Technology Ethics https://techliberation.com/2016/04/20/wendell-wallach-on-the-challenge-of-engineering-better-technology-ethics/ https://techliberation.com/2016/04/20/wendell-wallach-on-the-challenge-of-engineering-better-technology-ethics/#respond Wed, 20 Apr 2016 19:08:57 +0000 https://techliberation.com/?p=76026

On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.

Wallach’s latest book is entitled, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. And, as I’ve noted here recently, the greatly expanded second edition of my latest book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, has just been released.

Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!—A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.

Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are some moments in the book when he resorts to some hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.

Many Questions, Few Clear Answers

Wallach does a particularly good job framing the major questions about emerging technologies and their effect on society. “Navigating the future of technological possibilities is a hazardous venture,” he observes. “It begins with learning to ask the right questions—questions that reveal the pitfalls of inaction, and more importantly, the passageways available for plotting a course to a safe harbor.” (p. 7) Wallach then embarks on a 260+ page inquiry that bombards the reader with an astonishing litany of questions about the wisdom of various forms of technological innovation—both large and small. While I wasn’t about to start an exact count, I would say that the number of questions Wallach poses in the book runs well into the hundreds. In fact, many paragraphs of the book are nothing but an endless string of questions.

Thus, if there is a primary weakness with A Dangerous Master, it’s that Wallach spends so much time formulating such a long list of smart and nuanced questions that some readers may come away disappointed when they do not find equally satisfying answers. On the other hand, the lack of clear answers is also completely understandable because, as Wallach notes, there really are no simple answers to most of these questions.

Just Slow Down!

Moving on to substance, let me make clear where Wallach and I generally see eye-to-eye and where we part ways.

Generally speaking, we agree about the need to come up with better “soft governance” systems for emerging technologies, which might include multistakeholder processes, developer codes of conduct, sectoral self-regulation, sensible liability rules, and so on. (More on those strategies in a moment.)

But while we both believe it is wise to consider how we might “bake-in” better ethics and norms into the process of technological development, Wallach seems much more inclined than I am to expect that we will be able to pre-ordain (or potentially require?) that all this happens before much of this experimentation and innovation actually moves forward. Wallach opens by asking:

Determining when to bow to the judgment of experts and whether to intervene in the deployment of a new technology is certainly not easy. How can government leaders or informed citizens effectively discern which fields of research are truly promising and which pose serious risks? Do we have the intelligence and means to mitigate the serious risks that can be anticipated? How should we prepare for unanticipated risks? (p. 6)

Again, many good questions here! But this really gets to the primary difference between Wallach’s preferred approach and my own: I tend to believe that many of these things can only be worked out through ongoing trial and error, the constant reformulation of the various norms that govern the process of innovation, and the development of sensible ex post solutions to some of the most difficult problems posed by turbulent technological change.

By contrast, Wallach’s general attitude toward technological evolution is probably best summarized by the phrases: “Slow down!” and, “Let’s have a conversation about it first!” As he puts it in his own words: “Slowing down the accelerating adoption of technology should be done as a responsible means to ensure basic human safety and to support broadly shared values.” (p. 13)

But I tend to believe that it’s not always possible to preemptively determine which innovations to slow down, or even how to determine what those “shared values” are that will help us make this determination. More importantly, I worry that there are very serious potential risks and unintended consequences associated with slowing down many forms of technological innovation, which could improve human welfare in important ways. There can be no prosperity, after all, without a certain degree of risk-taking and disruption.

Getting Out Ahead of the Pacing Problem

It’s not that Wallach is completely hostile to new forms of technological innovation or blind to the many ways those innovations might improve our lives. To the contrary, he does a nice job throughout the book highlighting the many benefits associated with various new technologies, and he is at least willing to acknowledge that there can be many downsides associated with efforts aimed at limiting research and experimentation with new technological capabilities.

Yet, what concerns Wallach most is the much-discussed issue from the field of the philosophy of technology, the so-called “pacing problem.” Wallach concisely defines the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” (p. 251) “There has always been a pacing problem,” he notes, but he is concerned that technological innovation—especially highly disruptive and potentially uncontrollable forms of innovation—is now accelerating at an absolutely unprecedented pace.

(Just as an aside for all the philosophy nerds out there…  Such a rigid belief in the “pacing problem” represents a techno-deterministic viewpoint that is, ironically, sometimes shared by technological skeptics like Wallach as well as technological optimists like Larry Downes and even many in the middle of this debate, like Vivek Wadhwa. See, for example, The Laws of Disruption by Downes and “Laws and Ethics Can’t Keep Pace with Technology” by Wadhwa. Although these scholars approach technology ethics and politics quite differently, they all seem to believe that the pace of modern technological change is so relentless as to almost be an unstoppable force of nature. I guess the moral of the story is that, to some extent, we’re all technological determinists now!)

Despite his repeated assertions that modern technologies are accelerating at such a potentially uncontrollable pace, Wallach nonetheless hopes we can achieve some semblance of control over emerging technologies before they reach a critical “inflection point.” In the study of history and science, an inflection point generally represents a moment when a situation or trend suddenly changes in a significant way and things begin moving rapidly in a new direction. These inflection points can sometimes develop quite abruptly, ushering in major changes by creating new social, economic, or political paradigms. As it relates to technology in particular, inflection points can refer to the moment when a particular technology achieves critical mass in terms of adoption or, more generally, to the time when that technology begins to profoundly transform the way individuals and institutions act.

Another related concept that Wallach discusses is the so-called “Collingridge dilemma,” which refers to the notion that it is difficult to put the genie back in the bottle once a given technology has reached a critical mass of public adoption or acceptance. The concept is named after David Collingridge, who wrote about this in his 1980 book, The Social Control of Technology. “The social consequences of a technology cannot be predicted early in the life of the technology,” Collingridge argued. “By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult.”

On “Having a Discussion” & Coming Up with “a Broad Plan”

These related concepts of inflection points and the Collingridge dilemma constitute the operational baseline of Wallach’s worldview. “In weighing speedy development against long-term risks, speedy development wins,” he worries. “This is particularly true when the risks are uncertain and the perceived benefits great.” (p. 85)

Consequently, throughout his book, Wallach pleads with us to take what I will call Technological Time Outs. He says we need to pause at times so that we can have “a full public discussion” (p. 13) and make sure there is a “broad plan in place to manage our deployment of new technologies” (p. 19) to make sure that innovation happens only at “a humanly manageable pace” (p. 261) “to fortify the safety of people affected by unpredictable disruptions.” (p. 262) Wallach’s call for Technological Time Outs is rooted in his belief that “the accelerating pace [of modern technological innovation] undermines the quality of each of our lives.” (p. 263)

That is Wallach’s weakest assertion in the book, and he doesn’t really offer much evidence to prove that the velocity of modern technological change is hurting us rather than helping us, as many of us believe. Rather, he treats it as a widely accepted truism that necessitates some sort of collective effort to slow things down if the proverbial genie is about to exit the bottle, or to make sure those genies don’t get out of their bottles without a lot of preemptive planning regarding how they are to be released into the world. In the following passage on pg. 72, Wallach very succinctly summarizes the approach recommended throughout A Dangerous Master:

this book will champion the need for more upstream governance: more control over the way that potentially harmful technologies are developed or introduced into the larger society. Upstream management is certainly better than introducing regulations downstream, after a technology is deeply entrenched or something major has already gone wrong. Yet, even when we can assess risks, there remain difficulties in recognizing when or determining how much control should be introduced. When does being precautionary make sense, and when is precaution an over-reaction to the risks? (p. 72)

Those who have read my Permissionless Innovation book will recall that I open by framing innovation policy debates in almost exactly the same way as Wallach suggests in that last line above. I argue in the first lines of my book that:

The central fault line in innovation policy debates today can be thought of as ‘the permission question.’  The permission question asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions and risk-taking, more generally.  Two conflicting attitudes are evident. One disposition is known as the ‘precautionary principle.’ Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions. The other vision can be labeled ‘permissionless innovation.’ It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.

So, by contrasting these passages, you can see what I am setting up here is a clash of visions between what appears to be Wallach’s precautionary principle-based approach versus my own permissionless innovation-focused worldview.

How Much Formal Precaution?

But that would be a tad too simplistic because, just a few paragraphs after making the statement quoted above about “upstream management” being superior to ex post solutions formulated “after a technology is deeply entrenched,” Wallach begins slowly backing away from an overly rigid approach to precautionary principle-based governance of technological processes and systems.

He admits, for example, that “precautionary measures in the form of regulations and governmental oversight can slow the development of research whose overall societal impact will be beneficial” (p. 26), and that such measures can “be costly” and “slow innovation.” For countries, this can have real consequences, Wallach admits, because “Countries with more stringent precautionary policies are at a competitive disadvantage to being the first to introduce a new tool or process.” (p. 74)

So, he’s willing to admit that what we might call a hard precautionary principle usually won’t be sensible or effective in practice, but he is far more open to soft precaution. But this is where real problems begin to develop with Wallach’s approach, and it presents us with a chance to turn the tables on him a bit and begin posing some serious questions about his vision for governing technology.

Much of what follows below is my miscellaneous rambling about the current state of the intellectual dialogue about tech ethics and technological control efforts. I have discussed these issues at greater length in my new book as well as in a series of essays here in past years, most notably: “On the Line between Technology Ethics vs. Technology Policy”; “What Does It Mean to “Have a Conversation” about a New Technology?”; and “Making Sure the “Trolley Problem” Doesn’t Derail Life-Saving Innovation.”

As I’ve argued in those and other essays, my biggest problem with modern technological criticism is that specifics are in scandalously short supply in this field! Indeed, I often find the lack of details in this arena to be utterly exasperating. Most modern technological criticism follows a simple formula:

TECHNOLOGY –>> POTENTIAL PROBLEMS –>> DO SOMETHING!

But almost all the details come in the discussion about the nature of the technology in question and the many apparent problems associated with it. Far, far less thought goes into the “DO SOMETHING!” part of the critics’ work. One reason for that is probably self-evident: There are no easy solutions. Wallach admits as much at many junctures throughout the book. But that doesn’t obviate the need for critics to give us a more concrete blueprint for identifying and then potentially rectifying the supposed problems.

Of course, the other reason many critics are short on specifics is that what they really mean when they quip that we need to “have a conversation” about a new disruptive technology is that we need to have a conversation about stopping that technology.

Where Shall We Draw the Line between Hard and Soft Law?

But this is what I found most peculiar about Wallach’s book: He never really gives us a good standard by which to determine when we should look to hard governance (traditional top-down regulation) versus soft governance (more informal, bottom-up and non-regulatory approaches).

On one hand, he very much wants society to exercise great restraint and precaution when it comes to many of the technologies he and others worry about today. Again, he’s particularly concerned about the potential runaway development and use of drones, genetic editing, nanotech, robotics, and artificial intelligence. For at least one class of robotics—autonomous military robots—Wallach does call for immediate policy action in the form of an Executive Order to ban “killer” autonomous systems. (Incidentally, there’s also a major effort underway called the “Campaign to Stop Killer Robots” that aims to make such a ban part of international law through a multinational treaty.)

But Wallach also acknowledges the many trade-offs associated with efforts to preemptively impose controls on robotics and other technologies. Perhaps for that reason, Wallach doesn’t develop a clear test for when the Precautionary Principle should be applied to new forms of innovation.

Clearly there are times when it is appropriate, although I believe it is only in an extremely narrow subset of cases. In the 2nd edition of my Permissionless Innovation book, I tried to offer a rough framework for when formal precautionary regulation (i.e., highly restrictive policy defaults such as operational restrictions, licensing requirements, research limitations, or even formal bans) might be necessary. I do not want to interrupt the flow of this review of Wallach’s book too much, so I have decided to just cut-and-paste that portion of Chapter 3 of my book (“When Does Precaution Make Sense?”) down below as an appendix to this essay.

The key takeaway of that passage from my book is that all of us who study innovation policy and the philosophy of technology—Wallach, myself, the whole darn movement—have done a remarkably poor job being specific about precisely when formal policy precaution is warranted. What is the test? All too often, we get lazy and apply what we might call an “I-Know-It-When-I-See-It” standard. Consider the possession of bazookas, tanks, and uranium. Almost all of us would agree that citizens should not be allowed to possess or use such things. Why? Well, it seems obvious, right? They just shouldn’t! But what is the exact standard we use to make that determination?

In coming years, I plan on spending a lot more time articulating a better test by which Precautionary Principle-based policies could be reasonably applied. Those who know me may be taken aback by what I just said. After all, I’ve spent many years explaining why Precautionary Principle-based thinking threatens human prosperity and should be rejected in the vast majority of cases. But that doesn’t excuse the lack of a serious and detailed exploration of the exact standard by which we determine when we should impose some limits on technological innovation.

Generally speaking, while I strongly believe that “permissionless innovation” should remain the policy default for most technologies, there certainly exist some scenarios where the threat of harm associated with a new innovation might be highly probable, tangible, immediate, irreversible, and catastrophic in nature. If so, that could qualify it for at least a light version of the Precautionary Principle. In a future paper or book chapter I’m just now starting to research, I hope to more fully develop those qualifiers and formulate a more robust test around them.
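
To make the skeleton of such a test concrete, here is a minimal illustrative sketch in Python. It is purely a hypothetical construction (the qualifier names and the rule that all five criteria must hold are my own placeholders for illustration, not a formal proposal), but it shows why treating the criteria conjunctively confines formal precaution to a narrow class of worst-case scenarios:

    # Hypothetical sketch of a threshold test for formal precautionary regulation.
    # The five qualifiers mirror the criteria discussed above; requiring that all
    # of them hold reflects the idea that hard precaution should apply only to a
    # narrow subset of worst-case scenarios.
    from dataclasses import dataclass

    @dataclass
    class HarmAssessment:
        highly_probable: bool   # harm is highly probable, not merely speculative
        tangible: bool          # physical harm, not merely psychic or intangible
        immediate: bool         # clear near-term timeline, not distant or unclear
        irreversible: bool      # cannot be undone or remedied after the fact
        catastrophic: bool      # severe and widespread, not mundane or trivial

    def warrants_precaution(h: HarmAssessment) -> bool:
        """Return True only when every qualifier holds; otherwise the default of
        permissionless innovation (ex post remedies, soft law) applies."""
        return all([h.highly_probable, h.tangible, h.immediate,
                    h.irreversible, h.catastrophic])

    # Example: a speculative, distant, reversible harm fails the test.
    assessment = HarmAssessment(highly_probable=False, tangible=True,
                                immediate=False, irreversible=False,
                                catastrophic=True)
    print(warrants_precaution(assessment))  # prints: False

Real-world judgments are, of course, matters of degree rather than booleans, but even this toy version makes the burden of proof visible: any single failed qualifier returns the technology to the permissionless default.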

I would have very much liked to see Wallach articulate and defend a test of his own for when formal precaution would make sense. And, by extension, when should we default to soft precaution, or to soft law and informal governance mechanisms for emerging technologies?

We turn to that issue next.

Toward Soft Governance & the Engineering of Better Technological Ethics

Even though Wallach doesn’t provide us with a test for determining when precaution makes sense or when we should instead default to soft governance, he does a much better job explaining the various models of soft law or informal governance that might help us deal with the potential negative ramifications of highly disruptive forms of technological change.

What Wallach proposes, in essence, is that we bake a dose of precaution directly into the innovation process through a wide variety of informal governance/oversight mechanisms. “By embedding shared values in the very design of new tools and techniques, engineers improve the prospect of a positive outcome,” he claims. “The upstream embedding of shared values during the design process can ease the need for major course adjustments when it’s often too late.” (p. 261)

Wallach’s favored instrument of soft governance is what he refers to as “Governance Coordinating Committees” (GCCs). These Committees would coordinate “the separate initiatives by the various government agencies, advocacy groups, and representatives of industry” who would serve as “issue managers for the comprehensive oversight of each field of research.” (p. 250) He elaborates and details the function of GCCs as follows:

These committees, led by accomplished elders who have already achieved wide respect, are meant to work together with all the interested stakeholders to monitor technological development and formulate solutions to perceived problems. Rather than overlap with or function as a regulatory body, the committee would work together with existing institutions. (p. 250-51)

Wallach discussed the GCC idea in much greater detail in a 2013 book chapter he penned with Gary E. Marchant for a collected volume of essays on Innovative Governance Models for Emerging Technologies. (I highly recommend you pick up that book if you can afford it! It contains many terrific essays on these issues.) In their chapter, Marchant and Wallach specify some of the soft law mechanisms we might use to instill a bit of precaution preemptively. These mechanisms include: “codes of conduct, statements of principles, partnership programs, voluntary programs and standards, certification programs and private industry initiatives.”

If done properly, GCCs could provide exactly the sort of wise counsel and smart recommendations that Wallach desires. In my book and many law review articles on various disruptive technologies, I have endorsed many of the ideas and strategies Wallach identifies. I’ve also stressed the importance of many other mechanisms, such as education and empowerment-based strategies that could help the public learn to cope with new innovations or use them appropriately. In addition, I’ve highlighted the many flexible, adaptive ex post remedies that can help when things go wrong. Those mechanisms include common law remedies such as product defects law, various torts, contract law, property law, and even class action lawsuits. Finally, I have written extensively about the very active role played by the Federal Trade Commission (FTC) and other consumer protection agencies, which have broad discretion to police “unfair and deceptive practices” by innovators.

Moreover, we already have a quasi-GCC model developing today with the so-called “multistakeholder governance” model that is often used in both informal and formal ways to handle many emerging technology policy issues.  The Department of Commerce (the National Telecommunications and Information Administration in particular) and the FTC have already developed many industry codes of conduct and best practices for technologies such as biometrics, big data, the Internet of Things, online advertising, and much more. Those agencies and others (such as the FDA and FAA) are continuing to investigate other codes or guidelines for things like advanced medical devices and drones, respectively. Meanwhile, I’ve heard other policymakers and academics float the idea of “digital ombudsmen,” “data ethicists,” and “private IRBs” (institutional review boards) as other potential soft law solutions that technology companies might consider. Perhaps going forward, many tech firms will have Chief Ethical Officers just as many of them today have Chief Privacy Officers or Chief Security Officers.

In other words, there’s already a lot of “soft law” activity going on in this space. And I haven’t even begun to inventory the many other bodies and groups in each sector that have already set forth their own industry self-regulatory codes; they exist in almost every field that Wallach worries about.

So, I’m not sure how much his GCC idea will add to this existing mix, but I would not be opposed to them playing the sort of coordinating “issue manager” role he describes. But I still have many questions about GCCs, including:

  • How many of them are needed, and how will we know which one is the definitive GCC for each sector or technology?
  • If they are overly formal in character and dominated by the most vociferous opponents of any particular technology, a real danger exists that a GCC could end up granting a small cabal a “heckler’s veto” over particular forms of innovation.
  • Alternatively, the possibility of “regulatory capture” could be a problem for some GCCs if incumbent companies come to dominate their membership.
  • Even if everything went fairly smoothly and the GCCs produced balanced reports and recommendations, future developers might wonder if and why they are to be bound by older guidelines.
  • And if those future developers choose not to play by the same set of guidelines, what’s the penalty for non-compliance?
  • And how are such guidelines enforced in a world where what I’ve called “global innovation arbitrage” is an increasing reality?

Challenging Questions for Both Hard and Soft Law

To summarize, whether we are speaking of “hard” or “soft” law approaches to technological governance, I am just not nearly as optimistic as Wallach seems to be that we will be able to find consensus on these three things:

(1) what constitutes “harm” in many of these circumstances;

(2) which “shared values” should prevail when “society” debates the shaping of ethics or guiding norms for emerging technologies but has highly contradictory opinions about those values (consider online privacy as a good example, where many people enjoy hyper-sharing while others demand hyper-privacy); and,

(3) that we can create a legitimate “governing body” (or bodies) that will be responsible for formulating these guidelines in a fair way without completely derailing the benefits of innovation in new fields, while also remaining relevant for very long.

Nonetheless, as he and others have suggested, the benefit of adopting a soft law/informal governance approach to these issues is that it at least seeks to address these questions in a more flexible and adaptive fashion. As I noted in my book, traditional regulatory systems “tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may not ever come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things.” (Permissionless Innovation, p. 120)

So, despite the questions I have raised here, I welcome the more flexible soft law approach that Wallach sets forth in his book. I think it represents a far more constructive way forward when compared to the opposite “top-down” or “command-and-control” regulatory systems of the past. But I very much want to make sure that even these new and more flexible soft law approaches leave plenty of breathing room for ongoing trial-and-error experimentation with new technologies and systems.

Conclusion

In closing, I want to reiterate that not only did I appreciate the excellent questions raised by Wendell Wallach in A Dangerous Master, but I take them very seriously. When I sat down to revise and expand my Permissionless Innovation book last year, I decided to include this warning from Wallach in my revised preface: “The promoters of new technologies need to speak directly to the disquiet over the trajectory of emerging fields of research. They should not ignore, avoid, or superficially dampen criticism to protect scientific research.” (p. 28–9)

As I noted, in response to Wallach: “I take this charge seriously, as should others who herald the benefits of permissionless innovation as the optimal default for technology policy. We must be willing to take on the hard questions raised by critics and then also offer constructive strategies for dealing with a world of turbulent technological change.”

Serious questions deserve serious answers. Of course, sometimes those posing the questions fail to provide many answers of their own! Perhaps it is because they believe the questions answer themselves. Other times, it’s because they are willing to admit that easy answers to these questions typically prove quite elusive. In Wallach’s case, I believe it’s more the latter.

To wrap up, I’ll just reiterate that both Wallach and I share a common desire to find solutions to the hard questions about technological innovation. But the crucial question that probably separates his worldview from my own is this: Whether we are talking about hard or soft governance, how much faith should we place in preemptive planning vs. ongoing trial-and-error experimentation to solve technological challenges? Wallach is more inclined to believe we can divine these things with the sagacious foresight of “accomplished elders” and technocratic “issue managers,” who will help us slow things down until we figure out how to properly ease a new technology into society (if at all). But I believe that the only way we will find many of the answers we are searching for is by allowing still more experimentation with the very technologies whose development he and others seek to control. We humans are outstanding problem-solvers and have an uncanny ability, among all mammals, to adapt to changing circumstances. We roll with the punches, learn from them, and become more resilient in the process. As I noted in my 2014 essay, “Muddling Through: How We Learn to Cope with Technological Change”:

we modern pragmatic optimists must continuously point to the unappreciated but unambiguous benefits of technological innovation and dynamic change. But we should also continue to remind the skeptics of the amazing adaptability of the human species in the face of adversity. [. . .] Humans have consistently responded to technological change in creative, and sometimes completely unexpected ways. There’s no reason to think we can’t get through modern technological disruptions using similar coping and adaptation strategies.

Will the technologies that Wallach fears bring about a “techstorm” that overwhelms our culture, our economy, and even our very humanity? It’s certainly possible, and we should continue to seriously discuss the issues that he and other skeptics raise about our expanding technological capabilities and the potential for many of them to do great harm. Because some of them truly could.

But it is equally plausible—in fact, some of us would say, highly probable—that instead of these technologies overwhelming us, we will learn how to bend these new capabilities to our will and make them work for our collective benefit. Instead of technology becoming “a dangerous master,” we will make it our helpful servant, just as we have so many times before.


APPENDIX: When Does Precaution Make Sense?

[excerpt from chapter 3 of Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Footnotes omitted. See book for all references.]

But aren’t there times when a certain degree of precautionary policymaking makes good sense? Indeed, there are, and it is important to not dismiss every argument in favor of precautionary principle–based policymaking, even though it should not be the default policy rule in debates over technological innovation.

The challenge of determining when precautionary policies make sense comes down to weighing the (often limited) evidence about any given technology and its impact and then deciding whether the potential downsides of unrestricted use are so potentially catastrophic that trial-and-error experimentation simply cannot be allowed to continue. There certainly are some circumstances when such a precautionary rule might make sense. Governments restrict the possession of uranium and bazookas, to name just two obvious examples.

Generally speaking, permissionless innovation should remain the norm in the vast majority of cases, but there will be some scenarios where the threat of tangible, immediate, irreversible, catastrophic harm associated with new innovations could require at least a light version of the precautionary principle to be applied.  In these cases, we might be better suited to think about when an “anti-catastrophe principle” is needed, which narrows the scope of the precautionary principle and focuses it more appropriately on the most unambiguously worst-case scenarios that meet those criteria.

Precaution might make sense when harm is:

  • Highly probable
  • Tangible (physical)
  • Immediate
  • Irreversible
  • Catastrophic

Precaution generally doesn’t make sense for asserted harms that are:

  • Highly improbable
  • Intangible (psychic)
  • Distant / unclear timeline
  • Reversible / changeable
  • Mundane / trivial

But most cases don’t fall into this category. Instead, we generally allow innovators and consumers to freely experiment with technologies, and even engage in risky behaviors, unless a compelling case can be made that precautionary regulation is absolutely necessary.  How is the determination made regarding when precaution makes sense? This is where the role of benefit-cost analysis (BCA) and regulatory impact analysis is essential to getting policy right.  BCA represents an effort to formally identify the tradeoffs associated with regulatory proposals and, to the maximum extent feasible, quantify those benefits and costs.  BCA generally cautions against preemptive, precautionary regulation unless all other options have been exhausted—thus allowing trial-and-error experimentation and “learning by doing” to continue. (The mechanics of BCA are discussed in more detail in section VII.)

This is not the end of the evaluation, however. Policymakers also need to consider the complexities associated with traditional regulatory remedies in a world where technological control is increasingly challenging and quite costly. It is not feasible to throw unlimited resources at every problem, because society’s resources are finite.  We must balance risk probabilities and carefully weigh the likelihood that any given intervention has a chance of creating positive change in a cost-effective fashion.  And it is also essential to take into account the potential unintended consequences and long-term costs of any given solution because, as Harvard law professor Cass Sunstein notes, “it makes no sense to take steps to avert catastrophe if those very steps would create catastrophic risks of their own.”  “The precautionary principle rests upon an illusion that actions have no consequences beyond their intended ends,” observes Frank B. Cross of the University of Texas. But “there is no such thing as a risk-free lunch. Efforts to eliminate any given risk will create some new risks,” he says.

Oftentimes, after working through all these considerations about whether to regulate new technologies or technological processes, the best solution will be to do nothing because, as noted throughout this book, we should never underestimate the amazing ingenuity and resiliency of humans to find creative solutions to the problems posed by technological change.  (Section V discusses the importance of individual and social adaptation and resiliency in greater detail.) Other times we might find that, while some solutions are needed to address the potential risks associated with new technologies, nonregulatory alternatives are also available and should be given a chance before top-down precautionary regulations are imposed. (Section VII considers those alternative solutions in more detail.)

Finally, it is again essential to reiterate that we are talking here about the dangers of precautionary thinking as a public policy prerogative—that is, precautionary regulations that are mandated and enforced by government officials. By contrast, precautionary steps may be far more wise when undertaken in a more decentralized manner by individuals, families, businesses, groups, and other organizations. In other words, as I have noted elsewhere in much longer articles on the topic, “there is a different choice architecture at work when risk is managed in a localized manner as opposed to a society-wide fashion,” and risk-mitigation strategies that might make a great deal of sense for individuals, households, or organizations, might not be nearly as effective if imposed on the entire population as a legal or regulatory directive.

Finally, at times, more morally significant issues may exist that demand an even more exhaustive exploration of the impact of technological change on humanity. Perhaps the most notable examples arise in the field of advanced medical treatments and biotechnology. Genetic experimentation and human cloning, for example, raise profound questions about altering human nature or abilities as well as the relationship between generations.

The case for policy prudence in these matters is easier to make because we are quite literally talking about the future of what it means to be human.  Controversies have raged for decades over the question of when life begins and how it should end. But these debates will be greatly magnified and extended in coming years to include equally thorny philosophical questions.  Should parents be allowed to use advanced genetic technologies to select the specific attributes they desire in their children? Or should parents at least be able to take advantage of genetic screening and genome modification technologies that ensure their children won’t suffer from specific diseases or ailments once born?

Outside the realm of technologically enhanced procreation, profound questions are already being raised about the sort of technological enhancements adults might make to their own bodies. How much of the human body can be replaced with robotic or bionic technologies before we cease to be human and become cyborgs? As another example, “biohacking”—efforts by average citizens working together to enhance various human capabilities, typically by experimenting on their own bodies—could become more prevalent in coming years. Collaborative forums, such as Biohack.Me, already exist where individuals can share information and collaborate on various projects of this sort. Advocates of such amateur biohacking sometimes refer to themselves as “grinders,” which Ben Popper of the Verge defines as “homebrew biohackers [who are] obsessed with the idea of human enhancement [and] who are looking for new ways to put machines into their bodies.”

These technologies and capabilities will raise thorny ethical and legal issues as they advance. Ethically, they will raise questions of what it means to be human and the limits of what people should be allowed to do to their own bodies. In the field of law, they will challenge existing health and safety regulations imposed by the FDA and other government bodies.

Again, most innovation policy debates—including most of the technologies discussed throughout this book—do not involve such morally weighty questions. In the abstract, of course, philosophers might argue that every debate about technological innovation has an impact on the future of humanity and “what it means to be human.” But few have much of a direct influence on that question, and even fewer involve the sort of potentially immediate, irreversible, or catastrophic outcomes that should concern policymakers.

In most cases, therefore, we should let trial-and-error experimentation continue because “experimentation is part and parcel of innovation” and the key to social learning and economic prosperity.  If we froze all forms of technological innovation in place while we sorted through every possible outcome, no progress would ever occur. “Experimentation matters,” notes Harvard Business School professor Stefan H. Thomke, “because it fuels the discovery and creation of knowledge and thereby leads to the development and improvement of products, processes, systems, and organizations.”

Of course, ongoing experimentation with new technologies always entails certain risks and potential downsides, but the central argument of this book is that (a) the upsides of technological innovation almost always outweigh those downsides and that (b) humans have proven remarkably resilient in the face of uncertain, ever-changing futures.

In sum, when it comes to managing or coping with the risks associated with technological change, flexibility and patience are essential. One size most certainly does not fit all. And one-size-fits-all approaches to regulating technological risk are particularly misguided when the benefits associated with technological change are so profound. Indeed, “[t]echnology is widely considered the main source of economic progress”; therefore, nothing could be more important for raising long-term living standards than creating a policy environment conducive to ongoing technological change and the freedom to innovate.

]]>
https://techliberation.com/2016/04/20/wendell-wallach-on-the-challenge-of-engineering-better-technology-ethics/feed/ 0 76026
Permissionless Innovation: Book, Video, Slides, Podcast, Paper & More! https://techliberation.com/2016/04/19/permissionless-innovation-book-video-slides-podcast-paper-more/ https://techliberation.com/2016/04/19/permissionless-innovation-book-video-slides-podcast-paper-more/#respond Tue, 19 Apr 2016 14:25:09 +0000 https://techliberation.com/?p=76012

I am pleased to announce the release of the second edition of my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. As with the first edition, the book represents a short manifesto that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. The book attempts to accomplish two major goals.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today.

The second major objective of the book, as is made clear by the title, is to make a forceful case in favor of the latter disposition of “permissionless innovation.” I argue that policymakers should unapologetically embrace and defend the permissionless innovation ethos — not just for the Internet but also for all new classes of networked technologies and platforms. Some of the specific case studies discussed in the book include: the “Internet of Things” and wearable technologies, smart cars and autonomous vehicles, commercial drones, 3D printing, and various other new technologies that are just now emerging.

I explain how precautionary principle thinking is increasingly creeping into policy discussions about these technologies. The urge to regulate preemptively in these sectors is driven by a variety of safety, security, and privacy concerns, which are discussed throughout the book. Many of these concerns are valid and deserve serious consideration. However, I argue that if precautionary-minded regulatory solutions are adopted in a preemptive attempt to head off these concerns, the consequences will be profoundly deleterious.

My central thesis is this: Living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.

Again, that doesn’t mean we should ignore the various problems created by these highly disruptive technologies. But how we address these concerns matters greatly. If and when problems develop, there are many less burdensome ways to address them than through preemptive technological controls. The best solutions to complex social problems are almost always organic and “bottom-up” in nature. Luckily, there exists a wide variety of constructive approaches that can be tapped to address or alleviate concerns associated with new innovations. These include:

  • education and empowerment efforts (including media literacy and digital citizenship efforts);
  • social pressure from activists, academics, the press, and the public more generally;
  • voluntary self-regulation and adoption of best practices (including privacy and security “by design” efforts); and
  • increased transparency and awareness-building efforts to enhance consumer knowledge about how new technologies work.

Such solutions are almost always superior to top-down, command-and-control regulatory edicts and bureaucratic schemes of a “Mother, May I?” (i.e., permissioned) nature. The problem with “top-down” traditional regulatory systems is that they often tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may never come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. It raises the cost of starting or running a business or non-business venture, and generally discourages activities that benefit society.

To the extent that other public policies are needed to guide technological developments, simple legal principles are greatly preferable to technology-specific, micro-managed regulatory regimes. Again, ex ante (preemptive and precautionary) regulation is often highly inefficient, even dangerous. To the extent that any corrective legal action is needed to address harms, ex post measures, especially via the common law (torts, class actions, etc.), are typically superior. And the Federal Trade Commission will, of course, continue to serve as a backstop here by utilizing the broad consumer protection powers it possesses under Section 5 of the Federal Trade Commission Act, which prohibits “unfair or deceptive acts or practices in or affecting commerce.” In recent years, the FTC has already brought and settled many cases involving its Section 5 authority to address identity theft and data security matters. If still more is needed, enhanced disclosure and transparency requirements would certainly be superior to outright bans on new forms of experimentation or other forms of heavy-handed technological controls.

In the end, however, I argue that, to the maximum extent possible, our default position toward new forms of technological innovation must remain: “innovation allowed.” That is especially the case because, more often than not, citizens find ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes. We should have a little more faith in the ability of humanity to adapt to the challenges new innovations create for our culture and economy. We have done it countless times before. We are creative, resilient creatures. That’s why I remain so optimistic about our collective ability to confront the challenges posed by these new technologies and prosper in the process.

If you’re interested in taking a look, you can find a free PDF of the book at the Mercatus Center website or you can find out how to order it from there as an eBook. Hardcopies are also available.

The Mercatus Center also recently hosted a book launch party for the release of the 2nd edition. The event was very well-attended and many of those present asked me to forward along specific slides or the entire deck. So, for those who asked, or others who may be interested in seeing the slides, here ya go!

And here’s the video from the event, which also incorporates these slides:

Also, back in September 2015, Sonal Chokshi was kind enough to invite me on the a16z podcast and we discussed “Making the Case for Permissionless Innovation.” You can listen to that conversation here:

Finally, I put together a paper summarizing the major policy recommendations contained in the book. It’s entitled “Permissionless Innovation and Public Policy: A 10-Point Blueprint.” And then, along with Michael Wilt, I published a condensed version of the paper as an essay over at Medium.


CFTC’s Giancarlo on Permissionless Innovation for the Blockchain https://techliberation.com/2016/04/01/cftcs-giancarlo-on-permissionless-innovation-for-the-blockchain/ https://techliberation.com/2016/04/01/cftcs-giancarlo-on-permissionless-innovation-for-the-blockchain/#respond Fri, 01 Apr 2016 16:02:43 +0000 https://techliberation.com/?p=76010

U.S. Commodity Futures Trading Commission (CFTC) Commissioner J. Christopher Giancarlo delivered an amazing address this week before the Depository Trust & Clearing Corporation 2016 Blockchain Symposium. The title of his speech was “Regulators and the Blockchain: First, Do No Harm,” and it will go down as the definitive early statement about how policymakers can apply a principled, innovation-enhancing policy paradigm to distributed ledger technology (DLT) or “blockchain” applications.

“The potential applications of this technology are being widely imagined and explored in ways that will benefit market participants, consumers and governments alike,” Giancarlo noted in his address. But in order for that to happen, he said, we have to get policy right. “It is time again to remind regulators to ‘do no harm,'” he argued, and he continued on to note that

The United States’ global leadership in technological innovation of the Internet was built hand-in-hand with its enlightened “do no harm” regulatory framework. Yet, when the Internet developed in the mid-1990s, none of us could have imagined its capabilities that we take for granted today. Fortunately, policymakers had the foresight to create a regulatory environment that served as a catalyst rather than a choke point for innovation. Thanks to their forethought and restraint, Internet-based applications have revolutionized nearly every aspect of human life, created millions of jobs and increased productivity and consumer choice. Regulators must show that same forethought and restraint now [for the blockchain].

What Giancarlo is referring to is the approach that the U.S. government adopted toward the Internet and digital networks in the mid-1990s. You can think of this vision as “permissionless innovation.” As I explain in my recent book of the same title, permissionless innovation refers to the notion that we should generally be free to experiment and learn new and better ways of doing things through ongoing trial-and-error.

How did U.S. policymakers make permissionless innovation the cornerstone of Internet policy during the mid-1990s? In my book, I highlight several key policy decisions, but the most crucial moment came with the Clinton Administration’s publication of the Framework for Global Electronic Commerce in July 1997. As I have noted here many times before, the document was a succinct and principled vision statement that made the idea of permissionless innovation the cornerstone of Internet policy for America. The five principles at the heart of this beautiful Framework were:

1. The private sector should lead. The Internet should develop as a market driven arena not a regulated industry. Even where collective action is necessary, governments should encourage industry self-regulation and private sector leadership where possible.

2. Governments should avoid undue restrictions on electronic commerce. In general, parties should be able to enter into legitimate agreements to buy and sell products and services across the Internet with minimal government involvement or intervention. Governments should refrain from imposing new and unnecessary regulations, bureaucratic procedures or new taxes and tariffs on commercial activities that take place via the Internet.

3. Where governmental involvement is needed, its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce. Where government intervention is necessary, its role should be to ensure competition, protect intellectual property and privacy, prevent fraud, foster transparency, and facilitate dispute resolution, not to regulate.

4. Governments should recognize the unique qualities of the Internet. The genius and explosive success of the Internet can be attributed in part to its decentralized nature and to its tradition of bottom-up governance. Accordingly, the regulatory frameworks established over the past 60 years for telecommunication, radio and television may not fit the Internet. Existing laws and regulations that may hinder electronic commerce should be reviewed and revised or eliminated to reflect the needs of the new electronic age.

5. Electronic commerce on the Internet should be facilitated on a global basis. The Internet is a global marketplace. The legal framework supporting commercial transactions should be consistent and predictable regardless of the jurisdiction in which a particular buyer and seller reside.

It was and remains a near-perfect vision for how emerging technologies should be governed because, as I note in my book, it “gave innovators the green light to let their minds run wild and experiment with an endless array of exciting new devices and services.”

Commissioner Giancarlo agrees, noting of the Framework that “This model is well-recognized as the enlightened regulatory underpinning of the Internet that brought about profound changes to human society. … During the period of this ‘do no harm’ regulatory framework, a massive amount of investment was made in the Internet’s infrastructure. It yielded a rapid expansion in access that supported swift deployment and mass adoption of Internet-based technologies.” And countless new exciting systems, devices, and applications came about, which none of us could have anticipated until we let people experiment freely.

By extension, we should apply the “do no harm” / permissionless innovation policy paradigm more broadly, Giancarlo says.

‘Do no harm’ was unquestionably the right approach to development of the Internet. Similarly, “do no harm” is the right approach for DLT. Once again, the private sector must lead and regulators must avoid impeding innovation and investment and provide a predictable, consistent and straightforward legal environment. Protracted regulatory uncertainty or an uncoordinated regulatory approach must be avoided, as should rigid application of existing rules designed for a bygone technological era. . . . I believe that innovators and investors should not have to seek government’s permission, only its forbearance, to develop DLT so they can do the work necessary to address the increased operational complexity and capital consumption of modern financial market regulation.

And if America fails to adopt this approach for the Blockchain, it could be disastrous. “Without such a ‘do no harm’ approach,” Giancarlo predicts, “financial service and technology firms will be left trying to navigate a complex regulatory environment, where multiple agencies have their own rule frameworks, issues and concerns.” And that led Giancarlo to touch upon an issue I have discussed here many times before: the growing reality of a world of “global innovation arbitrage.” As I noted in an essay on that topic, “Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.”

This is why it is so crucial that policymakers set the right tone for innovation in blockchain-based technologies and applications. If they don’t, innovators will seek out more hospitable legal environments in which they can innovate without prior restraint. As he elaborates:

It is therefore critical for regulators to come together to adopt a principles-based approach to DLT regulation that is flexible enough so innovators do not fear unwitting infractions of an uncertain regulatory environment. Some regulators have already openly acknowledged the need for light-touch oversight. For instance, the UK’s Financial Conduct Authority (FCA) has committed to regulatory forbearance on DLT development for the foreseeable future in an effort to give innovators “space” to develop and improve the technology. The FCA is even going one step further and engaging in discussions with the industry to determine whether DLT could meet the FCA’s own needs. Similarly, a few weeks ago, Masamichi Kono, Vice Minister for International Affairs at the Japan Financial Services Agency, stated that regulators must take a “pragmatic and flexible approach” to regulation of new technologies so not to stifle innovation. I have no doubt that the FCA’s intention to give DLT innovators “space” to innovate will be good for DLT research and development. I also suspect that it will be good for the UK’s burgeoning FinTech industry and the jobs it creates across the Atlantic. U.S. lawmakers concerned about the rapid loss of jobs in the U.S. financial service industry, especially in the New York City area, should similarly look to provide “space” to U.S. DLT innovation and entrepreneurship and the well-paying American jobs that will surely follow.

That is exactly right. I just hope other policymakers are listening to this wisdom. The future of blockchain-based innovation depends upon it. America should follow Commissioner Giancarlo’s wise call to adopt permissionless innovation as the policy default for this exciting technology.


 

The Right to Try, 3D Printing, the Costs of Technological Control & the Future of the FDA https://techliberation.com/2015/08/10/the-right-to-try-3d-printing-the-costs-of-technological-control-the-future-of-the-fda/ https://techliberation.com/2015/08/10/the-right-to-try-3d-printing-the-costs-of-technological-control-the-future-of-the-fda/#comments Mon, 10 Aug 2015 13:28:37 +0000 http://techliberation.com/?p=75660

I’ve been thinking about the “right to try” movement a lot lately. It refers to the growing movement (especially at the state level here in the U.S.) to allow individuals to experiment with alternative medical treatments, therapies, and devices that are restricted or prohibited in some fashion (typically by the Food and Drug Administration). I think there are compelling ethical reasons for allowing citizens to determine their own course of treatment in terms of what they ingest into their bodies or what medical devices they use, especially when they are facing the possibility of death and have exhausted all other options.

But I also favor a more general “right to try” that allows citizens to make their own health decisions in other circumstances. Such a general freedom entails some risks, of course, but the better way to deal with those potential downsides is to educate citizens about the trade-offs associated with various treatments and devices, not to forbid them from seeking them out at all.

The Costs of Control

But this debate isn’t just about ethics. There’s also the question of the costs associated with regulatory control. Practically speaking, with each passing day it becomes harder and harder for governments to control unapproved medical devices, drugs, therapies, etc.  Correspondingly, that significantly raises the costs of enforcement and makes one wonder exactly how far the FDA or other regulators will go to stop or slow the advent of new technologies.

I have written about this “cost of control” problem in various law review articles as well as my little Permissionless Innovation book and pointed out that, when enforcement challenges and costs reach a certain threshold, the case for preemptive control grows far weaker simply because of (1) the massive resources that regulators would have to pour into the task of crafting a workable enforcement regime; and/or (2) the massive loss of liberty it would entail for society more generally to devise such solutions. With the rise of the Internet of Things, wearable devices, mobile medical apps, and other networked health and fitness technologies, these issues are going to become increasingly ripe for academic and policy consideration.

A Hypothetical Regulatory Scenario

Here’s an interesting case study to consider in this regard: Can 3D printing of prosthetics be controlled? Clearly prosthetics are medical devices in the traditional regulatory sense, but few people are going to the FDA and asking for permission or a “right to try” new 3D-printed limbs. They’re just doing it. And the results have been incredibly exciting, as my Mercatus Center colleague Robert Graboyes has noted.

But let’s imagine what the regulators might do if they really wanted to impose their will and limit the right to try in this context:

  • Could government officials ban 3D printers outright? I don’t see how. The technology is already too diffuse and is utilized for so many alternative (and uncontroversial) uses that it doesn’t seem likely such a control regime would work or be acceptable. And if any government did take this extreme step, “global innovation arbitrage” would kick in. That is, innovators would just move offshore.
  • Could government officials ban the inputs used by 3D printers? Again, I don’t see how. After all, we are primarily talking about plastics and glue here!
  • Could government officials ban 3D printer blueprints? Two problems with that. First, such blueprints are a form of free speech and government efforts to censor them would represent a form of prior restraint that would violate the First Amendment of the U.S. Constitution. Second, even ignoring the First Amendment issues, information control is just damned hard and I don’t see how you could suppress such blueprints effectively when they are freely available across the Internet. Or, people would just “torrent” them, as they do (illegally) with copyrighted files today.
  • Could government officials ban and/or fine specific companies (especially those with deep pockets)? Perhaps, but that is likely a losing strategy since 3D printing is already so highly decentralized and is done by average citizens in the comfort of their own home (and often for no monetary gain). So, attempting to go after a handful of corporate players and “make an example out of them” to deter others from experimenting isn’t likely to work. And, again, it’ll just lead to more offshoring and undergrounding of these devices and innovative activities.
  • Could government officials ban the sale of certain 3D printing applications? They could try, but enterprising minds would likely start using alternative payment methods (like Bitcoin) to conduct their deals. But the question of payments is largely irrelevant in many fields because much of this activity is non-commercial and open-source in character. People are freely distributing blueprints for 3D-printed prosthetics, for example, and they are even giving away the actual 3D-printed prosthetic devices to those who need them.
  • Could government officials just create a licensing / approval regime for narrowly-targeted 3D printed medical devices? Of course, but for all the reasons outlined above, it would likely be pretty easy to evade such a regime. Moreover, the very effort to enforce such a licensing regime would likely deter many beneficial innovations in the process, while also leading to the old cronyist problems associated with firms engaging in rent-seeking and courting favor with regulators in order to survive or prosper.

Anyway, you get the point: The practicality of control makes a difference, and at some point the enormous costs associated with enforcement become an ethical matter in their own right. Stated differently, it’s not just that citizens should generally be at liberty to determine their own treatments and decide what drugs they ingest and what medical devices they use; it’s also the case that regulatory efforts aimed at limiting that right have so many corresponding enforcement costs that can spill over onto society more generally. And that’s an ethical matter of a different sort when you get right down to it. But, at a minimum, it’s an increasingly costly strategy, and the costs associated with such technological control regimes should be considered closely and quantified where possible.

The Need for a Shift toward Risk Education

Let’s return to the question I raised above regarding the educational role that the FDA, or governments more generally, could play in the future. As I noted, a world in which citizens are granted the liberty to make more of their own health decisions is a world in which they could, at times, be rolling the dice with their health and lives. The highly paternalistic approach of modern food and drug regulation is rooted in the belief that citizens simply cannot be trusted to make such decisions on their own because they will never be able to appreciate the relative risks. You might be surprised to hear that I am somewhat sympathetic to that argument. People can and do make rash and unwise decisions about their health based on misinformation or a general lack of quality information presented in an easy-to-understand fashion. As a result, policymakers have taken the right to make these decisions away from us in many circumstances.

Although motivated by the best of intentions, paternalistic controls are not the optimal way to address these concerns. The better approach is rooted in risk education. To reiterate, the wise way to deal with the potential downsides associated with freedom of choice is to educate citizens about the relative risks associated with various medical treatments and devices, not to forbid them from seeking them out at all.

What does that mean for the future of the FDA? If the agency were smart, it would recognize that traditional command-and-control regulation is no longer a sensible strategy; it’s increasingly unworkable and imposes too many other costs on innovators and personal liberty. Thus, the agency needs to reorient its focus toward becoming a risk educator. Its goal should be to help create a more fully informed citizenry that is empowered with more and better information about relative risk trade-offs.

Overcoming the Opposition & Getting Consent Mechanisms Right

Such an approach (i.e., shifting the FDA’s mission from being primarily a risk regulator to becoming a risk educator) will encounter opposition from strident defenders and opponents of the FDA alike.

The defenders of the FDA and its traditional approach will continue to insist that people can never be trusted to make such decisions on their own, regardless of how much information they have at their disposal or how many warnings we might give them. The problem with that position is that it treats citizens like ignorant sheep and denies them the most basic of all human rights: The right to live a life of your own choosing and to make the ultimate determinations about your own health and welfare. And, again, blindly defending the old system isn’t wise because traditional command-and-control regulatory methods are increasingly impractical and incredibly costly to enforce.

Opponents of the FDA, by contrast, will insist that the agency can’t even be trusted to provide us with good information to make these decisions on our own. Additionally, critics will likely argue that the agency might give us the wrong information or try to “nudge” us in certain directions. I share some of those concerns, but I am willing to live with that possibility so long as we are moving toward a world in which that is the only real power that the FDA possesses over me and my fellow citizens. Because if all the agency is doing is providing us with information about risk trade-offs, then at least we still remain free to seek out alternative information from other experts and then choose our own courses of action.

The tricky issue here is getting consent mechanisms right. In fact, it’s the lynchpin of the new regime I am suggesting. In other words, even if we could agree that a more fully informed citizenry should be left free to make these decisions on their own, we need to make sure that those individuals have provided clear and informed consent to the parties they might need to contract with when seeking alternative treatments. That’s particularly essential in a litigious society like America, where the threat of liability always looms large over doctors, nurses, hospitals, insurers, and medical innovators. Those parties will only be willing to go along with an expanded “right to try” regime if they can be assured they won’t be held to blame when citizens make controversial choices that they advised against, or at least clearly laid out all the potential risks and other alternatives at their disposal. This will require not only an evolution of statutory law and regulatory standards, but also of the common law and insurance norms.

Once we get all that figured out—and it will, no doubt, take some time—we’ll be on our way to a better world where the idea of having a “right to try” is the norm instead of the exception.


(My thanks to Adam Marcus for commenting on a draft of this essay. For more general background on 3D printing, see his excellent 2011 primer here, “3D Printing: The Future is Here.”)

Mercatus Filing to FAA on Small Drones https://techliberation.com/2015/04/24/mercatus-filing-to-faa-on-small-drones/ https://techliberation.com/2015/04/24/mercatus-filing-to-faa-on-small-drones/#respond Fri, 24 Apr 2015 18:46:09 +0000 http://techliberation.com/?p=75531

Today, Eli Dourado, Ryan Hagemann and I filed comments with the Federal Aviation Administration (FAA) in its proceeding on the “Operation and Certification of Small Unmanned Aircraft Systems” (i.e. small private drones). In this filing, we begin by arguing that just as “permissionless innovation” has been the primary driver of entrepreneurialism and economic growth in many sectors of the economy over the past decade, that same model can and should guide policy decisions in other sectors, including the nation’s airspace. “While safety-related considerations can merit some precautionary policies,” we argue, “it is important that those regulations leave ample space for unpredictable innovation opportunities.”

We continue on in our filing to note that “while the FAA’s NPRM is accompanied by a regulatory evaluation that includes benefit-cost analysis, the analysis does not meet the standard required by Executive Order 12866. In particular, it fails to consider all costs and benefits of available regulatory alternatives.” After that, we itemize the good and the bad of what the FAA proposes, with an eye toward how the agency can maximize innovation opportunities. We conclude by noting:

 The FAA must carefully consider the potential effect of UASs on the US economy. If it does not, innovation and technological advancement in the commercial UAS space will find a home elsewhere in the world. Many of the most innovative UAS advances are already happening abroad, not in the United States. If the United States is to be a leader in the development of UAS technologies, the FAA must open the American skies to innovation.

You can read our entire 9-page filing here.

Initial Thoughts on New FAA Drone Rules https://techliberation.com/2015/02/16/initial-thoughts-on-new-faa-drone-rules/ https://techliberation.com/2015/02/16/initial-thoughts-on-new-faa-drone-rules/#comments Mon, 16 Feb 2015 20:08:55 +0000 http://techliberation.com/?p=75465

Yesterday afternoon, the Federal Aviation Administration (FAA) finally released its much-delayed rules for private drone operations. As The Wall Street Journal points out, the rules “are about four years behind schedule,” but now the agency is asking for expedited public comments over the next 60 days on the whopping 200-page order. (You have to love the irony in that!) I’m still going through all the details in the FAA’s new order — and here’s a summary of the major provisions — but here are some high-level thoughts about what the agency has proposed.

Opening the Skies…

  • The good news is that, after a long delay, the FAA is finally taking some baby steps toward freeing up the market for private drone operations.
  • Innovators will no longer have to operate entirely outside the law in a sort of drone black market. There’s now a path to legal operation. Specifically, small unmanned aircraft systems (UAS) operators (for drones under 55 lbs.) will be able to go through a formal certification process and, after passing a test, get to operate their systems.

… but Not Without Some Serious Constraints

  • The problem is that the rules only open the skies incrementally for drone innovation.
  • You can’t read through these 200 pages of regulations without getting the sense that the FAA still wishes that private drones would just go away.
  • For example, the FAA still wants to keep a bit of a leash on drones by (1) limiting their use to daylight-only flights and (2) requiring that drones remain within the visual line-of-sight of their operators at all times. And (3) the agency also says that drones cannot be flown over people.
  • Those three limitations will hinder some obvious innovations, such as same-day drone delivery for small packages, which Amazon has suggested they are interested in pursuing. (Amazon isn’t happy about these restrictions.)

Impact on Small Innovators?

  • But what I worry about more are all the small ‘Mom-and-Pop’ drone entrepreneurs who want to use airspace as a platform for open, creative innovation. These folks are out there, but they don’t have the name or the resources to weather these restrictions the way that Amazon can. After all, if Amazon has to abandon same-day drone delivery because of the FAA rules, the company will still have a thriving commercial operation to fall back on. But all those small, nameless drone innovators currently experimenting with new, unforeseeable innovations may not be so lucky.
  • As a result, there’s a real threat here of drone entrepreneurs bolting the U.S. and offering their services in more hospitable environments if the FAA doesn’t take a more flexible approach.
  • [For more discussion of this problem, see my recent essay on “global innovation arbitrage.”]

Impact on News-Gathering?

  • It’s also worth asking how these rules might limit legitimate news-gathering operations by both journalistic enterprises and average citizens. If we can never fly a drone over a crowd of people, as the rules stipulate, that places some rather serious constraints on our ability to capture real-time images and video from events of societal importance (such as political protests, sporting events, or concerts).
  • [For more discussion about this, see this September 2014 Mercatus Center working paper, “News from Above: First Amendment Implications of the Federal Aviation Administration Ban on Commercial Drones.”]

Still Time to Reconsider More Flexible Rules

  • Of course, these aren’t final rules and the agency still has time to relax some of these restrictions to free the skies for less fettered private drone operation.
  • I suspect that drone innovators will protest the three specific limitations I identified above and ask for a more flexible approach to enforcing those rules.
  • But it’s good that the FAA has finally taken the first step toward decriminalizing private drone operations in the United States.

What Cory Booker Gets about Innovation Policy https://techliberation.com/2015/02/16/what-cory-booker-gets-about-innovation-policy/ https://techliberation.com/2015/02/16/what-cory-booker-gets-about-innovation-policy/#respond Mon, 16 Feb 2015 15:32:43 +0000 http://techliberation.com/?p=75460

Last Wednesday, it was my great pleasure to testify at a Senate Commerce Committee hearing entitled, “The Connected World: Examining the Internet of Things.” The hearing focused “on how devices… will be made smarter and more dynamic through Internet technologies. Government agencies like the Federal Trade Commission, however, are already considering possible changes to the law that could have the unintended consequence of slowing innovation.”

But the session went well beyond the Internet of Things and became a much more wide-ranging discussion about how America can maintain its global leadership for the next generation of Internet-enabled, data-driven innovation. On both sides of the aisle at last week’s hearing, one Senator after another made impassioned remarks about the enormous innovation opportunities that were out there. While doing so, they highlighted not just the opportunities emanating out of the IoT and wearable device space, but also many other areas, such as connected cars, commercial drones, and next-generation spectrum.

I was impressed by the energy and nonpartisan vision that the Senators brought to these issues, but I wanted to single out the passionate statement that Sen. Cory Booker (D-NJ) delivered when it came his turn to speak because he very eloquently articulated what’s at stake in the battle for global innovation supremacy in the modern economy. (Sen. Booker’s remarks were not published, but you can watch them starting at the 1:34:00 mark of the hearing video.)

Embrace the Opportunity

First, Sen. Booker stressed the enormous opportunity with the Internet of Things. “This is a phenomenal opportunity for a bipartisan, profoundly patriotic approach to an issue that can explode our economy. I think that there are trillions of dollars, creating countless jobs, improving quality of life, [and] democratizing our society,” he said. “We can’t even imagine the future that this portends of, and we should be embracing that.”

Sen. Booker has it exactly right. And for more details about the enormous innovation opportunities associated with the Internet of Things, see Section 2 of my new law review article, “The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation,” which provides concrete evidence.

Protect America’s Competitive Advantage in the Innovation Age

Second, Sen. Booker highlighted the importance of getting our policy vision right to achieve those opportunities. He noted that “a lot of my concerns are what my Republican colleagues also echoed, which is we should be doing everything possible to encourage this and nothing to restrict it.”

“America right now is the net exporter of technology and innovation in the globe, and we can’t lose that advantage,” he said, and “we should continue to be the global innovators on these areas.” He continued on to say:

And so, from copyright issues, security issues, privacy issues… all of these things are worthy of us wrestling and grappling with, but to me we cannot stop human innovation and we can’t give advantages in human innovation to other nations that we don’t have. America should continue to lead.

This is something I have been writing actively about now for many years and I agree with Sen. Booker that America needs to get our policy vision right to ensure we don’t lose ground in the international competition to see who will lead the next wave of Internet-enabled innovation. As I noted in my testimony, “If America hopes to be a global leader in the Internet of Things, as it has been for the Internet more generally over the past two decades, then we first have to get public policy right. America took a commanding lead in the digital economy because, in the mid-1990s, Congress and the Clinton administration crafted a nonpartisan vision for the Internet that protected ‘permissionless innovation’—the idea that experimentation with new technologies and business models should generally be permitted without prior approval.”

Meanwhile, as I documented in my longer essay, “Why Permissionless Innovation Matters: Why does economic growth occur in some societies & not in others?” our international rivals languished on this front because they strapped their tech sectors with layers of regulatory red tape that thwarted digital innovation.

Reject Fear-Based Policymaking

Third, and perhaps most importantly, Sen. Booker stressed how essential it was that we reject a fear-based approach to public policymaking. As he noted at the hearing about these new information technologies, “there’s a lot of legitimate fears, but in the same way of every technological era, there must have been incredible fears.”

He cited, for example, the rise of air travel and the onset of humans taking flight. Sen. Booker correctly noted that while that must have been quite jarring at first, we quickly came to realize the benefits of that new innovation. The same will be true for new technologies such as the Internet of Things, connected cars, and private drones, Booker argued. In each case, some early fears about these technologies could lead to an overly precautionary approach to policy. “But for us to do anything to inhibit that leap in humanity to me seems unfortunate,” he said.

Once again, the Senator has it exactly right. As I noted in my law review article on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as my recent essay, “Muddling Through: How We Learn to Cope with Technological Change,” humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. More often than not, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

Booker gets that and understands why we need to be patient to allow that process to unfold once again so that we can enjoy the abundance of riches that will accompany a more innovative economy.

Avoiding Global Innovation Arbitrage

Sen. Booker also highlighted how some existing government legal and regulatory barriers could hold back progress. On the wireless spectrum front he noted that “the government hoards too much spectrum and there is a need for more spectrum out there. Everything we are talking about,” he argued, “is going to necessitate more spectrum.” Again, 100% correct. Although some spectrum reform proposals (licensed vs. unlicensed, for example) will still prove contentious, we can at least all agree that we have to work together to find ways to open up more spectrum since the coming Internet of Things universe of technologies is going to demand lots of it.

Booker also noted that another area where fear undermines American leadership is the issue of private drone use. He noted that “the potential possibilities for drone technology to alleviate burdens on our infrastructure, to empower commerce, innovation, jobs… to really open up unlimited opportunities in this country is pretty incredible to me.”

The problem is that existing government policies, enforced by the Federal Aviation Administration (FAA), have been holding back progress. And that has had consequences in terms of global competitiveness. “As I watch our government go slow in promulgating rules holding back American innovation,” Booker said, “what happened as a result of that is that innovation has spread to other countries that don’t have these rules (or have) put in place sensible regulations. But now we [are] seeing technology exported from America and going other places.”

Correct again! I wrote about this problem in a recent essay on “global innovation arbitrage,” in which I noted how “Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.”

That’s already happening with drone innovation, as I documented in that piece. Evidence suggests that the FAA’s heavy-handed and overly-precautionary approach to drones has encouraged some innovators to flock overseas in search of a more hospitable regulatory environment.

Luckily, just this weekend, the FAA finally announced its (much-delayed) rules for private drone operations. (Here’s a summary of those rules.) Unfortunately, the rules are a bit of a mixed bag, with some greater leeway being provided for very small drones, but the rules will still be too restrictive to allow for other innovative applications, such as widespread drone delivery (which has angered Amazon, among others).

Bottom line: if our government doesn’t take a more flexible, light-touch approach to these and other cutting-edge technologies, then some of our most creative minds and companies are going to bolt.

I dealt with all of these innovation policy issues in far more detail in my latest little book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, which I condensed further still into this essay, “Embracing a Culture of Permissionless Innovation.” But Sen. Booker has offered us an even more concise explanation of just what’s at stake in the battle for innovation leadership in the modern economy. His remarks point the way forward and illustrate, as I have noted before, that innovation policy can and should be a nonpartisan issue.

 


Permissionless Innovation & Commercial Drones https://techliberation.com/2015/02/04/permissionless-innovation-commercial-drones/ https://techliberation.com/2015/02/04/permissionless-innovation-commercial-drones/#comments Wed, 04 Feb 2015 23:20:57 +0000 http://techliberation.com/?p=75392

Farhad Manjoo’s latest New York Times column, “Giving the Drone Industry the Leeway to Innovate,” discusses how the Federal Aviation Administration’s (FAA) current regulatory morass continues to thwart many potentially beneficial drone innovations. I particularly appreciated this point:

But perhaps the most interesting applications for drones are the ones we can’t predict. Imposing broad limitations on drone use now would be squashing a promising new area of innovation just as it’s getting started, and before we’ve seen many of the potential uses. “In the 1980s, the Internet was good for some specific military applications, but some of the most important things haven’t really come about until the last decade,” said Michael Perry, a spokesman for DJI [maker of Phantom drones]. . . . He added, “Opening the technology to more people allows for the kind of innovation that nobody can predict.”

That is exactly right and it reflects the general notion of “permissionless innovation” that I have written about extensively here in recent years. As I summarized in a recent essay: “Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention or business model will bring serious harm to individuals, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.”

The reason that permissionless innovation is so important is that innovation is more likely in political systems that maximize breathing room for ongoing economic and social experimentation, evolution, and adaptation. We don’t know what the future holds. Only incessant experimentation and trial-and-error can help us achieve new heights of greatness. If, however, we adopt the opposite approach of “precautionary principle”-based reasoning and regulation, then these chances for serendipitous discovery evaporate. As I put it in my recent book, “living in constant fear of worst-case scenarios—and premising public policy upon them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”

In this regard, the unprecedented growth of the Internet is a good example of how permissionless innovation can significantly improve consumer welfare and our nation’s competitive status relative to the rest of the world. And this also holds lessons for how we treat commercial drone technologies, as Jerry Brito, Eli Dourado, and I noted when filing comments with the FAA back in April 2013. We argued:

Like the Internet, airspace is a platform for commercial and social innovation. We cannot accurately predict to what uses it will be put when restrictions on commercial use of UASs are lifted. Nevertheless, experience shows that it is vital that innovation and entrepreneurship be allowed to proceed without ex ante barriers imposed by regulators. We therefore urge the FAA not to impose any prospective restrictions on the use of commercial UASs without clear evidence of actual, not merely hypothesized, harm.

Manjoo builds on that same point in his new Times essay when he notes:

[drone] enthusiasts see almost limitless potential for flying robots. When they fantasize about our drone-addled future, they picture not a single gadget, but a platform — a new class of general-purpose computer, as important as the PC or the smartphone, that may be put to use in a wide variety of ways. They talk about applications in construction, firefighting, monitoring and repairing infrastructure, agriculture, search and rescue, Internet and communications services, logistics and delivery, filmmaking and wildlife preservation, among other uses.

If only the folks at the FAA and in Congress saw things this way. We need to open up the skies to the amazing innovative potential of commercial drone technology, especially before the rest of the world seizes the opportunity to jump into the lead on this front.

Global Innovation Arbitrage: Genetic Testing Edition https://techliberation.com/2014/12/12/global-innovation-arbitrage-genetic-testing-edition/ https://techliberation.com/2014/12/12/global-innovation-arbitrage-genetic-testing-edition/#comments Sat, 13 Dec 2014 03:48:50 +0000 http://techliberation.com/?p=75086

Earlier this week I posted an essay entitled, “Global Innovation Arbitrage: Commercial Drones & Sharing Economy Edition,” in which I noted how:

Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.

That essay focused on how actions by U.S. policymakers and regulatory agencies threatened to disincentivize homegrown innovation in the commercial drone and sharing economy sectors. But there are many other troubling examples of how America risks losing its competitive advantage in sectors where we should be global leaders as innovators look offshore. We can think of this as “global innovation arbitrage,” as venture capitalist Marc Andreessen has aptly explained:

Think of it as a sort of “global arbitrage” around permissionless innovation — the freedom to create new technologies without having to ask the powers that be for their blessing. Entrepreneurs can take advantage of the difference between opportunities in different regions, where innovation in a particular domain of interest may be restricted in one region, allowed and encouraged in another, or completely legal in still another.

One of the more vivid recent examples of global innovation arbitrage involves 23andMe, which sells mail-order DNA-testing kits to allow people to learn more about their genetic history and predisposition to various diseases. Unfortunately, the Food and Drug Administration (FDA) is actively thwarting innovation on this front, as SF Gate reporter Stephanie Lee notes in her recent article, “23andMe’s health DNA kits now for sale in U.K., still blocked in U.S.”:

A little more than a year ago, 23andMe, the Google-backed startup that sells mail-order DNA-testing kits, was ordered by U.S. regulators to stop telling consumers about their genetic health risks. The Mountain View company has since tried to regain favor with the Food and Drug Administration, but it’s also started to expand outside the country. As of Tuesday, United Kingdom consumers can buy 23andMe’s saliva kits and learn about their inherited risks of diseases and responses to drugs.

While the FDA drags its feet on this front, however, other countries are ready to open their doors to innovators and their life-enriching products and services:

A spokesperson for the United Kingdom’s Medicines and Healthcare Products Regulatory Agency said the [23andMe] test can be used with caution. […] “The U.K. is a world leader in genomics and we are very excited to offer a product specifically for U.K. customers,” Anne Wojcicki, 23andMe’s co-founder and CEO, told the BBC. Mark Thomas, a professor of evolutionary genetics at University College London, said in a statement, “For better or worse, direct-to-the-consumer genetic testing companies are here to stay. One could argue the rights and wrongs of such companies existing, but I suspect that ship has sailed.”

That’s absolutely right, even if the FDA wants to bury its head in the sand and pretend it can turn back the clock. The problem is that the longer the FDA pretends it can play by the old command-and-control playbook, the more likely it is that American innovators like 23andMe will look to move offshore and find more hospitable homes for their innovative endeavors.

This is a central lesson that my Mercatus Center colleague Dr. Robert Graboyes stressed in his recent study, Fortress and Frontier in American Health Care. Graboyes noted that if America fails to embrace the “frontier” spirit of innovation — i.e., a policy disposition that embraces creative destruction and disruptive, “permissionless” innovation — then our global competitive advantage in this space is at risk:

Moving health care from the Fortress to the Frontier may be more a matter of necessity than of choice. We are entering a period of rapid technological advances that will radically alter health care. Many of these advances require only modest capital and labor inputs that governments cannot easily control or prohibit. If US law obstructs these technologies here, it will be feasible for Americans to obtain them by Internet, by mail, or by travel. (p. 41-2)

Graboyes highlighted several areas in which this issue will play out going forward beyond genomic information, including: personalized medicine, 3-D printing, artificial intelligence, information sharing via social media, wearable technology, and telemedicine.

As Larry Downes and Paul Nunes noted in a recent Wired editorial, “Regulating 23andMe Won’t Stop the New Age of Genetic Testing”:

The information flood is coming. If not this Christmas season, then one in the near future. Before long, $100 will get you sequencing of not just the million genes 23andMe currently examines, but all of them. Regulators and medical practitioners must focus their attention not on raising temporary obstacles, but on figuring out how they can make the best use of this inevitable tidal wave of information.

American policymakers must accept that reality and adjust their attitudes and policies accordingly or else we can expect to see even more global innovation arbitrage — and a corresponding loss of national competitiveness — in coming years.

[Note: Our friends over at TechFreedom launched a Change.org petition a while back to call for a reversal of the FDA’s actions.]

A Nonpartisan Policy Vision for the Internet of Things https://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/ https://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/#comments Thu, 11 Dec 2014 20:07:11 +0000 http://techliberation.com/?p=75076

What sort of public policy vision should govern the Internet of Things? I’ve spent a lot of time thinking about that question in essays here over the past year, as well as in a new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology early next year.

But I recently heard three policymakers articulate their recommended vision for the Internet of Things (IoT) and I found their approach so inspiring that I wanted to discuss it here in the hopes that it will become the foundation for future policy in this arena.

Last Thursday, it was my pleasure to attend a Center for Data Innovation (CDI) event on “How Can Policymakers Help Build the Internet of Things?” As the title implied, the goal of the event was to discuss how to achieve the vision of a more fully-connected world and, more specifically, how public policymakers can help facilitate that objective. It was a terrific event with many excellent panel discussions and keynote addresses.

Two of those keynotes were delivered by Senators Deb Fischer (R-Neb.) and Kelly Ayotte (R-N.H.). Below I will offer some highlights from their remarks and then relate them to the vision set forth by Federal Trade Commission (FTC) Commissioner Maureen K. Ohlhausen in some of her recent speeches. I will conclude by discussing how the Ayotte-Fischer-Ohlhausen vision can be seen as the logical extension of the Clinton Administration’s excellent 1997 Framework for Global Electronic Commerce, which proposed a similar policy paradigm for the Internet more generally. This shows how crafting policy for the IoT can and should be a nonpartisan affair.

Sen. Deb Fischer

In her opening remarks at the CDI event last week, Sen. Deb Fischer explained how “the Internet of Things can be a game changer for the U.S. economy and for the American consumer.” “It gives people more information and better tools to analyze data to make more informed choices,” she noted.

After outlining some of the potential benefits associated with the Internet of Things, Sen. Fischer went on to explain why it is essential that we get public policy incentives right first if we hope to unlock the full potential of these new technologies. Specifically, she argued that:

In order for Americans to receive the maximum benefits from increased connectivity, there are two things the government must avoid. First, policymakers can’t bury their heads in the sand and pretend this technological revolution isn’t happening only to wake up years down the road and try to micromanage a fast-changing, dynamic industry. Second, the federal government must also avoid regulation just for the sake of regulation. We need thoughtful, pragmatic responses and narrow solutions to any policy issues that arise. For too long, the only “strategy” in Washington policy-making has been to react to crisis after crisis. We should dive into what this means for U.S. global competitiveness, consumer welfare, and economic opportunity before the public policy challenges overwhelm us, before legislative and executive branches of government – or foreign governments – react without all the facts.

Fischer concluded by noting that, “it’s entirely appropriate for the U.S. government to think about how to modernize its regulatory frameworks, consolidate, renovate, and overhaul obsolete rules. We’re destined to lose to the Chinese or others if the Internet of Things is governed in the United States by rules that pre-date the VCR.”

Sen. Kelly Ayotte

Like Sen. Fischer, Ayotte stressed the many economic opportunities associated with IoT technologies for consumers and producers alike. [Note: Sen. Ayotte did not publish her remarks on her website, but you can watch her speech from the CDI event beginning around the 17-minute mark of the event video.]

Ayotte also noted that IoT is going to be a major topic for the Senate Commerce Committee and that there will be an upcoming hearing on the issue. She said that the role of the Committee will be to ensure that the various agencies looking into IoT issues are not issuing “conflicting regulatory directives” and “that what is being done makes sense and allows for future innovation that we can’t even anticipate right now.” Among the agencies she cited that are currently looking into IoT issues: FTC (privacy & security), FDA (medical device apps), FCC (wireless issues), FAA (commercial drones), NHTSA (intelligent vehicle technology), NTIA (multistakeholder privacy reviews), as well as state lawmakers and regulatory agencies.

Sen. Ayotte then explained what sort of policy framework America needed to adopt to ensure that the full potential of the Internet of Things could be realized. She framed the choice lawmakers are confronted with as follows:

we as policymakers can either create an environment that allows that to continue to grow, or one that thwarts that. To stay on the cutting edge, we need to make sure that our regulatory environment is conducive to fostering innovation.” […] “we’re living in the Dark Ages in the way that some of the regulations have been framed. Companies must be properly incentivized to invest in the future, and government shouldn’t be a deterrent to innovation and job-creation.

Ayotte also stressed that “technology continues to evolve so rapidly there is no one-size-fits-all regulatory approach” that can work for a dynamic environment like this. “If legislation drives technology, the technology will be outdated almost instantly,” and “that is why humility is so important,” she concluded.

The better approach, she argued, was to let technology evolve freely in a “permissionless” fashion, see what problems developed, and then address them accordingly. “[A] top-down, preemptive approach is never the best policy” and will only serve to stifle innovation, she added. “If all regulators looked with some humility at how technology is used and whether we need to regulate or not to regulate, I think innovation would stand to benefit.”

FTC Commissioner Maureen K. Ohlhausen

Fischer and Ayotte’s remarks reflect a vision for the Internet of Things that FTC Commissioner Maureen K. Ohlhausen has articulated in recent months. In fact, Sen. Ayotte specifically cited Ohlhausen in her remarks.

Ohlhausen has actually delivered several excellent speeches on these issues and has become one of the leading public policy thought leaders on the Internet of Things in the United States today. One of her first major speeches on these issues was her October 2013 address entitled, “The Internet of Things and the FTC: Does Innovation Require Intervention?” In that speech, Ohlhausen noted that, “The success of the Internet has in large part been driven by the freedom to experiment with different business models, the best of which have survived and thrived, even in the face of initial unfamiliarity and unease about the impact on consumers and competitors.”

She also issued a wise word of caution to her fellow regulators:

It is . . . vital that government officials, like myself, approach new technologies with a dose of regulatory humility, by working hard to educate ourselves and others about the innovation, understand its effects on consumers and the marketplace, identify benefits and likely harms, and, if harms do arise, consider whether existing laws and regulations are sufficient to address them, before assuming that new rules are required.

In this and other speeches, Ohlhausen has highlighted the various other remedies that already exist when things do go wrong, including FTC enforcement of “unfair and deceptive practices,” common law solutions (torts and class actions), private self-regulation and best practices, social pressure, and so on. (Note: Inspired by Ohlhausen’s approach, I devoted the final section of my big law review article on IoT issues to a deeper exploration of all those “bottom-up” solutions to privacy and security concerns surrounding the IoT and wearable tech.)

The Clinton Administration Vision

These three women have articulated what I regard as the ideal vision for fostering the growth of the Internet of Things. It should be noted, however, that their framework is really just an extension of the Clinton Administration’s outstanding vision for the Internet more generally.

In the 1997 Framework for Global Electronic Commerce, the Clinton Administration outlined its approach toward the Internet and the emerging digital economy. As I’ve noted many times before, the Framework was a succinct and bold market-oriented vision for cyberspace governance that recommended reliance upon civil society, contractual negotiations, voluntary agreements, and ongoing marketplace experiments to solve information age problems. Specifically, it stated that “the private sector should lead [and] the Internet should develop as a market driven arena not a regulated industry.” “[G]overnments should encourage industry self-regulation and private sector leadership where possible” and “avoid undue restrictions on electronic commerce.”

Sen. Ayotte specifically cited those Clinton principles in her speech and said, “I think those words, given twenty years ago at the infancy of the Internet, are today even more relevant as we look at the challenges and the issues that we continue to face as regulators and policymakers.”

I completely agree. This is exactly the sort of vision that we need to keep innovation moving forward to benefit consumers and the economy, and this also illustrates how IoT policy can be a nonpartisan effort.

Why does this matter so much? As I noted in this recent essay, thanks to the Clinton Administration’s bold vision for the Internet:

This policy disposition resulted in an unambiguous green light for a rising generation of creative minds who were eager to explore this new frontier for commerce and communications. . . . The result of this freedom to experiment was an outpouring of innovation. America’s info-tech sectors thrived thanks to permissionless innovation, and they still do today. An annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing, software, and digital technology.

In other words, America got policy right before, and we can get it right again to ensure we remain a global innovation leader. Patience, flexibility, and forbearance are the key policy virtues that nurture an environment conducive to entrepreneurial creativity, economic progress, and greater consumer choice.

Other policymakers should endorse the vision originally sketched out by the Clinton Administration and now so eloquently embraced and extended by Sen. Fischer, Sen. Ayotte, and Commissioner Ohlhausen. This is the path forward if we hope to realize the full potential of the Internet of Things.

]]>
https://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/feed/ 1 75076
Global Innovation Arbitrage: Commercial Drones & Sharing Economy Edition https://techliberation.com/2014/12/09/global-innovation-arbitrage-commercial-drones-sharing-economy-edition/ https://techliberation.com/2014/12/09/global-innovation-arbitrage-commercial-drones-sharing-economy-edition/#respond Tue, 09 Dec 2014 21:02:44 +0000 http://techliberation.com/?p=75060

Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity. I was reminded of that fact today while reading two reports on the global competition to attract investment: one about commercial drones, the other about the sharing economy. First, on commercial drone policy, a new Wall Street Journal article notes that:

Amazon.com Inc., which recently began testing delivery drones in the U.K., is warning American officials it plans to move even more of its drone research abroad if it doesn’t get permission to test-fly in the U.S. soon. The statement is the latest sign that the burgeoning drone industry is shifting overseas in response to the Federal Aviation Administration’s cautious approach to regulating unmanned aircraft.

According to the Journal reporters, Amazon has sent a letter to the FAA warning that, “Without the ability to test outdoors in the United States soon, we will have no choice but to divert even more of our [drone] research and development resources abroad.” And another report in the U.K. Telegraph notes that other countries are ready and willing to open their skies to the same innovation that the FAA is thwarting in America. Both the UK and Australia have been more welcoming to drone innovators recently. Here’s a report from an Australian newspaper about Google drone services testing there. (For more details, see this excellent piece by Alan McQuinn, a research assistant with the Information Technology and Innovation Foundation: “Commercial Drone Companies Fly Away from FAA Regulations, Go Abroad.”) None of this should be a surprise, as I’ve noted in recent essays and filings. With the FAA adopting such a highly precautionary regulatory approach, innovation has been actively disincentivized. America runs the risk of driving still more private drone innovation offshore in coming months, since all signs are that the FAA intends to drag its feet on this front as long as it can, even though Congress has told the agency to take steps to integrate these technologies into national airspace.

Meanwhile, innovation in the sharing economy is at risk because of incessant bureaucratic meddling at the state and especially the local level across the United States. My colleagues Matt Mitchell, Christopher Koopman, and I released a new Mercatus Center white paper on these issues yesterday (“The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change”) and argued that most of the rules and regulations holding back the sharing economy are counter-productive and desperately in need of immediate reform. If policymakers don’t take steps to liberalize the layers of red tape that encumber new sharing economy start-ups, it’s possible that some of these companies will also start to look for opportunities offshore. Plenty of countries will be eager to embrace them, which I realized as I was reading through another report recently. The UK’s Department for Business, Innovation & Skills recently published a white paper called “Unlocking the Sharing Economy,” which discussed how the British government intended to embrace the many innovations that could flow from this space. The preface to the report opened with this telling passage from Rt. Hon. Matthew Hancock, MP and Minister of State for Business, Enterprise, and Energy:

The UK is embracing new, disruptive business models and challenger businesses that increase competition and offer new products and experiences for consumers. Where other countries and cities are closing down consumer choice, and limiting people’s freedom to make better use of their possessions, we are embracing it.

That really says it all, doesn’t it? If other countries, including the US, don’t clean up their act and create a more welcoming environment for sharing economy innovation, then the UK will be all too happy to invite those innovators to set up operations there. The offshoring option is just as real in countless other sectors of the modern tech economy. As Marc Andreessen, co-founder of the venture capital firm Andreessen Horowitz, noted in a Politico op-ed this summer:

Think of it as a sort of “global arbitrage” around permissionless innovation — the freedom to create new technologies without having to ask the powers that be for their blessing. Entrepreneurs can take advantage of the difference between opportunities in different regions, where innovation in a particular domain of interest may be restricted in one region, allowed and encouraged in another, or completely legal in still another.

Similar opportunities for such “global arbitrage” exist for the Internet of Things and wearable tech, intelligent vehicle technology, advanced medical device tech, robotics, Bitcoin, and so on. The links I have embedded here point back to other essays I have written recently about the choice we face in each of these fields, namely: will we embrace “permissionless innovation” or “precautionary principle” thinking?

This matters because — as I noted in recent essays (1, 2) as well as a book on these issues — economic growth depends upon policymakers promoting the right values when it comes to entrepreneurial activity. “For innovation and growth to blossom, entrepreneurs need a clear green light from policymakers that signals a general acceptance of risk-taking—especially risk-taking that challenges existing business models and traditional ways of doing things,” I noted in a recent essay on the importance of “Embracing a Culture of Permissionless Innovation.” Or, as the great historian of technological progress Joel Mokyr has concluded: “technological progress requires above all tolerance toward the unfamiliar and the eccentric.”

To sum up in two words: incentives matter. “[E]conomic and social institutions have to encourage potential innovators by presenting them with the right incentive structure,” Mokyr notes. Thus, when the economic and social incentive structure discourages risk-taking and experimentation in a given country or even an entire continent, we can expect that global innovation arbitrage will accelerate as entrepreneurs look to find more hospitable investment climates.



]]>
https://techliberation.com/2014/12/09/global-innovation-arbitrage-commercial-drones-sharing-economy-edition/feed/ 0 75060
Preparing to Pounce: D.C. angles for another industry https://techliberation.com/2009/10/19/preparing-to-pounce-d-c-angles-for-another-industry/ https://techliberation.com/2009/10/19/preparing-to-pounce-d-c-angles-for-another-industry/#comments Tue, 20 Oct 2009 03:42:55 +0000 http://techliberation.com/?p=22720

As you’ve no doubt heard, Washington D.C. is angling for a takeover of the . . . U.S. telecom industry?!

That’s right: broadband, routers, switches, data centers, software apps, Web video, mobile phones, the Internet. As if its agenda weren’t full enough, the government is preparing a dramatic centralization of authority over our healthiest, most dynamic, high-growth industry.

Two weeks ago, FCC chairman Julius Genachowski proposed new “net neutrality” regulations, which he will detail on October 22. Then on Friday, Yochai Benkler of Harvard’s Berkman Center published an FCC-commissioned report on international broadband comparisons. The voluminous survey serves up data from around the world on broadband penetration rates, speeds, and prices. But the real purpose of the report is to make a single point: foreign “open access” broadband regulation, good; American broadband competition, bad. These two tracks — “net neutrality” and “open access,” combined with a review of the U.S. wireless industry and other investigations — lead straight to an unprecedented government intrusion into America’s vibrant Internet industry.

Benkler and his team of investigators can be commended for the effort that went into what was no doubt a substantial undertaking. The report, however,

  • misses all kinds of important distinctions among national broadband markets, histories, and evolutions;
  • uses lots of suspect data;
  • underplays caveats and ignores some important statistical problems;
  • focuses too much on some metrics, not enough on others;
  • completely bungles America’s own broadband policy history; and
  • draws broad and overly-certain policy conclusions about a still-young, dynamic, complex Internet ecosystem.

The gaping, jaw-dropping irony of the report was its failure even to mention the chief outcome of America’s previous open-access regime: the telecom/tech crash of 2000-02. We tried this before. And it didn’t work! The Great Telecom Crash of 2000-02 was to that industry what the Great Panic of 2008 was to the financial industry: a deeply painful and historic plunge. In the case of the Great Telecom Crash, U.S. tech and telecom companies lost some $3 trillion in market value and one million jobs. The harsh open access policies (mandated network sharing, price controls) that Benkler lauds in his new report were a main culprit. But in Benkler’s 231-page report on open access policies, there is no mention of the Great Crash.

Although the report is subtitled “A review of broadband Internet transitions and policy from around the world” (emphasis added), Benkler does not correctly review the most pronounced and obvious transition in the very nation for which he would now radically remake policy. Benkler writes that the U.S. successfully tried “open access” in the late 1990s, but then

the FCC decided to abandon this mode of regulation for broadband in a series of decisions in 2001 and 2002. Open access has been largely treated as a closed issue in U.S. policy debates ever since. [p. 11]

This is false.

In fact, open access and other heavy-handed regulations were rolled back in a series of federal actions from 2003 to 2005 (e.g., ’03 triennial review, ’05 Brand-X Supreme Court decision). And then between 2006 and 2009, a number of states adopted long-overdue reforms of decades-old telecom laws written for the pre-Internet age. (The Supreme Court even decided a DSL “line-sharing” case as late as February 2009.) If we had in fact ended open access regulation in 2001, as Benkler claims, perhaps the telecom crash would have been less severe.

Benkler would like to show that America’s “decline” in broadband corresponds to its open-access roll-back. But the chronology doesn’t fit. In fact, American broadband took a very large hit in the open-access era. In 2002, at the end of this open access experiment, George Gilder estimated that South Korea — first to deploy fiber-to-the-X and 3G wireless — enjoyed some 40 times America’s per capita bandwidth. Our international “rank” may have appeared better at the time, but we were far worse off compared to our broadband “potential.” Because the U.S. invented most of the Internet technologies and applications, we were bound to lead at the outset, in the mid- and late-1990s. Bad policy stalled our rise for a time, and other nations moved forward. But today we are back on track. The U.S. broadband trajectory is steep. In the last few years, U.S. broadband has swiftly recovered and begun closing, or eliminating, the international gap. The depth of the “gap” then (2002) was far more pronounced — and relevant — than the “ranking differential” now (2009). Our wired and wireless broadband capabilities, services, and innovations now rival or exceed many of the world’s best.

In a June 2009 report, I attempted to quantify the rise of American broadband. I found that by the end of 2008, U.S. consumers enjoyed 717 terabits per second of communications power, or bandwidth, equal to 2.4 megabits per second on a per capita basis. Today, bandwidth per capita approaches 3 megabits per second, an almost 23-fold increase since the dawn of deregulation in 2003.
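For readers who want to check the math, here is a minimal back-of-the-envelope sketch in Python. The population figure is my assumption (roughly 304 million Americans at the end of 2008), not a number from the original report.

# Sanity check on the per-capita bandwidth figures above.
total_bandwidth_tbps = 717   # total U.S. consumer bandwidth, terabits per second
population = 304e6           # assumed 2008 U.S. population (not from the report)

# 1 Tbps = 1,000,000 Mbps, so convert and divide by population.
per_capita_mbps = total_bandwidth_tbps * 1e6 / population
print(f"Per-capita bandwidth, end of 2008: {per_capita_mbps:.2f} Mbps")  # ~2.36 Mbps

# If today's ~3 Mbps per capita is a 23-fold increase, the implied
# 2003 baseline was about 0.13 Mbps per capita.
print(f"Implied 2003 baseline: {3 / 23:.2f} Mbps")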

This rise was possible because U.S. info-tech investment in 2008, to pick the most recent year, was $455 billion, or 22% of all America’s capital investment. Communications service providers alone invested some $65 billion. And this is real investment. Investment also peaked in 2000, but much of that era’s investment was due to (1) Y2K preparations; (2) the concentrated build-out of long-haul optical networks (too much capacity at the time, but useful over the, pun intended, “long haul”); and (3) wasteful duplication by communications service providers employing regulatory arbitrage strategies designed to exploit the very type of open access policies Benkler urges again. This new U.S. surge in info-tech investment looks to be far more sustainable, based on real businesses and sound incentives, not some centralized policy to pick “winners and losers.”
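A quick derivation of the totals implied by those investment figures (the arithmetic here is mine, not from the original post):

# Implied totals from the 2008 investment figures above.
info_tech_capex = 455e9   # U.S. info-tech investment, dollars
share_of_total = 0.22     # info-tech share of all U.S. capital investment
csp_capex = 65e9          # communications service providers' investment

total_capex = info_tech_capex / share_of_total
print(f"Implied total U.S. capital investment: ${total_capex / 1e12:.2f} trillion")  # ~$2.07T
print(f"CSP share of info-tech investment: {csp_capex / info_tech_capex:.0%}")       # ~14%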

A number of other problems plague the Benkler report.

As George Ou explains, the international data on broadband “speed” and prices are highly suspect. Benkler repeats findings from the oft-cited OECD studies that show the U.S. is 15th in the world in broadband penetration. Benkler also supplements previous reports with apparently corroborating evidence of his own. But Scott Wallsten (now at the FCC) adjusted the OECD data to correct for household size and other factors and found the U.S. is actually more like 8th or 10th in broadband penetration. South Korea, the consensus global broadband leader, is just 6th on the broadband penetration metric. The U.S. and most other OECD nations, moreover, will reach broadband saturation in the next few years. Average download speeds in the U.S., Wallsten found, were in line with the mid-tier of OECD nations, behind Korea, Japan, Sweden, and the Netherlands. But Ou thinks there is yet more evidence to question the quality of the international data, and thus the argument that the U.S. is even this small distance behind:

US broadband providers deliver the closest actual throughput to what is being advertised, and it is well above 90% of the advertised rate. When we consider the fact that advertised performance is often quoting raw signaling speeds (sync rate), which include a typical 10% protocol overhead, while the measured speeds indicate payload throughput, US broadband providers are delivering almost exactly what they are advertising. By comparison, consumers in Japan are the least satisfied with their broadband service despite the fact that they have the fastest broadband service. This dissatisfaction is due to the fact that Japanese broadband customers get far less than what was advertised to them. The Ofcom data is further backed up by Akamai data, which provides the largest sample of real-world performance data in the world. When real-world speeds are compared, the difference is nowhere near the difference in advertised speeds.

Cable broadband companies in Japan don’t really have more capacity than their counterparts in the United States, yet they offer “160 Mbps” service. While that sounds even more impressive than the Fiber to the Home (FTTH) offerings in Japan or the U.S., it’s important to understand that the total capacity of the network may only be 160 or 320 Mbps shared between an entire neighborhood with 150 subscribers. Even if only 30 of those customers in a neighborhood take up the 160 Mbps service, just 2 of those customers can hog all the capacity on the network.

While Verizon may charge more per Mbps for FTTH service than their Japanese counterparts, they aren’t nearly as oversubscribed, and they usually deliver advertised speeds or even slightly higher than advertised speeds. More to the point, Verizon’s 50 Mbps service is a lot closer to the 100 Mbps services they offer in Japan than the advertised speeds suggest. Oversubscription is a very important factor, and it is the reason businesses will pay 20 times more per Mbps for dedicated DS3 circuits with no oversubscription on the access link. If Verizon were willing to oversell and exaggerate advertised bandwidth as much as FTTH operators in Japan, they could easily provide “100” Mbps or even “1000” Mbps FiOS service over their existing GPON infrastructure. The risk in doing this is that Verizon might lose some customer satisfaction and trust when the actual performance (while higher than current levels) doesn’t live up to the advertised speeds.

Even absent these important corrections and nuances, we should understand that in today’s world — with today’s computers, today’s applications, today’s media content, and, importantly, the rest of today’s Internet infrastructure, from data farms and content delivery networks to peering points and all the latency-inducing hops, buffers, and queues in between — there is no appreciable difference for even a high-end consumer between 100 Mbps and 50 Mbps. A Verizon FiOS customer with 50 Mbps thus does not suffer a miserable “half” the capability of a Korean with “100 Mbps” service and may in fact enjoy better overall service. But some of the metrics in the Benkler report suggest the “100 Mbps” user is 100% better off.
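The oversubscription point is easy to quantify. Here is a minimal sketch using hypothetical numbers chosen to mirror the Japanese cable example in the quote above; the capacity, subscriber count, and overhead figures are illustrative assumptions, not measured data.

# Hypothetical shared cable segment, mirroring the example above.
shared_capacity_mbps = 320   # total capacity of the neighborhood segment (assumed)
advertised_mbps = 160        # speed advertised to each subscriber
active_subscribers = 30      # subscribers on the segment (assumed)

# Worst case: everyone transmits at once and splits the pipe evenly.
fair_share = shared_capacity_mbps / active_subscribers
print(f"Worst-case fair share: {fair_share:.1f} Mbps")   # ~10.7 Mbps

# Oversubscription ratio: bandwidth promised vs. bandwidth that exists.
ratio = advertised_mbps * active_subscribers / shared_capacity_mbps
print(f"Oversubscription ratio: {ratio:.0f}:1")          # 15:1

# Separate effect: advertised sync rate vs. payload throughput, given the
# ~10% protocol overhead Ou mentions for U.S. providers.
sync_rate_mbps = 50
print(f"Payload at 10% overhead: {sync_rate_mbps * 0.9:.0f} Mbps")  # 45 Mbps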

Another important market distinction: the U.S. has by far the largest cable TV presence of any nation reviewed. Cable has held a larger broadband share than DSL plus fiber since the late 1990s. No other nation has a market so evenly divided between two substantial technology/service platforms. This unique environment makes many of the Benkler comparisons less relevant and the policy points far less salient. In a market where the incumbent telecom provider is regulated and subject to far less intermodal competition (such as that from the ubiquitous U.S. cable MSOs), the regulated company earns a range of guaranteed returns on various products and may also be ordered to deploy certain network technologies by the regulator.

In France, just three DSL operators account for 97-98% of all DSL subscribers, and DSL subscribers make up nearly all of France’s broadband subscribers. So how the open access regulations Benkler lauds lead to a better outcome is unclear. Presumably, the key virtue of open access is “more competition” at the service, or ISP, layer of the network. But if there’s not more competition, what is the open access mechanism that leads to better outcomes? What is the link between “open access” policies and either investment or prices? And thus the link between policy and “penetration” or “speed” or “quality”? It is easy to see how open access can by force drive down prices for existing products and technologies. But how open access incentivizes investment in next-generation networks is never explained. Several steps are missing from Benkler’s analysis.

Perhaps most notably, in the end Benkler does not address actual innovations, applications, consumer services, devices, and Internet traffic. These are the real measures of Internet usage, health, progress, and the ability to plug in and innovate. The U.S. achieves in all these areas and leads in most. Huge numbers of broadband-intensive apps and services originated and blossomed in the U.S.: for example, Google, YouTube, blogs, Hulu, Vonage and other VoIP services, iTunes, Flash apps, the iPhone, the BlackBerry, Wikipedia, Netflix, content delivery networks (CDNs), cloud computing, and Facebook. The U.S. generates substantially more Internet traffic than Europe. How is all this possible if the U.S. broadband arena is so backward?

Let’s take a deep breath before we let Washington topple another industry — a healthy, growing one at that.

— Bret Swanson

(cross-posted from Maximum Entropy)

]]>
https://techliberation.com/2009/10/19/preparing-to-pounce-d-c-angles-for-another-industry/feed/ 27 22720