Philosophy & Cyber-Libertarianism – Technology Liberation Front (https://techliberation.com): Keeping politicians' hands off the Net & everything else related to technology

Panel Video: How Should We Regulate the Digital World & AI? (September 6, 2024)
https://techliberation.com/2024/09/06/panel-video-how-should-we-regulate-the-digital-world-ai/

The Technology Policy Institute has posted the video of my talk at the 2024 Aspen Forum panel on “How Should we Regulate the Digital World?” My remarks run from 33:33–44:12 of the video. I also elaborate briefly during Q&A.

My remarks at this year’s TPI Aspen Forum panel were derived from my R Street Institute essay, “The Policy Origins of the Digital Revolution & the Continuing Case for the Freedom to Innovate,” which sketches out a pro-freedom vision for the Computational Revolution.


We Need to Get All the Smart People in a Room & Have a Conversation (October 16, 2022)
https://techliberation.com/2022/10/16/we-need-to-get-all-the-smart-people-in-a-room-have-a-conversation/

For my latest Discourse column (“We Really Need To ‘Have a Conversation’ About AI … or Do We?”), I discuss two commonly heard lines in tech policy circles.

1) “We need to have conversation about the future of AI and the risks that it poses.”

2) “We should get a bunch of smart people in a room and figure this out.”

I note that, if you’ve read enough essays, books, or social media posts about artificial intelligence (AI) and robotics—among other emerging technologies—then chances are you’ve stumbled on variants of these two arguments many times over. They are almost impossible to disagree with in theory, but when you start to investigate what they actually mean in practice, they are revealed to be largely meaningless rhetorical flourishes that threaten to hold up meaningful progress on the AI front. I go on to argue in my essay that:

I’m not at all opposed to people having serious discussions about the potential risks associated with AI, algorithms, robotics or smart machines. But I do have an issue with: (a) the astonishing degree of ambiguity at work in the world of AI punditry regarding the nature of these “conversations,” and, (b) the fact that people who are making such statements apparently have not spent much time investigating the remarkable number of very serious conversations that have already taken place, or which are ongoing, about AI issues. In fact, it very well could be the case that we have too many conversations going on currently about AI issues and that the bigger problem is instead one of better coordinating the important lessons and best practices that we have already learned from those conversations.

I then unpack each of those lines and explain what is wrong with them in more detail. One thing that always bugs me about the “we need to have a conversation” aphorism is that those uttering it absolutely refuse to be nailed down on the specifics, like:

  1. What is the nature or goal of that conversation?
  2. Who is the “we” in this conversation?
  3. How is this conversation to be organized and managed?
  4. How do we know when the conversation is going on, or when it is sufficiently complete such that we can get on with things?
  5. And, most importantly, aren’t you implicitly suggesting that we should ban or limit the use of that technology until you (or the royal “we”) are somehow satisfied that the conversation is over or has yielded satisfactory answers?

The other commonly heard line — “We need to get a bunch of smart people in a room and figure this out” — can be equally infuriating, both for its lack of specifics (which people? what room? where and when?) and because the Very Smartest People on these issues have already been meeting in countless rooms across the globe for many years. In an earlier essay, I documented the astonishing growth of AI governance frameworks, ethical best practices and professional codes of conduct: “The amount of interest surrounding AI ethics and safety dwarfs all other fields and issues. I sincerely doubt that ever in human history has so much attention been devoted to any technology as early in its lifecycle as AI.”

I also note that, practically speaking, “the most important conversations society has about new technologies are those we have every day when we all interact with those new technologies and with one another. Wisdom is born from experiences, including activities and interactions involving risk and the possibility of mistakes. This is how progress happens.” And I conclude by noting how:

We won’t ever be able to “have a conversation” about a new technology that yields satisfactory answers for some critics precisely because the questions just multiply and evolve endlessly over time, and they can only be answered through ongoing societal interactions and problem-solving. But we shouldn’t stop life-enriching innovations from happening just because we don’t have all the answers beforehand.

Anyway, I invite you to head over to Discourse and read the entire essay. In the meantime, I propose we get all the smart people in a room and have a conversation about how these two lines came to dominate tech policy discussions before they end up doing real damage to human prosperity! It’s the ethical thing to do if you really care about the future.

6 Ways Conservatives Betray Their First Principles with Online Child Safety Regulations (September 20, 2022)
https://techliberation.com/2022/09/20/6-ways-conservatives-betray-their-first-principles-with-online-child-safety-regulations/

I’ve been floating around in conservative policy circles for 30 years and I have spent much of that time covering media policy and child safety issues. My time in conservative circles began in 1992 with a 9-year stint at the Heritage Foundation, where I launched the organization’s policy efforts on media regulation, the Internet, and digital technology. Meanwhile, my work on child safety has spanned 4 think tanks, multiple blue ribbon child safety commissions, countless essays, dozens of filings and testimonies, and even a multi-edition book.

During this three-decade run, I’ve tried my hardest to find balanced ways of addressing some of the legitimate concerns that many conservatives have about kids, media content, and online safety issues. Raising kids is the hardest job in the world. My daughter and son are now off at college, but the last twenty years of helping them figure out how to navigate the world and all its challenges were filled with difficulties. This was especially true because my daughter and son faced completely different challenges when it came to media content and online interactions. Simply put, there is no one-size-fits-all playbook when it comes to raising kids or addressing concerns about healthy media interactions.

Something Must Be Done!

My personal approach, as I summarized in my book on these issues, was to first and foremost do everything in my power to (a) keep an open mind about new media content and platforms, and (b) ensure an open line of ongoing communication with my kids about the issues they might be facing. Shutting down conversation or calling for others to come in and save the day were the worst two options, in my opinion. As I summarized in my book, “At the end of the day, there is simply no substitute for talking to our children in an open, loving, and understanding fashion about the realities of this world, including the more distasteful bits.” This was my Parental Prime Directive, if you will. I just always wanted to make sure that my kids felt like they could talk to me about their issues, no matter how varied, horrible, or heart-breaking those problems might be.

When talking with other parents through the years, I’ve heard about their own unique concerns and struggles. Every family faces different challenges because no two kids or situations are alike. Moreover, the challenges can feel overwhelming in our modern world of information abundance, which is flush with ubiquitous communications and media options. Sometimes these parental frustrations can fester and grow into a sort of rage until you finally hear folks utter that famous phrase: Something must be done! And that “something” is often some sort of government regulation “for the children.”

Again, I get it. When all your best efforts to help or protect your kids don’t seem to work according to plan, it’s only natural to call for help. But there are very serious problems associated with calling on government for that help. When legislators and regulators are asked to play the role of National Nanny, it comes with all the same baggage that accompanies many other efforts by the government to intervene in our lives or control what people or organizations can say or do.

Conservative Contradictions

These are particularly sensitive issues for many conservatives, both because conservatives tend to have more heightened concerns about media content and online safety issues, and also because the steps they often recommend to address these issues can quickly come into conflict with their own first principles.

Let me run through six ways that support for media content controls and child safety regulations can sometimes run afoul of conservative principles.

1) It’s a rejection of personal responsibility

Again, I understand all too well how hard parenting can be. But that does not mean we should abdicate our parental responsibilities to the State. Conservatives have spent decades fighting government when it comes to broken schools and the supposed brainwashing many kids get in them. The rallying cry of conservatives has long been: Let us have a greater say in how we raise and educate our children because the State is failing us or betraying our values.

Thus, when conservatives suggest that the State should be making decisions for us as it pertains to anything the government says is a “child safety” issue, there is some serious cognitive dissonance going on there. In his humorous Devil’s Dictionary, Ambrose Bierce jokingly defined responsibility as, “A detachable burden easily shifted to the shoulders of God, Fate, Fortune, Luck or one’s neighbor. In the days of astrology it was customary to unload it upon a star.” For parental responsibility to actually mean something, it has to be more than a “detachable burden” that we unload upon government.

2) It’s an embrace of the administrative state & arbitrary rule by unelected bureaucrats

Beyond the classroom, conservatives have long been concerned about the specter of massive administrative agencies and armies of unelected bureaucrats controlling our lives from the shadows. I’ve spent decades working with conservative organizations and scholars trying to bring the administrative state under some control and scale back its enormous power, arbitrary edicts, and costly burdens. Over-criminalization has become such a problem that, according to the Heritage Foundation, “regulatory offenses… have proliferated to the point that, literally, nobody knows how many federal criminal regulations exist today.” We’re all criminals of some sort in the eyes of the modern regulatory state.

Yet, when conservatives advocate expanding the administrative state through new “online safety” regulations, they are making the over-criminalization problem worse, including by treating our own children as guilty parties for simply trying to access the primary media platforms of their generation and interact with their friends there. For example, calls to ban all teens from social media until they’re 18 would create the most massive “forbidden fruit” nightmare in American history, with every teen suddenly becoming a criminal actor and working to tunnel around bans using the same sort of VPNs and evasion technologies that people in China and other repressive nations use to get around overbearing speech policies. [See: “Again, We Should Not Ban All Teens from Social Media”]

Needless to say, all this regulation and bureaucratic empowerment would have massive negative externalities for online freedom more generally as the era of “permissionless innovation” is replaced by a new age of permission-slip regulation.

3) It’s a rejection of the First Amendment & free speech rights

Conservatives have spent many decades pushing for greater First Amendment-based freedoms as it pertains to religious liberty and organizational/corporate speech issues. Thus, when conservatives seek to undermine free speech principles and jurisprudence in the name of child safety, it could undo everything conservatives have been fighting to accomplish in those other contexts.

Conservatives are understandably upset with some social media platforms for being too over-zealous with certain types of speech takedowns or de-platformings. But two wrongs don’t make a right, and they should not be calling on Big Government to be imposing its own editorial judgments in place of private actors. [See: “The Great Deplatforming of 2021“ and “When It Comes to Fighting Social Media Bias, More Regulation Is Not the Answer.“]

4) It’s a rejection of property rights and freedom more generally

Related to the previous two points, conservatives have long upheld the sanctity of property rights in many different contexts. This includes the property rights that private establishments enjoy under the Constitution to generally decide how to structure their operations, who they will do business with, and how they will do so. Private organizations and religious institutions possess not only free speech rights in this regard, but property and contractual rights, too.

But when it comes to “child safety” mandates, some conservatives would toss all this out the window and undermine those rights, replacing them with burdensome regulatory mandates that tell private parties how to conduct their affairs. Again, there’s a lot of cognitive dissonance going on here, and it could bring serious blowback for conservatives when the property and contractual rights of other people or organizations are undermined on similar grounds.

5) It’s an embrace of frivolous lawsuits & the trial lawyers that bring them

The last time I checked, trial lawyers were not exactly the most conservative-friendly constituency. For many decades, conservatives have looked to advance tort reform, limit junk science and frivolous lawsuits, and make sure that the courts don’t engage in excessive judicial activism.

Unfortunately, many of the child safety regulations being proposed today would empower the regulatory state and trial lawyers at the same time. Many of the bills being floated open the door to open-ended litigation and potentially punishing liability for private platforms — and not just against deep-pocketed “Big Tech” companies. The fact is, once conservatives open the litigation floodgates based on amorphous accusations of potential online safety harms, they will be empowering the tort bar (one of the biggest supporters of the Democratic Party, no less) to launch a legal jihad against any and every media platform out there. Good luck putting that genie back in the bottle once you unleash it.

6) It’s an embrace of the same moral panic arguments your parents leveled against you

How quickly we forget the accusations our own parents and others leveled against us as children. Remember when video games were going to make us a lost generation of murderous youth? Or when rap and rock-and-roll music were going to send us straight to hell? Today, those kids are all grown up and trying to tell us that they are fine but it’s this latest generation that is doomed. It’s just an endless generational cycle of moral panics. [See: “Why Do We Always Sell the Next Generation Short?” and “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics”] Today’s conservatives need to remember that they, too, were once kids and somehow muddled through to adulthood.

The “3-E” Approach Is the Better Answer

At this point, some of the people who’ve read this far are screaming at the screen: “So, are you saying we should just do nothing!?”

Absolutely not. But it is important that we consider less onerous and more practical ways to address these challenging issues without falling prey to Big Government gimmicks that would undermine other important principles. We should start by acknowledging that there are no easy fixes or silver-bullet solutions. The plain truth of the matter is that the best solutions here can seem messy and unsatisfying to many because they require enormous ongoing efforts to mentor and assist our kids at a far deeper level than some folks are comfortable with.

For example, it is just insanely uncomfortable to have to speak with your kids about online bullying or harassment, pornography, violence in movies and games, hate speech, and so on. And I haven’t even mentioned the hardest things to talk to kids about: the daily news of the real world (wars, violence, tragic accidents, famines, and the like). Honestly, the hardest conversations I’ve had to have with my kids were those about school shootings. By comparison, many other discussions about online content and interactions were much easier. To the extent that we’re attempting to measure and address negative media effects, I firmly believe that there are few things in this world more horrifying to kids — or harder to talk with them about — than the first 10 minutes of what’s on cable news each hour of the day.

Regardless, whether we’re talking about the potential “harms” of mass media or online content, we cannot pretend there exists a simple solution to any of it. Here’s the better approach.

I recently authored a study for the American Enterprise Institute on “Governing Emerging Technology in an Age of Policy Fragmentation and Disequilibrium.” It was my attempt to sketch out a flexible, pragmatic, bottom-up set of governance principles for modern technology platforms and issues. In that report, I noted how “[t]he First Amendment constitutes a particularly high barrier to the use of hard law in the United States,” and that court challenges were likely to continue to block many of the regulatory efforts being floated today, just as has been the case countless times before in recent decades. Thus, we need to have backup approaches to online safety beyond one-size-fits-all regulatory Hail Mary passes.

I have described that backup plan as the “3-E” approach or “layered approach” to online safety:

  • Empowerment of parents: Parental controls cannot solve all the world’s problems. It’s better to view them as helpful speed bumps or emergency alerts for when things are going badly for your child. In the old days, we placed a lot of faith in filtering, and that still has a role along with other tools that help place some reasonable limits not only on content but also overall consumption. But the best types of parental empowerment are those that force conversations between parents and kids by allowing reasonable monitoring to happen that is scaled by age (as in more limits for younger kids until they are gradually relaxed over time). And other carrot-and-stick tools and approaches are incredibly useful in helping parents place smart limits on youth activity and overall consumption.
  • Education of youth: Education is the strategy with the most lasting impact for online safety. Education and digital literacy provide skills and wisdom that can last a lifetime. Specifically, education can help teach both kids (and adults!) how to behave in — or respond to — a wide variety of situations. Building resiliency and encouraging healthy interactions is the goal.
  • Enforcement of existing laws: There are many sensible and straightforward laws already in place that address more concrete types of harm and harassment. And we have lots of laws pertaining to fraud and unfair and deceptive practices. Sometimes these rules can be challenging (and time-consuming) to enforce, but they constitute an existing backstop that can handle most worst-case scenarios when other less-restrictive steps fall short. And we should certainly tap these existing remedies before advancing unworkable new regulatory regimes.

I noted in my AEI study that, between 2000 and 2010, six major online-safety task forces or blue-ribbon commissions were formed to study online-safety issues and consider what should be done to address them. Each of them recommended some variant of the “3-E” approach as they encouraged a variety of best practices, educational approaches, and technological-empowerment solutions to address various safety concerns. Self-regulatory codes, private content-rating systems, and a wide variety of different parental-control technologies all proliferated during this period. Many multi-stakeholder initiatives and other organizations were also formed to address governance issues collaboratively. There are countless groups doing important work on this front today, including my old friends at the Family Online Safety Institute (FOSI) among many others.

These organizations push for a layered approach to online safety and work closely with educators, child development experts, and other academics and activists to find workable solutions to new online safety challenges as they arise. Their work is never done, and at times it can feel overwhelming. But, again, it’s the nature of the task at hand. We all must work together to continuously devise new and better approaches to addressing these challenges, because they will be endless. But let’s please not expect that we can unload these responsibilities on government and expect regulators to somehow handle it for us.

Do the Ends Justify the Means When it Comes to Media & Content Control?

I could be wasting my breath here because I’ve been attempting to appeal to conservative principles that may be rapidly disappearing from the modern conservative movement. Donald Trump radically disrupted everything in American politics, but especially the Republican Party. Many so-called national conservatives now live by Trump’s central operating principle: The ends justify the means. The ends are “owning the libs” in any way possible. And “the libs” include not only anyone on the Left of the political spectrum, but even those individuals and institutions that Trumpian conservatives believe are “the enemy” and controlled by “liberal interests.” By their definition, this now includes virtually all large media and technology companies and platforms. Thus, when we turn to the means, it’s increasingly the case that just about anything goes — including many traditional conservative principles.

To see how far we’ve come, recall what President Ronald Reagan said 35 years ago when vetoing an effort to reinstate the Fairness Doctrine. “History has shown that the dangers of an overly timid or biased press cannot be averted through bureaucratic regulation, but only through the freedom and competition that the First Amendment sought to guarantee,” he said. At the time, President Reagan was confronted with some of the same arguments we hear today about media being too biased or conservatives not getting a fair shake. But he called upon his fellow conservatives to reject the idea that Big Government was the solution to such problems.

Unfortunately, Mr. Trump and some of his most loyal followers and even some major conservative groups today have largely given up on this logic and instead embraced regulation. While Trumpian conservatives love to decry everyone they oppose as “communists,” ironically it is this same group that is embracing a sort of communications collectivism as it pertains to modern media control. In the Trumpian worldview, media and tech platforms are useful only to the extent they carry out the will of the party — or at least the man on top of it.

These national conservatives have made a horrible miscalculation. Feeling aggrieved by Big Tech “bias,” or just feeling overwhelmed by things they don’t like about online platforms, they’ve decided that two wrongs make a right. In reality, two political wrongs never make a right, but they almost always combine to make government a lot bigger and more powerful.

It’s an incredibly naïve gamble almost certainly destined to fail, but they should ask themselves what it means if it works. This endless ratcheting effect will result in comprehensive state control of most channels of communications and information dissemination. Is this a game that you really think you can play better than the Lefties?

I’ll close by returning to one of Reagan’s favorite jokes. He always used to say that, “The nine most terrifying words in the English language are: I’m from the government and I’m here to help.” I would suggest that an even scarier version of that line would be, “We’re from the government and we’re here to help you parent your kids.”

Don’t let it be you uttering that line.

______________

Additional Reading

· Adam Thierer, “Again, We Should Not Ban All Teens from Social Media”

· Adam Thierer, “Why Do We Always Sell the Next Generation Short?”

· Adam Thierer, “The Classical Liberal Approach to Digital Media Free Speech Issues”

· Adam Thierer, “Confessions of a ‘Vidiot’: 50 Years of Video Games & Moral Panics”

· Adam Thierer, “Left and right take aim at Big Tech — and the First Amendment”

· Adam Thierer, “When It Comes to Fighting Social Media Bias, More Regulation Is Not the Answer”

· Adam Thierer, “Ongoing Series: Moral Panics / Techno-Panics”

· Adam Thierer, “No Goldilocks Formula for Content Moderation in Social Media or the Metaverse, But Algorithms Still Help”

· Adam Thierer, “FCC’s O’Rielly on First Amendment & Fairness Doctrine Dangers”

· Adam Thierer, “Conservatives & Common Carriage: Contradictions & Challenges”

· Adam Thierer, “The Great Deplatforming of 2021”

· Adam Thierer, “A Good Time to Re-Read Reagan’s Fairness Doctrine Veto”

· Adam Thierer, “Sen. Hawley’s Radical, Paternalistic Plan to Remake the Internet”

· Adam Thierer, “How Conservatives Came to Favor the Fairness Doctrine & Net Neutrality”

· Adam Thierer, “Sen. Hawley’s Moral Panic Over Social Media”

· Adam Thierer, “The White House Social Media Summit and the Return of ‘Regulation by Raised Eyebrow’”

· Adam Thierer, “The Surprising Ideological Origins of Trump’s Communications Collectivism”

· Adam Thierer, Parental Controls & Online Child Protection: A Survey of Tools and Methods (2009).

VIDEO: My London Talk about the Future of AI Governance (June 13, 2022)
https://techliberation.com/2022/06/13/video-my-london-talk-about-the-future-of-ai-governance/

On Thursday, June 9, it was my great pleasure to return to my first work office at the Adam Smith Institute in London and give a talk on the future of innovation policy and the governance of artificial intelligence. James Lawson, who is affiliated with the ASI and wrote a wonderful 2020 study on AI policy, introduced me and also offered some remarks. Among the issues discussed:

  • What sort of governance vision should govern the future of innovation generally and AI in particular: the “precautionary principle” or “permissionless innovation”?
  • Which AI sectors are witnessing the most exciting forms of innovation currently?
  • What are the fundamental policy fault lines in the AI policy debates today?
  • Will fears about disruption and automation lead to a new Luddite movement?
  • How can “soft law” and decentralized governance mechanisms help us solve pressing policy concerns surrounding AI?
  • How did automation affect traditional jobs and sectors?
  • Will the European Union’s AI Act become a global model for regulation and will it have a “Brussels Effect” in terms of forcing innovators across the world to come into compliance with EU regulatory mandates?
  • How will global innovation arbitrage affect the efforts by governments in Europe and elsewhere to regulate AI innovation?
  • Can the common law help address AI risk? How is the UK common law system superior to the US legal system?
  • What do we mean by “existential risk” as it pertains to artificial intelligence?

I have a massive study in the works addressing all these issues. In the meantime, you can watch the video of my London talk here. And thanks again to my friends at the Adam Smith Institute for hosting!

The Future of Progress Studies (May 1, 2022)
https://techliberation.com/2022/05/01/the-future-of-progress-studies/

If you haven’t yet had the chance to check out the new Progress Forum, I encourage you to do so. It’s a discussion group for progress studies and all things related to it. The Forum is sponsored by The Roots of Progress. Even though the Forum is still in its pre-launch phase, there are already many interesting threads worth checking out. It was my honor to contribute one of the first on the topic, “Where is ‘Progress Studies’ Going?” It’s an effort to sort through some of the questions and challenges facing the Progress Studies movement in terms of focus and philosophical grounding. I thought I would just reproduce the essay here, but I encourage you to jump over to the Progress Forum to engage in discussion about it, or the many other excellent discussions happening there on other issues.

________________

Where is “Progress Studies” Going? by Adam Thierer

What do we mean by “Progress Studies” and how can this field of study be advanced? I’ve been thinking about that question a lot since Patrick Collison and Tyler Cowen published their 2019 manifesto in The Atlantic on why “We Need a New Science of Progress.” At present, there is no overarching “unified field theory” of what Progress Studies entails or what underpins it, and that may be holding up progress on Progress Studies. I recently attended an important conference on the “Moral Foundations of Progress Studies,” co-hosted by The Roots of Progress and the Salem Center at UT Austin, where I discovered that many others were grappling with these same issues.

While a broad range of people are interested in Progress Studies, their moral priors differ, sometimes significantly. For example, the UT Austin conference included scholars from diverse disciplines (philosophy, psychology, economics, political science, history, and others) whose thinking was rooted in different philosophical traditions (utilitarianism, effective altruism, individualism, and various hybrids). Everyone shared the goal of advancing human well-being, but participants had different conceptions of the moral foundations of well-being, and even some disagreement about what well-being meant in concrete terms. There were also differing perspectives about what the “studies” part of Progress Studies should entail. Specifically, does it include progress advocacy, including the potential for specific policy recommendations?

Comprehension vs. Advocacy

Part of the confusion over the nature and goals of Progress Studies can be traced back to Collison and Cowen’s foundational essay. On one hand, their goal was progress comprehension. “Progress itself is understudied,” Collison and Cowen argued. They lamented that “there is no broad-based intellectual movement focused on understanding the dynamics of progress.”

But Collison and Cowen went further. Their goal was not merely to inspire the development of a field of study that could give us a better understanding of the prerequisites of progress, but also to formulate a plan for advancing progress. They argued that “mere comprehension is not the goal,” and advocated for “the deeper goal of speeding it up.” They went on to say, “the implicit question is how scientists [and others] should be acting” and that Progress Studies should be viewed as “closer to medicine than biology: The goal is to treat, not merely to understand.” The presupposition here is that progress is important and that we need to take steps to get a lot more of it. Again, we can think of this part of Progress Studies as progress advocacy. And advocacy can entail both championing progress generally and making specific policy recommendations.

This raises an interesting question we debated at the UT Austin conference: Can you study something and advocate for it at the same time? Some felt you really cannot separate them, while others believed that the broader questions about how progress has worked could be kept separate from any advocacy efforts. Of course, this same tension between comprehension and advocacy comes up in many other fields.

What Progress Studies Can Learn from STS

In this sense, Progress Studies might learn some important lessons by examining the older but loosely related field of Science and Technology Studies (STS). STS incorporates a wide variety of mostly “soft science” academic disciplines, such as law, philosophy, sociology, and anthropology. These scholars analyze the relationship between technology, society, culture, and politics.

One conclusion from studying STS is obvious: comprehension and advocacy frequently get blurred. Many of the STS scholars who engage in critical studies of the history of technology seamlessly transition into anti-technology advocates, even as many of them claim they are “just studying” the issues. As I’ve noted elsewhere:

When thinking about technology, STS scholars commonly employ words like “anxiety,” “alienation,” “degradation,” and “discrimination.” Consequently, most of them suggest that the burden of proof lies squarely on scientists, engineers, and innovators to prove that their ideas and inventions will bring worth to society before they are deployed. In other words, STS scholars generally fall in the precautionary principle camp, and their policy prescriptions have grown increasingly radical over time.

Meanwhile, as I discussed in my latest book, many STS scholars describe themselves as “humanists” while implicitly suggesting that those who promote technological progress are somehow callous oafs who only care about the cold calculus of profit-seeking and creating shiny new gadgets we don’t need.

While some STS scholars continue to do important and largely objective work, many others routinely show their more radical leanings in books, essays, and social media posts. Most worrying is their newfound love of Luddism, as they spin revisionist histories of “Why Luddites Matter,” insisting that “There’s Nothing Wrong with Being a Luddite,” and that “I’m a Luddite. You Should Be One Too.” Neil Richards, a law professor and leading STS scholar, declares bluntly on Twitter: “Less metaverse, less crypto, less disruptive innovation. More regulation, more ethics, more humanity.” In other words, public policy defaults should be set squarely to the Precautionary Principle and anyone opposed to that is unethical and anti-human. Taken to the extreme, STS scholars marry up this Luddite revisionism with the retrograde philosophy of “degrowth” and produce book chapters with titles like, “Methodological Luddism: A Concept for Tying Degrowth to the Assessment and Regulation of Technologies.”

The Progress Studies movement might consider framing its work as a response to the growing extremism of the STS movement. STS scholars have become so remarkably hostile to the very notion that science and technology are central to human advancement that the field might today better be labeled  Anti-Science & Technology Studies. Yet, these are the scholars that dominate many academic departments where students are learning about technological progress. Progress Studies scholars can push back against that radicalism and offer level-headed, empirical responses to it.

Ensuring a Big Tent

To improve its chances of success, the Progress Studies movement should seek to broaden its appeal by avoiding a dogmatic party line on its moral foundations while ensuring that multiple disciplines and viewpoints are incorporated into it.

In terms of philosophical underpinnings, those interested in Progress Studies can take different approaches to the moral foundations of progress and human well-being. Many philosophers get frustrated when others fail to hammer out all the detailed nuances of the metaphysics, epistemology, and ethics of these matters. I understand that urge, but I’ve now spent over 30 years covering technology policy and have been constantly surprised by how many people can come together and agree on a broad set of principles about the importance of progress without sharing a common philosophical framework.

The same is true as it pertains to policy prescriptions. We need to ensure a “big tent” in this way, too. It is already the case that many people engaged in Progress Studies have very different perspectives on issues like intellectual property and industrial policy, for example. I have many friends on different sides of these issues. Importantly, there are not even clear sides on these issues but rather a very broad spectrum of viewpoints. Progress Studies scholars will likely always disagree on the finer points of both types of “IP” policy. Nonetheless, they can remain more unified in stressing the common goal of moving the needle on progress in a positive direction and highlighting the continuing importance of flexible experimentation with policies aimed at enhancing innovation and growth.

To the extent there is any litmus test for the Progress Studies movement, that’s it:  advancing opportunities for innovation and growth is paramount. Regardless of how one grounds their moral philosophy, or goes about constructing a theory of rights, many people can agree that granting humans the freedom to explore, experiment, and be entrepreneurial has important benefits for individuals, families, organizations, and entire nations. Openness to change is what unifies us. Stagnation and “steady state” thinking—and the Precautionary Principle-based policies that flow from such reasoning—are the enemy. 

Thus, the Progress Studies movement can focus on both studying progress and advancing it at the same time, even if some will devote more effort to one priority than the other. And we shouldn’t forget that these two objectives are reinforcing: Comprehension informs advocacy and vice-versa. Progress is a never-ending process of trial-and-error. It’s all about learning by doing. We try, we fail, we learn, and we try again. This is as true for the individuals attempting to make progress in the real-world as it is for scholars studying it and seeking to promote it.

Let us get on with this important work, regardless of what motivates us to do it.

]]>
https://techliberation.com/2022/05/01/the-future-of-progress-studies/feed/ 5 76980
Samuel Florman & the Continuing Battle over Technological Progress https://techliberation.com/2022/04/06/samuel-florman-the-continuing-battle-over-technological-progress/ https://techliberation.com/2022/04/06/samuel-florman-the-continuing-battle-over-technological-progress/#comments Wed, 06 Apr 2022 18:37:45 +0000 https://techliberation.com/?p=76961

Almost every argument against technological innovation and progress that we hear today was identified and debunked by Samuel C. Florman a half century ago. Few writers since have mounted a more powerful case for the importance of innovation to human flourishing than Florman did throughout his lifetime.

Chances are you’ve never heard of him, however. As prolific as he was, Florman did not command as much attention as the endless parade of tech critics whose apocalyptic predictions grabbed all the headlines. An engineer by training, Florman became concerned about the growing criticism of his profession throughout the 1960s and 70s. He pushed back against that impulse in a series of books over the next two decades, including most notably: The Existential Pleasures of Engineering (1976), Blaming Technology: The Irrational Search for Scapegoats (1981), and The Civilized Engineer (1987). He was also a prolific essayist, penning hundreds of articles for a wide variety of journals, magazines, and newspapers beginning in 1959, and for sixteen years he wrote a regular column for MIT Technology Review.

Florman’s primary mission in his books and many of those essays was to defend the engineering profession against attacks emanating from various corners. More broadly, as he noted in a short autobiography on his personal website, Florman was interested in discussing, “the relationship of technology to the general culture.”

Florman could be considered a “rational optimist,” to borrow Matt Ridley’s notable term [1] for those of us who believe, as I have summarized elsewhere, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment.[2] Rational optimists are highly pragmatic and base their optimism on facts and historical analysis, not on dogmatism or blind faith in any particular viewpoint, ideology, or gut feeling. But they are unified in the belief that technological change is a crucial component of moving the needle on progress and prosperity.

Florman’s unique contribution to advancing rational optimism came in the way he itemized the various claims made by tech critics and then powerfully debunked each one of them. He was providing other rational optimists with a blueprint for how to defend technological innovation against its many critics and criticisms. As he argued in The Civilized Engineer, we need to “broaden our conception of engineering to include all technological creativity.”[3] And then we need to defend it with vigor.

In 1982, the American Society of Mechanical Engineers appropriately awarded Florman the distinguished Ralph Coats Roe Medal for his “outstanding contribution toward a better public understanding and appreciation of the engineer’s worth to contemporary society.” Carl Sagan had won the award the previous year. Alas, Florman never attained the same degree of renown as Sagan. That is a shame because Florman was as much a philosopher and a historian as he was an engineer, and his robust thinking on technology and society deserves far greater attention. More generally, his plain-spoken style and straightforward defense of technological progress continues to be a model for how to counter today’s techno-pessimists.

This essay highlights some of the most important themes and arguments found in Florman’s writing and explains its continuing relevance to the ongoing battles over technology and progress.

What Motivates The “Antitechnologists”?

Florman was interested in answering questions about what motivates engineers and their critics alike. He dug deep into psychology and history to figure out what makes these people tick. Who are engineers, and why do they do what they do? That was his primary question, and we will turn to his answers momentarily. But he also wanted to know what drove the technology critics to oppose innovation so vociferously.

Florman’s most important contribution to the history of ideas lies in his 6-part explanation of “the main themes that run through the works of the antitechnologists.”[4] Florman used the term “antitechnologists” to describe the many different critics of engineering and innovation. He recognized that the term wasn’t perfect and that some people he labelled as such would object to it. Nevertheless, because they offer no umbrella label for their movement or way of thinking, Florman noted that opposition to, or general discomfort with, technology was what motivated these critics. Hence, the label “antitechnologists.”

Florman surveyed a wide swath of technological critics from many different disciplines—philosophy, sociology, law, and other fields. He condensed their main criticisms into six general points:

  • Technology is a “thing” or a force that has escaped from human control and is spoiling our lives.
  • Technology forces man to do work that is tedious and degrading.
  • Technology forces man to consume things that he does not really desire.
  • Technology creates an elite class of technocrats, and so disenfranchises the masses.
  • Technology cripples man by cutting him off from the natural world in which he evolved.
  • Technology provides man with technical diversions which destroy his existential sense of his own being.[5]

No one had crafted such a taxonomy of the tech critics’ complaints before Florman did so in 1976, and no one has done it better since. In fact, it is astonishing how well Florman’s list continues to identify what motivates modern technology critics. New technologies have come and gone, but these same concerns tend to be brought up again and again. Florman’s books addressed and debunked each of these concerns in powerful fashion.

The Relentless Pessimism & Elitism of the Antitechnologists

Florman identified the way a persistent pessimism unifies antitechnologists. “Our intellectual journals are full of gloomy tracts that depict a society debased by technology,” he noted.[6] What motivated such gloom and doom? “It is fear. They are terrified by the scene unfolding before their eyes.”[7] He elaborated:

“The antitechnologists are frightened; they counsel halt and retreat. They tell the people that Satan (technology) is leading them astray, but the people have heard that story before. They will not stand still for vague promises of a psychic contentment that is to follow in the wake of voluntary temperance.”[8]

The antitechnologist’s worldview isn’t just relentlessly pessimistic but also highly elitist and paternalistic, Florman argued. He referred to it as “Platonic snobbery.”[9] The economist and political scientist Thomas Sowell would later call that snobbish attitude, “the vision of the anointed.”[10] Like Sowell, Florman was angered at the way critics stared down their noses at average folk and disregarded their values and choices:

“The antitechnologists have every right to be gloomy, and have a bounden duty to express their doubts about the direction our lives are taking. But their persistent disregard of the average person’s sentiments is a crucial weakness in their argument—particularly when they then ask us to consider the ‘real’ satisfactions that they claim ordinary people experienced in other cultures of other times.”[11]

Florman noted that critics commonly complain about “too many people wanting too many things,” but he noted that, “[t]his is not caused by technology; it is a consequence of the type of creature that man is.”[12] One can moralize all they want about supposed over-consumption or “conspicuous consumption,” but in the end, most of us strive to better our lives in various ways—including by working to attain things that may be out of our reach or even superfluous in the eyes of others.

For many antitechnologists and other social critics, only the noble search for truth and wisdom will suffice. Basically, everybody should just get back to studying philosophy, sociology, and other soft sciences. Modern tech critics, Florman said, fashion themselves as the intellectual descendants of Greek philosophers who believed that, “[t]he ideal of the new Athenian citizen was to care for his body in the gymnasium, reason his way to Truth in the academy, gossip in the agora, and debate in the senate. Technology was not deemed worthy of a free man’s time.”[13]

“It is not surprising to find philosophers recommending the study of philosophy as a way of life,” Florman noted amusingly.[14] But that does not mean all of us want (or even need) to devote our lives to such things. Nonetheless, critics often sneer at the choices made by the rest of us—especially when they involve the fruits of science and technology. “The most effective weapon in the arsenal of the antitechnologists is self-righteousness,” he noted,[15] and, “[a]s seen by the antitechnologists, engineers and scientists are half-men whose analysis and manipulation of the world deprives them of the emotional experiences that are the essence of the good life.”[16]

Indeed, it is not uncommon (both in the past and today) to see tech critics self-anoint themselves “humanists” and then suggest that anyone who thinks differently from them (namely, those who are pro-innovation) is somehow anti-humanist. I wrote about this in my 2018 essay, “Is It ‘Techno-Chauvinist’ & ‘Anti-Humanist’ to Believe in the Transformative Potential of Technology?” I argued that, “[p]roperly understood, ‘technology’ and technological innovation are simply extensions of our humanity and represent efforts to continuously improve the human condition. In that sense, humanism and technology are complements, not opposites.”

But the critics remain fundamentally hostile to that notion and they often suggest that there is something suspicious about those who believe, along with Florman, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment. We rational optimists, the critics suggest, are simply too focused on crass, materialistic measures of happiness and human flourishing.

Florman observed this when noting how much grief he and fellow engineers and scientists got when engaging with critics. “Anyone who has attempted to defend technology against the reproaches of an avowed humanist soon discovers that beneath all the layers of reasoning—political, environmental, aesthetic, or moral—lies a deep-seated disdain for ‘the scientific view.’”[17]

Everywhere you look in the world of Science & Technology Studies (STS) today, you find this attitude at work. In fact, the field is perhaps better labelled Anti-Science & Technology Studies, or at least Science & Technology Skeptical Studies. For most STSers, the burden of proof lies squarely on scientists, engineers, and innovators who must prove to some (often undefined) higher authorities that their ideas and inventions will bring worth to society (however the critics measure worth and value, which is often very unclear). Until then, just go slow, the critics say. Better yet, consult your local philosophy department for a proper course of action!

The critics will retort that they are just looking out for society’s best interests and trying to counter that selfish, materialist side of humanity. Florman countered by noting how, “most people are in search of the good life—not ‘the goods life’ as [Lewis] Mumford puts it, although some goods are entailed—and most human desires are for good things in moderate amounts.”[18] Trying to better our lives through the creation and acquisition of new and better goods and services is just a natural and quite healthy human instinct to help us attain some ever-changing definition of whatever each of us considers “the good life.” “Something other than technology is responsible for people wanting to live in a house on a grassy plot beyond walking distance to job, market, neighbor, and school,” Florman responded.[19] We all want to “get ahead” and improve our lot in life. That’s not because technology forces the urge upon us; rather, the urge arises quite naturally from human nature itself.

The Power of Nostalgia

I have spent a fair amount of time in my own writing documenting the central role that nostalgia plays in motivating technological criticism.[20] Florman’s books repeatedly highlighted this reality. “The antitechnologists romanticize the work of earlier times in an attempt to make it seem more appealing than work in a technological age,” he noted. “But their idyllic descriptions of peasant life do not ring true.”[21]

The funny thing is, it is hard to pin down the critics regarding exactly when the “golden era” or “good ol’ days” were. But if there is one thing that they all agree on, it’s that those days have long passed us by. In a 2019 essay on “Four Flavors of Doom: A Taxonomy of Contemporary Pessimism,” philosopher Maarten Boudry noted:

“In the good old days, everything was better. Where once the world was whole and beautiful, now everything has gone to ruin. Different nostalgic thinkers locate their favorite Golden Age in different historical periods. Some yearn for a past that they were lucky enough to experience in their youth, while others locate utopia at a point farther back in time…”

Not all nostalgia is bad. Clay Routledge has written eloquently about how “nostalgia serves important psychological functions,” and can sometimes possess a positive character that strengthens individuals and society. But the nostalgia found in the works of tech critics is usually a different thing altogether. It is rooted in misery about the present and dread of the future—all because technology has apparently stolen away or destroyed all that was supposedly great about the past. Florman noted how “the current pessimism about technology is a renewed manifestation of pastoralism” that is typically rooted in historical revisionism about bygone eras.[22] Many critics engage in what rhetoricians call “appeals to nature” and wax poetic about the joys of life for Pre-Technological Man, who apparently enjoyed an idyllic life free of the annoying intrusions created by modern contrivances.

Such “good ol’ days” romanticism is largely untethered from reality. “For most of recorded history humanity lived on the brink of starvation,” Wall Street Journal columnist Greg Ip noted in a column in early 2019. Even a cursory review of history offers voluminous, unambiguous proof that the old days were, in reality, eras of abject misery. Widespread poverty, mass hunger, poor hygiene, disease, short lifespans, and so on were the norm. What lifted humanity up and improved our lot as a species is that we learned how to apply knowledge to tasks in a better way through incessant trial-and-error experimentation. Recent books by Hans Rosling,[23] Steven Pinker,[24] and many others[25] have thoroughly documented these improvements to human well-being over time.

The critics are unmoved by such evidence, preferring to just jump around in time and cherry-pick moments when they feel life was better than it is now. “Fond as they are of tribal and peasant life, the antitechnologists become positively euphoric over the Middle Ages,” Florman quipped.[26] Why? Mostly because the Middle Ages lacked the technological advances of modern times, which the critics loathe. But facts are pesky things, and as Florman insisted, “it is fair to go on to ask whether or not life was ‘better’ in these earlier cultures than it is in our own.”[27] “We all are moved to reverie by talk of an arcadian golden age,” he noted. “But when we awaken from this reverie, we realize that the antitechnologists have diverted us with half-truths and distortions.”[28]

The critics’ reverence for the old days would be humorous if it wasn’t rooted in an arrogant and dangerous belief that society can be somehow reshaped to resemble whatever preferred past the critics desire. “Recognizing that we cannot return to earlier times, the antitechnologists nevertheless would have us attempt to recapture the satisfactions of these vanished cultures,” Florman noted. “In order to do this, what is required is nothing less than a change in the nature of man.”[29] That is, the critics will insist that “something must be done” (namely, imposed from above via some grand design) to remake humans and discourage their inner homo faber desire to be an incessant tool-builder. But this is madness, Florman argued in one of the best passages from his work:

“we are beginning to realize that for mankind there will never be a time to rest at the top of the mountain. There will be no new arcadian age. There will always be new burdens, new problems, new failures, new beginnings. And the glory of man is to respond to his harsh fate with zest and ever-renewed effort.”[30]

If the critics had their way, however, that zest would be dampened and those efforts restrained in the name of recapturing some mythical lost age. This sort of “rosy retrospection bias” is all the more shocking coming, as it does, from learned people who should know a lot more about the actual history of our species and the long struggle to escape utter despair and destitution. Alas, as the great Scottish philosopher David Hume observed in a 1777 essay, “The humour of blaming the present, and admiring the past, is strongly rooted in human nature, and has an influence even on persons endued with the profoundest judgment and most extensive learning.”[31]

Why Invent? Homo Faber is our Nature

While taking on the critics and debunking their misplaced nostalgia about the past, Florman mounted a defense of engineers and innovators by noting that the need to tinker and create is in our blood. He began by noting how “the nature of engineering has been misconceived”[32] because, in a sense, we are all engineers and innovators to some degree.

Florman’s thinking was very much in line with Benjamin Franklin, who once noted, “man is a tool-making animal.” “Both genetically and culturally the engineering instinct has been nurtured within us,” Florman argued, and this instinct “was as old as the human race.”[33] “To be human is to be technological. When we are being technological we are being human—we are expressing the age-old desire of the tribe to survive and prosper.”[34] In fact, he claimed, it was no exaggeration to say that humans, “are driven to technological creativity because of instincts hardly less basic than hunger and sex.”[35] Had our past situation been as rosy as the critics sometimes suggest, perhaps we would have never bothered to fashion tools to escape those eras! It was precisely because humans wanted to improve their lives and the lives of their loved ones that we started crafting more and better tools. Flint and firewood were never going to suffice.

But our engineering instincts do not end with basic needs. “Engineering responds to impulses that go beyond mere survival: a craving for variety and new possibilities, a feeling for proportion—for beauty—that we share with the artist,” Florman argued.[36] In essence, engineering and innovation respond to both basic human needs and higher ones at every stage of “Maslow’s pyramid,” which describes a five-level hierarchy of human needs. This same theme is developed in Arthur Diamond’s recent book, Openness to Creative Destruction: Sustaining Innovative Dynamism. As Diamond argues, one of the most unheralded features of technological innovation is that, “by providing goods that are especially useful in pursuing a life plan full of challenging, worthwhile creative projects,” it allows each of us to pursue different conceptions of what we consider a good life.[37] But we are only able to do so by first satisfying our basic physiological needs, which innovation also handles for us.

Florman was frustrated that critics failed to understand this point and equally concerned that engineers and innovators had been cast as uncaring gadget-worshipers who did not see beauty and truth in higher arts and other more worldly goals and human values. That’s hogwash, he argued:

“What an ironic turn of events! For if ever there was a group dedicated to—obsessed with—morality, conscience, and social responsibility, it has been the engineering profession. Practically every description of the practice of engineering has stressed the concept of service to humanity.[38] [. . .] Even in an age of global affluence, the main existential pleasure of the engineer will always be to contribute to the well-being of his fellow man.”[39]

Engineers and innovators do not always set out with some grandiose design to change the world, although some aspire to do so. Rather, the “existential pleasures of engineering” that Florman described in the title of his most notable book comes about by solving practical day-to-day problems:

“The engineer does not find existential pleasure by seeking it frontally. It comes to him gratuitously, seeping into him unawares. He does not arise in the morning and say, ‘Today I shall find happiness.’ Quite the contrary. He arises and says, ‘Today I will do the work that needs to be done, the work for which I have been trained, the work which I want to do because in doing it I feel challenged and alive.’ Then happiness arrives mysteriously as a byproduct of his effort.”[40]

And this pleasure of getting practical work done is something that engineers and innovators enjoy collectively by coming together and using specialized skills in new and unique combinations. “[T]echnological progress depends upon a variety of skills and knowledge that are far beyond the capacity of any one individual,” he insisted. “High civilization requires a high degree of specialization, and it was toward high civilization that the human journey appears always to have been directed.”[41] Adam Smith could not have said it any better.

“Muddling Through”: Why Trial-and-Error is the Key to Progress

My favorite insights from Florman’s work relate to the way humans have repeatedly faced up to adversity and found ways to “muddle through.” This was the focus of an old essay of mine— “Muddling Through: How We Learn to Cope with Technological Change”—which argued that humans are a remarkably resilient species and that we regularly find creative ways to deal with major changes through constant trial-and-error experimentation and the learning that results from it.[42]

Florman made this same point far more eloquently long ago:

“We have been attempting to muddle along, acknowledging that we are selfish and foolish, and proceeding by means of trial and error. We call ourselves pragmatists. Mistakes are made, of course. Also, tastes change, so that what seemed desirable to one generation appears disagreeable to the next. But our overriding concern has been to make sure that matters of taste do not become matters of dogma, for that is the way toward violent conflict and tyranny. Trial and error, however, is exactly what the antitechnologists cannot abide.”[43]

It is the error part of trial-and-error that is so vital to societal learning. “Even the most cautious engineer recognizes that risk is inherent in what he or she does,” Florman noted. “Over the long haul the improbable becomes the inevitable, and accidents will happen. The unanticipated will occur.”[44] But “[s]ometimes the only way to gain knowledge is by experiencing failure,” he correctly observed.[45] “To be willing to learn through failure—failure that cannot be hidden—requires tenacity and courage.”[46]

I’ve argued that this represents the central dividing line between innovation supporters and technology critics. The critics are so focused on risk-averse, precautionary principle-based thinking that they simply cannot tolerate the idea that society can learn more through trial-and-error than through preemptive planning. They imagine it is possible to override that process and predetermine the proper course of action to create a safer, more stable society. In this mindset, failure is to be avoided at all costs through prescriptions and prohibitions. Innovation is to be treated as guilty until proven innocent in the hope of eliminating the error (or risk / failure) associated with trial-and-error experiments. To reiterate, this logic misses the fact that the entire point of trial-and-error is to learn from our mistakes and “fail better” next time, until we’ve solved the problem at hand entirely.[47]

Florman noted that “sensible people have agreed that there is no free lunch; there are only difficult choices, options, and trade-offs.”[48] In other words, precautionary controls come at a cost. “All we can do is do the best we can, plan where we can, agree where we can, and compromise where we must,” he said.[49] But, again, the antitechnologists absolutely cannot accept this worldview. They are fundamentally hostile to it because they either believe that a precautionary approach will do a better job improving public welfare, or they believe that trial-and-error fails to safeguard any number of other values or institutions that they regard as sacrosanct. This shuts down the learning process from which wisdom is generated. As the old adage goes, “nothing ventured, nothing gained.” There can be no reward without some risk, and there can be no human advance unless we are free to learn from the error portion of trial-and-error.

The Costs of Precautionary Regulation

Florman did not spend much time in his writing mulling over the finer points of public policy, but he did express skepticism about our collective ability to define and enforce “the public interest” in various contexts. A great many regulatory regimes—and their underlying statutes—rest on the notion of “protecting the public interest.” It is impossible to be against that notion, but it is often equally impossible to define what it even means.[50]

This leads to what Florman called “the search for virtues that nobody can define.”[51] “As engineers we are agreed that the public interest is very important; but it is folly to think that we can agree on what the public interest is. We cannot even agree on the scientific facts!”[52] This is especially true today in debates over what constitutes “responsible innovation” or “ethical innovation.”[53] What Florman noted about such conversations three decades ago is equally true today:

“Whenever engineering ethics is on the agenda, emotions come quickly to a boil. […] It is oh so easy to mouth clichés, for example to pledge to protect the public interest, as the various codes of engineering ethics do. But such a pledge is only a beginning and hardly that. The real questions remain: What is the public interest, and how is it to be served?”[54]

That reality makes it extremely difficult to formulate consensus regarding public policies for emerging technologies. And it makes it particularly difficult to define and enforce a “precautionary principle” for emerging technologies that will somehow strike the Goldilocks balance of getting things just right. This was the focus of my 2016 book Permissionless Innovation, which argued that the precautionary principle should be the last resort when contemplating innovation policy. Experimentation with new technologies and business models should generally be permitted by default because, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about,” I argued. The precautionary principle should only be tapped when the harms alleged to be associated with a new technology are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion.

For his part, Florman did not want to get his defense of engineering mixed up with politics and regulatory considerations. Engineers and technologists, he noted, come in many flavors and support many different causes. Generally speaking, they tend to be quite pragmatic and shun strong ideological leanings and political pronouncements.

Of course, at some point, there is no avoiding this fight; one must comment on how to strike the right balance when politics enters the picture and threatens to stifle technological creativity. Florman’s perspectives on regulatory policy were somewhat jumbled, however. On one hand, he expressed concern about excessive and misguided regulations, but he also saw government playing an important role both in supporting various types of engineering projects and regulating certain technological developments:

“The regulatory impulse, running wild, wreaks havoc, first of all by stifling creative and productive forces that are vital to national survival. But it does harm also—and perhaps more ominously—by fomenting a counter-revolution among outraged industrialists, the intensity of which threatens to sweep away many of the very regulations we most need.”[55]

In his 1987 book, The Civilized Engineer, Florman even expressed surprise and regret about growing pushback against regulation during the Reagan years. He also expressed skepticism about “the deceptive allure” of benefit-cost analysis, which was on the rise at the time, saying that the “attempt to apply mathematical consistency to the regulatory process was deplorably simplistic.”[56] I have always been a big believer in the importance of benefit-cost analysis (BCA), so I was surprised to read of Florman’s skepticism of it. But he was writing in the early days of BCA, and it was not entirely clear how well it would work in practice. Four decades on, BCA has become far more rigorous, academically respected, and well-established throughout government. It has widespread and bipartisan support as a policy evaluation tool.

Florman adamantly opposed any sort of “technocracy”—or administration of government by technically-skilled elites. He thought it was silly that so many tech critics believe that such a thing already existed. “The myth of the technocratic elite is an expression of fear, like a fairy tale about ogres,” he argued. “It springs from an understandable apprehension, but since it has no basis in reality, it has no place in serious discourse.”[57] Nor did he believe that there was any real chance a technocracy would ever take hold. “No matter how complex technology becomes, and no matter how important it turns out to be in human affairs, we are not likely to see authority vested in a class of technocrats.”[58]

Florman hoped for wiser administration of law and regulations that affected engineering endeavors and innovation more generally. Like so many others, he did not necessarily want more law, just better law. One cannot fault that instinct, but Florman was not really interested in fleshing out the finer details of policy about how to accomplish that objective. He preferred instead to use history as a rough guide for policy. From the fall of the Roman Empire to the decline of Britain’s economic might in more recent times, Florman observed the ways in which societal and governmental attitudes toward innovation influenced the relative growth of science, technology, and national economies. In essence, he was explaining how “innovation culture” and “innovation arbitrage” had been realities for far longer than most people realize.[59]

“Where the entrepreneurial spirit cannot be rewarded, and where non-productive workers cannot be discharged, stagnation will set in,” Florman concluded.[60] This is very much in line with the thinking of economic historians like Joel Mokyr[61] and Deirdre McCloskey,[62] who have identified how attitudes toward creativity and entrepreneurialism affect the aggregate innovative capacity of nations, and thus their competitive advantage and relative prosperity in the world.

Debunking Determinism, Anxiety & Alienation Concerns

One of the ironies of modern technological criticism is the way many critics can’t seem to get their story straight when it comes to “technological determinism” versus social determinism. In the extreme view, technological determinism is the idea that technology drives history and almost has a will of its own. It is like an autonomous force that is practically unstoppable. By contrast, social determinism means that society (individuals, institutions, etc.) guide and control the development of technology.

In the field of Science and Technology Studies, technological determinism is a hotly debated topic. Academic and social critics are fond of painting innovation advocates as rigid tech determinists who are little better than uncaring anti-humanistic gadget-worshipers. The critics have employed a variety of other creative labels to describe tech determinism, including: “techno-fundamentalism,” “technological solutionism,” and even “techno-chauvinism.”

Engineers and other innovators often get hit with such labels and accused of being rigid technological determinists who just want to see tech plow over people and politics. But this was, and remains, a ridiculous argument. Sure, there will always be some wild-eyed futurists and extropian extremists who make preposterous claims about how “there is no stopping technology.” “Even now the salvation-through-technology doctrine has some adherents whose absurdities have helped to inspire the antitechnological movement,” Florman said.[63] But that hardly represents the majority of innovation supporters, who well understand that society and politics play a crucial role in shaping the future course of technological development.

As Florman noted, we can dismiss extreme deterministic perspectives for a rather simple reason: technologies fail all the time! “If promising technologies can suffer fatal blows from unexpected circumstances,” Florman correctly argued, then “[t]his means that we are still—however precariously—in control of our own destiny.”[64] He believed that, “technology is not an independent force, much less a thing, but merely one of the types of activities in which people engage.”[65] The rigid view of tech determinism can be dismissed, he said, because “it can be shown that technology is still very much under society’s control, that it is in fact an expression of our very human desires, fancies, and fears.”[66]

But what is amazing about this debate is that some of the most rigid technological determinists are the technology critics themselves! Recall how Florman began his 6-part taxonomy of common complaints from tech critics. “A primary characteristic of the antitechnologists,” Florman argued, “is the way in which they refer to ‘technology’ as a thing, or at least a force, as if it had an existence of its own” and which “has escaped from human control and is spoiling our lives.”[67]

He noted that many of the leading tech critics of the post-war era often spoke in remarkably deterministic ways. “The idea that a man of the masses has no thoughts of his own, but is something on the order of a programmed machine, owes part of its popularity with the antitechnologists to the influential writings of Herbert Marcuse,” he believed.[68] But then such thinking accelerated and gained greater favor with the popularity of critics like French philosopher Jacques Ellul, American historian Lewis Mumford, and American cultural critic Neil Postman.

Their books painted a dismal portrait of a future in which humans were subjugated to the evils of “technique” (Ellul), “technics” (Mumford), or “technopoly” (Postman).  The narrative of their works read like dystopian science fiction. Essentially, there was no escaping the iron grip that technology had on us. Postman claimed, for example, that technology was destined to destroy “the vital sources of our humanity” and lead to “a culture without a moral foundation” by undermining “certain mental processes and social relations that make human life worth living.”

Which gets us to commonly heard concerns about how technology leads to “anxiety” and “alienation.” “Having established the view of technology as an evil force, the antitechnologists then proceed to depict the average citizen as a helpless slave, driven by this force to perform work he detests,” Florman noted.[69] “Anxiety and alienation are the watchwords of the day, as if material comforts made life worse, rather than better.”[70]

These concerns about anxiety, alienation, and “dehumanization” are omnipresent in the work of modern tech critics, and they are also tied up with traditional worries about “conspicuous consumption.” It’s all part of the “false consciousness” narrative they also peddle, which basically views humans as too ignorant to look out for their own good. In this worldview, people are sheep being led to the slaughter by conniving capitalists and tech innovators, who are just trying to sell them things they don’t really need.

Florman pointed out how preposterous this line of thinking is when he noted how critics seem to always forget that, “a basic human impulse precedes and underlies each technological development”:[71]

“Very often this impulse, or desire, is directly responsible for the new invention. But even when this is not the case, even when the invention is not a response to any particular consumer demand, the impulse is alive and at the ready, sniffing about like a mouse in a maze, seeking its fulfillment. We may regret having some of these impulses. We certainly regret giving expression to some of them. But this hardly gives us the right to blame our misfortunes on a devil external to ourselves.”[72]

Consider the automobile, for example. Industrial era critics often focused on it and lambasted the way they thought industrialists pushed auto culture and technologies on the masses. Did we really need all those cars? All those colors? All those options? Did we really even need cars? The critics wanted us to believe that all these things were just imposed upon us. We were being force-fed options we really didn’t even need or want. “Choice” in this worldview is just a fiction; a front for the nefarious ends of our corporate overlords.

Florman demolished this reasoning throughout his books. “However much we deplore the growth of our automobile culture, clearly it has been created by people making choices, not by a runaway technology,” he argued.[73] Consumer demand and choice is not some fiction fabricated and forced upon us, as the antitechnologists suggest. We make decisions. “Those who would blame all of life’s problems on an amorphous technology, inevitably reject the concept of individual responsibility,” Florman retorted. “This is not humanism. It is a perversion of the humanistic impulse.”[74]

A modern tweak on the conspicuous consumption and false consciousness arguments is found in the work of leading tech critics like Evgeny Morozov, who pens attention-grabbing screeds decrying what he regards as “the folly of technological solutionism.” Morozov bluntly states that “our enemy is the romantic and revolutionary problem solver who resides within” all of us, but most specifically within the engineers and technologists.[75]

But would the world really be a better place if tinkerers didn’t try to scratch that itch?[76] In 2021, the Wall Street Journal profiled JoeBen Bevirt, an engineer and serial entrepreneur who has been working to bring flying cars from sci-fi to reality. Channeling Florman’s defense of the existential pleasures associated with engineering, Bevirt spoke passionately about the way innovators can help “move our species forward” through their constant tinkering to find solutions to hard problems. “That’s kind of the ethos of who we are,” he said. “We see problems, we’re engineers, we work to try to fix them.”[77]

When tech critics like Morozov decry “solutionism,” they are essentially saying that innovators like Bevirt need to just shut up and sit down. Don’t try to improve the world through tinkering; just settle for the status quo, the critics basically state. That’s the kiss of death for human progress, however, because it is only through incessant experimentation with new and different approaches to hard problems that we can advance human well-being. “Solutionism” isn’t about just creating some shiny new toy; it’s about expanding the universe of potentially life-enriching and life-saving technologies available to humanity.

Conclusion

This review of Samuel Florman’s work may seem comprehensive, but it only scratches the surface of his wide-ranging writing. Florman was troubled that engineering lacked support, or at least understanding. Perhaps that was because, he reasoned, “[t]here is no single truth that embodies the practice of engineering, no patron saint, no motto or simple credo. There is no unique methodology that has been distilled from millennia of technological effort.” Or, more simply, it may also be the case that the profession lacked articulate defenders. “The engineer may merely be waiting for his Shakespeare,” he suggested.[78]

Through his life’s work, however, Samuel Florman became that Shakespeare; the great bard of engineering and passionate defender of technological innovation and rational optimism more generally. In looking for a quote or two to close out my latest book, I ended with this one from Florman:

“By turning our backs on technological change, we would be expressing our satisfaction with current world levels of hunger, disease, and privation. Further, we must press ahead in the name of the human adventure. Without experimentation and change our existence would be a dull business.”[79]

Let us resolve to make sure that Florman’s greatest fear does not come to pass. Let us resolve to make sure that the great human adventure never ends. And let us resolve to counter the antitechnologists and their fundamentally anti-humanist worldview, which would most assuredly make our existence the “dull business” that Florman dreaded.

We can do better when we put our minds and hands to work innovating in an attempt to build a better future for humanity. Samuel Florman, the great prophet of progress, showed us the way forward.

 

Additional Reading from Adam Thierer:

 

Endnotes:

[1]    Matt Ridley, The Rational Optimist: How Prosperity Evolves (New York: Harper Collins, 2010).

[2]    Adam Thierer, “Defending Innovation Against Attacks from All Sides,” Discourse, November 9, 2021, https://www.discoursemagazine.com/ideas/2021/11/09/defending-innovation-against-attacks-from-all-sides.

[3]    Samuel C. Florman, The Civilized Engineer (New York: St. Martin’s Griffin, 1987), p. 26.

[4]    Samuel C. Florman, The Existential Pleasures of Engineering (New York: St. Martin’s Griffin, 2nd Edition, 1994), p. 53-4.

[5]    Existential Pleasures of Engineering, p. 53-4.

[6]    Samuel C. Florman, Blaming Technology: The Irrational Search for Scapegoats (New York: St. Martin’s Press, 1981), p. 186.

[7]    Existential Pleasures of Engineering, p. 76.

[8]    Existential Pleasures of Engineering, p. 77.

[9]    The Civilized Engineer, p. 38.

[10]   Thomas Sowell, The Vision of the Anointed: Self-Congratulation as a Basis for Social Policy (New York: Basic Books, 1995).

[11]   Existential Pleasures of Engineering, p. 72.

[12]   Existential Pleasures of Engineering, p. 76.

[13]   The Civilized Engineer, p. 35.

[14]   Existential Pleasures of Engineering, p. 102.

[15]   Blaming Technology, p. 162.

[16]   Existential Pleasures of Engineering, p. 55.

[17]   Blaming Technology, p. 70.

[18]   Existential Pleasures of Engineering, p. 77.

[19]   Existential Pleasures of Engineering, p. 60.

[20]   Adam Thierer, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology 14, no. 1 (2013), p. 312–50, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2012494.

[21]   Existential Pleasures of Engineering, p. 62.

[22]   Blaming Technology, p. 9.

[23]   Hans Rosling, Factfulness: Ten Reasons We’re Wrong about the World—and Why Things Are Better Than You Think (New York: Flatiron Books, 2018).

[24]   Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking, 2018).

[25]   Gregg Easterbrook, It’s Better than It Looks: Reasons for Optimism in an Age of Fear (New York: Public Affairs, 2018); Michael A. Cohen & Micah Zenko, Clear and Present Safety: The World Has Never Been Better and Why That Matters to Americans (New Haven, CT: Yale University Press, 2019).

[26]   Existential Pleasures of Engineering, p. 54.

[27]   Existential Pleasures of Engineering, p. 72.

[28]   Existential Pleasures of Engineering, p. 72.

[29]   Existential Pleasures of Engineering, p. 55.

[30]   Existential Pleasures of Engineering, p. 117.

[31]   David Hume, “Of the Populousness of Ancient Nations,” (1777), https://oll.libertyfund.org/titles/hume-essays-moral-political-literary-lf-ed.

[32]   The Civilized Engineer, p. 20.

[33]   Existential Pleasures of Engineering, p. 6.

[34]   The Civilized Engineer, p. 20.

[35]   Existential Pleasures of Engineering, p. 115.

[36]   The Civilized Engineer, p. 20.

[37]   Arthur Diamond, Openness to Creative Destruction: Sustaining Innovative Dynamism (Oxford: Oxford University Press, 2019).

[38]   Existential Pleasures of Engineering, p. 19.

[39]   Existential Pleasures of Engineering, p. 147.

[40]   Existential Pleasures of Engineering, p. 148.

[41]   The Civilized Engineer, p. 30.

[42]   Adam Thierer, “Muddling Through: How We Learn to Cope with Technological Change,” Medium, June 30, 2014, https://medium.com/tech-liberation/muddling-through-how-we-learn-to-cope-with-technological-change-6282d0d342a6.

[43]   Existential Pleasures of Engineering, p. 84.

[44]   The Civilized Engineer, p. 71.

[45]   The Civilized Engineer, p. 72.

[46]   The Civilized Engineer, p. 72.

[47]   Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.

[48]   The Civilized Engineer, p. xi.

[49]   Existential Pleasures of Engineering, p. 85.

[50]   Adam Thierer, “Is the Public Served by the Public Interest Standard?” The Freeman, September 1, 1996,  https://fee.org/articles/is-the-public-served-by-the-public-interest-standard.

[51]   The Civilized Engineer, p. 84.

[52]   Existential Pleasures of Engineering, p. 22.

[53]   Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.

[54]   The Civilized Engineer, p. 79.

[55]   Blaming Technology, p. 106.

[56]   The Civilized Engineer, p. 158.

[57]   Blaming Technology, p. 41.

[58]   Blaming Technology, p. 40-1.

[59]   Adam Thierer, “Embracing a Culture of Permissionless Innovation,” Cato Online Forum, November 17, 2014, https://www.cato.org/publications/cato-online-forum/embracing-culture-permissionless-innovation; Christopher Koopman, “Creating an Environment for Permissionless Innovation,” Testimony before the US Congress Joint Economic Committee, May 22, 2018, https://www.mercatus.org/publications/creating-environment-permissionless-innovation.

[60]   The Civilized Engineer, p. 117.

[61]   Joel Mokyr, Lever of Riches: Technological Creativity and Economic Progress (New York: Oxford University Press, 1990).

[62]   Deirdre N. McCloskey, The Bourgeois Virtues: Ethics for an Age of Commerce (Chicago: The University of Chicago Press, 2006); Deirdre N. McCloskey, Bourgeois Dignity: Why Economics Can’t Explain the Modern World (Chicago: The University of Chicago Press. 2010).

[63]   Existential Pleasures of Engineering, p. 57.

[64]   Blaming Technology, p. 22.

[65]   Existential Pleasures of Engineering, p. 58.

[66]   Blaming Technology, p. 10.

[67]   Existential Pleasures of Engineering, p. 48, 53.

[68]   Existential Pleasures of Engineering, p. 70.

[69]   Existential Pleasures of Engineering, p. 49.

[70]   Existential Pleasures of Engineering, p. 16.

[71]   Existential Pleasures of Engineering, p. 61.

[72]   Existential Pleasures of Engineering, p. 61.

[73]   Existential Pleasures of Engineering, p. 60.

[74]   Blaming Technology, p. 104.

[75]   Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (New York: Public Affairs, 2013).

[76]   Adam Thierer, “A Net Skeptic’s Conservative Manifesto,” Reason, April 27, 2013, https://reason.com/2013/04/27/a-net-skeptics-conservative-manifesto-2/.

[77]   Emily Bobrow, “JoeBen Bevirt Is Bringing Flying Taxis from Sci-Fi to Reality,” Wall Street Journal, July 9, 2021, https://www.wsj.com/articles/joeben-bevirt-is-bringing-flying-taxis-from-sci-fi-to-reality-11625848177.

[78]   Existential Pleasures of Engineering, p. 96.

[79]   Blaming Technology, p. 193.

]]>
https://techliberation.com/2022/04/06/samuel-florman-the-continuing-battle-over-technological-progress/feed/ 3 76961
On Defining “Industrial Policy” https://techliberation.com/2020/09/03/on-defining-industrial-policy/ https://techliberation.com/2020/09/03/on-defining-industrial-policy/#comments Thu, 03 Sep 2020 16:26:20 +0000 https://techliberation.com/?p=76808

In his debut essay for the new Agglomerations blog, my former colleague Caleb Watney, now Director of Innovation Policy for the Progressive Policy Institute, seeks to better define a few important terms, including: technology policy, innovation policy, and industrial policy. In the end, however, he decides to basically dispense with the term “industrial policy” because, when it comes to defining these terms, “it is useful to have a limiting principle and it’s unclear what the limiting principle is for industrial policy.”

I sympathize. Debates about industrial policy are frustrating and unproductive when people cannot even agree on the parameters of sensible discussion. But I don’t think we need to dispense with the term altogether. We just need to define it somewhat more narrowly to make sure it remains useful. First, let’s consider how this exact same issue played out three decades ago. In the 1980s, many articles and books featured raging debates about the proper scope of industrial policy. I spent my early years as a policy analyst devouring all these books and essays because I originally wanted to be a trade policy analyst. And in the late 1980s and early 1990s, you could not be a trade policy analyst without confronting industrial policy arguments.

This was the era of what some called “Japan, Inc.” and Japan-bashing. South Korea and Taiwan were also part of that discussion, but the primary focus was “the Japan Model” and whether it represented the optimal industrial policy for the modern economy. That “Japan Model” sounds much like what is heard today when pundits reference China and its industrial policy model: Generous (and highly targeted) R&D investments, government-led public-private consortia, industrial trade policies (a combination of export assistance plus restrictions on imports and foreign investment), and other forms of targeted government support for specific sectors or technological developments. In the 1980s Japan’s economy started expanding rapidly and many Japanese multinationals began making major investments in US businesses and properties. The Japanese government played an active role in facilitating much of this. Suddenly, lots of people in the US were debating the wisdom of America falling in line and adopting its own industrial policy to counter Japan. Panic was in the air in academic and legislative circles. Lawmakers were literally smashing Japanese electronics with sledgehammers on the stairs of the US Capitol. Meanwhile, pundits were publishing a steady stream of pessimistic books with titles asking, Can America Compete?, while others suggested that the US was Trading Places with Japan.
Japan-loathing probably reached its apex around 1991 or ’92 with the publication of the non-fiction book, The Coming War with Japan, and then Michael Crichton’s fictional book (and then adapted movie), Rising Sun. Japan’s new economic model was supposedly going to steamroll US innovators and allow Japan to dominate the global economy for decades to come. Three decades later, we know how all this played out. The US never went to war with Japan again. We just kept trading peacefully with them, thankfully. Meanwhile, the “Japan, Inc.” industrial policy model didn’t quite pan out the way its champions hoped (or that US pundits feared). In a 2007 report, Marcus Noland of the Peterson Institute for International Economics summarized Japan’s industrial policy results in bleak terms:
Japan faces significant challenges in encouraging innovation and entrepreneurship. Attempts to formally model past industrial policy interventions uniformly uncover little, if any, positive impact on productivity, growth, or welfare. The evidence indicates that most resource flows went to large, politically influential “backward” sectors, suggesting that political economy considerations may be central to the apparent ineffectiveness of Japanese industrial policy.
But I don’t want to get diverted into the specifics of why Japan’s industrial policy didn’t work. Rather, I just want to make the simple point that Japan definitely had an industrial policy that we can still evaluate today. We should not abandon all use of the term industrial policy because, once defined in a more focused fashion, it remains a useful concept worthy of serious academic study and deliberation.
Jump back to the mid-80s and flip through the individual contributions to this AEI book on The Politics of Industrial Policy. It features hot debates over the exact issue we’re still trying to figure out today. Essays by Aaron Wildavsky, Thomas McCraw, and James Fallows generally argued for a broad conception of what industrial policy should include. Others such as economist Herbert Stein insisted upon a much narrower reading of the term. Into that debate stepped economic historian Ellis W. Hawley with a wonderful essay on industrial policy efforts in the pre-New Deal era. Hawley began his essay with what I still regard as the best understanding of what “industrial policy” really means in practice. Here is Hawley’s definition:
By industrial policy I mean a national policy aimed at developing or retrenching selected industries to achieve national economic goals. In this usage, I follow those who distinguish such a policy, both from policies aimed at making the macroeconomic environment more conducive to industrial development in general and from the totality of microeconomic interventions aimed at particular industries. To have an industrial policy, a nation must not only be intervening at the microeconomic level but also have a planning and coordinating mechanism through which the intervention is rationally related to national goals, a general pattern of microeconomic targets is decided upon, and particular industrial programs are worked out and implemented.
I think Hawley’s conception of industrial policy gets it just right. Crucially, he clearly distinguished industrial policy from “policy” more generally. And he also specifies the requirement that “a planning and coordinating mechanism” is necessary and that targets are established.
]]>
https://techliberation.com/2020/09/03/on-defining-industrial-policy/feed/ 1 76808
On Doctorow’s “Adversarial Interoperability” https://techliberation.com/2020/08/29/on-doctorows-adversarial-interoperability/ https://techliberation.com/2020/08/29/on-doctorows-adversarial-interoperability/#comments Sat, 29 Aug 2020 19:15:25 +0000 https://techliberation.com/?p=76805

Interoperability is a topic that has long been of interest to me. How networks, platforms, and devices work with each other–or sometimes fail to–is an important engineering, business, and policy issue. Back in 2012, I spilled out over 5,000 words on the topic when reviewing John Palfrey and Urs Gasser’s excellent book, Interop: The Promise and Perils of Highly Interconnected Systems.

I’ve always struggled with interoperability issues, however, and often avoided them because of the sheer complexity of it all. Some interesting recent essays by sci-fi author and digital activist Cory Doctorow remind me that I need to get back on top of the issue. His latest essay is a call-to-arms in favor of what he calls “adversarial interoperability.” “[T]hat’s when you create a new product or service that plugs into the existing ones without the permission of the companies that make them,” he says. “Think of third-party printer ink, alternative app stores, or independent repair shops that use compatible parts from rival manufacturers to fix your car or your phone or your tractor.”

Doctorow is a vociferous defender of expanded digital access rights of many flavors and his latest essays on interoperability expand upon his previous advocacy for open access and a general freedom to tinker. He does much of this work with the Electronic Frontier Foundation (EFF), which shares his commitment to expanded digital access and interoperability rights in various contexts.

I’m in league with Doctorow and EFF on some of these things, but also find myself thinking they go much too far in other ways. At root, their work and advocacy raise a profound question: should there be any general right to exclude on digital platforms? Although he doesn’t always come right out and say it, Doctorow’s work often seems like an outright rejection of any sort of property rights in networks or platforms. Generally speaking, he does not want the law to recognize any right for tech platforms to exclude using digital fences of any sort.

Where to Draw the Lines?

As someone who has authored a book about the importance of permissionless innovation, I need to be able to answer questions about where these lines between open versus closed systems are drawn. Definitions and framing matter, however. I use “permissionless innovation” as a descriptor for one possible policy disposition when considering where legal and regulatory defaults should be set. Another conception of permissionless innovation is more of an engineering ideal: a general freedom to connect, tinker, modify, etc. (I speak more about these conceptions in my latest book, Evasive Entrepreneurs.) Of course, someone advocating permissionless innovation as a policy default will sometimes be confronted with the question of what the law should say when someone behaves in an “evasive” fashion in the latter conception of permissionless innovation.

Doctorow would generally answer that question by saying that law should not be rigged to favor exclusion through laws like the DMCA (and specifically the law’s anti-circumvention provisions), Computer Fraud and Abuse Act, patent law, and various other rules and laws. “[T]he current crop of Big Tech companies has secured laws, regulations, and court decisions that have dramatically restricted adversarial interoperability.”

Generally speaking, I agree. I’m not a fan of technocratic laws or regulations that seek to micro-manage interoperability and which stack the deck in favor of exclusionary conduct with steep penalties for evasion. But does that mean adversarial interoperability should be permitted in all cases? Should there exist any sort of common law presumption one way or the other when a user or competitor seeks access to an existing private platform or device?

Specifics matter here and I don’t have time to get into all the case studies that Doctorow goes through. Some are no-brainers, like the infamous Lexmark case involving refillable printer ink cartridges. Other cases are far more complicated, at least for me. Does Epic, creator of Fortnite, have a right of adversarial interoperability that it can exercise against Apple and its App Store? As Dirk Auer suggests in a new essay, this episode looks more like a straightforward pricing dispute. Epic is making it out to be much more than that, suggesting Apple is guilty of unfair and exclusionary practices that require a legal remedy.

Why not take that logic further and just say Apple’s App Store is tantamount to a natural monopoly or digital essential facility that Epic and everyone else is entitled to on whatever terms they want? For that matter, why not apply the same logic to Epic’s Fortnite platform or even its Unreal Engine? Does every other gaming developer have a right to piggyback on the juggernaut that Epic has built?

This gets to the core question about Doctorow’s concept of adversarial interoperability: What exactly should common law and the courts say when platform owners make access rights a simple pricing matter and declare, “You pay or you are out”? Like Doctorow and EFF, I don’t want Apple to benefit from any special favors from laws like DMCA. Where we differ is that I would still leave the door open for Apple to exercise various other common law contractual rights or property rights in court.

I suspect Doctorow would deny any such claims by Apple or anyone else. If so, I would like to see him spell out in more precise terms exactly what Apple’s property rights and contractual rights are in this instance. Or, again, should we just treat the App Store as a digital commons with unfettered open access rights for developers? If so, would Apple be required to still manage the resource once it is a quasi-commons?

I think that would end miserably, but would like to hear Doctorow’s preferred approach before saying more. I suspect a lot rides on the distinction between “open” versus “proprietary” standards, but compared to Doctorow and EFF, I am willing to embrace a world of both open and proprietary systems, and many hybrids in between. I don’t want the law favoring one type over the other, but that means I need to endorse a generalized property right for digital operators such that they can still exclude others (even in the absence of artificial regulatory rights like DMCA creates). Again, I suspect Doctorow would reject that standard, preferring a generalized right of access, even if that means the platforms become de facto commons.

More Radical Steps

Elsewhere, Doctorow has said that some of these questions would be better addressed through more aggressive antitrust regulation. Mere data portability or mandatory interoperability isn’t enough for him. “Data portability is important,” Doctorow says, “but it is no substitute for the ability to have ongoing access to a service that you’re in the process of migrating away from.”

In his latest online book on “How to Destroy Surveillance Capitalism,” Doctorow suggests that it is time to “make Big Tech small again” through an “anti-monopoly ecology movement.” That “means bans on mergers between large companies, on big companies acquiring nascent competitors, and on platform companies competing directly with the companies that rely on the platforms.” And he desires a host of other remedies.

So, here we have the convergence of interoperability policy and antitrust policy, with, apparently, a layer of property confiscation added on top. “Now it’s up to us to seize the means of computation, putting that electronic nervous system under democratic, accountable control,” he insists in his latest manifesto.

What’s funny about this is that Doctorow begins most of his essays by pointing out all the ways that politics is the problem when it comes to access issues, only to end by suggesting that a lot more political meddling is the required solution. He repeatedly laments how large tech players have so often been able to convince lawmakers and regulators to pass special laws or regulations that work to their favor. Yet, in his We-Can-Build-A-Better-Bureaucrat model of things, all those old problems will apparently disappear when we get the right people in power and get rid of those nefarious capitalist schemers.

Thus, what really animates Doctorow’s advocacy for adversarial interoperability is a deep suspicion of free market capitalism and property rights in particular. In this worldview, interoperability really just becomes a Trojan Horse meant to help bring down the entire capitalist order. Am I exaggerating? “As to why things are so screwed up? Capitalism.” Those are his exact words from the conclusion of his latest book.

Adversarial Innovation & Evolutionary Interop

Still, Doctorow raises many legitimate issues about interconnection and digital access rights. But we need a better approach to work through these questions than the one he suggests.

In my lengthy review of the Palfrey and Gasser Interop book, I tried to sketch out an alternative framework for thinking seriously about these issues. I referred to my preferred approach as “experimental interoperability” or “evolutionary interoperability.” I described this as the theory that ongoing marketplace experimentation with technical standards, modes of information production and dissemination, and interoperable information systems, is almost always preferable to the artificial foreclosure of this dynamic process through state action. The former allows for better learning and coping mechanisms to develop while also incentivizing the spontaneous, natural evolution of the market and market responses.

Adversarial interoperability is important, but not nearly as important as adversarial innovation and facilities-based competition. Stated differently, access to existing systems is an important value, but the incentives we have in place to encourage entirely new systems matter even more. At some point, a generalized right of access to existing systems discourages the sort of platform-building that could help give rise to the sort of creative destruction we have seen at work repeatedly in the past and that we still need today. Taken too far, adversarial interoperability threatens to undermine this goal. Why seek to build a better alternative platform if you can just endlessly free ride off someone else’s by force of law?

Thus, I prefer to work at the margins and think through how to balance these competing claims of access and interoperability rights versus contractual and property rights. My take will be too utilitarian not only for Doctorow but also for some libertarians, who want clear answers to all these questions based upon their preferred natural law-oriented constructions of rights. The problem with that approach is that it leads to all-or-nothing extremes (complete digital property rights, or virtually none), and that approach is fundamentally unworkable and destructive. We need to think harder about how to balance these rights and values in a pro-competitive, pro-innovation fashion.

There is No Such Thing as Optimal Interoperability

In sum, there is no such thing as “optimal interoperability.” Sometimes proprietary or “closed” systems will offer the public features and options that they will find preferable to “open” ones. “There are many reasons why consumers might prefer ‘closed’ systems – even when they have to pay a premium for them,” argues Dirk Auer in a separate essay. It could be greater convenience, security, or other things. Palfrey and Gasser correctly noted in their book that “the state is rarely in a position to call a winner among competing technologies” (p. 174). Moreover, they concluded:

“Lawmakers need to keep in view the limits of their own effectiveness when it comes to accomplishing optimal levels of interoperability. Case studies of government intervention, especially where complex information technologies are involved, show that states tend to be ill suited to determine on their own what specific technology will be the best option for the future.” (p. 175)

A thousand amens to that! The law should not artificially foreclose experimentation with many different types of platforms, standards, devices and the interoperability that exists among them.

]]>
https://techliberation.com/2020/08/29/on-doctorows-adversarial-interoperability/feed/ 3 76805
Symposium: Hirschman’s “Exit, Voice & Loyalty” at 50 https://techliberation.com/2020/08/27/symposium-hirschmans-exit-voice-loyalty-at-50/ https://techliberation.com/2020/08/27/symposium-hirschmans-exit-voice-loyalty-at-50/#comments Thu, 27 Aug 2020 15:28:01 +0000 https://techliberation.com/?p=76803

This month’s Cato Unbound symposium features a conversation about the continuing relevance of Albert Hirschman’s Exit, Voice and Loyalty: Responses to Decline in Firms, Organizations, and States, fifty years after its publication. It was a slender but important book that has influenced scholars in many different fields over the past five decades. The Cato symposium features a discussion between me and three other scholars who have attempted to use Hirschman’s framework when thinking about modern social, political, and technological developments.

My lead essay considers how we might use Hirschman’s insights to consider how entrepreneurialism and innovative activities might be reconceptualized as types of voice and exit. Response essays by Mikayla Novak, Ilya Somin, and Max Borders broaden the discussion to highlight how to think about Hirschman’s framework in various contexts. And then I returned to the discussion this week with a response essay of my own, attempting to tie those essays together and extend the discussion about how technological innovation might provide us with greater voice and exit options going forward. Each contributor offers important insights and illustrates the continuing importance of Hirschman’s book.

I encourage you to jump over to Cato Unbound to read the essays and join the conversations in the comments.

 

]]>
https://techliberation.com/2020/08/27/symposium-hirschmans-exit-voice-loyalty-at-50/feed/ 2 76803
An Esoteric Reading of LM Sacasas https://techliberation.com/2019/02/26/an-esoteric-reading-of-lm-sacasas/ https://techliberation.com/2019/02/26/an-esoteric-reading-of-lm-sacasas/#respond Tue, 26 Feb 2019 14:54:15 +0000 https://techliberation.com/?p=76459

After reading LM Sacasas’ recent piece on moral communities, I couldn’t help but wonder if the piece was written in the esoteric mode.

Let me explain by some meandering.

Now, I am surely going to butcher his argument, so take a read of it yourself, but there is a bit of an interesting call and response structure to the piece. He begins with commentary on the “frequent deployment of the rhetorical we,” in discussions over the morality of technology. Then, channeling Langdon Winner, he notes approvingly that “What matters here is that this lovely ‘we’ suggests the presence of a moral community that may not, in fact, exist at all, at least not in any coherent, self-conscious form.”

He is right: the use of the rhetorical we helps to construct a community, which he then deploys later in the piece. To see this in action,

…The idea that technical forms are merely neutral has proven hard to shake. For a very long time, it has been a cornerstone principle of our thinking about technology and society. Or, more to the point, we have taken it for granted and have consequently done very little thinking about technology with regards to society.

I’ll note in passing that the liberal democratic structures of modern political culture and the development of technology are deeply intertwined, and they have both depended upon the presumption of their ostensible neutrality. I am tempted to think that our present crisis is a function of a growing realization that neither our political structures nor our technologies are, in fact, merely neutral instruments.

Before becoming a policy analyst, I went to graduate school at the University of Illinois at Chicago and studied communication, which at the time was transitioning away from the influence of former dean Stanley Fish and becoming a new media studies program. The staff was and still is excellent, but at the time it was deeply heterodox, including both old school rhetoricians and literary scholars as well as communication historians and communication sociologists.

All of this background is to say that Sacasas’ charge that “we have taken it for granted and have consequently done very little thinking about technology with regards to society,” depends a lot on the kind of community you call your own and how you understand community.

My former community, communication scholars, has a long history of exploring these questions. Indeed, one of my favorite classes was an introductory survey course on democracy and technology. But Sacasas knows that community all too well. I don’t think he was intending to suggest those kinds of counterpublics when suggesting community. As he notes, “There is no moral community or public space in which technological issues are topics for deliberation, debate, and shared action.” Here, he means moral community as it comes to us from Durkheim. Just as a reminder, moral community in this tradition generally references “those beings that you need to think ‘but is this right’ before you do something that could affect them.” In other words, questions over the morality of technology are not attended by the kinds of questions that constitute a moral community. I want to come back to this point later.

Where does this leave us? He further explains,

We are, at present, stuck in an unhelpful tendency to imagine that our only options with regard to how we govern technology are, on the one hand, individual choices and, on the other, regulation by the state. What’s worse, we’ve also tended to oppose these to one another. But this way of conceptualizing our situation is both a symptom of the deepest consequences of modern technology and part of the reason why it is so difficult to make any progress.

Technology operates at different scales and effective mechanisms of governance need to correspond to the challenges that arise at each scale. Mechanisms of governance that make sense at one end of the spectrum will be ineffective at the other end, and vice versa.

Our problem is basically this: technologies that operate at the macro-level cannot be effectively governed by micro-level mechanisms, which basically amount to individual choices. At the macro-level, however, governance is limited by the degree to which we can arrive at public consensus, and the available tools of governance at the macro-level cannot address all of the ways technologies impact individuals. What is required is a cocktail of strategies that address the consequences of technology as they manifest themselves across the spectrum of scale.

In other words, Sacasas sets up a governance gap problem. There are micro-level solutions and macro-level solutions, but nothing in the middle that might emanate from a moral community. But, again, the fundamental criticism of this entire argument hinges on accepting the rhetorical we and the notion of a community. Or, to say it another way, a community must first be constructed for a governance gap to exist. If we don’t agree to the rhetorical construction of community, if there is no we, then there is no gap to fill. This is no small point. Even Durkheim’s original understanding of moral community was a subjective understanding of the ethics of an imagined community.

But even separate from the construction problem, it is not clear to me that there isn’t already “a cocktail of strategies that address the consequences of technology as they manifest themselves across the spectrum of scale.” For example, Facebook changed its policy on breastfeeding photos after a group of mothers organized and pushed the #FreeTheNipple campaign. I cannot help but wonder if that is the kind of community-driven strategy that Sacasas would want to promote.

That notoriously nebulous concept of civil society is worth invoking here. Organizations like EFF and EPIC and FreePress sue platforms and local governments, and help enact change. And what about all of the reports from journalists in the last decade? They have impacted both Facebook and Google, forcing them to change. Same with Apple and AT&T and Verizon. All of this is to say, I’m not exactly convinced this vision of the world is the appropriate yardstick of critique.   

]]>
https://techliberation.com/2019/02/26/an-esoteric-reading-of-lm-sacasas/feed/ 0 76459
Three Short Responses To The Pacing Problem https://techliberation.com/2018/11/27/three-short-responses-to-the-pacing-problem/ https://techliberation.com/2018/11/27/three-short-responses-to-the-pacing-problem/#respond Tue, 27 Nov 2018 17:16:38 +0000 https://techliberation.com/?p=76419

Contemporary tech criticism displays an anti-nostalgia. Instead of being reverent for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote that old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today.  

The distance between the now and the future finds its hook in tech policy in the pacing problem, a term describing the mismatch between advancing technologies and society’s efforts to cope with them. Vivek Wadhwa explained that, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” In The Laws of Disruption, Larry Downes explained the pacing problem like this: “technology changes exponentially, but social, economic, and legal systems change incrementally.” Or, as Adam Thierer wondered, “What happens when technological innovation outpaces the ability of laws and regulations to keep up?”

Here are three short responses.

Technological Determinism

Part of what drives the worry about a pacing problem is rooted in a belief in technological determinism. Determinism aligns human actors and technological objects in a causal relationship. Technology acts on society as an outside force. In this view of the world, technology is separate from society and thus can advance by leaps and bounds before society and regulation can catch up. In other words, technology is made an independent variable which acts upon us all.

Yet, that doesn’t describe the world in which technological objects are created and sustained. The iPhone was created by Apple following the success of the iPod in melding the hardware platform with the content of the mobile web, ultimately for the purpose of boosting sales. And people became enamored with it, lining up days before its release to grab one. Technologies aren’t alien objects. They are molded by particular interests and institutional goals, and rooted in society, especially the bourgeois virtues.

Technologies exist within human ecology, just as economic systems do. To make technology an outside force misplaces the role of human values in the creation and adoption of innovation. As separated from society, determinism allows for technology to be both mythologized and demonized. Technologies cannot outpace our ability to adapt. Rather, the speed of change, of innovation, is rate-limited by society’s ability to adapt. As Robin Hanson explained, “society’s ability to adapt is the primary constraint on how fast we adopt new technologies.”

The Technological Accident

The pacing problem also gains purchase because new technologies create the possibility for new accidents. As philosopher Paul Virilio wrote,

To invent the sailing ship or the steamer is to invent the shipwreck. To invent the train is to invent the rail accident of derailment. To invent the family automobile is to produce the pile-up on the highway.

Every newly created technology comes with the potential for problems. So the possibility set for accidents increases dramatically when a new technology comes onto the scene. But it isn’t the case that all of those risks will be manifested; only a subset of potential problems will ever be realized. As such, it isn’t that social and regulatory systems need to have all the answers in advance. Rather, there need to be flexible systems in place to deal with issues as they actually arise.

Regulation as a Real Option

Perhaps, however, we have been thinking about the pacing problem incorrectly. Maybe the pacing problem isn’t a problem as much as it is a reflection of uncertainty. Again, Vivek Wadhwa pithily explained this problem, saying, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” Consider that phrase I have highlighted. There is little agreement as to how we should regulate social media. In other words, there is regulatory uncertainty. The concept of a real option might help make sense of this.

Real options are the investment choices that a company’s management will make in order “to expand, change or curtail projects based on changing economic, technological or market conditions.” While originally used in strictly financial terms, economists Avinash Dixit and Robert Pindyck have adapted this concept to understand how firms invest, or not, in the face of regulatory uncertainty. As you read this paragraph from the first chapter of their book on the subject, replace the term investment with regulation and see what you think,

Most investment decisions share three important characteristics in varying degrees. First, the investment is partially or completely irreversible. In other words, the initial cost of investment is at least partially sunk; you cannot recover it all should you change your mind. Second, there is uncertainty over the future rewards from the investment. The best you can do is to assess the probabilities of the alternative outcomes that can mean greater or smaller profit (or loss) for your venture. Third, you have some leeway about the timing of your investment. You can postpone action to get more information (but never, of course, complete certainty) about the future.

There are strong corollaries. First, most regulatory decisions are difficult to reverse. It is rare for regulations to be stricken from the books, and even if they are, the affected industries are often impacted in more subtle ways. Second, the potential benefits from a regulatory action are uncertain, as Wadhwa pointed out. And finally, government bodies do have some leeway about the timing of their regulatory actions. Putting all of this together, then, regulation might be thought of as a real option.
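To make the analogy concrete, here is a minimal sketch of the core real-options intuition from the Dixit and Pindyck framework. The numbers and function names are purely hypothetical illustrations: when a decision is irreversible and uncertainty will resolve over time, deferring can be worth more than acting immediately.

```python
# Toy sketch (hypothetical numbers): the value of deferring an
# irreversible decision under uncertainty, in the spirit of the
# real-options framework described above.

def expected_value_now(payoff_good, payoff_bad, p_good, sunk_cost):
    """Commit immediately: pay the sunk cost before uncertainty resolves."""
    return p_good * payoff_good + (1 - p_good) * payoff_bad - sunk_cost

def expected_value_wait(payoff_good, payoff_bad, p_good, sunk_cost, discount=0.9):
    """Defer one period: learn the outcome first, then act only if it pays."""
    act_good = max(payoff_good - sunk_cost, 0)  # act only when worthwhile
    act_bad = max(payoff_bad - sunk_cost, 0)
    return discount * (p_good * act_good + (1 - p_good) * act_bad)

now = expected_value_now(200, 40, 0.5, 100)    # 0.5*200 + 0.5*40 - 100 = 20
wait = expected_value_wait(200, 40, 0.5, 100)  # 0.9 * (0.5*100 + 0.5*0) = 45
# Waiting is worth more (45 > 20): the option to avoid the bad outcome
# has value precisely because the commitment is irreversible.
```

Swap “invest” for “regulate” and the same arithmetic applies: regulating now risks locking in the downside scenario, while waiting preserves the option to do nothing if the rule turns out to be unneeded.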

As economists Bronwyn H. Hall and Beethika Khan explained,

The most important thing to observe about this kind of [investment] decision is that at any point in time the choice being made is not a choice between adopting and not adopting but a choice between adopting now or deferring the decision until later.

In the same way, government regulation isn’t about regulating now or not regulating at all, but about regulating now or deferring the decision until later. That sounds a lot to me like the pacing problem.  

]]>
https://techliberation.com/2018/11/27/three-short-responses-to-the-pacing-problem/feed/ 0 76419
Book Review: Cathy O’Neil’s “Weapons of Math Destruction” https://techliberation.com/2018/11/07/book-review-cathy-oneils-weapons-of-math-destruction/ https://techliberation.com/2018/11/07/book-review-cathy-oneils-weapons-of-math-destruction/#comments Wed, 07 Nov 2018 17:01:28 +0000 https://techliberation.com/?p=76408

To read Cathy O’Neil’s Weapons of Math Destruction (2016) is to experience another in a line of progressive pugilists of the technological age. Where Tim Wu took on the future of the Internet and Evgeny Morozov chided online slacktivism, O’Neil takes on algorithms, or what she has dubbed weapons of math destruction (WMD).

O’Neil’s book came at just the right moment in 2016. It sounded the alarm about big data just as it was becoming a topic for public discussion. And now, two years later, her worries seem prescient. As she explains in the introduction,

Big Data has plenty of evangelists, but I’m not one of them. This book will focus sharply in the other direction, on the damage inflicted by WMDs and the injustice they perpetuate. We will explore harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job. All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.

O’Neil is explicit about laying the blame at the feet of the WMDs: “You cannot appeal to a WMD. That’s part of their fearsome power. They do not listen.” Yet these models aren’t deployed and adopted in a frictionless environment. Instead, they “reflect goals and ideology,” as O’Neil readily admits. Where Weapons of Math Destruction falters is that it ascribes too much agency to algorithms in places, and in doing so misses the broader politics behind algorithmic decision making.

For example, O’Neil begins her book with a story about Sarah Wysocki, a teacher who got fired from the D.C. public school system because of how the teacher evaluation system ranked her abilities. O’Neil writes,

Yet at the end of the 2010-11 school year, Wysocki received a miserable score on her IMPACT evaluation. Her problem was a new scoring system known as value-added modeling, which purported to measure her effectiveness in teaching math and language skills. That score, generated by an algorithm, represented half of her overall evaluation, and it outweighed the positive reviews from school administrators and the community. This left the district with no choice but to fire her, along with 205 other teachers who had IMPACT scores below the minimal threshold.

In the ensuing pages, O’Neil describes the scoring system, how it was designed, and how it affected Wysocki. But the broader politics behind the scoring system that ousted Wysocki are just as important.
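For readers unfamiliar with the method, here is a minimal sketch of the basic logic behind a value-added score. The numbers and the simple mean-residual formula are hypothetical illustrations, not D.C.’s actual IMPACT model, which relied on a far more elaborate statistical specification.

```python
# Toy illustration (not the actual IMPACT formula): a value-added model
# compares students' actual test scores with the scores a statistical
# model predicted for them, and credits (or blames) the teacher for the
# average difference.

def value_added(actual_scores, predicted_scores):
    """Mean residual: how far students landed above or below prediction."""
    residuals = [a - p for a, p in zip(actual_scores, predicted_scores)]
    return sum(residuals) / len(residuals)

# Hypothetical class: predictions would come from prior scores,
# demographics, and other controls.
predicted = [70, 75, 80, 85]
actual = [72, 74, 83, 87]
score = value_added(actual, predicted)  # (2 - 1 + 3 + 2) / 4 = 1.5
# Small samples are exactly why such scores are noisy: one or two
# unusual students can swing a teacher's rating substantially.
```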

Why, for example, was the value-added score such a prominent feature in the teacher evaluation as compared to administrative and parent input? Well, research from the Bill & Melinda Gates Foundation found that a teacher’s value-added track record is among the strongest predictors of student achievement gains. So, the school district changed around their evaluations to make it a central feature. As Jason Kamras, chief of human capital for D.C. schools, told the Washington Post, “We put a lot of stock in it.” But that decision wasn’t without its critics, including Washington Teachers’ Union President Nathan Saunders who said, “You can get me to walk down the road with you to say value-added is relevant, but 50 percent is too weighted.”

Moreover, the weights changed in 2009 because the Chancellor of D.C. public schools, Michelle Rhee, had negotiated a new deal with the teachers union. In exchange for 20 percent pay raises and bonuses of $20,000 to $30,000 for effective teachers, the district was given more leeway to fire teachers for poor performance, which it did using the IMPACT system. In part, this fight was spurred on because Obama-era Education Secretary Arne Duncan was doling out $3.4 billion in Race to the Top grants that focused on teacher effectiveness measures. Moreover, Rhee was Chancellor because D.C. Mayor Adrian Fenty had passed legislation that bypassed the Board of Education and gave him control of the schools.

Yes, Wysocki might have been a false positive, but what about all of the poor-performing teachers that the previous system hadn’t let go? By focusing on the teachers, O’Neil steers the conversation away from what should be the central concern: did the change actually help students learn and achieve?

Truth be told, my quibbles with Weapons of Math Destruction fit into two types. The first class relates to questions of emphasis and scope, which become important when the reader tallies up the costs and benefits of algorithms. Perhaps it is the case that “The U.S. News college ranking has great scale, inflicts widespread damage, and generates an almost endless spiral of destructive feedback loops.” But on the other hand, lower ranked colleges have decreased their net tuition and accepted a larger share of applicants. Yes, credit scores “open doors for some of us, while slamming them in the face of others,” but in which proportion? In Chile, for example, credit bureaus were forced to stop reporting defaults in 2012. The change was found to reduce the costs for most of the poorer defaulters, but raised the costs for non-defaulters, leading to a 3.5 percent decrease in lending and a reduction in aggregate welfare. It could be the case that “the payday loan industry operates WMDs,” but it is unclear where low-income Americans will find short-term loans if they are outlawed.

Second, Weapons of Math Destruction continuously toys with important questions regarding the moral agency of technologies but never explicitly lays them out. How much value should be ascribed to technologies? To what degree are technologies value-neutral or value-laden? All technologies, including the algorithms that O'Neil describes, are designed and implemented for certain kinds of instrumental outcomes by companies and government agencies. An institution has to take on the task of adopting an algorithm for decision-making purposes, and thus the algorithm reflects the institution's goals.

Should the algorithm be blamed, or the institutional structures that put it into place, or some combination of the two? Reading with a careful eye, one will easily see that this is the fundamental question of the book, especially since O'Neil wonders whether "we've eliminated human bias or simply camouflaged it with technology." But the real answer isn't in this binary. Algorithmic problems are pluralist.

]]>
https://techliberation.com/2018/11/07/book-review-cathy-oneils-weapons-of-math-destruction/feed/ 1 76408
In Defense of Techno-optimism https://techliberation.com/2018/10/10/in-defense-of-techno-optimism/ https://techliberation.com/2018/10/10/in-defense-of-techno-optimism/#comments Wed, 10 Oct 2018 18:05:15 +0000 https://techliberation.com/?p=76391

Many are understandably pessimistic about platforms and technology. This year has been a tough one, from Cambridge Analytica and Russian trolls to the implementation of GDPR and data breaches galore.

Those who think about the world, about the problems that we see every day, and about their own place in it, will quickly realize the immense frailty of humankind. Fear and worry make sense. We are flawed, each one of us. And technology only seems to exacerbate those problems.

But life is getting better. Poverty continues to nose-dive; adult literacy is at an all-time high; people around the world are living longer, living in democracies, and are better educated than at any other time in history. Meanwhile, the digital revolution has produced an informational abundance, helping to correct the informational asymmetries that have long plagued humankind. The problem we now face is not how to address informational constraints, but how to give people the means to sort through and make sense of this abundant trove of data. These macro trends don't make headlines. Psychologists know that people love to read negative articles. Our brains are wired for pessimism.

In the shadow of a year of bad news, it is helpful to remember that Facebook and Google and Reddit and Twitter also support humane conversations. Most people aren't going online to talk about politics; those who do are the exception. These sites are places where families and friends can connect. They offer a space of solace, like when chronic pain sufferers find others on Facebook, or when widows vent, rage, laugh and cry without judgement through the Hot Young Widows Club. Let's also not forget that Reddit, while sometimes a place of rage and spite, is also where a weight lifter with cerebral palsy can become a hero and where those with addiction can find healing. And in one of the hardest-to-reach places in Canada, Iqaluit, people say that "Amazon Prime has done more toward elevating the standard of living of my family than any territorial or federal program. Full stop. Period."

Three-fourths of Americans say major technology companies' products and services have been more good than bad for them personally. But when it comes to the whole of society, they are more skeptical about technology bringing benefits. Here is how I read that disparity: Most of us think that we have benefited from technology, but we worry about where it is taking the human collective. That is an understandable worry, but one that shouldn't hobble us into inaction.

Nor is technology making us stupid. Indeed, quite the opposite is happening. Technology use among those aged 50 and above seems to have made them cognitively younger than their parents, to the tune of 4 to 8 years. While the use of Google does seem to reduce our ability to recall information itself, studies find that it has boosted other kinds of memory, like remembering where information can be found. Why remember a fact when you can remember where it is located? Concerned about how audiobooks might be affecting people, Beth Rogowsky, an associate professor of education, compared them to physical reading and was surprised to find "no significant differences in comprehension between reading, listening, or reading and listening simultaneously." Cyberbullying and excessive use might make parents worry, but NIH-supported work found that "Heavy use of the Internet and video gaming may be more a symptom of mental health problems than a cause. Moderate use of the Internet, especially for acquiring information, is most supportive of healthy development." Don't worry. The kids are going to be alright.

And yes, there is a lot we still need to fix. There is cruelty, racism, sexism, and poverty of all kinds embedded in our technological systems. But the best way to handle these issues is through the application of human ingenuity. Human ingenuity begets technology in all of its varieties.

When Scott Alexander over at Slate Star Codex recently looked at 52 startups being groomed by startup incubator Y Combinator, he rightly pointed out that many of them were working for the betterment of all:

Thirteen of them had an altruistic or international development focus, including Neema, an app to help poor people without access to banks gain financial services; Kangpe, online health services for people in Africa without access to doctors; Credy, a peer-to-peer lending service in India; Clear Genetics, an automated genetic counseling tool for at-risk parents; and Dost Education, helping to teach literacy skills in India via a $1/month course.

Twelve of them seemed like really exciting cutting-edge technology, including CBAS, which describes itself as "human bionics plug-and-play"; Solugen, which has a way to manufacture hydrogen peroxide from plant sugars; AON3D, which makes 3D printers for industrial uses; Indee, a new genetic engineering system; Alem Health, applying AI to radiology, and of course the obligatory drone delivery startup.

Eighteen of them seemed like boring meat-and-potatoes companies aimed at businesses that need enterprise data solution software application package analytics targeting management something something something “the cloud”.

As for the other companies, they were the kind of niche products that Silicon Valley has come to be criticized for supporting. Perhaps the Valley deserves some criticism, but perhaps it deserves more credit than it has been receiving as of late.

Contemporary tech criticism displays a kind of anti-nostalgia. In place of reverence for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote that old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today. We need to work diligently together to piece together a better world. But if we constantly live in fear of what comes next, that future won't be built. Optimism needn't be Pollyannaish. It only needs to be hopeful of a better world.

]]>
https://techliberation.com/2018/10/10/in-defense-of-techno-optimism/feed/ 2 76391
The Pacing Problem, the Collingridge Dilemma & Technological Determinism https://techliberation.com/2018/08/16/the-pacing-problem-the-collingridge-dilemma-technological-determinism/ https://techliberation.com/2018/08/16/the-pacing-problem-the-collingridge-dilemma-technological-determinism/#comments Thu, 16 Aug 2018 22:41:56 +0000 https://techliberation.com/?p=76349

I recently posted an essay over at The Bridge about “The Pacing Problem and the Future of Technology Regulation.” In it, I explain why the pacing problem—the notion that technological innovation is increasingly outpacing the ability of laws and regulations to keep up—“is becoming the great equalizer in debates over technological governance because it forces governments to rethink their approach to the regulation of many sectors and technologies.”

In this follow-up article, I wanted to expand upon some of the themes developed in that essay and discuss how they relate to two other important concepts: the "Collingridge Dilemma" and technological determinism. In doing so, I will build on material that is included in a forthcoming law review article I have co-authored with Jennifer Skees and Ryan Hagemann ("Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future") as well as a book I am finishing up on the growth of "evasive entrepreneurialism" and "technological civil disobedience."

Recapping the Nature of the Pacing Problem

First, let us quickly recap the nature of "the pacing problem." I believe Larry Downes did the best job explaining the "problem" in his 2009 book, The Laws of Disruption. Downes argued that "technology changes exponentially, but social, economic, and legal systems change incrementally" and that this "law" was becoming "a simple but unavoidable principle of modern life."

Downes was generally a cheerleader for such developments. For him, the pacing problem is more like the pacing benefit. But Downes is in the minority among most tech policy scholars in this regard. In the field of Science and Technology Studies (STS), discussions about the pacing problem and what to do about it are omnipresent and full of foreboding gloominess.

STS is a broad field of interdisciplinary studies unified by a concern with "the impacts and control of science and technology, with particular focus on the risks, benefits and opportunities that S&T may pose" to a wide range of values. The field incorporates many disciplines: legal and philosophical studies, sociology, anthropology, engineering, and others. In countless essays, papers, journal articles, and books, STS scholars lament the pacing problem and often insist something must be done, often without ever getting around to explaining what that something is.

Regardless of their field of study, there is broad recognition among these scholars that new technological, social, and political realities make the pacing problem a phenomenon worth studying.  In my Bridge essay, I identified three primary drivers of the pacing problem:

  • Technological driver: The power of “combinatorial innovation,” which is driven by “Moore’s Law,” fuels a constant expansion of technological capabilities.
  • Social driver: Citizens quickly assimilate new tools into their daily lives and then expect even more and better tools to be delivered tomorrow.
  • Political driver: Government has grown increasingly dysfunctional and unable to adapt to those technological and social changes.

The “Collingridge Dilemma”

Although they do not always refer to it by name, STS scholars regularly stress the so-called "Collingridge dilemma" in their work. The Collingridge dilemma refers to the extreme difficulty of putting proverbial genies back in their bottles once a given technology has reached a certain inflection point in society. The concept is named after David Collingridge, who wrote about the challenges of governing emerging technologies in his 1980 book, The Social Control of Technology.

“The social consequences of a technology cannot be predicted early in the life of the technology,” Collingridge argued. “By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economics and social fabric that its control is extremely difficult.” He called this the “dilemma of control,” and asserted that, “When change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time-consuming.”

In a sense, the “Collingridge dilemma” is simply a restatement of the pacing problem but with (1) greater stress on the social drivers behind the pacing problem and, (2) an implicit solution to “the problem” in the form of preemptive control of new technologies while they are still young and more manageable.

Specifically, for many STS scholars, Collingridge's "dilemma" is preferably solved through the application of the Precautionary Principle. The contours of the Precautionary Principle are notoriously murky and ill-defined. Nonetheless, as I discussed at great length in my last book on the subject, the Precautionary Principle generally refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

You can see the logic of the Collingridge dilemma and the Precautionary Principle at work everywhere in STS scholarship today. Few scholars want to admit they favor the Precautionary Principle, however, so they often use different terminology. “Anticipatory governance” or “upstream governance” are the preferred terms of art these days.

For example, in a recent law review article about “Regulating Disruptive Innovation,” Nathan Cortez argues that “new technologies can benefit from decisive, well-timed regulation” or even “early regulatory interventions.” Similarly, writing in Slate in 2014, John Frank Weaver insisted we should regulate emerging tech like artificial intelligence “early and often” to “get out ahead of” various social and economic concerns.

In his last book, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control, bioethicist Wendell Wallach also argued for new forms of upstream governance and defined it as a system that allows for "more control over the way that potentially harmful technologies are developed or introduced into the larger society. Upstream management is certainly better than introducing regulations downstream, after a technology is deeply entrenched, or something major has already gone wrong," he argued. Wallach is basically just restating the Collingridge dilemma in this regard.

The problem with all these calls for anticipatory or upstream governance solutions to the pacing problem and the Collingridge dilemma is that, like the Precautionary Principle more generally, the specific solutions offered are often incoherent or missing altogether. STS scholars almost always leave the reader hanging without offering a conclusion to their gloomy, pessimistic narratives about whatever technology or technological process it is they are critiquing. Critics are quick to issue bold calls to action, but rarely provide a detailed blueprint.

There are some exceptions. Some STS scholars have advocated for Precautionary Principle-minded legislation or agencies, like an "Artificial Intelligence Development Act," a "National Algorithmic Technology Safety Administration" or a federal AI agency, such as a "Federal Robotics Commission." Meanwhile, over the past decade, many STS scholars have pushed for national privacy and cybersecurity legislation, or expansive new forms of liability for technology companies. The regulatory authority sought in these cases would be squarely precautionary in character, aimed at addressing a wide array of hypothetical harms through permission-based rulemaking before those problems even materialize.

Technological Determinism?

Discussions about the pacing problem and the Collingridge dilemma have an air of technological determinism to them. Technological determinism generally refers to the notion that technology almost has a mind of its own and that it will plow forward without much resistance from society or governments. Here is a more scholarly definition from Sally Wyatt, who has explained how technological determinism is generally defined in a two-part fashion:

The first part is that technological developments take place outside society, independently of social, economic, and political forces. New or improved products or ways of making things arise from the activities of inventors, engineers, and designers following an internal, technical logic that has nothing to do with social relationships. The more crucial second part is that technological change causes or determines social change.

The opposite of technological determinism is usually referred to as “social constructivism,” which as Thomas Hughes notes, “presumes that social and cultural forces determine technical change.”

Ironically, among STS scholars, technological determinist reasoning is both (a) regularly on display, and (b) generally reviled. That is, many STS scholars speak in deterministic tones about the inevitability of certain technological developments, but then effortlessly shift into social constructivist mode when commenting on what they hope to do about it.

One of the most well-known technology critics of the past century was French philosopher Jacques Ellul. It is impossible to read his tracts and not find deterministic reasoning flying off every other page. He argued, for example, that technology is "self-perpetuating, all-pervasive, and inescapable," and that it represents "an autonomous and uncontrollable force that dehumanizes all that it touches." Moreover, within the field of Marxist studies, technological determinism is ubiquitous. Of course, that goes back to Marx himself and his many ideological descendants, who held strongly deterministic views about the role industrial technology played in shaping history and socio-political systems. Plenty of other STS scholars remain hard-core social constructivists, however, and insist that dealing with the pacing problem and the Collingridge dilemma really just comes down to a matter of sheer social and political willpower.

Techno-determinist thinking is usually on display in more vivid terms among technological optimists. Reading the writings of futurists like Ray Kurzweil and Kevin Kelly, one cannot help but get the sense that they are pining for the day when we are all just assimilated into The Matrix. There is an air of utter futility associated with humanity’s efforts to resist the spread of various technological systems and processes. Philosopher Michael Sacasas refers to this mentality as “the Borg Complex,” which, he says, is often “exhibited by writers and pundits who explicitly assert or implicitly assume that resistance to technology is futile.”

The point I am trying to make here is that technological determinism is at work in all sorts of scholarship and punditry. Regardless of whether one subscribes to what Ian Barbour has labelled the warring viewpoints of “Technology as Liberator” or “Technology as a Threat,” very different people can hold strongly deterministic viewpoints.

Soft Determinism

The problem with all this talk about determinism—technological, social, political, or whatever—is that the lines are never quite as bright as some suggest. “Hard” determinism of any of these varieties simply cannot be correct. We have too many historical examples that run counter to both narratives.

Personally, I’ve always subscribed to what some refer to as “soft technological determinism.” Technological historian Merritt Roe Smith defines “soft determinism” as the view “which holds that technological change drives social change but at the same time responds discriminatingly to social pressures,” as compared to “hard determinism,” which “perceives technological development as an autonomous force, completely independent of social constraints.”

Konstantinos Stylianou has offered a variant of soft determinism that zeroes in on better understanding the unique attributes of specific technologies and political systems when considering how difficult they may be to control. He argues that “there are indeed technologies so disruptive that by their very nature they cause a certain change regardless of other factors,” such as the Internet. Stylianou concludes that:

It seems reasonable to infer that the thrust behind technological progress is so powerful that it is almost impossible for traditional legislation to catch up. While designing flexible rules may be of help, it also appears that technology has already advanced to the degree that is able to bypass or manipulate legislation. As a result, the cat-and-mouse chase game between the law and technology will probably always tip in favor of technology. It may thus be a wise choice for the law to stop underestimating the dynamics of technology, and instead adapt to embrace it.

That may sound like just more hard deterministic thinking, but it represents a softer variety that holds that the special characteristics of some technologies are indeed altering our capacity to govern many newer sectors using traditional regulatory mechanisms. In my new law review article with Jennifer Skees and Ryan Hagemann, we conclude that this is the key factor motivating the gradual move away from “hard law” and toward “soft law” governance tools for a great many emerging technologies.

To be clear, this does not mean we are going to soon reach the proverbial “end of politics” or the “death of the nation-state” due to technology, or anything like that. As I point out in my forthcoming book, that sort of talk is silly. Some technology enthusiasts or libertarians use techno-determinist talk as if they are preaching a gospel of liberation theology—liberation from the state through technology emancipation, that is.

In reality, technology giveth and technology taketh away. Technology can empower people and institutions and help them challenge laws, regulations, and entire political systems. My forthcoming book documents how many “evasive entrepreneurs” are doing just that today, and with increasing regularity. But technology empowers government actors, too. In an unpublished 2009 manuscript entitled, “Does Technology Drive the Growth of Government?” my Mercatus Center colleague Tyler Cowen noted how growth of big government in the 20th century was greatly facilitated by various modern technologies (advanced transportation and communications networks, in particular). “Future technologies may either increase or decrease the role of government in society,” he noted, “but if history shows one thing, it is that we should not neglect technology in understanding the shift from an old political equilibrium to a new one.”

Thus, those who think that the pacing problem is a one-way ratchet to emancipation from state control need to realize that technology can be used for good and bad ends, and it can be used (and abused) by governments to expand their powers and limit our liberties. Similarly, those tech critics and STS scholars who lament how the pacing problem will undermine governments, democracy, or other institutions or values without radical interventions also are going too far. They need to recognize that while it is true many new technologies will march forward at a steady clip, it does not mean that society is powerless to bring some order to technological processes. We shape our tools and then our tools shape us. And then we create still more tools to improve upon previous tools, and the process goes on and on.

John Seely Brown and Paul Duguid put it best in a 2001 essay responding to “doom-and-gloom technofuturists”:

[T]echnological and social systems shape each other. The same is true on a larger scale. . . . Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge . . . is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.

So yes, the pacing problem is real, and it will continue to raise problems for social and political systems. But as Brown and Duguid suggest, we’ll constantly adapt, form and reform new dynamic equilibriums, and then “muddle through,” just as we have so many times before.



]]>
https://techliberation.com/2018/08/16/the-pacing-problem-the-collingridge-dilemma-technological-determinism/feed/ 2 76349
The Online Public Sphere or: Facebook, Google, Reddit, and Twitter also support positive communities https://techliberation.com/2018/07/11/the-online-public-sphere-or-facebook-google-reddit-and-twitter-also-support-positive-communities/ https://techliberation.com/2018/07/11/the-online-public-sphere-or-facebook-google-reddit-and-twitter-also-support-positive-communities/#comments Wed, 11 Jul 2018 15:40:46 +0000 https://techliberation.com/?p=76314

In cleaning up my desk this weekend, I chanced upon an old notebook and, as many times before, began to transcribe the notes. It was short, so I got to the end within a couple of minutes. The last page was scribbled with the German term Öffentlichkeit (public sphere), a couple of sentences on Hannah Arendt, and a paragraph about Norberto Bobbio’s view of public and private.

Then I remembered. Yep. This is the missing notebook from a class on democracy in the digital age.

Serendipitously, a couple of hours later, William Freeland alerted me to Franklin Foer’s newest piece in The Atlantic, titled “The Death of the Public Square.” Foer is the author of “World Without Mind: The Existential Threat of Big Tech,” and if you want a good take on that book, check out Adam Thierer’s review in Reason.

Much like the book, this Atlantic piece wades into techno ruin porn but focuses instead on the public sphere:

Nobody designed the public sphere from a dorm room or a Silicon Valley garage. It just started to organically accrete, as printed volumes began to pile up, as liberal ideas gained currency and made space for even more liberal ideas. Institutions grew, and then over the centuries acquired prestige and authority. Newspapers and journals evolved into what we call media. Book publishing emerged from the printing guilds, and eventually became taste-making, discourse-shaping enterprises.

In recent years, Foer continues, this has been eviscerated by Facebook and Google:

It took centuries for the public sphere to develop—and the technology companies have eviscerated it in a flash. By radically remaking the advertising business and commandeering news distribution, Google and Facebook have damaged the economics of journalism. Amazon has thrashed the bookselling business in the U.S. They have shredded old ideas about intellectual property—which had provided the economic and philosophical basis for authorship.

Philosopher Jürgen Habermas, who is cited throughout the piece, coined the term Öffentlichkeit, which has been translated into English as public sphere. However, Habermas used the term to describe not only the “process by which people articulate the needs of society with the state” but also the “public opinion needed to legitimate authority in any functioning democracy.” So the public sphere bridges the practices of democracy with mass communication methods like broadcast television, newspapers, and magazines.

While Foer doesn’t explore it fully, the public sphere forms a basis for legitimate authority, which in turn implicates political power.

Nancy Fraser provided the classic critique of the public sphere because even in Habermas’ own conception of the term, countless voices were excluded from it. “This network of clubs and associations – philanthropic, civic, professional, and cultural – was anything but accessible to everyone,” Fraser explained. “On the contrary, it was the arena, the training ground and eventually the power base of a stratum of bourgeois men who were coming to see themselves as a ‘universal class’ and preparing to assert their fitness to govern.”

In parallel to the public sphere, Fraser observed that numerous counterpublics formed “where members of subordinated social groups invent and circulate counter discourses to formulate oppositional interpretations of their identities, interests, and needs.” And it is through these oppositional interpretations that the public conversation around politics changed. Think of the civil rights movement, the environmental movement, and even deregulation as examples.

Foer might be right to focus on the public sphere, but I’m not sure his analysis goes far enough. He explains:

This assault on the public sphere is an assault on free expression. In the West, free expression is a transcendent right only in theory—in practice its survival is contingent and tenuous. We’re witnessing the way in which public conversation is subverted by name-calling and harassment. We can convince ourselves that these are fringe characteristics of social media, but social media has implanted such tendencies at the core of the culture. They are in fact practiced by mainstream journalists, mobs of the well meaning, and the president of the United States. The toxicity of the environment shreds the quality of conversation and deters meaningful participation in it. In such an environment, it becomes harder and harder to cling to the idea of the rational individual, formulating opinions on the basis of conscience. And as we lose faith in that principle, the public will lose faith in the necessity of preserving the protections of free speech.

But Foer’s lament, if it is about the public sphere, is ultimately about the old friction, between the public sphere and counterpublics, in new form. Foer’s worries about theological zealots, demagogic populists, avowed racists, trollish misogynists, filter bubbles, the false prophets of disruption, and invisible manipulation, to name just a few techno-golems, echo the “counter discourses [that] formulate oppositional interpretations” of Fraser.

It is all quite inhumane, yes.

But let’s also remember that Facebook and Google and Reddit and Twitter also support humane counterpublics. Like when chronic pain sufferers find solace on Facebook. Or when widows vent, rage, laugh and cry without judgement through the Hot Young Widows Club. Let’s also not forget that Reddit, while sometimes being a place of rage and spite, is also where a weight lifter with cerebral palsy became a hero and where those with addiction can find healing.

Let’s also not forget that most Americans think these companies have on the whole been beneficial in their lives. And that most of us don’t post political content on either Facebook or Twitter. And that people are less likely to get their news from social networking sites than from any other source.

Focusing on democracy and on politics tightens the critical vision, causing us to miss the multiplicities of experiences online. Yet those experiences, those counterpublics are just as representative. They constitute a reality far more real than those constructed by critics.

]]>
https://techliberation.com/2018/07/11/the-online-public-sphere-or-facebook-google-reddit-and-twitter-also-support-positive-communities/feed/ 2 76314
Are “Permissionless Innovation” and “Responsible Innovation” Compatible? https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible/ https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible/#respond Wed, 12 Jul 2017 18:28:55 +0000 https://techliberation.com/?p=76164

“Responsible research and innovation,” or “RRI,” has become a major theme in academic writing and conferences about the governance of emerging technologies. RRI might be considered just another variant of corporate social responsibility (CSR), and it indeed borrows from that heritage. What makes RRI unique, however, is that it is more squarely focused on mitigating the potential risks that could be associated with various technologies or technological processes. RRI is particularly concerned with “baking-in” certain values and design choices into the product lifecycle before new technologies are released into the wild.

In this essay, I want to consider how RRI lines up with the opposing technological governance regimes of “permissionless innovation” and the “precautionary principle.” More specifically, I want to address the question of whether “permissionless innovation” and “responsible innovation” are even compatible. While participating in recent university seminars and other tech policy events, I have encountered a certain degree of skepticism—and sometimes outright hostility—after suggesting that, properly understood, “permissionless innovation” and “responsible innovation” are not warring concepts and that RRI can co-exist peacefully with a legal regime that adopts permissionless innovation as its general tech policy default. Indeed, the application of RRI lessons and recommendations can strengthen the case for adopting a more “permissionless” approach to innovation policy in the United States and elsewhere.

Definitional Ambiguities, Part 1: “Governance”

Before we can have a constructive conversation about these issues, however, we need to agree upon how narrowly or broadly we are defining some relevant terms, beginning with the word “governance.” When some hear the term “governance” their first reaction might be to think “government,” and formal legal and regulatory processes in particular. That is certainly one form of governance, but it is hardly the only one.

We often speak of the “governance” of corporations, schools, churches, other institutions, and even households. When we do, we usually do not mean government administration of these things; we are instead thinking of some other, more amorphous form of governance by a variety of individuals or groups. The “governance” of a company, for example, includes the interaction of shareholders, board members, corporate officials, workers, and so on. The “governance” of a church might involve clergy, the congregation, and sacred scriptures or traditions.  Household “governance” comes down to decisions made by parents and caretakers. And so on.

Thus, “governance” can certainly have the narrow connotation of being associated with formal regulatory enactments by governments, but it can also describe a much broader universe of norms and rules that are established and enforced by a wide variety of people (or groups of people) in a wide variety of ways.

When we consider questions of technological governance—and specifically the notion of “anticipatory governance,” which is a prominent feature of RRI discussions—it helps to specify whether we are speaking of governance in a broad or narrow sense. Whether consciously or not, in much of the literature, RRI scholars and advocates fail to make clear what type of “governance” they have in mind when proposing new forms of anticipatory technological governance.

Definitional Ambiguities, Part 2: “Precautionary Principle” & “Permissionless Innovation”

These distinctions are particularly important when we compare and contrast the “precautionary principle” and “permissionless innovation.” These concepts are most useful when viewed as governance dispositions or policy postures and they are usually—although not always—used in the narrow “governance” sense to describe one’s perspective on where legal and regulatory defaults should be set.

Even when applied narrowly, however, both terms are open to interpretation as applied in various policy contexts. For example, precaution could mean an outright prohibition on an innovative activity until such time as it had been proven safe (this is the way many FDA or FAA regulations work). But precaution might be imposed through somewhat less restrictive approaches, such as a set of government-established safety standards buttressed by a recall regime (think NHTSA or CPSC). Even less restrictive but still precautionary in orientation would be a mandatory labeling law or a government-led risk reduction educational campaign. In other words, there are probably as many flavors of the precautionary principle as there are flavors of ice cream.

For the longest time, both proponents and critics of the precautionary principle have failed to put a name on its opposing worldview or governance disposition. I have argued that, despite its uncertain origin and imprecise meaning, “permissionless innovation” provides a useful name for the antithesis of the precautionary principle.

As I noted in a recent speech at an Arizona State University law school conference on technological governance, critics of permissionless innovation sometimes like to imply that it is synonymous with anarchy. (In fact, a few people at that event leveled that accusation at me.) But I’ve written an entire book on this notion and surveyed countless essays and articles that cite the term, and I have never once seen any advocate of permissionless innovation going to such an extreme. In fact, those advocates often don’t even bother calling for the abolition of any laws, programs, or agencies. As I noted in my ASU talk, “most of those defenders of permissionless innovation are using the term as a sort of shorthand when what they really mean to say is something like: ‘give innovators a bit more breathing room,’ or, ‘don’t rush to regulate.’”

And so, as a policy posture, permissionless innovation really comes down to a preference for setting public policy defaults closer to green lights rather than red ones. In my own book on the subject, I defined the term as follows:

“Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.”

By contrast, the precautionary principle posture generally recommends keeping the light red until innovators can prove their new products and services are “safe,” however that is defined. But there are many points along the spectrum between these two policy postures. And if we can accept the idea that the “precautionary principle” and “permissionless innovation” act more as general governance dispositions instead of fixed and rigid edicts, then it is also easier to imagine how both of those dispositions can incorporate “responsible innovation” notions into their governance visions.

Definitional Ambiguities, Part 3: “Responsible Innovation”

But what exactly constitutes “responsible innovation”? Definitions of responsible research and innovation are still evolving, but a leading article on the subject by René von Schomberg from 2011 argues that it can be defined as:

“A transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society).”

A more streamlined definition was offered by Jack Stilgoe, Richard Owen, and Phil Macnaghten in a 2013 article: “Responsible innovation means taking care of the future through collective stewardship of science and innovation in the present.” They also proposed four dimensions of responsible innovation—anticipation, reflexivity, inclusion and responsiveness—which they say “provide a framework for raising, discussing and responding to such questions.”

RRI Tools, a European consortium focused on promoting responsible innovation strategies, identifies the six core goals of RRI as: open access, gender equality in science, ethics, science education, governance, and public engagement. Other groups and individuals promoting RRI focus on privacy, safety, and security as crucial values that they hope to work into more product development processes early on.

As with “corporate social responsibility” before it, “responsible innovation” will remain a term that is open to varying interpretations and which can incorporate many distinct values that are context-dependent. What Milton Friedman said of CSR discussions in 1970—that they “are notable for their analytical looseness and lack of rigor”—continues to be somewhat true for both CSR and RRI circa 2017. Nonetheless, what both concepts hold in common is the belief that, whatever those “responsible” values are, they can be “baked in” to corporate decision-making and product design processes in an anticipatory fashion.

And while not everyone will agree on the contours of these concepts, practically speaking, I think we can expect that both the CSR and RRI movements will continue to grow in coming years. That will be the case not only because of the pressures applied by various activists, stakeholders, and governments, but also because many companies and their consumers will demand more than just better products and greater profitability.

But Doesn’t RRI Necessitate the Precautionary Principle as a Policy Prerequisite?

But how precisely should RRI notions and recommendations influence policy deliberations over the future course of technological governance in the narrow (i.e., more legalistic) sense of the term? Here’s where things get more interesting.

The problem is that many of the advocates of RRI seem more sympathetic to precautionary policy regimes and skeptical of the wisdom of permissionless innovation as a policy default. This connection is not always well articulated in their writing; instead, it is the attitude seemingly on display when I speak with RRI advocates or hear them deliver speeches. Yet most of these advocates just won’t ever let you nail them down on the point.

Some RRI advocates do come close to making that connection. In his seminal article, René von Schomberg argues that RRI, “can reduce the human cost of trial and error and make advantage of a societal learning process of stakeholders and technical innovators. It creates a possibility for anticipatory governance,” he says. “This should ultimately lead to products which are (more) societal robust.”

He then briefly raises the possibility of RRI informing the application of the precautionary principle in public policy debates:

“The precautionary principle works as an incentive to make safe and sustainable products and allow governmental bodies to intervene with Risk Management decisions (such as temporary licensing, case by case decision making etc) whenever necessary in order to avoid negative impacts.”

Yet, von Schomberg never really spells out the exact relationship between RRI and the precautionary principle as a matter of public policy.

Another leading article on the meaning of RRI by Grace Eden, Marina Jirotka, and Bernd Stahl, says that, “The RRI focus is more on mitigating wider societal long-term risks and so favors incremental rather than radical innovation.” That seems to suggest a closer connection between RRI and a formal application of the precautionary principle in policy deliberations about emerging technologies. They also speak of the “two very different approaches to problem solving (anticipatory vs. evidence-based),” which I have argued gets to the heart of the divergence between the precautionary principle and permissionless innovation policy paradigms. Yet, these authors do not dwell on this connection at length, and most of the rest of their article is focused on the ways in which RRI can (and already does) infuse product and service development processes outside of the realm of public policy.

In a 2015 Brookings Institution white paper about RRI, Walter D. Valdivia and David H. Guston offer a more concrete answer to this question when they insist that responsible innovation “is not a doctrine of regulation and much less an instantiation of the precautionary principle; the actions it recommends do not seek to slow down innovation because they do not constrain the set of options for researchers and businesses, they expand it.” They continue on to note that:

“[responsible innovation] considers innovation inherent to democratic life and recognizes the role of innovation in the social order and prosperity. It also recognizes that at any point in time, innovation and society can evolve down several paths and the path forward is to some extent open to collective choice. What RI pursues is a governance of innovation where that choice is more consonant with democratic principles.”

Here, finally, we have a better demarcation between the general notion of RRI and the formal application of the precautionary principle. But is that line really so bright? Do other RRI scholars agree with Valdivia and Guston about this separation between the “responsible innovation” movement and the formal application of the precautionary principle in the policy realm? And, finally, what is meant by “democratic life” and “democratic principles” in this context?

I suspect that many RRI advocates would read that last line from Valdivia and Guston above (“What RI pursues is a governance of innovation where that choice is more consonant with democratic principles.”) and suggest that it favors an embrace of the precautionary principle as the default position in emerging technology policy discussions. But, again, that remains open to debate because so much of the RRI literature lacks precision regarding the connection between these concepts.

How RRI Can be Compatible with Both Visions

Regardless, I would like to suggest that parties on both sides of this debate would be wise to divorce the concept of responsible innovation from their priors regarding optimal regulatory policy toward emerging technology. Properly understood, “responsible innovation” could be a feature of the “precautionary” vision, but it could also be compatible with the “permissionless” governance vision and resulting policy regimes. To reach that understanding, both sides will need to be open to learning from the other and willing to take their concerns seriously.

Advocates of RRI should understand that, just as CSR can do a great deal of good even in the absence of formal regulatory action, the same can be true of RRI, even in a policy regime in which permissionless innovation is the general default.

If, however, the first instinct among the RRI community is to consider advocates of permissionless innovation nothing more than a bunch of uncaring anarchists, they relinquish the opportunity to work with diverse parties to instill wise guidelines into technological development processes. This would be particularly misguided in an age when the so-called “Pacing Problem”—i.e., the growing gap between the introduction of new technologies and the time it takes laws and regulations to adjust or be formulated in response—has become an ever-accelerating reality, making traditional “hard law” regulatory enactment increasingly difficult. If the RRI community wants to get any of the values that they care about incorporated into technological development processes, then they will need to be open to the idea that perhaps the only way to do so will be through less formal procedures, precisely because law will likely lag so far behind marketplace developments.

Likewise, if the first instinct among the permissionless innovation advocates is to regard the RRI movement as little more than repackaged Ludditism, hell-bent on derailing all the great inventions of the future, then they are foolishly forgoing the chance to work with a diverse group of well-intentioned scholars and stakeholders who could ensure that new products and services gain more widespread acceptance and public trust. More practically, permissionless innovation advocates would be wise to accept the fact that, although technological innovation is generally outpacing the ability of government to keep up, that doesn’t mean most of the traditional regulatory regimes or agencies are going away any time soon. After all, can you name a technocratic law or regulatory body that has been liberalized or eliminated in recent memory? RRI offers a chance to forge a rough peace with agencies and officials who often just want to have a small say in how innovative processes are unfolding. Of course, if regulators seek to have a BIG say in those matters, then policy fights will no doubt ensue. But in my experience, this is less often the case than some defenders of permissionless innovation suggest.

Thus, advocates of permissionless innovation should understand that RRI is not synonymous with a formal precautionary principle-focused policy prescription and that “anticipatory governance” can mean something more generic and beneficial, so long as it does not come to mean the formal application of the precautionary principle as the public policy default.

We Are Already Going Down This Path

Perhaps I am being naïve to think this sort of common ground might exist. But the funny thing is that I know for a fact that it already does! RRI principles have been infusing various multistakeholder processes in the United States for many years now.

For example, here’s a paper I wrote back in 2009 about the various online safety task forces, blue ribbon commissions, and other collaborative efforts that were instilling “safety by design” principles into various online services and digital products. Meanwhile, “privacy by design” and “security by design” efforts are all the rage these days and a wide variety of best practices and codes of conduct have been established to make sure privacy and security values are baked-in to the product design process from the start.

Meanwhile, safety, security, and privacy best practices have increasingly been formulated by the U.S. Department of Commerce (the National Telecommunications and Information Administration in particular), the Federal Trade Commission, FDA, FCC, and the White House Office of Science and Technology Policy. These multistakeholder efforts and agency best practice reports have contained assorted “responsible innovation” principles for technologies as wide-ranging as: big data, artificial intelligence, the Internet of Things, facial recognition, online advertising, mobile phone privacy, mobile apps for kids, driverless cars, commercial drones, genetic testing, medical advertising on social media, 3D printed medical devices, medical device cybersecurity, nanotech, and much more. (I have a forthcoming paper in the works with Ryan Hagemann of the Niskanen Center in which we attempt to document many of these new “soft law” technological governance efforts. There have been so many of these efforts – many of which are still underway – that we are having a hard time cataloging them all!)

I am utterly perplexed why more RRI scholarship has not identified the many ways in which the principles they advocate already infuse multistakeholder processes such as these. Perhaps it is because those scholars feel that some of these multistakeholder processes fail to address the full range of issues or values that they feel are in play. But if you examine recent reports from these agencies and government bodies, I think you will come away quite impressed by the breadth of issues and concerns that they cover. Likewise, the values and best practices they discuss and/or recommend are exactly the sort of responsible innovation principles that the RRI movement cares about.

To some extent, therefore, RRI is already well-entrenched in the technology governance process; it’s just a bit messy. I think some RRI scholars probably fall prey to the old “Goldilocks myth” that we can get these principles just right with enough consideration and oversight. The reality on the ground is that instilling RRI values into the technological design process is a dynamic, iterative, and quite imprecise art.

In closing, there’s still more to the technological governance story that RRI advocates fail to incorporate into their work. To fully appreciate the many ways technological processes are constrained and corrected, they must take into account other governance forces and factors, including the role of:

  • social norms and reputational effects (especially the growing importance of reputational feedback mechanisms);
  • third-party accreditation and standards-setting bodies;
  • courts and common law (including legal solutions like product liability, negligence, design defects law, failure to warn, breach of warranty, and other assorted torts and class action claims);
  • insurance markets as risk calibrators and correctional mechanisms;
  • federal and state consumer protection agencies (such as the FTC), which police “unfair and deceptive practices” and other harms; and
  • media, academic institutions, non-profit advocacy groups, and the general public more generally, all of which can put pressure on technology developers.

Only by taking into account the full range of players and activities at work can we develop a more robust understanding of how technology is actually “governed” in our modern world. I suspect that many in the RRI community of scholars do appreciate these other factors, even though they don’t always account for all of them in their writing and advocacy. Then again, many of those advocates would perhaps decry the more remedial, ex post nature of these governance tools and insist that more ex ante anticipatory planning must be at the heart of technological design and development processes.

In reality, a mix of these two approaches is already at work today and will likely continue to dominate the governance process well into the future. So long as the anticipatory efforts don’t become formal regulatory proposals, there is no reason that this mix of “responsible innovation” governance tools and methods can’t be embraced by a diverse array of scholars and innovators.



]]>
https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible/feed/ 0 76164
Celebrating 20 Years of Internet Free Speech & Free Exchange https://techliberation.com/2017/06/22/celebrating-20-years-of-internet-free-speech-free-exchange/ https://techliberation.com/2017/06/22/celebrating-20-years-of-internet-free-speech-free-exchange/#comments Thu, 22 Jun 2017 14:47:15 +0000 https://techliberation.com/?p=76149

[originally published on Plaintext on June 21, 2017.]

This summer, we celebrate the 20th anniversary of two developments that gave us the modern Internet as we know it. One was a court case that guaranteed online speech would flow freely, without government prior restraints or censorship threats. The other was an official White House framework for digital markets that ensured the free movement of goods and services online.

The result of these two vital policy decisions was an unprecedented explosion of speech freedoms and commercial opportunities that we continue to enjoy the benefits of twenty years later.

While it is easy to take all this for granted today, it is worth remembering that, in the long arc of human history, no technology or medium has more rapidly expanded the range of human liberties — both speech and commercial liberties — than the Internet and digital technologies. But things could have turned out much differently if not for the crucially important policy choices the United States made for the Internet two decades ago.

First, on June 26, 1997, the Supreme Court handed down its landmark decision in Reno v. ACLU, which struck down the Communications Decency Act’s provisions seeking to regulate online content under the old broadcast media standard. The Court concluded that there was “no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium” and rejected the congressional effort to pigeonhole this exciting new medium into the archaic censorship regimes of the past.

The Reno decision was tremendously important in protecting online speakers from the chilling effect of government “indecency” regulations. The decision also set a strong legal precedent and was cited in countless subsequent decisions involving not only online speech, but also efforts to regulate video game content.

Second, in July 1997, the Clinton Administration released The Framework for Global Electronic Commerce, a document that outlined the US government’s new policy approach toward the Internet and the emerging digital economy. The Framework was a bold vision statement that endorsed comprehensive online freedom of exchange, saying that “the private sector should lead [and] the Internet should develop as a market driven arena not a regulated industry.” The Administration rejected a restrictive regulatory regime for commercial activities and instead recommended reliance on civil society, contractual negotiations, voluntary agreements, and industry self-regulation.

To “avoid undue restrictions on electronic commerce,” the vision statement recommended that “parties should be able to enter into legitimate agreements to buy and sell products and services across the Internet with minimal government involvement or intervention.” But, “[w]here governmental involvement is needed, its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce.”

Taken together, the Reno decision and the Clinton Administration’s Framework acted as a Magna Carta moment for the Internet and digital technologies. It signaled that “permissionless innovation” would become America’s governance stance toward online speech and commerce.

As I defined it in a book on the subject, permissionless innovation, “refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.” The primary advantage of permissionless innovation as a governance disposition is that it sends a clear green light to citizens telling them they are at liberty to pursue their own interests and passions, free from the suffocating grip of prior restraints on free speech and free exchange.

But the Reno decision and the Clinton Administration’s Framework are not the only critical policy decisions that helped enshrine permissionless innovation as the lodestar of online policy in the US. In the mid-1990s, the Clinton Administration made the decision to allow open commercialization of the Internet, which was previously just the domain of government agencies and university researchers. Even more crucially, when Congress passed and President Bill Clinton signed into law the Telecommunications Act of 1996, lawmakers made it clear that traditional analog-era communications and media regulatory regimes would generally not be applied to the Internet.

The Telecom Act also included an obscure provision known as “Section 230,” which immunized online intermediaries from onerous liability for the content and communications that traveled over their networks. Section 230 was hugely important in that it let online speech and commerce flourish without the constant threat of frivolous lawsuits looming overhead. Internet scholar David Post has argued that “it is impossible to imagine what the Internet ecosystem would look like today without [Section 230]. Virtually every successful online venture that emerged after 1996 — including all the usual suspects, viz. Google, Facebook, Tumblr, Twitter, Reddit, Craigslist, YouTube, Instagram, eBay, Amazon — relies in large part (or entirely) on content provided by their users, who number in the hundreds of millions, or billions,” he notes. It is unlikely that the vibrant marketplace of online speech and commerce we enjoy today could have existed without the protections afforded by Section 230.

Finally, in 1998, another important legislative development occurred when Congress passed the Internet Tax Freedom Act, which blocked all levels of government in the US from imposing discriminatory taxes on the Internet. That made it clear that the Net would not be milked as a “cash cow” the way previous communications systems had been.

So, let’s recap how policymakers generally got policy right for the Internet in the mid-1990s by enshrining permissionless innovation as the law of the land:

  • The Executive Branch set the tone for online freedom by fully privatizing the underlying network and then establishing a governance vision based upon minimal government interference with online speech and exchange.
  • The Legislative Branch generally endorsed the Clinton Administration’s vision for the Internet and digital technologies by ensuring that new policies would not be based upon the failed regulatory and tax policies of the past.
  • The Judicial Branch upheld the centrality of the First Amendment in the Information Age and made it clear that this new medium for speech would be granted the strongest protection against government encroachments on freedom of speech and expression.

The combined effect of these wise, bipartisan policy decisions was that the Net and digital tech were “born free” instead of being born into regulatory captivity. We continue to enjoy the fruits of these freedoms today as citizens here in the US and across the world take advantage of the unprecedented ability to connect and communicate to pursue their passions and interests as they see fit.

There’s still more work to be done, however. Online platforms and digital technologies continue to come under attack from regulatory activists both here and abroad. Many governments continue to push back against these online speech and commercial freedoms, meaning we’ll need to redouble our efforts to highlight and defend the benefits of preserving these important victories.

Finally, as the underlying drivers of the Digital Revolution continue to spread into other segments of the economy, these freedoms will come into conflict with older top-down regulatory regimes for automobiles, aviation, medical technology, finance, and much more. This will create an epic conflict of governance visions between the Internet’s permissionless innovation model and the precautionary, command-and-control regulatory regimes of the industrial age. We already see this tension at work in policy deliberations over the Internet of Things, “big data,” driverless cars, commercial drones, robotics, artificial intelligence, 3D printing, virtual reality, the sharing economy, and more.

If policymakers hope to preserve and extend the benefits of the hard-fought victories of the Internet’s past twenty years, they will need to restate and reinvigorate their commitment to permissionless innovation to help spur the next great technological revolutions in these and other fields.

]]>
https://techliberation.com/2017/06/22/celebrating-20-years-of-internet-free-speech-free-exchange/feed/ 1 76149
Does “Permissionless Innovation” Even Mean Anything? https://techliberation.com/2017/05/18/does-permissionless-innovation-even-mean-anything/ https://techliberation.com/2017/05/18/does-permissionless-innovation-even-mean-anything/#comments Thu, 18 May 2017 22:49:28 +0000 https://techliberation.com/?p=76143

[Remarks prepared for the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy & Ethics at Arizona State University, Phoenix, AZ, May 18, 2017.]

_________________

What are we to make of this peculiar new term “permissionless innovation,” which has gained increasing currency in modern technology policy discussions? And how much relevance has this notion had—or should it have—on those conversations about the governance of emerging technologies? That’s what I’d like to discuss here today.

Uncertain Origins, Unclear Definitions

I should begin by noting that while I have written a book with the term in the title, I take no credit for coining the phrase “permissionless innovation,” nor have I been able to determine who the first person was to use the term. The phrase is sometimes attributed to Grace M. Hopper, a computer scientist who was a rear admiral in the United States Navy. She once famously noted that, “It’s easier to ask forgiveness than it is to get permission.”

“Hopper’s Law,” as it has come to be known in engineering circles, is probably the most concise articulation of the general notion of “permissionless innovation” that I’ve ever heard, but Hopper does not appear to have ever used the actual phrase anywhere. Moreover, Hopper was not necessarily applying this notion to the realm of technological governance, but was seemingly speaking more generically about the benefit of trying new things without asking for the blessing of any number of unnamed authorities or overseers—which could include businesses, bosses, teachers, or perhaps even government officials.

Today, however, we most often hear the term “permissionless innovation” used in discussions about the governance of information technologies as well as a wide variety of emerging technologies. Unfortunately, scholars and advocates who have suggested that permissionless innovation should serve as the governing lodestar in these areas do not always precisely define what they mean by the term.

None of them seem to be suggesting, however, that permissionless innovation is synonymous with anarchy. To the contrary, many of them are quick to note that governments will continue to have a role to play. It is even rare to see advocates of permissionless innovation in these varied contexts calling for the abolition of any laws, programs, or agencies.

Instead, it seems to be the case that most of those defenders of permissionless innovation are using the term as a sort of shorthand when what they really mean to say is something like: “give innovators a bit more breathing room,” or, “don’t rush to regulate.”

This is consistent with my own articulation of the term, which goes as follows:

“Permissionless innovation refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.”

Default Policy Positions

Framing the term in this fashion makes it clear that, as it pertains to technological governance, permissionless innovation is about setting our public policy defaults closer to green lights rather than red ones.

It switches the burden of proof to the opponents of ongoing technological change by asserting five things:

  • First, technological innovation is the single most important determinant of long-term human well-being.
  • Second, there is real value to learning through continued trial-and-error experimentation, resiliency, and ongoing adaptation to technological change.
  • Third, constraints on new innovation should be the last resort, not the first. Innovation should be innocent until proven guilty.
  • Fourth, as regulatory interventions are considered, policy should be based on evidence of concrete potential harm and not fear of worst-case hypotheticals.
  • Fifth, and finally, where policy interventions are deemed needed, flexible, bottom-up solutions of an ex post (responsive) nature are almost always preferable to rigid, top-down controls of an ex ante (anticipatory) nature.

Shared Shortcomings of Both Visions

At least on the surface, that sort of governance vision stands in stark contrast to the “precautionary principle.” Defenders of the precautionary principle as the default position in technology policy debates generally believe that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

That being said, I’d like to point out some of the shared shortcomings of both of these governance visions.

First, as with attempts to define the parameters of “permissionless innovation,” the precautionary principle is not always as rigid as its critics sometimes suggest. There are as many flavors of the precautionary principle as there are of ice cream. Indeed, this is why many have criticized the precautionary principle not for what it says but rather for what it doesn’t say. It doesn’t tell us exactly how and when to apply precautionary measures, or how to evaluate the trade-offs associated with precaution.

This points to the second and deeper underlying problem faced by advocates of both precautionary measures and permissionless innovation: our collective inability to craft a widely-shared definition of what constitutes “technological harm” in various contexts. This is certainly not to suggest that no attempt has been made to do so. It is simply that we don’t seem to be any closer to concrete agreement about how or where to draw those lines.

Of course, let’s not kid ourselves into thinking that we can find bright-line answers to all these questions. After all, for many of these technological governance issues we are operating in the realm of “Level 3” or “Earth-level” systems, as Professors Allenby and Sarewitz refer to it in their book, The Techno-Human Condition. These are systems in which we deal with, as they say, “a context that is always shifting, and on meanings that are never fixed.”

That makes it even more challenging to define what we mean by “responsible innovation” or “socially desirable innovation” for purposes of determining optimal technology policy.

Risk Analysis through the Lens of Permissionless Innovation

For me, there are no easy ways out of this mess. But I do know two things for certain.

First, we must continue to refine and improve our risk analysis tools and techniques to make better determinations of when proposed interventions are sensible and cost-effective relative to the many trade-offs at work.

Again, I recognize the challenge of doing this when many of the issues and values in play are amorphous and metaphysical conflicts exist about how to even define some of these things. Most of the emerging technology policy issues I write about today, for example, involve some sort of privacy, safety, or security concern. In each case, however, very little consensus exists about what those terms even mean in varied contexts.

Nonetheless, the fact that benefit-cost analysis is hard should not serve as an excuse for failing to go through the exercise of attempting some sort of valuation of the many variables in play.

Soft Law Alternatives

The second thing I know for certain is that, due to the combination of definitional complexity regarding what constitutes technological harm and the ever-accelerating pace of technological change (the so-called “pacing problem”), all roads lead back to soft law solutions instead of hard law remedies.

Last year, I had the pleasure of reading and reviewing Wendell Wallach’s new book and then having a nice conversation with him about it at Microsoft’s DC headquarters. The most interesting thing about our exchange was that, although we do not begin in the same place philosophically-speaking, we largely end up in the same place practically-speaking.

That is, there seemed to be some grudging acceptance on both our parts that “soft law” systems, multistakeholder processes, and various other informal governance mechanisms will need to fill the governance gap left by the gradual erosion of hard law.

Many other scholars, including many of you in this room, have discussed the growth of soft law mechanisms in specific contexts, but I believe we have probably failed to acknowledge the extent to which these informal governance models have already become the dominant form of technological governance, at least in the United States.

I’m currently co-authoring a very long study which documents how the Obama Administration came to rely quite heavily on multistakeholder processes, negotiated “best practices,” and industry codes of conduct as the primary governance mechanisms for a long list of emerging tech issues, including: driverless cars, commercial drones, big data, facial recognition, the Internet of Things and wearable technology, mobile medical applications, 3D printing, artificial intelligence, the Sharing Economy, and much more.

Most of these soft law processes were driven by the NTIA and FTC, but plenty of other agencies with an “N” or an “F” at the beginning of their name have undertaken some sort of soft law process, including NHTSA, the FDA, the FAA, and so on.

Now, I’m willing to bet that many of those involved in these processes who generally favor more anticipatory regulatory approaches would have preferred to start with hard law solutions to some of these issues. And I am equally certain that many of the innovators involved in those multistakeholder processes would have probably preferred not to have had to come to the table at all.

But at the end of the day, for the most part, all sides did come to the table and worked together in a good faith effort to find some rough consensus about what sort of informal guidelines would govern the future of innovation in these sectors.

The Worst of All Systems, Except All the Others

Plenty of questions remain about such soft law systems, and the irony is that defenders of both permissionless innovation and the precautionary principle will quite often be raising very similar concerns regarding the transparency, accountability, and enforceability of these systems.

But I’m inclined to believe that no matter where you sit on the permissionless vs. precautionary spectrum, and no matter what your reservations may be about the new world of soft law governance that we find ourselves moving into, this is the future and the future is now.

Much as Churchill said of democracy being “the worst form of Government except for all those other forms that have been tried from time to time,” I think we are well on our way to a world in which soft law is the worst form of technological governance except for all those others that have been tried before.

Of course, the devil is always in the details, and I suspect that we’ll have plenty to discuss and debate in that regard. Let’s get that conversation going.

]]>
https://techliberation.com/2017/05/18/does-permissionless-innovation-even-mean-anything/feed/ 4 76143
Innovation Policy at the Mercatus Center: The Shape of Things to Come https://techliberation.com/2017/04/11/innovation-policy-at-the-mercatus-center-the-shape-of-things-to-come/ https://techliberation.com/2017/04/11/innovation-policy-at-the-mercatus-center-the-shape-of-things-to-come/#respond Tue, 11 Apr 2017 15:11:40 +0000 https://techliberation.com/?p=76133

Written with Christopher Koopman and Brent Skorup (originally published on Medium on 4/10/17)

Innovation isn’t just about the latest gee-whiz gizmos and gadgets. That’s all nice, but something far more profound is at stake: Innovation is the single most important determinant of long-term human well-being. There exists widespread consensus among historians, economists, political scientists and other scholars that technological innovation is the linchpin of expanded economic growth, opportunity, choice, mobility, and human flourishing more generally. It is the ongoing search for new and better ways of doing things that drives human learning and prosperity in every sense — economic, social, and cultural.

As the Industrial Revolution revealed, leaps in economic and human growth cannot be planned. They arise from societies that reward risk takers and legal systems that accommodate change. Our ability to achieve progress is directly proportional to our willingness to embrace and benefit from technological innovation, and it is a direct result of getting public policies right.

The United States is uniquely positioned to lead the world into the next era of global technological advancement and wealth creation. That’s why we and our colleagues at the Technology Policy Program at the Mercatus Center at George Mason University devote so much time and energy to defending the importance of innovation and countering threats to it. Unfortunately, those threats continue to multiply as fast as new technologies emerge.

Indeed, it isn’t easy keeping on top of all of these issues and threats because the only constant in the world of innovation policy — the study of technological change and its impact on social, economic, and political systems — is constant change. You go to sleep one night thinking you’ve got the world figured out, only to awake the next morning to see that another tectonic shift has reshaped the landscape.

In the industrial era, it was hard enough mapping the contours of this field of academic study. This task has grown far more challenging. Computing and Internet-enabled innovations have fundamentally reshaped society and have also helped spawn other technological revolutions in diverse fields such as robotics, autonomous systems, artificial intelligence, big data, the Sharing Economy, 3D printing, virtual reality, aviation, advanced medical technology, blockchain and Bitcoin, and the so-called Internet of Things.

The short-term social and economic disruptions caused by these and other new technologies often lead to backlashes and even occasional “techno-panics.” When those panics bubble over into the political arena, the risk is that misguided regulatory policies will short-circuit opportunities for creators and entrepreneurs to pursue life-enriching innovations.

At the Mercatus Center, where we study these and other topics, our goal is to bring greater focus to these emerging technologies and the many different facets of innovation policy surrounding them. How we accomplish these goals is as challenging as it is exciting. As more and more industries and businesses are affected by these emerging technologies, the decisions that policymakers make about them will have profound effects on large parts of our economy and society.

Specifically, as we place ourselves at the forefront of these debates, our aim is to:

  • Explore how innovation policy affects economic growth and mobility, consumer welfare, and global competitive advantage;
  • Identify barriers to entrepreneurial endeavors and devise a roadmap for how to remove them;
  • Push back against technopanics and overly-broad theories of “technological harm” that could limit innovation opportunities and greater consumer choice; and
  • Confront the legal and ethical concerns surrounding emerging technologies and craft constructive solutions to those problems to avoid solutions of the top-down, “command-and-control” variety.

Overall, our vision is simple: Permissionless innovation must become the norm rather than the exception. This means innovation and innovators are protected against efforts to preemptively control ongoing trial-and-error experimentation. We should let creative minds and empowered entrepreneurs experiment with new and better ways of doing things. It also means that the future of public policy should be rooted in fact-based analysis and not shaped by outlandish fears of hypothetical worst-case scenarios.

Going forward, you will continue to see Mercatus producing research applying permissionless innovation across a host of areas. You can also expect us to begin pursuing big questions about the future.

What if we could reduce the number of deaths on US roadways from 96 people per day to zero? What if we could double life expectancy? Triple it? Wouldn’t it be nice if we could travel from New York to London in three hours? New York to Los Angeles in 2.5 hours? What if we welcomed automation instead of fearing its effects on the workforce? What if we could remove the technical and political barriers keeping us from going to Mars and then beyond it? And so on.

We pose these questions not merely because they are intellectually interesting and important, but also because we hope to make the case for embracing the future with a sense of wonder and optimism about how technological advancement can radically improve human well-being in both the short- and long-run.

It isn’t enough to simply point out where innovators and entrepreneurs are being hindered. It isn’t enough to simply tell people that the future will be bright. We must explain, in real terms, how hindering innovation opportunities undermines our collective ability to constantly improve the human condition.

And because there is a symbiotic relationship between freedom and progress, we must defend our collective ability as a society to achieve very concrete, widely-shared advances in well-being through a general freedom to experiment with new technologies and better ways of doing things.

That is our vision for the Technology Policy Program at the Mercatus Center and we hope it is one that the public and public policymakers will embrace going forward.

]]>
https://techliberation.com/2017/04/11/innovation-policy-at-the-mercatus-center-the-shape-of-things-to-come/feed/ 0 76133
Innovation Arbitrage, Technological Civil Disobedience & Spontaneous Deregulation https://techliberation.com/2016/12/05/innovation-arbitrage-technological-civil-disobedience-spontaneous-deregulation/ https://techliberation.com/2016/12/05/innovation-arbitrage-technological-civil-disobedience-spontaneous-deregulation/#comments Mon, 05 Dec 2016 20:06:53 +0000 https://techliberation.com/?p=76096

The future of emerging technology policy will be influenced increasingly by the interplay of three interrelated trends: “innovation arbitrage,” “technological civil disobedience,” and “spontaneous private deregulation.” Those terms can be briefly defined as follows:

  • “Innovation arbitrage” refers to the idea that innovators can, and will with increasing regularity, move to those jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity. Just as capital now fluidly moves around the globe seeking out more friendly regulatory treatment, the same is increasingly true for innovations. And this will also play out domestically as innovators seek to play state and local governments off each other in search of some sort of competitive advantage.
  • “Technological civil disobedience” represents the refusal of innovators (individuals, groups, or even corporations) or consumers to obey technology-specific laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant. New technological devices and platforms are making it easier than ever for the public to openly defy (or perhaps just ignore) rules that limit their freedom to create or use modern technologies.
  • “Spontaneous private deregulation” can be thought of as the de facto rather than de jure elimination of traditional laws and regulations, owing to a combination of rapid technological change as well as the potential threat of innovation arbitrage and technological civil disobedience. In other words, many laws and regulations aren’t being formally removed from the books, but they are being made largely irrelevant by some combination of those factors. “Benign or otherwise, spontaneous deregulation is happening increasingly rapidly and in ever more industries,” noted Benjamin Edelman and Damien Geradin in a Harvard Business Review article on the phenomenon.[1]

I have previously documented examples of these trends in action for technology sectors as varied as drones, driverless cars, genetic testing, Bitcoin, and the sharing economy. (For example, on the theme of global innovation arbitrage, see all these various essays. And on the growth of technological civil disobedience, see, “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” and “Quick Thoughts on FAA’s Proposed Drone Registration System.” I also discuss some of these issues in the second edition of my Permissionless Innovation book.)

In this essay, I want to briefly highlight how, over the course of just the past month, a single company has offered us a powerful example of how both global innovation arbitrage and technological civil disobedience— or at least the threat thereof—might become a more prevalent feature of discussions about the governance of emerging technologies. And, in the process, that could lead to at least the partial spontaneous deregulation of certain sectors or technologies. Finally, I will discuss how this might affect technological governance more generally and accelerate the movement toward so-called “soft law” governance mechanisms as an alternative to traditional regulatory approaches.

Comma.ai Case Study, Part 1: The Innovation Arbitrage Threat

The company I want to highlight is Comma.ai, a start-up that had hoped to sell a $999 after-market kit for vehicles called the “Comma One,” which “would give average, everyday cars autonomous functionality.”[2] Created by famed hacker George Hotz, who as a teenager gained notoriety for being the first person to unlock an iPhone in 2007, the Comma One represents an attempt to create autonomous vehicle tech “on the cheap” by using off-the-shelf cameras and GPS technology combined with a healthy dose of artificial intelligence technology.


But regulators at the National Highway Traffic Safety Administration (NHTSA), the federal agency responsible for road safety and automobile regulation, were none too happy to hear about Hotz’s plan to unleash his technology into the wild without first getting their blessing. On October 27, the agency fired off a nastygram to Hotz saying: “We are concerned that your product would put the safety of your customers and other road users at risk. We strongly encourage you to delay selling or deploying your product on the public roadways unless and until you can ensure it is safe.”

Hotz responded on Twitter promptly and angrily. After posting the full NHTSA letter, he said, “First time I hear from them and they open with threats. No attempt at a dialog.” In a follow-up tweet, he said, “Would much rather spend my life building amazing tech than dealing with regulators and lawyers. It isn’t worth it.” And then he announced that, “The comma one is cancelled. comma.ai will be exploring other products and markets. Hello from Shenzhen, China.” A flood of news articles followed about Hotz’s threat to engage in this sort of global innovation arbitrage by bolting US shores.[3]

Incidentally, what Hotz and Comma.ai were proposing to do with Comma One—i.e., deploy autonomous vehicle tech into the wild without prior regulatory approval—was recently done by Otto, a developer of autonomous trucking technology. As Mark Harris reported on Backchannel:

When Otto performed its test drive — the one shown in the May video — it did so despite a clear warning from Nevada’s Department of Motor Vehicles (DMV) that it would be violating the state’s autonomous vehicle regulations. When the DMV realized that Otto had gone ahead anyway, one official called the drive “illegal” and even threatened to shut down the agency’s autonomous vehicle program.[4]

While Nevada regulators were busy firing off angry letters, Otto was busy doing even more testing in other states (like Ohio), which are eager to make their jurisdictions a testbed for autonomous vehicle innovation.[5] In fact, just recently, Ohio Gov. John Kasich announced the creation of the “Smart Mobility Corridor,” which, according to the Dayton Daily News, will be “a 35-mile stretch of U.S. 33 in central Ohio that runs through Logan County. Officials say that section of U.S. 33 will become a corridor where technologies can be safely tested in real-life traffic, aided by a fiber-optic cable network and sensor systems slated for installation next year.”[6]


This is an example of how innovation arbitrage will increasingly take root domestically as well as abroad, and some states (or countries) will use inducements in an effort to lure innovators to their jurisdictions.

Anyway, let’s get back to the Comma One case study. I don’t want to get too sidetracked regarding the merits of the concerns raised by NHTSA in its letter to Hotz and the implications of the agency’s threats for innovation in this space. But EFF board member Brad Templeton did a nice job addressing that issue in an essay about NHTSA’s letter that threatened Comma. As Templeton observed:

I will presume the regulators will say, “We only want to scare away dangerous innovation” but the hard truth is that is a very difficult thing to judge. All innovation in this space is going to be a bit dangerous. It’s all there trying to take the car — the 2nd most dangerous legal consumer product — and make it safer, but it starts from a place of danger. We are not going to get to safety without taking risks along the way.[7]

This gets to the very real trade-offs in play in the debate over driverless car technology and its regulation. In fact, my Mercatus Center colleague Caleb Watney and I recently filed comments [8] with NHTSA addressing the agency’s recently proposed “Federal Automated Vehicles Policy.”[9] We stressed the potentially deleterious implications of prior regulatory restraints on autonomous vehicle innovation by pointing to the horrific real-world baseline we live with today: over 35,000 people died on US roadways in 2015 (roughly 96 people per day), and 94 percent of those crashes were attributable to human error.

Caleb and I noted that, by imposing new preemptive constraints on the coding of superior autonomous driving technology, “NHTSA’s proposed policy for automated vehicles may inadvertently increase the number of total automobile fatalities by delaying the rapid development and diffusion of this life-saving technology.” Needless to say, if that comes to pass, it would be a disaster because “automation on the roads could be the great public-health achievement of the 21st century.”[10]

In our filing, Caleb and I estimated that, “If NHTSA’s proposed premarket approval process slows the deployment of HAVs by 5 percent, we project an additional 15,500 fatalities over the course of the next 31 years. At 10 percent regulatory delay, we project an additional 34,600 fatalities over 33 years. And at 25 percent regulatory delay, we project an additional 112,400 fatalities over 40 years.”[11]

So, needless to say, this is a very big deal.
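To make the logic of such projections concrete, here is a purely illustrative toy model in Python. It is not the model from the Mercatus filing: the logistic adoption curve and its midpoint and steepness parameters are hypothetical placeholders I have chosen for illustration, and only the 35,000-deaths baseline and the 94 percent human-error figure come from the discussion above.

```python
# Toy sketch of how regulatory delay can translate into additional road
# fatalities. All adoption-curve parameters are hypothetical, not taken
# from the Mercatus filing.
import math

BASELINE_DEATHS_PER_YEAR = 35_000  # approximate 2015 US roadway deaths
HAV_EFFECTIVENESS = 0.94           # share of fatal crashes attributable to human error

def adoption(year, midpoint=15.0, steepness=0.4):
    """Hypothetical logistic HAV fleet-adoption share in a given year."""
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

def cumulative_deaths(years, delay_factor=0.0):
    """Projected fatalities over a horizon. A delay_factor of 0.05 means
    adoption in year t looks like undelayed adoption in year t * 0.95."""
    total = 0.0
    for year in range(years):
        share = adoption(year * (1.0 - delay_factor))
        total += BASELINE_DEATHS_PER_YEAR * (1.0 - HAV_EFFECTIVENESS * share)
    return total

extra = cumulative_deaths(31, delay_factor=0.05) - cumulative_deaths(31)
print(f"Extra fatalities from a 5% delay over 31 years (toy model): {extra:,.0f}")
```

Under these assumptions, any delay raises cumulative fatalities because the fleet spends more years at low HAV penetration; the filing’s specific numbers depend on its own adoption and effectiveness assumptions, which this sketch does not reproduce.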

But let’s ignore all those potential foregone benefits for the moment and just stick with the question of whether Hotz’s threat to engage in a bit of global innovation arbitrage (by moving to China or somewhere else) could work, or at least affect policy in some fashion. I think it absolutely could be an effective threat both because (a) policymakers really do want to do everything they can to achieve greater road safety, and (b) the auto sector remains a hugely important industry for the United States, and one that policymakers will want to do everything in their power to retain on our shores.

Moreover, Templeton observes that “Comma is not the only company trying to build a system with pure neural networks doing the actual steering decisions.” Even if NHTSA succeeds in bringing Comma to heel, there will be others who will follow in its footsteps. It might be a firm like Otto, but there are many other players in this space today, including big dogs like Tesla and Google. If ever there was a truly global technology industry, it is the automotive sector. Autonomous vehicle innovation could take root and blossom in almost any country in the world, and many countries will be waiting with open arms if America screws up its regulatory process.

As Templeton concludes:

The USA and California led the way in robocars in part because it was unregulated. In the USA, everything is permitted unless it was explicitly forbidden and nobody thought to write “no robots” in the laws. Progress in other countries where everything is forbidden unless it is permitted was much slower. The USA is moving in the wrong direction.[12]

Comma.ai Case Study, Part 2: The Technological Civil Disobedience Threat

But an interesting thing happened on the way to Comma’s threatened exodus. On November 30, the firm announced that it would now be open sourcing the code for its autonomous vehicle technology. Reporters at The Verge noted that, during a press conference:

Hotz said that Comma.ai decided to go open source in an effort to sidestep NHTSA as well as the California DMV, the latter of which he said showed up to his house on three separate occasions. “NHTSA only regulates physical products that are sold,” Hotz said. “They do not regulate open source software, which is a whole lot more like speech.” He went on to say that “if the US government doesn’t like this [project], I’m sure there are plenty of countries that will.”[13]

So here we see Hotz combining the threat of still potentially taking the project offshore (i.e., global innovation arbitrage) with the suggestion that by open-sourcing the code for Comma One he might be able to get around the law altogether. We might consider that an indirect form of technological civil disobedience.


Incidentally, Hotz may not be aware of the fact that NHTSA is in the process of making a power-play to become a driverless car code cop. While Hotz is technically correct that, under current law, NHTSA officials “do not regulate open source software, which is a whole lot more like speech,” NHTSA’s recent Federal Automated Vehicles Policy claimed that the agency “has authority to regulate the safety of software changes provided by manufacturers after a vehicle’s first sale to a consumer” while also suggesting that the agency “may need to develop additional regulatory tools and rules to regulate the certification and compliance verification of such post-sale software updates.”[14]

Needless to say, this proposal has important ramifications not only for Comma, but for all other firms in this sector. Consider the implications for Tesla’s “autopilot” mode, which is really little more than a string of constantly-evolving code it pushes out to offer greater and greater autonomous driving functionality. How would that iterative process work if every time Tesla wanted to make a little tweak to its code it had to run to Washington and file paperwork with NHTSA petitioning for permission to experiment and improve its systems? And then think about all the smaller innovators out there who want to be the next Elon Musk or George Hotz but do not yet have the resources or political connections in Washington to even go through this complex and costly process.

In any event, I have no idea whether Hotz or Comma.ai will follow through with any of these threats or succeed in doing so. It may be that he is just blowing smoke and that he and his firm will end up staying in the U.S., perhaps even later reversing course on the decision to open source the Comma code. But to the extent that innovators like Hotz even hint that they might leave the country or open source their code to avoid burdensome regulatory regimes, it can have an influence on future policy decisions. Or at least it should.

New Tech Realities & Their Policy Implications

Indeed, the increasing prevalence of global innovation arbitrage and technological civil disobedience raise some interesting issues for the governance of emerging technologies going forward. The traditional regulatory stance toward many existing sectors and technologies will be challenged by these realities. That’s because most of those traditional regulatory systems are highly precautionary, preemptive, and prophylactic in character. They generally opt for policy solutions that are top-down, overly rigid, and bureaucratic.

This results in a slow-moving and sometimes completely stagnant regulatory approval process that can stop innovation dead in its tracks, or at least delay it for many years. Such systems send innovators a clear message: You are guilty until proven innocent and must receive some bureaucrat’s blessing before you can move forward.

Of course, in the past, many innovators (especially smaller scale entrepreneurs) really couldn’t do much to avoid similar regulatory systems where they existed. You either fell into line, or else! It wasn’t always clear what “or else!” would entail, but it could range from being denied a permit/license to operate, waiting months or years for rules to emerge, dealing with fines or other penalties, or some combination of all those things. Or perhaps you would just give up on your innovative idea altogether and exit the market.

But the world has changed in some important ways in recent years. Many of the underlying drivers of the digital revolution—massive increases in processing power, exploding storage capacity, steady miniaturization of computing, ubiquitous communications and networking capabilities, the digitization of all data, and more—are beginning to have a profound impact beyond the confines of cyberspace.[15] As venture capitalist Marc Andreessen explained in a widely read 2011 essay about how “software is eating the world”:

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defense. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures. Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not. Why is this happening now? Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.[16]

We can add to this list of new realities the more general problem of technology accelerating at an unprecedented pace. This is what philosophers of technology call the “pacing problem.” In his new book, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control, Wendell Wallach concisely defined the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” “There has always been a pacing problem,” Wallach correctly observed, but like other philosophers, he believes that modern technological innovation is accelerating much faster than it did in the past.[17]

What are the ramifications of all this for policy? As technology lawyer and consultant Larry Downes has noted, lawmaking in the information age is now inexorably governed by the “law of disruption” or the fact that “technology changes exponentially, but social, economic, and legal systems change incrementally.”[18] This law is “a simple but unavoidable principle of modern life,” he said, and it will have profound implications for the way businesses, government, and culture evolve. “As the gap between the old world and the new gets wider,” he argues, “conflicts between social, economic, political, and legal systems” will intensify and “nothing can stop the chaos that will follow.”[19]


The end result of the “law of disruption” and a world relentlessly governed by the ever-accelerating “pacing problem” is that it will be harder than ever to effectively control emerging technologies using traditional legal and regulatory systems and mechanisms. And this makes it even more likely that the related threats of global innovation arbitrage and various forms of technological civil disobedience will become more regular fixtures in debates about many emerging technologies.

New Governance Models

How one reacts to these new realities will depend upon one’s philosophical disposition toward innovative activities more generally.

Consider first those adhering to a more “precautionary principle” mindset, which I have defined in my recent book as those who believe “that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.”[20]

Needless to say, the precautionary principle crowd will be dismayed by these new trends and perhaps even decry them as “lawlessness.” Some of these folks seem to be in denial about these new realities and pretend that nothing much has changed. Yet, I have found that most precautionary principle-oriented advocates, and even many regulatory agencies themselves, tend to acknowledge these new realities. But they remain very uncertain about how best to respond to them, often just suggesting that we’ll all need to try harder to impose new and better regulations on a more expedited or streamlined basis.

Of course, those of us who generally embrace the alternative policy vision for technological governance—“permissionless innovation”—are going to be more accepting of the new technological realities I have described, and we will perhaps even work to defend and encourage them. But while I count myself among this crowd, we cannot ignore the fact that many serious challenges will arise when innovation outpaces law or can easily evade it.

There is some middle ground here, although it is very messy middle ground.

The era of technocratic, top-down, one-size-fits-all regulatory regimes is fading, or at least being severely strained. We will instead need to craft policies going forward that are bottom-up, flexible, adaptive, and evolutionary in character.

What that means in practice is that a lot more “soft law” and informal governance mechanisms will become the new norm. I wrote about this new policy environment in my recent essay, “DOT’s Driverless Cars Guidance: Will ‘Agency Threats’ Rule the Future?” as well as in a lengthy review of Wendell Wallach’s latest book about technology ethics. Along with Gary Marchant of the Arizona State University law school, Wallach recently published an excellent book chapter on “Governing the Governance of Emerging Technologies,” which discusses soft law mechanisms including “codes of conduct, statements of principles, partnership programs, voluntary programs and standards, certifications programs and private industry initiatives.”[21]

Their chapter appears in an important collection of essays that Gary Marchant edited with Kenneth W. Abbott and Braden Allenby entitled, Innovative Governance Models for Emerging Technologies.


What is interesting about the chapters in that book is that a seemingly widespread consensus now exists among experts in this field that some combination of these soft law mechanisms is likely to become the primary mode of technological governance for the indefinite future. This is because, as Marc A. Saner points out in a different chapter of that book, “the control paradigm is too limited to address all the issues that arise in the context of emerging technologies.”[22] By the control paradigm, he generally means traditional administrative regulatory agencies and processes. He and the other contributors to the book all seem to agree that the control paradigm “has its limits when diffusion, pacing and ethical issues associated with emerging technologies become significant, as is often the case.”[23]

And so the traditional command-and-control ways will gradually give way to a new paradigm for emerging technology governance. In fact, as I noted in my recent essay on driverless cars, we see this happening quite a bit already. “Multistakeholder processes” are already all the rage in the world of emerging technologies and their governance. In recent years, we have seen the White House and various agencies (such as the FTC, NTIA, FDA, and others) craft multistakeholder agreements or best practice guidance documents for technologies as far ranging as:

  • Drones & privacy
  • Sharing economy
  • Internet of Things
  • Driverless cars
  • Big data
  • Artificial intelligence
  • Cross-device tracking
  • Native advertising
  • Online data collection
  • Mobile app transparency and security
  • Mobile apps for kids
  • Mobile medical apps
  • Online health advertising
  • 3D printing
  • Facial recognition

And that list is not comprehensive. I know I am missing other multistakeholder efforts, best practices, or industry guidance documents that have been crafted in recent years.

Of course, many challenging issues need to be sorted out here, most notably: how transparent and accountable will these soft law systems be in practice? How will they be enforced? And what will happen to all those existing laws, regs, and agencies that will continue to exist? More generally, it is worth asking whether we can more closely study these various multistakeholder arrangements and soft law governance mechanisms and determine if there are certain principles or strategies that could be applicable across a wide class of technologies and sectors. In other words, can we do a better job of “formalizing the informal,” without falling right back into the trap of trying to impose rules in a rigid, top-down, one-size-fits-all fashion?

Conclusion

Those are just a few of the hard questions we will need to consider going forward. For now, however, I think it is safe to conclude that we will no longer see much “law” being made for emerging technologies, at least not in the traditional sense of the term. Thanks to the new technological realities I have described here—and the relentless reality of the “pacing problem” more generally—I believe we are witnessing a wide-ranging and quite profound transformation in how technology is governed in our modern world. And I believe this movement away from traditional “hard law” and toward “soft law” governance mechanisms is likely to accelerate due to the increasing prevalence of innovation arbitrage, technological civil disobedience, and spontaneous private deregulation.

The ramifications of this transformation will be studied by philosophers, legal theorists, and political scientists for many decades to come. But we are still in the early years of this momentous transformation in technological governance and we will continue to struggle to figure out how to make it all work, as messy as it all may be.


[ Note: This essay is condensed from a manuscript I have been working on about The Rise of Technological Civil Disobedience. I’m not sure I will ever get around to finishing it, however, so I thought I would at least post this piece for now. In a subsequent essay, which is also part of that draft manuscript, I hope to discuss how this process might play out for technologies that are “born free” versus those that are “born in captivity.” That is, how likely is it that the trends I discuss here will take hold for technologies that have no pre-existing laws or agencies, while other technologies that are born into a regulatory environment are potentially doomed to be pigeonholed into those old regulatory regimes? What are the chances that the latter technologies can escape captivity and gain the freedom the other technologies already enjoy? How might technology-enabled “spontaneous private deregulation” be accelerated for those sectors? Is that always desirable? Again, I will leave these questions for another day. Scholars and students who are interested in these topics can feel free to contact me if they are interested in discussing them as well as potential paper ideas. Regardless of how you feel about these trends, these issues are ripe for intellectual exploration.]

[1]     Benjamin Edelman and Damien Geradin, “Spontaneous Deregulation,” Harvard Business Review, April 2016, https://hbr.org/2016/04/spontaneous-deregulation.

[2]     Megan Geuss, “After mothballing Comma One, George Hotz releases free autonomous car software,” Ars Technica, November 30, 2016, http://arstechnica.com/cars/2016/11/after-mothballing-comma-one-george-hotz-releases-free-autonomous-car-software.

[3]     See: “NHTSA Scared This Self-Driving Entrepreneur Off the Road,” Bloomberg Technology, October 28, 2016, https://www.bloomberg.com/news/articles/2016-10-28/nhtsa-scared-this-self-driving-entrepreneur-off-the-road; Sean O’Kane, “George Hotz cancels his self-driving car project after NHTSA expresses concern,” The Verge, October 28, 2016, http://www.theverge.com/2016/10/28/13453344/comma-ai-self-driving-car-comma-one-kit-canceled; Brad Templeton, “Comma.ai cancels comma-one add-on box after threats from NHTSA,” Robohub, October 31, 2016, http://robohub.org/comma-ai-cancels-comma-one-add-on-box-after-threats-from-nhtsa.

[4]     Mark Harris, “How Otto Defied Nevada and Scored a $680 Million Payout from Uber,” Backchannel, November 28, 2016, https://backchannel.com/how-otto-defied-nevada-and-scored-a-680-million-payout-from-uber-496aa07f5ba2#.9rmtb29bl.

[5]     Larry E. Hall, “Otto Self-Driving Truck Tests in Ohio; Violated Nevada Regulations,” Hybrid Cars, November 29, 2016, http://www.hybridcars.com/otto-self-driving-truck-tests-in-ohio-violated-nevada-regulations.

[6]     Kara Driscoll, “Ohio to create ‘smart’ road for driverless trucks,” Dayton Daily News, November 30, 2016, http://www.daytondailynews.com/business/ohio-create-smart-road-for-driverless-trucks/25qC7uYjz9rE96q6YFVUUK.

[7]     Brad Templeton, “Comma.ai cancels comma-one add-on box after threats from NHTSA,” Robohub, October 31, 2016, http://robohub.org/comma-ai-cancels-comma-one-add-on-box-after-threats-from-nhtsa/

[8]     Adam Thierer and Caleb Watney, “Comment on the Federal Automated Vehicles Policy,” November 22, 2016, https://www.researchgate.net/publication/311065194_Comment_on_the_Federal_Automated_Vehicles_Policy.

[9]     National Highway Traffic Safety Administration (NHTSA), Federal Automated Vehicles Policy, September 2016.

[10]   Adrienne LaFrance, “Self-Driving Cars Could Save 300,000 Lives per Decade in America,” Atlantic, September 29, 2015.

[11]   Adam Thierer and Caleb Watney, “Comment on the Federal Automated Vehicles Policy,” November 22, 2016, https://www.researchgate.net/publication/311065194_Comment_on_the_Federal_Automated_Vehicles_Policy.

[12]   Templeton.

[13]   Sean O’Kane and Lauren Goode, “George Hotz is giving away the code behind his self-driving car project,” The Verge, November 30, 2016, http://www.theverge.com/2016/11/30/13779336/comma-ai-autopilot-canceled-autonomous-car-software-free.

[14]   NHTSA, Federal Automated Vehicles Policy, 76.

[15]   Adam Thierer, Jerry Brito, and Eli Dourado, “Technology Policy: A Look Ahead,” Technology Liberation Front, May 12, 2014, http://techliberation.com/2014/05/12/technology-policy-a-look-ahead.

[16]   Marc Andreessen, “Why Software Is Eating the World,” Wall Street Journal, August 20, 2011, http://www.wsj.com/articles/SB10001424053111903480904576512250915629460.

[17]   Wendell Wallach, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control (New York: Basic Books, 2015), 60.

[18]   Larry Downes, The Laws of Disruption: Harnessing the New Forces That Govern Life and Business in the Digital Age 2 (2009).

[19]   Id.

[20]   Thierer, Permissionless Innovation, at 1.

[21]   Gary E. Marchant and Wendell Wallach, “Governing the Governance of Emerging Technologies,” in Gary E. Marchant, Kenneth W. Abbott & Braden Allenby (eds.), Innovative Governance Models for Emerging Technologies (Cheltenham, UK: Edward Elgar, 2013), 136.

[22]   Marc A. Saner,  “The Role of Adaptation in the Governance of Emerging Technologies,” in Gary E. Marchant, Kenneth W. Abbott & Braden Allenby (eds.), Innovative Governance Models for Emerging Technologies (Cheltenham, UK: Edward Elgar, 2013), 106.

[23]   Ibid., at 94.

Permissionless Innovation & Cybersecurity: Are They Compatible?
https://techliberation.com/2016/03/09/permissionless-innovation-cybersecurity-are-they-compatible/
Wed, 09 Mar 2016

[This is an excerpt from Chapter 6 of the forthcoming 2nd edition of my book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” due out later this month. I was presenting on these issues at today’s New America Foundation “Cybersecurity for a New America” event, so I thought I would post this now.  To learn more about the contrast between “permissionless innovation” and “precautionary principle” thinking, please consult the earlier edition of my book or see this blog post.]



Viruses, malware, spam, data breaches, and critical system intrusions are just some of the security-related concerns that often motivate precautionary thinking and policy proposals.[1] But as with privacy- and safety-related worries, the panicky rhetoric surrounding these issues is usually unfocused and counterproductive.

In today’s cybersecurity debates, for example, it is not uncommon to hear frequent allusions to the potential for a “digital Pearl Harbor,”[2] a “cyber cold war,”[3] or even a “cyber 9/11.”[4] These analogies are made even though these historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” or technological “time bombs,” even though no one can be “bombed” with binary code.[5] Michael McConnell, a former director of national intelligence, went so far as to say that this “threat is so intrusive, it’s so serious, it could literally suck the life’s blood out of this country.”[6]

Such outrageous statements reflect the frequent use of “threat inflation” rhetoric in debates about online security.[7] Threat inflation has been defined as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.”[8] Unfortunately, such bombastic rhetoric often conflates minor cybersecurity risks with major ones. For example, dramatic doomsday stories about hackers pushing planes out of the sky misdirect policymakers’ attention from the more immediate, but less gripping, risks of data extraction and foreign surveillance. Well-meaning skeptics might then conclude that our real cybersecurity risks are also not a problem. In the meantime, outdated legislation and inappropriate legal norms continue to impede beneficial defensive measures that could truly improve security.

Meanwhile, similar concerns have already been raised about security vulnerabilities associated with the Internet of Things[9] and driverless cars.[10] Legislation has already been floated to address the latter concern through federal certification standards.[11] More broad-based cybersecurity legislative proposals have also emerged, most notably the Cybersecurity Information Sharing Act, which would extend legal immunity to corporations that share customer data with intelligence agencies.[12]

Ironically, these efforts to expand federal cybersecurity authority come before the federal government has even gotten its own house in order. According to a recent report, federal information security failures had increased by an astounding 1,169 percent, from 5,503 in fiscal year 2006 to 69,851 in fiscal year 2014.[13] Of course, many of these same agencies would be tasked with securing the massive new datasets containing personally identifiable details about US citizens’ online activities that legislation like the Cybersecurity Information Sharing Act would authorize. In the worst-case scenario, such federal data storage could counterintuitively encourage more attacks on government systems.

It’s important to put all these security issues in some context and to realize that proposed legal remedies are often inappropriate to address online security concerns and sometimes end up backfiring. In his research on the digital security marketplace, my Mercatus Center colleague Eli Dourado has illustrated how we are already able to achieve “Internet Security without Law.”[14] Dourado documented the many informal institutions that enforce network security norms on the Internet to show how cooperation among a remarkably varied set of actors improves online security without extensive regulation or punishing legal liability. “These informal institutions carry out the functions of a formal legal system—they establish and enforce rules for the prevention, punishment, and redress of cybersecurity-related harms,” Dourado says.[15]

For example, a diverse array of computer security incident response teams (CSIRTs) operate around the globe, sharing their research on and coordinating responses to viruses and other online attacks. Individual Internet service providers (ISPs), domain name registrars, and hosting companies work with these CSIRTs and other individuals and organizations to address security vulnerabilities.

Encouraging the development of robust and lawful software vulnerability markets would provide even more effective cybersecurity reporting. Some private companies and nonprofit security research firms have offered financial incentives for hackers to find and report software vulnerabilities to the proper parties for years now.[16] Such “bug bounty” and “vulnerability auction” programs better align hackers’ monetary incentives with the public interest. By allowing a space for security researchers to responsibly report and profit from discovered bugs, these markets dissuade hackers from selling vulnerabilities to criminal or state-backed organizations.[17]

A growing market for private security consultants and software providers also competes to offer increasingly sophisticated suites of security products for businesses, households, and governments. “Corporations, including software vendors, antimalware makers, ISPs, and major websites such as Facebook and Twitter, are aggressively pursuing cyber criminals,” notes Roger Grimes of Infoworld.[18] “These companies have entire legal teams dedicated to national and international cyber crime. They are also taking down malicious websites and bot-spewing command-and-control servers, along with helping to identify, prosecute, and sue bad guys,” he says.[19] Meanwhile, more organizations are employing “active defense” strategies, which are “countermeasures that entail more than merely hardening one’s own network against threats and instead seek to unmask one’s attacker or disable the attacker’s system.”[20]

A great deal of security knowledge is also “crowd-sourced” today via online discussion forums and security blogs that feature contributions from experts and average users alike. University-based computer science and cyber law centers and experts have also helped by creating projects like Stop Badware, which originated at Harvard University but then grew into a broader nonprofit organization with diverse financial support.[21] Meanwhile, informal grassroots security groups like The Cavalry have formed to build awareness about digital security threats among developers and the general public and then devise solutions to protect public safety.[22]

The recent debacle over the Commerce Department’s proposed new export rules for so-called cyberweapons provides a good example of how poorly considered policies can inadvertently undermine such beneficial emergent ecosystems. The agency’s new draft of US “Wassenaar Arrangement” arms control policies would have unintentionally criminalized the normal communication of basic software bug-testing techniques that hundreds of companies employ each day.[23] The regulators who were drafting the new rules had good intentions. They wanted to crack down on cyber criminals’ abilities to sell malware to hostile state-backed initiatives. However, their lack of technical sophistication led them to unknowingly write a proposal that would have compelled software engineers to seek Commerce Department permission before communicating information about minor software quirks. Fortunately, regulators wisely heeded the many concerned industry comments and rescinded the initial proposal.[24]

Dourado notes that informal, bottom-up efforts to coordinate security responses offer several advantages over top-down government solutions such as administrative regulatory regimes or punishing liability regimes. First, the informal cooperative approach “gives network operators flexibility to determine what constitutes due care in a dynamic environment.” “Formal legal standards,” by contrast, “may not be able to adapt as quickly as needed to rapidly changing circumstances,” he says.[25] Simply put, markets are more nimble than mandates when it comes to promptly patching security vulnerabilities.

Second, Dourado notes that “formal legal proceedings are adversarial and could reduce ISPs’ incentives to share information and cooperate.”[26] Heavy-handed regulation or threatening legal liability schemes could have the unintended consequence of discouraging the sort of cooperation that today alleviates security problems swiftly.

Indeed, there is evidence that existing cybersecurity law prevents defensive strategies that could help organizations to more quickly respond to system infiltrations. For example, some argue that private individuals and organizations should be allowed to defend themselves using special measures to expel or track system infiltrators, often called “hacking back” or “active defense.” Anthony Glosson’s analysis for the Mercatus Center discusses how the Computer Fraud and Abuse Act currently prevents computer security specialists from utilizing defensive hacking techniques that could improve system defenses or decrease the number of attempted attacks.[27]

Third, legal solutions are less effective because “the direct costs of going to court can be substantial, as can be the time associated with a trial,” Dourado argues.[28] By contrast, private actors working cooperatively “do not need to go to court to enforce security norms,” meaning that “security concerns are addressed quickly or punishment . . . is imposed rapidly.”[29] For example, if security warnings don’t work, ISPs can “punish” negligent or willfully insecure networks by “de-peering,” or terminating network interconnection agreements. The very threat of de-peering helps keep network operators on their toes.

Finally, and perhaps most importantly, Dourado notes that international cooperation between state-based legal systems is limited, complicated, and costly. By contrast, under today’s informal, voluntary approach to online security, international coordination and cooperation are quite strong. The CSIRTs and other security institutions and researchers mentioned above all interact and coordinate today as if national borders did not exist. Territorial legal systems and liability regimes don’t have the same advantage; enforcement ends at the border.

Dourado’s model has ramifications for other fields of tech policy. Indeed, as noted above, these collaborative efforts and approaches are already at work in the realms of online safety and digital privacy. Countless organizations and individuals collaborate on educational initiatives to improve online safety and privacy. And many industry and nonprofit groups have established industry best practices and codes of conduct to ensure a safer and more secure online experience for all users. The efforts of the Family Online Safety Institute were discussed above. Another example comes from the Future of Privacy Forum, a privacy think tank that seeks to advance responsible data practices. The think tank helps create codes of conduct to ensure privacy best practices by online operators and also helps highlight programs run by other organizations.[30] Likewise, the National Cyber Security Alliance helps promote Internet safety and security efforts among a variety of companies and coordinates National Cyber Security Awareness Month (every October) and Data Privacy Day (held annually on January 28).[31]

What these efforts prove is that not every complex social problem requires a convoluted legal regime or heavy-handed regulatory response. We can achieve reasonably effective safety and security without layering on more and more law and regulation.[32] Indeed, the Internet and digital systems could arguably be made more secure by reforming outdated legislation that prevents potential security-increasing collaborations. “Dynamic systems are not merely turbulent,” Postrel notes. “They respond to the desire for security; they just don’t do it by stopping experimentation.”[33] She adds, “Left free to innovate and to learn, people find ways to create security for themselves. Those creations, too, are part of dynamic systems. They provide personal and social resilience.”[34]

Education is a crucial part of building resiliency in the security context as well. People and organizations can prepare for potential security problems rationally if given even more information and better tools to secure their digital systems and to understand how to cope when problems arise. Again, many corporations and organizations already take steps to guard against malware and other types of cyberattacks by offering customers free (or cheap) security software. For example, major broadband operators offer free antivirus software to customers and various parental control tools to parents. In the context of “connected car” technology, automakers have banded together to come up with privacy and security best practices to address worries about remote hacking of cars as well as concerns about how much data they collect about our driving habits.[35]

Thus, although it is certainly true that “more could be done” to secure networks and critical systems, panic is unwarranted because much is already being done to harden systems and educate the public about risks.[36] Various digital attacks will continue, but consumers, companies, and other organizations are learning to cope and become more resilient in the face of those threats through creative “bottom-up” solutions instead of innovation-limiting “top-down” regulatory approaches.



[1]    This section partially adapted from Adam Thierer, “Achieving Internet Order without Law,” Forbes, June 24, 2012, http://www.forbes.com/sites/adamthierer/2012/06/24/achieving-internet-order-without-law. The author wishes to thank Andrea Castillo for major contributions to this section.


]]>
https://techliberation.com/2016/03/09/permissionless-innovation-cybersecurity-are-they-compatible/feed/ 0 76006
How Attitudes about Risk & Failure Affect Innovation on Either Side of the Atlantic https://techliberation.com/2015/06/19/how-attitudes-about-risk-failure-affect-innovation-on-either-side-of-the-atlantic/ https://techliberation.com/2015/06/19/how-attitudes-about-risk-failure-affect-innovation-on-either-side-of-the-atlantic/#comments Fri, 19 Jun 2015 22:15:06 +0000 http://techliberation.com/?p=75596

“Why hasn’t Europe fostered the kind of innovation that has spawned hugely successful technology companies?” asks James B. Stewart in an important new column for the New York Times (“A Fearless Culture Fuels U.S. Tech Giants“).

That’s a great question, and one that I have tried to answer in a series of recent essays. (See, for example, “Europe’s Choice on Innovation” and “Embracing a Culture of Permissionless Innovation.”) What I have suggested in those essays is that the starkly different outcomes on either side of the Atlantic in terms of recent economic growth and innovation can primarily be explained by cultural attitudes toward risk-taking and failure. “For innovation and growth to blossom, entrepreneurs need a clear green light from policymakers that signals a general acceptance of risk-taking—especially risk-taking that challenges existing business models and traditional ways of doing things,” I have argued. And the most powerful proof of this is to examine the amazing natural experiment that has played out on either side of the Atlantic over the past two decades with the Internet and the digital economy.

For example, an annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing and digital technology. None of them are based in Europe, however. Another recent survey revealed that the world’s 15 most valuable Internet companies (based on market capitalizations) have a combined market value of nearly $2.5 trillion, but none of them are European while 11 of them are U.S. firms. Again, it is America’s tech innovators that dominate that list.

Many European officials and business leaders are waking up to this grim reality and are wondering how to reverse this situation. In his Times essay, Stewart quotes Danish economist Jacob Kirkegaard of the Peterson Institute for International Economics, who notes that Europeans “all want a Silicon Valley. . . . But none of them can match the scale and focus on the new and truly innovative technologies you have in the United States. Europe and the rest of the world are playing catch-up, to the great frustration of policy makers there.”

OK, but why is that? Again, it comes down to those different cultural attitudes about risk and the stark differences over the potential lessons to be gained from allowing firms, business models, and entire professions to fail and/or be significantly disrupted.

Stewart quotes German economist Petra Moser on this point. “Europeans are worried. . . . They’re trying to recreate Silicon Valley in places like Munich, so far with little success,” she said. “The institutional and cultural differences are still too great. In Europe, stability is prized.” Here’s the key passage from the Stewart piece elaborating on this point:

Often overlooked in the success of American start-ups is the even greater number of failures. “Fail fast, fail often” is a Silicon Valley mantra, and the freedom to innovate is inextricably linked to the freedom to fail. In Europe, failure carries a much greater stigma than it does in the United States. Bankruptcy codes are far more punitive, in contrast to the United States, where bankruptcy is simply a rite of passage for many successful entrepreneurs.

Moreover, he notes, “Europeans are also much less receptive to the kind of truly disruptive innovation represented by a Google or a Facebook.”

And that remains the heart of the problem for Europe. What many leaders there fail to appreciate, as I noted in my earlier essays, is that:

Innovation is more likely in systems that maximize breathing room for ongoing economic and social experimentation, evolution, and adaptation. Societies that appreciate those values—and allow them to influence both social norms and policy decisions—are likely to experience greater economic growth. By contrast, those that deride such values and adopt a more precautionary policy approach are more likely to discourage innovation and languish economically.

The remarkable aversion to failure, and its effect in deterring entrepreneurialism and long-term growth in Europe and elsewhere, cannot be overstated. As I will argue in a forthcoming book chapter on this topic, we can conclude, paradoxically, that individuals, institutions, and countries that over-zealously seek to avoid the possibility of certain short-term failures are actually more prone to far more dangerous and systemic failures in the long term. Put more simply: the more you try to avoid all the little failures, the harder you fail more generally. This is Europe’s fundamental predicament circa 2015.

Of course, changing long-entrenched cultural attitudes toward risk and failure can be challenging and take many years, even decades. But the path forward–at least in terms of legal policy and regulatory reforms–has been charted by Larry Downes in his new Harvard Business Review essay, “How Europe Can Create Its Own Silicon Valley.” EU policymakers, he correctly observes, will “have to learn to appreciate in the first place the profound role regulation (or the lack of it) plays in the creation of economic value in the Internet economy.” Downes then continues on to itemize some of the policy changes that would help put Europe on the right track to unlock the amazing entrepreneurial spirit that lies dormant across the continent.

Whether or not the Europeans are willing to take those steps remains to be seen. Regardless, the lesson for U.S. policymakers should be clear: If you want to continue to produce world-beating tech innovators, you must avoid Europe’s overly precautionary and highly risk-averse approach to policy. “Permissionless innovation” remains the better default policy position toward new entrepreneurs and technologies, no matter how disruptive they may be in the short-term.

]]>
https://techliberation.com/2015/06/19/how-attitudes-about-risk-failure-affect-innovation-on-either-side-of-the-atlantic/feed/ 2 75596
Again, We Humans Are Pretty Good at Adapting to Technological Change https://techliberation.com/2015/01/16/again-we-humans-are-pretty-good-at-adapting-to-technological-change/ https://techliberation.com/2015/01/16/again-we-humans-are-pretty-good-at-adapting-to-technological-change/#respond Fri, 16 Jan 2015 16:58:19 +0000 http://techliberation.com/?p=75292

Claire Cain Miller of The New York Times posted an interesting story yesterday noting how, “Technology Has Made Life Different, but Not Necessarily More Stressful.” Her essay builds on a new study by researchers at the Pew Research Center and Rutgers University on “Social Media and the Cost of Caring.” Miller’s essay and this new Pew/Rutgers study indirectly make a point that I am always discussing in my own work, but that is often ignored or downplayed by many technological critics, namely: We humans have repeatedly proven quite good at adapting to technological change, even when it entails some heartburn along the way.

The major takeaway of the Pew/Rutgers study was that, “social media users are not any more likely to feel stress than others, but there is a subgroup of social media users who are more aware of stressful events in their friends’ lives and this subgroup of social media users does feel more stress.” Commenting on the study, Miller of the Times notes:

Fear of technology is nothing new. Telephones, watches and televisions were similarly believed to interrupt people’s lives and pressure them to be more productive. In some ways they did, but the benefits offset the stressors. New technology is making our lives different, but not necessarily more stressful than they would have been otherwise. “It’s yet another example of how we overestimate the effect these technologies are having in our lives,” said Keith Hampton, a sociologist at Rutgers and an author of the study.  . . .  Just as the telephone made it easier to maintain in-person relationships but neither replaced nor ruined them, this recent research suggests that digital technology can become a tool to augment the relationships humans already have.

I found this of great interest because I have written about how humans assimilate new technologies into their lives and become more resilient in the process as they learn various coping techniques. I elaborated on these issues in a lengthy essay last summer entitled “Muddling Through: How We Learn to Cope with Technological Change.” I borrowed the term “muddling through” from Joel Garreau’s terrific 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human.  Garreau argued that history can be viewed “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.”

Garreau associated this with what he called the “Prevail” scenario and contrasted it with the “Heaven” scenario, which holds that technology drives history relentlessly, and in almost every way for the better, and the “Hell” scenario, which always worries that “technology is used for extreme evil, threatening humanity with extinction.” Under the “Prevail” scenario, Garreau argued, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he concluded. (p. 154) Or, as John Seely Brown and Paul Duguid noted in their excellent 2001 essay, “Response to Bill Joy and the Doom-and-Gloom Technofuturists”:

technological and social systems shape each other. The same is true on a larger scale. […] Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge for futurology (and for all of us) is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.  Social and technological systems do not develop independently; the two evolve together in complex feedback loops, wherein each drives, restrains and accelerates change in the other.

In my essay last summer, I sketched out the reasons why I think this “prevail” or “muddling through” scenario offers the best explanation for how we learn to cope with technological disruption and prosper in the process. Again, it comes down to the fact that people and institutions learned to cope with technological change and become more resilient over time. It’s a learning process, and we humans are good at rolling with the punches and finding new baselines along the way. While “muddling through” can sometimes be quite difficult and messy, we adjust to most of the new technological realities we face and, over time, find constructive solutions to the really hard problems.

So, while it’s always good to reflect on the challenges of life in an age of never-ending, rapid-fire technological change, there’s almost never cause for panic. Read my old essay for more discussion on why I remain so optimistic about the human condition.

]]>
https://techliberation.com/2015/01/16/again-we-humans-are-pretty-good-at-adapting-to-technological-change/feed/ 0 75292
The 10 Most-Read Posts of 2014 https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/ https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/#comments Tue, 30 Dec 2014 16:36:34 +0000 http://techliberation.com/?p=75156

As 2014 draws to a close, we take a look back at the most-read posts from the past year at The Technology Liberation Front. Thank you for reading, and enjoy.

  10. New York’s financial regulator releases a draft of ‘BitLicense’ for Bitcoin businesses. Here are my initial thoughts.

In July, Jerry Brito wrote about New York’s proposed framework for regulating digital currencies like Bitcoin.

My initial reaction to the rules is that they are a step in the right direction. Whether one likes it or not, states will want to license and regulate Bitcoin-related businesses, so it’s good to see that New York engaged in a thoughtful process, and that the rules they have proposed are not out of the ordinary.
  9. Google Fiber: The Uber of Broadband

In February, I noted some of the parallels between Google Fiber and ride-sharing, in that new entrants are upending the competitive and regulatory status quo to the benefit of consumers.

The taxi registration systems and the cable franchise agreements were major regulatory mistakes. Local regulators should reduce regulations for all similarly-situated competitors and resist the temptation to remedy past errors with more distortions.
  8. The Debate over the Sharing Economy: Talking Points & Recommended Reading

In September, Adam Thierer appeared on Fox Business Network’s Stossel show to talk about the sharing economy. In a TLF post, he expands upon his televised commentary and highlights five main points.

  7. CES 2014 Report: The Internet of Things Arrives, but Will Washington Welcome It?

After attending the 2014 Consumer Electronics Show in January, Adam wrote a prescient post about the promise of the Internet of Things and the regulatory risks ahead.

When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers…. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.
  6. Defining “Technology”

Earlier this year, Adam compiled examples of how technologists and experts define “technology,” with entries ranging from the Oxford Dictionary to Peter Thiel. It’s a slippery exercise, but

if you are going to make an attempt to either study or critique a particular technology or technological practice or development, then you probably should take the time to tell us how broadly or narrowly you are defining the term “technology” or “technological process.”
  5. The Problem with “Pessimism Porn”

Adam highlights the tendency of tech press, academics, and activists to mislead the public about technology policy by sensationalizing technology risks.

The problem with all this, of course, is that it perpetuates societal fears and distrust. It also sometimes leads to misguided policies based on hypothetical worst-case thinking…. [I]f we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon them—it means that best-case scenarios will never come about.
  4. Mark T. Williams predicted Bitcoin’s price would be under $10 by now; it’s over $600

Professor Mark T. Williams predicted in December 2013 that by mid-2014, Bitcoin’s price would fall to below $10. In mid-2014, Jerry commends Prof. Williams for providing, unlike most Bitcoin watchers, a bold and falsifiable prediction about Bitcoin’s value. However, as Jerry points out, that prediction was erroneous: Bitcoin’s 2014 collapse never happened and the digital currency’s value exceeded $600.

  3. What Vox Doesn’t Get About the “Battle for the Future of the Internet”

In May, Tim Lee wrote a Vox piece about net neutrality and the Netflix-Comcast interconnection fight. Eli Dourado posted a widely-read and useful corrective to some of the handwringing in the Vox piece about interconnection, ISP market power, and the future of the Internet.

I think the article doesn’t really consider how interconnection has worked in the last few years, and consequently, it makes a big deal out of something that is pretty harmless…. There is nothing unseemly about Netflix making … payments to Comcast, whether indirectly through Cogent or directly, nor is there anything about this arrangement that harms “the little guy” (like me!).
  2. Muddling Through: How We Learn to Cope with Technological Change

The second most-read TLF post of 2014 is also the longest and most philosophical in this top-10 list. Adam wrote a popular and in-depth post about the social effects of technological change and notes that technology advances are largely for consumers’ benefit, yet “[m]odern thinking and scholarship on the impact of technological change on societies has been largely dominated by skeptics and critics.” The nature of human resilience, Adam explains, should encourage a cautiously optimistic view of technological change.

  1. Help me answer Senate committee’s questions about Bitcoin

Two days into 2014, Jerry wrote the most-read TLF piece of the past year. Jerry had testified before the Senate Homeland Security and Governmental Affairs Committee in 2013 as an expert on Bitcoin. The Committee requested more information about Bitcoin post-hearing and Jerry solicited comment from our readers.

Thank you to our loyal readers for continuing to visit The Technology Liberation Front. It was a busy year for tech and telecom policy, and 2015 promises to be similarly exciting. Have a happy and safe New Year!

]]>
https://techliberation.com/2014/12/30/the-10-most-read-posts-of-2014/feed/ 1 75156
A Nonpartisan Policy Vision for the Internet of Things https://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/ https://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/#comments Thu, 11 Dec 2014 20:07:11 +0000 http://techliberation.com/?p=75076

What sort of public policy vision should govern the Internet of Things? I’ve spent a lot of time thinking about that question in essays here over the past year, as well as in a new white paper (“The Internet of Things and Wearable Technology: Addressing Privacy and Security Concerns without Derailing Innovation”) that will be published in the Richmond Journal of Law & Technology early next year.

But I recently heard three policymakers articulate their recommended vision for the Internet of Things (IoT) and I found their approach so inspiring that I wanted to discuss it here in the hopes that it will become the foundation for future policy in this arena.

Last Thursday, it was my pleasure to attend a Center for Data Innovation (CDI) event on “How Can Policymakers Help Build the Internet of Things?” As the title implied, the goal of the event was to discuss how to achieve the vision of a more fully-connected world and, more specifically, how public policymakers can help facilitate that objective. It was a terrific event with many excellent panel discussions and keynote addresses.

Two of those keynotes were delivered by Senators Deb Fischer (R-Neb.) and Kelly Ayotte (R-N.H.). Below I will offer some highlights from their remarks and then relate them to the vision set forth by Federal Trade Commission (FTC) Commissioner Maureen K. Ohlhausen in some of her recent speeches. I will conclude by discussing how the Ayotte-Fischer-Ohlhausen vision can be seen as the logical extension of the Clinton Administration’s excellent 1997 Framework for Global Electronic Commerce, which proposed a similar policy paradigm for the Internet more generally. This shows how crafting policy for the IoT can and should be a nonpartisan affair.

Sen. Deb Fischer

In her opening remarks at the CDI event last week, Sen. Deb Fischer explained how “the Internet of Things can be a game changer for the U.S. economy and for the American consumer.” “It gives people more information and better tools to analyze data to make more informed choices,” she noted.

After outlining some of the potential benefits associated with the Internet of Things, Sen. Fischer continued on to explain why it is essential we get public policy incentives right first if we hope to unlock the full potential of these new technologies. Specifically, she argued that:

In order for Americans to receive the maximum benefits from increased connectivity, there are two things the government must avoid. First, policymakers can’t bury their heads in the sand and pretend this technological revolution isn’t happening only to wake up years down the road and try to micromanage a fast-changing, dynamic industry. Second, the federal government must also avoid regulation just for the sake of regulation. We need thoughtful, pragmatic responses and narrow solutions to any policy issues that arise. For too long, the only “strategy” in Washington policy-making has been to react to crisis after crisis. We should dive into what this means for U.S. global competitiveness, consumer welfare, and economic opportunity before the public policy challenges overwhelm us, before legislative and executive branches of government – or foreign governments – react without all the facts.

Fischer concluded by noting that, “it’s entirely appropriate for the U.S. government to think about how to modernize its regulatory frameworks, consolidate, renovate, and overhaul obsolete rules. We’re destined to lose to the Chinese or others if the Internet of Things is governed in the United States by rules that pre-date the VCR.”

Sen. Kelly Ayotte

Like Sen. Fischer, Ayotte similarly stressed the many economic opportunities associated with IoT technologies for both consumers and producers alike. [Note: Sen. Ayotte did not publish her remarks on her website, but you can watch her speech from the CDI event beginning around the 17-minute mark of the event video.]

Ayotte also noted that IoT is going to be a major topic for the Senate Commerce Committee and that there will be an upcoming hearing on the issue. She said that the role of the Committee will be to ensure that the various agencies looking into IoT issues are not issuing “conflicting regulatory directives” and “that what is being done makes sense and allows for future innovation that we can’t even anticipate right now.” Among the agencies she cited that are currently looking into IoT issues: FTC (privacy & security), FDA (medical device apps), FCC (wireless issues), FAA (commercial drones), NHTSA (intelligent vehicle technology), NTIA (multistakeholder privacy reviews), as well as state lawmakers and regulatory agencies.

Sen. Ayotte then explained what sort of policy framework America needed to adopt to ensure that the full potential of the Internet of Things could be realized. She framed the choice lawmakers are confronted with as follows:

“we as policymakers can either create an environment that allows that to continue to grow, or one that thwarts that. To stay on the cutting edge, we need to make sure that our regulatory environment is conducive to fostering innovation.” […] “we’re living in the Dark Ages in the way some of the regulations have been framed. Companies must be properly incentivized to invest in the future, and government shouldn’t be a deterrent to innovation and job-creation.”

Ayotte also stressed that “technology continues to evolve so rapidly there is no one-size-fits-all regulatory approach” that can work for a dynamic environment like this. “If legislation drives technology, the technology will be outdated almost instantly,” and “that is why humility is so important,” she concluded.

The better approach, she argued was to let technology evolve freely in a “permissionless” fashion and then see what problems developed and then address them accordingly. “[A] top-down, preemptive approach is never the best policy” and will only serve to stifle innovation, she argued. “If all regulators looked with some humility at how technology is used and whether we need to regulate or not to regulate, I think innovation would stand to benefit.”

FTC Commissioner Maureen K. Ohlhausen

Fischer and Ayotte’s remarks reflect a vision for the Internet of Things that FTC Commissioner Maureen K. Ohlhausen has articulated in recent months. In fact, Sen. Ayotte specifically cited Ohlhausen in her remarks.

Ohlhausen has actually delivered several excellent speeches on these issues and has become one of the leading public policy thought leaders on the Internet of Things in the United States today. One of her first major speeches on these issues was her October 2013 address entitled, “The Internet of Things and the FTC: Does Innovation Require Intervention?” In that speech, Ohlhausen noted that, “The success of the Internet has in large part been driven by the freedom to experiment with different business models, the best of which have survived and thrived, even in the face of initial unfamiliarity and unease about the impact on consumers and competitors.”

She also issued a wise word of caution to her fellow regulators:

It is . . . vital that government officials, like myself, approach new technologies with a dose of regulatory humility, by working hard to educate ourselves and others about the innovation, understand its effects on consumers and the marketplace, identify benefits and likely harms, and, if harms do arise, consider whether existing laws and regulations are sufficient to address them, before assuming that new rules are required.

In this and other speeches, Ohlhausen has highlighted the various other remedies that already exist when things do go wrong, including FTC enforcement of “unfair and deceptive practices,” common law solutions (torts and class actions), private self-regulation and best practices, social pressure, and so on. (Note: Inspired by Ohlhausen’s approach, I devoted the final section of my big law review article on IoT issues to a deeper exploration of all those “bottom-up” solutions to privacy and security concerns surrounding the IoT and wearable tech.)

The Clinton Administration Vision

These three women have articulated what I regard as the ideal vision for fostering the growth of the Internet of Things. It should be noted, however, that their framework is really just an extension of the Clinton Administration’s outstanding vision for the Internet more generally.

In the 1997 Framework for Global Electronic Commerce, the Clinton Administration outlined its approach toward the Internet and the emerging digital economy. As I’ve noted many times before, the Framework was a succinct and bold market-oriented vision for cyberspace governance that recommended reliance upon civil society, contractual negotiations, voluntary agreements, and ongoing marketplace experiments to solve information age problems. Specifically, it stated that “the private sector should lead [and] the Internet should develop as a market driven arena not a regulated industry.” “[G]overnments should encourage industry self-regulation and private sector leadership where possible” and “avoid undue restrictions on electronic commerce.”

Sen. Ayotte specifically cited those Clinton principles in her speech and said, “I think those words, given twenty years ago at the infancy of the Internet, are today even more relevant as we look at the challenges and the issues that we continue to face as regulators and policymakers.”

I completely agree. This is exactly the sort of vision that we need to keep innovation moving forward to benefit consumers and the economy, and this also illustrates how IoT policy can be a nonpartisan effort.

Why does this matter so much? As I noted in this recent essay, thanks to the Clinton Administration’s bold vision for the Internet:

This policy disposition resulted in an unambiguous green light for a rising generation of creative minds who were eager to explore this new frontier for commerce and communications. . . . The result of this freedom to experiment was an outpouring of innovation. America’s info-tech sectors thrived thanks to permissionless innovation, and they still do today. An annual Booz & Company report on the world’s most innovative companies revealed that 9 of the top 10 most innovative companies are based in the U.S. and that most of them are involved in computing, software, and digital technology.

In other words, America got policy right before, and we can get it right again to ensure that we remain a global innovation leader. Patience, flexibility, and forbearance are the key policy virtues that nurture an environment conducive to entrepreneurial creativity, economic progress, and greater consumer choice.

Other policymakers should endorse the vision originally sketched out by the Clinton Administration and now so eloquently embraced and extended by Sen. Fischer, Sen. Ayotte, and Commissioner Ohlhausen. This is the path forward if we hope to realize the full potential of the Internet of Things.

]]>
https://techliberation.com/2014/12/11/a-nonpartisan-policy-vision-for-the-internet-of-things/feed/ 3 75076
New Paper on The Sharing Economy and Consumer Protection Regulation https://techliberation.com/2014/12/08/new-paper-on-the-sharing-economy-and-consumer-protection-regulation/ https://techliberation.com/2014/12/08/new-paper-on-the-sharing-economy-and-consumer-protection-regulation/#comments Mon, 08 Dec 2014 15:06:54 +0000 http://techliberation.com/?p=75035

I’ve just released a short new paper, co-authored with my Mercatus Center colleagues Christopher Koopman and Matthew Mitchell, on “The Sharing Economy and Consumer Protection Regulation: The Case for Policy Change.” The paper is being released to coincide with a Congressional Internet Caucus Advisory Committee event that I am speaking at today on “Should Congress be Caring About Sharing? Regulation and the Future of Uber, Airbnb and the Sharing Economy.”

In this new paper, Koopman, Mitchell, and I discuss how the sharing economy has changed the way many Americans commute, shop, vacation, borrow, and so on. Of course, the sharing economy “has also disrupted long-established industries, from taxis to hotels, and has confounded policymakers,” we note. “In particular, regulators are trying to determine how to apply many of the traditional ‘consumer protection’ regulations to these new and innovative firms.” This has led to a major debate over the public policies that should govern the sharing economy.

We argue that, coupled with the Internet and various new informational resources, the rapid growth of the sharing economy alleviates the need for much traditional top-down regulation. These recent innovations are likely doing a much better job of serving consumer needs by offering more choices, more service differentiation, better prices, and higher-quality services. In particular, the sharing economy and the various feedback mechanisms it relies upon help solve the traditional economic problem of “asymmetrical information,” which is often cited as a rationale for regulation. We conclude, therefore, that “the key contribution of the sharing economy is that it has overcome market imperfections without recourse to traditional forms of regulation. Continued application of these outmoded regulatory regimes is likely to harm consumers.”

We note that this is especially likely to be the case when the failure of traditional regulatory models is taken into account. As we document in the paper, all too often, well-intentioned “public interest” regulation is captured by industry and used to serve its interests:

by limiting entry, or by raising rivals’ costs, regulations can be useful to the regulated firms. Though regulations often make consumers worse off, they are often sustained by political pressure from consumer advocates because they can be disguised as “consumer protection.”

We provide evidence of regulatory capture and note that it has been particularly acute in many of the sectors now being disrupted by sharing economy innovators, such as taxi and transportation services. It is evident that regulation has not lived up to its lofty expectations in many sectors. Accordingly, when market circumstances change dramatically—or when new technology or competition alleviate the need for regulation—then public policy should evolve and adapt to accommodate these new realities.

Of course, many bad laws and regulations remain on the books and have constituencies who will defend them vociferously. Our paper concludes with some recommendations for how to “level the regulatory playing field” in a pro-consumer, pro-innovation fashion. We note that while differential regulatory treatment of incumbents and new entrants does represent a potential problem, there’s a sensible, pro-consumer and pro-innovation way to solve that problem:

such regulatory asymmetries represent a legitimate policy problem. But the solution is not to punish new innovations by simply rolling old regulatory regimes onto new technologies and sectors. The better alternative is to level the playing field by “deregulating down” to put everyone on equal footing, not by “regulating up” to achieve parity. Policymakers should relax old rules on incumbents as new entrants and new technologies challenge the status quo. By extension, new entrants should only face minimal regulatory requirements as more onerous and unnecessary restrictions on incumbents are relaxed.

Download this new paper on the Mercatus website or via SSRN or ResearchGate. Incidentally, we plan to release a much longer Mercatus Center white paper early next year that will explore reputational feedback mechanisms in far greater detail and explain how these systems help address the problem of “asymmetrical information” in these and other contexts.


Also see: “The Debate over the Sharing Economy: Talking Points & Recommended Reading,” which includes the video below of me discussing these issues recently on the Stossel Show.

]]>
https://techliberation.com/2014/12/08/new-paper-on-the-sharing-economy-and-consumer-protection-regulation/feed/ 1 75035
Thinking about Innovation Policy Debates: 4 Related Paradigms https://techliberation.com/2014/11/11/thinking-about-innovation-policy-debates-4-related-paradigms/ https://techliberation.com/2014/11/11/thinking-about-innovation-policy-debates-4-related-paradigms/#comments Tue, 11 Nov 2014 21:09:02 +0000 http://techliberation.com/?p=74915

In my previous essay, I discussed a new white paper by my colleague Robert Graboyes, Fortress and Frontier in American Health Care, which examines the future of medical innovation. Graboyes uses the “fortress vs. frontier” dichotomy to help explain different “visions” of how public policy debates about technological innovation in the health care arena often play out. It’s a terrific study that I highly recommend for all the reasons I stated in my previous post.

As I was reading Bob’s new report, I realized that his approach shared much in common with several other recent innovation policy paradigms I have discussed here before from Virginia Postrel (“Stasis” vs. “Dynamism”), Robert D. Atkinson (“Preservationists” vs. “Modernizers”), and myself (“Precautionary Principle” vs. “Permissionless Innovation”). In this essay, I will briefly relate Bob’s approach to those other three innovation policy paradigms and then note a deficiency with our common approaches. I’ll conclude by briefly discussing another interesting framework from science writer Joel Garreau.

Stasis vs. Dynamism – Virginia Postrel (1998)

In her 1998 book, The Future and Its Enemies, Virginia Postrel contrasted the conflicting worldviews of “dynamism” and “stasis” and showed how the tensions between these two visions would affect the course of future human progress. Postrel made the case for embracing dynamism — “a world of constant creation, discovery, and competition” — over the “regulated, engineered world” of the stasis mentality. She argued that we should “see technology as an expression of human creativity and the future as inviting” and reject the idea “that progress requires a central blueprint.” Dynamism defines progress as “a decentralized, evolutionary process” in which mistakes aren’t viewed as permanent disasters but instead as “the correctable by-products of experimentation.” (p. xiv)

Postrel argued that our dynamic modern world and the amazing technologies that drive it have united diverse “stasis”-minded forces in opposition to its continued, unfettered evolution:

[It] has united two types of stasists who would have once been bitter enemies: reactionaries, whose central value is stability, and technocrats, whose central value is control. Reactionaries seek to reverse change, restoring the literal or imagined past and holding it in place. . . . Technocrats, for their part, promise to manage change, centrally directing “progress” according to a predictable plan. . . . They do not celebrate the primitive or traditional. Rather, they worry about the government’s inability to control dynamism. (p. 7-8)

Preservationists vs. Modernizers – Robert D. Atkinson (2004)

Robert D. Atkinson, president of the Information Technology and Innovation Foundation, presented another useful way of looking at innovation policy divides in his 2004 book, The Past and Future of America’s Economy. In Chapter 6 on “The New Economy and Its Discontents,” Atkinson noted how “American history is rife with resistance to change,” as he recounted some of the heated battles over previous industrial and technological revolutions. He argued:

This conflict between stability and progress, security and prosperity, dynamism and stasis, has led to the creation of a major political fault line in American politics. On one side are those who welcome the future and look at the New Economy as largely positive. On the other are those who resist change and see only the risks of new technologies and the New Economy.  As a result, a political divide is emerging between preservationists who want to hold onto the past and modernizers who recognize that new times require new means. (p. 201)

Precautionary Principle vs. Permissionless Innovation – Adam Thierer (2014)

In my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” I argued that the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? I argued that we are today witnessing a grand clash of visions between two competing mindsets about how that question should be answered for a wide variety of new inventions:

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.
The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

Fortress vs. Frontier – Robert Graboyes (2014)

In his new white paper, Fortress and Frontier in American Health Care, Robert Graboyes seeks to reframe the debate over the future of health care innovation in terms of “Fortress versus Frontier” and to highlight lessons from the Internet and the Information Revolution that can better inform health care policy. Graboyes defines “Fortress and Frontier” as follows:

The Fortress is an institutional environment that aims to obviate risk and protect established producers (insiders) against competition from newcomers (outsiders). The Frontier, in contrast, tolerates risk and allows outsiders to compete against established insiders. . . .  The Fortress-Frontier divide does not correspond neatly with the more familiar partisan or ideological divides. Framing health care policy issues in this way opens the door for a more productive national health care discussion and for unconventional policy alliances. (p. 4)

He elaborates in more detail later in the paper:

the Frontier encourages creative destruction and disruptive innovation. Undreamed-of products arise and old, revered ones vanish. New production processes sweep away old ones. This is a place where unknown innovators in garages destroy titans of industry. The Frontier celebrates and rewards risk, and there is a brutal egalitarianism to the creative process. In contrast, the Fortress discourages creative destruction and disruptive innovation. Insiders are protected from competition by government or by private organizations (such as insurers and medical societies) acting in quasigovernmental fashion. In the Fortress, insiders preserve the existing order. Innovation comes from well-established, credentialed insiders who, it is presumed, have the wisdom and motives and competence to identify opportunities for innovation. (p. 13)

The Common Themes

There are several themes that unify these four frameworks. Most notably, they all seek to escape the traditional “Left vs. Right,” “Conservative vs. Liberal,” and “Democrat vs. Republican” labels and models. Postrel’s book noted that, although there are differences at the margin, “reactionaries” (who tend to be more politically and socially “conservative”) and “technocrats” (who tend to identify as politically “progressive”) are united by their desire for greater control over the pace and shape of technological innovation. They both hope that sagacious, noble-minded public officials can set us on a “better path,” or return us to an old path from which we have drifted.

Similarly, Atkinson’s “preservationists versus modernizers” dichotomy identified the “small-c” conservatism that animates the preservationist mindset, regardless of which party or political movement its adherents belong to. Graboyes and I identify this same tendency of those with a precautionary, Fortress mindset to be deeply suspicious of change, and sometimes even quite openly hostile to it, regardless of their political affiliation. Moreover, all four authors note that, at a minimum, the Stasis/Preservationist/Fortress/Precautionary vision is unified by a general gloominess about the prospect for technological change to really better our economy or culture.

From a policy perspective, the competing visions outlined in each of these four paradigms are unified by their preferred policy default for new innovation. Generally speaking, those subscribing to the Dynamist/Modernizer/Frontier/Permissionless Innovation vision believe that innovators should have a clear green light to experiment without fear of prior restraint. By contrast, those adhering to the Stasis/Preservationist/Fortress/Precautionary vision are more risk-averse and tend to opt for “better to be safe than sorry” policy defaults.

Here’s a little table I put together to highlight the “conflict of visions” over innovation policy identified in these works.

Innovation Policy: The Conflict of Visions

“Stasis” vs. “Dynamism”
“Preservationists” vs. “Modernizers”
“Precautionary principle” vs. “Permissionless innovation”
“Fortress” vs. “Frontier”
progress should be carefully guided vs. progress should be free-wheeling
fear of risk & uncertainty vs. embrace of risk & uncertainty
stability/safety first vs. spontaneity first
equilibrium vs. experimentation
wisdom through better planning vs. wisdom through trial & error
anticipation & regulation vs. adaptation & resiliency
ex ante solutions vs. ex post solutions
“better to be safe than sorry” vs. “nothing ventured, nothing gained”

A Problem with These Paradigms

An astute reader will notice a potential problem with these four paradigms: They were crafted by people (including myself) who were much more favorably disposed to one vision than the other. In fact, each of the authors listed here (including me) firmly embraced a common “positive” or “optimistic” vision about the potential for innovation and technological change to generally boost human welfare. We were all writing defenses of visions that encourage the adoption of attitudes and public policies that are welcoming toward new innovations. Postrel, for example, was seeking to articulate and defend the superiority of the dynamist vision over the stasis mentality. Atkinson defended modernizers and bashed preservationists. Graboyes embraced the Frontier mentality and warned of the dangers of the Fortress mentality. Finally, in my own work, I have vociferously defended the notion of permissionless innovation while repeatedly criticizing precautionary principle-based thinking.

I will proudly defend my own work as well as the visions sketched out by Postrel, Atkinson, and Graboyes, which are all very much in league with my own. Nonetheless, some readers or critics might claim that we have stacked the deck in our favor by framing innovation policy debates in the ways we have. We each had a polemical purpose in mind when writing these books; we were hoping to convince others to embrace our way of thinking about technological progress and the future. As a result, that influenced our choice of language and labels. Some critics might even claim that the words we chose to describe the alternative vision are too simplistic or unfairly derogatory. After all, who wants to be labeled a “stasis”-minded “preservationist” who is trapped in a “fortress” mentality advocating hopelessly “precautionary” policies?! By contrast, it is relatively easy for many of us to say we are “modernizers” who embrace “dynamism” and the “frontier” spirit in defense of “permissionless innovation.”

Technological critics have penned a wide variety of polemics making their views on these matters clear, but what is interesting is how few of them attempt to describe the opposing positions in clear detail, or even bother trying to label them. Nor do they usually bother labeling their own positions or perspectives. I suspect that many of them would claim their visions or critiques cannot be succinctly summarized in a mere word or phrase, and that trying to craft conflicting “visions” about innovation policy over-simplifies very complex matters. I actually appreciate that point more than you might think. When I am writing about these matters, I try not to over-generalize the very nuanced, sensitive issues in play here, such as the privacy, safety, and security implications associated with various new innovations. These are profound matters and they deserve to be analyzed carefully and respectfully.

That being said, I still believe that there is a role for visions when thinking about the past, the present, and the future of technological change. Labels and classifications can help us unpack the philosophical differences between different people and organizations and then also evaluate their preferred policy solutions. This allows us to better understand what animates the opposing forces that are pushing for specific policy changes.

Nonetheless, I welcome alternative framings of these proposals and the personalities behind them. Moreover, I would very much like to see others — either those who take opposing views, or analysts with no stake in the fight — suggest other ways of looking at the conflict of visions that animates debates over technological innovation and the future of progress.

A Note on Joel Garreau’s Framing

I want to close with a quick postscript related to my point about over-simplifying “visions” about technological change. In 2010, I penned an essay that got a fair amount of attention entitled, “Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society.” As the title implied, it was an attempt to divide the history of thinking about technological innovation into two camps: “pessimists” and “optimists.” It was a crude and overly simplistic dichotomy, but it was an attempt to begin sketching out a rough taxonomy of the personalities and perspectives that we often see pitted against each other in debates about the impact of technology on culture and humanity.

I was never really satisfied with the “optimist vs. pessimist” breakdown, and I got an earful from some people about it. I always thought there must be somebody who had figured out a better way of reviewing the long arc of history and human thinking about technological change and coming up with better labels or “visions.” And there was!

When I wrote that earlier piece, I was unfortunately not aware of a similar (and much better) framing of this divide that was developed by science and technology writer Joel Garreau in his outstanding 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human. In that book, Garreau is thinking in much grander terms about technology and the future than I was in my earlier essay. He was focused on how various emerging technologies might be changing our very humanity and he notes that narratives about these issues are typically framed in “Heaven” versus “Hell” scenarios.

Under the “Heaven” scenario, technology drives history relentlessly, and in almost every way for the better. As Garreau describes the beliefs of the Heaven crowd, they believe that going forward, “almost unimaginably good things are happening, including the conquering of disease and poverty, but also an increase in beauty, wisdom, love, truth, and peace.” (p. 130) By contrast, under the “Hell” scenario, “technology is used for extreme evil, threatening humanity with extinction.” (p. 95) Garreau notes that what unifies the Hell scenario theorists is the sense that in “wresting power from the gods and seeking to transcend the human condition,” we end up instead creating a monster — or maybe many different monsters — that threatens our very existence. Garreau says this “Frankenstein Principle” can be seen in countless works of literature and technological criticism throughout history, and it is still very much with us today. (p. 108)

After discussing the “Heaven” and “Hell” scenarios cast about by countless tech writers throughout history, Garreau outlined a third, and more pragmatic “Prevail” option, which views history “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.” As Garreau explains it, under the “Prevail” scenario, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he rightly notes. (p. 154)

That pretty much sums up my own perspective on things, as I noted in this essay earlier this year, “Muddling Through: How We Learn to Cope with Technological Change.”  I think the “prevail” or “muddling through” notion offers the best explanation for how we learn to cope with technological disruption and prosper in the process. (I also wrote a lengthy law review article on this and discussed this issue more in my recent book.) In any event, I chose not to include Garreau’s framework in the above discussion because Garreau — a former reporter and editor at The Washington Post — tries to be somewhat more objective in discussing the various “Heaven” vs. “Hell” scenarios and the personalities behind them (even though in the concluding chapter he seems to be aligning himself with the “Prevail” crowd.) So, it doesn’t quite align perfectly with the more polemical visions I described above. But I continue to think it is the single best thing penned in recent years on the nature of these debates. I cannot recommend it strongly enough.

In closing, I want to reiterate that I would very much welcome suggestions from others about alternative framings and paradigms for thinking about the future of technological change and progress. I imagine I will spend the rest of my life researching and writing about these issues, so I’d love to get more input.  As you can tell, I find these debates terrifically interesting!

]]>
https://techliberation.com/2014/11/11/thinking-about-innovation-policy-debates-4-related-paradigms/feed/ 2 74915
The Debate over the Sharing Economy: Talking Points & Recommended Reading https://techliberation.com/2014/09/26/the-debate-over-the-sharing-economy-talking-points-recommended-reading/ https://techliberation.com/2014/09/26/the-debate-over-the-sharing-economy-talking-points-recommended-reading/#comments Fri, 26 Sep 2014 15:40:11 +0000 http://techliberation.com/?p=74792

The sharing economy is growing faster than ever and becoming a hot policy topic these days. I’ve been fielding a lot of media calls lately about the nature of the sharing economy and how it should be regulated. (See latest clip below from the Stossel show on Fox Business Network.) Thus, I sketched out some general thoughts about the issue and thought I would share them here, along with some helpful additional reading I have come across while researching the issue. I’d welcome comments on this outline as well as suggestions for additional reading. (Note: I’ve also embedded some useful images from Jeremiah Owyang of Crowd Companies.)

1) Just because policymakers claim that regulation is meant to protect consumers does not mean it actually does so.

  1. Cronyism/Rent-seeking: Regulation is often “captured” by powerful and politically well-connected incumbents and used for their own benefit. (+ Lobbying activity creates deadweight losses for society.)
  2. Innovation-killing: Regulations become a formidable barrier to new innovation, entry, and entrepreneurism.
  3. Unintended consequences: Instead of resulting in lower prices & better service, the opposite often happens: higher prices & lower-quality service. (Example: painting all cabs the same color destroys branding & the ability to differentiate.)

2) The Internet and information technology alleviates the need for top-down regulation & actually does a better job of serving consumers.

  1. Ease of entry/innovation in online world means that new entrants can come in to provide better options and solve problems previously thought to be unsolvable in the absence of regulation.
  2. Informational empowerment: The Internet and information technology solves old problem of lack of consumer access to information about products and services. This gives them monitoring tools to find more and better choices. (i.e., it lowers both search costs & transaction costs). (“To the extent that consumer protection regulation is based on the claim that consumers lack adequate information, the case for government intervention is weakened by the Internet’s powerful and unprecedented ability to provide timely and pointed consumer information.” – John C. Moorhouse)
  3. Feedback mechanisms (product & service rating / review systems) create powerful reputational incentives for all parties involved in transactions to perform better.
  4. Self-regulating markets: The combination of these three factors results in a powerful check on market power or abusive behavior. The result is reasonably well-functioning and self-regulating markets. Bad actors get weeded out.
  5. Law should evolve: When circumstances change dramatically, regulation should as well. If traditional rationales for regulation evaporate, or new technology or competition alleviates need for it, then the law should adapt.
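The reputational logic in points 2 through 4 can be illustrated with a toy simulation. This is only a sketch of the general mechanism, not anything from the paper: the seller names, quality probabilities, and the 4-star "trust" threshold below are all illustrative assumptions.

```python
import random

random.seed(42)

# Toy model: two seller types with different true (hidden) quality.
# Buyers can't observe quality directly (asymmetric information),
# but ratings accumulate into a public reputation signal.
sellers = {"honest": 0.9, "shoddy": 0.4}  # probability a transaction goes well
ratings = {name: [] for name in sellers}

for _ in range(200):  # 200 transactions per seller
    for name, quality in sellers.items():
        good = random.random() < quality
        ratings[name].append(5 if good else 1)  # 5-star vs. 1-star review

# Average rating becomes the reputation signal buyers consult.
avg = {name: sum(r) / len(r) for name, r in ratings.items()}

# Sellers below an (assumed) trust threshold stop attracting
# customers -- the "weeding out" of bad actors.
trusted = {name for name, score in avg.items() if score >= 4.0}
print(avg, trusted)
```

The point of the sketch is that no ex ante regulator is needed in the model: with enough transactions, the rating average converges on each seller’s true quality, and low-quality sellers lose business on their own.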

3) Sharing economy has demonstrably improved consumer welfare. It provides:

  1. more choices / competition
  2. more service innovation / differentiation
  3. better prices
  4. higher quality services  (safety & cleanliness /convenience / peace of mind)
  5. Better options & conditions for workers

4) If we need to “level the (regulatory) playing field,” best way to do so is by “deregulating down” to put everyone on equal footing; not by “regulating up” to achieve parity.

  1. Regulatory asymmetry is real: Incumbents are right that they are at disadvantage relative to new sharing economy start-ups.
  2. Don’t punish new innovations for it: But solution is not to just roll the old regulatory regime onto the new innovators.
  3. Parity through liberalization: Instead, policymakers should “deregulate down” to achieve regulatory parity. Loosen old rules on incumbents as new entrants challenge status quo.
  4. “Permissionless innovation” should trump “precautionary principle” regulation: Preemptive, precautionary regulation does not improve consumer welfare. Competition and choice do better. Thus, our default position toward the sharing economy should be “innovation allowed” or permissionless innovation.
  5. Alternative remedies exist: Accidents will always happen, of course. But insurance, contracts, product liability, and other legal remedies exist when things go wrong. The difference is that ex post remedies don’t discourage innovation and competition like ex ante regulation does. By trying to head off every hypothetical worst-case scenario, preemptive regulations actually discourage many best-case scenarios from ever coming about.

5) Bottom line = Good intentions only get you so far in this world.

  1. Just because a law was put on the books for noble purposes, it does not mean it really accomplished those goals, or still does so today.
  2. Markets, competition, and ongoing innovation typically solve problems better than law when we give them a chance to do so.

[P.S. On 9/30, my Mercatus Center colleague Matt Mitchell posted this excellent follow-up essay building on my outline and improving it greatly.]

Sharing Economy Taxonomy (Source: Jeremiah Owyang, Crowd Companies)

Why People Use Sharing Services (Source: Jeremiah Owyang, Crowd Companies)

Additional Reading

]]>
https://techliberation.com/2014/09/26/the-debate-over-the-sharing-economy-talking-points-recommended-reading/feed/ 2 74792
Muddling Through: How We Learn to Cope with Technological Change https://techliberation.com/2014/06/17/muddling-through-how-we-learn-to-cope-with-technological-change/ https://techliberation.com/2014/06/17/muddling-through-how-we-learn-to-cope-with-technological-change/#comments Tue, 17 Jun 2014 17:38:18 +0000 http://techliberation.com/?p=74622

How is it that we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” so many well-established personal, social, cultural, and legal norms?

In recent years, I’ve spent a fair amount of time thinking through that question in a variety of blog posts (“Are You An Internet Optimist or Pessimist? The Great Debate over Technology’s Impact on Society”), law review articles (“Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle”), opeds (“Why Do We Always Sell the Next Generation Short?”), and books (See chapter 4 of my new book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom”).

It’s fair to say that this issue — how individuals, institutions, and cultures adjust to technological change — has become a personal obsession of mine and it is increasingly the unifying theme of much of my ongoing research agenda. The economic ramifications of technological change are part of this inquiry, of course, but those economic concerns have already been the subject of countless books and essays both today and throughout history. I find that the social issues associated with technological change — including safety, security, and privacy considerations — typically get somewhat less attention, but are equally interesting. That’s why my recent work and my new book narrow the focus to those issues.

Optimistic (“Heaven”) vs. Pessimistic (“Hell”) Scenarios

Modern thinking and scholarship on the impact of technological change on societies has been largely dominated by skeptics and critics.

In the past century, for example, French philosopher Jacques Ellul (The Technological Society), German historian Oswald Spengler (Man and Technics), and American historian Lewis Mumford (Technics and Civilization) penned critiques of modern technological processes that took a dour view of technological innovation and our collective ability to adapt positively to it. (Concise summaries of their thinking can be found in Christopher May’s edited collection of essays, Key Thinkers for the Information Society.)

These critics worried about the subjugation of humans to “technique” or “technics” and feared that technology and technological processes would come to control us before we learned how to control them. Media theorist Neil Postman was the most notable of the modern information technology critics and served as the bridge between the industrial era critics (like Ellul, Spengler, and Mumford) and some of today’s digital age skeptics (like Evgeny Morozov and Nick Carr). Postman decried the rise of a “technopoly” — “the submission of all forms of cultural life to the sovereignty of technique and technology” — that would destroy “the vital sources of our humanity” and lead to “a culture without a moral foundation” by undermining “certain mental processes and social relations that make human life worth living.” We see that attitude on display in countless works of technological criticism since then.

Of course, there’s been some pushback from some futurists and technological enthusiasts. But there’s often a fair amount of irrational exuberance at work in their tracts and punditry. Many self-proclaimed “futurists” have predicted that various new technologies would produce a nirvana that would overcome human want, suffering, ignorance, and more.

In a 2010 essay, I labeled these two camps technological “pessimists” and “optimists.” It was a crude and overly simplistic dichotomy, but it was an attempt to begin sketching out a rough taxonomy of the personalities and perspectives that we often see pitted against each other in debates about the impact of technology on culture and humanity.

Sadly, when I wrote that earlier piece, I was not aware of a similar (and much better) framing of this divide that was developed by science writer Joel Garreau in his terrific 2005 book, Radical Evolution: The Promise and Peril of Enhancing Our Minds, Our Bodies — and What It Means to Be Human. In that book, Garreau is thinking in much grander terms about technology and the future than I was in my earlier essay. He was focused on how various emerging technologies might be changing our very humanity and he notes that narratives about these issues are typically framed in “Heaven” versus “Hell” scenarios.

Under the “Heaven” scenario, technology drives history relentlessly, and in almost every way for the better. As Garreau describes the beliefs of the Heaven crowd, they believe that going forward, “almost unimaginably good things are happening, including the conquering of disease and poverty, but also an increase in beauty, wisdom, love, truth, and peace.” (p. 130) By contrast, under the “Hell” scenario, “technology is used for extreme evil, threatening humanity with extinction.” (p. 95) Garreau notes that what unifies the Hell scenario theorists is the sense that in “wresting power from the gods and seeking to transcend the human condition,” we end up instead creating a monster — or maybe many different monsters — that threatens our very existence. Garreau says this “Frankenstein Principle” can be seen in countless works of literature and technological criticism throughout history, and it is still very much with us today. (p. 108)

Theories of Collapse: Why Does Doomsaying Dominate Discussions about New Technologies?

Indeed, in examining the way new technologies and inventions have long divided philosophers, scientists, pundits, and the general public, one can find countless examples of that sort of fear and loathing at work. “Armageddon has a long and distinguished history,” Garreau notes. “Theories of progress are mirrored by theories of collapse.” (p. 149)

In that regard, Garreau rightly cites Arthur Herman’s magisterial history of apocalyptic theories, The Idea of Decline in Western History, which documents “declinism” over time. The irony of much of this pessimistic declinist thinking, Herman notes, is that:

In effect, the very things modern society does best — providing increasing economic affluence, equality of opportunity, and social and geographic mobility — are systematically deprecated and vilified by its direct beneficiaries. None of this is new or even remarkable. (p. 442)

Why is that? Why has the “Hell” scenario been such a dominant reoccurring theme in past writing and commentary throughout history, even though the general trend has been steady improvements in human health, welfare, and convenience?

There must be something deeply rooted in the human psyche that accounts for this tendency. As I have discussed in my new book as well as my big “Technopanics” law review article, our innate tendency to be pessimistic, combined with our desire for certainty about the future, means that “the gloom-mongers have it easy,” as author Dan Gardner argues in his book, Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better. He continues on to note of the techno-doomsday pundits:

Their predictions are supported by our intuitive pessimism, so they feel right to us. And that conclusion is bolstered by our attraction to certainty. As strange as it sounds, we want to believe the expert predicting a dark future is exactly right, because knowing that the future will be dark is less tormenting than suspecting it. Certainty is always preferable to uncertainty, even when what’s certain is disaster. (p. 140-1)

Similarly, in his new book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson notes that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.” (p. 283)

Another explanation is that humans are sometimes very poor judges of the relative risks to themselves or those close to them. Harvard University psychology professor Steven Pinker, author of The Blank Slate: The Modern Denial of Human Nature, notes:

The mind is more comfortable in reckoning probabilities in terms of the relative frequency of remembered or imagined events. That can make recent and memorable events—a plane crash, a shark attack, an anthrax infection—loom larger in one’s worry list than more frequent and boring events, such as the car crashes and ladder falls that get printed beneath the fold on page B14. And it can lead risk experts to speak one language and ordinary people to hear another. (p. 232)

Put simply, there exists a wide variety of explanations for why our collective first reaction to new technologies often is one of dystopian dread. In my work, I have identified several other factors, including: generational differences; hyper-nostalgia; media sensationalism; special interest pandering to stoke fears and sell products or services; elitist attitudes among intellectuals; and the so-called “third-person effect hypothesis,” which posits that when some people encounter perspectives or preferences at odds with their own, they are more likely to be concerned about the impact of those things on others throughout society and to call on government to “do something” to correct or counter those perspectives or preferences.

Some combination of these factors ends up driving the initial resistance we have seen to new technologies that disrupted long-standing social norms, traditions, and institutions. In the extreme, it results in that gloom-and-doom, sky-is-falling disposition in which we are repeatedly told how humanity is about to be steam-rolled by some new invention or technological development.

The “Prevail” (or “Muddling Through”) Scenario

“The good news is that end-of-the-world predictions have been around for a very long time, and none of them has yet borne fruit,” Garreau reminds us. (p. 148) Why not? Let’s get back to his framework for the answer. After discussing the “Heaven” (optimistic) and “Hell” (skeptical or pessimistic) scenarios cast about by countless tech writers throughout history, Garreau outlines a third, and more pragmatic “Prevail” option, which views history “as a remarkably effective paean to the power of humans to muddle through extraordinary circumstances.”

That pretty much sums up my own perspective on things, and in the remainder of this essay I want to sketch out the reasons why I think the “prevail” or “muddling through” scenario offers the best explanation for how we learn to cope with technological disruption and prosper in the process.

As Garreau explains it, under the “Prevail” scenario, “humans shape and adapt [technology] in entirely new directions.” (p. 95) “Just because the problems are increasing doesn’t mean solutions might not also be increasing to match them,” he rightly notes. (p. 154) As John Seely Brown and Paul Duguid noted in their excellent 2001 essay, “Response to Bill Joy and the Doom-and-Gloom Technofuturists”:

technological and social systems shape each other. The same is true on a larger scale. […] Technology and society are constantly forming and reforming new dynamic equilibriums with far-reaching implications. The challenge for futurology (and for all of us) is to see beyond the hype and past the over-simplifications to the full import of these new sociotechnical formations.  Social and technological systems do not develop independently; the two evolve together in complex feedback loops, wherein each drives, restrains and accelerates change in the other.

It is this process of “constantly forming and reforming new dynamic equilibriums” that interests me most. In a recent exchange with Michael Sacasas — one of the most thoughtful modern technology critics I’ve come across — I noted that the nature of individual and societal acclimation to technological change is worthy of serious investigation if for no other reason than that it has continuously happened! What I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies disrupted our personal, social, economic, cultural, and legal norms.

In a response to me, Sacasas put forth the following admonition: “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.” This is undoubtedly true, but it does not undermine the reality of societal adaptation. What can we learn from this? What were the mechanics of that adaptive process? As social norms, personal habits, and human relationships were disrupted, what helped us muddle through and find a way of coping with new technologies? Likewise, as existing markets and business models were disrupted, how were new ones formulated in response to the given technological disruption? Finally, how did legal norms and institutions adjust to those same changes?

Of course, this raises an entirely different issue: What metrics are we using to judge whether “the changes were inconsequential or benign”? As I noted in my exchange with Sacasas, at the end of the day, it may be that we won’t be able to even agree on a standard by which to make that judgment and will instead have to settle for a rough truce about what history has to teach us that might be summed up by the phrase: “something gained, something lost.”

Resiliency: Why Do the Skeptics Never Address It (and Its Benefits)?

Nonetheless, I believe that while technological change often brings sweeping and quite consequential change, there is great value in the very act of living through it.

In my work, including my latest little book, I argue that humans have exhibited the uncanny ability to adapt to changes in their environment, bounce back from adversity, and learn to be resilient over time. A great deal of wisdom is born of experience, including experiences that involve risk and the possibility of occasional mistakes and failures while both developing new technologies and learning how to live with them. I believe it wise to continue to be open to new forms of innovation and technological change, not only because it provides breathing space for future entrepreneurialism and invention, but also because it provides an opportunity to see how societal attitudes toward new technologies evolve — and to learn from it. More often than not, I argue, citizens have found ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes.

What we’re talking about here is resiliency. Andrew Zolli and Ann Marie Healy, authors of Resilience: Why Things Bounce Back, define resilience as “the capacity of a system, enterprise, or a person to maintain its core purpose and integrity in the face of dramatically changed circumstances.” (p. 7) “To improve your resilience,” they note, “is to enhance your ability to resist being pushed from your preferred valley, while expanding the range of alternatives that you can embrace if you need to. This is what researchers call preserving adaptive capacity—the ability to adapt to changed circumstances while fulfilling one’s core purpose—and it’s an essential skill in an age of unforeseeable disruption and volatility.” (p. 7-8, emphasis in original) Moreover, they note, “by encouraging adaptation, agility, cooperation, connectivity, and diversity, resilience-thinking can bring us to a different way of being in the world, and to a deeper engagement with it.” (p. 16)

Even if one doesn’t agree with all of that, again, I would think one would find great value in studying the process by which such adaptation happens precisely because it does happen so regularly. And then we could argue about whether it was all really worth it! Specifically, was it worth whatever we lost in the process (i.e., a change in our old moral norms, our old privacy norms, our old institutions, our old business models, our old laws, or whatever else)?

As Sacasas correctly argues, “That people before us experienced similar problems does not mean that they magically cease being problems today.” Again, quite right. On the other hand, the fact that people and institutions learned to cope with those concerns and become more resilient over time is worthy of serious investigation because somehow we “muddled through” before and we’ll have to muddle through again. And, again, what we learned from living through that process may be extremely valuable in its own right.

Of Course, Muddling Through Isn’t Always Easy

Now, let’s be honest about this process of “muddling through”: it isn’t always neat or pretty. To put it crudely, sometimes muddling through really sucks! Think about the modern technologies that violate our visceral sense of privacy and personal space today. I am an intensely private person and if I had a life motto it would probably be: “Leave Me Alone!” Yet, sometimes there’s just no escaping the pervasive reach of modern technologies and processes. On the other hand, I know that, like so many others, I derive amazing benefits from all these new technologies, too. So, like most everyone else I put up with the downsides because, on net, there are generally more upsides.

Almost every digital service that we use today presents us with these trade-offs. For example, email has allowed us to connect with a constantly growing universe of our fellow humans and organizations. Yet, spam clutters our mailboxes and the sheer volume of email we get sometimes overwhelms us. Likewise, in just the past five years, smartphones have transformed our lives in so many ways for the better in terms of not just personal convenience but also personal safety. On the other hand, smartphones have become more than a bit of a nuisance in certain environments (theaters, restaurants, and other closed spaces). And they also put our safety at risk when we use them while driving automobiles.

But, again, we adjust to most of these new realities and then we find constructive solutions to the really hard problems – yes, and that sometimes includes legal remedies to rectify serious harms. But a certain amount of social adaptation will, nonetheless, be required. Law can only slightly slow that inevitability; it can’t stop it entirely. And as messy and uncomfortable as muddling through can be, we have to (a) be aware of what we gain in the process and (b) ask ourselves what the cost of taking the alternative path would be. Attempts to throw a wrench in the works and derail new innovations or delay various types of technological change are always going to be tempting, but such interventions will come at a very steep cost: less entrepreneurialism, diminished competition, stagnant markets, higher prices, and fewer choices for citizens. As I note in my new book, if we spend all our time living in constant fear of worst-case scenarios — and premising public policy upon such fears — it means that many best-case scenarios will never come about.

Social Resistance / Pressure Dynamics

There’s another part to this story that often gets overlooked. “Muddling through” isn’t just some sort of passive process where individuals and institutions have to figure out how to cope with technological change. Rather, there is an active dynamic at work, too. Individuals and institutions push back and actively shape their tools and systems.

In a recent Wired essay on public attitudes about emerging technologies such as the controversial Google Glass, Issie Lapowsky noted that:

If the stigma surrounding Google Glass (or, perhaps more specifically, “Glassholes”) has taught us anything, it’s that no matter how revolutionary technology may be, ultimately its success or failure ride on public perception. Many promising technological developments have died because they were ahead of their times. During a cultural moment when the alleged arrogance of some tech companies is creating a serious image problem, the risk of pushing new tech on a public that isn’t ready could have real bottom-line consequences.

In my new book, I spend some time thinking about this process of “norm-shaping” through social pressure, activist efforts, educational steps, and even public shaming. A recent Ars Technica essay by Joe Silver offered some powerful examples of how when “shamed on Twitter, corporations do an about-face.” Silver notes that “A few recent case-study examples of individuals who felt they were wronged by corporations and then took to the Twitterverse to air their grievances show how a properly placed tweet can be a powerful weapon for consumers to combat corporate malfeasance.” In my book and in recent law review articles, I have provided other examples of how this works at both a corporate and individual level to constrain improper behavior and protect various social norms.

Edmund Burke once noted that, “Manners are of more importance than laws. Manners are what vex or soothe, corrupt or purify, exalt or debase, barbarize or refine us, by a constant, steady, uniform, insensible operation, like that of the air we breathe in.” Cristina Bicchieri, a leading behavioral ethicist, calls social norms “the grammar of society” because,

like a collection of linguistic rules that are implicit in a language and define it, social norms are implicit in the operations of a society and make it what it is. Like a grammar, a system of norms specifies what is acceptable and what is not in a social group. And analogously to a grammar, a system of norms is not the product of human design and planning.

Put simply, much more than law regulates behavior — whether it is organizational behavior or individual behavior. Social norms are yet another way we learn to cope and “muddle through” over time. Again, check out my book for several other examples.

A Case Study: The Long-Standing “Problem” of Photography

Let’s bring all this together and be more concrete about it by using a case study: photography. With all the talk of how unsettling various modern technological developments are, they really pale in comparison to just how jarring the advent of widespread public photography must have been in the late 1800s and beyond. “For the first time photographs of people could be taken without their permission—perhaps even without their knowledge,” notes Lawrence M. Friedman in his 2007 book, Guarding Life’s Dark Secrets: Legal and Social Controls over Reputation, Propriety, and Privacy.

Thus, the camera was viewed as a highly disruptive force as photography became more widespread. In fact, the most important essay ever written on privacy law, Samuel D. Warren and Louis D. Brandeis’s famous 1890 Harvard Law Review essay on “The Right to Privacy,” decried the spread of public photography. The authors lamented that “instantaneous photographs and newspaper enterprise have invaded the sacred precincts of private and domestic life” and claimed that “numerous mechanical devices threaten to make good the prediction that ‘what is whispered in the closet shall be proclaimed from the house-tops.’”

Warren and Brandeis weren’t alone. Plenty of other critics existed and many average citizens were probably outraged by the rise of cameras and public photography. Yet, personal norms and cultural attitudes toward cameras and public photography evolved quite rapidly and they became ingrained in human experience. At the same time, social norms and etiquette evolved to address those who would use cameras in inappropriate, privacy-invasive ways.

Again, we muddled through. And we’ve had to continuously muddle through in this regard because photography presents us with a seemingly endless set of new challenges. As cameras grow still smaller and get integrated into other technologies (most recently, smartphones, wearable technologies, and private drones), we’ve had to learn to adjust and accommodate. With wearable technologies (check out Narrative, Butterflye, and Autographer, for example), personal drones (see “Drones are the future of selfies”), and other forms of microphotography all coming online now, we’ll have to adjust still more and develop new norms and coping mechanisms. There’s never going to be an end to this adjustment process.

Toward Pragmatic Optimism

Should we really remain bullish about humanity’s prospects in the midst of all this turbulent change? I think so.

Again, long before the information revolution took hold, the industrial revolution produced its share of cultural and economic backlashes, and it is still doing so today. Most notably, many Malthusian skeptics and environmental critics lamented the supposed strain of population growth and industrialization on social and economic life. Catastrophic predictions followed.

In his 2007 book, Prophecies of Doom and Scenarios of Progress, Paul Dragos Aligica, a colleague of mine at the Mercatus Center, documented many of these industrial era “prophecies of doom” and described how this “doomsday ideology” was powerfully critiqued by a handful of scholars — most notably Herman Kahn and Julian Simon. Aligica explains that Kahn and Simon argued for “the alternative paradigm, the pro-growth intellectual tradition that rejected the prophecies of doom and called for realism and pragmatism in dealing with the challenge of the future.”

Kahn and Simon were pragmatic optimists or what author Matt Ridley calls “rational optimists.” They were bullish about the future and the prospects for humanity, but they were not naive regarding the many economic and social challenges associated with technological change. Like Kahn and Simon, we should embrace the amazing technological changes at work in today’s information age but with a healthy dose of humility and appreciation for the disruptive impact and pace of that change.

But the rational optimists never get as much attention as the critics and catastrophists. “For 200 years pessimists have had all the headlines even though optimists have far more often been right,” observes Ridley. “Arch-pessimists are feted, showered with honors and rarely challenged, let alone confronted with their past mistakes.” At least part of the reason for that, as already noted, goes back to the amazing rhetorical power of good intentions. Techno-pessimists often exhibit a deep passion about their particular cause and are typically given more than just the benefit of the doubt in debates about progress and the future; they are treated as superior to opponents who challenge their perspectives or proposals. When a privacy advocate says they are just looking out for consumers, or an online safety advocate claims they have the best interests of children in mind, or a consumer advocate argues that regulation is needed to protect certain people from some amorphous harm, they are assuming the moral high ground through the assertion of noble-minded intentions. Even if their proposals often fail to bring about the better state of affairs they claim or derail life-enriching innovations, they are more easily forgiven for those mistakes precisely because of their fervent claim of noble-minded intentions.

If intentions are allowed to trump empiricism and a general openness to change, however, the results for a free society and for human progress will be profoundly deleterious. That is why, when confronted with pessimistic, fear-based arguments, the pragmatic optimist must begin by granting that the critics clearly have the best of intentions, but then point out how intentions can only get us so far in the real-world, which is full of complex trade-offs.

The pragmatic optimist must next meticulously and dispassionately outline the many reasons why restricting progress or allowing planning to enter the picture will have many unintended consequences and hidden costs. The trade-offs must be explained in clear terms. Examples of previous interventions that went wrong must be proffered.

The Evidence Speaks for Itself

Luckily, we pragmatic optimists have plenty of evidence working in our favor when making this case. As Pulitzer Prize-winning historian Richard Rhodes noted in his 1999 book, Visions of Technology: A Century of Vital Debate About Machines, Systems, and the Human World:

it’s surprising that [many intellectuals] don’t value technology; by any fair assessment, it has reduced suffering and improved welfare across the past hundred years. Why doesn’t this net balance of benevolence inspire at least grudging enthusiasm for technology among intellectuals? (p. 23)

Great question, and one that we should never stop asking the techno-critics to answer. After all, as Joel Mokyr notes in his wonderful 1990 book, Lever of Riches: Technological Creativity and Economic Progress, “Without [technological creativity], we would all still live nasty and short lives of toil, drudgery, and discomfort.” (p. viii) “Technological progress, in that sense, is worthy of its name,” he says. “It has led to something that we may call an ‘achievement,’ namely the liberation of a substantial portion of humanity from the shackles of subsistence living.” (p. 288) Specifically,

The riches of the post-industrial society have meant longer and healthier lives, liberation from the pains of hunger, from the fears of infant mortality, from the unrelenting deprivation that were the part of all but a very few in preindustrial society. The luxuries and extravagances of the very rich in medieval society pale compared to the diet, comforts, and entertainment available to the average person in Western economies today. (p. 303)

In his new book, Smaller Faster Lighter Denser Cheaper: How Innovation Keeps Proving the Catastrophists Wrong, Robert Bryce hammers this point home when he observes that:

The pessimistic worldview ignores an undeniable truth: more people are living longer, healthier, freer, more peaceful lives than at any time in human history… the plain reality is that things are getting better, a lot better, for tens of millions of people around the world. Dozens of factors can be cited for the improving conditions of humankind. But the simplest explanation is that innovation is allowing us to do more with less.

This is the framework Herman Kahn, Julian Simon, and the other champions of progress used to deconstruct and refute the pessimists of previous eras. In line with that approach, we modern pragmatic optimists must continuously point to the unappreciated but unambiguous benefits of technological innovation and dynamic change. But we should also continue to remind the skeptics of the amazing adaptability of the human species in the face of adversity. As Kahn taught us long ago, when it comes to technological progress and humanity’s ingenious responses to it, “we should expect to go on being surprised” — and in mostly positive ways. Humans have consistently responded to technological change in creative, and sometimes completely unexpected ways. There’s no reason to think we can’t get through modern technological disruptions using similar coping and adaptation strategies. As Mokyr noted in his recent City Journal essay on “The Next Age of Invention”:

Much like medication, technological progress almost always has side effects, but bad side effects are rarely a good reason not to take medication and a very good reason to invest in the search for second-generation drugs. To a large extent, technical innovation is a form of adaptation—not only to externally changing circumstances but also to previous adaptations.

In sum, we need to have a little faith in the ability of humanity to adjust to an uncertain future, no matter what it throws at us. We’ll muddle through and come out better because of what we have learned in the process, just as we have so many times before.

I’ll give venture capitalist Marc Andreessen the last word on this since he’s been on an absolute tear on Twitter lately when discussing many of the issues I’ve raised in this essay. While addressing the particular fear that automation is running amuck and that robots will eat all our jobs, Andreessen eloquently noted:

We have no idea what the fields, industries, businesses, and jobs of the future will be. We just know we will create an enormous number of them. Because if robots and AI replace people for many of the things we do today, the new fields we create will be built on the huge number of people those robots and AI systems made available. To argue that huge numbers of people will be available but we will find nothing for them (us) to do is to dramatically short human creativity. And I am way long human creativity.

Me too, buddy. Me too.



]]>
https://techliberation.com/2014/06/17/muddling-through-how-we-learn-to-cope-with-technological-change/feed/ 8 74622
Adam Thierer on Permissionless Innovation https://techliberation.com/2014/05/13/thierer/ https://techliberation.com/2014/05/13/thierer/#respond Tue, 13 May 2014 10:00:30 +0000 http://techliberation.com/?p=74547

Adam Thierer, senior research fellow with the Technology Policy Program at the Mercatus Center at George Mason University, discusses his latest book Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Thierer discusses which types of policies promote technological discoveries as well as those that stifle the freedom to innovate. He also takes a look at new technologies — such as driverless cars, drones, big data, smartphone apps, and Google Glass — and how the American public will adapt to them.

Download


]]>
https://techliberation.com/2014/05/13/thierer/feed/ 0 74547