What We’re Reading – Technology Liberation Front
https://techliberation.com
Keeping politicians' hands off the Net & everything else related to technology

The Future of Progress Studies
https://techliberation.com/2022/05/01/the-future-of-progress-studies/
Sun, 01 May 2022

If you haven’t yet had the chance to check out the new Progress Forum, I encourage you to do so. It’s a discussion group for progress studies and all things related to it. The Forum is sponsored by The Roots of Progress. Even though the Forum is still in its pre-launch phase, there are already many interesting threads worth checking out. It was my honor to contribute one of the first on the topic, “Where is ‘Progress Studies’ Going?” It’s an effort to sort through some of the questions and challenges facing the Progress Studies movement in terms of focus and philosophical grounding. I thought I would just reproduce the essay here, but I encourage you to jump over to the Progress Forum to engage in discussion about it, or the many other excellent discussions happening there on other issues.

________________

Where is “Progress Studies” Going? by Adam Thierer

What do we mean by “Progress Studies” and how can this field of study be advanced? I’ve been thinking about that question a lot since Patrick Collison and Tyler Cowen published their 2019 manifesto in The Atlantic on why “We Need a New Science of Progress.” At present, there is no overarching “unified field theory” of what Progress Studies entails or what underpins it, and that may be holding up progress on Progress Studies. I recently attended an important conference on the “Moral Foundations of Progress Studies,” co-hosted by The Roots of Progress and the Salem Center at UT Austin, where I discovered that many others were grappling with these same issues.

While a broad range of people are interested in Progress Studies, their moral priors differ, sometimes significantly. For example, the UT Austin conference included scholars from diverse disciplines (philosophy, psychology, economics, political science, history, and others) whose thinking was rooted in different philosophical traditions (utilitarianism, effective altruism, individualism, and various hybrids). Everyone shared the goal of advancing human well-being, but participants had different conceptions of the moral foundations of well-being, and even some disagreement about what well-being meant in concrete terms. There were also differing perspectives about what the “studies” part of Progress Studies should entail. Specifically, does it extend to progress advocacy, including the potential for specific policy recommendations?

Comprehension vs. Advocacy

Part of the confusion over the nature and goals of Progress Studies can be traced back to Collison and Cowen’s foundational essay. On one hand, their goal was progress comprehension. “Progress itself is understudied,” Collison and Cowen argued. They lamented that “there is no broad-based intellectual movement focused on understanding the dynamics of progress.”

But Collison and Cowen went further. Their goal was not merely to inspire the development of a field of study that could give us a better understanding of the prerequisites of progress, but also to formulate a plan for advancing progress. They argued that “mere comprehension is not the goal,” and advocated for “the deeper goal of speeding it up.” They went on to say, “the implicit question is how scientists [and others] should be acting” and that Progress Studies should be viewed as “closer to medicine than biology: The goal is to treat, not merely to understand.” The presupposition here is that progress is important and that we need to take steps to get a lot more of it. Again, we can think of this part of Progress Studies as progress advocacy. And advocacy can entail both advocating for progress generally as well as specific types of policy advocacy.

This raises an interesting question we debated at the UT Austin conference: Can you study something and advocate for it at the same time? Some felt you really cannot separate them, while others believed that the broader questions about how progress has worked could be kept separate from any advocacy efforts. Of course, this same tension between comprehension and advocacy comes up in many other fields.

What Progress Studies Can Learn from STS

In this sense, Progress Studies might learn some important lessons by examining the older but loosely related field of Science and Technology Studies (STS). STS incorporates a wide variety of mostly “soft science” academic disciplines, such as law, philosophy, sociology, and anthropology. These scholars analyze the relationship between technology, society, culture, and politics.

One conclusion from studying STS is obvious: comprehension and advocacy frequently get blurred. Many of the STS scholars who engage in critical studies of the history of technology seamlessly transition into anti-technology advocates, even as many of them claim they are “just studying” the issues. As I’ve noted elsewhere:

When thinking about technology, STS scholars commonly employ words like “anxiety,” “alienation,” “degradation,” and “discrimination.” Consequently, most of them suggest that the burden of proof lies squarely on scientists, engineers, and innovators to prove that their ideas and inventions will bring worth to society before they are deployed. In other words, STS scholars generally fall in the precautionary principle camp, and their policy prescriptions have grown increasingly radical over time.

Meanwhile, as I discussed in my latest book, many STS scholars describe themselves as “humanists” while implicitly suggesting that those who promote technological progress are somehow callous oafs who only care about the cold calculus of profit-seeking and creating shiny new gadgets we don’t need.

While some STS scholars continue to do important and largely objective work, many others routinely show their more radical leanings in books, essays, and social media posts. Most worrying is their newfound love of Luddism, as they spin revisionist histories of “Why Luddites Matter,” insisting that “There’s Nothing Wrong with Being a Luddite,” and that “I’m a Luddite. You Should Be One Too.” Neil Richards, a law professor and leading STS scholar, declares bluntly on Twitter: “Less metaverse, less crypto, less disruptive innovation. More regulation, more ethics, more humanity.” In other words, public policy defaults should be set squarely to the Precautionary Principle, and anyone opposed to that is unethical and anti-human. Taken to the extreme, STS scholars marry this Luddite revisionism with the retrograde philosophy of “degrowth” and produce book chapters with titles like “Methodological Luddism: A Concept for Tying Degrowth to the Assessment and Regulation of Technologies.”

The Progress Studies movement might consider framing its work as a response to the growing extremism of the STS movement. STS scholars have become so remarkably hostile to the very notion that science and technology are central to human advancement that the field might today better be labeled  Anti-Science & Technology Studies. Yet, these are the scholars that dominate many academic departments where students are learning about technological progress. Progress Studies scholars can push back against that radicalism and offer level-headed, empirical responses to it.

Ensuring a Big Tent

To improve its chances of success, the Progress Studies movement should seek to broaden its appeal by avoiding a dogmatic party line on its moral foundations while ensuring that multiple disciplines and viewpoints are incorporated into it.

In terms of philosophical underpinnings, those interested in Progress Studies can take different approaches to the moral foundations of progress and human well-being. Many philosophers get frustrated when others fail to hammer out all the detailed nuances of the metaphysics, epistemology, and ethics of these matters. I understand that urge, but I’ve now spent over 30 years covering technology policy and have been constantly surprised by how many people can come together and agree on a broad set of principles about the importance of progress without sharing a common philosophical framework.

The same is true as it pertains to policy prescriptions. We need to ensure a “big tent” in this way, too. It is already the case that many people engaged in Progress Studies have very different perspectives on issues like intellectual property and industrial policy, for example. I have many friends on different sides of these issues. Importantly, there are not even clear sides on these issues but rather a very broad spectrum of viewpoints. Progress Studies scholars will likely always disagree on the finer points of both types of “IP” policy. Nonetheless, they can remain more unified in stressing the common goal of moving the needle on progress in a positive direction and highlighting the continuing importance of flexible experimentation with policies aimed at enhancing innovation and growth.

To the extent there is any litmus test for the Progress Studies movement, that’s it:  advancing opportunities for innovation and growth is paramount. Regardless of how one grounds their moral philosophy, or goes about constructing a theory of rights, many people can agree that granting humans the freedom to explore, experiment, and be entrepreneurial has important benefits for individuals, families, organizations, and entire nations. Openness to change is what unifies us. Stagnation and “steady state” thinking—and the Precautionary Principle-based policies that flow from such reasoning—are the enemy. 

Thus, the Progress Studies movement can focus on both studying progress and advancing it at the same time, even if some will devote more effort to one priority than the other. And we shouldn’t forget that these two objectives are reinforcing: Comprehension informs advocacy and vice-versa. Progress is a never-ending process of trial and error. It’s all about learning by doing. We try, we fail, we learn, and we try again. This is as true for the individuals attempting to make progress in the real world as it is for scholars studying it and seeking to promote it.

Let us get on with this important work, regardless of what motivates us to do it.

Book Review: “Questioning the Entrepreneurial State”
https://techliberation.com/2022/04/26/book-review-questioning-the-entrepreneurial-state/
Tue, 26 Apr 2022

An important new book launched this week in Europe on issues related to innovation policy and industrial policy. “Questioning the Entrepreneurial State: Status-quo, Pitfalls, and the Need for Credible Innovation Policy” (Springer, 2022) brings together more than 30 scholars who contribute unique chapters to this impressive volume. It was edited by Karl Wennberg of the Stockholm School of Economics and Christian Sandström of the Jönköping (Sweden) International Business School.

As the title of this book suggests, the authors are generally pushing back against the thesis found in Mariana Mazzucato’s book The Entrepreneurial State (2011). That book, like many other books and essays written recently, lays out a romantic view of industrial policy that sees government as the prime mover of markets and innovation. Mazzucato calls for “a bolder vision for the State’s dynamic role in fostering economic growth” and innovation. She wants the state fully entrenched in technological investments and decision-making throughout the economy because she believes that is the best way to expand the innovative potential of a nation.

The essays in Questioning the Entrepreneurial State offer a different perspective, rooted in the realities on the ground in Europe today. Taken together, the chapters tell a fairly consistent story: Despite the existence of many different industrial policy schemes at the continental and country level, Europe isn’t in very good shape on the tech and innovation front. The heavy-handed policies and volumes of regulations imposed by the European Union and its member states have played a role in that outcome. But these governments have simultaneously been pushing to promote innovation using a variety of technocratic policy levers and industrial policy schemes. Despite all those well-intentioned efforts, the EU has struggled to keep up with the US and China in most important modern tech sectors.

As Wennberg and Sandström note in their introductory chapter:

Grand schemes toward noble outcomes have a disappointing track record in human political and economic history. Conventional wisdom regarding authorities’ inability to selectively pinpoint certain technologies, sectors, or firms as winners, and the fact that large support structures for specific technologies are bound to distort incentives and result in opportunism, seem to have been forgotten.

In summarizing the chapters, they conclude that, “while the idea of aiming high and leveraging large portions of society’s resources to address some fundamental human challenges may sound appealing to many, such ideas have limited scientific credibility.”

Why do governments frequently fail in attempts to be entrepreneurial? Johan P. Larsson gets at the heart of the matter in his chapter when noting how, “[t]he state entrepreneur is not subject to real risk, often faces no market, and cannot be properly evaluated. It pays no price for being wrong and it struggles in assigning responsibility.” Which leads to two questions that are rarely asked, he notes: “[F]irst, how do we ensure that the state pays a price for being wrong? And second, when is that price high enough for us to know it is time to cut our losses?”

The authors of another chapter (Murtinu, Foss & Klein) concur and note how, “even well-intentioned and strongly motivated public actors lack the ability to manage the process of innovation.” “As stewards of resources owned by the public,” they note, “government bureaucrats do not exercise the ultimate responsibility that comes with ownership.” In other words, the state faces problems of misaligned incentives.

Several authors in the book highlight the various public choice problems often associated with large-scale industrial policy initiatives, including rent-seeking and capture. Wennberg and Sandström note how this results in less disruption, as established players don’t seek to challenge the existing market or technological status quo but instead simply seek to benefit from it. “[S]upport structures, platforms for private-public cooperation, and large volumes of technology-specific money usually end up in the hands of established interest groups,” they note. “Hence, they are not very likely to question these policies but will rather go along with the ride.”

John-Erik Bergkvist and Jerker Moodysson devote an entire chapter to this problem and offer a grim assessment of how past industrial policy schemes have exacerbated it:

Assuming that policies and programs are shaped by the interest groups that are affected by the policies, we highlight the risk that policymaking may end up as support for established interest groups rather than supporting the emergence of those who could act as institutional entrepreneurs or disruptors. Policies and programs may thus be captivated by dominant actors in the established regime, who have superior financial and relational resources. The result would then be that innovation policies sustain the established socio-technical structures of industries rather than contributing to the emergence of new structures.

Other organizations are incentivized to support the status quo when big money is on the line. One of the most interesting chapters in the book was co-authored by Wennberg and Sandström along with Elias Collin. They examine the conflicts of interest inherent in many evaluations of industrial policy programs by various third parties, including academics and consultants who receive generous state contracts:

the overwhelming majority of evaluations are positive or neutral and that very few evaluations are negative. While this is the case across all categories of evaluators, we note that consulting firms stand out as particularly inclined to provide positive evaluations. The absence of negative or critical reports can be related to the fact that most of the studies do not rely upon methods that make it possible to discuss effects. This discrepancy between so many positive evaluations on the one hand and comparatively weak evaluation methods on the other hand leads us to suspect that evaluators are not sufficiently independent. Consultants and scholars that are funded by a government agency in order to evaluate the agency’s policies and programs are put in a position where it is difficult to maintain objectivity.

This is one reason why industrial policy continues to have such currency in European policy discussions despite a long track record of failure, as documented throughout this new book. The biggest problem for Europe lies in its layers of regulatory bureaucracy and heavy-handed treatment of entrepreneurs.

Later in the book, Zoltan J. Acs offers a grim account of just how bad things have been for Europe on the digital technology front in recent decades, despite the many state-led efforts to promote the sector. “The European Union protected traditional industries and hoped that existing firms would introduce new technologies. This was a policy designed to fail,” Acs argues. “What has been the outcome of E.U. policy in limiting entrepreneurial activity over recent decades?” he asks. Acs concludes that:

It is immediately clear… that the United States and China dominate the platform landscape. Based on the market value of top companies, the United States alone represents 66% of the world’s platform economy with 41 of the top 100 companies. European platform-based companies play a marginal role, with only 3% of market value.

He says that the United Kingdom’s “Brexit” from the European Union was a logical move, “because E.U. regulations were holding back the U.K.’s strong DPE (digital platform economy).” “If the United Kingdom was to realize its economic potential, it had to extricate itself from the European Union,” Acs says, due to the “dysfunctional E.U. bureaucracy.” No amount of industrial policy support is going to allow European firms to overcome those burdens. In fact, many of Europe’s industrial policy programs create the very disincentives that retard innovation and discourage entrepreneurialism in key sectors.

Several of the authors in the collection stress how the better role for the state is usually to set the table for innovation and growth without trying to determine everything that is served on the plate. As Wennberg and Sandström summarize:

the best policies to promote innovation are those that promote productive economic activity more generally: property rights protection, open and contestable markets, a stable monetary system, and legal rules that favor competition and entrepreneurship. Policy should promote an institutional environment in which innovation and entrepreneurship can flourish without trying to anticipate the specific outcomes of those processes—an impossible task in the face of uncertainty, technological change, and a dynamic, knowledge-based economy.

That’s good advice, as is everything found throughout the book. I encourage all those interested in these issues to take a hard look at it because it is particularly relevant even here in the United States, as Congress is currently considering a massive new 3,000-page, $350 billion industrial policy bill that I’ve labelled “The Most Corporatist & Wasteful Industrial Policy Ever.” There doesn’t seem to be anything stopping the momentum of this effort, with both liberals and conservatives lining up to pass out the pork. I wish I could put a copy of Questioning the Entrepreneurial State in all their hands and ask them to read every word of it before they gamble hundreds of billions on such foolish efforts.



Samuel Florman & the Continuing Battle over Technological Progress
https://techliberation.com/2022/04/06/samuel-florman-the-continuing-battle-over-technological-progress/
Wed, 06 Apr 2022

Almost every argument against technological innovation and progress that we hear today was identified and debunked by Samuel C. Florman a half century ago. Few others since him have mounted a more powerful case for the importance of innovation to human flourishing than Florman did throughout his lifetime.

Chances are you’ve never heard of him, however. As prolific as he was, Florman did not command as much attention as the endless parade of tech critics whose apocalyptic predictions grabbed all the headlines. An engineer by training, Florman became concerned about the growing criticism of his profession throughout the 1960s and 70s. He pushed back against that impulse in a series of books over the next two decades, including most notably: The Existential Pleasures of Engineering (1976), Blaming Technology: The Irrational Search for Scapegoats (1981), and The Civilized Engineer (1987). He was also a prolific essayist, penning hundreds of articles for a wide variety of journals, magazines, and newspapers beginning in 1959. He was also a regular columnist for MIT Technology Review for sixteen years.

Florman’s primary mission in his books and many of those essays was to defend the engineering profession against attacks emanating from various corners. More broadly, as he noted in a short autobiography on his personal website, Florman was interested in discussing, “the relationship of technology to the general culture.”

Florman could be considered a “rational optimist,” to borrow Matt Ridley’s notable term[1] for those of us who believe, as I have summarized elsewhere, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment.[2] Rational optimists are highly pragmatic and base their optimism on facts and historical analysis, not on dogmatism or blind faith in any particular viewpoint, ideology, or gut feeling. But they are unified in the belief that technological change is a crucial component of moving the needle on progress and prosperity.

Florman’s unique contribution to advancing rational optimism came in the way he itemized the various claims made by tech critics and then powerfully debunked each one of them. He was providing other rational optimists with a blueprint for how to defend technological innovation against its many critics and criticisms. As he argued in The Civilized Engineer, we need to “broaden our conception of engineering to include all technological creativity.”[3] And then we need to defend it with vigor.

In 1982, the American Society of Mechanical Engineers appropriately awarded Florman the distinguished Ralph Coats Roe Medal for his “outstanding contribution toward a better public understanding and appreciation of the engineer’s worth to contemporary society.” Carl Sagan had won the award the previous year. Alas, Florman never attained the same degree of fame as Sagan. That is a shame because Florman was as much a philosopher and a historian as he was an engineer, and his robust thinking on technology and society deserves far greater attention. More generally, his plain-spoken style and straightforward defense of technological progress continues to be a model for how to counter today’s techno-pessimists.

This essay highlights some of the most important themes and arguments found in Florman’s writing and explains its continuing relevance to the ongoing battles over technology and progress.

What Motivates The “Antitechnologists”?

Florman was interested in answering questions about what motivates both engineers as well as their critics. He dug deep into psychology and history to figure out what makes these people tick. Who are engineers, and why do they do what they do? That was his primary question, and we will turn to his answers momentarily. But he also wanted to know what drove the technology critics to oppose innovation so vociferously.

Florman’s most important contribution to the history of ideas lies in his 6-part explanation of “the main themes that run through the works of the antitechnologists.”[4] Florman used the term “antitechnologists” to describe the many different critics of engineering and innovation. He recognized that the term wasn’t perfect and that some people he labelled as such would object to it. Nevertheless, because they offer no umbrella label for their movement or way of thinking, Florman noted that opposition to, or general discomfort with, technology was what motivated these critics. Hence, the label “antitechnologists.”

Florman surveyed a wide swath of technological critics from many different disciplines—philosophy, sociology, law, and other fields. He condensed their main criticisms into six general points:

  • Technology is a “thing” or a force that has escaped from human control and is spoiling our lives.
  • Technology forces man to do work that is tedious and degrading.
  • Technology forces man to consume things that he does not really desire.
  • Technology creates an elite class of technocrats, and so disenfranchises the masses.
  • Technology cripples man by cutting him off from the natural world in which he evolved.
  • Technology provides man with technical diversions which destroy his existential sense of his own being.[5]

No one else before this had ever crafted such a taxonomy of complaints from tech critics, and no one has done it better since Florman did so in 1976. In fact, it is astonishing how well Florman’s list continues to identify what motivates modern technology critics. New technologies have come and gone, but these same concerns tend to be brought up again and again. Florman’s books addressed and debunked each of these concerns in powerful fashion.

The Relentless Pessimism & Elitism of the Antitechnologists

Florman identified the way a persistent pessimism unifies antitechnologists. “Our intellectual journals are full of gloomy tracts that depict a society debased by technology,” he noted.[6] What motivated such gloom and doom? “It is fear. They are terrified by the scene unfolding before their eyes.”[7] He elaborated:

“The antitechnologists are frightened; they counsel halt and retreat. They tell the people that Satan (technology) is leading them astray, but the people have heard that story before. They will not stand still for vague promises of a psychic contentment that is to follow in the wake of voluntary temperance.”[8]

The antitechnologists’ worldview isn’t just relentlessly pessimistic but also highly elitist and paternalistic, Florman argued. He referred to it as “Platonic snobbery.”[9] The economist and political scientist Thomas Sowell would later call that snobbish attitude “the vision of the anointed.”[10] Like Sowell, Florman was angered at the way critics stared down their noses at average folk and disregarded their values and choices:

“The antitechnologists have every right to be gloomy, and have a bounden duty to express their doubts about the direction our lives are taking. But their persistent disregard of the average person’s sentiments is a crucial weakness in their argument—particularly when they then ask us to consider the ‘real’ satisfactions that they claim ordinary people experienced in other cultures of other times.”[11]

Florman noted that critics commonly complain about “too many people wanting too many things,” but he observed that “[t]his is not caused by technology; it is a consequence of the type of creature that man is.”[12] Critics can moralize all they want about supposed over-consumption or “conspicuous consumption,” but in the end, most of us strive to better our lives in various ways—including by working to attain things that may be out of our reach or even superfluous in the eyes of others.

For many antitechnologists and other social critics, only the noble search for truth and wisdom will suffice. Basically, everybody should just get back to studying philosophy, sociology, and other soft sciences. Modern tech critics, Florman said, fashion themselves as the intellectual descendants of Greek philosophers who believed that, “[t]he ideal of the new Athenian citizen was to care for his body in the gymnasium, reason his way to Truth in the academy, gossip in the agora, and debate in the senate. Technology was not deemed worthy of a free man’s time.”[13]

“It is not surprising to find philosophers recommending the study of philosophy as a way of life,” Florman noted amusingly.[14] But that does not mean all of us want (or even need) to devote our lives to such things. Nonetheless, critics often sneer at the choices made by the rest of us—especially when they involve the fruits of science and technology. “The most effective weapon in the arsenal of the antitechnologists is self-righteousness,” he noted,[15] and, “[a]s seen by the antitechnologists, engineers and scientists are half-men whose analysis and manipulation of the world deprives them of the emotional experiences that are the essence of the good life.”[16]

Indeed, it is not uncommon (both in the past and today) to see tech critics self-anoint themselves “humanists” and then suggest that anyone who thinks differently from them (namely, those who are pro-innovation) is somehow anti-humanistic. I wrote about this in my 2018 essay, “Is It ‘Techno-Chauvinist’ & ‘Anti-Humanist’ to Believe in the Transformative Potential of Technology?” I argued that, “[p]roperly understood, ‘technology’ and technological innovation are simply extensions of our humanity and represent efforts to continuously improve the human condition. In that sense, humanism and technology are complements, not opposites.”

But the critics remain fundamentally hostile to that notion and they often suggest that there is something suspicious about those who believe, along with Florman, that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment. We rational optimists, the critics suggest, are simply too focused on crass, materialistic measures of happiness and human flourishing.

Florman observed this when noting how much grief he and fellow engineers and scientists got when engaging with critics. “Anyone who has attempted to defend technology against the reproaches of an avowed humanist soon discovers that beneath all the layers of reasoning—political, environmental, aesthetic, or moral—lies a deep-seated disdain for ‘the scientific view.’”[17]

Everywhere you look in the world of Science & Technology Studies (STS) today, you find this attitude at work. In fact, the field is perhaps better labelled Anti-Science & Technology Studies, or at least Science & Technology Skeptical Studies. For most STSers, the burden of proof lies squarely on scientists, engineers, and innovators who must prove to some (often undefined) higher authorities that their ideas and inventions will bring worth to society (however the critics measure worth and value, which is often very unclear). Until then, just go slow, the critics say. Better yet, consult your local philosophy department for a proper course of action!

The critics will retort that they are just looking out for society's best interests and trying to counter that selfish, materialist side of humanity. Florman countered by noting how "most people are in search of the good life—not 'the goods life' as [Lewis] Mumford puts it, although some goods are entailed—and most human desires are for good things in moderate amounts."[18] Trying to better our lives through the creation and acquisition of new and better goods and services is a natural and quite healthy human instinct, one that helps us attain whatever each of us considers "the good life." "Something other than technology is responsible for people wanting to live in a house on a grassy plot beyond walking distance to job, market, neighbor, and school," Florman responded.[19] We all want to "get ahead" and improve our lot in life. That urge is not forced upon us by technology; it comes quite naturally.

The Power of Nostalgia

I have spent a fair amount of time in my own writing documenting the central role that nostalgia plays in motivating technological criticism.[20] Florman’s books repeatedly highlighted this reality. “The antitechnologists romanticize the work of earlier times in an attempt to make it seem more appealing than work in a technological age,” he noted. “But their idyllic descriptions of peasant life do not ring true.”[21]

The funny thing is, it is hard to pin down the critics regarding exactly when the "golden era" or "good ol' days" were. But if there is one thing that they all agree on, it's that those days have long passed us by. In a 2019 essay on "Four Flavors of Doom: A Taxonomy of Contemporary Pessimism," philosopher Maarten Boudry noted:

“In the good old days, everything was better. Where once the world was whole and beautiful, now everything has gone to ruin. Different nostalgic thinkers locate their favorite Golden Age in different historical periods. Some yearn for a past that they were lucky enough to experience in their youth, while others locate utopia at a point farther back in time…”

Not all nostalgia is bad. Clay Routledge has written eloquently about how "nostalgia serves important psychological functions," and can sometimes possess a positive character that strengthens individuals and society. But the nostalgia found in the works of tech critics is usually a different thing altogether. It is rooted in misery about the present and dread of the future—all because technology has apparently stolen away or destroyed all that was supposedly great about the past. Florman noted how "the current pessimism about technology is a renewed manifestation of pastoralism," one typically rooted in historical revisionism about bygone eras.[22] Many critics engage in what rhetoricians call "appeals to nature" and wax poetic about the joys of life for Pre-Technological Man, who apparently enjoyed an idyllic life free of the annoying intrusions created by modern contrivances.

Such "good ol' days" romanticism is largely untethered from reality. "For most of recorded history humanity lived on the brink of starvation," Wall Street Journal columnist Greg Ip noted in a column in early 2019. Even a cursory review of history offers voluminous, unambiguous proof that the old days were, in reality, eras of abject misery. Widespread poverty, mass hunger, poor hygiene, disease, short lifespans, and so on were the norm. What lifted humanity up and improved our lot as a species is that we learned how to apply knowledge to tasks in a better way through incessant trial-and-error experimentation. Recent books by Hans Rosling,[23] Steven Pinker,[24] and many others[25] have thoroughly documented these improvements to human well-being over time.

The critics are unmoved by such evidence, preferring to just jump around in time and cherry-pick moments when they feel life was better than it is now. “Fond as they are of tribal and peasant life, the antitechnologists become positively euphoric over the Middle Ages,” Florman quipped.[26] Why? Mostly because the Middle Ages lacked the technological advances of modern times, which the critics loathe. But facts are pesky things, and as Florman insisted, “it is fair to go on to ask whether or not life was ‘better’ in these earlier cultures than it is in our own.”[27] “We all are moved to reverie by talk of an arcadian golden age,” he noted. “But when we awaken from this reverie, we realize that the antitechnologists have diverted us with half-truths and distortions.”[28]

The critics' reverence for the old days would be humorous were it not rooted in an arrogant and dangerous belief that society can somehow be reshaped to resemble whatever preferred past the critics desire. "Recognizing that we cannot return to earlier times, the antitechnologists nevertheless would have us attempt to recapture the satisfactions of these vanished cultures," Florman noted. "In order to do this, what is required is nothing less than a change in the nature of man."[29] That is, the critics will insist that "something must be done" (namely, something imposed from above via some grand design) to remake humans and discourage their inner homo faber desire to be an incessant tool-builder. But this is madness, Florman argued in one of the best passages from his work:

“we are beginning to realize that for mankind there will never be a time to rest at the top of the mountain. There will be no new arcadian age. There will always be new burdens, new problems, new failures, new beginnings. And the glory of man is to respond to his harsh fate with zest and ever-renewed effort.”[30]

If the critics had their way, however, that zest would be dampened and those efforts restrained in the name of recapturing some mythical lost age. This sort of “rosy retrospection bias” is all the more shocking coming, as it does, from learned people who should know a lot more about the actual history of our species and the long struggle to escape utter despair and destitution. Alas, as the great Scottish philosopher David Hume observed in a 1777 essay, “The humour of blaming the present, and admiring the past, is strongly rooted in human nature, and has an influence even on persons endued with the profoundest judgment and most extensive learning.”[31]

Why Invent? Homo Faber is our Nature

While taking on the critics and debunking their misplaced nostalgia about the past, Florman mounted a defense of engineers and innovators by noting that the need to tinker and create is in our blood. He began by noting how “the nature of engineering has been misconceived”[32] because, in a sense, we are all engineers and innovators to some degree.

Florman’s thinking was very much in line with Benjamin Franklin, who once noted, “man is a tool-making animal.” “Both genetically and culturally the engineering instinct has been nurtured within us,” Florman argued, and this instinct “was as old as the human race.”[33] “To be human is to be technological. When we are being technological we are being human—we are expressing the age-old desire of the tribe to survive and prosper.”[34] In fact, he claimed, it was no exaggeration to say that humans, “are driven to technological creativity because of instincts hardly less basic than hunger and sex.”[35] Had our past situation been as rosy as the critics sometimes suggest, perhaps we would have never bothered to fashion tools to escape those eras! It was precisely because humans wanted to improve their lives and the lives of their loved ones that we started crafting more and better tools. Flint and firewood were never going to suffice.

But our engineering instincts do not end with basic needs. "Engineering responds to impulses that go beyond mere survival: a craving for variety and new possibilities, a feeling for proportion—for beauty—that we share with the artist," Florman argued.[36] In essence, engineering and innovation respond to both basic human needs and higher ones at every stage of "Maslow's pyramid," which describes a five-level hierarchy of human needs. This same theme is developed in Arthur Diamond's recent book, Openness to Creative Destruction: Sustaining Innovative Dynamism. As Diamond argues, one of the most unheralded features of technological innovation is that, "by providing goods that are especially useful in pursuing a life plan full of challenging, worthwhile creative projects," it allows each of us to pursue different conceptions of what we consider a good life.[37] But we are only able to do so by first satisfying our basic physiological needs, which innovation also handles for us.

Florman was frustrated that critics failed to understand this point and equally concerned that engineers and innovators had been cast as uncaring gadget-worshipers who did not see beauty and truth in higher arts and other more worldly goals and human values. That’s hogwash, he argued:

“What an ironic turn of events! For if ever there was a group dedicated to—obsessed with—morality, conscience, and social responsibility, it has been the engineering profession. Practically every description of the practice of engineering has stressed the concept of service to humanity.[38] [. . .] Even in an age of global affluence, the main existential pleasure of the engineer will always be to contribute to the well-being of his fellow man.”[39]

Engineers and innovators do not always set out with some grandiose design to change the world, although some aspire to do so. Rather, the "existential pleasures of engineering" that Florman described in the title of his most notable book come about by solving practical day-to-day problems:

“The engineer does not find existential pleasure by seeking it frontally. It comes to him gratuitously, seeping into him unawares. He does not arise in the morning and say, ‘Today I shall find happiness.’ Quite the contrary. He arises and says, ‘Today I will do the work that needs to be done, the work for which I have been trained, the work which I want to do because in doing it I feel challenged and alive.’ Then happiness arrives mysteriously as a byproduct of his effort.”[40]

And this pleasure of getting practical work done is something that engineers and innovators enjoy collectively by coming together and using specialized skills in new and unique combinations. “[T]echnological progress depends upon a variety of skills and knowledge that are far beyond the capacity of any one individual,” he insisted. “High civilization requires a high degree of specialization, and it was toward high civilization that the human journey appears always to have been directed.”[41] Adam Smith could not have said it any better.

“Muddling Through”: Why Trial-and-Error is the Key to Progress

My favorite insights from Florman’s work relate to the way humans have repeatedly faced up to adversity and found ways to “muddle through.” This was the focus of an old essay of mine— “Muddling Through: How We Learn to Cope with Technological Change”—which argued that humans are a remarkably resilient species and that we regularly find creative ways to deal with major changes through constant trial-and-error experimentation and the learning that results from it.[42]

Florman made this same point far more eloquently long ago:

“We have been attempting to muddle along, acknowledging that we are selfish and foolish, and proceeding by means of trial and error. We call ourselves pragmatists. Mistakes are made, of course. Also, tastes change, so that what seemed desirable to one generation appears disagreeable to the next. But our overriding concern has been to make sure that matters of taste do not become matters of dogma, for that is the way toward violent conflict and tyranny. Trial and error, however, is exactly what the antitechnologists cannot abide.”[43]

It is the error part of trial-and-error that is so vital to societal learning. "Even the most cautious engineer recognizes that risk is inherent in what he or she does," Florman noted. "Over the long haul the improbable becomes the inevitable, and accidents will happen. The unanticipated will occur."[44] But "[s]ometimes the only way to gain knowledge is by experiencing failure," he correctly observed.[45] "To be willing to learn through failure—failure that cannot be hidden—requires tenacity and courage."[46]

I've argued that this represents the central dividing line between innovation supporters and technology critics. The critics are so focused on risk-averse, precautionary principle-based thinking that they simply cannot tolerate the idea that society can learn more through trial-and-error than through preemptive planning. They imagine it is possible to override that process and predetermine the proper course of action to create a safer, more stable society. In this mindset, failure is to be avoided at all costs through prescriptions and prohibitions. Innovation is to be treated as guilty until proven innocent in the hope of eliminating the error (or risk / failure) associated with trial-and-error experiments. To reiterate, this logic misses the fact that the entire point of trial-and-error is to learn from our mistakes and "fail better" next time, until we've solved the problem at hand entirely.[47]

Florman noted that, "sensible people have agreed that there is no free lunch; there are only difficult choices, options, and trade-offs."[48] In other words, precautionary controls come at a cost. "All we can do is do the best we can, plan where we can, agree where we can, and compromise where we must," he said.[49] But, again, the antitechnologists absolutely cannot accept this worldview. They are fundamentally hostile to it because they either believe that a precautionary approach will do a better job improving public welfare, or they believe that trial-and-error fails to safeguard any number of other values or institutions that they regard as sacrosanct. This shuts down the learning process from which wisdom is generated. As the old adage goes, "nothing ventured, nothing gained." There can be no reward without some risk, and there can be no human advances unless we are free to learn from the error portion of trial-and-error.

The Costs of Precautionary Regulation

Florman did not spend much time in his writing mulling over the finer points of public policy, but he did express skepticism about our collective ability to define and enforce “the public interest” in various contexts. A great many regulatory regimes—and their underlying statutes—rest on the notion of “protecting the public interest.” It is impossible to be against that notion, but it is often equally impossible to define what it even means.[50]

This leads to what Florman called "the search for virtues that nobody can define":[51] "As engineers we are agreed that the public interest is very important; but it is folly to think that we can agree on what the public interest is. We cannot even agree on the scientific facts!"[52] This is especially true today in debates over what constitutes "responsible innovation" or "ethical innovation."[53] What Florman noted about such conversations three decades ago is equally true today:

“Whenever engineering ethics is on the agenda, emotions come quickly to a boil. […] It is oh so easy to mouth clichés, for example to pledge to protect the public interest, as the various codes of engineering ethics do. But such a pledge is only a beginning and hardly that. The real questions remain: What is the public interest, and how is it to be served?”[54]

That reality makes it extremely difficult to formulate consensus regarding public policies for emerging technologies. And it makes it particularly difficult to define and enforce a "precautionary principle" for emerging technologies that will somehow strike the Goldilocks balance of getting things just right. This was the focus of my 2016 book Permissionless Innovation, which argued that the precautionary principle should be the last resort when contemplating innovation policy. Experimentation with new technologies and business models should generally be permitted by default because, "living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about," I argued. The precautionary principle should only be tapped when the harms alleged to be associated with a new technology are highly probable, tangible, immediate, irreversible, catastrophic, or directly threatening to life and limb in some fashion.

For his part, Florman did not want to get his defense of engineering mixed up with politics and regulatory considerations. Engineers and technologists, he noted, come in many flavors and support many different causes. Generally speaking, they tend to be quite pragmatic and shun strong ideological leanings and political pronouncements.

Of course, at some point, there is no avoiding this fight; one must comment on how to strike the right balance when politics enters the picture and threatens to stifle technological creativity. Florman's perspectives on regulatory policy were somewhat jumbled, however. On one hand, he expressed concern about excessive and misguided regulations, but he also saw government playing an important role both in supporting various types of engineering projects and in regulating certain technological developments:

“The regulatory impulse, running wild, wreaks havoc, first of all by stifling creative and productive forces that are vital to national survival. But it does harm also—and perhaps more ominously—by fomenting a counter-revolution among outraged industrialists, the intensity of which threatens to sweep away many of the very regulations we most need.”[55]

In his 1987 book, The Civilized Engineer, Florman even expressed surprise and regret about growing pushback against regulation during the Reagan years. He also expressed skepticism about "the deceptive allure" of benefit-cost analysis, which was on the rise at the time, saying that the "attempt to apply mathematical consistency to the regulatory process was deplorably simplistic."[56] I have always been a big believer in the importance of benefit-cost analysis (BCA), so I was surprised to read of Florman's skepticism of it. But he was writing in the early days of BCA, and it was not yet entirely clear how well it would work in practice. Four decades on, BCA has become far more rigorous, academically respected, and well-established throughout government. It has widespread and bipartisan support as a policy evaluation tool.

Florman adamantly opposed any sort of "technocracy"—or administration of government by technically skilled elites. He thought it silly that so many tech critics believed such a thing already existed. "The myth of the technocratic elite is an expression of fear, like a fairy tale about ogres," he argued. "It springs from an understandable apprehension, but since it has no basis in reality, it has no place in serious discourse."[57] Nor did he believe that there was any real chance a technocracy would ever take hold. "No matter how complex technology becomes, and no matter how important it turns out to be in human affairs, we are not likely to see authority vested in a class of technocrats."[58]

Florman hoped for wiser administration of law and regulations that affected engineering endeavors and innovation more generally. Like so many others, he did not necessarily want more law, just better law. One cannot fault that instinct, but Florman was not really interested in fleshing out the finer details of policy about how to accomplish that objective. He preferred instead to use history as a rough guide for policy. From the fall of the Roman Empire to the decline of Britain’s economic might in more recent times, Florman observed the ways in which societal and governmental attitudes toward innovation influenced the relative growth of science, technology, and national economies. In essence, he was explaining how “innovation culture” and “innovation arbitrage” had been realities for far longer than most people realize.[59]

“Where the entrepreneurial spirit cannot be rewarded, and where non-productive workers cannot be discharged, stagnation will set in,” Florman concluded.[60] This is very much in line with the thinking of economic historians like Joel Mokyr[61] and Deirdre McCloskey,[62] who have identified how attitudes toward creativity and entrepreneurialism affect the aggregate innovative capacity of nations, and thus their competitive advantage and relative prosperity in the world.

Debunking Determinism, Anxiety & Alienation Concerns

One of the ironies of modern technological criticism is the way many critics can’t seem to get their story straight when it comes to “technological determinism” versus social determinism. In the extreme view, technological determinism is the idea that technology drives history and almost has a will of its own. It is like an autonomous force that is practically unstoppable. By contrast, social determinism means that society (individuals, institutions, etc.) guide and control the development of technology.

In the field of Science and Technology Studies, technological determinism is a hotly contested topic. Academic and social critics are fond of painting innovation advocates as rigid tech determinists who are little better than uncaring, anti-humanistic gadget-worshipers. The critics have employed a variety of other creative labels to describe tech determinism, including "techno-fundamentalism," "technological solutionism," and even "techno-chauvinism."

Engineers and other innovators often get hit with such labels and accused of being rigid technological determinists who just want to see tech plow over people and politics. But this was, and remains, a ridiculous argument. Sure, there will always be some wild-eyed futurists and extropian extremists who make preposterous claims about how "there is no stopping technology." "Even now the salvation-through-technology doctrine has some adherents whose absurdities have helped to inspire the antitechnological movement," Florman said.[63] But that hardly represents the majority of innovation supporters, who well understand that society and politics play a crucial role in shaping the future course of technological development.

As Florman noted, we can dismiss extreme deterministic perspectives for a rather simple reason: technologies fail all the time! “If promising technologies can suffer fatal blows from unexpected circumstances,” Florman correctly argued, then “[t]his means that we are still—however precariously—in control of our own destiny.”[64] He believed that, “technology is not an independent force, much less a thing, but merely one of the types of activities in which people engage.”[65] The rigid view of tech determinism can be dismissed, he said, because “it can be shown that technology is still very much under society’s control, that it is in fact an expression of our very human desires, fancies, and fears.”[66]

But what is amazing about this debate is that some of the most rigid technological determinists are the technology critics themselves! Recall how Florman began his 6-part taxonomy of common complaints from tech critics. “A primary characteristic of the antitechnologists,” Florman argued, “is the way in which they refer to ‘technology’ as a thing, or at least a force, as if it had an existence of its own” and which “has escaped from human control and is spoiling our lives.”[67]

He noted that many of the leading tech critics of the post-war era often spoke in remarkably deterministic ways. “The idea that a man of the masses has no thoughts of his own, but is something on the order of a programmed machine, owes part of its popularity with the antitechnologists to the influential writings of Herbert Marcuse,” he believed.[68] But then such thinking accelerated and gained greater favor with the popularity of critics like French philosopher Jacques Ellul, American historian Lewis Mumford, and American cultural critic Neil Postman.

Their books painted a dismal portrait of a future in which humans were subjugated to the evils of "technique" (Ellul), "technics" (Mumford), or "technopoly" (Postman). The narrative of their works reads like dystopian science fiction. Essentially, there was no escaping the iron grip that technology had on us. Postman claimed, for example, that technology was destined to destroy "the vital sources of our humanity" and lead to "a culture without a moral foundation" by undermining "certain mental processes and social relations that make human life worth living."

Which gets us to commonly heard concerns about how technology leads to "anxiety" and "alienation." "Having established the view of technology as an evil force, the antitechnologists then proceed to depict the average citizen as a helpless slave, driven by this force to perform work he detests," Florman noted.[69] "Anxiety and alienation are the watchwords of the day, as if material comforts made life worse, rather than better."[70]

These concerns about anxiety, alienation, and “dehumanization” are omnipresent in the work of modern tech critics, and they are also tied up with traditional worries about “conspicuous consumption.” It’s all part of the “false consciousness” narrative they also peddle, which basically views humans as too ignorant to look out for their own good. In this worldview, people are sheep being led to the slaughter by conniving capitalists and tech innovators, who are just trying to sell them things they don’t really need.

Florman pointed out how preposterous this line of thinking is when he noted how critics seem to always forget that, “a basic human impulse precedes and underlies each technological development”:[71]

“Very often this impulse, or desire, is directly responsible for the new invention. But even when this is not the case, even when the invention is not a response to any particular consumer demand, the impulse is alive and at the ready, sniffing about like a mouse in a maze, seeking its fulfillment. We may regret having some of these impulses. We certainly regret giving expression to some of them. But this hardly gives us the right to blame our misfortunes on a devil external to ourselves.”[72]

Consider the automobile, for example. Industrial era critics often focused on it and lambasted the way they thought industrialists pushed auto culture and technologies on the masses. Did we really need all those cars? All those colors? All those options? Did we really even need cars? The critics wanted us to believe that all these things were just imposed upon us. We were being force-fed options we really didn’t even need or want. “Choice” in this worldview is just a fiction; a front for the nefarious ends of our corporate overlords.

Florman demolished this reasoning throughout his books. “However much we deplore the growth of our automobile culture, clearly it has been created by people making choices, not by a runaway technology,” he argued.[73] Consumer demand and choice is not some fiction fabricated and forced upon us, as the antitechnologists suggest. We make decisions. “Those who would blame all of life’s problems on an amorphous technology, inevitably reject the concept of individual responsibility,” Florman retorted. “This is not humanism. It is a perversion of the humanistic impulse.”[74]

A modern tweak on the conspicuous consumption and false consciousness arguments is found in the work of leading tech critics like Evgeny Morozov, who pens attention-grabbing screeds decrying what he regards as "the folly of technological solutionism." Morozov bluntly states that "our enemy is the romantic and revolutionary problem solver who resides within" each of us, but most specifically within the engineers and technologists.[75]

But would the world really be a better place if tinkerers didn't try to scratch that itch?[76] In 2021, the Wall Street Journal profiled JoeBen Bevirt, an engineer and serial entrepreneur who has been working to bring flying cars from sci-fi to reality. Channeling Florman's defense of the existential pleasures associated with engineering, Bevirt spoke passionately about the way innovators can help "move our species forward" through their constant tinkering to find solutions to hard problems. "That's kind of the ethos of who we are," he said. "We see problems, we're engineers, we work to try to fix them."[77]

When tech critics like Morozov decry "solutionism," they are essentially saying that innovators like Bevirt need to just shut up and sit down. Don't try to improve the world through tinkering; just settle for the status quo, the critics basically state. That's the kiss of death for human progress, however, because it is only through incessant experimentation with new and different approaches to hard problems that we can advance human well-being. "Solutionism" isn't about just creating some shiny new toy; it's about expanding the universe of potentially life-enriching and life-saving technologies available to humanity.

Conclusion

This review of Samuel Florman's work may seem comprehensive, but it only scratches the surface of his wide-ranging writing. Florman was troubled that engineering lacked support, or at least understanding. Perhaps, he reasoned, that was because "[t]here is no single truth that embodies the practice of engineering, no patron saint, no motto or simple credo. There is no unique methodology that has been distilled from millennia of technological effort." Or, more simply, it may be that the profession lacked articulate defenders. "The engineer may merely be waiting for his Shakespeare," he suggested.[78]

Through his life's work, however, Samuel Florman became that Shakespeare: the great bard of engineering and passionate defender of technological innovation and rational optimism more generally. In looking for a quote or two to close out my latest book, I ended with this one from Florman:

“By turning our backs on technological change, we would be expressing our satisfaction with current world levels of hunger, disease, and privation. Further, we must press ahead in the name of the human adventure. Without experimentation and change our existence would be a dull business.”[79]

Let us resolve to make sure that Florman’s greatest fear does not come to pass. Let us resolve to make sure that the great human adventure never ends. And let us resolve to counter the antitechnologists and their fundamentally anti-humanist worldview, which would most assuredly make our existence the “dull business” that Florman dreaded.

We can do better when we put our minds and hands to work innovating in an attempt to build a better future for humanity. Samuel Florman, the great prophet of progress, showed us the way forward.


Additional Reading from Adam Thierer:


Endnotes:

[1]    Matt Ridley, The Rational Optimist: How Prosperity Evolves (New York: Harper Collins, 2010).

[2]    Adam Thierer, “Defending Innovation Against Attacks from All Sides,” Discourse, November 9, 2021, https://www.discoursemagazine.com/ideas/2021/11/09/defending-innovation-against-attacks-from-all-sides.

[3]    Samuel C. Florman, The Civilized Engineer (New York: St. Martin’s Griffin, 1987), p. 26.

[4]    Samuel C. Florman, The Existential Pleasures of Engineering, 2nd ed. (New York: St. Martin’s Griffin, 1994), pp. 53-54.

[5]    Existential Pleasures of Engineering, pp. 53-54.

[6]    Samuel C. Florman, Blaming Technology: The Irrational Search for Scapegoats (New York: St. Martin’s Press, 1981), p. 186.

[7]    Existential Pleasures of Engineering, p. 76.

[8]    Existential Pleasures of Engineering, p. 77.

[9]    The Civilized Engineer, p. 38.

[10]   Thomas Sowell, The Vision of the Anointed: Self-Congratulation as a Basis for Social Policy (New York: Basic Books, 1995).

[11]   Existential Pleasures of Engineering, p. 72.

[12]   Existential Pleasures of Engineering, p. 76.

[13]   The Civilized Engineer, p. 35.

[14]   Existential Pleasures of Engineering, p. 102.

[15]   Blaming Technology, p. 162.

[16]   Existential Pleasures of Engineering, p. 55.

[17]   Blaming Technology, p. 70.

[18]   Existential Pleasures of Engineering, p. 77.

[19]   Existential Pleasures of Engineering, p. 60.

[20]   Adam Thierer, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” Minnesota Journal of Law, Science & Technology 14, no. 1 (2013), p. 312–50, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2012494.

[21]   Existential Pleasures of Engineering, p. 62.

[22]   Blaming Technology, p. 9.

[23]   Hans Rosling, Factfulness: Ten Reasons We’re Wrong about the World—and Why Things Are Better Than You Think (New York: Flatiron Books, 2018).

[24]   Steven Pinker, Enlightenment Now: The Case for Reason, Science, Humanism, and Progress (New York: Viking, 2018).

[25]   Gregg Easterbrook, It’s Better than It Looks: Reasons for Optimism in an Age of Fear (New York: Public Affairs, 2018); Michael A. Cohen & Micah Zenko, Clear and Present Safety: The World Has Never Been Better and Why That Matters to Americans (New Haven, CT: Yale University Press, 2019).

[26]   Existential Pleasures of Engineering, p. 54.

[27]   Existential Pleasures of Engineering, p. 72.

[28]   Existential Pleasures of Engineering, p. 72.

[29]   Existential Pleasures of Engineering, p. 55.

[30]   Existential Pleasures of Engineering, p. 117.

[31]   David Hume, “Of the Populousness of Ancient Nations,” (1777), https://oll.libertyfund.org/titles/hume-essays-moral-political-literary-lf-ed.

[32]   The Civilized Engineer, p. 20.

[33]   Existential Pleasures of Engineering, p. 6.

[34]   The Civilized Engineer, p. 20.

[35]   Existential Pleasures of Engineering, p. 115.

[36]   The Civilized Engineer, p. 20.

[37]   Arthur Diamond, Openness to Creative Destruction: Sustaining Innovative Dynamism (Oxford: Oxford University Press, 2019).

[38]   Existential Pleasures of Engineering, p. 19.

[39]   Existential Pleasures of Engineering, p. 147.

[40]   Existential Pleasures of Engineering, p. 148.

[41]   The Civilized Engineer, p. 30.

[42]   Adam Thierer, “Muddling Through: How We Learn to Cope with Technological Change,” Medium, June 30, 2014, https://medium.com/tech-liberation/muddling-through-how-we-learn-to-cope-with-technological-change-6282d0d342a6.

[43]   Existential Pleasures of Engineering, p. 84.

[44]   The Civilized Engineer, p. 71.

[45]   The Civilized Engineer, p. 72.

[46]   The Civilized Engineer, p. 72.

[47]   Adam Thierer, “Failing Better: What We Learn by Confronting Risk and Uncertainty,” in Sherzod Abdukadirov (ed.), Nudge Theory in Action: Behavioral Design in Policy and Markets (Palgrave Macmillan, 2016): 65-94.

[48]   The Civilized Engineer, p. xi.

[49]   Existential Pleasures of Engineering, p. 85.

[50]   Adam Thierer, “Is the Public Served by the Public Interest Standard?” The Freeman, September 1, 1996,  https://fee.org/articles/is-the-public-served-by-the-public-interest-standard.

[51]   The Civilized Engineer, p. 84.

[52]   Existential Pleasures of Engineering, p. 22.

[53]   Adam Thierer, “Are ‘Permissionless Innovation’ and ‘Responsible Innovation’ Compatible?” Technology Liberation Front, July 12, 2017, https://techliberation.com/2017/07/12/are-permissionless-innovation-and-responsible-innovation-compatible.

[54]   The Civilized Engineer, p. 79.

[55]   Blaming Technology, p. 106.

[56]   The Civilized Engineer, p. 158.

[57]   Blaming Technology, p. 41.

[58]   Blaming Technology, p. 40-1.

[59]   Adam Thierer, “Embracing a Culture of Permissionless Innovation,” Cato Online Forum, November 17, 2014, https://www.cato.org/publications/cato-online-forum/embracing-culture-permissionless-innovation; Christopher Koopman, “Creating an Environment for Permissionless Innovation,” Testimony before the US Congress Joint Economic Committee, May 22, 2018, https://www.mercatus.org/publications/creating-environment-permissionless-innovation.

[60]   The Civilized Engineer, p. 117.

[61]   Joel Mokyr, Lever of Riches: Technological Creativity and Economic Progress (New York: Oxford University Press, 1990).

[62]   Deirdre N. McCloskey, The Bourgeois Virtues: Ethics for an Age of Commerce (Chicago: The University of Chicago Press, 2006); Deirdre N. McCloskey, Bourgeois Dignity: Why Economics Can’t Explain the Modern World (Chicago: The University of Chicago Press. 2010).

[63]   Existential Pleasures of Engineering, p. 57.

[64]   Blaming Technology, p. 22.

[65]   Existential Pleasures of Engineering, p. 58.

[66]   Blaming Technology, p. 10.

[67]   Existential Pleasures of Engineering, p. 48, 53.

[68]   Existential Pleasures of Engineering, p. 70.

[69]   Existential Pleasures of Engineering, p. 49.

[70]   Existential Pleasures of Engineering, p. 16.

[71]   Existential Pleasures of Engineering, p. 61.

[72]   Existential Pleasures of Engineering, p. 61.

[73]   Existential Pleasures of Engineering, p. 60.

[74]   Blaming Technology, p. 104.

[75]   Evgeny Morozov, To Save Everything, Click Here: The Folly of Technological Solutionism (New York: Public Affairs, 2013).

[76]   Adam Thierer, “A Net Skeptic’s Conservative Manifesto,” Reason, April 27, 2013, https://reason.com/2013/04/27/a-net-skeptics-conservative-manifesto-2/.

[77]   Emily Bobrow, “JoeBen Bevirt Is Bringing Flying Taxis from Sci-Fi to Reality,” Wall Street Journal, July 9, 2021, https://www.wsj.com/articles/joeben-bevirt-is-bringing-flying-taxis-from-sci-fi-to-reality-11625848177.

[78]   Existential Pleasures of Engineering, p. 96.

[79]   Blaming Technology, p. 193.

The Case for Innovation, Progress & Abundance: Some Readings https://techliberation.com/2022/01/25/the-case-for-innovation-progress-abundance-some-readings/ https://techliberation.com/2022/01/25/the-case-for-innovation-progress-abundance-some-readings/#comments Tue, 25 Jan 2022 20:27:31 +0000 https://techliberation.com/?p=76937

This is a compendium of readings on “progress studies”: essays and books that generally make the case for technological innovation, dynamism, economic growth, and abundance. I will update this list as additional material of relevance is brought to my attention.

[Last update: 10/11/22]

Recent Essays

Books

The Most Important Technology Policy Book of the Past Quarter Century https://techliberation.com/2022/01/20/the-most-important-technology-policy-book-of-the-past-quarter-century/ https://techliberation.com/2022/01/20/the-most-important-technology-policy-book-of-the-past-quarter-century/#comments Thu, 20 Jan 2022 14:17:10 +0000 https://techliberation.com/?p=76935

Discourse magazine has just published my review of Where Is My Flying Car?, by J. Storrs Hall, which I argue is the most important book on technology policy written in the past quarter century. Hall perfectly defines what is at stake if we fail to embrace a pro-progress policy vision going forward. Hall documents how a “Jetsons” future was within our grasp, but it was stolen away from us. What held back progress in key sectors like transportation, nanotech & energy was anti-technological thinking and the overregulation that accompanies it. “[T]he Great Stagnation was really the Great Strangulation,” he argues. The culprits: negative cultural attitudes toward innovation, incumbent companies or academics looking to protect their turf, litigation-happy trial lawyers, and a raft of risk-averse laws and regulations.

Hall coins the term “the Machiavelli Effect” to describe the resistance that innovators inevitably face from those who benefit from the status quo. He builds on this passage from Niccolò Machiavelli’s classic 1532 study of political power, “The Prince”:

[I]t ought to be remembered that there is nothing more difficult to take in hand, more perilous to conduct, or more uncertain in its success, than to take the lead in the introduction of a new order of things. Because the innovator has for enemies all those who have done well under the old conditions, and lukewarm defenders in those who may do well under the new. This coolness arises partly from fear of the opponents, who have the laws on their side, and partly from the incredulity of men, who do not readily believe in new things until they have had a long experience of them. Thus it happens that whenever those who are hostile have the opportunity to attack they do it like partisans, whilst the others defend lukewarmly, in such wise that the prince is endangered along with them.

Hall notes that the Machiavelli Effect “has nothing to do with any conspiracy.” Rather, it comes down to human nature: many people simultaneously fear the new and different, and they also want to protect whatever status quo they benefit from (or at least feel comfortable with). Isaac Asimov identified the same problem in a 1974 lecture when he noted how there had been “bitter, exaggerated, last-ditch resistance . . . to every significant technological change that had taken place on earth.” [On this same point, also see Innovation and Its Enemies: Why People Resist New Technologies, by Calestous Juma. It’s the best history on the topic.]

Hall identifies how the Machiavelli Effect held back nuclear, nanotech, and aviation technologies. “Over the long run, unchecked regulation destroys the learning curve, prevents innovation, protects and preserves inefficiency, and makes progress run backward.” The problem is the Precautionary Principle, which undermines the learning curve by setting the policy default to “no trial and error” rather than “free to experiment.” There can be no reward without some risk! Hall quotes Wilbur Wright on this point: “If you are looking for perfect safety, you would do well to sit on a fence and watch the birds.”

Over-regulation of those sectors also resulted in massive misallocation of talent, “taking more than a million of the country’s most talented and motivated people and putting them to work making arguments and filing briefs instead of inventing, developing, and manufacturing.” Hall is equally critical of government R&D efforts. “One of the great tragedies of the latter 20th century, and clearly one of the causes of the Great Stagnation,” he argues, “was the increasing centralization and bureaucratization of science and research funding.”

Hall’s book builds on Jason Crawford’s insight that “we need a new philosophy of progress” rooted in optimism about the future and support for a culture of trial-and-error experimentation. Hall’s book is a major contribution to that effort. He makes a profoundly moral case for innovation: “The zero-sum society is a recipe for evil,” because it leaves us with a “static level of existence” that denies us the ability to improve the human condition. Indeed, Hall’s book is the most full-throated defense of innovation by a trained scientist or engineer since Samuel Florman’s 1976 “Existential Pleasures of Engineering.” Both are celebrations of the potential for humanity to build more and better tools to improve the world.

Hall’s book should also be read alongside books from Virginia Postrel (“The Future and Its Enemies”), Steven Pinker (“Enlightenment Now”), Matt Ridley (“How Innovation Works”), and Deirdre McCloskey’s trilogy on the history of modern economic growth. These scholars argue that there is a symbiotic relationship between innovation, economic growth, pluralism, and human betterment, and that to deny people the ability to improve their lot in life is fundamentally anti-human.


I just cannot recommend Hall’s Where Is My Flying Car? highly enough. It’s a masterpiece. And bravo to Stripe Press for publishing a beautiful hardbound edition. It is a stunning book both to behold and read. Order it now, and jump over to Discourse to read my entire review of it.

 

Symposium: Hirschman’s “Exit, Voice & Loyalty” at 50 https://techliberation.com/2020/08/27/symposium-hirschmans-exit-voice-loyalty-at-50/ https://techliberation.com/2020/08/27/symposium-hirschmans-exit-voice-loyalty-at-50/#comments Thu, 27 Aug 2020 15:28:01 +0000 https://techliberation.com/?p=76803

This month’s Cato Unbound symposium features a conversation about the continuing relevance of Albert Hirschman’s Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States, fifty years after its publication. It was a slender but important book that has influenced scholars in many different fields over the past five decades. The Cato symposium features a discussion between me and three other scholars who have attempted to use Hirschman’s framework when thinking about modern social, political, and technological developments.

My lead essay considers how Hirschman’s insights might allow entrepreneurialism and innovative activities to be reconceptualized as types of voice and exit. Response essays by Mikayla Novak, Ilya Somin, and Max Borders broaden the discussion, highlighting how Hirschman’s framework applies in various contexts. I then returned to the discussion this week with a response essay of my own, attempting to tie those essays together and extend the conversation about how technological innovation might provide us with greater voice and exit options going forward. Each contributor offers important insights and illustrates the continuing importance of Hirschman’s book.

I encourage you to jump over to Cato Unbound to read the essays and join the conversations in the comments.

 

Wayne Brough Reviews “Evasive Entrepreneurs”  https://techliberation.com/2020/08/04/wayne-brough-reviews-evasive-entrepreneurs/ https://techliberation.com/2020/08/04/wayne-brough-reviews-evasive-entrepreneurs/#comments Tue, 04 Aug 2020 11:48:32 +0000 https://techliberation.com/?p=76792

My thanks to Dr. Wayne Brough, President at Innovation Defense Foundation, for reviewing my new book, Evasive Entrepreneurs and the Future of Governance, over at the AIER website. Brough says of the book:

Adam Thierer has created a thoughtful and surprisingly timely book examining the interplay between entrepreneurs, innovation, and regulators. Thoughtful because he tackles tough questions of innovation and governance in a dynamic market. Timely because the coronavirus pandemic has forced policymakers to seriously reconsider the cumulative regulatory burden and how it may impede the economic recovery. Whether it’s V-shaped or a slower, longer recovery, decades worth of regulatory underbrush has taken its toll on economic activity while providing few, if any, benefits.

He also does a nice job summarizing the key theme of both this latest book and my previous one on Permissionless Innovation:

Thierer takes to task the anti-growth mentality and the political movements against innovation and growth, highlighting the long tradition of hostility toward innovation, from the early 19th-century Luddites up through today’s technophobes advocating restrictions on new technologies such as artificial intelligence. Much of this is driven by the precautionary principle, which Thierer views as an inappropriate guide for regulators. The precautionary principle is a highly risk-averse standard that provides regulators an excuse to stifle innovation for the slightest perceived hazard.

But Dr. Brough rightly takes me to task for not addressing intellectual property issues in either book. He’s right. I did indeed chicken out of bringing IP policy into these books for a variety of reasons. After I co-edited a big book on IP wars in 2002 (Copy Fights), I made so many enemies for trying to walk the moderate middle path that I largely abandoned the field forevermore. I just got tired of the Holy Wars fought over the topic, and every time I tried to play the role of peacemaker in those wars, I just got shot at by both sides in the intellectual crossfire. I was simultaneously accused of being an “IP anarchist” and “a whore for Big Content” by people on either side of those wars. At one point, a board member of the Cato Institute suggested I should be removed from my job for not being enough of an IP opponent while, at the exact same time, a Cato adjunct fellow was suggesting I was already far too radical an IP opponent. I certainly couldn’t be both! It was comical, but also exhausting and incredibly frustrating. And so I raised the white flag of surrender and walked off the IP battlefield around 2005.

But I also did not bring IP policy into either of my latest books simply because I needed to pick my battles and focus on the issues I know best. When you go down the IP rabbit hole, there’s no escaping that endless descent. Both books would have needed to be significantly longer to incorporate nuanced discussions of how copyright and patents affect innovation outcomes.

Regardless, I very much understand the concerns that Dr. Brough raises in his review about how, “the efficacy of intellectual property laws is inextricably tied to innovation, for better or worse,” and how, “[s]ome of the most disruptive innovation has occurred in the shadow of intellectual property laws that still struggle to keep pace with the rate of technological change.” He’s correct, and entire books have been written on the topic… including my old one!

Anyway, you can read the opening chapter of my new book here, or buy the entire thing here. And my thanks again to Wayne Brough for taking the time to read and review it.

Some Recent Essays on the Importance of Innovation & the Fight over Technological Progress https://techliberation.com/2020/07/28/some-recent-essays-on-the-importance-of-innovation-the-fight-over-technological-progress/ https://techliberation.com/2020/07/28/some-recent-essays-on-the-importance-of-innovation-the-fight-over-technological-progress/#comments Tue, 28 Jul 2020 15:35:34 +0000 https://techliberation.com/?p=76778

[Updated: March 2022]

I was speaking at a conference recently about my life’s work, which for 30 years has focused on the importance of innovation and the intellectual battles over what we mean by progress. I put together a short list of some things I have written on this topic over the last few years and thought I would re-post it here. I will try to keep this list regularly updated, at least for a few years.

UNDERSTANDING THE CHALLENGE WE FACE:

HOW WE MUST RESPOND = “Rational Optimism” / Right to Earn a Living / Permissionless Innovation

ADDITIONAL READING:

NEW BOOK (tying together all the essays and papers listed above):

 

Andreessen on Why Innovation Matters https://techliberation.com/2020/06/14/andreessen-on-why-innovation-matters/ https://techliberation.com/2020/06/14/andreessen-on-why-innovation-matters/#respond Sun, 14 Jun 2020 11:44:24 +0000 https://techliberation.com/?p=76754

Marc Andreessen is interviewed by Sriram Krishnan in Krishnan’s new newsletter, The Observer Effect, and asked what motivates him to support technological innovation and “to go read up on a new topic every day” related to tech and progress. His answer is inspirational and perfectly encapsulates why I also have made technological progress the focus of my life’s work:

I am a deep believer in – after learning a lot over the years about economic history and of cultural history – that technology really is the driver. There were basically millennia of just subsistence farming industry and all of a sudden, there was this vertical takeoff a few hundred years ago. And quality of life exploded around the world. Not evenly but starting in Europe and expanding out. It’s basically all technology. It’s always the printing press, it’s the internet and on and on. And you get this incredible upward trajectory. We have the potential over the course of the next century or over the next few centuries to really dramatically advance and have life be better for virtually everybody. Technology is quite literally the lever for being able to take natural resources and able to make something better out of them. And so it’s just it’s the most interesting and by far the most useful and the most beneficial thing I can think of doing.

Amen, brother! I devoted my last two books (Permissionless Innovation and Evasive Entrepreneurs), and indeed all my life’s work, to proving that exact point. I also really like Andreessen’s definition of technology as “the lever for being able to take natural resources and able to make something better out of them.” I’ve added it to “Defining Technology,” my running compendium of definitions of technology.

 

Matt Ridley on the Freedom to Experiment and Try New Things https://techliberation.com/2020/05/17/matt-ridley-on-the-freedom-to-experiment-and-try-new-things/ https://techliberation.com/2020/05/17/matt-ridley-on-the-freedom-to-experiment-and-try-new-things/#respond Sun, 17 May 2020 18:35:34 +0000 https://techliberation.com/?p=76732

There are few things more exciting to innovation policy geeks than the week a new Matt Ridley book drops. Thankfully, that time is upon us once again. This week, Ridley’s latest book, How Innovation Works: And Why It Flourishes in Freedom, is being released. I can’t wait to dig in.

This weekend, the Wall Street Journal published an essay condensed from the book entitled, “Innovation Can’t Be Forced, but It Can Be Quashed.” Here are some of the highlights from Ridley’s piece:

Innovation relies upon freedom to experiment and try new things, which requires sensible regulation that is permissive, encouraging and quick to give decisions. By far the surest way to rediscover rapid economic growth when the pandemic is over will be to study the regulatory delays and hurdles that have now been hastily swept aside to help innovators in medical devices and therapies, and to see whether such reforms could be applied to other parts of the economy too. … Dealing with Covid-19 has forcibly reminded governments of the value of innovation. But if we are to get faster vaccines and treatments—and better still, more innovation across all fields in the future—then innovators need to be freed from the shackles that hold them back.

These are crucial points, and ones I discuss in the launch essay and the afterword of my new book, Evasive Entrepreneurs and the Future of Governance. Alas, as I pointed out in that launch essay and in my last book, Permissionless Innovation, a great many barriers stand in the way of the freedom to experiment and try new things. As Ridley points out:

There is nothing new about resistance to innovation. […] Incumbent vested interests, overcautious regulators, opportunistic activists and rent-seeking patent holders combine to oppose or delay almost every innovation.

And that’s a real shame because, Ridley correctly concludes, “It turns out that continuous tinkering to develop and refine a better product is much more important than protecting what you’ve already created.”

Spot on. Head over to the Wall Street Journal to read the entire thing and then go order a copy of Ridley’s new book. He’s one of the most important living defenders of technological innovation and human progress. His work has had a huge influence on my way of thinking about innovation, science, and technology. Thank you, Matt!

 

 

“Evasive Entrepreneurs” – 13 Key Terms from the Book https://techliberation.com/2020/04/28/evasive-entrepreneurs-13-key-terms-from-the-book/ https://techliberation.com/2020/04/28/evasive-entrepreneurs-13-key-terms-from-the-book/#comments Tue, 28 Apr 2020 13:09:58 +0000 https://techliberation.com/?p=76701

My latest book, Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments, is now live. Here’s the launch essay and online launch event. Also, here’s a summary of 10 major arguments advanced in the book. I will have more to say about the book in coming weeks, but here is a list of 13 key terms discussed in the text. This list appears at the end of the introduction to the book:

  1. Compliance paradox: The situation in which heightened legal or regulatory efforts fail to reverse unwanted behavior and instead lead to increased legal evasion and additional enforcement problems.
  2. Demosclerosis: Growing government dysfunction brought on by the inability of public institutions to adapt to change, especially technological change.
  3. Evasive entrepreneurs: Innovators who do not always conform to social or legal norms.
  4. Free innovation: Bottom-up, noncommercial forms of innovation that often take on an evasive character. Free innovation is sometimes called “grassroots” or “household” innovation or “social entrepreneurialism.” Even though it is typically noncommercial in character, free innovation often involves regulatory entrepreneurialism and technological civil disobedience.
  5. Innovation arbitrage: The movement of ideas, innovations, or operations to jurisdictions that provide legal and regulatory environments most hospitable to entrepreneurial activity. It can also be thought of as a form of jurisdictional shopping and can be facilitated by competitive federalism.
  6. Innovation culture: The various social and political attitudes and pronouncements toward innovation, technology, and entrepreneurial activities that, taken together, influence the innovative capacity of a culture or nation.
  7. Pacing problem: A term that generally refers to the inability of legal or regulatory regimes to keep up with the intensifying pace of technological change.
  8. Permissionless innovation: The general notion that “it’s easier to ask forgiveness than it is to get permission.” As a policy vision, it refers to the idea that experimentation with new technologies and innovations should generally be permitted by default.
  9. Precautionary principle: The practice of crafting public policies to control or limit innovations until their creators can prove that they will not cause any harm or disruptions.
  10. Regulatory entrepreneurs: Evasive entrepreneurs who set out to intentionally challenge and change the law through their innovative activities. In essence, policy change is part of their business model.
  11. Soft law: Informal, collaborative, and constantly evolving governance mechanisms that differ from hard law in that they lack the same degree of enforceability.
  12. Technological civil disobedience: The technologically enabled refusal of individuals, groups, or businesses to obey certain laws or regulations because they find them offensive, confusing, time-consuming, expensive, or perhaps just annoying and irrelevant.
  13. Technologies of freedom: Devices and platforms that let citizens openly defy (or perhaps just ignore) public policies that limit their liberty or freedom to innovate. Another term with the same meaning is “technologies of resistance.”
“Evasive Entrepreneurs” – 10 Highlights from the Book https://techliberation.com/2020/04/28/evasive-entrepreneurs-10-highlights-from-the-book/ https://techliberation.com/2020/04/28/evasive-entrepreneurs-10-highlights-from-the-book/#comments Tue, 28 Apr 2020 13:08:40 +0000 https://techliberation.com/?p=76698

I’m pleased to announce that the Cato Institute has just published my latest book, Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments. Here’s my introductory launch essay about the book as well as the online launch event. And here’s a list of 13 key terms used throughout the book.

In coming days and weeks I will occasionally blog about different arguments made in the 368-page book, but here’s a quick summary of some of its key points. These ten passages are pulled directly from the text:

  1. “the freedom to innovate is essential to human betterment for each of us individually and for civilization as a whole. That freedom deserves to be taken more seriously today.”
  2. “Entrepreneurialism and technological innovation are the fundamental drivers of economic growth and of the incredible advances in the everyday quality of life we have enjoyed over time. They are the key to expanding economic opportunities, choice, and mobility.”
  3. “Unfortunately, many barriers exist to expanding innovation opportunities and our entrepreneurial efforts to help ourselves, our loved ones, and others. Those barriers include occupational licensing rules, cronyism-based industrial protectionist schemes, inefficient tax schemes, and many other layers of regulatory red tape at the federal, state, and local levels. We should not be surprised, therefore, when citizens take advantage of new technological capabilities to evade some of those barriers in pursuit of their right to earn a living, to tinker with or try doing new things, or just to learn about the world and serve it better.”
  4. “Evasive entrepreneurs rely on a strategy of permissionless innovation in both the business world and the political arena. They push back against ‘the Permission Society,’ or the convoluted labyrinth of permits and red tape that often encumber entrepreneurial activities.” 
  5. “We should be willing to tolerate a certain amount of such outside-the-box thinking because entrepreneurialism expands opportunities for human betterment by constantly replenishing the well of important, life-enhancing ideas and applications.”
  6. “we should better appreciate how creative acts and the innovations they give rise to can help us improve government by keeping public policies fresh, sensible, and in line with common sense and the consent of the governed.”
  7. “Evasive entrepreneurialism is not so much about evading law altogether as it is about trying to get interesting things done, demonstrating a social or an economic need for new innovations in the process, and then creating positive leverage for better results when politics inevitably becomes part of the story. By acting as entrepreneurs in the political arena, innovators expand opportunities for themselves and for the public more generally, which would not have been likely if they had done things by the book.”
  8. “Dissenting through innovation can help make public officials more responsive to the people by reining in the excesses of the administrative state, making government more transparent and accountable, and ensuring that our civil rights and economic liberties are respected.”
  9. “In an age when many of the constitutional limitations on government power are being ignored or unenforced, innovation itself can act as a powerful check on the power of the state and can help serve as a protector of important human liberties.”
  10. “Lawmakers and regulators need to consider a balanced response to evasive entrepreneurialism that is rooted in the realization that technology creators and users are less likely to seek to evade laws and regulations when public policies are more in line with common sense.”

In a nutshell, the core arguments made in the book boil down to this: “evasive entrepreneurialism can transform our society for the better because it can do the following

  • Help expand the range of life-enriching innovations available to society.
  • Help citizens pursue lives of their own choosing—both as creators looking for the freedom to earn a living and as consumers looking to discover and enjoy important new goods and services.
  • Help provide a meaningful, ongoing check on government policies and programs that all too often have outlived their usefulness or simply defy common sense.”

I hope you will consider reading the book.

5 Books that Shaped My Thinking on Innovation https://techliberation.com/2020/04/16/5-books-that-shaped-my-thinking-on-innovation/ https://techliberation.com/2020/04/16/5-books-that-shaped-my-thinking-on-innovation/#comments Thu, 16 Apr 2020 11:42:23 +0000 https://techliberation.com/?p=76684

To commemorate its 40th anniversary, the Mercatus Center asked its scholars to share the books that have been most influential or formative in the development of their analytical approach and worldview. Head over to the Mercatus website to check out my complete write-up of my Top 5 picks for books that influenced my thinking on innovation policy and progress studies. But here is a quick summary:

#1) Samuel C. Florman – “The Existential Pleasures of Engineering” (1976). His book surveys “antitechnologists” operating in several academic fields and then proceeds to utterly demolish their claims with remarkable rigor and wit.

#2) Aaron Wildavsky – “Searching for Safety” (1988). The most trenchant indictment of the “precautionary principle” ever penned. His book helped to reshape the way risk analysts would think about regulatory trade-offs going forward.

#3) Thomas Sowell – “A Conflict of Visions: Ideological Origins of Political Struggles” (1987). It’s like the Rosetta Stone of political theory; the key to deciphering why people think the way they do about human nature, economics, and politics.  

#4) Virginia Postrel – “The Future and Its Enemies” (1998). Postrel reconceptualized the debate over progress as not Left vs. Right but rather dynamism— “a world of constant creation, discovery, and competition”—versus the stasis mentality. More true now than ever before.

#5) Calestous Juma – “Innovation and Its Enemies” (2016). A magisterial history of earlier battles over progress. Juma reminds us of the continued importance of “oiling the wheels of novelty” to constantly replenish the well of important ideas and innovations.

The future needs friends because the enemies of innovative dynamism are voluminous and vociferous. It is a lesson we must never forget. Thanks to these five authors and their books, we never will.

Finally, the influence of these scholars is evident on every page of my last book (“Permissionless Innovation”) and my new one (“Evasive Entrepreneurs and the Future of Governance: How Innovation Improves Economies and Governments”). I thank them all!

Book Review: Cathy O’Neil’s “Weapons of Math Destruction” https://techliberation.com/2018/11/07/book-review-cathy-oneils-weapons-of-math-destruction/ https://techliberation.com/2018/11/07/book-review-cathy-oneils-weapons-of-math-destruction/#comments Wed, 07 Nov 2018 17:01:28 +0000 https://techliberation.com/?p=76408

To read Cathy O’Neil’s Weapons of Math Destruction (2016) is to encounter another in a line of progressive pugilists of the technological age. Where Tim Wu took on the future of the Internet and Evgeny Morozov chided online slacktivism, O’Neil takes on algorithms, or what she has dubbed weapons of math destruction (WMD).

O’Neil’s book came at just the right moment in 2016. It sounded the alarm about big data just as it was becoming a topic for public discussion. And now, two years later, her worries seem prescient. As she explains in the introduction,

Big Data has plenty of evangelists, but I’m not one of them. This book will focus sharply in the other direction, on the damage inflicted by WMDs and the injustice they perpetuate. We will explore harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job. All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.

O’Neil is explicit in laying the blame at the feet of the WMDs: “You cannot appeal to a WMD. That’s part of their fearsome power. They do not listen.” Yet these models aren’t deployed and adopted in a frictionless environment. Instead, they “reflect goals and ideology,” as O’Neil readily admits. Where Weapons of Math Destruction falters is that it ascribes too much agency to algorithms in places, and in doing so misses the broader politics behind algorithmic decision making.

For example, O’Neil begins her book with a story about Sarah Wysocki, a teacher who got fired from the D.C. public school system because of how the teacher evaluation system ranked her abilities. O’Neil writes,

Yet at the end of the 2010-11 school year, Wysocki received a miserable score on her IMPACT evaluation. Her problem was a new scoring system known as value-added modeling, which purported to measure her effectiveness in teaching math and language skills. That score, generated by an algorithm, represented half of her overall evaluation, and it outweighed the positive reviews from school administrators and the community. This left the district with no choice but to fire her, along with 205 other teachers who had IMPACT scores below the minimal threshold.

In the ensuing pages, O’Neil describes the scoring system, how it was designed, and how it affected Wysocki. But the broader politics behind the scoring system that ousted Wysocki are just as important.

Why, for example, was the value-added score such a prominent feature in the teacher evaluation as compared to administrative and parent input? Well, research from the Bill & Melinda Gates Foundation found that a teacher’s value-added track record is among the strongest predictors of student achievement gains. So the school district reworked its evaluations to make that score a central feature. As Jason Kamras, chief of human capital for D.C. schools, told the Washington Post, “We put a lot of stock in it.” But that decision wasn’t without its critics, including Washington Teachers’ Union President Nathan Saunders, who said, “You can get me to walk down the road with you to say value-added is relevant, but 50 percent is too weighted.”

Moreover, the weights changed in 2009 because the Chancellor of D.C. public schools, Michelle Rhee, had negotiated a new deal with the teachers union. In exchange for 20 percent pay raises and bonuses of $20,000 to $30,000 for effective teachers, the district was given more leeway to fire teachers for poor performance, which it did using the IMPACT system. In part, this fight was spurred on because Obama-era Education Secretary Arne Duncan was doling out $3.4 billion in Race to the Top grants that focused on teacher effectiveness measures. And Rhee was Chancellor in the first place because D.C. Mayor Adrian Fenty had won passage of legislation that bypassed the Board of Education and gave the mayor control of the schools.

Yes, Wysocki might have been a false positive, but what about all of the poor-performing teachers whom the previous system hadn’t let go? By focusing on the teachers, O’Neil steers the conversation away from what should be the central concern: did the change actually help students learn and achieve?

Truth be told, my quibbles with Weapons of Math Destruction fall into two types. The first relates to questions of emphasis and scope, which become important when the reader tallies up the costs and benefits of algorithms. Perhaps it is the case that “The U.S. News college ranking has great scale, inflicts widespread damage, and generates an almost endless spiral of destructive feedback loops.” But on the other hand, lower-ranked colleges have decreased their net tuition and accepted a larger share of applicants. Yes, credit scores “open doors for some of us, while slamming them in the face of others,” but in what proportion? In Chile, for example, credit bureaus were forced to stop reporting defaults in 2012. The change was found to reduce costs for most of the poorer defaulters, but it raised costs for non-defaulters, leading to a 3.5 percent decrease in lending and a reduction in aggregate welfare. It could be the case that “the payday loan industry operates WMDs,” but it is unclear where low-income Americans will find short-term loans if they are outlawed.

Second, Weapons of Math Destruction continuously toys with important questions regarding the moral agency of technologies but never explicitly lays them out. How much value should be ascribed to technologies? To what degree are technologies value-neutral or value-laden? All technologies, including the algorithms that O’Neil describes, are designed and implemented for certain kinds of instrumental outcomes by companies and government agencies. An institution has to take on the task of adopting an algorithm for decision-making purposes, and thus the algorithm reflects the institution’s goals.

Should we blame the algorithm, the institutional structures that put it into place, or some combination of the two? Reading with a careful eye, one will easily see that this is the fundamental question of the book, especially since O’Neil wonders whether “we’ve eliminated human bias or simply camouflaged it with technology.” But the real answer isn’t found in this binary. Algorithmic problems are pluralist.

How to Sell a Book about Tech Policy: Turn the Technopanic Dial Up to 11 https://techliberation.com/2018/01/02/how-to-sell-a-book-about-tech-policy-turn-the-technopanic-dial-up-to-11/ https://techliberation.com/2018/01/02/how-to-sell-a-book-about-tech-policy-turn-the-technopanic-dial-up-to-11/#respond Tue, 02 Jan 2018 16:34:22 +0000 https://techliberation.com/?p=76220

Reason magazine recently published my review of Franklin Foer’s new book, World Without Mind: The Existential Threat of Big Tech. My review begins as follows:

If you want to sell a book about tech policy these days, there’s an easy formula to follow. First you need a villain. Google and Facebook should suffice, but if you can throw in Apple, Amazon, or Twitter, that’s even better. Paint their CEOs as either James Bond baddies bent on world domination or naive do-gooders obsessed with the quixotic promise of innovation. Finally, come up with a juicy Chicken Little title. Maybe something like World Without Mind: The Existential Threat of Big Tech. Wait—that one’s taken. It’s the title of Franklin Foer’s latest book, which follows this familiar techno-panic template almost perfectly.

The book doesn’t break a lot of new ground; it serves up the same old technopanicky tales of gloom-and-doom that many others have said will befall us unless something is done to save us. But Foer’s unique contribution is to unify many diverse strands of modern tech criticism in one tome, and then amp up the volume of panic about it all. Hence the “existential” threat in the book’s title. I bet you didn’t know the End Times were so near!

Read the rest of my review over at Reason. And, if you care to read some of my other essays on technopanics through the ages, here’s a compendium of them.

Book Review: Garry Kasparov’s “Deep Thinking” https://techliberation.com/2017/05/11/book-review-garry-kasparovs-deep-thinking/ https://techliberation.com/2017/05/11/book-review-garry-kasparovs-deep-thinking/#comments Thu, 11 May 2017 22:58:17 +0000 https://techliberation.com/?p=76140

[originally posted on Medium ]

Today is the anniversary of the day the machines took over.

Exactly twenty years ago today, on May 11, 1997, the great chess grandmaster Garry Kasparov became the first chess world champion to lose a match to a supercomputer. His battle with IBM’s “Deep Blue” was a highly-publicized media spectacle, and when he lost Game 6 of his match against the machine, it shocked the world.

At the time, Kasparov was bitter about the loss and even expressed suspicions about how Deep Blue’s team of human programmers and chess consultants might have tipped the match in favor of machine over man. Although he still wonders about how things went down behind the scenes during the match, Kasparov is no longer as sore as he once was about losing to Deep Blue. Instead, Kasparov has built on his experience that fateful week in 1997 and learned how he and others can benefit from it.

The result of this evolution in his thinking is Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, a book which serves as a paean to human resiliency and our collective ability as a species to adapt in the face of technological disruption, no matter how turbulent.

Kasparov’s book serves as the perfect antidote to the prevailing gloom-and-doom narrative in modern writing about artificial intelligence (AI) and smart machines. His message is one of hope and rational optimism about a future in which we won’t be racing against the machines but rather running alongside them and benefiting in the process.

Overcoming the Technopanic Mentality

There is certainly no shortage of books and articles being written today about AI, robotics, and intelligent machines. The tone of most of these tracts is extraordinarily pessimistic. Each page is usually dripping with dystopian dread and decrying a future in which humanity is essentially doomed.

As I noted in a recent essay about “The Growing AI Technopanic,” after reading through most of these books and articles, one is left to believe that in the future: “Either nefarious-minded robots enslave us or kill us, or AI systems treacherously trick us, or at a minimum turn our brains to mush.” These pessimistic perspectives are clearly on display within the realm of fiction, where every sci-fi book, movie, or TV show depicts humanity as certain losers in the proverbial “race” against machines. But such lugubrious lamentations are equally prevalent within the pages of many non-fiction books, academic papers, editorials, and journalistic articles.

Given the predominantly panicky narrative surrounding the age of smart machines, Kasparov’s Deep Thinking serves as a welcome breath of fresh air. The aim of his book is finding ways of “doing a smarter job of humans and machines working together” to improve well-being.

Chess fans will enjoy Kasparov’s overview of the history of the game as well as his discussion of how the development of computing and smart machines has been intermingled with chess for many decades now. They will also appreciate his detailed postmortem of his losing battle with Deep Blue, which makes up the meat of the middle of the book. But what is important about the book is the way Kasparov draws out lessons about how the game of chess and chess players themselves have adapted to the rise of smart machines over time — just as he had to following his historic loss to Deep Blue.

Kasparov begins by noting that the growing panic over machine-learning and AI is unwarranted, but in another sense entirely unsurprising. He correctly observes that, “doomsaying has always been a popular pastime when it comes to new technology” and that, “With every new encroachment of machines, the voices of panic and doubt are heard, and they are only getting louder today.”

Fears of sectoral disruptions and job displacements are nothing new, of course, and many of them have even proven legitimate, Kasparov notes. He discusses “a pattern that has repeated over and over for centuries,” in which humans initially scoffed at the idea of machines being able to compete with them. “Eventually we have had to concede that there is no physical labor that couldn’t be replicated, or mechanically surpassed.” That includes the game of chess, where smart machines are now superior to the world’s best players.

But that doesn’t mean we can or should stop the progression of machine intelligence, he says, because the history of humanity is fundamentally tied up with the never-ending process of technological improvements and the gradual assimilation of new tools into our lives, jobs, and economy. He argues:

“Every profession will eventually feel this pressure, and it must, or else it will mean humanity has ceased to make progress. We can either see these changes as a robotic hand closing around our necks or one that can lift us up higher than we can reach on our own, as has always been the case. Romanticizing the loss of jobs to technology is little better than complaining that antibiotics put too many grave diggers out of work.”

That is why it is essential, Kasparov argues, that we not waste time trying to avoid these changes altogether. He regards the very idea of it as an exercise in futility. “Fighting to thwart the impact of machine intelligence is like lobbying against electricity or rockets,” he says. Instead, he argues, we must look to adapt, and do so quickly.

Adaptation, Resiliency & Risk-Taking

In that sense, Kasparov suggests that there are lessons for us in the history of chess as well as from his own experience competing against Deep Blue. He notes that his match against IBM’s supercomputer, “was symbolic of how we are in a strange competition both with and against our creation in more ways every day.”

Instead of just throwing our hands up in the air in frustration, we must be willing to embrace the new and unknown — especially AI and machine-learning. “Each of us has a choice to make: to embrace these new challenges, or to resist them.” His consistent plea throughout the book is to not give in to our worst fears, but instead to embrace these new technological challenges with a willingness to try new ways of doing things. “No matter how many people are worried about jobs, or the social structure, or killer machines, we can never go back,” he concludes.

On that point, my favorite passage in his book comes early in a short chapter about the history of chess. Kasparov’s sagacious advice is worth quoting at length:

“The willingness to keep trying new things — different methods, uncomfortable tasks — when you are already an expert at something is what separates good from great. Focusing on your strengths is required for peak performance, but improving your weaknesses has the potential for the greatest gains. This is true for athletes, executives, and entire companies. Leaving your comfort zone involves risk, however, and when you are already doing well the temptation to stick with the status quo can be overwhelming, leading to stagnation.”

Societal attitudes toward risk-taking and disruption matter profoundly in this regard because “our perspective on disruption affects how well prepared for it we will be” for the future. Again, the lessons from the world of chess are clear: “How professional chess changed when computers and databases arrived is a useful metaphor for how new technology is adopted across industries and societies in general.” For modern chess players, “it was a matter of adapting to survive,” he argues. “Those who quickly mastered the new methods thrived; the few who didn’t mostly dropped down the rating lists.”

 

Disrupting Education

Kasparov is particularly concerned about how a deep underlying conservatism and resistance to experimentation has become a chronic problem within the traditional educational system. “The prevailing attitude is that education is too important to take risks. My response is that education is too important not to take risks,” he says.

He again returns to the world of chess and he speaks with excitement about the ways in which young chess prodigies are tapping computers and sophisticated programs to supplement their skill-building. They do this, Kasparov says, even though they often receive little encouragement from the older guard, who often still resist the new methods of learning. “We need to find out what works and the only way to do that is to experiment,” he argues. “The kids can handle it. They are already doing it on their own. It’s the adults who are afraid.”

He’s also bullish on the globalization of these trends and the way in which “technology will enable people from all over the world to become entrepreneurs, or scientists, or anything they want despite where they live.” Kasparov believes this is already happening within the global chess community as new computing technologies help players everywhere raise the level of their skills. “Kids are capable of learning far more, far faster, than traditional educational methods allow for,” he argues. “They are already doing it mostly on their own, living and playing in a far more complex environment than the one their parents grew up in.”

Problems Ahead

Kasparov isn’t blind to the potential problems associated with new technologies, including AI and algorithmic systems. The potential for privacy violations represents one of the major concerns related to our powerful new technological capabilities. “There are countless privacy issues to be negotiated anytime [personal] data is accessed, of course, and that trade-off will continue to be one of the main battlefields of the AI revolution.”

Kasparov says he is “glad privacy advocates are on the job, especially regarding the powers of the government,” yet he also senses that we are our own worst enemies because new digital technologies and AI-enabled systems “will continue to make the benefits of sharing our data practically irresistible.” “Utility always wins,” he argues, and even if one country seeks to clamp down on innovation, others will welcome it. “When the results come back and show that the economic and health benefits are tremendous, the floodgates will open everywhere.”

He is probably right. After all, as I have noted in recent essays, we increasingly live in a world where “global innovation arbitrage” — i.e., the frictionless movement of innovations to jurisdictions that provide a legal and regulatory environment more hospitable to entrepreneurial activity — is increasingly easy. We already know how challenging it is to control data flows in the age of the Internet, smartphones, and social media. But the combination of more sophisticated forms of machine-learning and the rise of innovation arbitrage opportunities means that formidable challenges lie ahead in terms of digital privacy and cybersecurity.

Other ethical issues will need to be worked out over time, but it is important not to imbue new AI technologies or automated systems with too much moral weight right out of the gates. “Our technology is not concerned about good or evil. It is agnostic,” Kasparov correctly notes. The real question, he says, is how we ourselves put our tools to use. “The ethics are in how we humans use it, not whether we should build it.”

Humility about the Future

Despite some concerns such as these, Kasparov is generally quite bullish about the future of humanity in an age of smart machines. Again, his core message is that, “going backwards isn’t an option” and that “it is almost always better to start looking for alternatives and how to advance the change into something better instead of trying to fight it and hold on to the dying status quo.”

He agrees with many other pundits that new skills and jobs will be needed going forward, but admits they aren’t always easy to plan for in advance. As Yogi Berra once famously said, “It’s tough to make predictions, especially about the future.” Indeed, as I pointed out in the most recent edition of my book Permissionless Innovation, when one looks back at official government labor market studies and forecasts from the 1970s and 1980s, one is struck by the way in which policymakers didn’t even have a vocabulary to describe the jobs and skills of the present. For example, one finds no mention in past reports of some of today’s hottest jobs, such as software engineers and architects, UX designers, database scientists and administrators, and so on.

On one hand, therefore, pessimistic pundits and policymakers regularly underestimate the adaptability of workers and the evolution of new skills and professions. On the other hand, they make an equally egregious mistake when they overestimate the impact of technological change on many sectors and professions, or suggest that mass unemployment is just around the corner unless we slow automation down.

Just this week, the Information Technology and Innovation Foundation released a new report on the impact of technological disruption in the U.S. labor market from 1850 to the present and decried the “false alarmism” often on display in debates about current and future skills and professions. “Labor market disruption is not abnormally high,” conclude authors Robert D. Atkinson and John Wu; instead, “it’s occurring at its lowest rate since the Civil War.”

We’ve been through more turbulent labor market disruptions in the past and weathered the storm. Chances are we will do so again, so long as we embrace the potential for that change to improve our lives and economy in the long-term. “In fact,” conclude Atkinson and Wu, “the single biggest economic challenge facing advanced economies today is not too much labor market churn, but too little, and thus too little productivity growth.” This is consistent with Kasparov’s repeated call in Deep Thinking for us not to give in to our fears about a highly uncertain future but to instead embrace its potential. “Our machines will continue to make us healthier and richer as we use them wisely,” he says, while adding, “They will also make us smarter.”

Learning by Doing

What Kasparov is really doing throughout the book is making the case for building human and institutional resiliency through a constant willingness to experiment and learn through trial and error. It is certainly true that many of today’s skillsets, professions, and business models will be challenged by the rise of smarter machines and algorithmic learning. Defeatism in the face of that prospect, however, isn’t the answer; adaptation is.

Boston University economist James Bessen wrote about this process in his new book, Learning by Doing. Bessen argued that periods of profound technological change require a willingness by workers, businesses, and other institutions to adjust to new marketplace realities. For progress to occur, large numbers of ordinary workers must acquire new knowledge and skills. However, “that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture,” Bessen notes.

Luckily, history also suggests that we have been through this process many times before and can get through it again — and raise the standard of living for workers and average citizens alike over the long run. The crucial part of that process is a general willingness to continue experimenting with new ways of doing things — i.e., learning by doing — and understanding that new skills and professions will emerge from that process.

That is essentially the same point Kasparov makes in Deep Thinking. As he summarized in a new podcast conversation with Tyler Cowen:

“There will be redistribution of jobs. Many jobs today — like drone operators or 3D printer managers or social media managers — they didn’t exist 10 years ago, 15 years ago. No doubt in 10, 15 years, there will be many jobs, maybe the best-paid jobs, that don’t exist today, and we don’t even know how these jobs will look. I think that’s natural. All we have to do is realize that this process is inevitable, and we have to prepare us mentally, but also to have some sort of safety cushions to help people that will have great difficulty in adjusting.”

What about more specific public policy solutions? Considering the unclear future that lies ahead, flexibility and plenty of policy experimentation will be crucial to finding and unlocking new methods that could help us cope and adapt in the new world. “The problem comes when the government is inhibiting innovation with overregulation and short-sighted policy,” Kasparov says. Trade wars and restrictive immigration policies won’t help matters either, he argues, because they “will limit America’s ability to attract the best and brightest minds.” Hopefully the Trump Administration is listening to his advice in this regard.

AI skeptics and other technology critics will lament Kasparov’s lack of greater detail and the absence of a more precise blueprint for helping workers and institutions navigate an uncertain future. But, again, the entire point of Kasparov’s book is that there is enormous value in the very act of confronting those new challenges, learning through trial and error (including the many accompanying failures), and “muddling through” over time.

Much like looking out over the chessboard and pondering the wisdom of our next move, we cannot be frozen into inaction because of fear. We must be willing to make that next move. And then another, and another. And then we must learn from our experiences, and especially our mistakes, if we hope to prosper. “To keep ahead of the machines, we must not try to slow them down because that slows us down as well,” Kasparov concludes in his closing chapter. “We must speed them up. We must give them, and ourselves, plenty of room to grow. We must go forward, outward, and upward.”

Wise advice from the greatest of all grandmasters.

Book Review: Calestous Juma’s “Innovation and Its Enemies” https://techliberation.com/2016/07/29/book-review-calestous-jumas-innovation-and-its-enemies/ https://techliberation.com/2016/07/29/book-review-calestous-jumas-innovation-and-its-enemies/#comments Fri, 29 Jul 2016 15:32:42 +0000 https://techliberation.com/?p=76052

Juma book cover

“The quickest way to find out who your enemies are is to try doing something new.” Thus begins Innovation and Its Enemies, an ambitious new book by Calestous Juma that will go down as one of the decade’s most important works on innovation policy.

Juma, who is affiliated with the Harvard Kennedy School’s Belfer Center for Science and International Affairs, has written a book that is rich in history and insights about the social and economic forces and factors that have, again and again, led various groups and individuals to oppose technological change. Juma’s extensive research documents how “technological controversies often arise from tensions between the need to innovate and the pressure to maintain continuity, social order, and stability” (p. 5) and how this tension is “one of today’s biggest policy challenges.” (p. 8)

What Juma does better than any other technology policy scholar to date is that he identifies how these tensions develop out of deep-seated psychological biases that eventually come to affect attitudes about innovations among individuals, groups, corporations, and governments. “Public perceptions about the benefits and risks of new technologies cannot be fully understood without paying attention to intuitive aspects of human psychology,” he correctly observes. (p. 24)

Opposition to Change: It’s All in Your Head

Juma documents, for example, how “status quo bias,” loss aversion, and other psychological tendencies encourage resistance to technological change. [Note: I discussed these and other “root-cause” explanations of opposition to technological change in Chapter 2 of my book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, as well as in my 2012 law review article on “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle.”] Juma notes that “society is most likely to oppose a new technology if it perceives that the risks are likely to occur in the short run and the benefits will only accrue in the long run.” (p. 5) Moreover, “much of the concern is driven by perception of loss, not necessarily by concrete evidence of loss.” (p. 11)

Juma’s approach to innovation policy studies is strongly influenced by the path-breaking work of Austrian economist Joseph Schumpeter, who long ago documented how entrepreneurial activity and the “perennial gales of creative destruction” were the prime forces that spurred innovation and propelled society forward. But Schumpeter was also one of the first scholars to realize that psychological fears about such turbulent change were what ultimately led to much of the short-term opposition to new technologies that, in due time, we eventually come to see as life-enriching or even life-essential innovations.  Juma uses Schumpeter’s insight as the launching point for his exploration and he successfully verifies it using meticulously detailed case studies.

Case Study-Driven Analysis

Short-term opposition to change is particularly acute among incumbent industries and interest groups, who often feel they have the most to lose. In this regard, Innovation and Its Enemies contains some spectacular histories of how special interests have resisted new technologies and developments throughout the centuries. Those case studies include: coffee and coffeehouses, the printing press, margarine, farm machinery, electricity, mechanical refrigeration, recorded music, transgenic crops, and genetically engineered salmon. These case studies are remarkably detailed histories that offer engaging and enlightening accounts of “the tensions between innovation and incumbency.”

My favorite case study in the book discusses how the dairy industry fought the creation and spread of margarine (excuse the pun!). I had no idea how ugly that situation got, but Juma provides all the gory details in what I consider one of the very best crony capitalist case studies ever penned.

In particular, in a subsection of that chapter entitled “The Laws against Margarine,” he provides a litany of examples of how effective the dairy industry was in convincing lawmakers to enact ridiculous anti-consumer regulations to stop margarine, even though the product offered the public a much-needed, and much more affordable, substitute for traditional butter. At one point, the dairy industry successfully lobbied five states to adopt rules mandating that any imitation butter product had to be dyed pink! Other states enacted labelling laws that required butter substitutes to come in ominous-looking black packaging. Again, all this was done at the request of the incumbent dairy industry and the National Dairy Council, which would resort to almost any sort of deceptive tactic to keep a cheaper competing product out of the hands of consumers.

And so it goes in chapter after chapter of Juma’s book. The amount of detail in each of these unique case studies is absolutely stunning, but they nonetheless remain highly readable accounts of sectoral protectionism, special interest rent-seeking, and regulatory capture. In this way, Juma is plowing some familiar ground already covered by other economic historians and political scientists, such as Joel Mokyr and Mancur Olson, both of whom are mentioned in the book, as well as a long line of public choice scholars who are, somewhat surprisingly, not discussed in the text. Nonetheless, Juma’s approach is still fresh, unique, and highly informative. In fact, I don’t think I’ve ever seen so many distinct and highly detailed case studies assembled in one place by a single scholar.  What Juma has done here is truly impressive.

Related Innovation Policy Paradigms

Beyond Schumpeter’s clear influence, Juma’s approach to studying innovation policy also shares a great deal in common with two other unmentioned innovation policy scholars, Virginia Postrel and Robert D. Atkinson.

Postrel’s 1998 book, The Future and Its Enemies, contrasted the conflicting worldviews of “dynamism” and “stasis” and showed how the tensions between these two visions would affect the course of human affairs. She made the case for embracing dynamism — “a world of constant creation, discovery, and competition” — over the “regulated, engineered world” of the stasis mentality. Similarly, in his 2004 book, The Past and Future of America’s Economy, Atkinson documented how “American history is rife with resistance to change,” and in recounting some of the heated battles over previous technological revolutions he showed how two camps were always evident: “preservationists” and “modernizers.”

When Juma repeatedly recounts the fight between “innovation and incumbency” in his case studies, he is essentially describing the same paradigmatic divide that Postrel and Atkinson highlight in their works when they discuss “dynamist” vs. “stasis” tensions and the “modernizers” vs. “preservationists” battles that we have seen throughout history. [Note: In my 2014 essay on, “Thinking about Innovation Policy Debates: 4 Related Paradigms,” I discussed Postrel and Atkinson’s books and other approaches to understanding tech policy divisions and then related them to the paradigms I contrast in my work: the so-called “precautionary principle” vs. “permissionless innovation” mindsets.]

Finally, Juma’s book could also be compared to another freshly released book, The Politics of Innovation, by Mark Zachary Taylor. Taylor’s book is also essential reading on the lamentable history of industrial protectionism and the resulting political opposition to change we have seen over time. [Note: Brent Skorup and I provided many other high-tech cronyist case studies like these in our 2013 law review article, “A History of Cronyism and Capture in the Information Technology Sector.”]

To counter the prevalence of special interest influence and poor policymaking more generally, Juma stresses the need for evidence-based analysis and a corresponding rejection of fear-mongering and deceptive tactics by public officials and activist groups. He’s particularly concerned with “the use of demonization and false analogies to amplify the perception of risks associated with a new product.”

Accordingly, he would like to see improved educational and risk communication efforts aimed at better informing the public about risk trade-offs and the many potential future benefits of emerging technologies. “Learning how to communicate to the general public is an important aspect of reducing distrust [in new technologies],” Juma argues. (p. 312)

On the Pacing Problem

But Juma never adequately squares that recommendation with another point he makes throughout the text about how “the pace of technological innovation is discernibly fast,” (p. 5) and how it is accelerating in an exponential fashion. “The implications of exponential growth will continue to elude political leaders if they persist in operating with linear worldviews.” (p. 14) But if things really are moving that fast, then are we not potentially doomed to live in never-ending cycles of technopanics and misinformation campaigns about new technologies, no matter how much public education we undertake?

Regardless, Juma’s argument about the speed of modern technological change is quite valid and shared by many other scholars. He is essentially making the same case that Larry Downes did in his excellent 2009 book, The Laws of Disruption: Harnessing the New Forces That Govern Life and Business in the Digital Age. Downes argued that lawmaking in the information age is inexorably governed by the “law of disruption” or the fact that “technology changes exponentially, but social, economic, and legal systems change incrementally.”  This law, Downes said, is “a simple but unavoidable principle of modern life,” and it will have profound implications for the way businesses, government, and culture evolve going forward.  “As the gap between the old world and the new gets wider,” he argued, “conflicts between social, economic, political, and legal systems” will intensify and “nothing can stop the chaos that will follow.”

Again, Juma makes that same point repeatedly throughout the chapters of his book. This is also a restatement of the so-called “pacing problem,” as it is called in the field of the philosophy of technology. I discussed the pacing problem at length in my recent review of Wendell Wallach’s important new book, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. Wallach nicely defined the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” “There has always been a pacing problem,” he noted but, like Juma, Wallach believes that modern technological innovation is occurring at an unprecedented pace, making it harder than ever to “govern” using traditional legal and regulatory mechanisms.

New Approaches to Technological Governance Needed

Both Wallach in A Dangerous Master and Juma in Innovation and Its Enemies struggle with how to solve this problem. Wallach advocates “soft law” mechanisms or even informal “Governance Coordinating Committees,” which would oversee the development of new technology policies and advise existing governmental institutions. Juma is somewhat ambiguous regarding potential solutions, but he does stress the general need for a flexible approach to policy, as he notes on page 252:

It is important to make clear distinctions between hazards and risks. It is necessary to find a legal framework for addressing hazards. But such a framework should not take the form of rigid laws whose adoption needs to be guided by evidence of harm. More flexible standards that allow continuous assessment of emerging safety issues related to a new product are another way to address hazards. This approach would allow for evidence-based regulation.

Beyond that, Juma wants to see “entrepreneurialism exercised in the public arena” (p. 282) and calls for “decisive leaders to champion the application of new technologies.” (p. 283) He argues such leadership is needed to ensure that life-enriching technologies are not derailed by opponents of change.

On the other hand, Juma sees a broader role for policymakers in helping to counter some of the potential side effects associated with many emerging technologies. He highlights three primary areas of concern. First, he suggests political leaders might need to find ways “to help balance the benefits and risks of automation” due to the rapid rise of robotics and artificial intelligence. Second, he notes that synthetic biology and gene-editing will give rise to many thorny issues that require policymakers to balance “potentially extraordinary benefits and the risk of catastrophic consequences.” (p. 284)  Finally, he points out that medicine and healthcare are set to be radically transformed by emerging technologies, but they are also threatened by archaic policies and practices in many countries.

In each case, Juma hopes that “decisive,” “adaptive” and “flexible” leaders will steer a sensible policy course with an eye toward limiting “the spread of political unrest and resentment toward technological innovation.” (p. 284)  That’s a noble goal, but Juma remains a bit vague on the steps needed to accomplish that balancing act without tipping public policy in favor of a full-blown precautionary principle-based regime for new technologies. Juma clearly wants to avoid that result, but it remains unclear how or where he would draw clear lines in the sand to prevent it from occurring while at the same time achieving “decisive leadership” aimed at balancing potential risks and benefits.

Similarly, his repeated calls in the closing chapter for “inclusive innovation” efforts and strategies sound sensible in theory, but Juma speaks in abstract generalities about what the term means and doesn’t provide a clear vision for how it would translate into concrete actions that would not end up giving vested interests a veto over new forms of technological innovation that they disfavor.

[Cartoon: Consider Every Risk Except]

Nothing Ventured, Nothing Gained

Generally speaking, however, Juma wants this balance struck in favor of greater openness to change and an ongoing freedom to experiment with new technological capabilities. As he notes in his concluding chapter:

The biggest risk that society faces by adopting approaches that suppress innovation is that they amplify the activities of those who want to preserve the status quo by silencing those arguing for a more open future. […] Keeping the future open and experimenting in an inclusive and transparent way is more rewarding than imposing the dictum of old patterns. (pgs. 289, 316)

In that regard, the thing I liked most about Innovation and Its Enemies is the way Juma stresses, throughout the text, the symbiotic relationship between risk-taking and progress. One of the ways he does so is by kicking off every chapter with a fun quote on that theme from some notable figure. He includes gems like these:

  • “Nothing will ever be attempted if all possible objections must be first overcome.” – Samuel Johnson
  • “Only those who will risk going too far can possibly find out how far one can go.” – T.S. Eliot
  • “If you risk nothing, then you risk everything.” – Geena Davis
  • “Test fast, fail fast, adjust fast.” – Tom Peters

Of course, I was bound to enjoy his repeated discussion of this theme because that was the central thesis of my latest book, in which I made the argument that, “if we spend all our time living in constant fear of worst-case scenarios—and premising public policy upon such fears—then many best-case scenarios will never come about.” Or more simply, as the old saying goes: “nothing ventured, nothing gained.”

[Cartoon: Protesting Against New Technology – the Early Days]

On Pastoral Myths

I also liked the way that Juma used his case studies to remind us how “the topics may have changed, but the tactics have not.” (p. 143) For example, much of the fear-mongering and deceptive tactics we have seen through the years are based on “pastoral ideals,” i.e., appeals to nature, farm life, old traditions, or just the proverbial “good old days,” whenever those supposedly were! “Demonizing innovation is often associated with campaigns to romanticize past products and practices,” Juma notes. “Opponents of innovation hark back to traditions as if traditions themselves were not inventions at some point in the past.” (p. 309)  So very true!

That was especially the case in battles over new farming methods and technologies, when opponents of change were frequently “championing a moral cause to preserve a way of life,” as Juma discusses in several chapters. (p. 129) New products or methods of production were repeatedly but wrongly characterized as dangerous simply because they were not supposedly “natural” or “traditional” enough in character.

Of course, if all farming and other work were to remain frozen in some past “natural” state, we’d all still be hunters and gatherers struggling to find the next meal to put in our bellies. Or, if we were all still on the farms of the “good old days,” then we’d still be stuck using an ox and plow in the name of preserving the “traditional” ways of doing things.

Humanity has made amazing strides—including being able to feed more people more easily and cheaply than ever before—precisely because we broke with those old, “natural” traditions. Alas, many vested interests and even quite a few academics today still employ these same pastoral appeals and myths to oppose new forms of technological change. Juma’s case studies powerfully illustrate why that dynamic continues to be a driving force in innovation policy debates and how it has delayed the diffusion of many important new goods and services throughout history. When the opponents of change rest their case on pastoral myths and nostalgic arguments about the good old days, we should remind them that the good old days weren’t really that great after all.

Conclusion

In closing, Innovation and Its Enemies earns my highest recommendation. Even though 2016 is only half done as I write this, Professor Juma’s book is probably already a shoo-in as my choice for best innovation policy book of the year. And I am certain that it will also go down as one of the decade’s most important innovation policy books. Buy the book now and read every word of it. It is well worth your time.

Additional material related to Juma’s book:

Other Related Books

In addition to the books that I already mentioned throughout this review, readers who find Juma’s book and the issues he discusses in it of interest should also consider reading these other books on innovation policy, technological governance, and regulatory capture.  Although many of them are more squarely focused on the information technology sector or other emerging technology fields, they all relate to the general subject matter and approach found throughout Juma’s book. [NOTE: Links, where provided, are to my reviews of these books.]


Wendell Wallach on the Challenge of Engineering Better Technology Ethics https://techliberation.com/2016/04/20/wendell-wallach-on-the-challenge-of-engineering-better-technology-ethics/ https://techliberation.com/2016/04/20/wendell-wallach-on-the-challenge-of-engineering-better-technology-ethics/#respond Wed, 20 Apr 2016 19:08:57 +0000 https://techliberation.com/?p=76026

On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.

Wallach’s latest book is entitled, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. And, as I’ve noted here recently, the greatly expanded second edition of my latest book, Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom, has just been released.

Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!— A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.

Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are some moments in the book when he resorts to some hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.

Many Questions, Few Clear Answers

Wallach does a particularly good job framing the major questions about emerging technologies and their effect on society. “Navigating the future of technological possibilities is a hazardous venture,” he observes. “It begins with learning to ask the right questions—questions that reveal the pitfalls of inaction, and more importantly, the passageways available for plotting a course to a safe harbor.” (p. 7) Wallach then embarks on a 260+ page inquiry that bombards the reader with an astonishing litany of questions about the wisdom of various forms of technological innovation—both large and small. While I wasn’t about to start an exact count, I would say that the number of questions Wallach poses in the book runs well into the hundreds. In fact, many paragraphs of the book are nothing but an endless string of questions.

Thus, if there is a primary weakness with A Dangerous Master, it’s that Wallach spends so much time formulating such a long list of smart and nuanced questions that some readers may come away disappointed when they do not find equally satisfying answers. On the other hand, the lack of clear answers is also completely understandable because, as Wallach notes, there really are no simple answers to most of these questions.

Just Slow Down!

Moving on to substance, let me make clear where Wallach and I generally see eye-to-eye and where we part ways.

Generally speaking, we agree about the need to come up with better “soft governance” systems for emerging technologies, which might include multistakeholder processes, developer codes of conduct, sectoral self-regulation, sensible liability rules, and so on. (More on those strategies in a moment.)

But while we both believe it is wise to consider how we might “bake in” better ethics and norms into the process of technological development, Wallach seems much more inclined than I am to expect that we can pre-ordain (or potentially require?) that all of this happens before much of this experimentation and innovation actually moves forward. Wallach opens by asking:

Determining when to bow to the judgment of experts and whether to intervene in the deployment of a new technology is certainly not easy. How can government leaders or informed citizens effectively discern which fields of research are truly promising and which pose serious risks? Do we have the intelligence and means to mitigate the serious risks that can be anticipated? How should we prepare for unanticipated risks? (p. 6)

Again, many good questions here! But this really gets to the primary difference between Wallach’s preferred approach and my own: I tend to believe that many of these things can only be worked out through ongoing trial and error, the constant reformulation of the various norms that govern the process of innovation, and the development of sensible ex post solutions to some of the most difficult problems posed by turbulent technological change.

By contrast, Wallach’s general attitude toward technological evolution is probably best summarized by the phrases: “Slow down!” and, “Let’s have a conversation about it first!” As he puts it in his own words: “Slowing down the accelerating adoption of technology should be done as a responsible means to ensure basic human safety and to support broadly shared values.” (p. 13)

But I tend to believe that it’s not always possible to preemptively determine which innovations to slow down, or even how to determine what those “shared values” are that will help us make this determination. More importantly, I worry that there are very serious potential risks and unintended consequences associated with slowing down many forms of technological innovation, which could improve human welfare in important ways. There can be no prosperity, after all, without a certain degree of risk-taking and disruption.

Getting Out Ahead of the Pacing Problem

It’s not that Wallach is completely hostile to new forms of technological innovation or blind to the many ways those innovations might improve our lives. To the contrary, he does a nice job throughout the book highlighting the many benefits associated with various new technologies, or he is at least willing to acknowledge that there can be many downsides associated with efforts aimed at limiting research and experimentation with new technological capabilities.

Yet, what concerns Wallach most is the much-discussed issue from the field of the philosophy of technology, the so-called “pacing problem.” Wallach concisely defines the pacing problem as “the gap between the introduction of a new technology and the establishment of laws, regulations, and oversight mechanisms for shaping its safe development.” (p. 251) “There has always been a pacing problem,” he notes, but he is concerned that technological innovation—especially highly disruptive and potentially uncontrollable forms of innovation—is now accelerating at an absolutely unprecedented pace.

(Just as an aside for all the philosophy nerds out there…  Such a rigid belief in the “pacing problem” represents a techno-deterministic viewpoint that is, ironically, sometimes shared by technological skeptics like Wallach as well as technological optimists like Larry Downes and even many in the middle of this debate, like Vivek Wadhwa. See, for example, The Laws of Disruption by Downes and “Laws and Ethics Can’t Keep Pace with Technology” by Wadhwa. Although these scholars approach technology ethics and politics quite differently, they all seem to believe that the pace of modern technological change is so relentless as to almost be an unstoppable force of nature. I guess the moral of the story is that, to some extent, we’re all technological determinists now!)

Despite his repeated assertions that modern technologies are accelerating at such a potentially uncontrollable pace, Wallach nonetheless hopes we can achieve some semblance of control over emerging technologies before they reach a critical “inflection point.” In the study of history and science, an inflection point generally represents a moment when a situation and trend suddenly changes in a significant way and things begin moving rapidly in a new direction. These inflection points can sometimes develop quite abruptly, ushering in major changes by creating new social, economic, or political paradigms. As it relates to technology in particular, inflection points can refer to the moment when a particular technology achieves critical mass in terms of adoption or, more generally, to the time when that technology begins to profoundly transform the way individuals and institutions act.

Another related concept that Wallach discusses is the so-called “Collingridge dilemma,” which refers to the notion that it is difficult to put the genie back in the bottle once a given technology has reached a critical mass of public adoption or acceptance. The concept is named after David Collingridge, who wrote about this in his 1980 book, The Social Control of Technology. “The social consequences of a technology cannot be predicted early in the life of the technology,” Collingridge argued. “By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult.”

On “Having a Discussion” & Coming Up with “a Broad Plan”

These related concepts of inflection points and the Collingridge dilemma constitute the operational baseline of Wallach’s worldview. “In weighing speedy development against long-term risks, speedy development wins,” he worries. “This is particularly true when the risks are uncertain and the perceived benefits great.” (p. 85)

Consequently, throughout his book, Wallach pleads with us to take what I will call Technological Time Outs. He says we need to pause at times so that we can have “a full public discussion” (p. 13) and make sure there is a “broad plan in place to manage our deployment of new technologies” (p. 19) to make sure that innovation happens only at “a humanly manageable pace” (p. 261) “to fortify the safety of people affected by unpredictable disruptions.” (p. 262) Wallach’s call for Technological Time Outs is rooted in his belief that “the accelerating pace [of modern technological innovation] undermines the quality of each of our lives.” (p. 263)

That is Wallach’s weakest assertion in the book, and he doesn’t really offer much evidence to prove that the velocity of modern technological change is hurting us rather than helping us, as many of us believe. Rather, he treats it as a widely accepted truism that necessitates some sort of collective effort to slow things down if the proverbial genie is about to exit the bottle, or to make sure those genies don’t get out of their bottles without a lot of preemptive planning regarding how they are to be released into the world. In the following passage, Wallach very succinctly summarizes the approach he recommends throughout A Dangerous Master:

this book will champion the need for more upstream governance: more control over the way that potentially harmful technologies are developed or introduced into the larger society. Upstream management is certainly better than introducing regulations downstream, after a technology is deeply entrenched or something major has already gone wrong. Yet, even when we can assess risks, there remain difficulties in recognizing when or determining how much control should be introduced. When does being precautionary make sense, and when is precaution an over-reaction to the risks? (p. 72)

Those who have read my Permissionless Innovation book will recall that I open by framing innovation policy debates in almost exactly the same way as Wallach suggests in that last line above. I argue in the first lines of my book that:

The central fault line in innovation policy debates today can be thought of as ‘the permission question.’  The permission question asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions and risk-taking, more generally.  Two conflicting attitudes are evident. One disposition is known as the ‘precautionary principle.’ Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions. The other vision can be labeled ‘permissionless innovation.’ It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if any develop, can be addressed later.

So, by contrasting these passages, you can see what I am setting up here is a clash of visions between what appears to be Wallach’s precautionary principle-based approach versus my own permissionless innovation-focused worldview.

How Much Formal Precaution?

But that would be a bit too simplistic, because just a few paragraphs after making the statement above about “upstream management” being superior to ex post solutions formulated “after a technology is deeply entrenched,” Wallach begins slowly backing away from an overly rigid approach to precautionary principle-based governance of technological processes and systems.

He admits, for example, that “precautionary measures in the form of regulations and governmental oversight can slow the development of research whose overall societal impact will be beneficial,” (p. 26) and that they can “be costly” and “slow innovation.” For countries, Wallach admits, this can have real consequences because “Countries with more stringent precautionary policies are at a competitive disadvantage to being the first to introduce a new tool or process.” (p. 74)

So, he’s willing to admit that what we might call a hard precautionary principle usually won’t be sensible or effective in practice, but he is far more open to soft precaution. But this is where real problems begin to develop with Wallach’s approach, and it presents us with a chance to turn the tables on him a bit and begin posing some serious questions about his vision for governing technology.

Much of what follows below are my miscellaneous ramblings about the current state of the intellectual dialogue about tech ethics and technological control efforts. I have discussed these issues at greater length in my new book as well as a series of essays here in past years, most notably: “On the Line between Technology Ethics vs. Technology Policy”; “What Does It Mean to ‘Have a Conversation’ about a New Technology?”; and, “Making Sure the ‘Trolley Problem’ Doesn’t Derail Life-Saving Innovation.”

As I’ve argued in those and other essays, my biggest problem with modern technological criticism is that specifics are in scandalously short supply in this field! Indeed, I often find the lack of details in this arena to be utterly exasperating. Most modern technological criticism follows a simple formula:

TECHNOLOGY –>> POTENTIAL PROBLEMS –>> DO SOMETHING!

But almost all the details come in the discussion of the nature of the technology in question and the many problems apparently associated with it. Far, far less thought goes into the “DO SOMETHING!” part of the critics’ work. One reason for that is probably self-evident: there are no easy solutions. Wallach admits as much at many junctures throughout the book. But that doesn’t relieve critics of the need to give us a more concrete blueprint for identifying and then potentially rectifying the supposed problems.

Of course, the other reason many critics are short on specifics is that what they really mean when they insist we need to “have a conversation” about a new disruptive technology is that we need to have a conversation about stopping that technology.

Where Shall We Draw the Line between Hard and Soft Law?

But this is what I found most peculiar about Wallach’s book: He never really gives us a good standard by which to determine when we should look to hard governance (traditional top-down regulation) versus soft governance (more informal, bottom-up and non-regulatory approaches).

On one hand, he very much wants society to exercise great restraint and precaution when it comes to many of the technologies he and others worry about today. Again, he’s particularly concerned about the potential runaway development and use of drones, genetic editing, nanotech, robotics, and artificial intelligence. For at least one class of robotics—autonomous military robots—Wallach does call for immediate policy action in the form of an Executive Order to ban “killer” autonomous systems. (Incidentally, there’s also a major effort underway called the “Campaign to Stop Killer Robots” that aims to make such a ban part of international law through a multinational treaty.)

But Wallach also acknowledges the many trade-offs associated with efforts to impose preemptive controls on robotics and other technologies. Perhaps for that reason, Wallach doesn’t develop a clear test for when the precautionary principle should be applied to new forms of innovation.

Clearly there are times when it is appropriate, although I believe only in an extremely narrow subset of cases. In the 2nd edition of my Permissionless Innovation book, I tried to offer a rough framework for when formal precautionary regulation (i.e., highly restrictive policy defaults such as operational restrictions, licensing requirements, research limitations, or even formal bans) might be necessary. I do not want to interrupt the flow of this review of Wallach’s book too much, so I have simply cut and pasted that portion of chapter 3 of my book (“When Does Precaution Make Sense?”) below as an appendix to this essay.

The key takeaway of that passage from my book is that all of us who study innovation policy and the philosophy of technology—Wallach, myself, the whole darn movement—have done a remarkably poor job of being specific about precisely when formal policy precaution is warranted. What is the test? All too often, we get lazy and apply what we might call an “I-Know-It-When-I-See-It” standard. Consider the possession of bazookas, tanks, and uranium. Almost all of us would agree that citizens should not be allowed to possess or use such things. Why? Well, it seems obvious, right? They just shouldn’t! But what is the exact standard we use to make that determination?

In coming years, I plan on spending a lot more time articulating a better test by which Precautionary Principle-based policies could be reasonably applied. Those who know me may be taken aback by what I just said. After all, I’ve spent many years explaining why Precautionary Principle-based thinking threatens human prosperity and should be rejected in the vast majority of cases. But that doesn’t excuse the lack of a serious and detailed exploration of the exact standard by which we determine when we should impose some limits on technological innovation.

Generally speaking, while I strongly believe that “permissionless innovation” should remain the policy default for most technologies, there certainly exist some scenarios where the threat of harm associated with a new innovation might be highly probable, tangible, immediate, irreversible, and catastrophic in nature. If so, that could qualify it for at least a light version of the Precautionary Principle. In a future paper or book chapter I’m just now starting to research, I hope to develop those qualifiers more fully and formulate a more robust test around them.

I would have very much liked to see Wallach articulate and defend a test of his own for when formal precaution would make sense, and, by extension, for when we should instead default to soft precaution, or to soft law and informal governance mechanisms for emerging technologies.

We turn to that issue next.

Toward Soft Governance & the Engineering of Better Technological Ethics

Even though Wallach doesn’t provide us with a test for determining when precaution makes sense or when we should instead default to soft governance, he does a much better job explaining the various models of soft law or informal governance that might help us deal with the potential negative ramifications of highly disruptive forms of technological change.

What Wallach proposes, in essence, is that we bake a dose of precaution directly into the innovation process through a wide variety of informal governance/oversight mechanisms. “By embedding shared values in the very design of new tools and techniques, engineers improve the prospect of a positive outcome,” he claims. “The upstream embedding of shared values during the design process can ease the need for major course adjustments when it’s often too late.” (p. 261)

Wallach’s favored instrument of soft governance is what he refers to as “Governance Coordinating Committees” (GCCs). These Committees would coordinate “the separate initiatives by the various government agencies, advocacy groups, and representatives of industry” who would serve as “issue managers for the comprehensive oversight of each field of research.” (p. 250) He elaborates and details the function of GCCs as follows:

These committees, led by accomplished elders who have already achieved wide respect, are meant to work together with all the interested stakeholders to monitor technological development and formulate solutions to perceived problems. Rather than overlap with or function as a regulatory body, the committee would work together with existing institutions. (p. 250-51)

Wallach discussed the GCC idea in much greater detail in a 2013 book chapter he penned with Gary E. Marchant for a collected volume of essays on Innovative Governance Models for Emerging Technologies. (I highly recommend you pick up that book if you can; it contains many terrific essays on these issues.) In their chapter, Marchant and Wallach specify some of the soft law mechanisms we might use to instill a bit of precaution preemptively. These mechanisms include: “codes of conduct, statements of principles, partnership programs, voluntary programs and standards, certification programs and private industry initiatives.”

If done properly, GCCs could provide exactly the sort of wise counsel and smart recommendations that Wallach desires. In my book and many law review articles on various disruptive technologies, I have endorsed many of the ideas and strategies Wallach identifies. I’ve also stressed the importance of many other mechanisms, such as education and empowerment-based strategies that could help the public learn to cope with new innovations or use them appropriately. In addition, I’ve highlighted the many flexible, adaptive ex post remedies that can help when things go wrong. Those mechanisms include common law remedies such as product defects law, various torts, contract law, property law, and even class action lawsuits. Finally, I have written extensively about the very active role played by the Federal Trade Commission (FTC) and other consumer protection agencies, which have broad discretion to police “unfair and deceptive practices” by innovators.

Moreover, we already have a quasi-GCC model developing today with the so-called “multistakeholder governance” model that is often used in both informal and formal ways to handle many emerging technology policy issues.  The Department of Commerce (the National Telecommunications and Information Administration in particular) and the FTC have already developed many industry codes of conduct and best practices for technologies such as biometrics, big data, the Internet of Things, online advertising, and much more. Those agencies and others (such as the FDA and FAA) are continuing to investigate other codes or guidelines for things like advanced medical devices and drones, respectively. Meanwhile, I’ve heard other policymakers and academics float the idea of “digital ombudsmen,” “data ethicists,” and “private IRBs” (institutional review boards) as other potential soft law solutions that technology companies might consider. Perhaps going forward, many tech firms will have Chief Ethical Officers just as many of them today have Chief Privacy Officers or Chief Security Officers.

In other words, there’s already a lot of “soft law” activity going on in this space. And I haven’t even begun an inventory of the many other bodies or groups in each sector that have already set forth their own industry self-regulatory codes; they exist in almost every field that Wallach worries about.

So, I’m not sure how much his GCC idea will add to this existing mix, but I would not be opposed to GCCs playing the sort of coordinating “issue manager” role he describes. Still, I have many questions about GCCs, including:

  • How many of them are needed, and how will we know which one is the definitive GCC for each sector or technology?
  • If they are overly formal in character and dominated by the most vociferous opponents of any particular technology, a real danger exists that a GCC could end up granting a small cabal a “heckler’s veto” over particular forms of innovation.
  • Alternatively, the possibility of “regulatory capture” could be a problem for some GCCs if incumbent companies come to dominate their membership.
  • Even if everything went fairly smoothly and the GCCs produced balanced reports and recommendations, future developers might wonder if and why they are to be bound by older guidelines.
  • And if those future developers choose not to play by the same set of guidelines, what’s the penalty for non-compliance?
  • And how are such guidelines enforced in a world where what I’ve called “global innovation arbitrage” is an increasing reality?

Challenging Questions for Both Hard and Soft Law

To summarize, whether we are speaking of “hard” or “soft” law approaches to technological governance, I am just not nearly as optimistic as Wallach seems to be that we will be able to find consensus on these three things:

(1) what constitutes “harm” in many of these circumstances;

(2) which “shared values” should prevail when “society” debates the shaping of ethics or guiding norms for emerging technologies but has highly contradictory opinions about those values (consider online privacy as a good example, where many people enjoy hyper-sharing while others demand hyper-privacy); and,

(3) that we can create a legitimate “governing body” (or bodies) that will be responsible for formulating these guidelines in a fair way, without completely derailing the benefits of innovation in new fields, and that will remain relevant for very long.

Nonetheless, as he and others have suggested, the benefit of adopting a soft law/informal governance approach to these issues is that it at least seeks to address these questions in a more flexible and adaptive fashion. As I noted in my book, traditional regulatory systems “tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future, and future hypothetical problems that may not ever come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things.” (Permissionless Innovation, p. 120)

So, despite the questions I have raised here, I welcome the more flexible soft law approach that Wallach sets forth in his book. I think it represents a far more constructive way forward than the “top-down” or “command-and-control” regulatory systems of the past. But I very much want to make sure that even these new and more flexible soft law approaches leave plenty of breathing room for ongoing trial-and-error experimentation with new technologies and systems.

Conclusion

In closing, I want to reiterate that not only did I appreciate the excellent questions raised by Wendell Wallach in A Dangerous Master, but I take them very seriously. When I sat down to revise and expand my Permissionless Innovation book last year, I decided to include this warning from Wallach in my revised preface: “The promoters of new technologies need to speak directly to the disquiet over the trajectory of emerging fields of research. They should not ignore, avoid, or superficially dampen criticism to protect scientific research.” (p. 28–9)

As I noted, in response to Wallach: “I take this charge seriously, as should others who herald the benefits of permissionless innovation as the optimal default for technology policy. We must be willing to take on the hard questions raised by critics and then also offer constructive strategies for dealing with a world of turbulent technological change.”

Serious questions deserve serious answers. Of course, sometimes those posing those questions fail to provide many answers of their own! Perhaps it is because they believe the questions answer themselves. Other times, it’s because they are willing to admit that easy answers to these questions typically prove quite elusive. In Wallach’s case, I believe it’s more the latter.

To wrap up, I’ll just reiterate that both Wallach and I share a common desire to find solutions to the hard questions about technological innovation. But the crucial question that probably separates his worldview from my own is this: Whether we are talking about hard or soft governance, how much faith should we place in preemptive planning versus ongoing trial-and-error experimentation to solve technological challenges? Wallach is more inclined to believe we can divine these things with the sagacious foresight of “accomplished elders” and technocratic “issue managers,” who will help us slow things down until we figure out how to properly ease a new technology into society (if at all). But I believe that the only way we will find many of the answers we are searching for is by allowing still more experimentation with the very technologies whose development he and others seek to control. We humans are outstanding problem-solvers, with an uncanny ability among mammals to adapt to changing circumstances. We roll with the punches, learn from them, and become more resilient in the process. As I noted in my 2014 essay, “Muddling Through: How We Learn to Cope with Technological Change”:

we modern pragmatic optimists must continuously point to the unappreciated but unambiguous benefits of technological innovation and dynamic change. But we should also continue to remind the skeptics of the amazing adaptability of the human species in the face of adversity. [. . .] Humans have consistently responded to technological change in creative, and sometimes completely unexpected ways. There’s no reason to think we can’t get through modern technological disruptions using similar coping and adaptation strategies.

Will the technologies that Wallach fears bring about a “techstorm” that overwhelms our culture, our economy, and even our very humanity? It’s certainly possible, and we should continue to seriously discuss the issues that he and other skeptics raise about our expanding technological capabilities and the potential for many of them to do great harm. Because some of them truly could.

But it is equally plausible—in fact, some of us would say highly probable—that instead of overwhelming us, these new technological capabilities will be ones we learn to bend to our will and make work for our collective benefit. Instead of technology becoming “a dangerous master,” we will make it our helpful servant, just as we have so many times before.


APPENDIX: When Does Precaution Make Sense?

[excerpt from chapter 3 of Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Footnotes omitted. See book for all references.]

But aren’t there times when a certain degree of precautionary policymaking makes good sense? Indeed, there are, and it is important to not dismiss every argument in favor of precautionary principle–based policymaking, even though it should not be the default policy rule in debates over technological innovation.

The challenge of determining when precautionary policies make sense comes down to weighing the (often limited) evidence about any given technology and its impact and then deciding whether the potential downsides of unrestricted use are so potentially catastrophic that trial-and-error experimentation simply cannot be allowed to continue. There certainly are some circumstances when such a precautionary rule might make sense. Governments restrict the possession of uranium and bazookas, to name just two obvious examples.

Generally speaking, permissionless innovation should remain the norm in the vast majority of cases, but there will be some scenarios where the threat of tangible, immediate, irreversible, catastrophic harm associated with new innovations could require at least a light version of the precautionary principle to be applied.  In these cases, we might be better suited to think about when an “anti-catastrophe principle” is needed, which narrows the scope of the precautionary principle and focuses it more appropriately on the most unambiguously worst-case scenarios that meet those criteria.

| Precaution might make sense when harm is… | Precaution generally doesn’t make sense for asserted harms that are… |
|---|---|
| Highly probable | Highly improbable |
| Tangible (physical) | Intangible (psychic) |
| Immediate | Distant / unclear timeline |
| Irreversible | Reversible / changeable |
| Catastrophic | Mundane / trivial |

But most cases don’t fall into this category. Instead, we generally allow innovators and consumers to freely experiment with technologies, and even engage in risky behaviors, unless a compelling case can be made that precautionary regulation is absolutely necessary.  How is the determination made regarding when precaution makes sense? This is where the role of benefit-cost analysis (BCA) and regulatory impact analysis is essential to getting policy right.  BCA represents an effort to formally identify the tradeoffs associated with regulatory proposals and, to the maximum extent feasible, quantify those benefits and costs.  BCA generally cautions against preemptive, precautionary regulation unless all other options have been exhausted—thus allowing trial-and-error experimentation and “learning by doing” to continue. (The mechanics of BCA are discussed in more detail in section VII.)

This is not the end of the evaluation, however. Policymakers also need to consider the complexities associated with traditional regulatory remedies in a world where technological control is increasingly challenging and quite costly. It is not feasible to throw unlimited resources at every problem, because society’s resources are finite.  We must balance risk probabilities and carefully weigh the likelihood that any given intervention has a chance of creating positive change in a cost-effective fashion.  And it is also essential to take into account the potential unintended consequences and long-term costs of any given solution because, as Harvard law professor Cass Sunstein notes, “it makes no sense to take steps to avert catastrophe if those very steps would create catastrophic risks of their own.”  “The precautionary principle rests upon an illusion that actions have no consequences beyond their intended ends,” observes Frank B. Cross of the University of Texas. But “there is no such thing as a risk-free lunch. Efforts to eliminate any given risk will create some new risks,” he says.

Oftentimes, after working through all these considerations about whether to regulate new technologies or technological processes, the best solution will be to do nothing because, as noted throughout this book, we should never underestimate the amazing ingenuity and resiliency of humans to find creative solutions to the problems posed by technological change.  (Section V discusses the importance of individual and social adaptation and resiliency in greater detail.) Other times we might find that, while some solutions are needed to address the potential risks associated with new technologies, nonregulatory alternatives are also available and should be given a chance before top-down precautionary regulations are imposed. (Section VII considers those alternative solutions in more detail.)

Finally, it is again essential to reiterate that we are talking here about the dangers of precautionary thinking as a public policy prerogative—that is, precautionary regulations that are mandated and enforced by government officials. By contrast, precautionary steps may be far wiser when undertaken in a more decentralized manner by individuals, families, businesses, groups, and other organizations. In other words, as I have noted elsewhere in much longer articles on the topic, “there is a different choice architecture at work when risk is managed in a localized manner as opposed to a society-wide fashion,” and risk-mitigation strategies that might make a great deal of sense for individuals, households, or organizations might not be nearly as effective if imposed on the entire population as a legal or regulatory directive.

At times, however, more morally significant issues may exist that demand an even more exhaustive exploration of the impact of technological change on humanity. Perhaps the most notable examples arise in the fields of advanced medical treatment and biotechnology. Genetic experimentation and human cloning, for example, raise profound questions about altering human nature or abilities, as well as about the relationship between generations.

The case for policy prudence in these matters is easier to make because we are quite literally talking about the future of what it means to be human.  Controversies have raged for decades over the question of when life begins and how it should end. But these debates will be greatly magnified and extended in coming years to include equally thorny philosophical questions.  Should parents be allowed to use advanced genetic technologies to select the specific attributes they desire in their children? Or should parents at least be able to take advantage of genetic screening and genome modification technologies that ensure their children won’t suffer from specific diseases or ailments once born?

Outside the realm of technologically enhanced procreation, profound questions are already being raised about the sort of technological enhancements adults might make to their own bodies. How much of the human body can be replaced with robotic or bionic technologies before we cease to be human and become cyborgs? As another example, “biohacking”—efforts by average citizens working together to enhance various human capabilities, typically by experimenting on their own bodies—could become more prevalent in coming years. Collaborative forums, such as Biohack.Me, already exist where individuals can share information and collaborate on various projects of this sort. Advocates of such amateur biohacking sometimes refer to themselves as “grinders,” which Ben Popper of the Verge defines as “homebrew biohackers [who are] obsessed with the idea of human enhancement [and] who are looking for new ways to put machines into their bodies.”

These technologies and capabilities will raise thorny ethical and legal issues as they advance. Ethically, they will raise questions of what it means to be human and the limits of what people should be allowed to do to their own bodies. In the field of law, they will challenge existing health and safety regulations imposed by the FDA and other government bodies.

Again, most innovation policy debates—including most of the technologies discussed throughout this book—do not involve such morally weighty questions. In the abstract, of course, philosophers might argue that every debate about technological innovation has an impact on the future of humanity and “what it means to be human.” But few have much of a direct influence on that question, and even fewer involve the sort of potentially immediate, irreversible, or catastrophic outcomes that should concern policymakers.

In most cases, therefore, we should let trial-and-error experimentation continue because “experimentation is part and parcel of innovation” and the key to social learning and economic prosperity.  If we froze all forms of technological innovation in place while we sorted through every possible outcome, no progress would ever occur. “Experimentation matters,” notes Harvard Business School professor Stefan H. Thomke, “because it fuels the discovery and creation of knowledge and thereby leads to the development and improvement of products, processes, systems, and organizations.”

Of course, ongoing experimentation with new technologies always entails certain risks and potential downsides, but the central argument of this book is that (a) the upsides of technological innovation almost always outweigh those downsides and that (b) humans have proven remarkably resilient in the face of uncertain, ever-changing futures.

In sum, when it comes to managing or coping with the risks associated with technological change, flexibility and patience is essential. One size most certainly does not fit all. And one-size-fits-all approaches to regulating technological risk are particularly misguided when the benefits associated with technological change are so profound. Indeed, “[t]echnology is widely considered the main source of economic progress”; therefore, nothing could be more important for raising long-term living standards than creating a policy environment conducive to ongoing technological change and the freedom to innovate.

10 Notable Tech Policy Essays from 2015 https://techliberation.com/2015/12/23/10-notable-tech-policy-essays-from-2015/ https://techliberation.com/2015/12/23/10-notable-tech-policy-essays-from-2015/#comments Wed, 23 Dec 2015 14:22:38 +0000 http://techliberation.com/?p=75233

Throughout the year, I collect some of the more notable tech policy-related essays that I’ve read and then publish an end-of-year list here. (Here, for example, are my end-of-year lists from 2014 and 2013.) So, here are some of my favorite essays and editorials from 2015. (Note: They are just in chronological order. No ranking here.)

  1. Larry Downes – “Take note Republicans and Democrats, this is what a pro-innovation platform looks like,” Washington Post, January 7. (Downes explains how governments need to adapt to accommodate and embrace new forms of technological innovation. He notes: “Here at home, the opportunity to wrap themselves in the flag of innovation is knocking for both parties, but so far there are few takers. Republicans and Democrats regularly invoke the rhetoric of innovation, entrepreneurship, and the transformative power of technology. But in reality neither party pursues policies that favor the disruptors. Instead, where lawmakers once took a largely hands-off approach to Silicon Valley, as the Internet revolution enters a new stage of industry transformation, the temptation to intervene, to usurp, to micromanage, to circumscribe the future — becomes irresistible.”) Equally excellent was Larry’s essay later in the year, “Fewer, Faster, Smarter.” (“As the technology revolution proceeds, the concept of government may return to its pre-industrial roots, setting the most basic rules of the economy and standing by as regulator of last resort when markets fail for some or all consumers over an extended period of time. Even then, the solution may simply be to tweak the incentives to encourage better behavior, rather than more full-fledged—and usually ill-fated—micromanagement of fast-changing industries.”)
  2. Bryant Walker Smith – “Slow Down That Runaway Ethical Trolley,” CIS Blog, January 12. (Smith, a leading expert on autonomous vehicle systems, notes that, while serious ethical dilemmas will always be present with such technologies, we should not allow the perfect to be the enemy of the good. “The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?”)
  3. Tim Worstall – “Google gets my data, I get search and email and that. Help help, I’m being OPPRESSED!” The Register, February 4. (A wicked tongue-lashing of the critics of the data-driven economy.)
  4. Aki Ito – “Six Things Technology Has Made Insanely Cheap: Behold the power of American progress,” Bloomberg Business, February 5. (The title says it all.)
  5. Andrew McAfee – “Who are the humanists, and why do they dislike technology so much?” Financial Times, July 7, 2015. (A brief but brilliant exploration of the philosophical fight over differing conceptions of “humanism.” McAfee, appropriately in my opinion, calls into question technological critics who self-label themselves “humanists” and then suggest that those who believe in the benefits of technological innovation and progress are somehow opposed to humanity. In reality, of course, nothing could be further from the truth!)
  6. Jocelyn Brewer – “Techno-Fear is Hurting Kids, Not Their Use of Digital Devices,” July 7, 2015. (A beautiful piece that makes it clear why “the Internet… is not addictive. Technology is not a drug.” Brewer continues on to make the case for avoiding fear-based messaging about Internet problems and instead adopting a more sensible approach: “Rather than trotting out interminable lists of the negative consequences of our adoption of technology lets raise awareness of how to avoid the pitfalls of not approaching this new era with solutions and proactive thinking.” Amen, sister!)
  7. Evan Ackerman – “We Should Not Ban ‘Killer Robots,’ and Here’s Why,” IEEE Spectrum, July 29, 2015. (A thought-provoking piece about a controversial subject in which Ackerman argues that “banning the technology is not going to solve the problem if the problem is the willingness of humans to use technology for evil.”)
  8. Tim O’Reilly – “Networks and the Nature of the Firm,” Medium, August 14, 2015. (Explores the economics of the sharing economy and “the huge economic shift led by software and connectedness.”)
  9. Joe Queenan – “America’s Need for Pointless Updates and Cat Videos,” Wall Street Journal, December 3, 2015. (“The back-to-nature, turn-off-your-cellphone movement is based on a false assumption. . . . Time not spent doing dumb stuff would otherwise be wasted doing other dumb stuff. It’s called ‘play,’ without which Jack is a dull boy. It is a variation on the old saying that nature abhors a vacuum. So nature created the Internet.”)
  10. Dominic Basulto – "Can we just stop with all these tech dystopia stories?" Washington Post, December 8, 2015. ("Yes, a dystopian future is possible, but so is a utopian future. Most likely, the answer is somewhere in the middle, the way it's been for millennia.")
“Learning by Doing,” the Process of Innovation & the Future of Employment https://techliberation.com/2015/09/25/learning-by-doing-the-process-of-innovation-the-future-of-employment/ https://techliberation.com/2015/09/25/learning-by-doing-the-process-of-innovation-the-future-of-employment/#comments Fri, 25 Sep 2015 19:08:37 +0000 http://techliberation.com/?p=75807

I recently finished Learning by Doing: The Real Connection between Innovation, Wages, and Wealth, by James Bessen of the Boston University Law School. It's a good book to check out if you are worried about whether workers will be able to weather this latest wave of technological innovation. One of the key insights of Bessen's book is that, as with previous periods of turbulent technological change, today's workers and businesses will obviously need to find ways to adapt to rapidly changing marketplace realities brought on by the Information Revolution, robotics, and automated systems.

That sort of adaptation takes time. For technological revolutions to take hold and have a meaningful impact on economic growth and worker conditions, large numbers of ordinary workers must acquire new knowledge and skills, Bessen notes. But "that is a slow and difficult process, and history suggests that it often requires social changes supported by accommodating institutions and culture." (p. 223) That is not a reason to resist disruptive forms of technological change, however. To the contrary, Bessen says, it is crucial to allow ongoing trial-and-error experimentation and innovation to continue precisely because it represents a learning process that helps people (and workers in particular) adapt to changing circumstances and acquire new skills to deal with them. That, in a nutshell, is "learning by doing." As he elaborates elsewhere in the book:

Major new technologies become ‘revolutionary’ only after a long process of learning by doing and incremental improvement. Having the breakthrough idea is not enough. But learning through experience and experimentation is expensive and slow. Experimentation involves a search for productive techniques: testing and eliminating bad techniques in order to find good ones. This means that workers and equipment typically operate for extended periods at low levels of productivity using poor techniques and are able to eliminate those poor practices only when they find something better. (p. 50)

Luckily, history also suggests that, time and time again, that process has played out and the standard of living of workers and average citizens alike has improved along the way.

Of course, that won't stop some from proclaiming, "This time it's different!" Indeed, we're hearing increasing concerns today about the "rise of the robots" and the general negative impact of automation on the workforce.

But these concerns are really nothing new. "There have been periodic warnings in the last two centuries that automation and new technology were going to wipe out large numbers of middle class jobs," notes MIT economist David H. Autor. Luckily, those dire predictions have not come to pass. The reason is that short-sighted skeptics failed to appreciate how, even as new technologies obliterated old businesses and jobs, they simultaneously opened up many more opportunities that were impossible to predict in advance. For every factory worker who lost a job due to technological innovation, new jobs opened up in entirely new sectors that usually offered workers better wages, a safer work environment, and more leisure time. And society clearly benefited in many other ways.

In a new essay for The Journal of Economic Perspectives on "The History of Technological Anxiety and the Future of Economic Growth: Is This Time Different?" Joel Mokyr, Chris Vickers, and Nicolas L. Ziebarth note that "Discussions of how technology may affect labor demand are often focused on existing jobs, which can offer insights about which occupations may suffer the greatest dislocation, but offer much less insight about the emergence of as-yet-nonexistent occupations of the future." They go on to note that:

In the end, the fears of the Luddites that machinery would impoverish workers were not realized, and the main reason is well understood. The mechanization of the early 19th century could only replace a limited number of human activities. At the same time, technological change increased the demand for other types of labor that were complementary to the capital goods embodied in the new technologies. This increased demand for labor included such obvious jobs as mechanics to fix the new machines, but it extended to jobs for supervisors to oversee the new factory system and accountants to manage enterprises operating on an unprecedented scale. More importantly, technological progress also took the form of product innovation, and thus created entirely new sectors for the economy, a development that was essentially missed in the discussions of economists of this time.

And despite a resurgence of automation anxiety in recent years, that historic trend still generally holds true. In late 2014, economists at Deloitte LLP published a sweeping survey of the impact of technology on jobs over the past 200 years and found that "Technology has transformed productivity and living standards, and, in the process, created new employment in new sectors." This is because human needs and wants constantly change and, therefore, "The stock of work in the economy is not fixed; the last 200 years demonstrates that when a machine replaces a human, the result, paradoxically, is faster growth and, in time, rising employment." And they conclude: "Machines will take on more repetitive and laborious tasks, but seem no closer to eliminating the need for human labour than at any time in the last 150 years. It is not hard to think of pressing, unmet needs even in the rich world: the care of the elderly and the frail, lifetime education and retraining, health care, physical and mental well-being."

While it is easy for critics to highlight disruptions in some notable sectors where machines replaced human labor, fewer news reports or panicky books discuss the many new sectors where people have found new opportunities. Again, the historical evidence suggests that there are good reasons to have faith that humans will once again muddle through and prevail in the face of turbulent, disruptive change. As venture capitalist Marc Andreessen has noted when addressing the fear that automation is running amok and that robots will eat all our jobs:

We have no idea what the fields, industries, businesses, and jobs of the future will be. We just know we will create an enormous number of them. Because if robots and AI replace people for many of the things we do today, the new fields we create will be built on the huge number of people those robots and AI systems made available. To argue that huge numbers of people will be available but we will find nothing for them (us) to do is to dramatically short human creativity. And I am way long human creativity.

Some tech critics may reject Andreessen's bullish optimism about human resiliency, but real-world evidence already supports his conclusion that we'll learn to adapt to a world full of robots and robotic systems. A 2015 economic analysis from Colin Lewis, a behavioral economist who runs Robotenomics, showed that "despite the headlines, companies that have installed industrial robots are actually increasingly employing more people whilst at the same time adding more robots." His research revealed that 1.25 million new jobs had been added over the previous six years by companies that make extensive use of industrial robots. He also found that this trend held not only among more recent disruptive firms like Amazon and Tesla Motors, but also among older, more established companies like Chrysler, Daimler, Philips Electronics, and others.

So, it’s worth keeping these facts in mind next time you read an article or book that declares that the sky is falling and that technological innovation is going to destroy labor markets and living standards. The entirety of human history points in the opposite direction. We should be bullish about our ability to muddle through tough times of technological change and flourish in the long run.

5 Great Books on Innovation & Technology Policy https://techliberation.com/2015/09/18/5-great-books-on-innovation-technology-policy/ https://techliberation.com/2015/09/18/5-great-books-on-innovation-technology-policy/#comments Fri, 18 Sep 2015 14:10:10 +0000 http://techliberation.com/?p=75727

I was delivering a lecture to a group of academics and students out in San Jose recently [see the slideshow here] and someone in the crowd asked me to send them a list of some of the many books I had mentioned during my talk, which was about future policy clashes over various emerging technologies. I cut the list down to the five books that I believe best frame the nature of debates over innovation and technology policy. They are:

If you haven't read these amazing books yet, add them to your collection right now! They are worth reading again and again. They will forever change the way you think about debates over technology and innovation.

[Image: covers of the 5 innovation books]

Nominees for The Best & Worst Tech Policy Essays of 2014 https://techliberation.com/2014/12/15/nominees-for-the-best-worst-tech-policy-essays-of-2014/ https://techliberation.com/2014/12/15/nominees-for-the-best-worst-tech-policy-essays-of-2014/#comments Mon, 15 Dec 2014 19:34:54 +0000 http://techliberation.com/?p=74083

Over the course of the year, I collect some of my favorite (and least favorite) tech policy essays and put them together in an end-of-year blog post so I will remember notable essays in the future. (Here’s my list from 2013.) Here are some of the best tech policy essays I read in 2014 (in chronological order).

  • Joel Mokyr – "The Next Age of Invention," City Journal, Winter 2014. (An absolutely beautiful refutation of the technological pessimism that haunts our age. Mokyr concludes by noting that, "technology will continue to develop and change human life and society at a rate that may well dwarf even the dazzling developments of the twentieth century. Not everyone will like the disruptions that this progress will bring. The concern that what we gain as consumers, viewers, patients, and citizens, we may lose as workers is fair. The fear that this progress will create problems that no one can envisage is equally realistic. Yet technological progress still beats the alternatives; we cannot do without it." Mokyr followed it up with a terrific August 8 Wall Street Journal op-ed, "What Today's Economic Gloomsayers Are Missing.")
  • Michael Moynihan – "Can a Tweet Put You in Prison? It Certainly Will in the UK," The Daily Beast, January 23, 2014. (Great essay on the right and wrong way to fight online hate. Here's the kicker: "There is a presumption that ugly ideas are contagious and if the already overburdened police force could only disinfect the Internet, racism would dissipate. This is arrant nonsense.")
  • Hanni Fakhoury – "The U.S. Crackdown on Hackers Is Our New War on Drugs," Wired, January 23, 2014. ("We shouldn't let the government's fear of computers justify disproportionate punishment. . . . It's time for the government to learn from its failed 20th century experiment over-punishing drugs and start making sensible decisions about high-tech punishment in the 21st century.")
  • Carole Cadwalladr – “Meet Cody Wilson, Creator of the 3D-gun, Anarchist, Libertarian,” Guardian/Observer, February 8, 2014. (Entertaining profile of one of the modern digital age’s most fascinating characters. “There are enough headlines out there which ask: Is Cody Wilson a terrorist? Though my favourite is the one that asks: ‘Cody Wilson: troll, genius, patriot, provocateur, anarchist, attention whore, gun nut or Second Amendment champion.’ Though it could have added, ‘Or b) all of the above?'”)

And my nominees for Worst Tech Policy Essays of 2014 go to:

 

New Book Release: “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom” https://techliberation.com/2014/03/25/new-book-release-permissionless-innovation-the-continuing-case-for-comprehensive-technological-freedom/ https://techliberation.com/2014/03/25/new-book-release-permissionless-innovation-the-continuing-case-for-comprehensive-technological-freedom/#respond Tue, 25 Mar 2014 15:06:28 +0000 http://techliberation.com/?p=74314

I am pleased to announce the release of my latest book, "Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom." It's a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the "precautionary principle." Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, or various existing laws, norms, or traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today.

The second major objective of the book, as is made clear by the title, is to make a forceful case in favor of the latter disposition of “permissionless innovation.” I argue that policymakers should unapologetically embrace and defend the permissionless innovation ethos — not just for the Internet but also for all new classes of networked technologies and platforms. Some of the specific case studies discussed in the book include: the “Internet of Things” and wearable technologies, smart cars and autonomous vehicles, commercial drones, 3D printing, and various other new technologies that are just now emerging.

I explain how precautionary principle thinking is increasingly creeping into policy discussions about these technologies. The urge to regulate preemptively in these sectors is driven by a variety of safety, security, and privacy concerns, which are discussed throughout the book. Many of these concerns are valid and deserve serious consideration. However, I argue that if precautionary-minded regulatory solutions are adopted in a preemptive attempt to head off these concerns, the consequences will be profoundly deleterious.

The central lesson of the booklet is this: Living in constant fear of hypothetical worst-case scenarios — and premising public policy upon them — means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.

Again, that doesn’t mean we should ignore the various problems created by these highly disruptive technologies. But how we address these concerns matters greatly. If and when problems develop, there are many less burdensome ways to address them than through preemptive technological controls. The best solutions to complex social problems are almost always organic and “bottom-up” in nature. Luckily, there exists a wide variety of constructive approaches that can be tapped to address or alleviate concerns associated with new innovations. These include:

  • education and empowerment efforts (including media literacy and digital citizenship efforts);
  • social pressure from activists, academics, the press, and the public more generally;
  • voluntary self-regulation and adoption of best practices (including privacy and security "by design" efforts); and
  • increased transparency and awareness-building efforts to enhance consumer knowledge about how new technologies work.

Such solutions are almost always superior to top-down, command-and-control regulatory edicts and bureaucratic schemes of a "Mother, May I?" (i.e., permissioned) nature. The problem with traditional "top-down" regulatory systems is that they tend to be overly rigid, bureaucratic, inflexible, and slow to adapt to new realities. They focus on preemptive remedies that aim to predict the future and address hypothetical problems that may never come about. Worse yet, administrative regulation generally preempts or prohibits the beneficial experiments that yield new and better ways of doing things. It raises the cost of starting or running a business or non-business venture, and generally discourages activities that benefit society.

To the extent that other public policies are needed to guide technological developments, simple legal principles are greatly preferable to technology-specific, micro-managed regulatory regimes. Again, ex ante (preemptive and precautionary) regulation is often highly inefficient, even dangerous. To the extent that any corrective legal action is needed to address harms, ex post measures, especially via the common law (torts, class actions, etc.), are typically superior. And the Federal Trade Commission will, of course, continue to serve as a backstop here by utilizing the broad consumer protection powers it possesses under Section 5 of the Federal Trade Commission Act, which prohibits "unfair or deceptive acts or practices in or affecting commerce." In recent years, the FTC has already brought and settled many cases involving its Section 5 authority to address identity theft and data security matters. If still more is needed, enhanced disclosure and transparency requirements would certainly be superior to outright bans on new forms of experimentation or other forms of heavy-handed technological controls.

In the end, however, I argue that, to the maximum extent possible, our default position toward new forms of technological innovation must remain: “innovation allowed.” That is especially the case because, more often than not, citizens find ways to adapt to technological change by employing a variety of coping mechanisms, new norms, or other creative fixes. We should have a little more faith in the ability of humanity to adapt to the challenges new innovations create for our culture and economy. We have done it countless times before. We are creative, resilient creatures. That’s why I remain so optimistic about our collective ability to confront the challenges posed by these new technologies and prosper in the process.

If you’re interested in taking a look, you can find a free PDF of the book at the Mercatus Center website or you can find out how to order it from there as an eBook. Hardcopies are also available. I’ll be doing more blogging about the book in coming weeks and months. The debate between the “permissionless innovation” and “precautionary principle” worldviews is just getting started and it promises to touch every tech policy debate going forward.


Related Essays:

Important Cyberlaw & Info-Tech Policy Books (2013 Edition) https://techliberation.com/2013/12/23/important-cyberlaw-info-tech-policy-books-2013-edition/ https://techliberation.com/2013/12/23/important-cyberlaw-info-tech-policy-books-2013-edition/#respond Mon, 23 Dec 2013 18:21:11 +0000 http://techliberation.com/?p=74026

I didn’t have nearly as much time this year to review the steadily growing stream of information policy books that were released. The end-of-year lists I put together in the past were fairly comprehensive (see 2008, 2009, 2010, 2011 and 2012), but I got sidetracked this year with 7 law review articles and an eBook project and had almost no time for book reviews, or even general blogging for that matter.

So, I’ve just listed some of the more notable titles from 2013 even though I didn’t find the time to describe them all.  The first couple are the titles that I believe will have the most lasting influence on information technology policy debates. Needless to say, just because I believe that some of these titles will have an impact on policy going forward does not mean I endorse the perspectives or recommendations in any of them. And that would certainly be the case with my choice for most important Net policy book of the year, Ian Brown and Chris Marsden’s Regulating Code. Their book does a wonderful job mapping the unfolding universe of Internet “co-regulation” and “multi-stakeholderism,” but their defense of a more politicized information policy future leaves lovers of liberty like me utterly demoralized.

The same could be said of many other titles on the list. As I noted in concluding several reviews over the past year, liberty is increasingly a loser in Internet policy circles these days. And it's not just neo-Marxist rants like McChesney's Digital Disconnect or Lanier's restatement of the Unabomber Manifesto, Who Owns the Future? The sad reality is that pretty much everybody these days has a pet peeve they want addressed through pure power politics because, you know, something must be done! The very term "Internet freedom" has already been grotesquely contorted into something akin to an open mandate for governments to meticulously plan virtually every facet of economic and social activity in the Information Age.

Anyway, despite that caveat, many interesting books were released in 2013 on an ever-expanding array of specific information policy topics.  Here’s the list of everything that landed on my desk over the past year.

]]>
https://techliberation.com/2013/12/23/important-cyberlaw-info-tech-policy-books-2013-edition/feed/ 0 74026
My 11 Favorite Internet Policy Essays of 2013 (+ Worst Essay of the Year) https://techliberation.com/2013/12/11/my-11-favorite-internet-policy-essays-of-2013-worst-essay-of-the-year/ https://techliberation.com/2013/12/11/my-11-favorite-internet-policy-essays-of-2013-worst-essay-of-the-year/#comments Wed, 11 Dec 2013 15:37:30 +0000 http://techliberation.com/?p=43567

Here are a few Internet policy essays I collected over the past year which I thought were particularly well done and worth highlighting once more. They are listed in chronological order:

  • L. Gordon Crovitz – “Silicon Valley’s ‘Suicide Impulse,'” Wall Street Journal, January 28. (“It’s a measure of how far Silicon Valley has strayed from its entrepreneurial roots that a top regulator is calling on technology companies to do less lobbying and more competing,” Crovitz argued. “Rather than lobby government to go after one another, Silicon Valley lobbyists should unite to go after overreaching government. Instead of the “suicide impulse” of lobbying for more regulation, Silicon Valley should seek deregulation and a long-overdue freedom to return to its entrepreneurial roots.”)
  • John Gruber – "Open and Shut," Daring Fireball, March 1. (An absolutely brutal evisceration of Tim Wu's recent work.)
  • R. U. Sirius – “Cypherpunk Rising: WikiLeaks, Encryption, and the Coming Surveillance Dystopia,” The Verge, March 7.
  • Julian Sanchez – "A Reply to Epstein & Pilon on NSA's Metadata Program," Cato at Liberty, June 16. (A meticulous point-by-point takedown of an essay by Roger Pilon & Richard Epstein defending NSA's online surveillance tactics.)
  • Ethan Zuckerman – "Is Cybertopianism Really Such a Bad Thing?" Slate, June 17. (A "defense of believing that technology can do good.")

  • Jill Lepore – “The Prism: Privacy in an Age of Publicity,” New Yorker, June 24. (An examination of the evolution of privacy norms over the past 150 years. Lepore argued that “As a matter of historical analysis, the relationship between secrecy and privacy can be stated in an axiom: the defense of privacy follows, and never precedes, the emergence of new technologies for the exposure of secrets. In other words, the case for privacy always comes too late. The horse is out of the barn.”)
  • Michael Nelson – "Six Myths of Innovation Policy," The European Institute Blog, July 2013. (An interesting examination of some myths about innovation policy, with a discussion of how it shapes policy in both the U.S. and the E.U.)
  • Daniel O’Connor – “Rent Seeking and the Internet Economy (Part 1): Why is the Internet So Frequently the Target of Rent Seekers?” DisCo blog, August 15. (Nice overview of what rent-seeking is and why it is increasing in the tech economy.)
  • Bruce Schneier – “Our Decreasing Tolerance To Risk,” Forbes, August 23. (Good exploration of the psychology of risk by one of the great experts on the topic. It’s not strictly about information technology policy, but it has profound ramifications for it. He notes: “We need to relearn how to recognize the trade-offs that come from risk management, especially risk from our fellow human beings.  We need to relearn how to accept risk, and even embrace it, as essential to human progress and our free society.  The more we expect technology to protect us from people in the same way it protects us from nature, the more we will sacrifice the very values of our society in futile attempts to achieve this security.”)
  • Clive Thompson – “Googling Yourself Takes on a Whole New Meaning,” New York Times Magazine, August 30, 2013. (I’d be hard-pressed to find a more gifted and insightful technology pundit than Clive Thompson and he delivers yet again in this interesting piece. My review of his excellent new book was published by Reason. Needless to say, I loved it.)
  • Eli Noam – “Towards the Federated Internet,” InterMEDIA, Autumn 2013. (A provocative essay advocating for an “internet of internets” to replace the current unified global Internet. Noam argues that the time has come to abandon our slavish allegiance to the dream of a single, uniform global network and “we should instead think about a system of federated internets working together in some form of technological coexistence of interoperability.”)

And my vote for worst Internet policy essay of the year goes to Washington Post columnist Robert J. Samuelson for his astonishing essay, “Beware the Internet and the Danger of Cyberattacks,” in which he says, “If I could, I would repeal the Internet. It is the technological marvel of the age, but it is not — as most people imagine — a symbol of progress. Just the opposite. We would be better off without it.”  Where does one even begin with such logic?!  Well, I responded here.  [A close runner-up for the Worst of Year prize would be this essay by Benjamin Kunkel, “Socialize Social Media! A Manifesto.” But it’s so hard to take that essay seriously that it should probably just be disqualified from the competition entirely.]

Anyway, let me know some of your favorite (or even least favorite) Net policy essays of 2013. (And yes, I fully expect some of you to list some of my essays as candidates for Worst of Year honors!)

Book Review: Anupam Chander’s “Electronic Silk Road” https://techliberation.com/2013/08/24/book-review-anupam-chanders-electronic-silk-road/ https://techliberation.com/2013/08/24/book-review-anupam-chanders-electronic-silk-road/#comments Sat, 24 Aug 2013 21:53:09 +0000 http://techliberation.com/?p=73472

As I've noted before, I didn't start my professional life in the early 1990s as a tech policy wonk. My real passion 20 years ago was free trade policy. Unfortunately for me, as my boss rudely informed me at the time, the world was already brimming with aspiring trade analysts and probably didn't need another. This was the time of NAFTA and WTO negotiations, and seemingly everybody was lining up to get into the world of trade policy during that period.

And so, while I was finishing a master's degree with trade theory applications and patiently hoping for opportunities to open up, I decided to take what I thought was going to be a brief detour into the strange new world of the Internet and information technology policy. Of course, I never looked back. I was hooked on Net policy from Day 1. But I never stopped caring about trade theory, and I have always remained passionate about the essential role that free trade plays in expanding commerce, improving human welfare, and facilitating more peaceful interactions among the diverse cultures and countries of this planet.

I only tell you this part of my own backstory so that you understand why I was so excited to receive a copy of Anupam Chander’s new book, The Electronic Silk Road: How the Web Binds the World Together in Commerce. Chander’s book weaves together trade theory and modern information technology policy issues. His over-arching goal is to sketch out and defend “a middle ground between isolation and unregulated trade, embracing free trade and also its regulation.” (p. 209)

In a writing style that is clear and direct, Chander explores the competing forces that facilitate and threaten what he refers to as “Trade 2.0.”  He identifies four distinctive legal challenges for “net-work,” which is his generic descriptor for “information services delivered remotely through electronic communications systems.” (p. 2):

  1. “Legal roadblocks to the free flow of net-work;
  2. The lack of adequate legal infrastructure, as compared to trade in traditional goods;
  3. The threat to law itself posed by the footloose nature of net-work and the uncertainty of whose law should govern net-work transactions; and
  4. The danger that local control of net-work might lead to either Balkanization – the disintegration of the World Wide Web into local arenas – or Stalinization – the repression of political dissidents, identified through their online activity by compliant net-work service providers.” (p. 143).

At the heart of the book is an old tension that has long haunted trade policy: How do you achieve the benefits of free trade through greater liberalization without completely undermining the sovereign authority of nation-states to continue enforcing their preferred socio-political legal and cultural norms? After all, as Chander notes, “States will be loathe to abandon their law in the face of the offerings mediated by the Internet.” (p. 34)  “If crossborder flows of information grossly undermine our privacy, security, or the standards of locally delivered services, they will not long be tolerated,” he notes. (p. 173)  These are just a few of the reasons that barriers to trade remain and why, as Chander explains, “the flat world of global business and the self-regulating world of cyberspace remain distant ideals.” (p. 173).

Striking the Balance

Chander wants to counter that impulse and expand the horizons of Trade 2.0, but he argues that, to some extent, nation-states will always need to be appeased along the way. Consequently, he argues that “we must dismantle the logistical and regulatory barriers to net-work trade while at the same time ensuring that public policy objectives cannot easily be evaded through simple jurisdictional sleight of hand or keystroke.” (p. 34) Again, this reflects his desire for both greater liberalization of markets as well as the preservation of a residual role for states in shaping online commerce and activities.

He says we can achieve this Goldilocks-like balance through the application of three key principles.

The first is harmonization of laws and policies, preferably through multinational accords. “Efforts to harmonize laws across nations and standards among professional associations will prove essential to preserve a global cyberspace in the face of national regulation,” Chander insists. (p. 187)

The second principle is “glocalization,” or “the creation or distribution of products or services intended for a global market but customized to conform to local laws — within the bounds of international law.” (p. 169)

The final key principle is more self-regulatory in character. It is the operational norm of “do no evil” as it pertains to requests from repressive states to have Internet intermediaries crack down on free speech or privacy. “[W]e must seek to nurture a corporate consciousness among information providers of their role in liberation or oppression,” Chander argues. (p. 205)

In a sense, what Chander is recommending here is largely the way global information markets already work. Thus, rather than being aspirational, Chander’s book is more descriptive of the reality we see on the ground today.

For example, the harmonization efforts he recommends to facilitate Trade 2.0 have been underway in various fora and trade accords for several years now. Chander does a nice job describing many of those efforts in the book.

Likewise, his “glocalization” recommendation is to some extent already today’s norm. After a series of high-profile legal skirmishes over the past dozen years, Internet giants such as Yahoo, Google, Facebook, Cisco, Microsoft and others have all eventually folded under legal and regulatory pressure from various governments across the globe and sought to accommodate parochial regulatory requests, even as they expand their efforts internationally. Again, Chander discusses several of the more well-known case studies in the text.

Finally, however, there have been moments when — especially as it pertains to certain free speech matters — some of these corporate players have stood up for a “do no evil” approach when repressive governments come calling.  In this regard, Chander only briefly mentions the work of the Global Network Initiative, which is somewhat surprising since it has been focused on this mission since its inception in 2008. Nonetheless, such “do no evil” moments have happened (for example, Google bowing out of China), although the track record of success here has been spotty to say the least.

Technological Neutrality

Chander also wants to make sure that online markets are not somehow advantaged relative to traditional markets and technologies. “Trade law should not allow countries to insist on a regulatory nirvana in cyberspace unmatched in real space,” he insists. (p. 155)

Fair enough, but how we achieve neutrality and level the proverbial playing field is, of course, important. The problem is that most nation-states seek to harmonize in the direction of greater control. The rise of electronic networks and online commerce presents us with the opportunity to reconsider the wisdom of long-standing statutes and regulations that are either no longer needed or perhaps never should have been on the books in the first place.

This is why I have repeatedly proposed here and elsewhere that, when it comes to domestic information policy spats that involve old and new players and technologies, we should consider borrowing a page from trade law by adopting the equivalent of a “Most Favored Nation” (MFN) clause for communications and media policy. In a nutshell, this policy would state that: “Any operator seeking to offer a new service or enter a new line of business should be regulated no more stringently than its least regulated competitor.” Such an MFN for communications and media policy would ensure that regulatory parity exists within this arena as the lines between existing technologies and industry sectors continue to blur.

Although it will often be difficult to achieve in practice, the aspirational goal of placing all players and technologies on the same liberalized level playing field should be at the heart of information technology policy to ensure non-discriminatory regulatory treatment of competing providers and technologies.

But let’s be clear about what this means: To level the proverbial playing field properly, I believe we should be “deregulating down” instead of regulating up to place everyone on equal footing. This would achieve technological neutrality through greater technological freedom and marketplace liberalization.

Of course, others (possibly including Chander) would likely claim that this could lead to a “race to the bottom” in certain instances by disallowing state action and the application of local laws and norms. But one person’s “race to the bottom” is another person’s race to the top! It all depends on the perspective you adopt toward liberalization efforts. For me, the more liberalization the better. Deregulation has been shown, in one market after another, to improve consumer welfare by expanding choice, increasing innovation, and generally pushing prices lower.

Policies of Freedom

What other specific policies can help us strike the right balance going forward?

I was extremely pleased to see Chander discuss the Clinton Administration’s July 1997 Framework for Global Electronic Commerce. It was instrumental in setting the right tone for e-commerce policy before the turn of the century. The Framework stressed the importance of taking a general “hands off” approach to these markets and treating the Internet as a global free-trade zone. It set forth five key principles for Net governance, including: “the private sector should lead;” “governments should avoid undue restrictions on electronic commerce;” “where governmental involvement is needed, its aim should be to support and enforce a predictable, minimalist, consistent and simple legal environment for commerce,” and other light-touch policy recommendations.

As I noted in the title of my 2012 Forbes essay on the Framework, “15 Years On, President Clinton’s 5 Principles for Internet Policy Remain the Perfect Paradigm.” Chander generally embraces these principles, too, even though some of his “glocalization” recommendations cut against the grain of this vision.

Importantly, Chander also highlights four specific U.S. policies that have fostered the growth of electronic trade.

  1. “The First Amendment guarantee of freedom of speech;
  2. The Communications Decency Act’s Section 230, granting immunity to web hosts for user-generated information; [see my old Forbes essay, “The Greatest of All Internet Laws Turns 15” for an explanation of why Sec. 230 has been so important.]
  3. Title II of the Digital Millennium Copyright Act (DMCA), granting immunity to web hosts for copyright infringement; and
  4. Weak consumer privacy regulations [which have] created breathing room for the rise of Web 2.0.”

“This permissive legal framework offers the United States as a sort of export-processing zone in which Internet entrepreneurs can experiment and establish services.” (p. 57)  Chander gets it exactly right here. Legally speaking, this is the secret sauce that continues to power the Net.

But Chander never fully confronts the inherent contradiction in his earlier call for “technological neutrality” between cyberspace and the traditional economy while also praising all these legal policies, which generally treated the Internet in an “exceptionalist” fashion. I would argue that some of that asymmetry was essential, however, not only to allow the Net to get out of its cradle and grow, but also because it taught us how light-touch regulation was generally superior to traditional heavy-handed regulatory paradigms and mechanisms. Now we just need to keep harmonizing in the direction of the greater freedom that the Internet and online markets enjoy.

Multi-stakeholderism?

One surprising thing about Chander’s book is the general absence of the term “multi-stakeholderism.”  It is getting hard to pick up any Internet policy tract these days and not find reference to multi-stakeholder processes of one sort or another. In particular, I expected to see more linkages to broader Net freedom fights involving the U.N. and the WCIT process.

In this sense, it would have been interesting to see Chander bridge the gap between his work here on free trade in information services and the proposals of various Internet governance scholars and advocacy groups. In particular, I would have liked to have heard what Chander thinks about the conflicting Internet policy paradigms set forth in important recent books from Rebecca MacKinnon (“Consent of the Networked”) and Ian Brown and Christopher Marsden (“Regulating Code”) on one hand, versus those of Milton Mueller (“Networks and States”) and David Post (“Jefferson’s Moose”) on the other. I think Chander would generally be more comfortable with the policy paradigms and proposals sketched out by MacKinnon and Brown & Marsden (whereas I am definitely more in league with Mueller and Post), but I’m not entirely sure where he stands.

Regardless, I would have liked to see some discussion of these issues in Chander’s otherwise excellent book.

Semantic Choices

I suppose my only other complaint with the book comes down to some semantic issues, beginning with its title. In some ways, calling it The Electronic Silk Road makes perfect sense, since Chander wants us to think of the parallels to the Silk Road of ancient times. Alas, these days it is hard to utter the term “Silk Road” and not think of people buying and selling illegal drugs or other shady stuff in the online black market of the same name. So that will be confusing to some.

I’m also not a big fan of some of the other catch-phrases Chander uses throughout the book. Using the term “net-work,” for example, is a bit too cute for my taste, and there are times it gets confusing. And the term “glocalization” is the sort of thing you’d expect to see on the Fake Jeff Jarvis parody account on Twitter (actually, I think he has used it before), and once critic Evgeny Morozov catches wind of it he will, no doubt, eventually use it to linguistically lynch Chander.

Finally, should trade in information and e-commerce be “Trade 2.0” or is it really “Trade 3.0”? To me, Trade 1.0 = agricultural & industrial trade; Trade 2.0 = trade in services; and Trade 3.0 = trade in information and electronic commerce. Doesn’t that make more sense? In any event, the whole 1.0, 2.0, 3.0 thing has gotten a bit clichéd in its own right.

Conclusion

I enjoyed Anupam Chander’s Electronic Silk Road and can recommend it to anyone who is looking to connect the dots between international trade theory and Internet policy / ecommerce developments. The reader will find a little bit of everything in the book, such as classical trade theory from Smith and Ricardo alongside a discussion of Coasean theories of the firm and Benkler-esque theories of commons-based peer production.

Best of all, it is an extremely accessible text such that either a trade policy guru or a Net policy wonk could pick it up and learn a lot about the opposing issues they may not have heard of before. I could also imagine several of the chapters becoming assigned reading in both trade policy courses and cyberlaw programs alike. It’s a supremely balanced treatment of the issues.

Book Review: Ronald Deibert’s “Black Code: Inside the Battle for Cyberspace” https://techliberation.com/2013/07/16/book-review-ronald-deiberts-black-code-inside-the-battle-for-cyberspace/ https://techliberation.com/2013/07/16/book-review-ronald-deiberts-black-code-inside-the-battle-for-cyberspace/#comments Tue, 16 Jul 2013 13:01:57 +0000 http://techliberation.com/?p=45184

Ronald J. Deibert is the director of The Citizen Lab at the University of Toronto’s Munk School of Global Affairs and the author of an important new book, Black Code: Inside the Battle for Cyberspace, an in-depth look at the growing insecurity of the Internet. Specifically, Deibert’s book is a meticulous examination of the “malicious threats that are growing from the inside out” and which “threaten to destroy the fragile ecosystem we have come to take for granted.” (p. 14) It is also a remarkably timely book in light of the recent revelations about NSA surveillance and how it is being facilitated with the assistance of various tech and telecom giants.

The clear and colloquial tone that Deibert employs in the text helps make arcane Internet security issues interesting and accessible. Indeed, some chapters of the book almost feel like they were pulled from the pages of a techno-thriller, complete with villainous characters, unexpected plot twists, and shocking conclusions. “Cyber crime has become one of the world’s largest growth businesses,” Deibert notes (p. 144), and his chapters focus on many prominent recent examples, including cyber-crime syndicates like Koobface, government cyber-spying schemes like GhostNet, state-sanctioned sabotage like Stuxnet, and the vexing issue of zero-day exploit sales.

Deibert is uniquely qualified to narrate this tale not just because he is a gifted story-teller but also because he has had a front row seat in the unfolding play that we might refer to as “How Cyberspace Grew Less Secure.” Indeed, he and his colleagues at The Citizen Lab have occasionally been major players in this drama as they have researched and uncovered various online vulnerabilities affecting millions of people across the globe. (I have previously reviewed and showered praise on a couple important books that Deibert co-edited with scholars from The Citizen Lab and Harvard’s Berkman Center, including: Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace and Access Denied: The Practice and Policy of Global Internet Filtering. They are truly outstanding resources worthy of your attention.)

Black Code’s Many Meanings

So, what is “black code” and why should we be worried about it? Deibert uses the term as a metaphor for many closely related concerns. Most generally it includes “that which is hidden, obscured from the view of the average Internet user.” (p. 6) More concretely, it refers to “the criminal forces that are increasingly insinuating themselves into cyberspace, gradually subverting it from the inside out.” (p. 7) “Those who take advantage of the Internet’s vulnerabilities today are not just juvenile pranksters or frat house brats,” Deibert notes, “they are organized criminal groups, armed militants, and nation states.” (p. 7-8) Which leads to the final way Deibert uses the term “black code.” It also, he says, “refers to the growing influence of national security agencies, and the expanding network of contractors and companies with whom they work.” (p. 8)

Deibert is worried about the way these forces and factors are working together to undermine online stability and security, and even delegitimize liberal democracy itself. His thesis is probably most succinctly captured in this passage from Chapter 7:

We live in an era of unprecedented access to information, and many political parties campaign on platforms of transparency and openness. And yet, at the same time, we are gradually shifting the policing of cyberspace to a dark world largely free from public accountability and independent oversight. In entrusting more and more information to third parties, we are signing away legal protections that should be guaranteed by those who have our data. Perversely, in liberal democratic countries we are lowering the standards around basic rights to privacy just as the center of cyberspace gravity is shifting to less democratic parts of the world. (p. 130-1)

What Deibert is grappling with in this book is the same fundamental problem that has long plagued the Internet: How do you preserve the benefits associated with the most open and interconnected “network of networks” the world has ever known while also remedying the various vulnerabilities and pathologies created by that same openness and interconnectedness?  Deibert acknowledges this problem, noting:

Ever since the Internet emerged from the world of academia into the world of the rest of us, its growth trajectory has been shadowed by a grey economy that thrives on opportunities for enrichment made possible by an open, globally connected infrastructure. (p. 141)

The Paradox of the Net’s Open, Interconnected Nature

Again, paradoxically, this inherent instability and vulnerability is due precisely to the Net’s open and globally interconnected nature. And many governments are looking to exploit that fact. “These unfortunate by-products of an open, dynamic network are exacerbated by increasing assertions of state power,” Deibert notes. (p. 233)

More generally, this uncomfortable fact—that the Net’s open, interconnected nature leads to both enormous benefits and huge vulnerabilities—isn’t just true for criminal online activity or the cyber-espionage activities that various nation-states are pursuing today. It is equally true for everything online today. There is a sort of yin and yang to the Net that is simply undeniable and completely unavoidable. For one issue after another we find that the Net’s greatest blessing—its open, interconnected nature—is also its greatest curse.

For example, as I noted here recently in my review of Abraham H. Foxman and Christopher Wolf ‘s new book, Viral Hate: Containing Its Spread on the Internet, the open and interconnected Internet gives us “the most widely accessible, unrestricted communications platform the world has ever known” but also  means we have to tolerate a great many imbeciles “who use it to spew insulting, vile, and hateful comments.” The same is true for other types of online speech and content: You have access to an abundance of informational riches, but there’s also no avoiding all the garbage out there now, too.

Similarly, as I noted in my essay, “Privacy as an Information Control Regime: The Challenges Ahead,” the open and interconnected Internet has given us historically unparalleled platforms for social interaction and commerce. But that same openness and interconnectedness has left us with a world of hyper-exposure and a variety of privacy and surveillance threats—not just from governments and large corporations, but also from each other.

And then there’s the never-ending story of digital copyright. On one hand, the open and globally interconnected network of networks has provided us with an amazing platform for sharing knowledge, art, and expression. On the other hand, as I noted in this essay on “The Twilight of Copyright,” creators of expressive works have less security than ever before in terms of how they can control and monetize their artistic and scientific inventions.

I could go on and on—as I did in my essays on “Copyright, Privacy, Property Rights & Information Control: Common Themes, Common Challenges” and “When It Comes to Information Control, Everybody Has a Pet Issue & Everyone Will Be Disappointed”—but the moral of the story is pretty clear: The Internet giveth and the Internet taketh away. Openness and interconnectedness offer us enormous benefits but also force us to confront major risks as the price of admission to this wonderful network.

Will the Whole System Collapse?

The uncomfortable question that Deibert’s book tees up for discussion is: When will this balance get completely out of whack in terms of online security? Or, has it already? In some portions of the text, he hints that may already be the case. Consider this passage in Chapter 11 in which Deibert discusses whether the Chicken Little-ism of digital security worry-warts like Eugene Kaspersky and Richard Clarke is warranted:

Eugene Kaspersky, Richard Clarke, and others may sound like broken records or self-serving fear mongers, but there is no denying the evolving cyberspace ecosystem around us: we are building a digital edifice for the entire planet, and it sits above us like a house of cards. We are wrapping ourselves in expanding layers of digital instructions, protocols, and authentication mechanisms, some of them open, scrutinized, and regulated, but many closed, amorphous, and poised for abuse, buried in the black arts of espionage, intelligence gathering, and cyber and military affairs. Is it only a matter of time before the whole system collapses? (p. 186)

That sounds horrific, but is the entire system really about to collapse? And, if so, what are we going to do about it?

This raises a small problem with Deibert’s book. He does such a nice job itemizing and describing these security vulnerabilities that by the time the reader wades through 230 pages and nears the end of the book, they are left in a highly demoralized state, searching for some hope and a concrete set of practical solutions. Unfortunately, they won’t find an abundance of either in Deibert’s brief closing chapter, “Toward Distributed Security and Stewardship in Cyberspace.”

Don’t get me wrong; I agree with the general thrust of Deibert’s framework, which I describe below. The problem is that it is highly aspirational in nature and lacks specifics. Perhaps that is simply because there are no easy answers here. Digital security is damn hard and, as with most other online pathologies out there, no silver-bullet solutions exist.

Deibert notes that some government officials will seek to exploit those vulnerabilities—many of which they created themselves—to expand their authority over the Internet. “Faced with mounting problems and pressures to do something, too many policy-makers are tempted by extreme solutions,” he notes. (p. 234) He worries about “a movement towards clamp down” that would be “antithetical to the principles of liberal democratic government” by undermining checks and balances and accountability. (p. 235) In turn, this will undermine the “mixed common-pool resource” that is the current Internet.

Deibert’s alternative cyber security strategy to counter the push to “clamp down” is based on three interrelated notions or components:

  1. Principles of restraint or “mutual restraint”: “Securing cyberspace requires a reinforcement, rather than a relaxation, of restraint on power, including checks and balances on governments, law enforcement, intelligence agencies, and on the private sector,” he argues. (p. 239)
  2. “Distributed security”: “The Internet functions precisely because of the absence of centralized control, because of thousands of loosely coordinated monitoring mechanisms,” Deibert notes. “While these decentralized mechanisms are not perfect and can occasionally fail, they form the basis of a coherent distributed security strategy. Bottom-up, ‘grassroots’ solutions to the Internet’s security problems are consistent with principles of openness, avoid heavy-handedness, and provide checks and balances against the concentrations of power,” he observes. (p. 240)
  3. “Stewardship” which Deibert defines as “an ethic of responsible behavior in regard to shared resources” and which, he argues, “would moderate the dangerously escalating exercise of state power in cyberspace by defining limits and setting thresholds of accountability and mutual restraint.” (p. 243)

Again, as an aspirational vision statement this all generally sounds fairly sensible, but the details are lacking. I think Deibert would have been wise to spend a bit more time developing this alternative “bottom-up” vision of how online security should work and bolstering it with case studies.

Digital Security without Top-Down Controls

Luckily, as my Mercatus Center colleague Eli Dourado noted in an important June 2012 white paper, distributed security and stewardship strategies are already working reasonably well today. Dourado’s paper, “Internet Security Without Law: How Service Providers Create Order Online,” documented the many informal institutions that enforce network security norms on the Internet and shows how cooperation among a remarkably varied set of actors improves online security without extensive regulation or punishing legal liability. “These informal institutions carry out the functions of a formal legal system—they establish and enforce rules for the prevention, punishment, and redress of cybersecurity-related harms,” Dourado noted.

For example, a diverse array of computer security incident response teams (CSIRTs) operates around the globe to share research and coordinate responses to viruses and other online attacks. Individual Internet service providers (ISPs), domain name registrars, and hosting companies work with these CSIRTs and other individuals and organizations to address security vulnerabilities. A growing market for private security consultants and software providers also competes to offer increasingly sophisticated suites of security products for businesses, households, and governments.

A great deal of security knowledge is also “crowd-sourced” today via online discussion forums and security blogs that feature contributions from experts and average users alike. University-based computer science and cyberlaw centers (like Citizen Lab) and experts have also helped by creating projects like “Stop Badware,” which originated at Harvard University but then grew into a broader non-profit organization with diverse financial support.

Dourado continues on in his paper to show how these informal, bottom-up efforts to coordinate security responses offer several advantages over top-down government solutions, such as administrative regulation or punishing liability regimes.

Dourado’s description of the ideal approach to online security is entirely consistent with Deibert’s vision in Black Code. In fact, Deibert notes, “It is important to remind ourselves that in spite of the threats, cyberspace runs well and largely without persistent disruption. On a technical level, this efficiency is founded on open and distributed networks of local engineers who share information as peers,” he observes. (p. 240) That is exactly right, but I wish Deibert would have spent more time discussing how this system works in practice today and how it can be tweaked and improved to head off the heavy-handed and very costly top-down solutions that we both dread.

Toward Resiliency

But there’s one other thing I wish Deibert would have explored in the book: resiliency, or how we have adapted to various cyber-vulnerabilities over time.

For example, in another recent Mercatus Center study entitled “Beyond Cyber Doom: Cyber Attack Scenarios and the Evidence of History,” Sean Lawson, an assistant professor in the Department of Communication at the University of Utah, has stressed the importance of resiliency as it pertains to cybersecurity and concerns about “cyberwar.” “Research by historians of technology, military historians, and disaster sociologists has shown consistently that modern technological and social systems are more resilient than military and disaster planners often assume,” he writes. “Just as more resilient technological systems can better respond in the event of failure, so too are strong social systems better able to respond in the event of disaster of any type.”

More generally, as I noted in my recent law review article on “technopanics” and “threat inflation” in information technology policy debates:

while it is certainly true that “more could be done” to secure networks and critical systems, panic is unwarranted because much is already being done to harden systems and educate the public about risks. Various digital attacks will continue, but consumers, companies, and others organizations are learning to cope and become more resilient in the face of those threats.

What Professor Lawson and I are getting at in our respective articles is that the ability of organizations, institutions, and individuals to bounce back from adversity is a frequently unheralded feature of various systems and that it deserves more serious study. (See Andrew Zolli and Ann Marie Healy’s nice book, Resilience: Why Things Bounce Back, for more on this general topic). In the context of online security, what is most remarkable to me is not that the Internet suffers from vulnerabilities due to its open and interconnected nature; it’s that we don’t suffer far more damage as a result.

This gets us back to that very profound question that Deibert poses in Black Code: “Is it only a matter of time before the whole system collapses?” The better question, I think, is: why hasn’t the system already collapsed? Perhaps the answer is that things haven’t gotten bad enough yet. But I believe the more realistic answer is that individuals and institutions often learn how to cope and become resilient in the face of adversity. This is partially the case online because of the stewardship and the distributed, decentralized security we already see at work today, which make digital life tolerable.

But it has to be something more than that. After all, many of the security problems that Deibert describes in his book are quite serious and already affect millions of us today. How, then, are we getting by right now? Again, I think the answer has to be that adaptation and resiliency are at work on many different levels of online life.

Consider, for example, how we have learned to deal with spam, viruses, online porn, various online advertising and privacy concerns, and so on. Our adaptation to these threats and annoyances has not been perfectly smooth, of course. No doubt, some people would still like “something to be done” about these things. But isn’t it remarkable how we have, nonetheless, carried on with online commerce and interactive social life even as these problems have persisted?

Conclusion

Going forward, therefore, perhaps there are some reasons for hope. Perhaps the various generic strategies that Deibert outlines in his book, coupled with the remarkable ability of humans to roll with the punches and adapt, will help us come out of this just fine (or at least reasonably well).

Of course, it could also be the case that these security concerns just multiply and that the Internet then morphs into something quite different from the interconnected “network of networks” we know today. As I noted in my 2009 essay on “Internet Security Concerns, Online Anonymity, and Splinternets,” we might be moving toward a world with more separate, disconnected digital networks and online “gated communities.” This could take place spontaneously over time and be driven by corporations seeking to satisfy the demand of some consumers for safer and more secure online experiences. As I noted in my review of Jonathan Zittrain’s book, The Future of the Internet, I am actually fine with some of that. I think we can live in a hybrid world of “walled gardens” alongside the “Wild West” open Internet, so long as this occurs in a spontaneous, organic, bottom-up fashion. [For a more extensive discussion, see my book chapter, “The Case for Internet Optimism, Part 2 – Saving the Net From Its Supporters.”]

If, however, this “splintering” of the Net is done from the top down through intentional (or even incidental) government action, then it is far more problematic. We already see signs, for example, that Russia is pushing even more strongly in that direction in the wake of the NSA leaks. (See “N.S.A. Leaks Revive Push in Russia to Control Net,” New York Times, July 14.) The Russians have been using amorphous security concerns to push for greater Internet control for some time now. China, of course, has been there for years, as have many Middle Eastern countries. There’s no guarantee that their respective “splinternets” are, or would be, any more secure than today’s Internet, but it sure would make those networks far more susceptible to state control and surveillance. If that’s our future, then it certainly is a dismal one.

Anyway, read Ron Deibert’s Black Code for an interesting exploration of these and other issues. It’s an excellent contribution to the field of Internet policy studies and a book that I’ll be recommending to others for many years to come.


Additional resources:

Other books you should read alongside “Black Code” (links are for my reviews of each book):

My Two Favorite Technology Policy Books of the Past Half-Century https://techliberation.com/2013/07/12/my-two-favorite-technology-policy-books-of-the-past-half-century/ https://techliberation.com/2013/07/12/my-two-favorite-technology-policy-books-of-the-past-half-century/#respond Fri, 12 Jul 2013 15:21:31 +0000 http://techliberation.com/?p=45143

I was honored to be asked by the editors at Reason magazine to be a part of their “Revolutionary Reading” roundup of “The 9 Most Transformative Books of the Last 45 Years.” Reason is celebrating its 45th anniversary and running a wide variety of essays looking back at how liberty has fared over the past half-century. The magazine notes that “Statism has hardly gone away, but the movement to roll it back is stronger than ever.” For this particular feature, Reason’s editors “asked seven libertarians to recommend some of the books in different fields that made [the anti-statist] cultural and intellectual revolution possible.”

When Jesse Walker of Reason first contacted me about contributing my thoughts about which technology policy books made the biggest difference, I told him I knew exactly what my choices would be: Ithiel de Sola Pool’s Technologies of Freedom (1983) and Virginia Postrel’s The Future and Its Enemies (1998). Faithful readers of this blog know all too well how much I love these two books and how I am constantly reminding people of their intellectual importance all these years later. (See, for example, this and this.) All my thinking and writing about tech policy over the past two decades has been shaped by the bold vision and recommendations set forth by Pool and Postrel in these beautiful books.

As I note in my Reason write-up of the books:

The past 45 years have seen remarkable advances in information technology: the Internet, mobile communications, ubiquitous news and entertainment options, and much more. What made these and other innovations possible was a general openness to the unplanned, the unpredictable, and even the uncontrollable. In our willingness to embrace a world of uncertainty and incessant change, we found unparalleled technological abundance. No two books more eloquently captured and celebrated the information age than Ithiel de Sola Pool’s Technologies of Freedom and Virginia Postrel’s The Future and Its Enemies.

And I conclude by noting that “While plenty of tech pundits and academics cling to… stasist thinking today, Pool and Postrel’s books continue to provide beacons for a better world, free from the top-down, technocratic mentality and prescriptions of the past. At least thus far, permissionless innovation has largely trumped the precautionary principle in tech policy. Let’s hope the dynamist vision can hold the line for another 45 years.”

Head over to Reason to read the rest of my essay as well as all the other excellent books that contributors have recommended as part of the symposium.  There are some really great selections in there.

And if you care about the future of technological freedom and human liberty and progress more generally, please do read (or re-read) both Pool and Postrel’s books when you have a chance.  They changed my life and they will change yours, too.

Book Review: Brown & Marsden’s “Regulating Code” https://techliberation.com/2013/06/27/book-review-brown-marsdens-regulating-code/ https://techliberation.com/2013/06/27/book-review-brown-marsdens-regulating-code/#respond Thu, 27 Jun 2013 20:51:52 +0000 http://techliberation.com/?p=45035

Ian Brown and Christopher T. Marsden’s new book, Regulating Code: Good Governance and Better Regulation in the Information Age, will go down as one of the most important Internet policy books of 2013 for two reasons. First, their book offers an excellent overview of how Internet regulation has unfolded on five different fronts: privacy and data protection; copyright; content censorship; social networks and user-generated content issues; and net neutrality regulation. They craft detailed case studies that incorporate important insights about how countries across the globe are dealing with these issues. Second, the authors endorse a specific normative approach to Net governance that they argue is taking hold across these policy arenas. They call their preferred policy paradigm “prosumer law” and it envisions an active role for governments, which they think should pursue “smarter regulation” of code.

In terms of organization, Brown and Marsden’s book follows the same format found in Milton Mueller’s important 2010 book Networks and States: The Global Politics of Internet Governance; both books feature meaty case studies in the middle bookended by chapters that endorse a specific approach to Internet policymaking. (Incidentally, both books were published by MIT Press.) And, also like Mueller’s book, Brown and Marsden’s Regulating Code does a somewhat better job using case studies to explore the forces shaping Internet policy across the globe than it does making the normative case for their preferred approach to these issues.

Thus, for most readers, the primary benefit of reading either book will be to see how the respective authors develop rich portraits of the institutional political economy surrounding various Internet policy issues over the past 10 to 15 years. In fact, of all the books I have read and reviewed in recent years, I cannot think of two titles that have done a better job developing detailed case studies for such a diverse set of issues. For that reason alone, both texts are important resources for those studying ongoing Internet policy developments.

That’s not to say that both books don’t also make a solid case for their preferred policy paradigms; it’s just that the normative elements of the texts are overshadowed by the excellent case studies. As a result, readers are left wanting more detail about what their respective policy paradigms would (or should) mean in practice. Regardless, in the remainder of this review, I’ll discuss Brown and Marsden’s normative approach to digital policy and contrast it with Mueller’s, since the two approaches differ sharply and help frame the policy battles to come on this front.

Governing Cyberspace: Mueller vs. Brown & Marsden

Mueller’s normative goal in Networks and States was to breathe new life into the old cyber-libertarian philosophy that was more prevalent during the Net’s founding era but which has lost favor in recent years. He made the case for a “cyberliberty” movement rooted in what he described as a “denationalized liberalism” vision of Net governance. He argued that “we need to find ways to translate classical liberal rights and freedoms into a governance framework suitable for the global Internet. There can be no cyberliberty without a political movement to define, defend, and institutionalize individual rights and freedoms on a transnational scale.”

I wholeheartedly endorsed that vision in my review of Mueller’s book, even if he was a bit short on the details of how to bring it about. But it is useful to keep Mueller’s paradigm in mind because it provides a nice contrast with the approach Brown and Marsden advocate, which is quite different.

Generally speaking, Brown and Marsden reject most forms of “Internet exceptionalism” and certainly reject the sort of “cyberliberty” ethos that Mueller and I embrace. They instead endorse a fairly broad role for governments in ordering the affairs of cyberspace. In their self-described “prosumer” paradigm, the State is generally viewed as a benevolent actor, well-positioned to guide the course of code development toward supposedly more enlightened ends.

Consistent with the strong focus on European policymaking found throughout the book, the authors are quite enamored with the “co-regulatory” models that have become increasingly prevalent across the continent. Like many other scholars and policy advocates today, they occasionally call for “multi-stakeholderism” as a solution but they do not necessarily mean the sort of truly voluntary, bottom-up multi-stakeholderism of the Net’s early days. Rather, they are usually thinking of multi-stakeholderism as what is essentially pluralistic politics; it’s the government setting the table, inviting the stakeholders to it, and then guiding (or at least “nudging”) policy along the way. “We are convinced that fudging with nudges needs to be reinforced with the reality of regulation and coregulation, in order to enable prosumers to maximize their potential on the broadband Internet,” they say. (p. 187)

Meet the New Boss, Same as the Old Boss?

Thus, despite the new gloss, their “prosumer law” paradigm ends up sounding quite a bit like a rehash of traditional “public interest” law and common carrier regulation, albeit with a new appreciation of just how dynamic markets built on code can be. Indeed, Brown and Marsden repeatedly acknowledge how often law and regulation fail to keep pace with the rapid evolution of digital technology. “Code changes quickly, user adoption more slowly, legal contracting and judicial adaptation to new technologies slower yet, and regulation through legislation slowest of all,” they correctly note (p. xv). This reflects what Larry Downes refers to as the most fundamental “law of disruption” of the digital age: “technology changes exponentially, but social, economic, and legal systems change incrementally.”

At the end of the day, however, that insight doesn’t seem to inform Brown and Marsden’s policy prescriptions all that much. Theirs is a world in which policy tinkering errors will apparently be corrected promptly and efficiently by still more policy tinkering, or “smarter regulation.” Moreover, like many other Internet policy scholars today, they don’t mind regulatory interventions that come early and often since they believe that will help regulators get out ahead of the technological curve and steer markets in preferred directions. “If regulators fail to address regulatory objects at first, then the regulatory object can grow until its technique overwhelms the regulator,” they say (p. 31).

This is the same mentality that is often on display in Tim Wu’s work, which I have been quite critical of here and elsewhere. For example, Wu has advocated informal “agency threats” and the use of “threat regimes” to accomplish policy goals that prove difficult to steer through the formal democratic rulemaking process. As part of his “defense of regulatory threats in particular contexts,” Wu stresses the importance of regulators taking control of fast-moving tech markets early in their life cycles. “Threat regimes,” Wu argues, “are best justified when the industry is undergoing rapid change — under conditions of ‘high uncertainty.’ Highly informal regimes are most useful, that is, when the agency faces a problem in an environment in which facts are highly unclear and evolving. Examples include periods surrounding a newly invented technology or business model, or a practice about which little is known,” Wu concludes.

This is essentially where most of the “co-regulation” schemes that Brown and Marsden favor would take us: Code regulators would take an active role in shaping the evolution of digital technologies and markets early in their life cycles. What are the preferred regulatory mechanisms? Like Wu and many other cyberlaw professors today, Brown and Marsden favor robust interconnection and interoperability mandates bolstered by antitrust actions as well. And, again, they aren’t willing to wait around and let the courts adjudicate these issues in an ex post fashion. “Essential facilities law is a very poor substitute for the active role of prosumer law that we advocate, especially in its Chicago school minimalist phase” (p. 185). In other words, we shouldn’t wait for someone to bring a case and litigate it through the courts when preemptive, proactive regulatory interventions can sagaciously steer us to a superior end.

More specifically, they propose that “competition authorities should impose ex ante interoperability requirements upon dominant social utilities… to minimize network barriers” (p. 190) and they model this on traditional regulatory schemes such as must-carry obligations, API interface disclosure requirements, and other interconnection mandates (such as those imposed on AOL/Time Warner a decade ago to alleviate fears about instant messaging dominance). They also note that “Effective, scalable state regulation often depends on the recruitment of intermediaries as enforcers” to help achieve various policy objectives (p. 170).

The Problem with Interoperability Über Alles

So, in essence, the Brown-Marsden Internet policy paradigm might be thought of as interoperability über alles. Interoperability and interconnection in pursuit of more “open” and “neutral” systems is generally considered an unalloyed good and most everything else is subservient to this objective.

This is a serious policy error and one that I address in great detail in my absurdly long review of John Palfrey and Urs Gasser’s Interop: The Promise and Perils of Highly Interconnected Systems. I’m not going to repeat all 6,500 words of that critique here when you can just click back and read it, but here’s the high level summary: There is no such thing as “optimal interoperability” that can be determined in an a priori fashion. Ongoing marketplace experimentation with technical standards, modes of information production and dissemination, and interoperable information systems is almost always preferable to the artificial foreclosure of this dynamic process through state action. The former allows for better learning and coping mechanisms to develop while also incentivizing the spontaneous, natural evolution of the market and market responses. The latter (regulatory foreclosure of experimentation) limits that potential.

More importantly, when interoperability is treated as sacrosanct and forcibly imposed through top-down regulatory schemes, it will often have many unintended consequences and costs. It can even lock in existing market power and market structures by encouraging users and companies to flock to a single platform instead of trying to innovate around it. (Go back and take a look at how the “Kingsbury Commitment” — the interconnection deal from the early days of the U.S. telecom system — actually allowed AT&T to gain greater control over the industry instead of assisting independent operators.)

Citing Palfrey and Gasser, Brown and Marsden do note that “mandated interoperability is neither necessary in all cases nor necessarily desirable” (p. 32), but they don’t spend as much time as Palfrey and Gasser itemizing these trade-offs and the potential downsides of some interoperability mandates. But what frustrates me about both books is the quasi-religious reverence accorded to interoperability and open standards when such faith is simply not warranted after historical experience is taken into consideration.

Plenty of the best forms of digital innovation today are due to a lack of interoperability and openness. Proprietary systems have produced some of the most exciting devices (iPhone) and content (video games) of modern times. Then again, voluntary interoperable and “open” services and devices thrive, too. The key point here — and one that I develop in far greater detail in my book chapter, “The Case for Internet Optimism, Part 2 – Saving the Net From Its Supporters” — is that the market for digital services is working marvelously and providing us with choices of many different flavors. Innovation continues to unfold rapidly in both directions along the “open” vs. “closed” continuum. (Here are 30 more essays I have written on this topic if you need more proof.)

Generally speaking, we should avoid mandatory interop and openness solutions. We should instead push those approaches and solutions in a truly voluntary, bottom-up fashion. And, more importantly, we should be pushing for outside-the-box solutions of the Schumpeterian (creative destruction / disruptive innovation) variety instead of surrendering so quickly on competition through forced sharing mandates.

The Case for Patience & Policy Restraint

But Brown and Marsden clearly do not subscribe to that sort of Schumpeterian thinking. They think most code markets tip and lock into monopoly in fairly short order and that only wise interventions can rectify that. For example, they claim that Facebook’s “monopoly is now durable,” which will certainly come as a big surprise to the millions of us who do not use it at all. And the story of MySpace’s rapid rise and equally precipitous fall has little bearing on this story, they argue.

But, no matter how you define the “social networking market,” here are two facts about it: First, it is still very, very young. It’s only about a decade old. Second, in that short period of time, we have already witnessed the entire first generation of players fall by the wayside. While the second generation is currently dominated by Facebook, it is by no means alone. Again, millions like me don’t use it at all and get along just fine with other “social networking” technologies, including Twitter, LinkedIn, Google+, and even older tech like email, SMS, and yes, phone calls! Accusations of “monopoly” in this space strain credulity in the extreme. I invite you to read my Mercatus working paper, “The Perils of Classifying Social Media Platforms as Public Utilities,” for a more thorough debunking of this logic. (Note: The final version of that paper will be published in the CommLaw Conspectus shortly.)

Such facts should have a bearing on the debate about regulatory interventions. We continue to witness the power of Schumpeterian rivalry as new and existing players battle in a race for the prize of market power. Brown and Marsden fear that the race is already over in many sectors and that it is time to throw in the towel and get busy regulating. But when I look around at the information technology marketplace today, I am astonished just how radically different it looks from even just a few years ago, and not just in the social media market. I have written extensively about the smartphone marketplace, where innovation continues at a frantic pace. As I noted in my essay here on “Smartphones & Schumpeter,” it’s hard to remember now, but just 6 short years ago:

  • The iPhone and Android had not yet landed.
  • Most of the best-selling phones of 2007 were made by Nokia and Motorola.
  • Feature phones still dominated the market; smartphones were still a luxury (and a clunky luxury at that).
  • There were no app stores, and what “apps” did exist were mostly proprietary and device- or carrier-specific.
  • There was no 4G service.

It’s also easy to forget just how many market analysts and policy wonks were making absurd predictions at the time about how the telecom operators had so much market power that they would crush new innovation without regulation. Instead, in very short order, the market was completely upended in a way that mobile providers never saw coming. There was a huge shift in relative market power flowing from the core of these markets to the fringes, especially to Apple, which wasn’t even a player in that space before the launch of the iPhone.

As I noted in concluding that piece last year, these facts should lead us to believe that this is a healthy, dynamic marketplace in action. Not even Schumpeter could have imagined creative destruction on this scale. (Just look at BlackBerry.) But much the same could be said of many other sectors of the information economy. While it is certainly true that many large players exist, we continue to see a healthy amount of churn in these markets and an astonishing amount of technological innovation.

Public Choice Insights: What History Tells Us

One would hope these realities would have a greater bearing on the policy prescriptions suggested by analysts like Brown and Marsden, but they don’t seem to. Instead, the attitude on display here is that governments can, generally speaking, act wisely and nudge efficiently to correct short-term market hiccups and set us on a better course. But there are strong reasons to question that presumption.

Specifically, what I found most regrettable about Brown and Marsden’s book was the way — like all too many books in this field these days — the authors briefly introduce “public choice” insights and concerns only to summarily dismiss them as unfounded or overblown. (See my review of Brett Frischmann’s book, Infrastructure: The Social Value of Shared Resources, for a more extended discussion of this problem as it pertains not just to infrastructure regulation but to the regulation of all complex industries and technologies.)

Brown and Marsden make it clear that their intentions are pure and that their methods would incorporate the lessons of the past, but they aren’t very interested in dwelling on the long, lamentable history of regulatory failures and capture in the communications and media policy sectors. They do note the dangers of a growing “security-industrial complex” and argue that “commercial actors dominate technical actors in policy debates.” They also say that the “potential for capture by regulated interests, especially large corporate lobbies, is an essential insight” that informs their approach. The problem is that it really doesn’t. They largely ignore those insights and instead imply that, to the extent this is a problem at all, we can build a better breed of bureaucrats going forward who will craft “smarter regulation” that is immune from such pressures. Or, they claim that “multi-stakeholderism” — again, the new, more activist and government-influenced conception of it — can overcome these public choice problems.

A better understanding of power politics that is informed by the wisdom of the ages would instead counsel that minimizing the politicization of technology markets is the better remedy. Capture and cronyism in communications and media markets have always grown in direct proportion to the overall scope of law governing those sectors. (I invite you to read all the troubling examples of this that Brent Skorup and I have documented in our new 72-page working paper, “A History of Cronyism and Capture in the Information Technology Sector.” Warning: It makes for miserable reading but proves beyond any doubt that there is something to public choice concerns.)

To be clear, it’s not that I believe that “market failures” or “code failures” never occur, rather, as I noted in this debate with Larry Lessig, it’s that such problems are typically “better addressed by voluntary, spontaneous, bottom-up, marketplace responses than by coerced, top-down, governmental solutions. Moreover, the decisive advantage of the market-driven approach to correcting code failure comes down to the rapidity and nimbleness of those response(s).” It’s not just that traditional regulatory remedies cannot keep pace with code markets, it’s that those attempting to craft the remedies do not possess the requisite knowledge needed to know how to steer us down a superior path. (See my essay, “Antitrust & Innovation in the New Economy: The Problem with the Static Equilibrium Mindset,” for more on that point.)

Regardless, at a minimum, I expect scholars to take seriously the very real public choice problems at work in this arena. You cannot talk about the history of these sectors without acknowledging the horrifically anti-consumer policies that were often put in place at the request of one industry or another to shield themselves from disruptive innovation. No amount of wishful thinking about “prosumer” policies will change these grim political realities. Only by minimizing chances to politicize technology markets and decisions can we overcome these problems.

Conclusion

For those of us who prefer to focus on freeing code, Brown and Marsden’s Regulating Code is another reminder that liberty is increasingly a loser in Internet policy circles these days. Milton Mueller’s dream of decentralized, denationalized liberalism seems more and more unlikely as armies of policymakers, regulators, special interests, regulatory advocates, academics, and others all line up and plead for their pet interest or cause to be satisfied through pure power politics. No matter what you call it — fudging, nudging, coregulation, smart regulation, multistakeholderism, prosumer law, or whatever else — there is no escaping the fact that we are witnessing the complete politicization of almost every facet of code creation and digital decisionmaking today.

Despite my deep reservations about a more politicized cyberspace, Brown and Marsden’s book is an important text because it is one of the most sophisticated articulations and defenses of it to date. Their book also helps us better understand the rapidly developing institutional political economy of Internet regulation in both broad and narrow policy contexts. Thus, it is worth your time and attention even if, like me, you are disheartened to be reading yet another Net policy book that ultimately endorses mandates over markets as the primary modus operandi of the information age.


Additional Resources about the book:

Other books you should read alongside “Regulating Code” (links are for my reviews of each):

Richard Brandt on Jeff Bezos and amazon.com https://techliberation.com/2013/06/25/richard-brandt/ https://techliberation.com/2013/06/25/richard-brandt/#respond Tue, 25 Jun 2013 10:00:04 +0000 http://techliberation.com/?p=45008

Richard Brandt, technology journalist and author, discusses his new book, One Click: Jeff Bezos and the Rise of Amazon.Com. Brandt discusses Bezos’ entrepreneurial drive, his business philosophy, and how he’s grown Amazon to become the biggest retailer in the world. This episode also covers the biggest mistake Bezos ever made, how Amazon uses patent laws to its advantage, whether Amazon will soon become a publishing house, Bezos’ idea for privately-funded space exploration and his plan to revolutionize technology with quantum computing.

Download

Related Links
