Philosophy & Cyber-Libertarianism

Few people have been more tireless in their defense of the notion of “permissionless innovation” than Wall Street Journal columnist L. Gordon Crovitz. In his weekly “Information Age” column for the Journal (which appears each Monday), Crovitz has consistently sounded the alarm regarding new threats to Internet freedom, technological freedom, and individual liberties. It was, therefore, a great honor for me to wake up Monday morning and read his latest post, “The End of the Permissionless Web,” which discussed my new book “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.”

“The first generation of the Internet did not go well for regulators,” Crovitz begins his column. “Despite early proposals to register websites and require government approval for business practices, the Internet in the U.S. developed largely without bureaucratic control and became an unstoppable engine of innovation and economic growth.” Unfortunately, he correctly notes:

Regulators don’t plan to make the same mistake with the next generation of innovations. Bureaucrats and prosecutors are moving in to undermine services that use the Internet in new ways to offer everything from getting a taxi to using self-driving cars to finding a place to stay.

This is exactly why I penned my little manifesto. As Crovitz goes on to note in his essay, new regulatory threats to both existing and emerging technologies are popping up on an almost daily basis. He highlights current battles over Uber, Airbnb, 23andMe, commercial drones, and more. And his previous columns have discussed many other efforts to “permission” innovation and force heavy-handed, top-down regulatory schemes on fast-paced and rapidly evolving sectors and technologies. Continue reading →

[Last updated July 2021.]

I spend a lot of time reading books and essays about technology; more specifically, books and essays about technology history and criticism. Yet, I am often struck by how few of the authors of these works even bother defining what they mean by “technology.” I find that frustrating because, if you are going to make an attempt to either study or critique a particular technology or technological practice or development, then you probably should take the time to tell us how broadly or narrowly you are defining the term “technology” or “technological process.”

Of course, it’s not easy. “In fact, technology is a word we use all of the time, and ordinarily it seems to work well enough as a shorthand, catch-all sort of word,” notes the always-insightful Michael Sacasas in his essay “Traditions of Technological Criticism.” “That same sometimes useful quality, however, makes it inadequate and counter-productive in situations that call for more precise terminology,” he says.

Quite right, and for a more detailed and critical discussion of how earlier scholars, historians, and intellectuals have defined or thought about the term “technology,” you’ll want to check out Michael’s other recent essay, “What Are We Talking About When We Talk About Technology?” which preceded the one cited above. We don’t always agree on things — in fact, I am quite certain that most of my comparatively amateurish work must make his blood boil at times! — but you won’t find a more thoughtful technology scholar alive today than Michael Sacasas. If you’re serious about studying technology history and criticism, you should follow his blog and check out his book, The Tourist and The Pilgrim: Essays on Life and Technology in the Digital Age, which is a collection of some of his finest essays.

Anyway, for what it’s worth, I figured I would create this post to list some of the more interesting definitions of “technology” that I have uncovered in my own research. I suspect I will add to it in coming months and years, so please feel free to suggest other additions since I would like this to be a useful resource to others. Continue reading →

What follows is a response to Michael Sacasas, who recently posted an interesting short essay on his blog The Frailest Thing, entitled, “10 Points of Unsolicited Advice for Tech Writers.” As with everything Michael writes, it is very much worth reading and offers a great deal of useful advice about how to be a more thoughtful tech writer. Even though I occasionally find myself disagreeing with Michael’s perspectives, I always learn a great deal from his writing and appreciate the tone and approach he uses in all his work. Anyway, you’ll need to bounce over to his site and read his essay first before my response will make sense.

______________________________

Michael:

Lots of good advice here. I think tech scholars and pundits of all dispositions would be wise to follow your recommendations. But let me offer some friendly pushback on points #2 & #10, because I spend much of my time thinking and writing about those very things.

In those two recommendations you say that those who write about technology “[should] not cite apparent historical parallels to contemporary concerns about technology as if they invalidated those concerns. That people before us experienced similar problems does not mean that they magically cease being problems today.” And you also warn “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.”

I think these two recommendations are born of a certain frustration with the tenor of much modern technology writing; the sort of Pollyanna-ish writing that too casually dismisses legitimate concerns about technological disruptions and usually ends with the insulting phrase, “just get over it.” Such writing and punditry is rarely helpful, and you and others have rightly pointed out the deficiencies in that approach.

That being said, I believe it would be highly unfortunate to dismiss any inquiry into the nature of individual and societal acclimation to technological change. Because adaptation obviously does happen! Certainly there must be much we can learn from it. In particular, what I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” well-established personal, social, cultural, and legal norms. Continue reading →

I am pleased to announce the release of my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” It’s a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harms to individuals, groups, specific entities, cultural norms, or various existing laws, norms, or traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today. Continue reading →

When Google announced it was acquiring digital thermostat company Nest yesterday, it set off another round of privacy and security-related technopanic talk on Twitter and elsewhere. Fear and loathing seemed to be the order of the day. It seems that each new product launch or business announcement in the “Internet of Things” space is destined to set off another round of Chicken Little hand-wringing. We are typically told that the digital sky will soon fall on our collective heads unless we act preemptively to somehow head off some sort of pending privacy or security apocalypse.

Meanwhile, however, a whole heck of a lot of people are demanding more and more of these technologies, and American entrepreneurs are already engaged in heated competition with European and Asian rivals to be at the forefront of the next round of Internet innovation to satisfy those consumer demands. So, how is this going to play out?

This gets to what is becoming the defining policy issue of our time, not just for the Internet but for technology policy more generally: To what extent should the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? We can think of this as “the permission question,” and it is creating a massive rift between those who desire more preemptive, precautionary safeguards for a variety of reasons (safety, security, privacy, copyright, etc.) and those of us who continue to believe that permissionless innovation should be the guiding ethos of our age. The chasm between these two worldviews is only going to deepen in coming years as the pace of innovation around new technologies (the Internet of Things, wearable tech, driverless cars, 3D printing, commercial drones, etc.) continues to accelerate.

Sarah Kessler of Fast Company was kind enough to call me last night and ask for some general comments about Google buying Nest and she also sought out the comments of Marc Rotenberg of EPIC about privacy in the Internet of Things era more generally. Our comments provide a useful example of the divide between these two worldviews and foreshadow debates to come: Continue reading →

With each booth I pass and presentation I listen to at the 2014 International Consumer Electronics Show (CES), it becomes increasingly evident that the “Internet of Things” era has arrived. In just a few short years, the Internet of Things (IoT) has gone from industry buzzword to marketplace reality. Countless new IoT devices are on display throughout the halls of the Las Vegas Convention Center this week, including various wearable technologies, smart appliances, remote monitoring services, autonomous vehicles, and much more.

This isn’t vaporware; these are devices or services that are already on the market or will launch shortly. Some will fail, of course, just as many other earlier technologies on display at past CES shows didn’t pan out. But many of these IoT technologies will succeed, driven by growing consumer demand for highly personalized, ubiquitous, and instantaneous services.

But will policymakers let the Internet of Things revolution continue or will they stop it dead in its tracks? Interestingly, not too many people out here in Vegas at the CES seem all that worried about the latter outcome. Indeed, what I find most striking about the conversation out here at CES this week versus the one about IoT that has been taking place in Washington over the past year is that there is a large and growing disconnect between consumers and policymakers about what the Internet of Things means for the future.

When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers. And that’s what has them so excited and ready to embrace these new technologies. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.

But at least so far, most consumers don’t seem to share the same worries. Continue reading →

How do DC and SF think about the future? Are their visions of how to promote, and adapt to, technological change compatible? Or are America’s policymakers fundamentally in conflict with its innovators? Can technology ultimately trump politics?

In the near-term, are traditional left/right divides breaking down? What are the real fault lines in technology policy? Where might a divided Congress reach consensus on tech policy issues like privacy, immigration, copyright, censorship, Internet freedom and biotech?

For answers and more questions, join moderator Declan McCullagh (Chief Political Correspondent for CNET), and a panel of technology policy experts: Berin Szoka (President, TechFreedom), Larry Downes (author, Laws of Disruption), and Mike McGeary (Co-Founder and Chief Political Strategist, Engine Advocacy). This event will include a complimentary lunch and is co-sponsored by TechFreedom, Reason Foundation, and the Charles Koch Institute.

Continue reading →

In a recent essay here, “On the Line between Technology Ethics vs. Technology Policy,” I made the argument that “We cannot possibly plan for all the ‘bad butterfly-effects’ that might occur, and attempts to do so will result in significant sacrifices in terms of social and economic liberty.” It was a response to a problem I see at work in many tech policy debates today: With increasing regularity, scholars, activists, and policymakers are conjuring up a seemingly endless parade of horribles that will befall humanity unless “steps are taken” to preemptively head off all the hypothetical harms they can imagine. (This week’s latest examples involve the two hottest technopanic topics du jour: the Internet of Things and commercial delivery drones. Fear and loathing, and plenty of “threat inflation,” are on vivid display.)

I’ve written about this phenomenon at even greater length in my recent law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as in two lengthy blog posts asking the questions, “Who Really Believes in ‘Permissionless Innovation’?” and “What Does It Mean to ‘Have a Conversation’ about a New Technology?” The key point I try to get across in those essays is that letting such “precautionary principle” thinking guide policy poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity. If public policy is guided at every turn by the precautionary mindset, then innovation becomes impossible because of fear of the unknown; hypothetical worst-case scenarios trump all other considerations. Social learning and economic opportunities become far less likely under such a regime. In practical terms, it means fewer services, lower-quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living.

Indeed, if we live in constant fear of the future and become paralyzed by every boogeyman scenario that our creative little heads can conjure up, then we’re bound to end up looking as silly as this classic 2005 parody from The Onion, “Everything That Can Go Wrong Listed.” Continue reading →

What works well as an ethical directive might not work equally well as a policy prescription. Stated differently, what one ought to do in certain situations should not always be synonymous with what one must do by force of law.

I’m going to relate this lesson to tech policy debates in a moment, but let’s first think of an example of how this lesson applies more generally. Consider the Ten Commandments. Some of them make excellent ethical guidelines (especially the stuff about not coveting your neighbor’s house, wife, or possessions). But most of us would agree that, in a free and tolerant society, only two of the Ten Commandments make good law: Thou shalt not kill and Thou shalt not steal.

In other words, not every sin should be a crime. Perhaps some should be; but most should not. Taking this out of the realm of religion and into the world of moral philosophy, we can apply the lesson more generally as: Not every wise ethical principle makes for wise public policy. Continue reading →

Few modern intellectuals gave more serious thought to forecasting the future than Herman Kahn. He wrote several books and essays imagining what the future might look like. But he was also a profoundly humble man who understood the limits of such forecasting. On that point, I am reminded of my favorite Herman Kahn quote:

History is likely to write scenarios that most observers would find implausible not only prospectively but sometimes, even in retrospect. Many sequences of events seem plausible now only because they have actually occurred; a man who knew no history might not believe any. Future events may not be drawn from the restricted list of those we have learned are possible; we should expect to go on being surprised.[1]

I have always loved that phrase, “a man who knew no history might not believe any.” Indeed, sometimes the truth (how history actually unfolds) really is stranger than fiction (or the hypothetical forecasts that came before it).

This insight has profound ramifications for public policy and efforts to “plan progress,” something that typically ends badly. Continue reading →