Philosophy & Cyber-Libertarianism

What follows is a response to Michael Sacasas, who recently posted an interesting short essay on his blog The Frailest Thing, entitled “10 Points of Unsolicited Advice for Tech Writers.” As with everything Michael writes, it is very much worth reading and offers much useful advice about how to be a more thoughtful tech writer. Even though I occasionally find myself disagreeing with Michael’s perspectives, I always learn a great deal from his writing and appreciate the tone and approach he uses in all his work. Anyway, you’ll need to bounce over to his site and read his essay first before my response will make sense.

______________________________

Michael:

Lots of good advice here. I think tech scholars and pundits of all dispositions would be wise to follow your recommendations. But let me offer some friendly pushback on points #2 & #10, because I spend much of my time thinking and writing about those very things.

In those two recommendations you say that those who write about technology “[should] not cite apparent historical parallels to contemporary concerns about technology as if they invalidated those concerns. That people before us experienced similar problems does not mean that they magically cease being problems today.” And you also warn “That people eventually acclimate to changes precipitated by the advent of a new technology does not prove that the changes were inconsequential or benign.”

I think these two recommendations are born of a certain frustration with the tenor of much modern technology writing: the sort of Pollyanna-ish writing that too casually dismisses legitimate concerns about technological disruption and usually ends with the insulting phrase, “just get over it.” Such writing and punditry is rarely helpful, and you and others have rightly pointed out the deficiencies in that approach.

That being said, I believe it would be highly unfortunate to dismiss any inquiry into the nature of individual and societal acclimation to technological change. Because adaptation obviously does happen! Certainly there must be much we can learn from it. In particular, what I hope to better understand is the process by which we humans have again and again figured out how to assimilate new technologies into our lives despite how much those technologies “unsettled” well-established personal, social, cultural, and legal norms. Continue reading →

I am pleased to announce the release of my latest book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom.” It’s a short manifesto (just under 100 pages) that condenses — and attempts to make more accessible — arguments that I have developed in various law review articles, working papers, and blog posts over the past few years. I have two goals with this book.

First, I attempt to show how the central fault line in almost all modern technology policy debates revolves around “the permission question,” which asks: Must the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? How that question is answered depends on the disposition one adopts toward new inventions. Two conflicting attitudes are evident.

One disposition is known as the “precautionary principle.” Generally speaking, it refers to the belief that new innovations should be curtailed or disallowed until their developers can prove that they will not cause any harm to individuals, groups, specific entities, cultural norms, or various existing laws and traditions.

The other vision can be labeled “permissionless innovation.” It refers to the notion that experimentation with new technologies and business models should generally be permitted by default. Unless a compelling case can be made that a new invention will bring serious harm to society, innovation should be allowed to continue unabated and problems, if they develop at all, can be addressed later.

I argue that we are witnessing a grand clash of visions between these two mindsets in almost all major technology policy discussions today. Continue reading →

When Google announced it was acquiring digital thermostat company Nest yesterday, it set off another round of privacy and security-related technopanic talk on Twitter and elsewhere. Fear and loathing seemed to be the order of the day. It seems that each new product launch or business announcement in the “Internet of Things” space is destined to set off another round of Chicken Little hand-wringing. We are typically told that the digital sky will soon fall on our collective heads unless we act preemptively to somehow head off some sort of pending privacy or security apocalypse.

Meanwhile, however, a whole heck of a lot of people are demanding more and more of these technologies, and American entrepreneurs are already engaged in heated competition with European and Asian rivals to be at the forefront of the next round of Internet innovation to satisfy those consumer demands. So, how is this going to play out?

This gets to what is becoming the defining policy issue of our time, not just for the Internet but for technology policy more generally: To what extent should the creators of new technologies seek the blessing of public officials before they develop and deploy their innovations? We can think of this as “the permission question,” and it is creating a massive rift between those who desire more preemptive, precautionary safeguards for a variety of reasons (safety, security, privacy, copyright, etc.) and those of us who continue to believe that permissionless innovation should be the guiding ethos of our age. The chasm between these two worldviews is only going to deepen in coming years as the pace of innovation around new technologies (the Internet of Things, wearable tech, driverless cars, 3D printing, commercial drones, etc.) continues to accelerate.

Sarah Kessler of Fast Company was kind enough to call me last night and ask for some general comments about Google buying Nest and she also sought out the comments of Marc Rotenberg of EPIC about privacy in the Internet of Things era more generally. Our comments provide a useful example of the divide between these two worldviews and foreshadow debates to come: Continue reading →

With each booth I pass and presentation I listen to at the 2014 International Consumer Electronics Show (CES), it becomes increasingly evident that the “Internet of Things” era has arrived. In just a few short years, the Internet of Things (IoT) has gone from industry buzzword to marketplace reality. Countless new IoT devices are on display throughout the halls of the Las Vegas Convention Center this week, including various wearable technologies, smart appliances, remote monitoring services, autonomous vehicles, and much more.

This isn’t vaporware; these are devices or services that are already on the market or will launch shortly. Some will fail, of course, just as many other earlier technologies on display at past CES shows didn’t pan out. But many of these IoT technologies will succeed, driven by growing consumer demand for highly personalized, ubiquitous, and instantaneous services.

But will policymakers let the Internet of Things revolution continue or will they stop it dead in its tracks? Interestingly, not too many people out here in Vegas at the CES seem all that worried about the latter outcome. Indeed, what I find most striking about the conversation out here at CES this week versus the one about IoT that has been taking place in Washington over the past year is that there is a large and growing disconnect between consumers and policymakers about what the Internet of Things means for the future.

When every device has a sensor, a chip, and some sort of networking capability, amazing opportunities become available to consumers. And that’s what has them so excited and ready to embrace these new technologies. But those same capabilities are exactly what raise the blood pressure of many policymakers and policy activists who fear the safety, security, or privacy-related problems that might creep up in a world filled with such technologies.

But at least so far, most consumers don’t seem to share the same worries. Continue reading →

How do DC and SF think about the future? Are their visions of how to promote, and adapt to, technological change compatible? Or are America’s policymakers fundamentally in conflict with its innovators? Can technology ultimately trump politics?

In the near-term, are traditional left/right divides breaking down? What are the real fault lines in technology policy? Where might a divided Congress reach consensus on tech policy issues like privacy, immigration, copyright, censorship, Internet freedom and biotech?

For answers and more questions, join moderator Declan McCullagh (Chief Political Correspondent for CNET), and a panel of technology policy experts: Berin Szoka (President, TechFreedom), Larry Downes (author, Laws of Disruption), and Mike McGeary (Co-Founder and Chief Political Strategist, Engine Advocacy). This event will include a complimentary lunch and is co-sponsored by TechFreedom, Reason Foundation, and the Charles Koch Institute.

Continue reading →

In a recent essay here, “On the Line between Technology Ethics vs. Technology Policy,” I made the argument that “We cannot possibly plan for all the ‘bad butterfly-effects’ that might occur, and attempts to do so will result in significant sacrifices in terms of social and economic liberty.” It was a response to a problem I see at work in many tech policy debates today: With increasing regularity, scholars, activists, and policymakers are conjuring up a seemingly endless parade of horribles that will befall humanity unless “steps are taken” to preemptively head off all the hypothetical harms they can imagine. (This week’s latest examples involve the two hottest technopanic topics du jour: the Internet of Things and commercial delivery drones. Fear and loathing, and plenty of “threat inflation,” are on vivid display.)

I’ve written about this phenomenon at even greater length in my recent law review article, “Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle,” as well as in two lengthy blog posts asking the questions, “Who Really Believes in ‘Permissionless Innovation’?” and “What Does It Mean to ‘Have a Conversation’ about a New Technology?” The key point I try to get across in those essays is that letting such “precautionary principle” thinking guide policy poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity. If public policy is guided at every turn by the precautionary mindset then innovation becomes impossible because of fear of the unknown; hypothetical worst-case scenarios trump all other considerations. Social learning and economic opportunities become far less likely under such a regime. In practical terms, it means fewer services, lower quality goods, higher prices, diminished economic growth, and a decline in the overall standard of living.

Indeed, if we live in constant fear of the future and become paralyzed by every boogeyman scenario that our creative little heads can conjure up, then we’re bound to end up looking as silly as this classic 2005 parody from The Onion, “Everything That Can Go Wrong Listed.” Continue reading →

What works well as an ethical directive might not work equally well as a policy prescription. Stated differently, what one ought to do in certain situations should not always be synonymous with what one must do by force of law.

I’m going to relate this lesson to tech policy debates in a moment, but let’s first think of an example of how this lesson applies more generally. Consider the Ten Commandments. Some of them make excellent ethical guidelines (especially the stuff about not coveting your neighbor’s house, wife, or possessions). But most of us would agree that, in a free and tolerant society, only two of the Ten Commandments make good law: Thou shalt not kill and Thou shalt not steal.

In other words, not every sin should be a crime. Perhaps some should be; but most should not. Taking this out of the realm of religion and into the world of moral philosophy, we can apply the lesson more generally as: Not every wise ethical principle makes for wise public policy. Continue reading →

Few modern intellectuals gave more serious thought to forecasting the future than Herman Kahn. He wrote several books and essays imagining what the future might look like. But he was also a profoundly humble man who understood the limits of forecasting the future. On that point, I am reminded of my favorite Herman Kahn quote:

History is likely to write scenarios that most observers would find implausible not only prospectively but sometimes, even in retrospect. Many sequences of events seem plausible now only because they have actually occurred; a man who knew no history might not believe any. Future events may not be drawn from the restricted list of those we have learned are possible; we should expect to go on being surprised.[1]

I have always loved that phrase, “a man who knew no history might not believe any.” Indeed, sometimes the truth (how history actually unfolds) really is stranger than fiction (or the hypothetical forecasts that came before it).

This insight has profound ramifications for public policy and efforts to “plan progress,” something that typically ends badly. Continue reading →

In June, The Guardian ran a groundbreaking story that divulged a top secret court order forcing Verizon to hand over to the National Security Agency (NSA) all of its subscribers’ telephony metadata—including the phone numbers of both parties to any call involving a person in the United States and the time and duration of each call—on a daily basis. Although media outlets have published several articles in recent years disclosing various aspects of the NSA’s domestic surveillance, the leaked court order obtained by The Guardian revealed hard evidence that NSA snooping goes far beyond suspected terrorists and foreign intelligence agents—instead, the agency routinely and indiscriminately targets private information about all Americans who use a major U.S. phone company.

It was only a matter of time before the NSA’s surveillance program—which is purportedly authorized by Section 215 of the USA PATRIOT Act (50 U.S.C. § 1861)—faced a challenge in federal court. The Electronic Privacy Information Center fired the first salvo on July 8, when the group filed a petition urging the U.S. Supreme Court to issue a writ of mandamus nullifying the court orders authorizing the NSA to coerce customer data from phone companies. But as Tim Lee of The Washington Post pointed out in a recent essay, the nation’s highest Court has never before reviewed a decision of the Foreign Intelligence Surveillance Act (FISA) court, which is responsible for issuing the top secret court order authorizing the NSA’s surveillance program.

Today, another crucial lawsuit challenging the NSA’s domestic surveillance program was brought by a diverse coalition of nineteen public interest groups, religious organizations, and other associations. The coalition, represented by the Electronic Frontier Foundation, includes TechFreedom, Human Rights Watch, Greenpeace, and the Bill of Rights Defense Committee, among many other groups. The lawsuit, filed in federal district court in Northern California, argues that the NSA’s program—aptly described as the “Associational Tracking Program” in the complaint—violates the First, Fourth, and Fifth Amendments to the Constitution, along with the Foreign Intelligence Surveillance Act.

Continue reading →

I was honored to be asked by the editors at Reason magazine to be a part of their “Revolutionary Reading” roundup of “The 9 Most Transformative Books of the Last 45 Years.” Reason is celebrating its 45th anniversary and running a wide variety of essays looking back at how liberty has fared over the past half-century. The magazine notes that “Statism has hardly gone away, but the movement to roll it back is stronger than ever.” For this particular feature, Reason’s editors “asked seven libertarians to recommend some of the books in different fields that made [the anti-statist] cultural and intellectual revolution possible.”

When Jesse Walker of Reason first contacted me about contributing my thoughts about which technology policy books made the biggest difference, I told him I knew exactly what my choices would be: Ithiel de Sola Pool’s Technologies of Freedom (1983) and Virginia Postrel’s The Future and Its Enemies (1998). Faithful readers of this blog know all too well how much I love these two books and how I am constantly reminding people of their intellectual importance all these years later. (See, for example, this and this.) All my thinking and writing about tech policy over the past two decades has been shaped by the bold vision and recommendations set forth by Pool and Postrel in these beautiful books.

As I note in my Reason write-up of the books: Continue reading →