Technopanics & the Precautionary Principle

Here’s a new Federalist Society Regulatory Transparency “Tech Roundup” podcast about driverless cars, artificial intelligence and the growth of “soft law” governance for both. The 34-minute podcast features a conversation between Caleb Watney and me about new Trump Administration AI guidelines as well as the Department of Transportation’s new “Version 4.0” guidance for automated vehicles.

This podcast builds on my recent essay, “Trump’s AI Framework & the Future of Emerging Tech Governance” as well as an earlier law review article, “Soft Law for Hard Problems: The Governance of Emerging Technologies in an Uncertain Future.”

In a new essay for the Mercatus Bridge, I ask, “How Many Lives Are Lost Due to the Precautionary Principle?” The essay builds on two recent case studies of how the precautionary principle can result in unnecessary suffering and deaths. The first case study involves the Japanese government’s decision in 2011 to entirely abandon nuclear energy following the Fukushima Daiichi nuclear accident. The second involves Golden Rice, a form of rice that was genetically engineered to contain beta-carotene, which helps combat vitamin A deficiency. Anti-GMO resistance among environmental activists and regulatory officials held up the diffusion of this miracle food. New reports and books now document how these precautionary decisions diminished human welfare instead of improving it. I encourage you to jump over to the Bridge and read the entire story.

I concluded the essay by noting, “It is time to reject the simplistic logic of the precautionary principle and move toward a more rational, balanced approach to the governance of technologies. Our lives and well-being depend upon it.” Some read that as a complete rejection of all preemptive regulation. I certainly was not arguing that, so let me clarify a few things. Continue reading →

The endless apocalyptic rhetoric surrounding Net Neutrality and many other tech policy debates proves there’s no downside to gloom-and-doomism as a rhetorical strategy. Being a techno-Jeremiah nets one enormous media exposure, and even when such a person has been shown to be laughably wrong, the press comes back for more. Not only is there no penalty for hyper-pessimistic punditry, but the press actually furthers the cause of such “fear entrepreneurs” by repeatedly showering them with attention and letting them double down on their doomsday-ism. Bad news sells, for both the pundit and the press.

But what is most remarkable is that the press continues to label these preachers of the techno-apocalypse as “experts” despite a track record of failed predictions. I suppose it’s because, despite all the failed predictions, they are viewed as thoughtful & well-intentioned. It is another reminder that John Stuart Mill’s 1828 observation still holds true today: “I have observed that not the man who hopes when others despair, but the man who despairs when others hope, is admired by a large class of persons as a sage.”

CollegeHumor has created this amazing video, “Black Mirror Episodes from Medieval Times,” which is a fun parody of the relentless dystopianism of the Netflix show “Black Mirror.” If you haven’t watched Black Mirror, I encourage you to do so. It’s both great fun and ridiculously bleak and over-the-top in how it depicts modern or future technology destroying all that is good on God’s green earth.

The CollegeHumor team picks up on that and rewinds the clock about 1,000 years to imagine how Black Mirror might have played out on a stage during the medieval period. The actors do quick skits showing how books become sentient, plows dig holes to Hell and unleash the devil, crossbows destroy the dexterity of archers, and labor-saving yokes divert people from godly pursuits. As one of the audience members says after watching all the episodes, “technology will truly be the ruin of us all!” That’s generally the message not only of Black Mirror, but of the vast majority of modern science fiction writing about technology (and a huge chunk of popular non-fiction writing, too).

Continue reading →

I have been covering telecom and Internet policy for almost 30 years now. During much of that time, which included a nine-year stint at the Heritage Foundation, I have interacted with conservatives on various policy issues and often worked very closely with them to advance certain reforms.

If I divided my time in Tech Policy Land into two big chunks, I’d say the biggest tech-related policy issue for conservatives during the first 15 years I was in the business (roughly 1990–2005) was preventing the resurrection of the so-called Fairness Doctrine. And the biggest issue during the second 15-year period (roughly 2005–present) was stopping the imposition of “Net neutrality” mandates on the Internet. In both cases, conservatives vociferously blasted the notion that unelected government bureaucrats should sit in judgment of what constituted “fairness” in media or “neutrality” online.

Many conservatives are suddenly changing their tune, however. President Trump and Sen. Ted Cruz, for example, have been increasingly critical of both traditional media and new tech companies in various public statements and have suggested an openness to increased regulation. The President has gone after old and new media outlets alike, while Sen. Cruz (along with others like Sen. Lindsey Graham) has suggested during congressional hearings that increased oversight of social media platforms is needed, including potential antitrust action.

Meanwhile, during his short time in office, Sen. Josh Hawley (R-Mo.) has become one of the most vocal Internet critics on the Right. In a shockingly worded USA Today editorial in late May, Hawley said that “social media wastes our time and resources” and is “a field of little productive value” that has only “given us an addiction economy.” He even referred to these sites as “parasites” and blamed them for a long list of social problems, leading him to suggest that “we’d be better off if Facebook disappeared” along with various other sites and services.

Hawley’s moral panic over social media has now bubbled over into a regulatory crusade that would unleash federal bureaucrats in an attempt to dictate “fair” speech on the Internet. He has introduced an astonishing piece of legislation aimed at undoing the liability protections that Internet providers rely upon to provide open platforms for speech and commerce. If Hawley’s absurdly misnamed new “Ending Support for Internet Censorship Act” is implemented, it would essentially combine the core elements of the Fairness Doctrine and Net Neutrality to create a massive new regulatory regime for the Internet. Continue reading →

[This essay originally appeared on the AIER blog on May 28, 2019. USA TODAY also ran a shorter version of this essay as a letter to the editor on June 2, 2019.]

In a hotly worded USA Today op-ed last week, Senator Josh Hawley (R-Missouri) railed against the social media sites Facebook, Instagram, and Twitter. He argued that “social media wastes our time and resources” and is “a field of little productive value” that has only “given us an addiction economy.” Sen. Hawley refers to these sites as “parasites” and blames them for a litany of social problems (including an unproven link to increased suicide), leading him to declare that “we’d be better off if Facebook disappeared.”

As far as moral panics go, Sen. Hawley’s will go down as one for the ages. Politicians have always castigated new technologies, media platforms, and content for supposedly corrupting the youth of their generation. But Sen. Hawley’s inflammatory rhetoric and proposals are something we haven’t seen in quite some time.

He sounds like those fire-breathing politicians and pundits of the past century who vociferously protested everything from comic books to cable television, the waltz to the Walkman, and rock-and-roll to rap music. In order to save the youth of America, many past critics said, we must destroy the media or media platforms they are supposedly addicted to. That is exactly what Sen. Hawley would have us do to today’s leading media platforms because, in his opinion, they “do our country more harm than good.”

We have to hope that Sen. Hawley is no more successful than past critics and politicians who wanted to take these choices away from the public. Paternalistic politicians should not be dictating content choices for the rest of us or destroying technologies and platforms that millions of people benefit from. Continue reading →

I (Eye), Robot?

[Originally published on the Mercatus Bridge blog on May 7, 2019.]

I became a little bit more of a cyborg this month with the addition of two new eyes—eye lenses, actually. Before I had even turned 50, the old lenses that Mother Nature gave me were already failing due to cataracts. But after having two operations this past month and getting artificial lenses installed, I am seeing clearly again thanks to the continuing miracles of modern medical technology.

Cataracts can be extraordinarily debilitating. One day you can see the world clearly; the next, you wake up struggling to see through a cloudy ocular soup. It is like looking through a piece of cellophane wrap or a continuously unfocused camera.

If you depend on your eyes to make a living, as most of us do, then cataracts make it a daily struggle to get even basic things done. I spend most of my time each workday reading and writing. Once the cataracts hit, I had to purchase a half-dozen pairs of strong reading glasses and spread them out all over the place: in my office, house, car, gym bag, and so on. Without them, I was helpless.

Reading is especially difficult in dimly lit environments, and even with strong glasses you can forget about making out the fine print on anything. Every pillbox becomes a frightening adventure. I invested in a powerful magnifying glass to make sure I didn’t end up ingesting the wrong things.

For those afflicted with particularly bad cataracts, it becomes extraordinarily risky to drive or operate machinery. More mundane things—watching TV, tossing a ball with your kid, reading a menu at many restaurants, looking at art in a gallery—also become frustrating. Continue reading →

It was my great pleasure to recently join Paul Matzko and Will Duffield on the Building Tomorrow podcast to discuss some of the themes in my last book and my forthcoming one. During our 50-minute conversation, which you can listen to here, we discussed:

  • the “pacing problem” and how it complicates technological governance efforts;
  • the steady rise of “innovation arbitrage” and medical tourism across the globe;
  • the continued growth of “evasive entrepreneurialism” (i.e., efforts to evade traditional laws & regs while innovating);
  • new forms of “technological civil disobedience”;
  • the rapid expansion of “soft law” governance mechanisms as a response to these challenges; and
  • craft beer bootlegging tips!  (Seriously, I move a lot of beer in the underground barter markets).

Bounce over to the Building Tomorrow site and give the show a listen. Fun chat.

Contemporary tech criticism displays an anti-nostalgia. Instead of reverence for the past, anxiety about the future abounds. In these visions, the future is imagined as a strange, foreign land, beset with problems. And yet, to quote that old adage, tomorrow is the visitor that is always coming but never arrives. The future never arrives because we are assembling it today.

In tech policy, the distance between the now and the future finds its hook in the pacing problem, a term describing the mismatch between advancing technologies and society’s efforts to cope with them. Vivek Wadhwa explained, “We haven’t come to grips with what is ethical, let alone with what the laws should be, in relation to technologies such as social media.” In The Laws of Disruption, Larry Downes explained the pacing problem like this: “technology changes exponentially, but social, economic, and legal systems change incrementally.” Or, as Adam Thierer wondered, “What happens when technological innovation outpaces the ability of laws and regulations to keep up?”

Here are three short responses. Continue reading →

To read Cathy O’Neil’s Weapons of Math Destruction (2016) is to encounter another in a line of progressive pugilists of the technological age. Where Tim Wu took on the future of the Internet and Evgeny Morozov chided online slacktivism, O’Neil takes on algorithms, or what she has dubbed weapons of math destruction (WMDs).

O’Neil’s book came at just the right moment in 2016. It sounded the alarm about big data just as it was becoming a topic for public discussion. And now, two years later, her worries seem prescient. As she explains in the introduction,

Big Data has plenty of evangelists, but I’m not one of them. This book will focus sharply in the other direction, on the damage inflicted by WMDs and the injustice they perpetuate. We will explore harmful examples that affect people at critical life moments: going to college, borrowing money, getting sentenced to prison, or finding and holding a job. All of these life domains are increasingly controlled by secret models wielding arbitrary punishments.

O’Neil is explicit about laying the blame at the feet of the WMDs: “You cannot appeal to a WMD. That’s part of their fearsome power. They do not listen.” Yet these models aren’t deployed and adopted in a frictionless environment. Instead, they “reflect goals and ideology,” as O’Neil readily admits. Where Weapons of Math Destruction falters is that it ascribes too much agency to algorithms in places, and in doing so it misses the broader politics behind algorithmic decision-making. Continue reading →