I’ve written here before about the problems associated with the “technopanic mentality,” especially when it comes to how technopanics sometimes come to shape public policy decisions and restrict important new, life-enriching innovations. As I argued in a recent book, the problem with this sort of Chicken Little thinking is that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”
Perhaps the worst thing about worst-case thinking is how short-sighted and hubristic it can be. The technopanic crowd often has an air of snootiness about it, asking us to largely ignore the generally positive long-run trends associated with technological innovation and instead focus on hypothetical fears about an uncertain future that apparently only they can foresee. This is the case whether they are predicting the destruction of jobs, the economy, lifestyles, or culture. Techno-dystopia lies just around the corner, they say, and the rest of us are ignorant sheep who just can’t see it coming!
In his wonderful 2013 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson correctly noted that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.”
Stated differently, the doomsayers are guilty of a type of social and technical arrogance. They are almost always wrong on history, wrong on culture, and wrong on facts. Again and again, humans have proven remarkably resilient in the face of technological change and have overcome short-term adversity. Yet the technopanic pundits are almost never called out for their elitist attitudes later, when their prognostications are proven wildly off-base. Even more concerning, their Chicken Little antics lead them and others to ignore the more serious risks out there that actually deserve our attention.
Here’s a nice example of that last point that comes from a silent film made all the way back in 1911! (Ironically, it was a tweet by Clive Thompson that brought this clip to my attention.) The short film is called The Automatic Motorist and here’s how Michael Waters summarizes the plot in a post over at Atlas Obscura: “In it, a robot chauffeur is developed to drive a newly wedded couple to their honeymoon destination. But this robot malfunctions, and all of a sudden the couple is marooned in outer space (and then sinking underwater, and then flying through the sky—it’s complicated).” In sum: don’t trust robots or autonomous systems or you will probably die.
Regardless of how silly the plot sounds or the film looks, what I really found interesting about it was the way the film jumped right into the classic sci-fi dystopian scenario of ROBOTS GONE WILD. Countless other books, stories, movies, and TV shows would follow that same predictable plot line in subsequent decades. In one sense, it’s entirely logical why authors and screenwriters do this. Simply put, bad news sells, and that is especially true when the bad news is delivered in the form of robotic systems running amok and threatening the future of humanity.
But I wonder… did the creators of The Automatic Motorist ever consider the far more risky scenario surrounding automobiles? Specifically, isn’t it a shame that they didn’t foresee the millions upon millions of deaths that would occur due to human error behind the wheel?
The tale of automation-gone-wrong always makes for better box office and book sales, but fear-mongering about technologies can condition people (and policymakers) to think in fearful terms about those products and systems. Robotic cars would have been impossible in 1911, obviously, so perhaps this concern seems meaningless in this context. But it is indicative of the bigger problem of the technopanic crowd focusing on hypothetical worst-case scenarios and avoiding the more mundane — but ultimately far more concerning — real-world risks that might occur in the absence of ongoing technological innovation.
And in many ways this is still the debate we are having in 2017, now that the discussion about robotic “driverless” cars has finally ripened. We stand on the brink of what may become one of the great public health success stories of our lifetime. With the roadway death toll climbing for the first time in decades (around 40,000 deaths last year, or more than 100 people dying on the roads every day), and with 94 percent of accidents attributable to human error, those facts alone should constitute the most powerful reason to give autonomous technology a chance to prove itself. If policymakers fail to do so, the result could be countless injuries and deaths that driverless cars probably could have prevented.
These “unseen” unintended consequences of misguided policies constitute a sort of hidden tax on humanity’s future. When the technopanic crowd tells us we must live in fear of each and every new innovation, it is creating the riskiest future scenario of them all: one that is stagnant and backward-looking. The burden of proof is on the doomsayers to explain why we should be denied the benefits that accompany ongoing trial-and-error experimentation with new and better ways of doing things that could ensure us a safer and more prosperous future.