If the techno-pessimists are right and robots are set to take all the jobs, shouldn’t employment in Amazon warehouses be plummeting right now? After all, Amazon’s sorting and fulfillment centers have been automated at a rapid pace, with robotic technologies now being integrated into almost every facet of the process.
And yet according to this Wall Street Journal story by Laura Stevens, Amazon is looking to immediately fill 50,000 new jobs, which would mean that its U.S. workforce “would swell to around 300,000, compared with 30,000 in 2011.” According to the article, “Nearly 40,000 of the promised jobs are full-time at the company’s fulfillment centers, including some facilities that will open in the coming months. Most of the remainder are part-time positions available at Amazon’s more than 30 sorting centers.”
How can this be? Shouldn’t the robots have eaten all those jobs by now?
Whatever you want to call them–autonomous vehicles, driverless cars, automated systems, unmanned systems, connected cars, pilotless vehicles, etc.–this new class of technologies has enormous life-saving potential. I’ve spent a lot of time researching and writing about these issues, and I have yet to see any study forecast the opposite (i.e., a net loss of lives due to these technologies). While the estimated life savings vary, the numbers are uniformly positive across the board, not just in terms of lives saved but also in reductions in other injuries, property damage, and the aggregate social costs associated with vehicular accidents more generally.
To highlight these important and consistent findings, I asked my research assistant Melody Calkins to help me compile a list of recent studies on this issue and summarize each one’s key takeaways regarding the potential for lives saved. The studies and findings are listed below in reverse chronological order of publication. I may try to add to this list over time, so please feel free to shoot me suggested updates as they become available.
Needless to say, these findings should have some bearing on public policy toward these technologies. Namely, we should be taking steps to accelerate this transition and remove roadblocks to the driverless car revolution, because we could be talking about the biggest public health success story of our lifetime if we get policy right here. Every day matters, because each day we delay this transition is another day during which 90 people die in car crashes and more than 6,500 are injured. And sadly, those numbers are going up, not down. According to the National Highway Traffic Safety Administration (NHTSA), the roadway death toll is climbing for the first time in decades. Meanwhile, the agency estimates that 94 percent of all crashes are attributable to human error. We have the potential to do something about this tragedy, but we have to get public policy right. Delay is not an option.
I’ve written here before about the problems associated with the “technopanic mentality,” especially when it comes to how technopanics sometimes come to shape public policy decisions and restrict important new, life-enriching innovations. As I argued in a recent book, the problem with this sort of Chicken Little thinking is that, “living in constant fear of worst-case scenarios—and premising public policy on them—means that best-case scenarios will never come about. When public policy is shaped by precautionary principle reasoning, it poses a serious threat to technological progress, economic entrepreneurialism, social adaptation, and long-run prosperity.”
Perhaps the worst thing about worst-case thinking is how short-sighted and hubristic it can be. The technopanic crowd often has an air of snootiness about them, in that they ask us to largely ignore the generally positive long-run trends associated with technological innovation and instead focus on hypothetical fears about an uncertain future that apparently only they can foresee. This is the case whether they are predicting the destruction of jobs, the economy, lifestyles, or culture. Techno-dystopia lies just around the corner, they say, while the rest of us are ignorant sheep who just can’t see it coming!
In his wonderful 2013 book, Smarter Than You Think: How Technology Is Changing Our Minds for the Better, Clive Thompson correctly noted that “dystopian predictions are easy to generate” and “doomsaying is emotionally self-protective: if you complain that today’s technology is wrecking the culture, you can tell yourself you’re a gimlet-eyed critic who isn’t hoodwinked by high-tech trends and silly, popular activities like social networking. You seem like someone who has a richer, deeper appreciation for the past and who stands above the triviality of today’s life.”
Stated differently, the doomsayers are guilty of a type of social and technical arrogance. They are almost always wrong on history, wrong on culture, and wrong on facts. Again and again, humans have proven remarkably resilient in the face of technological change and have overcome short-term adversity. Yet, the technopanic pundits are almost never called out for their elitist attitudes later when their prognostications are proven wildly off-base. And even more concerning is the fact that their Chicken Little antics lead them and others to ignore the more serious risks that could exist out there and which are worthy of our attention.
Here’s a nice example of that last point that comes from a silent film made all the way back in 1911!
Americans have schizophrenic opinions about artificial intelligence (AI) technologies. Ask the average American what they think of AI and they will often respond with a combination of fear, loathing, and dread. Yet, the very same AI applications they claim to be so anxious about are already benefiting their lives in profound ways.
Last week, we posted complementary essays about the growing “technopanic” over artificial intelligence and the potential for that panic to undermine many important life-enriching medical innovations and healthcare-related applications. We were inspired to write those essays after reading the results of a recent poll conducted by Morning Consult, which suggested that the public is very uncomfortable with AI technologies. “A large majority of both Republicans and Democrats believe there should be national and international regulations on artificial intelligence,” the poll found. Of the 2,200 American adults surveyed, “73 percent of Democrats said there should be U.S. regulations on artificial intelligence, as did 74 percent of Republicans and 65 percent of independents.”
We noted that there were reasons to question the significance of those results in light of the binary way in which the questions were asked. Nonetheless, there are clearly some serious concerns among the public about AI and robotics. You see that when you read deeper into the poll results for specific questions and find respondents saying that they are “somewhat” to “very uncomfortable” with a wide range of specific AI applications.
Yet, in each case, Americans are already deriving significant benefits from each of the AI applications they claim to be so uncomfortable with.
Today, the U.S. Department of Transportation released its eagerly awaited “Federal Automated Vehicles Policy.” There’s a lot to like about the guidance document, beginning with the agency’s genuine embrace of the potential for highly automated vehicles (HAVs) to revolutionize this sector and save thousands of lives annually in the process.
It is important we get HAV policy right, the DOT notes, because, “35,092 people died on U.S. roadways in 2015 alone” and “94 percent of crashes can be tied to a human choice or error.” (p. 5) HAVs could help us reverse that trend and save thousands of lives and billions in economic costs annually. The agency also documents many other benefits associated with HAVs, such as increasing personal mobility, reducing traffic and pollution, and cutting infrastructure costs.
I will not attempt here to comment on every specific recommendation or guideline suggested in the new DOT guidance document. I could nit-pick about some of the specific recommended guidelines, but I think many of them are quite reasonable, whether they relate to safety, security, privacy, or state regulatory issues. Other issues still need to be addressed, and CEI’s Marc Scribner does a nice job documenting some of them in his response to the new guidelines.
Instead of discussing those specific issues today, I want to ask a more fundamental and far-reaching question which I have been writing about in recent papers and essays: Is this guidance or regulation? And what does the use of informal guidance mechanisms like these signal for the future of technological governance more generally?
In previous essays here I have discussed the rise of “global innovation arbitrage” for genetic testing, drones, and the sharing economy. I argued that: “Capital moves like quicksilver around the globe today as investors and entrepreneurs look for more hospitable tax and regulatory environments. The same is increasingly true for innovation. Innovators can, and increasingly will, move to those countries and continents that provide a legal and regulatory environment more hospitable to entrepreneurial activity.” I’ve been working on a longer paper about this with Samuel Hammond, and in doing research on the issue, we keep finding interesting examples of this phenomenon.
The latest example comes from a terrific new essay (“Humans: Unsafe at Any Speed“) about driverless car technology by Wall Street Journal technology columnist L. Gordon Crovitz. He cites some important recent efforts by Ford and Google, and he notes that they and other innovators will need to be given more flexible regulatory treatment if we want these life-saving technologies on the road as soon as possible. “The prospect of mass-producing cars without steering wheels or pedals means U.S. regulators will either allow these innovations on American roads or cede to Europe and Asia the testing grounds for self-driving technologies,” Crovitz observes. “By investing in autonomous vehicles, Ford and Google are presuming regulators will have to allow the new technologies, which are developing faster even than optimists imagined when Google started working on self-driving cars in 2009.”
This week, my Mercatus Center colleague Andrea Castillo and I filed comments with the White House Office of Science and Technology Policy (OSTP) in a proceeding entitled, “Preparing for the Future of Artificial Intelligence.” For more background on this proceeding and the accompanying workshops that OSTP has hosted on this issue, see this White House site.
In our comments, Andrea and I make the case for prudence, patience, and a continuing embrace of “permissionless innovation” as the appropriate policy framework for artificial intelligence (AI) technologies at this nascent stage of their development. Down below, I have pasted our full comments, which were limited to just 2,000 words as required by the OSTP. We plan on releasing a much longer report on these issues in the coming months. You can find the full version of the filing, including footnotes, here.
On May 3rd, I’m excited to be participating in a discussion with Yale University bioethicist Wendell Wallach at the Microsoft Innovation & Policy Center in Washington, DC. (RSVP here.) Wallach and I will be discussing issues we write about in our new books, both of which focus on possible governance models for emerging technologies and the question of how much preemptive control society should exercise over new innovations.
Of all the books of technological criticism or skepticism that I’ve read in recent years—and I have read stacks of them!—A Dangerous Master is by far the most thoughtful and interesting. I have grown accustomed to major works of technological criticism being caustic, angry affairs. Most of them are just dripping with dystopian dread and a sense of utter exasperation and outright disgust at the pace of modern technological change.
Although he is certainly concerned about a wide variety of modern technologies—drones, robotics, nanotech, and more—Wallach isn’t a purveyor of the politics of panic. There are some moments in the book when he resorts to some hyperbolic rhetoric, such as when he frets about an impending “techstorm” and the potential, as the book’s title suggests, for technology to become a “dangerous master” of humanity. For the most part, however, his approach is deeper and more dispassionate than what is found in the leading tracts of other modern techno-critics.
The success of the Internet and the modern digital economy was due to its open, generative nature, driven by the ethos of “permissionless innovation.” A “light-touch” policy regime helped make this possible. Of particular legal importance was the immunization of online intermediaries from punishing forms of liability associated with the actions of third parties.
As “software eats the world” and the digital revolution extends its reach to the physical world, policymakers should extend similar legal protections to other “generative” tools and platforms, such as robotics, 3D printing, and virtual reality.
I want to highlight an important new blog post (“Slow Down That Runaway Ethical Trolley“) on the ethical trade-offs at work with autonomous vehicle systems by Bryant Walker Smith, a leading expert on these issues. Writing over at Stanford University’s Center for Internet and Society blog, Smith notes that, while serious ethical dilemmas will always be present with such technologies, “we should not allow the perfect to be the enemy of the good.” He notes that many ethical philosophers, legal theorists, and media pundits have recently been actively debating variations of the classic “Trolley Problem,” and its ramifications for the development of autonomous or semi-autonomous systems. (Here’s some quick background on the Trolley Problem, a thought experiment involving the choices made during various no-win accident scenarios.) Commenting on the increased prevalence of the Trolley Problem in these debates, Smith observes that:
Unfortunately, the reality that automated vehicles will eventually kill people has morphed into the illusion that a paramount challenge for or to these vehicles is deciding who precisely to kill in any given crash. This was probably not the intent of the thoughtful proponents of this thought experiment, but it seems to be the result. Late last year, I was asked the “who to kill” question more than any other — by journalists, regulators, and academics. An influential working group to which I belong even (briefly) identified the trolley problem as one of the most significant barriers to fully automated motor vehicles.
Although dilemma situations are relevant to the field, they have been overhyped in comparison to other issues implicated by vehicle automation. The fundamental ethical question, in my opinion, is this: In the United States alone, tens of thousands of people die in motor vehicle crashes every year, and many more are injured. Automated vehicles have great potential to one day reduce this toll, but the path to this point will involve mistakes and crashes and fatalities. Given this stark choice, what is the proper balance between caution and urgency in bringing these systems to the market? How safe is safe enough?