On Tuesday, UN Secretary-General Ban Ki-moon delivered an address to the UN Security Council “on the Non-Proliferation of Weapons of Mass Destruction.” He made many of the same arguments he and his predecessors have articulated before regarding the need for the Security Council “to develop further initiatives to bring about a world free of weapons of mass destruction.” In particular, he focused on the great harm that could come from the use of chemical, biological and nuclear weapons. “Vicious non-state actors that target civilians for carnage are actively seeking chemical, biological and nuclear weapons,” the Secretary-General noted. A stepped-up disarmament agenda is needed, he argued, “to prevent the human, environmental and existential destruction these weapons can cause . . . by eradicating them once and for all.”
The UN has created several multilateral mechanisms to pursue those objectives, including the Nuclear Non-Proliferation Treaty, the Chemical Weapons Convention, and the Biological Weapons Convention. Progress on these fronts has always been slow and limited, however. The Secretary-General observed that nuclear non-proliferation efforts have recently “descended into fractious deadlock,” but the effectiveness of those and similar UN-led efforts has long been challenged by two realities: (1) rapid, ongoing technological change that has made WMDs more accessible than ever, and (2) a general lack of teeth in UN treaties and accords to do much to slow those advances, especially among non-signatories.
Despite those challenges, the Secretary-General is right to remain vigilant about the horrors of chemical, biological and nuclear attacks. But what was interesting about this address is that the Secretary-General went on to discuss his concerns about a rising class of emerging technologies, ones we usually don’t hear mentioned in the same breath as those traditional “weapons of mass destruction”:
I will now say a few words about new global threats emerging from the misuse of science and technology, and the power of globalization. Information and communication technologies, artificial intelligence, 3D printing and synthetic biology will bring profound changes to our everyday lives and benefits to millions of people. However, their potential for misuse could also bring destruction. The nexus between these emerging technologies and WMD needs close examination and action.
As a starting point, the international community must step up to expand common ground for the peaceful use of cyberspace and, particularly, the intersection between cyberspace and critical infrastructure. People now live a significant portion of their lives online. They must be protected from online attacks, just as effectively as they are protected from physical attacks.
Disarmament and non-proliferation instruments are only as successful as Member States’ capacity to implement them.
And the Secretary-General concluded by calling on “all Member States to re-commit themselves and to take action. The stakes are simply too high to ignore.”
The Secretary-General’s inclusion of all these emerging technologies in a speech about WMDs and the dangers of chemical, biological and nuclear weapons raises an interesting question: Are all these things actually equivalent? Does a danger exist from the continued evolution of ICTs, AI, 3D printing, and synthetic biology that is equal to the very serious threat posed by chemical, biological and nuclear weapons?
On one hand, it is tempting to say, Yes! If nothing else, most of us have seen more than enough techno-dystopian Hollywood plots through the years to understand the hypothetical dangers that some of these technologies pose. But even if (like me) you dismiss most of the movie plots as far-fetched Chicken Little-ism meant to drum up big box office, plenty of serious scholars out there have sketched out more credible pictures of the threat some of these new technologies might pose to humanity. Information platforms can be hacked and our personal data or security compromised. 3D printers can be used to create cheap, undetectable firearms. Robotics and autonomous systems can be programmed to kill. Synthetic biology might help create genetically-modified super-soldiers. And so on.
These are serious questions with profound ramifications, and I discussed them at much greater length in my review of Wendell Wallach’s important book, A Dangerous Master: How to Keep Technology from Slipping beyond Our Control. Like many other books and essays on these technologies, Wallach champions “the need for more upstream governance,” that is, “more control over the way that potentially harmful technologies are developed or introduced into the larger society. Upstream management is certainly better than introducing regulations downstream, after a technology is deeply entrenched or something major has already gone wrong,” he suggests. “Yet, even when we can assess risks, there remain difficulties in recognizing when or determining how much control should be introduced. When does being precautionary make sense, and when is precaution an over-reaction to the risks?”
Indeed, that is the right question, and quite a profound one. The problem associated with all such “upstream governance” and preemptive controls on emerging technologies is determining how to avoid hypothetical future risks without destroying the potential for these same technologies to be used in life-enriching and even life-saving ways.
Solutions are elusive and involve myriad trade-offs. More generally, it’s not even clear that they would be workable, especially once you expand the scale of governance to include the entire planet. It seems unlikely, for example, that a hypothetical UN-led Synthetic Biology Non-Proliferation Treaty, 3D-Printed Weapons Convention, or Agreement on the Peaceful Use of Cyberspace would be a workable solution in a world where these technologies are so radically decentralized and proliferating so rapidly. At least with some of the older technologies, the underlying materials were somewhat harder to obtain, manufacture, weaponize, and then distribute or use. The same is not true of many of these newer technologies. It’s a heck of a lot easier to get access to a computer and a 3D printer than to uranium and enrichment facilities, for example.
Moreover, when we discuss the risks associated with emerging technologies compared to past technologies, we need some way of weighing the actual probability that serious harm will come about. In the expanded Second Edition of my Permissionless Innovation book, I tried to offer a rough framework for when formal precautionary regulation (i.e., operational restrictions, licensing requirements, research limitations, or even formal bans) might be necessary. In a section of Chapter 3 entitled “When Does Precaution Make Sense?” I argued that:
Generally speaking, permissionless innovation should remain the norm in the vast majority of cases, but there will be some scenarios where the threat of tangible, immediate, irreversible, catastrophic harm associated with new innovations could require at least a light version of the precautionary principle to be applied. In these cases, we might be better suited to think about when an “anti-catastrophe principle” is needed, which narrows the scope of the precautionary principle and focuses it more appropriately on the most unambiguously worst-case scenarios that meet those criteria.
“But most [emerging technology] cases don’t fall into this category,” I concluded. It is simply not the case that most emerging technologies pose the same sort of tangible, immediate, irreversible, catastrophic, and highly probable risk that traditional “weapons of mass destruction” do.
And that gets at my problem with the recent address by UN Secretary-General Ban Ki-moon. Because the speech moves so casually from a heated discussion of traditional WMDs into a brief discussion of the potential risks associated with ICTs, AI, 3D printing and synthetic biology, I worry about the sort of moral equivalence some might read into it. Again, these technologies, and the threats they pose, are simply not the same. Yet when the Secretary-General sandwiches them between impassioned opening and closing statements about the need “to take action” because “the stakes are simply too high to ignore,” he seems prepared to speak of them all in the same breath as traditional “weapons of mass destruction” and to suggest that similar global control efforts are needed. I do not believe that is sensible.
Does this mean we just throw our hands up in the air and give up any inquiry into the matter? Of course not. As I noted in my review of Wallach’s book, some very sensible “soft law” approaches exist that are worth pursuing. These approaches “bake a dose of precaution directly into the innovation process through a wide variety of informal governance/oversight mechanisms.” As Wallach puts it, “By embedding shared values in the very design of new tools and techniques, engineers improve the prospect of a positive outcome.”
Many soft law or informal governance systems already exist in the forms of so-called “multistakeholder governance” systems, informal industry codes of conduct, best practices, and other coordinating mechanisms. But these solutions would likely fall short of addressing some of the extreme scenarios that many people are worried about. Toward that end, when the case can be made that a particular application of a new general-purpose technology will result in tangible, immediate, irreversible, catastrophic, and highly probable dangers, then perhaps some international action should be considered. For example, a case can be made that governments (and perhaps even the UN) should do more to preemptively curb the most nefarious uses of robotics. There’s already a major effort underway, the “Campaign to Stop Killer Robots,” that seeks a multinational treaty to stop deadly uses of robotics. Again, I’m not sure how enforcement would work, but I think it’s worth investigating how some uses of “killer robots” might be limited through international accords and actions. Moreover, I could imagine an extension of the UN’s existing Biological Weapons Convention framework to cover some synthetic biology applications that involve extreme forms of human modification.
That being said, policymakers and prominent international figures like UN Secretary-General Ban Ki-moon should be extremely cautious about the language they use to describe new classes of technologies, lest they cast too wide a net with calls for controlling “weapons of mass destruction” that may be nothing of the sort.
Additional Reading
- “Wendell Wallach on the Challenge of Engineering Better Technology Ethics”
- “On the Line between Technology Ethics vs. Technology Policy”
- “What Does It Mean to ‘Have a Conversation’ about a New Technology?”
- “Making Sure the ‘Trolley Problem’ Doesn’t Derail Life-Saving Innovation”
- Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom