[This is an excerpt from Chapter 6 of the forthcoming 2nd edition of my book, “Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom,” due out later this month. I was presenting on these issues at today’s New America Foundation “Cybersecurity for a New America” event, so I thought I would post this now. To learn more about the contrast between “permissionless innovation” and “precautionary principle” thinking, please consult the earlier edition of my book or see this blog post.]
Viruses, malware, spam, data breaches, and critical system intrusions are just some of the security-related concerns that often motivate precautionary thinking and policy proposals.[1] But as with privacy- and safety-related worries, the panicky rhetoric surrounding these issues is usually unfocused and counterproductive.
In today’s cybersecurity debates, for example, it is common to hear allusions to the potential for a “digital Pearl Harbor,”[2] a “cyber cold war,”[3] or even a “cyber 9/11.”[4] These analogies persist even though those historical incidents resulted in death and destruction of a sort not comparable to attacks on digital networks. Others refer to “cyber bombs” or technological “time bombs,” even though no one can be “bombed” with binary code.[5] Michael McConnell, a former director of national intelligence, went so far as to say that this “threat is so intrusive, it’s so serious, it could literally suck the life’s blood out of this country.”[6]
Such outrageous statements reflect the frequent use of “threat inflation” rhetoric in debates about online security.[7] Threat inflation has been defined as “the attempt by elites to create concern for a threat that goes beyond the scope and urgency that a disinterested analysis would justify.”[8] Unfortunately, such bombastic rhetoric often conflates minor cybersecurity risks with major ones. For example, dramatic doomsday stories about hackers pushing planes out of the sky misdirect policymakers’ attention from the more immediate, but less gripping, risks of data extraction and foreign surveillance. And when those doomsday scenarios fail to materialize, well-meaning skeptics might conclude that our real cybersecurity risks are not a problem either. In the meantime, outdated legislation and inappropriate legal norms continue to impede beneficial defensive measures that could truly improve security.
Meanwhile, similar concerns have already been raised about security vulnerabilities associated with the Internet of Things[9] and driverless cars.[10] Legislation has already been floated to address the latter concern through federal certification standards.[11] Broader cybersecurity legislation has also been proposed, most notably the Cybersecurity Information Sharing Act, which would extend legal immunity to corporations that share customer data with intelligence agencies.[12]
Ironically, these efforts to expand federal cybersecurity authority come before the federal government has even gotten its own house in order. According to a recent report, federal information security failures increased by an astounding 1,169 percent, from 5,503 in fiscal year 2006 to 69,851 in fiscal year 2014.[13] Of course, many of these same agencies would be tasked with securing the massive new datasets containing personally identifiable details about US citizens’ online activities that legislation like the Cybersecurity Information Sharing Act would authorize. In the worst-case scenario, such federal data storage could counterintuitively encourage more attacks on government systems.
It’s important to put all these security issues in context and to recognize that proposed legal remedies are often poorly suited to addressing online security concerns and sometimes end up backfiring. In his research on the digital security marketplace, my Mercatus Center colleague Eli Dourado has illustrated how we are already able to achieve “Internet Security without Law.”[14] Dourado documented the many informal institutions that enforce network security norms on the Internet to show how cooperation among a remarkably varied set of actors improves online security without extensive regulation or punishing legal liability. “These informal institutions carry out the functions of a formal legal system—they establish and enforce rules for the prevention, punishment, and redress of cybersecurity-related harms,” Dourado says.[15]
For example, a diverse array of computer security incident response teams (CSIRTs) operate around the globe, sharing their research on and coordinating responses to viruses and other online attacks. Individual Internet service providers (ISPs), domain name registrars, and hosting companies work with these CSIRTs and other individuals and organizations to address security vulnerabilities.
Encouraging the development of robust and lawful software vulnerability markets would make cybersecurity reporting even more effective. For years now, private companies and nonprofit security research firms have offered financial incentives for hackers to find and report software vulnerabilities to the proper parties.[16] Such “bug bounty” and “vulnerability auction” programs better align hackers’ monetary incentives with the public interest. By giving security researchers a space to responsibly report and profit from discovered bugs, these markets dissuade hackers from selling vulnerabilities to criminal or state-backed organizations.[17]
A growing market for private security consultants and software providers also competes to offer increasingly sophisticated suites of security products for businesses, households, and governments. “Corporations, including software vendors, antimalware makers, ISPs, and major websites such as Facebook and Twitter, are aggressively pursuing cyber criminals,” notes Roger Grimes of Infoworld.[18] “These companies have entire legal teams dedicated to national and international cyber crime. They are also taking down malicious websites and bot-spitting command-and-control servers, along with helping to identify, prosecute, and sue bad guys,” he says.[19] Meanwhile, more organizations are employing “active defense” strategies, which are “countermeasures that entail more than merely hardening one’s own network against threats and instead seek to unmask one’s attacker or disable the attacker’s system.”[20]
A great deal of security knowledge is also “crowd-sourced” today via online discussion forums and security blogs that feature contributions from experts and average users alike. University-based computer science and cyber law centers and experts have also helped by creating projects like StopBadware, which originated at Harvard University but then grew into a broader nonprofit organization with diverse financial support.[21] Meanwhile, informal grassroots security groups like The Cavalry have formed to build awareness about digital security threats among developers and the general public and then devise solutions to protect public safety.[22]
The recent debacle over the Commerce Department’s proposed new export rules for so-called cyberweapons provides a good example of how poorly considered policies can inadvertently undermine such beneficial emergent ecosystems. The agency’s new draft of US “Wassenaar Arrangement” arms control policies would have unintentionally criminalized the normal communication of basic software bug-testing techniques that hundreds of companies employ each day.[23] The regulators who drafted the new rules had good intentions: they wanted to crack down on cyber criminals’ ability to sell malware to hostile state-backed initiatives. However, their lack of technical sophistication led them to unknowingly write a proposal that would have compelled software engineers to seek Commerce Department permission before communicating information about minor software quirks. Fortunately, regulators wisely heeded the many concerned industry comments and rescinded the initial proposal.[24]
Dourado notes that informal, bottom-up efforts to coordinate security responses offer several advantages over top-down government solutions such as administrative regulatory regimes or punishing liability regimes. First, the informal cooperative approach “gives network operators flexibility to determine what constitutes due care in a dynamic environment.” “Formal legal standards,” by contrast, “may not be able to adapt as quickly as needed to rapidly changing circumstances,” he says.[25] Simply put, markets are more nimble than mandates when it comes to promptly patching security vulnerabilities.
Second, Dourado notes that “formal legal proceedings are adversarial and could reduce ISPs’ incentives to share information and cooperate.”[26] Heavy-handed regulation or threatening legal liability schemes could have the unintended consequence of discouraging the sort of cooperation that today alleviates security problems swiftly.
Indeed, there is evidence that existing cybersecurity law prevents defensive strategies that could help organizations to more quickly respond to system infiltrations. For example, some argue that private individuals and organizations should be allowed to defend themselves using special measures to expel or track system infiltrators, often called “hacking back” or “active defense.” Anthony Glosson’s analysis for the Mercatus Center discusses how the Computer Fraud and Abuse Act currently prevents computer security specialists from utilizing defensive hacking techniques that could improve system defenses or decrease the number of attempted attacks.[27]
Third, legal solutions are less effective because “the direct costs of going to court can be substantial, as can be the time associated with a trial,” Dourado argues.[28] By contrast, private actors working cooperatively “do not need to go to court to enforce security norms,” meaning that “security concerns are addressed quickly or punishment . . . is imposed rapidly.”[29] For example, if security warnings don’t work, ISPs can “punish” negligent or willfully insecure networks by “de-peering,” or terminating network interconnection agreements. The very threat of de-peering helps keep network operators on their toes.
Finally, and perhaps most importantly, Dourado notes that international cooperation between state-based legal systems is limited, complicated, and costly. By contrast, under today’s informal, voluntary approach to online security, international coordination and cooperation are quite strong. The CSIRTs and other security institutions and researchers mentioned above all interact and coordinate today as if national borders did not exist. Territorial legal systems and liability regimes don’t have the same advantage; their enforcement ends at the border.
Dourado’s model has ramifications for other fields of tech policy. Indeed, as noted above, these collaborative efforts and approaches are already at work in the realms of online safety and digital privacy. Countless organizations and individuals collaborate on educational initiatives to improve online safety and privacy. And many industry and nonprofit groups have established best practices and codes of conduct to ensure a safer and more secure online experience for all users. The efforts of the Family Online Safety Institute were discussed above. Another example comes from the Future of Privacy Forum, a privacy think tank that seeks to advance responsible data practices. It helps create codes of conduct to ensure privacy best practices by online operators and highlights programs run by other organizations.[30] Likewise, the National Cyber Security Alliance promotes Internet safety and security efforts among a variety of companies and coordinates National Cyber Security Awareness Month (every October) and Data Privacy Day (held annually on January 28).[31]
What these efforts prove is that not every complex social problem requires a convoluted legal regime or heavy-handed regulatory response. We can achieve reasonably effective safety and security without layering on more and more law and regulation.[32] Indeed, the Internet and digital systems could arguably be made more secure by reforming outdated legislation that prevents potential security-increasing collaborations. “Dynamic systems are not merely turbulent,” Postrel notes. “They respond to the desire for security; they just don’t do it by stopping experimentation.”[33] She adds, “Left free to innovate and to learn, people find ways to create security for themselves. Those creations, too, are part of dynamic systems. They provide personal and social resilience.”[34]
Education is a crucial part of building resiliency in the security context as well. People and organizations can prepare for potential security problems rationally if given better information and tools to secure their digital systems and to understand how to cope when problems arise. Again, many corporations and organizations already take steps to guard against malware and other types of cyberattacks by offering customers free (or cheap) security software. For example, major broadband operators offer customers free antivirus software and various parental control tools. In the context of “connected car” technology, automakers have banded together to develop privacy and security best practices that address worries about remote hacking of cars as well as concerns about how much data they collect about our driving habits.[35]
Thus, although it is certainly true that “more could be done” to secure networks and critical systems, panic is unwarranted because much is already being done to harden systems and educate the public about risks.[36] Various digital attacks will continue, but consumers, companies, and other organizations are learning to cope and become more resilient in the face of those threats through creative “bottom-up” solutions instead of innovation-limiting “top-down” regulatory approaches.
[1] This section partially adapted from Adam Thierer, “Achieving Internet Order without Law,” Forbes, June 24, 2012, http://www.forbes.com/sites/adamthierer/2012/06/24/achieving-internet-order-without-law. The author wishes to thank Andrea Castillo for major contributions to this section.
[2] See Richard A. Serrano, “Cyber Attacks Seen as a Growing Threat,” Los Angeles Times, February 11, 2011, A18. (“[T]he potential for the next Pearl Harbor could very well be a cyber attack.”)
[3] Harry Raduege, “Deterring Attackers in Cyberspace,” The Hill, September 23, 2011, 11, http://thehill.com/opinion/op-ed/183429-deterring-attackers-in-cyberspace.
[4] Kurt Nimmo, “Former CIA Official Predicts Cyber 9/11,” InfoWars.com, August 4, 2011, http://www.infowars.com/former-cia-official-predicts-cyber-911.
[5] Rodney Brown, “Cyber Bombs: Data-Security Sector Hopes Adoption Won’t Require a ‘Pearl Harbor’ Moment,” Innovation Report, October 26, 2011, 10, http://digital.masshightech.com/launch.aspx?referral=other&pnum=&refresh=6t0M1Sr380Rf&EID=1c256165-396b-454f-bc92-a7780169a876&skip=; Craig Spiezle, “Defusing the Internet of Things Time Bomb,” TechCrunch, August 11, 2015, http://techcrunch.com/2015/08/10/defusing-the-internet-of-things-time-bomb.
[6] “Morning Edition: Cybersecurity Bill: Vital Need or Just More Rules?” NPR, March 22, 2012, http://www.npr.org/templates/transcript/transcript.php?storyId=149099866.
[7] Jerry Brito and Tate Watkins, “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy” (Mercatus Working Paper No. 11-24, Mercatus Center at George Mason University, Arlington, VA, 2011).
[8] Jane K. Cramer and A. Trevor Thrall, “Introduction: Understanding Threat Inflation,” in American Foreign Policy and the Politics of Fear: Threat Inflation Since 9/11, ed. A. Trevor Thrall and Jane K. Cramer (London: Routledge, 2009), 1.
[9] Tufekci, “Dumb Idea”; Byron Acohido, “Hackers Take Control of Internet Appliances,” USA Today, October 15, 2013, http://www.usatoday.com/story/cybertruth/2013/10/15/hackers-taking-control-of-internet-appliances/2986395.
[10] Ed Markey, Tracking & Hacking: Security & Privacy Gaps Put American Drivers at Risk, US Senate, February 2015, http://www.markey.senate.gov/imo/media/doc/2015-02-06_MarkeyReport-Tracking_Hacking_CarSecurity%202.pdf.
[11] Ed Markey, “Markey, Blumenthal to Introduce Legislation to Protect Drivers from Auto Security and Privacy Vulnerabilities with Standards and ‘Cyber Dashboard,’” press release, February 11, 2015, http://www.markey.senate.gov/news/press-releases/markey-blumenthal-to-introduce-legislation-to-protect-drivers-from-auto-security-and-privacy-vulnerabilities-with-standards-and-cyber-dashboard.
[12] Andrea Castillo, “How CISA Threatens Both Privacy and Cybersecurity,” Reason, May 10, 2015, https://reason.com/archives/2015/05/10/why-cisa-wont-improve-cybersecurity.
[13] Eli Dourado and Andrea Castillo, “Poor Federal Cybersecurity Reveals Weakness of Technocratic Approach” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, June 22, 2015), http://mercatus.org/publication/poor-federal-cybersecurity-reveals-weakness-technocratic-approach.
[14] Eli Dourado, “Internet Security without Law: How Security Providers Create Online Order” (Mercatus Working Paper No. 12-19, Mercatus Center at George Mason University, Arlington, VA, June 19, 2012), http://mercatus.org/publication/internet-security-without-law-how-service-providers-create-order-online.
[15] Ibid.
[16] Charlie Miller, “The Legitimate Vulnerability Market: Inside the Secretive World of 0-day Exploit Sales,” Independent Security Evaluators, May 6, 2007, http://www.econinfosec.org/archive/weis2007/papers/29.pdf.
[17] Andrea Castillo, “The Economics of Software-Vulnerability Sales: Can the Feds Encourage ‘Pro-social’ Hacking?” Reason, August 11, 2015, https://reason.com/archives/2015/08/11/economics-of-the-zero-day-sales-market.
[18] Roger Grimes, “The Cyber Crime Tide Is Turning,” Infoworld, August 9, 2011, http://www.pcworld.com/article/237647/the_cyber_crime_tide_is_turning.html.
[19] Dourado, “Internet Security.”
[20] Anthony D. Glosson, “Active Defense: An Overview of the Debate and a Way Forward,” (Mercatus Working Paper, Mercatus Center at George Mason University, Arlington, VA, August 10, 2015), http://mercatus.org/publication/active-defense-overview-debate-and-way-forward-guardians-of-peace-hackers-cybersecurity.
[21] http://stopbadware.org.
[22] https://www.iamthecavalry.org.
[23] Andrea Castillo, “The Government’s Latest Attempt to Stop Hackers Will Only Make Cybersecurity Worse,” Reason, July 28, 2015, https://reason.com/archives/2015/07/28/gov-ploy-to-stop-hackers-will-backfire.
[24] Russell Brandom, “The US is Rewriting its Controversial Zero-Day Export Policy,” The Verge, July 29, 2015, http://www.theverge.com/2015/7/29/9068665/wassenaar-export-zero-day-revisions-department-of-commerce.
[25] Dourado, “Internet Security.”
[26] Ibid.
[27] Glosson, “Active Defense.”
[28] Dourado, “Internet Security.”
[29] Dourado, “Internet Security.”
[30] Future of Privacy Forum, “Best Practices,” http://www.futureofprivacy.org/resources/best-practices/.
[31] See http://www.staysafeonline.org/ncsam and http://www.staysafeonline.org/data-privacy-day.
[32] Glosson, “Active Defense,” 22. (“The precautionary principle is especially inadvisable in the dynamic realm of tech policy, and until the ostensible harms of active defense materialize, the law should facilitate maximum innovation in the network security field.”)
[33] Postrel, Future and Its Enemies, 199.
[34] Ibid., 202.
[35] See Future of Privacy Forum, “Connected Cars Project,” accessed October 16, 2015, http://www.futureofprivacy.org/connectedcars; Auto Alliance, “Automakers Believe That Strong Consumer Data Privacy Protections Are Essential to Maintaining the Trust of Our Customers,” accessed October 16, 2015, http://www.autoalliance.org/automotiveprivacy. See also Future of Privacy Forum, “Comments of the Future of Privacy Forum on Connected Smart Technologies in Advance of the FTC ‘Internet of Things’ Workshop,” May 31, 2013, http://www.futureofprivacy.org/wp-content/uploads/FPF-Comments-Regarding-Internet-of-Things.pdf.
[36] Adam Thierer, “Don’t Panic over Looming Cybersecurity Threats,” Forbes, August 7, 2011, http://www.forbes.com/sites/adamthierer/2011/08/07/dont-panic-over-looming-cybersecurity-threats.